Unlimited Detail Real-Time Rendering Technology Preview 2011

Q: What is more interesting than this tech video?



http://www.youtube.com/watch?v=00gAbgBu8R4



A: Notch's response.

Very interesting stuff, enjoy.

Cheers
James

One thing you can notice is that in the “unlimited detail” scene the grass doesn’t move with the wind, while in the polygon scenes it does.

Also, hardware tessellation is going to solve a lot of the issues mentioned in that video.

Also, it looks like hi-res Minecraft with all the blocks. Maybe that’s how they get “unlimited” data. :wink:

hehe. Guess that video is making its rounds still.



@Momoko_Fan touched on the biggest and most important thing that is never mentioned: animation. Using this technique for animation is much akin to fluid dynamics simulations; if you’ve played with those before, you know how hardware-demanding they can get in real time. I read in some reply to the video that a college thesis was written on a theory of how to actually solve this issue, but it’s never actually been done. It’s a lot easier to define movement with polygons than to micromanage millions of atoms into retaining a shape that is morphing.
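To make the contrast concrete, here’s a minimal linear-blend skinning sketch in plain Java (my own illustration, nothing to do with their tech). A polygon character only pushes a few thousand vertices through a loop like this per frame; doing the same for an atom cloud would mean millions of transforms instead:

```java
// Minimal linear-blend skinning sketch (hypothetical, illustration only).
public class SkinningSketch {

    /** Applies a 4x4 matrix (row-major, length 16) to the point (x, y, z, 1). */
    static float[] transform(float[] m, float x, float y, float z) {
        return new float[] {
            m[0] * x + m[1] * y + m[2]  * z + m[3],
            m[4] * x + m[5] * y + m[6]  * z + m[7],
            m[8] * x + m[9] * y + m[10] * z + m[11]
        };
    }

    /** Skins one vertex: blends the transforms of its bones by their weights. */
    static float[] skinVertex(float[] pos, float[][] boneMatrices,
                              int[] boneIds, float[] weights) {
        float[] out = new float[3];
        for (int i = 0; i < boneIds.length; i++) {
            float[] p = transform(boneMatrices[boneIds[i]], pos[0], pos[1], pos[2]);
            out[0] += weights[i] * p[0];
            out[1] += weights[i] * p[1];
            out[2] += weights[i] * p[2];
        }
        return out;
    }

    public static void main(String[] args) {
        // Bone 0 is identity, bone 1 is translated +1 on x; vertex weighted 50/50.
        float[] identity = {1,0,0,0, 0,1,0,0, 0,0,1,0, 0,0,0,1};
        float[] shifted  = {1,0,0,1, 0,1,0,0, 0,0,1,0, 0,0,0,1};
        float[] v = skinVertex(new float[] {0, 0, 0},
                new float[][] {identity, shifted},
                new int[] {0, 1}, new float[] {0.5f, 0.5f});
        System.out.printf("skinned vertex: (%.2f, %.2f, %.2f)%n", v[0], v[1], v[2]);
        // prints (0.50, 0.00, 0.00): halfway between the two bone transforms
    }
}
```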



Another issue that I touched on while talking with someone else about this is the fact that it takes me long enough to crap out some terrible 3D model of something. I can’t imagine not being limited anymore. I’d have no excuse not to model every bolt on a tank, or every strand of hair on a human body. The increase in detail ends up being a huge increase in demand on the artists. Although, it might not be a bad thing for the people who want ultra-real-looking games. Too bad those games would take even longer to develop than they do now! :stuck_out_tongue:



Honestly, what I’d like to see happening is some sort of hybrid engine that uses this technology for highly detailed static objects and polygons for the rest/more traditional stuff.
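Something like this, as a hand-wavy dispatch sketch (all the interface names here are made up by me, not any real engine API): static detail goes down the point/voxel path, anything animated stays on the classic polygon path.

```java
// Hypothetical hybrid-engine dispatch sketch. VoxelRenderer, PolygonRenderer
// and SceneObject are invented names; only the dispatch idea matters.
import java.util.List;

public class HybridRenderSketch {

    interface SceneObject { boolean isStaticDetail(); }
    interface VoxelRenderer   { void draw(SceneObject o); }
    interface PolygonRenderer { void draw(SceneObject o); }

    static void renderFrame(List<SceneObject> scene,
                            VoxelRenderer voxels, PolygonRenderer polys) {
        for (SceneObject o : scene) {
            if (o.isStaticDetail()) {
                voxels.draw(o);   // terrain, buildings, props: never deform
            } else {
                polys.draw(o);    // characters, foliage, anything animated
            }
        }
    }
}
```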



Good stuff though. 2c deposited!

~FlaH

For those interested, this came out this morning. An interesting read, to say the least. (Link at the bottom.)



I’ve got an open mind. I won’t dismiss the idea just because it’s something unseen/inapplicable/unheard of up until now. Technology moves way too fast to dismiss anything that has a footing on the ground, even if said footing is not that solid.



In that same vein, a garage hobbyist came up with a novel, better, stronger and faster way to make steel. He went to a professor and submitted his idea. The professor told him “It’s impossible!”, but after probing and nagging, the guy got the professor to attend a demonstration with his students. Needless to say, the professor ate his words raw when he saw the result.



All this to say that you HAVE to keep an open mind. I’m not saying this technology would be viable, nor that it would be easy to implement. Then there’s the question of tools, adaptation, etc, etc…



Wait and see, wait and see. :slight_smile:



Here’s the link to the article: http://kotaku.com/5827192/

Forgot to add…



Notch might be good; some would even go the distance and call him great. But whatever he is, he’s not all-knowing. He’s just a guy who made a game. His opinion is just that. (I do have a bone to pick with him.) Whatever his view on this is, it’s his own take on the subject, and I know for a fact that because Notch said so, his words will be gospel to some. What. Ever.

Even if this wasn’t fake, converting polygons to unlimited point-cloud data would take a long time if we’re talking about the detail they suggest, and probably wouldn’t be feasible.

Actually, is it just me or does this all look kind of unlit?

I mean, you still need some kind of light calculation. Forward lighting would be a problem, since its cost scales with the number of lights and fragments, so you would be just as limited in light count as you currently are.

Deferred rendering would be a problem for any transparent object, as it already is today; the only way to solve that is to do forward rendering for transparent objects.

→ Forward rendering requires the geometry to be sorted, but if you don’t have discrete geometry you have to sort the atoms themselves (with the upside that the otherwise unsolvable transparency issues in forward rendering, like intersecting transparent objects, would be solved). Doing that kind of sorting for any non-static atoms would probably require enormous amounts of CPU power (from the current perspective; let’s talk about this again in 5-10 years).
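In miniature, that sorting step looks like this (a toy sketch of my own; sorting a few hundred objects back-to-front is cheap, but doing it per atom, every frame, for anything that moves, is where it explodes):

```java
// Back-to-front sorting for forward-rendered transparency (illustrative only).
import java.util.Arrays;
import java.util.Comparator;

public class TransparencySortSketch {

    static float distSq(float[] p, float[] cam) {
        float dx = p[0] - cam[0], dy = p[1] - cam[1], dz = p[2] - cam[2];
        return dx * dx + dy * dy + dz * dz;
    }

    /** Sorts transparent positions back-to-front relative to the camera. */
    static void sortBackToFront(float[][] positions, float[] camera) {
        Arrays.sort(positions, Comparator.comparingDouble(
                (float[] p) -> distSq(p, camera)).reversed());
    }

    public static void main(String[] args) {
        float[][] atoms = {{0, 0, 1}, {0, 0, 9}, {0, 0, 5}};
        sortBackToFront(atoms, new float[] {0, 0, 0});
        for (float[] a : atoms) System.out.println(a[2]); // 9.0, 5.0, 1.0
    }
}
```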



Second, there is the texturing problem: if you colour each atom with one single colour, the amount of RAM needed for that many voxels would probably kill most graphics cards already.

If, on the other hand, you use normal texturing, you need UV mapping for the atoms, and the unlimited detail is somewhat reduced to the resolution of your textures.



Not to mention that for proper lighting you would need tangents and normals for EVERY visible atom surface; I can’t really think of any currently available system that could store that much data efficiently.
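To put rough numbers on that (my own back-of-the-envelope arithmetic, not anyone’s published figures):

```java
// Storage estimate for per-atom colour + normal + tangent (assumed layout).
public class AtomStorageEstimate {
    public static void main(String[] args) {
        long atoms = 1_000_000_000L;        // a modest 1 billion atoms
        int bytesPerAtom = 4                // RGBA colour, 1 byte per channel
                         + 3 * 4            // normal: 3 floats
                         + 3 * 4;           // tangent: 3 floats
        double gib = atoms * (double) bytesPerAtom / (1L << 30);
        System.out.printf("%d bytes/atom -> %.1f GiB for %d atoms%n",
                bytesPerAtom, gib, atoms);
        // 28 bytes/atom -> ~26.1 GiB, far beyond any 2011 graphics card
    }
}
```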

I find the claim of “infinite detail” to be a tad fishy. It’s like in all those CSI, NCIS, LMNOP shows where they take a photo from a cellphone and just zoom in infinitely; then when it gets blurry they hit the almighty “enhance” button and SHAZAAM, we have a clear photo. Or, to better illustrate it through the art of Red Dwarf:

http://www.youtube.com/watch?v=KUFkb0d1kbU



I suppose my point is that there will always be an upper limit to detail no matter what you do, and (as Notch pointed out) the volume of raw data in doing what they claim to be doing is just frankly pants-on-head insane (for now).
wezrule said:
Even if this wasn't fake, converting polygons to unlimited point-cloud data would take a long time if we're talking about the detail they suggest, and probably wouldn't be feasible.

Convert? How? From what I understand, they would take whatever model you have and convert it in-house with their tools. After that conversion is done, I imagine you'd have to use their technology (engine) under a license...

If all that is an engine (or something similar), then I don't see a conversion problem at all.
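For what it's worth, here's how I could imagine such a converter working in principle (pure guesswork on my part; Euclideon hasn't published anything): sample each triangle of the mesh with a point density proportional to its area, using random barycentric coordinates. The "detail" then depends entirely on how densely you sample, which is also why the data volume explodes at the density they suggest.

```java
// Hypothetical polygon-to-points conversion sketch (my own guess, not theirs).
import java.util.ArrayList;
import java.util.List;
import java.util.Random;

public class MeshToPointsSketch {
    static final Random RNG = new Random(42);

    /** Emits roughly pointsPerUnitArea points uniformly over one triangle. */
    static List<float[]> sampleTriangle(float[] a, float[] b, float[] c,
                                        double pointsPerUnitArea) {
        // Triangle area from the cross product of its edge vectors.
        float[] ab = {b[0]-a[0], b[1]-a[1], b[2]-a[2]};
        float[] ac = {c[0]-a[0], c[1]-a[1], c[2]-a[2]};
        float cx = ab[1]*ac[2] - ab[2]*ac[1];
        float cy = ab[2]*ac[0] - ab[0]*ac[2];
        float cz = ab[0]*ac[1] - ab[1]*ac[0];
        double area = 0.5 * Math.sqrt(cx*cx + cy*cy + cz*cz);

        int n = (int) Math.ceil(area * pointsPerUnitArea);
        List<float[]> points = new ArrayList<>(n);
        for (int i = 0; i < n; i++) {
            // Uniform barycentric sampling: fold (u, v) back into the triangle.
            float u = RNG.nextFloat(), v = RNG.nextFloat();
            if (u + v > 1f) { u = 1f - u; v = 1f - v; }
            points.add(new float[] {
                a[0] + u*ab[0] + v*ac[0],
                a[1] + u*ab[1] + v*ac[1],
                a[2] + u*ab[2] + v*ac[2]});
        }
        return points;
    }

    public static void main(String[] args) {
        List<float[]> pts = sampleTriangle(
                new float[] {0, 0, 0}, new float[] {1, 0, 0},
                new float[] {0, 1, 0}, 1_000_000); // 1M points per unit^2
        System.out.println(pts.size() + " points for one half-unit triangle");
        // ~500000: point count, and thus storage, scales with sample density
    }
}
```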
EmpirePhoenix said:
Actually, is it just me or does this all look kind of unlit?
I mean, you still need some kind of light calculation. Forward lighting would be a problem, since its cost scales with the number of lights and fragments, so you would be just as limited in light count as you currently are.
Deferred rendering would be a problem for any transparent object, as it already is today; the only way to solve that is to do forward rendering for transparent objects.
-> Forward rendering requires the geometry to be sorted, but if you don't have discrete geometry you have to sort the atoms themselves (with the upside that the otherwise unsolvable transparency issues in forward rendering, like intersecting transparent objects, would be solved). Doing that kind of sorting for any non-static atoms would probably require enormous amounts of CPU power (from the current perspective; let's talk about this again in 5-10 years).

Second, there is the texturing problem: if you colour each atom with one single colour, the amount of RAM needed for that many voxels would probably kill most graphics cards already.
If, on the other hand, you use normal texturing, you need UV mapping for the atoms, and the unlimited detail is somewhat reduced to the resolution of your textures.

Not to mention that for proper lighting you would need tangents and normals for EVERY visible atom surface; I can't really think of any currently available system that could store that much data efficiently.

I understand your concern and I actually agree with that. But I think the problem we have right now is that we're all thinking with a polygon mindset, and that might be the wrong way to think about it.

I also have to admit their scene looks like it's "fullbright", with no shadows, no lighting, no anything. I can't say it's a concern really, but like you, I didn't see any of that. Maybe that's why they're taking their sweet time.

Think about it. The last time we heard about them was over a year ago with a "basic scene" (if you could call that a scene), and last week they went viral with a real scene that looks totally sweet, awesome and great.

If you think back to the late 1990s, the upper limit of tris in a scene was, at the max, around 20,000. For normal users the ideal tri count was around 12,000-15,000. You could go higher, but it was very risky and demanded a powerful machine.

Lastly, 50 years ago, microprocessors were nonexistent. If someone had happened to say they had this new "thing"... etc. You know where I'm going with this. Nowadays, it doesn't take 10 years for technological advancement to jump to the next step.

That's all I'm saying. :)

Don’t get me wrong, what they accomplished is quite good from an academic point of view, but practically it has no effect so far.

thecyberbob said:
I find the claim of "infinite detail" to be a tad fishy. It's like in all those CSI, NCIS, LMNOP shows where they take a photo from a cellphone and just zoom in infinitely; then when it gets blurry they hit the almighty "enhance" button and SHAZAAM, we have a clear photo. Or, to better illustrate it through the art of Red Dwarf:

Yeah. I always laugh at those "enhancements". It's the most ludicrous thing I have ever seen. A picture can't have more pixels than the camera it was taken with. Everyone in their right mind knows that. Well, let's just say most informed people know that. ;)


I suppose my point is that there will always be an upper limit to detail no matter what you do, and (as Notch pointed out) the volume of raw data in doing what they claim to be doing is just frankly pants-on-head insane (for now).

If you think in polygons, yeah. I agree. But that supposed technology smells different to me.

I guess what I'm trying to convey is my wish for it to be true, notwithstanding what we are used to, know and expect. I'm sure serious developers are arching their eyebrows with a dubious look, but AFAIK only Notch has made a statement, albeit a negative one. John Carmack quickly mentioned something on Twitter, but like most gurus, he's waiting for real data before passing judgment. As we should do.
madjack said:
... he's waiting for real data before passing judgment. As we should do.


But but but but... How can we get a good incoherent yelling match going if we're reasonable? :(
thecyberbob said:
But but but but... How can we get a good incoherent yelling match going if we're reasonable? :(

You can have your yelling match if you want. Being reasonable means that we should not JUDGE before having the data. You can, if you're close-minded, make up your mind right now. You can also discuss and pretend you're in the know and strike the hammer on them. Nothing stops you from discussing the issue. But it would be premature to outright say it's a scam/hoax/impossible before we have SOMETHING tangible.

That's all I'm saying. So yell if you feel like it. ;)

I don’t understand how their technology works. 20fps using only the CPU with trillions of atoms… How can this be achieved? o.o



If this is true, it will be a huge revolution in 3D graphics… But they still have a lot to solve, especially about animated objects.
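The most common guess (Notch’s write-up says as much) is a sparse voxel octree: for each pixel you descend the tree only until a cell is about one pixel big, so the work per frame is roughly screen size times tree depth, no matter how many atoms the full data set notionally contains. A minimal lookup sketch, entirely my own illustration:

```java
// Sparse voxel octree point lookup (a guess at the approach, not their code).
public class OctreeDescentSketch {

    static class Node {
        Node[] children = new Node[8]; // null child = empty space (sparse!)
        int color;                     // payload for this cell's "atom"
    }

    /**
     * Finds the atom covering (x, y, z) inside this node's cube, stopping
     * once a cell is no bigger than pixelSize (the level-of-detail cut-off).
     * Coordinates are kept relative to the current node's corner.
     */
    static Integer lookup(Node node, double x, double y, double z,
                          double cellSize, double pixelSize) {
        if (node == null) return null;                // empty space: no atom
        if (cellSize <= pixelSize) return node.color; // cell ~ 1 pixel: stop
        double half = cellSize / 2;
        int ix = x >= half ? 1 : 0;
        int iy = y >= half ? 1 : 0;
        int iz = z >= half ? 1 : 0;
        return lookup(node.children[ix + 2 * iy + 4 * iz],
                x - ix * half, y - iy * half, z - iz * half, half, pixelSize);
    }

    public static void main(String[] args) {
        Node root = new Node();
        Node child = new Node();
        child.color = 0xFF8800;
        root.children[0] = child;      // only octant (0,0,0) is occupied
        // Descends one level and stops: prints 16746496 (= 0xFF8800).
        System.out.println(lookup(root, 0.1, 0.2, 0.3, 1.0, 0.5));
    }
}
```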

Not only do they have a lot to solve, but a lot of technical explaining to do. A video is just that: a video. We can suppose and argue until the cows come home, but without being able to plunge into the code, or at least the technical documentation, it’s impossible to pass judgment.



For now all I can say is, it’s impressive, beautiful and very interesting. But I can also say that about many movie previews… Until I’ve seen it, I won’t know if it’s utter crap or a masterpiece. :wink:

The one thing that annoys me the most is their lack of transparency. Maybe they’re right that people would make a fuss about half-decent animations being showcased on day 1, but if they then showed improved animations on day 7, and even better ones on day 14 (and so on), people would immediately get that they’re not stuck.



Also, I don’t get why these guys are getting all the attention, when there are other projects seemingly further ahead than they are, like these guys:

http://www.atomontage.com/



I still think the tech needs a couple years before it’s ready for prime-time, but it sure is exciting. Developers just have to be careful not to beat their drums of revolution before they’re fully armed and ready.

I agree @erlend_sh, but it depends on whom they want to build the hype for. When I look at these videos and the simple explanations they give, it suggests to me they’re targeting the gamers. By that I mean those who play the games, not developers. In my mind, the reasoning is sound. Gamers are passionate. They’re exuberant and will “viralize” a video like that for the sheer beauty of what it portends. Gamers will want that NOW! for their favorite game engine and create a movement.



After that, you hit the tech-savvy wall of developers who know this isn’t a cakewalk. Some might argue it’s impossible… or point out the inherent problems with animation, lighting, post-processing, etc.



But I think they wanted to get the info out, and succeeded, albeit with limited success: create a wave of interest, or scorn for some. Anyone who knows about marketing/hype knows the saying: “Speak ill of me or speak well of me, it doesn’t matter as long as you talk about me.” Everyone knows about them now. But that’s a two-edged sword. Now they have to deliver, and if they don’t, those guys are putting ALL their credibility on the table. If it fails… ouch. But if they do succeed all the way through… they’ll be revered, and rightly so.



It’s a gamble.

Given that they were talking about “revolutionizing” the industry without really getting specific as to the type of game, etc., I took it as viral marketing for investors. They are aiming at the investor who is slightly less tech-savvy but is looking to invest in the “next big thing” before the “big boys” do.



Just because they are looking for these easier “marks” doesn’t mean that it’s snake oil… but that’s the way they are currently marketing it. So it gets the already skeptical riled up a bit.