I found it an interesting read:
Awesome, thanks P, I love these types of articles
And it had pictures and stuff, too… which always helps.
That’s what I mean, articles with pictures so I don’t need to read anything
Would be cool if someone used that description to make a jME version of the rendering pipeline.
I check this forum every day, how did this slip past me 17 days ago??
Thoughts on DoF and motion blur? I turn them off the first chance I get. I particularly hate motion blur applied to static objects when you move your mouse to look around. I swear that effect isn’t even realistic, because of the whole ‘saccadic masking’ thing.
Interesting techniques they are using…
Some points I have written in my memo:
*They are using screen-space normals in the g-buffer (saving 16 bits per pixel at the cost of a matrix multiplication and some vector math)
*On the other hand, they are spending a full RGB g-buffer target just to get the motion blur effect
*Depth prepass (doubles the draw calls; might be expensive on the CPU side if not nicely batched), probably:
a) a result of the heavy computation in the forward lighting
b) to save texture reads
c) they might have some free GPU time at this stage, since the light info has to be passed to the GPU anyway.
(Still very skeptical about a full prepass.)
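To make that trade-off concrete, here’s a tiny CPU-side toy (my own numbers, not theirs) that counts shaded fragments with and without a depth-only prepass:

```java
// Toy overdraw model for the prepass question: several fragments land on the
// same pixel back-to-front. Without a prepass, every fragment that passes a
// LESS depth test gets shaded; with a depth-only prepass, the shading pass
// (depth func EQUAL) shades only the closest one. Numbers are made up.
public final class PrepassDemo {
    // Shade count when shading and depth-testing in one pass
    // (worst case: back-to-front submission).
    static int naiveShades(float[] depths) {
        int shades = 0;
        float z = Float.MAX_VALUE;
        for (float d : depths) {
            if (d < z) { z = d; shades++; }
        }
        return shades;
    }

    // Shade count after a depth-only prepass: geometry is drawn twice, but
    // only fragments matching the final depth (EQUAL) pay the shader cost.
    static int prepassShades(float[] depths) {
        float zMin = Float.MAX_VALUE;
        for (float d : depths) zMin = Math.min(zMin, d);
        int shades = 0;
        for (float d : depths) if (d == zMin) shades++;
        return shades;
    }

    public static void main(String[] args) {
        float[] depths = { 0.9f, 0.7f, 0.5f, 0.3f };
        System.out.println("naive:   " + naiveShades(depths));   // 4
        System.out.println("prepass: " + prepassShades(depths)); // 1
    }
}
```

So the prepass pays double the draw calls to turn four shader invocations into one here; whether that wins depends entirely on how heavy the pixel shader is.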
SSR: if they use the previous frame as source, they could get two-bounce reflections for free (at the cost of precision).
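A toy sketch of that feedback idea (entirely made-up scene, not their implementation): because each frame samples the previous frame’s final image, every extra bounce shows up one frame later:

```java
import java.util.Arrays;

// Toy model of "SSR sampling the previous frame gives extra bounces": a 1D
// scene of three pixels where pixel 0 reflects pixel 1, and pixel 1 reflects
// pixel 2 (the only emitter). Each frame blends a pixel's own color with the
// PREVIOUS frame's final color at its reflected pixel, so light needs one
// frame of latency per bounce. Purely illustrative numbers.
public final class SsrFeedback {
    static double[] render(double[] base, int[] refl, double k, int frames) {
        double[] prev = base.clone();
        for (int f = 0; f < frames; f++) {
            double[] cur = new double[base.length];
            for (int i = 0; i < base.length; i++)
                cur[i] = base[i] + k * prev[refl[i]]; // tap last frame's result
            prev = cur;
        }
        return prev;
    }

    public static void main(String[] args) {
        double[] base = { 0.0, 0.0, 1.0 };  // only pixel 2 emits
        int[] refl = { 1, 2, 2 };           // who each pixel reflects
        System.out.println(Arrays.toString(render(base, refl, 0.5, 1))); // [0.0, 0.5, 1.5]
        System.out.println(Arrays.toString(render(base, refl, 0.5, 2))); // [0.25, 0.75, 1.75]
    }
}
```

After one frame, pixel 0 still sees nothing; after two, it picks up pixel 2’s light via pixel 1, i.e. the free second bounce.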
Shadowing and megatexturing: nothing to say… but interesting that you don’t see any glitches or pop-ins, thanks to the reactive system they are using.
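Going back to the screen-space normals point: a minimal sketch of the “store two components, reconstruct the third” trick (my own illustration, not the article’s actual encoding):

```java
// A minimal sketch of the "store two components, reconstruct the third"
// idea behind compact view-space normals in a g-buffer. The encoding here
// is my own illustration, not the article's actual scheme.
public final class NormalPacking {
    // Keep x and y of a unit view-space normal, drop z.
    static float[] encode(float[] n) {
        return new float[] { n[0], n[1] };
    }

    // z = sqrt(1 - x^2 - y^2): mostly valid because view-space normals of
    // visible surfaces tend to face the camera (z >= 0); a true back-facing
    // z would be lost, which is part of the precision trade.
    static float[] decode(float[] xy) {
        float z = (float) Math.sqrt(Math.max(0f, 1f - xy[0] * xy[0] - xy[1] * xy[1]));
        return new float[] { xy[0], xy[1], z };
    }

    public static void main(String[] args) {
        float[] n = { 0.36f, 0.48f, 0.8f };     // unit-length, camera-facing
        float[] back = decode(encode(n));
        System.out.printf("%.2f %.2f %.2f%n", back[0], back[1], back[2]);
    }
}
```

Two stored components instead of three is where the saved bits per pixel come from; the sqrt and the view-space transform are the “matrix multiplication and some vector math” cost.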
I think it’s a pretty clear win if a) you already have relatively few draw calls and b) you do a LOT of work in the pixel shaders.
I’ve been tempted to see if my triaxial texture interpolation (with normals + bumps, etc.) would actually be faster done as separate passes with the EQUAL depth function on the subsequent passes. a) I get to bypass all of the conditional logic (and the probably wasted texture lookups that result), b) I get the flexibility to have as many texture layers as I want for the cost of a draw call each, and finally c) I can trivially skip some layers if it makes sense to avoid them.
It’s not yet clear at what point the trade-off happens so it will be fun to experiment some day. It also might be a way more obvious thing to do if I were doing stuff like occlusion queries that could work fine just with the first pass.
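A rough model of the EQUAL trick (buffers and counts invented for illustration): once the first pass has laid down depth, a per-layer pass only pays its texture fetches where its fragment is the visible one, with no per-fragment branching:

```java
// Sketch of the "extra pass per texture layer with depth func EQUAL" idea:
// after the first pass has written depth, a later layer pass re-submits the
// geometry, and only fragments whose depth matches the buffer (i.e. the
// visible ones) pay the texture lookups. The buffers here are invented.
public final class LayeredPasses {
    // Count the texture fetches a layer pass performs under EQUAL.
    static int equalPassLookups(float[] depthBuf, float[] layerFrag) {
        int lookups = 0;
        for (int i = 0; i < depthBuf.length; i++)
            if (layerFrag[i] == depthBuf[i]) lookups++; // depth func EQUAL
        return lookups;
    }

    public static void main(String[] args) {
        // Depth buffer after the base pass; at two pixels, closer unrelated
        // geometry won, so this layer's own fragments differ there.
        float[] depthBuf  = { 0.2f, 0.5f, 0.5f, 0.9f, 0.2f };
        float[] layerFrag = { 0.2f, 0.6f, 0.5f, 0.9f, 0.4f };
        System.out.println("layer lookups: "
                + equalPassLookups(depthBuf, layerFrag)); // 3 of 5
    }
}
```

A single-pass version with conditionals would have evaluated its branch (and possibly fetched) at all five pixels; the EQUAL pass pays only at the three visible ones.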
I don’t know if they use this, but even if the velocity buffer sounds expensive, it can help with many things other than motion blur, namely temporal reprojection for better and faster anti-aliasing. I guess it’s always a trade-off between memory consumption and speed.
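A minimal 1D sketch of what the velocity buffer stores and how reprojection reuses it (my own simplification, not any engine’s actual API):

```java
// Minimal sketch of a velocity buffer and temporal reprojection: a 1D
// "screen", a camera that moved between frames, and
// velocity = current screen position - last frame's screen position.
// The setup and names are my own simplification.
public final class Reproject {
    static double toScreen(double worldX, double camX) { return worldX - camX; }

    // What the velocity buffer stores per pixel.
    static double velocity(double worldX, double prevCam, double curCam) {
        return toScreen(worldX, curCam) - toScreen(worldX, prevCam);
    }

    public static void main(String[] args) {
        double worldX = 10.0, prevCam = 2.0, curCam = 3.0; // camera moved +1
        double v = velocity(worldX, prevCam, curCam);
        double curPos = toScreen(worldX, curCam);
        // Motion blur smears along v; temporal AA fetches last frame's
        // history at curPos - v, which lands exactly on where the point
        // was on screen last frame:
        System.out.println("velocity = " + v + ", history tap = " + (curPos - v));
    }
}
```

So one buffer feeds both effects, which is exactly the memory-vs-speed trade: pay for the extra target once, reuse it everywhere.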
Fine, so is anyone interested in stupidly redoing the exact same rendering pipeline in jME with all the given information? I mean, thinking and talking are nice, but maybe someone wants to do it like a factory in China and just reproduce the given template? That’d be cool.
It’s like saying “hey guys, I saw this making-of for Star Wars the other day; instead of talking for hours, can’t anyone here just make the same movie based on it?”.
There are so many reasons why games are made by professional programmers/artists/game designers who are paid to do it… this is one of them.
Add support for megatexturing, UBOs, SSBOs, compute shaders, and the missing texturing modes, including profiles for the image reader/writer, to the core, and then I will try to clone the pipeline.
Ah, I forgot conditional rendering.
Okay … so now we need only one more copy-cat-robot who stupidly implements those features by copying them from C++ or article to Java.
Wasn’t that how Episode 1, 2, and 3 came about?
Still a bad analogy.
Build the exact same camera and then improve it from there on.
It’s what made Japanese great and what will make Chinese great.
Today they start implementing their own hardware features.
Any movie could be made with such a great camera.
At some point, a particular pipeline becomes so restrictive that the only games you can make with it are just direct variations on the original. I think this is one of those cases.
Damn, where is Google Summer of Code when you need it?
I know you were joking… but it turns out that green college students are not always so great at rapidly coming up to speed on advanced rendering techniques and how to surgically inject them into a high performance game engine. Who knew?
I still struggle with this and the last time I set foot on a college campus was a looong time ago. (If I’d had a kid that day, he’d be just about ready to graduate college soon. :))
Compute shaders are really something amazing that we still miss, I guess along with all the other shader types we don’t have (geometry shaders?), but I guess you could at least replace the compute shader with our OpenCL implementation.
I just don’t know if this interferes with OpenGL? As in: only one can access the GPU at a time.
Anyway: where is the difference between a MegaTexture and a TextureAtlas? If I got that right, the idea of a MegaTexture might be slightly different, as in: it contains every terrain texture of the scene and is simply unwrapped. But in theory, isn’t an atlas the same thing (if you only had one atlas for the whole scene)?
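For what it’s worth, here’s how I’d sketch the difference (page layout entirely hypothetical): a plain atlas is a fixed offset/scale UV remap, while a megatexture adds a page-table indirection so only the pages actually needed are resident:

```java
// Sketch of the difference in spirit: a classic atlas remaps UVs with a
// fixed per-subtexture offset/scale known up front, while a megatexture
// adds an indirection (page) table so only currently needed pages live in
// the physical cache texture. Layout and sizes here are hypothetical.
public final class AtlasVsMegatexture {
    // Plain atlas: the sub-texture's region is fixed and known up front.
    static double[] atlasUv(double u, double v, double offU, double offV, double scale) {
        return new double[] { offU + u * scale, offV + v * scale };
    }

    // Megatexture-style lookup: which physical cache page holds this
    // virtual UV? -1 means the page has not been streamed in yet.
    static int lookupPage(int[] pageTable, int pagesPerRow, double u, double v) {
        int px = (int) (u * pagesPerRow), py = (int) (v * pagesPerRow);
        return pageTable[py * pagesPerRow + px];
    }

    public static void main(String[] args) {
        // Atlas: sub-texture occupies the quarter starting at (0.5, 0.0).
        double[] uv = atlasUv(0.25, 0.5, 0.5, 0.0, 0.5);
        System.out.println(uv[0] + ", " + uv[1]); // 0.625, 0.25

        // Megatexture: a 4x4 page table, mostly not resident.
        int[] pageTable = { 0, -1, 1, -1,  -1, 2, -1, -1,  -1, -1, -1, 3,  -1, -1, -1, -1 };
        System.out.println("page: " + lookupPage(pageTable, 4, 0.3, 0.3)); // resident page 2
    }
}
```

So yes, a single scene-wide atlas is the same idea as the virtual texture itself; the megatexture part is the streaming and the indirection that make it practical at sizes that would never fit in memory as one real atlas.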
We have geometry and tessellation shaders.