Article: Doom (2016) Graphics Study

I found it an interesting read:


Awesome, thanks P. I love these types of articles.

And it had pictures and stuff, too… which always helps. :slight_smile:

That's what I mean: articles with pictures, so I don't need to read anything.


Would be cool if someone used that description to make a jME version of the rendering pipeline. :slight_smile:

I check this forum every day, how did this slip past me 17 days ago??

Thoughts on DoF and motion blur? I turn them off the first chance I get. I particularly hate motion blur applied to static objects when you move your mouse to look around. I swear that isn't even a real thing, because of the whole 'saccadic masking' effect: your vision is suppressed during fast eye movements, so you wouldn't perceive that blur in reality anyway.

Interesting techniques they are using…
Some points I have written in my memo:

* They are using screen-space normals in the G-buffer (saving 16 bits per pixel at the cost of a matrix multiplication and some vector math; see the sketch after this list)

* On the other hand, they are spending a full RGB G-buffer target to get the motion blur effect

* Depth prepass (doubles the draw calls; might be expensive on the CPU side if not nicely batched). Probably:
a) a result of the heavy computation in the forward lighting,
b) there to save texture reads,
c) affordable because they might have some free GPU time at this stage, since the light info has to be passed to the GPU.
(Still very skeptical about a full prepass.)

* SSR: if they use the previous frame as the source, they could get two-bounce reflections for free (at the cost of precision)

* Shadowing and megatexturing: nothing to say… but it's interesting that you don't see any glitches or pop-in, despite the reactive system they are using
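
To make the normals point concrete, here is a minimal Java sketch of the general idea (an assumption of the simplest variant, not necessarily id's actual encoding): store only x and y of a unit-length view-space normal and rebuild z in the lighting pass. Production engines usually prefer octahedral or sphere-map encodings, because the naive square root below loses the sign of z.

```java
// Minimal sketch: two stored components instead of three.
// Assumes view-space normals with z >= 0 (mostly true, not always,
// which is why real encodings are fancier).
public final class NormalPacking {

    /** Pack: keep x and y, drop z (two 16-bit channels instead of three). */
    public static float[] encode(float nx, float ny, float nz) {
        return new float[] { nx, ny };
    }

    /** Unpack: rebuild z from the unit-length constraint x^2 + y^2 + z^2 = 1. */
    public static float[] decode(float nx, float ny) {
        float z2 = 1.0f - nx * nx - ny * ny;
        float nz = (float) Math.sqrt(Math.max(z2, 0.0f));
        return new float[] { nx, ny, nz };
    }

    public static void main(String[] args) {
        float[] n = decode(0.6f, 0.0f);
        System.out.printf("reconstructed: (%.2f, %.2f, %.2f)%n", n[0], n[1], n[2]);
        // prints: reconstructed: (0.60, 0.00, 0.80)
    }
}
```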


I think it's a pretty clear win if a) you already have relatively few draw calls and b) you do a LOT of work in the pixel shaders.

I've been tempted to see if my triaxial texture interpolation (with normals + bumps, etc.) would actually be faster done as separate passes with the EQUAL depth function on the subsequent passes. a) I get to bypass all of the conditional logic (and the probable wasted texture lookups that result), b) I get the flexibility to have as many texture layers as I want for the cost of a draw call each, and finally c) I get to trivially skip some layers when it makes sense to avoid them.

It's not yet clear at what point the trade-off happens, so it will be fun to experiment some day. It also might be a way more obvious thing to do if I were doing stuff like occlusion queries that could work fine with just the first pass.
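
For what it's worth, a rough sketch of that two-pass structure in raw GL calls (jME would wrap this in RenderState; drawScene() and drawLayer() are hypothetical stand-ins for submitting your geometry):

```java
import static org.lwjgl.opengl.GL11.*;

// Sketch of a depth prepass followed by EQUAL-tested shading passes.
public class DepthPrepassSketch {

    void renderFrame() {
        // Pass 1: depth only. No color writes, plain depth test,
        // cheapest possible shader (positions only).
        glColorMask(false, false, false, false);
        glDepthMask(true);
        glDepthFunc(GL_LESS);
        drawScene();

        // Passes 2..n: shading. Depth is already final, so EQUAL lets
        // through exactly the visible fragment per pixel; no conditional
        // logic in the shader, and each extra texture layer is just one
        // more draw call that you can freely skip when it doesn't apply.
        glColorMask(true, true, true, true);
        glDepthMask(false); // depth buffer is read-only from here on
        glDepthFunc(GL_EQUAL);
        drawLayer(0);
        drawLayer(1);

        glDepthMask(true);  // restore state for the next frame
        glDepthFunc(GL_LESS);
    }

    void drawScene()      { /* submit geometry with a depth-only material */ }
    void drawLayer(int i) { /* submit geometry with layer i's material */ }
}
```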

Idk if they use this, but even if the velocity buffer sounds expensive, it can help with many things other than motion blur: namely temporal reprojection for better and faster anti-aliasing. I guess it's always the trade-off between memory consumption and speed.
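
A self-contained sketch of where that velocity buffer comes from (plain Java arrays instead of a math library, to keep it dependency-free): transform the same world-space point with this frame's and last frame's view-projection matrices and write out the screen-space delta. That same per-pixel delta then serves both motion blur and temporal reprojection.

```java
// Sketch: per-point motion vector from current vs. previous view-projection.
public class VelocitySketch {

    /** Row-major 4x4 matrix times (x, y, z, w) column vector. */
    static float[] mul(float[] m, float[] v) {
        float[] r = new float[4];
        for (int i = 0; i < 4; i++) {
            r[i] = m[i * 4] * v[0] + m[i * 4 + 1] * v[1]
                 + m[i * 4 + 2] * v[2] + m[i * 4 + 3] * v[3];
        }
        return r;
    }

    /** NDC-space motion vector for one world-space point. */
    static float[] velocity(float[] worldPos, float[] currViewProj, float[] prevViewProj) {
        float[] curr = mul(currViewProj, worldPos);
        float[] prev = mul(prevViewProj, worldPos);
        // Perspective divide, then subtract: this pair of floats is what
        // you would write to the velocity render target, and what a
        // temporal AA pass reads to fetch last frame's matching pixel.
        return new float[] {
            curr[0] / curr[3] - prev[0] / prev[3],
            curr[1] / curr[3] - prev[1] / prev[3]
        };
    }
}
```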

Fine, so is anyone interested in stupidly redoing the exact same rendering pipeline with jME with all the given information? I mean, thinking and talking is nice, but maybe someone wants to do it like a factory in China and just reproduce the given template? That'd be cool. :stuck_out_tongue_winking_eye:


It's like saying "hey guys, I saw this making-of of Star Wars the other day; instead of talking for hours, can't anyone here just make the same movie based on it?".
There are so many reasons why games are made by professional programmers/artists/game designers who are paid to do it… this is one of them.


Add support for megatexturing, UBOs, SSBOs, compute shaders, and the missing texturing modes (including profiles for the image reader/writer) to the core, then I will try to clone the pipeline.
Ah, I forgot conditional rendering.
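
Since conditional rendering came up, here is roughly what that feature buys you, sketched in raw LWJGL GL3 calls (this is the part the core doesn't expose; drawBoundingBox() and drawExpensiveMesh() are made-up stand-ins):

```java
import static org.lwjgl.opengl.GL15.*;
import static org.lwjgl.opengl.GL30.*;

// Sketch: occlusion query + conditional render, resolved entirely on the GPU.
public class ConditionalRenderSketch {

    int query = glGenQueries();

    void renderObject() {
        // 1. Cheap visibility test: draw just the bounds and count
        //    how many samples pass the depth test.
        glBeginQuery(GL_SAMPLES_PASSED, query);
        drawBoundingBox();
        glEndQuery(GL_SAMPLES_PASSED);

        // 2. The expensive draw is discarded by the GPU itself if the
        //    query returned zero samples; no CPU readback, no stall.
        glBeginConditionalRender(query, GL_QUERY_WAIT);
        drawExpensiveMesh();
        glEndConditionalRender();
    }

    void drawBoundingBox()   { /* depth-only box around the mesh */ }
    void drawExpensiveMesh() { /* the full material */ }
}
```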

Okay… so now we only need one more copy-cat robot who stupidly implements those features by copying them from C++ or from the article into Java. :stuck_out_tongue_winking_eye:

Wasn't that how Episodes 1, 2, and 3 came about?


Still a bad analogy.
Build the exact same camera and then improve it from there on.
It's what made the Japanese great and what will make the Chinese great.
Today they are starting to implement their own hardware features.
Any movie could be made with such a great camera.
:stuck_out_tongue_winking_eye:

At some point, a particular pipeline becomes so restrictive that the only games you can make with it are just direct variations on the original. I think this is one of those cases.

Damn, where is Google Summer of Code when you need it :smiley:

I know you were joking… but it turns out that green college students are not always so great at rapidly coming up to speed on advanced rendering techniques and how to surgically inject them into a high-performance game engine. Who knew? :wink:

I still struggle with this, and the last time I set foot on a college campus was a looong time ago. (If I'd had a kid that day, he'd be just about ready to graduate college soon. :))

Compute shaders are really something amazing that we still miss, I guess along with all the other shader types we don't have (geometry shaders?), but I guess you could at least replace the compute shader with our OpenCL implementation :stuck_out_tongue:
I just don't know whether this interferes with OpenGL? As in: only one of them can access the GPU at a time.

Anyway: where is the difference between a MegaTexture and a TextureAtlas? If I got that right, the idea of a MegaTexture might be slightly different, as in: it contains every terrain texture of the scene and is simply unwrapped. But in theory, isn't an atlas the same thing (if you only had one atlas for the whole scene)?
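
As far as I understand it, the practical difference is the indirection: an atlas bakes one fixed UV remap per sub-texture at build time, while a MegaTexture-style virtual texture resolves UVs at runtime through a page table, so pages of one huge virtual image can be streamed in and out of a small physical cache. A made-up comparison sketch (none of these names come from id or jME):

```java
// Sketch: atlas = fixed offline UV remap; virtual texture = runtime
// indirection through a page table over a huge virtual UV space.
public class VirtualTextureSketch {

    static final int PAGE  = 128;          // texels per page side
    static final int VIRT  = 128 * 1024;   // virtual texture side in texels
    static final int PAGES = VIRT / PAGE;  // pages per side

    // pageTable[y][x] -> physical page slot in the cache texture, or -1
    // if not resident yet (request streaming, sample a lower mip meanwhile).
    int[][] pageTable = new int[PAGES][PAGES];

    /** Atlas: one offset/scale per sub-texture, decided at build time. */
    static float[] atlasLookup(float u, float v, float[] offsetScale) {
        return new float[] { offsetScale[0] + u * offsetScale[2],
                             offsetScale[1] + v * offsetScale[3] };
    }

    /** Virtual texture: the remap itself is a per-lookup runtime decision. */
    int virtualLookup(float u, float v) {
        int px = (int) (u * PAGES);        // which page this UV falls into
        int py = (int) (v * PAGES);
        return pageTable[py][px];          // -1 means "not streamed in yet"
    }
}
```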

We have geometry and tessellation shaders.
