What are the real benefits of deferred shading?

I’ve read that modern GPUs support an early depth test that lets the GPU skip hidden fragments, and jME now has single-pass lighting, so I’m wondering whether there would be any tangible performance benefit to implementing deferred shading in jME.

The main benefit of deferred shading is that you don’t need to apply every light to every object in the scene (or to every object in its range); instead, you apply the lights to a previously prepared and rendered scene (the g-buffer). Long story short: far fewer draw calls.

So if you have many lights, it will speed up your rendering.
The drawback is that you need to handle a large block of memory called the g-buffer, which can be a problem on older graphics cards with low memory bandwidth.

I’m using deferred shading in Skullstone. I can ‘spawn’ lights during gameplay without worrying about the FPS dropping. Firing a lightning bolt or entering a room with many lights does not affect my frame rate.

Yes, but if you use single-pass lighting, all the lights are computed in a single draw call, and my understanding is that only visible fragments are shaded, so performance should be the same as with deferred shading.

All of the lights up to the configured limit… beyond that, it must use multiple passes.

Also, I believe deferred shading can incorporate shadows at the same time instead of as a second per-light step.

But I agree that in a vast number of cases, the benefits of deferred shading may no longer be worth it compared with modern forward rendering techniques. Even a lot of the industry is moving in this direction, too, I guess.


True, but you still have to set a limit because the hardware is not capable of rendering unlimited lights, so it’s more a matter of fixed limit vs dynamic-ish limit.

Yes, but the ‘fixed limit’ is closer to 10, versus a “dynamic-ish limit” closer to “how many geometries can I possibly render”.

Why? You still need to compute the same number of lights for each visible fragment.

My understanding of deferred rendering is that lights are painted essentially just like geometry. At least the demos people have posted here had hundreds and hundreds of lights in them.

With classic forward rendering you render the entire scene once for each light. That’s how jME 3.0 works.
So basically 100 objects with 100 lights = 10,000 draw calls.
With classic deferred rendering you first render a geometry pass that fills a g-buffer with different pieces of information: material color, the pixel’s position in world or view space (or just its depth), normals, everything you need to compute lighting. Then you build some kind of light buffer, essentially a texture holding each light’s influence on screen (usually by drawing a proxy geometry for each light type). Finally, from those two buffers, you compute the lighting in a post process.
So basically 100 objects with 100 lights = 200 draw calls (note that the light pass is usually less expensive than the g-buffer pass).
This allows you to build more complex dynamic lighting into your scene while keeping a decent frame rate.
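
To make the g-buffer part concrete, here is a minimal sketch of how such a multi-render-target buffer could be set up with jME’s FrameBuffer API. The specific targets and formats are just an example layout for illustration, not the code of any particular pipeline:

```java
import com.jme3.texture.FrameBuffer;
import com.jme3.texture.Image.Format;
import com.jme3.texture.Texture2D;

// Hypothetical g-buffer layout: one color target per kind of data the light
// pass needs, plus a depth texture so pixel positions can be reconstructed.
public class GBufferExample {

    public static FrameBuffer createGBuffer(int width, int height) {
        Texture2D diffuse  = new Texture2D(width, height, Format.RGBA8); // material color
        Texture2D normals  = new Texture2D(width, height, Format.RGBA8); // packed normals
        Texture2D specular = new Texture2D(width, height, Format.RGBA8); // specular color/power
        Texture2D depth    = new Texture2D(width, height, Format.Depth); // z-buffer

        FrameBuffer gBuffer = new FrameBuffer(width, height, 1);
        gBuffer.setMultiTarget(true); // enable MRT so one geometry pass fills all targets
        gBuffer.addColorTexture(diffuse);
        gBuffer.addColorTexture(normals);
        gBuffer.addColorTexture(specular);
        gBuffer.setDepthTexture(depth);
        return gBuffer;
    }
}
```

The scene is drawn once into this buffer, and the light pass then only reads these textures instead of re-rendering the geometry.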

Sounds like a good solution, so you may wonder why it’s not in JME.

  • Well, first, it’s a completely different pipeline, and @FrozenShade here can confirm this. So basically a lot more things to maintain.
  • Second, it makes transparency a lot more difficult to handle (not that it was easy before).
  • Third, MSAA becomes unusable (that’s not true anymore since OpenGL 3.0, but it’s an even bigger performance killer in deferred rendering, hence the FXAA-style techniques that popped up in the industry).

Why we don’t need it (IMHO):

  1. JME 3.1 introduced two tremendous improvements in light management: Light Culling and Single Pass Lighting.
  • Light Culling computes, for each geometry on each frame, which lights influence it.
    So when you have a scene with 100 geoms and 100 lights, there is a good chance that each geom is actually lit by only a very small subset of those lights. Even with multipass lighting this greatly reduces the number of draw calls.
  • Single Pass Lighting allows you to render a geometry lit by several lights in one pass (the number of lights per pass can be changed, but it’s fixed for a given scene; this may be improved in 3.2). See the configuration sketch after this list.
    So let’s say we render 8 lights per pass: if a geom is influenced by 12 lights at the same time, there will be 2 passes for that geom.
    So if your 100-geom, 100-light scene is well partitioned, there is a good chance it will be rendered in a single pass of roughly 100 draw calls. Maybe more, of course, depending on the case.
  2. Lighting in the modern graphics industry is more and more handled by global illumination through image-based lighting.
    That’s basically what we have with PBR: a light probe holds lighting information virtually coming from thousands of lights. This greatly reduces the need for standard dynamic lights.

  3. The rise of 4K screens makes the deferred technique’s g-buffer huge… and that’s a problem for GPU bandwidth, since you have to write and read a huge amount of data on each frame. Though… this may just be a question of time, as GPUs keep getting faster every day.
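
As referenced in point 1, here is a minimal sketch of how single-pass lighting can be configured in a jME 3.1 application (the batch size of 8 is just an example value); light culling itself is applied automatically per geometry:

```java
import com.jme3.app.SimpleApplication;
import com.jme3.material.TechniqueDef;

public class SinglePassSetup extends SimpleApplication {

    @Override
    public void simpleInitApp() {
        // Switch the lighting technique from multipass to single pass.
        renderManager.setPreferredLightMode(TechniqueDef.LightMode.SinglePass);
        // Number of lights handled per pass; a geometry lit by more lights than
        // this gets additional passes (e.g. 12 lights at 8 per pass = 2 passes).
        renderManager.setSinglePassLightBatchSize(8);
    }

    public static void main(String[] args) {
        new SinglePassSetup().start();
    }
}
```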

All of this combined, IMO, makes the need for deferred lighting pretty much disappear.

EDIT: @FrozenShade, note that I’m not saying what you did is worthless; it’s not like you had any choice back then, and you did a pretty good job.


Thanks @nehon, that’s the part that was missing in my reasoning.

Note that while it’s the most common solution, it’s not the only one.

Lights are just spheres. In my implementation, if the camera is inside a light’s radius I paint that light as a full-screen quad; all other lights are painted as spheres in the 3D world. When painting a light you need to use all the maps from the g-buffer, and you can also apply shadows at the same time.
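
For illustration, a tiny sketch of that camera-inside-the-volume test (a hypothetical helper, not the actual Skullstone code; the 1.05 margin is an arbitrary example):

```java
import com.jme3.light.PointLight;
import com.jme3.renderer.Camera;

// When the camera is inside the light's volume, the sphere proxy would be
// clipped or culled, so a full-screen quad is drawn for that light instead.
public class LightVolumeChooser {

    public static boolean useFullScreenQuad(Camera cam, PointLight light) {
        float distance = cam.getLocation().distance(light.getPosition());
        // Small margin so the switch happens before the near plane cuts into the sphere.
        return distance < light.getRadius() * 1.05f;
    }
}
```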

You only need diffuse, specular and normals (the normal map and the geometry’s normals can be packed into a single RGBA8 target). The pixel’s position can easily be reconstructed from the z-buffer, gl_FragCoord and the screen resolution.
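
For reference, here is that reconstruction sketched in Java (in a real pipeline this runs in the fragment shader; the sketch assumes an OpenGL-style [0,1] depth buffer and a standard perspective projection):

```java
import com.jme3.math.Matrix4f;
import com.jme3.math.Vector3f;
import com.jme3.math.Vector4f;

// Rebuild NDC coordinates from gl_FragCoord, the stored depth and the screen
// resolution, then unproject with the inverse projection matrix.
public class PositionReconstruction {

    public static Vector3f viewSpacePosition(float fragX, float fragY, float depth,
                                             float width, float height,
                                             Matrix4f projection) {
        // gl_FragCoord.xy / resolution and the depth value are in [0,1];
        // remap them to normalized device coordinates in [-1,1].
        float ndcX = (fragX / width)  * 2f - 1f;
        float ndcY = (fragY / height) * 2f - 1f;
        float ndcZ = depth * 2f - 1f;

        // Unproject and divide by w to get back to view space.
        Vector4f view = projection.invert().mult(new Vector4f(ndcX, ndcY, ndcZ, 1f));
        return new Vector3f(view.x / view.w, view.y / view.w, view.z / view.w);
    }
}
```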

If it were simple I would have shared it a long time ago. Such solutions are a kind of ‘custom pipeline’; I have a lot of things that are ‘dungeon specific’ integrated into one shader.

And that’s the reason my teammate hates me :wink:

Or they’ll just make the next generation of data bus 1,000,000x faster than the current one and eliminate the problem :smiley:

Thanks :slight_smile:

What do you do if you want more properties in your materials? For example, if you want a surface to glow, or have some sort of special effect (wavy texture distortion or something) that would normally be done in forward rendering - how do you do that with deferred? Would you just need more in the g-buffer?

Also what is the difference between deferred shading and deferred lighting?

Glow is done the old way: a full-screen texture is rendered and then fed to the glow filter.
With one small difference: I render that texture while rendering my g-buffer, as just another MRT output. So yes, you can call it part of the g-buffer, but you need the same texture in the forward pipeline anyway. Here I can render it a bit faster, because I don’t need to iterate over all the geometries again with a different technique.

I think I could do wavy texture distortion in a similar way, using a filter.

The difference between deferred shading and lighting is explained here: Deferred Shading Shines. Deferred Lighting? Not So Much. – Game Angst