I’m considering tackling this myself, but was just checking if any other jME3 veterans had any word of caution (or implementation tricks) on these ideas:
If a filter needs a “PreNormalPass”, save it for potential use later. For example, I do a CartoonEdgeFilter & a SSAO filter… both want a PreNormalPass, but shouldn’t it be possible to do just one & use it for both filters? This isn’t already being done, right?
For virtual reality: only render 1 shadow map, and use it for both eyes.
Yep, I have this in an experimental project somewhere.
It requires a bit more love.
Basically you need MRT, so the lighting material (or whatever material you use) needs to be changed a bit.
Then the FPP keeps a reference to the normal buffer and gives it to the filters that need it.
It really boosts SSAO performance. I didn't test cartoon edge, but it should be the same.
I actually gave this a go & didn’t notice much improvement. I didn’t dig into it much further, because the better solution was just to combine the cartoon edge & ambient occlusion shaders into one filter & shader. That way, not only do you save multiple normal passes, but all the other overhead that happens with two (or more) filters.
I’ll look into the shadow map sharing a bit later…
I use my own single-pass lighting system that doesn’t have a normal pass (i.e. it is in the geometry’s only material), so the only normal pass I know of is for the cartoon edge & SSAO filters. I combined those into one shader, so I think I now don’t have any additional geometry passes. Correct?
Although, MRT would be very useful for many cases.
Well…even if you combined them in one shader you still have 2 geometry passes instead of 3. That’s nice, but with MRT you’d have only one.
One pass for the back buffer (your single pass lighting does a geometry pass)
One pass to render the screen normals (triggered in the Filter)
The idea is to use the back buffer pass to populate the additional buffers that you need (here normals). Actually we already do that for Depth, because it’s mostly free and you don’t need MRT. But for an additional color buffer (normal that is) you need MRT.
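To make this concrete, here is a minimal sketch of what the MRT change to the lighting fragment shader could look like. This is illustrative only, not the actual jME3 Lighting shader; the variable names are made up, and it assumes the varyings already carry the forward-lit color and the view-space normal:

```glsl
#version 120
#extension GL_ARB_draw_buffers : enable

varying vec3 vNormal;    // view-space normal from the vertex shader
varying vec4 vLitColor;  // result of the forward lighting computation

void main() {
    // Target 0: the usual back buffer color (forward lighting as before).
    gl_FragData[0] = vLitColor;
    // Target 1: view-space normals packed into [0,1] so post-processing
    // filters (SSAO, cartoon edge, ...) can read them without an extra
    // geometry pass.
    gl_FragData[1] = vec4(normalize(vNormal) * 0.5 + 0.5, 1.0);
}
```

The only structural change from a plain forward shader is writing to `gl_FragData[n]` instead of `gl_FragColor`, plus attaching a second color texture to the framebuffer on the engine side.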
We could extend this to whatever needs an additional geometry pass in the scene view.
In a way it’s some kind of hybrid forward/deferred, because lighting is still computed in a forward way, but all the buffers needed for post-processing are rendered during the same pass as lighting.
I have a question about MRT, because I’m new to this topic: is it safe to use? I mean, would it work on any current graphics card?
My current shader uses it to generate diffuse, specular, normal, tangent, binormal and glow maps for the whole scene, and in general it works very nicely on my GTX 750 Ti.
Is it some kind of ‘standard’ for current GPUs?
With OpenGL 2, MRT is available via the “ARB_draw_buffers” extension. But on macOS the extension is not available, and you have to explicitly use OpenGL 3 without backward compatibility, so all your shaders should be GLSL 130+.
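For reference, on a core profile `gl_FragData` is deprecated, so the same MRT output is written through explicitly declared `out` variables. A hedged sketch (names are illustrative; the outputs are assumed to be bound to color attachments 0 and 1 from the application side with `glBindFragDataLocation`, or via `layout(location = ...)` if you can require GLSL 3.30+):

```glsl
#version 130

in vec3 vNormal;    // view-space normal from the vertex shader
in vec4 vLitColor;  // forward-lit color

// Fragment outputs replacing gl_FragData[0] and gl_FragData[1];
// mapped to attachments 0 and 1 by the application.
out vec4 outColor;
out vec4 outNormal;

void main() {
    outColor  = vLitColor;
    outNormal = vec4(normalize(vNormal) * 0.5 + 0.5, 1.0);
}
```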