I have a rather complex scene (a block-like world), and the problem is that the whole scene is rendered multiple times when I add a bloom (glow maps) and a water (own implementation) post filter. This drops me to a very low ~55 fps; without these post filters I get ~200 fps. As far as I understand, the default renderer has no framebuffer of its own and renders straight to the screen.
Is there any way to render the block world into a MRT framebuffer and then display it to the screen while rendering all other geometries (models, particle effects, …) as usual? I don’t want to render the world as post filter because then I have several new issues with the rendering order of other post filters and the transparent/translucent bucket.
The world shader should have 3 outputs: default, glow map, and water mask, and I hope it is possible to render the world only once (vertex and fragment). At the moment I have defined 3 techniques to skip the lighting calculations for the glow map and water mask passes, but they all have to use the same heavy vertex shader because it contains vertex animations. Skipping the lighting calculations gave me a little boost, but that’s not enough. If they used different vertex shaders, the animated vertices wouldn’t occlude the glow map or water mask.
Sure this can be done, but you would need to write your own shaders that write to your multiple render targets and your own glow filter that uses whatever texture you’re writing the glow render to in your shaders.
Yes, I know that I have to write my own shaders, but how can I render the world not as a post filter? Another important question is whether rendering with MRT will be significantly faster than the current state?
I’m not sure what you mean by rendering the world not as a post filter. What you would do is use custom shaders for every geometry in your scene, and those shaders would write to different frame buffers using multi-target rendering: in the shaders, instead of setting gl_FragColor you set gl_FragData[0], gl_FragData[1], and gl_FragData[2] to whatever colors you want. When you use gl_FragColor you’re setting the color of a particular pixel in the current frame buffer; when you use gl_FragData you’re doing the same thing except on multiple frame buffers in a single pass. Once the frame is rendered you do whatever you want with those buffers.
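A minimal sketch of such a fragment shader could look like this (m_DiffuseMap, m_GlowColor, and texCoord are illustrative names, not something from your existing materials):

```glsl
#version 120
// Minimal MRT fragment shader sketch -- names are illustrative.
// The framebuffer must have three color attachments bound.
uniform sampler2D m_DiffuseMap;
uniform vec4 m_GlowColor;
varying vec2 texCoord;

void main() {
    vec4 color = texture2D(m_DiffuseMap, texCoord);
    gl_FragData[0] = color;       // scene color (what gl_FragColor would be)
    gl_FragData[1] = m_GlowColor; // glow map
    gl_FragData[2] = vec4(0.0);   // water mask: 0 = not water
}
```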
For the buffer you’re writing the glow colors to, you hook it into a post processor that blurs the buffer, probably more than once, then add the result to the buffer holding the rendered scene, in this case probably the one you wrote to at gl_FragData[0].
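The final combine is then just an additive blend of the two buffers, something like (sampler names are assumptions for illustration):

```glsl
#version 120
// Additive combine of scene + blurred glow -- sampler names assumed.
uniform sampler2D m_SceneTexture;  // buffer written at gl_FragData[0]
uniform sampler2D m_BlurredGlow;   // blurred copy of gl_FragData[1]
varying vec2 texCoord;

void main() {
    gl_FragColor = texture2D(m_SceneTexture, texCoord)
                 + texture2D(m_BlurredGlow, texCoord);
}
```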
As for performance, I’m sure there would be a benefit to doing it this way, as to how much I wouldn’t know.
It is possible, yes, but as Tryder said you need to make your own lighting shader (basically just add in the outputs you want) and you have to modify the FilterPostProcessor to attach several outputs to the main framebuffer (note that the FPP already renders the scene to a framebuffer). Then you have to “dispatch” those outputs to the corresponding filters.
If you have a scene with a lot of objects, this will drastically improve your performance.
I made a local test doing this for SSAO, by rendering the normal pass as a MRT instead of an additional geom pass, and it worked pretty well.
Maybe the FPP should allow users to add some additional back buffer information… That would give us some kind of hybrid forward/deferred rendering: hybrid in the sense that the geometry pass generates several back buffers like in deferred, but lighting is still computed like in forward.
EDIT: note that MRT is not part of OpenGL 2 (I don’t remember in which version it was added, though), so you have to check the caps to be sure it’s available on the hardware.
Well, if you want to render something into your additional buffers for them, then yes, you need it. Otherwise you’ll have black areas where they are in the buffer (which can be alright for glow, but maybe not for your water mask).
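For example, a sky material’s fragment shader could explicitly clear the extra targets so nothing leaks into them (skyColor here stands for whatever color the sky shader actually computes):

```glsl
// Inside the sky's fragment shader:
gl_FragData[0] = skyColor;   // the sky as usual
gl_FragData[1] = vec4(0.0);  // no glow from the sky
gl_FragData[2] = vec4(0.0);  // sky is never water
```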
And yes, gl_FragData[0] is the same as gl_FragColor, but if you are going for OpenGL 3.3 you can use GLSL 1.5 and just define several outputs like this:
out vec4 outColor;
out vec4 outGlow;
out vec4 outWaterMask;
and then assign values to them in the shader. They’ll be assigned to the framebuffer outputs in the order they’ve been declared.
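Put together, a minimal GLSL 1.5 fragment shader with those three outputs might look like this (m_DiffuseMap and texCoord are illustrative names):

```glsl
#version 150
uniform sampler2D m_DiffuseMap; // illustrative name
in vec2 texCoord;

out vec4 outColor;     // color attachment 0
out vec4 outGlow;      // color attachment 1
out vec4 outWaterMask; // color attachment 2

void main() {
    outColor = texture(m_DiffuseMap, texCoord);
    outGlow = vec4(0.0);      // non-glowing geometry
    outWaterMask = vec4(0.0); // non-water geometry
}
```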
I have a problem with the glow map and water mask. Both outputs contain the sky, maybe due to the default background color (black, no alpha). I have another problem with the translucent bucket: is it also visible because it is drawn after the opaque bucket (with my 3 outputs)? How can I prevent drawing the sky into/onto the glow output?
The sky is blue because I use Rayleigh scattering. I have already added a TranslucentBucketFilter; otherwise translucent geometry is visible through opaque geometry.
Until now I modified the FPP by adding the two new outputs to the framebuffer at the end of the reshape() method, but before the for loop where the filters are initialized:
glowMap = new Texture2D(w, h, Format.RGBA8);
waterMaskMap = new Texture2D(w, h, Format.RGBA8);
After that I modified the BloomFilter by disabling the preGlowPass in postQueue() and changing the texture of the extractPass.