[SOLVED] Prevent rendering scene multiple times using MRT

Hi,

I have a rather complex scene (a block-like world) and the problem is that the whole scene is rendered multiple times when I add a bloom (glow maps) and a water (own implementation) post filter. This results in a very low frame rate, ~55 fps; without these post filters I get ~200 fps. As far as I understand, the default renderer renders directly to the screen without an intermediate framebuffer.

Is there any way to render the block world into an MRT framebuffer and then display it on the screen while rendering all other geometries (models, particle effects, …) as usual? I don’t want to render the world as a post filter, because then I get several new issues with the rendering order of other post filters and with the transparent/translucent bucket.

The world shader should have 3 outputs: default, glow map, and water mask, and I hope it is possible to render the world only once (vertex and fragment). At the moment I have defined 3 techniques to skip the lighting calculations for the glow map and water mask passes, but they all have to use the same heavy vertex shader, because it contains vertex animations. Skipping the lighting calculations gave me a little boost, but that’s not enough. If they used different vertex shaders, the animated vertices wouldn’t occlude the glow map or water mask.

best regards and hope you can help me

Alrik

Sure this can be done, but you would need to write your own shaders that write to your multiple render targets and your own glow filter that uses whatever texture you’re writing the glow render to in your shaders.

Yes, I know that I have to write my own shaders, but how can I render the world not as a post filter? Another important question is whether rendering with MRT will be significantly faster than the current approach.

I’m not sure what you mean by “render the world not as post filter”. What you would do is use custom shaders for every geometry in your scene, and those shaders would write to different frame buffers using multi-target rendering. That means that in the shaders, instead of setting gl_FragColor, you set gl_FragData[0], gl_FragData[1], and gl_FragData[2] to whatever colors you want. When you use gl_FragColor you’re setting the color of a particular pixel in the current frame buffer; when you use gl_FragData you’re doing the same thing, except on multiple frame buffers in a single pass. Once the frame is rendered you do whatever you want with those buffers.
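To make that concrete, a minimal fragment shader writing to three targets could look like this (GLSL 1.20 style sketch; the uniform and varying names are just placeholders, not taken from an actual jME material):

```glsl
uniform sampler2D m_DiffuseMap; // placeholder scene texture
varying vec2 texCoord;

void main() {
    vec4 color = texture2D(m_DiffuseMap, texCoord);
    gl_FragData[0] = color;     // attachment 0: the regular scene color
    gl_FragData[1] = vec4(0.0); // attachment 1: glow map (black = no glow)
    gl_FragData[2] = vec4(0.0); // attachment 2: water mask
}
```

A geometry that should glow would write its glow color to gl_FragData[1] instead of black, and water surfaces would write to gl_FragData[2].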

For the buffer you’re writing the glow colors to, you hook it into a post processor that blurs the buffer, probably more than once, then add the result to the buffer holding the rendered scene (in this case probably the one you wrote to at gl_FragData[0]).

As for performance, I’m sure there would be a benefit to doing it this way; how much, I wouldn’t know.

It is possible, yes, but as Tryder said you need to make your own lighting shader (basically just add the outputs you want to it), and you have to modify the FilterPostProcessor to attach several outputs to the main framebuffer (note that the FPP already renders the scene to a frame buffer). Then you have to “dispatch” those outputs to the corresponding filters.

If you have a scene with a lot of objects, this will drastically improve your performance.

I made a local test doing this for SSAO, by rendering the normal pass as an MRT output instead of an additional geometry pass, and it worked pretty well.

Maybe the FPP should allow users to add some additional back buffer information… That would give us some kind of hybrid forward/deferred rendering: hybrid in the sense that the geometry pass generates several back buffers like in deferred, but lighting is still computed like in forward.

EDIT: note that MRT is not part of OpenGL 2 (I don’t remember in what version it was added, though), so you have to check the caps to be sure it’s available on the hardware.

Thanks nehon, I’ll give it a try and write again if I have problems :slight_smile: … I know about the OpenGL version issue. This is what I found out:

OpenGL 3.3 since NVIDIA GeForce 8 (2007): max 256 array textures, max 2048 3D textures
OpenGL 3.3 since ATI HD 2000 (2007): max 8192 array textures, max 2048 3D textures

so OpenGL 3 is not a problem, because the hardware that supports it is 10 years old.

Yes! But you know, it doesn’t hurt to check, and fall back to the old solution, or… have an OpenGL 3.3 minimum requirement for your game.

OK, thanks. What about all the other shaders (like the ones for models and particles)? Is it necessary to modify them to use MRT? Or does gl_FragColor act the same as gl_FragData[0]?

Well, if you want to render something into your additional buffers for them, yes, you need to. Otherwise you’ll have black areas where they are in the buffer (which can be alright for glow, but maybe not for your water mask).
And yes, gl_FragColor is the same as gl_FragData[0], but if you are going for OpenGL 3.3 you can use GLSL 1.50 and just define several outputs like this:
out vec4 outColor;
out vec4 outGlow;
out vec4 outWaterMask;

and then assign values to them in the shader. They’ll be assigned to the frame buffer outputs in the order they’ve been declared.
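For reference, a complete fragment shader in that style might look like this (sketch only; m_DiffuseMap and texCoord are assumed names, not from an actual material):

```glsl
#version 150
uniform sampler2D m_DiffuseMap; // assumed scene texture
in vec2 texCoord;

// Bound to the color attachments in declaration order:
out vec4 outColor;     // attachment 0: lit scene color
out vec4 outGlow;      // attachment 1: glow map
out vec4 outWaterMask; // attachment 2: water mask

void main() {
    outColor     = texture(m_DiffuseMap, texCoord);
    outGlow      = vec4(0.0); // non-glowing fragment
    outWaterMask = vec4(0.0); // not water
}
```

(From GLSL 3.30 on you can also bind outputs explicitly with layout(location = N) instead of relying on declaration order.)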

I have a problem with the glow map and the water mask. Both outputs contain the sky, maybe due to the default background color (black, no alpha). I have another problem with the translucent bucket: it is also visible, because it is drawn after the opaque bucket (with my 3 outputs)? How can I prevent the sky from being drawn into the glow output?

This is a screenshot of the glow output:

For the sky it’s strange… it seems the clear color is a dark gray/blue here.

For the translucent bucket, add a TranslucentBucketFilter at the end of your filters stack.

The sky is blue because I use Rayleigh scattering. I have already added a TranslucentBucketFilter; otherwise translucent geometry is visible through opaque geometry.
So far I have modified the FPP by adding the two new outputs to the frame buffer at the end of the reshape() method, but before the for loop where the filters are initialized.

...
if (caps.contains(Caps.FrameBufferMRT)) {
    renderFrameBuffer.setMultiTarget(true);

    glowMap = new Texture2D(w, h, Format.RGBA8);
    renderFrameBuffer.addColorTexture(glowMap);

    waterMaskMap = new Texture2D(w, h, Format.RGBA8);
    renderFrameBuffer.addColorTexture(waterMaskMap);
}
...

After that I modified the BloomFilter by disabling the preGlowPass in postQueue() and changed the texture used by the extractPass:

// extractMat.setTexture("GlowMap", preGlowPass.getRenderedTexture());
extractMat.setTexture("GlowMap", processor.getGlowMap());

Hmm… then it’s very strange that the translucent bucket is drawn in your glow buffer… What do the other buffers look like?

  1. output (glow)
  2. output (water mask with encoded depth as RGB values)

edit: scaled depth buffer

How do you output this? Do you just replace what you output in the final shader? Because that would explain why the translucent bucket is rendered. But that doesn’t mean it’s in the buffer.

Yes, I used a simple post shader and passed the texture to it like this:

material.setTexture("SceneTexture", processor.getGlowMap());

The fragment shader looks like this:

void main(){
  fragColor = getColor(m_SceneTexture, texCoord);
}

Is this the wrong way? What I expect for the glow map is a black image with only some highlights.

Yes, but this is part of your filter stack, right?

Yes.

Then put the TranslucentBucketFilter before it and test.


:open_mouth: It works! I did not think it would be so easy :smile: OK, this solves my problems for now. Thank you!

glow map

edit: I realized that the world is still visible in the glow map, but it looks almost black :open_mouth:
I think this is another bug and is not related to the problems above.