FilterPostProcessor and HDRRenderer do not work together

If I do this (the HDRRenderer is added after the FPP), the screen stays stuck on the first frame; in other words, it never changes. I noticed this while writing my own SceneProcessors based on these. I believe it has something to do with framebuffers.

Yep, that’s a known issue.
There has been an item on my todo list, “make the HDRRenderer a Filter”, for a long time, but there was always something more important/interesting to do :stuck_out_tongue:
I quickly looked into it once, but it would require some adaptations of the FilterPostProcessor, because the HDRRenderer needs the back buffer as an RGB16F or RGB111110F texture, while the FPP renders into an RGBA8 texture.

I don’t know what implications that could have on the rest of the filters… it looks like an expensive cost for the effect, IMO…
Also, I never managed to get a quick transition between different luminance values, and I found that it ruins the effect.
If you have something working on your own I’d be interested.

@nehon can you explain the cause of the issue?

I don’t know exactly, but it’s something along these lines:
Both processors work on the same principle: they hijack the forward render pass and keep it in a framebuffer, then use the rendered texture on a fullscreen quad (rendered in front of everything in ortho mode) with a particular material. So basically, once one of those processors is added to a viewport, you don’t really see the scene; you see a quad with a texture of the scene.
The thing is, both initialize their own framebuffer and call viewPort.setOutputFrameBuffer(myFrameBuffer), so basically they just conflict with each other.
For some reason, one manages to have its framebuffer set on the first frame, then the other hijacks the outputFrameBuffer and never gives it back. The scene is probably still rendered “behind”, but you only see the last quad rendered on screen, which still holds the texture of the first frame.
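To make the conflict concrete, here is a tiny toy model of that behavior. These are NOT the real jME classes (FrameBuffer, ViewPort, and HijackingProcessor are stand-ins I made up for illustration); it only shows the "last one to call setOutputFrameBuffer wins" mechanics described above:

```java
// Toy model of the conflict, not the actual jME classes: each processor
// redirects the viewport's output to its own framebuffer when it initializes,
// so whichever initializes last "wins" and the other never sees a new frame.
public class FrameBufferConflict {

    static class FrameBuffer {
        final String owner;
        FrameBuffer(String owner) { this.owner = owner; }
    }

    static class ViewPort {
        private FrameBuffer outputFrameBuffer;
        void setOutputFrameBuffer(FrameBuffer fb) { outputFrameBuffer = fb; }
        FrameBuffer getOutputFrameBuffer() { return outputFrameBuffer; }
    }

    static class HijackingProcessor {
        final FrameBuffer myFrameBuffer;
        HijackingProcessor(String name) { myFrameBuffer = new FrameBuffer(name); }
        // Both FilterPostProcessor and HDRRenderer do the equivalent of this
        // on initialization, and neither ever gives the buffer back:
        void initialize(ViewPort vp) { vp.setOutputFrameBuffer(myFrameBuffer); }
    }

    static String whoGetsTheScene() {
        ViewPort vp = new ViewPort();
        new HijackingProcessor("FPP").initialize(vp);
        new HijackingProcessor("HDR").initialize(vp); // added last, hijacks the output
        return vp.getOutputFrameBuffer().owner;
    }

    public static void main(String[] args) {
        // The scene now renders only into the last processor's buffer; the
        // other processor's quad keeps showing the texture from frame 1.
        System.out.println(whoGetsTheScene()); // prints "HDR"
    }
}
```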

One way to avoid this would be to somehow let them share the same framebuffer… but then the format issue I mentioned earlier comes up again.
I guess it would be easier to make it a Filter.

Right now the difference between Filters and Processors is not obvious, and we are thinking about redesigning those systems to unify how they work.
For now, as a rule of thumb: whenever you are doing 2D post-processing of the output image, you’re better off with a Filter.
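The reason Filters compose where Processors conflict can be sketched with another toy model (again, NOT the real jME Filter API; PostProcessorChain and the string "textures" are invented for illustration): one owner holds the output framebuffer and feeds each filter the previous filter's result, so a hypothetical tone-mapping filter would simply slot into the chain:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.UnaryOperator;

// Toy model, not the real jME Filter API: one processor owns the single
// output framebuffer and runs each filter over the previous filter's
// result, so effects compose instead of fighting over the viewport output.
public class FilterChainDemo {

    static class PostProcessorChain {
        private final List<UnaryOperator<String>> filters = new ArrayList<>();
        void addFilter(UnaryOperator<String> f) { filters.add(f); }
        String render(String scene) {
            String image = scene; // stands in for the rendered scene texture
            for (UnaryOperator<String> f : filters) {
                image = f.apply(image); // each filter is a 2D pass over the last result
            }
            return image;
        }
    }

    static String demoRender() {
        PostProcessorChain fpp = new PostProcessorChain();
        fpp.addFilter(img -> img + "+bloom");   // stand-in for an existing filter
        fpp.addFilter(img -> img + "+toneMap"); // a hypothetical HDR tone-mapping filter
        return fpp.render("scene");
    }

    public static void main(String[] args) {
        System.out.println(demoRender()); // prints "scene+bloom+toneMap"
    }
}
```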