Save last frame for use in the current frame

I want to save the last rendered frame and use it while rendering the next frame (using the last frame as a texture in the new frame). I’m sure it’s possible, but I can’t see how to do it.



I’ve tried to look at the source for the filters and scene processors, but there are so many classes involved (FrameBuffer, Image, Texture2D, Texture, Picture) that I’ve lost myself somewhere in the jungle.



Are there any docs that explain this kind of thing?

The TestRenderToTexture/Memory tests do exactly that.

Not quite, or at least not the way I’m thinking of it. That test case renders another scene to a texture and then uses that when rendering the main scene.



I want to copy the currently rendered frame and then use it as a texture when rendering the next frame. I want to blend the last frame with the current frame in a filter to simulate motion blur.

Just apply the processor to the main view, like the ScreenShotAppState and VideoRecorderAppState do. (Right-click those class names in the editor and select “Navigate->Go to Source” to see their source.)
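Roughly something like this — a rough, untested sketch of a SceneProcessor attached to the main view, which is the hook those app states use under the hood. FrameGrabProcessor is just a placeholder name, and depending on your jME version the interface may want an extra no-op method or two (e.g. setProfiler):

```java
import com.jme3.post.SceneProcessor;
import com.jme3.renderer.RenderManager;
import com.jme3.renderer.ViewPort;
import com.jme3.renderer.queue.RenderQueue;
import com.jme3.texture.FrameBuffer;

public class FrameGrabProcessor implements SceneProcessor {

    private boolean initialized;

    @Override
    public void initialize(RenderManager rm, ViewPort vp) {
        initialized = true;
    }

    @Override
    public void reshape(ViewPort vp, int w, int h) {
    }

    @Override
    public boolean isInitialized() {
        return initialized;
    }

    @Override
    public void preFrame(float tpf) {
    }

    @Override
    public void postQueue(RenderQueue rq) {
    }

    @Override
    public void postFrame(FrameBuffer out) {
        // 'out' is the framebuffer the frame was just rendered into;
        // this is where you could read it back or copy it for the next frame
    }

    @Override
    public void cleanup() {
        initialized = false;
    }
}
```

Then in simpleInitApp() you attach it to the main view with viewPort.addProcessor(new FrameGrabProcessor()).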

@kwando said:
Not quite, or at least not the way I'm thinking of it. That test case renders another scene to a texture and then uses that when rendering the main scene.

I want to copy the currently rendered frame and then use it as a texture when rendering the next frame. I want to blend the last frame with the current frame in a filter to simulate motion blur.


Do you want to keep "pure" frames or is it ok to have the motion blur from the previous frames?

@normen I shall dig into those classes later, thanks for pointing me in the right direction.



@pspeed I’m not sure if I want the pure frame or not; at least for now I would like the previous blurred frame (I think I want the ghosting effect :slight_smile: )

In my experience, it can be too blurry… but either way the high level approach is the same.



Use code similar to what’s in the screen shot app state to pull out the image, but turn it into a texture. Give this texture to a post-processing filter. The filter will also have access to the scene rendered up to that point as a texture… then you can mix the two.
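An untested sketch of that idea, loosely following the readFrameBuffer() approach the screenshot code uses. The class and method names here are just placeholders, and depending on the renderer the raw bytes may come back BGRA-ordered rather than RGBA, so you might need a different image format or a conversion step:

```java
import java.nio.ByteBuffer;

import com.jme3.renderer.Renderer;
import com.jme3.texture.FrameBuffer;
import com.jme3.texture.Image;
import com.jme3.texture.Image.Format;
import com.jme3.texture.Texture2D;
import com.jme3.util.BufferUtils;

public class FrameCapture {

    private final ByteBuffer cpuBuffer;
    private final Texture2D prevFrameTex;

    public FrameCapture(int width, int height) {
        cpuBuffer = BufferUtils.createByteBuffer(width * height * 4);
        // wrap the CPU-side buffer in an Image so it can be uploaded as a texture
        // (note: the renderer may return BGRA-ordered bytes, see above)
        prevFrameTex = new Texture2D(new Image(Format.RGBA8, width, height, cpuBuffer));
    }

    // call this after the frame has been rendered, e.g. from SceneProcessor.postFrame()
    public void grab(Renderer renderer, FrameBuffer renderedFrame) {
        cpuBuffer.clear();
        // GPU -> RAM copy; this is the expensive round trip
        renderer.readFrameBuffer(renderedFrame, cpuBuffer);
        // flag the image so the texture data gets re-uploaded to the GPU
        prevFrameTex.getImage().setUpdateNeeded();
    }

    public Texture2D getPrevFrameTexture() {
        return prevFrameTex;
    }
}
```

You would then hand the texture to your filter’s material with something like material.setTexture("PrevFrame", capture.getPrevFrameTexture()), where "PrevFrame" is whatever parameter your material definition declares.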



I tried to make the depth of field filter a decent example of a custom post-processing filter but there are probably others. The only tricky part will be getting your second texture in but that shouldn’t be too bad.
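For reference, a bare-bones, untested skeleton of such a filter. The material definition path ("MatDefs/Ghosting.j3md") and the "PrevFrame" parameter are made up for the example; DepthOfFieldFilter is the real thing to copy from:

```java
import com.jme3.asset.AssetManager;
import com.jme3.material.Material;
import com.jme3.post.Filter;
import com.jme3.renderer.RenderManager;
import com.jme3.renderer.ViewPort;
import com.jme3.texture.Texture2D;

public class GhostingFilter extends Filter {

    private Material material;

    public GhostingFilter() {
        super("GhostingFilter");
    }

    @Override
    protected void initFilter(AssetManager manager, RenderManager renderManager,
                              ViewPort vp, int w, int h) {
        // hypothetical material definition; its shader would sample both the scene
        // texture (bound by the FilterPostProcessor) and the PrevFrame texture
        material = new Material(manager, "MatDefs/Ghosting.j3md");
    }

    @Override
    protected Material getMaterial() {
        return material;
    }

    // the "second texture": the previously rendered frame, set from outside each frame
    public void setPreviousFrame(Texture2D prevFrame) {
        if (material != null) {
            material.setTexture("PrevFrame", prevFrame);
        }
    }
}
```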



I’m guessing this will be kind of slow, but there isn’t much choice, I guess. All of those round trips for a screen-sized texture can get expensive. I don’t know if there is some way to shunt off the texture in the post-processor for later use… because that would be one way to get a pure frame (without the blur) and keep it for the next pass without having to copy it from the GPU to RAM and back.

I’ve already created a post-processing filter like the one they propose here:



http://http.developer.nvidia.com/GPUGems3/gpugems3_ch27.html



It works, but it has some limitations, and I just wanted to test whether this approach would give a better result or not.



How does the FilterPostProcessor get the scene texture in the first place? Is it read from the GPU to RAM and then moved back, or does it always stay on the GPU?

In fact, you can set the outputFrameBuffer of a viewport. By default it’s null, which renders to the screen, but if you create a frame buffer and pass it to the viewport, you can render the scene to a texture.
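Something like this (untested sketch) for the render-to-texture setup:

```java
import com.jme3.renderer.ViewPort;
import com.jme3.texture.FrameBuffer;
import com.jme3.texture.Image.Format;
import com.jme3.texture.Texture2D;

public class RenderToTextureSetup {

    public static Texture2D renderViewToTexture(ViewPort vp, int width, int height) {
        Texture2D sceneTex = new Texture2D(width, height, Format.RGBA8);

        FrameBuffer fb = new FrameBuffer(width, height, 1);   // 1 = no multisampling
        fb.setDepthBuffer(Format.Depth);                      // depth buffer so the scene sorts correctly
        fb.setColorTexture(sceneTex);                         // color output goes into the texture

        // null (the default) would render to the screen; this redirects it to fb
        vp.setOutputFrameBuffer(fb);
        return sceneTex;
    }
}
```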



The FilterPostProcessor renders the scene in a FrameBuffer using this feature.

The first filter of the filter stack is fed with the resulting texture.

Each filter creates a full screen quad rendered in ortho mode and the rendered scene is applied as a texture on this quad. Of course the filter “effect” is added in the process.



It’s an iterative process, the first filter gets the rendered scene, the second one gets the result of the first filter render (also rendered to a frame buffer) and so on.

Only the last filter is rendered to the screen.
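For illustration, with a couple of stock filters (only the chain order matters here):

```java
import com.jme3.app.SimpleApplication;
import com.jme3.post.FilterPostProcessor;
import com.jme3.post.filters.BloomFilter;
import com.jme3.post.filters.FXAAFilter;

public class FilterChainExample extends SimpleApplication {

    @Override
    public void simpleInitApp() {
        FilterPostProcessor fpp = new FilterPostProcessor(assetManager);
        fpp.addFilter(new BloomFilter());   // fed with the rendered scene
        fpp.addFilter(new FXAAFilter());    // fed with the bloom filter's output
        viewPort.addProcessor(fpp);         // only the last filter's result reaches the screen
    }

    public static void main(String[] args) {
        new FilterChainExample().start();
    }
}
```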



What you would have to do is write a Filter that stores the previous frame’s texture.
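A sketch of that idea (untested, with a couple of loud assumptions): the filter keeps its own FrameBuffer/Texture2D and copies the scene buffer into it each frame, so the previous frame never leaves the GPU. The "MatDefs/Ghosting.j3md" material and its "PrevFrame" parameter are placeholders, and the exact copyFrameBuffer() signature differs between jME versions, so treat this as pseudocode to adapt:

```java
import com.jme3.asset.AssetManager;
import com.jme3.material.Material;
import com.jme3.post.Filter;
import com.jme3.renderer.RenderManager;
import com.jme3.renderer.ViewPort;
import com.jme3.texture.FrameBuffer;
import com.jme3.texture.Image.Format;
import com.jme3.texture.Texture2D;

public class PreviousFrameFilter extends Filter {

    private Material material;
    private FrameBuffer prevFrameBuffer;
    private Texture2D prevFrameTex;

    public PreviousFrameFilter() {
        super("PreviousFrameFilter");
    }

    @Override
    protected void initFilter(AssetManager manager, RenderManager renderManager,
                              ViewPort vp, int w, int h) {
        // our own buffer that holds the previous frame, entirely on the GPU
        prevFrameTex = new Texture2D(w, h, Format.RGBA8);
        prevFrameBuffer = new FrameBuffer(w, h, 1);
        prevFrameBuffer.setColorTexture(prevFrameTex);

        material = new Material(manager, "MatDefs/Ghosting.j3md"); // hypothetical material
        material.setTexture("PrevFrame", prevFrameTex);            // hypothetical parameter
    }

    @Override
    protected Material getMaterial() {
        return material;
    }

    @Override
    protected void postFrame(RenderManager renderManager, ViewPort viewPort,
                             FrameBuffer prevFilterBuffer, FrameBuffer sceneBuffer) {
        // copy the scene buffer into our own framebuffer so the shader can
        // sample it as "the previous frame" on the next pass
        renderManager.getRenderer().copyFrameBuffer(sceneBuffer, prevFrameBuffer, false);
    }
}
```

Since sceneBuffer is the scene as rendered before the filters, copying it this way should also give you the “pure” frame pspeed asked about, rather than feeding the blur back into itself.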