Rendering from Textures

I want to do something very specific with framebuffers, but I’m not sure it is possible. My primary goal with this is to have an independent rendering pass that doesn’t care where its input textures come from, and doesn’t care where its output textures go.

As an example, for a render pass, I have two sets of textures: color+depth input and color+depth output. The input textures have already been rendered to by another pass, and I want to use the results from that pass in this pass’s render, and write the results to the output textures. Ideally, I’d set the input textures as the “base” of this pass’s framebuffer, and set the output textures as the color and depth targets, except I don’t know how to do the former.
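For reference, the output side is easy enough to express. Here's a sketch of what I mean, assuming jME 3.3+'s `FrameBufferTarget` API (older versions would use `setColorTexture`/`setDepthTexture`); it's the input "base" side that has no equivalent:

```java
import com.jme3.texture.FrameBuffer;
import com.jme3.texture.FrameBuffer.FrameBufferTarget;
import com.jme3.texture.Image;
import com.jme3.texture.Texture2D;

public class PassTargets {

    // Wrap the pass's output textures in a FrameBuffer so the pass can
    // render into them. (jME 3.3+ target API; formats are just examples.)
    public static FrameBuffer createOutputFrameBuffer(int width, int height) {
        Texture2D outColor = new Texture2D(width, height, Image.Format.RGBA8);
        Texture2D outDepth = new Texture2D(width, height, Image.Format.Depth);

        FrameBuffer fb = new FrameBuffer(width, height, 1);
        fb.addColorTarget(FrameBufferTarget.newTarget(outColor));
        fb.setDepthTarget(FrameBufferTarget.newTarget(outDepth));
        return fb;
    }
}
```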

So, my specific question is: how do I set the input textures as the “base” for rendering, without altering the input textures at all? (I’ll probably do this operation a lot, so anything slow is a deal-breaker.)

Note: I don’t want to do framebuffer blitting if possible, because that requires (to my knowledge) passing the actual framebuffers around, when I really just want to pass the textures around.

Per the GL specification, it is not possible to read from and write to the same texture.
In practice it can work, but the result is undefined, so you will get artifacts since the texture access is not synchronized.

If the pass in question is a fullscreen pass you have no problem; if it is some sort of depth peeling for transparency, then you need to copy/blit the texture.

Thanks for the reply.

Would you post some code for texture blitting?

RenderManager.copyFramebuffer it is. If you do not need to read from the depth buffer, you can share it between the framebuffers and set copyDepth to false.
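A minimal sketch of the copy, assuming the jME versions I've used, where the copy call lives on `Renderer` (reached through the `RenderManager`); the exact overload varies between jME versions, so adjust as needed:

```java
import com.jme3.renderer.RenderManager;
import com.jme3.renderer.Renderer;
import com.jme3.texture.FrameBuffer;

public class FrameBufferCopy {

    // Copy the color attachment of src into dst. With copyDepth = false,
    // the depth copy is skipped, which is what you want when both
    // framebuffers share the same depth texture.
    public static void copyColorOnly(RenderManager renderManager,
                                     FrameBuffer src, FrameBuffer dst) {
        Renderer renderer = renderManager.getRenderer();
        renderer.copyFrameBuffer(src, dst, false);
    }
}
```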

I don’t think there is a cheaper solution available, since in the end you need a copy of the texture.

If it is for transparency, I can provide you with the required changes to the core to support an order-independent technique. The reason I never made a pull request is that the RenderState class, with all the serialization and comparison stuff, gets super messy, and I haven’t checked it for any regressions beyond my own project.

Copying framebuffers would work, but I want to avoid that if possible. It would require the render pass to have access to the other pass’s framebuffer, which would make things more complicated and less flexible.

I don’t necessarily need to copy the input textures. In fact, it would be ideal if the framebuffer could read directly from the original input textures for constructing the output textures.

I’m not working on transparency, although your technique sounds intriguing. I don’t want to take on even more work right now, but I’d be interested in seeing how easily it will integrate with the framegraph api further down the road.

I don’t think jME exposes glCopyTexSubImage2D through any public API. At least I cannot remember seeing it. The easy workaround would be to render a fullscreen quad first.
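The fullscreen-quad workaround could look roughly like this. It's a sketch, not tested against your setup: it uses jME's `Picture` class (a prebuilt fullscreen quad) to draw the input texture into the output framebuffer, and assumes the viewport/camera are already configured for a 2D pass:

```java
import com.jme3.asset.AssetManager;
import com.jme3.renderer.RenderManager;
import com.jme3.texture.FrameBuffer;
import com.jme3.texture.Texture2D;
import com.jme3.ui.Picture;

public class QuadCopy {

    // Draw inputTex into outputFb by rendering a fullscreen quad.
    // This "seeds" the output framebuffer with the input pass's result
    // without touching the input pass's FrameBuffer object.
    public static void copyViaQuad(AssetManager assetManager,
                                   RenderManager renderManager,
                                   Texture2D inputTex, FrameBuffer outputFb,
                                   int width, int height) {
        Picture quad = new Picture("copy quad");
        quad.setTexture(assetManager, inputTex, false);
        quad.setWidth(width);
        quad.setHeight(height);
        quad.updateGeometricState();

        renderManager.getRenderer().setFrameBuffer(outputFb);
        renderManager.renderGeometry(quad);
    }
}
```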

I am not sure there is a nice solution to this problem in GL 4.6. As a framegraph API designer, I would throw an exception; then at least you have isolated a potentially costly operation. A fallback would be to internally copyFrameBuffer without exposing the API, if you see the need. I wonder how the big engines handle such cases…

Since my implementation does nothing special but requires separate blend modes, I do not see any issues that should come up down the road.


Ok, I’ll go with rendering a fullscreen quad for each input texture that needs to be written to the framebuffer. I’m not incredibly happy about it, but at least it fits the API criteria.