[SOLVED] Raw pixels of rendered frame

I want access to the raw pixels of the current rendered frame. Taking inspiration from TestRenderToTexture, I created an offscreen viewport that renders to a Texture2D. Everything works fine if I create a material, apply it to a quad, and set its texture to the one the offscreen viewport renders into. However, I want to access the raw pixel data contained in the texture, and the Image's getData(…) method doesn't seem to work. Even though the texture renders correctly, its image's data is empty. I suspect the data rendered by the viewport is stored somewhere else, but I can't figure out where. I've dug through FrameBuffer, Renderer, and related sources without reaching a conclusion. So: where does the viewport store the texture's data, and is there a way to access it?
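
For reference, here is roughly my setup, following TestRenderToTexture (a sketch from inside simpleInitApp; the sizes and names are just illustrative):

```java
import com.jme3.renderer.Camera;
import com.jme3.renderer.ViewPort;
import com.jme3.texture.FrameBuffer;
import com.jme3.texture.Image;
import com.jme3.texture.Texture2D;

// Offscreen camera and pre-view, as in TestRenderToTexture
Camera offCamera = new Camera(512, 512);
ViewPort offView = renderManager.createPreView("Offscreen View", offCamera);
offView.setClearFlags(true, true, true);

// FrameBuffer with a color texture attached; the viewport renders into it
FrameBuffer offBuffer = new FrameBuffer(512, 512, 1);
offBuffer.setDepthBuffer(Image.Format.Depth);
Texture2D offTex = new Texture2D(512, 512, Image.Format.RGBA8);
offBuffer.setColorTexture(offTex);
offView.setOutputFrameBuffer(offBuffer);

// Displaying offTex on a quad works, but offTex.getImage().getData(0)
// comes back empty -- where do the rendered pixels actually live?
```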

Have you looked into what ScreenshotAppState does?

Yeah, but doesn't it impact performance too much? I mean, does readFrameBuffer repeat any operation that the offscreen viewport already does?

Also, it reads the data from the FrameBuffer into another buffer. Isn't there a way to access it directly?

You tell me. Just saying: if I wanted to access rendered pixels, ScreenshotAppState is where I'd start looking for clues on how to do it.
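
For what it's worth, the core of it boils down to a single readFrameBuffer call that copies the rendered pixels into a pre-allocated ByteBuffer. A rough sketch, not the actual ScreenshotAppState source; here offBuffer stands in for your offscreen FrameBuffer, and width/height for its dimensions:

```java
import java.nio.ByteBuffer;
import com.jme3.util.BufferUtils;

// Allocate once and reuse; 4 bytes per pixel
ByteBuffer cpuBuf = BufferUtils.createByteBuffer(width * height * 4);

// Copy the rendered pixels from the GPU into cpuBuf (CPU memory).
// Passing null instead of offBuffer reads the main window's back buffer,
// which is what ScreenshotAppState does for the default viewport.
cpuBuf.clear();
renderer.readFrameBuffer(offBuffer, cpuBuf);
```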

It’s rare that adding a feature doesn’t impact performance in any way. The surest way to quantify the impact is to measure it. I haven’t measured the performance impact of accessing rendered pixels.
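
If you want a rough number on your own hardware, you can time the copy directly (a crude sketch, reusing offBuffer and cpuBuf from above). Note that reading back stalls the pipeline, so the measured time includes waiting for the GPU to finish the frame:

```java
long start = System.nanoTime();
cpuBuf.clear();
renderer.readFrameBuffer(offBuffer, cpuBuf);
double ms = (System.nanoTime() - start) / 1_000_000.0;
System.out.println("readFrameBuffer: " + ms + " ms");
```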

OK, I'll run some tests then. Thanks!

The GPU.

No. It copies the data from the GPU to CPU memory, where you can access it.

Per-frame? Yes, probably.
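
If you need the pixels every frame, the usual pattern (see TestRenderToMemory among the jME test examples) is a SceneProcessor attached to the offscreen viewport that reads the framebuffer back in postFrame. A sketch, assuming a 4-bytes-per-pixel color format:

```java
import java.nio.ByteBuffer;

import com.jme3.post.SceneProcessor;
import com.jme3.profile.AppProfiler;
import com.jme3.renderer.RenderManager;
import com.jme3.renderer.Renderer;
import com.jme3.renderer.ViewPort;
import com.jme3.renderer.queue.RenderQueue;
import com.jme3.texture.FrameBuffer;
import com.jme3.util.BufferUtils;

public class ReadbackProcessor implements SceneProcessor {

    private Renderer renderer;
    private ByteBuffer cpuBuf;

    @Override
    public void initialize(RenderManager rm, ViewPort vp) {
        renderer = rm.getRenderer();
        int w = vp.getCamera().getWidth();
        int h = vp.getCamera().getHeight();
        cpuBuf = BufferUtils.createByteBuffer(w * h * 4);
    }

    @Override
    public void postFrame(FrameBuffer out) {
        // GPU -> CPU copy of this frame's pixels; this is the expensive part
        cpuBuf.clear();
        renderer.readFrameBuffer(out, cpuBuf);
        // cpuBuf now holds the raw pixel data for this frame
    }

    @Override
    public void reshape(ViewPort vp, int w, int h) {
        cpuBuf = BufferUtils.createByteBuffer(w * h * 4);
    }

    @Override public boolean isInitialized() { return renderer != null; }
    @Override public void preFrame(float tpf) {}
    @Override public void postQueue(RenderQueue rq) {}
    @Override public void cleanup() {}

    // Present on the SceneProcessor interface in jME 3.1+;
    // drop this method if your version does not have it.
    @Override public void setProfiler(AppProfiler profiler) {}
}
```

Attach it with offView.addProcessor(new ReadbackProcessor()). The readFrameBuffer call in postFrame is the per-frame GPU-to-CPU copy being discussed here; everything else is bookkeeping.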

A few months back at my day job, we attempted to use the JME->JFX integration to put a JFX UI on a JME-powered tool we were building. The JME->JFX bridge renders the JME scene at full resolution, copies the pixels from the GPU to the CPU, blits them into the JFX scenegraph, and then renders that back on the GPU. Running this at 1920x1080 cost too much on the hardware we were targeting (low-power integrated graphics chips), and we had to abandon the scheme because the framerates were too low. That's not to say it can't be done, or that it can't be performant (our particular setup wasn't), but yes, streaming a large pixel buffer every frame can take a nasty performance toll.
