I’m doing some rather complicated rendering experiments and it would be extremely handy to have some way to periodically view the z-buffer / save it to a .png file (largely for debugging purposes).
What is the correct way to do this? I see there’s a getRenderer().readFrameBuffer() method, but it seems to only allow reading the color buffer.
Hi
It’s not possible to directly read the depth buffer back to CPU memory. What I typically do is the following:
1. Create an offscreen framebuffer with depth and color attached as textures. For good rendering quality, use multisampled textures. Let’s call this fbA.
2. Create a second offscreen framebuffer with textures, but without multisampling. That’s fbB.
3. Render your scene into fbA.
4. Copy fbA into fbB. The hardware takes care of resolving the multisampled color and depth textures into the single-sample textures of fbB.
5. Render the depth texture of fbB as a fullscreen quad. (At least on my PC, I can’t use multisampled depth textures as shader inputs, hence the copy operation in step 4.) In that shader, perform whatever depth transformation your application demands: projective-to-linear conversion, rescaling, color coding with a transfer function, …
6. Optional: read the color output of step 5 back to the CPU if you want to save it. In that case, you probably want to render step 5 into a third offscreen framebuffer.
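To make the depth transformation in step 5 concrete, here is the usual projective-to-linear conversion sketched on the CPU in plain Java (in practice you would do this in the fragment shader). This assumes the standard OpenGL convention of a depth buffer storing values in [0, 1]; the class and method names are mine, not from the engine, and `near`/`far` are your camera’s frustum planes.

```java
// Sketch of the depth transformation from step 5, done on the CPU for
// illustration. Assumes standard OpenGL depth conventions; all names here
// are illustrative, not engine API.
public class DepthUtil {

    /** Convert a [0,1] depth-buffer value to linear eye-space distance. */
    public static float linearizeDepth(float d, float near, float far) {
        float zNdc = d * 2.0f - 1.0f; // [0,1] -> NDC [-1,1]
        return (2.0f * near * far) / (far + near - zNdc * (far - near));
    }

    /** Map a linear depth back to [0,1] for grayscale display
     *  (the simplest possible transfer function). */
    public static float toGray(float linearDepth, float near, float far) {
        return (linearDepth - near) / (far - near);
    }
}
```

Note that depth-buffer values are heavily biased toward the near plane; that is exactly why a scale adaption like `toGray` after linearization makes the image far more readable than displaying the raw buffer values.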
I hope this gives you some ideas on how to achieve your goal.
You may also pick up some useful knowledge by poking around in the filter post processors. They deal with multisampled depth textures in shaders… without the additional copy, I think.
It seems to me like it would be pretty straightforward to write a FilterPostProcessor that draws the depth into another accessible texture. Though I don’t know the ins and outs of why that depth texture can’t be accessed directly and copied to the CPU.
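For the original goal of saving the result to a .png: once the depth values are back on the CPU (e.g. after step 6 above, with the values already normalized to [0, 1]), writing the file needs nothing beyond the standard library. A minimal sketch, where all names are illustrative and `depth` is assumed to be a row-major width×height array:

```java
import javax.imageio.ImageIO;
import java.awt.image.BufferedImage;
import java.io.File;
import java.io.IOException;

// Sketch: write normalized [0,1] depth values out as a grayscale PNG using
// only the JDK. Names are illustrative; how you obtain the float[] depends
// on your readback path.
public class DepthSnapshot {

    /** Build a grayscale image from row-major [0,1] depth values. */
    public static BufferedImage toImage(float[] depth, int width, int height) {
        BufferedImage img =
                new BufferedImage(width, height, BufferedImage.TYPE_BYTE_GRAY);
        for (int y = 0; y < height; y++) {
            for (int x = 0; x < width; x++) {
                // Clamp and quantize to an 8-bit gray value.
                int v = Math.min(255, Math.max(0,
                        (int) (depth[y * width + x] * 255f)));
                img.getRaster().setSample(x, y, 0, v);
            }
        }
        return img;
    }

    /** Save the depth visualization as a PNG file. */
    public static void save(float[] depth, int width, int height, File out)
            throws IOException {
        ImageIO.write(toImage(depth, width, height), "png", out);
    }
}
```

This only covers the final save; the trickier part remains getting the depth data to the CPU via the resolve-and-fullscreen-quad route described above.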