Rendering depth to texture

I have a simple render-to-texture setup with two textures:

depthTex = new Texture2D((int)width, (int)height, Format.Depth24);
depthTex.setMinFilter(Texture.MinFilter.Trilinear);
depthTex.setMagFilter(Texture.MagFilter.Bilinear);
    
colTex = new Texture2D((int)width, (int)height, Format.RGBA8);
colTex.setMinFilter(Texture.MinFilter.Trilinear);
colTex.setMagFilter(Texture.MagFilter.Bilinear);

offBuffer.setColorTexture(colTex);
offBuffer.setDepthTexture(depthTex);

(Note: I tried Format.Depth, Depth24, and Depth16; no change.)

I used the Picture class, attached to the guiNode:

        offPic2 = new Picture("depth");
        offPic2.setTexture(assetManager, depthTex, true);
        offPic2.setWidth(width);
        offPic2.setHeight(height);
        offPic2.updateGeometricState();

The same is done for the color texture.

The rest of the code (offCamera, offViewPort, and so on) is taken from the forum. It all works for the color texture, but I'm not sure whether this is what the depth should look like.
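For reference, a minimal sketch of the offscreen camera/viewport setup I'm using (variable names and frustum values are illustrative, not the exact code):

```java
// Offscreen camera matching the texture size
Camera offCamera = new Camera(320, 240);
offCamera.setFrustumPerspective(45f, 320f / 240f, 1f, 1000f);
offCamera.setLocation(new Vector3f(0f, 0f, -5f));
offCamera.lookAt(Vector3f.ZERO, Vector3f.UNIT_Y);

// Pre-view rendered before the main viewport each frame
ViewPort offView = renderManager.createPreView("Offscreen View", offCamera);
offView.setClearFlags(true, true, true);

// Framebuffer with both textures attached as render targets
FrameBuffer offBuffer = new FrameBuffer(320, 240, 1);
offBuffer.setColorTexture(colTex);
offBuffer.setDepthTexture(depthTex);
offView.setOutputFrameBuffer(offBuffer);
offView.attachScene(offScene);
```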

(Both textures are 320x240 attached to the guiNode positioned next to each other on a 640 width window. color on left, depth on right)

The scene has one point light attached so the color-texture version can actually be seen.

If I go super close to a wall, I at least get something in the depth texture on the right.

I just wanted to ask: is this normal? My intention is to render the depth texture and then save it as an image for a specific purpose (so not how it's used in the soft-particles shader and others; I don't want to capture it in real time). I expected it to have a bit more… contrast? Looking at the first picture, is that really what will be used for depth, and will it work correctly? Perhaps I'm just not noticing subtle differences that my eyes can't pick up.

(I know I haven’t posted all the code, but it’s just basic render-to-texture stuff; I can post it if necessary.)

EDIT:

I tried scaling my model down to 0.1, which improved the situation slightly.

Still not sure about it, though.

Yes, this is how a z-buffer should look. The z-buffer is not linear, so your eye will only distinguish the nearest things. If you want to show it on screen for debugging, you need to linearize it with a simple function.
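The linearization can be sketched in plain Java. For a standard perspective projection with depth stored in [0, 1], eye-space distance is near*far / (far - depth*(far - near)). The near/far values below are assumptions; use your camera's actual frustum planes:

```java
public class DepthLinearize {

    // Convert a [0,1] hardware depth value back to linear eye-space distance.
    static float linearize(float depth, float near, float far) {
        return (near * far) / (far - depth * (far - near));
    }

    public static void main(String[] args) {
        float near = 1f, far = 1000f; // assumed frustum planes
        // Note how depth 0.5 is only ~2 units from the camera:
        // most of the [0,1] range is spent on the far distance,
        // which is why the raw buffer looks almost uniformly white.
        for (float d : new float[] {0f, 0.5f, 0.99f, 0.999f, 1f}) {
            System.out.printf("depth %.3f -> eye distance %.2f%n",
                    d, linearize(d, near, far));
        }
    }
}
```

Dividing the result by `far` gives a [0,1] value you can display with visible contrast.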


Oh right, thanks for that; it explains it. Nah, I actually want to render it for super-high-detail (>3 million triangle) scenes, save it, then load it to use in conjunction with a pre-rendered background, so no real-time scenery at all.

Additionally, afaik there is no visual difference between Depth24 and Depth16; both store values between 0 and 1, just with different precision. That precision is needed when, for example, you want to reconstruct a 3D position from on-screen coordinates, the z-buffer value, and the appropriate transformation matrix. Higher precision is also used in shadow maps to avoid z-fighting artifacts.
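The precision difference can be illustrated by quantizing a depth value to 16 and 24 bits, the way the hardware stores it (a rough sketch; real depth buffers may use slightly different encodings):

```java
public class DepthPrecision {

    // Snap a [0,1] depth value to the nearest representable level
    // for a buffer with the given number of bits.
    static double quantize(double depth, int bits) {
        double levels = (1L << bits) - 1;
        return Math.round(depth * levels) / levels;
    }

    public static void main(String[] args) {
        double d = 0.1234567;
        System.out.println("16-bit stores: " + quantize(d, 16));
        System.out.println("24-bit stores: " + quantize(d, 24));
        // Smallest distinguishable step in each format:
        System.out.println("16-bit step: " + 1.0 / ((1 << 16) - 1));
        System.out.println("24-bit step: " + 1.0 / ((1 << 24) - 1));
    }
}
```

Both values would render to the same shade of gray, but the 24-bit one is about 256 times finer, which is what matters for position reconstruction and shadow maps.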

The z-buffer is not linear; that’s why it’s all whitish.
https://www.sjbaker.org/steve/omniv/love_your_z_buffer.html