Learning to Love your Z-buffer - different on different hardware?

I was reading nehon’s favourite

https://www.sjbaker.org/steve/omniv/love_your_z_buffer.html

and there is one thing I am hoping someone can answer for me.

This

Many of the early 3D adaptors for the PC have a 16 bit Z buffer, some others have 24 bits - and the very best have 32 bits

and

What people often fail to realise is that in nearly all machines, the Z buffer is non-linear

made me wonder: given the exact same position/view/clipping planes/everything, would a game always end up with an identical depth buffer? Can it differ per graphics card, driver or anything else? Or software? (I’d like to create depth maps in Blender and use them in JME.)

I’m interested because, for my still-image background and 3D overlay project, I’d like to render the depth for the background image out to an image and load that, rather than using an invisible model to provide depth as I do at the moment.

Your best bet here is to make a linear depth buffer and do the depth test in the shader instead of relying on the built-in mechanism.

To answer your questions, the depth buffer will be different with 16 bits or 24 bits, but they will be very similar. The 24 bit one will just be more precise than the 16 bit one for values far from the camera.

Now for the image format to use: PNG supports 16 bits per channel, but JME “downscales” it to 8 bits when loading. You’ll have to use the HDR or DDS formats, which support 16 or 32 bit per channel images. Or… you can somehow pack your 24 bit depth map into an 8 bit RGB PNG… but you’re going to have a hard time doing that from Blender.
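On that last point, the usual way to fit a 24 bit depth value into an 8 bit RGB image is to spread it across the three channels when writing and reassemble it when reading. A minimal GLSL sketch of the idea (the function names are just placeholders, and the pack step would have to be reproduced on the Blender side when baking):

```glsl
// Split a 0..1 depth value across three 8 bit channels and recover it.
// packDepth / unpackDepth are placeholder names, not an existing API.
vec3 packDepth(float depth) {
    vec3 enc = fract(depth * vec3(1.0, 255.0, 65025.0));
    enc.xy -= enc.yz / 255.0;  // subtract what the next channel already stores
    return enc;
}

float unpackDepth(vec3 enc) {
    return dot(enc, vec3(1.0, 1.0 / 255.0, 1.0 / 65025.0));
}
```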


Thanks nehon, that is very helpful.

One final thing, I was planning to set gl_FragDepth in my shader like you have suggested, but when you say

best bet here is to make a linear depth buffer

Does that mean that when you set FragDepth yourself in the shader you use linear values and not non-linear, or am I misinterpreting you?

I guess you don’t have very big scenes (if I recall, it’s mostly interiors without much range). So a linear buffer can be enough if the camera frustum is short.
My idea was more to not use the HW depth at all; meaning, have a shader that takes your baked depth buffer as a regular texture, compares the depth stored in it with the depth of the current fragment, and discards the fragment if it’s behind the stored depth value.
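Something along these lines is presumably what is meant. A rough GLSL sketch for the dynamic objects, assuming the baked depth covers the screen pixel-for-pixel and is stored as a linear 0..1 value over the near/far range (all parameter names here are made up):

```glsl
#version 110

// Sketch only: m_BakedDepth, m_Resolution etc. are made-up parameter
// names, and the baked depth is assumed to be linear, 0 at the near
// plane and 1 at the far plane.
uniform sampler2D m_BakedDepth;  // depth baked offline, used as a regular texture
uniform vec2 m_Resolution;       // viewport size in pixels
uniform float m_FrustumNear;
uniform float m_FrustumFar;

varying float viewDist;          // fragment distance from the camera, from the vertex shader

void main() {
    // linear depth of this fragment, encoded the same way as the baked map
    float fragDepth = (viewDist - m_FrustumNear) / (m_FrustumFar - m_FrustumNear);

    // depth of the baked background at the same screen position
    vec2 uv = gl_FragCoord.xy / m_Resolution;
    float bakedDepth = texture2D(m_BakedDepth, uv).r;

    // the baked scene is in front of this fragment, so hide the fragment
    if (fragDepth > bakedDepth) {
        discard;
    }

    gl_FragColor = vec4(1.0); // placeholder: normal shading of the object goes here
}
```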

Where it says Value in Z Buffer = 6908956

I thought depth values were all floats between 0 and 1, so is this just a non-normalized value?

probably the 24 bit int representation of 0.17… idk tbh

Yes, but all floats are written as a set of 0s and 1s. This means you can read a float value as a normal number; it is up to you (or the program) to interpret what lies in particular memory cells.

Btw, reading a float as a long by casting the pointers was used in Quake 3 for fast calculation of the inverse square root. There is nice code with funny comments on Wikipedia: Fast inverse square root - Wikipedia
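As an aside, GLSL 3.30 and later expose the same bit reinterpretation through floatBitsToInt / intBitsToFloat, so the trick transcribes roughly like this (illustration only; on a GPU the built-in inversesqrt() is what you would actually use):

```glsl
// The Quake III trick as a GLSL function (requires #version 330).
float fastInvSqrt(float x) {
    int i = floatBitsToInt(x);           // read the float's bit pattern as an integer
    i = 0x5f3759df - (i >> 1);           // the famous magic constant
    float y = intBitsToFloat(i);         // reinterpret the tweaked bits as a float
    return y * (1.5 - 0.5 * x * y * y);  // one Newton-Raphson refinement step
}
```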

I’m gonna mess about with it, but the page has the bit about non-linearity and then this calculator.

With Z distance == zNear result: 0
With Z distance == zFar result: 16777215, so almost 1<<24 (which I think is actually 16777216), with 24 being the number of bits I put in

With 0.1 near, 1000 far, putting in 500 I get 16775538

So it’s non-linear as expected, and I’m just gonna take a stab at normalizing it to get my 0-1 and see how it works.
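For reference, what the calculator on that page evaluates is the standard projection mapping, which is already a 0..1 value before it gets scaled up to the integer range. A sketch (the function name is mine):

```glsl
// Z-buffer value for an eye-space distance z, per the sjbaker page.
// The result is already in the 0..1 range; the calculator multiplies it
// by 2^N (here N = 24) and rounds to show the raw integer value.
float zBufferValue(float z, float zNear, float zFar) {
    float a = zFar / (zFar - zNear);
    float b = zFar * zNear / (zNear - zFar);
    return a + b / z;
}
```

With zNear = 0.1, zFar = 1000 and z = 500 this comes out at roughly 0.99990, which multiplied by 2^24 gives the 16775538 above.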

Worst case scenario, I’ll just go back to using invisible depth models.

Thanks for all the replies and help

Are you trying to reconstruct world space coordinates using depth buffer?

Nah

Basically I want to create a material in Blender that textures everything by its depth…

The reason I want to do this is a little involved, but essentially:

(In Blender)
- I take a sphere
- Make it 100% reflective
- Unwrap it onto a square texture
- Put it inside my scene somewhere
- Give it a texture and bake it

This just gives you one of those equirectangular projections; I can now put the sphere inside JME with the baked texture, place the camera directly in the center of the sphere and look around at my projection.

I then have a character walking around on a collision model of the room I made in blender, and it looks like he’s walking around on the baked render, but without depth.

Up until now I have been using a model with color write off (thanks to everyone here who helped), which works. However, this isn’t 100% accurate, and the model needs to be pretty high poly in some circumstances with high detail.

I was hoping to bake another texture onto the sphere, this time the depth, and then just set it in the shader for the sphere. Blender can provide you with the distance from the camera to the pixel, so I have implemented the formula from that page with a small amount of success.

When I first started trying to load the depth I was using a single background picture, not this sphere thing, and I was loading the entire scene into JME, rendering the depth buffer to a texture, and saving it. Now that I have the sphere with its UV map I can’t do that anymore, so I am attempting to create the depth maps in Blender by baking.
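For what it’s worth, the shader side of that plan could look roughly like this. It is a sketch under several assumptions: the texture and parameter names are made up, and the baked value is taken to be the camera distance divided by some known maximum distance:

```glsl
#version 110

// Sketch: write the hardware depth from a distance map baked in Blender.
// m_BakedDistance, m_MaxDistance etc. are made-up names, not existing
// material parameters.
uniform sampler2D m_ColorMap;       // the baked render on the sphere
uniform sampler2D m_BakedDistance;  // camera distance baked the same way, 0..1
uniform float m_MaxDistance;        // distance that maps to 1.0 in the bake
uniform float m_FrustumNear;
uniform float m_FrustumFar;

varying vec2 texCoord;

void main() {
    // recover the real distance from the normalised baked value
    float z = texture2D(m_BakedDistance, texCoord).r * m_MaxDistance;

    // formula from love_your_z_buffer.html, kept in the 0..1 range.
    // Caveat: it expects the distance along the view axis, which is not
    // quite the same as the radial distance Blender hands you.
    float a = m_FrustumFar / (m_FrustumFar - m_FrustumNear);
    float b = m_FrustumFar * m_FrustumNear / (m_FrustumNear - m_FrustumFar);
    gl_FragDepth = clamp(a + b / z, 0.0, 1.0);

    gl_FragColor = texture2D(m_ColorMap, texCoord);
}
```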