There is a red cube whose length, width, and height are 1×1×1, centered at (0,0,0). The camera is located at (0,0,3). I calculated the depth of the middle point of the screen, the world point Vector3f(0,0,1), using cam.getScreenCoordinates().getZ(); it is 0.5005005. The bottom-right image is the depth map I calculated with the shader; its center point has a depth value of only 0.5019608. The shader code is the following:

In practice, however, a linear depth buffer like this is almost never used. Because of projection properties, a non-linear depth equation is used that is proportional to 1/z. The result is that we get enormous precision when z is small and much less precision when z is far away.

Take z = 2.0, plug it into the following formula, and you get 0.5005005.
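That calculation can be sketched in a few lines (a minimal sketch of the hyperbolic 1/z depth mapping; near = 1.0 and far = 1000.0 are assumed here as typical jME frustum values, so substitute your camera's actual settings):

```python
# Hyperbolic (1/z) screen-space depth, as produced by a standard
# perspective projection. near/far are assumed values (a common jME
# frustum); use your camera's actual frustum settings.
def screen_depth(z_eye, near=1.0, far=1000.0):
    # d = (1/near - 1/z) / (1/near - 1/far), mapping [near, far] -> [0, 1]
    return (1.0 / near - 1.0 / z_eye) / (1.0 / near - 1.0 / far)

# For z = 2.0 this reproduces the 0.5005005 value from the post
print(screen_depth(2.0))
```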

But I got a value of 0.5019608 based on the depth map. The depth map is the default property named m_DepthTexture in the shader. Is the depth map wrong?

Edit: for what it’s worth, every time I’ve used the depth values for meaningful things, they have been accurate. But I used resources similar to what is described in my first link to derive the real Z.

…as mentioned, you can also look at the JME post-processors.

Note that the code you have is for calculating the real z value… 1 to 1000 or whatever. I don’t think frag color is going to tell you anything better than 1.0.

I want to get two Z-buffers, one from the CPU and one from the shader. I get the shader-calculated z-buffer with float depth = getDepth(m_DepthTexture, texCoord).r;, and the CPU value with camera.getScreenCoordinates().getZ(). Then I compare them; theoretically, at some point they should have the same value.

Hmmm… that doesn’t sound like a very interesting game.

I assume you actually want to DO something with that.

Anyway, the relationship between the “view space” Z that getScreenCoordinates() provides and the getDepth() in the shader requires some additional transform, and I don’t remember exactly what it is.

…but note the Z you get in shader from the math you’ve included is “actual meters from eyeball”.
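That transform back from a stored depth to “meters from eyeball” can be sketched like this (assuming the usual hyperbolic 1/z depth mapping, with near = 1.0 and far = 1000.0 as assumed frustum values, not something taken from the original post):

```python
def eye_z_from_depth(d, near=1.0, far=1000.0):
    # Invert d = (1/near - 1/z) / (1/near - 1/far) to recover eye-space z,
    # i.e. the linear distance in front of the camera ("meters from eyeball")
    return 1.0 / (1.0 / near - d * (1.0 / near - 1.0 / far))

# The depth value ~0.5005005 from the post maps back to an eye-space
# distance of 2.0 (camera at z=3 looking at the point (0,0,1))
print(eye_z_from_depth(0.5005005))
```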

But depth = 0.5019608. I think the “view space” Z that getScreenCoordinates() provides and the getDepth() in the shader are equivalent; what I don’t understand is that I put in a fixed value, and when I take it back out, it has changed.

Yes, I initially thought it was a conversion issue, but found that a fixed value gives different results. I guess the image format is not correct (I set RGBA8). I’m not clear on this topic.

How many floating-point math ops on the values in that shader?

The difference that you are seeing is on the order of (1 * 2^-7) of the value, which could be accounted for by floating point errors - including conversion from and to decimal.

(0.5005005f can’t be represented exactly. The closest value that a single-precision float can hold is actually 0.500500500202178955078125, and 0.5019608 is a rounded approximation of 0.501960813999176025390625)
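Those representability claims can be checked directly; here is a small sketch that snaps a value to the nearest single-precision float and prints its exact decimal expansion:

```python
import struct
from decimal import Decimal

def to_float32(x):
    # Round a Python double to the nearest IEEE 754 single-precision
    # value by packing and unpacking it as a 32-bit float
    return struct.unpack('f', struct.pack('f', x))[0]

# Exact decimal expansions of the nearest single-precision floats
print(Decimal(to_float32(0.5005005)))  # 0.500500500202178955078125
print(Decimal(to_float32(0.5019608)))  # 0.501960813999176025390625
```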

I’m feeling a little dim here. I just realized that you are using a texture that only has 8 bits per component. You are running into rounding errors.

Even if the image was an 8-bit integer, both of the listed floats would end up being represented as the same value in the range [0…255]. (they would be 128) The second value is a pretty-print shortened form of converting back to a decimal [0…1] range. (Actual value is 0.501960784314)
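That round trip through an 8-bit channel can be modeled in a few lines (a toy model of RGBA8 normalized storage, not jME's actual code):

```python
def store_unorm8(x):
    # Writing a [0,1] value to an 8-bit normalized channel:
    # round to the nearest of 256 levels
    return int(round(x * 255.0))

def load_unorm8(b):
    # Reading back: the stored byte b becomes b/255 in [0, 1]
    return b / 255.0

print(store_unorm8(0.5005005))  # both inputs land on byte 128...
print(store_unorm8(0.5019608))
print(load_unorm8(128))         # ...which reads back as ~0.5019608
```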

If you are truly using an 8-bit float, the precision is even less.

I’m probably not the right person to try to explain it all. I’ll try to give a simple one for the basic 8-bit version:

A single byte of memory (8 bits) can only represent one of 256 specific values. We often say that these are the integers { 0, 1, 2... 253, 254, 255 }.

When you use a byte as a depth buffer over [0..1], you are instead saying that it can hold the fractions { (0/255), (1/255), (2/255)...(253/255), (254/255), (255/255) }.

Every value that you put in is rounded to one of those fractional values, and when you read it back out, that fraction is the number you get back.
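A quick sketch of that rounding behavior: the read-back value is always the nearest k/255 fraction, so the round-trip error can never exceed half a step, 1/510 ≈ 0.00196 (which comfortably covers the ~0.0015 discrepancy in this thread):

```python
def roundtrip_unorm8(x):
    # Snap a [0, 1] value to the nearest of the 256 representable fractions
    return round(x * 255.0) / 255.0

# Worst-case round-trip error over a dense sample of inputs stays
# within half a quantization step (1/510)
max_err = max(abs(roundtrip_unorm8(i / 10000.0) - i / 10000.0)
              for i in range(10001))
print(max_err <= 1.0 / 510.0)  # -> True
```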

For floating point, the details are a little more convoluted, but the essence is the same: Computers use the bits in memory as fingers to count on, and they only have so many fingers.

You need to keep in mind that this is not a mistake or malfunction on the part of the computer or the program. It’s just how numbers in memory are.

For further study, look up “floating point precision”. I don’t really have a recommended reading list, because my own understanding was put together a bit at a time.