There is a red 1×1×1 cube centered at (0,0,0), and the camera is located at (0,0,3). I calculated the depth of the point at the middle of the screen, Vector3f(0,0,1), using cam.getScreenCoordinates().getZ(); it is 0.5005005. The bottom-right image is the depth map I computed with the shader; its center point has a depth value of only 0.5019608. The shader code is as follows:
In practice, however, a linear depth buffer like this is almost never used. Because of the properties of projection, a non-linear depth equation is used that is proportional to 1/z. The result is that we get enormous precision when z is small and much less precision when z is far away.
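The standard non-linear mapping can be written as d = (1/z − 1/n)/(1/f − 1/n). As a sketch, assuming jMonkeyEngine's common default planes n = 1 and f = 1000 (an assumption here, not stated in the question), the point (0,0,1), two units in front of the camera at (0,0,3), lands exactly on the CPU value of 0.5005005:

```java
public class DepthMapping {
    // Post-projection depth in [0, 1] for an eye-space distance z,
    // using the standard 1/z-proportional mapping.
    static float depth(float z, float near, float far) {
        return (1f / z - 1f / near) / (1f / far - 1f / near);
    }

    public static void main(String[] args) {
        // The point (0,0,1) is 2 units in front of a camera at (0,0,3).
        // Assumed near/far of 1/1000 are jME-style defaults, not from the question.
        System.out.println(depth(2f, 1f, 1000f)); // prints 0.5005005
    }
}
```

That 0.5005005 falls out of these assumed defaults suggests the CPU value really is the post-projection depth rather than a linear view-space Z.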
I want to get two Z-buffers, one from the CPU and one from the shader. The shader-side value I read with float depth = getDepth(m_DepthTexture, texCoord).r;, and the CPU-side value with camera.getScreenCoordinates().getZ(). I then compare them; in theory, at the same point they should have the same value,
but the shader gives depth = 0.5019608. I think the “view space” Z that getScreenCoordinates() provides and the getDepth() value in the shader are equivalent; what I don’t understand is that I write in a fixed value, read it back out, and it has changed.
How many floating-point math operations does that shader perform on the values?
The difference that you are seeing is on the order of (1 × 2^-7) of the value, which could be accounted for by floating-point errors, including conversion to and from decimal.
(0.5005005f can’t be represented exactly. The closest value that a single-precision float can hold is actually 0.500500500202178955078125, and 0.5019608 is a rounded approximation of 0.501960813999176025390625)
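One way to check those exact values yourself, as a sketch in plain Java (widening a float to double is exact, and the BigDecimal(double) constructor prints the exact binary value):

```java
import java.math.BigDecimal;

public class FloatExact {
    // Exact decimal expansion of the single-precision float nearest
    // to the value that was parsed into f.
    static String exact(float f) {
        return new BigDecimal((double) f).toPlainString();
    }

    public static void main(String[] args) {
        System.out.println(exact(0.5005005f)); // 0.500500500202178955078125
        System.out.println(exact(0.5019608f)); // 0.501960813999176025390625
    }
}
```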
I’m feeling a little dim here. I just realized that you are using a texture that only has 8 bits per component. You are running into rounding errors.
Even if the image used 8-bit integers, both of the listed floats would end up represented as the same value in the [0…255] range (they would both be 128). The second value is a pretty-printed, shortened form of converting back to the decimal [0…1] range (the actual value is 0.501960784314).
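The round trip through an 8-bit channel can be sketched like this (round-to-nearest and the divide-by-255 convention are assumptions about how the quantization happens; actual hardware may differ slightly):

```java
public class Quantize8 {
    // Store a [0, 1] value in an 8-bit channel, then read it back.
    static float roundTrip(float v) {
        int byteValue = Math.round(v * 255f); // quantize to [0, 255]
        return byteValue / 255f;              // convert back to [0, 1]
    }

    public static void main(String[] args) {
        System.out.println(Math.round(0.5005005f * 255f)); // prints 128
        System.out.println(roundTrip(0.5005005f));          // prints 0.5019608
    }
}
```

This reproduces the question's numbers exactly: 0.5005005 goes in, 128/255 ≈ 0.5019608 comes out.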
If you are truly using an 8-bit float, the precision is even less.