[SOLVED] Why is depth map information different from calculated?

[image: scene with the red cube, with the rendered depth map shown in the bottom right]
There is a red cube with length, width and height 1×1×1, centered at (0,0,0). The camera is located at (0,0,3). I calculated the depth of the middle point of the screen, Vector3f(0,0,1), using cam.getScreenCoordinates().getZ(); it is 0.5005005. The bottom right image is the depth map I calculated with the shader; its center point has a depth value of 0.5019608. The shader code is as follows:

    float depth = getDepth(m_DepthTexture, texCoord).r;
    gl_FragColor = vec4(depth, depth, depth, 1.0f);

I wonder why they are different.

The depth buffer does not hold direct z values like you think.

Or look at almost any of the post processing filters.

Edit: Also in this article you can scroll down to the section “Depth value precision”:
https://learnopengl.com/Advanced-OpenGL/Depth-testing

In practice however, a linear depth buffer like this is almost never used. Because of projection properties a non-linear depth equation is used that is proportional to 1/z. The result is that we get enormous precision when z is small and much less precision when z is far away.

z = 2.0; plug it into the depth equation from that article,

    depth = (1/z - 1/near) / (1/far - 1/near)

and you get 0.5005005.
but I got a value of 0.5019608 from the depth map. The depth map is the default m_DepthTexture property in the shader. Is the depth map wrong?

What are near and far?

Edit: for what it’s worth, every time I’ve used the depth values for meaningful things they have been accurate. But I used resources similar to what is described in my first link to derive the real Z.

…as mentioned, you can also look at the JME post-processors.

near: cam.getFrustumNear() = 1.0, far: cam.getFrustumFar() = 1000.0.
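With those values, the 0.5005005 figure can be reproduced from the standard non-linear depth equation described in the learnopengl article above; a quick sanity check in Python (not jME code):

```python
# Standard window-space depth: depth = (1/z - 1/near) / (1/far - 1/near)
near, far = 1.0, 1000.0
z = 2.0  # eye-space distance from the camera at (0,0,3) to the cube face at z=1

depth = (1.0 / z - 1.0 / near) / (1.0 / far - 1.0 / near)
print(depth)  # ~0.5005005
```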

Now, my shader code is as follows:

    float depth = fetchTextureSample(m_DepthTexture, texCoord, 1).r;
    vec4 pos = vec4(texCoord, depth, 1.0) * 2.0 - 1.0;
    pos = g_ViewProjectionMatrixInverse * pos;
    pos.xyz /= pos.w;
    gl_FragColor = vec4(pos.z, pos.z, pos.z, 1.0f);

but pos.z = 1.0. :sob: why?

Note that the code you have is for calculating the real z value… 1 to 1000 or whatever. Frag color is clamped to [0..1], so writing that out isn’t going to show you anything better than 1.0.

What are you actually trying to do in the end?

I want to get two Z-buffers, one from the CPU and one from the shader. The shader value I get with float depth = getDepth(m_DepthTexture, texCoord).r;, and the CPU value with camera.getScreenCoordinates().getZ(). Then I compare them; in theory, at a given point, they should have the same value.

Hmmm… that doesn’t sound like a very interesting game.

I assume you actually want to DO something with that.

Anyway, the relationship between the “view space” Z that getScreenCoordinates() provides and the getDepth() in the shader requires some additional transform, and I don’t remember exactly what it is.

…but note the Z you get in shader from the math you’ve included is “actual meters from eyeball”.
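That recovery is just the depth equation solved for z; a rough Python sketch (assuming the standard [0,1] depth convention and the frustum values quoted earlier in this thread):

```python
near, far = 1.0, 1000.0  # frustum values from earlier in the thread

def linearize(depth):
    # Invert depth = (1/z - 1/near) / (1/far - 1/near) to get eye-space distance
    return 1.0 / (depth * (1.0 / far - 1.0 / near) + 1.0 / near)

print(linearize(0.5005005))  # ~2.0, i.e. "meters from eyeball" to the cube face
```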

I set the depth to a fixed value, then read it back, and found that it is no longer the fixed value I set. For example, the fragment shader code:

    depth = 0.5005005f;
    gl_FragColor = vec4(depth, depth, depth, 1.0f);

Pass setting:

pass.init(renderManager.getRenderer(), tempWidth, tempHeight, Format.RGBA8, Format.Depth, 1, tempDepthMat);

I implemented a simple compute shader and then sent the image rendered by the Pass to it:

    GL20.glUseProgram(programId);
    GL42.glBindImageTexture(bindingIndex, image.getId(), 0, false, 0, GL15.GL_READ_ONLY, GL11.GL_RGBA8);

Then I get the depth value through the compute shader:

    float depth = imageLoad(mipmap_0, ivec2(width / 2, height / 2)).r;

but depth = 0.5019608. I think the “view space” Z that getScreenCoordinates() provides and the getDepth() in the shader are equivalent, but what I don’t understand is that I put in a fixed value and it changes when I read it back out.

So you are saying, regardless of anything else… you set a color and then get out a different color?

…sounds like something like gamma is messing with you.

Yes, I initially thought it was a conversion, but found that even a fixed value gives a different result. I guess the image format is not correct (I set RGBA8); I’m not clear about this area.

Normally color in = color out.

…unless you are out of the 0…1 range OR you have gamma correction turning linear color back to srgb.

Edit: but note that one byte does not provide a lot of precision.

I changed the format from RGBA8 to RGBA16F, and the result (0.003921569) I got was even more outrageous.

Quick thought: (0.003921569+1)/2=0.5019608

Seems like the original value [0, 1] is being mapped into a range [-1, 1]. However, I do not know where the difference from 0.5005005 is coming from.
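Both observed numbers drop out of 8-bit quantization followed by that remap; a quick Python check (assuming round-to-nearest RGBA8 storage):

```python
stored = 0.5005005             # value the shader writes
q = round(stored * 255) / 255  # what an RGBA8 channel actually holds: 128/255
print(q)                       # ~0.5019608, the value read back
print(q * 2.0 - 1.0)           # ~0.003921569, i.e. 1/255, after the [-1,1] remap
```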

How many floating-point math ops on the values in that shader?

The difference that you are seeing is on the order of (1 * 2^-7) of the value, which could be accounted for by floating point errors - including conversion from and to decimal.

(0.5005005f can’t be represented exactly. The closest value that a single-precision float can hold is actually 0.500500500202178955078125, and 0.5019608 is a rounded approximation of 0.501960813999176025390625)
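Those exact values are easy to reproduce by round-tripping through single precision; e.g. in Python:

```python
import struct

def as_float32(x):
    # Round-trip a Python double through an IEEE-754 single-precision float
    return struct.unpack('f', struct.pack('f', x))[0]

print(f'{as_float32(0.5005005):.24f}')  # 0.500500500202178955078125
print(f'{as_float32(0.5019608):.24f}')  # 0.501960813999176025390625
```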

I’m feeling a little dim here. I just realized that you are using a texture that only has 8 bits per component. You are running into rounding errors.

  • Even if the image was an 8-bit integer, both of the listed floats would end up being represented as the same value in the range [0…255]. (they would be 128) The second value is a pretty-print shortened form of converting back to a decimal [0…1] range. (Actual value is 0.501960784314)
  • If you are truly using an 8-bit float, the precision is even less.

Right, it does seem to be rounding errors. When I changed the format from RGBA8 to RGBA32F (RGBA16F is still wrong), the two values are the same: 0.5005005.

Can you explain in detail why this is happening?

I’m probably not the right person to try to explain it all. I’ll try to give a simple one for the basic 8-bit version:

  • A single byte of memory (8 bits) can only represent one of 256 specific values. We often say that these are the integers { 0, 1, 2... 253, 254, 255 }.
  • When you use a byte as a depth buffer over [0..1] you are instead saying that it can hold the fractions { (0/255), (1/255), (2/255)...(253/255), (254/255), (255/255) }.
  • Every value that you put in is rounded to one of those fractional values, and when you read it back out, that fraction is the number you get back
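A quick Python illustration of that rounding: both the value written and the value read back land on the same byte.

```python
for v in (0.5005005, 0.5019608):
    byte = round(v * 255)    # round-to-nearest 8-bit storage
    print(byte, byte / 255)  # 128 and ~0.5019608 in both cases
```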

For floating point, the details are a little more convoluted, but the essence is the same: Computers use the bits in memory as fingers to count on, and they only have so many fingers.

You need to keep in mind that this is not a mistake or malfunction on the part of the computer or the program. It’s just how numbers in memory are.

For further study, look up “floating point precision”. I don’t really have a recommended reading list, because my own understanding was put together a bit at a time.


thanks a lot!!!
