An idea someone has probably had before about depth buffers

I was thinking about how inadequate the Z-buffer tends to be in most game engines. A lot of games use two cameras to render one scene: a near camera and a far camera. But I had a thought that’s somewhat hard to explain, so I’ll do my best.

A depth image goes from 0…1, where 0 is closest to the camera and 1 is the max distance. We use an RGB image, place the depth value in all 3 channels of the texture, and pass it in. (Now bear with me, I know my math isn’t perfect here, but it’s just an example.) So for a 1000-unit buffer, a reading of 0.5 is 500 units. Farther areas of the scene become harder to tell apart because the precision is spread so thin out there, and that’s what causes Z-fighting.
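To put a rough number on that: the depth value a perspective projection actually stores is non-linear, so once it is packed into an 8-bit texture channel it stops being able to tell far distances apart very quickly. A quick plain-Java illustration (the near/far values and the 8-bit channel are just assumptions for the example):

```java
import java.util.Locale;

// Rough illustration of how fast an 8-bit depth channel runs out of precision
// with the standard non-linear perspective depth mapping.
public class DepthPrecisionDemo {

    // Standard window-space depth in 0..1 for a perspective projection.
    static double windowDepth(double z, double near, double far) {
        return (far / (far - near)) * (1.0 - near / z);
    }

    public static void main(String[] args) {
        double near = 0.1, far = 1000.0;
        for (double z : new double[] {1, 10, 100, 900, 910}) {
            double d = windowDepth(z, near, far);
            int stored = (int) Math.round(d * 255.0); // quantize into one 8-bit channel
            System.out.printf(Locale.US, "z = %6.1f   depth = %.6f   8-bit value = %d%n",
                    z, d, stored);
        }
        // Everything much past ~100 units collapses onto the same 8-bit value,
        // which is exactly the "far areas become hard to discern" problem.
    }
}
```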

So here’s my thought: the application or user decides on a number of layers and a depth per layer. Say our depth is split into 10 layers of 1000 units each. We store the 0…1 depth within the current layer in the R/G channels, and fill the B channel with the layer number divided by the max number of layers. So for a 10-layer image with 1000-unit layers, something 3200 units away would be stored as (0.2, 0.2, 0.3). The math to extract this as a normal depth value would simply be the following:

(DepthTexture.b * NumLayers) + DepthTexture.r
(0.3 * 10) + 0.2 = 3.2 layers, i.e. 3200 units
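Here’s that encode/decode round trip as a quick plain-Java sketch (all the names are mine, and the final multiply by the layer size just converts the 3.2 back into 3200 world units):

```java
// Minimal CPU-side sketch of the layered encoding described above; a real
// implementation would live in a shader, this just checks the arithmetic.
public class LayeredDepth {

    static final int   NUM_LAYERS = 10;
    static final float LAYER_SIZE = 1000f; // world units per layer

    // Pack a distance into (r, g, b): r/g hold the 0..1 position inside the
    // layer, b holds layerIndex / NUM_LAYERS.
    static float[] encode(float distance) {
        int layer = (int) (distance / LAYER_SIZE);                     // 3200 -> 3
        float inLayer = (distance - layer * LAYER_SIZE) / LAYER_SIZE;  // 0.2
        return new float[] { inLayer, inLayer, layer / (float) NUM_LAYERS };
    }

    // Recover the distance: (b * NUM_LAYERS + r) * LAYER_SIZE.
    static float decode(float[] rgb) {
        return (rgb[2] * NUM_LAYERS + rgb[0]) * LAYER_SIZE;
    }

    public static void main(String[] args) {
        float[] rgb = encode(3200f);
        System.out.printf("encoded = (%.1f, %.1f, %.1f), decoded = %.0f%n",
                rgb[0], rgb[1], rgb[2], decode(rgb)); // (0.2, 0.2, 0.3), 3200
    }
}
```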

Again, I know this isn’t exact math; there are other things that go into depth calculations, but it’s just an example. The big question is: would this even be possible?

Doing something like that is very common: you simply use multiple viewports. In most “open world” games this is used to simulate a “horizon” with a generated rendering of the far-away terrain, etc.
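In jME3 terms that can look roughly like the sketch below; the scene contents, names and near/far numbers are all made up, it just shows a far pre-view being drawn behind the normal near view:

```java
import com.jme3.app.SimpleApplication;
import com.jme3.renderer.Camera;
import com.jme3.renderer.ViewPort;
import com.jme3.scene.Node;

// Two-viewport sketch: a "far" pre-view renders the distant terrain/horizon
// with a big frustum, then the main viewport renders the near scene on top
// with a small frustum.
public class TwoViewportSketch extends SimpleApplication {

    private final Node farScene = new Node("FarScene");

    @Override
    public void simpleInitApp() {
        // Far camera: same placement as the main cam, but a much deeper frustum.
        Camera farCam = cam.clone();
        farCam.setFrustumPerspective(45f,
                (float) cam.getWidth() / cam.getHeight(), 1000f, 100000f);

        // The pre-view draws first and clears color, depth and stencil.
        ViewPort farView = renderManager.createPreView("FarView", farCam);
        farView.setClearFlags(true, true, true);
        farView.attachScene(farScene);

        // The main viewport keeps its color buffer (the far scene is now the
        // "background") but clears depth before drawing the near scene.
        viewPort.setClearFlags(false, true, false);

        // ... attach near geometry to rootNode and far geometry to farScene ...
    }

    @Override
    public void simpleUpdate(float tpf) {
        // Scenes attached to extra viewports must be updated manually, and the
        // far camera should be kept in sync with the main camera each frame
        // (copying location/rotation is omitted here).
        farScene.updateLogicalState(tpf);
        farScene.updateGeometricState();
    }

    public static void main(String[] args) {
        new TwoViewportSketch().start();
    }
}
```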

Rendering something at full resolution when it is far away enough to cause z-fighting doesn’t make sense in the first place, though; you should have separate ways of creating “near” and “far” geometry anyway.

I see. I just had a random thought and wondered if it was applicable or even possible. I knew lots of games used the multiple-viewports tactic (jMEPlanet uses it as well); I just thought maybe combining it into one viewport with a slightly more complex depth buffer might help performance in some way.

Except you end up subverting the hardware that is designed to do this for you. It’s not like the depth buffer handling is done in code… it’s done down on the GPU. Depth-based rejection is intrinsic to the fragment pipeline.

So that answers my question haha. I kinda figured it wasn’t necessarily possible, at least not traditionally. Still, it’s an interesting thought I suppose.

What everybody does is separate far objects and near objects into two (or more) rendering passes.
Near objects are rendered first, with stencil write enabled. Then the depth buffer is cleared, the camera near/far values are set much higher (say 1000 to 100000), and the far objects are rendered with a stencil test for 0, so they only draw where no near object already did. This gives objects far higher depth precision than normal, because otherwise you would have to use one huge depth range like 1 to 100000, which would cause a lot of Z-fighting.
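In raw GL terms the pass ordering looks roughly like this (LWJGL bindings; the draw helpers and the near/far numbers are placeholders, and the context needs to have been created with a stencil buffer):

```java
import org.lwjgl.opengl.GL11;

// Sketch of the near/far split with a stencil mask. drawNearScene() and
// drawFarScene() are hypothetical helpers that set their own projections;
// only the GL state changes matter here.
public class NearFarStencilPass {

    void renderFrame() {
        GL11.glClear(GL11.GL_COLOR_BUFFER_BIT | GL11.GL_DEPTH_BUFFER_BIT
                | GL11.GL_STENCIL_BUFFER_BIT);

        // Pass 1: near objects (e.g. near = 0.1, far = 1000), writing 1 into
        // the stencil wherever a near fragment lands.
        GL11.glEnable(GL11.GL_STENCIL_TEST);
        GL11.glStencilFunc(GL11.GL_ALWAYS, 1, 0xFF);
        GL11.glStencilOp(GL11.GL_KEEP, GL11.GL_KEEP, GL11.GL_REPLACE);
        drawNearScene();

        // Between passes: clear only depth; the stencil and the near pixels stay.
        GL11.glClear(GL11.GL_DEPTH_BUFFER_BIT);

        // Pass 2: far objects (e.g. near = 1000, far = 100000), drawn only
        // where the stencil is still 0, i.e. where no near object was drawn.
        GL11.glStencilFunc(GL11.GL_EQUAL, 0, 0xFF);
        GL11.glStencilOp(GL11.GL_KEEP, GL11.GL_KEEP, GL11.GL_KEEP);
        drawFarScene();

        GL11.glDisable(GL11.GL_STENCIL_TEST);
    }

    void drawNearScene() { /* placeholder */ }
    void drawFarScene()  { /* placeholder */ }
}
```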

All space simulation games do it this way. Other types of games like Skyrim or Source engine games do it as well, as a “3D Skybox”, which is a skybox with actual geometry in it.

EDIT: The same technique is also used in first-person shooters to render the weapon. In that case, the weapon would be a “near” object, allowing it to be very detailed and appear “separate” from the rest of the scene.