Make my own Z buffer in the shader?

This is a bit different from the usual shader workflow, so I’m having trouble making it work.

I have a mesh, and I need to set up a buffer for the Z position of all backfacing polygons.
How do I set that up? Issues:
#1: The Z buffer must cover the minimum rectangular area that contains the mesh. I think that can be solved with some quick calculations. Ideally, the shader pipeline would do that calculation, but I guess a vertex shader program never gets to see more than a single vertex. (Is that assumption 100% correct, or are there exceptions that I could exploit? If there is no exception, then I probably have to compute the size and position of the Z buffer on the Java side - I think I know how to do that; see the sketch below this list.)
#2: If the Z buffer needs to be set up on the Java side, I do not want to create a buffer prefilled with maximum values and transfer that. I’d rather send the size and position, and have the GPU set up the buffer on its own. Is there a way to do that?
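For the record, this is the Java-side calculation I have in mind for #1 (a rough sketch using jME’s existing bounding volumes; class and method names are just illustrative, not anything official):

```java
import com.jme3.bounding.BoundingBox;
import com.jme3.math.Vector3f;
import com.jme3.scene.Geometry;

public class ZBufferBounds {

    /**
     * Returns the center and half-extents of the world-space box that contains
     * the mesh; from those, the size and position of the Z buffer can be derived
     * and passed to the shader as material parameters.
     */
    public static Vector3f[] computeBounds(Geometry geometry) {
        // jME already maintains a world-space bounding volume per Geometry;
        // for ordinary meshes it is an axis-aligned BoundingBox.
        BoundingBox box = (BoundingBox) geometry.getWorldBound();
        Vector3f center = box.getCenter().clone();
        Vector3f halfExtents = new Vector3f(box.getXExtent(), box.getYExtent(), box.getZExtent());
        return new Vector3f[] { center, halfExtents };
    }
}
```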

Edit: it’s just occurring to me that the hardware Z buffer might work well enough.
Is there a way to access it from a fragment shader?
Edit #2: The Z buffer seems to be filled with the Z value of the current fragment, not with the Z value of the fragment behind it. My use case requires calculating the distance between the front-facing and the back-facing polygon (convex meshes only). I should have mentioned that in the problem description above, sorry.

Any insights appreciated - thanks in advance!

Option 1:

Paint the object to a texture with back-face culling off and front-face culling on (i.e. cull mode set to Front).

That does mean that you will see the nearest back face, not the furthest back face.
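In jME terms that is basically one cull-mode switch on the material’s render state, something like this (sketch; the helper name is just illustrative):

```java
import com.jme3.material.Material;
import com.jme3.material.RenderState.FaceCullMode;

public class CullingSetup {

    /** Culls front faces instead of back faces, so only back-facing polygons get drawn. */
    public static void cullFrontFaces(Material material) {
        material.getAdditionalRenderState().setFaceCullMode(FaceCullMode.Front);
    }
}
```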

Option 2:

Paint the object to a texture from the other side (i.e. create a viewport with a camera on the far side of the object, facing the opposite direction, and add the object to it), then use the inverse of the Z values from the painted object.
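Something along these lines for the second viewport (a sketch; the class name, the way the camera position is mirrored, and the missing FrameBuffer wiring are all just illustrative):

```java
import com.jme3.math.Vector3f;
import com.jme3.renderer.Camera;
import com.jme3.renderer.RenderManager;
import com.jme3.renderer.ViewPort;
import com.jme3.scene.Spatial;

public class BackViewSetup {

    /**
     * Creates a pre-view that looks at the object from the opposite side,
     * mirroring the main camera across the object's center.
     */
    public static ViewPort createBackView(RenderManager renderManager, Camera mainCam,
                                          Spatial object, Vector3f objectCenter) {
        Camera backCam = mainCam.clone();
        // Place the camera at the mirrored position and look back at the object.
        Vector3f mirrored = objectCenter.mult(2f).subtract(mainCam.getLocation());
        backCam.setLocation(mirrored);
        backCam.lookAt(objectCenter, Vector3f.UNIT_Y);

        ViewPort backView = renderManager.createPreView("BackView", backCam);
        backView.attachScene(object);
        // Attach a FrameBuffer via backView.setOutputFrameBuffer(...) to render this view to a texture.
        return backView;
    }
}
```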

Option 3:

Explain why you are trying to do this so we can see if there is a simpler way to achieve the final objective…

@toolforger said:

Edit: it’s just occurring to me that the hardware Z buffer might work well enough.
Is there a way to access it from a fragment shader?
Edit #2: The Z buffer seems to be filled with the Z value of the current fragment, not with the Z value of the fragment behind it. My use case requires calculating the distance between the front-facing and the back-facing polygon (convex meshes only). I should have mentioned that in the problem description above, sorry.

Any insights appreciated - thanks in advance!


You have to render the scene to a FrameBuffer and attach a depth texture to that FrameBuffer.
You can then send the texture to a shader as a material parameter.
In your case you’ll need two passes: the classic one with face culling set to Back, then a second pass with face culling set to Front. You’ll have two depth textures and you’ll be able to compute the difference.
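Something like this for each pass (a sketch only; the class name is illustrative, and what you call the material parameters depends on your own .j3md):

```java
import com.jme3.material.Material;
import com.jme3.material.RenderState.FaceCullMode;
import com.jme3.renderer.ViewPort;
import com.jme3.texture.FrameBuffer;
import com.jme3.texture.Image.Format;
import com.jme3.texture.Texture2D;

public class DepthPassSetup {

    /**
     * Sets up an offscreen pass whose depth buffer is backed by a texture,
     * so the depth values can later be read in a shader.
     */
    public static Texture2D setupDepthPass(ViewPort offscreenView, Material passMaterial,
                                           int width, int height, boolean cullFrontFaces) {
        // The depth attachment of the FrameBuffer is a texture we keep a handle to.
        Texture2D depthTex = new Texture2D(width, height, Format.Depth);
        FrameBuffer fb = new FrameBuffer(width, height, 1);
        fb.setDepthTexture(depthTex);
        offscreenView.setOutputFrameBuffer(fb);

        // First pass: the classic Back culling; second pass: Front culling.
        passMaterial.getAdditionalRenderState()
                    .setFaceCullMode(cullFrontFaces ? FaceCullMode.Front : FaceCullMode.Back);
        return depthTex;
    }
}
```

Run that once per pass (two offscreen ViewPorts), then hand both depth textures to the final material with material.setTexture(...) under whatever parameter names your material definition declares.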

Be warned that the hardware depth buffer is not linear.
Read this: Learning to Love your Z-buffer.
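For completeness, the usual way to undo that non-linearity (a sketch, assuming a standard perspective projection; the same formula works in a GLSL fragment shader on the value sampled from the depth texture):

```java
public class DepthUtil {

    /**
     * Converts a [0..1] depth-buffer sample back to a linear eye-space distance,
     * assuming a standard perspective projection with the given near/far planes.
     */
    public static float linearizeDepth(float depth, float near, float far) {
        float ndcZ = depth * 2f - 1f;  // back to normalized device coordinates [-1..1]
        return (2f * near * far) / (far + near - ndcZ * (far - near));
    }
}
```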

I want to do transparent voxels. To generate the color visible at the front-facing polygons, I need to know how far the viewing ray from each front-facing pixel will travel before it hits a back-facing polygon.

Voxels can be behind each other visually, so Options 1 and 2 don’t work (if I understood them correctly).

I’m currently considering 3D textures as an alternative.
I see that support for that was recently added by Rémy (hi @nehon!), based on work by darkfalcon. Is it stable enough for practical use?
If yes, that would solve my use case and allow me to avoid shaders entirely, sparing me the second 90% of the learning curve :slight_smile: and an entire round of shader debugging.
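For reference while I experiment, this is roughly how I understand the Texture3D setup works (a sketch, loosely following what the engine’s 3D-texture test does as far as I can tell; the class name is mine, and the material parameter you bind it to depends on the material definition):

```java
import java.nio.ByteBuffer;
import java.util.ArrayList;

import com.jme3.texture.Image;
import com.jme3.texture.Image.Format;
import com.jme3.texture.Texture3D;
import com.jme3.util.BufferUtils;

public class VoxelTextureBuilder {

    /** Builds a size x size x size RGBA8 3D texture from raw voxel data. */
    public static Texture3D buildVoxelTexture(int size) {
        ByteBuffer data = BufferUtils.createByteBuffer(size * size * size * 4);
        // ... fill data with one RGBA byte quadruple per voxel ...
        data.rewind();

        ArrayList<ByteBuffer> layers = new ArrayList<ByteBuffer>(1);
        layers.add(data);
        // The 3D image takes width, height, depth and the backing buffer(s).
        Image image = new Image(Format.RGBA8, size, size, size, layers);
        return new Texture3D(image);
    }
}
```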

Edit: Thanks for mentioning the nonlinearity. I was aware of that but I guess the hint can still benefit googlers and other onlookers, so it’s good to have the mention there.

@toolforger said: I see that support for that was recently added by Rémy (hi @nehon!), based on work by darkfalcon. Is it stable enough for practical use?
Not that recently; it's been there for a while, actually. It's functional, but not widely used, so there might be some issues hiding here and there.
@toolforger said: Edit: Thanks for mentioning the nonlinearity. I was aware of that but I guess the hint can still benefit googlers and other onlookers, so it's good to have the mention there.
I've been trapped by this one myself, and it's the most common pitfall when dealing with the depth buffer, so I try to mention it every time I have the occasion ;)

Well, the 3D textures have worked fine for me and never produced any issues so far.

Sweet. I’ll try something tonight then.

I plan to generate a small (say, 16x16x16) texture and scale it to roughly screen size.
The block around the point of view is left at full transparency, and filled with another 3D texture consisting of smaller blocks.
Lather, rinse, repeat until some minimum size.

The transparency of each block is going to be rather high (alpha typically less than 1%), so I can’t have a threshold cutoff. However, I can fine-tune texture resolution and minimum size, so I guess I have enough knobs to keep performance under control.
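As a quick sanity check on those alpha values (my own back-of-the-envelope math, nothing jME-specific):

```java
public class AlphaCheck {
    public static void main(String[] args) {
        // With per-voxel alpha a, the accumulated opacity after looking
        // through n voxels is 1 - (1 - a)^n.
        double a = 0.01;   // ~1% alpha per voxel
        int n = 64;        // voxels along the viewing ray
        double opacity = 1.0 - Math.pow(1.0 - a, n);
        System.out.printf("opacity after %d voxels: %.2f%n", n, opacity);  // ~0.47
    }
}
```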

Any foreseeable roadblocks on that route? Any hardware limitations that are going to spoil the party, or JME not being built for that scenario?
Or is that kind of scenario running just fine on typical 3D hardware?

Sounds interesting, I’ll be curious to hear how it goes :smiley:

I’ve been planning to publish an MIT-licensed demo.
Which, as usual, hinges on the availability of free time to do it, and on the question of whether it can be made to work in the first place…
… but I’m optimistic; I can’t see any roadblocks right now, except possibly for hardware limits (that’s why I asked - I hope @EmpirePhoenix can shed some light on that).

Well, it depends heavily on hardware and drivers;
for example, my graphics card says it can support up to 4k x 4k x 4k textures, but it does not have enough VRAM to make that happen interactively ^^
If you use only a small map you should not run into problems that fast. Just try to calculate how much the map will use and whether you have enough budget for it.

An alternative, if the hardware does not support it, would be to use a 2D array texture (TextureArray) and blend/mipmap between the Z layers manually.
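Roughly like this, if it comes to that (a sketch, untested; the capability check and the helper class are just illustrative):

```java
import java.util.List;

import com.jme3.renderer.Caps;
import com.jme3.renderer.Renderer;
import com.jme3.texture.Image;
import com.jme3.texture.TextureArray;

public class ArrayTextureFallback {

    /**
     * Builds a 2D array texture from per-Z-layer images, as a fallback
     * when 3D textures are unsupported or too large for the card.
     */
    public static TextureArray buildLayeredTexture(Renderer renderer, List<Image> zLayers) {
        if (!renderer.getCaps().contains(Caps.TextureArray)) {
            throw new UnsupportedOperationException("TextureArray not supported on this hardware");
        }
        // Each entry in zLayers is one Z slice; blending between slices
        // then has to be done manually in the shader.
        return new TextureArray(zLayers);
    }
}
```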

@toolforger said: I've been planning to publish an MIT-licensed demo.

Would be better if it used the same license as jME3 just to save complications if people start using bits of both together for whatever reason.

@zarch The license is there so that people can play with it and know it’s okay to reuse in their own project, or to republish. It’s not supposed to ever go into JME.
MIT is just my standard license for non-viral code. If that code is ever going into JME, I’ll relicense if necessary (but it’s probably going to be a rewrite anyway).

@EmpirePhoenix I hadn’t thought about VRAM limits, but of course these could become relevant.
Hm… 16x16x16 = 4K texels = 16 KB (at 4 bytes per texel).
Floats have 23 bits of useful mantissa precision; each additional level of detail covers another 4 bits of that, so I’ll have exceeded float precision at 6 levels = 96 KB.
Given that Minecraft’s hardware requirements list cards from the ’90s, and those all had at least 64 MB, I guess that means I’m good and most likely to hit bandwidth or processing power limits long before VRAM becomes an issue 8)
… oh, bandwidth isn’t going to be much of an issue either. Even if the full set of textures is swapped in on every frame, it’s still at most 96 KB per frame.
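(Same numbers as a quick sanity check, in case I slipped a digit somewhere:)

```java
public class VramBudget {
    public static void main(String[] args) {
        int texels = 16 * 16 * 16;       // 4096 texels per level
        int bytesPerLevel = texels * 4;  // RGBA8 -> 16 KB per level
        int levels = 6;                  // ~23 mantissa bits / 4 bits per level
        System.out.println(levels * bytesPerLevel / 1024 + " KB total");  // 96 KB
    }
}
```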

This is going to be SOO good :slight_smile:
(I just hope I’m not overlooking something that throws a spanner in the works…)

It’s certainly an interesting approach :slight_smile:

Your main complication is probably going to be texturing etc. of the voxels.

It’s fine so long as everything in the block is the same.

Nothing inside the voxels! If more detail is needed, the voxel should be made fully transparent and filled with a more detailed 3D texture with smaller voxels.