There is one thing I am hoping someone can answer for me.
Many of the early 3D adaptors for the PC have a 16-bit Z buffer, some others have 24 bits, and the very best have 32 bits.
What people often fail to realise is that in nearly all machines, the Z buffer is non-linear.
This made me wonder: given the exact same position/view/clipping planes/everything, would a game always produce an identical depth buffer? Can it differ per graphics card, driver, or anything else? Or in software? (I’d like to create depth maps in Blender and use them in JME.)
I’m interested because, for my still-image-background-with-3D-overlay project, I’d like to render the depth of the background image to an image and load that, rather than use an invisible model to provide depth, which is what I do at the moment.
Your best bet here is to make a linear depth buffer and do the depth test in the shader instead of relying on the built-in mechanism.
To answer your questions: the depth buffer will be different with 16 bits or 24 bits, but they will be very similar. The 24-bit one will just be more precise for values far from the camera than the 16-bit one.
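To see why precision differs mostly far from the camera, here is a small sketch (the near/far plane values are assumptions for illustration, not anything from your scene) of the standard perspective mapping between stored depth and eye-space distance, plus a helper that quantizes a depth value the way a 16- or 24-bit integer depth buffer would store it:

```java
public class DepthPrecision {
    // Assumed clipping planes, purely for illustration.
    static final float NEAR = 1f, FAR = 1000f;

    // Standard perspective projection: a stored depth d in [0,1] maps back
    // to eye-space distance via z = (f * n) / (f - d * (f - n)).
    static float toEyeZ(float d) {
        return (FAR * NEAR) / (FAR - d * (FAR - NEAR));
    }

    // Round a depth value to the given number of bits, as an integer
    // depth buffer of that size would store it.
    static float quantize(float d, int bits) {
        long max = (1L << bits) - 1;
        return Math.round(d * max) / (float) max;
    }
}
```

Quantizing the same depth at 16 and 24 bits and converting back through `toEyeZ` shows the error growing toward the far plane, which is the non-linearity the quoted article is talking about.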
Now for the image format: PNG supports 16 bits per channel, but JME “downscales” it to 8 bits when loading. You’ll have to use the HDR or DDS formats, which support 16- or 32-bit-per-channel images. Or… you could somehow pack your 24-bit depth map into an 8-bit RGB PNG… but you’re going to have a hard time doing that from Blender.
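The RGB-packing idea mentioned above could look like this (just a sketch of the bit manipulation, not tied to any particular loader):

```java
public class DepthPack {
    // Pack a normalized depth in [0,1] into three 8-bit channels
    // (R = most significant byte), so a plain 8-bit RGB PNG can
    // carry 24 bits of depth precision.
    static int[] pack(float depth) {
        int d = (int) (depth * ((1 << 24) - 1));
        return new int[] { (d >> 16) & 0xFF, (d >> 8) & 0xFF, d & 0xFF };
    }

    // Recover the depth from the three channels; a shader would do the
    // equivalent with a dot product against per-channel weights.
    static float unpack(int r, int g, int b) {
        int d = (r << 16) | (g << 8) | b;
        return d / (float) ((1 << 24) - 1);
    }
}
```

The round trip loses at most one 24-bit step (about 6e-8), which is far below what an 8-bit image could otherwise hold.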
I guess you don’t have very big scenes (if I recall, it’s mostly interiors without much range), so a linear buffer can be enough if the camera frustum is short.
My idea was more to not use the hardware depth at all: have a shader that takes your baked depth buffer as a regular texture, compares the depth stored in it with the depth of the current fragment, and discards the fragment if it’s behind the stored depth value.
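The per-fragment decision is trivial; expressed in plain Java for clarity (in the real thing it would be a couple of lines of GLSL in the fragment shader, and both values must be in the same linear depth space):

```java
public class BakedDepthTest {
    // Keep a fragment only if it is not behind the depth stored in the
    // baked texture; otherwise the shader would discard it.
    static boolean keepFragment(float fragmentDepth, float bakedDepth) {
        return fragmentDepth <= bakedDepth;
    }
}
```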
Yes, but all floats are written as a set of 0s and 1s. This means you can read a float value’s bits as a normal integer; it is up to you (or the program) to interpret what lies in particular memory cells.
Btw, reinterpreting a float as an integer by casting pointers was used in Quake 3 for fast calculation of the inverse square root. There is nice code with funny comments on Wikipedia: Fast inverse square root - Wikipedia
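In Java you don’t cast pointers; `Float.floatToRawIntBits` / `Float.intBitsToFloat` do the same bit reinterpretation. A sketch of that famous routine:

```java
public class FastInvSqrt {
    // Reinterpret the float's bits as an int, apply the magic-constant
    // shift, then refine with one Newton-Raphson step.
    static float invSqrt(float x) {
        int i = Float.floatToRawIntBits(x);
        i = 0x5f3759df - (i >> 1);
        float y = Float.intBitsToFloat(i);
        y *= 1.5f - 0.5f * x * y * y; // one Newton-Raphson iteration
        return y;
    }
}
```

With the single refinement step the result is accurate to well under 1%, which was good enough for Quake 3’s lighting math.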
Basically I want to create a material in Blender that textures everything by its depth…
The reason I want to do this is a little involved, but essentially:
-I take a sphere
-Make it 100% reflective
-Unwrap it onto a square texture
-Put it inside my scene somewhere
-Give it a texture and bake it
This just gives you one of those equirectangular projections; I can now put the sphere inside JME with the baked texture, place the camera directly in the center of the sphere and look around at my projection.
I then have a character walking around on a collision model of the room I made in Blender, and it looks like he’s walking around on the baked render, but without depth.
Up until now I have been using a model with color write off (thanks to everyone here who helped), which works. However, this isn’t 100% accurate, and the model needs to be pretty high-poly in some circumstances with high detail.
I was hoping to bake another texture onto the sphere, this time containing the depth, and then just set it in the shader for the sphere. Blender can provide the distance from the camera to each pixel, so I have implemented the formula from that page with a small amount of success.
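I can’t tell which exact formula that page gives, but the standard conversion from Blender-style camera distance to the window-space value a hardware depth buffer would hold is sketched below (near/far are whatever your JME camera uses; the values in the test are assumptions):

```java
public class DistanceToDepth {
    // Window-space depth in [0,1] that a standard perspective projection
    // produces for a fragment at eye-space distance z.
    static float toWindowDepth(float z, float near, float far) {
        return (far / (far - near)) * (1f - near / z);
    }
}
```

Distances at the near plane map to 0 and distances at the far plane map to 1, with the characteristic bunching of precision near the camera.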
When I first started trying to load the depth I was using a single background picture, not this sphere setup: I was loading the entire scene into JME, rendering the depth buffer to a texture, and saving it. Now that I have the sphere with its UV map I can’t do that anymore, so I am attempting to create the depth maps in Blender by baking.