Depth Bits and Depth Buffers

I’m trying to understand the AppSettings depth-bits property (setDepthBits / getDepthBits). According to the Javadoc for the setter:

/**
 * Sets the number of depth bits to use.
 * <p>
 * The number of depth bits specifies the precision of the depth buffer.
 * To increase precision, specify 32 bits. To decrease precision, specify
 * 16 bits. On some platforms 24 bits might not be supported, in that case,
 * specify 16 bits.<p>
 * (Default: 24)
 *
 * @param value The depth bits
 */
public void setDepthBits(int value){
    putInteger("DepthBits", value);
}

…which is a great description…if you already know what a depth buffer is.
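For anyone else landing here, this is where that setter fits in practice; a minimal sketch, assuming jME3’s standard SimpleApplication bootstrap (the class name and chosen values are just placeholders):

import com.jme3.app.SimpleApplication;
import com.jme3.system.AppSettings;

public class DepthBitsExample extends SimpleApplication {

    public static void main(String[] args) {
        AppSettings settings = new AppSettings(true); // true = start from sensible defaults
        settings.setDepthBits(24);  // the documented default; 16 and 32 are the other usual values
        DepthBitsExample app = new DepthBitsExample();
        app.setSettings(settings);
        app.setShowSettings(false); // skip the startup dialog so these settings are actually used
        app.start();
    }

    @Override
    public void simpleInitApp() {
        // scene setup would go here
    }
}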

Also, when I search the docs for “depth bit” or “depth buffer”, none of the search results explain what either of those concepts actually is.

Is there any jME3 documentation explaining (for newbies) what depth bits/depth buffers are, and if not, is there any reading outside of jME3 pertaining to these concepts that the community recommends?

These are not JME things. These are basic 3D graphics rendering things. So Google is a good source.

First link from Google:
https://msdn.microsoft.com/en-us/library/bb976071.aspx

Thanks @pspeed, so I think I now “get” what the depth buffer is, and that the depth bits dictate how many bytes get allocated for each depth-buffer entry (either 2, 3, or 4 bytes per pixel). And I’m assuming the “cost tradeoff” of using a higher value for depth bits is more CPU/GPU utilization.
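A back-of-the-envelope sketch of the raw memory side of that tradeoff at a common resolution (assuming a plain, unpadded buffer; real drivers typically pack 24-bit depth together with 8 stencil bits into 32-bit words):

public class DepthBufferCost {
    public static void main(String[] args) {
        // Naive depth-buffer size for a 1920x1080 framebuffer.
        int width = 1920, height = 1080;
        for (int depthBits : new int[] {16, 24, 32}) {
            long totalBytes = (long) width * height * (depthBits / 8);
            System.out.printf("%2d-bit depth: ~%.1f MB%n",
                    depthBits, totalBytes / (1024.0 * 1024.0));
        }
        // Prints roughly: 16-bit ~4.0 MB, 24-bit ~5.9 MB, 32-bit ~7.9 MB.
    }
}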

I’m wondering if there’s an intelligent way to query the hardware that jME3 is running on to see what the “optimal” value for depth bits should be?

I’m not even sure how I’d define “optimal”.

If you don’t have specific requirements then don’t set it. Let the hardware decide.

How does the “hardware decide”? According to the source code, the default value is 24 bits. Are you saying that there’s another place in the code where the “hardware” (I assume OpenGL or something in the graphics driver) overrides that default value?
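For what it’s worth, one way to see what was actually allocated is to ask OpenGL after the context is up. A sketch, assuming an LWJGL backend and a compatibility-profile context (the classic GL_DEPTH_BITS query is deprecated in core profile):

import org.lwjgl.opengl.GL11;

public class DepthBitsQuery {
    // Call from the render thread (e.g. inside simpleInitApp()), where the
    // GL context is current; querying from another thread is undefined.
    public static int queryActualDepthBits() {
        return GL11.glGetInteger(GL11.GL_DEPTH_BITS);
    }
}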

Also, to help better define “optimal”, I guess I’m wondering what behaviors I might expect if I ran the same game on the same commodity machine and changed the values from 16 to 24 to 32 in three separate runs. Would I just see things Z-ordered more correctly when the value is 32 rather than when it’s 16? Would I tax the CPU/GPU more if the value was 32 vs 16?

You can probably read all about optimal z-buffer bit settings on the internet. There are bound to be a thousand articles on the subject. It’s not a simple thing, as “optimal” depends on almost every other setting you have and the scene you are using. And generally it’s not the place you should start “optimizing”.

I can say that in 6+ years of using JME, I have never had a need to set it to something other than the default. Even if you run into bit-size related z-fighting issues (I’ll let you google that one yourself), there are nearly always better ways to deal with it than messing with the z-buffer depth.

In modern hardware, it’s a very specific set of circumstances that would require something different.
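To put rough numbers on the precision question: with a perspective projection, depth values are spaced non-linearly, so resolution is worst near the far plane. A sketch of the worst-case resolvable separation, assuming a standard [0,1] depth mapping d(z) = f/(f−n) · (1 − n/z) and an integer depth buffer (floating-point depth formats behave differently):

public class DepthPrecision {
    public static void main(String[] args) {
        double near = 0.1, far = 1000.0; // typical frustum planes
        double z = far;                  // worst case: right at the far plane
        for (int bits : new int[] {16, 24, 32}) {
            // One buffer step is 2^-bits; dividing by the slope of d(z),
            // d'(z) = f*n / ((f - n) * z*z), gives the smallest eye-space
            // separation two surfaces need to get distinct depth values.
            double minSeparation =
                    Math.pow(2, -bits) * z * z * (far - near) / (far * near);
            System.out.printf("%2d bits: surfaces closer than ~%.4f units can z-fight%n",
                    bits, minSeparation);
        }
        // With near=0.1 the 16-bit case is dramatic (~150 units at z=1000),
        // which is why pushing the near plane out usually beats raising depth bits.
    }
}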

:thumbsup: thanks

Actually, while we’re on the subject: since that Microsoft article mentions that depth bits tie into stencil bits, and seeing that StencilBits is an AppSettings property, I’m wondering if there’s a rule of thumb for determining when you need to use stencil bits in jME3 (setStencilBits(8))?

Are you using stenciling?

If stenciling produces more realistic/life-like renderings, then yeah, I’d like to use it. But I’m sure it’s not as simple as setStencilBits(8) and voilà, you have amazing shadows. I’m sure the assets you use as well as other factors contribute to whether setting stencil bits on (“8”) has any effect on how jME3 renders, right?

In other words, when you ask “Are you using stenciling?”, is the answer as simple as me saying “yep” or “nope”?! Or is there more tuning/design/setup work involved?

If you don’t know if you are using stenciling then you aren’t using stenciling.

Ok. By default stenciling is turned off (setStencilBits(0)).

If I had a simple game that used this default value, and then I turned it on (setStencilBits(8)), and left everything else the same, would I see dramatically better shadows in the rendered frames?

Nope. AFAIK there is no jME built-in technique that makes use of the stencil buffer. It only matters when you build your own custom pipeline, and when you reach that point you need at least OpenGL 101.
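To make that concrete: setStencilBits(8) only asks for the buffer to be allocated; nothing in the stock pipeline reads or writes it afterwards. A sketch of the allocation side (this goes where the settings are built, before app.start(), as in the earlier hypothetical bootstrap):

// Inside main(), before the application starts:
AppSettings settings = new AppSettings(true);
settings.setDepthBits(24);
settings.setStencilBits(8); // most hardware serves this as a packed 24/8 depth/stencil format
// The stencil buffer now exists, but none of jME3's built-in techniques touch it;
// you would still have to configure stencil test/write state in a custom
// material or render pass before it changes anything on screen.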

:thumbsup: Thanks @zzuegg for a kind, crystal clear, understandable answer that is not fraught with petty, unprovoked jabs.

It’s answers like these that make the jME3 community seem welcoming to newbies such as myself, which in turn leads to more use and adoption by other developers, which in turn helps the jME3 community grow and benefits the project in the long run. :wink:

I updated the wiki to reflect the links pspeed listed.

https://jmonkeyengine.github.io/wiki/jme3/intermediate/appsettings.html
