Rendering problem - z order?


I’m seeing some rendering artifacts in my scene. It only seems to happen with a perspective camera, and only on Mac OS and Linux (both systems have ATI cards). We’re using LWJGL in a Swing UI.

Here’s the setup for the renderer:

        // Enable depth testing on the scene root
        ZBufferState buf = renderer.createZBufferState();
        buf.setEnabled( true );
        buf.setFunction( ZBufferState.CF_LEQUAL );
        rootNode.setRenderState( buf );
        rootNode.updateGeometricState( 0.0f, true );

Here's the code for the update step:

        float tpf = timer.getTimePerFrame();
        input.update( tpf );
        rootNode.updateGeometricState( tpf, true );

And here's the rendering code:

        renderer.draw( rootNode );

I don't see this problem on Windows (card is ATI X1600), but I do see it on Linux (FC6) on the same box (dual boot). This problem also happens on Mac OS X (ATI Radeon HD 2600 Pro) on my iMac (Intel Core2 Duo).

Any suggestions would be very welcome.

What do the normals look like? Also, are you using any cullstates? Weird that it behaves differently on Windows. My guess is something is left undefined, and therefore each platform decides to handle it differently.

The normals look fine, and we’re not explicitly setting any cull states. I tried setting rootNode.setCullMode( CullState.CS_NONE ); but that didn’t help either.

When you view the scene up close, the problem goes away. When you move the camera back a certain distance, then this problem reappears.

Perhaps it is the depth buffer precision.
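For anyone hitting this later: the symptom (fine up close, artifacts at a distance) matches classic depth-buffer precision loss. The world-space depth step a buffer can resolve grows with the square of the distance and shrinks as the near plane moves out. Here's a rough back-of-the-envelope sketch, assuming a 24-bit depth buffer and the standard perspective depth mapping (the class and method names are just for illustration, not part of jME):

```java
public class DepthPrecision {

    // Smallest world-space depth difference an n-bit depth buffer can resolve
    // at distance z, for the standard perspective depth mapping:
    //   dz ~= z^2 * (far - near) / (far * near * 2^n)
    static double resolution(double near, double far, double z, int bits) {
        return z * z * (far - near) / (far * near * Math.pow(2, bits));
    }

    public static void main(String[] args) {
        int bits = 24;          // common depth buffer size (assumption)
        double far = 1000.0;    // example frustum values, not from the thread
        double z = 100.0;       // distance where the artifacts show up

        // A tiny near plane wrecks precision at a distance...
        System.out.printf("near=0.01: dz at z=100 is %.4f units%n",
                resolution(0.01, far, z, bits));
        // ...while pushing it out to 1.0 recovers roughly 100x.
        System.out.printf("near=1.00: dz at z=100 is %.6f units%n",
                resolution(1.0, far, z, bits));
    }
}
```

With these example numbers, moving the near plane from 0.01 to 1.0 improves the resolvable depth step at z=100 from about 0.06 units to well under a thousandth of a unit, which is why z-fighting appears only when the camera pulls back.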

renanse said:

Perhaps it is the depth buffer precision.

Good point. I'll check that.


Thanks Renanse - turns out the near clipping plane for the perspective camera was too close to the camera. Pushing it out (and generally 'flattening' the frustum) solved the problem.

As an aside, does anyone think it's possible that the platform differences come down to the Windows drivers storing the frustum values as doubles rather than floats (considering I run Windows and Linux on the same machine)?

Not only does that seem possible, it seems pretty likely…

(more bits mean more precision: a double is 64 bits with a 53-bit significand, while a float is only 32 bits with a 24-bit significand)

Yeah, that was the only explanation I could come up with for the differences between platforms.

Thank you all for your feedback and suggestions.