[TroubleShooting] Large scale SkyFactory causes culling artifacts

Greetings all,

When using the following test code, the background sky shines through at various triangles of the sphere, causing visual artifacts. They are not present when the sky cube is removed. I am testing on Android 2.2 (CyanogenMod-based Zeus ROM).

Is there a limitation on scene size that I have overlooked in the documentation, or is this a culling bug due to the large scene size? Or could it be related to the OpenGL implementation in the ROM? I have no clue.

Is there a method that would prevent these artifacts? (I have tried setting the size of the sky and changing the viewing frustum of the camera; both are commented out in the code.)

In any case, what would be your recommendation for representing large-scale scenes (think: star systems with distances ranging up to 1e10 km from the central sun, and double that for the diameter), besides dividing the distances by some factor (1e4 - 1e6?)?


Here’s the test code:


import com.jme3.app.SimpleApplication;
import com.jme3.material.Material;
import com.jme3.math.Vector3f;
import com.jme3.scene.Geometry;
import com.jme3.scene.Spatial;
import com.jme3.scene.shape.Sphere;
import com.jme3.texture.Texture;
import com.jme3.util.SkyFactory;

public class Game extends SimpleApplication {

    @Override
    public void simpleInitApp() {
        // camera
        //cam.setFrustumFar(1e24f);

        // background
        Texture north = assetManager.loadTexture("Textures/Sky/Lagoon/lagoon_north.jpg");
        Texture east = assetManager.loadTexture("Textures/Sky/Lagoon/lagoon_east.jpg");
        Texture south = assetManager.loadTexture("Textures/Sky/Lagoon/lagoon_south.jpg");
        Texture west = assetManager.loadTexture("Textures/Sky/Lagoon/lagoon_west.jpg");
        Texture up = assetManager.loadTexture("Textures/Sky/Lagoon/lagoon_up.jpg");
        Texture down = assetManager.loadTexture("Textures/Sky/Lagoon/lagoon_down.jpg");
        Spatial sky = SkyFactory.createSky(assetManager, west, east, north, south, up, down);
        //Spatial sky = SkyFactory.createSky(assetManager, west, east, north, south, up, down, Vector3f.UNIT_XYZ, 2147483647); // ~2.1e9 (Integer.MAX_VALUE)
        //sky.queueDistance = 1E12f;
        rootNode.attachChild(sky);

        // create planet
        Material mat = new Material(assetManager, "Common/MatDefs/Misc/Unshaded.j3md");
        mat.setTexture("ColorMap", assetManager.loadTexture("Textures/ColoredTex/Monkey.png"));
        Sphere planet = new Sphere(32, 32, 6e6f, true, false);
        Geometry geom = new Geometry("Planet", planet);
        geom.setMaterial(mat);
        rootNode.attachChild(geom);
    }

    public static void main(String[] args) {
        new Game().start();
    }
}


The issue might be this:

That is a huge frustum, and you may be getting precision loss in the z-buffer at far distances (the sky is the farthest thing rendered).

This is probably much more noticeable on Android, since the depth buffer there is usually only 16-bit (compared to 24-bit on desktop).

For more information on the z-buffer, read this: http://www.sjbaker.org/steve/omniv/love_your_z_buffer.html

One solution would be to raise your near frustum to a higher value, but in any case 1e24 is way too high.
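The precision loss can be made concrete with a little arithmetic. This is a standalone sketch, not jME3 code: the depth-value formula follows the sjbaker article linked above, and all names and numbers here are illustrative.

```java
// Standalone sketch: the integer an N-bit depth buffer stores for an
// eye-space distance z, using z_int = maxVal * (a + b/z) with
// a = zFar/(zFar - zNear), b = zFar*zNear/(zNear - zFar)
// (see the sjbaker z-buffer article linked above).
public class ZBufferDemo {
    public static long depthValue(double z, double zNear, double zFar, int bits) {
        double a = zFar / (zFar - zNear);
        double b = zFar * zNear / (zNear - zFar);
        long maxVal = (1L << bits) - 1;
        return (long) (maxVal * (a + b / z));
    }

    public static void main(String[] args) {
        // 16-bit buffer with a huge frustum (near 0.1, far 1e9): two
        // objects a full million units apart collapse onto the same
        // depth value at far range -> z-fighting.
        System.out.println(depthValue(1e6, 0.1, 1e9, 16) == depthValue(2e6, 0.1, 1e9, 16)); // true
        // Close to the camera, the same buffer still tells 1.0 from 1.1.
        System.out.println(depthValue(1.0, 0.1, 1e9, 16) == depthValue(1.1, 0.1, 1e9, 16)); // false
    }
}
```

Since the mapping spends most of its resolution near zNear, raising the near plane is exactly what spreads usable precision toward the far range.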


Thanks for the excellent reference.

Now at least I understand the problem.

Looks like I’ll have to rethink my scale concept.

First tests with zFar = 1e9 look good. :)

A billion units still seems like a lot, but it might work better. For general purposes (including physics and everything else that gets into a game sooner or later) I'd say about 100,000 units should be the maximum world size to avoid accuracy issues (mind that some values in physics and in some shaders can get very small or very large compared to the input values). If you feel you need more "space", move the world and not the player; that way you avoid these issues.
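The "move the world, not the player" idea is often called a floating origin: once the camera drifts beyond some threshold, shift every object (and the camera itself) back by the camera's offset so coordinates stay small. A minimal, engine-agnostic sketch in plain Java; the threshold and data layout are illustrative, and in jME3 you would translate the scene-graph nodes instead:

```java
import java.util.ArrayList;
import java.util.List;

// Floating-origin sketch: keep world coordinates small by recentering
// everything around the camera once it strays beyond a threshold.
public class FloatingOrigin {
    static final double THRESHOLD = 100_000; // recenter beyond ~1e5 units

    public static void recenter(double[] camera, List<double[]> objects) {
        // squared-distance check avoids a sqrt per frame
        double d2 = camera[0] * camera[0] + camera[1] * camera[1] + camera[2] * camera[2];
        if (d2 < THRESHOLD * THRESHOLD) return;
        double ox = camera[0], oy = camera[1], oz = camera[2];
        for (double[] p : objects) {   // shift the whole world by -cameraOffset
            p[0] -= ox; p[1] -= oy; p[2] -= oz;
        }
        camera[0] = 0; camera[1] = 0; camera[2] = 0; // camera back at origin
    }

    public static void main(String[] args) {
        double[] cam = {250_000, 0, 0};
        List<double[]> world = new ArrayList<>();
        world.add(new double[]{251_000, 0, 0}); // object 1000 units ahead
        recenter(cam, world);
        System.out.println(world.get(0)[0]); // 1000.0 -- relative position preserved
    }
}
```

Relative positions are untouched, so nothing visibly jumps; only the absolute coordinates shrink back into the accurate range of 32-bit floats.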

Appreciate your inputs, normen. Still need to read more on the scale problem.

For now I am looking into the z-fighting issue and still haven't decided how to solve it. Using a logarithmic depth buffer fails on my hardware (PowerVR SGX 530), as it only supports GLSL 1.0, so gl_FragDepth is not available to the shader.

Would multi-pass rendering be a better way to provide good near resolution and acceptable far resolution, or would rendering with several viewPorts (one for near objects, one for far objects including skycube) be a better approach?

In any case, when rendering the sky in a separate queue ( sky.setQueueBucket(Bucket.Sky); ), should the depth buffer not be cleared anyway after rendering the sky?

This would at least solve z-fighting between the skycube and any far object.

So far, I have successfully solved the z-fighting issue in large-scale systems by adopting two different viewports, one for near and one for far, and assigning a different frustum range to each camera:


cam.setFrustumPerspective(45f, (float)scrW / (float)scrH, 0.001f, 1000f); //near

cam2.setFrustumPerspective(45f, (float)scrW / (float)scrH, 500f, 1e9f); //far




Yeah, this approach usually works fine (of course the cost is an additional full render pass). But since I assume this is a space game, that's probably not a problem, as space is mostly empty.


Indeed, space tends to be empty :)

Still, the question remains whether clearing the depth buffer after rendering the sky, and then rendering all other objects on top, would resolve the z-fighting issue (though not for two overlapping distant objects)? Or am I seeing it the wrong way?

@pedrabella I’m new to JME and am also having the same z problem. I see you are using two cameras to fix it, but how are you setting the output from camera 2 as the background for camera 1? Or are you doing something else?

It’s simple. Just create another view port for the second camera and attach the same scene to it as for the first camera.


		// long-range view: camera 1 km - 1e9 km (world units = km)
		// http://www.sjbaker.org/steve/omniv/love_your_z_buffer.html
		cam.setFrustumPerspective(45f, 1.6f, 1f, 1e9f);

		// second viewport equal to the first
		// short-range view: camera 0.1 m - 10 km
		cam2 = cam.clone();
		cam2.setViewPort(0f, 1f, 0f, 1f);
		ViewPort worldViewport = renderManager.createMainView("World View", cam2);
		cam2.setFrustumPerspective(45f, 1.6f, 0.0001f, 10f);


This works quite well and the frame rate drop is hardly noticeable for simple scenes.

Of course, you’ll need to move both cameras simultaneously.


Right, I guess I’m trying to understand how the image from camera 1 will show up in the background of camera 2, since you are not linking the cameras.

Imagine camera 1 as your left eye and camera 2 as your right eye.
When you look at the keyboard in front of you, both eyes see the keyboard.
The analogy is not fully correct, though: the right eye doesn’t “see the image” of the left eye.
Rather, both see the world by pointing toward the same look-at point.

This is very similar to what you achieve by setting the viewport of camera 2 equal to the viewport of camera 1: both viewports are then rendered on the screen.
At the same time, the positions and look directions of both cameras must be the same.
The only difference between the cameras is the depth range, which makes it possible to see far (camera 1) and near (camera 2) on one screen.


Right, I understand the cameras have to imitate each other’s movements except for the clipping planes, but without a prerender pass, how do you see one rendering through the other?

I tried a similar approach in other engines and it didn’t work, because when the close camera rendered the scene, everything that was cut off was just filled with the background color. So if I overlaid one viewport over another, it would completely cover the viewport underneath. Does the camera in JME render with an alpha layer? How would you see through one viewport to the other?
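If I understand jME3’s viewport handling correctly, the trick is that a ViewPort’s clear flags are configurable per buffer: the near viewport clears only the depth buffer (something like viewPort.setClearFlags(false, true, true), the flags being color/depth/stencil), so every pixel the near pass doesn’t draw keeps the far pass’s color; no alpha layer is involved. A toy model of that compositing in plain Java, illustrative only:

```java
import java.util.Arrays;

// Toy model of two-pass viewport compositing: the far pass fills the
// color buffer; the near pass clears only depth, so untouched pixels
// keep the far pass's colors -- no alpha blending needed.
public class TwoPassComposite {
    public static char[] render() {
        char[] color = new char[8];
        // Far pass: clears color + depth, draws the background everywhere.
        Arrays.fill(color, 'F');
        // Near pass: clears ONLY depth (color buffer left as-is), then
        // draws a small near object covering pixels 3..4.
        color[3] = 'N';
        color[4] = 'N';
        return color; // far image shows through wherever near drew nothing
    }

    public static void main(String[] args) {
        System.out.println(new String(render())); // FFFNNFFF
    }
}
```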


Aha!! Very nice!!! thanks Paul!


Would there be a benefit in using

renderManager.createPreView("World View", cam2);

instead of

renderManager.createMainView("World View", cam2);

What is the difference?

It might be that this would be beneficial in specific use cases.
For my part, however, I have not encountered any overlapping issues using both viewports as main views.
Did you have a specific scenario in mind when asking your question?
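As far as I know, the difference is ordering: pre-views are rendered before all main views each frame, and within each group viewports render in the order they were created. A toy model of that scheduling in plain Java; the method names mimic RenderManager’s, but this is not jME3 code:

```java
import java.util.ArrayList;
import java.util.List;

// Toy model of jME3-style viewport scheduling: every pre-view renders
// before every main view; within each group, creation order is kept.
public class ViewOrder {
    public final List<String> preViews = new ArrayList<>();
    public final List<String> mainViews = new ArrayList<>();

    public void createPreView(String name)  { preViews.add(name); }
    public void createMainView(String name) { mainViews.add(name); }

    public List<String> renderOrder() {
        List<String> order = new ArrayList<>(preViews);
        order.addAll(mainViews);
        return order;
    }

    public static void main(String[] args) {
        ViewOrder rm = new ViewOrder();
        rm.createMainView("World View"); // created first...
        rm.createPreView("Sky View");    // ...but pre-views render first
        System.out.println(rm.renderOrder()); // [Sky View, World View]
    }
}
```

So a pre-view would guarantee the far/sky pass runs first regardless of creation order, but if you already create the far main view before the near one, the result is the same.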