What is the maximum draw distance?

I am trying to draw objects at 70k+ from the camera. No matter how I set the camera frustum, including the near and far clipping planes, objects are not drawn beyond ~65k. Is there a default maximum draw distance for nodes, a default cull process, or a default z-buffer configuration? I have modified SimpleTexturedTest to demonstrate. As the container rotates, the objects snap in and out of view. If I set the far clipping plane at 60k you can see the objects being precisely culled (i.e. cut in half as opposed to snapping), so I do not think it is a view-frustum issue.

import com.jme3.app.SimpleApplication;
import com.jme3.asset.TextureKey;
import com.jme3.light.AmbientLight;
import com.jme3.material.Material;
import com.jme3.math.ColorRGBA;
import com.jme3.math.Vector3f;
import com.jme3.scene.Geometry;
import com.jme3.scene.Mesh;
import com.jme3.scene.Node;
import com.jme3.scene.shape.Box;
import com.jme3.scene.shape.Sphere;
import com.jme3.texture.Texture;

public class SimpleTexturedTest extends SimpleApplication {

    private static final java.util.logging.Logger logger =
            java.util.logging.Logger.getLogger(SimpleTexturedTest.class.getName());

    private Node spheresContainer = new Node("spheres-container");
    private boolean lightingEnabled = true;
    private boolean texturedEnabled = true;
    private float secondCounter = 0f;

    @Override
    public void simpleInitApp() {
        Mesh shapeSphere = new Sphere(16, 16, 0.5f);
        Mesh shapeBox = new Box(Vector3f.ZERO, 0.3f, 0.3f, 0.3f);
        // ModelConverter.optimize(geom);

        Texture texture = assetManager.loadTexture(
                new TextureKey("test-data/Interface/Logo/Monkey.jpg"));
        Texture textureMonkey = assetManager.loadTexture(
                new TextureKey("test-data/Interface/Logo/Monkey.png"));

        Material material;
        Material materialMonkey;
        if (texturedEnabled) {
            if (lightingEnabled) {
                material = new Material(assetManager, "Common/MatDefs/Light/Lighting.j3md");
                material.setBoolean("VertexLighting", true);
                material.setFloat("Shininess", 127);
                material.setBoolean("LowQuality", true);
                material.setTexture("DiffuseMap", texture);

                materialMonkey = new Material(assetManager, "Common/MatDefs/Light/Lighting.j3md");
                materialMonkey.setBoolean("VertexLighting", true);
                materialMonkey.setFloat("Shininess", 127);
                materialMonkey.setBoolean("LowQuality", true);
                materialMonkey.setTexture("DiffuseMap", textureMonkey);
            } else {
                material = new Material(assetManager, "Common/MatDefs/Misc/SimpleTextured.j3md");
                material.setTexture("ColorMap", texture);

                materialMonkey = new Material(assetManager, "Common/MatDefs/Misc/SimpleTextured.j3md");
                materialMonkey.setTexture("ColorMap", textureMonkey);
            }
        } else {
            material = new Material(assetManager, "Common/MatDefs/Misc/SolidColor.j3md");
            material.setColor("Color", ColorRGBA.Red);

            materialMonkey = new Material(assetManager, "Common/MatDefs/Misc/SolidColor.j3md");
            materialMonkey.setColor("Color", ColorRGBA.Red);
        }

        // Build a 3x3 grid, alternating boxes and spheres and the two materials.
        int iFlipper = 0;
        for (int y = -1; y < 2; y++) {
            for (int x = -1; x < 2; x++) {
                Geometry geomClone;
                if (iFlipper % 2 == 0) {
                    geomClone = new Geometry("geometry-" + y + "-" + x, shapeBox);
                } else {
                    geomClone = new Geometry("geometry-" + y + "-" + x, shapeSphere);
                }
                if (iFlipper % 3 == 0) {
                    geomClone.setMaterial(materialMonkey);
                } else {
                    geomClone.setMaterial(material);
                }
                geomClone.setLocalTranslation(x, y, 0);
                spheresContainer.attachChild(geomClone);
                iFlipper++;
            }
        }

        // Push the whole container out to -65000 and open the frustum wide.
        spheresContainer.setLocalTranslation(new Vector3f(0, 0, -65000f));
        rootNode.attachChild(spheresContainer);
        cam.setFrustumPerspective(45f, 0.8f, 10000f, 200000f);

        //PointLight pointLight = new PointLight();
        //pointLight.setColor(new ColorRGBA(0.7f, 0.7f, 1.0f, 1.0f));
        //pointLight.setPosition(new Vector3f(0f, 0f, 0f));

        AmbientLight al = new AmbientLight();
        rootNode.addLight(al);
    }

    @Override
    public void simpleUpdate(float tpf) {
        secondCounter += tpf;
        if (secondCounter >= 1f) {
            logger.info("Frames per second: " + timer.getFrameRate());
            secondCounter = 0f;
        }
        spheresContainer.rotate(0.2f * tpf, 0.4f * tpf, 0.8f * tpf);
    }
}


Those are relatively huge values for floats/OpenGL; I wouldn’t expect proper results at that scale in any case.

I’ve been able to draw using native GL calls within the same GL surface with no problem at 10× the distance, so I’m fairly certain the OS and graphics system support the functionality I am after. FYI, the floats simply get truncated, so you lose fidelity in the positioning of objects.

Yes, and that loss in fidelity, compared to the possibly tiny values in shaders, physics and whatever else is running in a game, causes the issues I mention.
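To get a concrete feel for that fidelity loss, here is a small pure-Java sketch (CPU-side IEEE-754 single precision, so roughly what a highp GPU float does; mediump is far coarser). The class and method names are just for illustration:

```java
// Demonstrates how 32-bit float precision degrades at large coordinates.
public class FloatPrecisionDemo {

    // Smallest representable step ("unit in the last place") at a given magnitude.
    static float stepAt(float value) {
        return Math.ulp(value);
    }

    public static void main(String[] args) {
        System.out.println("step near 1:     " + stepAt(1f));     // ~1.19e-7
        System.out.println("step near 65536: " + stepAt(65536f)); // 0.0078125
        // Adding anything smaller than half a step is silently lost:
        System.out.println(65536f + 0.001f == 65536f);            // true
    }
}
```

So at coordinates around 65k, positions quantize to steps of roughly 8mm of a world unit even with full 32-bit floats; smaller offsets simply vanish.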


I have run the test with the desktop renderer with no apparent problems, so I am guessing it is an issue with the OGLES implementation that I am using on Android, or with how I have the settings configured in the test app. As I mentioned before, I have managed to draw a simple shader-based box in OGLES on the same GLSurface at the same time as jME3 is rendering. A bit of a kludge, but very possible. Since the native OGLES shader solution works as expected, I suspect something within the Android-specific extensions of jME3 is causing the culling. Any ideas, thoughts or pointers on where to look would be welcome.



65535 is the maximum unsigned short value. Weird coincidence.


Running on a 16 bit platform? LOL LOOOOL

Since the desktop renderer works, it is probably somewhere within the Android-specific code. Given the short coincidence, I would start by looking for shorts/ints with short-value clamping in the Android renderer, and see if any of them might be responsible.

I would assume this is a programming failure, since the JVM usually guarantees exactly the same types and ranges on all platforms.

As a side note, since precision seems to be of no concern for you (otherwise you would use another draw system), you might as well just scale everything (the rootNode) by 0.1f and reduce the draw distance to 7k.

Mobile GPUs usually use a 16-bit depth buffer, but I don’t think that has anything to do with it. Maybe the shader is being processed with lower precision, like 20-bit floating point or something.

Either way, there’s really no reason to use such high values.
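As a rough sanity check on the 16-bit depth-buffer theory: with the test’s frustum (near = 10000, far = 200000) one can estimate the eye-space distance covered by a single depth-buffer step at z = 65000. This is an illustrative back-of-the-envelope using the standard perspective depth mapping, not anything taken from the engine:

```java
// Approximate eye-space depth resolution of a b-bit depth buffer at depth z:
// one buffer step covers dz ≈ (far - near) * z^2 / (far * near * 2^bits).
public class DepthResolution {

    static double stepAt(double z, double near, double far, int bits) {
        double steps = Math.pow(2, bits);
        return (far - near) * z * z / (far * near * steps);
    }

    public static void main(String[] args) {
        // One 16-bit depth-buffer step at z = 65000 with the thread's frustum:
        System.out.println(stepAt(65000, 10000, 200000, 16)); // ~6.1 world units
    }
}
```

A 16-bit buffer would still resolve objects at 65k to within a few world units, which is consistent with the depth buffer not being the culprit here.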


I did some testing over the weekend with the desktop renderer. I tried setting the GL surface to 16-bit depth, but I’m not sure my card actually supports it (onboard Intel HD). Again, the distance issue does not seem to occur on the desktop. Can anyone else run a test with a graphics card that lets you force 16-bit depth and let me know if they have the same issue drawing beyond 70,000?

Thanks for all the advice to reduce the scale of my scene. I am stuck with my current scale because I need to be able to render detail when close to the scene (i.e. inside a building). The inaccuracies when the scene is far away are not a problem, as I’m going to be 60km+ from the scene, but I do want to be able to see something in order to navigate. Having a world that simply disappears at 60km is awkward.

A lot of games/sims that have these situations will render two separate parts of the scene. The far away stuff scaled way down and the near stuff at regular scale. It’s quite common in space sims.


Still looking at this.

I have managed to trace through to the individual GL calls made within the OGLESRenderer and replicated them completely in my own code. As far as I can tell, I am setting up exactly the same buffers and projection matrices; the same GL functions are called with the same data. For some reason my code renders beyond 2^16, but jME3 still refuses. I am using the same AndroidContext and drawing within the same thread. It doesn’t make any sense to me. Can anyone think of anything else at work, beyond the GL calls, that could be set up in jME3 but not in my own renderer?

Thanks in advance.

There are other components besides the actual renderer that are being processed. The point below is probably a good one…

@Momoko_Fan said:
Maybe the shader is being processed with lower precision like 20-bit floating point or something.
Either way there's really no reason to use those high values

I have specified mediump float in the shaders. Are there any other ways of controlling the precision?

@mattfranklin said:
I have specified mediump float in the shaders. Are there any other ways of controlling the precision?

Don't use such huge numbers xD
@pspeed said:
A lot of games/sims that have these situations will render two separate parts of the scene. The far away stuff scaled way down and the near stuff at regular scale. It's quite common in space sims.

This is a good solution for your case, I would say, and it is the solution we use in our game.
It lets you see objects very far away without the issues caused by floats.
Just use a logarithmic scaling algorithm for far away objects and position them within the limit and they'll still look to be far away.
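A hypothetical sketch of that idea in plain Java (the class name and the particular log mapping are my own illustration, not engine code): camera-relative distances beyond a threshold are compressed logarithmically, so far objects stay inside a float-friendly range while keeping their direction from the camera:

```java
// Compress far-away distances so everything fits within a float-friendly range.
public class FarSceneMapper {

    // Distances up to farStart are untouched; beyond that they grow only
    // logarithmically, so even huge true distances map to small render distances.
    static float compressDistance(float trueDist, float farStart) {
        if (trueDist <= farStart) {
            return trueDist; // near scene: render at true scale
        }
        return farStart + (float) Math.log1p(trueDist - farStart);
    }

    public static void main(String[] args) {
        System.out.println(compressDistance(5_000f, 10_000f));  // 5000.0 (unchanged)
        System.out.println(compressDistance(65_000f, 10_000f)); // ~10010.9
    }
}
```

In a real scene you would scale the object's camera-relative position vector by compressedDist / trueDist, and scale the object itself by the same factor so its apparent size on screen stays correct.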

Thanks for the pointers, everyone. I seem to have a solution, courtesy of specifying “precision highp float” in the vertex shader. I really would not have bothered looking at this without your persistence, as it worked okay with mediump in my own copy of the shader. It turns out the minimum range that mediump is required to support is only ±2^14, so the fact that anything got drawn out to 2^16 is a bit of a fluke.

I still have no idea why I need to explicitly specify highp in the jME3 shader but not in my own copy. I’m guessing it might be something to do with the order of compilation and the default precision used. I will poke around a little more and post anything that I find. It’s a bit of a nuisance to have to specify precision in the jME3 shaders, because I won’t be able to use the standard shaders out of the box.
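For reference, the GLSL ES 1.00 spec only guarantees mediump floats a range of about ±2^14 (16384) with roughly 10 bits of relative precision, so a translation of 65000 isn’t even guaranteed to be representable in mediump at all. A tiny sketch of that arithmetic (names are illustrative):

```java
// Checks a value against the minimum range GLSL ES guarantees for mediump floats.
public class MediumpRange {

    // Guaranteed minimum mediump range is about +/- 2^14 per the GLSL ES 1.00 spec.
    static final float MEDIUMP_MAX = (float) Math.pow(2, 14); // 16384

    static boolean fitsInMediump(float value) {
        return Math.abs(value) <= MEDIUMP_MAX;
    }

    public static void main(String[] args) {
        System.out.println(fitsInMediump(60_000f)); // false: beyond guaranteed range
        System.out.println(fitsInMediump(8_000f));  // true
    }
}
```

That lines up neatly with the observed behavior: mediump working at all out to 2^16 was implementation luck, and highp removes the limit.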

What graphics card do you have? I know that Intel onboard cards sometimes try dirty tricks to get a bit more performance; I’ve seen quite some funny effects there (e.g. instead of every frag, only every 10th frag is computed and then linearly interpolated in between, and similar hacks).

For Android, jME3 specifies medium precision by default to improve shader speed.

This is a slightly different question but I guess it’s still related:

Say you wanted to create a large outdoor scene that spans several kilometers in diameter. What are the best practices concerning the unit to choose? I know that Unity, for example, tends to say “one unit = one meter”, but that would cause a lot of stuff to fall outside the standard viewing frustum in this case. So should one apply some scaling factor or extend the frustum?
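One hedged back-of-the-envelope for the unit question, in plain Java: because float precision is relative, uniformly rescaling the scene changes the absolute positional resolution surprisingly little; what the scale really affects is whether coordinates stay inside ranges that mediump shaders and the depth buffer handle well. Names are illustrative:

```java
// Compares positional resolution at the scene edge for different unit choices.
public class UnitScaleCheck {

    // Positional resolution (in meters) at the far edge of a scene, given the
    // scene radius in meters and how many meters one world unit represents.
    static float edgeResolutionMeters(float sceneRadiusMeters, float metersPerUnit) {
        float edgeInUnits = sceneRadiusMeters / metersPerUnit;
        return Math.ulp(edgeInUnits) * metersPerUnit;
    }

    public static void main(String[] args) {
        // A scene with a 5 km radius:
        System.out.println(edgeResolutionMeters(5000f, 1f));  // 1 unit = 1 m,  ~4.9e-4 m
        System.out.println(edgeResolutionMeters(5000f, 10f)); // 1 unit = 10 m, ~3.1e-4 m
    }
}
```

Scaling by 10× barely changes sub-millimeter edge resolution in this example, so the choice is driven more by frustum range and shader precision than by float accuracy itself.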
