How to deal with precision loss?

Floats have 24 significant bits, i.e. roughly 7 decimal digits.

That means at a distance of about 17 km from the origin, coordinates are at millimeter precision; at 170 km, at the centimeter scale; and at 17,000 km, we're down to meters.
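These spacings can be checked directly with Java's `Math.ulp`, which returns the gap between a float and the next representable value:

```java
// Prints the float coordinate spacing at increasing distances from the origin.
public class FloatSpacing {
    public static void main(String[] args) {
        System.out.println(Math.ulp(17_000f));     // ~0.002 m: mm scale at ~17 km
        System.out.println(Math.ulp(170_000f));    // ~0.016 m: cm scale at ~170 km
        System.out.println(Math.ulp(17_000_000f)); // 2.0 m: meter scale at ~17,000 km
    }
}
```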

In other words, if somebody goes out really far, they'll find their character doesn't move smoothly. Space itself is getting coarse-grained for them.

This only affects games whose arena is more than a handful of kilometers across.

In other words, anything that does not wall off the scenario somewhere. Space games certainly qualify, Minecraft would if it used floats for coordinates.

Now the problem is pretty much unavoidable if you do 3D graphics; so how do you people deal with it? Approaches I can imagine:

  • "Duh, I never checked..." ;)

  • Don't use floats for world coordinates. (Minecraft uses ints for block coordinates.) Essentially, you need to do your own scenegraph code - which is not so bad but you lose jme3's scene graph optimizations for vicinity searches.

  • Split the world into 17 km chunks. No interaction between chunks is possible, so you need to close them off against each other in a way that a player understands.

  • Split the world into 17 km chunks but allow interaction. This means special-case code to deal with inter-chunk interactions and inter-chunk navigation. Some kinds of interactions will work differently, so players will become aware that "things work differently at large distances".

Anybody got better ideas?

Do all calculations in doubles on the server. Client-side, move the world so that the player is at the center, then cast the results to floats and use them to set up the rendering scene graph. This way you only need one vector subtraction per object, which is usually quite manageable for normal machines.
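A minimal sketch of that idea (names are illustrative; jME3's own math classes like Vector3f are float-based, so the double-precision side lives outside them):

```java
// Server keeps world positions in doubles; the client subtracts the player's
// world position per object and only then narrows to float for rendering.
public class ClientOffset {
    // Player's world position, server-side, in doubles.
    static double[] playerPos = { 1.0e8, 0.0, 2.0e8 };

    // One subtraction per object, then a narrowing cast: the result is
    // camera-relative and small, so float precision is fine.
    static float[] toRenderSpace(double wx, double wy, double wz) {
        return new float[] {
            (float) (wx - playerPos[0]),
            (float) (wy - playerPos[1]),
            (float) (wz - playerPos[2])
        };
    }

    public static void main(String[] args) {
        // An object 100 m from the player keeps full precision even though
        // both world coordinates are huge.
        float[] local = toRenderSpace(1.0e8 + 100.0, 0.0, 2.0e8);
        System.out.println(local[0]); // 100.0
    }
}
```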

Split the world into chunks.

If you really have a world that large, you might want to do this anyway. I'm currently doing a game where you have a ship sailing over the ocean. Since you cannot see infinitely far (at the moment it's 4 km; I might increase this a little in the future), there is no need to have the whole world loaded; I only need the visible parts of it. So I split the world into chunks of 2 km x 2 km and always load the chunk the player is in plus all adjacent chunks.

Once a player moves more than 1/2 * chunkSize from the origin in one direction, he enters the next chunk. So I drop all now-unneeded chunks (in fact, they are all cached for a while, in case the player moves in and out of one chunk often) and recompute the player's position and world position so that the current chunk is again centered at the origin.
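The re-centering step could look roughly like this (one axis shown; field and class names are made up for illustration, not from any real codebase):

```java
// When the player drifts more than half a chunk from the origin, shift the
// chunk index and subtract a whole chunk from the local position, so local
// coordinates always stay small.
public class ChunkRecenter {
    static final float CHUNK_SIZE = 2000f; // 2 km chunks, as in the post

    static int chunkX = 0;    // which chunk the player is in
    static float localX = 0f; // position relative to that chunk's center

    static void recenter() {
        while (localX > CHUNK_SIZE / 2)  { chunkX++; localX -= CHUNK_SIZE; }
        while (localX < -CHUNK_SIZE / 2) { chunkX--; localX += CHUNK_SIZE; }
    }

    public static void main(String[] args) {
        localX = 1100f; // player crossed the chunk boundary at 1000 m
        recenter();
        System.out.println(chunkX + " " + localX); // 1 -900.0
    }
}
```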

As far as navigation goes: I store all coordinates as floats relative to their chunk. Inter-chunk navigation is pretty easy. I have a waypoint system that a vessel follows automatically, and I store waypoints as floats (Vector3f) relative to the chunk they are in. So you want to go from (0f, 0f, 0f) @ chunk (0,0) to (200f, 0f, 0f) @ chunk (0,1)? No problem. I calculate a sub-waypoint just outside of chunk (0,0). The sub-waypoint would be (100f + a really small value to leave that chunk, 0f, -1000f [half the chunk size]). So my vessel heads for the sub-waypoint, hits it and switches chunk, then continues to the waypoint in chunk (0,1).
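One way to compute that boundary crossing is to express both waypoints in the starting chunk's frame and intersect the segment with the boundary plane. A 2D sketch of the example above, with assumed 2 km chunks and illustrative names:

```java
// Find where the straight path from a point in chunk (0,0) to a point in
// chunk (0,1) crosses the boundary between them (z = CHUNK/2).
public class SubWaypoint {
    static final float CHUNK = 2000f;

    // (sx, sz) is the start in chunk (0,0); (tx, tz) is the target,
    // expressed relative to chunk (0,1), one chunk further along +z.
    // Returns the crossing point in the starting chunk's frame.
    static float[] boundaryCrossing(float sx, float sz, float tx, float tz) {
        float tzWorld = tz + CHUNK; // target in the start chunk's frame
        float t = (CHUNK / 2 - sz) / (tzWorld - sz); // segment parameter at boundary
        return new float[] { sx + t * (tx - sx), CHUNK / 2 };
    }

    public static void main(String[] args) {
        // From (0,0) @ chunk (0,0) to (200,0) @ chunk (0,1), as in the post:
        float[] p = boundaryCrossing(0f, 0f, 200f, 0f);
        System.out.println(p[0] + " " + p[1]); // 100.0 1000.0
    }
}
```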

As far as interactions between chunks go, I'm still working on that. In my case there will not be that many interactions. For mouse picking, I'm currently evaluating setting up a vertical plane at the sides of a chunk: when the ray hits such a plane, I simply send a new ray into the other chunk from the point of impact on the plane.

But I have not yet really worked out how I will manage collision detection between two objects that are each very close to the edge of a chunk, on different chunks, but close enough to collide. I'll have to deal with that, though.

That's only how I am currently handling my chunked world. If anyone has suggestions or better ideas, I want to hear them :wink: I do not have much experience with chunked worlds, so I would really appreciate any input :wink:

I first thought about a chunked world too, but then decided to do the client-side offset, since in a space game with distances like sun-to-Pluto the number of chunks would become quite unfriendly soon. (Also, with the double-world approach I don't need to think in chunks server-side, and as long as the server is 64-bit it does not make any real speed difference either.)

@EmpirePhoenix said:
I first thought about a chunked world too, but then decided to do the client-side offset, since in a space game with distances like sun-to-Pluto the number of chunks would become quite unfriendly soon. (Also, with the double-world approach I don't need to think in chunks server-side, and as long as the server is 64-bit it does not make any real speed difference either.)

I'm curious, how exactly did you do that? Since all jME functions use float, did you rewrite the whole source to use double, or how did you achieve that?

I'm asking because I want my server to be authoritative. This means the server has to check collisions between spatials and has to make sure everyone stays on the terrain (terrain following). Since all built-in functions of jME use float, this seems a little complicated at first glance.

/edit: just found that:
Seems that you have done exactly that: recompiled the whole source with double instead of float.

Yeah, that was Empire too. Really, no need to do that. Just use doubles (if you really need to) to describe the world and then use floats for the stuff that's actually visible (move the world around the character, not vice versa). Your actual viewing range should not exceed float precision anyway (and it cannot, as OpenGL is 32-bit). In the end, what he describes is partitioning, although it might feel different ^^

Indeed, we have done a mix of chunks (a certain number of cubes/sectors are visible around a ship at any time and they load/unload dynamically) with the approach normen describes, of using doubles to describe the world and floats for the actual visuals.

The ship is centred at 0,0,0 in the scene graph, but moves in world model coordinates (all other objects move relative to the ship).

For far-away objects we use a logarithmic algorithm that scales objects so they look farther away (and are farther away in the world model) but are actually closer than they seem in the scene graph.
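The post doesn't show the actual algorithm, but one way such a logarithmic compression could work is: beyond a near limit, map the true distance to a logarithmically compressed render distance, and shrink the object by the same factor so its angular size on screen is unchanged. A hedged sketch under those assumptions:

```java
// Compress far distances logarithmically for rendering while keeping the
// object's apparent (angular) size correct. NEAR and the formula are
// illustrative choices, not from the thread.
public class LogDepth {
    static final double NEAR = 10_000.0; // within this, render at true distance

    // Returns { renderDistance, scaleFactor }.
    static double[] compress(double d) {
        if (d <= NEAR) return new double[] { d, 1.0 };
        double r = NEAR * (1.0 + Math.log(d / NEAR)); // grows only logarithmically
        return new double[] { r, r / d };             // shrink to keep angular size
    }

    public static void main(String[] args) {
        double[] p = compress(1.0e9); // an object a million km away
        System.out.println(p[0]);     // render distance stays well inside float range
    }
}
```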

That way the visuals won’t go beyond float precision range and you won’t get weird precision errors.

Tried that scaling as well, but I decided: who cares if a planet is 1 km off at a distance of half a solar system? Nobody will notice :slight_smile: (All finer details are not rendered at that distance due to LOD, so z-fighting will not appear either.)

Thanks, that has been most enlightening.

Letting the server keep the world in doubles seems to be the way to go; not sure about the logarithmic approach (nifty idea anyway, @Tumaini!) but I guess that can be added later as the need arises.

I’m a bit foggy about how to set that up though.

Do I need a jme3 scene graph at all, or is everything inside bullet?

@normen: if I don’t use a double-precision bullet, how do I do collision detection and vicinity searches?

@EmpirePhoenix: how do I get the sources for your double-precision jbullet? The jar is just binaries, I’d be unable to single-step into calls.

@toolforger: Definitely not with one huge physics space ^^ Create a dynamic space around where you need actual collision (and not just location info, you get that by searching for entity specifics in some database or entity system).

@normen: Hmm… let me apply that to a purely hypothetical situation: interstellar raycasting with millimeter precision (yes it’s hypothetical :)).

I guess I’ll need to find all objects that are near the ray, put them into groups of not too far-away objects, place each group in a scene and let bullet do the precise raycasting - is that correct?

I am wondering how I’m going to find all those game space objects without doing a full scan. Normally I’d just “ask the scene graph” (I assume it’s built for that kind of stuff), but I don’t have a scene graph yet. I could apply the Z-order curve technique, but this feels like reinventing the wheel…

What is the exact situation in your game where a player would want to point at a millimeter of space inside a galaxy on screen? :wink: Also, you will never get good performance if you really intend to let everything happen as if the player were looking at it, with collisions and everything, in one physics space. You will have to cluster that over multiple computers possibly, but at least over multiple physics spaces in separate threads. You will have to "minimize" (not just the visuals) anyway.


If you have the situation you describe, yes, you would know the player wants to select a base or solar system at least, so you collect all of those and then check where the player clicked with that data set (instead of every wall and ship in the universe ^^)


The SceneGraph is not data, it's the visual part. You will need to know about your "ships, stations, solar systems" some other way, as data; you cannot keep this amount of spatials etc. in memory anyway.

@normen As I said, the situation is hypothetical. If I ever try putting millions of objects into the same scene I'll get what I deserve, don't you worry about that.

Let me restate the question I do have: How do I set up my data store so I can do an efficient vicinity search?

(A vicinity search would be something like “if I have a speck in interstellar space, what’s the nearest planet”?)

And since I’m not having a scene graph or bullet library for that: what are the options?

One option I found was using a Z-order curve (see ). However, I bet libraries exist for that; where do I find them? (Actually I fully expect that both jBullet and jme3's scene graph core already have that somewhere, but for floats and therefore not directly usable. Correct me if I'm wrong.)
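For reference, the core of the Z-order curve idea is just bit interleaving; a minimal Morton-key sketch (illustrative, not taken from jBullet or jme3):

```java
// Z-order (Morton) keys for 3D integer cell coordinates. Interleaving the
// bits makes nearby cells tend to have nearby keys, so a sorted structure
// over the keys supports coarse vicinity queries.
public class Morton {
    // Spread the low 21 bits of v so there are two zero bits between each
    // (standard magic-constant version for a 63-bit 3D key).
    static long spread(long v) {
        v &= 0x1FFFFFL;
        v = (v | (v << 32)) & 0x1F00000000FFFFL;
        v = (v | (v << 16)) & 0x1F0000FF0000FFL;
        v = (v | (v << 8))  & 0x100F00F00F00F00FL;
        v = (v | (v << 4))  & 0x10C30C30C30C30C3L;
        v = (v | (v << 2))  & 0x1249249249249249L;
        return v;
    }

    // 63-bit Morton key from three 21-bit cell coordinates.
    static long key(long x, long y, long z) {
        return spread(x) | (spread(y) << 1) | (spread(z) << 2);
    }

    public static void main(String[] args) {
        System.out.println(Long.toBinaryString(key(1, 0, 0))); // 1
        System.out.println(Long.toBinaryString(key(0, 1, 0))); // 10
        System.out.println(Long.toBinaryString(key(1, 1, 1))); // 111
    }
}
```

World positions in doubles would first be quantized to integer cells (e.g. one cell per kilometer) before computing the key.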

Again, the search I am suggesting doesn't have anything to do with location. It's based on the type of entity. And actual "accurate" collision you need only when you find out both objects are close to each other.

@normen said:Again, the search I am suggesting doesn't have anything to do with location.

However, I'm specifically asking about location-based searching.
How do I do that, efficiently, if I have no jme3 data structure to support that?
I can think of a few ways to do that, but I have no practical experience so I'm wondering what others have to report.


Definitely not true; a DBVT broadphase compiled with doubles is actually quite potent. Around three thousand (moving!) objects on one core of an i5 are the limit.

However, I moved to native bullet for the server, since the jbullet raycasting is greatly inefficient.

It basically does a brute-force test against all objects, while the native one does a broadphase elimination first.

And yes, I can do several thousand interplanetary raycasts per second without problems.

(Also, since it's my server, I can make sure the natives are properly compiled.)

Optimisation trick bonus:

Despawn everything where no player is near → an asteroid field is only a texture until a player gets closer than a few dozen kilometers; then the real physical asteroids are spawned. If you do this for most unimportant stuff, you end up with only a few player ships, planets and space stations, and those are all static, which means quite fast.
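The activation check behind that trick is a simple distance test against all players; a sketch with assumed names and an assumed activation radius:

```java
import java.util.List;

// Spawn real physics asteroids only while some player is within the
// activation radius; otherwise the field stays a texture with no physics.
public class FieldActivation {
    static final double ACTIVATE_RADIUS = 50_000.0; // "a few dozen kilometers"

    static boolean shouldSpawn(double[] fieldPos, List<double[]> players) {
        for (double[] p : players) {
            double dx = p[0] - fieldPos[0];
            double dy = p[1] - fieldPos[1];
            double dz = p[2] - fieldPos[2];
            // Compare squared distances to avoid the sqrt.
            if (dx * dx + dy * dy + dz * dz < ACTIVATE_RADIUS * ACTIVATE_RADIUS)
                return true; // spawn the physical asteroids
        }
        return false; // keep it as a texture
    }
}
```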

As for the double jbullet.jar: does it not contain the sources? Anyway, I do not have the sources myself anymore, since it was too slow for server-side use, and for client prediction the normal jbullet is fine, since it's only a float world there.

3000 objects over a whole galaxy? As said, the whole dimensioning makes no sense.

Why over a galaxy? Over a solar system, since the distances between systems are far too high; the hyperspace travel/ftl/warp/whatever hides a switch between different applications.

See, if you look at the whole solar system it's hard to even see planet Earth; there's absolutely no use in having collision enabled for single space ships then.

@toolforger said:
However, I'm specifically asking about location-based searching.

You will never want to know anything about "any object in my world at that location" at this scale, believe me. If you know what to look for (based on other data, e.g. an EntitySystem: entities.getEntitiesWithComponent(SpaceStation.class)), you have way different parameters. You draw too many parallels between the game's physics objects and "real objects" in the real world.