JMonkey Dual Marching Cubes

@toolforger said: If you're incurring the cost of conversion between the scenegraph's floats and some other data type for the game simulation, why not use longs? More bits of accuracy, and you don't want to see the decimal point float anyway because that means loss of precision.

The only reason NOT to do that I can imagine would be using one’s own for-doubles build of native Bullet.
I hear that that would be pointless in practice though.

So… where’s the catch?

If you need fractions then the doubles are optimized for it. With longs, you have to do your own fixed point math… not to be taken lightly.


FP math isn’t THAT hard - addition stays the same, multiplication with integers too.
FP-to-FP multiplication needs a scaling factor applied, that’s easy.
Transcendental functions are harder, unless somebody already did them for you, anyway. (Okay, it’s 32-bit int instead of 64-bit long, but it’s a good start. I couldn’t dig up a full-blown Java library for that with quick googling, but something should be available.)
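For a concrete picture, here’s a minimal sketch of that in Java, using a made-up Q32.32 format on long (the class and all names are purely illustrative, not from any existing library):

```java
// Q32.32 fixed point on a long: 32 integer bits, 32 fraction bits.
// Illustrative sketch only -- overflow checking is omitted.
public final class Fixed {
    static final int FRAC_BITS = 32;
    static final long ONE = 1L << FRAC_BITS;

    static long fromDouble(double d) { return Math.round(d * ONE); }
    static double toDouble(long f)   { return (double) f / ONE; }

    // Addition works unchanged on the raw representation.
    static long add(long a, long b) { return a + b; }

    // So does multiplication by a plain integer.
    static long mulInt(long a, long n) { return a * n; }

    // Fixed-times-fixed carries the scaling factor twice, so the
    // full 128-bit product has to be shifted back down by FRAC_BITS.
    static long mul(long a, long b) {
        long hi = Math.multiplyHigh(a, b); // Java 9+
        long lo = a * b;                   // low 64 bits of the product
        return (hi << (64 - FRAC_BITS)) | (lo >>> FRAC_BITS);
    }

    public static void main(String[] args) {
        System.out.println(toDouble(mul(fromDouble(1.5), fromDouble(2.0)))); // 3.0
    }
}
```

The shift in `mul` is exactly the scaling factor mentioned above: without it, the result would be off by a factor of 2^32.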

@toolforger said: FP math isn't THAT hard - addition stays the same, multiplication with integers too. FP-to-FP multiplication needs a scaling factor applied, that's easy. Transcendental functions are harder, unless somebody already did them for you, anyway. (Okay, it's 32-bit int instead of 64-bit long, but it's a good start. I couldn't dig up a full-blown Java library for that with quick googling, but something should be available.)

It’s not hard… but small errors can potentially go unnoticed for a long time and then manifest as other bugs. So, a) unit test the hell out of it, and b) only do it if you are experienced with such things.

And anyway, anything but the basic stuff is hardware optimized for floating point. So it might even be slower in the long run… and double has plenty of range and resolution if your world is organized properly. You could easily represent a whole galaxy down to sub-millimeter if you keep it hierarchically. I have trouble imagining a feasible game application where doubles would not suffice.

Yeah, definitely use double if it has enough bits for your use case.
Because it’s better tested. Also, because more libraries will be able to work with doubles than with longs-as-fixed-point.

For my own use case, I found doubles sufficient but a bit cramped. Everything - cities, planets, solar systems - will be ridiculously small when compared to their real-world counterparts. It will be enough for whatever kind of gameplay anybody might have in mind, but the more astrophysically minded players will feel a nagging hit to their suspension of disbelief - but redoing everything as long wouldn’t help anyway; I’d need to use two longs, and reimplementing arithmetic and the game-relevant transcendentals would be just a bit too much to do single-handedly.
So even megalomaniac me is sticking with double and taking that hit to suspension of disbelief. I’ll switch to two longs as soon as somebody comes along with a 3D physics library that does ray casting, collision testing, and collision group determination, in a distributed fashion (something I’d like to have for reasons unrelated to this discussion anyway), and all of that in O(N log N) performance. I don’t expect to see that coming my way in the next few years, so doubles it is.

Or you could build your universe hierarchically. It seems unlikely that you’d be displaying the whole universe at once.

The milky way galaxy is (only :)) 1.13526341 × 10^24 millimeters or so across. That would fit in a double for positioning systems within the galaxy. Use another double for position within a system, etc…

Construct your local view relative to some local origin and then everything has plenty of precision.

A double has just 52 mantissa bits, which means it can represent up to 2^52 = ca. 10^15.7 millimeters.
That’s enough to represent coordinates that aren’t farther apart than a light day or so, but too small by a factor of 10^8.3 for representing intra-Milky Way coordinates.
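You can check that spacing between adjacent doubles directly with `Math.ulp` (values in millimeters, matching the units above; the exact test points are mine):

```java
// Spacing between adjacent doubles at different magnitudes (all in mm).
public class UlpDemo {
    public static void main(String[] args) {
        System.out.println(Math.ulp(1.0e6));  // ~1.2e-10 mm: sub-atomic precision at 1 km
        System.out.println(Math.ulp(5.0e15)); // 1.0 mm: millimeter steps just past 2^52
        System.out.println(Math.ulp(1.1e24)); // 2^27 mm, i.e. ~134 km steps at galactic scale
    }
}
```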

<cite>@phr00t said:</cite> I started developing on Android (DroidCraft etc.), so I learned to develop with fewer resources. The #1 performance tip was to avoid creating unnecessary objects because "object creation is never free":

http://developer.android.com/training/articles/perf-tips.html

I’m rather sceptical about this tip.
It’s most likely untrue on the desktop: object creation is extremely cheap, and garbage-collection cost is proportional to the number of live objects (so object destruction is actually free).
On Android before the current generation, garbage collectors weren’t very good, and object creation and (even more so) object destruction were indeed expensive. This <i>should</i> no longer be the case, but I don’t know any specifics about current Android’s garbage-collection algorithms; maybe they just did something half-hearted and it’s still not a good idea to have many objects on Android.
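For reference, the pattern that tip argues for is reusing a scratch object in hot loops instead of allocating per iteration; a small self-contained sketch (the classes here are made up, though jME’s `Vector3f.set` supports the same style):

```java
// The allocation-avoidance pattern in question: one scratch object,
// reused each iteration, instead of a fresh object per iteration.
// Whether it actually pays off depends on the VM's garbage collector.
final class MutableVec3 {
    float x, y, z;
    MutableVec3 set(float x, float y, float z) {
        this.x = x; this.y = y; this.z = z;
        return this;
    }
    float length() {
        return (float) Math.sqrt(x * x + y * y + z * z);
    }
}

public class ReuseDemo {
    static float sumLengths(float[] coords) {
        MutableVec3 scratch = new MutableVec3(); // single allocation
        float total = 0f;
        for (int i = 0; i + 2 < coords.length; i += 3) {
            total += scratch.set(coords[i], coords[i + 1], coords[i + 2]).length();
        }
        return total;
    }

    public static void main(String[] args) {
        System.out.println(sumLengths(new float[] { 3f, 4f, 0f })); // 5.0
    }
}
```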

@toolforger said: A double has just 52 mantissa bits, which means it can represent up to 2^52 = ca. 10^15.7 millimeters. That's enough to represent coordinates that aren't farther apart than a light day or so, but too small by a factor of 10^8.3 for representing intra-Milky Way coordinates.

You are talking resolution and I’m talking max value… which is quite a bit higher than that when taking the exponent into consideration. I remember max value being something like 1.8×10^308 or so. I can’t be bothered to check it. :wink:

The resolution is not important because the hierarchy uses different local origins as it drops down. So the double that represents system location only needs to be roughly approximate (though it will be close) and the coordinates within the system will be system-relative.

When converting the hierarchical coordinates to local space, as long as you are careful, the math shouldn’t even be too tough… presuming you want local-to-camera, just calculate the system-location-to-camera offset first as a delta (which will be in system space) and then adjust accordingly. Everything is at the same resolution then. If there are layered deltas (more than two levels in the hierarchy), as long as they are calculated outer to inner, everything should work out.
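A sketch of that outer-to-inner idea for a two-level hierarchy (system position in galaxy space, body position in system space; all names are made up for illustration):

```java
// Compute a camera-relative position by taking the delta at the outer
// level first, so each subtraction happens between values of similar
// magnitude and catastrophic precision loss is avoided.
public class LocalOriginDemo {
    static double[] cameraRelative(double[] bodySystemPos, double[] bodyLocalPos,
                                   double[] camSystemPos, double[] camLocalPos) {
        double[] out = new double[3];
        for (int i = 0; i < 3; i++) {
            double systemDelta = bodySystemPos[i] - camSystemPos[i];   // outer level first
            out[i] = systemDelta + (bodyLocalPos[i] - camLocalPos[i]); // then inner
        }
        return out;
    }

    public static void main(String[] args) {
        // With naive global coordinates, (1e20 + 5) - (1e20 + 2) would be
        // 0.0, because the ulp of a double near 1e20 is ~16384.
        double[] d = cameraRelative(new double[] { 1e20, 0, 0 }, new double[] { 5, 0, 0 },
                                    new double[] { 1e20, 0, 0 }, new double[] { 2, 0, 0 });
        System.out.println(d[0]); // 3.0
    }
}
```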


For max value, float is plenty; that’s ~3.4×10^38, about 14 orders of magnitude above the size of the Milky Way in millimeters.
Though max values are largely irrelevant. If you want coordinates, you want coordinate differences to be preserved, and hence you never want to make use of the “floating” in “floating point”. It’s useful for the scene graph since you can translate the coordinates so that the player is always at the origin, and precision loss doesn’t matter because it’s too far away to see or notice the inaccuracies anyway, but you can’t apply this to the world model where you want to store everything in some kind of global coordinate system. (A hierarchical coordinate system anyway is just a system with coordinates composed of two numbers, the high-significant number and the low-significant number. You also propose to combine this with a segmented storage layout.)

For a hierarchical coordinate system, you can avoid doubles if you segment at the planet+moons level.
For segmentation at the solar system level, you need float+double.
Which approach is better depends on whether you might have direct interaction (say, space battles) at the boundary between solar systems. If yes, you’ll need to have cross-segment interaction code anyway, and float+float is better because you don’t need to do double-to-float conversions, speeding up a few things. If no, and if you may have a space battle at the boundary between planet volumes, then float+double is probably better because you can avoid coordinate transformations on the server when setting up the battlefield.

In general, I think that segmented coordinate systems work as long as you can keep each kind of interaction on its own level.
Once some kind of interaction occurs at more than one level (say, a raycast when firing a laser, or when receiving information from an observatory), you need to code that interaction on each level where it can occur. Other than the increased coding and testing effort, there’s nothing wrong with that - you’ll probably want to write unit tests that make sure that some interaction has the same results independently of whether it’s within the segment or crossing a segment boundary.
Multi-level segmentation does not change that, except that you’ll end up coding interactions for more levels. And probably also fixing bugs in more code.
Again, what’s better depends on circumstance. If you’re doing your world model code by yourself anyway, you can use combined coordinates, code the thing once and never look back; if you want to leverage existing libraries written for floats, the added effort for writing intersegment interactions is a constant source of annoying bugs and will slow you down, but it all depends on how many such interactions you actually need to code (not many, usually), and you can avoid that kind of problem if you design all interactions so that they can happen only at one level. In the space game context, you’d have short-range sensors that only work within a segment, and long-range sensors that work between segments (and ignore intra-segment signals).

I once tried Eve Online on Linux and had graphics problems with the skybox, allowing me to see its actual size - I could see it rebuild whenever I got near a space station or planet.
So they’re not only using float+float (Eve Online uses a ridiculously small-scale universe), they’re limiting the scene size to something around a few hundred klicks AND incur the cost of setting up a large set of really small scenes.
I suspect they’re working with a resolution of 10 cm for physics, which gives them a usable arena size of ~1,500 km, and a millimeter-precision scene graph.


Back to the dual marching cubes algorithm:
Does someone here know what a density field must look like? My example simply uses 1 for filled and -1 for empty, but maybe this is wrong.
If I use more distinct negative and positive values, the generated octree has more detail, because the check for whether an octree node is subdivided uses the differences between some points, and -5 and 5 (or -1 and -5) lead to different results.
This could be the reason for the mistakes in the mesh.

The density field is relevant only insofar as values above a threshold are “inside”, those below are “outside”, and the algorithm creates polygons that (a) leave no gaps and (b) always have all “outside” points on one side and all “inside” points on the other. A density field with just 0 and 1 values would work just as well if you use a threshold somewhere between 0 and 1.
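In other words, only the sign of (density − threshold) matters to the surface extraction; a tiny sketch (names are mine):

```java
// Only which side of the threshold a sample falls on matters; the
// magnitudes are interchangeable as long as the threshold separates
// the "outside" values from the "inside" values.
public class ThresholdDemo {
    static boolean inside(float density, float threshold) {
        return density > threshold;
    }

    public static void main(String[] args) {
        // A -1/+1 field with threshold 0 classifies corners exactly
        // like a 0/1 field with threshold 0.5.
        System.out.println(inside(1f, 0f) == inside(1f, 0.5f));  // true
        System.out.println(inside(-1f, 0f) == inside(0f, 0.5f)); // true
    }
}
```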

The other point of having density fields is that they can be used for making smoother normals.
A 0/1 density will always have an edged and jagged appearance.
If you have e.g. a density field that’s 1/r, where r is the distance to a center point, a threshold between 0 and 1 will give you a sphere, and with the right formula you can derive a normal vector that will make the surface look like a smooth sphere (though geometrically and for collisions, it will still be edged and jagged). I saw the formulae for normals somewhere in a freely available GPU Gems chapter, I think (I didn’t pay too much attention because I’m not into smoothing stuff, so I didn’t need Marching Cubes or normals).
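The usual trick (and, as far as I recall, the one that GPU Gems chapter uses) is to take the normal as the normalized negative gradient of the density field, approximated here with central differences; a sketch using the 1/r example field from above, with a step size I picked arbitrarily:

```java
// Smooth normals from a density field: the surface normal points along
// the negative gradient (from dense "inside" toward empty "outside").
public class GradientNormal {
    static double density(double x, double y, double z) {
        return 1.0 / Math.sqrt(x * x + y * y + z * z); // the 1/r example field
    }

    static double[] normal(double x, double y, double z) {
        double h = 1e-4; // central-difference step; a tuning knob
        double nx = density(x - h, y, z) - density(x + h, y, z);
        double ny = density(x, y - h, z) - density(x, y + h, z);
        double nz = density(x, y, z - h) - density(x, y, z + h);
        double len = Math.sqrt(nx * nx + ny * ny + nz * nz);
        return new double[] { nx / len, ny / len, nz / len };
    }

    public static void main(String[] args) {
        // On the sphere's +x axis the normal should point along +x.
        double[] n = normal(2, 0, 0);
        System.out.println(n[0] + " " + n[1] + " " + n[2]); // ~1.0 0.0 0.0
    }
}
```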

My question was not precise.
For a marching cubes algorithm, the values 1 and -1 are enough and the mesh is generated without problems.
Our dual marching cubes algorithm seems to have problems between large and small OctreeNodes:

Maybe it is a mistake that such large Nodes adjoin very small ones. This could be prevented by using values from, for example, -5 to 5, because with more and larger values we will get a different Octree.

@toolforger said: For max value, float is plenty, that's 10e38, 8 orders of magnitude above the size of the Milky Way. Though max values are largely irrelevant. If you want coordinates, you want coordinate differences to be preserved, and hence you never want to make use of the "floating" in "floating point". It's useful for the scene graph since you can translate the coordinates so that the player is always at the origin, and precision loss doesn't matter because it's too far away to see or notice the inaccuracies anyway, but you can't apply this to the world model where you want to store everything in some kind of global coordinate system. (A hierarchical coordinate system anyway is just a system with coordinates composed of two numbers, the high-significant number and the low-significant number. You also propose to combine this with a segmented storage layout.)


I’ll stop this now because it’s off topic and we’re talking past each other. Your scheme was to combine multiple coordinates all the time. Mine was only to construct a local offset. The difference is that the local offset should be in the same resolution as the local space… thus: double. This offset only needs to be reapplied when recreating local space… then everything can be in local doubles (or when new stuff enters local space). double also means you have to rebuild local space less often. (Note: local space is different from visible space in this case because I think it’s simpler to keep them separate, but now we’re way off topic even from our off topic.)

Physics prefers a lot of resolution in the locations and float only gives you ±6000 units or so before it starts to break down. Yes, I’m saying for a large scale space sim you probably want to avoid float. I know one other space sim on here has recompiled bullet for doubles for this reason. If I were writing a space sim then I’d do that too.
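That breakdown is easy to demonstrate: near 6000 units, the gap between adjacent floats is already about half a millimeter (if a unit is a meter), so finer position updates vanish entirely:

```java
// Float precision running out at modest coordinates: the spacing
// between adjacent floats near 6000 is 2^-11 ~ 4.9e-4, so a smaller
// physics step is rounded away completely.
public class FloatBreakdown {
    public static void main(String[] args) {
        System.out.println(Math.ulp(6000f));   // 4.8828125E-4
        float pos = 6000f;
        float step = 0.0002f;                  // less than half an ulp
        System.out.println(pos + step == pos); // true: the step is lost
    }
}
```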

Now, I’ll bow out of this conversation because I don’t think anything more needs to be said and we’re off topic.

At the moment I’m creating an interface for testing:

I hope it will help me to get an overview over all the constants dual marching cubes needs.


@ogerlord How dare you interrupt the off-topic conversation going on in your thread! You should be ashamed :wink:

LOL I’ll forgive ogerlord :slight_smile:
As Paul said, everything relevant has been said.

Jokes aside, I hope we didn’t step on anybody’s toes with that diversion, and apologize if we did.

<cite>@toolforger said:</cite> LOL I'll forgive ogerlord :-) As Paul said, everything relevant has been said.


I was COMPLETELY kidding!

<cite>@ogerlord said:</cite> My question was not precise. For a marching cubes algorithm the values 1 and -1 are enough and the mesh is generated without problems. Maybe it is a mistake that such large Nodes adjoin very small ones. This could be prevented by using values from for example -5 to 5 because with more and larger values we will get a different Octree.

In my 3089 game, I have my density function return very different values… if I know an area is definitely going to be air, I have it return -500. If an area is definitely land, I return 500. Values in between can range from -100 to 100… this worked best for my marching cubes implementation, but I’m not sure what works best with the dual marching cubes system. I’ve been too swamped with multiplayer coding and promoting a recent sale to work on this stuff :expressionless:
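That tiering might look something like this (a hypothetical reconstruction, not 3089’s actual code; the names and the ±1 band are made up):

```java
// Saturate the density far from the surface and keep a graded range
// only near it, so adaptive subdivision concentrates at the surface.
public class TieredDensity {
    static float density(float height, float terrainHeight) {
        float d = terrainHeight - height; // >0 below ground, <0 above
        if (d >  1f) return  500f;        // definitely land
        if (d < -1f) return -500f;        // definitely air
        return 100f * d;                  // graded near the surface
    }

    public static void main(String[] args) {
        System.out.println(density(100f, 0f)); // -500.0: far above ground
        System.out.println(density(0f, 0.5f)); // 50.0: near the surface
    }
}
```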

Good news! Level of Detail seems to be working:


Glad you got it working. Is that the expected behaviour? I would have assumed that connected points are still connected at lower LOD.