For max value, float is plenty: that’s about 3.4e38, roughly 17 orders of magnitude above the diameter of the Milky Way in metres.
Though max values are largely irrelevant. If you want coordinates, you want coordinate differences to be preserved, and hence you never want to make use of the “floating” in “floating point”. The floating is useful for the scene graph, since you can translate the coordinates so that the player is always at the origin, and precision loss doesn’t matter because anything affected by it is too far away to see or notice the inaccuracies anyway; but you can’t apply that to the world model, where you want to store everything in some kind of global coordinate system. (A hierarchical coordinate system is, after all, just a system whose coordinates are composed of two numbers, a high-significance part and a low-significance part. You also propose to combine this with a segmented storage layout.)
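To make that concrete, here’s a minimal sketch of what such a combined coordinate could look like; the segment size and names are made up, not taken from any particular engine:

```cpp
#include <cstdint>

// Sketch of a combined/segmented coordinate: an integer "high" part that
// addresses the segment plus a float "low" part inside it. One axis only.
struct SegCoord {
    int32_t segment;  // which segment along this axis
    float   local;    // offset inside the segment, in metres
};

constexpr float kSegmentSize = 100'000.0f;  // assumed: 100 km per segment

// For the scene graph you rebase everything onto the camera/player, so the
// renderer only ever sees small float values near the origin.
inline float toCameraRelative(SegCoord p, SegCoord camera) {
    return float(p.segment - camera.segment) * kSegmentSize
         + (p.local - camera.local);
}
```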
For a hierarchical coordinate system, you can avoid doubles if you segment at the planet+moons level.
For segmentation at the solar system level, you need float+double.
Which approach is better depends on whether you might have direct interaction (say, space battles) at the boundary between solar systems. If yes, you’ll need to have cross-segment interaction code anyway, and float+float is better because you don’t need to do double-to-float conversions, speeding up a few things. If no, and if you may have a space battle at the boundary between planet volumes, then float+double is probably better because you can avoid coordinate transformations on the server when setting up the battlefield.
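As a rough illustration of the float+double variant: each planet volume keeps its origin as a double in system-wide coordinates, entities keep float offsets, and the only double arithmetic is the difference of origins when you rebase something into a neighbouring volume (names are mine, not from any engine):

```cpp
// Hypothetical float+double layout, one axis for brevity.
struct PlanetVolume { double originX; };                    // origin in system coordinates
struct Ship         { const PlanetVolume* vol; float x; };  // local float offset

// Express a ship's position in another volume's frame, e.g. when setting up
// a battle that straddles two planet volumes. Only the origin difference is
// computed in double; the result stays a float.
inline float inFrameOf(const Ship& s, const PlanetVolume& target) {
    return float(s.vol->originX - target.originX) + s.x;
}
```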
In general, I think that segmented coordinate systems work as long as you can keep each kind of interaction on its own level.
Once some kind of interaction occurs at more than one level (say, a raycast when firing a laser, or when receiving information from an observatory), you need to code that interaction for each level where it can occur. Other than the increased coding and testing effort, there’s nothing wrong with that - you’ll probably want to write unit tests that make sure an interaction gives the same result regardless of whether it stays within a segment or crosses a segment boundary.
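For example, a test along these lines (toy segment size and distance function of my own invention) pins down that a query gives the same answer whether or not it crosses a segment boundary:

```cpp
#include <cassert>
#include <cmath>
#include <cstdint>

constexpr float kSegmentSize = 100'000.0f;  // assumed segment size, metres

struct SegCoord { int32_t seg; float local; };

// Distance along one axis between points that may live in different segments.
float distance(SegCoord a, SegCoord b) {
    return std::fabs(float(b.seg - a.seg) * kSegmentSize + (b.local - a.local));
}

int main() {
    // Two ships 1 km apart: once entirely inside segment 0,
    // once straddling the boundary between segments 0 and 1.
    float inside   = distance({0, 10'000.0f}, {0, 11'000.0f});
    float crossing = distance({0, 99'500.0f}, {1,    500.0f});
    assert(std::fabs(inside   - 1'000.0f) < 0.01f);
    assert(std::fabs(crossing - 1'000.0f) < 0.01f);
}
```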
Multi-level segmentation does not change that, except that you’ll end up coding interactions for more levels. And probably also fixing bugs in more code.
Again, what’s better depends on circumstance. If you’re writing your world model code yourself anyway, you can use combined coordinates, code the thing once and never look back. If you want to leverage existing libraries written for floats, the added effort of writing intersegment interactions is a constant source of annoying bugs and will slow you down. How much depends on how many such interactions you actually need to code (not many, usually), and you can avoid that kind of problem entirely if you design all interactions so that each can happen at only one level. In the space game context, you’d have short-range sensors that only work within a segment, and long-range sensors that work between segments (and ignore intra-segment signals).
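A sketch of what keeping each interaction on exactly one level might look like as an interface (purely illustrative; the types and names are invented):

```cpp
#include <cstdint>
#include <vector>

struct SegmentId { int32_t x, y, z; };               // coarse grid cell
struct Contact   { SegmentId seg; float x, y, z; };  // local float position

struct SensorQueries {
    // Short-range sensors only ever see contacts inside the querying
    // segment, so all of their math stays in local float coordinates.
    virtual std::vector<Contact> shortRange(SegmentId seg, float radius) const = 0;

    // Long-range sensors work on whole segments and deliberately ignore
    // intra-segment signals, so they never mix levels either.
    virtual std::vector<SegmentId> longRange(SegmentId seg, int segmentRadius) const = 0;

    virtual ~SensorQueries() = default;
};
```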
I once tried Eve Online on Linux and had graphics problems with the skybox, allowing me to see its actual size - I could see it rebuild whenever I got near a space station or planet.
So not only are they using float+float (Eve Online’s universe is ridiculously small-scale anyway), they’re also limiting the scene size to something around a few hundred klicks AND incurring the cost of setting up a large set of really small scenes.
I suspect they’re working with a resolution of 10 cm for physics, which gives them a usable arena size of ~1,500 km, and a millimeter-precision scene graph.
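Back-of-the-envelope check of that estimate (just the arithmetic, nothing Eve-specific): a float’s 24-bit significand gives roughly 2^24 uniformly usable steps across the arena.

```cpp
#include <cstdio>

// 2^24 steps of 10 cm each -> usable arena size in km.
int main() {
    const double steps = 1 << 24;                  // 16,777,216
    const double arenaKm = steps * 0.10 / 1000.0;  // 10 cm per step
    std::printf("%.0f km\n", arenaKm);             // ~1,678 km, i.e. roughly the ~1,500 km figure
}
```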