Need advice: Implementing logarithmic z buffer

Hello guys,

While working on my space game I ran straight into z-buffer issues (as expected) when I combined meter-scale ship structures with planetary or stellar objects several thousand or even millions of kilometers in size.

The first thing that came to mind was stacking viewports with different cameras for different ranges: one for close range and one for far range, where the far range renders to a texture on a plane placed right at the far end of the close-range viewport.

But I do not like this solution. Looking for alternatives I found the concept of the logarithmic z-buffer, which seems to be quite common in space games or games with objects at very different scales.
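
For background, the usual logarithmic depth trick (the variant popularized by Outerra) replaces the projected depth with a logarithmic remapping of the clip-space w. A rough sketch of just the math, written as Java with C, w and far as placeholder names; in a real implementation this runs in the vertex shader:

    class LogDepth {
        // w   : clip-space w (roughly the view-space distance)
        // far : far-plane distance
        // C   : constant trading near precision against far precision (often 1)
        static float logDepth(float w, float far, float C) {
            return (float) (2.0 * Math.log(C * w + 1.0) / Math.log(C * far + 1.0) - 1.0);
        }
    }

The result is then multiplied by w again so the GPU's perspective divide cancels out, which is what keeps the relative depth error roughly constant with distance.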

Could you please provide me some more information on how to do this in jme3?
I am quite experienced in Java and jme3 but unfortunately I have no idea about shader programming and integration. :slight_smile: What I really need is something like a step-by-step integration manual on how to write/alter shaders and integrate them into the jme3 context.

Thanks in advance.

Regards…
Harry.

IIRC there’s an implementation of that somewhere in here. It never worked properly for me when I tried to pull the relevant code from it, though (the z-buffer just ended up completely messed up), so I have no idea how to actually get it working.

Why would you render to a texture? If you aren’t concerned with the performance benefit of doing that, there is no reason to render to a texture… just layer the viewports. And if you ARE interested in the performance benefit, then the logarithmic z-buffer is never going to give you that. Z-fighting is not really your issue then.

In the end, because of the scales of space (depending on how accurate you are being), the viewports approach is best. The scale of planets is just so hugely different from local space… and those planets are so very, very far away… it makes a lot of sense to not even have them in the same scene. I mean, floating point won’t even be at all accurate if you use the same scale for both.

The fact that the far stuff is far enough away that it will not change visually as you move around in local space means that you could take advantage of that to render the far viewport to a texture and reuse it… instead of rendering it every frame.

…but there is nothing preventing you from continuing to render it every frame.
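
A minimal sketch of that caching idea in jme3 (farCam, farRoot and the texture size are placeholder choices; the FrameBuffer setup follows the usual jme3 offscreen-rendering pattern, imports from com.jme3.texture assumed, and the scene attached to the extra viewport must be updated manually each frame):

    // Render the far scene into an offscreen texture...
    ViewPort offView = renderManager.createPreView("far offscreen", farCam);
    offView.setClearFlags(true, true, true);
    offView.attachScene(farRoot);

    FrameBuffer fb = new FrameBuffer(1024, 1024, 1);
    Texture2D farTex = new Texture2D(1024, 1024, Image.Format.RGBA8);
    fb.setDepthBuffer(Image.Format.Depth);
    fb.setColorTexture(farTex);
    offView.setOutputFrameBuffer(fb);

    // ...then disable the viewport: farTex keeps the last rendered image and
    // can be reused as a backdrop (e.g. on a quad) until the view changes enough.
    offView.setEnabled(false);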

Z-buffers are not linear anyway. But no matter what you do, you only have so many bits in a z-buffer. Once you go to those sorts of scales it just can never work unless you start having 64- or even 128-bit z-buffers. And it turns out there are better ways.
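
To put rough numbers on that: a 24-bit depth buffer has 2^24 ≈ 16.7 million distinct values. Even spread perfectly evenly (which a real z-buffer's values are not), across a single astronomical unit of ~1.5 × 10^11 m that is still roughly 9 km per depth step.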

Viewports sound like the best option on the surface.

Special object treatment is another. For example, spaceships and planets: planets are always HUGE compared to spaceships and are always very FAR away. Your ships are always in front of the planets and MUCH closer. So a planet rendering pass (i.e. the background), then a foreground rendering pass, is fairly easy to do.

https://celestia.space/ is open source and you can see how they do it.

But be warned: accurate rendering of space is typically far less exciting than what you are perhaps expecting.

Yeah, totally… space is big.

Elite Dangerous does a pretty good job with the scales… they had to add a lot of “gamesplaining” mechanics to do it… though somehow I feel like it takes away from some of the “massiveness” of it all. But it’s very hard.

Every time I see a sci-fi fantasy movie/show with a couple planets hanging out in the sky I always chuckle a little. On the one hand, there is no denying that it looks really cool and alien. On the other hand, the planet would tear itself apart… or at least the oceans would be in constant turmoil.

First of all, thanks for the interesting posts. You really gave me new ideas.

This suggestion sounds very interesting because I already tried something similar: rendering the far scene into an offscreen texture and placing a plane with that texture right at the edge of the close scene. How would you do this? How do I implement it correctly?

This assumption would not work. Just think of very large spacecraft like the Star Wars Death Star. I would not distinguish between celestial objects and spaceships in the rendering, as both are space objects. There is only one difference: planet positions are calculated analytically while spacecraft are updated numerically.

I have just read about this and I understand why :slight_smile: So I guess that logarithmic z-buffers are even less linear. I figured out that the absolute error grows with increasing distance while the relative accuracy stays roughly constant (error/distance ≈ constant). I always thought they were implemented with fixed-point arithmetic.

I think I can now rewrite the question as follows:
My current solution is based on a multiple camera multiple viewPort approach. How can I connect them together without messing with offscreen rendering and textures?

    // Close-range camera (assumes a static import of java.lang.Math.pow;
    // casts to float are needed because pow returns double)
    cam.setFrustumPerspective(40F, (float) cam.getWidth() / cam.getHeight(), 1F, (float) pow(2, 21));

    // Interplanetary 14 AU camera
    Camera farCam = cam.clone();
    farCam.setFrustumPerspective(40F, (float) cam.getWidth() / cam.getHeight(), (float) pow(2, 20), (float) pow(2, 41));
    ViewPort farViewPort = getRenderManager().createPreView("far", farCam);
    farViewPort.setClearFlags(true, true, true);

    // Interstellar 75 parsec camera
    Camera veryFarCam = cam.clone();
    veryFarCam.setFrustumPerspective(40F, (float) cam.getWidth() / cam.getHeight(), (float) pow(2, 40), (float) pow(2, 61));
    ViewPort veryFarViewPort = getRenderManager().createPreView("veryFar", veryFarCam);
    veryFarViewPort.setClearFlags(true, true, true);

Regards,
Harry

Create one viewport. Put the far scene into it. The far scene can be in planetary scale. (Say kilometers or bigger.)

Create another viewport. Put the near scene into it. Make sure the color clear flags are set to false. The near scene can be in meter scale.

…which was still smaller than a moon.

But anyway, just like planets, it’s in the far scene.

Once you get close enough for it not to be in a far scene then you aren’t rendering all of it anymore anyway. Just a teeny tiny (relatively speaking) patch of the large object.

Edit: regarding your viewports… just create more main viewports.
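
A minimal sketch of that layering inside a SimpleApplication (farCam, farRoot and nearRoot are placeholder names; main views render in the order they are created):

    // Far scene first: planetary scale, cleared fully.
    Camera farCam = cam.clone();
    farCam.setFrustumPerspective(45f, (float) cam.getWidth() / cam.getHeight(), 1000f, 1e9f);
    ViewPort farView = renderManager.createMainView("far scene", farCam);
    farView.setClearFlags(true, true, true);
    farView.attachScene(farRoot);

    // Near scene on top: meter scale. Clear depth so the two passes never
    // interact, but NOT color, so the far render stays visible as backdrop.
    ViewPort nearView = renderManager.createMainView("near scene", cam);
    nearView.setClearFlags(false, true, true);
    nearView.attachScene(nearRoot);

    // Scenes attached to custom viewports must be updated manually, e.g. in
    // simpleUpdate(): farRoot.updateLogicalState(tpf); farRoot.updateGeometricState();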

I think you are conflating two ideas. There is a difference between planets and spaceships in the (real) game model: they are updated as you say. So the way you model your universe could be in double precision or whatever.

But that is not rendering; the scene graph is not the game world. Rendering is do-whatever-it-takes to produce a picture that looks like what your game world would look like.
Included in “whatever-it-takes” is rendering into viewports, rendering to textures, mip-mapping, LOD reduction; fake whatever you can and reuse/cache whatever is possible. If you can render an image of a far away planet once every 10 seconds and then just reuse the same image for 600 frames = win.
I mean, it really doesn’t matter if the scene graph has the same tree structure as your game world, or if your z-buffer can easily be translated to game-world distance; as long as it looks right, it is right :slight_smile:

Good point. I tried to put all perfectionism into the scene graph, from light years to centimeters. This is necessary for me because I need the scene to check for interactions between the objects (e.g. ray collision to determine whether a projectile has hit a close or far away target, or even a planet). I also use the scene to store all the data for the game logic, making extensive use of Spatial.setUserData, Controls, custom Savables, etc…

This sounds good but one question remains: how do I handle interactions between members of different scenes? Let's say we have a spacecraft at the far edge of the near viewport which collides with a planet at the near edge of the far viewport. This calls for something like a savable scene AppState which manages all the scene content as well as the interactions between the scene entities and the numerics, and which is capable of creating/deleting objects as well as loading and saving games.

btw: The color clear flags … sounds interesting.

Why main viewports? I decided to go for preViews because I want them to be rendered first, just before the main viewport. Is this incorrect? With my solution I noticed artefacts when using main views, because they are added and rendered after the application's viewport and thus lag one frame behind.

Regards,
Harry

Well, you can do something like that… but you’ll have to work with BigDecimals and make your own background simulation, then just show a representation of it rounded into floats.

It’s not gonna be very fast though.

If you plan to deal with orbits of any kind, you can forget doing any trajectory calculations in floats right away, since those orbits will degrade over time and send you spiraling down the gravity well. KSP solved that decently by using and adjusting conic equations instead. It did limit them to only one sphere of influence, however.

FYI, raycasts aren’t infinite in precision either. The further away you go, the lower the effective resolution you can hit. Someone did the exact calculations on the forums a while ago, but the results weren’t all that great IIRC.

I vaguely remember it being less than a unit accurate on a target 20K units away, but that’s just my hazy memory.

This is the part where @pspeed breathes in and tells you “you shouldn’t be doing that”.

Yeah, cue broken record. lol.

In general, using scene graph objects as game objects is a bad idea. It’s a super-duper-duper-duper-monumentally bad idea when you need the kind of precision you are talking about. I think maybe you don’t realize how bad float precision is.

Your current approach will definitely fail. At this point it’s only a matter of how far you can drag the semi-animated dead carcass along before you either give up or do a redesign from the ground up. I don’t mean to sound harsh, but it really is quite impossible to do “space scale” in the JME scene graph.

Edit: or anything directly mapped to OpenGL in an efficient way for that matter.

To expand on that slightly, 32-bit floats are only guaranteed to be very accurate up to 6 significant digits (anything after that is a crapshoot - you might get lucky or you might get a result that’s significantly off). For my project my base unit is meters and I wanted millimeter precision. That means that I can only ever deal with positions/offsets of magnitude less than or equal to +/- 999.999 m. In other words, to keep good accuracy I have to split my world into 8 km^3 chunks (+/- 1 km from center per side) and grow the world by keeping all calculations relative to the containing chunk. At space scale, forget it - even with 64-bit doubles I’m not at all sure that the scales you want would maintain any semblance of accuracy.
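
To make that concrete, a quick demo of what 32-bit floats do at merely Earth-orbit magnitudes (the numbers here are my own illustration):

    float geoOrbit = 42_164_000f; // geostationary orbit radius in meters
    System.out.println(Math.ulp(geoOrbit));          // 4.0: adjacent floats are 4 m apart here
    System.out.println(geoOrbit + 1.9f == geoOrbit); // true: a 1.9 m move simply vanishes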

You’d certainly get farther and there are tricks you can play… but yeah, ultimately, if you have moon-sized space ships and want accurate simulation you will be building off of BigDecimal on the back end probably.

I have similar experiences with float. We were doing some whole-earth visualizations back in the early 2000s: plotting facilities and stuff on the earth, letting users zoom down in, etc… I did an experiment to see, at my particular latitude, how accurately float could represent building positions. Structures even 100 meters apart would resolve to the same location. (The earth is big in meters; you already lose tons of precision.)

We ended up switching to double for actual locations/math and then just projecting them for the view.

I don’t just preach this stuff idly.

Well despite all that, there’s an area of computing that relies entirely on 32 bit floating point calculations - and that is anything related to water simulation.

And you know why? It’s actually rather simple, it’s because, it’s because integers and doubles don’t float.

Ha…ha…haha…I’ll show myself out.

Ok that was a really bad pun… But it was also beautiful. Kudos, monsieur! :smile:

A bigger problem with floats is that the error is not fixed. It is related to the value, and in a nonlinear way. That makes orbital simulations way less stable. It is not uncommon to do orbital simulations in fixed point, but that is not required for a game; it is something you need when modeling orbital stability over millions of years, that sort of thing.

However, using the appropriate analytical orbital solutions (aka conic solutions) is not just rather accurate; it also removes a huge amount of these problems. 32-bit floats would be fine. I use 64-bit cus… meh.
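
For anyone curious what “conic solutions” look like in code, a minimal sketch for the elliptical case (my own illustration, not from Celestia or KSP): instead of integrating forces every frame, solve Kepler's equation for the time of interest and read the position straight off the ellipse.

    class OrbitMath {
        // a  : semi-major axis, e: eccentricity (0 <= e < 1)
        // mu : gravitational parameter of the central body
        // t  : time since periapsis
        static double[] positionAt(double a, double e, double mu, double t) {
            double n = Math.sqrt(mu / (a * a * a)); // mean motion
            double M = n * t;                       // mean anomaly
            double E = M;                           // eccentric anomaly via Newton's method
            for (int i = 0; i < 10; i++) {
                E -= (E - e * Math.sin(E) - M) / (1.0 - e * Math.cos(E));
            }
            // Position in the orbital plane; no drift no matter how large t gets.
            double x = a * (Math.cos(E) - e);
            double y = a * Math.sqrt(1.0 - e * e) * Math.sin(E);
            return new double[] { x, y };
        }
    }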

And you definitely want the scene graph to just be the view part of MVC.

And again, accurate space scales are still way bigger than you're thinking. The moon has a real gravitational effect on Earth, and it is over 300,000 km away, yet Earth is only 13,000 km across. So, in ASCII:

o                                                     .

That is the scale. A ship the size of a moon, only a few thousand km from Earth, would cause such a cataclysmic level of destruction that you don’t need planet-buster weapons.

So, don’t be too accurate, you say. It is a game after all, and fun is the true measure. Well then, why try to be so accurate in the first place?

[edit cus the moon was too close in ascii]

I like playing with scale, and in my journeys around the internet I recently found that Amazon sells a set of marbles representing the earth, moon, and mars to scale with each other. I haven’t bought it but I did read about it.

I like the idea because at the sizes they give, you can envision the distance between the earth and moon by holding the earth marble in one hand, the moon marble in the other, and stretching your arms apart.

I try to think of these visualizations that humans can really “feel”.

The other one is: if the earth were only a 1 cm diameter marble, then the sun would be a ~1 meter diameter ball. To envision how far apart they are, you have to think about an American football field… the sun would be a few rows back in the seats at one end zone and the earth would be at the other… a few rows back. That’s far for that little 1 cm imaginary earth marble.

At the scales of the set in the first paragraph, it would be over 5 football fields… which I think is harder to really internalize.

Yeah, this is the reason why I implemented a savable 128-bit fixed-point data type :slight_smile:
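
For readers wondering what such a type might look like, a minimal sketch (the two-long layout is my guess, not necessarily how the 128 bits were actually split; the Savable part is standard jme3 serialization):

    import com.jme3.export.*;
    import java.io.IOException;

    public class Fixed128 implements Savable {
        private long hi; // integer part (signed)
        private long lo; // fractional part (unsigned 64-bit fraction)

        @Override
        public void write(JmeExporter ex) throws IOException {
            OutputCapsule c = ex.getCapsule(this);
            c.write(hi, "hi", 0);
            c.write(lo, "lo", 0);
        }

        @Override
        public void read(JmeImporter im) throws IOException {
            InputCapsule c = im.getCapsule(this);
            hi = c.readLong("hi", 0);
            lo = c.readLong("lo", 0);
        }
    }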

Then where else can I store my game data? I was thinking of Spatials carrying the data necessary for the algorithms, with the calculations performed in Controls/AppStates.

This is the reason why I stick to Control/AppState objects. But it seems I am not familiar with the design principles you mentioned. Could you please give me some examples of how one should do it?

Regards,
Harry

public class MyGameObject {
}

…basically, wherever you want. The game object is up to you.

What you are doing right now is the equivalent of hooking a text field right up to a database field. (Actually, it’s much closer to storing the whole text field itself right in the database.)

Make game objects. Whatever class you want. Update the game objects… in whatever precision you want.

Every update(), update your spatials to match the position of your game objects… you can use an app state to do this, controls, whatever.
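
A minimal sketch of that setup (the class and field names are mine; BaseAppState is the jme3 convenience base class):

    import com.jme3.app.Application;
    import com.jme3.app.state.BaseAppState;
    import com.jme3.scene.Spatial;
    import java.util.HashMap;
    import java.util.Map;

    // Plain game object: full-precision state, no scene-graph coupling.
    class MyGameObject {
        double x, y, z;
        void update(float tpf) { /* game logic in whatever precision you like */ }
    }

    // App state that pushes game-object positions onto spatials each frame.
    class ModelViewSyncState extends BaseAppState {
        private final Map<MyGameObject, Spatial> views = new HashMap<>();

        void link(MyGameObject obj, Spatial view) {
            views.put(obj, view);
        }

        @Override
        public void update(float tpf) {
            for (Map.Entry<MyGameObject, Spatial> e : views.entrySet()) {
                MyGameObject obj = e.getKey();
                obj.update(tpf); // model first...
                // ...then the view: only here do we drop to floats.
                e.getValue().setLocalTranslation((float) obj.x, (float) obj.y, (float) obj.z);
            }
        }

        @Override protected void initialize(Application app) {}
        @Override protected void cleanup(Application app) {}
        @Override protected void onEnable() {}
        @Override protected void onDisable() {}
    }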

Thank you very much. I think this was the main problem I faced: because I wanted the scene to manage all the content, I shifted all data and algorithm responsibility to spatial controls. Everything in my game is done by controls. I understand now why I ran into a dead end with my game project; the complexity arose mainly from compatibility issues between jme3 and me :slight_smile:

This sounds really nice, not only due to the loose coupling between graphics output and logic, but also because benefits like parallel execution come into reach. I could also handle the scaling issues caused by the enormous distances by scaling objects down and placing them closer to the camera.

jme3 then is only visualization and nothing more?

Regards and thanks,
Harry

  • One more point:
    In the coming days I am planning to provide a software design for public review, because I have some solutions in mind which I would like to discuss here…

  • I think I should rename this thread or open a new one named “jme3 design principles for space games or games in general”, because I am enjoying the discussion here :slight_smile:
