Move the camera, or the world? (Pros/Cons)

I can’t recall where, but I’ve seen it suggested a number of times that you should keep your camera stationary and instead move the world around the camera as needed. I think it was particularly in the context of large-scale worlds such as space sims.

So why would you choose to keep the camera stationary? What’s the advantage/disadvantage? Which is more typical with jME?

I’m speaking as a camera mover with 0 experience in this area.

I wonder if it’s something to do with numeric primitives having a precision limit which becomes meaningful in a massive world? If you keep the camera still, you’re only ever dealing with the particular section of a massive world that is currently moved near the camera - whereas moving the camera means you’d eventually reach parts of the world beyond the precision of floats?

Yep… and it doesn’t take very far before float starts losing potentially noticeable resolution. As I recall, by 65,000 or so you start to lose accuracy even in the third decimal place.

Anyway, for large scale, the idea is to keep your game objects’ positions in double and then translate to local player-relative space for float. Whether you move the whole universe whenever the camera moves or just reset things when you cross a zone boundary, it’s effectively the same either way. You have the highest float resolution up close and the worst far away, where it doesn’t matter anyway… even if you could see that far.
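A minimal sketch of that idea, assuming made-up names (`WorldPos`, `toCameraRelative` are illustrative, not jME API): keep positions in double, subtract in double first, and only then narrow to float.

```java
// Sketch: game-object positions kept in double precision; the
// visualization gets player-relative floats each frame.
public class WorldPos {
    public final double x, y, z;

    public WorldPos(double x, double y, double z) {
        this.x = x; this.y = y; this.z = z;
    }

    /** Subtract in double first, THEN narrow to float: the small
        relative offset survives, the huge absolute coordinate doesn't. */
    public static float[] toCameraRelative(WorldPos obj, WorldPos player) {
        return new float[] {
            (float) (obj.x - player.x),
            (float) (obj.y - player.y),
            (float) (obj.z - player.z)
        };
    }

    public static void main(String[] args) {
        WorldPos player = new WorldPos(6.0e9, 0, 0);        // far from origin
        WorldPos rock   = new WorldPos(6.0e9 + 1.25, 0, 0); // 1.25 m away

        float[] rel = toCameraRelative(rock, player);
        System.out.println(rel[0]);   // 1.25 exactly

        // Naive route: narrow to float first, subtract later. The 1.25 m
        // offset is below float's resolution at 6e9, so it vanishes.
        float naive = (float) rock.x - (float) player.x;
        System.out.println(naive);    // 0.0
    }
}
```

Each frame you’d feed the resulting float offset to the spatial’s local translation; the player’s own position maps to the origin.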

3 Likes

Stupid noob question:
When moving the universe each frame - will that not push all vertex buffers again with different values each frame?

No. You’re moving the spatials, not rebuilding the vertex buffers that are in them.

Neat example of not moving the camera is one @normen did a while back.

The IsoSurfaceDemo does it also.

3 Likes

Yeah I’ve been seeing firsthand how absurd float precision gets when values get high.

What was escaping me is that I had written off float inaccuracy as the main motivation for a static camera, figuring that you’d just see the same inaccuracies in reverse.

So why isn’t a static camera the norm? Just because moving the camera is more intuitive I guess?

Yep. The same reason people treat spatials as their game objects and use the FlyCam to move around… just seems easier when you are starting.

2 Likes

Oh, and just some random little tidbits to throw into the discussion.

  1. As far as space sims go, it’s also common to fudge the “actual” planet sizes at a distance so that players can actually see them better. I think games like EVE Online and Spore do this quite well.

  2. A lot of space sims like to “colorize” their stars/suns. Hey, up close, suns are just plain too bright to look at, with light running the full gamut of white and beyond. Heck, our own sun appears yellow to us because the blue component is scattered by our atmosphere. So a common practice I’ve noticed is just to go with a “colorful” universe for aesthetic purposes.

  3. Moving the world rather than the camera makes it somewhat easier to get the right camera angle in relation to the player. If you move the world, then the player can pretty much stay at center/zero-all (maximum accuracy for nearby rendering at all times). Then just fiddle with the camera in relation to that as a sure-fire “fly cam”.

  4. Moving the world also lets you do some cool stuff with LOD for large-scale situations. You already have a logarithmic distance from center/zero-all in place. Why not bake a “pretty” rendering of a distant object down to a sprite and save the 3-D poly calculations for the up-close stuff, frame by frame?

  5. On other notes, moving the camera is great for FPSs, RTSs, and really a lot of games with preset “stages”. So, yeah, the majority of games, come to think of it.

Game On…

2 Likes

I just thought: how would I set up the Bullet physics engine for this? I don’t know… maybe some of you do?

1 Like

1 - Good to note
2 - Agree, that’s a good area to toss realism and just go with the cooler option.
4 - Sounds like a good idea
5 - Good point addressing why this isn’t the default tutorial style.

So, if we keep the cam at 0,0,0, is there any upper limit we should avoid entirely? (Besides MAX_FLOAT…) Would I regret making 1f = 1 meter in a solar system sized environment? For easy reference, the sun is 700,000 km (radius) and the distance of Pluto to the sun is around 6,000,000,000 km. That’s probably about the farthest distance you’d need to be concerned with and only the star would be visible really at that distance, so is a sphere at 6000000000000f going to cause any grief? (Well, it probably won’t be visible, but what about large distances in between that?) Maybe skybox issues?

That’s probably a big enough question for an entirely different thread conversation, but I’m also interested in bullet limitations here. My instinct is that we’d have to leave very far away objects out of the physics entirely.
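The “upper limit” part can be checked directly: `Math.ulp` reports the gap between adjacent float values at a given magnitude, i.e. the smallest position change a float coordinate can express there. A quick sketch (`resolutionAt` is just an illustrative wrapper):

```java
public class FloatResolution {
    /** Spacing between adjacent float values at the given magnitude,
        i.e. the finest position change a float coordinate can express. */
    public static float resolutionAt(float magnitude) {
        return Math.ulp(magnitude);
    }

    public static void main(String[] args) {
        System.out.println(resolutionAt(65_536f)); // 0.0078125 (~8 mm)
        System.out.println(resolutionAt(1e6f));    // 0.0625    (~6 cm)
        System.out.println(resolutionAt(6e9f));    // 512.0     (half a km)
        System.out.println(resolutionAt(6e12f));   // 524288.0  (~524 km)
    }
}
```

So at 1f = 1 meter, a sphere parked at 6e12f can only be positioned in roughly 500 km steps - fine for a distant dot, hopeless for anything you fly near.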

  1. As for the cam, it doesn’t necessarily have to be zero-all. It’s sometimes easier to think of the math in terms of the “focus point” of rendering… like, say, the player’s spaceship (then you can set an “orbit cam” around the “focus point”, which is actually the center/zero-all of the system).

  2. With a logarithmic system, it might be less of a headache to think of numbers as 6e12 rather than 6000000000000f… To me, a float is just an int (the accuracy factor) with an exponent (the scale) attached to it. (The actual structure of a float in memory is more complex, but that’s a decent rough-and-ready picture for what I’m focusing on.) You can certainly use the metric system with 1 meter = 1f and not regret it. You could even create your own POJO number type that makes accessing the exponent easier and slip it into your LOD calculations. The exponent becomes the number to focus on at that point, rather than the actual number itself.

  3. This leads to artistic “close enough” for large-scale situations. If you’d like 1 meter = 1f, then for most folks: e0 must be pretty (0–10 meters, an astronaut walking). e1 (10–100 meters out) can still be pretty for astronauts, but must be pretty for spaceships. At e2 (100–1000) you can start fudging artistically for astronauts, but might want to keep it pretty for spaceships (most spaceships I’ve seen in video games like to cruise around 300–700 m/s, so… 3-second rule? lol). At e3 and above, it starts to become safe to either leave it to LOD or switch to sprites on billboards for a lot of stuff. Caveat: the scale of the object itself. Planets are around the e4 mark, so make the artistic LOD/sprite choice at probably e4 or e5; suns at e5 or e6. And at e8 you’ll probably want to start seriously fudging scale to make planets appear larger on your skybox for the player’s sake while in-system. Aka, the e12 distance to Pluto might only look like an e5 jaunt at most.

  4. One idea for this exponent fudging is to hang a pretty mesh or sprite close by (it’s actually only in the e3 range, just… scaled down a bit, and probably clamped on the scale so that it can’t go below a certain size). The important thing is the angle in relation to your focus point. The big idea being: if I head in this direction, I will eventually get to this point of interest. Sure, Pluto now seems like the size of the Moon from Earth… but now we can see it and discern it from the pretty field of stars and nebulae that the background artist did for your skybox. Oh, right… and have a handy “zippy really fast drive” ready.

So, yeah, break reality, manage it with “close enough” and focus on the exponent for your artistic decisions. That’s a route that should make a universe pretty and renderable.
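On the “focus on the exponent” idea: in Java you don’t even need a custom POJO, since `Math.getExponent` already exposes a float’s binary exponent (base-2 rather than the decimal e-notation above, but it plays the same bucketing role). A hypothetical LOD-bucket sketch, with made-up thresholds:

```java
public class ExponentLod {
    public enum Detail { FULL_MESH, LOW_POLY, SPRITE }

    /** Binary exponent of the distance: a cheap log2 bucket. */
    public static int bucket(float distance) {
        return Math.getExponent(distance);
    }

    /** Illustrative thresholds only: ~1 km (2^10) and ~1000 km (2^20). */
    public static Detail detailFor(float distance) {
        int e = bucket(distance);
        if (e < 10) return Detail.FULL_MESH;
        if (e < 20) return Detail.LOW_POLY;
        return Detail.SPRITE;
    }

    public static void main(String[] args) {
        System.out.println(detailFor(300f));   // FULL_MESH
        System.out.println(detailFor(5_000f)); // LOW_POLY
        System.out.println(detailFor(6e12f));  // SPRITE
    }
}
```

The point is that the LOD decision never touches the full coordinate, only its scale.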

Game On…

1 Like

Yeah, that’s pre-empting the next thing I was pondering: the best way to render something in front of the camera. The world would need to rotate around that too, so I guess your local “ship” could be in the same branch of the scene graph as the camera, and the world rotates around that branch.

1 - Yeah I’d use 6e12f in code, but something I’m seeing is that even when I use a HUGE skybox (orders of magnitude larger), it looks like I’m still getting some z-fighting with 1f = 1 meter with a “realistic” scale. I may want to use unrealistic distances anyway, it seems the sun is going to look very small (I guess due to the lack of all that glare) using to-scale distances.

2/3 - Sneaky sneaky… yes, tossing reality in this case sounds wise.

Sounds like we are trying to treat spatials like game objects again.

I don’t see why the ship wouldn’t be at the root. The camera isn’t in the scene graph at all so that doesn’t matter.

Your game objects exist in a big giant double-based universe (unless you use bullet and haven’t rebuilt it for double… then I would still make game objects be in double and just do physics locally… but I digress) the visualization is constructed for your camera from nearby stuff and translated into camera-relative coordinates.

If ships are moving or planets are rotating or alien women are doing a jig… that’s done with the game objects. The visualization just follows along.

2 Likes

Yep I was off track there - I was thinking that if a focused ship was at the root, it would move/rotate whenever the world did, so it’d just look like the camera was moving and nothing else. So, rather than “moving the world around the camera”, aren’t we actually moving the world around whatever your camera is focusing on? (In response to movements of that focused object.) Assuming the camera is a “chase cam”.

I was under the impression that using doubles extensively might create potential pains, since I’ve read elsewhere that essentially the entire engine runs on floats.

OpenGL is float. JME is OpenGL, so it is float.

JME is visualization.

Visualization != game objects… repeat as many times as necessary.

Vector3f visualizationLocation = toVector3f(gameObjectLocation - playerLocation)

…whatever you call it.
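A self-contained version of that one-liner, with `Vec3d` as a made-up double-precision position and a minimal `Vector3f` stand-in so the snippet compiles on its own (in practice you’d use jME’s real `Vector3f`):

```java
public class Vis {
    /** Made-up double-precision game-object position. */
    public static class Vec3d {
        public final double x, y, z;
        public Vec3d(double x, double y, double z) {
            this.x = x; this.y = y; this.z = z;
        }
    }

    /** Minimal stand-in for jME's Vector3f, just for this sketch. */
    public static class Vector3f {
        public final float x, y, z;
        public Vector3f(float x, float y, float z) {
            this.x = x; this.y = y; this.z = z;
        }
    }

    /** The one line that matters: subtract in double, then narrow. */
    public static Vector3f toVector3f(Vec3d gameObjectLocation, Vec3d playerLocation) {
        return new Vector3f(
            (float) (gameObjectLocation.x - playerLocation.x),
            (float) (gameObjectLocation.y - playerLocation.y),
            (float) (gameObjectLocation.z - playerLocation.z));
    }
}
```

Feeding the result to each spatial’s local translation every frame puts the player at 0,0,0 automatically.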

And if you are using the chase cam, the chase cam is simply rotating around your ship, which is just a game object turned visual like above. It just happens to also be at visualization = 0,0,0 because it happens to be at the same place as the player location.

1 Like