Viewports as layers

Hello,

I use MAIN-viewports to render layers of a visual system. The system’s layers are, from back to front, something like this: “stars”, “moon”, “earth”, “everything close by” (<1 km).
To view everything from the right angle, each camera has its own rotation matrix, so I cannot combine the viewports into a single view.
What I actually need now is to change the rendering order of the viewports, i.e. to “stars”, “earth”, “moon”, “everything close”, but RenderManager does not allow me to do so (I can only retrieve an unmodifiable collection via renderManager.getMainViews()).
My idea is to remove all the viewports and recreate them in the order required - not very elegant.
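Roughly what I have in mind (just a sketch; clear flags, background colours and scene processors would have to be copied over too):

```java
// Brute-force reordering: tear down all main viewports and rebuild them in the new order.
void reorderMainViews(RenderManager renderManager, ViewPort... newOrder) {
    for (ViewPort vp : newOrder) {
        renderManager.removeMainView(vp);
    }
    for (ViewPort vp : newOrder) {
        // createMainView() appends, so recreating in the desired order fixes the render order
        ViewPort recreated = renderManager.createMainView(vp.getName(), vp.getCamera());
        for (Spatial scene : vp.getScenes()) {
            recreated.attachScene(scene);
        }
    }
}
```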
Any ideas to make this more efficient?
What is the point of an unmodifiable collection here?

Rgds


You could let each of them render to a texture,

then have a final composition viewport, where you render the textures as quads as required.
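Roughly like this (a sketch of the usual render-to-texture setup; the resolution and names such as moonCam, moonRoot, compositionRoot are placeholders, and the quad positioning/ortho setup is omitted):

```java
// Render one layer into an offscreen texture that a final composition viewport can draw as a quad.
Texture2D layerTex = new Texture2D(1024, 768, Image.Format.RGBA8);
FrameBuffer layerBuffer = new FrameBuffer(1024, 768, 1);
layerBuffer.setDepthBuffer(Image.Format.Depth);
layerBuffer.setColorTexture(layerTex);

ViewPort moonView = renderManager.createPreView("moon layer", moonCam);
moonView.setClearFlags(true, true, true);
moonView.setBackgroundColor(new ColorRGBA(0, 0, 0, 0));   // transparent background
moonView.setOutputFrameBuffer(layerBuffer);
moonView.attachScene(moonRoot);

// In the composition viewport: one textured quad per layer, drawn back to front.
Geometry layerQuad = new Geometry("moon quad", new Quad(1024, 768));
Material mat = new Material(assetManager, "Common/MatDefs/Misc/Unshaded.j3md");
mat.setTexture("ColorMap", layerTex);
mat.getAdditionalRenderState().setBlendMode(RenderState.BlendMode.Alpha);
layerQuad.setMaterial(mat);
compositionRoot.attachChild(layerQuad);
```

Reordering layers then just means reordering the quads, not the viewports.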

Maybe it is easier to make it so that all the viewports have the same rotation matrix.
I’m not sure how hard that would be, but it should not be very hard - a pivot node in the scene graph between root and “layer” subgraph should do the trick.
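Something like this (tiny sketch):

```java
// One pivot node per layer, all given the same rotation so the layers line up.
Node pivot = new Node("layer pivot");
rootNode.attachChild(pivot);
pivot.attachChild(layerRoot);            // the whole "layer" subgraph
pivot.setLocalRotation(sharedRotation);  // the same Quaternion for every layer's pivot
```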

Thank you for your quick replies.

toolforger: Unfortunately that would be very complicated. I had to use a floating-origin technique to render such a huge scene. The camera always stays at (0f, 0f, 0f), although the viewer might be at a distance of 9E8 or greater from the origin (which is not practical with floating-point precision, so I use double-precision vectors there). Large chunks of the scene such as terrain blocks, the International Space Station or blocks of the moon are anchored at double-precision vectors too and displaced by the relative vector camera->anchor, which then usually lies within floating-point precision.
The thing gets even more complicated with ephemerides…
I’d like to avoid using double precision rotation matrices. Not only because it is complicated but I’m also thinking about computation performance.

@Empire Phoenix: Hm, that might be an idea, though it might also be easier to remove and recreate all the viewports. Neither option is “nice”: in principle I would only have to sort the list itself (I tried it and it works), but as it stands I’d have to recreate the viewports every time I want to reorder the layers.

Maybe a jME developer can give me his insight on why the RenderManager’s ArrayList<ViewPort> is only provided as read-only?

You may be running into granularity issues if you start separating origin and camera position too far. Especially if the scene is huge (assume you put the camera on the moon - suddenly the camera will not be able to move by less than a kilometer at a time, or something like that).
At around 15 km, the granularity will get into the millimeter range and might start to cause artifacts, at 150 km, the granularity will get into the centimeter range and possibly become observable. (This assumes the usual 1m = 1.0f in coordinate space, you might shave off a decimal digit or two if you stick at coarser resolutions.)
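You can check those numbers directly: the spacing between adjacent float values at a given magnitude is just Math.ulp.

```java
// Float granularity at various distances from the origin (assuming 1 unit = 1 m):
System.out.println(Math.ulp(15_000f));    // ~0.00098 m -> millimetre range at 15 km
System.out.println(Math.ulp(150_000f));   // ~0.0156 m  -> centimetre range at 150 km
System.out.println(Math.ulp(9e8f));       // 64.0 m     -> hopeless at the 9E8 you mentioned
```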

The usual approach is:

  • Separate model and scene graph, so you can make model coordinates sufficiently precise.
  • Transform from model space to scenegraph space by doing all subtractions inside model space, then converting to float. (Rotations can be done in scenegraph space, that’s multiplications.)
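To make the second point concrete (numbers chosen to match the 9E8 case you mentioned):

```java
double camX = 9.0e8 + 123.456;   // viewer position in model space (double)
double objX = 9.0e8 + 125.0;     // object position in model space (double)

// Converting first and subtracting in float space throws the detail away:
float broken  = (float) objX - (float) camX;   // 0.0f -- both round to the same float
// Subtracting in model (double) space first, then converting, keeps it:
float correct = (float) (objX - camX);         // 1.544f
```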

I wouldn’t worry too much about double-valued rotation matrices:
a) Use quaternions instead (JME has the code already, you could copy the code and use double instead of float).
b) Premultiply rotation matrices and the performance issues become mostly irrelevant.
c) float is usually not faster than double nowadays. Not unless you process huge data sets anyway.
d) If data size is really a problem, you can always apply a float-sized quaternion to double-sized coordinates.
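For (d), one way is to expand the float quaternion into a double rotation matrix and apply that to the double vectors (a sketch using the standard quaternion-to-matrix formula):

```java
// Rotate a double-precision vector by jME's float Quaternion without losing precision in v.
static void rotate(Quaternion q, double[] v, double[] out) {
    double x = q.getX(), y = q.getY(), z = q.getZ(), w = q.getW();
    double r00 = 1 - 2 * (y * y + z * z), r01 = 2 * (x * y - w * z),     r02 = 2 * (x * z + w * y);
    double r10 = 2 * (x * y + w * z),     r11 = 1 - 2 * (x * x + z * z), r12 = 2 * (y * z - w * x);
    double r20 = 2 * (x * z - w * y),     r21 = 2 * (y * z + w * x),     r22 = 1 - 2 * (x * x + y * y);
    out[0] = r00 * v[0] + r01 * v[1] + r02 * v[2];
    out[1] = r10 * v[0] + r11 * v[1] + r12 * v[2];
    out[2] = r20 * v[0] + r21 * v[1] + r22 * v[2];
}
```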

Can you explain why you need to change the order? Perhaps that’s really where the issue is.

@toolforger: If I understand you correctly, your description is actually what I’m doing. Models, terrain chunks, etc. are relatively small and their coordinates are precise.
I keep a model’s position as a double-precision vector (let’s call it M), just like the viewer’s position (V). Before rendering, the position of the model is calculated as M’ = M - V, where M’ is converted into a Vector3f and set as the local translation of the model. The idea is to put the origin of my double-precision vector space at the center of a planet (be it earth, mars and so on…)
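In code, the per-frame update is essentially this (a sketch; jME has no double vector type, so M and V are plain doubles that I keep myself):

```java
double mx, my, mz;   // model position M in double precision (planet-centred frame)
double vx, vy, vz;   // viewer position V in the same frame

void updateFloatingOrigin(Spatial model) {
    // M' = M - V, computed in double space, only then narrowed to float
    model.setLocalTranslation((float) (mx - vx),
                              (float) (my - vy),
                              (float) (mz - vz));
}
```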
I think this is what you mean, isn’t it?
I’m not worrying about the math after all, but I’m looking for easy solutions (see below)

@pspeed: (continuing from above)
I can render an earth and a moon (but without using rotation matrices) the following way:

  1. Let’s say I have view position, direction and up vectors in Earth-Centered-Earth-Fixed (ECEF)* coordinates.
    These vectors can be handed over to the earth viewport (or camera).
  2. Now I clear the z-Buffer, otherwise my terrain doesn’t render nicely.
  3. By transforming these vectors into Moon coordinates (MCMF)* using vector dot products, I can render the moon as easily as the earth, in its own separate viewport.

Depending on the position of the observer (moon between earth and observer, or earth between moon and observer) I have to sort these viewports from back to front or it will result in an “overdraw”.

*ECEF, MCMF coordinates are: x-axis points to (longitude=0, latitude=0, r=1), z-axis points to the north pole, y-axis completes the axes frame. Of course, ECEF and MCMF are always different from each other.
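As a sketch, the viewport setup looks roughly like this (the names are placeholders, the clear flags are the important part):

```java
// Back-to-front layers: the moon viewport clears everything, the earth viewport
// clears only the depth buffer (step 2 above) so the moon stays visible behind the terrain.
ViewPort moonView = renderManager.createMainView("moon", moonCam);
moonView.setClearFlags(true, true, true);
moonView.attachScene(moonRoot);

ViewPort earthView = renderManager.createMainView("earth", earthCam);
earthView.setClearFlags(false, true, false);   // keep colour, clear depth
earthView.attachScene(earthRoot);

// Problem: createMainView() appends, so this order is fixed -- fine while the moon is
// behind the earth, wrong once the observer is near the moon and the earth is behind it.
```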

@Apollo said: I can render an earth and a moon (but without using rotation matrices) the following way: …

But when you are on the earth, the moon is just a ball… so I’m still not understanding something.

Generally, in space sims I think “stuff that’s near”, “stuff that’s not so near”, and “stuff that’s really far away”. Accuracy gets taken care of because stuff that’s farther away doesn’t need to be as accurate.

Perhaps a picture is helpful?

Well, for a space game you would usually go the other way around.

Let the camera stay at 0,0,0 always, and move the object around.

Then use the first 80% of the frustum range as usual, but compress the last 20% exponentially and scale its contents appropriately.

Assuming a 30 km range:

A planet at 300 km would be rendered at (for example) 19 km, but with scale 0.1f, making it appear much further away. (The exponential scaling at the far end ensures that planets do not overlap.)
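As a rough sketch (the curve and numbers here are just one possibility, they won’t reproduce the 19 km example exactly):

```java
static final float RANGE     = 30_000f;        // far plane, ~30 km
static final float NEAR_PART = 0.8f * RANGE;   // first 80% of the range used 1:1

/** Maps a real distance to { renderDistance, scale } for objects beyond the near part. */
static float[] compress(double realDistance) {
    if (realDistance <= NEAR_PART) {
        return new float[] { (float) realDistance, 1f };
    }
    // squeeze everything beyond NEAR_PART into the last 20%, monotonically,
    // so farther objects still end up farther away (no overlapping)
    double excess = realDistance / NEAR_PART;
    float renderDist = (float) (NEAR_PART
            + 0.2 * RANGE * (1.0 - 1.0 / (1.0 + Math.log(excess))));
    float scale = (float) (renderDist / realDistance);   // keeps the apparent size right
    return new float[] { renderDist, scale };
}
```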

@Empire Phoenix: I’ve read about this already. I will consider this method; maybe it is the way to go, though I would already have a working system if I were able to sort the viewports :wink:
I’ve implemented my own terrain system that makes really smooth transitions between different levels of detail, both in texture and in geometry. I simply fade in a new chunk of terrain and fade out the unneeded one, both with a small amount of polygon offset. It works very well in the current implementation, but I don’t know how it will look when I shift and scale things. I may give it a try on the weekend, thank you for the reminder. But I still think that using viewports as layers is very elegant and keeps the code a lot more readable.

@pspeed: I have 3 pictures for you. Let’s start our journey in earth orbit. First the moon viewport, then the earth viewport has been rendered.

Now we fly to the moon and look back to earth. We see the following picture (viewport order is still the same):

Actually, it should look like this (for this picture I rearranged the creation order and created the earth viewport before the moon viewport):

You see, if I were able to easily rearrange viewports on demand I would be fine. All I need is write-access to the viewport ArrayLists of RenderManager. I hope you understand what I’m trying to say.

Hm… if the viewport order is fixed, how about swapping responsibilities on the fly? I.e. when the Moon gets nearer than the Earth, swap the viewports that are responsible for displaying them.
I have no idea how well that will work, or what the ramifications are and whether that’s going to incur other problems (that’s always a risk if you do outlandish things like (ab?)using viewports for order-of-magnitude ordering), but it may be worth a shot if it’s less work than modifying scenegraph construction.

Another way to view this approach: Instead of assigning viewports roles as “Earth” and “Moon”, assign them as “local” and “interplanetary”.
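A sketch of the swap (assuming one scene root per body; the double-precision anchors driving each camera would have to be swapped the same way):

```java
// "local" renders whichever body the observer is currently near, "interplanetary" the other.
void assignRoles(ViewPort localView, ViewPort interplanetaryView,
                 Node earthRoot, Node moonRoot, boolean nearMoon) {
    localView.clearScenes();
    interplanetaryView.clearScenes();
    localView.attachScene(nearMoon ? moonRoot : earthRoot);
    interplanetaryView.attachScene(nearMoon ? earthRoot : moonRoot);
}
```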


Sometimes I can’t see the wood for the trees - that is a good idea, thank you! On the other hand, I feel so embarrassed now :facepalm: Maybe I need holidays…
Seriously now, I think that this is the easiest option. I simply need to swap the viewports’ nodes and the camera. I will try it later. Thanks to all of you for your advice!

Heh. Took me quite a while to come up with that idea myself, so you’re in good company.
Stepping back from established concepts is never the first thing one thinks of :slight_smile:

@pspeed said: Generally, in space sims I think "stuff that's near", "stuff that's not so near", and "stuff that's really far away". Accuracy gets taken care of because stuff that's farther away doesn't need to be as accurate.

^^ I should have maybe explained that better… but I’m glad others got you there.