getLod

Yes, it’s very expensive. I’m actually going to reserve the inner parts of the quad for procedural generation based on some known factors in the simulation.
The only way I could update the height of the whole quad was to set the height of each vertex iteratively. That is expensive: a 16x16 section of the terrain represents one data point in the simulation, and that data point holds a ton of information; the height is just one piece of it. I thought maybe it was possible to set the height of just the selected patch, but that doesn’t seem to be the case.
It runs fast if I only update the vertex heights where the level of detail is high, and it’s not that expensive if only some of the patches are in view. Otherwise I’d have to update a huge number of vertices across an enormous terrain surface.

I think I see…

I was thinking you might be better off with a big texture and a custom shader to move the heights and shift the vertex normals. It’s way simpler to update an image than to generate triangles. You could even still have multiple mesh resolutions all sampling the same texture, or whatever.

Even if you are only populating sections of the texture, that’s still an option… though at some point it’s better to break it up into subquads anyway, because a big texture takes a wasteful amount of time to transfer to the GPU if you’ve only updated a handful of pixels.
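Something like this, as a rough sketch (Luminance32F and the setUpdateNeeded() re-upload behavior are assumptions; check what your jME version actually supports):

```java
import com.jme3.texture.Image;
import com.jme3.texture.Texture2D;
import com.jme3.util.BufferUtils;
import java.nio.ByteBuffer;

// Sketch: a single-channel float texture that the render thread writes
// heights into. Format.Luminance32F is an assumption; use whatever
// float format your jME version supports.
public class ElevationTexture {
    private final int size;
    private final ByteBuffer data;
    private final Texture2D texture;

    public ElevationTexture(int size) {
        this.size = size;
        this.data = BufferUtils.createByteBuffer(size * size * 4); // 4 bytes per float pixel
        this.texture = new Texture2D(new Image(Image.Format.Luminance32F, size, size, data));
    }

    public Texture2D getTexture() {
        return texture;
    }

    // Render thread only: write one height and flag the image for re-upload.
    public void setHeight(int x, int y, float height) {
        data.putFloat((y * size + x) * 4, height);
        texture.getImage().setUpdateNeeded();
    }
}
```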

One approach might be to see how far you can break the whole area up into separate meshes before performance starts to suffer. I don’t know how far you’ve broken it down now (forgive me if you’ve explained and I didn’t understand). Then just leave it that way, max or min LOD. At least then you only have to worry about the leaves and maybe they are already small enough for generation to be fast.

How big is the total terrain size at maximum resolution?

I don’t know what algorithm you are using to generate the heights but if you base everything off an elevation texture then you could even get the GPU involved in generating that texture.

Where does the source data come from?

One approach might be to see how far you can break the whole area up into separate meshes before performance starts to suffer

Well, that’s essentially what I’m asking: how do I do that? How can I check whether a subquad or patch is visible? Is it based on that getLod method? If the LOD is X, then update that patch’s internal vertices?

Also, none of this is based on a texture; it’s just plain float values, all held in a class that represents the data point. I have a variable number of threads updating this data, all held in a ConcurrentHashMap<Integer, Cell>.
The integer key maps nicely to a height map float.
So the main jME thread that updates the application’s state fetches the data from this map. The data is atomic, so things are handled nicely there: AtomicFloat (my own implementation), AtomicInteger, etc.
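For reference, the usual way to build an AtomicFloat is to store the float’s raw bits in an AtomicInteger; a minimal sketch (not my exact code):

```java
import java.util.concurrent.atomic.AtomicInteger;

// Sketch of an AtomicFloat built on AtomicInteger,
// storing the float's raw IEEE-754 bits.
public class AtomicFloat {
    private final AtomicInteger bits;

    public AtomicFloat(float initialValue) {
        bits = new AtomicInteger(Float.floatToIntBits(initialValue));
    }

    public float get() {
        return Float.intBitsToFloat(bits.get());
    }

    public void set(float newValue) {
        bits.set(Float.floatToIntBits(newValue));
    }

    public boolean compareAndSet(float expect, float update) {
        return bits.compareAndSet(Float.floatToIntBits(expect),
                                  Float.floatToIntBits(update));
    }
}
```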

The number of threads working on this data can range from 1 to 8 depending on the computer. The map can range in size from 128x128 to 256x256; it just depends on how many threads the data is evenly divided across. Four threads for a 256x256 surface run smoothly on my computer; that means each thread owns 128x128 of the data.

Of course, this data is complex and holds a lot to be represented. I want to be able to have a 16x16 mesh (quad) represent it, or maybe even 32x32; it just depends on the computer.

The actual terrain, then, for a 256x256 data set is 65,536 data points. Each point gets a 16x16 or 32x32 section of the terrain, making the terrain 4096x4096 or 8192x8192.

I came to jME because I didn’t want to code all this LOD and terrain rendering, etc., in LWJGL; that’s simply reinventing the wheel.

LOD will be based on distance from the camera, not visibility. It will likely be set by a Control on that particular terrain patch. The control can also know whether it was rendered if it counts frames, i.e.: on controlUpdate(), increment a counter; on controlRender(), set the high watermark. If the high watermark equals the counter, then that Geometry was rendered.
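Something like this sketch (just the counting idea, nothing jME-specific beyond AbstractControl):

```java
import com.jme3.renderer.RenderManager;
import com.jme3.renderer.ViewPort;
import com.jme3.scene.control.AbstractControl;

// Sketch of the frame-counting idea: controlUpdate() runs every frame,
// but controlRender() only runs when the spatial is actually rendered.
public class RenderWatchControl extends AbstractControl {
    private long frameCounter;
    private long highWatermark;

    @Override
    protected void controlUpdate(float tpf) {
        frameCounter++;
    }

    @Override
    protected void controlRender(RenderManager rm, ViewPort vp) {
        highWatermark = frameCounter;
    }

    // True if the spatial was rendered in the most recent frame.
    public boolean wasRendered() {
        return highWatermark == frameCounter;
    }
}
```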

I know… I’m saying that it COULD be.

Let’s turn this upside down for a second to see where it falls apart.

At the max, you could have four quads, each with its own 4096^2 texture. Rendering this is no problem. The textures on these quads would represent elevation. The vertex shader could simply adjust the height of each vertex by sampling the texture.

So now instead of using any JME terrain stuff, you’d have just quads and a texture to which you write your float data.
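As a rough sketch of the wiring, reusing the ElevationTexture idea from above (the material name and its ElevationMap parameter are made up; you’d write that material and its vertex shader yourself):

```java
import com.jme3.app.SimpleApplication;
import com.jme3.material.Material;
import com.jme3.scene.Geometry;
import com.jme3.scene.shape.Quad;

// Hypothetical wiring: "Materials/HeightDisplace.j3md" and its
// "ElevationMap" parameter are made-up names for a material you'd write
// yourself, whose vertex shader samples the texture and offsets each
// vertex vertically.
public class ElevationQuadApp extends SimpleApplication {

    @Override
    public void simpleInitApp() {
        ElevationTexture elevation = new ElevationTexture(4096);

        Material mat = new Material(assetManager, "Materials/HeightDisplace.j3md");
        mat.setTexture("ElevationMap", elevation.getTexture());

        // A plain Quad is only two triangles; in practice you'd build a
        // finely subdivided grid mesh so there are vertices to displace.
        Geometry geom = new Geometry("elevationQuad", new Quad(4096, 4096));
        geom.setMaterial(mat);
        rootNode.attachChild(geom);
    }

    public static void main(String[] args) {
        new ElevationQuadApp().start();
    }
}
```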

Adjust the splits up and down as needed.

My “naive” approach would then be to have threads crunching your calculations at whatever split you like; they then submit their results to a painter that paints into that texture on the render thread. Split those workers however you like, and maybe use a priority queue instead of a regular queue so you can give priority to results that are close to the viewer and in front of the camera (three dot products can tell you that).
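Hypothetically, something like this (all names made up; a single dot product against the camera direction stands in for the fuller three-dot-product test, and reading the camera from worker threads is hand-waved here):

```java
import com.jme3.math.Vector3f;
import com.jme3.renderer.Camera;
import java.util.Comparator;
import java.util.concurrent.PriorityBlockingQueue;

// One computed result from a worker thread: a patch of heights
// and where it lives in the elevation texture / world.
class PatchResult {
    final int texX, texY;
    final float[] heights;   // row-major patch of height values
    final Vector3f worldPos; // patch center, used for prioritization
    float priority;          // smaller = painted sooner

    PatchResult(int texX, int texY, float[] heights, Vector3f worldPos) {
        this.texX = texX;
        this.texY = texY;
        this.heights = heights;
        this.worldPos = worldPos;
    }
}

class TexturePainter {
    private final PriorityBlockingQueue<PatchResult> queue =
            new PriorityBlockingQueue<>(64,
                    Comparator.comparingDouble((PatchResult r) -> r.priority));

    // Called by worker threads when a patch of heights is ready.
    public void submit(PatchResult result, Camera cam) {
        Vector3f toPatch = result.worldPos.subtract(cam.getLocation());
        float dist = toPatch.length();
        // In front of the camera? (one dot product; a fuller test against
        // the frustum's side planes would use a couple more)
        boolean inFront = toPatch.dot(cam.getDirection()) > 0;
        result.priority = inFront ? dist : dist * 10f; // deprioritize behind-camera patches
        queue.offer(result);
    }

    // Called once per frame on the render thread, with a budget
    // so painting never stalls the frame.
    public void paint(ElevationTexture target, int maxPatchesPerFrame) {
        for (int i = 0; i < maxPatchesPerFrame; i++) {
            PatchResult r = queue.poll();
            if (r == null) return;
            int side = (int) Math.sqrt(r.heights.length);
            for (int y = 0; y < side; y++) {
                for (int x = 0; x < side; x++) {
                    target.setHeight(r.texX + x, r.texY + y, r.heights[y * side + x]);
                }
            }
        }
    }
}
```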

If the hires meshes are too much for the GPU then you can worry about splitting them up and giving them LODs… nice thing is that they would still just be sampling your elevation textures and so would automatically update.

But maybe it just all magically works at some split level and you’re fine. Also your points of optimization become completely different.

Maybe you’ll think of a way that the GPU could calculate your heights from your source data and you can do it all in real time… and you’d then already be set up for that.

Generated terrain meshes are a very poor way to represent dynamic height data, really… especially the way JME’s terrain does it with the useless knit skirts and stuff.

I guess then it’s back to square one. I thought the TerrainQuad would be good to use but I suppose not.

My “naive” approach would then be to have threads crunching your calculations at whatever split you like; they then submit their results to a painter that paints into that texture on the render thread. Split those workers however you like, and maybe use a priority queue instead of a regular queue so you can give priority to results that are close to the viewer and in front of the camera (three dot products can tell you that).

I did think about this approach in the past, though I thought it would be a waste of effort. Not the priority queue, but the painting. I mapped the raw height values to an array of color values and used those to paint an image.

Right now I have the color values in an array. The height values are scaled so that each one maps to an index in that array. I take the height values, scale them to the terrain’s height range, and then update the vertex colors from the color array.
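Roughly this kind of mapping; a sketch with placeholder palette and range:

```java
import com.jme3.math.ColorRGBA;

// Sketch of mapping a raw height into a palette index; the palette
// and the min/max range here are made-up placeholders.
public class HeightPalette {
    private final ColorRGBA[] palette; // e.g. a gradient from blue to white
    private final float minHeight, maxHeight;

    public HeightPalette(ColorRGBA[] palette, float minHeight, float maxHeight) {
        this.palette = palette;
        this.minHeight = minHeight;
        this.maxHeight = maxHeight;
    }

    public ColorRGBA colorFor(float height) {
        // Normalize the height into [0, 1], then scale to a palette index
        float t = (height - minHeight) / (maxHeight - minHeight);
        int index = (int) (t * (palette.length - 1));
        index = Math.max(0, Math.min(palette.length - 1, index)); // clamp
        return palette[index];
    }
}
```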

If the hires meshes are too much for the GPU then you can worry about splitting them up and giving them LODs… nice thing is that they would still just be sampling your elevation textures and so would automatically update.

Not completely sure what you mean by that. “hires”? I think the meshes will need LODs anyway.

So I guess I need to use a Geometry then. I’m having trouble finding out what the LOD values mean and where I can read about them. At least I know I can add an LOD control to the Geometry.

I’m fuzzy on how JME’s built-in LOD stuff works, but I think the vertex buffer stays the same and it selects different index buffers based on the LOD… so, several different index buffer arrays.
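If that’s right, wiring it up might look something like this sketch (buildIndexBuffer() is a made-up helper; Mesh.setLodLevels() and LodControl are the real API):

```java
import com.jme3.scene.Geometry;
import com.jme3.scene.Mesh;
import com.jme3.scene.VertexBuffer;
import com.jme3.scene.control.LodControl;

// Sketch: one shared vertex buffer, several index buffers, one per LOD.
// buildIndexBuffer(mesh, step) is a hypothetical helper that emits
// triangles over every step-th vertex of the grid.
public class LodSetup {

    public static Geometry withLods(Mesh mesh, String name) {
        VertexBuffer[] lodLevels = new VertexBuffer[] {
            buildIndexBuffer(mesh, 1), // LOD 0: full detail
            buildIndexBuffer(mesh, 2), // LOD 1: every 2nd vertex
            buildIndexBuffer(mesh, 4), // LOD 2: every 4th vertex
        };
        mesh.setLodLevels(lodLevels);

        Geometry geom = new Geometry(name, mesh);
        // LodControl picks a level based on distance / screen coverage
        geom.addControl(new LodControl());
        return geom;
    }

    private static VertexBuffer buildIndexBuffer(Mesh mesh, int step) {
        // ... hypothetical: build a Type.Index VertexBuffer that indexes
        // every step-th vertex of the grid mesh ...
        throw new UnsupportedOperationException("sketch only");
    }
}
```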

I think you might be surprised how far you get without LOD given this limited example. I agree you might need it but maybe not as much as you think.

Edit: plus, LOD for terrain like you’re talking about really does require a pyramid of splits, I think. So big quads split into smaller quads at closer range, which split into even smaller ones. Given that you don’t want infinite terrain, there may be some split level that just works without further subdivision but also isn’t too slow from far away.
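The pyramid, as a bare sketch: each node covers a square region and splits into four children until some minimum patch size:

```java
// Bare sketch of the split pyramid: each node covers a square region
// and splits into four children until reaching a minimum patch size.
public class SplitNode {
    final int x, y, size;       // region in data-point coordinates
    final SplitNode[] children; // null for leaves

    public SplitNode(int x, int y, int size, int minSize) {
        this.x = x;
        this.y = y;
        this.size = size;
        if (size > minSize) {
            int half = size / 2;
            children = new SplitNode[] {
                new SplitNode(x,        y,        half, minSize),
                new SplitNode(x + half, y,        half, minSize),
                new SplitNode(x,        y + half, half, minSize),
                new SplitNode(x + half, y + half, half, minSize),
            };
        } else {
            children = null; // leaf: attach a Geometry for this patch
        }
    }
}
```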

Thanks for the quick replies. I’ll try to find a solution, and if I keep running into walls I’ll consult here again.

Our documentation may not always be great but we love pitching in and solving problems. :slight_smile:
