LOD will be based on distance from the camera and not visibility. It will likely be set by a Control on that particular terrain patch. The control can also know what's rendered and what isn't if it counts frames, i.e., on controlUpdate(), increment a counter; on controlRender(), set the high watermark. If the high watermark equals the counter, then that Geometry was rendered.
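A minimal sketch of that counting trick, using plain Java stand-ins for JME's controlUpdate()/controlRender() (the class and method names here are illustrative, not actual JME API):

```java
// Sketch of the render-detection trick: update() runs every frame,
// render() only runs when the Geometry is actually drawn. If the
// high watermark matches the counter, the last update was also rendered.
public class RenderTracker {
    private long frameCounter;   // bumped where controlUpdate(tpf) would run
    private long highWatermark;  // set where controlRender(rm, vp) would run

    public void update() {
        frameCounter++;
    }

    public void render() {
        highWatermark = frameCounter;
    }

    public boolean wasRendered() {
        return highWatermark == frameCounter;
    }
}
```

In a real Control you'd put the two bodies in controlUpdate() and controlRender() and check wasRendered() wherever you decide LOD.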
I know… I’m saying that it COULD be.
Let’s turn this upside down for a second to see where it falls apart.
At the max, you could have four quads, each with its own 4096^2 texture. Rendering this is no problem. The textures on these quads would represent elevation. The vertex shader could simply adjust the height of each vertex by sampling the texture.
So now instead of using any JME terrain stuff, you’d have just quads and a texture to which you write your float data.
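To make that concrete, the elevation "texture" is conceptually just a grid of floats you write into. A minimal sketch, with a plain array standing in for the buffer that would back a JME Image/Texture2D (names assumed, not JME API):

```java
// The elevation texture modeled as a flat float array: one height
// per texel. In the real thing this buffer would feed a Texture2D
// that the vertex shader samples for displacement.
public class ElevationTexture {
    private final int size;
    private final float[] data;

    public ElevationTexture(int size) {
        this.size = size;
        this.data = new float[size * size];
    }

    public void setHeight(int x, int y, float h) {
        data[y * size + x] = h;
    }

    public float getHeight(int x, int y) {
        return data[y * size + x];
    }
}
```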
Adjust the splits up and down as needed.
My “naive” approach would then be to have threads crunching your calculations at whatever split you like; they then submit their results to a painter that paints into that texture on the render thread. Split those workers however you like, and maybe use a priority queue instead of a regular queue so you can give patches priority when they are close to the viewer and in front of the camera (three dot products can tell you that).
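The prioritization part can be sketched like this; a single dot product against the camera's forward vector is enough for the in-front test shown here (the full "three dot products" version would project against all three camera axes). All names are illustrative, not JME API:

```java
import java.util.PriorityQueue;

// Sketch of prioritizing worker tasks by camera distance and facing.
// The dot product of the camera's forward vector with the direction
// to the patch tells you whether the patch is in front of the camera.
public class PatchScheduler {
    record Vec3(float x, float y, float z) {
        Vec3 sub(Vec3 o) { return new Vec3(x - o.x, y - o.y, z - o.z); }
        float dot(Vec3 o) { return x * o.x + y * o.y + z * o.z; }
        float length() { return (float) Math.sqrt(dot(this)); }
    }

    record Task(Vec3 patchCenter, double priority) {}

    // Lower priority value = processed first.
    private final PriorityQueue<Task> queue =
            new PriorityQueue<>((a, b) -> Double.compare(a.priority(), b.priority()));

    public void submit(Vec3 patch, Vec3 camPos, Vec3 camForward) {
        Vec3 toPatch = patch.sub(camPos);
        double dist = toPatch.length();
        // Positive dot product = in front of the camera; patches
        // behind the viewer get a large penalty so they sort last.
        boolean inFront = toPatch.dot(camForward) > 0;
        queue.add(new Task(patch, inFront ? dist : dist + 1e6));
    }

    public Task next() {
        return queue.poll();
    }
}
```

Near patches in front of the camera come off the queue first; everything behind the viewer drains last.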
If the hires meshes are too much for the GPU, then you can worry about splitting them up and giving them LODs… the nice thing is that they would still just be sampling your elevation textures and so would update automatically.
But maybe it just all magically works at some split level and you’re fine. Also your points of optimization become completely different.
Maybe you think of a way that the GPU could calculate your heights from your source data and you can do it all in real time… and you would then already be set up for that.
Generated terrain meshes are a very poor way to represent dynamic height data, really… especially the way JME’s terrain does it with the useless knit skirts and stuff.