I am currently working on a Minecraft-like game, and one of the issues I am facing is significant stuttering when large meshes are added to the rootNode by way of an intermediate node (see the attached video). The meshes are generated on background threads with the aid of a ThreadPoolExecutor. The only things taking place on the JME thread are tracking which chunks to load/unload, the submit() calls to the ThreadPoolExecutor, and building a Geometry from each Mesh (the Mesh is passed back from the background threads). As can be seen in the video, the PCIe link between the GPU and the motherboard is nowhere near saturated at any point, ruling that out as the cause of the issue.
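Roughly, the setup described above looks like this (a hypothetical sketch using plain-JDK stand-ins; ChunkMesh stands in for a jME Mesh, and the class and method names are mine, not from the actual project):

```java
import java.util.Queue;
import java.util.concurrent.ConcurrentLinkedQueue;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

// Stand-in for a generated chunk mesh; in the real app this would be a jME Mesh.
class ChunkMesh {
    final int chunkX, chunkZ;
    ChunkMesh(int x, int z) { chunkX = x; chunkZ = z; }
}

class ChunkPipeline {
    private final ExecutorService pool = Executors.newFixedThreadPool(4);
    // Finished meshes handed back to the render thread.
    private final Queue<ChunkMesh> finished = new ConcurrentLinkedQueue<>();

    // Called from the render thread: submit generation work for a chunk.
    public void requestChunk(int x, int z) {
        pool.submit(() -> finished.add(generate(x, z)));
    }

    // Runs on a pool thread: build the mesh data off the render thread.
    private ChunkMesh generate(int x, int z) {
        return new ChunkMesh(x, z);
    }

    // Called once per frame on the render thread: in the real app this is
    // where each Mesh becomes a Geometry attached under the rootNode.
    public int drainToScene() {
        int attached = 0;
        ChunkMesh m;
        while ((m = finished.poll()) != null) {
            attached++;
        }
        return attached;
    }

    public void shutdown() {
        pool.shutdown();
        try {
            pool.awaitTermination(5, TimeUnit.SECONDS);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }
}
```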
What is causing this stuttering, and how can I fix it?
Sending big objects (especially multiple big objects at once) to the GPU will incur overhead. Sometimes that overhead is enough to drop frames.
The overhead comes from several places: transferring the vertex data to the GPU, uploading textures, (possibly) compiling shaders, etc…
How to fix it largely depends on the actual cause, but step one is to reduce the size or count of the objects you add in a single frame. You don’t say how big “big” is, or whether you’re already adding only one at a time.
For Mythruna, I had to spend a lot of time tuning my “chunk” size to balance generation time, culling efficiency, and frame drops. In the end, frame drops were most affected by how many chunks I added at once… so I only add one Spatial per frame.
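The one-Spatial-per-frame idea can be sketched as a small queue drained from the update loop (a hypothetical helper, not code from Mythruna; in a real app the enqueued items would be Spatials and the non-null result would be attached to the rootNode):

```java
import java.util.ArrayDeque;
import java.util.Queue;

// Hold finished chunks in a queue and release at most one per frame,
// so a burst of completed meshes never lands on a single frame.
class AttachThrottle<T> {
    private final Queue<T> pending = new ArrayDeque<>();

    // Called whenever a background job finishes (from the render thread,
    // or via a thread-safe handoff).
    public void enqueue(T spatial) {
        pending.add(spatial);
    }

    // Called once per frame from the update loop; returns the single item
    // to attach this frame, or null if nothing is waiting.
    public T pollOnePerFrame() {
        return pending.poll();
    }
}
```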
Anyway, you can see all of this in action in the IsoSurfaceDemo which generates terrain on the fly and generally does it without frame drops. I even wrote a whole paging library to help others do similar.
Paging library it uses:
By reducing the number of chunks being added or removed each frame (limiting it to one added and one removed every 50ms), the lag was reduced significantly. However, it is still present enough to be a problem.
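The 50 ms limit described above can be expressed as a small rate limiter consulted before each add or remove (a hypothetical sketch; the clock is passed in explicitly so the logic is easy to test — in the update loop you would pass System.nanoTime()):

```java
// Allow at most one action per fixed interval (e.g. one chunk attach
// and one detach per 50 ms window).
class RateLimiter {
    private final long intervalNanos;
    private long lastActionNanos;

    public RateLimiter(long intervalMillis, long nowNanos) {
        this.intervalNanos = intervalMillis * 1_000_000L;
        // Start one full interval in the past so the first action is allowed.
        this.lastActionNanos = nowNanos - intervalNanos;
    }

    // Returns true (and consumes the window) if an action is allowed now.
    public boolean tryAcquire(long nowNanos) {
        if (nowNanos - lastActionNanos >= intervalNanos) {
            lastActionNanos = nowNanos;
            return true;
        }
        return false;
    }
}
```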
Is there some way (e.g. an OpenGL profiler) to figure out what the actual cause is?
JME has a GPU profiler you could try turning on… but it’s likely just to show a spike in GPU usage. I think there is a more detailed version of it, but I don’t remember how to enable it.
Otherwise you are looking at some kind of third-party OpenGL debugger. I’m not familiar with those. Keeping my chunks a reasonable size, reusing textures and materials, etc. always solved the problem for me.
I just did this last week or so:
My engine version is 3.2.1-stable (though I think the detailed profiler is already available in 3.2.0), and it’s in the
While I haven’t used it myself, if you’re on NVIDIA hardware I’d guess that Nsight would provide more than enough information to identify your bottleneck.