I've got a tiny optimisation problem

Hi everyone,

I am trying to visualize some data. However, this is quite a lot of data, and jME can barely manage to keep up with it.

Do keep in mind that I started this project not knowing how much data I would actually need to visualize.


  • About 700 ‘things’. Let’s call these things ‘network nodes’.
  • Each node has about 250 connections to other nodes - that's 178,500 connections in total.
  • Also, an update arrives every 0.01 s (100 updates per second). On every update, I must change the color of the corresponding node, and probably of its connection(s) too.

Which adds up to ~180k objects. Hooray!


  • The network nodes aren’t that important; they can be displayed as low-poly spheres (I currently use new Sphere(8, 8, 0.3f)).
  • The connections should preferably be arrows. The debug arrow (com.jme3.scene.debug.Arrow), however, does not look good at large distances - the arrowhead becomes disproportionately large.
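On the disproportionate arrowhead: one workaround is to generate your own arrow segments with a head whose size is fixed in world units (clamped for very short edges) rather than scaling with the arrow's extent, the way the debug Arrow's proportions do. A minimal, engine-independent sketch (class and method names are mine, not jME's) that produces the positions for three line segments you could feed into a Mesh in Mode.Lines:

```java
public class ArrowMesh {
    // Build line-segment vertices for an arrow whose head has a FIXED
    // world-space size, so it doesn't grow with the edge length.
    // Returns positions for 3 segments (6 vertices * 3 floats = 18 floats):
    // the shaft start->end, plus two head barbs near the tip.
    static float[] arrowVertices(float[] a, float[] b, float headLen, float headWidth) {
        float[] dir = sub(b, a);
        float len = length(dir);
        scale(dir, 1f / len);                      // unit direction a -> b
        float hl = Math.min(headLen, 0.5f * len);  // clamp head for short edges
        // base of the arrowhead, hl behind the tip
        float[] base = { b[0] - dir[0]*hl, b[1] - dir[1]*hl, b[2] - dir[2]*hl };
        // pick any vector not parallel to dir, then build a perpendicular
        float[] up = Math.abs(dir[1]) < 0.9f ? new float[]{0,1,0} : new float[]{1,0,0};
        float[] side = cross(dir, up);
        scale(side, headWidth / length(side));
        return new float[] {
            a[0], a[1], a[2],  b[0], b[1], b[2],                                  // shaft
            b[0], b[1], b[2],  base[0]+side[0], base[1]+side[1], base[2]+side[2], // barb 1
            b[0], b[1], b[2],  base[0]-side[0], base[1]-side[1], base[2]-side[2], // barb 2
        };
    }
    static float[] sub(float[] u, float[] v) { return new float[]{u[0]-v[0], u[1]-v[1], u[2]-v[2]}; }
    static float[] cross(float[] u, float[] v) {
        return new float[]{u[1]*v[2]-u[2]*v[1], u[2]*v[0]-u[0]*v[2], u[0]*v[1]-u[1]*v[0]};
    }
    static float length(float[] v) { return (float) Math.sqrt(v[0]*v[0] + v[1]*v[1] + v[2]*v[2]); }
    static void scale(float[] v, float s) { v[0]*=s; v[1]*=s; v[2]*=s; }
}
```

Three segments per arrow keeps the vertex count low enough that all the edges can still live in one batched line mesh.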

This leaves me with ~180k objects, each with not too many vertices. Trying to display all of this, I get 1 fps (or at least that’s what it displays; it’s probably lower than that).

I thought of several solutions:

  1. Use BatchNode
    The problem is that batching takes forever. I tried batching ~50k objects, and it worked pretty well (my fps jumped to 43!). However, when I tried batching ~100k objects, the application froze for about 30 seconds and I had to kill it from Task Manager.
  2. Use GeometryBatchFactory
    I have not tried this yet, but I imagine I’d run into the same problems as with BatchNode.
  3. Make some sort of custom mesh that works like BatchNode and GeometryBatchFactory, but is more customised for this particular project
    I have not tried this yet, but it currently seems like the best way to go.
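For option 3, a sketch of how the edge half might look (class and method names are mine, not jME's): pack every connection into two flat arrays and hand them to a single Mesh in Mode.Lines, using an Unshaded material with VertexColor enabled so each endpoint carries its own RGBA. The engine-independent packing looks something like:

```java
public class EdgeBatch {
    final float[] positions; // 2 endpoints * 3 floats (xyz) per edge
    final float[] colors;    // 2 endpoints * 4 floats (RGBA) per edge

    EdgeBatch(float[][] nodePos, int[][] edges) {
        positions = new float[edges.length * 6];
        colors = new float[edges.length * 8];
        for (int e = 0; e < edges.length; e++) {
            // copy both endpoint positions into the flat buffer
            System.arraycopy(nodePos[edges[e][0]], 0, positions, e * 6, 3);
            System.arraycopy(nodePos[edges[e][1]], 0, positions, e * 6 + 3, 3);
            setEdgeColor(e, 1f, 1f, 1f, 1f); // default: opaque white
        }
    }

    // Recolor both endpoints of one edge in place. In jME you would then
    // re-upload the Color buffer (or just the changed region) and call
    // setUpdateNeeded() on it.
    void setEdgeColor(int edge, float r, float g, float b, float a) {
        for (int v = 0; v < 2; v++) {
            int o = edge * 8 + v * 4;
            colors[o] = r; colors[o + 1] = g; colors[o + 2] = b; colors[o + 3] = a;
        }
    }
}
```

Feeding the arrays to jME via mesh.setBuffer(VertexBuffer.Type.Position, 3, positions) and setBuffer(Type.Color, 4, colors) turns all 178,500 connections into a single geometry and a single draw call, instead of 178,500 scene-graph objects.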

What are your thoughts and opinions on this? What else can I do to improve performance?

P.S.: The wiki says that I should not make a lot of (java) objects, keep a low geometry count, and not update too frequently - basically the exact opposite of what I’m doing right now.

This is not a tiny optimization problem.

Personally, for this kind of graph visualization, I’d have a single node mesh and a single edge mesh… custom… that I manage myself. The nodes might be instanced or batched. The edges/links would definitely be batched.
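To make the 100-updates-per-second recoloring cheap with a single batched node mesh, one approach is to keep one big RGBA array, track the dirty range as updates come in, and push only that slice to the GPU once per frame. A rough sketch (VERTS_PER_NODE is an assumption about your sphere's vertex count, not a jME constant, and the class is mine):

```java
public class NodeColors {
    // Assumed vertex count each low-poly sphere contributes to the
    // batched mesh -- measure your actual Sphere(8, 8, 0.3f) instead.
    static final int VERTS_PER_NODE = 81;

    final float[] rgba; // one RGBA per vertex, all nodes concatenated
    int dirtyLo = Integer.MAX_VALUE; // dirty float range, reset each frame
    int dirtyHi = -1;

    NodeColors(int nodeCount) {
        rgba = new float[nodeCount * VERTS_PER_NODE * 4];
    }

    // Called per network update: recolor every vertex of one node
    // and widen the dirty range.
    void setNodeColor(int node, float r, float g, float b, float a) {
        int lo = node * VERTS_PER_NODE * 4;
        int hi = lo + VERTS_PER_NODE * 4;
        for (int o = lo; o < hi; o += 4) {
            rgba[o] = r; rgba[o + 1] = g; rgba[o + 2] = b; rgba[o + 3] = a;
        }
        dirtyLo = Math.min(dirtyLo, lo);
        dirtyHi = Math.max(dirtyHi, hi);
    }

    // Called once per frame: upload only [dirtyLo, dirtyHi) to the mesh's
    // Color VertexBuffer, call setUpdateNeeded() on it, then reset the range.
}
```

The point is that 100 updates per second then cost a few array writes plus one partial buffer upload per frame, rather than touching 700 separate geometries.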

I speak from experience here because data visualization is a long-standing part of my background. My original OpenGL apps were large-scale data visualizations written using OpenSceneGraph and a Java wrapper we wrote - 15+ years ago now. Tens of thousands of nodes, hundreds of thousands of edges, all in locale/group-optimized spring-style layouts.