Memory leak using multiple parallel physics spaces (native bullet)

I’ve spent the last week trying to find the source of a memory leak in my game, and I now believe it is coming from inside the engine. I would appreciate it if someone could confirm this behavior (preferably on an OS other than Windows, since that’s the only one I’ve tested on) before I submit this as an issue.

The memory leak in question seems to happen when there are multiple physics spaces running in parallel to one another. The odd thing is that it doesn’t seem to be coming from the JVM, as no exceptions are ever thrown. Instead, the OS keeps committing more and more memory to the JVM process until the OS itself raises a low-memory warning. Here is a simple test case.

package test;

import com.jme3.app.SimpleApplication;
import com.jme3.bullet.BulletAppState;
import com.jme3.bullet.PhysicsSpace;
import java.util.ArrayList;
import java.util.List;

public class ParallelPhysicsTest extends SimpleApplication {

    private List<PhysicsSpace> physicsSpaces = new ArrayList<>();

    public static void main(String[] args) {
        new ParallelPhysicsTest().start();
    }

    @Override
    public void simpleInitApp() {
        // Create several physics spaces; the leak only shows up
        // when they run in PARALLEL threading mode.
        for (int i = 0; i < 10; i++) {
            physicsSpaces.add(createPhysicsSpace());
        }
    }

    public PhysicsSpace createPhysicsSpace() {
        BulletAppState bulletAppState = new BulletAppState();
        bulletAppState.setThreadingType(BulletAppState.ThreadingType.PARALLEL);
        stateManager.attach(bulletAppState);
        return bulletAppState.getPhysicsSpace();
    }
}
Run the code above while monitoring the process’s memory with a tool (such as Resource Monitor on Windows), and you can clearly see the committed memory increasing at a steady rate. If you wait long enough, the OS will eventually warn you that you’re running out of memory.
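As a rough cross-check that doesn’t depend on an external tool, the process’s committed memory can also be polled from inside the JVM via `com.sun.management.OperatingSystemMXBean`. Unlike the JVM heap counters, this figure includes native allocations, which is where this leak shows up. A minimal sketch (not part of the original test case):

```java
import com.sun.management.OperatingSystemMXBean;
import java.lang.management.ManagementFactory;

// Polls the committed virtual memory of this process. Unlike
// Runtime.totalMemory(), this includes native (non-heap) memory,
// so a native-bullet leak is visible here even though the heap is fine.
public class MemWatch {

    static long committedMb() {
        OperatingSystemMXBean os = (OperatingSystemMXBean)
                ManagementFactory.getOperatingSystemMXBean();
        return os.getCommittedVirtualMemorySize() / (1024 * 1024);
    }

    public static void main(String[] args) throws InterruptedException {
        for (int i = 0; i < 5; i++) {
            System.out.println("committed: " + committedMb() + " MB");
            Thread.sleep(1000);
        }
    }
}
```

If the reading climbs steadily while the JVM heap stays flat, the leak is on the native side.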

This was tested using native bullet on jME 3.1 beta 1 on Windows 8.1 64-bit.


It must be a leak in the native bullet code since that’s not using Java memory.

I’ve been using multiple physics spaces for a long time.
→ About how long are we talking until the crash? I only tested for 24 hours continuous.
I don’t use the appstate, but create PhysicsSpaces directly (I doubt this is the cause, however).

It can take a while, definitely less than 24 hours though. It mostly depends on how much memory you have available on your system. From what I can observe, the test case leaks about 0.5 MB per second on average. Also, note that you have to be running multiple physics spaces in parallel mode for the leak to happen. If they all run sequentially, no leak occurs.

On a side note, why would you create your physics spaces directly instead of using the appstate?

Yes that would make sense.


On my server there is no Application and barely any jME, so I use them directly.
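For reference, driving a space without BulletAppState looks roughly like this. This is a sketch assuming the jME 3.1 PhysicsSpace API; the 60 Hz timestep and the endless loop are illustrative, since on a real server you would sync with your own game loop:

```java
import com.jme3.bullet.PhysicsSpace;

// Sketch: stepping a PhysicsSpace directly, without BulletAppState.
// Since there is no app state attached, you own the loop and must
// call update() yourself each tick.
public class HeadlessPhysics {
    public static void main(String[] args) {
        PhysicsSpace space = new PhysicsSpace(PhysicsSpace.BroadphaseType.DBVT);
        // add rigid bodies, ghost objects, etc. here ...
        while (true) {
            space.update(1f / 60f);  // advance the simulation by one tick
            // sleep / sync with the server loop here
        }
    }
}
```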

Doesn’t happen with latest linux64 build.

Really? That surprises me. How long did you run it without seeing any increase in allocated memory? I can’t think of a reason this problem would be OS-specific.

OK, never mind, I was expecting something more noticeable.
The memory does indeed slowly increase every time stepSimulation is called, apparently :confused:

I tried to trace the leak, and I found that it’s probably related to the bullet profiler that is currently in use (the default one?).
I tried to recompile the library with -DBT_NO_PROFILE=1, and now, with 100 parallel physics spaces, the memory looks stable.
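For anyone who wants to reproduce the rebuild: where exactly the define goes depends on the native build scripts, but with Gradle’s native C++ plugin it would look something like the fragment below. The block name and placement are assumptions, not the project’s actual build file:

```groovy
// Hypothetical fragment for a Gradle native-build script:
// disable Bullet's btQuickprof profiler at compile time.
binaries.all {
    cppCompiler.define "BT_NO_PROFILE", "1"
}
```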

I don’t know whether the profiler is used by jME. If not, I propose disabling it for good and forgetting about this issue.


Can confirm that the memory is more stable when using -DBT_NO_PROFILE=1.

Linking a few people that might be able to enlighten us a bit more on this: @normen @Dokthar @Empire_Phoenix

Do you think it would cause unexpected issues if we add this argument for future builds?

I’m not aware that we actually use the profiler anywhere; however, I would prefer to first understand what is actually causing the memory leak before trying to fix it. (It might very well be useful to some, especially if we get around to adding a binding for it.)

By the way, I also use the Linux build, so that might be why I can’t see it either. Does the Linux build not use that flag or something?

Is this maybe already fixed in a newer Bullet version (meaning we should update)?


This should be the profiler in btQuickprof.cpp (but I have no clue where exactly the issue could be).
There are indeed plenty of commits since the version we are using, but it seems the new 2.x release is a Bullet 2/3 hybrid, so the bindings would probably have to be updated to support it.

Nope, you should see it. Try with a more accurate monitor that shows memory usage per process.

FYI, this issue is still present in jME 3.1.
There’s a memory leak that eventually causes the OS to crash or to kill the app. The issue does not happen with JBullet.
