During development we need to move between dungeon levels very often. Before we load a new level we drop references to the old objects (jME's geometries, nodes, textures, etc.) and clear the assetManager's cache (the level we are switching to may have a different style, and we just don't need to keep the old assets in memory). This leads to an OutOfMemoryError. The exception is thrown from Bits.java, inside the reserveMemory function:
You already did some analysis, but I would like to share my advice.
To start profiling memory issues and OutOfMemoryError, in my experience the best approach is to use the JVM arg -XX:+HeapDumpOnOutOfMemoryError to dump the heap on OutOfMemory (or to do it at runtime with jvisualvm), and then to use MAT (better than jvisualvm for hunting through and analyzing/querying the heap dump).
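A minimal sketch of the flags described above (the jar name and dump path are just placeholders, not from the original post):

```shell
# Dump the heap to a file automatically when an OutOfMemoryError is thrown;
# open the resulting .hprof file in MAT afterwards.
java -XX:+HeapDumpOnOutOfMemoryError \
     -XX:HeapDumpPath=/tmp/dumps \
     -jar yourgame.jar
```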
MAT is available as a standalone app (and as an Eclipse plugin, iirc).
Today I also watched an interesting presentation about LeakCanary, though I haven't used it myself.
Sorry I can't help more with your current problem; I hope MAT or another tool will help you see whether some objects aren't really being freed (leaked references, …).
For now all I can see is that ByteBuffers are allocated but not freed. Also, NativeObjectManager has UNSAFE set to false, which prevents jME from freeing the buffers itself; everything is left to Java's internal mechanisms. Unfortunately it is too complex for me to make any quick fix.
I think it is direct memory; I don't know how it works inside Java's VM (I need to read about that). I'm calling gc() several times during that process.
What have you set your max direct memory to? In general, it should be twice your heap max or more, as that memory is not counted in the heuristic that decides when to GC.
Anyway, in my experience, System.gc() will force freeing of whatever can be freed… even (and especially) direct memory. But honestly, for my games where it really mattered, I started freeing direct memory myself. In my case that was because I was constantly loading/recreating sections of the world, though. A full GC every so often is not so great in that case.
About freeing that memory manually… when is it safe to do that? As far as I know, several scene objects can share the same buffers and/or texture images, so it is not safe to dispose the image when the object containing it is removed from the scene.
The moment when I clear the assetManager's cache seems like a good time for that. Is there any way to obtain everything loaded in the cache so I can free it manually?
Yeah, calling GC manually is usually just a cover-up for a problem. Hmm, are you using Java 8? I didn't know it could run out of memory anywhere except the heap.
He's not talking about calling System.gc(); that's still not guaranteed to free direct memory, though it might. But as said, a common mistake is to make the heap larger and larger while leaving the direct memory size the same. That causes even fewer GCs to happen and thus fills the direct memory even faster. What he means is deleting individual direct buffers directly.
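As a standalone sketch of that idea: on Java 9+ a direct buffer's native memory can be released immediately via sun.misc.Unsafe.invokeCleaner (iirc jME's own BufferUtils has a destroyDirectBuffer helper that does something similar internally). This is only safe when you are certain nothing else still shares the buffer, and the buffer must never be touched afterwards. The class name here is just for illustration:

```java
import java.lang.reflect.Field;
import java.lang.reflect.Method;
import java.nio.ByteBuffer;

public class DirectBufferFree {

    // Release the native memory behind a direct buffer right away,
    // instead of waiting for the GC to get around to it.
    public static boolean free(ByteBuffer buffer) {
        if (!buffer.isDirect()) {
            return false; // heap buffers are reclaimed by normal GC
        }
        try {
            Class<?> unsafeClass = Class.forName("sun.misc.Unsafe");
            Field f = unsafeClass.getDeclaredField("theUnsafe");
            f.setAccessible(true);
            Object unsafe = f.get(null);
            Method invokeCleaner =
                    unsafeClass.getMethod("invokeCleaner", ByteBuffer.class);
            invokeCleaner.invoke(unsafe, buffer); // frees the native memory now
            return true;
        } catch (ReflectiveOperationException e) {
            return false; // pre-Java-9 JVMs need a different approach
        }
    }

    public static void main(String[] args) {
        ByteBuffer buf = ByteBuffer.allocateDirect(16 * 1024 * 1024);
        System.out.println("freed: " + free(buf));
        // buf must not be used after this point
    }
}
```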
I think I'm not setting the direct memory size on startup, so it defaults to the value of -Xmx. 512 MB is still a large amount of memory; increasing it would only let me switch levels a few more times.
The question is why the buffer is not freed, how to free it safely, or how much time Java needs to free it for me. Do I need to worry about this during normal gameplay? Or is it only a side effect of switching between levels many times in a short time?
I'll try to trace the direct buffers using VisualVM…
…or quite a bit smaller. (I think the default in Java 6 was 64 megs.) Always set the direct memory maximum to be pretty large. I try to keep my regular heap under 512m and the direct memory heap at 1024m or larger.
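For example, following the sizing advice above (the numbers and jar name are illustrative, not a recommendation for every app):

```shell
# Modest heap, larger direct memory pool for textures/meshes/etc.
java -Xmx512m -XX:MaxDirectMemorySize=1024m -jar yourgame.jar
```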
Edit: Note that the MemoryUtils class can be used to tell what your maximums are.
From the first screenshot in my first post I can see that it's about 512 MB. If the VM needs some time, a minute or two, to decide whether a buffer can be freed, then that's OK; I'll even increase the limit to minimize the chance of a crash during normal gameplay. But if those buffers are never freed because I'm doing something wrong, then increasing it would gain me nothing.
You can see that what is about 512 MB? I don't see how you are getting that value, so I can't comment on whether it's accurate. The direct memory size cannot be easily queried through normal Java means… so unless you are using MemoryUtils or going in through the JMX MBeans, it's hard to say what it is.
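For reference, a small sketch of the JMX route mentioned above (jME's MemoryUtils wraps something similar, if I remember right). The platform's "direct" BufferPoolMXBean reports NIO direct-buffer usage on Java 7+; the class name is just for illustration:

```java
import java.lang.management.BufferPoolMXBean;
import java.lang.management.ManagementFactory;
import java.util.List;

public class DirectMemoryStats {

    // Total bytes currently used by the NIO "direct" buffer pool,
    // or -1 if the pool is not found.
    public static long directMemoryUsed() {
        List<BufferPoolMXBean> pools =
                ManagementFactory.getPlatformMXBeans(BufferPoolMXBean.class);
        for (BufferPoolMXBean pool : pools) {
            if ("direct".equals(pool.getName())) {
                return pool.getMemoryUsed();
            }
        }
        return -1;
    }

    public static void main(String[] args) {
        // Allocate 1 MB of direct memory so the pool shows some usage.
        java.nio.ByteBuffer keep = java.nio.ByteBuffer.allocateDirect(1 << 20);
        System.out.println("direct memory used: " + directMemoryUsed() + " bytes");
        System.out.println(keep.capacity()); // keep the buffer reachable
    }
}
```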
90% of a typical jME app's memory is going to be direct memory. Textures, meshes, compiled shaders, etc… pretty much all of the data of the scene graph lives in direct memory.
However, direct memory is never counted when calculating whether the JVM should run GC. Also, getting low on it won't force a more drastic GC the way it does for everything else.
So, yes, if the problem is on your side then increasing the max direct memory size might hide it for longer… but for a jME app, the default value is nearly always wrong and will nearly always cause direct memory errors.
Direct buffers are allocated and never deallocated. Trying to trace this down, I found that there are a lot of Images that are not kept alive by anything except NativeObjectManager. With every dungeon level switch their number increases.
NativeObjectManager should only be keeping weak or phantom references to the buffers… and so shouldn't keep them from being GC'ed. Those references don't count, basically.
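A tiny illustration of why those references don't count, assuming nothing else holds a strong reference to the object (the class name and the gc-retry loop are my own sketch; System.gc() is only a hint, hence the loop):

```java
import java.lang.ref.WeakReference;

public class WeakRefDemo {

    // Returns true once the weakly-referenced object has been collected,
    // showing that a weak reference alone does not keep it alive.
    public static boolean clearedAfterGc() throws InterruptedException {
        Object strong = new byte[1024];
        WeakReference<Object> weak = new WeakReference<>(strong);
        strong = null; // drop the only strong reference
        for (int i = 0; i < 50 && weak.get() != null; i++) {
            System.gc();      // hint; retried because it is not guaranteed
            Thread.sleep(10);
        }
        return weak.get() == null;
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println("weak ref cleared: " + clearedAfterGc());
    }
}
```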