Hi Everyone,
I ran into this OutOfMemoryError: Direct buffer memory. As you might know, my game is a Minecraft-like game that creates sectors (Notch calls them chunks) around the player, each consisting of tessellated meshes. Only a certain number of sectors is visible at a time; the others are cached by holding WeakReferences to them in a list. So when the player walks on, the sectors are removed from the scenegraph and eventually garbage collected. I made sure they really are by adding a finalize method to the sector class and confirming that it was called.
I read some websites about direct buffers, and most of them agree that the garbage collector does not collect direct buffers directly; rather, when the GC collects the wrapping class, the class itself makes sure (via native calls) that the memory is freed. Still, some claim there is the rare circumstance where there is enough Java heap space available and the GC thinks everything is fine, but the memory region for direct buffers is already drained. That's the point where this message appears. It surely can't be the normal heap space, because I'm already at 1.5G.
My next step was to free the buffers manually instead of waiting for the GC to do the work, because of the described scenario. So I used the code in the BufferUtils class and freed the memory myself. But I still get the OOM error. Is it possible that jME itself is leaking memory? Or is there anything I'm overlooking? And of course I'm already using -XX:MaxDirectMemorySize=500m.
I would recommend using the profiler and checking for leaks, if you haven’t already.
The profiler is a very mysterious and powerful device, and its mystery is exceeded only by its power.
JME is not leaking memory. I can run Mythruna for hours without getting an OOM.
Direct memory does not influence the garbage collector. So what you were originally seeing is that you might have filled up all of your direct memory but since the heap was still relatively empty then none of that direct memory was being reclaimed.
My guess is that there are some direct memory buffers that you are forgetting. It’s worthwhile to add a memory indicator to your app HUD so you can watch it while you run around.
You can alleviate a lot of the issues by setting direct memory really large and heap only as large as needed. For example, set direct memory to 1 gig and leave heap max at 256 meg or so. This will cause the incremental GC to be more aggressive. Still, on Mythruna I would occasionally see out of memory errors until I explicitly started freeing my buffers.
Also, you’d do better to use a straight LRU cache. WeakReferences won’t be queued until the GC runs, and it could already be too late by then… whereas a fixed-size LRU cache will only ever take up so much memory and you can manage it much more closely.
Direct memory does not influence the garbage collector. So what you were originally seeing is that you might have filled up all of your direct memory but since the heap was still relatively empty then none of that direct memory was being reclaimed.
That was also my guess.
Also, you’d do better to use a straight LRU cache. WeakReferences won’t be queued until the GC runs, and it could already be too late by then… whereas a fixed-size LRU cache will only ever take up so much memory and you can manage it much more closely.
I'd really like to do the memory handling of the direct buffers myself, but I can't simply hand out directly allocated buffers (in a wrapper class, maybe) to other classes and then just drop the least recently used ones: that could lead to really bad problems if the guess that a buffer was no longer in use turns out to be wrong. The only possibility would be to copy the content to a non-direct buffer and copy it back in case it really was still in use. Is that what you had in mind?
By the way - I didn’t find any way to get the amount of directly allocated memory. Neither in the profiler nor in the application itself. Is this even possible?
@entrusc said:
By the way - I didn't find any way to get the amount of directly allocated memory. Neither in the profiler nor in the application itself. Is this even possible?
Not as far as I know.
What I meant by an LRU cache is to just do something like pick a distance that you will support and then make sure your cache is big enough to cover that and then some. Cache the "sectors"... you should have a pretty clear idea of which ones of those are in use and which ones aren't. When you purge a sector from the cache because it hasn't been used in a long time then you free its direct memory also.
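Something like this is what I have in mind: a minimal sketch on top of LinkedHashMap. The class and method names are just illustrative, not jME API, and the actual freeing is left as a stub where you'd call something like BufferUtils.destroyDirectBuffer.

```java
import java.nio.ByteBuffer;
import java.util.LinkedHashMap;
import java.util.Map;

// Fixed-size LRU cache of sector buffers. When the eldest entry is evicted
// we get a deterministic hook to free its direct buffer, instead of waiting
// for the GC to notice it.
public class SectorCache extends LinkedHashMap<Long, ByteBuffer> {
    private final int maxSectors;

    public SectorCache(int maxSectors) {
        super(16, 0.75f, true); // accessOrder = true turns this into an LRU map
        this.maxSectors = maxSectors;
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<Long, ByteBuffer> eldest) {
        if (size() > maxSectors) {
            freeDirect(eldest.getValue()); // free before the entry is dropped
            return true;
        }
        return false;
    }

    private void freeDirect(ByteBuffer buf) {
        // In jME you'd call BufferUtils.destroyDirectBuffer(buf) here;
        // this sketch just lets the GC reclaim it eventually.
    }
}
```

The access-order constructor flag is what gives you LRU behavior; `removeEldestEntry` is the standard eviction hook, so the cache only ever holds `maxSectors` buffers.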
@entrusc said:
By the way - I didn't find any way to get the amount of directly allocated memory. Neither in the profiler nor in the application itself. Is this even possible?
I've read that for JDK7 you can use JConsole to see allocated direct buffers, but I haven't tested it yet.
https://blogs.oracle.com/alanb/entry/monitoring_direct_buffers
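If JConsole can see it, the same numbers should also be reachable from inside the app through the JDK7 BufferPoolMXBean. An untested sketch:

```java
import java.lang.management.BufferPoolMXBean;
import java.lang.management.ManagementFactory;

public class DirectMemoryProbe {

    // Returns the bytes currently held by the "direct" buffer pool,
    // or -1 if the JVM doesn't expose that pool (pre-JDK7).
    public static long directMemoryUsed() {
        for (BufferPoolMXBean pool :
                ManagementFactory.getPlatformMXBeans(BufferPoolMXBean.class)) {
            if ("direct".equals(pool.getName())) {
                return pool.getMemoryUsed();
            }
        }
        return -1;
    }
}
```

That could even be polled once per frame for the kind of HUD indicator pspeed mentioned.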
What I meant by an LRU cache is to just do something like pick a distance that you will support and then make sure your cache is big enough to cover that and then some. Cache the “sectors”… you should have a pretty clear idea of which ones of those are in use and which ones aren’t. When you purge a sector from the cache because it hasn’t been used in a long time then you free its direct memory also.
So no generic solution ;) But I see what you mean. Still, I played around with JDK7's VisualVM plugin for direct buffers (as suggested by jmaasing) and it looks like this:

Even though I can see sectors and their associated data getting purged (via stdout), this picture still speaks an unmistakable language: memory leak! Next I'll try to find it and then think about appropriate measures. But in the meantime: thanks everyone for the input!
Hmm, it would be nice to see “allocated direct memory” added to the standard stats display, actually… since if JConsole can see the data now, then Java should be able to as well.
@zarch said:
Hmm, it would be nice to see “allocated direct memory” added to the standard stats display, actually… since if JConsole can see the data now, then Java should be able to as well.
I couldn't agree more.
I made sure they will by adding a finalize method to the sector class and confirming that it was called.
I hope you do this only for debugging? According to Joshua Bloch's Effective Java, you should "avoid" using finalizers. Just as a tip ;)
Additionally I agree with Zarch, too.
@enum said:
I hope you do this only for debugging? According to Joshua Bloch's Effective Java, you should "avoid" using finalizers. Just as a tip ;)
Additionally I agree with Zarch, too.
Sure - it was just for debugging - normally I'd use PhantomReferences for that if I really need it ;)
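For reference, a debug-only sketch of that PhantomReference approach (the names are illustrative, and since GC timing is never guaranteed, it retries with a timeout instead of assuming collection happens immediately):

```java
import java.lang.ref.PhantomReference;
import java.lang.ref.ReferenceQueue;

public class PhantomDemo {

    // Watches a throwaway object and returns true once its phantom reference
    // shows up on the queue, i.e. once the GC has actually processed it.
    public static boolean watchCollection() {
        ReferenceQueue<Object> queue = new ReferenceQueue<>();
        Object sector = new Object();
        PhantomReference<Object> ref = new PhantomReference<>(sector, queue);
        sector = null; // drop the only strong reference to the object
        try {
            for (int i = 0; i < 50; i++) {
                System.gc(); // only a hint; hence the retry loop
                if (queue.remove(100) == ref) {
                    return true;
                }
            }
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return false;
    }
}
```

Unlike a finalizer, this doesn't resurrect the object or delay its collection, which is why Bloch prefers it for this kind of bookkeeping.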
So I was able to fix the memory leak now: I need about 250m of direct buffer in total. For the future I should remember that SoftReferences and direct buffers are a bad combination, because they will only be collected when the heap space runs low, and until that happens the direct buffer fills up without being noticed... I now remove the sectors as soon as they are no longer needed, and with them I clear all the direct buffer references. Too bad that we need direct buffers for OpenGL; they really break the otherwise quite perfect garbage collection concept.
In general, weak references of any kind can be dangerous in a game without careful attention. The reference classes interact with the garbage collector so they won’t be queued until the GC runs. If care isn’t taken when freeing up the related material then you could run into the same situation that NativeObjectManager used to run into… where in a single frame it would delete thousands of objects that had accumulated until the incremental GC happened to deal with them. All because the weak references finally came through.
I have to carefully control what I add and remove in a single frame so I’m pretty sensitive to this. Different games will have different requirements, of course.
@pspeed said:
In general, weak references of any kind can be dangerous in a game without careful attention. The reference classes interact with the garbage collector so they won't be queued until the GC runs. If care isn't taken when freeing up the related material then you could run into the same situation that NativeObjectManager used to run into... where in a single frame it would delete thousands of objects that had accumulated until the incremental GC happened to deal with them. All because the weak references finally came through.
I have to carefully control what I add and remove in a single frame so I'm pretty sensitive to this. Different games will have different requirements, of course.
I totally agree with you; a sophisticated garbage collector is as much a curse as it is a blessing. In a game it can get problematic when you can't really control when everything is halted just to collect some obsolete objects from the heap (especially for a control freak like me ;) ). On the other hand, with the new G1 this all won't be much of a problem anymore, because it works mostly concurrently and has only very brief stop-the-world pauses (if any). Hopefully they will make it the default GC soon...
By the way I managed to work past my memory problems and implemented the first raw material on xcylin:

Incremental GC works well in Java. It’s when we chain a bunch of processing off of references being reclaimed that we potentially do bad things… though that’s easy enough to work around even just by capping the amount of freeing done per frame.
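The capping itself can be as simple as a queue with a per-frame budget. A sketch with illustrative names (not Mythruna's actual code; the real free call would be something like BufferUtils.destroyDirectBuffer):

```java
import java.nio.ByteBuffer;
import java.util.ArrayDeque;
import java.util.Queue;

// Defers buffer freeing and caps how many buffers are processed per frame,
// so a burst of reclaimed references can't stall a single frame.
public class DeferredFreeQueue {
    private final Queue<ByteBuffer> pending = new ArrayDeque<>();
    private final int maxFreesPerFrame;

    public DeferredFreeQueue(int maxFreesPerFrame) {
        this.maxFreesPerFrame = maxFreesPerFrame;
    }

    public void scheduleFree(ByteBuffer buf) {
        pending.add(buf);
    }

    // Call once per frame; returns how many buffers were freed this frame.
    public int update() {
        int freed = 0;
        while (freed < maxFreesPerFrame && !pending.isEmpty()) {
            ByteBuffer buf = pending.poll();
            // here you'd actually release the native memory, e.g.
            // BufferUtils.destroyDirectBuffer(buf)
            freed++;
        }
        return freed;
    }

    public int pendingCount() {
        return pending.size();
    }
}
```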
In my opinion, the biggest problem with direct buffers is that there is no API way to free them. It seems to me that if we can close files and other resources then we should be able to manually free a native block of memory without having to jump through hoops. But oh, well.
Screen shot looks cool.
@pspeed said:
In my opinion, the biggest problem with direct buffers is that there is no API way to free them. It seems to me that if we can close files and other resources then we should be able to manually free a native block of memory without having to jump through hoops. But oh, well.
I think the idea was that no manual memory management should be necessary in Java, so all the work should be done entirely by the GC. But unfortunately direct buffers were never really considered when the GC was designed; that's why it can't handle them correctly, I think. Maybe the GC just needs to learn to also look at the directly allocated memory and run a cleanup sweep there when necessary, not only when the Java object heap is full.
@pspeed said:
Screen shot looks cool.
Thanks :)
@entrusc said:
I think the idea was that no manual memory management should be necessary in Java, so all the work should be done entirely by the GC. But unfortunately direct buffers were never really considered when the GC was designed; that's why it can't handle them correctly, I think. Maybe the GC just needs to learn to also look at the directly allocated memory and run a cleanup sweep there when necessary, not only when the Java object heap is full.
Yeah, I know what they were thinking... but this is more like a native resource like a file handle than the normal Java memory. Some more control would have been nice.
@pspeed said:
Yeah, I know what they were thinking... but this is more like a native resource like a file handle than the normal Java memory. Some more control would have been nice.
You can see it this way or the other way around; either way, they should have implemented at least one of the two solutions correctly. Now we have neither a close method (if you come from the resource point of view) nor a GC that is capable of correctly handling this kind of memory (the other point of view). So we have to use a reflection hack that won't work on all JVMs, thereby breaking the very idea of Java itself :( .
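For completeness, the hack in question looks roughly like this. It pokes at non-public Sun/Oracle internals (the cleaner() method of DirectByteBuffer), so instead of assuming it works, it reports failure on JVMs that don't expose it:

```java
import java.lang.reflect.Method;
import java.nio.ByteBuffer;

public class DirectBufferFree {

    // Attempts to free a direct buffer's native memory immediately via its
    // internal cleaner. Returns false for heap buffers or on JVMs where the
    // internals are inaccessible. After a successful call the buffer must
    // never be touched again - accessing freed memory can crash the JVM.
    public static boolean tryFree(ByteBuffer buf) {
        if (!buf.isDirect()) {
            return false;
        }
        try {
            Method cleanerMethod = buf.getClass().getMethod("cleaner");
            cleanerMethod.setAccessible(true);
            Object cleaner = cleanerMethod.invoke(buf);
            if (cleaner == null) {
                return false; // e.g. a sliced/duplicated view of another buffer
            }
            Method clean = cleaner.getClass().getMethod("clean");
            clean.setAccessible(true);
            clean.invoke(cleaner);
            return true;
        } catch (Throwable t) {
            return false; // not supported on this JVM
        }
    }
}
```

This is essentially what jME's BufferUtils does internally, and exactly the kind of non-portable workaround a proper close/free API would make unnecessary.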