Hi, I’ve encountered a strange memory leak with this piece of code:
private final ConcurrentHashMap<Integer, InteractiveEntity> entities = new ConcurrentHashMap<>();
private static ScheduledThreadPoolExecutor mobAiExecutor = new ScheduledThreadPoolExecutor(2);

public void update(float tpf) {
    // Body intentionally left empty while isolating the leak
    Runnable r = () -> {
    };
    mobAiExecutor.submit(r);
}
The JVM keeps requesting more and more memory, and I’m not entirely sure why the garbage collector doesn’t keep up. My understanding was that the memory is only cleaned when the thread is not running (it’s not currently running in the OS sense).
I’ve also tried another approach (invoked in the initialize method), just to understand what’s going on (I thought that maybe the Runnable and Future objects themselves were the cause) - the issue persists.
My multithreading knowledge is not very good (I’m still learning as a junior dev), but the really interesting part is that the memory leak doesn’t occur when the code inside the runnable is executed on the main JME thread (when there actually is code to be run).
I tried isolating the issue; there actually was a task being scheduled.
Nope, this one was done in “initialize”.
Anyway, the idea I had was to have another thread update the AI whenever the main thread is updated. However, considering what you pointed out, it would be wiser to have a thread with a while loop and a simple atomic boolean “lock” or something? An example off the top of my head (see the sketch below), but it doesn’t require constant memory allocation.
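Something like this is what I’m imagining (completely untested, all names made up):

import java.util.concurrent.atomic.AtomicBoolean;

public class AiWorker implements Runnable {

    private final AtomicBoolean workPending = new AtomicBoolean(false);
    private volatile boolean running = true;

    // The main thread calls this from update(tpf) to signal a new frame
    public void signalUpdate() {
        workPending.set(true);
    }

    public void stop() {
        running = false;
    }

    @Override
    public void run() {
        while (running) {
            // Consume the flag and run one AI step; no per-frame allocation
            if (workPending.compareAndSet(true, false)) {
                updateAi();
            }
            Thread.onSpinWait(); // hint to the CPU that we're busy-waiting
        }
    }

    private void updateAi() {
        // AI update logic would go here
    }
}

// Usage: start new Thread(worker) once in initialize(),
// then call worker.signalUpdate() from update(tpf).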
Could you tell us a bit more about this? How much memory is it requesting, and how do you know that the GC isn’t keeping up with the garbage? The code you’ve shown us won’t produce a memory leak on its own.
One thing to note about the JVM is that it doesn’t handle memory the way you might at first expect it to. Rather, it keeps large chunks allocated from the OS and rarely returns them. It certainly will not return them just because the GC freed significant portions of them; instead, it keeps them and re-fills them with new allocations (it does this because allocating new memory from the OS is far more expensive than the JVM’s internal memory pooling). If you’re looking at JVM memory usage through your OS’s system monitor/activity monitor/task manager, you’ll only see what the JVM has reserved from the OS - not what your app is actually using, and not what the GC is freeing. So you might see something like this (the numbers are totally made up, but a typical application flow will look something like this):
OS (16 GB available) → JVM starts your app (reserves 200 MB from the OS) → app running, uses 150 MB → JVM requests 200 MB more from the OS (total reserved: 400 MB) → GC runs, frees 50 MB (app is now using 100 MB; JVM still has 400 MB reserved from the OS).
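If you want a quick sanity check from inside the app itself, the standard Runtime API exposes the same distinction (a minimal sketch; VisualVM below gives far more detail):

Runtime rt = Runtime.getRuntime();
long reserved = rt.totalMemory();        // heap the JVM has claimed from the OS
long used = reserved - rt.freeMemory();  // portion currently occupied by objects
long max = rt.maxMemory();               // ceiling the heap may grow to
System.out.printf("heap: used=%d MB, reserved=%d MB, max=%d MB%n",
        used >> 20, reserved >> 20, max >> 20);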
If you haven’t already, I’d highly recommend connecting to your app with VisualVM - that will show you in real time how much heap space the JVM has reserved, how much your app is using, and how much each GC run is freeing. If the memory activity graphs suggest a memory leak (or you want to know where the garbage is coming from), you can hop over to the memory sampler (or profiler, for more detail) tab - that will show you exactly how much memory is claimed by instances of each class, and IIRC it will even show you where they’re being created. (It’s also very useful for similarly detailed analysis of CPU activity, so you can see where your code’s hot spots are if you need to optimize.)
A Semaphore can be used for this (see the sketch below), but it is still unusual to have two separate threads run in lock step without some communication between them.
…and once you have communication between them, there may be no need for lock-step operation anymore.
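But just to illustrate the lock-step version anyway (a rough sketch, all names made up):

import java.util.concurrent.Semaphore;

public class LockStepAi {

    private final Semaphore frameTick = new Semaphore(0);

    public LockStepAi() {
        Thread aiThread = new Thread(() -> {
            try {
                while (true) {
                    frameTick.acquire(); // block until the main thread signals a frame
                    updateAi();          // exactly one AI step per frame
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        aiThread.setDaemon(true);
        aiThread.start();
    }

    // Called from the main thread's update(tpf)
    public void onFrame() {
        frameTick.release(); // one permit -> one AI step
    }

    private void updateAi() {
        // AI logic would go here
    }
}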
For a lot of things, the time wasted with one thread waiting for another is more than all the time saved by making those two processes parallel. All the issues with none of the gain.
But in the realm of talking about hypothetical “things” doing imaginary “stuff”, it’s hard to say anything more definite.
Edit: corrected CountDownLatch to Semaphore… which is what I meant. This is still a rare thing to want to do.
During profiling it turned out that most memory was occupied by byte[] arrays used for pathfinding… I should have been more careful: an array was allocated every time pathfinding ran, even though there was no need for that (I guess working until 6 am doesn’t pay off)… So yeah, that wasn’t an issue with multithreading at all. Thanks for the great advice; it pointed me in a good direction for further expanding my knowledge. Many thanks pspeed and danielp!
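For reference, the fix was along these lines (a sketch; the dimensions and field names are made up):

import java.util.Arrays;

public class Pathfinder {

    private static final int MAP_WIDTH = 256;
    private static final int MAP_HEIGHT = 256;

    // Allocated once and reused; previously a fresh byte[] was created on
    // every pathfinding run, producing constant garbage
    private final byte[] visited = new byte[MAP_WIDTH * MAP_HEIGHT];

    public void findPath(int startX, int startY, int goalX, int goalY) {
        Arrays.fill(visited, (byte) 0); // clear instead of reallocating
        // ... pathfinding over the reused buffer ...
    }
}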