OOM exception (Direct Byte Buffer)

Hello,

I will try to explain my situation as best as possible :slight_smile:

I am trying to create an endless terrain using noise functions.
I have one grid (21x21) filled with chunks, and each chunk has 16x16 cubes. I don’t actually use cube meshes but quads, so I don’t render unnecessary triangles (e.g. the underside of a box).

I create these chunks in a separate thread and add them to a Map. In my update loop, I add chunks from the map to the grid and remove chunks from the grid that are too far away.
To optimize this process, so I don’t render a zillion meshes, I optimize each chunk with GeometryBatchFactory.optimize(). This way I am always rendering 21x21 geometries.

This is all working quite well with a nice fps, but after wandering my terrain for a while, I get this error:

[java]
java.lang.OutOfMemoryError: Direct buffer memory
at java.nio.Bits.reserveMemory(Bits.java:658)
at java.nio.DirectByteBuffer.<init>(DirectByteBuffer.java:123)
at java.nio.ByteBuffer.allocateDirect(ByteBuffer.java:306)
at com.jme3.util.BufferUtils.createFloatBuffer(BufferUtils.java:831)
at com.jme3.scene.VertexBuffer.createBuffer(VertexBuffer.java:912)
at jme3tools.optimize.GeometryBatchFactory.mergeGeometries(GeometryBatchFactory.java:174)
at jme3tools.optimize.GeometryBatchFactory.makeBatches(GeometryBatchFactory.java:330)
at jme3tools.optimize.GeometryBatchFactory.optimize(GeometryBatchFactory.java:381)
at jme3tools.optimize.GeometryBatchFactory.optimize(GeometryBatchFactory.java:365)
[/java]

I am assuming I should clear the VertexBuffers so the gc will collect them and release the direct buffer memory, but I don’t really know how to do this.

When I remove a chunk from the grid, I explicitly set it to null to help the gc, but I think references to some buffers are still kept alive.

I already set this as jvm memory setting: -XX:MaxDirectMemorySize=1024m
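For reference, the full launch line with that setting would look something like this (the heap size and jar name here are just placeholders):

```shell
java -Xmx512m -XX:MaxDirectMemorySize=1024m -jar mygame.jar
```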

The thing is that Java doesn’t really track direct memory; it regards it as “as much space as I need for the Java object”. One thing you can try is to lower the Java heap size so the GC runs more often. Another thing you can try is the BufferUtils.destroyDirectBuffer() method, which uses a “hack” that allows you to deallocate direct memory directly. But it’s using hidden native methods that differ across JREs, so it might not work on all systems.
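To illustrate the distinction with plain JDK code (no jME involved): the heap-tracked object is only a thin wrapper, while the actual memory of a direct buffer sits outside the heap and is only released when the wrapper is garbage collected.

```java
import java.nio.ByteBuffer;

public class DirectBufferDemo {

    // Allocates off-heap memory; the bytes count against
    // -XX:MaxDirectMemorySize, not -Xmx.
    static boolean isOffHeap(int bytes) {
        return ByteBuffer.allocateDirect(bytes).isDirect();
    }

    // A heap buffer of the same size counts against -Xmx instead.
    static boolean isOnHeap(int bytes) {
        return !ByteBuffer.allocate(bytes).isDirect();
    }

    public static void main(String[] args) {
        System.out.println(isOffHeap(16 * 1024 * 1024)); // true
        System.out.println(isOnHeap(16 * 1024 * 1024));  // true
    }
}
```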


Thank you for the reply @normen. Indeed, it is quite difficult to handle, since you don’t have control over the direct memory. I also tried forcing the gc (System.gc()) to run at specified intervals, but this didn’t seem to help.

I followed your advice and lowered the heap space from 512m to 256m (-Xmx256m), and I also added these lines to the remove method of the grid:

[java]
for (Spatial geometry : chunk.getChildren()) {
    for (VertexBuffer buffer : ((Geometry) geometry).getMesh().getBufferList()) {
        BufferUtils.destroyDirectBuffer(buffer.getData());
    }
    geometry = null; // only clears the loop variable, not the scene graph reference
}
chunk = null;
[/java]

but I am still encountering the OOM issue. Am I clearing the correct buffers?

Maybe you really are creating too much in one loop or something? Maybe try setting up a simpler test case that in theory pushes through as much data…?


Hello, I think I found the bottleneck.
In my update loop I load all chunks that are in the line of sight in one go, but I remove only one chunk per cycle to keep the framerate as high as possible. This means that when you keep going forward, the chunk cache slowly grows, since chunks are added faster than they are removed. I noticed that I get the OOM error after the cache reaches around 5000-6000 chunks. So I’ll just implement my chunk cache with a fixed size, and I think the problem should be gone. (My character has already been walking in a fixed direction for 30+ minutes and I’m still OOM-error-free!)
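The fixed-size cache itself can be sketched with a plain LinkedHashMap (the key/value types are placeholders for chunk coordinates and chunk data; the capacity is arbitrary):

```java
import java.util.LinkedHashMap;
import java.util.Map;

// A bounded chunk cache: once maxChunks entries are reached, the
// least-recently-accessed chunk is evicted automatically on put().
public class ChunkCache<K, V> extends LinkedHashMap<K, V> {

    private final int maxChunks;

    public ChunkCache(int maxChunks) {
        // accessOrder = true -> LRU eviction instead of insertion order
        super(16, 0.75f, true);
        this.maxChunks = maxChunks;
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
        return size() > maxChunks;
    }
}
```

On eviction you would still need to call BufferUtils.destroyDirectBuffer() on the evicted chunk’s buffers, since the cache only drops the Java-side reference.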

Thanks @normen for the guidance; sometimes just explaining your problem gives you a different view on things. Thanks a bunch!


Adding is way more expensive than removing anyway.
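One way to amortize the adds too, sketched with a plain queue (the per-frame budget value is arbitrary, and this is not jME-specific):

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.List;
import java.util.Queue;

// Spreads expensive work (e.g. attaching chunks) over frames:
// at most `budget` pending items are handed out per update call.
public class BudgetedQueue<T> {

    private final Queue<T> pending = new ArrayDeque<>();
    private final int budget;

    public BudgetedQueue(int budget) {
        this.budget = budget;
    }

    public void enqueue(T item) {
        pending.add(item);
    }

    // Returns the items to process this frame, up to the budget.
    public List<T> drainForFrame() {
        List<T> batch = new ArrayList<>();
        for (int i = 0; i < budget && !pending.isEmpty(); i++) {
            batch.add(pending.poll());
        }
        return batch;
    }
}
```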


Okay, thanks for the tip! I’ll do some tests. I haven’t monitored it yet.