How to use BufferUtils.destroyByteBuffer

Well, I'm trying to use the new BufferUtils.destroyByteBuffer to reduce the direct-memory out-of-memory problems I get when moving fast over paged vegetation.



→ The idea is to force-kill all buffers belonging to the page that is being unloaded.

→ However, I don't seem to be passing the right argument type to destroyByteBuffer.

Side question: in the Task Manager the RAM usage keeps increasing until the crash, but if I use BufferUtils.printCurrentDirectMemory the reported RAM and buffer usage stay stable. That shouldn't be happening, or am I wrong? It seems like either something is creating buffers directly without going through BufferUtils, or the reports from that method are not accurate.


This is the code I use; however, when trying this I get the following exception: java.lang.ClassCastException: java.nio.DirectFloatBufferU cannot be cast to java.nio.ByteBuffer

While this exception makes sense, how could I get this buffer to unload then?
Code:
public static void forceUnload(final Spatial unload) {
	if (unload instanceof Node) {
		final Node node = (Node) unload;
		for (final Spatial child : node.getChildren()) {
			forceUnload(child);
		}
	}
	if (unload instanceof Geometry) {
		final Geometry geo = (Geometry) unload;
		final Mesh mesh = geo.getMesh();
		for (final Entry<VertexBuffer> buffer : mesh.getBuffers()) {
			final Buffer directbuffer = buffer.getValue().getData();
			// fails here: the data may be a DirectFloatBufferU, not a ByteBuffer
			BufferUtils.destroyByteBuffer((ByteBuffer) directbuffer);
		}
	}
}

This utility method was thrown together from a web article and may have only been tested with ByteBuffer. It could be that it needs some enhancement.



I’ll poke around a little and see.

floatBuffer.asByteBuffer()?


That method does not exist, otherwise this would be easy. You can only do asXX() on a ByteBuffer, not the other way round.
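For anyone following along, a tiny plain-java.nio sketch of that one-way relationship: a direct ByteBuffer can hand out a FloatBuffer view, but the view offers no way back to the ByteBuffer.

[java]
import java.nio.ByteBuffer;
import java.nio.FloatBuffer;

ByteBuffer bytes = ByteBuffer.allocateDirect(16 * 4); // room for 16 floats
FloatBuffer floats = bytes.asFloatBuffer();           // a view over the same native memory

floats.put(0, 1.0f);                    // writes through to the backing ByteBuffer
// floats.asByteBuffer();               // does not compile: FloatBuffer has no such method
System.out.println(bytes.getFloat(0));  // prints 1.0
[/java]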

Yes you can get the ByteBuffer again.


Unless you meant “can’t”, I’m curious how one goes from FloatBuffer back to ByteBuffer:

http://docs.oracle.com/javase/6/docs/api/java/nio/FloatBuffer.html

@EmpirePhoenix said:
This is the code I use; however, when trying this I get the following exception: java.lang.ClassCastException: java.nio.DirectFloatBufferU cannot be cast to java.nio.ByteBuffer


I have expanded the type for destroyByteBuffer and renamed it to destroyDirectBuffer.

Please see if this works now.
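If that change took the parameter type up to java.nio.Buffer, the loop from the first post could presumably be reduced to something like this (a sketch, untested against the updated BufferUtils):

[java]
for (final Entry<VertexBuffer> buffer : mesh.getBuffers()) {
    final Buffer directbuffer = buffer.getValue().getData();
    // no cast to ByteBuffer needed if the method now accepts java.nio.Buffer
    BufferUtils.destroyDirectBuffer(directbuffer);
}
[/java]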

Hey there, I was working on the same issue a few days ago in our game (I think I also saw the same web article).

We create a lot of “garbage” when moving around in the game, and I used a similar method to destroy the Geometry buffers. It seems to work okay and prevents “hiccups” when the garbage collector has to do a lot of work. It also seems that the GC does not check whether the direct memory is full (unless -XX:MaxDirectMemorySize is provided at startup), but only whether the JVM is running out of heap, so destroying buffers manually can also prevent certain out-of-memory exceptions.



Are there any plans to integrate this into the engine?



P.S. in my method I used viewedBuffer() to get from a DirectFloatBufferU to the actual ByteBuffer.

[java]
import java.lang.reflect.Method;
import java.nio.Buffer;
import java.nio.ByteBuffer;
import sun.nio.ch.DirectBuffer;

private void destroyBuffer(Buffer toBeDestroyed) {
    // Typed views such as DirectFloatBufferU wrap the real DirectByteBuffer; unwrap it first.
    if (toBeDestroyed instanceof DirectBuffer) {
        Object viewed = ((DirectBuffer) toBeDestroyed).viewedBuffer();
        if (viewed instanceof ByteBuffer) {
            toBeDestroyed = (ByteBuffer) viewed;
        }
    }
    if (toBeDestroyed.isDirect()) {
        try {
            // fetch the buffer's Cleaner via reflection and run it
            Method cleanerMethod = toBeDestroyed.getClass().getMethod("cleaner");
            cleanerMethod.setAccessible(true);
            Object cleaner = cleanerMethod.invoke(toBeDestroyed);
            if (cleaner != null) {
                Method cleanMethod = cleaner.getClass().getMethod("clean");
                cleanMethod.setAccessible(true);
                cleanMethod.invoke(cleaner);
            }
        } catch (Exception e) {
            throw new RuntimeException("Could not destroy direct buffer", e);
        }
    }
}
[/java]

Maybe also interesting is this snippet for determining the direct memory usage without keeping track of each buffer:



[java]
import java.lang.reflect.Field;

// java.nio.Bits keeps a running total of reserved direct memory; read it via reflection.
Class<?> c = Class.forName("java.nio.Bits");
Field reservedMemory = c.getDeclaredField("reservedMemory");
reservedMemory.setAccessible(true);

long directMem;
synchronized (c) {
    directMem = (Long) reservedMemory.get(null);
}

long mb = 1024 * 1024;
System.out.println("\nDirect: " + directMem / mb + " mb");
[/java]
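As a side note (Java 7 and later only): the same information is available without reflection through the platform BufferPoolMXBean; a minimal sketch:

[java]
import java.lang.management.BufferPoolMXBean;
import java.lang.management.ManagementFactory;

for (BufferPoolMXBean pool : ManagementFactory.getPlatformMXBeans(BufferPoolMXBean.class)) {
    if ("direct".equals(pool.getName())) {
        System.out.println("Direct: " + pool.getMemoryUsed() / (1024 * 1024) + " mb"
                + " in " + pool.getCount() + " buffers");
    }
}
[/java]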

oO we’ve been talking about BufferUtils.destroyDirectBuffer() this whole thread ^^

:wink: Yep, I noticed, and I did not know of the existence of BufferUtils.destroyDirectBuffer() before reading this thread; however, I did write my own method based on the same web article @pspeed must have read.



Don't blame me if I am wrong :wink: - but I also think that your current method will not work for a DirectFloatBufferU. These objects have a private field called viewedBuffer containing the actual DirectByteBuffer with the required Cleaner.

That's why Paul added the view check, I guess… but that's good to know. We intend to use this more throughout the engine, but we'll have to gain more experience and find the most important points for applying it first. For manually created buffers you will always have to destroy them by hand, though, so if you start using this method you should be on the safe side.

Gah, he’s right… DirectFloatBufferU has a cleaner() method but it returns null. Not sure how I missed that the first time.



I should enhance the method. (P.S.: I did not write this method, normen did; I just tweaked it.)
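In case it helps, a hedged sketch of what such an enhancement might look like (this is not the actual engine code): try the buffer's own cleaner first and, when a typed view such as DirectFloatBufferU returns null, fall back to the byte buffer exposed by viewedBuffer(), with all sun.* access done through reflection so the code compiles without those classes on the classpath.

[java]
import java.lang.reflect.Method;
import java.nio.Buffer;

/** Hypothetical enhanced destroy; handles typed views whose cleaner() is null. */
public static void destroyDirectBuffer(Buffer toBeDestroyed) {
    if (!toBeDestroyed.isDirect()) {
        return;
    }
    try {
        Method cleanerMethod = toBeDestroyed.getClass().getMethod("cleaner");
        cleanerMethod.setAccessible(true);
        Object cleaner = cleanerMethod.invoke(toBeDestroyed);
        if (cleaner != null) {
            Method cleanMethod = cleaner.getClass().getMethod("clean");
            cleanMethod.setAccessible(true);
            cleanMethod.invoke(cleaner);
            return;
        }
        // No cleaner: probably a view buffer; destroy the byte buffer it wraps instead.
        Method viewedBufferMethod = toBeDestroyed.getClass().getMethod("viewedBuffer");
        viewedBufferMethod.setAccessible(true);
        Object viewedBuffer = viewedBufferMethod.invoke(toBeDestroyed);
        if (viewedBuffer instanceof Buffer) {
            destroyDirectBuffer((Buffer) viewedBuffer);
        }
    } catch (Exception ex) {
        // ignore: early freeing is only an optimisation, the GC will reclaim it eventually
    }
}
[/java]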

@normen You do not need to destroy them, the GC will do it for you (and to make sure of that, use -XX:MaxDirectMemorySize at startup as a JVM parameter). However, destroying them by hand keeps the memory usage low so the GC does not have to do all the work. Note that when the GC is working the game can freeze for a few milliseconds, so doing this destroying manually (in a separate thread) can prevent these freezes when you create a lot of garbage.

(You can also tweak the GC with things like -XX:-UseConcMarkSweepGC; however, that does not seem to do the trick for me.)



Maybe even better would be to reuse old byte buffers, like TempVars does, instead of creating new ones all the time, although I understand that this is difficult because of the different buffer sizes.

(The reason I was working on this is that we need to create model batches on the fly when moving around. Loading all batched models into memory requires too much memory on 32-bit systems.)
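On the reuse idea: a minimal sketch of a size-keyed pool for direct buffers (a hypothetical helper, not part of jME3 or TempVars), which hands back previously released buffers when one of the requested capacity is available:

[java]
import java.nio.ByteBuffer;
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.HashMap;
import java.util.Map;

/** Hypothetical size-keyed pool for direct ByteBuffers. Not thread-safe. */
public class DirectBufferPool {

    private final Map<Integer, Deque<ByteBuffer>> free = new HashMap<Integer, Deque<ByteBuffer>>();

    /** Returns a cleared buffer of exactly the requested capacity, reusing an old one if possible. */
    public ByteBuffer acquire(int capacity) {
        Deque<ByteBuffer> queue = free.get(capacity);
        if (queue != null && !queue.isEmpty()) {
            ByteBuffer reused = queue.pop();
            reused.clear();
            return reused;
        }
        return ByteBuffer.allocateDirect(capacity);
    }

    /** Gives a buffer back to the pool; the caller must not touch it afterwards. */
    public void release(ByteBuffer buffer) {
        Deque<ByteBuffer> queue = free.get(buffer.capacity());
        if (queue == null) {
            queue = new ArrayDeque<ByteBuffer>();
            free.put(buffer.capacity(), queue);
        }
        queue.push(buffer);
    }
}
[/java]

Exact-size matching only partially sidesteps the different-sizes problem; rounding requested capacities up to, say, the next power of two would improve reuse at the cost of some wasted space.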

I know that, that's why we play with this. Basically the GC isn't triggered by direct buffers but does clean them. Somewhere, sometime… which is mainly the issue we try to solve ^^ The thing is, you can OOM yourself very quickly; it's a quirk in how Java handles direct buffers really (see the comments in the javadoc of the method). The better performance is a nice side effect caused by the constant cleanups as opposed to occasional large cleanups :slight_smile:

Yep, I agree, the JVM does strange things with its direct memory under the hood. ;-(



For example, when I say the JVM may not use more than 600MB (with -Xmx) and direct mem is at 200MB, and I verify in the profiler that our app uses only between 300-400MB of the JVM's allocated 600MB, the Windows Resource Monitor still shows that java.exe can use up to 1.3GB. That's a large difference I cannot explain.



Even stranger is that on a 32-bit JVM this occurs more often than on a 64-bit JVM with exactly the same JVM settings. Maybe the JVM can allocate large memory chunks more easily in 64-bit?

@maximusgrey said:
@normen You do not need to destroy them, the GC will do it for you (and to make sure of that, use -XX:MaxDirectMemorySize at startup as a JVM parameter). However, destroying them by hand keeps the memory usage low so the GC does not have to do all the work. Note that when the GC is working the game can freeze for a few milliseconds, so doing this destroying manually (in a separate thread) can prevent these freezes when you create a lot of garbage.
(You can also tweak the GC with things like -XX:-UseConcMarkSweepGC; however, that does not seem to do the trick for me.)

Maybe even better would be to reuse old byte buffers, like TempVars does, instead of creating new ones all the time, although I understand that this is difficult because of the different buffer sizes.
(The reason I was working on this is that we need to create model batches on the fly when moving around. Loading all batched models into memory requires too much memory on 32-bit systems.)


MaxDirectMemorySize is used to INCREASE the amount of direct memory you are allowed to allocate. Normally it is pretty low.

As normen mentions, the issue is that the size of allocated direct memory does not at all (not even slightly) affect when GC is run. You could have used up all of your direct memory but if you still have plenty of JVM heap then full GC will never run and you will get out of memory errors trying to allocate direct buffers.

Your choices are: specifically manage your direct buffers, and/or set your regular JVM heap size small enough that GC runs more often.

Full GC can actually take as long as 2 seconds, and on average as long as half a second, and all threads are paused during that time (at least in Java 6 and below).
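To make that concrete, here is a small stand-alone illustration (not from the thread) of the failure mode being described: heap pressure stays near zero while dead direct buffers accumulate, so whether the native memory gets reclaimed in time depends entirely on when (or whether) a collection happens to run.

[java]
import java.nio.ByteBuffer;

// Illustration only: churns through direct buffers while the heap stays almost empty.
// Depending on heap size and direct-memory limit, this can end in a
// "Direct buffer memory" OutOfMemoryError long before any full GC is triggered.
public class DirectChurnDemo {
    public static void main(String[] args) {
        long requested = 0;
        while (true) {
            ByteBuffer scratch = ByteBuffer.allocateDirect(16 * 1024 * 1024); // 16 MB
            requested += scratch.capacity();
            // the reference dies at the end of this iteration; the JVM heap barely grows,
            // so nothing on the heap side prompts the collector to run
            System.out.println("Requested so far: " + (requested / (1024 * 1024)) + " mb");
        }
    }
}
[/java]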

Mythruna chews through direct memory like hard candy at a preschool. Counterintuitively (without the above explanation) _lowering_ my max heap fixed my out of memory problems. I operate with a 1 gig direct memory and 512m regular heap. I get GC pauses a lot more often but I don't crash hard from OutOfMemory errors.

...when I get a chance to use the delete buffer stuff then I might be able to go back to having a larger regular heap.

@maximusgrey said:
Yep, I agree, the JVM does strange things with its direct memory under the hood. ;-(

For example, when I say the JVM may not use more than 600MB (with -Xmx) and direct mem is at 200MB, and I verify in the profiler that our app uses only between 300-400MB of the JVM's allocated 600MB, the Windows Resource Monitor still shows that java.exe can use up to 1.3GB. That's a large difference I cannot explain.

Even stranger is that on a 32-bit JVM this occurs more often than on a 64-bit JVM with exactly the same JVM settings. Maybe the JVM can allocate large memory chunks more easily in 64-bit?


Just in case this isn't known: -Xmx and the max direct memory size control two entirely different pools of memory, each with its own additional book-keeping overhead. Though as I understand it, the direct memory pool is made of larger chunks that are then partitioned out.

Either way, it's easy for a regular 512m JVM heap (-Xmx512m) to take 800 meg or more just because of the additional book-keeping the JVM does with its memory management. How much extra largely depends on how many objects you have, how small they are, etc.

I imagine the direct heap can be similarly fragmented.

I think your Mythruna has the same problems we are having, and I noticed the same behaviour you described with the GC (it runs when the JVM is running out of JVM mem, not direct mem). The GC can also do things in parallel (-XX:+UseParallelGC) depending on what's being collected, but that does not make it much better.



As for your second reply (yes, I know the difference between JVM and direct mem), I also think that this has something to do with fragmentation overhead (and that a 64-bit JVM handles its chunks better). It is, however, sometimes frustrating that these things happen (I do get JVM crashes due to out of mem when the JVM only uses 600MB and direct mem 200MB) and you cannot influence them as a developer.



Anyway, let me know if you find any new revelations on this subject :wink:

@maximusgrey said:
I think your Mythruna has the same problems we are having, and I noticed the same behaviour you described with the GC (it runs when the JVM is running out of JVM mem, not direct mem).


Yes, and that's why I set the JVM heap relatively small. When it was 1 gig, GC hardly ever ran and I'd run out of direct memory.