Mem Problem, LWJGL related?

sup all,

I am making a scene:

and the memory usage increases to 1.2GB, then I get an org.lwjgl.OpenGL.OutOfMemory error.

I'm looking into it now.


Well, I can't help you with this specific issue, but I know something that might. Quest has released a version of their JProbe profiling tool as freeware. As long as you only want to profile local applications, it should do fine for finding memory leaks and time consumers. Be sure to download the freeware license, too.

If you’re using eclipse, you can fully integrate JProbe with Eclipse, IIRC.

Instead of panicking over it, get a profile as batman suggested and find out where it is occurring. If it's rising from 100MB to over a GB, it's going to be a leak somewhere. A profiler will probably tell you where that kind of leak is the first time you run it. You have developer access, so make use of it.

It turns out LWJGL didn't like my card, so I'm buying a new one today.

What card did you have?

Radeon 9200. The problem is that it's 256MB; it's crap!

I'm not so sure it's an LWJGL issue, unless of course all your LWJGL apps do the same thing. When did the out-of-memory errors start occurring? I notice you added fire to the trees (looks great, BTW); maybe there are problems with the particles, etc.? You didn't mention the problem in earlier posts about your game, so it could very well be a jME issue.

256MB of RAM is crap? Isn't that the most you can get in a card right now? Mine only has 128.

I suggest the profiler as well. Your card should be good enough for jME, unless you’re doing like 50 million things at once.

I wasn't referring to the amount of RAM being crap. I meant the card in general is crap.

The reason I didn't notice it before is that I never left the game running for long. I just tested that it was working, then closed the app.

This time, nature called and I left the game running. I came back, saw 1.2GB of memory in use, and the game crashed in front of me with an org.lwjgl.OpenGL.OutOfMemoryError.

I bought some RAM the other day, 256MB of it. That helped a bit with Eclipse and such. I had adjusted the page file to match the new RAM, so I thought that could be it, but I changed it back to no avail.

Now I know, thanks to renanse, that this is happening only on my PC (stupid little shit). He showed me the memory profile from JProbe, and in his the GC was being called; in mine it wasn't.

Newer drivers solved the render-to-texture problem, but not this. Renanse mentioned that the texture data might be getting pushed to the card over and over again. So I'm leaning towards this being an LWJGL issue.
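If renanse's theory is right, the fix pattern would be to upload each texture once and reuse its id, rather than pushing the data to the card every frame. Here's a minimal, GL-free sketch of that caching pattern (`TextureCache` and `uploadToCard` are hypothetical names standing in for the real glGenTextures/glTexImage2D calls, not jME or LWJGL code):

```java
import java.util.HashMap;
import java.util.Map;

// Sketch: upload each texture to the card once, then hand back the
// cached id on every later request. uploadToCard() is a stand-in for
// the real glGenTextures/glTexImage2D upload.
public class TextureCache {
    private final Map<String, Integer> idsByName = new HashMap<String, Integer>();
    private int uploads = 0; // counts real uploads, for checking

    public int getTextureId(String name) {
        Integer id = idsByName.get(name);
        if (id == null) {            // first request: upload once
            id = uploadToCard(name);
            idsByName.put(name, id);
        }
        return id;                   // later requests: no re-upload
    }

    private int uploadToCard(String name) {
        return ++uploads; // pretend the driver handed back a texture id
    }

    public int getUploadCount() {
        return uploads;
    }
}
```

If the rendering path re-uploads instead of caching, driver-side memory grows every frame, which is exactly the kind of growth Task Manager would show while the Java heap looks fine.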

I'm going to go and find some LWJGL demos. Or even reinstall Java.

Sorry for the long post.


Do you have any weird GC flags set in your Eclipse run configuration? Perhaps something has disabled the garbage collector. Try running it from the command line with no special flags as a baseline test.

I used to have -Xmx256m -Xms64m, but renanse said that's no good, because all the Java threads would need 64MB, so that's gone.
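If you're not sure which flags actually took effect after a change like that, the VM can report its own limits. A quick sanity check using only the standard library (`HeapCheck` is just a name I made up):

```java
// Print the heap limits the VM actually started with, so you can
// confirm whether -Xmx/-Xms settings are being picked up at all.
public class HeapCheck {
    public static void main(String[] args) {
        Runtime rt = Runtime.getRuntime();
        System.out.println("max heap:   " + (rt.maxMemory() / (1024 * 1024)) + " MB");
        System.out.println("total heap: " + (rt.totalMemory() / (1024 * 1024)) + " MB");
        System.out.println("free heap:  " + (rt.freeMemory() / (1024 * 1024)) + " MB");
    }
}
```

Run it once from Eclipse and once from the plain command line; if the max heap numbers differ, Eclipse is injecting flags.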

I just tested Java Cool Dude’s tests and they work just fine.

GOD DIGGET! This is getting annoying!

I'm going to make a new version and upload it to myjavaserver; that way, if you want, you can test it too.

I would also run it from Web Start, so we're both on the same page.

Right, upload complete. It's a 7.7MB download, so go easy. The problem is still there for me.

I'm going to reinstall Java; it can't hurt.

DP, I got the same problem; it continued to eat up memory until it ran out of swap space.

This only happened when the burning tree was visible, so I'm leaning towards it being a particle system issue. This is very unscientific though, just a high-level observation.

DP, I got the same result, too. It seems the game is allocating about 2-3MB of RAM each second! On my system it always happened, no matter whether the burning tree was visible or not.

I strongly suggest you use the profiler, since you have all the sources. Maybe there is a memory leak in LWJGL or in jME; either way, it should be fixed quickly.

I thought it was the tree too, but I turned the particle system off and the same thing happened.

On renanse's PC, it's all gravy. I downloaded the free version of JProbe, and renanse tried to walk me through how to profile and interpret the results, but mine were so screwed that they literally made no sense whatsoever. And the Track Object Allocation option is disabled, so I can't profile it.

It's really, really strange.

Btw, if you take the zombies and the player out, the memory leak goes away. But the trees are still there, so it's not a problem with the SAXReader; it's a problem with the KeyframeController. I'm investigating now.

It's not an LWJGL problem, as I've tried the demos available at JGO and they're fine. I tried jME's terrain test, and that's fine. I tried jME's particle system, and that's fine too (that's what I'm using, btw).

The strange thing is that the OpenGL.OutOfMemory error was thrown before a java.lang.OutOfMemoryError was.

There must be something strange going on.

Investigating further… I'll keep you posted.


Just shooting in the dark here, but what about adjusting the texture color depths down?

My observation on this is that whatever the bug in your game is, it seems to work and stabilize in a Win2000 environment but not in a WinXP environment. But that's just based on my own platform being Win2k and Mojo and DP being WinXP.

Also DP, it would be helpful if every second or so you printed out what Java thinks the memory allocation looks like in your game. Something like:

    public static String describeMemory() {
        StringBuffer memSB = new StringBuffer();
        Runtime rt = Runtime.getRuntime();
        memSB.append("Total Memory in VM: " + rt.totalMemory() + " bytes  ");
        memSB.append("Free Memory in VM: " + rt.freeMemory() + " bytes");
        return memSB.toString();
    }

Then instead of people trying to guess what's going on from the rather inaccurate Task Manager, we can get some real numbers.
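To get that once-a-second printout without blocking the game, a daemon thread wrapped around a method like the one above would do. A sketch using only the standard library (`MemoryLogger` is just a name I made up):

```java
// Sketch: print VM memory stats once a second on a daemon thread,
// so the logging dies with the game instead of keeping the VM alive.
public class MemoryLogger {
    public static String describeMemory() {
        Runtime rt = Runtime.getRuntime();
        return "Total Memory in VM: " + rt.totalMemory() + " bytes  "
             + "Free Memory in VM: " + rt.freeMemory() + " bytes";
    }

    public static Thread start() {
        Thread t = new Thread(new Runnable() {
            public void run() {
                while (true) {
                    System.out.println(describeMemory());
                    try {
                        Thread.sleep(1000); // once a second, roughly
                    } catch (InterruptedException e) {
                        return; // stop logging when interrupted
                    }
                }
            }
        });
        t.setDaemon(true); // must be set before start()
        t.start();
        return t;
    }
}
```

Call `MemoryLogger.start()` once at game startup; note these numbers only cover the Java heap, so a native-side leak would still only show in Task Manager.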

How are you using KeyframeController? If your model isn't moving, it should be setActive(false), which would exit update instantly.

If your repeat type is RT_CLAMP, which means the model animates from X → Y then stops, then setActive(false) is called from findFrame(). If, to stop your animation, you just setMinTime and setMaxTime to the same value, it would appear not to move, but it would still try to animate between the two times.

The way I envision it working is something like:

    if (walkkey) {
        setCurTime(0); // Would have to add this
    } else if (attackkey) {
        setCurTime(5); // Would have to add this
    } else ...

update in KeyframeController doesn't create anything new itself, but it does call updateNormalBuffer(), which (??) can allocate new memory, though it shouldn't if the model stays the same.
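One classic cause of exactly this symptom (Task Manager climbing while the Java heap and GC graphs look fine) is allocating a fresh direct NIO buffer every frame instead of reusing one: direct buffers live outside the Java heap, so the growth is invisible to heap profiles. Whether updateNormalBuffer() actually does this I can't say, but the reuse pattern looks like this sketch (`NormalBufferHolder` is a hypothetical name, not jME code):

```java
import java.nio.ByteBuffer;
import java.nio.ByteOrder;
import java.nio.FloatBuffer;

// Sketch: keep one direct buffer per mesh and reuse it each frame,
// growing it only when the mesh needs more room. Allocating a new
// direct buffer every frame leaks native memory past the GC's view.
public class NormalBufferHolder {
    private FloatBuffer normals;

    public FloatBuffer getNormalBuffer(int floatsNeeded) {
        if (normals == null || normals.capacity() < floatsNeeded) {
            normals = ByteBuffer.allocateDirect(floatsNeeded * 4) // 4 bytes per float
                                .order(ByteOrder.nativeOrder())
                                .asFloatBuffer();
        }
        normals.clear(); // rewind and reuse the same native allocation
        return normals;
    }
}
```

If a profiler ever shows per-frame allocations of direct byte buffers ticking up, this is the shape of the fix.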

WTF is going on? A new version is up with renanse's suggestion, and it shows that the GC is running fine. But look at Task Manager, and the problem is there!

So it looks like an LWJGL problem from here.

Any ideas?


I wouldn't jump on LWJGL just yet, especially not without some kind of tangible proof. :slight_smile: Does anyone know of a tool that describes what makes up the memory usage of a given Windows XP task (in this case java.exe)?