Memory issues in renderManager.render

We’re basically just looping through the player list and interpolating each player’s position based on server game states. However, the issue remains even with a still camera and with all movement disabled.
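As an aside, the interpolation step itself can be written without any per-frame allocation; here is a generic sketch of the idea (class and method names are made up for illustration, this is not the actual game code):

```java
public class Lerp {

    // Allocation-free linear interpolation into a preallocated output array:
    // out = from + (to - from) * t. Reusing "out" each frame avoids creating
    // garbage during the per-player position update.
    public static void lerp(float[] from, float[] to, float t, float[] out) {
        for (int i = 0; i < out.length; i++) {
            out[i] = from[i] + (to[i] - from[i]) * t;
        }
    }

    public static void main(String[] args) {
        float[] out = new float[3];
        lerp(new float[]{0f, 0f, 0f}, new float[]{10f, 0f, 0f}, 0.5f, out);
        System.out.println(out[0]); // prints 5.0
    }
}
```

The same reuse pattern applies to JME3's `Vector3f.interpolate`-style calls: writing into an existing vector instead of creating a new one each frame keeps the update loop garbage-free.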

That is really bizarre. I can’t help thinking we’re missing something obvious. It sounds like I’m doing a lot more per frame without issue.



For reference: in my case (as mentioned before), I have about 499 objects displayed with no camera movement. I have enough geometry on screen that my frame rate is down near 60 FPS on this machine (just a cube on its own might run upwards of 2000 FPS). I’m updating sun/moon/clouds/star positions and regenerating the texture coordinates of my sky mesh. I have an animation thread running that is constantly performing collision detection against my scene (even if the camera isn’t moving). I update a handful of material uniforms once per frame.



…and still, unless I start paging in new terrain, my memory stays nearly constant… with a 10-20 KB fluctuation. (My memory figures come from the Windows Task Manager, so they include any direct memory buffers my application grabs… which the JVM heap stats won’t really tell you.)
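To illustrate why Task Manager and JVM heap stats can disagree: direct buffers are allocated in native memory, outside the Java heap. A small demonstration (hypothetical example, not code from this thread):

```java
import java.nio.ByteBuffer;

public class DirectVsHeap {
    public static void main(String[] args) {
        Runtime rt = Runtime.getRuntime();
        long before = rt.totalMemory() - rt.freeMemory();

        // 64 MB of direct (native) memory: the OS-level process size grows,
        // but the heap only records the small ByteBuffer wrapper object.
        ByteBuffer direct = ByteBuffer.allocateDirect(64 * 1024 * 1024);

        long after = rt.totalMemory() - rt.freeMemory();
        System.out.println("heap delta: " + (after - before) / 1024 + " KB");
        System.out.println("direct capacity: " + direct.capacity() / 1024 + " KB");
    }
}
```

The heap delta reported will be tiny compared to the 65536 KB the task manager sees, which is why Task Manager numbers include things the heap stats never show.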



So it feels like we’re missing something. /me scratches head.

Yeah, I’ve fixed several issues like this within my own code, but inserting my memory-check functions clearly shows the growth happening in an internal JME3 call. Hopefully a core dev has seen something like this or knows the obvious thing I’m doing wrong :slight_smile:

Yeah, but it’s a needle in a haystack if we can’t narrow down what the difference between working and not working is in application code. :wink:



And you’re sure you had everything disabled? You don’t have any particle effects or anything like that?



Do you do your scene updates (player geometry position setting, etc.) in simpleUpdate() or something like it? I’m wondering if you could just return early from it, to be sure.



I just find this sort of thing puzzling… and puzzles are meant to be solved. :slight_smile:

I do the movement update in simpleUpdate, but I commented out the line that repositions the player and disabled particles. No dice.

So, with an essentially empty simpleUpdate() and no particles running… you still get memory churn.



Do you see the same memory churn with a basic test app with just a cube? (I forget how that ended up.)

I still get the issue, but that’s a good way of simplifying it…



I run this:



https://wiki.jmonkeyengine.org/legacy/doku.php/jme3:beginner:hello_simpleapplication



This grows at about 1 MB per second…



Make that 20 boxes instead of 1 and it grows VERY fast, just like in my app.

Ok… so this entire time I’ve been looking at different things.



How are you determining how much time is spent in GC? Or is it just the up-down churn that concerns you? Are you doing verbose GC logging, e.g. with: -verbose:gc -Xloggc:gc.log -XX:+PrintGCTimeStamps
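Besides log files, cumulative GC time can also be read at runtime through the standard management beans; a minimal probe (hypothetical class name, not code from this thread):

```java
import java.lang.management.GarbageCollectorMXBean;
import java.lang.management.ManagementFactory;

public class GcProbe {

    // Sum the cumulative collection time (ms) over all garbage collectors.
    // getCollectionTime() returns -1 when the value is undefined, so those
    // collectors are skipped.
    public static long totalGcTimeMillis() {
        long total = 0;
        for (GarbageCollectorMXBean gc : ManagementFactory.getGarbageCollectorMXBeans()) {
            long t = gc.getCollectionTime();
            if (t > 0) {
                total += t;
            }
        }
        return total;
    }

    public static void main(String[] args) {
        System.out.println("GC time so far: " + totalGcTimeMillis() + " ms");
    }
}
```

Sampling this once per second inside the game loop gives actual time-in-GC, rather than inferring it from memory deltas.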

I modified the HelloJME3 in that link as follows:



[java]
public void simpleInitApp() {
    for( int i = 0; i < 10; i++ ) {
        Box b = new Box(Vector3f.ZERO, 1, 1, 1);
        Geometry geom = new Geometry("Box", b);
        Material mat = new Material(assetManager, "Common/MatDefs/Misc/SolidColor.j3md");
        mat.setColor("m_Color", ColorRGBA.Blue);
        geom.setMaterial(mat);
        geom.setLocalTranslation( i, 0, 0 );
        rootNode.attachChild(geom);
    }
}
[/java]



I turned on verbose GC logging and get lots and lots of:

[java]
0.375: [GC 4416K->1581K(15872K), 0.0074546 secs]
45.482: [GC 5997K->1773K(15872K), 0.0087257 secs]
45.758: [GC 6189K->1902K(15872K), 0.0054496 secs]
46.053: [GC 6318K->2055K(15872K), 0.0030456 secs]
46.288: [GC 6471K->2050K(15872K), 0.0019528 secs]
46.433: [GC 6466K->2059K(15872K), 0.0015421 secs]
46.572: [GC 6475K->2046K(15872K), 0.0011300 secs]
46.721: [GC 6462K->2052K(15872K), 0.0013641 secs]
46.859: [GC 6468K->2060K(15872K), 0.0012761 secs]
47.006: [GC 6476K->2041K(15872K), 0.0012454 secs]

163.622: [GC 6517K->2105K(15872K), 0.0004805 secs]
163.740: [GC 6521K->2111K(15872K), 0.0004716 secs]
163.850: [GC 6527K->2120K(15872K), 0.0006453 secs]
163.964: [GC 6536K->2102K(15872K), 0.0005093 secs]
164.074: [GC 6518K->2110K(15872K), 0.0004975 secs]
164.184: [GC 6526K->2118K(15872K), 0.0005202 secs]
164.313: [GC 6534K->2102K(15872K), 0.0004931 secs]
164.428: [GC 6518K->2108K(15872K), 0.0004939 secs]
164.540: [GC 6524K->2116K(15872K), 0.0004833 secs]
164.659: [GC 6532K->2113K(15872K), 0.0004886 secs]
[/java]



So, it’s growing but not very fast… normal, I’d say.



I should note, now that I’m actually testing the same thing (sorry): I am running a fixed BitmapText that makes StatsView much less churny. Could that be the difference?



In the HelloJME3, if you turn off stats does the memory still churn?

Also note: that was running at over 1000 FPS and only dipped below that briefly before I stopped it. I never saw a full GC in the entire log, so the dip may have been caused by other things running on my system.

While I’m caveating: the numbers in the gc.log are heap sizes in KB… so maybe the growth is suspicious. Rerunning with details turned on.

Running this:



[java]
/*
 * To change this template, choose Tools | Templates
 * and open the template in the editor.
 */
package jme3test.helloworld;

import com.jme3.app.SimpleApplication;
import com.jme3.material.Material;
import com.jme3.math.Vector3f;
import com.jme3.scene.Geometry;
import com.jme3.scene.shape.Box;
import com.jme3.math.ColorRGBA;

/**
 * Sample 1 - how to get started with the most simple JME 3 application.
 * Display a blue 3D cube and view from all sides by
 * moving the mouse and pressing the WASD keys.
 */
public class HelloJME3 extends SimpleApplication {

    public static void main(String[] args){
        HelloJME3 app = new HelloJME3();
        app.start();
    }

    @Override
    public void simpleInitApp() {
        for( int i = -10; i < 10; i++ ) {
            Box b = new Box(Vector3f.ZERO, 1, 1, 1);
            Geometry geom = new Geometry("Box", b);
            Material mat = new Material(assetManager, "Common/MatDefs/Misc/SolidColor.j3md");
            mat.setColor("m_Color", ColorRGBA.Blue);
            geom.setMaterial(mat);
            geom.setLocalTranslation( i, 0, 0 );
            rootNode.attachChild(geom);
        }
    }

    @Override
    public void simpleUpdate(float tpf) {
        printCheckMem("UsedMem");
        // System.out.println(Runtime.getRuntime().freeMemory());
    }

    private static float oldUsedMem = 0;
    private static float newUsedMem = 0;
    private static float kb = 1024;
    private static long oldSysTime = System.currentTimeMillis();
    private static long newSysTime = 0;

    public void printCheckMem(String label) {
        newUsedMem = (Runtime.getRuntime().totalMemory() - Runtime.getRuntime().freeMemory()) / kb;

        ///// LOOK HERE
        if (Math.abs(newUsedMem - oldUsedMem) > 0)
        // if ((newUsedMem - oldUsedMem) < 0)
        {
            newSysTime = System.currentTimeMillis();
            System.out.println(label + " " + newUsedMem + " memdiff "
                + (newUsedMem - oldUsedMem) + " timediff " + (newSysTime - oldSysTime));
            oldSysTime = newSysTime;
        }
        oldUsedMem = newUsedMem;
    }
}
[/java]



I get this:

[java]
UsedMem 6497.789 memdiff 33.46875 timediff 0
UsedMem 2158.4062 memdiff -4339.383 timediff 0
UsedMem 2243.539 memdiff 85.13281 timediff 0
UsedMem 2328.6719 memdiff 85.13281 timediff 0
UsedMem 2413.8047 memdiff 85.13281 timediff 0
UsedMem 2498.9375 memdiff 85.13281 timediff 15
UsedMem 2584.0703 memdiff 85.13281 timediff 0
UsedMem 2669.2031 memdiff 85.13281 timediff 0
UsedMem 2754.336 memdiff 85.13281 timediff 0
UsedMem 2839.4688 memdiff 85.13281 timediff 0
UsedMem 2924.6016 memdiff 85.13281 timediff 0
UsedMem 3009.7344 memdiff 85.13281 timediff 16
UsedMem 3094.8672 memdiff 85.13281 timediff 0
UsedMem 3180.0 memdiff 85.13281 timediff 0
UsedMem 3265.1328 memdiff 85.13281 timediff 0
UsedMem 3350.2656 memdiff 85.13281 timediff 0
UsedMem 3435.3984 memdiff 85.13281 timediff 0
UsedMem 3520.5312 memdiff 85.13281 timediff 15
UsedMem 3605.664 memdiff 85.13281 timediff 0
UsedMem 3690.7969 memdiff 85.13281 timediff 0
UsedMem 3775.9297 memdiff 85.13281 timediff 0
UsedMem 3861.0625 memdiff 85.13281 timediff 0
UsedMem 3946.1953 memdiff 85.13281 timediff 0
UsedMem 4031.3281 memdiff 85.13281 timediff 16
UsedMem 4116.461 memdiff 85.13281 timediff 0
UsedMem 4201.5938 memdiff 85.13281 timediff 0
UsedMem 4286.7266 memdiff 85.13281 timediff 0
UsedMem 4371.8594 memdiff 85.13281 timediff 0
UsedMem 4456.992 memdiff 85.13281 timediff 16
UsedMem 4542.125 memdiff 85.13281 timediff 0
UsedMem 4627.258 memdiff 85.13281 timediff 0
UsedMem 4712.3906 memdiff 85.13281 timediff 0
UsedMem 4797.5234 memdiff 85.13281 timediff 0
UsedMem 4882.6562 memdiff 85.13281 timediff 0
UsedMem 4967.789 memdiff 85.13281 timediff 0
UsedMem 5052.922 memdiff 85.13281 timediff 15
UsedMem 5138.0547 memdiff 85.13281 timediff 0
UsedMem 5223.1875 memdiff 85.13281 timediff 0
UsedMem 5308.3203 memdiff 85.13281 timediff 0
UsedMem 5393.453 memdiff 85.13281 timediff 0
UsedMem 5478.586 memdiff 85.13281 timediff 16
UsedMem 5563.7188 memdiff 85.13281 timediff 0
UsedMem 5648.8516 memdiff 85.13281 timediff 0
UsedMem 5733.9844 memdiff 85.13281 timediff 0
UsedMem 5819.117 memdiff 85.13281 timediff 0
UsedMem 5904.25 memdiff 85.13281 timediff 0
UsedMem 5989.383 memdiff 85.13281 timediff 0
UsedMem 6074.5156 memdiff 85.13281 timediff 16
UsedMem 6159.6484 memdiff 85.13281 timediff 0
UsedMem 6244.7812 memdiff 85.13281 timediff 0
UsedMem 6329.914 memdiff 85.13281 timediff 15
UsedMem 6415.047 memdiff 85.13281 timediff 0
UsedMem 6489.25 memdiff 74.203125 timediff 0
UsedMem 2171.6484 memdiff -4317.6016 timediff 0
UsedMem 2259.164 memdiff 87.515625 timediff 0
UsedMem 2346.6797 memdiff 87.515625 timediff 16
UsedMem 2434.1953 memdiff 87.515625 timediff 0
UsedMem 2521.711 memdiff 87.515625 timediff 0
UsedMem 2609.2266 memdiff 87.515625 timediff 0
UsedMem 2696.7422 memdiff 87.515625 timediff 0
UsedMem 2784.3438 memdiff 87.60156 timediff 0
[/java]

…over and over and over.



And it’s important to note that the resolution of System.currentTimeMillis() is pretty large on most platforms. My guess is 15 ms on mine. I’m going to try again with nanoTime().
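The claimed ~15 ms timer resolution can be checked empirically by spinning until the reported value changes; a small probe (hypothetical helper, not code from this thread):

```java
public class TimerGranularity {

    // Estimate the update granularity of System.currentTimeMillis() by
    // waiting for two consecutive value changes and measuring the gap.
    public static long estimateGranularityMillis() {
        long t = System.currentTimeMillis();
        long changed;
        do { changed = System.currentTimeMillis(); } while (changed == t);
        long start = changed;
        do { changed = System.currentTimeMillis(); } while (changed == start);
        return changed - start; // ~1 ms on most modern JVMs, ~15 ms on older Windows timers
    }

    public static void main(String[] args) {
        System.out.println("currentTimeMillis granularity ~"
            + estimateGranularityMillis() + " ms");
    }
}
```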

Using System.nanoTime() for more accurate timings (timings still presented in ms):

[java]
UsedMem 4228.711 memdiff 92.57031 timediff 2.425168
UsedMem 4321.2812 memdiff 92.57031 timediff 2.40282
UsedMem 4413.8516 memdiff 92.57031 timediff 1.813638
UsedMem 4506.422 memdiff 92.57031 timediff 2.42433
UsedMem 4598.992 memdiff 92.57031 timediff 1.505779
UsedMem 4691.5625 memdiff 92.57031 timediff 2.165359
UsedMem 4784.133 memdiff 92.57031 timediff 2.498082
UsedMem 4876.703 memdiff 92.57031 timediff 1.473651
UsedMem 4969.2734 memdiff 92.57031 timediff 2.245258
UsedMem 5061.8438 memdiff 92.57031 timediff 2.237714
UsedMem 5154.414 memdiff 92.57031 timediff 1.467505
UsedMem 5246.9844 memdiff 92.57031 timediff 2.29722
UsedMem 5339.5547 memdiff 92.57031 timediff 2.187987
UsedMem 5432.125 memdiff 92.57031 timediff 1.46974
UsedMem 5524.6953 memdiff 92.57031 timediff 2.841702
UsedMem 5617.2656 memdiff 92.57031 timediff 1.993829
UsedMem 5709.836 memdiff 92.57031 timediff 3.174984
UsedMem 5802.4062 memdiff 92.57031 timediff 2.827175
UsedMem 5894.9766 memdiff 92.57031 timediff 1.473372
UsedMem 5987.547 memdiff 92.57031 timediff 3.513575
UsedMem 6080.117 memdiff 92.57031 timediff 2.432991
UsedMem 6172.6875 memdiff 92.57031 timediff 1.448787
UsedMem 6265.258 memdiff 92.57031 timediff 2.199442
UsedMem 6357.828 memdiff 92.57031 timediff 1.687924
UsedMem 6450.3984 memdiff 92.57031 timediff 2.325435
UsedMem 6515.539 memdiff 65.140625 timediff 2.38047
UsedMem 2177.8047 memdiff -4337.7344 timediff 2.483556
UsedMem 2270.375 memdiff 92.57031 timediff 1.478121
UsedMem 2362.9453 memdiff 92.57031 timediff 5.898794
UsedMem 2455.5156 memdiff 92.57031 timediff 1.969804
UsedMem 2548.086 memdiff 92.57031 timediff 3.19873
UsedMem 2640.6562 memdiff 92.57031 timediff 2.129601
UsedMem 2733.2266 memdiff 92.57031 timediff 1.664736
UsedMem 2825.7969 memdiff 92.57031 timediff 3.037258
UsedMem 2918.3672 memdiff 92.57031 timediff 2.683022
UsedMem 3010.9375 memdiff 92.57031 timediff 2.119264
UsedMem 3103.5078 memdiff 92.57031 timediff 2.455061
UsedMem 3196.0781 memdiff 92.57031 timediff 1.934045
UsedMem 3288.6484 memdiff 92.57031 timediff 2.889473
UsedMem 3381.2188 memdiff 92.57031 timediff 2.216762
UsedMem 3473.8594 memdiff 92.640625 timediff 2.321524
UsedMem 3566.4297 memdiff 92.57031 timediff 2.230731
UsedMem 3659.0 memdiff 92.57031 timediff 2.557308
UsedMem 3751.5703 memdiff 92.57031 timediff 1.840178
UsedMem 3844.1406 memdiff 92.57031 timediff 2.197486
UsedMem 3936.711 memdiff 92.57031 timediff 2.913219
UsedMem 4029.2812 memdiff 92.57031 timediff 1.939632
[/java]



I should note that according to the GC logs, garbage collection never took longer than a millisecond except one time… and generally closer to half a millisecond and pretty constant.



For total clarity, the code I changed:

[java]
private static long oldSysTime = System.nanoTime();
private static long newSysTime = 0;

public void printCheckMem(String label) {
    newUsedMem = (Runtime.getRuntime().totalMemory() - Runtime.getRuntime().freeMemory()) / kb;

    ///// LOOK HERE
    if (Math.abs(newUsedMem - oldUsedMem) > 0)
    // if ((newUsedMem - oldUsedMem) < 0)
    {
        newSysTime = System.nanoTime();
        System.out.println(label + " " + newUsedMem + " memdiff "
            + (newUsedMem - oldUsedMem) + " timediff " + ((newSysTime - oldSysTime)) / 1000000.0);
        oldSysTime = newSysTime;
    }
    oldUsedMem = newUsedMem;
}
[/java]

Ah, so you’re also seeing it grow very quickly, but your numbers seem to be different for some reason.



I get something like this running your code:



UsedMem 53942.35 memdiff 856.40625 timediff 11

UsedMem 54798.82 memdiff 856.46875 timediff 12

UsedMem 55296.14 memdiff 497.3203 timediff 12

UsedMem 4774.5312 memdiff -50521.61 timediff 10

UsedMem 5660.9688 memdiff 886.4375 timediff 11

UsedMem 6547.4062 memdiff 886.4375 timediff 12

UsedMem 7433.758 memdiff 886.35156 timediff 11





In the code for my game, the jumps on GC are much higher. In the example below, I’m outputting the player model’s movement along the X axis, along with the time diff. You can see the time diff take a huge jump when GC occurs (I modified printCheckMem to only output on GC):



x diff: -0.955204 time diff: 11

x diff: -0.9511337 time diff: 11

x diff: -0.9511337 time diff: 11

x diff: -8.493099 time diff: 99

mem: 5944.742 diff -45751.28 total 188352.0

x diff: -0.17022324 time diff: 2

x diff: -0.085113525 time diff: 1

x diff: -0.93621826 time diff: 11

x diff: -0.9362221 time diff: 11

Very interesting… I’m seeing output like below, which shows a 37 ms lag when GC occurs…



x diff: 1.2473297 time diff: 11

x diff: 1.247345 time diff: 11

x diff: 1.2473297 time diff: 11

x diff: 1.2473297 time diff: 11

x diff: 4.228958 time diff: 37

mem: 4308.328 diff -47211.67 total 188352.0

x diff: 0.22901917 time diff: 2

x diff: 0.22901917 time diff: 2

x diff: 1.2595673 time diff: 11

x diff: 1.2595825 time diff: 11





Yet my GC log looks like this:



5.765: [GC 49216K->2304K(188352K), 0.0033376 secs]

16.547: [GC 51520K->3034K(188352K), 0.0025306 secs]





Now I’m really confused :slight_smile:

Well, that’s why we’re getting to the bottom of this. My suspicion is that the time you detect as GC isn’t really GC at all. :slight_smile:



…it may be something related. Or in this case, effect and cause could be reversed. Perhaps it is not the GC taking up the time but the thing taking up the time that’s causing the big GC.



If my BitmapText fixes make it into whatever nightly (I don’t know how that works) then maybe that will even the playing field some more. I wouldn’t think it could be so severe in the simple test, but maybe. (And yeah, I know you see it without stats displayed, but we haven’t run the stripped-down test that way yet [both of us, I mean] so I’m still holding out hope :))

So, as it turns out, I was actually running this test against a nightly from 2/28. I just upgraded to my locally built JME3 with the BitmapText fix. The results are actually better (to my eyes) than they were before:



[java]
UsedMem 6478.5547 memdiff 86.09375 timediff 5.347886
UsedMem 2154.7422 memdiff -4323.8125 timediff 4.529626
UsedMem 2246.8672 memdiff 92.125 timediff 5.654909
UsedMem 2338.9922 memdiff 92.125 timediff 4.716801
UsedMem 2431.1172 memdiff 92.125 timediff 3.790984
UsedMem 2523.2422 memdiff 92.125 timediff 4.693613
UsedMem 2615.3672 memdiff 92.125 timediff 4.661766
UsedMem 2707.4922 memdiff 92.125 timediff 15.857043
UsedMem 2799.6172 memdiff 92.125 timediff 4.552534
UsedMem 2891.7422 memdiff 92.125 timediff 5.112941
UsedMem 2983.8672 memdiff 92.125 timediff 3.532851
UsedMem 3075.9922 memdiff 92.125 timediff 4.713169
UsedMem 3168.1172 memdiff 92.125 timediff 4.654502
UsedMem 3260.2422 memdiff 92.125 timediff 4.979962
UsedMem 3352.3672 memdiff 92.125 timediff 4.791671
UsedMem 3444.4922 memdiff 92.125 timediff 3.901613
UsedMem 3536.6172 memdiff 92.125 timediff 4.445817
UsedMem 3628.8281 memdiff 92.21094 timediff 4.900902
UsedMem 3720.9531 memdiff 92.125 timediff 4.532978
UsedMem 3813.0781 memdiff 92.125 timediff 4.370947
UsedMem 3905.2031 memdiff 92.125 timediff 5.092826
UsedMem 3997.3281 memdiff 92.125 timediff 4.463975
UsedMem 4089.4531 memdiff 92.125 timediff 3.957486
UsedMem 4181.578 memdiff 92.125 timediff 5.418286
UsedMem 4273.703 memdiff 92.125 timediff 4.087112
UsedMem 4365.828 memdiff 92.125 timediff 3.91167
UsedMem 4457.953 memdiff 92.125 timediff 3.823671
UsedMem 4550.078 memdiff 92.125 timediff 5.127188
UsedMem 4642.203 memdiff 92.125 timediff 3.864457
UsedMem 4734.328 memdiff 92.125 timediff 4.408661
UsedMem 4826.5234 memdiff 92.19531 timediff 4.095493
UsedMem 4918.6484 memdiff 92.125 timediff 4.270934
UsedMem 5010.7734 memdiff 92.125 timediff 4.45308
UsedMem 5102.8984 memdiff 92.125 timediff 4.636064
UsedMem 5195.0234 memdiff 92.125 timediff 4.18014
UsedMem 5287.1484 memdiff 92.125 timediff 5.914718
UsedMem 5379.2734 memdiff 92.125 timediff 4.286579
UsedMem 5471.3984 memdiff 92.125 timediff 3.930946
UsedMem 5563.5234 memdiff 92.125 timediff 4.700597
UsedMem 5655.6484 memdiff 92.125 timediff 5.32023
UsedMem 5747.7734 memdiff 92.125 timediff 4.00833
UsedMem 5839.8984 memdiff 92.125 timediff 5.092547
UsedMem 5932.0234 memdiff 92.125 timediff 4.291328
UsedMem 6024.1484 memdiff 92.125 timediff 4.271492
UsedMem 6116.2734 memdiff 92.125 timediff 4.918223
UsedMem 6208.3984 memdiff 92.125 timediff 4.506439
UsedMem 6300.5234 memdiff 92.125 timediff 3.625321
UsedMem 6392.6484 memdiff 92.125 timediff 4.029003
UsedMem 6478.5703 memdiff 85.921875 timediff 4.515658
UsedMem 2154.6953 memdiff -4323.875 timediff 16.105958
UsedMem 2246.8203 memdiff 92.125 timediff 4.202489
[/java]



GC is happening less often according to that. I also get in excess of 2300 FPS now, even with dumping all of that to the console. Sure, a lot of that is that we’re dumping half as much to the console, but some of it may just be better stuff in the newer version since 2/28. Encouraging, I say.



After a few seconds in, my gc.log gets into a steady state:

[java]
41.854: [GC 41.854: [DefNew: 4418K->2K(4928K), 0.0004216 secs] 6478K->2062K(15872K), 0.0004593 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
42.088: [GC 42.088: [DefNew: 4418K->2K(4928K), 0.0004123 secs] 6478K->2062K(15872K), 0.0004515 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
42.347: [GC 42.347: [DefNew: 4418K->2K(4928K), 0.0004160 secs] 6478K->2062K(15872K), 0.0004534 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
42.588: [GC 42.588: [DefNew: 4418K->2K(4928K), 0.0004196 secs] 6478K->2062K(15872K), 0.0004576 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
42.824: [GC 42.824: [DefNew: 4418K->2K(4928K), 0.0004059 secs] 6478K->2062K(15872K), 0.0004562 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
43.050: [GC 43.050: [DefNew: 4418K->2K(4928K), 0.0004271 secs] 6478K->2062K(15872K), 0.0004663 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
43.278: [GC 43.278: [DefNew: 4418K->2K(4928K), 0.0004160 secs] 6478K->2062K(15872K), 0.0004537 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
43.508: [GC 43.508: [DefNew: 4418K->2K(4928K), 0.0004241 secs] 6478K->2062K(15872K), 0.0004693 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
43.740: [GC 43.740: [DefNew: 4418K->2K(4928K), 0.0004204 secs] 6478K->2062K(15872K), 0.0004598 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
43.975: [GC 43.976: [DefNew: 4418K->2K(4928K), 0.0004121 secs] 6478K->2062K(15872K), 0.0004718 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
44.199: [GC 44.199: [DefNew: 4418K->2K(4928K), 0.0004056 secs] 6478K->2062K(15872K), 0.0004439 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
44.439: [GC 44.439: [DefNew: 4418K->2K(4928K), 0.0004137 secs] 6478K->2062K(15872K), 0.0004554 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
[/java]



Essentially, GC is reclaiming the same amount of young-generation heap every time (DefNew: 4418K->2K)… and it consistently does it in less than half a millisecond, while running less often. Way, way more stable than before.

Yeah, more frequent garbage collection is better if you are looking for lower pauses. I was using Java for audio processing, which requires much lower latencies than video/graphics, and was able to get the garbage-collection pauses down to 2 ms. The garbage was actually created by Swing; my plugin code didn’t produce any. For game/graphics use this should be more than enough when we’re talking about 25 FPS, i.e. ~40 ms per frame. You can tune your garbage collector to collect more often by lowering the memory size of the young generation.
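For reference, young-generation sizing is controlled with standard HotSpot flags; a sketch of the kind of invocation meant here (the sizes and jar name are illustrative placeholders, not recommendations):

```shell
# A smaller fixed young generation (-Xmn) means more frequent but shorter
# minor collections; -verbose:gc confirms the resulting cadence.
java -Xms64m -Xmx64m -Xmn8m -verbose:gc -jar mygame.jar
```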

Well, and I’d take “stable and less often” over random 1 second full sweeps any day. :slight_smile: So there’s a bit of a trade off. In 3D graphics, I prefer to be able to keep frame rates consistent even if slow rather than the complete freeze every 20 seconds or so.

Thanks, I’ll play around with the GC settings and see if I can make collections more frequent and thus cause smaller framerate spikes. Normen, can you give any insight as to why memory would fluctuate so quickly in the SimpleApplication example with 20 boxes? That’s the most basic example I can think of.