GeometryBatchFactory, BatchNode, OOM issues

@t0neg0d So finally, what happened a year ago about this issue? I’m looking at how batching works at the moment and at various ways to have thousands of geometries with the same material batched together while they keep their physics controller. In short, I’m interested in tight memory management around batching. Thanks for letting me know how it turned out for you and/or what you did to get rid of the memory issues, what the cause was, etc…

@.Ben. said: @t0neg0d So finally, what happened a year ago about this issue? I'm looking at how batching works at the moment and at various ways to have thousands of geometries with the same material batched together while they keep their physics controller. In short, I'm interested in tight memory management around batching. Thanks for letting me know how it turned out for you and/or what you did to get rid of the memory issues, what the cause was, etc...

1000s of separate objects will be kind of slow for other reasons, too. You may want to rethink your design and/or post what you are trying to do so others can provide advice.

I said thousands of geometries though, not objects. I understand the difference between the objects and their geometries, but I’m not sure how I could make my approach simpler, as those thousands of geometries are only very simple ones like textured quads or cubes, and my goal is to batch thousands of them into different objects. I would not have more than about 20-30 objects (BatchNodes at the moment), and each of those objects would contain thousands of geometries, but it’s certainly never going to be more than about 2-3 million vertices total sent to the GPU at any given time.

One of the batching-related questions I’m facing right now is whether I should create a completely new @nehon BatchNode object every time I want to add or remove a geometry in it, OR simply add/remove and then call its batch() function again without worrying how it works internally… because I noticed that if I did the latter, it would create an additional node every time I called it, so if I created 1000 geometries and called batch() 1000 times, I’d be stuck with 2000 nodes (1000 geos and 1000 BatchNodes), which I think is not very optimal and probably abnormal too… versus only one BatchNode and 1000 geos for the former. But rebatching thousands of nodes often looked like overkill, depending on exactly how many geos of course. I’d have to profile this with 100, 1000 and 10000 geo test cases I guess, unless people here have already figured out the best approach and want to share it with us :stuck_out_tongue:
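For reference, here is a minimal sketch of the “reuse and rebatch” approach I mean, assuming jME3’s BatchNode; the helper class name and the sharedMaterial parameter are just placeholders for illustration, not something from the engine:

```java
import com.jme3.material.Material;
import com.jme3.scene.BatchNode;
import com.jme3.scene.Geometry;
import com.jme3.scene.shape.Box;

public class BatchHelper {

    // Reuse one BatchNode for the whole collection and rebatch after each change,
    // instead of recreating the BatchNode every time a geometry is added/removed.
    public static Geometry addBatchedBox(BatchNode batchNode, Material sharedMaterial,
                                         String name, float x, float y, float z) {
        Geometry geom = new Geometry(name, new Box(0.5f, 0.5f, 0.5f));
        geom.setMaterial(sharedMaterial); // the exact same Material instance for every geometry
        geom.setLocalTranslation(x, y, z);
        batchNode.attachChild(geom);
        batchNode.batch();                // merge children that share the material into one mesh
        return geom;
    }
}
```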

Your reason for wanting to use BatchNode was that you wanted physics enabled for all of those objects. So they are still 1000s of objects. BatchNode only optimizes the mesh; it doesn’t change the fact that the engine has to do a certain amount of work per object every frame… which is going to slow you down.

If you are making a block world, remember that block worlds are not made of blocks. The lesson there is that the static geometry should be batched… and if you have 1000s of non-static geometries you will have problems, especially if they are all physics enabled.
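Something like this is what I mean by batching the static stuff (a rough sketch only; it assumes rootNode comes from a SimpleApplication and that staticScenery holds only the non-moving geometries):

```java
import com.jme3.scene.Node;
import jme3tools.optimize.GeometryBatchFactory;

public class StaticBatcher {

    // Merge all geometries under the static scenery node into as few meshes as
    // possible (one per material), leaving dynamic/physics-enabled geos out of it.
    public static void batchStaticScenery(Node rootNode, Node staticScenery) {
        GeometryBatchFactory.optimize(staticScenery);
        rootNode.attachChild(staticScenery);
    }
}
```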


And note: for BatchNode to work, the objects have to share materials. If you are getting 1000 new nodes out of 1000 new objects, then they must not be sharing the same material.

Please don’t TLDR on me haha! >.<

Hi Paul, thanks for sharing some useful information, although I’m not doing a voxel engine. There are too many out there already :stuck_out_tongue: I was thinking more about stuff that lies around, like items falling off, stuff rolling, things flying around in the wind or whatever. But come to think of it, I probably won’t have MANY THOUSANDS, more likely in the range of HUNDREDS, not as many as I initially wrote.

Like I said, they all share the same material, so that’s not the issue. I’m not even instantiating another material; it’s the exact same object.

I do want physics enabled, but I discovered I don’t need 1000 controls for 1000 geos, but rather only one per distinct mass.

COOL IDEA:
At first I was making a new control and adding each geo to a separate control. That’s not the case anymore: I instantiate only one control of mass 1f, for example, and then attach all geos to that same control. I saved up to 4 fps (on a pool of 1000 geos) doing that. I also thought about something really cool: I found out that the more collisions happen, the more fps is lost to the controls. For instance, when geos do NOT collide with each other (like in an explosion), the fps is DRAMATICALLY higher, like twice or at some points even THRICE the speed, versus when all the geos have fallen onto the ground or near each other. So I figured I could make a thread or a timer that, once every second, loops over each geo that has a mass and checks how far that geo has travelled, and if it did not travel more than X units, I could remove that geo’s mass or, better yet, detach the whole control, making it a perfectly still geometry and saving LOTS of fps; like I said, going from 12 fps (for 3000 mass controls) up to 60 fps. It’s an insanely neat trick, but I haven’t done all the thinking yet, like how to figure out when to put the mass back on the geo’s control, maybe via collision detection or something. I’m just not there yet.
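Here is a rough sketch of that idea, assuming the usual one-RigidBodyControl-per-geometry setup rather than the shared-control variant above; names like IdleGeoFreezer and IDLE_THRESHOLD are made up for illustration. It just strips physics from geos that barely moved since the last check:

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;

import com.jme3.bullet.BulletAppState;
import com.jme3.bullet.control.RigidBodyControl;
import com.jme3.math.Vector3f;
import com.jme3.scene.Geometry;

public class IdleGeoFreezer {

    private static final float IDLE_THRESHOLD = 0.05f; // hypothetical "did not really move" distance

    private final Map<Geometry, Vector3f> lastPositions = new HashMap<>();

    // Call this roughly once per second (e.g. from simpleUpdate with a timer).
    public void freezeIdleGeos(List<Geometry> physicsGeos, BulletAppState bulletAppState) {
        for (Geometry geo : physicsGeos) {
            Vector3f current = geo.getWorldTranslation().clone();
            Vector3f previous = lastPositions.put(geo, current);
            if (previous == null) {
                continue; // first time we see this geo, just record its position
            }
            if (previous.distance(current) < IDLE_THRESHOLD) {
                RigidBodyControl control = geo.getControl(RigidBodyControl.class);
                if (control != null) {
                    bulletAppState.getPhysicsSpace().remove(control); // stop simulating it
                    geo.removeControl(control);                       // it is now a plain static geo
                }
            }
        }
    }
}
```

Putting the mass back (via collision detection or a proximity check) is the part I still haven’t worked out.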

TL;DR: What really slows the fps down is the mass controls, not the geo count (given that they are optimized via batching)… and at some point, the number of mass controls instantiated makes a difference too, so it’s better to have one control per distinct mass than one control per geo.

Physics is really where the bottleneck happens, and I think you’re right when you tell me I should rethink having so many physics-enabled geos lying around. Maybe chunk them, like a max of 200 controls per frustum volume or something. I’m already using chunks for my infinite terrain, so maybe I could attach them to my terrain, and when a terrain chunk is too far away and unloads, it would unload the hundreds of geos lying on that chunk, something like the sketch below.
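A rough sketch of that chunk-unload idea (ChunkUnloader is just an illustrative name; it assumes each chunk is a Node holding the debris geos parked on it):

```java
import com.jme3.bullet.PhysicsSpace;
import com.jme3.bullet.control.RigidBodyControl;
import com.jme3.scene.Geometry;
import com.jme3.scene.Node;
import com.jme3.scene.SceneGraphVisitorAdapter;

public class ChunkUnloader {

    // When a terrain chunk unloads, strip physics from everything parked on it
    // so far-away debris stops costing physics time, then detach the chunk.
    public static void unloadChunk(Node chunkNode, PhysicsSpace physicsSpace) {
        chunkNode.depthFirstTraversal(new SceneGraphVisitorAdapter() {
            @Override
            public void visit(Geometry geom) {
                RigidBodyControl control = geom.getControl(RigidBodyControl.class);
                if (control != null) {
                    physicsSpace.remove(control);
                    geom.removeControl(control);
                }
            }
        });
        chunkNode.removeFromParent(); // detach the chunk (and its geos) from the scene graph
    }
}
```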

Nice food for the brain here, I’ll test some things out, thanks a lot for the reflection. +1 for the effort, but…

MY QUESTION REMAINS: @nehon If I batch nodes using your BatchNode class and I need to add/remove a node in it, should I make a new object every time I modify the nodes and rebatch, or SIMPLY rebatch the EXISTING node collection? Like I said, I find it weird that every time I call batch() I get a new node, so it grows a LOT if I add/remove hundreds of times, you know?

For instance, let’s say I add nodes one at a time and rebatch after each one; I get this in the batched nodes collection (named “My Batched Nodes”):
My node 1
My Batched Nodes-batch0
My node 2
My Batched Nodes-batch1
My node 3
My Batched Nodes-batch2

… versus if I create all the nodes beforehand (like when loading a saved game or something) I’d get this:
My node 1
My node 2
My node 3
My Batched Nodes-batch0

So as you can see, maybe I’m not doing things right… or is this normal? I mean, it doesn’t look optimized to me…

MY QUESTION IS: What is the best way to add/remove nodes in a BatchNode collection?

Thanks :stuck_out_tongue:

@t0neg0d said:

Well... it is a heck of a lot faster than GeometryBatchFactory.optimize(), however it doesn't solve the issue. Can someone explain to me the difference between loading 150 models (say with 100 verts each) and creating a custom mesh with 15000 verts? I can do the first without OOM issues... however if I try to batch these... toast. If I try to create a custom mesh instead... toast.

Anyone?

Accessing Java heap space is often quicker than using native buffers; that’s why you might see a speed increase. With your proposed method you use more memory, though: just as much direct memory as the original version, plus the heap for the Java objects. Unless I misunderstood something.
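To illustrate the distinction (a tiny standalone sketch, nothing jME-specific beyond BufferUtils): heap buffers live inside the Java heap, while the direct buffers jME creates for mesh data live in native memory outside the heap, which is often the pool that runs out in these OOM cases:

```java
import java.nio.FloatBuffer;

import com.jme3.util.BufferUtils;

public class BufferDemo {
    public static void main(String[] args) {
        int vertCount = 15000;

        // Heap buffer: lives in the Java heap, fast to fill from Java code,
        // counted against -Xmx.
        FloatBuffer heapBuffer = FloatBuffer.allocate(vertCount * 3);

        // Direct buffer: native memory outside the Java heap; this is what
        // jME uses for mesh vertex buffers.
        FloatBuffer directBuffer = BufferUtils.createFloatBuffer(vertCount * 3);

        System.out.println("heap isDirect=" + heapBuffer.isDirect()
                + ", jME buffer isDirect=" + directBuffer.isDirect());
    }
}
```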