[SOLVED] Merging physics debug shapes doesn't seem to work

I am currently in the process of adapting GeometryBatchFactory.makeBatches for my personal needs. One of these adaptations is being able to batch RigidBodyControls together into the new batch. My process for doing so is quite simple: for every geometry being batched that has a rigid body control, take the debug shape from that control's collision shape and add it to a list. Once all the debug shapes have been gathered, merge them into a new mesh that will be used as the collision shape for the batch's rigid body. Here is the code being used for this:

Mesh physicsMesh = new Mesh();
List<Geometry> physicsShapes = new ArrayList<>();
for(Geometry geom: batchData.getGeometries()){
    if(geom.getControl(RigidBodyControl.class) != null){
        // This geometry has physics; take its collision shape and create a debug shape out of it
        Spatial debugShape = DebugShapeFactory.getDebugShape(geom.getControl(RigidBodyControl.class).getCollisionShape());
        debugShape.setLocalTransform(geom.getLocalTransform());
        // Take the debug shape and add it to our list
        if(debugShape instanceof Node){
            // It's possible the debug shape is a Node if the collision shape was a CompoundCollisionShape
            for(Spatial child: ((Node)debugShape).getChildren()){
                physicsShapes.add((Geometry) child);
            }
        }else{
            physicsShapes.add((Geometry) debugShape);
        }
    }
}
mergeGeometries(physicsShapes, physicsMesh);
newRigidBodyControl.setCollisionShape(new MeshCollisionShape(physicsMesh));
newRigidBodyControl.setApplyPhysicsLocal(true);

Now the code above runs and the geometries are batched together. However, when checking the debug shape in-game, there doesn’t seem to be anything there. Using GeometryBatchFactory.printMesh, I found that the debug geometries generated by DebugShapeFactory.getDebugShape don’t have any index buffers, only position buffers. This was confirmed when I checked the method that is called to create the debug mesh:

public static Mesh getDebugMesh(CollisionShape shape) {
    Mesh mesh = new Mesh();
    mesh = new Mesh();
    DebugMeshCallback callback = new DebugMeshCallback();
    getVertices(shape.getObjectId(), callback);
    mesh.setBuffer(Type.Position, 3, callback.getVertices());
    mesh.getFloatBuffer(Type.Position).clear();
    return mesh;
}
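
For reference, here’s a quick runtime check (just a sketch, where shape stands for any non-compound collision shape) showing that a debug mesh only carries positions:

Spatial debug = DebugShapeFactory.getDebugShape(shape); // 'shape' = any non-compound CollisionShape
if(debug instanceof Geometry){
    Mesh debugMesh = ((Geometry) debug).getMesh();
    System.out.println("Index buffer: " + debugMesh.getBuffer(VertexBuffer.Type.Index)); // prints null
    System.out.println("Vertex count: " + debugMesh.getVertexCount());
}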

So when I batch the shapes together and then print the merged mesh, this is what I get for the index buffer:

Index: 
[0, 0, 0]
[0, 0, 0]
[0, 0, 0]
[0, 0, 0]
[0, 0, 0]
[0, 0, 0]

Note that there are 6 [0, 0, 0] entries and there are 6 debug shapes being merged together (coincidence?).

So now I’m not sure what to do. Is this a misunderstanding on my part? Or is there something I should be doing that I’m not? One more question that popped into my head: if there are no index buffers in the debug meshes, how am I able to see those shapes properly at all? If I create a debug shape and attach it to the scene, I can clearly see it. But if I do the same thing with the debug shape batch, I can’t see anything.

Why are you using the debug shapes in the first place? They have nothing to do with the physics shapes - they’re just what the name says - visual debug shapes for the purpose of being rendered.

I was using the debug shapes to be able to batch the collision shapes together to create a new collision shape. So for example I have 4 models that have rigid body controls with a cylinder collision shape. Each model is at a different location. I wanted to take the mesh representation of each collision shape, merge those meshes together, and then create a new mesh collision shape out of that mesh.

The purpose being…?

That new mesh that I’ve created is used to create a new mesh collision shape, which I would then assign to a rigid body control. This would allow me to “combine” rigid body controls into one.

Like I said, I’m batching some geometries together and some of those geometries have rigid body controls. I would like the new batch to keep a rigid body control, which means I need a new collision shape for it. That new collision shape has to be a combination of all the collision shapes of the batched geometries that have a rigid body control, hence why I need to batch the mesh representations of each collision shape together.

Are you trying to make something like this: Katamari Damacy - Wikipedia
By any chance? :chimpanzee_closedlaugh:

Have you looked at jme3test.bullet.TestAttachDriver.java?
Maybe you can achieve a similar thing with that.

If you think it is a performance optimization, then I have doubts
(if that is the goal you’re aiming at - the physics engine works best with a moderate number of convex objects).

That’s what you want to code, but that can’t be the purpose, can it?

Let’s say you have some dynamic physics world and you want to “freeze” it for some reason while the player is still moving around - I doubt one huge mesh will perform better for physics collision than multiple convex objects. Let’s say you only want to batch the objects because rendering the single objects takes too long - then the former issue still stands, plus you can’t control the individual physics objects anymore.

So the question still stands, what are you trying to do?

No, that is not at all what I am trying to accomplish. Thanks though.

I’ll try to explain what I’m trying to accomplish as best I can. The ultimate goal is optimization. My world is divided into tiles of 128x128. Each tile contains multiple objects, such as trees. Some of these objects have physics attached to them, and each has its own collision shape. If I load my tiles as they are now, they drop the FPS by quite a lot (understandable, since there are around 40-50 objects per tile). So the answer to increasing FPS in this case is batching these objects. However, I’m not just blindly batching all objects into one batch: when I batch, I make sure that each batch doesn’t go beyond a certain triangle count threshold. This gives my tiles a nice balance between the number of objects and the number of triangles per object.

Now, I know that big physics objects are bad. But I also know that a large number of physics objects is no better. This is why I’m trying to apply the same batching to my physics: to strike a balance between the number of physics objects and how big those physics objects are, and also to keep the physics accurate to the batched objects.

EDIT:

The physics objects are static and will not be changed later on.
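
To make that concrete, the grouping looks roughly like this (a simplified sketch, not my exact code; tileGeometries, the cell size and triangleLimit are placeholders):

Map<String, List<Geometry>> byCell = new HashMap<>();
for(Geometry geom: tileGeometries){
    // Bucket by locality: sub-cells of the 128x128 tile (placeholder cell size of 32)
    Vector3f pos = geom.getWorldTranslation();
    String cell = (int) Math.floor(pos.x / 32f) + "_" + (int) Math.floor(pos.z / 32f);
    byCell.computeIfAbsent(cell, k -> new ArrayList<>()).add(geom);
}
List<List<Geometry>> batches = new ArrayList<>();
int triangleLimit = 10000; // placeholder threshold
for(List<Geometry> cellGeoms: byCell.values()){
    List<Geometry> current = new ArrayList<>();
    int triangles = 0;
    for(Geometry geom: cellGeoms){
        int count = geom.getMesh().getTriangleCount();
        // Start a new batch once the triangle threshold would be exceeded
        if(!current.isEmpty() && triangles + count > triangleLimit){
            batches.add(current);
            current = new ArrayList<>();
            triangles = 0;
        }
        current.add(geom);
        triangles += count;
    }
    if(!current.isEmpty()){
        batches.add(current);
    }
}
// Each list in 'batches' is then merged the same way as in my first snippet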

Just an aside, you are way better off batching purely spatially than by arbitrary triangle limits. So are you limiting by triangle and locality? Else, don’t bother with the triangle limit… unless it’s something silly-high like 300,000 triangles per batch as your graphics card can likely handle all kinds of triangles per object… but your whole scene would be happier to be able to completely cull some objects (ie: proper breakup based on location).

Yes, I batch by locality at the same time.

So it’s part of a static scene that you want an optimal collision mesh for? Make a separate super-low-poly mesh (in Blender or whatever you use) and use that as the collision shape. If it’s a generated world, generate that mesh. Batching boxes, cylinders and whatnot will still leave you with way too many unused vertices (the bottom of a box standing on the terrain, etc.).

Yes.

I can’t really do that. Or rather, I could, but that would still leave me with my problem. At the time I want to batch my physics objects together, I don’t have any reference to the mesh that each object uses as its collision shape. That is why I was using DebugShapeFactory.getDebugShape, as it is (AFAIK) the only way to get a mesh from a collision shape.

Each object has had its collision shape set at a prior time, so I don’t have anything I can use to batch them together besides the collision shapes themselves.

Well I’ve figured it out…

It turns out that since the debug shapes don’t have an index buffer, I just needed to replicate that on the merged mesh, and everything works. In other words, clearing the index buffer on the merged mesh did the trick.
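
Roughly, the fix looks like this (sketch; physicsMesh and newRigidBodyControl are the ones from my first snippet):

// Drop the bogus index buffer so the merged mesh is non-indexed,
// just like the debug meshes it was built from
physicsMesh.clearBuffer(VertexBuffer.Type.Index);
physicsMesh.updateCounts();
physicsMesh.updateBound();
newRigidBodyControl.setCollisionShape(new MeshCollisionShape(physicsMesh));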

However I don’t understand: I thought that if a mesh didn’t have an index buffer, it would not be shown when attached to the scene. Did I miss something?


I think the index buffer is for defining triangles, and debug shapes are only ever rendered as lines, so there’s no need for face (triangle) info.

You can make meshes without index buffers. Index buffers are a way of sharing vertexes between triangles. If you don’t share any vertexes (or don’t care to) then you don’t need an index buffer.

I recommend reading any number of OpenGL articles on the subject if you have further interest.
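
For instance, here’s the same quad built both ways (a minimal sketch with made-up positions):

// Non-indexed: 6 vertices, 2 triangles, two positions stored twice
Mesh soup = new Mesh();
soup.setBuffer(VertexBuffer.Type.Position, 3, new float[]{
    0,0,0,  1,0,0,  1,1,0,   // triangle 1
    0,0,0,  1,1,0,  0,1,0}); // triangle 2 repeats two corners
soup.updateCounts();

// Indexed: only 4 unique vertices, the index buffer shares them between the 2 triangles
Mesh indexed = new Mesh();
indexed.setBuffer(VertexBuffer.Type.Position, 3, new float[]{
    0,0,0,  1,0,0,  1,1,0,  0,1,0});
indexed.setBuffer(VertexBuffer.Type.Index, 3, new short[]{0,1,2,  0,2,3});
indexed.updateCounts();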

Yes, pspeed said it. Here are some examples:

Mode == TRIANGLES (triangle soup)
vertices = v1,v2,v3,v4,v5,v6
triangles = v1,v2,v3 and v4,v5,v6 (2 triangles)

Mode == TRIANGLE_STRIP (tri strips)
vertices = v1,v2,v3,v4,v5,v6
triangles = v1,v2,v3 and v2,v3,v4 and v3,v4,v5 and v4,v5,v6 (4 triangles)

Mode == TRIANGLE_FAN (“circle” of triangles)
vertices = v1,v2,v3,v4,v5,v6
triangles = v1,v2,v3 and v1,v3,v4 and v1,v4,v5 and v1,v5,v6 (4 triangles)

In good old OpenGL there are other modes too (quads, points, lines, line strips, etc.)

Note: I wrote that from memory (which might not be 100 percent correct). :chimpanzee_smile:
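
In jME code the modes look roughly like this (just a sketch; the positions are made up):

float[] positions = {0,0,0,  1,0,0,  0,1,0,  1,1,0,  0,2,0,  1,2,0}; // 6 vertices, no index buffer

Mesh strip = new Mesh();
strip.setMode(Mesh.Mode.TriangleStrip);
strip.setBuffer(VertexBuffer.Type.Position, 3, positions);
strip.updateCounts();
// strip.getTriangleCount() == 4

Mesh triSoup = new Mesh();
triSoup.setMode(Mesh.Mode.Triangles);
triSoup.setBuffer(VertexBuffer.Type.Position, 3, positions);
triSoup.updateCounts();
// triSoup.getTriangleCount() == 2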

Marked the topic as [SOLVED].

How is that even possible? You create collision shapes and then “lose” the data? Is there a black hole in your computer?

I am curious: Did that boost your performance? And how many frames per sec do you have now?

If no boost => still okay, you’ve tried something (science is also about finding out what doesn’t work - not only what does work counts). :chimpanzee_smile:

Ah ok, well I’ve just learned something then :stuck_out_tongue:

No, although that would be scientifically awesome :stuck_out_tongue:. The collision shapes are created, then saved inside a j3o file with the model. At a later time, I load the j3o onto my tile. That is why I don’t have any reference to the mesh of the collision shape.
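
For context, the loading side looks roughly like this (sketch; the path is a placeholder). The control and its collision shape come back from the j3o, but there is no jME Mesh to grab from them:

Spatial tree = assetManager.loadModel("Models/tree.j3o"); // RigidBodyControl is restored with the model
RigidBodyControl control = tree.getControl(RigidBodyControl.class);
CollisionShape shape = control.getCollisionShape(); // a Bullet object - it exposes no Mesh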

From what I can see, yes, it did give me a performance boost. I can’t compare it with the original batching method, but compared with no batching at all there is a boost.

To recap, my batching method consists of batching my geometries by locality while making sure each batch doesn’t go beyond a certain triangle threshold. Before, I could be rendering 2000-3000 objects per update without batching, which dropped my FPS to about 12-13. Now with the batching I get 55-60 FPS. The reason I decided not to use the batching method already provided in GeometryBatchFactory was that my objects were all batched into one big geometry with around 300,000 triangles that was about the size of my tile (which is 128x128). This made for too big a load when actually loading the tile into my scene, plus all the benefits of culling were nullified since my objects were considered constantly in view. By batching by locality with a triangle threshold, I get a good balance between the number of objects being rendered, how many triangles are in each object, and how big those objects are.

If you’re wondering whether I got a boost by using this for the physics, then I’d say yes. I used the same batching method on my server, which is basically one big physics simulation, and I saw an FPS boost of about 8 FPS when I ran one of my stress test cases.
