Physics Stats like ViewStats?

Hi folks, I was wondering if there is any way to get vertex/triangle count stats that are only defined within the physics space? (i.e. is there an equivalent of setDisplayStatsView() for the physics space… if not, is there a reasonable way to extract this info?)


No, and it also wouldn’t contain much useful information apart from memory use… For rendering, the vertex count affects performance in a certain way; for physics that’s not the case.

I’m not sure I’m following, wouldn’t more geometry in the physics world affect performance? I would think using an alternate, simplified geometry in the physics world would yield better performance (physics-wise). (While that would mean you’re ‘generating’ 2 geometries in the scene graph… I was thinking of simply setting the physics space geometries to non-visible, so they would not be sent to the GPU.) Obviously the trick is in balancing the 2.

Yeah, it might, but knowing how many objects or vertices are in the world won’t tell you much about how it performs. You can just look at the vertex count in the geometry if you want to know; it’s the same except for the hull shape (which is already the most optimized hull version of the given mesh). There are no “arbitrary values” that you might need to read out at some point. There would have to be information like the culling / rendering info that makes the vertex count useful in the FPS stats. The trick is using as little physics as possible and letting the broadphase do its work (so no world-spanning meshes).

Well, optimizing my physics space has indeed improved my game quite a bit performance-wise.

I was running with a physics accuracy setting of 0.0008, which is rather… ‘high’ imho; at the time I was just using the same geometry as the rendering space.

I have now added specific simplified geometry just for the physics space, and am now able to run with an accuracy setting of 0.001 ~ 0.003 with good results. (If memory serves, the default is 0.003.)

I think the lesson is… reducing the physics geometry should yield simpler collision checks; as such, the less you have, the lower you can set the accuracy while still yielding the ‘same’ results. (I still don’t have a clue as to the actual number of vertices/triangles, but I do know it’s significantly lower than it was.)

I guess it would be possible to traverse the tree/geometries in the physics space to get the actual count.
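A minimal sketch of what such a traversal could look like. The `Node`/`Geometry` classes here are hypothetical stand-ins mirroring a jME-style scene graph, not the actual engine types; in the real engine the per-geometry number would come from the mesh’s vertex count.

```java
import java.util.ArrayList;
import java.util.List;

// Sketch: walk a spatial tree and sum mesh vertex counts.
// Node/Geometry are simplified stand-ins, not jME's actual classes.
public class PhysicsVertexCount {
    interface Spatial { int countVertices(); }

    static class Geometry implements Spatial {
        final int vertexCount;               // in jME this would come from the mesh
        Geometry(int vertexCount) { this.vertexCount = vertexCount; }
        public int countVertices() { return vertexCount; }
    }

    static class Node implements Spatial {
        final List<Spatial> children = new ArrayList<>();
        public int countVertices() {
            int total = 0;
            for (Spatial child : children) total += child.countVertices();
            return total;                    // recursive sum over the subtree
        }
    }

    public static void main(String[] args) {
        Node physicsRoot = new Node();
        physicsRoot.children.add(new Geometry(120)); // simplified collision hull
        Node subNode = new Node();
        subNode.children.add(new Geometry(36));      // a box-like proxy mesh
        physicsRoot.children.add(subNode);
        System.out.println(physicsRoot.countVertices()); // prints 156
    }
}
```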

And yeah… I’m sure there is no magic number, but as you said, “using as little physics as possible” should yield the best results. I think it would be useful to have a stat for the number of vertices/triangles as a point of reference, but yeah… obviously large polygons will have a big effect on actual results.

You add the shapes yourself, how come you don’t know how many vertices they have? :?

Edit: this was not a response to normen but to the original poster. Normen ninja’ed me.

The difference is that rendered vertices and triangles have a direct correlation with performance. You can have millions and millions of collision triangles in a physics space, and if they never touch each other then your performance impact is near 0.

I would think stats based on collisions or something would be far more useful. How many objects passed broadphase to fail in real collision testing, total number of polys involved, etc.

I don’t know if that’s achievable in JME bullet but this information is the sort of thing I tend to log in my own engine… along with a slew of other things that might be approach specific (contact counts, resolution times and iterations, etc… depends on how the resolver works.)

both good responses :slight_smile:

I’m not a Blender guru, and still haven’t figured out how to get vertex counts on a selected object. :frowning:

Yeah… it makes sense that if the stats for physics were to be useful, they would be based on a potential-collision polygon count (rather than the total).

I’m assuming that if an object ‘intersects’ in the sweep algorithm (I think you guys are referring to this as the ‘broadphase’), its polygons will be tested for collision.

Needless to say, the fewer polygons that could potentially cause a collision, the bigger the performance benefit.

So yeah… I can see a raw polygon/vertex count not being all that useful; what’s more important is how many polygons are being processed in each sweep to determine collisions.

I think the bottom line (at least for me, in this case):

Reduce the potential polygon collision count

  • Achieved in 2 ways:

    a) Reduce object polygons (which can collide)

    b) Reduce the object’s physical ‘size’ spatially, potentially by creating smaller objects

At least that appears to have improved things for me quite a bit :slight_smile:

This noob is slowly learning, maybe more than may be good for me :slight_smile:

Terminology as I understand it:

-broad phase: simpler elimination of things that cannot possibly intersect. For example, by testing bounding sphere against bounding sphere.

-sweep: when two objects are thought to intersect during a time slice but you must determine at what point in that slice they intersect.

…I don’t know much about that because my physics engine doesn’t do sweeps.
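The bounding-sphere test described above can be sketched in a few lines. This is just the pairwise check itself, with made-up coordinates; comparing squared distances avoids a square root per pair.

```java
// Sketch of the brute-force broad-phase test described above: compare
// bounding spheres and discard pairs that cannot possibly intersect.
public class SphereBroadphase {
    static boolean mayIntersect(double x1, double y1, double z1, double r1,
                                double x2, double y2, double z2, double r2) {
        double dx = x2 - x1, dy = y2 - y1, dz = z2 - z1;
        double distSq = dx * dx + dy * dy + dz * dz;
        double radii = r1 + r2;
        // Squared-distance comparison: no sqrt needed per pair.
        return distSq <= radii * radii;
    }

    public static void main(String[] args) {
        // Two unit spheres 3 apart: a gap of 1, cannot intersect.
        System.out.println(mayIntersect(0, 0, 0, 1, 3, 0, 0, 1));   // false
        // Two unit spheres 1.5 apart: bounds overlap, pass to narrow phase.
        System.out.println(mayIntersect(0, 0, 0, 1, 1.5, 0, 0, 1)); // true
    }
}
```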

Close to @pspeed

Actually the broadphase is far more intelligent; your description is the brute-force algorithm for it.

Assume I have 2 static objects (in 2D for simplicity), ObjectA and ObjectB, plus ObjectC (C is dynamic).

When I know the sizes of those and I find that C and B are not colliding, I can skip checking C and A, as A is further away than B. This goes down to a 3-dimensional, volume-containing, tree-like structure (the DBVT) which has two subtrees, one for static objects and one for dynamic (if I remember right).

So if A has 1 gigalion polygons it won’t matter, as it will still be sorted out in the broadphase. (Also, since it is static, it will use an internal tree structure to increase triangle performance anyway. That’s the reason why most games compile the whole static geometry into one large static mesh (worldspawn in the Source engine, for example).)
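The skipping idea above (once B is too far, A must be too) is the core of a sweep-and-prune pass. Here is a 1-D sketch of it with made-up intervals; it is only illustrative, not bullet’s actual DBVT implementation.

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Comparator;
import java.util.List;

// 1-D sweep-and-prune sketch: sort objects along one axis, then stop
// scanning as soon as the next interval starts beyond the current one's end.
public class SweepAndPrune {
    static class Box {
        final String name; final double min, max;
        Box(String name, double min, double max) {
            this.name = name; this.min = min; this.max = max;
        }
    }

    static List<String> overlappingPairs(Box[] boxes) {
        Arrays.sort(boxes, Comparator.comparingDouble(b -> b.min));
        List<String> pairs = new ArrayList<>();
        for (int i = 0; i < boxes.length; i++) {
            for (int j = i + 1; j < boxes.length; j++) {
                // Everything further along the axis starts even later: skip it all.
                if (boxes[j].min > boxes[i].max) break;
                pairs.add(boxes[i].name + "-" + boxes[j].name);
            }
        }
        return pairs;
    }

    public static void main(String[] args) {
        Box[] boxes = {
            new Box("A", 10, 12), // far away: never reaches narrow phase
            new Box("B", 0, 2),
            new Box("C", 1, 3),   // dynamic object overlapping B
        };
        System.out.println(overlappingPairs(boxes)); // [B-C]
    }
}
```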

→ Moving non-primitive objects (GImpact shapes) eat a lot of performance once a collision happens, because no precalculated acceleration structure is possible

→ Static mesh shapes are quite fast

→ primitives are quite fast

→ Also, stuff like primitive spheres are actually real spheres! (like infinite polygons, as they are calculated formula-based, not triangle-based)

→ Compound shapes containing primitives are kinda fast, but scale exponentially.


→ Native bullet uses better-optimized algorithms, for example for physics rays; this can be a huge difference (between n^3 and n·log n). The downside, of course, is the native requirement. (But since they are nearly interface-compatible, you could use the native one as an accelerator if supported.)

→ The main interface difference is that native can do sweep tests (if an object moves fast, it does a kind of ray cast with the collision shape to make sure it cannot tunnel between two ticks; this allows you to reduce simulation accuracy and ticks per second dramatically, if you know there are only a few fast objects)

The only useful pieces of information are:
How many physics objects are there?
How many pass broadphase elimination?
How much time does a tick need?
Theoretically, by instrumenting the solver and the collision detection, it could be possible to calculate what percentage of the tick time each single object caused (and thus optimize heavy-load scenes).
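The third stat on the list, tick time, is easy to gather yourself by wrapping the physics update call. A sketch, where `stepSimulation` is a placeholder standing in for whatever update method your physics space exposes:

```java
// Sketch: time a single physics tick by bracketing the update call.
public class TickTimer {
    // Placeholder workload standing in for the real physics update call.
    static void stepSimulation(float tpf) {
        double acc = 0;
        for (int i = 0; i < 100_000; i++) acc += Math.sqrt(i);
        if (acc < 0) throw new IllegalStateException(); // keep the loop from being optimized away
    }

    static long timeTickMicros(float tpf) {
        long start = System.nanoTime();
        stepSimulation(tpf);
        return (System.nanoTime() - start) / 1_000; // elapsed microseconds
    }

    public static void main(String[] args) {
        System.out.println("tick took " + timeTickMicros(1f / 60f) + " us");
    }
}
```

Logged per frame (or averaged over a second), this gives a direct “how much is physics costing me” number without needing any engine-internal stats.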