jBullet is almost 9 years out of date now… It is sad, but with that, is there an alternative that can be moved to for the long term? At least something that can be considered? Features that a lot of physics engines now offer are currently unavailable to us, and I don’t see jBullet being updated anytime in the future.
I heard the jBullet source code is no longer available (and IMO that is the reason), and anyway I heard native Bullet is better than jBullet, so we should stick with the native one.
Anyway, @sgold has an updated version of native Bullet with some minor fixes inside his “Minie” lib, if you are interested.
BTW, IMO a full Java implementation of physics could be as fast as the native one (which requires JNI anyway, and that slows it down a little), but I think the native one is just better written for some cases.
Also, Java has its memory limits defined, and JNI (native) code works outside those limits. So it’s nice: when your Java app takes all the heap it is allowed, the physics will still have memory to use outside those limits. It’s the same with OpenGL, which JME uses through JNI/LWJGL.
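To illustrate the point about heap limits, here is a small, self-contained Java sketch (not jME-specific; the buffer size is arbitrary). Direct buffers, like memory allocated by native code through JNI, live outside the Java object heap that `-Xmx` caps:

```java
import java.nio.ByteBuffer;

public class DirectMemoryDemo {
    public static void main(String[] args) {
        // The -Xmx cap only governs objects on the Java heap.
        long heapMax = Runtime.getRuntime().maxMemory();

        // A direct buffer is backed by native memory outside the object heap,
        // similar to what a native library allocates through JNI.
        ByteBuffer direct = ByteBuffer.allocateDirect(16 * 1024 * 1024);

        System.out.println("heap max (bytes): " + heapMax);
        System.out.println("direct buffer: " + direct.capacity()
                + " bytes, isDirect=" + direct.isDirect());
    }
}
```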
github/jMonkeyEngine/jmonkeyengine/tree/master/jme3-bullet-native (not updated much either, but much more than jBullet)
Please note:

- jme3-bullet is the general lib for both, like a wrapper.
- jme3-bullet-native is the native (JNI) version of physics.
- jme3-bullet-native-android is the native (JNI) version of physics, for Android.
- jme3-jbullet is the Java version of physics (the outdated one).
you add what you need, for example:
jme3-bullet + jme3-bullet-native
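With Gradle, that pairing might look like the following (the version string is only an illustration; use whichever engine release your project is on):

```groovy
dependencies {
    // wrapper API plus the desktop native (JNI) implementation
    implementation "org.jmonkeyengine:jme3-bullet:3.2.4-stable"
    implementation "org.jmonkeyengine:jme3-bullet-native:3.2.4-stable"
}
```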
If there’s serious interest in using the OpenCL with Bullet with JME, I can look into adding it to Minie.
Also, some clarification: NEVER include both jme3-bullet and jme3-jbullet in a single classpath. Use one or the other, not both; each one implements (approximately) the same API, and you’ll want to be sure which library you’re using.
jme3-bullet-native is only needed for jme3-bullet on a desktop; it’s not used by jme3-jbullet, but it won’t do any serious harm. It’ll merely bloat your application. Same thing for jme3-bullet-native-android; it’s not needed for jme3-jbullet.
Thank you everyone for the clarifications.
I personally would love to see OpenCL supported. I am running into performance issues on the CPU for server side physics, and if I can offload that to a GPU with OpenCL that would help a lot.
It is not so much the type of physics we are using (we have a variety of different shapes, but a lot of MeshCollisionShapes, and are working on getting GImpactCollisionShapes running; they are not very efficient, but we need accurate collision detection), but the sheer number of objects I am running physics calculations on. At any given time I have between 20 and 50 physics spaces, each with 1,000 to 10,000 objects. We are simply limited by the amount of performance we can squeeze out of a CPU, but we have a lot of spare GPU capacity. I assume that by utilizing OpenCL we could take some of the load off the CPU.
I haven’t scoped out the transition from Bullet 2.88 (what Minie uses) to 3.x in detail, but it appears nontrivial. Also, I haven’t found evidence of any Bullet 3.x releases; I suspect the code is still experimental and not ready for production.
It may be of help to explain what it is you’re actually doing. It’s possible that there may be a more efficient way of doing it. I’m not saying there is, but if we knew what you were doing we might know.
Hello @jayfella, I am working on a server-side, high-accuracy physics system for large-scale multiplayer VR simulations. We are attempting to handle semi-accurate haptics, which is very intensive. These simulations are used for training personnel in real-world situations that would be hard to do actual training for (for example, rebuilding a jet engine, or training facility operators to shut down a plant). For these simulations we need a very large number of small objects with moving parts, and we need to allow multiple users and other objects to interact with them. The system works just fine, but we are running our servers into the ground with the load. We have a cluster of GPU servers and would love it if we could take some of the load off the CPU and run the calculations on those.
EDIT: I should clarify that our current workaround is to break the simulation into lots of small simulations that get stitched together, and to run them on separate servers.