I’m considering migrating a first-person shooter project I’ve been working on for the past several years (custom engine) to jMonkeyEngine. After a quick review of the structure and API, it appears to have everything I require, AND is very simple and easy to understand.
One thing I’m hoping someone could confirm for me, though, is the ability to tune and optimize the JVM (if it were ever required). I’ve written and supported a few Java web applications in the past, but have never had to dig too deep into how the JVM manages memory, or do any custom tuning myself. As a result, I’m unsure how easy or difficult it would be to ship a finished game and have those optimizations take effect on a user’s installed JVM, or whether you would instead want to package a JVM with the production release?
I simply need to know that if I sink months’ worth of time into a prototype and discover that there are micro glitches due to GC, I would have some avenue for smoothing them out.
I understand that if I follow best practices and write optimized code this will probably not be an issue, and I may be concerned for no reason… I just want to confirm that if I ever do reach the point of needing JVM-level optimizations, it’s possible.
I think this is certainly the modern way. Else you potentially send your users down a maze of twisty packages to get and install the correct JVM.
This is almost always an application problem. Modern GC keeps up pretty well and you don’t even have to try very hard. In fact, most folks get into trouble trying to out-think it.
Your frame drops will come from other things: bad scene management, unnecessary shader recompiles, overly complex physics/animation × too many objects, etc…
The only thing that I have seen make a difference (and not a very big one) is using the Shenandoah GC. The key reason is that it has several optimizations for JNI calls; JME uses LWJGL, which in turn is essentially a wrapper around thousands of JNI calls. In theory you may gain 1 to 2 ns per JNI call, and depending on the number of JNI calls in a frame, this can add up to a slight optimization. But keep in mind that any optimization at this level makes little difference compared to optimizations at your code level. Using a newer JDK also usually helps with performance, for example JDK 17 over JDK 11.
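For reference, switching collectors is only a launch flag away. A minimal sketch, assuming your game ships as a runnable jar (`mygame.jar` is a placeholder):

```shell
# Launch with the Shenandoah collector (included in most OpenJDK builds
# from JDK 12 onward; the flag errors out if your build lacks it)
java -XX:+UseShenandoahGC -jar mygame.jar

# Quick check that your installed JDK supports the flag at all
java -XX:+UseShenandoahGC -version
```

Nothing in the game code changes; the collector is purely a runtime option.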
EDIT: Spasi, the maintainer of LWJGL, has benchmarks for different GCs with JNI calls buried somewhere in one of the many JDK mailing lists. I couldn’t find it with a quick search, but perhaps someone has a link on hand.
As others have already said, the GC is usually not an issue. In the off chance it is, I’d switch from the default G1 to ZGC or Shenandoah, and that alone will likely be enough to solve your problem: their stop-the-world pauses (the cause of the “hiccups”) are short and roughly constant regardless of heap size, because nearly all of the collection work is done concurrently with your application running.
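A sketch of trying this out (again, `mygame.jar` is a placeholder), including GC logging so you can confirm whether GC pauses were actually the cause of a hiccup before reaching for tuning flags:

```shell
# Opt in to ZGC (production-ready since JDK 15) and log each GC event
# with its pause time to gc.log for later inspection
java -XX:+UseZGC -Xlog:gc:file=gc.log -jar mygame.jar
```

If the logged pause times stay well under a frame (a few milliseconds), the stutter is coming from somewhere else in the application.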
As for distributing your game, I’d highly recommend using jlink. This is a built-in tool in the JDK since Java 9 that lets you build a custom distributable JVM image bundled with your application. Do note that instructions for using it generally assume your application is modular (in the sense of JDK 9 modules) and that your code will be linked into the image, but this is not necessary. If you want to run non-modular code, you just link a JVM image using whatever JDK modules you want, copy your application jars alongside that image, and then use your custom JVM to launch the application from the classpath, just like with a non-jlinked runtime.
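A minimal sketch of that non-modular approach. The module list, jar names, and main class here are placeholders; run `jdeps` against your own jars to see which JDK modules you actually need:

```shell
# Build a trimmed runtime image containing only the JDK modules the game uses
jlink --add-modules java.base,java.desktop,java.logging \
      --strip-debug --no-header-files --no-man-pages \
      --output build/runtime

# Ship your (non-modular) jars next to the image...
cp mygame.jar build/

# ...and launch from the classpath with the bundled JVM, exactly as you
# would with a system-installed runtime
build/runtime/bin/java -cp build/mygame.jar com.example.Main
```

The resulting `build/` directory is self-contained, so users never have to install or pick a JVM themselves.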
If you choose to go this route and have any issues with running non-modular code let me know - I’d be happy to share examples of doing this in practice.