[SOLVED] 240 Hz Monitor and Vsync Problems

I wonder if there is a good way to get a steady 60 fps even with a 240 Hz monitor. One of our testers has a 240 Hz monitor and plays the game at 240 fps, but then I guess tpf (which is float, not double, in my case) causes trouble, and my animations, which operate on tpf, look weird. The physics with Dyn4j doesn’t have problems, as its time is double.

Similar problems happen even with a 144 Hz monitor (my laptop): not as bad, but still bad.

Any ideas? Besides somehow using double for tpf, that is. If that is the solution, do you guys have a good example of how to do that?

Thanks a lot.

I doubt single-precision math is the issue here. You should design your game to work equally well at 60Hz, 144Hz, or 240Hz.

What does “look weird” mean?

For positions, I actually 100% agree with you that float tpf could cause problems… and I swore off JME’s float tpf years ago. Even with JME test apps it was possible to see the camera movement slow down as frame rates got very high… especially depending on how the math was written and the range of coordinates.

But for small things like the seconds of an animation, I have doubts unless the accumulator is never wrapping.
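To make the accumulation concern concrete, here is a tiny standalone sketch (plain Java, not JME code; class and method names are mine, not from any library) comparing a float accumulator against a double accumulator over one hour of simulated frames at 240 fps:

```java
// Demonstrates how accumulating a small float tpf drifts as the total grows,
// compared to accumulating the same values into a double.
public class TpfDrift {

    // Accumulate `frames` steps of tpf into a float total.
    static float accumulateFloat( float tpf, int frames ) {
        float t = 0;
        for( int i = 0; i < frames; i++ ) {
            t += tpf;
        }
        return t;
    }

    // Same accumulation, but into a double total.
    static double accumulateDouble( float tpf, int frames ) {
        double t = 0;
        for( int i = 0; i < frames; i++ ) {
            t += tpf;
        }
        return t;
    }

    public static void main( String[] args ) {
        int frames = 240 * 60 * 60; // one hour of frames at 240 fps
        float tpf = 1f / 240;       // ~0.0041667 s per frame

        // Both totals should be close to 3600 seconds; the float total drifts
        // much further because its precision shrinks as the sum grows.
        System.out.println("float total:  " + accumulateFloat(tpf, frames));
        System.out.println("double total: " + accumulateDouble(tpf, frames));
    }
}
```

The higher the frame rate, the smaller each tpf and the more adds per second, so this kind of drift gets worse at 240 Hz than at 60 Hz.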

Edit: by the way anyone looking at rewriting JME from scratch… I could write an essay on how best to avoid float tpf.



Yes, it slows down my walk cycle, which obviously looks weird. There are other interesting flickers and jumps too, and I thought: well, higher fps means a smaller float tpf, so less precision.

Do you have a good example of using double for tpf? Do you implement some homebrew app states, and if so, how do you do that?

Well that’s what I thought I did by using tpf…

For Mythruna, my code is based on SimEthereal’s TimeSource and/or the SiO2 GameSystemManager’s SimTime… so it’s all double all the time.

JME animation is tricky but because of the network synching requirements, I’m already keeping track of animation time as double and I convert it to float for JME.

If you are letting JME’s controls play the animation on their own then you are a bit stuck… I feed them their time from the server so I don’t let the AnimComposer ‘play’ on its own.

Edit: note that I haven’t looked but it could be there is some math in JME’s tpf calculation that could be exacerbating the precision issue. 32 bit float requires a certain amount of expertise when ordering operations to avoid losing precision unnecessarily.
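As a standalone illustration of that ordering point (plain Java, not JME’s actual tpf code; the numbers are just an example): converting large nanosecond timestamps to float *before* subtracting can destroy the difference entirely, while subtracting the longs first keeps full precision.

```java
// Shows why operation order matters with 32-bit floats:
// converting big timestamps to float before subtracting loses the delta.
public class TpfOrdering {
    public static void main( String[] args ) {
        long prev = 1_000_000_000_000_000L; // a large nanoTime value (~11.5 days of uptime)
        long now = prev + 4_166_667L;       // one 240 Hz frame later

        // Convert first, subtract second: both timestamps round to the same
        // float, so the frame time comes out as 0.
        float bad = (float) now / 1e9f - (float) prev / 1e9f;

        // Subtract the longs first, then convert: ~0.0041667 seconds.
        float good = (now - prev) / 1e9f;

        System.out.println("bad:  " + bad);
        System.out.println("good: " + good);
    }
}
```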


By the way, you may also take a look at Lemur AnimationState for playing your animations if they are made up of tweens. Afaik, it is double-based and is independent of JME tpf.

I just have some translation code of my own that works with tpf, and the simple (calculated) walk cycle gets slower at higher fps. Anyway, I used GameSystemManager for my top-down multiplayer shooter, so I’ll have a look into that and see how difficult it is to replace the existing app states.

Oh, and I see I also used GameLoop from your SiO2 package, where I can have my 60 fps anyway, no matter the screen refresh rate. Is that correct? Should I keep certain states in sync with the screen refresh rate? I remember that for my multiplayer game I had the client part running with the regular app states and the server with GameLoop and GameSystemManager.

Any thoughts on that?

Limiting things to 60 FPS regardless of monitor refresh is one approach but then you never take advantage of the higher refresh for smoother animations.

GameLoop is not the only way to drive a GameSystemManager but I only used these as examples anyway.

Mythruna uses these for the backend which means that the network events will be quantized to 60 FPS (at best). But because everything is happening on different threads (like the server), the client cannot just march along in lock step… as there is no such thing as lock step. The visualization is always interpolating between two frames of reference, ideally the very most recent and the one just before that. (It renders at a 100ms to 200ms delay as described in the networking documents linked a bunch of times before.) ANY time you decouple movement “frames” from the rendering frames you need to do some kind of interpolation or you will have jitter as the different frame sources sync and randomly unsync.

The above is only to explain how I do it, not to suggest that it is the best way for you. This may also be why I don’t see the same issues: turning a long value between two long values into a float ‘time’ is going to be a lot more accurate than accumulating a float tpf.

I do not know the best way to get JME animation converted to accumulate with double. Code-wise, it’s a shame because a lot of it is double-based but is forced into float accumulation because of Control.update(). To avoid this, I think you’d have to supplant the existing update and accumulate your own double-based time, taking care to wrap it for the particular animation, etc…

I’m quickly looking through my code just to see if there is anything interesting. The MOSS libraries (unreleased Mythruna Open Source Software) has a character rig package that it uses to manage a rigged character. So every one of my network driven animated characters is really managed by a character rig.

The first thing the character rig does is kill JME’s normal animation:

    public AnimComposerRig( AnimComposer anim ) {
        this.anim = anim;
        this.originalSpeed = anim.getGlobalSpeed();
        anim.setGlobalSpeed(0);
    }

…setting global speed to 0 means that the regular update() will never update anything.

After that, I always provide time externally:

    public void setTime( String layerId, double time ) {
        if( layerId == null ) {
            layerId = AnimComposer.DEFAULT_LAYER;
        }
        AnimLayer layer = anim.getLayer(layerId);
        if( layer.getCurrentAction() != null ) {
            layer.setTime(time);
        } else {
            log.warn("No current action for layer:" + layerId);
        }
    }

Thankfully, JME’s AnimLayer is already double-based for setTime():

…I think that’s the Lemur tween influence.

So in theory, you could do something similar in your own code. Have an app state that wraps all of your AnimComposers in some driver object. The app state could keep track of long nanoTime() from some start value and convert it to double every frame: (current - start) / NANOS_PER_SECOND

…then pass that to all of the wrappers.
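The driving part of that approach can be sketched in plain Java (JME types elided so it stands alone; `AnimTimeDriver` and `AnimDriver` are hypothetical names, not from any real library): keep a start nanoTime, convert the elapsed nanos to double seconds each frame, and push that time into every wrapped composer.

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of an externally driven animation clock: an app state would create
// one of these and call update() once per frame with System.nanoTime().
public class AnimTimeDriver {
    private static final double NANOS_PER_SECOND = 1_000_000_000.0;

    private final long start; // nanoTime captured at initialization
    private final List<AnimDriver> drivers = new ArrayList<>();

    // Stand-in for a wrapper around an AnimComposer; in real code this
    // would forward to the double-based AnimLayer.setTime().
    public interface AnimDriver {
        void setTime( double seconds );
    }

    public AnimTimeDriver( long startNanos ) {
        this.start = startNanos;
    }

    public void addDriver( AnimDriver driver ) {
        drivers.add(driver);
    }

    // Called every frame: converts elapsed nanos to double seconds
    // and feeds that time to all wrapped composers.
    public double update( long nowNanos ) {
        double time = (nowNanos - start) / NANOS_PER_SECOND;
        for( AnimDriver d : drivers ) {
            d.setTime(time);
        }
        return time;
    }
}
```

Note that the long subtraction happens before the conversion to double, so no precision is lost no matter how large the raw nanoTime values get.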


I don’t want to disrupt the topic about animation synching too much, but I found this point interesting.
Do you have some links or examples about it?
I think I’ve seen some small jittering in my games because of this, but my attempts to fix it with interpolation only made the problem worse. I set the issue aside for the time being, but some pointers might help.

I don’t have any links handy but I created this quick illustration:


If you pretend that the red lines are physics updates and the green lines are screen updates then every few frames we miss a frame entirely. The pink bars represent how late each frame gets rendered, too.

Flipping it around the other way, if you think of the red lines as rendered frames and the green lines as physics updates then every so often you will have two rendered frames where the value does not change.

Either of these can make animation look jerky. And this is when frame rates are super consistent. It’s worse when either one might vary slightly.

An alternative is to keep the last two good source frames; then you can interpolate the dependent frames at a small lag (one source frame length). In this way, animation is always smooth, at the cost of a tiny lag.

In a networked game like Mythruna, this tiny lag is based on ping time because the client renders slightly in the past. So the lag can be 100-200 ms, and the client keeps 5 or 6 source frames around.

You only have to keep enough frames to support the maximum frame rate delta but it never hurts to keep a few extra.
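A minimal single-value sketch of the idea (my own illustration, not SimMath code, which handles full position+rotation reference frames and thread safety): keep the last two source frames and evaluate at a render time that lags the newest frame.

```java
// Keeps the last two source frames (time + value) and interpolates between
// them at a slightly lagged render time, so rendering never extrapolates.
public class FramePair {
    private double t0, v0; // older frame
    private double t1, v1; // newest frame

    // Called whenever a new source frame (physics/network) arrives.
    public void addFrame( double time, double value ) {
        t0 = t1;
        v0 = v1;
        t1 = time;
        v1 = value;
    }

    // Called every rendered frame with a renderTime that should lag the
    // newest source frame by roughly one source frame length.
    public double valueAt( double renderTime ) {
        if( t1 == t0 ) {
            return v1; // not enough history yet
        }
        double f = (renderTime - t0) / (t1 - t0);
        f = Math.max(0.0, Math.min(1.0, f)); // clamp: never extrapolate
        return v0 + (v1 - v0) * f;
    }
}
```

Because the render time always falls between the two buffered frames, the visualization stays smooth even when source frames and rendered frames drift in and out of sync.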

My SimMath library has some thread-safe classes for accumulating and interpolating entire position+rotation+visibility reference frames: https://github.com/Simsilica/SimMath/tree/master/src/main/java/com/simsilica/mathd/trans


Thanks for the explanation. I will experiment with that. :slight_smile: