Character Control and WalkDirection

I am trying to write some client-side prediction code for my game and have come across a problem.



In order to perform prediction I need to work out the distance moved in each update and the direction moved in. This is fine. I then use this to update the walkDirection which, if I have understood correctly, is simply the distance to move on the next physics step.



So that my character only moves the distance I have told it to, I then set the walkDirection back to zero on the next control update (otherwise it just keeps moving). However, I believe my problem is that the walkDirection is used on the physics tick to update the kinematic character, and there may be several control ticks between physics ticks during which my walkDirection is set to different values. This means that the same sequence of walkDirections can leave the character in different end positions depending on when the updates to the walkDirection occur relative to the physics ticks.



I have tested this by accumulating an overall walk direction: it comes out to the same value on my client and server at the end, but the end positions differ wildly.



The solution is simply to maintain an overall walk direction that gets updated each control tick until it is consumed by the physics step. However, I am unsure how to tell when the physics tick has occurred, and therefore when I need to set my walkDirection back to zero.



Does this make sense, and does anyone have ideas on how to overcome it?



Thanks,



Matt

Yeah, you are exactly right. You can listen for the physics tick by implementing PhysicsTickListener. It’s only called when the physics space is actually stepped (which happens at 60 fps by default).
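To illustrate the pattern: the sketch below sums up control-tick moves and hands the total to the physics step exactly once. The `WalkAccumulator` class and its method names are made up for illustration, not jME API; in jME3 you would implement `com.jme3.bullet.PhysicsTickListener`, register it with `physicsSpace.addTickListener(...)`, and call something like `consume()` from `prePhysicsTick()`.

```java
import java.util.Arrays;

// Hypothetical helper: gathers moves between physics steps so that
// each move is consumed by exactly one physics tick.
class WalkAccumulator {
    private final float[] pending = new float[3]; // x, y, z

    // Called on every control/update tick, possibly several times
    // between two physics steps.
    void addMove(float x, float y, float z) {
        pending[0] += x;
        pending[1] += y;
        pending[2] += z;
    }

    // Called once per physics step (e.g. from prePhysicsTick()):
    // return the summed move, then reset to zero so nothing is
    // applied twice. The result would go to setWalkDirection().
    float[] consume() {
        float[] out = Arrays.copyOf(pending, 3);
        Arrays.fill(pending, 0f);
        return out;
    }
}
```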

Ah I missed that, apologies. Thanks for the pointer, will look at that tomorrow.

That works great, thanks.



One problem I am musing over, and would like thoughts on, is how to get the forces applied to an object via the physics engine in a given physics tick. This is why:



In order to do client-side prediction I store the moves performed since A, where A is the last acknowledged state from the server. The moves are also passed to the server.

When doing a client tick I move the object to point A and apply all the stored moves since A to get to my current point, B.



When doing a server tick I apply all the moves I have stored up, and the object ends up at position C, where C typically corresponds to an earlier time than B.



When I receive an acknowledgement from the server it contains the current server state, C, and the moves processed to get to that state.



I then remove the acknowledged moves from the client and update its position to C. Any moves that have happened since the ack are kept.



I then repeat this whole loop where A now equals C.



This allows me to process client inputs immediately without getting out of sync with the server, and it works well for normal movement. However, there are unknowns for me: the movements caused by the physics engine (such as gravity). I need to add these to my client’s moves so that when I move from A to B I include all the actual move steps that occurred. I am not unduly worried, as I am sure I can alter the base code if required to give me this information, but if there is a way of getting it using the standard jME implementation that would be great. I can’t see anything in PhysicsSpace, but I may be missing something obvious.
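As a rough illustration of the loop described above, here is a minimal 1D sketch (hypothetical names, not jME code): positions are plain floats, and the server ack carries the authoritative position plus the number of acknowledged moves.

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Hypothetical 1D sketch of replay-based client prediction.
class Predictor {
    float position;                 // current predicted position (B)
    final Deque<Float> pendingMoves = new ArrayDeque<>(); // moves since last ack (A)

    // Local input: apply immediately and remember it for later replay.
    void applyInput(float move) {
        pendingMoves.add(move);
        position += move;
    }

    // Server ack: authoritative position C after `ackedCount` moves.
    // Drop the acknowledged moves, then replay the remainder from C,
    // smoothly reconciling any divergence.
    void onServerAck(float serverPosition, int ackedCount) {
        for (int i = 0; i < ackedCount && !pendingMoves.isEmpty(); i++) {
            pendingMoves.removeFirst();
        }
        position = serverPosition;
        for (float m : pendingMoves) {
            position += m;
        }
    }
}
```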



Matt

Syncing the actual location and speed of the objects is probably a better idea than using only incremental data. You have to apply forces in the preTick call, while you should read forces in the main tick.

For the other objects in the scene I use the position and velocity and interpolate their position to give a smooth appearance.
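A minimal sketch of that interpolation (plain arrays standing in for Vector3f; the helper name is made up, and jME3’s Vector3f also offers interpolation methods for this):

```java
// Hypothetical helper: linear interpolation between two received
// position snapshots. `alpha` is how far we are between the snapshot
// times, in the range 0..1; alpha = 0 gives `from`, alpha = 1 gives `to`.
class SnapshotLerp {
    static float[] lerp(float[] from, float[] to, float alpha) {
        float[] out = new float[3];
        for (int i = 0; i < 3; i++) {
            out[i] = from[i] + (to[i] - from[i]) * alpha;
        }
        return out;
    }
}
```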



The idea of using incremental data for the player is that if the server comes back and says that after commands 1-3 I am at position A, but the client thought it was at A’, I can apply the existing client moves (4-6) from position A (rather than A’) and so smoothly reconcile the difference to where it should be.



When you say I should read forces in the main tick, I don’t quite get what you mean. How do I read the forces applied to the object by the physics engine? Thinking about this more, I can work out the position before and after a physics tick and record the difference between the intended walkDirection and the move actually made as a delta move. That should work fine.
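That idea can be sketched like this (hypothetical helper, plain arrays standing in for Vector3f): capture the position in `prePhysicsTick()`, read it again in `physicsTick()`, and subtract the walkDirection you asked for; what remains is the engine-applied part (gravity, collision resolution, and so on).

```java
// Hypothetical helper: isolate the movement the physics engine added
// during one step, i.e. delta = (posAfter - posBefore) - intendedMove.
class PhysicsDelta {
    static float[] engineDelta(float[] before, float[] after, float[] intended) {
        float[] d = new float[3];
        for (int i = 0; i < 3; i++) {
            d[i] = (after[i] - before[i]) - intended[i];
        }
        return d;
    }
}
```

The returned delta can then be appended to the client's stored moves so a replay from A reproduces the full path to B.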



Once I have got my code to a reasonable state I will try to host it somewhere in case it helps others - even if it’s only to see what not to do!



Matt

getLinearVelocity and getAngularVelocity, then setLinearVelocity and setAngularVelocity. That’s how most physics systems sync.

Fantastic, thanks a lot.

I have a similar problem:



I implemented a CollisionGroupListener to check for possible collisions with some RigidBodyControls that I shoot at a CharacterControl.

I then calculate the vector of the force that would apply to the CharacterControl when it is hit by those RigidBodyControls.



(If you want to know how: once the CollisionGroupListener reports a possible collision between the CharacterControl and a RigidBodyControl, I do raycasting to find an approximate surface normal at the point of impact. The negated surface normal is added to the normalized linear velocity of the RigidBodyControl, which gives a (roughly) approximated direction of the force applied to the CharacterControl - I left out angular velocity. You can then multiply the direction by scalars, perhaps a virtual mass of the CharacterControl or some other values.)
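A minimal sketch of that approximation as described (hypothetical helper, plain arrays standing in for Vector3f):

```java
// Hypothetical helper: approximate the impact direction as
// normalize(projectileVelocity) + (-surfaceNormal), then scale.
// Angular velocity is ignored, as in the description above.
class ImpactForce {
    static float[] approximate(float[] velocity, float[] surfaceNormal, float scale) {
        float len = (float) Math.sqrt(velocity[0] * velocity[0]
                + velocity[1] * velocity[1] + velocity[2] * velocity[2]);
        float[] dir = new float[3];
        for (int i = 0; i < 3; i++) {
            // normalized velocity component minus the surface normal
            dir[i] = (velocity[i] / len - surfaceNormal[i]) * scale;
        }
        return dir;
    }
}
```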



So is there a best practice for how to apply this force vector to the CharacterControl?

I set the walkDirection of the CharacterControl in the collide() method of the CollisionGroupListener and encountered some problems:


  1. I figured that physics runs at a fixed 60 fps framerate, so I set the walkDirection to zero before and at the physics tick (using a PhysicsTickListener), just to be sure to have everything in sync. Using a big walkDirection produces non-deterministic results: sometimes the character gets moved by a small amount, sometimes by a large amount, although the walkDirection doesn’t change much.


  2. If I manage to solve the non-deterministic results, how would I interpolate the walkDirection, as big walkDirections produce big jumps?

    Would it be smart to “chop” the walkDirection into n pieces and apply each “chop” on the physics tick, n times?
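The chopping idea from point 2 could be sketched like this (hypothetical helper, not jME API); each piece would be passed to setWalkDirection() on a successive physics tick:

```java
// Hypothetical helper: split one large walkDirection into n equal
// per-tick pieces so the movement is spread across n physics ticks.
class WalkChopper {
    static float[][] chop(float[] total, int n) {
        float[][] pieces = new float[n][3];
        for (int p = 0; p < n; p++) {
            for (int i = 0; i < 3; i++) {
                pieces[p][i] = total[i] / n;
            }
        }
        return pieces;
    }
}
```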

Just a quick update: I solved problem 1. I simply applied the walkDirection that I calculated in the collide method on the next physics tick, and set the walkDirection to zero on the physics tick after that - and voilà, the results seem to be the same every time I test.



I did some research; rigid body collisions are a bit more complex than I thought, so I am working on the formula.



But again, if I manage to get the correct velocity vector for the character control out of the equations, it is still questionable how to apply this velocity to the character control.



So my thinking is this:

onGround():

If the character was in the air, take the velocity from the landing point (= initial velocity). The velocity decreases (linear damping) on each physics tick - deceleration. I calculate the distance to move until the next physics tick (this distance decreases, of course, because the velocity gets smaller and smaller) → this distanceToMove is set as the walkDirection of the character on each physics tick.

!onGround():

Only the vertical component of the velocity decreases in a projectile motion. This is already decreased internally by the KinematicCharacterController → the distanceToMove value doesn’t change on each physics tick; hopefully the KCC applies the correct deceleration.
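The onGround() branch above could be sketched like this (hypothetical names; a simple per-tick damping factor stands in for Bullet’s linear damping):

```java
// Hypothetical helper: starting from the landing velocity, each
// physics tick moves the character by v * dt and then damps v, so
// the successive walkDirection pieces shrink over time.
class DampedSlide {
    private final float[] velocity;
    private final float damping; // fraction of velocity lost per tick, 0..1

    DampedSlide(float[] initialVelocity, float damping) {
        this.velocity = initialVelocity.clone();
        this.damping = damping;
    }

    // Distance to cover before the next physics tick; this is what
    // would be passed to setWalkDirection() each tick.
    float[] nextStep(float dt) {
        float[] step = new float[3];
        for (int i = 0; i < 3; i++) {
            step[i] = velocity[i] * dt;
            velocity[i] *= (1f - damping);
        }
        return step;
    }
}
```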



If you have some tips for me, or if my thinking is wrong, please send me some suggestions.

OK, I applied the equations ;). The character control that receives a velocity chops the velocity into pieces (each time a smaller piece) and applies them with the help of setWalkDirection().



Now another problem occured :cry:

I am setting the walkDirection on each physics tick. This walkDirection is applied by the underlying KinematicCharacterController in the update() method it implements from ActionInterface.

Is that update() method called at the same framerate as the physics tick? That would be the only logical conclusion, BUT:

The small walkDirections seem to generate many fluctuations: the CharacterControl does not travel in a straight line; it is more like it shakes its way to where it should go (this could be caused by rounding problems, but I doubt it, as the character reaches its destination pretty accurately - it’s just vibrating like mad on the way).



Anyone have some ideas? Once I solve this last problem (hopefully - I didn’t test projectile motion yet), I can create and share a “DynamicCharacterControl” which will have a method applyCentralVelocity(Vector3f velocity).



If this isn’t solvable, maybe a special RigidBodyControl could emulate dynamically affected characters.
On that topic:
rigidBodyControl.setAngularFactor(0f) eliminates any rotation applied to the character (no cam shaking, wee), and rigidBodyControl.setFriction(x) could possibly eliminate any sliding on a hill surface (although this will be tricky).
Moving around could be achieved with applyCentralForce, but how about moving on a hill? How would you step up onto obstacles? That’s all handled by the KinematicCharacterController, so the approach I am attempting above seems more appealing to me.

Another thing I tried is switching the enabled flag of a CharacterControl and a RigidBodyControl attached to the same player node.
On a collision, you enable the RigidBodyControl, let the physics do its thing, and when it’s moving slowly, switch the CharacterControl back on.
This works like a charm, but it’s "dirty": no interaction from the player is possible while the RigidBodyControl is enabled.

In some other topics I read that they split the character up into more pieces, like the head and feet being CharacterControls and the body being a RigidBodyControl. But then again, how would you "link" the CharacterControls so that they move with the RigidBodyControl and vice versa? We all know what joints do when applied between a kinematic node and a dynamic one.

Update: the walkDirection is applied at the rate of prePhysicsTick and physicsTick; I tested it with rectangular application of walkDirections.



The problem is the rounding and/or calculation of the small pieces.

Yep, that is right. The walkDirection is the amount to be moved per physics tick, independent of the duration of the physics tick. However, the actual distance moved will also take into account any physics that occur (such as resolving tunnelling into a terrain, gravity, collision with other rigid bodies, etc.). I guess this is what you see? When I don’t use any gravity etc. (and float through the air), I can do the same movement (which simply applies a walk direction based on facing and keys pressed) on my server and client and I get the same answer on both, down to several decimal places.



Matt


Thanks, Matt, for clarifying.



Yeah, various terrain shapes and collisions can all contribute to the character being pushed around.

I’m testing the walkDirection on a BoxCollisionShape floor, with no other forces being involved, except gravity of course.

Testing with rectangular movement (cycling the walkDirections between (-1,0,0), (0,0,-1), (1,0,0), (0,0,1)) keeps the character in the 4 corners all the time, but testing with my other values keeps the character vibrating.



Either it’s a flaw in my calculations or some internal, abnormal physics calculations make the character move that way.



I have already simplified the equations on paper (extracted all possible precalculations, extracted the maximum number of bit-shift operations, applied the positive bit-shifts first and the negative bit-shifts last, thus effectively minimizing rounding errors).

I am implementing the methods as we speak; hopefully this will solve the issue. But then again, syncing this between server and client will prove difficult - but that’s another story :smiley:

I am wondering if the floor has anything to do with it, as calculations have to be performed to keep you above it, and these will impact your move calculations. Try setting gravity and fall speed to zero, start your character above the floor, in space, and see if that still exhibits the same problem. Also, what collision shape are you using for the character?


Man, you saved my day - you don’t want to know how much time I invested optimizing the calculations!!



In space, with no gravity, the character flies nicely and slowly in a straight line to the destination! No shaking whatsoever!

I’m using a CapsuleCollisionShape for the character and a BoxCollisionShape for the floor (from jme3test.bullet.PhysicsTestHelper.createPhysicsTestWorld(…)).



I bet the shaking stops if I put the character on a TerrainQuad with a HeightfieldCollisionShape or a CompoundShape consisting of other MeshCollisionShapes!

I tried it out, and it works perfectly on this terrain now. Thank you very much mattm!!

I will tweak the code a bit and will share it once it’s documented!!



If you know a solution for dealing with BoxCollisionShape floors, I am all ears though :slight_smile:

I don’t, I’m afraid. I’ve only been using jMonkey for a few weeks, and that time has been spent implementing the FPS framework for a game I’m hoping to write. If I get a chance I will debug it, but that’s unlikely to happen in the near future.

Good luck with that framework!! Once it’s finished I will definitely take a look at it, as I can see you have given a lot of thought to syncing physics!

I have one further question. What do people do so that their characters move smoothly across render frames, given that their location is only updated on the physics frames? I don’t want my physics engine to run at 60 fps, but this means my character only moves on some frames, regardless of input.



Matt