Networked game laggy, maybe use another method to keep players in sync?

This isn't about jMonkeyEngine specifically, but about networking in general:

My friend and I are currently working on an Android game (based on a SurfaceView for now, but once we are experienced enough in networking we will switch to jME). It is a local network multiplayer game (played over Bluetooth or LAN). At the moment it is very laggy, but we don't know why. We have done some research: we first started using buffered streams, then we replaced ObjectStreams with DataStreams and tried to use only longs as data, so that we only need primitive data types and no arrays (we merged numbers with Cantor's pairing function), etc. Was this step right? Is ObjectStream really much slower than DataStream?
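
For reference, a common alternative to Cantor's pairing function is plain bit-packing, which is exact for the full int range and cheaper to compute. A minimal sketch (class and method names are made up for illustration):

```java
// Sketch: packing two 32-bit ints into one 64-bit long with bit shifts.
// Unlike Cantor pairing, this cannot overflow and the inverse is trivial.
public class PackUtil {
    public static long pack(int x, int y) {
        // upper 32 bits hold x, lower 32 bits hold y
        return ((long) x << 32) | (y & 0xFFFFFFFFL);
    }

    public static int unpackX(long packed) {
        return (int) (packed >> 32);
    }

    public static int unpackY(long packed) {
        return (int) packed; // truncates to the low 32 bits
    }
}
```

The masking with `0xFFFFFFFFL` matters: without it, a negative `y` would sign-extend and clobber the upper 32 bits.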

But it is still pretty slow. We send a long about every 1 ms (to update the players' positions) from client to server, and in parallel from server to client. That means a data transfer of 64 bit/ms × 2 = 64 kbit/s × 2 = 128 kbit/s. Is that too much? Is there a smarter way to sync the positions of the players (which are stored in a double[])?

Is there any tutorial explaining how local networking in Java should be done for games that need to exchange a lot of data?

We also thought of switching to ints or another data type to keep the data small. We found out that TCP is the standard for sockets, but since we send so much data it would be OK if we lose some packets, so we thought of switching to UDP to increase performance. Any advice/tutorials on this?

We also use multiple threads (six in total: the UI thread, a thread to call draw(), reader and writer threads, and two threads for the characters, one each), but we start them with new Thread().start(). Should we do that differently? I've heard that this is not always the best way to handle threads on multi-core devices.

Or could there just be other reasons that make our game laggy? Are we looking for the issues in the wrong place?
This is our very first attempt at creating games using the Canvas and SurfaceViews (and we try to figure out as much as possible on our own, so we don't google much and don't follow a specific tutorial; maybe some things aren't handled conventionally by us).

I suggest reading this:
http://www.gabrielgambetta.com/fpm1.html

This is great too:
https://developer.valvesoftware.com/wiki/Source_Multiplayer_Networking

You're missing the simplest approach:
why not fake it?

Send at a much lower rate (network jitter is far too unpredictable anyway to get good results by brute force) and interpolate instead.
Since each player can only see their own screen, smaller differences won't matter.
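
A minimal interpolation sketch, assuming the client keeps the two most recent position snapshots and renders slightly in the past (the names and the idea of a fixed render delay are illustrative, not from anyone's actual code):

```java
// Sketch: client-side linear interpolation between the two most recent
// position snapshots received from the server.
public class Interpolator {
    public static double[] lerp(double[] from, double[] to, double t) {
        // clamp t to [0, 1] so we never extrapolate past the newest snapshot
        t = Math.max(0.0, Math.min(1.0, t));
        double[] out = new double[from.length];
        for (int i = 0; i < from.length; i++) {
            out[i] = from[i] + (to[i] - from[i]) * t;
        }
        return out;
    }

    // renderTime is "now minus a small delay" (e.g. ~100 ms);
    // t0/t1 are the timestamps of the two snapshots p0/p1
    public static double[] positionAt(long renderTime, long t0, double[] p0,
                                      long t1, double[] p1) {
        double t = (double) (renderTime - t0) / (double) (t1 - t0);
        return lerp(p0, p1, t);
    }
}
```

Rendering a little behind "now" is what guarantees there are always two snapshots to interpolate between.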

This thread is interesting, please go on with the discussion…

A lower message rate and interpolation should solve everything; it only gets complicated when players start to shoot at each other :smiley: :smiley:
You could also use extrapolation for even better results.

I'm syncing players every 0.1 s; that's recommended as enough in SpiderMonkey.

Thanks for all of your advice ^^

We now send ints instead of longs and only sync the position every 10 ms. It runs more smoothly now.

(EDIT: I misread that as 0.01 s while you wrote 0.1 s, sorry. 0.1 s is actually 100 ms, so it's slow.)

10 ms is very quick and might work on a LAN or within the same ISP's network, but it won't work with distant real-world servers. Most gaming servers see pings of at least 20 to 30 ms. You should also consider bombarding positions using UDP packets instead of TCP (if that's not already the case): UDP packets are faster but less reliable, but since you sync VERY VERY often, it doesn't really matter if a packet is lost or corrupted once in a while.
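
In plain Java, the standard `DatagramSocket`/`DatagramPacket` API is all that's needed for this kind of fire-and-forget position broadcast. A rough sketch (the port and the id+x+y payload layout are assumptions for illustration):

```java
import java.io.IOException;
import java.net.DatagramPacket;
import java.net.DatagramSocket;
import java.net.InetAddress;
import java.nio.ByteBuffer;

// Sketch: sending one player position as a single small UDP datagram.
public class PositionSender {
    // payload layout (12 bytes): playerId (int), x (float), y (float)
    public static byte[] encode(int playerId, float x, float y) {
        ByteBuffer buf = ByteBuffer.allocate(12);
        buf.putInt(playerId).putFloat(x).putFloat(y);
        return buf.array();
    }

    public static void send(DatagramSocket socket, InetAddress host, int port,
                            int playerId, float x, float y) throws IOException {
        byte[] data = encode(playerId, x, y);
        // fire and forget: if this datagram is lost, the next sync replaces it
        socket.send(new DatagramPacket(data, data.length, host, port));
    }
}
```

Because every datagram carries the full current position, a lost packet costs nothing but one 10 ms update.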

Also, smaller UDP packets are better… well, small enough to fit under the MTU. Otherwise they will get split during transfer and recombined on the other end… and simply fail if one of the parts doesn't make it. I.e., if the connection is losing any packets at all, then you will lose that many more messages, because the odds are that much better that a whole message will fail.

The MTU is difficult to detect but is often 1500 bytes, as I recall. So, minus header bytes, a UDP message with a data block smaller than 1420 bytes or something.

The thing is, 1420 bytes is SO MUCH more than any small game will ever need to sync 50 times a second. Think about positions and rotations; that's what you need to update most often. Then you can update health points and the like only once a second or so; there's no need to send those every 20 ms. I'm actually in the process of planning these things now for my own project, so I find this conversation very interesting. I think props that move faster should be synced more often, so that their movement level of detail is more precise than that of props that move slower. I also think that SMALLER objects should be synced more often than larger ones. It depends on what type of game you're making, though. For example, if you shoot a bullet or an arrow in your game, it is small and fast, so it should be synced far more often per second than a vehicle or a box on the ground, for instance.

If you read those Valve articles, they talk about sending only every 50 ms (20 times a second) but including all of the frames that occurred during that time. In my own networked physics implementation, I can fit around 70 or 80 object updates in one 1400-byte message. (It varies because I only send deltas of the 'last agreed-upon good state'.) Considering that each message will contain at least three frames of data, it's important to pack the updates in as small as possible. If I have more than that, then I split the message so that they are still independent. And they are already split by zone.
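
The packing idea could be sketched roughly like this (purely illustrative: real code would write compressed deltas rather than full states, and the `Update` record, sizes, and names are all made up):

```java
import java.nio.ByteBuffer;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

// Sketch: filling one MTU-sized message with as many object updates as fit,
// then starting a new, independent message for the rest.
public class MessagePacker {
    static final int MAX_PAYLOAD = 1400; // stay safely under a 1500-byte MTU
    static final int UPDATE_SIZE = 16;   // id (4 bytes) + x, y, z floats (12)

    public record Update(int id, float x, float y, float z) {}

    public static List<byte[]> pack(List<Update> updates) {
        List<byte[]> messages = new ArrayList<>();
        ByteBuffer buf = ByteBuffer.allocate(MAX_PAYLOAD);
        for (Update u : updates) {
            if (buf.remaining() < UPDATE_SIZE) { // message full: flush it
                messages.add(Arrays.copyOf(buf.array(), buf.position()));
                buf.clear();
            }
            buf.putInt(u.id()).putFloat(u.x()).putFloat(u.y()).putFloat(u.z());
        }
        if (buf.position() > 0) { // flush the final partial message
            messages.add(Arrays.copyOf(buf.array(), buf.position()));
        }
        return messages;
    }
}
```

With 16 bytes per update, 87 updates fit in one 1400-byte payload, which is in the same ballpark as the 70–80 figure above (deltas vary in size, fixed-size full states don't).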

Oh yeah, splitting by zone is important to save resources. Do not send updates for stuff that happens one chunk away from that particular player; in other words, you have to sort which updates you push to which players too.
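
The zone filtering described above might look like this (a sketch; the zone size and the one-zone neighbourhood radius are assumed values):

```java
// Sketch: deciding which players should receive an update, based on
// zone (chunk) distance between the player and the event.
public class ZoneFilter {
    static final int ZONE_SIZE = 32; // metres per zone, an assumed constant

    static int zoneOf(double coord) {
        return (int) Math.floor(coord / ZONE_SIZE);
    }

    // send the update only if the player is in the same zone as the event
    // or in one of the directly neighbouring zones
    public static boolean shouldReceive(double playerX, double playerY,
                                        double eventX, double eventY) {
        return Math.abs(zoneOf(playerX) - zoneOf(eventX)) <= 1
            && Math.abs(zoneOf(playerY) - zoneOf(eventY)) <= 1;
    }
}
```

The server would run this check per (player, update) pair before queueing anything for transmission.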

I find the “many frames” packing an interesting approach, but it seems non-trivial to me. I'm new to networking techniques in general (not to mention I was only introduced to Java/OpenGL one year ago, lol), but it's interesting. I don't see a fit for this in all applications though, because often you can calculate the trajectory of moving objects and interpolate on much, much less data than that. For instance, shooting an arrow or a bullet: you sync where it starts, what velocity it has, its orientation, etc… and then 99.9% of the time you do not need to sync, because unless it bumps into an object, you already know where it is heading and at what speed.

Do not use an MTU larger than 512 bytes if you do not do any kind of MTU discovery or fallback, etc.!

According to the specification, routers have the right to simply drop everything over that size if they want, and at least some commonly used consumer models actually do!
I found this out painfully when some world state updates randomly never arrived while I was testing my network system. I recommend either testing with different sizes, or at least initially sending a few large UDP packets to ensure that the UDP path works, and using TCP only as a fallback if it doesn't.

By the way, if you do a good job on interpolation and some prediction for player movement, you can usually get away with using TCP exclusively, as the problems it causes happen only rarely on normal links.
And it saves precious time in the beginning; one can always add more complex networking later on.

@Empire Phoenix said: Do not use an MTU larger than 512 bytes if you do not do any kind of MTU discovery or fallback, etc.! […] if you do a good job on interpolation and some prediction for player movement, you can usually get away with using TCP exclusively […]

A missed packet on TCP can stall the connection for a full one or two seconds while it catches up and then replays all of the now-redundant data. I used to use plain TCP to synchronize camera movements between multiple users on a LAN, and it was really jarring to occasionally have the whole display freeze for a second until the playback rapidly threw you forward again. Prediction was no better, either, because it's impossible to accurately predict two seconds into the future… and reconciling with that much data is tricky. Teleportation and quick slides were even more nauseating than the two-second pauses plus rapid fly-forward.

I guess if your data can already be represented sparsely to begin with it might work.

@.Ben. said: […] often you can calculate the trajectory of moving objects and interpolate on much much less data than that. For instance, shooting an arrow or a bullet, you sync where it starts, what velocity it has, orientation etc… and then for 99.9% of the time, you do not need to sync, because unless it bumps into an object, you already know where it is heading for and at what speed etc…

If you don't capture all of the frames, then you potentially miss data that can have jarring effects. For example, a bouncing ball might never actually be seen to hit the ground, because you might miss the two frames where it hits and bounces away again at the peak of its speed.

It's not too hard to accumulate changes. Certainly not much harder than "all of the rest of networking"… which is one of the single hardest things to program.

@.Ben. said: Oh yeah splitting by zone is important to save resources.

It has another important benefit too: you can chop the bit size of your coordinates down. Knowing that your zones will only ever be 32 meters wide, you know your positions will always be 0–32. Figure out how much decimal accuracy you want and then pack the floats smaller. This is especially useful if you are already tightly packing them into a bit stream. I'd have to look to be sure, but my position floats might have been packed as 19 bits or something. That would pack a vec3 into under 8 bytes versus the normal 12.
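
Quantising a bounded coordinate into fewer bits might look like this (a sketch using the numbers from the post above; 19 bits over a 0–32 m range gives a step of roughly 32 / 2^19 ≈ 0.06 mm):

```java
// Sketch: quantising a float known to lie in [0, 32) into a 19-bit integer,
// so three coordinates fit in 57 bits instead of three 32-bit floats (96).
public class BitQuantizer {
    static final int BITS = 19;
    static final float RANGE = 32f;                // zone width in metres
    static final int MAX = (1 << BITS) - 1;       // 524287

    public static int quantize(float value) {
        // map [0, RANGE) onto [0, MAX] and round to the nearest step
        return Math.round((value / RANGE) * MAX);
    }

    public static float dequantize(int q) {
        return (q / (float) MAX) * RANGE;
    }
}
```

The quantised ints would then be written into a bit stream back to back; the round trip loses at most half a quantisation step.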

I’ll be resurrecting this code soon and if I can make it general then I will open source it. Right now the zone sizes, bit packing, etc. assume a lot of constants. I don’t know if I can make it variable without killing its performance.

@pspeed said: Teleportation and quick slides were even more nauseating than the 2 second + rapid fly pauses.

Wait… are you implying that you replay the last two seconds on other clients? So everybody sees everybody else's movements from the last two seconds? How can this work when you have to accurately punch or shoot at something? You're playing in the present but aiming at something that was there 2.x seconds earlier…? O.o

@pspeed said: For example, a bouncing ball might never actually be seen to hit the ground as you might miss the two frames where it hits and bounces away again at the max of its speed.

Well, maybe I'm mistaken, but the way I see it, the ball has a position/orientation and a velocity/momentum, etc., so if you throw it in a direction with some force, then you do not have to send its properties (position, rotation, force, etc.) every 0.01 s; you can instead just send the key events that make any of its properties change, like when the ball hits anything, and sync then, in real time. This way it's pretty much accurate and it saves a lot of bandwidth. The only delay is the ping itself, which is generally around 20 to 30 ms. If it doesn't hit anything, then the physics are processed LOCALLY on each client without even the need for an internet connection at all. You already defined where the ball was, its orientation, its mass, its forces, etc., so why not let the local physics take care of it?
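
This "sync only key events" idea is essentially dead reckoning: each client extrapolates the ball locally from the last synced state. A minimal sketch under gravity only (no drag, no collisions; the server would push a fresh state whenever the ball actually hits something):

```java
// Sketch: local extrapolation (dead reckoning) of a ball from its last
// synced position and velocity, using the constant-acceleration formula
// p(t) = p0 + v0*t + 0.5*a*t^2 on the vertical axis.
public class DeadReckoning {
    static final double GRAVITY = -9.81; // m/s^2, acting on the y axis

    // pos and vel are {x, y, z}; dt is seconds elapsed since the last sync
    public static double[] positionAfter(double[] pos, double[] vel, double dt) {
        return new double[] {
            pos[0] + vel[0] * dt,
            pos[1] + vel[1] * dt + 0.5 * GRAVITY * dt * dt,
            pos[2] + vel[2] * dt
        };
    }
}
```

Between syncs, every client evaluating this formula for the same initial state gets the same trajectory, which is why only the key events (throws, bounces) need to go over the wire.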

@.Ben. said: Wait… are you implying that you replay the last 2 seconds on other clients? […]

Well, with TCP (as that paragraph was discussing), your choice is to play them or drop them. Figuring out what to play and what to drop can be interesting, too.

@.Ben. said: Well, maybe I'm mistaken but how I see it is that the ball has a position/orientation and a velocity/momentum etc… […] so why not let the local physics take care of it?

Good luck with the whole “local physics” thing. Be prepared to pull all of your hair out trying to reconcile the local physics engine with the remote one.

Fact 1: You need to have some central authority (the server) or you will get different physics events happening at different times, and trying to sort out which collisions are right is a massive combinatorial problem.

Fact 2: Even the local player counts in all of that, so one has to come up with specific strategies for dealing with it.

…you know I’m just going to start regurgitating the Valve articles so I’ll just relink them.
https://developer.valvesoftware.com/wiki/Source_Multiplayer_Networking
https://developer.valvesoftware.com/wiki/Latency_Compensating_Methods_in_Client/Server_In-game_Protocol_Design_and_Optimization

One should not really attempt real-time networking without understanding all of the issues and solutions discussed there… even if one then intends to ignore that experience. A developer who doesn't at least understand what those articles are saying is in for a bad time.

@pspeed said: Good luck with the whole "local physics" thing. Be prepared to pull all of your hair out trying to reconcile the local physics engine with the remote one. […]

Well, that's already what I do. In my project, all physics objects collide on the server side first and are then synced to the clients, not the reverse. The only exception to this rule is that local physics on the clients can also resolve collisions to give a smooth impression, like the player walking on terrain or colliding with a wall or a rock, etc., but they are constantly "forced" back into line whenever the server sends a sync. That's it. Very simple and effective… SO FAR. I'm not sure what kind of problems I could run into later, but until now it works fine. I've never tested with 4-5-6-7-8 people at the same time, but it already works with 1-2-3 on a LAN :stuck_out_tongue:

I bet it might degrade later with real-world lag and 50 ms pings; time will tell, I guess.