How can I simulate adverse networking effects in a testing environment?

Is there a good way to test things like high latency in my SpiderMonkey game? I’d like to see how it does under bad conditions, but these conditions just don’t happen naturally when the server and clients are all on the same local machine :amused:

If you happen to have a Linux machine somewhere:
http://www.linuxfoundation.org/collaborate/workgroups/networking/netem

Here’s the set of relevant links I’ve been collecting.
I haven’t used any of this yet, but it may give you ideas where to look.

http://www.khokhar.net/2010/01/simulating-low-bandwidths/
http://info.iet.unipi.it/~luigi/dummynet/
http://www.akmalabs.com/downloads_netsim.php
http://www.charlesproxy.com/
http://www.softperfect.com/products/connectionemulator/
http://www.netlimiter.com/

In general, there seem to be two approaches:

  1. If you need to do load tests, there is little point in trying to simulate anything, because no simulation is going to be good enough to really cover all the bottlenecks that you might encounter.
  2. If you need to test the effects of lag and packet loss, you can introduce lag into the networking layer. The general approach is to place a proxy somewhere between (virtual or real) machines and have it introduce artificial lag (see the sketch below).
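
To illustrate the proxy idea from point 2, here’s a minimal sketch of a lag-injecting TCP relay in Java. It’s untested; the listen port, server address, and fixed delay are made-up example values, and real tools like the ones linked above also handle UDP, jitter, and packet loss.

```java
import java.io.*;
import java.net.*;

/**
 * Minimal lag-injecting TCP proxy: clients connect to LISTEN_PORT instead of the
 * real server port, and every chunk of bytes is delayed before being forwarded.
 */
public class LagProxy {

    static final int LISTEN_PORT = 6000;        // point your client here
    static final String SERVER_HOST = "localhost";
    static final int SERVER_PORT = 5110;        // the real game server
    static final long DELAY_MS = 150;           // artificial one-way latency

    public static void main(String[] args) throws IOException {
        try (ServerSocket listener = new ServerSocket(LISTEN_PORT)) {
            while (true) {
                Socket client = listener.accept();
                Socket server = new Socket(SERVER_HOST, SERVER_PORT);
                // one thread per direction
                new Thread(() -> pump(client, server)).start();
                new Thread(() -> pump(server, client)).start();
            }
        }
    }

    /** Copies bytes from one socket to the other, sleeping before each chunk. */
    static void pump(Socket from, Socket to) {
        byte[] buf = new byte[4096];
        try (InputStream in = from.getInputStream();
             OutputStream out = to.getOutputStream()) {
            int n;
            while ((n = in.read(buf)) != -1) {
                Thread.sleep(DELAY_MS);         // crude: delays the whole stream
                out.write(buf, 0, n);
                out.flush();
            }
        } catch (IOException | InterruptedException e) {
            // connection closed or interrupted; stop relaying this direction
        }
    }
}
```

The sleep-per-chunk approach is crude (it delays the whole stream rather than individual packets), but it’s often enough to see how the game feels under a few hundred milliseconds of lag.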

Actually, I disagree with 1. There’s no point doing too much work in that area, but running at least some basic tests to give you baselines is well worth it.

If you don’t mind adding a little code to your game, you can also add a MessageListener before any others that just sleeps for a little while. It’s not an exact simulation of latency, but it’s good enough for some test cases and it’s really simple.

Messages are delivered to the listeners sequentially, so if the first one gobbles up some time in a sleep (random or not), then all of the other listeners have to wait for it. I’ve even used this to look at the effect of latency on specific messages, since you can base how long to sleep on the type of message.
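
Roughly, such a listener might look like this. It’s only a sketch: the imports assume jME3’s SpiderMonkey packages, the delay range is arbitrary, and the registration calls in the comment at the bottom are illustrative rather than taken from real project code.

```java
import com.jme3.network.Message;
import com.jme3.network.MessageListener;

/**
 * Register this listener before any others; every incoming message then sits
 * in a sleep for a while before the "real" listeners get to see it.
 */
public class LagListener<S> implements MessageListener<S> {

    private final long minDelayMs;
    private final long maxDelayMs;

    public LagListener(long minDelayMs, long maxDelayMs) {
        this.minDelayMs = minDelayMs;
        this.maxDelayMs = maxDelayMs;
    }

    @Override
    public void messageReceived(S source, Message m) {
        // You could also pick the delay based on m.getClass() to slow down
        // only specific message types.
        long delay = minDelayMs + (long) (Math.random() * (maxDelayMs - minDelayMs));
        try {
            Thread.sleep(delay);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }
}

// Illustrative registration (the names of your own client/listener objects will differ):
// client.addMessageListener(new LagListener<Client>(50, 200));  // register this one first
// client.addMessageListener(myRealGameListener);
```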

I typed too quickly for case 1.
Load testing means finding out how many clients can hammer the server before the server breaks down. Obviously, you won’t be able to properly test that situation if you keep all clients and the server on the same hardware.

What you CAN do is find out what server degradation will look like as the server approaches its limits.
However, that’s going to be somewhat unreliable, because the client processes will take up RAM and CPU that the server machine wouldn’t normally have to provide.
And you won’t find out whether the first bottleneck is going to be disk I/O or CPU/RAM, which is actually a pretty important distinction to make, so I still doubt that load testing with everything on the same hardware is going to be very useful. Common wisdom on load testing in the web business is to use separate hardware, after all.

Yes, but you can still simulate load by running test clients on one machine and the server on the other.

For herodex we wrote a “headless” client that could be spawned any number of times and would then follow simple rules, moving around the world. We could run any number of those on one machine, or even spread them over multiple machines, to put simulated load on the server without needing hundreds of live testers.
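
Very roughly, that kind of bot swarm looks like the sketch below. The FakeGameClient here is just a stand-in for whatever headless wrapper you’d write around your own client code (it only prints instead of actually networking), and the host/port and timings are made up.

```java
import java.util.Random;

/**
 * Spawns a number of "headless" bot clients that each follow trivial random
 * rules, to put simulated load on a server without needing live testers.
 */
public class BotSwarm {

    /** Stand-in for your real headless client wrapper; it only prints here. */
    static class FakeGameClient {
        private final String name;
        private final Random rng = new Random();

        FakeGameClient(String name) { this.name = name; }

        void connect(String host, int port) {
            System.out.println(name + " connecting to " + host + ":" + port);
        }

        void sendRandomMove() {
            // A real bot would send a movement/action message to the server here.
            System.out.println(name + " moves to " + rng.nextInt(100) + "," + rng.nextInt(100));
        }
    }

    public static void main(String[] args) {
        int botCount = args.length > 0 ? Integer.parseInt(args[0]) : 50;
        for (int i = 0; i < botCount; i++) {
            final int id = i;
            new Thread(() -> runBot(id)).start();
        }
    }

    static void runBot(int id) {
        FakeGameClient client = new FakeGameClient("bot-" + id);
        client.connect("localhost", 5110);              // made-up host/port
        while (true) {
            client.sendRandomMove();                    // "follow simple rules"
            try {
                Thread.sleep(250 + (long) (Math.random() * 500));
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
                return;
            }
        }
    }
}
```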

That’s what I meant. You can simulate the users, but you can’t simulate the hardware.
Well, you could, but it’s usually easier to hook up some real hardware.

Why would you simulate hardware to do load testing? :s

When you don’t trust the results from hooking up boxes in your office.
Failure modes tend to differ once lag and dropped packets come into play, and some lag/drop compensation approaches are sensitive to the failure distribution, as are some kinds of bugs; so it’s good to have simulation results before things get tight in the real world.

Also, if you want to automatically unit test your network code in nightly builds, a simulation on a server might work better than keeping an array of boxes networked for that purpose.

This is all stuff for large-scale corporations though. The effort to set up such a simulation would be substantial.

You can also create a few VMs; that’s what we do at work to test a few things.
Some VM software lets you set up network latency and limit network bandwidth, which simulates many of the problems that can come up pretty well.

Our favorite approach is to set up pfSense as a router. You can then use NAT reflection to bounce traffic off of it, even back to the same machine. Its filter rules are applied even to reflected traffic.

pfSense can emulate latency, loss, and limited bandwidth.