Distributed Games

Distributed, near-real-time multiplayer games (e.g. Doom and Quake) place interesting demands on the developer in terms of distributed computing. Where might I learn more about the architectures used?

What protocols do they use? What does the server do?

I'd like to know how the individual player sessions are synchronised. Given the time constraints and the computationally intensive rendering that's also happening in many games, I think it would make interesting reading.

Does anyone here know of a written account of this work?


Doom used a PseudoRandomNumberGenerator, starting each client with the same seed, to generate the enemy movements; this avoided the need to synchronize them. Presumably, other games from that time period did likewise. -- DanielKnapp
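
For the curious, here is a minimal sketch of that shared-seed idea in C. The generator (a simple linear congruential generator) and the seed value are illustrative, not Doom's actual random-number code:

 #include <stdint.h>
 #include <stdio.h>

 /* Shared-seed determinism: every client constructs its generator
  * from the same seed agreed at game start, so all clients compute
  * identical "random" enemy moves without exchanging them.  The LCG
  * constants are illustrative, not Doom's actual generator. */

 static uint32_t rng_state;

 void rng_seed(uint32_t seed) { rng_state = seed; }

 uint32_t rng_next(void)
 {
     rng_state = rng_state * 1664525u + 1013904223u;
     return rng_state;
 }

 /* Each client calls this in the same tick order, so the enemy
  * takes the same step everywhere with no network message. */
 int choose_enemy_direction(void)
 {
     return rng_next() % 4;   /* 0..3: N, E, S, W */
 }

 int main(void)
 {
     rng_seed(0xDEADBEEF);    /* same seed distributed to all clients */
     for (int i = 0; i < 5; i++)
         printf("tick %d: enemy moves %d\n", i, choose_enemy_direction());
     return 0;
 }

Because every client seeds the same generator and draws from it in the same tick order, the enemies move identically everywhere.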

The source code for DOOM and Descent is available for those who want to see how the networking is actually implemented. Both DOOM and Descent use a peer-to-peer topology and expect to run on a LAN.

I've designed and implemented protocols for computer games running over the Internet. I based my protocol on SIMNET, DIS, and research stemming from those protocols. I also designed the protocol engine on the principles of ApplicationLayerFraming and IntegratedLayerProcessing (a.k.a. ALF and ILP). The basis for all game protocols is the HalfObjectPlusProtocol pattern.
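
To make the pattern concrete, here is a rough HalfObjectPlusProtocol sketch in C; every name (PlayerHalf, push_update, send_datagram) is invented for illustration, not taken from any particular game:

 #include <stdint.h>
 #include <stdio.h>
 #include <string.h>

 /* HalfObjectPlusProtocol: each node keeps a local "half" of every
  * shared object.  Reads hit the local half; writes update it and
  * go through the protocol half to the remote halves. */

 typedef struct {
     uint32_t id;
     float x, y;           /* position, cached locally */
 } PlayerHalf;

 /* Stub transport; a real game would send a UDP datagram. */
 static void send_datagram(const void *buf, size_t len)
 {
     printf("sending %zu-byte update\n", len);
 }

 /* Protocol half: flatten local state into a wire message. */
 static void push_update(const PlayerHalf *p)
 {
     uint8_t buf[12];
     memcpy(buf,     &p->id, 4);
     memcpy(buf + 4, &p->x,  4);
     memcpy(buf + 8, &p->y,  4);
     send_datagram(buf, sizeof buf);
 }

 /* Object half: the local simulation mutates its half freely... */
 static void move_player(PlayerHalf *p, float dx, float dy)
 {
     p->x += dx;
     p->y += dy;
     push_update(p);   /* ...and the protocol propagates the change */
 }

 int main(void)
 {
     PlayerHalf me = { 1, 0.0f, 0.0f };
     move_player(&me, 1.5f, -0.5f);
     return 0;
 }

The local simulation reads and writes its half freely; the protocol half's only job is to propagate changes to the halves held by the other nodes.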

There are a number of interesting trade-offs in such protocols, particularly in achieving "real-time" performance and reliability. The traditional methods of achieving reliability (timeout and retransmit, for example) need to be modified to avoid wasting bandwidth by retransmitting out-of-date information when more up-to-date information is available from the application. For example, you shouldn't retransmit a message containing an old location of a player, because the player will have moved since it was first sent; instead, transmit a message containing the player's current location.
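
A sketch of that idea, with illustrative names: the sender never buffers outgoing packets for retransmission; on a timeout it simply rebuilds the message from the live application state, so a "retransmission" always carries fresh data.

 #include <stdint.h>
 #include <stdio.h>

 typedef struct { float x, y; uint32_t seq; } PositionMsg;

 static float player_x, player_y;   /* live application state */
 static uint32_t next_seq;

 static void send_msg(const PositionMsg *m)
 {
     printf("seq %u: (%f, %f)\n", m->seq, m->x, m->y);
 }

 /* Called both on first send and on retransmission timeout: the
  * message is rebuilt from current state each time, so a
  * retransmission never carries a stale position. */
 static void send_current_position(void)
 {
     PositionMsg m = { player_x, player_y, next_seq++ };
     send_msg(&m);
 }

 int main(void)
 {
     player_x = 10; player_y = 20;
     send_current_position();   /* original send */
     player_x = 11;             /* player moved before the ack arrived */
     send_current_position();   /* "retransmission" carries fresh data */
     return 0;
 }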

You can use dead-reckoning algorithms to reduce the number of messages you need to send. Dead reckoning is also essential for decoupling the processing rates of nodes: each node runs its simulation loop as fast as its processor, graphics hardware, and so on allow, yet fast nodes do not swamp slow nodes with traffic.
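
Here is a minimal dead-reckoning sketch, loosely in the style of DIS; the threshold value and all names are illustrative. Receivers extrapolate each remote entity from its last reported state; the sender runs the same extrapolation and only transmits when reality drifts past the threshold.

 #include <math.h>
 #include <stdio.h>

 typedef struct {
     float x, vx;   /* last *reported* position and velocity */
     float t;       /* time of that report */
 } ReportedState;

 /* What every peer (and the sender itself) predicts right now. */
 static float extrapolate(const ReportedState *r, float now)
 {
     return r->x + r->vx * (now - r->t);
 }

 /* Sender side: transmit only when prediction error is too large. */
 static int should_send_update(const ReportedState *r,
                               float true_x, float now)
 {
     const float THRESHOLD = 0.5f;   /* world units; tune in playtesting */
     return fabsf(true_x - extrapolate(r, now)) > THRESHOLD;
 }

 int main(void)
 {
     ReportedState r = { 0.0f, 1.0f, 0.0f };   /* reported at t=0 */
     /* at t=2 the entity is where peers predict: no message needed */
     printf("send? %d\n", should_send_update(&r, 2.0f, 2.0f));
     /* the entity turned; prediction is now wrong: send an update */
     printf("send? %d\n", should_send_update(&r, 3.1f, 2.0f));
     return 0;
 }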

Another trade-off is to send as many messages as possible without reliability. Also, piggy-back reliability information (acks, naks) on other messages; this is what TCP does, for example.
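
A sketch of piggybacking, with an invented header layout: the acknowledgement fields ride in the header of every outgoing game message, so explicit ack packets are rarely needed.

 #include <stdint.h>
 #include <stdio.h>
 #include <string.h>

 typedef struct {
     uint16_t seq;        /* this message's sequence number       */
     uint16_t ack;        /* latest seq received from the peer    */
     uint32_t ack_bits;   /* bitmask: was (ack - 1 - i) received? */
 } Header;

 static uint16_t local_seq;
 static uint16_t remote_ack;
 static uint32_t remote_ack_bits;

 /* Build a packet: game payload plus the piggybacked ack fields. */
 static size_t build_packet(uint8_t *buf, const void *payload, size_t len)
 {
     Header h = { local_seq++, remote_ack, remote_ack_bits };
     memcpy(buf, &h, sizeof h);
     memcpy(buf + sizeof h, payload, len);
     return sizeof h + len;
 }

 int main(void)
 {
     uint8_t packet[64];
     const char move[] = "player moved";
     size_t n = build_packet(packet, move, sizeof move);
     printf("packet of %zu bytes; acks ride along for free\n", n);
     return 0;
 }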

You can also use multicast where possible to reduce bandwidth usage. However, many ISPs do not support multicast traffic, so nodes should be able to fall back on unicast transmission if necessary.
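
As a sketch using POSIX sockets (the group address is illustrative and error handling is minimal): try to join the multicast group, and if that fails, fall back to per-peer unicast.

 #include <arpa/inet.h>
 #include <netinet/in.h>
 #include <stdint.h>
 #include <stdio.h>
 #include <sys/socket.h>
 #include <unistd.h>

 int open_multicast_rx(const char *group, uint16_t port)
 {
     int fd = socket(AF_INET, SOCK_DGRAM, 0);
     if (fd < 0) return -1;

     struct sockaddr_in addr = {0};
     addr.sin_family = AF_INET;
     addr.sin_addr.s_addr = htonl(INADDR_ANY);
     addr.sin_port = htons(port);
     if (bind(fd, (struct sockaddr *)&addr, sizeof addr) < 0)
         { close(fd); return -1; }

     struct ip_mreq mreq = {0};
     mreq.imr_multiaddr.s_addr = inet_addr(group);
     mreq.imr_interface.s_addr = htonl(INADDR_ANY);
     /* If the local network/ISP doesn't route multicast, this (or
      * the traffic itself) fails and we fall back to unicast. */
     if (setsockopt(fd, IPPROTO_IP, IP_ADD_MEMBERSHIP,
                    &mreq, sizeof mreq) < 0)
         { close(fd); return -1; }
     return fd;
 }

 int main(void)
 {
     int fd = open_multicast_rx("239.0.0.1", 9000);
     if (fd < 0)
         printf("multicast unavailable: sending per-peer unicast instead\n");
     else {
         printf("joined multicast group\n");
         close(fd);
     }
     return 0;
 }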

As for synchronisation between clients -- guaranteeing that all clients see events in the same order is basically impossible in a wide-area network with a large number of nodes. Instead, it's best to have one node (the server) act as a "game master" and arbitrate between players based on the order in which it sees events. At least this is fair to all players.
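
A sketch of such arbitration, with invented names: the order in which events arrive at the server becomes the authoritative game order, which the server stamps with a sequence number and rebroadcasts for clients to apply.

 #include <stdint.h>
 #include <stdio.h>

 typedef struct {
     uint32_t player_id;
     uint32_t action;       /* e.g. FIRE, MOVE, ... */
 } Event;

 static uint32_t authoritative_seq;

 /* Called as datagrams arrive; arrival order at the server *is*
  * the game order. */
 static void arbitrate(const Event *e)
 {
     uint32_t seq = authoritative_seq++;
     /* broadcast (seq, event) to all clients; clients apply by seq */
     printf("event %u: player %u did action %u\n",
            seq, e->player_id, e->action);
 }

 int main(void)
 {
     Event a = { 1, 7 }, b = { 2, 7 };   /* two players "fire" at once */
     arbitrate(&a);   /* server saw player 1 first: player 1 wins */
     arbitrate(&b);
     return 0;
 }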

Finally, the most important thing is to playtest your protocol and tweak it for the game you are writing to ensure that the playability is as good as possible. It doesn't matter how good your protocol is in theory if the game is not fun to play!

--NatPryce


See also: DistributedComputing

