I still like the original Quake1 networking code (NetQuake) the best, which has zero prediction. It feels extremely stable and responsive. There's no warping/jello feel at all.
Most broadband connections (in the US) are really quite low latency and free from packet loss, especially to servers that are nearby (~800 miles).
Most cable/DSL connections will see 20-40 msec ping times to nearby servers. No prediction is necessary (or wanted) in that case, but you can still feel it a bit on all modern games.
Basically, if you have a broadband connection, a wireless card, and a wireless-N router, expect bizarre latency spikes. The reason you get those unpredictable spikes is the unnecessarily big buffers at every step down the line.
Can you explain why large buffers in and of themselves would matter? The only relation I could see is if they were flush-on-fill, or constant rate in, constant rate out (like the audio buffer in a receiver). Aside from those cases, if the new bigger buffers are actually always being utilized near their new bigger sizes, then a smaller buffer would just mean more packet loss and even more latency, right?
smokinn@ubuntu:~$ ping 192.168.1.1
PING 192.168.1.1 (192.168.1.1) 56(84) bytes of data.
64 bytes from 192.168.1.1: icmp_req=1 ttl=64 time=1.24 ms
64 bytes from 192.168.1.1: icmp_req=2 ttl=64 time=1.21 ms
64 bytes from 192.168.1.1: icmp_req=3 ttl=64 time=75.4 ms
64 bytes from 192.168.1.1: icmp_req=4 ttl=64 time=1.26 ms
64 bytes from 192.168.1.1: icmp_req=5 ttl=64 time=1.18 ms
64 bytes from 192.168.1.1: icmp_req=6 ttl=64 time=1.21 ms
64 bytes from 192.168.1.1: icmp_req=7 ttl=64 time=1.20 ms
64 bytes from 192.168.1.1: icmp_req=8 ttl=64 time=1.28 ms
64 bytes from 192.168.1.1: icmp_req=9 ttl=64 time=1.28 ms
64 bytes from 192.168.1.1: icmp_req=10 ttl=64 time=1.23 ms
64 bytes from 192.168.1.1: icmp_req=11 ttl=64 time=1.23 ms
64 bytes from 192.168.1.1: icmp_req=12 ttl=64 time=1.24 ms
64 bytes from 192.168.1.1: icmp_req=13 ttl=64 time=1.19 ms
64 bytes from 192.168.1.1: icmp_req=14 ttl=64 time=1.22 ms
64 bytes from 192.168.1.1: icmp_req=15 ttl=64 time=101 ms
64 bytes from 192.168.1.1: icmp_req=16 ttl=64 time=1.31 ms
64 bytes from 192.168.1.1: icmp_req=17 ttl=64 time=1.69 ms
Notice numbers 3 and 15. I'm not yet sure exactly why it does this; I haven't had time to investigate fully, and honestly it's a minor problem since I don't really play games, so it's pretty low on my priority list. But I suspect it's either bufferbloat or a driver issue.
Flush on full means the data is only sent when the buffer is full. If the buffer is large and you have an optimized game that minimizes the amount of data sent, then the buffer takes longer to fill up and packets are sent less often.
Older games that send more data may get a quicker response if your throughput is high, because their larger packets fill the buffer sooner.
If the game doesn't optimize down the total data it sends, then it's constantly filling the buffers, so there's no waiting for buffers to fill, which can reduce ping times.
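To make that concrete, here's a back-of-the-envelope sketch of how a flush-on-full buffer would penalize a game that sends small, infrequent updates. The function name and all the numbers are made up for illustration:

```python
# Hypothetical back-of-the-envelope sketch: the latency a flush-on-full
# buffer adds before anything is sent. All numbers are made up.

def flush_on_full_delay(buffer_bytes, msg_bytes, msgs_per_sec):
    """Seconds the first message waits before the buffer fills and flushes."""
    msgs_to_fill = buffer_bytes // msg_bytes
    return msgs_to_fill / msgs_per_sec

# A lean game sending 64-byte updates 30 times/sec into a 4 KiB buffer:
lean = flush_on_full_delay(4096, 64, 30)     # ~2.13 s before a flush
# A chattier game sending 512-byte updates at the same rate fills it sooner:
chatty = flush_on_full_delay(4096, 512, 30)  # ~0.27 s
print(f"lean: {lean:.2f}s, chatty: {chatty:.2f}s")
```

So under this (admittedly simplistic) model, the leaner the game's traffic, the longer a flush-on-full buffer sits on it.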
To be fair, modern games throw around a lot more data every frame; games such as the Battlefield series[1] would not have been feasible without client-side prediction.
How do you know that's true? I'm not saying you're wrong, I'm genuinely curious.
Older games built on the Quake{1,2,3} engines use hardly any network bandwidth. Most Bad Company 2 games have only 32 players, which is hardly bigger than Quake1. What's going on that's so much more bandwidth intensive?
Way more dynamic objects (vehicles, destructible environments) with a much larger playable area (more room for more objects). Usually objects further from the player are updated less frequently (to deal with the sheer volume of objects) but may still be visible to the player; client-side prediction comes in handy here.
Also, there's the need for more detailed positional/vector data to drive much more complex physics simulation, which requires a higher resolution of updates to function realistically. Additionally, fast-moving vehicles with more delicate control/handling, such as fighter aircraft, require a higher resolution of updates so that you can actually fly them without crashing due to the feedback loop of perceiving jerky/delayed movement and input response. Prediction can help smooth this out.
Additionally modern games use things like physically simulated projectiles (think bullet drop[1]) with collisions calculated using mesh-based collisions instead of simple hitboxes, over much longer distances, so there is a need for prediction as timing is more of a factor. Quake 1, 2 and 3[2] use the 'hitscan' method[3] of calculating whether a bullet hits its target, which is instant and calculates a linear trajectory, which has the side effect of making it easier for players to adjust to lag (basically any difference between the player firing and the hit registering is network lag, and should be more or less consistent).
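To illustrate the contrast, here's a toy 2D sketch (not from any real engine; the function names and parameters are mine) of an instant hitscan ray test versus a ballistic projectile that has to be stepped over time, where the shooter must lead the target:

```python
import math

# Toy 2D sketch (not from any real engine) contrasting an instant hitscan
# ray test with a ballistic projectile that is stepped over time.

def hitscan_hit(origin, direction, target, radius):
    """Instant ray-vs-circle test; `direction` is assumed normalized."""
    ox, oy = origin
    dx, dy = direction
    tx, ty = target
    t = max(0.0, (tx - ox) * dx + (ty - oy) * dy)  # project target onto ray
    cx, cy = ox + dx * t, oy + dy * t              # closest point on the ray
    return math.hypot(tx - cx, ty - cy) <= radius

def projectile_hit(origin, velocity, target, radius,
                   gravity=-9.8, dt=0.01, max_t=5.0):
    """Step a dropping projectile; returns flight time on a hit, else None."""
    x, y = origin
    vx, vy = velocity
    t = 0.0
    while t < max_t:
        x += vx * dt
        vy += gravity * dt  # bullet drop
        y += vy * dt
        t += dt
        if math.hypot(target[0] - x, target[1] - y) <= radius:
            return t  # the shooter had to lead the target by this long
    return None

print(hitscan_hit((0, 0), (1, 0), (10, 0), 0.5))       # hit resolves instantly
print(projectile_hit((0, 0), (100, 5), (10, 0), 0.5))  # hit happens ~0.1s later
```

With hitscan, any gap between firing and the hit registering is pure network lag; with the stepped projectile, flight time and timing get mixed in, which is where prediction starts to matter.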
Finally, two slightly different issues that have become apparent in recent years are automatic matchmaking and local servers (i.e., the game running on one player's console). Auto matchmaking is often fairly shitty, meaning that (on average) players end up playing on servers to which they have a higher ping than if they had chosen from a server browser (something that is sadly becoming rare in today's games). The phenomenon of local servers, on the other hand, puts pressure on total available throughput rather than ping time, as the player who is hosting likely has more limited outgoing bandwidth than a dedicated server would.
Couldn't you synchronize the beginning states of the game on all computers (including their pseudo-random number generator states), and then the only data you'd need to pass around is user input (keypresses, mouse movements, and the exact times--at least down to the frame--at which they occurred)? Given the initial state, the PRNG data, and the user input, games are completely deterministic, aren't they?
It seems that could pose a security risk to extremely savvy users (as the game state includes things they shouldn't be able to see, like the opponent's units in Starcraft, and one might be able to make a program to inspect the state of the running game client), but I can imagine encryption and obfuscation dealing with that sort of problem with very little overhead.
The problem with your scheme is that to update frame X correctly, I have to have all input for frame X, which is now 200ms ago. Meanwhile, I need to be presenting something to the user now. Prediction is basically a way to deal with the fact that you can't do what you're describing.
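The constraint can be sketched as a toy deterministic-lockstep loop (all names here are hypothetical): the simulation cannot advance frame X until every player's input for frame X has arrived, so the slowest connection gates everyone.

```python
# Toy deterministic-lockstep sketch (names are mine): frame X can only be
# simulated once *every* player's input for frame X has arrived, so the
# slowest connection gates everyone's simulation.

def try_advance(frame, inputs_by_frame, num_players, simulate):
    """Run `simulate` for `frame` only if all players' inputs are present."""
    frame_inputs = inputs_by_frame.get(frame, {})
    if len(frame_inputs) < num_players:
        return False  # someone's input hasn't arrived: stall the simulation
    simulate(frame, frame_inputs)
    return True

ran = []
inputs = {7: {"p1": "forward"}}  # p2's input for frame 7 is still in flight
assert not try_advance(7, inputs, 2, lambda f, i: ran.append(f))
inputs[7]["p2"] = "jump"         # ...it arrives 200ms later
assert try_advance(7, inputs, 2, lambda f, i: ran.append(f))
print(ran)  # -> [7]
```

That stall between the two calls is exactly the 200ms the player would be staring at a frozen frame, which is what prediction papers over.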
Also, don't underestimate your users. Encryption is irrelevant, they'll hack into your program's process space and read the stuff after you decrypt it, and obfuscation is at best problematic for the same reason. For a AAA-class game, assume all users have access to all data on the client at all times and you'll not be far wrong. Sort of how like in the encryption world we just assume that the attacker knows the algorithm already.
Wrong perspective. If you ship too much information down to the client the user will use it to know things they shouldn't. Google "wallhack". The stream up to the server will bear no overt traces of cheating.
Wallhacks are something you can't get around: a user can always control how their graphics are displayed, hence you can't trust them. You either accept this or change the game.
You can make it harder to do (Blizzard's Warden, Valve's VAC) but people will get around any technical barrier you put up.
Fog of War (maphack) would be a more apt example, you only send the client what it is possible for them to see.
Speed hacks are another that can be stopped, the client asks the server if it can move X units, the server responds with how many units the client moved.
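A minimal sketch of that server-side check (MAX_SPEED and the function name are illustrative, not from any real game): the server clamps the client's requested move to the distance it could legally have covered in the elapsed time.

```python
# Sketch of the server-authoritative movement check described above
# (MAX_SPEED and the function name are illustrative, not from a real game).

MAX_SPEED = 5.0  # maximum units per second the rules allow

def validate_move(pos, requested, dt):
    """Clamp a client's requested move to the distance it could legally cover."""
    dx, dy = requested[0] - pos[0], requested[1] - pos[1]
    dist = (dx * dx + dy * dy) ** 0.5
    allowed = MAX_SPEED * dt
    if dist <= allowed:
        return requested          # legal move: accept as-is
    scale = allowed / dist        # speed hack attempt: scale it back
    return (pos[0] + dx * scale, pos[1] + dy * scale)

# A client claiming to cover 100 units in 50 ms gets clamped to 0.25 units:
print(validate_move((0.0, 0.0), (100.0, 0.0), 0.05))
```

Because the server only ever reports back how far the client actually moved, a speed hack degrades into a normal-speed player no matter what the client claims.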
I guess this is one of those "in theory, theory and practice are the same, in practice, they are not" kind of things. I don't know of any triple-A game engine that works that way, and it's not for a lack of smart people working on it. I'm sure that could work for a sufficiently simple game engine, however.
RTS games often use that approach combined with checksums, resending world state when the checksums don't match. However, for an FPS that uses even a little state prediction it becomes much more complicated.
Ideally, an FPS would run two versions of the game world: what happened, and what the players see, with more information sent about events close to the player so the version players see is more accurate, and with continuous updating of the players' version based on predictions from the "real" version of the game world. But getting that working would be extremely demanding and would reduce the game's ability to do physics in real time, etc.
PS: Game developers may be smart, but as a rule they have less experience than you might think. Short burn out cycles keep the industry from evolving.
I believe this prediction code wasn't included in the original Quake. It was added later in QuakeWorld which was some sort of multiplayer enhancement for Quake. Please correct me if I'm wrong.
Edit: My bad, this code IS for QuakeWorld. As far as I'm concerned I have no evidence to put forth that the original commenter is wrong in his assessment.
---
You may be right. I'm genuinely asking the guy if he knows one way or the other, as I do not know but certainly am curious. I mean, no lag compensation in the days of dialup? Seems like id/John Carmack would have known better even at that stage. But then again, it could just as easily be that they didn't realize the need for that tech in the beginning.
Internet play wasn't important when Quake was released. Serious gaming was pretty much LAN only, and Quake was the epitome of serious gaming. At the time the graphics and 3d environments alone were the big selling point.
I remember playing that first version over a modem, and indeed there was no client prediction at all. Not just shooting, but every single thing. You press forward, wait 300ms before anything happens, you switch weapons, wait 300ms. Even basic things like running and timing a jump over a small gap were extraordinarily difficult. But it was still kind of fun for the novelty of it, at least until somebody on a T1 or ISDN joined at which point you simply died before you even saw the enemy (ie. you saw them before you saw yourself die, but by then you had in fact already died on the server).
There was no prediction in NetQuake. The main reason for QuakeWorld was the addition of client-side prediction. It was pretty controversial and many people stayed with NetQuake.
Even back then many people (including id software) had ISDN, T1s, or college connections that had the kind of latency most people have today.
This was also in the days when LAN parties and LAN events were much more popular.
Qwest has interleaving turned on for DSL, which means my first hop is easily over 40 ms, so a minimum ping of 60-80 is rather likely for me.
I don't know if the same holds true for their fiber, but I'm sure interleaving is common among DSL providers. Cable also suffers from neighborhood overselling, where in peak hours (exactly when people are likely to play) ping times run 200-600 ms.
I don't know if this problem will be fixed soon, but I think prediction is still necessary (or even required) for a large population.
This may just be limited to American broadband services, though, and larger cities may have less of a problem.
Sure, when everybody has connections of 20-40ms this prediction is not much of a feature. But back in the days when everybody was on modem connections with 120ms latency, it was important. It means you don't have to physically aim 'in front' of where the target is moving. That may not be an issue with automatic or AOE weapons, but if you're trying to use a sniper rifle it can be pretty hard.
quake3+cpma or QuakeLive (which uses the CPMA netcode) are better: very good at low latencies while also being very good up to 100 ms, which makes competitive intercontinental matches a realistic possibility - if you're not rat already.
There was something cool you could do in older versions of Counter-Strike: you could tweak the ex_interp parameter from 1.0 down to something lower, like 0.5. That way you have to aim 'behind' where the person was, and you still hit. The advantage is with sniper rifles: when someone is moving through a narrow gap, the window is so short that your response time isn't fast enough to click the instant they move through. But with ex_interp lowered you can fire after they've gone through and still hit. Not so fun for the target, though; they get to safety and then die :D
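For anyone curious, here's a toy model of the entity-interpolation idea behind a setting like ex_interp (my own sketch, not Counter-Strike's actual code): remote players are drawn a fixed delay in the past, blended between the two snapshots that bracket the render time, so shrinking the delay shifts where you have to aim.

```python
# Toy entity-interpolation sketch (not actual Counter-Strike code): remote
# players are rendered `interp_ms` milliseconds in the past, blended between
# the two snapshots that bracket that render time.

def interpolated_pos(snapshots, now_ms, interp_ms):
    """snapshots: list of (timestamp_ms, position) pairs, oldest first."""
    render_time = now_ms - interp_ms
    for (t0, p0), (t1, p1) in zip(snapshots, snapshots[1:]):
        if t0 <= render_time <= t1:
            alpha = (render_time - t0) / (t1 - t0)
            return p0 + (p1 - p0) * alpha  # linear blend between snapshots
    return snapshots[-1][1]  # nothing brackets it: fall back to latest known

snaps = [(0, 0.0), (100, 1.0), (200, 2.0)]
# With 100 ms of interp at now=250 ms, the target is drawn as of t=150 ms,
# halfway between the second and third snapshots:
print(interpolated_pos(snaps, 250, 100))  # -> 1.5
```

The smaller the interp window, the closer the rendered position is to the latest snapshot, which is why lowering it changes where "behind" the target you need to shoot.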
People actually "cheat" using lag compensation. There are players that induce lag on their system so they can shoot at players on a "snapshot" of the game. Very hard to detect. Usually when you see people with 500ms ping at the top of the scoreboard, you know it's probably someone who uses this method. It has its disadvantages, mainly that opponents are hard to track (they "jump" on each sync). So it's usually snipers that do this from a distance.
This is a pretty dated article -- 2001 -- but the basic concepts still hold up. I came across this a few months ago when I wanted to write some net code for a canvas game project I was tinkering on. This resource helped.
Would you think it would be necessary to implement client-side prediction, lag compensation (normalization), etc. for a simple Tron-like multiplayer game?
It doesn't matter per se how simple the game is. If the game mechanics are sensitive to temporal latency between players then the game can benefit from these methods.
However, I don't see how a light-cycles game could use any kind of prediction or lag comp. Since players can change direction instantaneously and at arbitrary times, there is nothing that can be predicted. And any kind of lag comp would appear as obvious breaches of the game rules.
There is a lag to tell the server when you want to move and a lag when the server tells the other player where you moved, so the lag compensation is for the server to guess where you thought the other player was when you made your move. This is used extensively in Counter-Strike, and I think it has its uses here.
How would that be used in light cycles? Would you see your opponent's trail jump around as the client prediction is corrected? Would the game let you pass through your opponent's trail if you couldn't see it at the time?
Lag comp takes advantage of the independence of your gaming experience and your opponent's. It effectively hides inconsistencies in places you aren't paying attention to, like where your opponent is aiming. In light cycles, your actions and consequences are much more tightly coupled to those of your opponents. There is nowhere to hide the inconsistency.
Don't get me wrong, I understand the stark contrast between CS and this style of gameplay.
>>Would you see your opponent's trail jump around as the client prediction is corrected?
The prediction in this game would only be which direction the opponents are moving in. Between packets, the client would continue moving opponents in their last known direction. Once the client knows better, it must correct the game state shown to the player. If that means an opponent moved several game units in one direction and must be moved back those several units and then turned in a different direction, then that is what must happen. Yes, it will be jarring to the player, but this happens all the time in online gameplay.
>>Lag comp takes advantage of the independence of your gaming experience and your opponent's. It effectively hides inconsistencies in places you aren't paying attention to, like where your opponent is aiming.
I'm not sure I actually agree with this statement. Client-side lag compensation as far as I understand it (or at least how I'm describing it) is to allow the client to predict what is going in-between server updates. Server-side lag compensation takes into account the latency of each client and attempts to give a fair assessment of each player's actions based on what they must have been shown by the server at the time.
>>Would the game let you pass through your opponent's trail if you couldn't see it at the time?
I suppose it depends on how long that trail existed. This is certainly a difficult problem when it comes to a Tron/snakes/nibbles kind of game online. I do see where you are coming from here with the difficulty of deciding who or what should take precedence. That is the issue one will always have when employing these techniques and I think it would require some bit of experimentation to get right.
By the conventional terminology (which could admittedly be less ambiguous) "client prediction" is where the client shows the player a locally predicted present game state inferred from stale authoritative information. The client guesses what is actually happening right now based on what it knows was happening some time in the past, as told by prior updates it received from the server. This works if there is a lot of temporal continuity in the game, such as there is when the laws of physics are obeyed. Sources of entropy, like player input, can't be predicted and so that is where inconsistencies will appear.
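As a toy illustration of client prediction (a trivial 1D "game"; all names are mine): the client applies its own inputs immediately, remembers them, and when a stale authoritative update arrives it rewinds to the server's state and replays the not-yet-acknowledged inputs on top.

```python
# Illustrative client-prediction sketch: the client applies its own inputs
# immediately, keeps them, and when an (older) authoritative state arrives
# it rewinds to that state and replays the unacknowledged inputs on top.

def apply_input(pos, inp):
    return pos + {"left": -1, "right": 1}.get(inp, 0)

class PredictingClient:
    def __init__(self):
        self.pos = 0
        self.pending = []  # (sequence_number, input) not yet acked by server

    def local_input(self, seq, inp):
        self.pending.append((seq, inp))
        self.pos = apply_input(self.pos, inp)  # show the result immediately

    def server_update(self, acked_seq, server_pos):
        # Snap to the authoritative state, then re-predict forward.
        self.pending = [(s, i) for s, i in self.pending if s > acked_seq]
        self.pos = server_pos
        for _, inp in self.pending:
            self.pos = apply_input(self.pos, inp)

c = PredictingClient()
c.local_input(1, "right"); c.local_input(2, "right"); c.local_input(3, "right")
c.server_update(acked_seq=2, server_pos=2)  # server agrees through input 2
print(c.pos)  # -> 3 (input 3 replayed on top of the server's state)
```

When the server's state matches what the client predicted, the snap-and-replay is invisible; when entropy (another player's input) intervenes, that's where the correction shows.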
"Lag compensation" is where the server accounts for client latency when making certain authoritative game logic decisions i.e. "did a bullet hit a target y/n"? Essentially, it decouples the when and the where of the event. So a target can be hit by a bullet now but the hit took place where the target was some time in the past. Since that would generally feel ridiculous to the player, it can only be done in very particular situations where the event is mostly entropic and instantaneous, and therefore unpredictable (e.g. a player firing a bullet with infinite velocity) and where the inconsistency is unlikely to be noticed (because you typically aren't paying much attention to where other players are aiming).
The lag comp described on the page you linked to is only for player vs NPC, where the NPC is totally predictable and therefore always appears in the same place to all players. If you tried to lag comp a slow-moving projectile player vs player, the projectile would have a curved path, following the target around as they tried to dodge it. It would give a blatantly unfair advantage to lagged players.
I don't see any lag comp options for light cycles since your only real options for bending the game rules are a) letting players pass through walls/crash into walls that aren't there or b) moving players/walls around retroactively, both of which will appear obvious and unfair.
I think I understand your point: with the gameplay of light cycles there are only the absolute moves you've made. You still want the player's action to play instantly on his own screen and show up as fast as possible in the server's and other players' simulations.
Obviously lag in a moment of high importance is really going to feel unexpected in a light cycles match so it isn't a game suited to high amounts of lag.
You still want to see the other player move smoothly, and you can have that happen if you guess where he is rather than wait for acknowledgement. Predicting him happens to be super easy, while also being dangerously wrong if he's turned in front of you.
There is no solution to this type of gameplay being networked synced. Low latency is your only bet.
Something that occurs to me is that the server has to keep track of every point of every player to track collisions, basically a table of "playerID, x, y, timestamp". So when the client gets a status update, instead of receiving the current position of other players, it would get the moves of all players since the last update, limited to the (x1,y1)-(x2,y2) of the client's viewport. Then the client would normalize the data (over time)?
Honestly, it's difficult for me to say. Is this two players over the Internet? More than two? The movement mechanics matter a lot. I've never seen Tron or played any Tron games, but if it plays something like this:
Which appears to be kind of like snakes/nibbles, then I could see you needing some of the techniques the article describes. Since players mostly move in a single direction for a period of time, you could extrapolate their movements by continuing their motion in that direction until the server learns where the player actually decided to move. Then you reset the player back to the position they were at when they changed direction and continue them along the new path.
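Sketching that extrapolate-then-correct idea (dead reckoning; all names and numbers are illustrative):

```python
# Dead-reckoning sketch for the light-cycle case above: between updates, keep
# moving each opponent in their last known direction, then snap back when the
# server reports a turn. All names here are illustrative.

DIRS = {"up": (0, 1), "down": (0, -1), "left": (-1, 0), "right": (1, 0)}

def extrapolate(last_pos, last_dir, speed, dt):
    """Guess where an opponent is, assuming they haven't turned."""
    dx, dy = DIRS[last_dir]
    return (last_pos[0] + dx * speed * dt, last_pos[1] + dy * speed * dt)

# Last update: opponent at (5, 5) heading right at 10 units/sec.
# 80 ms later we draw them at the extrapolated spot:
guess = extrapolate((5, 5), "right", 10.0, 0.08)
print(guess)
# If the server then says they turned "up" at (5.4, 5), the client resets to
# that position and re-extrapolates along the new direction.
```

The guess is cheap and usually right; the problem unique to light cycles is that when it's wrong, the wall it mispredicted may be exactly what kills you.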
However, for when the action gets close-up and quick, where perhaps the two players are twitching back and forth trying to avoid or ensnare each other then yes I can see how lag compensation will be necessary so that the server can accurately figure out where each player thought the other player was when they made their move.
Here are two articles that I think will help you, and they helped me when I was working at this problem:
Yeah, I was wondering about creating a single arena with a million square points or more. Each player would see a smaller viewport focused on his cycle (snake/nibble), new players would spawn in an empty spot, and so on. The server would have to store each point of the trail to calculate collisions.
EDIT:
The game's not particularly exciting, but I'd imagine that being on the same canvas avoiding the trails of potentially thousands of players (plus your own) could be cool.
EDIT2:
Also, if it were possible to add 3D to it (the ability to turn on the Z axis), it could be pretty awesome.
"turning on the Z axis" (dunno if i said it correctly, could be also X or Y) would essentially be changing the plane, meaning, the arena would be a cube instead of a 2 dimensional plane. Let's say a couple of players would create a dead end you couldn't avoid by going up, down, left or right, you could rotate the view.
For example on a normal snake 2D game, pressing the key oposite and to the direction your'e heading is meaningless, you can't go straight back and your'e already going forward. On a cube those keys could be mapped to 3D rotation.
You would still see the game as 2D and depth would be emulated for example with transparency, were close trails are opaque and deeper objects are transparent, so that you could know if you could turn or not, for example on a trail right in front of you.
edit: fix typos
ps.: of course, playing on a cube is only interesting when you have a lot of players in the game; otherwise it would be very impractical to corner someone (since you would need to wrap him completely)
How would you ever find yourself on the same plane as the other players? Or even on any plane that they had ever been on? If the trails are infinitely thin lines arbitrarily positioned in 3D space then it's effectively impossible to hit them.
You could quantize them on some or all of the axes but I would find that less elegant than keeping the game in continuous space and making the trails planar somehow.
No, because everyone's moving in integer x,y,z coordinates limited to the cube's dimensions (max x,y,z = 1000, for example), and the server records the coordinates you've been moving along: (0,0,1), (1,0,0), (2,0,0). When you move into a coordinate someone (or you) has already been, it's a collision.
Although you move one point at a time, the line thickness in the UI may be more than one pixel; that's just at the UI level.
You start on the usual plane:
y
|
| *-->
|
z-------x
y is the vertical coordinate and x is the horizontal; that's the plane you're seeing on your game client. If you're moving right along the x axis, you're increasing the x coordinate while the others stay constant.
If you press 'D' or 'right', you "turn" the view on the y axis, and then the plane you're on becomes:
y
|
| *-->
|
x-----z
Now, y is the (same) vertical, but the horizontal has become z: you keep moving right, but the x value is now fixed and z is increasing. If you press up, you start increasing y, but x and z stay fixed (since you haven't moved on their axes).
The server records the x,y,z coordinates you've been through (for everyone), and when someone moves into a recorded coordinate, game over, man.
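A toy version of that server-side trail table (the set-based storage and class name are just one way to sketch it):

```python
# Sketch of the server-side trail table described above: every visited
# integer (x, y, z) cell is recorded, and moving into a recorded cell is a
# collision, regardless of whose trail it is.

CUBE_MAX = 1000  # cube dimension from the example above

class Arena:
    def __init__(self):
        self.visited = set()  # all (x, y, z) cells any player has occupied

    def move_to(self, cell):
        """Returns True if the move is fatal (out of bounds or hits a trail)."""
        if not all(0 <= c <= CUBE_MAX for c in cell):
            return True  # hit the cube wall
        if cell in self.visited:
            return True  # hit someone's trail (possibly your own)
        self.visited.add(cell)
        return False

arena = Arena()
assert arena.move_to((0, 0, 1)) is False
assert arena.move_to((1, 0, 1)) is False
assert arena.move_to((0, 0, 1)) is True  # revisiting a cell: game over, man
```

A set keeps the collision check O(1) per move, though with thousands of players the server would eventually want to expire or spatially partition old trail cells.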
Armagetron has a little triangle around multiplayer opponents that shows where their light cycle could be, given the moves they could make before the packet reaches you. It's quite interesting; go grab http://armagetronad.net/screenshots.php and play a network game to see how it works.
EDIT: Wait a moment, it wasn't GLTron, it was Armagetron.
The client/server separation for prediction never made sense to me. The article notes the trust and cheating aspect, where a client could lie about whether a particular player was hit. But since the authoritative information about actions in the time sequence is transmitted to the server anyway, it seems like the server could do post-game validation. If the game actions all fit the rules, then the results are valid; otherwise, the cheaters stick out like sore thumbs.
Even a lagging validation could be done. Yeah, your cheat comes off, but you're kicked out of the game and your team gets penalized.
That is indeed what happens. When you fire a shot, for the sake of player feedback the client will determine if the hit was made, and render the game accordingly.
The server can invalidate that result if it disagrees. This is why, very occasionally in high-lag situations, you might shoot a player, see blood, but the player takes no damage.
In this case the validation is being done in real-time, and the prediction is only ever performed for feedback-critical actions (shooting, movement, reloading, etc).
This does happen in some games - the Halo series comes to mind. A player is determined to be the "server" based on network conditions, and the plus side of it is also that should the server drop, the game will only be momentarily interrupted and you can select a new player to act as the server.
This doesn't work in all instances. Most games using this mechanism are fairly small in terms of player count, since the server is not of a guaranteed performance level, the connection it's on is probably a home-based broadband connection at best, and it has to invest a lot of cycles into rendering the game. For larger games (say, the Battlefield series) the hardware required makes the system infeasible. Instead you have extremely beefy hardware running dedicated servers: headless, non-rendering servers that can throw every cycle they have into serving the game.
I have a real-time, multiplayer online game (http://TheWikiGame.com - built with XMPP + Redis + Django), and I can say it's been one hell of a learning experience.
The features you build and test on your local machine or private cloud instance certainly behave differently when you have a bunch of people, with totally different internet connection speeds, and game-play behaviors, all using your app at once.
Ahh, the race conditions you never knew you had :-)
Yeah, the id Tech 3 engine is still one of my favorite engines. Quake 3, Call of Duty, Medal of Honor, Return to Castle Wolfenstein and Star Wars Jedi Knight all ran on it.