There's potential for a little rearchitecting to help, at least in the case of UDP:
NAME
sendmmsg - send multiple messages on a socket
SYNOPSIS
#define _GNU_SOURCE /* See feature_test_macros(7) */
#include <sys/socket.h>
int sendmmsg(int sockfd, struct mmsghdr *msgvec, unsigned int vlen,
unsigned int flags);
That only works if messages are independent of answers received and are all known at the same point in time. In most games that typically wouldn't be the case; you'd cram as much known state change as possible into a single message to keep the game moving fluidly. Packing more than one such message together would serve no purpose.
I'd have presumed otherwise, but I'm not sure you're understanding the API correctly. It's not about sending multiple messages to the same destination, but to multiple destinations in a single call. The msg_hdr struct has room for specifying the target address.
From userspace's perspective, even if the same data isn't being broadcast to every client, just building up a big array (perhaps while looping over the input from recvmmsg()!) and spitting it out in one call would have the same semantics as calling sendmsg() immediately on each, etc.
Yes, I understand the API correctly. Having implemented it once I think I have the basics down ;) But that said I was assuming that this would be in the context of multiple UDP messages sent from a game client to a game server.
The bottleneck generally isn't on the client side for games; the server has much more network traffic to handle. So even if this performance fix only works server-side, that might be enough.
It's pretty common for FPS server game loops to read all the network packets, update player state, run one tick of game logic and physics for all users in a single game, and then send out updates to everyone.
Yup, this is pretty much on the mark. Game updates are sent at a fixed rate (usually 10-20 Hz), so you could batch up all outgoing game state into a single dispatch. It'd probably take a bit of work to re-architect the output loop (pre-allocate for the max number of players to avoid per-frame allocations), but it should be reasonably straightforward to implement in most engines.
If they are using socket API for UDP, performance is not critical for them. Otherwise porting UDP servers to DPDK/netmap is not rocket science and gets you like an order of magnitude better performance.
A shooter running at 30Hz (high side for free developer sponsored servers) with even 100 players is only going to be processing at most 10-15K packets per second, and that’s assuming 5 packets per tick per player. Server update rates are usually lower than the internal physics and game logic tick rate as well, so it’s doubtful to even be that high.
These aren’t stats of the gameplay servers, these are the backend servers that handle matchmaking, player stats, inventories and progression.
You could use it, but you would probably have to re-architect the server to use green threads to avoid the overhead. Games use frequent syscall sends to keep latency as low as possible; any batching would increase delay.
Yep, I was actually debating whether I'd mention sendmmsg/recvmmsg in my original post but I left it out. Definitely an option for UDP, but you're out of luck if your game server uses TCP (surprisingly many do) because you have to recv from each socket separately.
There may still be some options depending on the structure of your server and where the added CPU load is hurting most, for example, shunting IO to a thread/threadpool where futex() calls (if necessary) only occur for every N IO requests, rather than pay the syscall price for every IO on the main thread. But that might introduce new latency/ordering problems all of its own
Latency is more important than reliability for online gaming because the world state instantly goes stale. Instead of a retransmit you want the latest snapshot. I'd be curious to find out in which online games that is not the case.
Latency isn't an issue today whatsoever. UDP doesn't have the magical latency advantage it had in the past, back when packets were smaller and computers slower.
Online games today work with fixed ticks and polling, usually at half the full frame rate, which means the server updates and polls the client 30 or 60 times a second, or some other even multiple of the expected synced frame rate.
>This fixed and predictable rate pretty much means that UDP is near pointless and games that still don’t have a predictable tick rate and use UDP tend to be a rubber banding lag fest.
You're wrong. Head of line blocking is a real thing that happens very often in TCP.
Edit: parent poster removed that part of the comment between me reading and submitting a reply
I'm calling bullshit. TCP has to retransmit lost packets, whereas UDP can keep going without waiting. Ephemeral data like controller input or past game states can be ignored because that time has passed, while TCP is still trying its best to get the packets there in order and reliably.
Latency is still absolutely an issue. I play games from Japan with my friends in the states and I often have a ping of 140 ms or so. That is latency, and properly implemented games (Rocket League, for example) will deal with it using UDP among other techniques.
Slower-paced games can still use TCP, though, because they're less latency-sensitive.
If it's latency-sensitive (like a twitch FPS), UDP is the way to go. Having up-to-date data is more important than having all the data.
If it's a synchronized game (like a turn based game or an RTS where all clients run at the same logic framerate), or not latency sensitive (an MMO like WOW), TCP is fine and probably easier.
Nope, it might use TCP for negotiation of things that aren't time sensitive but it uses UDP for replication[1] as has pretty much every Unreal or Quake based engine since they were first developed.
I've worked on a variety of engines which were either UE or Quake based. All of them use UDP for temporal game state updates to avoid head of line blocking[2][3].
Are you sure? I would be extremely surprised if it wouldn't use UDP.
For things like getting stats, probably from an HTTP endpoint, sure, but for gameplay? The lag would be very bad, no? Lose a packet and everything slows down.
And according to Wireshark it's used heavily when in a game, so I assume that's the gameplay protocol. Also, when I left my game (but stayed in the lobby), port 61879 immediately stopped listening.
I'm not sure about UE4, but previous versions of the unreal engine used UDP for replication and RPC.
Interesting. I know World of Warcraft uses TCP, but I imagine it's less "real-time" than shooters (i.e. no hitscan), so a few dropped packets wouldn't have a huge impact (e.g. if you're standing casting a spell for 2 seconds, the game can recover from the lag easily). Didn't know some shooters used TCP.
Since they describe this as log-in issues, should we expect that to be a server using lots of UDP? Or do you think that the log-in service is hemmed in by the load on the game servers?
No, login in Fortnite goes over port 443 and uses normal HTTPS.
It also uses HTTPS to load a lot of other data, such as server data, friends lists, chat, etc.
Unreal Engine comes with a version of Chromium built in, which is used for many in-game things like social tabs, news, and in-game purchases; these all work over HTTP/S.
Right, that's what I was guessing, and I'm not sure why I was downvoted when everyone is confirming that the contrarian post doesn't really make sense, since UDP isn't very relevant here.
Game servers don't handle that many packets because they're limited in the number of players they host. A game at 60 Hz with 64 players will only receive 3840 packets/sec.
You can send bigger UDP packets, but they'll get fragmented and may not make it to the other end.
As far as I know, in Linux there is no support for multi-packet UDP vectored I/O. I wonder if it would be possible to "simulate" that with a raw socket....
> Sounds like servers handling lots of small UDP packets would be hit pretty hard.