Unreal Engine is an incredibly popular game engine, and it’s getting more popular every day. An often cited reason for switching to Unreal is the included networking functionality, which can be used to make multiplayer games without having to rely on a third-party solution. While this is definitely a selling point, there are a few issues with how it is implemented. The intention of this article is to bring to light some of these issues, which may arise and cause inaccuracies while developing your own game, and to suggest a potential alternative. Many games choose to work around these inaccuracies, but in some genres, such as FPS, even the smallest error in aim or position can make all the difference.
Making Networked Games Feel Responsive
Most networked environments will have a server (or a host which will have authority) and a set of clients which must communicate by sending packets of data to each other. These packets take time to reach the destination and our code has to account for that. When thinking about how to implement this functionality we can choose to either trust the client (client-authoritative) or execute all of the logic on the server which will then tell the clients where everything should be (server-authoritative).
A client-authoritative model executes all input on the client’s representation of the player and then sends the resulting details to the server. Because the server has to trust that what the client sends is correct, this model is open to cheating: the player can easily change the data that is sent to the server. A common example of this is a speed hack, where the client can change a single speed variable or the location in the packet being sent. The server could attempt to verify that each new packet is within the expected ranges of the previously received data, but the developer would be required to validate every possible packet. Fortnite uses (or at least used at one point) this approach for its weapon hit acknowledgement. This old video from PlayerUnknown’s Battlegrounds shows a player hacking: they shoot seemingly at random, but are likely editing the projectile velocity and start location to values that will hit their target, and the server never verifies that the shot is valid.
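As a sketch of what that kind of server-side validation might look like (all names here are illustrative, not engine API), the server can at minimum reject a movement packet that covers more distance than the player could legally travel in the elapsed time:

```cpp
#include <cmath>

// Illustrative server-side sanity check for a client-authoritative
// movement packet. This catches crude speed hacks, but as noted above,
// every kind of packet would need its own validation like this.
struct Vec3 { float X, Y, Z; };

static float Dist(const Vec3& A, const Vec3& B)
{
    const float DX = A.X - B.X, DY = A.Y - B.Y, DZ = A.Z - B.Z;
    return std::sqrt(DX * DX + DY * DY + DZ * DZ);
}

bool IsMoveValid(const Vec3& LastPos, const Vec3& NewPos,
                 float MaxSpeed, float DeltaTime, float Tolerance = 1.05f)
{
    // Allow a small tolerance for accumulated float error and jitter.
    return Dist(LastPos, NewPos) <= MaxSpeed * DeltaTime * Tolerance;
}
```

Even this simple check needs care: the `Tolerance` slack must cover legitimate jitter without leaving room for a subtle, slow-burn speed hack.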
RTT (Round Trip Time): the time taken to send a packet from the client to the server and back again.
The alternative (and more commonly used) approach is the server-authoritative model. This shifts all of the important information away from the client and onto the server, which means the server no longer needs to trust what the client is saying: we just send inputs, and the server tells us where our character should be. This makes cheating much harder, but it also adds a full RTT of delay between an input and the character moving. There are a few methods we can use to make actions feel more responsive, and the one I’ll talk about further in this post is prediction.
It’s worth noting that the choice is not necessarily one approach or the other. The decision is made on a case-by-case basis, and games will often choose either based on development needs. An action like bumping a physics object off a table may not affect the gameplay and is therefore safe enough to be client-authoritative, as even if clients did edit their packets it wouldn’t have any relevant consequences.
Determinism And Reliable Prediction
Prediction is the act of the client applying your player’s inputs to the local representation of the game state in a server-authoritative model, to predict what will happen to your character on the server. The server then applies the same input and sends the resulting state back to the client. If there are any disparities in the data, the client can reconcile the differences and continue predicting. This means any input immediately moves your character, avoiding the round trip of sending a packet to the server, which is necessary for many gameplay-critical actions such as shooting or moving to feel responsive. The downside is that extra care has to be taken to ensure the client state is always as correct as it can be and that any visual artifacts from mispredictions are handled correctly.
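The core loop described above can be sketched as follows. This is a minimal, engine-agnostic sketch, assuming a one-dimensional movement state and illustrative names; a real implementation would predict the full character state.

```cpp
#include <cstdint>
#include <deque>

// Minimal client-side prediction sketch. Each input is applied locally
// right away and also kept in a pending list so it can be replayed if
// the server's authoritative state later disagrees with our prediction.
struct Input { int32_t Frame; float MoveX; };
struct State { float PosX = 0.f; };

struct PredictedClient
{
    State Current;
    std::deque<Input> PendingInputs; // not yet acknowledged by the server

    static void Apply(State& S, const Input& In) { S.PosX += In.MoveX; }

    void PredictLocally(const Input& In)
    {
        Apply(Current, In);          // immediate response, no RTT wait
        PendingInputs.push_back(In); // remember for potential replay
    }

    // The server acknowledged up to AckFrame with authoritative state.
    void Reconcile(int32_t AckFrame, const State& ServerState)
    {
        while (!PendingInputs.empty() && PendingInputs.front().Frame <= AckFrame)
            PendingInputs.pop_front();
        Current = ServerState;  // accept the authoritative state
        for (const Input& In : PendingInputs)
            Apply(Current, In); // re-predict the still-unacked inputs
    }
};
```

If the server agrees with every prediction, `Reconcile` is effectively a no-op; only on a mispredict does the replay produce a visible correction.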
Inputs are sent to the server in the form of key down/up events plus the time of the input. There’s a problem here, though, with how Unreal handles ticks. Unreal will aim for a specific update rate, but if a frame takes too long due to having too much to process, that tick takes longer and the following ticks execute late as a result. Because of this, an input triggered at the start of a frame on the client may actually be executed at a time that is halfway through a tick on the server (as shown in the first image). There are also many cases where you’d want the client and server to run at different frame rates. While it is possible to implement subtick logic similar to what CS2 has, it’s still going to be difficult to reliably know at which time the server will execute your packet, as the server and clients are not running relative to each other.

To improve on this we can fix the tickrate so that every frame is guaranteed to advance the simulation by a specified amount of time. That ensures the networked logic on both the client and server runs at exactly the same rate, and we can work out approximately how many frames ahead the server is from the client by using clock sync algorithms as mentioned here. Logic may still be out by up to half a fixed tick (which is where CS2’s subtick logic helps), but it’s much closer than the inconsistency of the variable tick rate previously. In the image below you can see it’s much easier to estimate when the server will receive a packet: it always arrives 4 frames later, assuming a stable connection, and we will receive the resulting packet 8 frames after sending. It’s worth being aware that, due to working with a fixed tickrate, it’s possible to get stuck in a catch-up loop if logic takes longer than the fixed amount of time provided for the network tick, so any fixed-tick logic should be light, and heavier work that isn’t directly related should stay in the non-fixed update loop.
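A fixed tickrate is usually implemented with an accumulator: the variable-length frame time is banked, and the simulation is stepped in fixed-size increments. The sketch below is illustrative (not engine code) and includes the catch-up guard mentioned above.

```cpp
#include <algorithm>

// Classic fixed-timestep accumulator. The networked simulation always
// advances by FixedDt, regardless of how long the rendered frame took.
struct FixedTicker
{
    float FixedDt;            // e.g. 1.f / 60.f
    float Accumulator = 0.f;
    int   MaxStepsPerFrame;   // guard against the catch-up spiral

    // Returns how many fixed network ticks to run this frame.
    int Advance(float FrameDeltaTime)
    {
        Accumulator += FrameDeltaTime;
        int Steps = 0;
        while (Accumulator >= FixedDt && Steps < MaxStepsPerFrame)
        {
            Accumulator -= FixedDt;
            ++Steps;
        }
        // If we hit the cap, drop the excess time rather than trying to
        // simulate it all and falling further behind next frame.
        if (Steps == MaxStepsPerFrame)
            Accumulator = std::min(Accumulator, FixedDt);
        return Steps;
    }
};
```

The `MaxStepsPerFrame` cap is exactly the safeguard against the catch-up loop: a slow frame will never schedule more fixed ticks than the budget allows.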
An example of where this could be an issue is reloading a weapon. Imagine the server initiates a reload and we want to shoot as soon as that reload has finished. We know when the reload started and how long it will take. With fixed frames we can specify that shooting may resume after X frames, whereas with a variable tickrate the reload end time is more than likely going to fall between two ticks. We can also guarantee that X frames match the reload time if we have to go back and replay actions due to a reconcile; with a variable tickrate the final frame could be particularly expensive, take a whole second to complete, and push our shooting start time a full second later than expected.
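Converting a duration into a guaranteed tick count is a one-liner, but it is the piece that makes the reload end land on the same frame on the client, on the server, and on every replay during a reconcile. The function name here is illustrative:

```cpp
#include <cmath>

// With a fixed tickrate, a duration maps to an exact number of ticks.
// Round up so the action completes on the first tick at or after the
// requested duration, rather than one tick early.
int DurationToTicks(float DurationSeconds, float FixedDt)
{
    return static_cast<int>(std::ceil(DurationSeconds / FixedDt));
}
```

For example, at a 60 Hz fixed tickrate a 1.5-second reload always takes 90 ticks, no matter how uneven the rendered frame times are.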
Another issue we now face is that the server instantly executes our packets as they are received. Any unreliable packet can be delivered and executed out of order (or not received at all), which means our logic is no longer deterministic. To improve this we can add received packets to a buffer ordered by frame, ensuring that every packet is executed in the correct chronological order. We can only buffer for a finite amount of time, so any undelivered packets, or packets that take longer than our buffer allows for, will still be missed, and a reconcile may need to occur. This GDC talk by the Overwatch team covers this, including how they grow or shrink the buffer based on the client’s network conditions. The example below shows buffering only on the packets received by the server, but ideally you’d buffer on both the server and the client.
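A frame-ordered buffer can be as simple as a sorted map keyed by the client frame number. This is an illustrative sketch (not the plugin's implementation): packets are held briefly and consumed in frame order, so out-of-order delivery no longer changes execution order.

```cpp
#include <cstdint>
#include <iterator>
#include <map>
#include <optional>

struct InputPacket { int32_t ClientFrame; float MoveX; };

class InputBuffer
{
public:
    void Receive(const InputPacket& Packet)
    {
        // std::map keeps packets sorted by frame; a duplicate overwrites.
        Buffered[Packet.ClientFrame] = Packet;
    }

    // Pop the packet for the frame the server is about to simulate.
    // Returns nothing if it was lost or hasn't arrived in time, in
    // which case the client will later need to reconcile.
    std::optional<InputPacket> Consume(int32_t Frame)
    {
        auto It = Buffered.find(Frame);
        if (It == Buffered.end())
            return std::nullopt;
        InputPacket Packet = It->second;
        Buffered.erase(Buffered.begin(), std::next(It)); // discard older stragglers
        return Packet;
    }

private:
    std::map<int32_t, InputPacket> Buffered;
};
```

In a real implementation the buffer depth would be tuned (or grown and shrunk, as in the Overwatch talk) against the client's network conditions.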
If we go back to our reloading example, imagine we start reloading instantly after shooting our last bullet. These are two separate actions and are sent as separate packets. Due to this ordering issue, the reload packet may arrive on the server before the final shooting packet is executed, which means either our final bullet isn’t shot or the reload gets cancelled due to shooting after the reload has started. This is just one example, but in real games hundreds of these situations occur. In attribute-based games like a MOBA, players can have states applied to them, and it’s incredibly important that these are applied correctly. There’s a big difference between applying damage before or after a damage multiplier is present.
To achieve full determinism, the executions within each frame also need to be triggered in the correct order, and there may be other external issues at play, such as physics determinism or floating-point accuracy (especially cross-platform). A non-deterministic simulation is usually accurate enough for most use cases if we handle reconciles properly, but the previously mentioned Overwatch talk shows how they used an ECS architecture to create a deterministic network model.
Rolling Back When Something Goes Wrong
Another point I’d like to touch on is state syncing and reconciling. Once again, go back to the reloading example and imagine you’re pressing the reload button and that packet is being sent to the server. This time, though, the server disagrees and sends back a packet telling you to cancel what you’re doing, to get you back into the correct state. If we wanted to do this with Unreal networking, a simplified version might look something like the following:
bIsReloading = true;
// Trigger reload animations
// Clean up any visual/audio state due to the mispredict
The problem with this is that we would have to implement this logic for every piece of state we want to keep in sync, and then resolve those corrections in order, which wouldn’t be particularly easy unless all state is handled from a single point in your codebase. An alternative is to have a single state struct that stores all player-related state, and have the server send this state to the client each frame so the client can compare it against their own history. If the state is incorrect, you can simply reapply the authoritative state and run through each frame again with the same deterministic logic until you’re back up to speed.
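The struct-based approach can be sketched like this (illustrative names, a deliberately tiny state struct). The point is that one comparison of the whole struct per frame replaces per-variable correction code scattered across the codebase:

```cpp
#include <cstdint>
#include <map>

// All predicted player state lives in one struct, so a single
// comparison tells us whether anything at all mispredicted.
struct PlayerState
{
    float PosX = 0.f;
    bool  bIsReloading = false;
};

// Per-field comparison; a real implementation might instead hash or
// memcmp a carefully packed struct.
inline bool StatesMatch(const PlayerState& A, const PlayerState& B)
{
    return A.PosX == B.PosX && A.bIsReloading == B.bIsReloading;
}

struct StateHistory
{
    std::map<int32_t, PlayerState> ByFrame; // predicted state per frame

    // Returns true if the authoritative state for Frame disagrees with
    // what we predicted, meaning we must rewind and resimulate from it.
    bool NeedsReconcile(int32_t Frame, const PlayerState& Authoritative) const
    {
        auto It = ByFrame.find(Frame);
        return It == ByFrame.end() || !StatesMatch(It->second, Authoritative);
    }
};
```

When `NeedsReconcile` returns true, the client copies in the authoritative struct and re-runs its stored inputs for the following frames, exactly as in the prediction loop described earlier.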
This all sounds like a fair amount of work to add to your project but fortunately Epic have identified that this is an issue and have provided us with the Network Prediction plugin.
The Network Prediction Plugin
The Network Prediction plugin (which can also be found in Engine/Plugins/Runtime/NetworkPrediction – sign up here if you don’t have Git access) is available in a default Unreal Engine installation, but it is still experimental at the time of writing this article, and progress has halted as its main programmer (Dave Ratti) is no longer working at Epic. Through extensive personal testing, though, it is in a good state to be used, and it solves several of the issues mentioned here, such as packet buffering/ordering, client/server frame syncing, and desync/reconcile tooling, along with providing some other nice-to-have features. Epic have mentioned on UDN that they’re still working on a version of the Character Movement Component which is compatible with the NP plugin, but it is yet to be announced (it might be the Mover 2.0 component?). It’s also worth mentioning that if you’re using the Gameplay Ability System in your project, it will also not be compatible, and you’ll have to create your own alternative using the NP plugin if you want accurate results.
Note: If you do try to use this plugin alongside GAS or the CMC, you’ll have to account for the buffer time which exists in the NP plugin. If you don’t, all CMC/GAS functionality will execute sooner than NP code and you’ll see a visible delay in execution.
Documentation for the Network Prediction plugin is fairly limited, so I’ve also written up a guide in the next post on how to use the plugin. Hope it helps!