Input Delay


The basics.

What is it?
First of all, we need to understand the two main protocols of the internet that transport packets.
TCP and UDP.
Every application will send and receive data. TCP has a built-in mechanism to resend packets that do not arrive. UDP just sends the data from point A to B.
TCP is reliable but slower than UDP. UDP is less reliable but fast.
Players of certain types of games need “fast” data: shooters, football games, fighting games, etc.
Turn-based games do not need to use fast data and can rely on TCP.
The problem with UDP is unreliability: packets arriving out of order, or not arriving at all. This is a problem for “fast-paced” games.
So games that use UDP use netcode to try to solve the problems that may arise when packets are not arriving on time or at all.
Netcode deals with latency variations between players. It should also handle packets that do not arrive, in a way which keeps the game fair to all players.
These are the fundamentals of what netcode is meant to do: equalize gameplay across different player “connections”.
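To make the TCP/UDP difference concrete, here is a minimal Python sketch (the port and payload are made up for illustration). UDP hands a datagram to the network and forgets about it; TCP's reliability lives in the operating system's protocol stack, not in the application.

```python
import socket

def send_udp(payload: bytes, addr=("127.0.0.1", 9999)) -> int:
    """Fire-and-forget: no delivery guarantee, no ordering, no resends.
    Returns the number of bytes handed to the OS, not proof of arrival."""
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    try:
        return s.sendto(payload, addr)
    finally:
        s.close()

send_udp(b"player_input:shoot")

# A TCP socket (SOCK_STREAM) would first complete a handshake, and the
# kernel would then acknowledge, order, and retransmit lost segments
# transparently -- reliable, but a lost segment stalls everything
# queued behind it, which is exactly the delay fast games cannot afford.
```

That stall (head-of-line blocking) is why "TCP is reliable but slower" in practice, even though its raw packets travel just as fast.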
I won’t go into the types of netcode as they can be complex.


What went wrong?

It is difficult to know, without having inside knowledge, how EA’s netcode works.
Latency will usually vary from player to player, and this should be easy to equalize. If one player is closer to the server, latency is added to their connection so that their commands are not acted upon before their opponent's; otherwise the player closest to the server has an advantage.
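That equalization can be sketched in a couple of lines, assuming the simplest possible scheme (pad the faster player's inputs up to the slower player's latency):

```python
def equalize(latency_a_ms: float, latency_b_ms: float):
    """Return the artificial delay to add to each player's inputs so
    that both are processed as if they had the higher latency."""
    target = max(latency_a_ms, latency_b_ms)
    return target - latency_a_ms, target - latency_b_ms

# Player A is 20 ms from the server, player B is 60 ms away:
# A's commands are held an extra 40 ms so neither side acts first.
delay_a, delay_b = equalize(20, 60)  # (40, 0)
```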
Packet loss however is treated differently. The game’s engine will normally predict what the lost packet was supposed to contain. Your commands are then reliant on quality “guessing”.
Some games use a system to resend packets, and EA uses this approach: the sender is asked to resend a packet that didn't arrive, presumably detected through some form of packet sequencing. The problem is that even with this system, the resent packet arrives after the time it was intended for, so it is effectively delayed. This may not be an issue at low loss levels, as the game can still rely on “guessing”, and the odd dropped packet is easy to guess.
But when you get higher loss levels, it will be harder to guess an individual packet’s commands.
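Some rough arithmetic shows why a resent packet is always late. The scheme below is an assumption (we don't know EA's exact mechanism): the receiver only notices a gap when the next packet arrives, then a resend request has to cross the network one way and the resent copy has to come back.

```python
def resend_arrival_ms(one_way_ms: float, send_interval_ms: float) -> float:
    """Earliest arrival time of a resent packet, measured from when the
    lost packet was originally sent, assuming a notice-gap-then-request
    scheme."""
    gap_noticed = send_interval_ms + one_way_ms  # next packet reveals the hole
    return gap_noticed + 2 * one_way_ms          # request back + resend forward

# 30 ms each way, packets sent every 16 ms (~60 per second):
arrival = resend_arrival_ms(30, 16)  # 106 ms
late_by = arrival - 30               # 76 ms later than it should have been
```

So even a modest 30 ms one-way latency turns one lost packet into data that is the better part of a tenth of a second stale.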
Normally UDP traffic will see a low level of loss, because of the nature of networks and the protocol itself. If you get a sequence of packets that say the player is running at full speed, you can see how in this instance “guessing” is easy.
Start adding more commands and a higher rate of loss, and the “guessing” becomes harder.
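A toy sketch of both halves: spotting holes via sequence numbers, and “guessing” by extrapolating the last known state (dead reckoning). The one-dimensional state model is deliberately simplistic, purely for illustration.

```python
def missing_seqs(received: list) -> list:
    """Sequence numbers reveal holes: anything absent between the lowest
    and highest number seen has been lost or reordered."""
    return sorted(set(range(min(received), max(received) + 1)) - set(received))

def predict(last_pos: float, velocity: float, frames_missing: int) -> float:
    """Extrapolate from the last known state. Easy when recent packets
    all said 'running at full speed'; much shakier once the inputs
    change direction inside the gap."""
    return last_pos + velocity * frames_missing

missing_seqs([1, 2, 4, 5, 7])        # [3, 6] -- two packets to guess
predict(100.0, 2.5, 1)               # 102.5 -- one drop while sprinting: trivial
# Five consecutive drops during a direction change: the same formula
# confidently extrapolates the wrong way.
```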
So what happens if a player with low loss matches against a player with higher loss levels?
One-sided delay.
There is also a scenario where two players have the same level of loss but different latencies. Here the player with the higher latency will have more of his resends arriving after the cut-off time. If there isn't a cut-off time, which is possible in the case of this game, then the code is either processing resends later or “guessing” badly. The resent packet arrives later because the higher-latency player's packets take longer to arrive in the first place.
In the case of processing later and using the resent packets, this adds even more delay for the higher-latency player, regardless of the resends themselves.
Usually, games have frames where packets are processed, but this isn’t always the case.
Like I said earlier it is hard to know what system EA uses and without facts, we can only speculate.
Some games use rollback, which adds delay but is meant to be fair. FIFA, IMO, relies on more time-critical data, which itself causes issues.
Your teammate’s AI for example is still time-critical. Timing of runs, reactions to cutting out passes, second man press, stopping after second man press, covering, etc.
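For reference, rollback works roughly like this toy sketch (the general technique as used in fighting games, not EA's implementation): predict the remote player's input by repeating the last one seen, and when the real input for a past frame finally arrives, rewind to that frame and re-simulate forward.

```python
def simulate(state, local_input, remote_input):
    # Stand-in game step: the "state" is just a number for illustration.
    return state + local_input + remote_input

class Rollback:
    def __init__(self, state=0):
        self.states = [state]   # state snapshot before each frame
        self.local = []         # confirmed local inputs
        self.remote = []        # remote inputs (predicted until confirmed)

    def advance(self, local_input, predicted_remote):
        """Run a frame immediately, using a *predicted* remote input."""
        self.local.append(local_input)
        self.remote.append(predicted_remote)
        self.states.append(simulate(self.states[-1], local_input, predicted_remote))

    def confirm(self, frame, real_remote):
        """The real remote input for a past frame arrived. If the
        prediction was wrong, rewind and re-simulate from that frame."""
        if self.remote[frame] == real_remote:
            return
        self.remote[frame] = real_remote
        state = self.states[frame]
        for f in range(frame, len(self.local)):
            state = simulate(state, self.local[f], self.remote[f])
            self.states[f + 1] = state

rb = Rollback()
rb.advance(1, 0)   # frame 0: predicted the opponent did nothing
rb.advance(1, 0)   # frame 1: same prediction
rb.confirm(0, 2)   # opponent actually moved on frame 0: frames 0-1 redone
# rb.states[-1] is now 4 instead of the mispredicted 2.
```

The visible cost is exactly what the text describes: when a misprediction is corrected, objects snap to where they "really" were, which is why rollback trades smoothness for fairness.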

The netcode has to be good enough to equalize gameplay, and that would require mass testing with reliable testers. Using famous FIFA content creators is fine for EA's marketing purposes, but for understanding gameplay for anyone with delay, it is not good.
Internet usage has grown massively since fibre was introduced.
Before fibre, many connections were usage-capped, even the “unlimited” ones. You may have had the bandwidth, but use too much of a certain type of traffic that needs good routing and you could find yourself facing degraded gameplay.
Fibre arrived, and since then many more services have been added. Streaming services have grown massively. I have Movistar with the football package: it runs through the router off a different VLAN, the phone also runs off the internet, and the TV uses a fair amount of data when on. Many people have similar packages, so the number of packets in the Movistar network must be astronomical. If everyone used their TVs at the same time, what could possibly go wrong if the infrastructure was not big enough? There must be some locations that suffer from this, because it would be hard for network operators to quickly build new infrastructure when needed.
As usage increases, there will be a point where certain services outgrow the infrastructure. Your 4K TV service may drop back to HD. Your game may have one in ten packets dropped. These are ways of keeping the network running, also known as QoS. We normally think of QoS as a good thing, and it is in a busy network. But what it does is drop and/or queue packets, and since game packets aren't seen as vital, you can see the potential problem.


My suggestion for EA would be to employ testers from around the globe. Use several freelancers in each location. A few per ISP in each location from their home connections. Run different scenarios.
Use this data to work out how to equalize gameplay to a better standard.
Have the freelancers meet and play in LAN sessions, and devise a system so we know the rough ability of each player. This would make it easier for the testing teams to really know the effects of one-sided delay.

In the meantime, the code needs to have a matchmaking algorithm to limit the variance in latency where packet loss is an issue.
EA needs to be more transparent regarding packet loss, as the silence paints a bad picture of them.
For example, having low loss on the EA connection tool but 35% packet resends showing on Wireshark is laughable.
Especially when EA themselves told us they resend packets. FIFA Pitch Notes #15 on FIFA 19.

Official link:

This is the interesting bit which let the cat out of the bag.

Another small issue.

This issue may impact the gameplay more than I think.

When researching lag compensation, I did so because I thought that was the natural cause of one-sided delay. But after researching how game makers apply lag compensation I have had many thoughts as to how differences can affect us.
As stated earlier, lag compensation in a 1 v 1 game is quite easy: the two latencies are measured, and delay in milliseconds is added for the lower-latency player at some point.
The issue is how they calculate the latency. Round-trip time is the time it takes for a packet to be sent to a destination and arrive back. The problem with round-trip-time latency is that the two directions can use different routes. Your ISP determines the route your packets take, but it does not determine the route of the game server's packets sent back to you; that is determined by the network EA uses.
The ISPs will have different agreements, and the route will vary, especially once packets are outside the game network and your ISP's network.
The problem this causes is the one-way latency can vary. One-way being your packets to the game server and vice versa. When you get a difference, one of two things can be affected.
You will either see the gameplay slightly later or earlier than assumed, and your inputs are shifted the opposite way: if you see the gameplay early, your inputs arrive slightly late; if you see the gameplay late, your inputs arrive slightly early.
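In numbers, assuming the compensation code treats each direction as RTT/2 (a common simplification, and an assumption here):

```python
def perceived_offset_ms(uplink_ms: float, downlink_ms: float) -> float:
    """Timing shift caused by route asymmetry when compensation assumes
    each direction is RTT/2. Positive: you see play late and your
    inputs land early. Negative: you see play early and inputs land late."""
    rtt = uplink_ms + downlink_ms
    return downlink_ms - rtt / 2

# A 5 ms route difference (25 ms up, 30 ms down) shifts everything
# you see by 2.5 ms -- for the entire match.
offset = perceived_offset_ms(25, 30)  # 2.5
```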
This could be the cause of an out of sync feeling.
I tested my old connection to the iD3 network in Rotterdam and there was a 5ms difference between the two routes. This difference isn’t a massive amount. But if the difference is throughout the whole game, you are not going to be able to time everything perfectly.

If lag compensation is to be done correctly, the four data streams involved (your inputs to the server and the server's data to you, plus the same pair for your opponent) need their latencies taken into account individually, with lag compensation calculated and applied separately for each stream. IMO the best way to do this is simply to work out which of the four streams has the highest latency and pad the rest up to that figure.
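That pad-to-the-slowest idea is a one-liner; the latency figures below are made up for illustration:

```python
def stream_delays(latencies_ms):
    """One measured latency per stream (your inputs up, server state
    down, and the same pair for the opponent). Pad every stream up to
    the slowest so all four are effectively equal."""
    target = max(latencies_ms)
    return [target - l for l in latencies_ms]

# Order: my uplink, my downlink, opponent's uplink, opponent's downlink.
stream_delays([25, 30, 40, 35])  # [15, 10, 0, 5]
```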
It would still be best for matchmaking to avoid pairing players with too big a difference in overall (RTT) latency, though.