
Speed tests and the missing Mbps: why can't you hit the 1Gbps mark?


By Theo van Zyl and Andre Eksteen

Long-time internet users might remember the good old days of dial-up connections, and the excitement of a noticeable difference in experience every time the line speed was upgraded.

Fast forward to today, and customers don’t seem to feel the same difference when upgrading - why is that the case?

This is where it becomes necessary to understand the difference between capacity and throughput, and the evolving connectivity requirements at home and in the workplace.

A favourite thing for people to do when they get an internet connection for the first time, or upgrade an existing one, is to run a speed test.

But something strange happens: a customer with a 500Mbps connection might download a file from a fast server and achieve 480Mbps; when they upgrade to 1Gbps, they might expect to see 960Mbps, but instead they only get 600Mbps.

External factors impacting performance

Why? At lower speeds, the internet connection itself is the bottleneck - meaning that you can get close to the plan's advertised speed. But beyond certain speeds, the bottleneck is no longer your access speed, but external factors such as servers, end-to-end network conditions, and even the TCP/IP protocol itself.

So, even if you have a 1Gbps, 10Gbps, or 100Gbps connection, if the maximum throughput achievable due to server-side or network limitations is 600Mbps, that is all you will get. In this example, the capacity - the maximum data rate under ideal conditions - is 1Gbps, while the throughput - the actual data rate achieved in the real world - is 600Mbps. As fibre internet access line capacities continue to increase, these limitations will become more noticeable.
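To make the distinction concrete, the short Python sketch below (an illustration using the numbers from the example above, not a formula from any provider) treats the achievable throughput as the minimum of every limit along the path: the access line, the server and the network in between.

```python
# Illustrative sketch: throughput is capped by the tightest bottleneck on the
# path, not by the access line alone. Numbers come from the example above.
def effective_throughput_mbps(line_capacity, *path_limits):
    """Return the best achievable rate given the line and any other limits."""
    return min(line_capacity, *path_limits)

# A 1Gbps line, but the server/network path tops out at 600Mbps.
print(effective_throughput_mbps(1000, 600))   # -> 600
# Upgrading the line to 10Gbps changes nothing if the path limit stays put.
print(effective_throughput_mbps(10000, 600))  # -> 600
```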

The same applies to wireless services where, for example, even though 5G may be capable of a theoretical speed of 20Gbps, this is based on using the best equipment in ideal conditions, and even then there are radio frequency limitations that will prevent this speed from being reached in real-world use cases.

ISPs also make use of contention, which differentiates service profiles: a dedicated service has far less contention than a broadband service. Higher-contended products do not guarantee that every customer can use their full connection at the same time, because customers share a portion of the network's capacity. In such instances, users may observe higher throughput at off-peak times, and lower throughput at peak times when more users are active. Similarly, with wireless services, the more people connect to a single tower, the lower the throughput becomes.
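As a rough sketch of contention (a simplification, not an ISP's actual traffic-management model), the snippet below divides a shared segment's capacity among whoever is active at the time, which is why off-peak throughput looks better than peak throughput.

```python
# Simplified model of a contended service: active users share the capacity of
# the segment roughly equally. Real traffic management is more sophisticated.
def per_user_throughput_mbps(shared_capacity, active_users):
    return shared_capacity / max(active_users, 1)

print(per_user_throughput_mbps(1000, 2))   # off-peak: 500.0 Mbps each
print(per_user_throughput_mbps(1000, 25))  # peak: 40.0 Mbps each
```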

Speed tests not an accurate picture

Now, back to the speed tests: firstly, in order to be accurate, the speed test must be done in a manner that removes all other variables, which means using a wired connection directly from the laptop to the router.

Users also have to make sure that the device being used for the test is capable: if a laptop has a 100Mbps network port, that is the maximum you are going to get. Even the category of the LAN cable - CAT 4, 5, 5e or 6 - may influence your test results. For example, a CAT 4 cable can only handle 16Mbps and a CAT 5 only 100Mbps, while a CAT 5e can handle up to 1Gbps.
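Those cable figures can be expressed as a simple lookup. The sketch below is illustrative only, using the approximate ratings mentioned above, and shows how the cable, rather than the line, can become the ceiling for a speed test.

```python
# Approximate maximum data rates per LAN cable category (Mbps), as cited above.
# CAT 6 can reach roughly 10Gbps over short runs.
CABLE_LIMIT_MBPS = {"CAT 4": 16, "CAT 5": 100, "CAT 5e": 1000, "CAT 6": 10000}

def speed_test_ceiling_mbps(line_speed, cable):
    """A speed test cannot exceed the slower of the line and the cable."""
    return min(line_speed, CABLE_LIMIT_MBPS[cable])

print(speed_test_ceiling_mbps(1000, "CAT 5"))   # -> 100: the cable is the bottleneck
print(speed_test_ceiling_mbps(1000, "CAT 5e"))  # -> 1000: the line is the bottleneck
```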

Servers, switches, routers, cables, firewalls and access points can all have a negative impact on speed.

As an example, when fibre network operators were rolling out free speed upgrades over the past few years, customers found they were unable to benefit because their routers were incapable of handling over 100Mbps.

Many were unaware of this and instead thought the problem lay with the fibre network operator or their ISP.

The second challenge is that high-speed connections, such as a gigabit link, were not designed to deliver 1Gbps to a single user on a single device (as would be seen on a speed test), but rather to connect multiple users, devices and applications concurrently to the same network.

Having a gigabit connection is not going to ensure that a single user has perfect video calls all of the time; rather, it means that you can have multiple users all taking part in video calls at the same time, each having an optimal experience.

The combined throughput required from a link can be determined by adding up the simultaneous usage of each user or device.

For example, should 30 concurrent users or devices each require an effective throughput of 10Mbps, one would need a 300Mbps service. It is all about ensuring that each user or device has a reasonable experience.
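That sizing rule is simple multiplication; a minimal sketch of it (illustrative, not a formal capacity-planning tool) follows.

```python
# Rule-of-thumb capacity planning from the example above: concurrent
# users/devices multiplied by the effective throughput each one needs.
def required_capacity_mbps(concurrent_users, per_user_mbps):
    return concurrent_users * per_user_mbps

print(required_capacity_mbps(30, 10))  # -> 300: a 300Mbps service for 30 users at 10Mbps each
```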

Here, the limitations of the internet protocol or devices no longer apply, because it’s not one device trying to download a file at 1Gbps, but multiple devices that are accessing cloud-based services, downloading, streaming video and gaming online at the same time, and making full use of the bandwidth available.

More to connectivity than just speed

What we are starting to see is that a speed test is no longer an accurate reflection of what you can do with a high-speed internet connection - to run a test properly for the modern use case would require you to fire up multiple connections concurrently and test the total capacity of the connection.
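A hedged sketch of that idea is shown below - the test file URL is a placeholder and the approach is a simplification of what commercial multi-stream speed tests do - but it illustrates the principle of measuring several parallel downloads and summing the result, rather than relying on a single stream.

```python
# Illustrative multi-stream measurement: download the same file over several
# parallel connections and report the combined throughput. The URL below is a
# placeholder; point it at a real large test file before running.
import time
from concurrent.futures import ThreadPoolExecutor
from urllib.request import urlopen

TEST_URL = "https://example.com/100MB.bin"  # hypothetical test file

def download_bytes(url):
    with urlopen(url) as response:
        return len(response.read())

def aggregate_throughput_mbps(url, streams=4):
    start = time.time()
    with ThreadPoolExecutor(max_workers=streams) as pool:
        total_bytes = sum(pool.map(download_bytes, [url] * streams))
    elapsed = time.time() - start
    return (total_bytes * 8) / (elapsed * 1_000_000)

# print(aggregate_throughput_mbps(TEST_URL, streams=4))
```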

Most of us grew up in an age when there were severe limitations on local networks, and this was the main bottleneck; with such low speeds to start off with, every bump up made a noticeable difference, such as significantly faster download speeds. It also meant that speed tests had more relevance back then.

However, as technology has evolved, this is no longer the case, and doubling your line speed will not mean you can download a file in half the time it used to take.

On fibre services, the line speed is the maximum download capacity of the line.

On wireless connectivity, there is a practical, achievable speed based on network load, and a theoretical speed that will never be reached in practice.

We briefly mentioned theoretical and real-world speeds in relation to wireless services, but as we start seeing higher speeds on fibre lines, it is likely that we are also approaching the theoretical limits of that medium as well.

For example, how can you properly test a 10Gbps line when the devices themselves are not capable of handling such speeds due to limitations in processing power, memory and other components?

The reality is that we are reaching a stage of bandwidth abundance, where service providers can provide users with more bandwidth than they would actually require.

At this stage, speed is no longer everything; what is important is having the capacity to ensure a quality experience across all users and devices.

Theo van Zyl is the Head of Wireless at Vox, and Andre Eksteen is the Senior Product Manager of FTTB at Vox.

