Understanding Internet Routing & Transfer Speeds
When transferring data to your remote datastore, you may notice that your actual upload speed doesn't match the bandwidth your ISP advertises. This is normal behavior and is caused by how internet routing works. This page explains why this happens and how to diagnose it.
How Internet Routing Works
When you send data from your server to your remote datastore, the packets don't travel in a straight line. Instead, they pass through a series of intermediate networks (called Autonomous Systems, or AS) before reaching their destination. Each network hands the packet off to the next one based on routing tables managed by the Border Gateway Protocol (BGP).
A typical packet path might look like this:
Your Server → Your ISP → Transit Provider A → Internet Exchange → Transit Provider B → Datacenter → Your Datastore
Each of these handoffs is called a hop. The route your data takes is determined by peering agreements, cost, and routing policy, not by geographic proximity or raw performance. This means your traffic might travel through Frankfurt, Amsterdam, and back to reach a destination 200km away.
Potential Bottlenecks
Several factors along the route can reduce your effective transfer speed.
Peering & Transit Capacity
ISPs and transit providers connect to each other at peering points or through paid transit agreements. If the link between two providers is congested (especially during peak hours), your throughput will be limited by the slowest segment in the chain, regardless of your own connection speed.
Congested Internet Exchanges
Major Internet Exchange Points (IXPs) like DE-CIX, AMS-IX, or LINX handle massive amounts of traffic. While they are built for scale, individual peering sessions between two networks at an exchange can still become saturated.
Geographic Detours
BGP optimizes for policy and cost, not latency. Your packets may take a geographically inefficient route. A backup from Hamburg to a datacenter in Nuremberg could be routed through Amsterdam or even London, adding latency and reducing throughput.
Protocol Overhead & TCP Behavior
TCP, the protocol used by most backup software, adjusts its sending rate based on packet loss and round-trip time (RTT). Higher latency or any packet loss along the route causes TCP to slow down significantly. A single congested hop can reduce your effective speed far below your line rate.
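The relationship between loss, RTT, and throughput can be made concrete with the Mathis formula, which gives a rough upper bound on steady-state TCP throughput: MSS / (RTT × √loss). The sketch below uses assumed values (1460-byte MSS, 30 ms RTT, 0.1% loss) to show how even a small amount of loss caps a single TCP stream far below a gigabit line rate:

```shell
# Mathis formula: TCP throughput <= MSS / (RTT * sqrt(loss)).
# Assumed values: 1460-byte MSS, 30 ms RTT, 0.1% packet loss.
awk 'BEGIN {
  mss  = 1460 * 8       # maximum segment size in bits
  rtt  = 0.030          # round-trip time in seconds
  loss = 0.001          # packet loss rate (0.1%)
  printf "~%.1f Mbit/s per TCP stream\n", mss / (rtt * sqrt(loss)) / 1e6
}'
```

Doubling the RTT halves this ceiling, which is why a geographically detoured route hurts throughput even when no link on it is saturated.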
ISP Traffic Shaping
Some ISPs intentionally throttle upload traffic or deprioritize certain types of traffic (e.g., large sustained transfers). This is more common on residential connections but can also affect business lines depending on the provider.
Diagnosing Your Route
The following commands help you understand what path your traffic takes and where bottlenecks might occur.
traceroute / tracepath
Shows every hop between your server and the destination, along with the latency at each step.
```shell
# Linux
traceroute your-datastore.example.com

# Alternative that doesn't require root
tracepath your-datastore.example.com
```
Look for sudden jumps in latency between two hops; this often indicates a congested or geographically distant link.
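Spotting such jumps can be scripted. The snippet below flags any hop whose average latency rises more than 20 ms over the previous hop; the inlined data (hop number, average RTT in ms) is an invented sample for illustration, and in practice you would feed it values parsed from traceroute or mtr output:

```shell
# Flag hops where latency jumps by more than 20 ms over the previous hop.
# Input columns: hop number, average RTT in ms (fabricated sample data).
awk '{
  if (NR > 1 && $2 - prev > 20)
    print "jump at hop", $1, "(" $2 " ms)"
  prev = $2
}' <<'EOF'
1 0.5
2 5.3
3 48.2
4 49.0
EOF
```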
mtr (My Traceroute)
Combines traceroute and ping into a single tool that continuously monitors the route. This is the most useful tool for identifying intermittent issues.
```shell
# Run for 100 cycles and generate a report
mtr -r -c 100 your-datastore.example.com

# Interactive mode
mtr your-datastore.example.com
```
Key columns to watch:
| Column | Meaning |
|---|---|
| Loss% | Packet loss at this hop. Anything above 0% on the final hop indicates a real problem. |
| Avg | Average round-trip time in ms. |
| Best / Wrst | Best and worst latency observed; a large gap suggests jitter or congestion. |
Note: Packet loss on intermediate hops is not always a problem. Many routers deprioritize ICMP (traceroute) traffic in favor of real data. Only loss on the final hop is a reliable indicator of issues.
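Filtering a report for lossy hops is easy to script. The report below is a fabricated sample for illustration; in practice, pipe the output of `mtr -r -c 100` into the same awk filter:

```shell
# Print every hop reporting loss (Loss% is the third column in report mode).
# The report here is a fabricated sample for illustration only.
awk 'NR > 2 && $3+0 > 0 {print $2, "reports", $3, "loss"}' <<'EOF'
Start: 2024-01-01T02:00:00+0000
HOST: myserver            Loss%   Snt   Last   Avg  Best  Wrst StDev
  1.|-- gateway            0.0%   100    0.4   0.5   0.3   2.1   0.2
  2.|-- isp-core           0.0%   100    5.1   5.3   4.9  12.0   0.8
  3.|-- transit-a         10.0%   100   14.2  14.8  13.9  40.1   3.2
  4.|-- datastore          0.0%   100   21.0  21.4  20.8  25.6   0.7
EOF
```

In this sample only an intermediate hop shows loss while the final hop is clean, which, per the note above, may simply be ICMP deprioritization rather than real data loss.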
iperf3
Measures the actual throughput between two endpoints, bypassing application-level overhead.
```shell
# On the remote side (if accessible)
iperf3 -s

# On your server
iperf3 -c your-datastore.example.com -t 30 -P 4
```
The `-P 4` flag uses 4 parallel streams, which can help saturate the connection and give a more realistic picture of available bandwidth.
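Parallel streams help because a single TCP connection can only keep one congestion window of data in flight. The sketch below computes the bandwidth-delay product (BDP), the amount of in-flight data needed to fill the pipe, using assumed values for link speed and RTT:

```shell
# Bandwidth-delay product = bandwidth * RTT.
# Assumed values: 1 Gbit/s link, 30 ms RTT.
awk 'BEGIN {
  bw  = 1e9 / 8         # link speed in bytes per second
  rtt = 0.030           # round-trip time in seconds
  printf "BDP: %.2f MB\n", bw * rtt / 1e6
}'
```

If one stream's window never grows that large (for example, after loss-induced backoff), several streams running in parallel can still fill the link between them.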
Looking Glass & BGP Tools
To see the actual BGP route from various networks to your destination, you can use public Looking Glass services:
```shell
# Check which AS your traffic passes through
traceroute -A your-datastore.example.com

# Query a route server (example)
whois -h whois.radb.net your-destination-ip
```
Many transit providers and IXPs offer web-based Looking Glass tools where you can run traceroutes from their perspective.
What You Can Do
If you're consistently seeing lower-than-expected transfer speeds, consider the following:
- Run mtr during your backup window to identify where the bottleneck is. If the issue is consistently at a specific hop, it's likely a peering or transit capacity problem outside of your control.
- Schedule backups during off-peak hours (typically late night / early morning) when transit links are less congested.
- Enable compression in your backup software to reduce the amount of data that needs to traverse the network.
- Use incremental backups to minimize the volume of data transferred in each session.
- Consider a WireGuard VPN tunnel: in some cases, routing traffic through a VPN can take a different (and sometimes better) path through the internet. See WireGuard VPN for setup instructions.
- Contact your ISP if you see consistent packet loss or throttling on their network. Providing them with mtr output makes the conversation much more productive.
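The off-peak scheduling and mtr diagnostics above can be combined in a system crontab entry (`/etc/crontab` format with a user field). The paths, script name, and hostname below are hypothetical; adjust them to your environment:

```shell
# Capture an mtr report just before the backup window, then run the backup
# at 02:30 when transit links are typically less congested.
# Script path, log paths, and hostname are hypothetical examples.
25 2 * * * root mtr -r -c 100 your-datastore.example.com >> /var/log/mtr-backup-window.log
30 2 * * * root /usr/local/bin/run-backup.sh >> /var/log/backup.log 2>&1
```

Keeping the mtr logs over several weeks gives you the kind of evidence that makes a conversation with your ISP productive.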
Summary
Your advertised upload speed is the maximum capacity of your local connection. The actual throughput to any given destination depends on every network between you and that destination. Understanding this distinction helps set realistic expectations and diagnose issues when transfer speeds are lower than anticipated.