Edge Locations

Available Edge Locations

| Location | Region | Hostname | Use when your servers are in |
| --- | --- | --- | --- |
| 🇨🇦 Beauharnois, Quebec | North America | bhs1-1.edge.pbs-host.de | Canada, US East, US Central |
| 🇺🇸 Hillsboro, Oregon | North America | hil1-1.edge.pbs-host.de | US West, US Central |
| 🇸🇬 Singapore | Asia | sgp1-1.edge.pbs-host.de | Asia |
| 🇦🇺 Sydney | Australia | syd1-1.edge.pbs-host.de | Australia, New Zealand |
| 🇧🇷 São Paulo | South America | sao1-1.edge.pbs-host.de | Central America, South America |

Edge locations are included at no extra cost. If you need a point of presence in a region not listed here, contact us and we'll evaluate adding one.


Proxmox Backup Server backups over high-latency links are significantly slower than the available bandwidth would suggest. This is not a configuration issue. Two independent bottlenecks in the network stack cap throughput far below what the link can carry. This page explains both problems and how our edge locations resolve them.

Problem 1: PBS Protocol Overhead

Proxmox Backup Server uses HTTP/2 over TLS for backup transport. During a backup, the client splits data into chunks (typically 4 MB for fixed-size, variable for dynamic) and uploads them individually. The protocol flow for each chunk looks like this:

  1. Client sends POST /dynamic_chunk (or /fixed_chunk) with the chunk data
  2. Server receives the chunk, writes it to the datastore, and responds
  3. Client receives the response, then sends the next chunk

This is a serial request-response pattern. The client does not send the next chunk until the current one is acknowledged. On a local network with sub-millisecond round-trip times, this is invisible. Over a WAN link, each chunk upload is gated by the full round-trip time.

With a 150 ms RTT (e.g. Canada to Frankfurt) and 4 MB chunks, the theoretical maximum is:

4 MB / 0.150 s = 26.7 MB/s ≈ 213 Mbit/s

In practice, PBS achieves less than this because the protocol involves additional round trips beyond the raw chunk upload: index appends, negotiation, and HTTP/2 framing overhead. We measured 44–67 Mbit/s on a link that iperf3 shows can carry over 1 Gbit/s.
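The ceiling above is easy to verify with a few lines (an illustrative model only: it ignores transmission time and counts exactly one round trip per chunk):

```python
CHUNK_BYTES = 4_000_000  # 4 MB fixed-size chunk (decimal MB, as above)
RTT_S = 0.150            # 150 ms round trip, e.g. Canada to Frankfurt

def serial_ceiling_mbit(chunk_bytes: int, rtt_s: float) -> float:
    """Best case for a strictly serial protocol: one chunk
    completes per round trip, so throughput = chunk / RTT."""
    return chunk_bytes * 8 / rtt_s / 1e6

ceiling = serial_ceiling_mbit(CHUNK_BYTES, RTT_S)
# ceiling is about 213 Mbit/s, before index appends, negotiation,
# and framing overhead push real-world numbers lower still
```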

Info
HTTP/2 supports concurrent streams, but the PBS backup protocol processes requests sequentially on the server side. Sending multiple chunks concurrently would create race conditions between chunk registration and index operations. The serial behavior is by design, not a limitation of the transport layer.

Problem 2: TCP Congestion Window

Even if the protocol were fully parallel, a single TCP connection cannot saturate a fast, high-latency link.

TCP uses a congestion window (cwnd) to control how much data can be in flight at any time. The maximum throughput of a single TCP connection is:

throughput = cwnd / RTT

The congestion window grows over time (slow start, then congestion avoidance), but it is bounded by the receive window, kernel buffer sizes, and packet loss. On a 150 ms path, a single TCP stream with typical Linux kernel defaults achieves around 200–250 Mbit/s even under ideal conditions. With any packet loss, the window collapses and recovers slowly.
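To make the numbers concrete, the formula can be evaluated in both directions (a back-of-the-envelope sketch; the 4 MB figure is an assumed effective window consistent with the defaults described above, not a measurement):

```python
def tcp_throughput_mbit(cwnd_bytes: float, rtt_s: float) -> float:
    """Single-connection ceiling: throughput = cwnd / RTT."""
    return cwnd_bytes * 8 / rtt_s / 1e6

def window_needed_bytes(target_mbit: float, rtt_s: float) -> float:
    """Bandwidth-delay product: how many bytes must be in flight
    to sustain a given rate over a given round-trip time."""
    return target_mbit * 1e6 / 8 * rtt_s

# An assumed ~4 MB effective window on a 150 ms path tops out
# near 213 Mbit/s, consistent with the 200-250 Mbit/s range above.
single_stream = tcp_throughput_mbit(4_000_000, 0.150)

# Saturating 1 Gbit/s at 150 ms would need ~18.75 MB in flight,
# far more than defaults (plus any packet loss) will sustain.
needed = window_needed_bytes(1000, 0.150)
```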

This is why iperf3 with a single stream measures 216 Mbit/s, but iperf3 -P 8 (8 parallel streams) reaches 1,040 Mbit/s. Each stream maintains its own independent congestion window, so the aggregate throughput scales roughly linearly with the number of connections.

PBS uses a single TCP connection for the entire backup session. Even if you remove the protocol serialization from Problem 1, this single-connection limit caps throughput well below the link capacity.

Measured Impact

We measured all combinations on a link between OVH BHS (Beauharnois, Quebec) and our Frankfurt datacenter:

| Method | Throughput |
| --- | --- |
| iperf3 single stream | 216 Mbit/s |
| iperf3 -P 8 | 1,040 Mbit/s |
| PBS direct | 44–67 Mbit/s |
| TCP multiplexer only (no write accelerator) | 80 Mbit/s |
| Write accelerator + TCP multiplexer | 573 Mbit/s |

The TCP multiplexer alone improves things modestly (80 Mbit/s) because the PBS protocol serialization is the dominant bottleneck. The write accelerator alone would be limited by the single TCP stream. Both together eliminate both bottlenecks: 573 Mbit/s, an 8–13x improvement over direct PBS.

Solution: Edge Locations

An edge location is a point of presence deployed close to your server. It runs two components that address each bottleneck independently.

Data Flow Diagram

Write Accelerator (solves Problem 1)

The write accelerator is a protocol-aware PBS proxy that sits on the edge node. It speaks the full PBS backup protocol (HTTP/1.1 upgrade to HTTP/2, chunk uploads, index operations, finish) and acts as the backup target from the client's perspective.

When your client uploads a chunk, the accelerator:

  1. Writes the chunk to a local NVMe spool
  2. Immediately returns success to the client
  3. Forwards the chunk to the real PBS server asynchronously, with up to 64 uploads in parallel

Your client never waits for a Frankfurt round trip. It runs at the speed of the local link to the edge node.

Not all operations can be ACKed instantly. Index closes, finishes, and appends act as barriers: they block until all pending chunk forwards have completed. This preserves protocol correctness: a backup only reports success when every chunk is confirmed stored in the datacenter.
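The ack-locally, forward-asynchronously, barrier-on-finish behavior can be sketched with asyncio (a minimal illustration under our own assumptions; the class and method names are invented, and the NVMe spool write is elided):

```python
import asyncio

class WriteAccelerator:
    """Sketch of spool-and-forward: ack chunks locally, forward them
    upstream with bounded concurrency, and make barrier operations
    (index close/append, finish) wait for the whole backlog."""

    def __init__(self, forward, max_parallel: int = 64):
        self.forward = forward      # coroutine: upload one chunk to the real PBS
        self.sem = asyncio.Semaphore(max_parallel)
        self.pending = set()        # in-flight forward tasks

    async def upload_chunk(self, chunk) -> str:
        # The write to the local NVMe spool would happen here.
        task = asyncio.create_task(self._forward(chunk))
        self.pending.add(task)
        task.add_done_callback(self.pending.discard)
        return "ok"                 # ack immediately; no WAN round trip

    async def _forward(self, chunk):
        async with self.sem:        # at most max_parallel uploads in flight
            await self.forward(chunk)

    async def barrier(self):
        # Block until every chunk accepted so far is confirmed upstream.
        if self.pending:
            await asyncio.gather(*list(self.pending))
```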

TCP Multiplexer (solves Problem 2)

The TCP multiplexer splits the single connection between edge and datacenter into N parallel TCP streams (default: 8). Each stream maintains its own kernel congestion window. Traffic is distributed across streams using round-robin with a framed protocol (16-byte header: sequence number, length, flags).
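The framing can be illustrated with a plausible header layout (the text specifies only a 16-byte header carrying sequence number, length, and flags; the field widths and byte order below are our assumptions):

```python
import struct

# Assumed layout: u64 sequence, u32 payload length, u32 flags = 16 bytes.
HEADER = struct.Struct(">QII")
N_STREAMS = 8

def encode_frame(seq: int, payload: bytes, flags: int = 0) -> bytes:
    """Prefix a payload with the 16-byte multiplexer header."""
    return HEADER.pack(seq, len(payload), flags) + payload

def decode_frame(buf: bytes):
    """Split a frame back into (seq, flags, payload)."""
    seq, length, flags = HEADER.unpack_from(buf)
    return seq, flags, buf[HEADER.size:HEADER.size + length]

def pick_stream(seq: int) -> int:
    """Round-robin distribution across the parallel streams; the
    sequence number lets the receiver reassemble frames in order."""
    return seq % N_STREAMS
```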

The aggregate throughput scales with the number of streams:

8 streams × ~125 Mbit/s per stream ≈ 1,000 Mbit/s theoretical

In practice, we see around 800 Mbit/s through the multiplexer (measured with a built-in benchmark mode), which is close to the iperf3 -P 8 result of 1,040 Mbit/s.