

posted by martyb on Tuesday July 25 2017, @07:16PM   Printer-friendly
from the better-feeding-of-the-pipes dept.

Google has debuted a new algorithm for traffic congestion control, TCP BBR:

We're excited to announce that Google Cloud Platform (GCP) now features a cutting-edge new congestion control algorithm, TCP BBR, which achieves higher bandwidths and lower latencies for internet traffic. This is the same BBR that powers TCP traffic from google.com and that improved YouTube network throughput by 4 percent on average globally — and by more than 14 percent in some countries.

[...] BBR ("Bottleneck Bandwidth and Round-trip propagation time") is a new congestion control algorithm developed at Google. Congestion control algorithms — running inside every computer, phone or tablet connected to a network — decide how fast to send data.

How does a congestion control algorithm make this decision? The internet has largely used loss-based congestion control since the late 1980s, relying only on indications of lost packets as the signal to slow down. This worked well for many years, because internet switches' and routers' small buffers were well-matched to the low bandwidth of internet links. As a result, buffers tended to fill up and drop excess packets right at the moment when senders had really begun sending data too fast.
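The loss-based approach described above can be sketched in a few lines. This is an illustrative additive-increase/multiplicative-decrease (AIMD) toy, not an exact model of Reno or CUBIC: the sender probes upward until a dropped packet signals a full buffer, then backs off sharply.

```python
def aimd_step(cwnd: float, loss: bool,
              increase: float = 1.0, decrease: float = 0.5) -> float:
    """One round-trip of additive-increase / multiplicative-decrease.

    cwnd is the congestion window in packets; a loss event is the
    only congestion signal, as in classic loss-based control.
    """
    if loss:                              # drop => buffer overflowed somewhere
        return max(1.0, cwnd * decrease)  # multiplicative back-off
    return cwnd + increase                # no loss => keep probing for bandwidth

cwnd = 10.0
for rtt_had_loss in [False, False, False, True, False]:
    cwnd = aimd_step(cwnd, rtt_had_loss)
# grows 10 -> 11 -> 12 -> 13, halves to 6.5 on the loss, then resumes probing
```

With today's deep buffers this scheme keeps filling the queue before it sees a loss, which is the bufferbloat problem motivating BBR.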

[...] We need an algorithm that responds to actual congestion, rather than packet loss. BBR tackles this with a ground-up rewrite of congestion control. We started from scratch, using a completely new paradigm: to decide how fast to send data over the network, BBR considers how fast the network is delivering data. For a given network connection, it uses recent measurements of the network's delivery rate and round-trip time to build an explicit model that includes both the maximum recent bandwidth available to that connection, and its minimum recent round-trip delay. BBR then uses this model to control both how fast it sends data and the maximum amount of data it's willing to allow in the network at any time.
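The model the quote describes boils down to two estimators and one product. Here is a hedged sketch (the names and the cwnd gain of 2x are illustrative, not the kernel implementation): take the maximum recent delivery rate as the bottleneck bandwidth, the minimum recent RTT as the propagation delay, and cap in-flight data around their product, the bandwidth-delay product (BDP).

```python
def bbr_model(delivery_rates_bps, rtts_s, cwnd_gain: float = 2.0):
    """Toy version of BBR's two-parameter network model.

    delivery_rates_bps: recent measured delivery rates (bits/s)
    rtts_s: recent measured round-trip times (seconds)
    """
    btl_bw = max(delivery_rates_bps)    # bottleneck bandwidth estimate
    rt_prop = min(rtts_s)               # round-trip propagation estimate
    bdp_bytes = btl_bw / 8 * rt_prop    # bandwidth-delay product
    return {
        "pacing_rate_bps": btl_bw,            # how fast to send
        "cwnd_bytes": cwnd_gain * bdp_bytes,  # max data allowed in flight
    }

m = bbr_model([80e6, 100e6, 95e6], [0.040, 0.030, 0.035])
# btl_bw = 100 Mbit/s, rt_prop = 30 ms => BDP = 375,000 bytes
```

Pacing at the estimated bottleneck rate rather than blasting a full window is what lets BBR keep queues short instead of filling them until packets drop.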

IETF draft and GitHub.


Original Submission


 
This discussion has been archived. No new comments can be posted.
  • (Score: 3, Interesting) by cafebabe on Wednesday July 26 2017, @09:57AM

    by cafebabe (894) on Wednesday July 26 2017, @09:57AM (#544561) Journal

    Attempting to minimize round trip time is good. Measuring round trip time while blatantly disregarding RFC3390 is bad. As I noted three weeks ago [soylentnews.org]:-

    UDP [wikipedia.org] has a reputation for packet flood but TCP congestion control [wikipedia.org] is wishful thinking which is actively undermined by Google, Microsoft and others. Specifically, RFC3390 [ietf.org] specifies that unacknowledged data sent over TCP should be capped. In practice, the limit is about 4KB. However, Microsoft crap-floods 100KB over a fresh TCP connection. Google does similar. If you've ever heard an end-user say that YouTube is reliable or responsive, that's because YouTube's servers are deliberately mis-configured to shout over your other connections. Many of these top-tier companies are making a mockery of TCP congestion control.
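The ~4 KB cap the commenter cites can be computed directly from RFC 3390, which sets the TCP initial congestion window to min(4*MSS, max(2*MSS, 4380 bytes)). A short sketch:

```python
def rfc3390_initial_window(mss: int) -> int:
    """Upper bound on the TCP initial congestion window, in bytes,
    per RFC 3390: min(4*MSS, max(2*MSS, 4380))."""
    return min(4 * mss, max(2 * mss, 4380))

rfc3390_initial_window(1460)  # -> 4380 bytes for a standard Ethernet MSS
```

For MSS = 1460 this yields 4380 bytes, the "about 4KB" limit above; a server opening with ~100 KB of unacknowledged data is sending over twenty times that. (Note that RFC 6928 later proposed raising the initial window to 10 segments, which is part of how large senders justify bigger bursts.)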

    This latest development falls into the same category as Google AMP: a good concept corrupted by commercial concerns [soylentnews.org].

    --
    1702845791×2