BBR: Congestion-Based Congestion Control
Venue
ACM Queue, vol. 14, September-October 2016, pp. 20-53
Publication Year
2016
Authors
Neal Cardwell, Yuchung Cheng, C. Stephen Gunn, Soheil Hassas Yeganeh, Van Jacobson
Abstract
By all accounts, today’s Internet is not moving data as well as it should. Most of
the world’s cellular users experience delays of seconds to minutes; public Wi-Fi in
airports and conference venues is often worse. Physics and climate researchers need
to exchange petabytes of data with global collaborators but find their carefully
engineered multi-Gbps infrastructure often delivers at only a few Mbps over
intercontinental distances [6]. These problems result from a design choice made when
TCP congestion control was created in the 1980s: interpreting packet loss as
“congestion” [13]. This equivalence was true at the time, but only because of technology
limitations, not first principles. As NICs (network interface controllers) evolved
from Mbps to Gbps and memory chips from KB to GB, the relationship between packet
loss and congestion became more tenuous. Today TCP’s loss-based congestion
control, even with the current best of breed, CUBIC [11], is the primary cause of these
problems. When bottleneck buffers are large, loss-based congestion control keeps
them full, causing bufferbloat. When bottleneck buffers are small, loss-based
congestion control misinterprets loss as a signal of congestion, leading to low
throughput. Fixing these problems requires an alternative to loss-based congestion
control. Finding this alternative requires an understanding of where and how
network congestion originates.
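
To make the two failure modes concrete, here is a small back-of-the-envelope sketch. The numbers and the helper functions are illustrative assumptions, not taken from the paper: the queueing-delay calculation simply divides a standing buffer by the bottleneck rate, and the loss-limited throughput uses the standard Mathis model for Reno-style loss-based senders (throughput ≈ C·MSS/(RTT·√p)).

```python
# Illustrative numbers only (not from the paper): why loss-based congestion
# control hurts with both large and small bottleneck buffers.

def bufferbloat_delay_s(buffer_bytes: float, bottleneck_bps: float) -> float:
    """Queueing delay added when a loss-based sender keeps the buffer full."""
    return buffer_bytes * 8 / bottleneck_bps

def mathis_throughput_bps(mss_bytes: float, rtt_s: float, loss_rate: float) -> float:
    """Classic loss-based throughput model: ~ C * MSS / (RTT * sqrt(p))."""
    C = 1.22  # constant for Reno-style additive-increase/multiplicative-decrease
    return (mss_bytes * 8 * C) / (rtt_s * loss_rate ** 0.5)

# Large buffer: a 10 Mbps access link with a 1 MB buffer that stays full
# adds ~0.8 s of queueing delay to every packet (bufferbloat).
print(f"added queueing delay: {bufferbloat_delay_s(1e6, 10e6):.2f} s")

# Small buffer / long path: 100 ms RTT with 0.01% loss caps a loss-based
# sender at roughly 14 Mbit/s, regardless of how fast the pipe is.
print(f"loss-limited throughput: {mathis_throughput_bps(1460, 0.1, 1e-4) / 1e6:.1f} Mbit/s")
```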