1
Department of Informatics, Networks and Distributed Systems (ND) group

Modularizing TCP with timers
Michael Welzl
Net Group, University of Rome Tor Vergata
2
Goal
- Dissect TCP into general-purpose transport protocol modules, such that some can become hardware primitives
- So that we can SDN-enable TCP and other transports
- End result: simpler code, hardware-supported, platform-independent
- Which modules are there?
3
Transport modules

Module                                      | TCP status
--------------------------------------------|------------------------------------------
Connection management                       | ?
Buffer management / communication with the app (sender, receiver) |
Constructing headers (data, ACK)            | Hardware support, I think?
Sending packets (data, ACK)                 | TSO, GSO, pacing
Receiving and parsing packets (data, ACK)   |
Checksum calculation                        | Hardware support exists
Flow control                                | = receiver buffer management?
Congestion control                          | "Pluggable" in Linux and FreeBSD
Loss recovery                               | A mess! (also messes up the CC "module")
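The congestion-control row is the one module TCP already exposes somewhat cleanly (Linux registers CC modules through struct tcp_congestion_ops). As a sketch of where this table points, here is what a modularized transport could look like; this is a Python illustration with assumed class and method names, not an existing API:

```python
# Illustrative sketch of a modularized transport, loosely inspired by
# Linux's pluggable congestion control (struct tcp_congestion_ops).
# All names here are assumptions made for this example.
from abc import ABC, abstractmethod

class CongestionControl(ABC):
    """The module that is already 'pluggable' in Linux and FreeBSD."""
    @abstractmethod
    def on_ack(self, acked_bytes: int, rtt_sample: float) -> None: ...
    @abstractmethod
    def on_loss(self) -> None: ...
    @abstractmethod
    def on_ecn(self) -> None: ...

class LossRecovery(ABC):
    """The module the table calls 'a mess': scoreboard + retransmissions."""
    @abstractmethod
    def on_ack(self, ack) -> None: ...
    @abstractmethod
    def packets_to_retransmit(self) -> list: ...

class Transport:
    """Composes the modules; header construction and checksum calculation
    are the natural candidates for hardware primitives."""
    def __init__(self, cc: CongestionControl, recovery: LossRecovery):
        self.cc = cc
        self.recovery = recovery
```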
4
TCP today
[State diagram: SS -> CA -> FR. SS starts the connection ("we're clueless!"); it was designed as a replacement for starting with, e.g., cwnd=370. CA is where we want to be: when we don't lose packets, even with an ECN cwnd reduction, we stay there. FR is where we end up after loss, and it is not "pluggable" today! (Annotated, in despair: "Noooo!!! Don't let it happen!")]
5
The "ACK clocking" rule Packet conservation principle important, bla bla ACK clock, must preserve, bla bla Is a bursty ACK-clocked TCP better, or a paced non-ACK clocked TCP? CA: cwnd+=1 breaks ACK clocking SS breaks ACK clocking IW10 breaks ACK clocking TLP/RACK = FR slightly deviating from ACK clocking! Strict ACK-clocking only applied in FR What good has it done us?
6
ACK-clocking in FR
- Estimate "pipe": the number of packets in flight (a sketch of the estimate follows below)
- Try to keep that constant
- "In flight" really means: "in flight" + "in queue"; so, try hard to keep the queue filled??? Fantastic!
- For instance, it can't handle drops of retransmits; this has been called a "feature"
- RACK can handle it... but (currently?) won't reduce cwnd again...
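For reference, a sketch of the RFC 6675 pipe estimate the first bullet refers to, assuming a per-segment scoreboard with SACKed/lost/retransmitted flags:

```python
# Sketch of the RFC 6675 SetPipe() walk over the scoreboard
# (segments above the cumulative ACK point; flags are assumed fields).

def estimate_pipe(scoreboard) -> int:
    pipe = 0
    for seg in scoreboard:
        if not seg.sacked and not seg.lost:
            pipe += 1   # the original transmission is assumed in flight
        if seg.retransmitted:
            pipe += 1   # its retransmission is in flight as well
    return pipe
```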
7
Standing queues do exist: Reno...
- All tests with a 3-host topology and a 5 Mbit/s bottleneck in the CORE emulator; 1500-byte packets
- Default queue: DropTail (FIFO), 100 packets
- The queue length exceeds the BDP in all our tests (base RTT 100 ms: BDP = 41 packets; checked below)
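A quick check of the quoted BDP:

$$\text{BDP} = \frac{5\,\mathrm{Mbit/s} \times 100\,\mathrm{ms}}{8\,\mathrm{bit/byte} \times 1500\,\mathrm{byte/packet}} = \frac{62\,500\,\mathrm{byte}}{1500\,\mathrm{byte/packet}} \approx 41\ \text{packets}$$

So the 100-packet DropTail queue can hold well over twice the BDP, which is what makes a standing queue possible.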
8
... and Cubic...
9
Remember PRR? It can make things worse, it seems!
[Figure: packet-level timelines comparing loss recovery per RFC 6675 with Rate-Halving (Linux), showing ack#, cwnd, pipe, and the packets sent per ACK (N = new data, R = retransmission). With Rate-Halving, the queue drains a little.]
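For context, PRR (RFC 6937) grew out of Rate-Halving; here is a sketch of its per-ACK update, in packet units rather than the RFC's bytes (the state names follow the RFC, the code itself is illustrative):

```python
# Sketch of the Proportional Rate Reduction update from RFC 6937,
# in packet units (the RFC counts bytes; "+ 1" here stands for "+ MSS").
# state holds: ssthresh, recover_fs (FlightSize at the start of
# recovery), prr_delivered, prr_out.

def prr_on_ack(state, delivered: int, pipe: int) -> int:
    state.prr_delivered += delivered
    if pipe > state.ssthresh:
        # Proportional part: CEIL(prr_delivered * ssthresh / RecoverFS)
        # - prr_out, spreading the reduction over one RTT.
        num = state.prr_delivered * state.ssthresh
        sndcnt = (num + state.recover_fs - 1) // state.recover_fs \
                 - state.prr_out
    else:
        # Slow-start reduction bound (PRR-SSRB): catch back up to
        # ssthresh, but no faster than slow start would.
        limit = max(state.prr_delivered - state.prr_out, delivered) + 1
        sndcnt = min(state.ssthresh - pipe, limit)
    sndcnt = max(sndcnt, 0)
    state.prr_out += sndcnt
    return sndcnt   # segments the sender may transmit on this ACK
```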
10
This can cause a "double drop"...
From: "Virtualized Congestion Control" Tech.rep. longer version of SIGCOMM'16 paper
11
A test from one of my Ph.D. students
Is this PRR? We believe so... But everyone thinks that PRR is only a good thing?
12
Solving the loss recovery problem
- Basic function that all protocols need: remember which packets were sent / ACKed, and when, for re-sending and for RTT calculation (the "scoreboard"; a minimal sketch follows below)
- I claim: scoreboard operations could be much simpler, and that might even make them better
- Going in the direction of RACK, but "all the way": base everything on a timer
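A minimal scoreboard, as a sketch with assumed names (real stacks keep equivalent per-segment state, e.g. per-skb in Linux):

```python
# Minimal scoreboard sketch: remember what was sent / ACKed, and when.
# Field and method names are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class SentPacket:
    seq: int                  # first sequence number of the segment
    length: int
    sent_time: float          # feeds both RTT samples and loss timers
    acked: bool = False
    retransmitted: bool = False

class Scoreboard:
    def __init__(self):
        self.packets: dict[int, SentPacket] = {}

    def on_send(self, seq: int, length: int, now: float) -> None:
        self.packets[seq] = SentPacket(seq, length, now)

    def on_ack(self, seq: int, now: float):
        """Mark a segment ACKed; return an RTT sample if it is clean."""
        pkt = self.packets.get(seq)
        if pkt is None or pkt.acked:
            return None
        pkt.acked = True
        if not pkt.retransmitted:   # Karn's rule: skip ambiguous samples
            return now - pkt.sent_time
        return None
```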
13
What I envision
[Diagram: only SS and CA remain, and SS only at the beginning!]
- Simple rules for increase/decrease events (magnitude determined by the CC, like before): increase upon ACK; decrease upon ECN or loss
- Loss is determined via an (aggressive, not an RTO!) per-packet timeout; reduce every time! (see the sketch after this list)
- Undo if we got it wrong (ACKs that shouldn't have arrived: spurious loss detection), and adjust the timers
- Avoid over-reacting: look at the ACK rate + the RTT
- No need for an RTO with SS, because we back off exponentially (instead of: cwnd *= factor, then cwnd = 1)
- "Reduce every time" is already done today with ECN!
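A hedged sketch of how these rules could fit together, reusing the scoreboard above; the timeout formula and the reordering-window constant are assumptions in the spirit of RACK (RFC 8985), applied to every packet:

```python
# Sketch of per-packet, timer-based loss detection with undo.
# REO_WND and all helper names are illustrative assumptions.

REO_WND = 0.25   # extra wait before declaring loss, as a fraction of min RTT

def detect_losses(scoreboard, srtt, min_rtt, now, cc, retransmit) -> None:
    # Aggressive per-packet timeout: roughly one smoothed RTT plus a small
    # reordering window; far below a classic RTO.
    timeout = srtt + REO_WND * min_rtt
    for pkt in scoreboard.packets.values():
        if not pkt.acked and not pkt.retransmitted \
                and now - pkt.sent_time > timeout:
            cc.on_loss()              # reduce every time, as ECN does today
            pkt.retransmitted = True
            retransmit(pkt)

def on_spurious_detection(cc, reo_state) -> None:
    # An ACK arrived for a packet we had declared lost: undo the
    # reduction we charged for it and widen the timer.
    cc.undo_reduction()
    reo_state.widen()
```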
14
End result
- Much simpler
- More modular
- More robust? As good as PRR, but better able to handle lost retransmits?
- Any other direct benefits?
15
Thoughts?