


1 End-to-end Congestion Management for the NGI
Hari Balakrishnan, MIT Laboratory for Computer Science, http://nms.lcs.mit.edu/
DARPA NGI PI Meeting, October 2, 2000
With Srinivasan Seshan (CMU), Frans Kaashoek (MIT), Dave Andersen, Deepak Bansal, Dorothy Curtis, Nick Feamster

2 iNAT Project: Motivation
Increasing heterogeneity in the Internet:
–Nodes: mobiles, devices, sensors, ...
–Links: optical, wireless, ...
–Services & applications: Web, telepresence, streaming, remote device control
Need a general solution for applications to discover resources and deal with mobility. Need a general framework for learning about and adapting to changing network conditions.

3 iNAT Approach
Intelligent naming:
–Resource discovery: Intentional Naming System (INS) using expressive names and a self-configuring name-resolver overlay network
–Mobility: via dynamic name updates and secure connection migration (check out the demo!)
Adaptive transmission:
–End-system congestion management and adaptation framework for the NGI
–Congestion Manager software and algorithms

4 Congestion Manager (CM): A new end-system architecture for congestion management
The problem: end-to-end congestion management is essential:
–Reacting when congestion occurs
–Probing for spare bandwidth when it doesn't
–The future isn't just about TCP!
Many applications are inherently adaptive, but they don't adapt today:
–Enable applications to learn about network conditions
Many applications use concurrent flows between sender and receiver, which has adverse effects:
–Enable efficient multiplexing and path sharing

5 The Big Picture
Flows are aggregated into macroflows to share congestion state; all congestion management tasks are performed in the CM; apps learn and adapt using the API.
[Diagram: applications (HTTP, Video1, Audio, Video2) over TCP1, TCP2, and UDP; the Congestion Manager sits above IP, keeps per-"macroflow" statistics (cwnd, rtt, ...), and exposes the API.]

6 CM Architecture
[Architecture diagram] Sender side: the application (TCP, HTTP, RTP, etc.) talks to the CM through the API (hints, cm_update(feedback)). Inside the CM, the Congestion Controller applies stable controls and decides when to send, the Scheduler shares macroflow bandwidth and decides who can send, and a Congestion Detector and Prober monitor the path. Receiver side: a Responder answers over the CM protocol, and feedback is dispatched back to the sender.

7 Transmission API
Traditional kernel buffered-send has problems: it does not allow the app to "pull back" data.
[Diagram: app calls cm_send(,dst); packets are queued to dst below the CM; when the rate changes, the app can't pull them out and re-encode.]
Lesson: move buffering into the application.

8 Transmission API (cont.)
Callback-based send: the app calls cm_request(); the CM invokes cmapp_send() based on the allowed rate, and only then does the app send(). Schedule requests, not packets. This enables apps to adapt "at the last instant".
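This callback flow can be mocked in a few lines of Python (a sketch only: MockCM and the immediate-grant policy are invented here, and the names merely mirror the slide's cm_request()/cmapp_send(); the real CM API is a C library):

```python
# Mock of the CM callback-based transmission API (Python sketch).

class MockCM:
    """Stands in for the Congestion Manager: instead of buffering
    application data, it grants send permission via a callback."""
    def __init__(self, allowed_bytes):
        self.allowed = allowed_bytes   # bytes the congestion state permits

    def cm_request(self, nbytes, callback):
        # The CM decides when the flow may transmit; this mock grants
        # immediately, up to the remaining budget.
        grant = min(nbytes, self.allowed)
        self.allowed -= grant
        callback(grant)                # cmapp_send(): app sends *now*

sent = []
def cmapp_send(nbytes):
    # The app chooses what to send at the last instant (e.g., it could
    # re-encode to fit nbytes), then hands exactly that much down.
    sent.append(nbytes)

cm = MockCM(allowed_bytes=3000)
cm.cm_request(1460, cmapp_send)   # granted in full
cm.cm_request(1460, cmapp_send)   # granted in full
cm.cm_request(1460, cmapp_send)   # only 80 bytes of budget remain
print(sent)                        # [1460, 1460, 80]
```

Because data stays in the application until the callback fires, nothing is committed to the network before the CM says the rate allows it.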

9 Transmission API (cont.)
The request API works for asynchronous sources:
while (some_event) {
  get_data();  /* e.g., from a file, image capture, etc. */
  send_data(); /* call cm_request() and send on callback */
}
But what about synchronous sources (e.g., audio at a constant sampling rate)?
do_every_t_ms { /* timer loop */
  get_data();
  send(); /* oops, waiting for the send callback wrecks timing */
}
Solution: a rate-change callback, cmapp_update(newrate). The sender then adapts its packet size or timing to the new rate.

10 Benefits of macroflow sharing
Shared learning:
–Avoids overly aggressive behavior
–Good for Internet stability and fairness
Adoption incentives:
–More consistent performance of concurrent downloads
–Avoids independent slow-starts and improves response times
–Beats persistent-connection HTTP on interactive performance by allowing parallel downloads

11 CM Web Performance
[Plot: sequence number vs. time (s), comparing TCP NewReno with CM] CM greatly improves the predictability and consistency of downloads.

12 CM applications
–TCP over CM
–Congestion-controlled UDP
–HTTP server: uses TCP/CM for concurrent connections; cm_query() to pick content formats
–SCTP: Stream Control Transmission Protocol
–Real-time streaming applications: synchronous API for audio (e.g., vat); callback API for video (a scalable MPEG-4 delivery system)

13 Congestion Control for Streaming Applications
CM provides a flexible framework for per-macroflow congestion control algorithms. TCP-style additive-increase/multiplicative-decrease (AIMD) is ill-suited for streaming media: multiplicative decrease causes large, drastic rate changes.
[Plot: window vs. time, showing slow start followed by the AIMD sawtooth]
Goal: smooth rate reductions.

14 TCP-friendliness
Throughput vs. loss-rate equation for AIMD: throughput ≈ K · size / (sqrt(p) · RTT). This is important for safe deployment and competition with TCP connections.
Two different approaches:
–Increase/decrease rules. Increase: w(t+R) ← I(w), e.g., w+1 or 2w. Decrease: w(t+δt) ← D(w), e.g., w/2.
–Loss-rate monitoring (e.g., TFRC): estimate the loss rate p and set rate ← f(p).
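The AIMD throughput relation is easy to evaluate numerically. A minimal sketch (the constant K ≈ sqrt(3/2) and the example packet size, RTT, and loss rate are assumed values, not from the slides):

```python
from math import sqrt

def tcp_friendly_rate(pkt_size_bytes, rtt_s, loss_rate, K=1.22):
    """Throughput (bytes/s) from the AIMD loss-throughput relation:
    rate = K * size / (RTT * sqrt(p)).  K ~ sqrt(3/2) for standard TCP."""
    return K * pkt_size_bytes / (rtt_s * sqrt(loss_rate))

# Example: 1460-byte packets, 100 ms RTT, 1% loss
r = tcp_friendly_rate(1460, 0.100, 0.01)
print(round(r))   # ~178120 bytes/s, i.e., roughly 1.4 Mbit/s
```

Any algorithm whose long-run throughput matches this f(p) for the same loss rate competes fairly with TCP, which is the sense of "TCP-friendly" used on the following slides.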

15 Binomial Algorithms
I(w) and D(w) are nonlinear functions:
–Increase: w(t+R) ← w + α/w^K
–Decrease: w(t+δt) ← w − β·w^L
They generalize the linear algorithms: AIMD (K=0, L=1); MIMD (K=−1, L=1). When L < 1, reductions are smaller than with multiplicative decrease. Are there interesting TCP-friendly binomial algorithms?
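A Python sketch of the binomial update rules (the α and β values and the starting window are illustrative assumptions):

```python
# Binomial window updates:
#   increase (per RTT, no loss): w <- w + alpha / w**K
#   decrease (on loss):          w <- w - beta * w**L

def increase(w, K, alpha=1.0):
    return w + alpha / w**K

def decrease(w, L, beta=0.5):
    return w - beta * w**L

w = 16.0
# AIMD (K=0, L=1): additive increase, halve on loss
print(increase(w, K=0))               # 17.0
print(decrease(w, L=1))               # 8.0
# SQRT (K=L=0.5): slower increase, much gentler decrease
print(round(increase(w, K=0.5), 2))   # 16.25
print(decrease(w, L=0.5))             # 14.0
```

The SQRT numbers show the point of L < 1: a loss at w = 16 costs only 2 packets of window instead of 8, which is what makes such algorithms attractive for streaming.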

16 The (K,L) space
Update rules: I: w(t+R) ← w + α/w^K; D: w(t+δt) ← w − β·w^L.
[Diagram of the (K,L) plane] The line K+L = 1 (with L ≤ 1) is TCP-friendly and contains AIMD (K=0, L=1), IIAD (K=1, L=0), and SQRT (K=L=0.5). Algorithms with −1 < K+L < 1 (e.g., AIAD, MIAD, MIMD) are more aggressive than AIMD; those with K+L > 1 are less aggressive. The regions K+L < −1 and L > 1 are unstable.
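The regions of the plane can be captured in a small classifier (a sketch using the boundaries as drawn on the slide; convergence subtleties at the region borders are ignored):

```python
def classify(K, L):
    """Place a binomial algorithm in the (K, L) space:
    unstable outside L <= 1 and K+L >= -1; TCP-friendly on K+L = 1;
    otherwise more or less aggressive than AIMD."""
    if L > 1 or K + L < -1:
        return "unstable"
    if K + L == 1:
        return "TCP-friendly"
    if K + L < 1:
        return "more aggressive than AIMD"
    return "less aggressive than AIMD"

print(classify(0, 1))       # AIMD -> TCP-friendly
print(classify(1, 0))       # IIAD -> TCP-friendly
print(classify(0.5, 0.5))   # SQRT -> TCP-friendly
print(classify(-1, 1))      # MIMD -> more aggressive than AIMD
```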

17 Window Evolution
dw/dt = α / (w^K · RTT)
[Plot: w(t) vs. t for AIMD and a binomial algorithm] There is a trade-off between increase aggressiveness and decrease magnitude; the TCP-friendliness rule is K+L = 1.
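The growth equation can be integrated numerically to see the trade-off (a rough Euler sketch; α, RTT, the initial window, and the step size are illustrative values):

```python
# Integrate dw/dt = alpha / (w**K * RTT) between loss events.

def window(t_end, K, alpha=1.0, rtt=0.1, w0=2.0, dt=0.001):
    w, t = w0, 0.0
    while t < t_end:
        w += dt * alpha / (w**K * rtt)   # forward Euler step
        t += dt
    return w

# K=0 (AIMD): linear growth, w(t) = w0 + alpha*t/RTT
print(round(window(1.0, K=0), 1))   # ~12.0
# K=1 (IIAD): sublinear growth, w(t) = sqrt(w0**2 + 2*alpha*t/RTT)
print(round(window(1.0, K=1), 1))   # ~4.9
```

Larger K means slower growth between losses, which is why a TCP-friendly algorithm with large K must compensate with a small L (gentler decrease), giving K+L = 1.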

18 Binomial Algorithms Benefit Layered MPEG-4 Delivery

19 CM Linux Implementation
[Implementation diagram] In the kernel: the congestion controller, scheduler, prober, and CM macroflow state sit above IP (ip_output(), cm_notify()), with TCP and congestion-controlled UDP (UDP-CC) using the kernel API. In user space: libcm.a, a user-level library that implements the API; app streams issue requests and updates via system calls (e.g., ioctl) and receive cmapp_*() callbacks over a control socket.

20 Server performance
[Plot: CPU seconds for 200K packets vs. packet size (bytes), comparing cmapp_send(), buffered UDP-CC, and TCP and TCP/CM each with and without delayed ACKs]

21 Status
CM Linux alpha code release this week: http://nms.lcs.mit.edu/projects/cm/
Sender-only CM is soon to be put forward as a proposed standard in the IETF ECM working group:
–WG document: draft-ietf-ecm-cm-02.txt
–Mailing list: ecm-request@aciri.org
On-going work:
–Evaluation of "slowly responsive" algorithms
–Macroflow formation for diffserv
–Congestion control vs. feedback frequency
–CM scheduler algorithms
–Using binomial algorithms on high-speed paths

22 Summary
The Congestion Manager (CM) framework provides an end-to-end adaptation platform for NGI applications and protocols. CM enables:
–Adaptive applications using application-level framing (ALF) ideas
–Efficient multiplexing and stable control of concurrent flows by sharing path information
–Per-macroflow congestion control algorithms, including binomial algorithms for streaming media
Download it!

