ECF: an MPTCP Scheduler to Manage Heterogeneous Paths


ECF: An MPTCP Scheduler to Manage Heterogeneous Paths. Yeon-sup Lim (1), Erich M. Nahum (1), Don Towsley (2), and Richard Gibbens (3). (1) IBM T.J. Watson Research Center; (2) University of Massachusetts Amherst; (3) University of Cambridge. ACM CoNEXT, Incheon, Korea, Dec. 2017.

Introduction
Multipath TCP (MPTCP) simultaneously utilizes multiple interfaces. Packets can be scheduled onto each subflow according to several policies. How do these scheduling policies affect MPTCP performance?
[Figure: an application socket feeds the MPTCP layer, which splits traffic into subflow-1 over WiFi and subflow-2 over cellular (LTE).]

MPTCP Default Scheduler
The default scheduler sends packets over the available subflow with the lowest RTT.
[Figure: the connection-level send buffer feeds the MPTCP scheduler, which dispatches packets to subflow 1 (RTT1), subflow 2 (RTT2), and so on, each with its own subflow-level send buffer.]

Does the MPTCP default scheduler provide the ideal aggregate bandwidth? – Video streaming case
Dynamic Adaptive Streaming over HTTP (DASH). Video: 22 min, 6 resolutions, 5 s chunks.
Required bandwidth (bit rate) for each resolution:
Resolution:       144p   240p   360p   480p   720p   1080p
Bit rate (Mbps):  0.26   0.64   1.00   1.60   4.14   8.47
What is the ideal average bit rate given the path bandwidths? e.g., 0.3 Mbps WiFi + 8.6 Mbps LTE = 8.9 Mbps > 8.47 Mbps (1080p), so all chunks can be served at 1080p: ideal = 8.47 Mbps.*
* Depends on the ABR scheme, but the basic ABR goal is to achieve this average bit rate.
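The ideal-bit-rate arithmetic above can be sketched as follows. The ladder values come from the slide's table; the function name and the cap-at-the-top-rung formulation are assumptions of this sketch, not code from the paper.

```python
# Hedged sketch: the "ideal average bit rate" is the aggregate path
# bandwidth, capped at the top rung of the bit-rate ladder (8.47 Mbps
# for 1080p). Names here are illustrative.

BITRATE_LADDER_MBPS = {
    "144p": 0.26, "240p": 0.64, "360p": 1.00,
    "480p": 1.60, "720p": 4.14, "1080p": 8.47,
}

def ideal_avg_bitrate(wifi_mbps: float, lte_mbps: float) -> float:
    """Ideal average bit rate: aggregate bandwidth, capped at the top rung."""
    return min(wifi_mbps + lte_mbps, max(BITRATE_LADDER_MBPS.values()))

print(ideal_avg_bitrate(0.3, 8.6))  # 8.47 -> all chunks can be served at 1080p
print(ideal_avg_bitrate(0.3, 0.3))  # 0.6  -> capped by bandwidth, roughly 240p
```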

Does the MPTCP default scheduler provide the ideal aggregate bandwidth? – Video streaming case
Measured average bit rate as a ratio of the ideal:
- 0.3 Mbps WiFi & 8.6 Mbps LTE: ideal = 0.3 + 8.6 = 8.9 > 8.47 (1080p) → 8.47; measured = 2.30; ratio = 2.30 / 8.47 ≈ 0.28
- 0.3 Mbps WiFi & 0.3 Mbps LTE: ideal = 0.3 + 0.3 = 0.6 (just under the 0.64 needed for 240p) → 0.6; measured = 0.55; ratio = 0.55 / 0.6 ≈ 0.92
Closer to 1 (darker in the heatmap) → the average bit rate is close to ideal.

Motivation
The default scheduler causes idle periods on the fast subflow.
[Figure: x packets remain in the connection-level send buffer while the fast subflow's (smaller-RTT) CWND is full, so the default scheduler sends them on the slow subflow (larger RTT, CWND available). The fast subflow could have drained them in about x/CWND_f round trips, taking (x/CWND_f) × RTT_f, but instead it sits idle until the packets for the next GET arrive.]

Motivation
Waiting for the fast subflow can complete the transfer earlier.
[Figure: same scenario; if the x packets are instead held for the fast subflow, sending them takes about RTT_f + (x/CWND_f) × RTT_f, which can be smaller than RTT_s on the slow subflow.]

Motivation
Why do these fast-subflow idle periods matter?
- The available bandwidth of the fast subflow is not utilized during the idle periods.
- The CWND is frequently restarted per RFC 5681, causing further bandwidth loss on the fast subflow until the CWND grows back to a sufficient value.
[Figure: LTE CWND trace, 0.3 Mbps WiFi & 8.6 Mbps LTE.]
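The CWND restart the slide refers to can be sketched as follows, assuming the simplified RFC 5681 rule: after an idle period longer than one retransmission timeout (RTO), the sender restarts from the restart window before transmitting again. The constants and function name are illustrative, not the Linux implementation.

```python
# Hedged sketch of the RFC 5681 (Section 4.1) restart-after-idle rule.
# INITIAL_WINDOW is an assumption (a typical modern initial window,
# RFC 6928); real stacks differ in details.

INITIAL_WINDOW = 10  # segments

def cwnd_after_idle(cwnd: int, idle_time: float, rto: float) -> int:
    """Return cwnd after an idle period, per the simplified restart rule."""
    if idle_time > rto:
        # Restart from the initial window: the fast subflow must grow its
        # window back before it can use its full bandwidth again.
        return min(cwnd, INITIAL_WINDOW)
    return cwnd

print(cwnd_after_idle(cwnd=80, idle_time=2.0, rto=0.5))  # 10
print(cwnd_after_idle(cwnd=80, idle_time=0.2, rto=0.5))  # 80
```

This is why repeated idle periods hurt: each restart forces the fast subflow back through window growth, compounding the bandwidth lost while it was idle.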

Earliest Completion First (ECF) Scheduler
- If the fastest subflow is available, just use it.
- Otherwise, check the subflow selected by the default scheduler (the second-fastest subflow):
  - If RTT_f + (x / CWND_f) × RTT_f < RTT_s, i.e., waiting for the fastest subflow completes the transmission earlier than using the slower subflow right now: do not use the second-fastest subflow.
  - Otherwise: use the second-fastest subflow; the sender has enough packets in its send buffer to efficiently utilize both subflows.
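The decision rule above can be sketched as follows. This is a minimal reading of the slide's inequality; the function name and parameters are illustrative, and the actual kernel implementation (linked later in the talk) includes refinements omitted here.

```python
# Hedged sketch of the ECF decision rule as stated on the slide.

def ecf_choose(x, cwnd_f, rtt_f, rtt_s, fast_available):
    """Pick 'fast', 'slow', or 'wait' for the next packet.

    x: packets waiting in the connection-level send buffer
    cwnd_f, rtt_f: congestion window and RTT of the fastest subflow
    rtt_s: RTT of the second-fastest (slower) subflow
    fast_available: whether the fastest subflow has CWND space
    """
    if fast_available:
        return "fast"  # fastest subflow available: just use it
    # Fastest subflow is busy: would waiting for it finish earlier than
    # sending on the slower subflow right now?
    wait_and_send = rtt_f + (x / cwnd_f) * rtt_f
    if wait_and_send < rtt_s:
        return "wait"  # waiting for the fast subflow completes earlier
    return "slow"      # enough backlog to utilize both subflows

# Small backlog with very asymmetric RTTs: waiting wins.
print(ecf_choose(x=4, cwnd_f=10, rtt_f=0.02, rtt_s=0.4, fast_available=False))    # wait
# Large backlog: the slow path is worth using too.
print(ecf_choose(x=500, cwnd_f=10, rtt_f=0.02, rtt_s=0.4, fast_available=False))  # slow
```

The key design point is that the comparison is made per scheduling decision, so ECF falls back to default-like behavior whenever the backlog is large enough to keep both subflows busy.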

Experimental Setup
Implemented in the MPTCP Linux kernel (version 0.89). A mobile device (Google Nexus 5) downloads several types of traffic using WiFi and LTE: streaming, HTTP file download, and Web page download. Compared against the Default, DAPS [1], and BLEST [2] schedulers.
Scenarios: controlled in-lab experiments under various conditions, and evaluation in the wild.
[1] N. Kuhn et al., "DAPS: Intelligent delay-aware packet scheduling for multipath transport", IEEE ICC'14
[2] S. Ferlin et al., "BLEST: Blocking estimation-based MPTCP scheduler for heterogeneous networks", IFIP Networking'16

In-lab experiments – Streaming w/ fixed bandwidths
Measured average bit rate relative to the ideal (5 runs).
[Heatmaps: Default scheduler vs. ECF. Closer to 1 (darker) → the average bit rate is close to ideal.]

In-lab experiments – Streaming w/ fixed bandwidths
[Heatmaps: ECF, BLEST, DAPS.] DAPS improves in some cases but is worse in many others; BLEST obtains some gains, but ECF is still better.

In-lab experiments – Streaming w/ fixed bandwidths
LTE CWND trace (0.3 Mbps WiFi and 8.6 Mbps LTE): the Default scheduler's LTE CWND frequently drops back to its initial value. BLEST works better than Default, but drops back to the initial value more often than ECF.

In-lab experiments – Web browsing
Replicated CNN home page (as of 9/11/2014): 107 Web objects per page; HTTP persistent connections enabled (6 connections).
Scenarios: 5 Mbps WiFi & 5 Mbps LTE (similar paths); 1 Mbps WiFi & 10 Mbps LTE (heterogeneous paths).

Evaluation in the Wild – Streaming
DASH server in Washington, DC; streaming client at a cafe using public town WiFi in Amherst, MA. 9 runs over two days (results sorted by average WiFi RTT). With a small RTT difference, Default and ECF yield similar performance.

Evaluation in the Wild – Streaming
[Chart annotations: runs with similar average RTTs (≈70 ms); runs where the WiFi path has an extremely large RTT (≈1 s).]

Evaluation in the Wild – Web browsing
Replicated CNN home page (as of 9/11/2014) on the Washington, DC server. Object download completion times from 30 runs: 99.9% of object downloads complete within 17 seconds with ECF, versus 30 seconds with Default.

Summary
- Investigated the reasons for MPTCP performance degradation in the presence of path heterogeneity.
- Proposed and implemented a new subflow scheduler (ECF) that considers completion time; implementation details at https://cs.umass.edu/~ylim/mptcp_ecf
- Evaluated the ECF scheduler with several types of traffic: ECF is never worse than the default scheduler and is substantially better with path heterogeneity.
- More experimental results in the paper: HTTP download workload, out-of-order delay analysis, and more.

Thank you! Questions?

Backup: In-lab experiments – Streaming w/ fixed bandwidths
Does ECF work with more than two subflows? Experiments using two subflows over WiFi and two over LTE, with the subflows over each interface regulated to evenly share the designated bandwidth (for 8.6 Mbps LTE, each LTE subflow provides 4.3 Mbps). Scenarios: 0.3 Mbps WiFi and [0.3, 8.6] Mbps LTE.

Backup: In-lab experiments – Streaming w/ random bandwidths
Average throughput during streaming. Gains depend on how often heterogeneity occurs. Note: the x axis is chunk index, not time.

Backup: In-lab experiments – HTTP downloads
File downloads using wget for a single file of size 65 KB to 1 MB. For small file downloads (128 KB), the download time is very short and the second subflow is rarely used, so there is no difference between Default and ECF.
[Heatmap: ratio of ECF download time to Default for 128 KB downloads. More red: ECF worse; more blue: ECF better; white: no difference.]

Backup: In-lab experiments – HTTP downloads
For large file downloads (256 KB to 1 MB), ECF works better with path heterogeneity. Note: only one idle period is expected, at the end of the transfer; for a significantly large file, the transfer time is long, so the idle period has a small effect and the gain from ECF is small.
[Heatmaps: 256 KB, 512 KB, 1 MB.]