Low Latency Adaptive Streaming over TCP
Authors: Ashvin Goel, Charles Krasic, Jonathan Walpole
Presented by: Sudeep Rege, Sachin Edlabadkar



Introduction
TCP is the most popular transport protocol for media streaming because of its handling of congestion, flow control, and packet loss. However, applications must adapt their streams to the bandwidth estimated by TCP. Adaptive streaming protocols use techniques such as dynamic rate shaping and packet dropping to adapt their media quality, and the responsiveness of these techniques depends on how quickly bandwidth feedback reaches the application.

Introduction
TCP introduces considerable latency at the application level, so the quality of the adapted stream suffers. This latency arises on the sender side from TCP's throughput-oriented buffering. The paper proposes adaptive tuning of the send buffer size to drastically reduce end-to-end latency, improving the responsiveness of adaptive protocols and hence the quality of the media.

Related Work
Alternatives to TCP, such as DCCP, have been proposed. DCCP provides congestion control but does not handle packet retransmission, leaving it to higher layers; loss-recovery schemes such as FEC must then be implemented, at a high bandwidth overhead. VoIP applications such as Microsoft NetMeeting use UDP to provide best-effort service. Active queue management and Explicit Congestion Notification (ECN) can reduce packet loss in TCP, and thus its latency.

Features of TCP
TCP is a sliding-window protocol whose window slides over a fixed-size sender buffer. The window represents the unacknowledged packets in flight on the network. When the sender receives an acknowledgement, TCP sends the next packet from the buffer, adjacent to the window; dropped packets can be retransmitted from the sender buffer. TCP's throughput is approximately CWND / RTT, where CWND is the congestion window size and RTT is the round-trip time.
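As a quick sanity check, the throughput relation can be sketched in a few lines (the numbers below are hypothetical, not from the paper):

```python
def tcp_throughput_bps(cwnd_packets, packet_bytes, rtt_seconds):
    """Approximate TCP throughput, CWND / RTT, in bits per second."""
    return cwnd_packets * packet_bytes * 8 / rtt_seconds

# Hypothetical link: 20 full-size (1500-byte) packets in flight, 100 ms RTT.
print(tcp_throughput_bps(20, 1500, 0.1))  # 2400000.0, i.e. 2.4 Mbps
```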

TCP Bandwidth Estimation
TCP estimates the bandwidth that a best-effort network can carry without packet loss. It does so by growing the window, CWND = CWND + 1, every RTT. When congestion (packet loss) occurs, TCP takes this as the network's maximum capacity and halves the window: CWND = CWND / 2. Latencies in TCP arise from packet retransmissions, congestion control, and sender-side buffering.
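This additive-increase/multiplicative-decrease probing can be illustrated with a toy per-RTT simulation (a sketch only; real TCP also has slow start and counts in bytes):

```python
def aimd_step(cwnd, congestion):
    """One RTT of AIMD probing: CWND + 1 without congestion,
    CWND / 2 when congestion is signalled."""
    return max(1, cwnd // 2) if congestion else cwnd + 1

cwnd, trace = 8, []
for rtt in range(6):
    cwnd = aimd_step(cwnd, congestion=(rtt == 3))  # a drop on the 4th RTT
    trace.append(cwnd)
print(trace)  # [9, 10, 11, 5, 6, 7]
```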

Latency in TCP
A packet retransmission adds a delay of 1 RTT, the time for congestion feedback to reach the sender. Because congestion control halves CWND, the first lost packet incurs a delay of 1 ½ RTT before retransmission, and subsequent packets are delayed by ½ RTT. These latencies can be reduced with active queue management: when congestion is imminent, instead of dropping packets the router sets the ECN bit, which propagates back to the sender in acknowledgements, and the sender then reduces its window.

Latency in Sender Buffer
A packet in the send buffer cannot be sent until the window slides onto it, i.e. until it becomes the first packet after the CWND packets in flight and an ACK opens the window. Sender-buffer latency is caused by such blocked packets. If the application writes packets to the buffer faster than the network's throughput, many blocked packets accumulate. The latency they introduce is typically on the order of RTTs, much higher than the latency due to packet loss and congestion control.
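The scale of this effect can be estimated with a back-of-the-envelope helper: blocked packets drain at roughly CWND packets per RTT (a simplification that ignores CWND growth and losses):

```python
def send_buffer_delay_s(buffer_packets, cwnd, rtt_s):
    """Approximate wait of the last packet written into a full send
    buffer: packets behind the window drain at ~CWND per RTT."""
    blocked = max(0, buffer_packets - cwnd)
    return blocked / cwnd * rtt_s

# Hypothetical: a 64-packet send buffer, CWND of 16 packets, 100 ms RTT.
print(round(send_buffer_delay_s(64, 16, 0.1), 3))  # 0.3 -> three extra RTTs
```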

Latency in Sender Buffer (figure)

Adaptive Buffer Tuning
Solution: keep exactly CWND packets in the sender buffer. As CWND changes over time, the sender buffer size changes with it; the buffer must not shrink below CWND, or TCP's throughput would suffer. Adaptive buffer tuning thus ensures there are no blocked packets. It moves the buffering latency up to the application level, which has finer control over the packet rate through techniques such as prioritized packet sending.
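The admission rule behind this idea can be sketched at the simulation level (hypothetical helper names; the paper's actual change lives inside the kernel's TCP stack):

```python
def admit_writes(app_queue, buffered, cwnd):
    """MIN_BUF idea: admit application packets only while the send
    buffer holds fewer than CWND packets; the rest stay with the
    application, which may reorder or drop them (prioritized sending)."""
    admitted = []
    while app_queue and buffered + len(admitted) < cwnd:
        admitted.append(app_queue.pop(0))
    return admitted

# Hypothetical: 10 packets queued in the app, 4 already buffered, CWND = 7.
print(len(admit_writes(list(range(10)), buffered=4, cwnd=7)))  # 3
```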

Evaluation
The authors evaluated MIN_BUF TCP under varying and heavy network loads, on a Linux 2.4 testbed that simulated WAN conditions by introducing delay at an intermediate router. The simulations used a "dumbbell" topology, with the router in the middle simulating latency and carrying both forward and reverse congestion flows.

Evaluation
Heavy load was simulated by running a varying number of long-lived TCP streams, short-burst TCP streams, and a constant-bit-rate UDP stream. Latency was measured as the application read and write times for each packet on the receiver and sender sides, respectively. The authors chose a round-trip delay of 100 ms on a 30 Mbps connection to model a link between the East and West coasts of the United States. Both TCP and MIN_BUF TCP were run with the competing streams started and stopped at random times, and both forward and reverse flows were simulated.

Results
MIN_BUF TCP performs better than TCP on both forward and reverse congestion flows. Reverse MIN_BUF TCP performs somewhat worse than forward MIN_BUF TCP, probably due to loss of ACKs.

Results
Both protocols were also tested with ECN enabled, using DRD active queue management, which marks a percentage of packets with ECN once the router's queue length exceeds a threshold. Both work better with ECN than without, and MIN_BUF TCP still shows lower latency than TCP. Spikes in the TCP graph are due to blocked packets; spikes in the MIN_BUF TCP graph are due to decreases in CWND.

Effect On Throughput
The MIN_BUF approach impacts network throughput because no new packets wait in the send buffer: the TCP stack must wait for the application to write more bytes before new data can be sent. Standard TCP has no such issue, since its send buffer is large enough to hold new packets. Slightly increasing the send buffer size fixes the problem; to size it, we need to study the events that trigger the sending of new packets.

Effect On Throughput
ACK arrival – an ACK is received for the first packet in the TCP window, so one new packet can be sent. Send buffer size: CWND + 1.
Delayed ACK – to save bandwidth, one ACK is sent for every two packets, so two new packets can be sent. Send buffer size: CWND + 2.
CWND increase – during the additive-increase phase of TCP's steady state, CWND is incremented on each ACK, so two new packets can be sent. Send buffer size: CWND + 2; with delayed ACKs, CWND + 3.
ACK compression – routers can cause ACKs to arrive at the sender in bursts; in the worst case, CWND packets are ACKed together. Send buffer size: 2 * CWND; with CWND increase, 2 * CWND + 1.

To study the impact of send buffer size on latency and throughput, two parameters A (> 0) and B (>= 0) are introduced: Send Buffer Size = A * CWND + B. A MIN_BUF stream with parameters A and B is denoted MIN_BUF(A, B).

Protocol Latency Distribution for Forward Path Congestion Topology (figure)

Protocol Latency Distribution for Reverse Path Congestion Topology (figure)

Normalized Throughput

System Overload
MIN_BUF TCP reduces latency and lets the application write data to the kernel at fine granularity. This can cause higher system overhead, because more system calls are invoked to write the same amount of data as with standard TCP; the write and poll system calls are the costliest.

Implementation
The TCP stack is modified to limit the send buffer size to A * CWND + MIN(B, CWND). The application writes data to the buffer whenever at least one new packet can be admitted. SACK correction – for selective acknowledgements, a sacked_out term is introduced to count the selectively acknowledged packets. Because the application has finer control over the data, it can align the data into MSS-sized packets, minimizing latency from fragmenting or coalescing.
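The tuned limit can be written down directly (a sketch of the formula above; the parameter choices shown are illustrative, keyed to the send-triggering events discussed earlier):

```python
def min_buf_limit(a, b, cwnd):
    """Send buffer limit of the modified stack: A * CWND + MIN(B, CWND),
    with A > 0 and B >= 0."""
    assert a > 0 and b >= 0
    return a * cwnd + min(b, cwnd)

print(min_buf_limit(1, 0, 20))  # 20: exactly the packets in flight
print(min_buf_limit(1, 3, 20))  # 23: slack for delayed ACKs / CWND growth
print(min_buf_limit(2, 0, 20))  # 40: tolerates worst-case ACK compression
```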

Application Level Evaluation
Qstream – an open-source adaptive streaming application. It uses Scalable MPEG (SPEG), similar to MPEG-1, which organizes data into conceptual layers: a base layer of lowest quality, with each subsequent layer improving on it. It uses Priority-Progress Streaming (PPS). Adaptation period – the period in which the sender sends data in prioritized order; the base layer has the highest priority. Adaptation window – the data within one adaptation period; unsent data from this window is dropped. Dropped windows – when sender bandwidth is low, entire windows can be dropped.
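The priority-progress idea of sending in priority order and dropping whatever a window's budget cannot cover can be sketched as follows (hypothetical layer names and packet counts, not Qstream's actual code):

```python
def send_adaptation_window(layers, budget):
    """Send layer data in priority order (base layer first) within one
    adaptation window; data that does not fit the bandwidth budget is
    dropped rather than sent late."""
    sent, dropped, remaining = [], [], budget
    for name, packets in layers:            # ordered highest priority first
        take = min(packets, remaining)
        remaining -= take
        sent.append((name, take))
        if take < packets:
            dropped.append((name, packets - take))
    return sent, dropped

# Hypothetical window: a 4-packet base layer and two enhancement layers,
# with bandwidth for only 6 packets.
sent, dropped = send_adaptation_window(
    [("base", 4), ("enh1", 4), ("enh2", 4)], budget=6)
print(sent)     # [('base', 4), ('enh1', 2), ('enh2', 0)]
print(dropped)  # [('enh1', 2), ('enh2', 4)]
```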

Latency Distribution vs. Latency Tolerance
Adaptation window = 4 frames (133.3 ms). The figures show that, with increasing load, the percentage of transmitted packets that arrive in time is only marginally affected for MIN_BUF.

Adaptation window = 2 frames (66.6 ms). The latency tolerance can be made tighter when the adaptation window is smaller; the trade-off is more variable video quality.

Conclusion
The paper shows that low-latency streaming over TCP is feasible by tuning TCP's send buffer so that it keeps just the packets currently in flight. A few extra packets in the send buffer recover much of the lost network throughput. The approach can be used by any application that prioritizes its data; as an example, the Qstream application shows that TCP send-buffer tuning yields significant end-to-end latency benefits.

Questions?