1 Experiment And Analysis of Dynamic TCP Acknowledgement Daeseob Lim, Sam Lai, Wing-Ho Gordon Wong

2 What is the main problem of dynamic TCP acknowledgement? To ACK, or not to ACK: that is the question. When a packet arrives at the receiver, there are two choices:  ACK immediately.  Wait and ACK later, so that there may be a chance to acknowledge multiple packets with just one ACK.

3 What's the difference? ACK immediately:  Low latency: the time that elapses between a packet's arrival and the sending of its ACK is short.  The number of ACK packets increases. Wait and ACK later:  High latency  Fewer ACK packets generated

4 What is considered the best solution? A low number of acknowledgements A low aggregate acknowledgement latency over all packets

5 Aggregate latency example: packet A arrives; 10ms later packet B arrives; 10ms after that a single ACK is sent for both A and B. The ACK latency is 20ms for A and 10ms for B, so the total ACK latency is 30ms in this case.
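A small self-contained sketch (not from the slides; the packet and ACK times just mirror the example above) showing how the aggregate ACK latency can be computed:

#include <stdio.h>

/* Aggregate ACK latency: for each packet, the time between its arrival
 * and the ACK that covers it. Each packet is matched to the first ACK
 * sent at or after its arrival. Times are in milliseconds. */
static double aggregate_latency(const double *arrivals, int npkts,
                                const double *acks, int nacks)
{
    double total = 0.0;
    for (int i = 0; i < npkts; i++) {
        for (int j = 0; j < nacks; j++) {
            if (acks[j] >= arrivals[i]) {
                total += acks[j] - arrivals[i];
                break;
            }
        }
    }
    return total;
}

int main(void)
{
    /* The slide's example: A arrives at t=0, B at t=10, one ACK at t=20. */
    double arrivals[] = { 0.0, 10.0 };
    double acks[]     = { 20.0 };
    printf("total ACK latency = %.0f ms\n",
           aggregate_latency(arrivals, 2, acks, 1));  /* prints 30 ms */
    return 0;
}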

6 Naïve Solutions Send an ACK immediately for each packet?  Low or even no ACK latency, but this generates too many ACK packets Send one ACK at the end of the whole transmission?  Only one ACK is needed  However, latency is high  The receiver does not know which packet is the last one  What if the link is unreliable?

7 An Online Randomized Algorithm "Dynamic TCP acknowledgement and other stories about e/(e-1)"  By Anna R. Karlin, Claire Kenyon, Dana Randall Able to achieve a competitive ratio of e/(e-1) ≈ 1.58 compared to the optimal solution Competitive ratio = performance of the algorithm / performance of the optimal solution

8 Details about this algorithm Let P(t, t′) be the number of packets that arrive between times t and t′ A random value z between 0 and 1 (inclusive) is drawn from a distribution function (the randomization factor) Suppose that the i-th acknowledgement happens at time t_i and the next one happens at time t_{i+1} (cont'd)

9 Details about the algorithm By the algorithm, we should locate T_{i+1} such that t_i <= T_{i+1} <= t_{i+1} and P(t_i, T_{i+1}) * (t_{i+1} - T_{i+1}) = z If we do that, z units of latency cost will be saved by sending a single additional ACK at T_{i+1}.
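As a rough, self-contained illustration (my own sketch, not the authors' code, and a simplification of the rule above): draw z in [0,1] from the density e^z/(e-1), let the latency cost accumulate while packets wait unacknowledged, and send an ACK as soon as that accumulated cost reaches z; for simplicity the sketch only checks the threshold at arrival times rather than at the exact crossing instant.

#include <stdio.h>
#include <stdlib.h>
#include <math.h>

/* Draw z in [0,1] with density e^z/(e-1) via inverse-transform sampling:
 * F(z) = (e^z - 1)/(e - 1), so z = ln(1 + u*(e - 1)) for uniform u. */
static double draw_z(void)
{
    double u = (double)rand() / RAND_MAX;
    return log(1.0 + u * (exp(1.0) - 1.0));
}

/* Simulate the randomized online rule on a list of packet arrival times
 * (seconds): between arrivals, latency cost grows at a rate equal to the
 * number of unacked packets; an ACK is sent once the accumulated cost
 * reaches the random threshold z (checked at the next arrival here). */
static void simulate(const double *arrival, int n)
{
    double z = draw_z(), cost = 0.0, last = 0.0;
    int unacked = 0, acks = 0;

    for (int i = 0; i < n; i++) {
        double grow = unacked * (arrival[i] - last);
        if (cost + grow >= z) {           /* threshold crossed: ACK early */
            acks++;
            unacked = 0;
            cost = 0.0;
            z = draw_z();
        } else {
            cost += grow;
        }
        unacked++;
        last = arrival[i];
    }
    if (unacked > 0) acks++;              /* final ACK for the remainder */
    printf("ACKs sent: %d\n", acks);
}

int main(void)
{
    double arrival[] = { 0.00, 0.01, 0.03, 0.20, 0.21, 0.50 };
    simulate(arrival, 6);
    return 0;
}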

10 Before applying the algorithm

11 After the algorithm

12 Why does this work? The rectangle is guaranteed to have an area of at least 1 By sending 1 additional ACK, the acknowledgement cost increases by 1, but the latency cost decreases by at least 1. The new sequence is at least as good as the original one. A more detailed proof is in the paper:  "Dynamic TCP acknowledgement and other stories about e/(e-1)"

13 Contribution of this research Implement a randomized online delayed-ACK algorithm in the Linux kernel Compare the real performance of the randomized algorithm with the current TCP implementation Observe its superiority in terms of cost Analyze its inability to improve throughput

14 ACK for a data packet Immediate ACK: when a data packet arrives, the receiver sends an ACK packet right away. Delayed ACK: when a data packet arrives, the receiver schedules a timer and sends the ACK packet only when the timer expires (≈ 40ms later).

15 Interval of the Delayed-ACK Timer Determined by several factors  Minimum/maximum intervals set by kernel constants  Estimated RTT Restrictions from RFC 2581  The maximum is 500ms.  Acknowledge at least every second segment.  Acknowledge out-of-order data immediately. In most cases ≈ 40ms (at most ~200ms)
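A rough user-space illustration of this clamping (not kernel code; the rtt/2 heuristic is purely illustrative, though the 40ms/200ms bounds mirror the TCP_DELACK_MIN/TCP_DELACK_MAX constants found in the Linux sources):

#include <stdio.h>

#define DELACK_MIN_MS  40
#define DELACK_MAX_MS  200
#define RFC2581_MAX_MS 500

/* Bound an RTT-derived estimate by the kernel-style minimum/maximum and
 * by the RFC 2581 cap of 500 ms. */
static unsigned delack_timeout_ms(unsigned estimated_rtt_ms)
{
    unsigned ato = estimated_rtt_ms / 2;   /* illustrative heuristic only */
    if (ato < DELACK_MIN_MS)  ato = DELACK_MIN_MS;
    if (ato > DELACK_MAX_MS)  ato = DELACK_MAX_MS;
    if (ato > RFC2581_MAX_MS) ato = RFC2581_MAX_MS;
    return ato;
}

int main(void)
{
    printf("RTT 2ms   -> delayed ACK %u ms\n", delack_timeout_ms(2));    /* 40  */
    printf("RTT 100ms -> delayed ACK %u ms\n", delack_timeout_ms(100));  /* 50  */
    printf("RTT 600ms -> delayed ACK %u ms\n", delack_timeout_ms(600));  /* 200 */
    return 0;
}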

16 Implementation of TCP on Linux When a data packet is received from the IP layer, tcp_rcv_established() calls __tcp_ack_snd_check(), which decides whether an immediate ACK needs to be sent. If yes, it calls tcp_send_ack(); if not, it falls back to a delayed ACK via tcp_send_delayed_ack(). That function is the point where the kernel code is hacked.

17 Hacking the protocol stack In tcp_send_delayed_ack(): choose a random value and scale it to the threshold value; compute Cost = unacked data size * elapsed time since the last ACK; if Cost > the scaled random value, send the additional ACK immediately, otherwise keep the normal delayed ACK.
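A minimal user-space sketch of that decision (my own paraphrase of the flow above, not the authors' kernel patch; the structure and function names are hypothetical stand-ins for the kernel's own):

#include <stdio.h>
#include <stdlib.h>
#include <stddef.h>

struct conn {
    size_t   unacked_bytes;     /* data received but not yet ACKed       */
    unsigned ms_since_last_ack; /* elapsed time since the last ACK sent  */
    unsigned threshold;         /* scale applied to the random value     */
};

static double next_random(void)             /* stands in for the table lookup */
{
    return (double)rand() / RAND_MAX;
}

static void send_ack_now(struct conn *c)
{
    printf("additional ACK sent (unacked=%zu bytes)\n", c->unacked_bytes);
    c->unacked_bytes = 0;
    c->ms_since_last_ack = 0;
}

static void schedule_delack_timer(struct conn *c)
{
    printf("delayed-ACK timer left running\n");
    (void)c;
}

/* Cost = unacked data size * elapsed time since last ACK.
 * If it exceeds the random value scaled to the threshold, ACK now. */
static void delayed_ack_hook(struct conn *c)
{
    unsigned long long cost =
        (unsigned long long)c->unacked_bytes * c->ms_since_last_ack;
    unsigned long long rand_thresh =
        (unsigned long long)(next_random() * c->threshold);

    if (cost > rand_thresh)
        send_ack_now(c);
    else
        schedule_delack_timer(c);
}

int main(void)
{
    struct conn c = { .unacked_bytes = 2920,        /* two full segments */
                      .ms_since_last_ack = 15,
                      .threshold = 60000 };
    delayed_ack_hook(&c);
    return 0;
}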

18 Generating random numbers Generate random numbers in advance with an off-line program, store them in the kernel code, and select a number sequentially at run time. The numbers follow the distribution y = e^x/(e-1) on [0,1]:  0.599, 0.761, 0.232, 0.378, 0.619, 0.997, ….  Scaled to integers for the kernel array (the kernel avoids floating point):  unsigned rand_numbers[1000] = { 599, 761, 232, 378, 619, 997, …. };  ….  number = rand_numbers[index++];
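One way such a table can be produced offline (a sketch assuming inverse-transform sampling, which the slides do not spell out): the CDF of the density e^x/(e-1) on [0,1] is F(x) = (e^x - 1)/(e - 1), so a uniform u maps to x = ln(1 + u(e - 1)); the scale factor of 1000 is an assumption for illustration.

#include <stdio.h>
#include <stdlib.h>
#include <math.h>

/* Offline generator for the precomputed table: sample z in [0,1] with
 * density e^z/(e-1) by inverse-transform sampling and print the values
 * scaled to integers so they can live in an unsigned array inside the
 * kernel source. */
int main(void)
{
    const int    n     = 1000;
    const double scale = 1000.0;

    printf("unsigned rand_numbers[%d] = {\n", n);
    for (int i = 0; i < n; i++) {
        double u = (double)rand() / RAND_MAX;          /* uniform in [0,1]  */
        double z = log(1.0 + u * (exp(1.0) - 1.0));    /* density e^z/(e-1) */
        printf("    %u,\n", (unsigned)(z * scale));
    }
    printf("};\n");
    return 0;
}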

19 Test Environment A client and a server connected through a router. The setup uses the modified kernel, a network sniffer (Ethereal), and a network emulator (ns2) to emulate the network between the hosts.

20 Competitive Ratio Experiment The server sends out 100 packets to the client at random times, spaced at most 70ms apart. The competitive ratio is calculated for each cost ratio from 0.05 to 0.95, stepping by 0.05, and for additional cost ratios at a different step size. Run on simulated networks with a bandwidth of 100Mbps and an RTT of 2ms or 100ms, for both versions of TCP.
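For reference, the offline optimal cost that the competitive ratio is measured against can be computed by dynamic programming over which packet each ACK covers up to (it is never worse to ACK exactly at a packet arrival time). The sketch below is my own reconstruction; how the paper's "cost ratio" maps onto the per-ACK weight is an assumption.

#include <stdio.h>

#define MAXPKT 128

/* Offline optimal cost: t[] holds up to MAXPKT arrival times in ms,
 * ack_cost is the cost charged per ACK relative to one ms of latency. */
static double offline_opt(const double *t, int n, double ack_cost)
{
    double opt[MAXPKT + 1];
    opt[n] = 0.0;                      /* no packets left: no cost */

    for (int i = n - 1; i >= 0; i--) {
        double best = -1.0;
        for (int j = i; j < n; j++) {
            /* one ACK at time t[j] covers packets i..j */
            double latency = 0.0;
            for (int k = i; k <= j; k++)
                latency += t[j] - t[k];
            double cost = ack_cost + latency + opt[j + 1];
            if (best < 0.0 || cost < best)
                best = cost;
        }
        opt[i] = best;
    }
    return opt[0];
}

int main(void)
{
    /* the slide-5 trace: A at 0 ms, B at 10 ms */
    double t[] = { 0.0, 10.0 };
    double opt = offline_opt(t, 2, 25.0);
    printf("offline optimal cost = %.1f\n", opt);
    /* competitive ratio = (cost measured for the online run) / opt */
    return 0;
}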

21 Overall Competitive Ratio on 2ms Network

22 Overall Competitive Ratio on 100ms Network

23 Blowup Competitive Ratio on 2ms Network

24 Blowup Competitive Ratio on 100ms Network

25 Analysis For the new TCP, the overall competitive ratio stays within 1.58, except in borderline cases.  Small cost ratio (latency cost is expensive): Overhead from the network sniffer Overhead from the new TCP  Large cost ratio (acknowledgement cost is expensive): Original TCP acknowledgements Possibility of an additional acknowledgement

26 Analysis For the original TCP, the competitive ratio starts out extremely high, then converges rapidly toward that of the new TCP. Eventually it starts to increase again, but at a slower rate.  The original TCP favors delayed acknowledgement even when the latency cost is high.  It always acknowledges within 200ms or every two full-sized segments, even when the acknowledgement cost is high.

27 Streaming Data Experiment The client sends a request to the server asking for data of a certain size to be sent. The server replies with the data. The client measures the total duration to determine the throughput. Run on simulated networks with a bandwidth of 100Mbps and an RTT of 2ms or 100ms, for both versions of TCP.
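A minimal sketch of a client-side throughput measurement of this kind (the server address, port, and request format are illustrative assumptions, not the authors' test harness): request a fixed amount of data, time the transfer, and divide.

#include <stdio.h>
#include <string.h>
#include <stdlib.h>
#include <unistd.h>
#include <time.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <arpa/inet.h>

int main(void)
{
    const char  *server_ip   = "192.168.1.10";   /* hypothetical */
    const int    server_port = 5000;             /* hypothetical */
    const size_t want_bytes  = 10 * 1024 * 1024;

    int fd = socket(AF_INET, SOCK_STREAM, 0);
    struct sockaddr_in addr = { .sin_family = AF_INET,
                                .sin_port   = htons(server_port) };
    inet_pton(AF_INET, server_ip, &addr.sin_addr);
    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
        perror("connect");
        return 1;
    }

    /* ask the server for want_bytes of data (request format assumed) */
    char req[64];
    snprintf(req, sizeof(req), "SEND %zu\n", want_bytes);
    write(fd, req, strlen(req));

    struct timespec start, end;
    clock_gettime(CLOCK_MONOTONIC, &start);

    char    buf[65536];
    size_t  got = 0;
    ssize_t n;
    while (got < want_bytes && (n = read(fd, buf, sizeof(buf))) > 0)
        got += (size_t)n;

    clock_gettime(CLOCK_MONOTONIC, &end);
    double secs = (end.tv_sec - start.tv_sec) +
                  (end.tv_nsec - start.tv_nsec) / 1e9;
    printf("throughput = %.3f Mbps\n", got * 8.0 / secs / 1e6);

    close(fd);
    return 0;
}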

28 Streaming Data Result
RTT     Original TCP throughput   New TCP throughput   Speedup
2ms     6.926 Mbps                6.873 Mbps           ≈0.99
100ms   0.577 Mbps                0.577 Mbps           1

29 Analysis The new TCP cannot outperform the original TCP in terms of throughput.  Intuitively, if the incoming traffic is regular and data keeps pouring in, then to optimize throughput you would want to delay ACKs as long as possible.  The new TCP cannot delay ACKs longer than the original TCP can; it only randomly scales down the threshold for sending an additional acknowledgement. Our implementation induces little overhead.

30 Conclusion Showed that the randomized algorithm can achieve the competitive ratio of 1.58 in most cases. Our implementation achieves a better competitive ratio than the original TCP in most cases. The implementation has low overhead. It cannot improve network performance in terms of throughput.