Computer Networks (Graduate level)
University of Tehran, Dept. of EE and Computer Engineering
By: Dr. Nasser Yazdani
Lecture 9: TCP Behavior and New Developments

TCP and ...
- TCP latency model
- TCP performance
- High-speed TCP
Assigned reading:
- [JK88] Congestion Avoidance and Control
- [CJ89] Analysis of the Increase and Decrease Algorithms for Congestion Avoidance in Computer Networks

Objectives
- Appreciate mathematical modeling of TCP for performance optimization of TCP/IP networks
- Understand the fundamental relationship between packet loss probability and TCP performance
- Apply a range of mathematical models to predict TCP performance

Motivation for TCP Modeling
- TCP operates at a very large scale
- Models are required to gain a deeper understanding of TCP dynamics
- Uncertainties can be modeled as stochastic processes
- Models drive the design of TCP-friendly algorithms for multimedia applications
- Models help optimize TCP performance

TCP Modeling Essentials
- Mainly the Reno flavour is modelled
- Two main features are modelled:
  - Window dynamics
  - Packet loss process
- Linear increase and multiplicative decrease are modelled
- The standard assumption: X(t) = W(t)/RTT

Packet Loss Process
- Packet loss triggers a window decrease
- Packet loss is uncertain
- This uncertainty is typically modeled as a stochastic process
- E.g., probability p of losing a packet

Gallery of TCP Models
- Periodic model
- Detailed packet loss model
- Stochastic model
- Control system model
- Network system model

TCP latency modeling
Q: How long does it take to receive an object from a Web server after sending a request?
- TCP connection establishment
- Data transfer delay
Notation, assumptions:
- One link between client and server of rate R
- Fixed congestion window of W segments
- S: MSS (bits), O: object size (bits)
- No retransmissions (no loss, no corruption)
Two cases to consider:
- WS/R > RTT + S/R: the ACK for the first segment in the window returns before a window's worth of data has been sent
- WS/R < RTT + S/R: the sender must wait for the ACK after sending a window's worth of data

TCP latency modeling (cont.)
- Case 1: latency = 2 RTT + O/R
- Case 2: latency = 2 RTT + O/R + (K - 1) [S/R + RTT - WS/R], where K := O/(WS) is the number of windows that cover the object
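The two cases above can be turned into a few lines of code. This is a minimal sketch under the slide's assumptions (single link of rate R, fixed window of W segments, no loss); the function name, the use of a ceiling for K, and the example numbers are mine, not from the slides.

```python
import math

def static_window_latency(O, S, W, R, RTT):
    """Latency for an object of O bits, MSS of S bits, fixed window of W segments,
    link rate R (bits/s), round-trip time RTT (s); no slow start, no loss."""
    K = math.ceil(O / (W * S))              # number of windows covering the object
    if W * S / R > RTT + S / R:
        # Case 1: ACK of the first segment returns before the window is drained
        return 2 * RTT + O / R
    # Case 2: sender stalls after each window, waiting for the ACK
    return 2 * RTT + O / R + (K - 1) * (S / R + RTT - W * S / R)

# Example (illustrative numbers): 500 kbit object, 10 kbit segments,
# W = 4 segments, R = 1 Mb/s, RTT = 100 ms  ->  1.54 s
print(static_window_latency(500e3, 10e3, 4, 1e6, 0.1))
```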

TCP Latency Modeling: Slow Start
Now suppose the window grows according to slow start. We will show that the latency of one object of size O is:
  Latency = 2 RTT + O/R + P [RTT + S/R] - (2^P - 1) S/R
where:
- P = min{K - 1, Q} is the number of times TCP stalls at the server,
- Q is the number of times the server would stall if the object were of infinite size, and
- K is the number of windows that cover the object.

TCP Latency Modeling: Slow Start (cont.)
Example:
- O/S = 15 segments
- K = 4 windows
- Q = 2
- P = min{K - 1, Q} = 2
Server stalls P = 2 times.
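A sketch that reproduces this example, assuming the standard Kurose-Ross slow-start latency model these slides follow (initial window of one segment, window doubling each RTT, latency = 2 RTT + O/R + P[RTT + S/R] - (2^P - 1) S/R); the function name and the S/R and RTT values chosen to obtain Q = 2 are illustrative, not given on the slide.

```python
import math

def slow_start_latency(O, S, R, RTT):
    """Latency under slow start for an object of O bits, segment size S bits,
    link rate R (bits/s), round-trip time RTT (s); no loss."""
    n_seg = math.ceil(O / S)
    # K: number of windows that cover the object (1 + 2 + 4 + ... >= n_seg)
    K = math.ceil(math.log2(n_seg + 1))
    # Q: number of stalls if the object were infinitely large
    Q = 0
    while 2 ** Q < 1 + RTT * R / S:
        Q += 1
    P = min(K - 1, Q)                       # actual number of stalls
    stall_time = P * (S / R + RTT) - (2 ** P - 1) * S / R
    return 2 * RTT + O / R + stall_time, K, Q, P

# Slide example: O/S = 15 segments.  With S/R = 10 ms and RTT = 20 ms
# (illustrative values), K = 4, Q = 2, so the server stalls P = 2 times.
latency, K, Q, P = slow_start_latency(O=15 * 10e3, S=10e3, R=1e6, RTT=0.02)
print(K, Q, P, latency)
```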

TCP Latency Modeling: Slow Start (cont.) (figure slide: slow-start timing diagram)

TCP Modeling
Given the congestion behavior of TCP, can we predict what type of performance we should get?
What are the important factors?
- Loss rate: affects how often the window is reduced
- RTT: affects the increase rate and relates BW to window
- RTO: affects performance during loss recovery
- MSS: affects the increase rate

Simple TCP Model
Some additional assumptions:
- Fixed RTT
- No delayed ACKs
In steady state, TCP loses a packet each time the window reaches W packets:
- The window drops to W/2 packets
- Each RTT the window increases by 1 packet, so W/2 * RTT elapses before the next loss
- BW = MSS * avg window / RTT = MSS * (W + W/2) / (2 * RTT) ≈ 0.75 * MSS * W / RTT
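A quick numeric sketch of the throughput expression above; the function name and the example MSS, W, and RTT are illustrative.

```python
def avg_tcp_throughput(mss_bytes, W, rtt):
    """Steady state: window oscillates between W/2 and W, so the average
    window is about 0.75 * W and BW ~= 0.75 * MSS * W / RTT."""
    return 0.75 * mss_bytes * 8 * W / rtt   # bits per second

# Example: MSS = 1500 bytes, W = 100 packets, RTT = 100 ms  ->  ~9 Mb/s
print(avg_tcp_throughput(1500, 100, 0.1) / 1e6)
```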

TCP Performance
Can TCP saturate a link?
Congestion control:
- Increase utilization until... the link becomes congested
- React by decreasing the window by 50%
- Window is proportional to rate * RTT
Doesn't this mean that the network oscillates between 50% and 100% utilization, with average utilization = 75%?
No... this is *not* right!

TCP Congestion Control
Only W packets may be outstanding.
Rule for adjusting W:
- If an ACK is received: W ← W + 1/W
- If a packet is lost: W ← W/2
(Figure: source-destination timeline and the resulting sawtooth of window size over time t.)
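The two update rules on this slide, written out as a minimal sketch (congestion avoidance only, window measured in packets); the helper names and the starting window are mine.

```python
def on_ack(cwnd):
    """Additive increase: grow by 1/cwnd per ACK, i.e. ~1 packet per RTT."""
    return cwnd + 1.0 / cwnd

def on_loss(cwnd):
    """Multiplicative decrease: halve the window on a loss."""
    return cwnd / 2.0

# One RTT's worth of ACKs followed by a loss (illustrative):
cwnd = 10.0
for _ in range(int(cwnd)):
    cwnd = on_ack(cwnd)
cwnd = on_loss(cwnd)
print(cwnd)   # roughly (10 + 1) / 2
```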

Single TCP Flow: Router without buffers

Summary: Unbuffered Link
(Figure: window W vs. time t, with the minimum window for full utilization marked.)
- The router can't fully utilize the link
- If the window is too small, the link is not full
- If the link is full, the next window increase causes a drop
- With no buffer, TCP still achieves about 75% utilization
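A quick check of the 75% figure, assuming the window ramps linearly from W/2 to W and the link is exactly full only at the peak (no buffer); the numbers are illustrative.

```python
# Average of a linear ramp from W/2 to W is 0.75 * W, so with no buffer
# the link runs at roughly 75% utilization on average.
W = 100                                    # peak window = bandwidth-delay product, in packets
cycle = range(W // 2, W + 1)               # one sawtooth cycle, one step per RTT
utilization = sum(cycle) / (len(cycle) * W)
print(utilization)                         # 0.75
```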

TCP Performance
- In the real world, router queues play an important role
- The window is proportional to rate * RTT
- But RTT changes along with the window
- Window needed to fill the link = propagation RTT * bottleneck bandwidth
- If the window is larger, packets sit in the queue on the bottleneck link

TCP Performance
- With a large enough router queue we can get 100% utilization
- But router queues can cause large delays
How big does the queue need to be?
- The window varies from W down to W/2
- Must make sure that the link is always full: W/2 > RTT * BW
- W = RTT * BW + Qsize
- Therefore Qsize > RTT * BW ensures 100% utilization
- Delay? Varies between RTT and 2 * RTT
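A small sketch of the single-flow sizing argument above (buffer at least one bandwidth-delay product); the function name, the default MSS, and the example link are illustrative.

```python
def buffer_for_full_utilization(bw_bps, rtt_s, mss_bytes=1500):
    """Single-flow rule of thumb: Qsize >= RTT * BW, so that W/2 still
    fills the pipe right after a loss halves the window."""
    bdp_bits = bw_bps * rtt_s
    return bdp_bits / 8 / mss_bytes        # buffer size in full-size packets

# Example: 10 Gb/s bottleneck, 100 ms round-trip propagation delay
print(buffer_for_full_utilization(10e9, 0.1))   # ~83,000 packets
```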

Single TCP Flow: Router with large enough buffers for full link utilization

Summary: Buffered Link
(Figure: window W vs. time t, with the minimum window for full utilization and the buffer occupancy marked.)
- With sufficient buffering we achieve full link utilization
- The window is always above the critical threshold
- The buffer absorbs changes in window size
- Buffer size = height of the TCP sawtooth
- The minimum buffer size needed is 2T*C (where 2T is the round-trip propagation delay and C is the link capacity)
- This is the origin of the rule-of-thumb

Rule-of-thumb
- The rule-of-thumb makes sense for one flow
- A typical backbone link has > 20,000 flows
- Does the rule-of-thumb still hold?

Simple Loss Model
What was the loss rate?
- Packets transferred between losses = avg BW * time = (0.75 W/RTT) * (W/2 * RTT) = 3W^2/8
- 1 packet lost per cycle, so the loss rate is p = 8/(3W^2)
- W = sqrt(8 / (3p))
- BW = 0.75 * MSS * W / RTT = MSS / (RTT * sqrt(2p/3))
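The same relations in code, with the superscripts restored (p = 8/(3W^2), W = sqrt(8/(3p)), BW = MSS/(RTT * sqrt(2p/3))); function names and example numbers are mine.

```python
import math

def window_from_loss(p):
    """Steady-state window from loss probability: W = sqrt(8 / (3p))."""
    return math.sqrt(8.0 / (3.0 * p))

def bw_from_loss(mss_bytes, rtt, p):
    """BW = 0.75 * MSS * W / RTT = MSS / (RTT * sqrt(2p/3)), in bits/s."""
    return 0.75 * mss_bytes * 8 * window_from_loss(p) / rtt

# Example: MSS = 1500 bytes, RTT = 100 ms, p = 1e-4
print(window_from_loss(1e-4))                 # ~163 packets
print(bw_from_loss(1500, 0.1, 1e-4) / 1e6)    # ~14.7 Mb/s
```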

If flows are synchronized
- The aggregate window has the same dynamics
- Therefore buffer occupancy has the same dynamics
- The rule-of-thumb still holds

If flows are not synchronized
(Figure: probability distribution of buffer occupancy, between 0 and the buffer size B.)

Central Limit Theorem
- The CLT tells us that the more variables (congestion windows of individual flows) we have, the narrower the Gaussian (fluctuation of the sum of the windows)
- The width of the Gaussian decreases as 1/sqrt(n)
- Buffer size should also decrease as 1/sqrt(n)
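A numeric comparison sketch. The 1/sqrt(n) scaling is the conclusion of the "Sizing Router Buffers" work these slides draw on; the function names and the example link parameters are illustrative assumptions.

```python
import math

def rule_of_thumb_buffer(bw_bps, rtt_s):
    """Classic rule of thumb: buffer = RTT * C (in bits)."""
    return bw_bps * rtt_s

def small_buffer(bw_bps, rtt_s, n_flows):
    """Buffer when n desynchronized flows share the link:
    roughly RTT * C / sqrt(n) (Appenzeller et al., Sizing Router Buffers)."""
    return bw_bps * rtt_s / math.sqrt(n_flows)

# Example: 10 Gb/s link, 250 ms RTT, 20,000 flows
print(rule_of_thumb_buffer(10e9, 0.25) / 8e6)   # ~312 MB
print(small_buffer(10e9, 0.25, 20000) / 8e6)    # ~2.2 MB
```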

Required buffer size (figure slide: simulation results)

Periodic Model
- The simplest model for TCP
- No specific flavor assumed
- Assumes a periodic pattern of the congestion window (see Fig 5.1)
- X(p) = (1/RTT) * sqrt(3/(2p))
- p is the packet loss probability

Example 1 - Question
If a TCP connection has an average RTT of 200 ms and packet loss probability 0.05, what is the average throughput?
RTT = 0.2 s, p = 0.05. Using Eq. (5.4), the throughput is:
X = (1/0.2) * sqrt(3/(2 * 0.05)) ≈ 27.4 pkts/sec
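The same arithmetic in code; the function name is mine.

```python
import math

def periodic_model_throughput(rtt, p):
    """Periodic-model throughput in packets/sec: X(p) = (1/RTT) * sqrt(3/(2p))."""
    return (1.0 / rtt) * math.sqrt(3.0 / (2.0 * p))

print(periodic_model_throughput(0.2, 0.05))   # ~27.4 pkts/sec
```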

Different TCP versions
- Loss-based: TCP Reno, Scalable TCP, HighSpeed TCP, ...
- Delay-based: TCP Vegas, FAST TCP
- Different increase/decrease policies: AIMD, MIMD, AIAD (see the sketch below)
Fairness:
- MIMD schemes such as Scalable TCP act aggressively; late flows get less bandwidth
- AIMD versions usually starve when operating alongside MIMD
- AIAD gets along well with MIMD
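A generic sketch of the three increase/decrease families named above; the constants are illustrative and not tied to any particular protocol's actual parameters.

```python
# Per-RTT window updates for the three policy families
# (a, b, c, d are illustrative constants).
def aimd(w, loss, a=1.0, b=0.5):
    return w * b if loss else w + a            # additive increase, multiplicative decrease

def mimd(w, loss, c=1.05, b=0.5):
    return w * b if loss else w * c            # multiplicative increase and decrease

def aiad(w, loss, a=1.0, d=1.0):
    return max(w - d, 1.0) if loss else w + a  # additive increase and decrease

# Rough intuition for the fairness remarks above: MIMD grows proportionally
# to the current window, so a flow that starts late (small w) catches up slowly.
print(aimd(10, False), mimd(10, False), aiad(10, True))
```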

Changing Workloads
New applications are changing the way TCP is used.
1980's Internet:
- Telnet & FTP: long-lived flows
- Well-behaved end hosts
- Homogeneous end-host capabilities
- Simple symmetric routing
2000's Internet:
- Web & more Web: a large number of short transfers
- Wild west: everyone is playing games to get bandwidth
- Cell phones and toasters on the Internet
- Policy routing
How to accommodate new applications?

Dependence on TCP/IP (cont.)
- We depend on TCP at the office, at home, and while on the move
- We depend on TCP not only for research, but also for critical business transactions and entertainment
- The Internet (powered by TCP/IP) is now a key tool used by our kids to prepare their school projects

Critical Role of TCP
- Many believe that network performance can be boosted simply by upgrading hardware; in many cases this is not correct
- TCP sits between the application and the network
- TCP has total control over how application data is released into the network
- TCP is a complex protocol that interacts with many network elements on the end-to-end path
- Unless TCP is optimized, hardware alone cannot boost network performance

Emergence of New Networking Technologies
- We are witnessing a proliferation of different networking technologies: wireless, satellite, optical, etc.
- TCP algorithms suitable for one environment do not always work best in another
- There is a need for research into new algorithms
- Evidence: a large number of articles on TCP are published every year in top journals and conferences

IP Convergence
- Many traditional non-TCP/IP industries are converging to TCP/IP, e.g. cellular communication, video and other entertainment, etc.
- Understanding TCP/IP performance fundamentals thus becomes important to scientists and engineers working in all these industries

Next Lecture: Performance
- How to measure network performance
Assigned reading:
- Any queuing theory book. For instance see …