The Latency/Bandwidth Tradeoff in Gigabit Networks
UBI 527 Data Communications
Ozan TEKDUR
2011-2012, Fall

Background
- Late 1960s: up to 30 characters per second
- Mid 1970s: 64 kbps trunk speed, 10 kbps file transfer speed, X.25 packet-switched networks
- Late 1980s: cost-effective T1 networks, 1.544 Mbps trunk speed and file transfer speed
- Packet-switched network architecture still had to process every packet up to the third layer (the network layer)

Background
- Early 1990s: Frame Relay networks, packet switching at T1 speeds
- Fiber-optic transmission media: very high bandwidth, noise-free, less error-control burden
- Improvements in VLSI technology: faster switches; intelligent switches that dynamically assign channel bandwidth on a demand basis

Background
- ISDN: Link Access Protocol for the D channel (LAPD)
- Packets are processed only up to the 2nd (data link) layer, reducing the burden on the network layer
- In summary, Frame Relay networks can achieve T1 speeds by:
  - implementing functions in hardware
  - moving functions out of the network
  - taking advantage of the LAPD architecture (example: LAN)

Background
- Multi-megabit data networks:
  - FDDI: 100 Mbps
  - SMDS: 45 Mbps
  - DQDB: up to 150 Mbps
  - ATM switches and Broadband ISDN: 155 Mbps up to 2.4 Gbps
  - HIPPI: 800 Mbps
  - SONET: 1.2 Gbps

Major Issue: Latency vs. Bandwidth
- Is the gigabit world just another step toward greater-bandwidth systems, or is it different?
- The effect of latency
- Channel latency: the time it takes energy to move from one end of the link to the other
- Key parameters in any data network system:
  - C = capacity of the network (Mbps)
  - b = number of bits in a data packet
  - L = length of the network (miles)

Major Issue: Latency vs. Bandwidth
- a = 5LC / b (Eq. 1)
- "a" is the critical system parameter, defined as the ratio of the channel latency to the time it takes to pump one packet into the link.
- It measures how many packets can be pumped into one end of the link before the first bit appears at the other end.
- The factor 5 is simply the approximate number of microseconds it takes light to travel one mile.
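As a quick check of Eq. 1, the short Python sketch below computes "a" for a few example links. The link parameters are illustrative assumptions, not values taken from the paper or from Table 1.

    # Sketch: ratio "a" = channel latency / packet transmission time (Eq. 1).
    # a = 5 * L * C / b, with L in miles, C in Mbps, b in bits.
    # The example link parameters below are assumptions chosen for illustration.

    def ratio_a(length_miles: float, capacity_mbps: float, packet_bits: float) -> float:
        latency_us = 5.0 * length_miles            # ~5 microseconds per mile
        tx_time_us = packet_bits / capacity_mbps   # b bits / (C Mbps) = b/C microseconds
        return latency_us / tx_time_us             # equals 5 * L * C / b

    examples = {
        "LAN (1 mile, 10 Mbps, 1000-bit packets)":             (1, 10, 1000),
        "Cross-US T1 (3000 miles, 1.544 Mbps, 1000 bits)":     (3000, 1.544, 1000),
        "Cross-US gigabit (3000 miles, 1000 Mbps, 1000 bits)": (3000, 1000, 1000),
    }

    for name, (L, C, b) in examples.items():
        print(f"{name}: a = {ratio_a(L, C, b):.3g}")

For the cross-country gigabit case the ratio is on the order of ten thousand: thousands of packets are pumped into the link before the first bit reaches the far end.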

Major Issue: Latency vs. Bandwidth
Table 1: Ratio "a" (propagation delay / packet transmission time) tabulated for LAN, WAN, satellite, and optical-fiber links, with columns for capacity C (Mbps), packet length b (bits), propagation delay t (μs), and the ratio "a".

Major Issue: Latency vs. Bandwidth
- "a" grows dramatically in gigabit systems. Why?
- Two cases to consider:
  1. A large number of users each sharing a small piece of this large bandwidth
  2. A few users each sending packets and files at gigabit speeds
- From the user's point of view, case 1 is no different from lower-data-rate systems.
- For case 2, "a" gets large due to the high data rate.

Major Issue: Latency vs. Bandwidth
- Consider a terminal sending a 1-Mb file across the U.S., as shown in Figure 1.
Figure 1: A terminal sending a 1-Mb file across the U.S.

Major Issue: Latency vs. Bandwidth
- Assume the communication channel is an X.25 packet network at 64 kbps.
- The first bit arrives at the far end after ~1000 bits have been pumped into the channel.
- The channel is buffering only 0.001 of the message: 1000 times as much data is stored in the terminal as in the channel.
- Clearly, if a higher speed is used, the transmission time is reduced and we benefit from more bandwidth.

Major Issue: Latency vs. Bandwidth
- Assume the communication channel is a T1 packet network at 1.544 Mbps.
- About 40 times as much data is stored in the terminal as in the channel.
- Once again, if a higher speed is used, the transmission time is reduced and we benefit from more bandwidth.

Major Issue: Latency vs. Bandwidth
- Assume the communication channel is a SONET network at 1.2 Gbps.
- The entire 1-Mb file becomes a small pulse moving down the channel; the pulse occupies only roughly 0.05 of the channel.
- It is clear that more bandwidth is of no use in speeding up the communication at this rate.
- The latency of the channel dominates the time to deliver the file.
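The three cases above come down to comparing the file size with the number of bits the channel itself holds (C × τ). The Python sketch below reproduces the rough numbers, assuming a 1-Mb file and about 15 ms of one-way cross-country latency as in the slides; the exact outputs differ slightly from the rounded figures on the slides.

    # Sketch: how much of a 1-Mb file fits "in the pipe" for various link speeds.
    # Assumes a one-way cross-US propagation delay of ~15 ms, as in the slides.

    FILE_BITS = 1_000_000     # 1-Mb file
    LATENCY_S = 0.015         # ~15 ms one-way propagation delay

    links = {
        "X.25 (64 kbps)":   64_000,
        "T1 (1.544 Mbps)":  1_544_000,
        "SONET (1.2 Gbps)": 1_200_000_000,
    }

    for name, rate_bps in links.items():
        bits_in_flight = rate_bps * LATENCY_S      # bits stored in the channel itself
        tx_time_ms = FILE_BITS / rate_bps * 1000   # time to pump the whole file into the link
        if bits_in_flight < FILE_BITS:
            ratio = FILE_BITS / bits_in_flight     # terminal buffers this many times more data
            note = f"terminal buffers ~{ratio:.0f}x more data than the channel"
        else:
            fraction = FILE_BITS / bits_in_flight  # file occupies this fraction of the channel
            note = f"file occupies ~{fraction:.2f} of the channel"
        print(f"{name}: transmit time {tx_time_ms:.2f} ms, {note}")

The X.25 and T1 cases are bandwidth-limited (the terminal buffers far more data than the channel holds), while the SONET case is latency-limited (the whole file is a short pulse inside the channel).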

Major Issue: Latency vs. Bandwidth
- Pre-gigabit networking: capacity limited
- Post-gigabit networking: latency limited
- The speed of light is the fundamental limitation.

Major Issue: Latency vs. Bandwidth
- Case of competing traffic: queueing.
- Classical M/M/1 queueing system: the mean time from when a message arrives at the tail of the transmit queue until the last bit of the message appears at the output of the channel, including any propagation delay, is given by
  T = (1.024/C) / (1 - p) + τ (Eq. 2)
  - T = mean response time (milliseconds)
  - τ = propagation delay (channel latency) in milliseconds
  - p = system utilization factor:
  p = λ (1024/C) (Eq. 3)
  - λ = arrival rate (messages per microsecond)
  - C = channel capacity (Mbps)
- (Eqs. 2 and 3 assume an average message length of 1024 bits, so 1024/C is the mean service time in microseconds, i.e. 1.024/C ms.)
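The sketch below implements Eq. 2 and Eq. 3 directly; the channel capacity, arrival rates, and latency used in the example run are illustrative assumptions.

    # Sketch of the M/M/1 response-time model from Eq. 2 and Eq. 3.
    # Assumes an average message length of 1024 bits; C in Mbps, tau in ms.

    def utilization(arrivals_per_us: float, capacity_mbps: float) -> float:
        """Eq. 3: p = lambda * (1024 / C)."""
        return arrivals_per_us * (1024.0 / capacity_mbps)

    def mean_response_ms(capacity_mbps: float, p: float, tau_ms: float) -> float:
        """Eq. 2: T = (1.024 / C) / (1 - p) + tau, in milliseconds."""
        if not 0.0 <= p < 1.0:
            raise ValueError("utilization p must be in [0, 1) for a stable queue")
        return (1.024 / capacity_mbps) / (1.0 - p) + tau_ms

    # Example: a T1 channel (1.544 Mbps) at increasing loads, with and without 15 ms latency.
    C = 1.544
    for lam in (0.0002, 0.0008, 0.0014):   # message arrivals per microsecond
        p = utilization(lam, C)
        t_local = mean_response_ms(C, p, tau_ms=0.0)
        t_cross_us = mean_response_ms(C, p, tau_ms=15.0)
        print(f"lambda = {lam} msg/us -> p = {p:.2f}, "
              f"T = {t_local:.3f} ms (tau = 0), {t_cross_us:.3f} ms (tau = 15 ms)")

At gigabit capacities the first term of Eq. 2 shrinks toward zero, so T is essentially τ across the whole load range; that is the effect Figures 2 and 3 contrast.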

Major Issue: Latency vs. Bandwidth
Figure 2: Response time vs. system load with no propagation delay

Major Issue: Latency vs. Bandwidth
Figure 3: Response time vs. system load with 15 ms propagation delay

Major Issue: Latency vs. Bandwidth
- Can we define a relationship between bandwidth-limited and latency-limited systems?
- Assume an M/M/1 model, where messages with an average length of b bits are transmitted across the U.S.
- Recall Eq. 2: T = (1.024/C) / (1 - p) + τ; for messages of average length b bits, the first term generalizes to (0.001 b / C) / (1 - p) ms.
- Two components make up the response time T:
  - the queueing + transmission delay (the first term in the equation)
  - the propagation delay (τ)

Major Issue: Latency vs. Bandwidth
- We aim to define a sharp boundary between the bandwidth-limited and latency-limited regions.
- Assume the two terms are equal to each other: the propagation delay = the queueing + transmission delay.
- Setting τ = (0.001 b / C) / (1 - p) and solving for C, Eq. 2 gives
  C_critical = (0.001 b) / ((1 - p) τ) (Eq. 4)
  with b in bits, τ in ms, and C_critical in Mbps.
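A short sketch of Eq. 4 (the 1-Mb file size and 15 ms latency are the slides' running example; the load sweep is illustrative) shows where the boundary in Figure 4 falls and whether a given channel is latency-limited or bandwidth-limited:

    # Sketch of Eq. 4: the critical capacity at which propagation delay equals
    # queueing + transmission delay, for a message of b bits and latency tau (ms).

    def critical_capacity_mbps(b_bits: float, p: float, tau_ms: float) -> float:
        """C_critical = 0.001 * b / ((1 - p) * tau), in Mbps."""
        return 0.001 * b_bits / ((1.0 - p) * tau_ms)

    B_BITS = 1_000_000     # 1-Mb file, as in Figure 4
    TAU_MS = 15.0          # ~15 ms across the U.S.
    CHANNEL_MBPS = 1000    # a gigabit channel, for comparison

    for p in (0.0, 0.5, 0.9, 0.99):
        c_crit = critical_capacity_mbps(B_BITS, p, TAU_MS)
        regime = "latency-limited" if CHANNEL_MBPS > c_crit else "bandwidth-limited"
        print(f"p = {p:.2f}: C_critical = {c_crit:8.1f} Mbps -> 1-Gbps channel is {regime}")

This matches the conclusion drawn two slides below: for a 1-Mb file and 15 ms of latency, a gigabit channel sits above the boundary (latency-limited) over most of the load range.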

Major Issue: Latency vs. Bandwidth
Figure 4: Bandwidth vs. system load for a 1-Mb file sent across the U.S.

Major Issue: Latency vs. Bandwidth
- Above the boundary, the system is latency-limited: more bandwidth has a negligible effect on reducing the mean response time T.
- Below the boundary, the system is bandwidth-limited: more bandwidth reduces the mean response time T.
- For these parameters (τ = 15 ms and a 1-Mb file), the system is latency-limited over most of the load range when a gigabit channel is used.
- For these parameters, a gigabit channel is overkill as far as reducing delay is concerned.

Major Issue: Latency vs. Bandwidth
Figure 5: Bandwidth vs. system load for files sent across the U.S.
- Gigabit channels begin to make sense for message sizes of 10 megabits or more, but are not helpful for smaller file sizes.

Other Issues
- Congestion control and flow control are problems in gigabit networks.
- Transmission starts at t = 0; at t = 15 ms, when the first bit arrives at the receiver, 15 million bits are already in the pipeline (at 1 Gbps).
- If an error occurs, by the time a stop signal is received by the transmitter at t = 30 ms, another 15 million bits will have been launched!
- A closed-loop feedback method of flow control is of no use in this environment, due to latency.
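The arithmetic behind "another 15 million bits" is just rate × delay; the sketch below makes it explicit, assuming a 1-Gbps sender and 15 ms of one-way latency.

    # Sketch: why closed-loop feedback reacts too late on a long gigabit pipe.
    # Assumes a 1-Gbps sender and ~15 ms one-way propagation delay.

    RATE_BPS = 1_000_000_000   # 1 Gbps
    ONE_WAY_DELAY_S = 0.015    # 15 ms

    bits_in_pipe = RATE_BPS * ONE_WAY_DELAY_S           # in flight when the first bit lands
    bits_before_stop = RATE_BPS * 2 * ONE_WAY_DELAY_S   # sent before any "stop" can take effect
    extra_bits = bits_before_stop - bits_in_pipe        # launched between t = 15 ms and t = 30 ms

    print(f"Bits in the pipe at t = 15 ms: {bits_in_pipe:,.0f}")
    print(f"Additional bits launched before a stop signal arrives at t = 30 ms: {extra_bits:,.0f}")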

Other Issues
Possible solutions:
- Rate-based flow control: the user is permitted to transmit at a maximum allowable rate (see the sketch after this list).
- Hiding latency at the application level.
- Use of parallelism: while one process is waiting for a response, another process that does not depend on this response may proceed with its work.
- Statistical multiplexing of bursty sources: with a large number of small bursty sources, the Law of Large Numbers allows the channel to be driven with high efficiency.
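As an illustration of the first idea, the sketch below paces a sender at a fixed maximum rate with a simple token-bucket style limiter. This is a generic open-loop technique, not the specific scheme proposed in the paper; the class name and parameters are invented for the example.

    import time

    class RateLimiter:
        """Token-bucket style pacer: caps a sender at max_bps regardless of feedback."""

        def __init__(self, max_bps: float, burst_bits: float):
            self.max_bps = max_bps        # maximum allowed sending rate (bits/s)
            self.burst_bits = burst_bits  # how many bits may be sent back-to-back
            self.tokens = burst_bits
            self.last = time.monotonic()

        def send(self, packet_bits: int) -> None:
            """Block until the packet fits within the allowed rate, then 'send' it."""
            while True:
                now = time.monotonic()
                self.tokens = min(self.burst_bits,
                                  self.tokens + (now - self.last) * self.max_bps)
                self.last = now
                if self.tokens >= packet_bits:
                    self.tokens -= packet_bits
                    return  # a real sender would transmit the packet here
                # sleep just long enough for the missing tokens to accumulate
                time.sleep((packet_bits - self.tokens) / self.max_bps)

    # Example: pace 1000-bit packets at no more than 1 Mbps.
    limiter = RateLimiter(max_bps=1_000_000, burst_bits=10_000)
    for _ in range(5):
        limiter.send(1000)

Because the rate cap is enforced locally at the sender, it does not depend on feedback that arrives 30 ms too late.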

CONCLUSION
- Gigabit networks force us to deal with the propagation delay due to the finite speed of light.
- The propagation delay across the U.S. is about 40 times smaller than the time required to transmit a 1-Mb file into a T1 link.
- At gigabit speeds the situation is completely reversed: the propagation delay is about 15 times larger than the time to transmit the file into the link.
- The user must pay attention to file sizes and to how latency will affect applications.

CONCLUSION
- The user must try to hide the latency with pipelining and parallelism.
- The system designer must think about the problems of flow control, buffering, and congestion control:
  - rate-based flow control
  - algorithms designed for smaller buffer usage

Reference:
Kleinrock, L., "The Latency/Bandwidth Tradeoff in Gigabit Networks," IEEE Communications Magazine, April 1992.

UBI 527 Data Communications Term Project
Ozan TEKDUR