Network Layer Chapter 5 Design Issues Routing Algorithms

Presentation transcript:

Network Layer Chapter 5 Design Issues Routing Algorithms Congestion Control Quality of Service Internetworking Network Layer of the Internet Gray units can be optionally omitted without causing later gaps Revised: August 2011 CN5E by Tanenbaum & Wetherall, © Pearson Education-Prentice Hall and D. Wetherall, 2011

The Network Layer Physical Link Network Transport Application Responsible for delivering packets between endpoints over multiple links CN5E by Tanenbaum & Wetherall, © Pearson Education-Prentice Hall and D. Wetherall, 2011

Congestion Control (1) Handling congestion is the responsibility of the Network and Transport layers working together. We look at the Network portion here: traffic-aware routing, admission control, traffic throttling, load shedding. CN5E by Tanenbaum & Wetherall, © Pearson Education-Prentice Hall and D. Wetherall, 2011

Congestion Control (2) Congestion results when too much traffic is offered; performance degrades due to loss/retransmissions. Goodput (= useful packets) trails offered load. As offered load increases, goodput should increase correspondingly until the capacity of the network is reached. Goodput will trail offered load because the load is bursty and queues will occasionally be too full, so a packet will be discarded inside the network. Congestion collapse can occur if the protocols are not carefully designed: nodes retransmit packets many times, believing that they have been lost, when copies of the packet are still in the network (in queues at routers) pending delivery. While throughput at a receiver may be high, goodput falls because multiple copies of the same packet are being received, and after the first copy the bandwidth is wasted. This really happened in the late 1980s as the Internet grew, and it led to the design of modern TCP, which includes congestion control mechanisms. CN5E by Tanenbaum & Wetherall, © Pearson Education-Prentice Hall and D. Wetherall, 2011

Factors causing congestion: streams of packets simultaneously begin arriving on several input lines that are headed for the same output line; slow processors; low-bandwidth lines; mismatch between parts of the system; congestion feeding upon itself; retransmission of lost packets. TUNALI Computer Networks 1

Congestion Control versus Flow Control Congestion control: making sure that the network is able to carry the offered traffic. Flow control: related to point-to-point traffic between the sender and the receiver; making sure that a fast sender cannot continually transmit data faster than the receiver can absorb it. A sender may receive a ‘slow down’ message either due to congestion in the subnet or due to the receiver not being able to handle the load. TUNALI Computer Networks 1

Congestion Control (3) – Approaches Network must do its best with the offered load Different approaches at different timescales Nodes should also reduce offered load (Transport) Provisioning is simply sizing the network to fit the offered load, i.e., don’t build it too small, or with little West-to-East capacity if there is much West-to-East traffic. CN5E by Tanenbaum & Wetherall, © Pearson Education-Prentice Hall and D. Wetherall, 2011

Congestion Control (4) – Approaches Network provisioning: build the network well, matching traffic needs. Traffic-aware routing: routes may be changed dynamically by changing the shortest-path weights. Admission control: refuse requests that would overload the network. Load shedding: discard packets. CN5E by Tanenbaum & Wetherall, © Pearson Education-Prentice Hall and D. Wetherall, 2011

Traffic-Aware Routing Choose routes depending on traffic, not just topology E.g., use EI for West-to-East traffic if CF is loaded But take care to avoid oscillations Our previous routes only considered topology; this approach can get more traffic through the network. If not careful, then routing can notice CF is busy and switch traffic over to use EI, only to later notice that EI is busy and switch traffic back to CF. There are various techniques to avoid this: 1) change routes only slowly, e.g., traffic engineering in which an external system sets weights and the routing system does not otherwise adapt; and 2) using multiple paths at once, e.g., both CF and EI. CN5E by Tanenbaum & Wetherall, © Pearson Education-Prentice Hall and D. Wetherall, 2011

Traffic-Aware Routing 2 Choose weights to be a function of: bandwidth (fixed), delay (fixed), load (variable). The parameters must be crafted carefully to avoid oscillations due to changing load. CN5E by Tanenbaum & Wetherall, © Pearson Education-Prentice Hall and D. Wetherall, 2011
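
A minimal sketch of how such a weight function might look (all names and constants below are assumptions for illustration, not taken from the slides): the variable load term is smoothed so that the weight, and hence the routes, change only slowly.

```python
# Sketch only: illustrative link-weight function for traffic-aware routing.
# All names and constants are assumptions, not taken from the slides.

ALPHA = 0.2   # smoothing factor for the variable load term

def link_weight(bandwidth_bps, delay_s, measured_load, smoothed_load):
    # Smooth the load so that weights (and hence routes) change only slowly,
    # which helps avoid the oscillations mentioned above.
    smoothed_load = ALPHA * measured_load + (1 - ALPHA) * smoothed_load
    # Higher bandwidth lowers the weight; higher delay or load raises it.
    weight = delay_s + 1e9 / bandwidth_bps + smoothed_load
    return weight, smoothed_load
```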

Admission Control Admission control allows a new traffic load only if the network has sufficient capacity, e.g., with virtual circuits. Can combine with looking for an uncongested route. (Figures: network with some congested nodes; uncongested portion and route A to B around congestion.) CN5E by Tanenbaum & Wetherall, © Pearson Education-Prentice Hall and D. Wetherall, 2011

Admission Control 2 To carry out admission control, one needs to characterize the network traffic. Two measures are used to do this: Average data rate Instantaneous burst size CN5E by Tanenbaum & Wetherall, © Pearson Education-Prentice Hall and D. Wetherall, 2011

Traffic Throttling 1 Congested routers signal hosts to slow down traffic ECN (Explicit Congestion Notification) marks packets and receiver returns signal to sender There are other designs, but this is the main one under deployment in the Internet. By marking existing packets using bits in the IP header, routers avoid sending additional packets at a time of congestion. Signal from receiver to sender is carried using a Transport protocol like TCP. CN5E by Tanenbaum & Wetherall, © Pearson Education-Prentice Hall and D. Wetherall, 2011

Traffic Throttling 2 The main idea is congestion avoidance. Routers diagnose upcoming congestion by monitoring: queue lengths in buffers (computing the queuing delay using an exponentially weighted moving average (EWMA) formula), utilization of the output links, and the number of packets lost. CN5E by Tanenbaum & Wetherall, © Pearson Education-Prentice Hall and D. Wetherall, 2011
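
As a rough illustration of the EWMA idea mentioned above (the constants and the queue-length samples are made up):

```python
# Sketch only: EWMA estimate of queue length that a router might use to
# detect congestion building up. Constants and samples are assumed values.

BETA = 0.5        # weight given to history
THRESHOLD = 50    # packets; warn when the smoothed estimate exceeds this

def update_estimate(avg_queue, instantaneous_queue):
    return BETA * avg_queue + (1 - BETA) * instantaneous_queue

avg = 0.0
for sample in [2, 5, 40, 80, 90, 85]:   # hypothetical queue-length samples
    avg = update_estimate(avg, sample)
    if avg > THRESHOLD:
        print(f"congestion warning: smoothed queue length = {avg:.1f}")
```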

Choke Packets If a newly arrived packet’s output line is in warning state, a CHOKE PACKET is sent back to the source The packet is forwarded to the destination The source receiving the choke packet is expected to reduce its traffic accordingly TUNALI Computer Networks 1

Explicit Congestion Notification No extra packet is generated. The router tags the packet and forwards it to the destination. The destination, reading the tag, informs the source about the problem. CN5E by Tanenbaum & Wetherall, © Pearson Education-Prentice Hall and D. Wetherall, 2011

Hop-by-Hop Backpressure At high speeds, choke-packet reaction is slow. Alternative: hop-by-hop choke packets. Choke packets take effect at every hop they pass through. TUNALI Computer Networks 1

Hop-by-Hop Choke Packets (a) A choke packet that affects only the source. (b) A choke packet that affects each hop it passes through. TUNALI Computer Networks 1

Load Shedding (1) When all else fails, the network will drop packets (shed load). Can be done end-to-end or link-by-link. Link-by-link (right) produces rapid relief. CN5E by Tanenbaum & Wetherall, © Pearson Education-Prentice Hall and D. Wetherall, 2011

Load Shedding (2) End-to-end (right) takes longer to have an effect, but can better target the cause of congestion. CN5E by Tanenbaum & Wetherall, © Pearson Education-Prentice Hall and D. Wetherall, 2011

Load Shedding 3 Random dropping can be done, but there are better policies. Wine policy: old is better than new; more efficient for go-back-N. Milk policy: new is better than old; more efficient for multimedia streaming. Priority policy: e.g., MPEG uses compression in which some frames carry more important information. TUNALI Computer Networks 1

Random Early Detection Discard packets before all the buffer space is really exhausted. This serves as a message to TCP to slow down. Maintain a buffer threshold; when the average queue length exceeds the threshold, action is taken. TUNALI Computer Networks 1
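
A hedged sketch of the RED drop decision, assuming the common scheme of a drop probability that grows linearly between two queue-length thresholds (the thresholds and maximum probability are made-up values):

```python
# Sketch only: a Random Early Detection drop decision with a linear drop
# probability between two thresholds. All constants are assumed values.

import random

MIN_TH, MAX_TH = 20, 80    # average queue-length thresholds, in packets
MAX_DROP_PROB = 0.1

def red_should_drop(avg_queue_len):
    if avg_queue_len < MIN_TH:
        return False                      # queue short: never drop
    if avg_queue_len >= MAX_TH:
        return True                       # queue long: always drop
    # In between, drop with a probability growing toward MAX_DROP_PROB.
    p = MAX_DROP_PROB * (avg_queue_len - MIN_TH) / (MAX_TH - MIN_TH)
    return random.random() < p
```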

Quality of Service Application requirements, traffic shaping, packet scheduling, admission control, integrated services, differentiated services. CN5E by Tanenbaum & Wetherall, © Pearson Education-Prentice Hall and D. Wetherall, 2011

Overprovisioning Provide the necessary router capacity, buffer space and bandwidth to smoothly send the packets Expensive TUNALI Computer Networks 1

Application Requirements (1) Different applications care about different properties. We want all applications to get what they need. “High” means a demanding requirement, e.g., low delay. CN5E by Tanenbaum & Wetherall, © Pearson Education-Prentice Hall and D. Wetherall, 2011

Application Requirements (2) Network provides service with different kinds of QoS (Quality of Service) to meet application requirements.
Network service: Application
Constant bit rate: Telephony
Real-time variable bit rate: Videoconferencing
Non-real-time variable bit rate: Streaming a movie
Available bit rate: File transfer
Video conferencing is variable bit rate because video is normally compressed, so the bit rate varies over time. Telephony is typically carried at a lower, fixed rate. Example of QoS categories from ATM networks. CN5E by Tanenbaum & Wetherall, © Pearson Education-Prentice Hall and D. Wetherall, 2011

Traffic Shaping 1 Bursty traffic is often the cause of congestion. Traffic shaping regulates the average rate (and burstiness) of data transmission. This is an open-loop method to prevent congestion. When a virtual circuit is set up, the user and the subnet agree on a certain traffic pattern. As long as the user fulfills its contract, the carrier lives up to its promise. Traffic policing is the monitoring of the user’s rate; the carrier discards packets if the user breaks the contract. TUNALI Computer Networks 1

Traffic Shaping (2) Traffic shaping regulates the average rate and burstiness of data entering the network Lets us make guarantees Shape traffic here CN5E by Tanenbaum & Wetherall, © Pearson Education-Prentice Hall and D. Wetherall, 2011

Traffic Shaping (3) Token/Leaky bucket limits both the average rate (R) and short-term burst (B) of traffic. For token, the bucket size is B, water enters at rate R and is removed to send; opposite for leaky. Leaky bucket (need not be full to send); token bucket (need some water to send). CN5E by Tanenbaum & Wetherall, © Pearson Education-Prentice Hall and D. Wetherall, 2011

The Leaky Bucket Algorithm Packets enter a finite queue at a random rate. Packets leave the queue at a constant rate. If the buffer is full, the incoming packets are discarded. If packets are of variable length, then the algorithm is defined in terms of bytes per unit time. TUNALI Computer Networks 1
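
A minimal sketch of the leaky bucket as described above, with assumed parameter names and a per-tick drain:

```python
# Sketch only: leaky bucket as a finite queue drained at a constant rate.
# Names and parameters are assumptions, not from the slides.

from collections import deque

class LeakyBucket:
    def __init__(self, capacity_pkts, drain_rate_pps):
        self.queue = deque()
        self.capacity = capacity_pkts
        self.drain_rate = drain_rate_pps   # packets released per tick

    def arrive(self, packet):
        if len(self.queue) < self.capacity:
            self.queue.append(packet)      # enqueue
            return True
        return False                       # bucket full: packet is discarded

    def tick(self):
        """Called once per unit time; releases packets at a constant rate."""
        sent = []
        for _ in range(min(self.drain_rate, len(self.queue))):
            sent.append(self.queue.popleft())
        return sent
```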

The Token Bucket Algorithm In this algorithm, the leaky bucket holds tokens that are generated at a rate of one token every T sec. For a packet to be transmitted, it must capture and destroy one token. This algorithm allows output bursts of up to the bucket capacity. It discards tokens if the bucket is full, but it never discards packets, hence it needs a large buffer for incoming packets. A variant of the algorithm running on bytes rather than packets is possible. TUNALI Computer Networks 1
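
And a corresponding token-bucket sketch (again, the names and the per-tick token refill are assumptions for illustration):

```python
# Sketch only: token bucket shaper. Tokens are added each tick up to the
# bucket capacity; a packet consumes one token to leave. Packets are never
# dropped, only delayed, so the input buffer is unbounded.

from collections import deque

class TokenBucket:
    def __init__(self, capacity_tokens, tokens_per_tick):
        self.tokens = 0
        self.capacity = capacity_tokens
        self.rate = tokens_per_tick
        self.waiting = deque()             # unbounded buffer for packets

    def arrive(self, packet):
        self.waiting.append(packet)

    def tick(self):
        # Add tokens; excess tokens are discarded when the bucket is full.
        self.tokens = min(self.capacity, self.tokens + self.rate)
        sent = []
        while self.waiting and self.tokens >= 1:
            self.tokens -= 1               # capture and destroy one token
            sent.append(self.waiting.popleft())
        return sent
```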

Traffic Shaping (4) Host traffic: R=200 Mbps, B=16000 KB. Shaped by R=200 Mbps, B=9600 KB. Shaped by R=200 Mbps, B=0 KB. For the host traffic, the descriptor R=200 Mbps, B=16000 KB is the smallest token bucket that can let the traffic pass unchanged. To compute this we work out R as the average rate over the time period; then, given R, we find the smallest B such that the bucket level only just reaches zero at some point. A smaller bucket size delays traffic and reduces burstiness. CN5E by Tanenbaum & Wetherall, © Pearson Education-Prentice Hall and D. Wetherall, 2011
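
One way to find the smallest B for a chosen rate R, as described above, is to run the bucket over the traffic trace and record the deepest draw-down; a sketch with a made-up trace:

```python
# Sketch only: given a traffic trace and a chosen rate R, find the smallest
# token bucket B that lets the traffic pass unshaped. The trace is made up.

def smallest_bucket(trace_bytes_per_ms, R_bytes_per_ms):
    excess = 0.0     # bytes by which arrivals have outrun the token rate
    worst = 0.0
    for sent in trace_bytes_per_ms:
        excess = max(0.0, excess + sent - R_bytes_per_ms)
        worst = max(worst, excess)
    return worst     # smallest B (bytes) so the bucket never runs dry

trace = [125000] * 8 + [0] * 32          # hypothetical burst then idle, per ms
print(smallest_bucket(trace, 25000))     # -> 800000.0 bytes for this trace
```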

Calculation of Maximum Burst Rate Define: burst length = S sec; token bucket capacity = B bytes; token arrival rate = R bytes/sec; maximum output rate = M bytes/sec. Then the bytes available to send during a burst of length S are B + RS, while the bytes that can be output in S sec are MS. Setting B + RS = MS gives S = B / (M - R). TUNALI Computer Networks 1
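
Plugging in example numbers (assumed for illustration: a 1000-Mbps output link, R = 200 Mbps, B = 9600 KB):

```python
# Sketch only: maximum burst length S = B / (M - R), with assumed values.
B = 9600 * 1000          # token bucket capacity in bytes
R = 200e6 / 8            # token arrival rate: 200 Mbps = 25,000,000 bytes/sec
M = 1000e6 / 8           # maximum output rate: 1000 Mbps = 125,000,000 bytes/sec

S = B / (M - R)          # burst length in seconds
print(f"maximum burst length = {S * 1000:.0f} ms")   # -> 96 ms
```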

Packet Scheduling (1) Packet scheduling divides router/link resources among traffic flows with alternatives to FIFO (First In First Out). (Figure: example of round-robin queuing.) CN5E by Tanenbaum & Wetherall, © Pearson Education-Prentice Hall and D. Wetherall, 2011

Packet Scheduling (2) Resources reserved for different flows: Bandwidth Buffer space CPU cycles CN5E by Tanenbaum & Wetherall, © Pearson Education-Prentice Hall and D. Wetherall, 2011

Packet Scheduling (3) Fair Queueing approximates bit-level fairness with different packet sizes; weights change target levels. The result is WFQ (Weighted Fair Queueing). Virtual times are measured in rounds, where a round lets each input queue send 1 bit for weight 1, or W bits for weight W. The time to send a packet of length L is thus L/W. The formula says that the finish virtual time for a packet is the larger of its arrival time or the finish time of the previous packet in the same queue, plus the time to send it: Fi = max(Ai, Fi-1) + Li/W. Packets may be sent out of arrival order; finish virtual times determine transmission order. CN5E by Tanenbaum & Wetherall, © Pearson Education-Prentice Hall and D. Wetherall, 2011
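
A small sketch of the finish-time computation (the packets and weights below are made up):

```python
# Sketch only: computing WFQ finish virtual times with the formula above.
# Packets are (arrival_virtual_time, length, queue_id); weights are assumed.

def finish_times(packets, weights):
    """Fi = max(Ai, F_{i-1}) + Li / W, per queue; send in order of Fi."""
    last_finish = {q: 0.0 for q in weights}
    scheduled = []
    for arrival, length, queue in packets:
        f = max(arrival, last_finish[queue]) + length / weights[queue]
        last_finish[queue] = f
        scheduled.append((f, queue, length))
    return sorted(scheduled)               # transmission order by finish time

pkts = [(0, 8, "A"), (0, 6, "B"), (1, 10, "A")]   # hypothetical packets
print(finish_times(pkts, {"A": 2, "B": 1}))
# -> [(4.0, 'A', 8), (6.0, 'B', 6), (9.0, 'A', 10)]
```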

Admission Control (1) Admission control takes a traffic flow specification and decides whether the network can carry it. Sets up packet scheduling to meet QoS. (Figure: example flow specification.) CN5E by Tanenbaum & Wetherall, © Pearson Education-Prentice Hall and D. Wetherall, 2011

Admission Control (2) Construction to guarantee bandwidth B and delay D: shape the traffic source to an (R, B) token bucket; run WFQ with a weight W such that W / (sum of all weights) > R / capacity. Holds for all traffic patterns and all topologies. Bandwidth is guaranteed at each router by setting a high enough weight on the flow; if this cannot be done then the flow must not be admitted. Delay guarantees are more subtle and the bound is not given here. Essentially a burst of traffic can arrive at one router and be delayed, but then it will not be delayed at other routers because it has already been shaped to be less bursty. So the total delay is something like the propagation delay plus B/R. CN5E by Tanenbaum & Wetherall, © Pearson Education-Prentice Hall and D. Wetherall, 2011
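
A hedged sketch of the per-router bandwidth check implied above (the link capacity, weights and rates are made-up values):

```python
# Sketch only: per-router admission check for an (R, B) token-bucket flow.
# The flow gets a W / (sum of weights) share of the link; admit it only if
# that share is at least R. The numbers below are made up for illustration.

def can_admit(R_bps, link_capacity_bps, flow_weight, existing_weights):
    total = flow_weight + sum(existing_weights)
    guaranteed = (flow_weight / total) * link_capacity_bps
    return guaranteed >= R_bps

print(can_admit(R_bps=2e6, link_capacity_bps=10e6,
                flow_weight=3, existing_weights=[1, 2, 4]))
# -> True: the flow's share is 3 Mbps, which covers the requested 2 Mbps
```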

Integrated Services (1) Design with QoS for each flow; handles multicast traffic. Admission with RSVP (Resource reSerVation Protocol): Receiver sends a request back to the sender Each router along the way reserves resources Routers merge multiple requests for same flow Entire path is set up, or reservation not made CN5E by Tanenbaum & Wetherall, © Pearson Education-Prentice Hall and D. Wetherall, 2011

Integrated Services (2) (Figure: merging of reservations.) R3 reserves a flow from S1; R3 reserves a flow from S2; R5 reserves a flow from S1, merged with R3 at H. CN5E by Tanenbaum & Wetherall, © Pearson Education-Prentice Hall and D. Wetherall, 2011

RSVP Resource Reservation Protocol Allows multiple senders to transmit to multiple groups of receivers Uses multicast routing with spanning trees Allows dynamic change of bandwidth Allows change of source once bandwidth is reserved TUNALI Computer Networks 1

RSVP 2 Receivers send a reservation message up the tree to the sender. The message propagates by reverse path forwarding. At each hop, the router reserves the necessary bandwidth; if there is no available bandwidth, it reports back failure. By the time the message arrives at the source, bandwidth has already been reserved. TUNALI Computer Networks 1

Differentiated Services With per-flow QoS, managing thousands of flows is a difficult task and advance setup is required. Differentiated services instead allow the definition of a set of service classes. Customers sign up for a particular class and receive the appropriate service. No advance setup is required. TUNALI Computer Networks 1

Expedited Forwarding 1 Two classes of service are available: Expedited and Regular. The routers could be programmed to have two output queues for each outgoing line to serve the above classes. TUNALI Computer Networks 1

Expedited Forwarding (2) Design with classes of QoS; customers buy what they want Expedited class is sent in preference to regular class Less expedited traffic but better quality for applications CN5E by Tanenbaum & Wetherall, © Pearson Education-Prentice Hall and D. Wetherall, 2011

Assured Forwarding 1 Four priority classes, each having its own resources. For each class, three discard probabilities are defined: low, medium, high. TUNALI Computer Networks 1

Assured Forwarding (2) Implementation of DiffServ: Customers mark desired class on packet ISP shapes traffic to ensure markings are paid for Routers use WFQ to give different service levels CN5E by Tanenbaum & Wetherall, © Pearson Education-Prentice Hall and D. Wetherall, 2011

Reading 5.3 5.4 CN5E by Tanenbaum & Wetherall, © Pearson Education-Prentice Hall and D. Wetherall, 2011

Homework 13 16 17 CN5E by Tanenbaum & Wetherall, © Pearson Education-Prentice Hall and D. Wetherall, 2011