Network Simulation NET441


Queuing Disciplines

Flow Control vs. Congestion Control: Flow control involves preventing senders from overrunning the capacity of the receivers. Congestion control involves preventing too much data from being injected into the network, which would cause switches or links to become overloaded.

Congestion Control and Resource Allocation: The resources in question are the bandwidth of the links and the buffers at the routers and switches. Packets contend at a router for the use of a link, with each contending packet placed in a queue waiting for its turn to be transmitted over the link.

Congestion Control and Resource Allocation: When too many packets are contending for the same link, the queue overflows and packets get dropped; the network is congested! The network should provide a congestion-control mechanism to deal with such a situation.

Congestion Control and Resource Allocation: Congestion control involves both hosts and routers. In network elements, various queuing disciplines can be used to control the order in which packets get transmitted and which packets get dropped. At the hosts’ end, the congestion-control mechanism paces how fast sources are allowed to send packets.

Network Control Issues: Resources are limited. Identify the resources: buffer space and bandwidth allocation. One could simply “route around” congested links, e.g., put a large edge weight on a congested link to route around it. That doesn't solve the inherent problem, though.

Flows: We talk about “flows” in the context of queuing because of the ease with which they can be viewed at different levels: process to process, host to host, institution to institution, region to region. In general, a flow refers to a sequence of packets sent between a source/destination pair, following the same route through the network.

Queuing Disciplines: Routers must implement some queuing discipline that governs how packets are buffered or prioritized. One can think of queuing disciplines as rules for allocating bandwidth or rules for allocating buffer space within the router. The book discusses two common disciplines: FIFO and fair queuing.

FIFO Queuing: FIFO queuing is also called first-come, first-served (FCFS) queuing. The first packet that arrives at a router is the first packet to be transmitted. The amount of buffer space at each router is finite. Tail drop: if a packet arrives and the queue (buffer space) is full, then the router discards that packet.
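
A minimal sketch of a FIFO queue with tail drop, in Python (illustrative only; the class and packet names below are hypothetical, not from the textbook or slides):

```python
from collections import deque

class FIFOQueue:
    """Minimal FIFO (first-come, first-served) queue with tail drop."""

    def __init__(self, capacity):
        self.capacity = capacity      # finite buffer space, in packets
        self.buffer = deque()

    def enqueue(self, packet):
        # Tail drop: if the buffer is full, the arriving packet is discarded.
        if len(self.buffer) >= self.capacity:
            return False              # packet dropped
        self.buffer.append(packet)
        return True                   # packet accepted

    def dequeue(self):
        # Transmit packets strictly in arrival order.
        return self.buffer.popleft() if self.buffer else None

# Usage: a 3-packet buffer accepts the first three arrivals and drops the fourth.
q = FIFOQueue(capacity=3)
for pkt in ["p1", "p2", "p3", "p4"]:
    print(pkt, "accepted" if q.enqueue(pkt) else "dropped (tail drop)")
print("transmit order:", [q.dequeue() for _ in range(3)])
```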

(a) FIFO queuing; (b) tail drop at a FIFO queue.

FIFO Queuing – Priority Queuing: A simple variation on basic FIFO queuing is priority queuing. Each packet is marked with a priority. The router then implements multiple FIFO queues, one for each priority class. The router always transmits packets out of the highest-priority queue, if that queue is nonempty, before moving on to the next priority queue. Within each priority class, packets are still managed in a FIFO manner.
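
A sketch of this idea in Python (the class and method names are illustrative assumptions, not a standard API):

```python
from collections import deque

class PriorityQueuing:
    """Priority queuing sketch: one FIFO queue per priority class."""

    def __init__(self, num_priorities):
        # Index 0 is the highest priority class.
        self.queues = [deque() for _ in range(num_priorities)]

    def enqueue(self, packet, priority):
        self.queues[priority].append(packet)

    def dequeue(self):
        # Always serve the highest-priority nonempty queue first;
        # within a class, packets remain in FIFO order.
        for q in self.queues:
            if q:
                return q.popleft()
        return None

pq = PriorityQueuing(num_priorities=2)
pq.enqueue("low-1", priority=1)
pq.enqueue("high-1", priority=0)
pq.enqueue("low-2", priority=1)
print([pq.dequeue() for _ in range(3)])   # ['high-1', 'low-1', 'low-2']
```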

Priority Queuing! The problem with priority queuing, of course, is that the high-priority queue can starve out all the other queues. That is, as long as there is at least one high-priority packet in the high-priority queue, lower-priority queues do not get served. For this to be viable, there need to be hard limits on how much high-priority traffic is inserted in the queue.

Fair Queuing: The main problem with FIFO queuing is that it does not discriminate between different traffic sources; that is, it does not separate packets according to the flow to which they belong. Fair queuing (FQ) maintains a separate queue for each flow currently being handled by the router. The router then services these queues in round-robin fashion.

Fair Queuing - Round-robin service Round-robin service of four flows at a router

Fair Queuing - Round-robin service: The router services these queues in a sort of round-robin fashion, as illustrated in Figure 6.6. When a flow sends packets too quickly, its queue fills up. When a queue reaches a particular length, additional packets belonging to that flow’s queue are discarded. In this way, a given source cannot arbitrarily increase its share of the network’s capacity at the expense of other flows. Note that FQ does not involve the router telling the traffic sources anything about the state of the router or in any way limiting how quickly a given source sends packets. In other words, FQ is still designed to be used in conjunction with an end-to-end congestion-control mechanism. It simply segregates traffic so that ill-behaved traffic sources do not interfere with those that are faithfully implementing the end-to-end algorithm. FQ also enforces fairness among a collection of flows managed by a well-behaved congestion-control algorithm.

Fair Queuing: The main complication with fair queuing is that the packets being processed at a router are not necessarily the same length. To truly allocate the bandwidth of the outgoing link in a fair manner, it is necessary to take packet length into consideration. For example, if a router is managing two flows, one with 1000-byte packets and the other with 500-byte packets (perhaps because of fragmentation upstream from this router), then a simple round-robin servicing of packets from each flow’s queue will give the first flow two-thirds of the link’s bandwidth and the second flow only one-third of the bandwidth.
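
A quick check of that arithmetic (a trivial, illustrative Python snippet using the packet sizes from this example):

```python
# Bytes each flow gets per round when simple round-robin sends one packet
# per flow per turn (packet sizes taken from the example above).
flow1_pkt, flow2_pkt = 1000, 500
total = flow1_pkt + flow2_pkt
print("flow 1 share:", flow1_pkt / total)   # 0.666... -> two-thirds
print("flow 2 share:", flow2_pkt / total)   # 0.333... -> one-third
```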

Queuing Disciplines: What we really want is bit-by-bit round-robin; that is, the router transmits a bit from flow 1, then a bit from flow 2, and so on. However, it is not feasible to interleave the bits from different packets. FQ simulates this behavior instead: determine when a given packet would finish being transmitted if it were being sent using bit-by-bit round-robin, and use this finishing time to sequence the packets for transmission.

Queuing Disciplines – Fair Queuing: To understand the algorithm for approximating bit-by-bit round-robin, consider the behavior of a single flow. For this flow, let Pi denote the length of packet i, Si the time when the router starts to transmit packet i, and Fi the time when the router finishes transmitting packet i. Then Fi = Si + Pi.

Queuing Disciplines – Fair Queuing: When do we start transmitting packet i? That depends on whether packet i arrived before or after the router finished transmitting packet i-1 of the same flow. Let Ai denote the time that packet i arrives at the router. Then Si = max(Fi-1, Ai), and so Fi = max(Fi-1, Ai) + Pi.
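
As a quick numeric check of this recurrence (an illustrative Python sketch; the arrival times and lengths below are made up):

```python
def start_and_finish(prev_finish, arrival, length):
    """S_i = max(F_{i-1}, A_i) and F_i = S_i + P_i for a single flow."""
    start = max(prev_finish, arrival)
    return start, start + length

# Packet 1 arrives at time 0 with length 100; packet 2 arrives at time 50,
# before packet 1 finishes, so its transmission cannot start until F_1 = 100.
s1, f1 = start_and_finish(prev_finish=0, arrival=0, length=100)
s2, f2 = start_and_finish(prev_finish=f1, arrival=50, length=60)
print(s1, f1)   # 0 100
print(s2, f2)   # 100 160
```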

Queuing Disciplines – Fair Queuing: Now, for every flow, we calculate Fi for each packet that arrives, using this formula. We then treat all the Fi as timestamps: the next packet to transmit is always the packet with the lowest timestamp, i.e., the packet that should finish transmission before all others.
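
A simplified sketch of this scheduling rule in Python (the class and method names are assumptions for illustration; it also glosses over the subtlety that, in the full algorithm, arrival times should be measured in bit-by-bit rounds rather than wall-clock time):

```python
class FairQueuing:
    """Tag each arriving packet with its bit-by-bit finish time and always
    transmit the queued packet with the smallest tag."""

    def __init__(self):
        self.last_F = {}    # last finish time computed per flow
        self.queue = []     # (F_i, packet) pairs awaiting transmission

    def arrive(self, flow, packet, length, arrival=0):
        # F_i = max(F_{i-1}, A_i) + P_i, computed per flow.
        F = max(self.last_F.get(flow, 0), arrival) + length
        self.last_F[flow] = F
        self.queue.append((F, packet))

    def transmit(self):
        # Send the packet that would finish first under bit-by-bit round-robin.
        if not self.queue:
            return None
        self.queue.sort(key=lambda entry: entry[0])
        return self.queue.pop(0)[1]

fq = FairQueuing()
fq.arrive(flow=1, packet="p1", length=200)
fq.arrive(flow=2, packet="p2", length=160)
print(fq.transmit())   # "p2" is sent first (tag 160 < 200)
```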

Queuing Disciplines Fair Queuing Example of fair queuing in action: packets with earlier finishing times are sent first; sending of a packet already in progress is completed

Bandwidth Sharing: Because FQ is work-conserving, any bandwidth that is not used by one flow is automatically available to other flows. Thus we can think of FQ as providing a guaranteed minimum share of bandwidth to each flow. For example, if we have 4 flows passing through a router and all of them are sending packets, then each one will receive 1/4 of the bandwidth. If one of them is idle long enough that all its packets drain out of the router’s queue, then the available bandwidth will be shared among the remaining 3 flows, which will each now receive 1/3 of the bandwidth.

Weighted Fair Queuing (WFQ) allows a weight to be assigned to each flow (queue). The weight logically specifies how many bits to transmit each time the router services that queue, which effectively controls the share of the link bandwidth (BW) that the flow gets. Example: a router has 3 flows (queues); the first queue has a weight of 2, the second queue has a weight of 3, and the third queue has a weight of 1. Assuming that each flow always contains a packet waiting to be sent, what fraction of the BW is assigned to each flow? Source: Peterson & Davie, 2007, p. 473.

WFQ, cont. Solution: The first flow will get 1/3 of the available BW, the second flow will get 1/2 of the available BW, and the third flow will get 1/6 of the available BW. Simple FQ gives each queue a weight of 1, which means that logically only 1 bit is transmitted from each queue each time around; this results in each flow getting 1/n of the bandwidth when there are n flows. In general, with weights w1, w2, and w3, queues 1, 2, and 3 would be allocated BW1 = w1/(w1+w2+w3), BW2 = w2/(w1+w2+w3), and BW3 = w3/(w1+w2+w3). Source: Peterson & Davie, 2007, p. 473.
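
The same computation in a few lines of Python (illustrative only, using the weights from the example above):

```python
# Weighted fair queuing bandwidth shares: BW_i = w_i / (w_1 + w_2 + w_3).
# Weights taken from the example above.
weights = {"flow 1": 2, "flow 2": 3, "flow 3": 1}
total = sum(weights.values())
for flow, w in weights.items():
    print(f"{flow}: {w}/{total} = {w / total:.3f} of the link BW")
# flow 1: 2/6 = 0.333 (1/3), flow 2: 3/6 = 0.500 (1/2), flow 3: 1/6 = 0.167
```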

Example: Suppose a router has 3 input flows and one output. It receives the packets listed in Table 1 all at about the same time, in the order listed, during a period in which the output port is busy but all queues are otherwise empty. Give the order in which the packets are transmitted, assuming: (a) fair queuing; (b) weighted fair queuing, with flow 1 having a weight of 2, flow 2 having twice as much share as flow 1 (weight 4), and flow 3 having 1.5 times as much share as flow 1 (weight 3). Source: Peterson & Davie, 2007, p. 529.

Example, cont. Table 1 (Source: Peterson & Davie, 2007, p. 530):

Packet  Size  Flow
1       200   1
2       200   1
3       160   2
4       120   2
5       160   2
6       210   3
7       150   3
8       90    3

Solution (a): Fi is the cumulative per-flow size. Take Ai = 0, as all packets are received at about the same time, so there is no waiting. Starting with flow 1 (Source: Peterson & Davie, 2007, p. 737):

Packet  Size  Flow  Fi
1       200   1     200
2       200   1     400
3       160   2
4       120   2
5       160   2
6       210   3
7       150   3
8       90    3

Solution, cont. Computing Fi for the remaining flows:

Packet  Size  Flow  Fi
1       200   1     200
2       200   1     400
3       160   2     160
4       120   2     280
5       160   2     440
6       210   3     210
7       150   3     360
8       90    3     450

Solution, cont. So, packets are sent in increasing order of Fi: Packet 3, Packet 1, Packet 6, Packet 4, Packet 7, Packet 2, Packet 5, Packet 8 (see the table above).

Solution, cont. (b) Flow 1 has a weight of 2, so its weighted Fi is Fi/2. Flow 2 has a weight of 4, so its weighted Fi is Fi/4. Flow 3 has a weight of 3, so its weighted Fi is Fi/3. Starting with the first packets of flows 1 and 2 (Source: Peterson & Davie, 2007, p. 737):

Packet  Size  Flow  Weighted Fi
1       200   1     100
2       200   1
3       160   2     40
4       120   2
5       160   2
6       210   3
7       150   3
8       90    3

Solution, cont. Computing the remaining weighted Fi values:

Packet  Size  Flow  Weighted Fi
1       200   1     100
2       200   1     200
3       160   2     40
4       120   2     70
5       160   2     110
6       210   3     70
7       150   3     120
8       90    3     150

Solution, cont. So, packets are sent in increasing order of weighted Fi: Packet 3, Packet 4, Packet 6, Packet 1, Packet 5, Packet 7, Packet 8, Packet 2 (see the table above).
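
For completeness, a short Python script that reproduces both orderings from Table 1 (illustrative only; the function and variable names are assumptions):

```python
# Recompute both transmission orders from Table 1.
# Each entry is (packet number, size, flow); all packets arrive at time 0.
packets = [(1, 200, 1), (2, 200, 1), (3, 160, 2), (4, 120, 2),
           (5, 160, 2), (6, 210, 3), (7, 150, 3), (8, 90, 3)]
weights = {1: 2, 2: 4, 3: 3}   # part (b): weights for flows 1, 2 and 3

def transmission_order(packets, weights=None):
    cumulative, tags = {}, []
    for pkt, size, flow in packets:
        cumulative[flow] = cumulative.get(flow, 0) + size    # per-flow Fi
        w = weights[flow] if weights else 1
        tags.append((cumulative[flow] / w, pkt))             # (weighted) Fi
    return [pkt for _, pkt in sorted(tags)]

print("Fair queuing:         ", transmission_order(packets))           # [3, 1, 6, 4, 7, 2, 5, 8]
print("Weighted fair queuing:", transmission_order(packets, weights))  # [3, 4, 6, 1, 5, 7, 8, 2]
```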

Quality of Service – Approaches to QoS support: fine-grained approaches, which provide QoS to individual applications or flows, and coarse-grained approaches, which provide QoS to large classes of data or aggregated traffic. In the first category we find “Integrated Services,” a QoS architecture developed in the IETF and often associated with RSVP (Resource Reservation Protocol). In the second category lies “Differentiated Services,” which is probably the most widely deployed QoS mechanism.

Reference: Computer Networks: A Systems Approach by Larry Peterson and Bruce Davie, Morgan Kaufmann (4th edition, ISBN 0-12-370548-7).