Lecture 4#-1 Scheduling: Buffer Management

Lecture 4#-2 The setting

Lecture 4#-3 Buffer Scheduling
- Which packet to send next?
- What happens when the buffer is full?
- Which packet to discard?

Lecture 4#-4 Requirements of scheduling
- An ideal scheduling discipline
  - is easy to implement
  - is fair and protective
  - provides performance bounds
- Each scheduling discipline makes a different trade-off among these requirements

Lecture 4#-5 Ease of implementation
- The scheduling discipline has to make a decision once every few microseconds!
- It should be implementable in a few instructions or in hardware
  - for hardware: the critical constraint is VLSI space
  - complexity of the enqueue + dequeue operations
- Work per packet should scale less than linearly with the number of active connections

Lecture 4#-6 Fairness
- Intuitively
  - each connection should get no more than its demand
  - the excess, if any, is shared equally
- Fairness also provides protection
  - traffic hogs cannot overrun others
  - heavy users are automatically isolated

Lecture 4#-7 Max-min Fairness: Single Buffer
- Allocate bandwidth equally among all users
- If a user does not need its full share, redistribute the excess
- Maximize the minimum bandwidth provided to any flow not receiving its full request
- Example: compute the max-min fair allocation for four sources with demands 2, 2.6, 4, 5 when the resource has a capacity of 10: s1 = 2; s2 = 2.6; s3 = s4 = 2.7 (see the sketch below)
- More complicated in a network setting
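
A minimal sketch of the water-filling computation behind the example above; the function name and structure are illustrative, not from the slides:

    def max_min_fair(demands, capacity):
        """Water-filling max-min fair allocation of a single resource."""
        alloc = [0.0] * len(demands)
        remaining = list(range(len(demands)))      # flows still below their demand
        while remaining and capacity > 1e-12:
            share = capacity / len(remaining)      # equal split of what is left
            satisfied = [i for i in remaining if demands[i] - alloc[i] <= share]
            if not satisfied:
                for i in remaining:                # nobody can be fully satisfied:
                    alloc[i] += share              # everyone gets the equal share
                break
            for i in satisfied:                    # satisfy the small demands first,
                capacity -= demands[i] - alloc[i]  # then redistribute the excess
                alloc[i] = demands[i]
                remaining.remove(i)
        return alloc

    print(max_min_fair([2, 2.6, 4, 5], 10))        # -> [2, 2.6, 2.7, 2.7] (up to rounding)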

Lecture 4#-8 FCFS / FIFO Queuing
- Simplest algorithm, widely used
- Scheduling is done using the first-in first-out (FIFO) discipline
- All flows are fed into the same queue

Lecture 4#-9 FIFO Queuing (cont'd)
- First-In First-Out (FIFO) queuing
  - first arrival, first transmission
  - completely dependent on arrival time
  - no notion of priority or per-flow buffer allocation
  - if there is no space in the queue, the packet is discarded
  - flows can interfere with each other: no isolation, so a malicious flow can monopolize the link
  - various hacks exist for priority, random drops, ...

Lecture 4#-10 Priority Queuing
- A priority index is assigned to each packet upon arrival
- Packets are transmitted in ascending order of priority index
  - priorities 0 through n-1
  - priority 0 is always serviced first
- Priority i is serviced only if queues 0 through i-1 are empty
- The highest priority class has the
  - lowest delay,
  - highest throughput,
  - lowest loss
- Lower priority classes may be starved by higher priority classes
- Preemptive and non-preemptive versions exist (see the sketch below)
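
A minimal sketch of non-preemptive strict-priority dequeueing as described above; the class name and structure are illustrative:

    from collections import deque

    class PriorityScheduler:
        """Non-preemptive strict priority: serve the lowest non-empty priority index."""
        def __init__(self, num_priorities):
            self.queues = [deque() for _ in range(num_priorities)]

        def enqueue(self, packet, priority):
            self.queues[priority].append(packet)

        def dequeue(self):
            for q in self.queues:        # priority i is served only if 0..i-1 are empty
                if q:
                    return q.popleft()
            return None                  # all queues empty

    sched = PriorityScheduler(3)
    sched.enqueue("low", 2)
    sched.enqueue("high", 0)
    print(sched.dequeue())               # -> "high"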

Lecture 4#-11 Priority Queuing
[diagram: high-priority and low-priority queues feed the transmission link; the low-priority queue is served only when the high-priority queue is empty; packets are discarded when a queue is full]

Lecture 4#-12 Round Robin: Architecture
[diagram: per-flow queues (flow 1, flow 2, flow 3) feeding the transmission link through a round-robin scheduler]
- Hardware requirement: jump to the next non-empty queue
- Round Robin: scan the class queues, serving one packet from each class that has a non-empty queue

Lecture 4#-13 Round Robin Scheduling
- Round Robin: scan the class queues, serving one packet from each class that has a non-empty queue

Lecture 4#-14 Round Robin (cont'd)
- Characteristics:
  - classify incoming traffic into flows (source-destination pairs)
  - round-robin among the flows
- Problems:
  - ignores packet length (addressed by GPS, fair queuing)
  - inflexible allocation of weights (addressed by WRR, WFQ)
- Benefits:
  - protection against heavy users (why?)

Lecture 4#-15 Weighted Round-Robin
- Weighted round-robin
  - a different weight w_i per flow
  - flow j can send w_j packets in a period
  - the period has length Σ_j w_j (see the sketch below)
- Disadvantages
  - variable packet sizes are not handled
  - fair only over time scales longer than one period; if a connection has a small weight, or the number of connections is large, this may lead to long periods of unfairness
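
A minimal sketch of one weighted round-robin period, assuming fixed-size packets; the names are illustrative:

    from collections import deque

    def wrr_period(queues, weights):
        """One WRR period: flow j gets up to weights[j] transmission opportunities."""
        sent = []
        for flow, w in enumerate(weights):
            for _ in range(w):
                if queues[flow]:
                    sent.append(queues[flow].popleft())
        return sent                      # the period length is at most sum(weights)

    queues = [deque(["a1", "a2", "a3"]), deque(["b1"])]
    print(wrr_period(queues, [2, 1]))    # -> ['a1', 'a2', 'b1']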

Lecture 4#-16 DRR algorithm
- Choose a quantum of bits to serve from each connection, in order
- Each connection has a deficit counter (to store credits) with initial value zero
- For each HOL (head-of-line) packet:
  - if its size is <= (quantum + credit), send it and save the excess as credit
  - otherwise, save the entire quantum as credit
  - if there is no packet to send, reset the counter to zero (to remain fair)
- Easier to implement than other fair policies
  - e.g. WFQ

Lecture 4#-17 Deficit Round-Robin
- DRR can handle variable packet sizes
[diagram: queues A, B, C with their head-of-line packets; quantum size: 1000 bytes]
- 1st round
  - A's count: 1000
  - B's count: 200 (served twice)
  - C's count: 1000
- 2nd round
  - A's count: 500 (served)
  - B's count: 0
  - C's count: 800 (served)
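
A minimal sketch of one DRR round over per-flow queues of variable-size packets, following the algorithm on slide 4#-16. The packet sizes in the usage example are assumptions chosen so that the counters match the numbers on this slide; the original sizes are not recoverable from the transcript:

    from collections import deque

    def drr_round(queues, deficits, quantum):
        """One DRR round: each flow gains `quantum` bytes of credit and sends
        head-of-line packets while its credit covers them."""
        sent = []
        for flow, q in enumerate(queues):
            if not q:
                deficits[flow] = 0       # idle queue: reset the counter to remain fair
                continue
            deficits[flow] += quantum
            while q and q[0] <= deficits[flow]:
                size = q.popleft()       # a packet is represented by its size in bytes
                deficits[flow] -= size
                sent.append((flow, size))
        return sent

    # Quantum = 1000 bytes; flows A, B, C with assumed packet sizes
    queues = [deque([1500]), deque([500, 300]), deque([1200])]
    deficits = [0, 0, 0]
    print(drr_round(queues, deficits, 1000), deficits)  # B served twice; counts 1000, 200, 1000
    print(drr_round(queues, deficits, 1000), deficits)  # A and C served; counts 500, 0, 800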

Lecture 4#-18 DRR: performance
- Handles variable-length packets
- Backlogged sources share bandwidth equally
- Preferably, packet size < quantum
- Simple to implement
  - similar to round robin

Lecture 4#-19 Generalized Processor Sharing

Lecture 4#-20 Generalized Processor Sharing (GPS)
- The methodology:
  - assume we can send infinitesimal packets (a single bit)
  - perform round robin at the bit level
- An idealized policy for splitting bandwidth
- GPS is not implementable
- Used mainly to evaluate and compare real approaches
- Has weights that give the flows' relative service shares

Lecture 4#-21 GPS: Example
- Packets of size 10, 20 & 30 arrive at time 0

Lecture 4#-22 GPS: Example
- Packets: size 15 at time 0, size 20 at time 5, size 10 at time 15

Lecture 4#-23 GPS: Example
- Packets: size 15 at time 0, size 20 at time 5, size 10 at time 15, size 15 at time 18

Lecture 4#-24 GPS: Adding weights
- Flow j has weight w_j
- The output rate of flow j, R_j(t), obeys a weighted-share relation (see below)
- For the un-weighted case (w_j = 1) this reduces to an equal share
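
The rate formulas on this slide appear only as images in the transcript; the standard GPS relations they most likely show (a reconstruction, with C denoting the link capacity and N the number of flows) are:

    % weighted GPS guarantee for a backlogged flow j
    \frac{R_j(t)}{R_i(t)} \ge \frac{w_j}{w_i}
    \qquad\Longrightarrow\qquad
    R_j(t) \ge \frac{w_j}{\sum_i w_i}\, C

    % un-weighted case (w_j = 1)
    R_j(t) \ge \frac{C}{N}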

Lecture 4#-25 Fairness using GPS
- Non-backlogged connections receive what they ask for
- Backlogged connections share the remaining bandwidth in proportion to their assigned weights
- Every backlogged connection i receives the service rate given below, where Active(t) is the set of backlogged flows at time t
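
The service-rate expression is likewise an image in the transcript; the standard form (assuming C denotes the link capacity available to the backlogged flows) is:

    R_i(t) = \frac{w_i}{\sum_{j \in \mathrm{Active}(t)} w_j}\, C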

Lecture 4#-26 GPS: Measuring unfairness
- No packet discipline can be as fair as GPS
  - while a packet is being served, we are unfair to the others
- The degree of unfairness can be bounded
- Define: work_A(i,a,b) = number of bits transmitted for flow i in the interval [a,b] by policy A
- Absolute fairness bound for policy S:
  - max ( work_GPS(i,a,b) - work_S(i,a,b) )
- Relative fairness bound for policy S:
  - max ( work_S(i,a,b) - work_S(j,a,b) ), assuming both i and j are backlogged in [a,b]

Lecture 4#-27 GPS: Measuring unfairness
- Assume fixed packet size and round robin
- Relative bound: 1
- Absolute bound: < 1
- Challenge: handle variable-size packets

Lecture 4#-28 Weighted Fair Queueing

Lecture 4#-29 GPS to WFQ
- We can't implement GPS
- So, let's see how to emulate it
- We want to be as fair as possible
- But also to have an efficient implementation

Lecture 4#-30

Lecture 4#-31 GPS vs WFQ (equal length)
[diagram: two queues, each holding one packet at t=0]
- GPS: both packets are served at rate 1/2; both complete service at t=2
- Packet-by-packet system (WFQ): queue 1 is served first at rate 1, then queue 2 is served at rate 1

Lecture 4#-32 GPS vs WFQ (different length)
[diagram: two queues with packets of different lengths at t=0; GPS: both packets served at rate 1/2, then queue 2 served at rate 1 after queue 1's packet completes; WFQ: the packet from queue 1 is served at rate 1 while the packet from queue 2 waits, then queue 2's packet is served at rate 1]

Lecture 4#-33 GPS vs WFQ (weights: queue 1 = 1, queue 2 = 3)
[diagram: two queues, each holding one packet at t=0; GPS: the packet from queue 1 is served at rate 1/4 and the packet from queue 2 at rate 3/4; WFQ: queue 2 is served first at rate 1 while queue 1's packet waits, then queue 1's packet is served at rate 1]

Lecture 4#-34 Completion times
- Emulating a policy:
  - assign each packet p a value time(p)
  - send packets in order of time(p)
- FIFO:
  - on arrival of a packet p from flow j: last = last + size(p); time(p) = last;
  - a perfect emulation...

Lecture 4#-35 Round Robin Emulation
- Round Robin (equal-size packets)
  - on arrival of packet p from flow j:
    last(j) = last(j) + 1; time(p) = last(j);
- An idle queue is not handled properly! Fix:
  - when sending packet q: round = time(q)
  - on arrival: last(j) = max{round, last(j)} + 1; time(p) = last(j);
- What kind of low-level scheduling does this emulate?

Lecture 4#-36 Round Robin Emulation
- Round Robin (equal-size packets)
  - when sending packet q:
    round = time(q); flow_num = flow(q);
  - on arrival of packet p to flow j:
    last(j) = max{round, last(j)}
    IF (j < flow_num) AND (last(j) = round) THEN last(j) = last(j) + 1
    time(p) = last(j);
- What kind of low-level scheduling does this emulate?

Lecture 4#-37 GPS emulation (WFQ)
- On arrival of packet p from flow j:
  - last(j) = max{last(j), round} + size(p);
  - using weights: last(j) = max{last(j), round} + size(p)/w_j;
- How should we compute the round (virtual time)?
  - we would like to simulate GPS:
  - round(t+x) = round(t) + x/B(t)
  - B(t) = the number of active flows
- A flow j is active while round(t) < last(j)
- (See the sketch below.)
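
A minimal sketch of this bookkeeping in code, assuming a unit-rate link; the class and method names are illustrative, and the virtual-time update is simplified (a full implementation must also detect flows becoming inactive between events):

    import heapq

    class WFQ:
        """Finish-tag bookkeeping for WFQ over a unit-rate link."""
        def __init__(self, weights):
            self.w = weights
            self.last = {f: 0.0 for f in weights}   # finish tag of each flow's latest packet
            self.round = 0.0                        # GPS virtual time ("round")
            self.round_t = 0.0                      # real time of the last round update
            self.heap = []                          # (finish_tag, seq, flow, size)
            self.seq = 0

        def _advance_round(self, t):
            # a flow j is active while round(t) < last(j); round grows at 1 / (active weight)
            active = [f for f in self.w if self.last[f] > self.round]
            rate = 1.0 / sum(self.w[f] for f in active) if active else 0.0
            self.round += (t - self.round_t) * rate
            self.round_t = t

        def arrive(self, t, flow, size):
            self._advance_round(t)
            self.last[flow] = max(self.last[flow], self.round) + size / self.w[flow]
            heapq.heappush(self.heap, (self.last[flow], self.seq, flow, size))
            self.seq += 1

        def next_packet(self):
            # WFQ transmits the queued packet with the smallest GPS finish tag
            return heapq.heappop(self.heap) if self.heap else None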

Lecture 4#-38 WFQ: Example (equal size)
- Time 0: packets arrive to flows 1 & 2; last(1) = 1; last(2) = 1; Active = 2; round(0) = 0; send packet 1
- Time 1: a packet arrives to flow 3; round(1) = 1/2; Active = 3; last(3) = 3/2; send packet 2
- Time 2: a packet arrives to flow 4; round(2) = 5/6; Active = 4; last(4) = 11/6; send packet 3
- Time 2+2/3: round = 1; Active = 2
- Time 3: round = 7/6; send packet 4
- Time 3+2/3: round = 3/2; Active = 1
- Time 4: round = 11/6; Active = 0

Lecture 4#-39 Worst-Case Fair Weighted Fair Queueing (WF²Q)

Lecture 4#-40 Worst-Case Fair Weighted Fair Queueing (WF²Q)
- WF²Q fixes an unfairness problem in WFQ
  - WFQ: among the packets waiting in the system, pick the one that will finish service first under GPS
  - WF²Q: among the packets waiting in the system that have already started service under GPS, pick the one that will finish service first under GPS
- WF²Q provides service closer to GPS
  - the difference in packet service time is bounded by the maximum packet size
- (See the sketch below.)
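
A minimal sketch contrasting the two selection rules, assuming each queued packet carries the GPS start and finish tags computed as on slide 4#-37; the data layout is illustrative:

    def wfq_pick(packets):
        """WFQ: smallest GPS finish tag among all queued packets."""
        return min(packets, key=lambda p: p["finish"]) if packets else None

    def wf2q_pick(packets, virtual_time):
        """WF2Q: restrict to packets whose GPS service has already started
        (start tag <= current virtual time), then take the smallest finish tag."""
        eligible = [p for p in packets if p["start"] <= virtual_time]
        return min(eligible, key=lambda p: p["finish"]) if eligible else None

    queue = [{"flow": 1, "start": 0.0, "finish": 2.0},
             {"flow": 2, "start": 1.5, "finish": 1.8}]
    print(wfq_pick(queue)["flow"])         # -> 2 (smallest finish tag overall)
    print(wf2q_pick(queue, 1.0)["flow"])   # -> 1 (flow 2 has not yet started under GPS)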

Lecture 4#-41

Lecture 4#-42

Lecture 4#-43

Lecture 4#-44

Lecture 4#-45 Multiple Buffers

Lecture 4#-46 Buffers
[diagram: possible buffer locations around the switch fabric]
- Input ports
- Output ports
- Inside the fabric
- Shared memory
- A combination of all of the above

Lecture 4#-47 Input Queuing
[diagram: input-queued switch, with queues at the inputs of the fabric]

Lecture 4#-48 Input Buffer: properties
- The input queue need not run faster than the input line
- Needs an arbiter (running N times faster than the input)
- With FIFO queues: Head Of Line (HOL) blocking
- Utilization: with uniformly random destinations, throughput is limited to 2 - sqrt(2) ≈ 59% due to HOL blocking

Lecture 4#-49 Head of Line Blocking

Lecture 4#-50

Lecture 4#-51

Lecture 4#-52 Overcoming HOL blocking: look-ahead
- The fabric looks ahead into the input buffer for packets that could be transferred were they not blocked by the head of line
- The improvement depends on the depth of the look-ahead
- This corresponds to virtual output queues (VOQs), where each input port keeps a separate buffer for each output port (see the sketch below)
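
A minimal sketch of the virtual-output-queue structure for one input port of an N x N switch; the names are illustrative:

    from collections import deque

    class VOQInput:
        """One input port with a separate (virtual output) queue per output port."""
        def __init__(self, num_outputs):
            self.voq = [deque() for _ in range(num_outputs)]

        def enqueue(self, packet, output):
            # packets destined to different outputs never block each other
            self.voq[output].append(packet)

        def head_for(self, output):
            # the arbiter inspects the head of each VOQ independently, so a busy
            # output does not hold back packets destined to other outputs
            return self.voq[output][0] if self.voq[output] else None

    port = VOQInput(4)
    port.enqueue("p1", 2)
    port.enqueue("p2", 3)
    print(port.head_for(3))   # -> "p2", even if output 2 is currently blocked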

Lecture 4#-53 Input Queuing: Virtual Output Queues
[diagram: each input port holds one queue per output port]

Lecture 4#-54 Overcoming HOL blocking: output expansion (Karol, Hluchyj and Morgan, IEEE Transactions on Communications, 1987)
- Each output port is expanded to L output ports
- The fabric can transfer up to L packets to the same output instead of one cell

Lecture 4#-55 Input Queuing: Output Expansion
[diagram: fabric with L links per output port]

Lecture 4#-56 Output Queuing The “ideal”

Lecture 4#-57 Output Buffer: properties
- No HOL problem
- The output queue needs to run faster than the input lines
- Must provide for up to N packets arriving to the same queue at once
  - solution: limit the number of input lines that can be destined to the same output

Lecture 4#-58 Shared Memory
[diagram: fabric attached to a common memory]
- A common pool of buffers divided into linked lists indexed by output port number (see the sketch below)
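
A minimal sketch of that data structure: a shared pool of fixed-size buffer cells, a free list, and one linked list per output port; all names are illustrative:

    class SharedMemoryBuffer:
        """Shared buffer pool organized as linked lists indexed by output port."""
        def __init__(self, num_buffers, num_outputs):
            self.packet = [None] * num_buffers
            self.next = [i + 1 for i in range(num_buffers)]   # singly linked cells
            self.next[-1] = -1
            self.free = 0                                     # head of the free list
            self.head = [-1] * num_outputs                    # per-output list heads
            self.tail = [-1] * num_outputs

        def enqueue(self, packet, output):
            if self.free == -1:
                return False                                  # pool exhausted: drop
            cell, self.free = self.free, self.next[self.free]
            self.packet[cell], self.next[cell] = packet, -1
            if self.head[output] == -1:
                self.head[output] = cell                      # list was empty
            else:
                self.next[self.tail[output]] = cell
            self.tail[output] = cell
            return True

        def dequeue(self, output):
            cell = self.head[output]
            if cell == -1:
                return None                                   # nothing queued for this port
            self.head[output] = self.next[cell]
            if self.head[output] == -1:
                self.tail[output] = -1
            self.next[cell], self.free = self.free, cell      # return the cell to the free list
            return self.packet[cell]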

Lecture 4#-59 Shared Memory: properties
- Packets are stored in memory as they arrive
- Resource sharing
- Easy to implement priorities
- Memory must be accessed at a speed equal to the sum of the input (or output) speeds
- Question: how to divide the space between the sessions?