
Packet Scheduling and Buffer Management in Switches. S. Keshav: "An Engineering Approach to Networking"


2 Packet Scheduling and Buffer Management in Switches. S. Keshav: "An Engineering Approach to Networking"

3 Original Design Goals
– Deliverability
– Survivability
– Speed

4 Statistical Gain
[Figure: n flows with rates r_0, …, r_(n-1) entering a switch. Without statistical gain the output link needs R = Σ r_i; with statistical multiplexing a rate R' << Σ r_i suffices.]
r_i is the rate of the i-th input.

5 Internet Today: Best Effort
Packet scheduling: FCFS. Queue management: Drop Tail.
– Advantage: simplicity
– Drawback: flat service, no guarantees!

6 From Best Effort to Guaranteed Service? [Figure: flows competing for a shared resource.]

7 Zoom on a Router [Figure: the switch inside a router.]

8 Scheduling Functions
1) Packet scheduling: select the next packet that will use the link (allocates queuing delays)
2) Queue management: manage the shortage of storage for waiting packets (allocates loss rates)

9 Scheduling Is Necessary If
On networks with high statistical fluctuations (more so for packet-switched than for circuit-switched):
– we need GUARANTEES, or
– we need FAIRNESS

10 A Limit: The Law of Conservation
In words: you cannot decrease the mean delay of one flow without increasing the mean delay of other flows.
Formally: for each flow f_i,
– λ_i = mean arrival rate
– x_i = mean service time
– q_i = mean waiting time
Σ_i λ_i · x_i · q_i = constant (whatever the scheduling policy)

11 Example Consider ATM virtual circuits A and B with arrival rates 10 and 25 Mbps that share an OC-3 (155 Mbps) link. Packet size is PS.
– Under FCFS, the mean queuing delay is 0.5 ms for both A and B.
– A researcher claims to have designed a new scheduling policy in which A's mean delay is reduced by 0.4 ms and B's mean delay is reduced by 0.2 ms. Is this possible?

12 Example (Cont'd) [Figure: flows of 10 Mbps and 25 Mbps entering a switch with a 155 Mbps output link.]

13 Example (Cont'd)
FCFS:
– λ_A = 10 Mbps, x_A = PS/155 Mbps, q_A = 0.5 ms
– λ_B = 25 Mbps, x_B = PS/155 Mbps, q_B = 0.5 ms
New scheduling policy:
– λ_A = 10 Mbps, x_A = PS/155 Mbps, q_A = 0.1 ms
– λ_B = 25 Mbps, x_B = PS/155 Mbps, q_B = 0.3 ms
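A quick check of the claim using the conservation law (a minimal sketch in Python; since x_A = x_B = PS/155 Mbps, the common factor cancels and we can compare Σ λ_i · q_i directly):

```python
# Conservation law: sum of lambda_i * x_i * q_i must stay constant.
# x_i = PS / 155 Mbps is identical for both flows, so it factors out
# and we can compare sum(lambda_i * q_i) across policies.

def weighted_delay_sum(flows):
    """flows: list of (arrival_rate_mbps, mean_delay_ms) pairs."""
    return sum(rate * delay for rate, delay in flows)

fcfs = weighted_delay_sum([(10, 0.5), (25, 0.5)])   # 17.5
claim = weighted_delay_sum([(10, 0.1), (25, 0.3)])  # 8.5

print(fcfs, claim)  # 17.5 != 8.5 -> the claimed policy violates the
                    # conservation law, so it is impossible: you cannot
                    # reduce the mean delay of BOTH flows.
```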

14 Fairness
Max-min fair share:
– Resources are allocated in order of increasing demand
– No source gets a resource share larger than its demand
– Sources with unsatisfied demands get an equal share of the resource
Example: compute the max-min fair allocation for a set of four sources with demands 2, 2.6, 4, and 5, sharing a resource of capacity 10 (see the sketch below).
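The example can be worked out by progressive filling: repeatedly split the remaining capacity equally among the unsatisfied sources. A minimal Python sketch (the function name is illustrative):

```python
def max_min_fair(demands, capacity):
    """Max-min fair allocation: allocate in order of increasing demand;
    no source gets more than it asked for; leftovers are split equally."""
    alloc = {}
    remaining = sorted(range(len(demands)), key=lambda i: demands[i])
    while remaining:
        share = capacity / len(remaining)       # equal split of what is left
        i = remaining[0]
        if demands[i] <= share:                 # smallest demand is satisfied:
            alloc[i] = demands[i]               # give it what it asked for
            capacity -= demands[i]              # and redistribute the rest
            remaining.pop(0)
        else:                                   # no remaining demand fits, so
            for j in remaining:                 # everyone gets the equal share
                alloc[j] = share
            break
    return [alloc[i] for i in range(len(demands))]

print(max_min_fair([2, 2.6, 4, 5], 10))  # [2, 2.6, 2.7, 2.7] (up to float rounding)
```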

15 Fair Scheduling Can we implement a max-min fair allocation? Generalized Processor Sharing (GPS, ideal): serve an infinitesimal amount from each "connection" in turn.

16 Approximations of GPS
Round Robin: serve one packet at a time instead of an infinitesimal quantity.
Weighted Round Robin (WRR): round robin, but with a weight per connection.
Example: suppose connections A, B, and C have the same packet size and weights 0.5, 0.75, and 1.0. How many packets from each connection should the scheduler serve in each round? (See the sketch below.)
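One way to get the answer: scale the weights to the smallest integers in the same ratio, 0.5 : 0.75 : 1.0 = 2 : 3 : 4, so the scheduler serves 2, 3, and 4 packets per round. A small Python sketch of that computation:

```python
from fractions import Fraction
from math import gcd

def wrr_packets_per_round(weights):
    """Scale the weights to the smallest integers in the same ratio:
    a WRR scheduler serves that many packets per connection per round."""
    fracs = [Fraction(w).limit_denominator(1000) for w in weights]
    lcm = 1
    for f in fracs:
        lcm = lcm * f.denominator // gcd(lcm, f.denominator)
    return [int(f * lcm) for f in fracs]

print(wrr_packets_per_round([0.5, 0.75, 1.0]))  # -> [2, 3, 4]
```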

17 Approximations of GPS (2) Problem: how do we handle variable packet sizes, especially if we do not know the mean packet size? Answer: deficit round robin (sketched below).
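Deficit round robin copes with variable packet sizes without knowing them in advance: each backlogged flow earns a fixed quantum of credit per round and may only send packets its accumulated credit covers. A minimal sketch (the queue contents and quantum value are illustrative):

```python
from collections import deque

def deficit_round_robin(queues, quantum, send):
    """Run DRR until all queues drain.
    queues: dict flow -> deque of packet sizes (bytes).
    quantum: bytes of credit each backlogged flow earns per round."""
    deficits = {flow: 0 for flow in queues}
    while any(queues.values()):
        for flow, q in queues.items():
            if not q:
                deficits[flow] = 0                # idle flows keep no credit
                continue
            deficits[flow] += quantum             # earn one quantum per round
            while q and q[0] <= deficits[flow]:   # send while the credit covers
                deficits[flow] -= q[0]            # the head-of-line packet
                send(flow, q.popleft())

# Example: variable-size packets, quantum of 500 bytes.
queues = {"A": deque([200, 750, 300]), "B": deque([1500]), "C": deque([100, 100])}
deficit_round_robin(queues, 500, lambda flow, size: print(flow, size))
```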

18 Bad News
Per-flow fair queueing must keep:
– state variables per flow
– a queue per flow
About 5000 simultaneously active flows may be handled by current hardware → scalability?
Complex buffer management (variable packet sizes). Changing traffic patterns.

19 Good News Close to the user there are fewer flows. Core routers are "overprovisioned". There is only a small number of "bad guys".

20 Architecture [Figure: edge links at 45 Mbps feeding a core of OC-3 to OC-192 links.]

21 Two Options 1) Implement fair queueing (or a variant such as DRR) on all routers. 2) Approximate fair queueing.

22 Approximate Fair Queueing
Keep the core router simple:
– Use FIFO to allocate queuing delays
– Differentiate service through differentiated buffer management (allocate loss rates)
Push complexity to the edge of the network:
– Keep per-flow state
– Label packets for differentiated service

23 Core-Stateless Fair Queueing
At the edge router:
– Estimate the rate r_i of each flow (requires keeping per-flow state)
– Label packets with the estimated rates r_i
At the core router:
– Estimate the fair share rate α
– Compute each packet's dropping probability from r_i and α (no per-flow state needed; the flow is characterized by its label r_i)
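The core-router drop rule in CSFQ needs only the packet label r_i and the estimated fair share α: drop with probability max(0, 1 - α/r_i), which throttles every flow to roughly α. A minimal sketch (the traffic numbers are illustrative):

```python
import random

def csfq_forward(rate_label, alpha):
    """Core-router decision in CSFQ: drop with probability
    max(0, 1 - alpha / r_i), where r_i is the rate the edge router
    wrote into the packet and alpha is the estimated fair share."""
    drop_prob = max(0.0, 1.0 - alpha / rate_label)
    return random.random() >= drop_prob          # True = forward the packet

# A flow labeled 4 Mbps when the fair share is 1 Mbps keeps ~25% of
# its packets, i.e. it is throttled down to roughly the fair share.
kept = sum(csfq_forward(4.0, 1.0) for _ in range(100_000))
print(kept / 100_000)   # ~0.25
```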

24 Active Research Areas Labeling techniques. Corresponding packet-drop techniques (Active Queue Management). End-to-end performance of the two techniques above.

25 Buffer Management Techniques (Allocating Loss Rates)
– Drop Tail
– Drop Front
– RED: Random Early Detection
– CHOKe: stateless fair queue management
– Stochastic Fair Blue

26 RED Routers (Random Early Detection)
Objectives:
– keep the queue length small
– keep link utilization high
– warn senders early, to avoid massive losses and synchronized back-offs
How? Detect incipient congestion (not just a temporary burst).
Read the Floyd and Jacobson paper (1993).

27 RED Implementation
Maintain an average queue length AQL (different from the current queue length). When a packet arrives at the queue:
– if AQL < Tmin, forward the packet
– if Tmin <= AQL <= Tmax, "mark" the packet with probability p(AQL)
– if AQL > Tmax, "mark" every packet
[Figure: marking probability as a function of AQL, rising from 0 at Tmin to its maximum at Tmax.]
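A minimal sketch of the RED arrival test (it omits the count-based probability correction from the Floyd and Jacobson paper; the EWMA weight, p_max, and the thresholds are illustrative tuning values, not from the slide):

```python
import random

W, P_MAX, T_MIN, T_MAX = 0.002, 0.1, 5, 15   # thresholds in packets

aql = 0.0  # average queue length (an EWMA, not the instantaneous length)

def on_packet_arrival(current_queue_len):
    """Return True to forward the packet, False to mark/drop it."""
    global aql
    aql = (1 - W) * aql + W * current_queue_len   # update the average
    if aql < T_MIN:
        return True                               # no congestion: forward
    if aql > T_MAX:
        return False                              # heavy congestion: mark all
    # Incipient congestion: mark with probability growing linearly in AQL.
    p = P_MAX * (aql - T_MIN) / (T_MAX - T_MIN)
    return random.random() >= p
```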

28 RED Performance
Keeps delays low (short queue). "Marks" packets without degrading performance. RED is implemented by many router manufacturers (e.g. Cisco).
Shortcomings:
– Does not enforce fairness
– Thresholds and other algorithm parameters are tuned by trial and error (there is work on adaptive RED)

29 Problems with RED
Parameter tuning:
– thresholds
– slope of the probability function
Provider thinking: why drop a packet that I can handle?

30 WRED Routers How would you design a WRED (Weighted RED)? One possible answer is sketched below.
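One possible design: run the RED test with per-class thresholds against the shared average queue length, so lower-priority classes are dropped earlier and more aggressively. A sketch (class names and parameter values are illustrative assumptions):

```python
import random

CLASS_PARAMS = {            # class -> (t_min, t_max, p_max)
    "premium":     (10, 30, 0.05),   # dropped late and gently
    "best_effort": ( 5, 15, 0.20),   # dropped early and aggressively
}

def wred_admit(aql, traffic_class):
    """RED test with per-class thresholds on the shared average queue."""
    t_min, t_max, p_max = CLASS_PARAMS[traffic_class]
    if aql < t_min:
        return True
    if aql > t_max:
        return False
    return random.random() >= p_max * (aql - t_min) / (t_max - t_min)
```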

31 CHOKe: A Stateless Queue Management
For each new packet NP:
– If AQL <= Tmin: admit NP.
– Otherwise, draw a packet RP at random from the queue. If NP and RP belong to the same flow: drop both.
– Otherwise, if AQL <= Tmax: admit NP with probability p; else drop NP.
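The same decision procedure as a Python sketch (the Packet type and its fields are illustrative):

```python
import random
from collections import namedtuple

Packet = namedtuple("Packet", "flow seq")

def choke_arrival(queue, new_pkt, aql, t_min, t_max, p):
    """CHOKe admission test; queue is a list of buffered Packets."""
    if aql <= t_min or not queue:
        queue.append(new_pkt)                  # lightly loaded: admit
        return
    victim = random.choice(queue)              # draw a packet at random
    if victim.flow == new_pkt.flow:            # same flow: likely a hog,
        queue.remove(victim)                   # so drop both packets
        return
    if aql <= t_max and random.random() < p:
        queue.append(new_pkt)                  # admit with probability p
    # else: drop the incoming packet
```

Because a heavy flow occupies more buffer slots, its packets are more likely to match the randomly drawn one, so it is penalized without keeping any per-flow state.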

32 Blue Algorithm
Maintain a probability pm used to mark or drop packets.
– Upon packet loss: increase pm (ensuring some minimum delay between successive increases)
– Upon an idle link: decrease pm
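A minimal sketch of Blue's update rules (the step sizes and the freeze time between updates are tuning parameters not given on the slide; the slide only requires a delay between increases, and this sketch applies it to both directions):

```python
D_INC, D_DEC, FREEZE_TIME = 0.02, 0.002, 0.1   # illustrative values (seconds)

pm, last_update = 0.0, 0.0   # marking probability and time of last change

def on_packet_loss(now):
    """Queue overflowed: mark/drop more aggressively, but at most once
    per FREEZE_TIME so a single burst does not make pm overshoot."""
    global pm, last_update
    if now - last_update > FREEZE_TIME:
        pm = min(1.0, pm + D_INC)
        last_update = now

def on_link_idle(now):
    """Link went idle: we are marking too much; back pm off slowly."""
    global pm, last_update
    if now - last_update > FREEZE_TIME:
        pm = max(0.0, pm - D_DEC)
        last_update = now
```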

