Packet Scheduling (The rest of the dueling bandwidth story)

1 Packet Scheduling (The rest of the dueling bandwidth story)

2 Lab 9: Configuring a Linux Router
- Set NICs in 10 Mbps full-duplex mode
- Enable IPv4 forwarding
- Manually configure routing tables
- Install tcp_sink and udp_sink
- Generate traffic from tcp_gen and udp_gen
- Measure TCP/UDP traffic flows

3 Lab 9 Results
- What is the major issue?
- What impact did TCP's flow control have?
- What impact did UDP's flow control (or lack thereof) have?
- What implications does this have for today's Internet?

4 (figure)

5 Lab 9 (first part): Conclusions
- TCP's flow control mechanisms back off in the presence of UDP congestion
- UDP's lack of flow control mechanisms can cause link starvation for TCP flows
- TCP application performance (e-mail, web, FTP) can be degraded significantly by UDP traffic on the same shared link

6 Lab 9 (first part): Conclusions (cont.)
- UDP is the preferred protocol for most multimedia applications. Why?
- Future challenge for the Internet community: will multimedia applications of the Internet impair the performance of mainstay TCP applications?
- How can the industry manage this new Internet traffic without stifling the growth of new applications?

7 Lab 9 (second part): Strict Priority Scheduling
- Our first attempt to solve the problem of TCP and UDP interaction: priority scheduling
- Modify the Linux source code to implement a strict priority scheduler
- Priority is based on the layer-4 protocol: give TCP priority over UDP, which has no flow control
- Generate traffic from tcp_gen and udp_gen
- Measure TCP/UDP traffic flows
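The kernel modification itself is not reproduced in these slides. As an illustration only (not the lab's actual code), the dequeue rule that a strict priority scheduler implements boils down to something like the following sketch; the queue names and packet labels are hypothetical.

from collections import deque

def strict_priority_dequeue(queues):
    """Always serve the highest-priority non-empty queue; lower priorities
    get the link only when every higher-priority queue is empty."""
    for name in ("tcp", "udp"):            # priority order assumed in this sketch
        if queues[name]:
            return name, queues[name].popleft()
    return None                            # nothing to send

queues = {"tcp": deque(["t1", "t2"]), "udp": deque(["u1"])}
print(strict_priority_dequeue(queues))     # -> ('tcp', 't1')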

8 (figure)

9 Lab 9 (second part): Conclusions
- TCP's flow control mechanism is "greedy," but "timid."
- Strict priority scheduling removes the "timid" aspect: TCP greedily consumes all available bandwidth.
- We have not solved the problem; we have just shifted it from UDP to TCP.

10 The "Real" Solution: Fair Scheduling

11 Introduction
- What is scheduling?
- Advantages of scheduling
- Scheduling "wish list"
- Scheduling policies
- Generalized Processor Sharing (GPS)
- Packetized GPS algorithms
- Stochastic Fair Queuing (SFQ) and Class Based Queuing (CBQ)

12 Motivation for Scheduling
- TCP application performance is degraded significantly by UDP traffic on the same shared link
- Different versions of TCP may not coexist fairly (e.g., TCP Reno vs. TCP Vegas)
- Quality of Service (QoS) requirements for the next-generation Internet
- Most important: finishes the story about TCP and UDP traffic mixtures (e-mail and web versus video teleconferencing and Voice over IP)

13 What is Scheduling?
- Sharing of bandwidth always results in contention
- A scheduling discipline resolves contention: which packet should be serviced next?
- Future networks will need the capability to share resources fairly and provide performance guarantees
- Implications for QoS?

14 Where does scheduling occur?
- Anywhere contention may occur
- At every layer of the protocol stack
- This discussion focuses on MAC/network-layer scheduling, at the output queues of switches and routers

15 Advantages of Scheduling
1) Differentiation - different users can have different QoS over the same network
2) Performance isolation - the behavior of each flow or class is independent of all other network traffic
3) QoS resource allocation - with respect to bandwidth, delay, and loss characteristics
4) Fair resource allocation - includes both short-term and long-term fairness

16 Scheduling "Wish List"
An ideal scheduling discipline...
1) Is amenable to high-speed implementation
2) Achieves (weighted) fairness
3) Supports multiple QoS classes
4) Provides performance bounds
5) Allows easy admission control decisions
Does such an algorithm exist that can satisfy all these requirements?

17 Requirement 1: High-Speed Implementation
- The scheduler must make a decision once every few microseconds
- Should be implementable in hardware; critical constraint: available VLSI area
- Should be scalable and efficient in software; critical constraint: order of growth per flow or class

18 Requirement 2: Fairness
- A scheduling discipline allocates a scarce resource
- Fairness is defined on both a short-term and a long-term basis
- Fairness is evaluated according to the max-min criterion

19 Max-Min Fairness Criterion
- Each connection gets no more bandwidth than it needs
- Excess bandwidth, if any, is shared equally
- Example: a Generalized Processor Sharing (GPS) scheduler managing three flows with equal priority
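As a concrete illustration of the criterion (not part of the original slide), here is a small water-filling sketch; the link capacity and per-flow demands in Mb/s are made-up numbers.

def max_min_share(capacity, demands):
    """Water-filling: no flow gets more than it asks for, and leftover
    capacity is split equally among the flows that are still unsatisfied."""
    alloc = [0.0] * len(demands)
    remaining = list(range(len(demands)))        # flows not yet satisfied
    while remaining and capacity > 0:
        fair = capacity / len(remaining)         # equal split of what is left
        satisfied = [i for i in remaining if demands[i] <= fair]
        if not satisfied:                        # everyone can absorb the fair share
            for i in remaining:
                alloc[i] += fair
            break
        for i in satisfied:                      # small demands get exactly what they need
            alloc[i] = demands[i]
            capacity -= demands[i]
        remaining = [i for i in remaining if i not in satisfied]
    return alloc

# 10 Mb/s link, three flows asking for 2, 4 and 8 Mb/s -> allocations 2, 4 and 4 Mb/s
print(max_min_share(10.0, [2.0, 4.0, 8.0]))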

20 Benefits of Fairness
- Fair schedulers provide protection: bandwidth-gobbling applications are kept in check
- Automatic isolation of heavy traffic flows
- Fairness is a global (Internet-level) objective, while scheduling is local (router or switch level)
- Global fairness guarantees are beyond the scope of the course (go to grad school :>)

21 Scheduling Policies
1) First Come First Serve (FCFS)
   - Packets queued in FCFS order
   - No fairness
   - Most widely adopted scheme in today's Internet
2) Strict Priority
   - Multiple queues with different priorities
   - Packets in a given queue are served only when all higher-priority queues are empty
3) Generalized Processor Sharing (GPS)

22 Generalized Processor Sharing
- Idealized fair queuing approach based on a fluid model of network traffic
- Divides the link of bandwidth B into a discrete number of channels
- Each channel has bandwidth b_i, where B = b_1 + b_2 + b_3 + ...
- Extremely simple in concept
- Impossible to implement in practice. Why?
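The slide does not spell out the standard weighted form of the fluid model, but it helps to state it: at every instant, each backlogged flow i receives B * w_i / (sum of the weights of the backlogged flows). A minimal sketch, with illustrative weights and a 10 Mb/s link that are not taken from the slides:

def gps_rates(link_bw, weights, backlogged):
    """Instantaneous GPS service rates: the link is shared among the
    currently backlogged flows in proportion to their weights."""
    total = sum(w for w, busy in zip(weights, backlogged) if busy)
    if total == 0:
        return [0.0] * len(weights)              # idle link
    return [link_bw * w / total if busy else 0.0
            for w, busy in zip(weights, backlogged)]

# 10 Mb/s link, weights 1:2:3, all flows backlogged -> about 1.67, 3.33 and 5.0 Mb/s
print(gps_rates(10.0, [1, 2, 3], [True, True, True]))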

23 Shortcomings of GPS
Reason 1: Inaccurate traffic model
- The underlying model of the network is fluid-based (continuous)
- Actual network traffic consists of discrete units (packets)
- It is impossible to divide the link indefinitely

24 Shortcomings of GPS
Reason 2: Transmission is serial
- GPS depicts a parallel division of link usage
- Actual networks transmit bits serially
- "Sending more bits" really means increasing the transmission rate

25 Packetized GPS
- Packetized version of GPS
- Attempts to approximate the behavior of GPS as closely as possible
- All schemes discussed hereafter fall into this category

26 Packetized GPS Algorithms
1) Weighted Fair Queuing (WFQ)
2) Weighted Round Robin (WRR)
3) Deficit Round Robin (DRR)
4) Stochastic Fair Queuing (SFQ)
5) Class Based Queuing (CBQ)
6) Many, many others...

27 Weighted Fair Queuing
- Computes the finish time of each packet under GPS
- Each packet is tagged with its finish time
- The packet with the smallest finish time across all queues is serviced first
- Not scalable, due to the overhead of computing the ideal GPS schedule
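A minimal sketch of the tagging idea, assuming every flow is continuously backlogged from time zero (which lets the full GPS virtual-clock bookkeeping be skipped; under that assumption the cumulative tags below equal the GPS finish times). Packet sizes and weights are illustrative only.

def wfq_order(flows, weights):
    """Tag each packet with a finish time (previous finish of its flow plus
    size/weight) and transmit packets in order of increasing tag."""
    tagged = []
    for flow, packets in flows.items():
        finish = 0.0
        for seq, size in enumerate(packets):
            finish += size / weights[flow]        # finish tag under GPS
            tagged.append((finish, flow, seq))
    return [(flow, seq) for _, flow, seq in sorted(tagged)]

# Three backlogged flows, equal-size packets, weights A=1, B=2, C=3
order = wfq_order({"A": [1.0] * 3, "B": [1.0] * 3, "C": [1.0] * 3},
                  {"A": 1, "B": 2, "C": 3})
print(order)   # C and B are served more often than A, in proportion to their weights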

28 WFQ: An Example
- Three flows A, B, C, read left to right
- Assume all packets are the same size
- Example weights: A=1, B=2, C=3
- Divide each packet's finish time by its flow's weight
- The result is a weighted fair share of service
(Figure: resulting packet service order for the three flows)

29 Weighted Round Robin
- Simplest approximation of GPS
- Queues are serviced in round-robin fashion, proportionally to their assigned weights
- Max-min fair over long time scales
- May cause short-term unfairness
(Figure: queues A, B, C and the fixed transmission schedule C C C B B A A)
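A sketch of one WRR round under assumed weights A=1, B=2, C=3 (the weights implied by the slide's figure may differ). Note how C sends its packets back to back, which is the source of the short-term unfairness mentioned above.

def wrr_round(weights):
    """One round of weighted round robin: each queue is visited in turn and
    may send up to 'weight' packets before the next queue is served."""
    order = []
    for queue, weight in weights.items():
        order.extend([queue] * weight)
    return order

# Assumed weights A=1, B=2, C=3 give the fixed per-round order A B B C C C
print(wrr_round({"A": 1, "B": 2, "C": 3}))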

30 Deficit Round Robin (DRR)
- Handles varying-size packets
- Each queue begins with zero credits (quanta)
- A flow transmits a packet only when it has accumulated enough quanta; the quanta used are then subtracted
- A queue not served during a round accumulates a weighted number of quanta
- The use of quanta permits DRR to fairly serve packets of varying size
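A sketch of the deficit-counter loop, with made-up packet sizes (in bytes) and a 1500-byte quantum; as in the usual formulation, a queue found empty gives up its unused credit.

from collections import deque

def drr(queues, quantum, rounds):
    """Deficit round robin sketch: each backlogged queue earns 'quantum'
    credit per round and may send packets as long as their sizes fit
    within its accumulated deficit counter."""
    deficit = {q: 0 for q in queues}
    sent = []
    for _ in range(rounds):
        for q, pkts in queues.items():
            if not pkts:
                deficit[q] = 0                 # idle queues do not hoard credit
                continue
            deficit[q] += quantum
            while pkts and pkts[0] <= deficit[q]:
                size = pkts.popleft()
                deficit[q] -= size
                sent.append((q, size))
    return sent

# Two queues, 1500-byte quantum: the small-packet queue is not starved
queues = {"A": deque([1500, 1500, 1500]), "B": deque([500, 500, 500, 500])}
print(drr(queues, 1500, 3))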

31 Stochastic Fair Queuing*
- Traffic is divided into a large number of FIFO queues serviced in round-robin fashion
- Uses a "stochastic" rather than fixed allocation of flows to queues: a hashing algorithm decides which queue each flow is placed in
- Prevents any one flow from using an unfair share of bandwidth
- The hash must be recalculated frequently to ensure fairness
- Extremely simple to configure in Linux
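A sketch of the hashing idea (the real kernel uses its own internal hash, not MD5; the flow key and queue count here are illustrative): mixing a periodically changing "perturbation" value into the hash is what keeps an unlucky collision between two flows from persisting.

import hashlib
import time

def sfq_queue_index(src, dst, proto, perturb_epoch, n_queues=1024):
    """Map a flow to one of the FIFO queues by hashing its identifiers
    together with a value that changes every 'perturb' seconds."""
    key = f"{src}-{dst}-{proto}-{perturb_epoch}".encode()
    return int.from_bytes(hashlib.md5(key).digest()[:4], "big") % n_queues

perturb = 5                                  # seconds between re-hashes
epoch = int(time.time()) // perturb          # changes every 5 seconds
print(sfq_queue_index("10.0.0.1:5000", "10.0.0.2:80", "tcp", epoch))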

32 Class Based Queuing*
- A framework for organizing hierarchical link sharing
- The link is divided into different traffic classes
- Each class can have its own scheduling algorithm, providing enormous flexibility
- Classes can borrow spare capacity from a parent class
- The most difficult scheduling discipline to configure in Linux

33 CBQ: An Example (figure)

34 Some results from a previous semester's final lab
- Covered SFQ and CBQ
- Identical experimental setup to Lab 9
- SFQ and CBQ are already built into version 2.4.7-10 and higher of the Linux kernel; no modification of the source code is required
- Repeat the TCP and UDP traffic measurements to determine the impact of each scheduling discipline

35 Overview (cont.)
1) Do TCP and UDP flows share the link fairly in the experiment?
2) What are the relative advantages and disadvantages of SFQ vs. CBQ? How does each one meet the five requirements of the scheduling "wish list"?

36 Overview (cont.)
3) Are these scheduling disciplines scalable to the complexity required to handle real Internet traffic?
4) How can these scheduling algorithms be used to provide QoS guarantees in tomorrow's Internet? What might this architecture look like?

37 How we turned on SFQ
cd /usr/src/linux-2.4.18-14
make oldconfig
  This command saves all of the options currently built into the kernel to a file (.config). This allows you to keep the options you have already selected and add to them, rather than erase the options you previously turned on.
cp .config /root   (answer y to overwrite)
make clean
make mrproper
make xconfig
  Click "Load Configuration from file"; in "Enter filename", type /root/.config.
  We need to turn on several options. In the main menu, select Networking Options. Scroll down and select QoS and/or Fair Queuing. Select every option in this menu; this enables every queuing discipline built into the Linux kernel. Click OK. Click Main Menu. Click Save and Exit. Click OK.
make dep
make bzImage
Then we completed the remaining steps from Lab 9 to compile the kernel.

38 How we turned on fair queuing
Open an xterm window and type:
  tc qdisc add dev eth1 root sfq perturb 5
This line enables SFQ and installs it on the interface eth1, which is connected to your destination. The command tc sets up a traffic classifier in the router. The word qdisc stands for queuing discipline. The value perturb 5 indicates that the hashing scheme used by SFQ is reconfigured once every 5 seconds. In general, the smaller the perturb value, the better the division of bandwidth between TCP and UDP.
To change the perturb value to a different value (e.g., 6), type the following:
  tc qdisc del dev eth1 root sfq perturb 5
  tc qdisc add dev eth1 root sfq perturb 6
Now type the following command:
  tc -s -d qdisc ls
This should return a string of text similar to the following:
  qdisc sfq 800c: dev eth1 quantum 1514b limit 128p flows 128/1024 perturb 5sec
  Sent 4812 bytes 62 pkts (dropped 0, overlimits 0)
The number 800c is the automatically assigned handle. "limit 128p" means that 128 packets can wait in this queue. There are 1024 hash buckets available for accounting, of which 128 can be active at a time (no more than 128 packets will be queued). Once every 5 seconds, the hashes are reconfigured.

39 Stochastic Fair Queuing
Enabled SFQ and set the perturb value to 5, meaning the hashing scheme used by SFQ is reconfigured once every 5 seconds.

40 Measured Results (SFQ)

UDP IA time (s) | TCP Measured (Mb/s) | UDP Attempted (Mb/s) | UDP Measured (Mb/s)
0.05            | 9.00                | 0.1638               | 0.15
0.01            | 8.41                | 0.8192               | 0.79
0.005           | 7.56                | 1.6384               | 1.61
0.001           | 4.82                | 8.192                | 4.32
0.0001          | 4.87                | 81.92                | 4.34

41 (figure)

42 How we turned on CBQ
  tc qdisc add dev eth1 root handle 1: cbq bandwidth 10Mbit allot 1514 cell 8 avpkt 1024 mpu 64
This line enables CBQ and installs it on the interface eth1, which is connected to your destination. The command tc sets up everything related to the traffic controller in a router. The word qdisc stands for queuing discipline.
Generally, the classes in CBQ are organized into a tree structure, starting from the root and its direct descendants. A descendant is a parent if it has direct descendants of its own. Each parent can originate a CBQ with a certain amount of bandwidth available for its direct descendants. Each descendant class is identified by a class identifier with the syntax handle x. In this case, the root handle 1:0 means that this CBQ is located at the root, and the classid of a direct descendant class of the root has the form 1:x (e.g., 1:1, 1:2, 1:3).
bandwidth 10Mbit is the maximum bandwidth available for this CBQ. allot is a parameter used by the link-sharing scheduler. cell 8 indicates that packet transmission time will be measured in units of 8 bytes. mpu is the minimum number of bytes that will be sent in a packet: packets smaller than mpu are counted as mpu bytes. mpu is usually set to 64, because the minimum packet size on Ethernet-like interfaces is 64 bytes.

43 How we turned on CBQ
  tc class add dev eth1 parent 1:0 classid 1:1 cbq bandwidth 10Mbit rate 10Mbit allot 1514 cell 8 avpkt 1024 mpu 64 maxburst 40
  tc class add dev eth1 parent 1:1 classid 1:2 cbq bandwidth 10Mbit rate 5Mbit allot 1514 cell 8 avpkt 1024 mpu 64 maxburst 40
  tc class add dev eth1 parent 1:1 classid 1:3 cbq bandwidth 10Mbit rate 5Mbit allot 1514 cell 8 avpkt 1024 mpu 64 maxburst 40
First, we define a direct descendant class of 1:0, whose classid is 1:1. Then we define two direct descendant classes of 1:1, whose classids are 1:2 (for TCP traffic) and 1:3 (for UDP traffic).
tc class add is the command used to define a class. parent specifies the parent class. cbq bandwidth 10Mbit is the maximum bandwidth available to the class. rate 5Mbit is the bandwidth guaranteed to the class.
For each class, we enable the "bandwidth borrowing" option, in which a descendant class is allowed to borrow available bandwidth from its parent. In CBQ, a class can send at most maxburst back-to-back packets, so the rate of a class is proportional to maxburst:
  rate = packetsize * maxburst * 8 / (kernel clock speed)

44 How we turned on CBQ
Type the following commands:
  tc filter add dev eth1 parent 1:0 protocol ip u32 match ip protocol 6 0xff flowid 1:2
  tc filter add dev eth1 parent 1:0 protocol ip u32 match ip protocol 17 0xff flowid 1:3
tc filter add installs a filter for IP packets passing through a device. flowid is the classid with which the filter is associated. If the IP protocol number in a packet's IP header is 6 (TCP), the packet belongs to class 1:2; if it is 17 (UDP), the packet belongs to class 1:3.
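In plain code, the two filters amount to a match on the 8-bit protocol field of the IP header. The sketch below mirrors that decision; the fall-through behavior for other protocols is an assumption, not something the filters above specify.

def classify(ip_protocol):
    """Steer a packet into a CBQ class based on its IP protocol number,
    mirroring the two u32 filters above."""
    if (ip_protocol & 0xff) == 6:       # TCP
        return "1:2"
    if (ip_protocol & 0xff) == 17:      # UDP
        return "1:3"
    return None                         # unmatched traffic falls back to the qdisc default (assumption)

print(classify(6), classify(17))        # -> 1:2 1:3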

45 Class Based Queuing
- Can define separate classes for different applications and then treat them equally (or unequally, if desired)
- Here CBQ was enabled with each class assigned a 5 Mb/s rate

46 Measured Results (CBQ)

UDP IA time (s) | TCP Measured (Mb/s) | UDP Attempted (Mb/s) | UDP Measured (Mb/s)
0.05            | 8.99                | 0.1638               | 0.16
0.01            | 8.43                | 0.8192               | 0.79
0.005           | 7.71                | 1.6384               | 1.59
0.001           | 4.76                | 8.192                | 4.78
0.0001          | 4.75                | 81.92                | 4.75

47 (figure)

48 Conclusions
- Class Based Queuing allocates bandwidth better than any other approach we have used, including SFQ.
- Neither type of traffic gets more than 5 Mb/s (unless there is no other traffic class, in which case more than 5 Mb/s is allowed).

