Module 4: Implement the DiffServ QoS Model


Module 4: Implement the DiffServ QoS Model Lesson 4.4: Configuring WFQ

Objectives

- Describe Weighted Fair Queuing (WFQ).
- Describe WFQ architecture and operation.
- Identify the benefits and drawbacks of using WFQ.
- Configure and monitor WFQ on an interface.

Weighted Fair Queuing (WFQ)

A queuing algorithm should share the bandwidth fairly among flows by:
- Reducing response time for interactive flows by scheduling them to the front of the queue
- Preventing high-volume flows from monopolizing an interface

In the WFQ implementation, conversations are sorted into flows and transmitted in the order in which the last bit of each packet crosses its channel. Unfairness is then deliberately reinstated by introducing a weight that gives proportionately more bandwidth to flows with higher IP precedence (lower weight). The terms "WFQ flows" and "conversations" are interchangeable.

WFQ is a dynamic scheduling method that provides fair bandwidth allocation to all network traffic. It applies priorities, or weights, to identified traffic to classify it into conversations and to determine how much bandwidth each conversation is allowed relative to the others. WFQ is a flow-based algorithm that simultaneously schedules interactive traffic to the front of a queue to reduce response time and fairly shares the remaining bandwidth among high-bandwidth flows. In other words, WFQ allows you to give low-volume traffic, such as Telnet sessions, priority over high-volume traffic, such as FTP sessions. WFQ gives concurrent file transfers balanced use of link capacity; when multiple file transfers occur, they receive comparable bandwidth.

WFQ solves problems inherent in the following queuing mechanisms:
- FIFO queuing causes starvation, delay, and jitter.
- Priority queuing (PQ) causes starvation of lower-priority classes and suffers from the FIFO problems within each of the four queues that it uses for prioritization.

WFQ Operation

For situations in which it is desirable to provide consistent response time to heavy and light network users alike without adding excessive bandwidth, the solution is Weighted Fair Queuing (WFQ). WFQ uses a flow-based queuing algorithm that does two things simultaneously:
- It schedules interactive traffic to the front of the queue to reduce response time.
- It fairly shares the remaining bandwidth among the various flows to prevent high-volume flows from monopolizing the outgoing interface.

The basis of WFQ is a dedicated queue for each flow, with no starvation, delay, or jitter within the queue. Furthermore, WFQ allows fair and accurate bandwidth allocation among all flows with minimum scheduling delay. WFQ uses the IP precedence bits as a weight when allocating bandwidth. Low-volume traffic streams, which make up the majority of traffic, receive preferential service and transmit their entire offered loads in a timely fashion. High-volume traffic streams share the remaining capacity proportionally among themselves.
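The scheduling idea described above (serve packets in the order in which their last bit would finish transmitting, scaled by the flow's weight) can be sketched in a few lines of Python. This is a simplified illustrative model, not Cisco's implementation; the flow names, weights, and packet sizes are made up.

```python
import heapq

def wfq_schedule(flows):
    """Return packets in WFQ service order.

    flows: dict mapping flow name -> (weight, [packet sizes in bytes]).
    Lower weight = more bandwidth, mirroring IOS weights derived
    from IP precedence. Simplified: no virtual-time round tracking.
    """
    heap = []  # entries: (finish_time, sequence, flow, size)
    seq = 0
    for flow, (weight, sizes) in flows.items():
        finish = 0.0
        for size in sizes:
            # Finish time grows with packet size scaled by weight,
            # so light flows with small packets finish "sooner".
            finish += size * weight
            heapq.heappush(heap, (finish, seq, flow, size))
            seq += 1
    order = []
    while heap:
        _, _, flow, size = heapq.heappop(heap)
        order.append((flow, size))
    return order

# A small interactive flow vs. a bulk FTP-like flow:
order = wfq_schedule({
    "telnet": (1, [60, 60]),      # small packets, low weight
    "ftp":    (4, [1500, 1500]),  # large packets, higher weight
})
print(order[0])  # the small interactive packet is served first
```

Note how the interactive flow's small packets get low finish times and are therefore scheduled ahead of the bulk transfer, exactly the behavior the slide describes.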

WFQ Architecture

WFQ uses per-flow FIFO queues. WFQ is a dynamic scheduling method that provides fair bandwidth allocation to all network traffic. It applies weights to identified traffic, classifies traffic into flows, and determines how much bandwidth each flow is allowed relative to other flows. WFQ is the default queuing mode on serial interfaces configured to run at or below E1 speed (2.048 Mbps).

WFQ provides the solution for situations in which it is desirable to provide consistent response times to heavy and light network users alike without adding excessive bandwidth. In addition, WFQ can manage duplex data flows, such as those between pairs of applications, and simplex data flows, such as voice or video.

Although WFQ automatically adapts to changing network traffic conditions, it does not offer the precise degree of control over bandwidth allocation that custom queuing (CQ) and class-based weighted fair queuing (CBWFQ) offer. A significant limitation of WFQ is that it is not supported with tunneling or encryption, because these features modify the packet content information that WFQ requires for classification.

WFQ Classification

WFQ classification has to identify individual flows. A flow is identified based on the following information, taken from the IP header and the TCP or User Datagram Protocol (UDP) headers:
- Source IP address
- Destination IP address
- Protocol number (identifying TCP or UDP)
- Type of service (ToS) field
- Source TCP or UDP port number
- Destination TCP or UDP port number

These parameters are usually fixed for a single flow, although there are exceptions. For example, a quality of service (QoS) design can mark packets with different IP precedence values even when they belong to the same flow; you should avoid such marking when using WFQ. The parameters are used as input to a hash algorithm that produces a fixed-length number, which is used as the index of the queue. Packets of the same flow therefore end up in the same queue.
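The hashing step can be sketched as follows. The exact hash function Cisco IOS uses is not given in this lesson, so a generic CRC-32 over the six-tuple stands in for it; the field values below are illustrative.

```python
import zlib

def wfq_queue_index(src_ip, dst_ip, protocol, tos, src_port, dst_port,
                    num_queues=256):
    """Map a flow six-tuple to one of num_queues per-flow queues.

    num_queues must be a power of 2 (16..4096), as with IOS dynamic
    queues. The real IOS hash differs; this is only a sketch of the
    principle: same inputs -> same queue index.
    """
    key = f"{src_ip}|{dst_ip}|{protocol}|{tos}|{src_port}|{dst_port}"
    return zlib.crc32(key.encode()) % num_queues

# Every packet of the same Telnet conversation hashes to the same queue:
q1 = wfq_queue_index("193.77.3.244", "20.0.0.2", 6, 0, 23, 11033)
q2 = wfq_queue_index("193.77.3.244", "20.0.0.2", 6, 0, 23, 11033)
assert q1 == q2
```

Because the hash output is reduced modulo the queue count, two distinct flows can land on the same index, which is exactly the collision problem the next slide discusses.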

Implementing WFQ Classification

- A fixed number of per-flow queues is configured.
- A hash function translates flow parameters into a queue number.
- System packets (eight queues) and RSVP flows (if configured) are mapped into separate queues.
- Two or more flows can map into the same queue, resulting in lower per-flow bandwidth.

Important: the number of queues configured should be significantly larger than the expected number of flows.

WFQ uses a fixed number of queues, and the hash function assigns a queue to each flow. There are eight additional queues for system packets and, optionally, up to 1000 queues for Resource Reservation Protocol (RSVP) flows. The number of dynamic queues that WFQ uses by default is based on the interface bandwidth; with the default interface bandwidth, WFQ uses 256 dynamic queues. The number of queues can be configured in the range from 16 to 4096 and must be a power of 2. If there are a large number of concurrent flows, it is likely that two flows will end up in the same queue, so you should have several times as many queues as there are flows on average. This may not be possible in larger environments where concurrent flows number in the thousands.
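The sizing rule above can be expressed as a small helper: given an expected flow count, pick the smallest valid dynamic-queue count (a power of 2 between 16 and 4096) that is several times larger. The headroom factor of 4 is an illustrative choice, not a Cisco recommendation.

```python
def pick_dynamic_queues(expected_flows, headroom=4):
    """Smallest power of 2 in [16, 4096] that is at least
    headroom * expected_flows. Returns 4096 (the cap) when the
    flow count exceeds what WFQ can keep in separate queues."""
    target = expected_flows * headroom
    n = 16
    while n < target and n < 4096:
        n *= 2
    return n

print(pick_dynamic_queues(50))    # 256 queues for ~50 flows
print(pick_dynamic_queues(5000))  # capped at 4096: flows must share queues
```

The second call shows the limitation noted above: with thousands of concurrent flows, the 4096-queue ceiling guarantees that some flows share a queue.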

WFQ Insertion and Drop Policy

WFQ has two modes of dropping:
- Early dropping, when the congestive discard threshold (CDT) is reached
- Aggressive dropping, when the hold-queue limit is reached

WFQ always drops packets of the most aggressive flow. Drop mechanism exceptions:
- A packet classified into an empty queue is never dropped.
- The packet IP precedence has no effect on the dropping scheme.
- Queue length is determined by finish time, not by size.

The WFQ system has a hold queue that represents the queue depth: the number of packets that can be held in the system. WFQ uses the following two parameters that affect the dropping of packets:
- The congestive discard threshold (CDT) is used to start dropping packets of the most aggressive flow, even before the hold-queue limit is reached.
- The hold-queue limit defines the maximum number of packets that can be held in the WFQ system at any time.

There are two exceptions to the WFQ insertion and drop policy:
- If the WFQ system is above the CDT limit, a packet is still enqueued if its per-flow queue is empty.
- The dropping strategy is not directly influenced by IP precedence.

The length of queues (for scheduling purposes) is determined not by the sum of the sizes in bytes of all the packets but by the time it would take to transmit all the packets in the queue. The end result is that WFQ adapts to the number of active flows (queues) and allocates equal amounts of bandwidth to each flow (queue). The side effect is that flows with small packets (usually interactive flows) get much better service, because they do not need a lot of bandwidth; they do need low-delay handling, which they get because small packets have low finish times.
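The insertion and drop rules above can be collapsed into a single decision function. The interpretation of "most aggressive flow" as the deepest queue, and the simplified packet counting, are modeling assumptions; the thresholds follow this lesson's defaults.

```python
def wfq_admit(queues, flow, cdt=64, hold_limit=1000):
    """Decide what happens when a packet for `flow` arrives.

    queues: dict mapping flow id -> packets currently queued.
    Returns "enqueue", "drop-new" (aggressive drop: system full),
    or the id of the flow whose queued packet is dropped instead
    (early drop from the deepest, i.e. most aggressive, queue).
    """
    total = sum(queues.values())
    if total >= hold_limit:
        return "drop-new"                   # hold-queue limit reached
    if total >= cdt:
        if queues.get(flow, 0) == 0:
            return "enqueue"                # empty-queue exception
        return max(queues, key=queues.get)  # deepest queue pays
    return "enqueue"

queues = {"ftp": 70, "telnet": 1}           # 71 packets: above CDT of 64
print(wfq_admit(queues, "ssh"))             # new flow is still enqueued
print(wfq_admit(queues, "telnet"))          # a packet of "ftp" is dropped
```

Note that IP precedence appears nowhere in the function, matching the rule that the drop decision is not directly influenced by precedence.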

Benefits and Drawbacks of WFQ

Benefits:
- Simple configuration (no need to configure classification)
- Guarantees throughput to all flows
- Drops packets of the most aggressive flows
- Supported on most platforms
- Supported in most Cisco IOS versions

Drawbacks:
- Multiple flows can end up in one queue
- Lack of control over classification
- Supported only on links at or below 2.048 Mbps
- Cannot provide fixed bandwidth guarantees

The WFQ mechanism provides simple configuration (no manual classification is necessary) and guarantees throughput to all flows; it drops packets of the most aggressive flows. Because WFQ is a standard queuing mechanism, most platforms and most Cisco IOS versions support it. As good as WFQ is, it does have drawbacks: multiple flows can end up in a single queue; WFQ does not allow a network engineer to configure classification manually, because classification and scheduling are determined by the WFQ algorithm; WFQ is supported only on links with a bandwidth at or below 2.048 Mbps; and WFQ cannot provide fixed guarantees to traffic flows.

Configuring WFQ

router(config-if)# fair-queue [cdt [dynamic-queues [reservable-queues]]]

- cdt: Number of messages allowed in each queue; when a conversation reaches this threshold, new message packets are discarded (must be a power of 2 in the range from 16 to 4096; the default is 64).
- dynamic-queues: Number of dynamic queues used for best-effort conversations (valid values are 16, 32, 64, 128, 256, 512, 1024, 2048, and 4096; the default is 256).
- reservable-queues: Number of reservable queues used for reserved conversations, in the range 0 to 1000 (used on interfaces configured for features such as RSVP; the default is 0).

Cisco routers automatically enable WFQ on all interfaces that have a default bandwidth of less than 2.048 Mbps. The fair-queue command enables WFQ on interfaces where it is not enabled by default or was previously disabled.
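Putting the command in context, a minimal interface configuration might look like the following. The interface name and the non-default values (CDT of 128, 512 dynamic queues) are illustrative only; on a low-speed serial interface, plain `fair-queue` with the defaults is usually sufficient.

```
Router# configure terminal
Router(config)# interface serial 1/0
! cdt = 128, dynamic-queues = 512, reservable-queues = 0
Router(config-if)# fair-queue 128 512 0
Router(config-if)# end
```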

WFQ Maximum Limit Configuration

router(config-if)# hold-queue max-limit out

Specifies the maximum number of packets that can be in all output queues on the interface at any time. The default value for WFQ is 1000 packets. Under special circumstances, WFQ can consume a lot of buffers, which may require lowering this limit.

The WFQ system generally never reaches the hold-queue limit, because the CDT starts dropping the packets of aggressive flows in the software queue first. Under special circumstances, however, it is possible to fill the WFQ system; for example, a denial-of-service attack that floods the interface with a large number of packets (each belonging to a different flow) could fill all queues at the same rate. Use the hold-queue command to adjust this setting.

Monitoring WFQ

router> show interface interface

Displays interface delays, including the activated queuing mechanism, with summary information.

Router>show interface serial 1/0
  Hardware is M4T
  Internet address is 20.0.0.1/8
  MTU 1500 bytes, BW 19 Kbit, DLY 20000 usec, rely 255/255, load 147/255
  Encapsulation HDLC, crc 16, loopback not set
  Keepalive set (10 sec)
  Last input 00:00:00, output 00:00:00, output hang never
  Last clearing of "show interface" counters never
  Input queue: 0/75/0 (size/max/drops); Total output drops: 0
  Queueing strategy: weighted fair
  Output queue: 0/1000/64/0 (size/max total/threshold/drops)
    Conversations 0/4/256 (active/max active/max total)
    Reserved Conversations 0/0 (allocated/max allocated)
  5 minute input rate 18000 bits/sec, 8 packets/sec
  5 minute output rate 11000 bits/sec, 9 packets/sec

The show interface command can be used to determine the queuing strategy; the output also displays summary statistics. The sample output above shows that there are currently no packets in the WFQ system. The system allows up to 1000 packets (the hold-queue limit) with a CDT of 64. WFQ is using 256 queues, and the maximum number of concurrent flows (conversations, or active queues) observed is four.

Monitoring WFQ Interface

router> show queue interface-name interface-number

Displays detailed information about the WFQ system of the selected interface.

Router>show queue serial 1/0
  Input queue: 0/75/0 (size/max/drops); Total output drops: 0
  Queueing strategy: weighted fair
  Output queue: 2/1000/64/0 (size/max total/threshold/drops)
    Conversations 2/4/256 (active/max active/max total)
    Reserved Conversations 0/0 (allocated/max allocated)

  (depth/weight/discards/tail drops/interleaves) 1/4096/0/0/0
  Conversation 124, linktype: ip, length: 580
    source: 193.77.3.244, destination: 20.0.0.2, id: 0x0166, ttl: 254, TOS: 0
    prot: 6, source port 23, destination port 11033

  Conversation 127, linktype: ip, length: 585
    source: 193.77.4.111, destination: 40.0.0.2, id: 0x020D, ttl: 252, TOS: 0
    prot: 6, source port 23, destination port 11013

The show queue command displays the contents of packets inside a queue for a particular interface, including flow (conversation) statistics:
- Depth is the number of packets in the queue.
- Weight is 4096 / (IP precedence + 1), or 32,384 / (IP precedence + 1), depending on the Cisco IOS version.
- Discards represent the number of drops that are due to the CDT limit.
- Tail drops represent the number of drops that are due to the hold-queue limit.
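The weight formula can be checked directly. The example below uses the older 4096-based formula, which matches the weight of 4096 shown for the TOS 0 (precedence 0) conversation in the sample output; the precedence-5 value is simply the same formula applied to typical voice marking.

```python
def wfq_weight(ip_precedence, base=4096):
    """WFQ per-flow weight: lower weight = more bandwidth.

    Newer Cisco IOS versions use base=32384 instead of 4096;
    the proportions between flows are the same either way.
    """
    return base // (ip_precedence + 1)

# Precedence 0 (TOS 0) gives weight 4096, as in the show queue output:
print(wfq_weight(0))  # 4096
# Voice marked precedence 5 gets a much lower weight (more bandwidth):
print(wfq_weight(5))  # 682
```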

Self Check

1. What problems with FIFO and priority queuing does Weighted Fair Queuing solve?
FIFO queuing causes starvation, delay, and jitter. Priority queuing (PQ) causes starvation of lower-priority classes and suffers from the FIFO problems within each of the four queues that it uses for prioritization. WFQ addresses all of these issues by ensuring that all traffic flows receive some bandwidth, while also ensuring that delay-sensitive traffic is serviced correctly.

2. What does WFQ use to classify traffic into flows?
WFQ classification identifies individual flows based on the following information taken from the IP header and the TCP or User Datagram Protocol (UDP) headers: source IP address, destination IP address, protocol number (identifying TCP or UDP), type of service field, source TCP or UDP port number, and destination TCP or UDP port number.

3. What must the network administrator be aware of concerning the number of queues versus the number of concurrent flows?
If there are a large number of concurrent flows, it is likely that two flows will end up in the same queue. You should have several times as many queues as there are flows (on average). This may not be possible in larger environments where concurrent flows number in the thousands.

4. How is the length of the queue determined?
The length of queues (for scheduling purposes) is determined not by the sum of the sizes in bytes of all the packets but by the time it would take to transmit all the packets in the queue. The end result is that WFQ adapts to the number of active flows (queues) and allocates equal amounts of bandwidth to each flow (queue).

5. How is WFQ enabled on an interface?
Cisco routers automatically enable WFQ on all interfaces that have a default bandwidth of less than 2.048 Mbps. The fair-queue command enables WFQ on interfaces where it is not enabled by default or was previously disabled.

Summary Weighted Fair Queuing overcomes the issues of FIFO and Priority Queuing by ensuring bandwidth to each queue while also controlling delay and jitter for sensitive traffic. Queues are based on traffic flows. Multiple queues are established to service concurrent traffic flows. The WFQ mechanism provides simple configuration (no manual classification is necessary) and guarantees throughput to all flows. It drops packets of the most aggressive flows. Some of the drawbacks of WFQ include: multiple flows can end up in a single queue, WFQ does not allow a network engineer to manually configure classification, and WFQ cannot provide fixed guarantees to traffic flows.