Network Simulation NET441
Queuing Disciplines
Flow Control vs. Congestion Control
- Flow control prevents senders from overrunning the capacity of the receivers.
- Congestion control prevents too much data from being injected into the network, which would overload switches or links.
Congestion Control and Resource Allocation
The resources in question are:
- Bandwidth of the links
- Buffers at the routers and switches
Packets contend at a router for the use of a link; each contending packet is placed in a queue, waiting for its turn to be transmitted over the link.
Congestion Control and Resource Allocation
When too many packets contend for the same link:
- The queue overflows.
- Packets get dropped.
- The network is congested!
The network should provide a congestion-control mechanism to deal with such a situation.
Congestion Control and Resource Allocation
Congestion control involves both hosts and routers:
- In network elements, various queuing disciplines can be used to control the order in which packets get transmitted and which packets get dropped.
- At the hosts' end, the congestion-control mechanism paces how fast sources are allowed to send packets.
Network Control Issues
Resources are limited. The resources to identify are:
- Buffer space
- Bandwidth allocation
One could simply "route around" congested links by putting a large edge weight on a congested link. That does not solve the inherent problem, though.
Flows
We talk about "flows" in the context of queuing because of the ease with which they can be viewed at different levels:
- Process to process
- Host to host
- Institution to institution
- Region to region
In general, a flow refers to a sequence of packets sent between a source/destination pair, following the same route through the network.
Queuing Disciplines
Routers must implement some queuing discipline that governs how packets are buffered and prioritized. One can think of queuing disciplines as rules for allocating bandwidth, or rules for allocating buffer space within the router. The book discusses two common disciplines:
- FIFO
- Fair Queuing
FIFO Queuing
FIFO queuing is also called first-come, first-served (FCFS) queuing:
- The first packet that arrives at a router is the first packet to be transmitted.
- The amount of buffer space at each router is finite.
- Tail drop: if a packet arrives and the queue (buffer space) is full, the router discards that packet.
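The behavior above can be sketched in a few lines of Python. This is a minimal illustration, not a router implementation; the class and method names are made up for this example.

```python
from collections import deque

class FifoTailDropQueue:
    """Sketch of FIFO (FCFS) queuing with tail drop: packets leave in
    arrival order, and an arrival to a full buffer is simply discarded."""

    def __init__(self, capacity):
        self.capacity = capacity  # finite buffer space
        self.buffer = deque()

    def enqueue(self, packet):
        if len(self.buffer) >= self.capacity:
            return False          # tail drop: queue is full, discard arrival
        self.buffer.append(packet)
        return True

    def dequeue(self):
        # First packet to arrive is the first packet transmitted.
        return self.buffer.popleft() if self.buffer else None
```

Note that the drop decision looks only at whether the buffer is full, never at which flow the packet belongs to; that indifference is exactly the weakness fair queuing addresses later.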
(a) FIFO queuing; (b) tail drop at a FIFO queue.
FIFO Queuing: Priority Queuing
A simple variation on basic FIFO queuing is priority queuing:
- Each packet is marked with a priority.
- The router implements multiple FIFO queues, one for each priority class.
- The router always transmits packets out of the highest-priority nonempty queue before moving on to the next priority queue.
- Within each priority class, packets are still managed in a FIFO manner.
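A sketch of this scheme, with illustrative names (priority 0 is assumed to be the highest class here):

```python
from collections import deque

class PriorityFifo:
    """Sketch of priority queuing: one FIFO queue per priority class;
    always serve the highest-priority nonempty queue first."""

    def __init__(self, num_classes):
        self.queues = [deque() for _ in range(num_classes)]

    def enqueue(self, packet, priority):
        # Each packet is marked with a priority when it arrives.
        self.queues[priority].append(packet)

    def dequeue(self):
        # Scan from highest priority (0) downward; within a class,
        # packets are still served FIFO.
        for q in self.queues:
            if q:
                return q.popleft()
        return None
```

The `dequeue` loop makes the starvation problem on the next slide visible: as long as `queues[0]` is nonempty, the loop never reaches the lower-priority queues.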
Priority Queuing
The problem with priority queuing, of course, is that the high-priority queue can starve all the other queues: as long as there is at least one packet in the high-priority queue, lower-priority queues do not get served. For this to be viable, there need to be hard limits on how much high-priority traffic is inserted into the queue.
Fair Queuing
The main problem with FIFO queuing is that it does not discriminate between different traffic sources; that is, it does not separate packets according to the flow to which they belong. Fair queuing (FQ) maintains a separate queue for each flow currently being handled by the router. The router then services these queues in round-robin order.
Fair Queuing: Round-Robin Service
Round-robin service of four flows at a router
Fair Queuing: Round-Robin Service
The router services these queues in a sort of round-robin, as illustrated in Figure 6.6. When a flow sends packets too quickly, its queue fills up; when a queue reaches a particular length, additional packets belonging to that flow are discarded. In this way, a given source cannot arbitrarily increase its share of the network's capacity at the expense of other flows.

Note that FQ does not involve the router telling the traffic sources anything about the state of the router, nor does it in any way limit how quickly a given source sends packets. In other words, FQ is still designed to be used in conjunction with an end-to-end congestion-control mechanism. It simply segregates traffic so that ill-behaved sources do not interfere with those that are faithfully implementing the end-to-end algorithm. FQ also enforces fairness among a collection of flows managed by a well-behaved congestion-control algorithm.
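A minimal sketch of per-flow queuing with round-robin service and per-flow drops (all names are illustrative; a real router would also bound total buffer space and track packet sizes):

```python
from collections import deque

class RoundRobinFQ:
    """Sketch of fair queuing: one queue per flow, served round-robin.
    A flow whose own queue is full has further packets dropped, so it
    cannot grab capacity at other flows' expense."""

    def __init__(self, max_len):
        self.max_len = max_len
        self.queues = {}       # flow id -> deque of packets
        self.order = deque()   # round-robin order of known flows

    def enqueue(self, flow, packet):
        q = self.queues.get(flow)
        if q is None:
            q = self.queues[flow] = deque()
            self.order.append(flow)
        if len(q) >= self.max_len:
            return False       # drop: only this flow's queue is full
        q.append(packet)
        return True

    def dequeue(self):
        # Visit each flow once per round; skip flows with empty queues.
        for _ in range(len(self.order)):
            flow = self.order[0]
            self.order.rotate(-1)   # advance the round-robin pointer
            if self.queues[flow]:
                return self.queues[flow].popleft()
        return None
```

Because `dequeue` skips empty queues rather than waiting on them, the sketch is work-conserving, which is the property the later "Bandwidth Sharing" slide relies on.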
Fair Queuing
The main complication with fair queuing is that the packets being processed at a router are not necessarily the same length. To truly allocate the bandwidth of the outgoing link in a fair manner, it is necessary to take packet length into consideration. For example, if a router is managing two flows, one with 1000-byte packets and the other with 500-byte packets (perhaps because of fragmentation upstream of this router), then simple round-robin servicing of packets from each flow's queue gives the first flow two-thirds of the link's bandwidth and the second flow only one-third.
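The arithmetic behind that example: under per-packet round-robin, each flow sends one packet per round, so bandwidth divides in proportion to packet size rather than equally.

```python
# Two flows, one packet each per round-robin round (sizes per the text).
size_flow1 = 1000  # bytes per packet, flow 1
size_flow2 = 500   # bytes per packet, flow 2
round_total = size_flow1 + size_flow2

share1 = size_flow1 / round_total  # 2/3 of the link bandwidth
share2 = size_flow2 / round_total  # 1/3 of the link bandwidth
```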
Queuing Disciplines
What we really want is bit-by-bit round-robin; that is, the router transmits one bit from flow 1, then one bit from flow 2, and so on. However, it is not feasible to interleave the bits from different packets, so FQ simulates this behavior instead:
- Determine when a given packet would finish being transmitted if it were sent using bit-by-bit round-robin.
- Use this finishing time to sequence the packets for transmission.
Queuing Disciplines: Fair Queuing
To understand the algorithm for approximating bit-by-bit round-robin, consider the behavior of a single flow. For this flow, let:
- Pi denote the length of packet i
- Si denote the time when the router starts to transmit packet i
- Fi denote the time when the router finishes transmitting packet i
Then Fi = Si + Pi.
Queuing Disciplines: Fair Queuing
When do we start transmitting packet i? That depends on whether packet i arrived before or after the router finished transmitting packet i-1 of the same flow. Let Ai denote the time that packet i arrives at the router. Then:
Si = max(Fi-1, Ai)
Fi = max(Fi-1, Ai) + Pi
Queuing Disciplines: Fair Queuing
For every flow, we calculate Fi for each packet that arrives, using the formula above. We then treat the Fi values as timestamps: the next packet to transmit is always the one with the lowest timestamp, i.e., the packet that would finish transmission before all others.
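The per-flow bookkeeping above can be sketched as follows. `fair_queue_order` is a hypothetical helper (not from the text) that applies Fi = max(Fi-1, Ai) + Pi per flow and emits packets in increasing timestamp order; it omits the virtual-clock refinement a full FQ implementation needs when queues go idle.

```python
import heapq
from collections import defaultdict

def fair_queue_order(packets):
    """Order packets by their fair-queuing finish timestamps.

    `packets` is a list of (packet_id, flow_id, length, arrival) tuples.
    Within each flow: F_i = max(F_{i-1}, A_i) + P_i.  Across flows,
    the packet with the smallest F_i is transmitted next.
    """
    last_finish = defaultdict(int)  # F_{i-1} per flow, 0 before any packet
    heap = []
    for pid, flow, length, arrival in packets:
        start = max(last_finish[flow], arrival)   # S_i = max(F_{i-1}, A_i)
        finish = start + length                   # F_i = S_i + P_i
        last_finish[flow] = finish
        heapq.heappush(heap, (finish, pid))
    # Pop packets in increasing timestamp order (ties broken by id here).
    return [heapq.heappop(heap)[1] for _ in range(len(heap))]
```

For example, with flow A sending a 100-unit packet then a 50-unit packet, and flow B a single 60-unit packet, all arriving at time 0, B's packet (F = 60) goes out before A's first (F = 100) and second (F = 150).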
Queuing Disciplines: Fair Queuing
Example of fair queuing in action: packets with earlier finishing times are sent first; the sending of a packet already in progress is completed.
Bandwidth Sharing
Because FQ is work-conserving, any bandwidth that is not used by one flow is automatically available to other flows. Thus we can think of FQ as providing a guaranteed minimum share of the bandwidth to each flow. For example, if four flows pass through a router and all of them are sending packets, then each one receives 1/4 of the bandwidth. If one of them is idle long enough that all of its packets drain out of the router's queue, then the available bandwidth is shared among the remaining three flows, each of which now receives 1/3 of the bandwidth.
Weighted Fair Queuing (WFQ)
WFQ allows a weight to be assigned to each flow (queue); the weight specifies how many bits to send (i.e., the bandwidth share) each time the router services that queue.
Example: a router has three flows (queues). One queue has a weight of 2, the second queue has a weight of 3, and the third queue has a weight of 1. Assuming that each flow always contains a packet waiting to be sent, what percentage of the bandwidth is assigned to each flow?
Source: Peterson & Davie, 2007, p. 473
WFQ, cont.
Solution:
- The first flow gets 2/6 = 1/3 of the available bandwidth.
- The second flow gets 3/6 = 1/2 of the available bandwidth.
- The third flow gets 1/6 of the available bandwidth.
In general, the shares are BW1 = w1/(w1+w2+w3), BW2 = w2/(w1+w2+w3), and BW3 = w3/(w1+w2+w3). Simple FQ gives each queue a weight of 1, which means that logically only one bit is transmitted from each queue each time around; this results in each flow getting 1/n of the bandwidth when there are n flows.
Source: Peterson & Davie, 2007, p. 473
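The general formula is a one-liner; `wfq_shares` is an illustrative helper name, not from the text.

```python
def wfq_shares(weights):
    """Bandwidth share of each always-backlogged flow under WFQ:
    flow k gets w_k / sum(weights) of the link."""
    total = sum(weights)
    return [w / total for w in weights]

# The slide's example: weights 2, 3, 1 give shares 1/3, 1/2, 1/6.
shares = wfq_shares([2, 3, 1])
```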
Example
Suppose a router has three input flows and one output. It receives the packets listed in Table 1 all at about the same time, in the order listed, during a period in which the output port is busy but all queues are otherwise empty. Give the order in which the packets are transmitted, assuming:
(a) Fair queuing.
(b) Weighted fair queuing, with flow 1 having a weight of 2, flow 2 having twice as much share as flow 1 (weight 4), and flow 3 having 1.5 times as much share as flow 1 (weight 3).
Source: Peterson & Davie, 2007, p. 529
Example, cont.
Table 1:

Packet  Size  Flow
1       200   1
2       200   1
3       160   2
4       120   2
5       160   2
6       210   3
7       150   3
8       90    3

Source: Peterson & Davie, 2007, p. 530
Solution
(a) Fi is the cumulative per-flow size. Take Ai = 0, since all packets are received at about the same time, so there is no waiting.

Packet  Size  Flow  Fi
1       200   1     200
2       200   1     400
3       160   2     160
4       120   2     280
5       160   2     440
6       210   3     210
7       150   3     360
8       90    3     450

So, packets are sent in increasing order of Fi: Packet 3, Packet 1, Packet 6, Packet 4, Packet 7, Packet 2, Packet 5, Packet 8.
Source: Peterson & Davie, 2007, p. 737
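Solution (a) can be checked with a short script (variable names are illustrative). With all arrivals at time 0, Fi reduces to the cumulative per-flow size.

```python
# (packet_id, size, flow) from Table 1.
packets = [
    (1, 200, 1), (2, 200, 1),
    (3, 160, 2), (4, 120, 2), (5, 160, 2),
    (6, 210, 3), (7, 150, 3), (8, 90, 3),
]

finish = {}  # packet id -> Fi
cum = {}     # flow id -> cumulative bytes so far
for pid, size, flow in packets:
    cum[flow] = cum.get(flow, 0) + size
    finish[pid] = cum[flow]       # Ai = 0, so Fi is the running total

# Transmit in increasing order of Fi.
order = sorted(finish, key=lambda pid: finish[pid])
```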
Solution, cont.
(b) Flow 1 has a weight of 2, flow 2 has a weight of 4, and flow 3 has a weight of 3; the weighted Fi is the cumulative per-flow size divided by the flow's weight.

Packet  Size  Flow  Weighted Fi
1       200   1     100
2       200   1     200
3       160   2     40
4       120   2     70
5       160   2     110
6       210   3     70
7       150   3     120
8       90    3     150

So, packets are sent in increasing order of weighted Fi: Packet 3, Packet 4, Packet 6, Packet 1, Packet 5, Packet 7, Packet 8, Packet 2. (Packets 4 and 6 tie at 70; the tie is broken in favor of Packet 4.)
Source: Peterson & Davie, 2007, p. 737
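Solution (b) follows the same script, dividing each cumulative per-flow size by the flow's weight (names are illustrative; ties are broken by packet id, matching the order above).

```python
# (packet_id, size, flow) from Table 1, plus the WFQ weights.
packets = [
    (1, 200, 1), (2, 200, 1),
    (3, 160, 2), (4, 120, 2), (5, 160, 2),
    (6, 210, 3), (7, 150, 3), (8, 90, 3),
]
weight = {1: 2, 2: 4, 3: 3}

finish, cum = {}, {}
for pid, size, flow in packets:
    cum[flow] = cum.get(flow, 0) + size
    finish[pid] = cum[flow] / weight[flow]  # weighted Fi

# Transmit in increasing weighted Fi; break ties by packet id.
order = sorted(finish, key=lambda pid: (finish[pid], pid))
```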
Quality of Service: Approaches to QoS Support
There are two broad approaches:
- Fine-grained approaches, which provide QoS to individual applications or flows.
- Coarse-grained approaches, which provide QoS to large classes of data or aggregated traffic.
In the first category we find Integrated Services, a QoS architecture developed in the IETF and often associated with RSVP (Resource Reservation Protocol). In the second category lies Differentiated Services, which is probably the most widely deployed QoS mechanism.
Reference
Computer Networks: A Systems Approach by Larry Peterson and Bruce Davie, Morgan Kaufmann (fourth edition).