1
Protocols and the TCP/IP Suite
Chapter 13: ATM Traffic & Congestion Control
2
Introduction
- ATM congestion problem overview
- ITU-T and ATM Forum framework for control of delay-sensitive traffic
- ATM traffic control mechanisms
- ATM congestion control
- Congestion control schemes for bursty traffic (ABR and GFR)
3
ATM Service Categories
- Constant Bit Rate (CBR): fixed data rate required at guaranteed capacity
- Real-Time Variable Bit Rate (rt-VBR): tightly constrained delay and delay variation; sustained rate and guaranteed fast burst rate
- Non-Real-Time Variable Bit Rate (nrt-VBR): no delay variation bound, cell loss ratio only
- Available Bit Rate (ABR): guaranteed minimum capacity, with bursts
- Guaranteed Frame Rate (GFR): like UBR/ABR, expressed in terms of frame rate
- Unspecified Bit Rate (UBR): best-effort service

Recall from earlier discussions that ATM offers several service categories providing varying levels of QoS; this chart briefly describes them. What types of applications might require the services offered by CBR? rt-VBR? nrt-VBR? UBR (discussed in the prior chapter's lesson)? ABR? GFR? Note that ABR and GFR offer essentially the same level of service, but one is measured in frames and the other in cells.
4
Why Typical Traffic Control Schemes Are Inadequate for ATM
- The majority of ATM traffic sources are time-sensitive and not amenable to typical flow control schemes (e.g., CBR, rt-VBR)
- For long-haul ATM, t_trans << t_prop, so feedback is slow (latency/speed effects)
- Because of the broad range of ATM application types, flow control may indiscriminately penalize some:
  - bandwidth requirements (kbps to Mbps)
  - traffic patterns (CBR, VBR)
  - service requirements (delay/loss sensitivity)
- Very high-speed switching increases volatility with respect to control mechanisms
5
ATM Performance considerations
Two key issues must be addressed for CBR and real-time VBR traffic:
- Latency/speed effects for long-haul networks
  - Cause: t_trans << 2 x t_prop (i.e., one round-trip time)
  - Approach: fast feedback mechanisms
- Cell delay variation
  - Cause: variation at the user-network interface and in the network core
  - Approach: time reassembly of CBR cells at the receiver
6
Latency/Speed Effects
Issue: rapid insertion rate of ATM cells vs. relatively long round-trip delays, due to:
- the small size (53 bytes) of the ATM cell
- high-bandwidth links in ATM networks
- small switching delays

Simplified example: at a 150 Mbps data rate (SONET OC-3),

t_insert = t_trans = (53 x 8 bits) / (150 x 10^6 bps) = 2.8 x 10^-6 seconds

For a U.S. coast-to-coast round trip, d_prop = 48 msec. Then the number of cells inserted (N) during one RTT is

N = (48 x 10^-3 s) / (2.8 x 10^-6 s) = 1.7 x 10^4 cells = 7.2 million bits
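A minimal sketch of this calculation, using the OC-3 payload rate and coast-to-coast RTT from the slide:

```python
CELL_BITS = 53 * 8          # ATM cell size in bits
LINK_RATE = 150e6           # bps (SONET OC-3, as in the example)
RTT = 48e-3                 # seconds, U.S. coast-to-coast round trip

t_insert = CELL_BITS / LINK_RATE        # time to insert one cell on the link
cells_per_rtt = RTT / t_insert          # cells sent before any feedback can arrive
bits_per_rtt = cells_per_rtt * CELL_BITS

print(f"t_insert = {t_insert:.2e} s")           # ~2.8e-6 s
print(f"cells per RTT = {cells_per_rtt:.1e}")   # ~1.7e4 cells
print(f"bits per RTT = {bits_per_rtt:.1e}")     # ~7.2e6 bits
```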
7
Cell Delay Variation
- General requirement: delay should be short; ATM is designed to minimize delay
- For some applications, the rate of delivery of cells to the destination must be constant (ATM's CBR service)
- Contributors to cell delay variation:
  - network contribution: queuing and processing variations
  - variation at the UNI (user-network interface) due to cell processing
8
Origins of Cell Delay Variation at UNI
Per I.371: interleaving prior to delivery to the physical layer.

Cell delay can be introduced in the network and at the User-Network Interface (UNI). At the UNI, as shown in this example, delays arise because multiple processes (i.e., separate logical connections) vie for the same physical resource. Connection A and Connection B each generate "chunks" of data asynchronously, and these chunks must be interleaved as they are passed down to the ATM layer. Things are further complicated by the fact that the ATM layer can asynchronously inject OAM (operations and maintenance) network management cells into the stream. These cells must then be multiplexed onto the physical link, and the physical layer itself can inject further delay as it adds physical-layer overhead bits. The net result is that the cells presented by the application enter the network at a different (delayed) rate than the rate of generation. Note also that the delays described here are hard to predict and do not follow any repetitive pattern.

Further delays are possible at the physical layer.
9
Cell Delay Variation: CBR Cells
- δ = 1/R = inverse of the cell insertion rate
- Slope = R cells/sec = 1 / (cell insertion interval)
- V(0) = estimated tolerable delay variation
- D(i) = end-to-end delay for the ith cell
- V(i) = V(i-1) - [t_i - (t_{i-1} + δ)], where t_i is the arrival time of the ith cell
- A cell that arrives too late (the allowance V(i) is exhausted) is discarded
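A small sketch of the reassembly bookkeeping implied by this recursion. Variable names follow the slide; treating a cell as late when the running allowance V goes negative, and continuing the recursion afterward, are assumptions consistent with the figure rather than a definitive procedure:

```python
def cbr_reassembly(arrival_times, delta, v0):
    """Track the remaining delay-variation allowance V(i) for a stream of CBR cells.

    arrival_times: cell arrival times t_i at the destination
    delta: nominal inter-cell spacing (1/R)
    v0: initial tolerable delay variation V(0)
    Returns (cell index, V(i), accepted?) tuples.
    """
    results = [(0, v0, True)]                 # first cell starts the clock
    v = v0
    prev_t = arrival_times[0]
    for i, t in enumerate(arrival_times[1:], start=1):
        v = v - (t - (prev_t + delta))        # V(i) = V(i-1) - [t_i - (t_{i-1} + delta)]
        results.append((i, v, v >= 0))        # late beyond the allowance: discard
        prev_t = t
    return results

# Example: cells nominally 1 time unit apart, 0.5 units of initial slack
print(cbr_reassembly([0.0, 1.2, 2.1, 3.9], delta=1.0, v0=0.5))
```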
10
ATM Attributes
How we describe an ATM traffic flow:
- Traffic parameters
- QoS parameters
- Congestion (for ABR)
- Other (for UBR)

Each service category is characterized by a set of attributes that describe the services offered. When a connection is established, the user requests (and expects) a certain level of service as specified by these attributes. Traffic descriptors characterize the type of traffic that a source submits to the network, as well as the traffic that a given connection (VC) will handle; the network allows the connection only if resources are available to support the specified requirements. QoS parameters characterize the performance of a connection in terms of the quality of service expected. Congestion parameters are limited in ATM to the feedback attribute of ABR (more on this later). Several other attributes are available to characterize UBR traffic (primarily for use by TCP/IP). Let's now look at these attributes in more detail.
11
Traffic Parameters: Connection Traffic Descriptor
- Source Traffic Descriptor: PCR, SCR, MBS, MCR, MFS (more on the next slide)
- Cell Delay Variation Tolerance (τ): upper bound on the cell delay variation introduced by the network interface and the UNI (due to interleaving, physical-layer overhead, multiplexing, etc.). Note: this is the value V(0) from our earlier discussion of CDV.
- Conformance Definition: unambiguous specification of conforming cells of a connection at the UNI (see GCRA, later)

The connection traffic descriptor, in addition to the source traffic descriptor, further defines the flow of traffic in a virtual connection. One additional characteristic, Cell Delay Variation Tolerance (CDVT), and an unambiguous specification of how/what to test for conformance are added to the source traffic descriptor to provide a clear description of the traffic that the connection must handle.
12
Traffic Parameters: Source Traffic Descriptor
- Peak Cell Rate (PCR): upper bound on traffic submitted by the source (PCR = 1/T, where T = minimum cell spacing)
- Sustainable Cell Rate (SCR): upper bound on the average rate of traffic submitted by the source (measured over a longer interval)
- Maximum Burst Size (MBS): maximum number of cells sent continuously at PCR
- Minimum Cell Rate (MCR): used with ABR and GFR; minimum cell rate requested, with access to unused capacity up to PCR (elastic capacity = PCR - MCR)
- Maximum Frame Size (MFS): maximum size of a frame, in cells, available for GFR service

Traffic descriptors come in two forms: a source traffic descriptor, which characterizes traffic submitted to the network at the UNI, and a connection traffic descriptor, which characterizes the flow of cells in a connection. The characteristics above describe the traffic submitted at the source. Note that not all of these descriptors are used to characterize all flows, since certain attributes are meaningless for certain ATM service categories.
13
QoS Parameters
(negotiated between user and network during connection setup, as defined by the ATM Forum)
- Peak-to-peak cell delay variation (CDV): acceptable delay variation at the destination; the difference between the best-case and worst-case CTD
- Maximum Cell Transfer Delay (maxCTD): maximum time from transmission of the first bit of a cell at the source UNI to receipt of its last bit at the destination UNI
- Cell Loss Ratio (CLR): ratio of lost cells to total transmitted cells on a connection

QoS parameters provide a clear description of tolerable delay characteristics and of cell loss. CDV is the amount of delay variation (a time value) in cell arrival that is acceptable, and maxCTD specifies the maximum requested delay for each cell. Cell loss ratio is simply the requested ratio of lost cells to total transmitted cells (which should generally be a very low number).
14
Cell Transfer Delay Probability Density (real-time services)
Figure callouts:
- variable component of delay, due to buffering and cell scheduling
- fixed component: propagation through the physical media
- a fraction of cells exceed maxCTD and will be discarded or delivered late

Note from this graphic that maxCTD is made up of a fixed component and a variable component. The fixed component is simply the sum of the transmission and propagation delays through the network, plus any fixed processing delay at the switches along the path. The variable portion of the delay (CDV) is due to the "logical" delays associated with buffering and scheduling. Some fraction of the cells, α, will be discarded for exceeding maxCTD. The remainder, 1 - α, arrive within the peak-to-peak CDV, i.e., within the specified tolerance for delay variation. Don't confuse CDV with CDVT: CDV is negotiated when the connection is set up, whereas CDVT is specified as a requirement at the UNI and is not negotiated (because the application requires it to be usable).
15
Congestion Control and Other Traffic Attributes
Congestion Control
- defined only for the ABR service category
- uses network feedback controls
- ABR flow control mechanism (more later)

Other Attributes (introduced July 2000)
- Behavior class selector (BCS): for IP differentiated services (DiffServ); provides different levels of service among UBR connections; implementation dependent, no guidance in the specs
- Minimum desired cell rate (MDCR): a UBR application's minimum capacity objective

The feedback attribute is the only congestion control parameter currently defined. This attribute is relevant only to ABR and GFR; in practice, the only feedback mechanism available is the ABR flow control mechanism. The behavior class selector (BCS) is used with IP DiffServ, which we will discuss in detail later in the course. This parameter allows ATM connections to provide different levels of service via UBR. MDCR allows UBR applications to specify minimum capacity objectives. Note that this is specified as an objective, and all ATM can do within the context of UBR is provide best-effort service.
16
Service Category-Attribute Relationship
This slide shows how various parameters have meaning only within certain ATM service categories, and are nonsensical in others. For example, peak cell rate can be specified for all categories, but CDV and maxCTD are meaningless for lower-order categories such as UBR, ABR, and GFR. As mentioned earlier, congestion control mechanisms are specified only for ABR. And, as you can see, BCS and MDCR only make sense in the context of UBR services.
17
Traffic & Congestion Control Function Classification – A Framework
Figure callouts (control functions classified by response time):
- affect more than one connection, and are effective over a long timeframe
- determine if/how the network can accommodate a connection at a given QoS
- the network responds within the round-trip lifetime of a cell
- react immediately to a cell as it is transmitted

Now, given these parameters and capabilities, let's consider the context of their use in managing traffic and congestion in an ATM network. As shown above, the ITU-T has classified the various traffic and congestion control functions available in ATM networks based on the time intervals across which they are relevant. Let's look at these functions in more detail.
18
Resource Management Using Virtual Paths
Multiple VCCs with various QoS requirements can share the same VPC. Cases to consider:
- User-to-user application: VPCs between pairs of UNIs; VCC QoS is the user's responsibility, i.e., the user must ensure that the aggregate of the VCCs does not exceed the capacity allocated to the VPC
- User-to-network application: VPC between a UNI and a network node; the network must accommodate the QoS of individual VCCs
- Network-to-network application: VPC between two network nodes; the network must accommodate the QoS of individual VCCs

First, let's consider traffic control functions. On a long-term basis, ATM establishes virtual paths, and allocates resources to them, in a way that aggregates the capacity and performance requirements of the virtual channel connections that make up those paths. In other words, VPCs group VCCs with similar requirements. Note that there are three cases to consider in how virtual channel connections share a virtual path, and the responsibility for providing and "managing" the QoS of the individual VCCs within the VPC falls to the user or the network depending on the case.
19
Resource Management Using Virtual Paths
- The performance (QoS) of a VCC depends on the resources allocated to the VPC(s) through which the VCC extends
- The network allocates capacity to each VPC based on performance objectives agreed between network and subscriber (the traffic contract). Two approaches:
  - Aggregate peak demand: VPC capacity (e.g., data rate) set to the sum of the peak data rates of all VCCs
  - Statistical multiplexing: VPC capacity set greater than or equal to the average demand of all VCCs, but less than the aggregate peak demand
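A toy illustration of the two allocation approaches; the per-VCC peak/average rates and the statistical-multiplexing headroom factor below are made-up numbers, just to show the comparison:

```python
# Hypothetical per-VCC rates in Mbps: (peak, average)
vccs = [(10, 2), (25, 8), (5, 1), (40, 15)]

aggregate_peak = sum(peak for peak, _ in vccs)   # 80 Mbps: never congests, often idle
aggregate_avg = sum(avg for _, avg in vccs)      # 26 Mbps: lower bound for stat. muxing

# Statistical multiplexing picks a VPC capacity between the two extremes,
# trading utilization against CDV/CTD and cell loss risk.
stat_mux_capacity = 1.5 * aggregate_avg          # illustrative headroom choice only

print(aggregate_peak, aggregate_avg, stat_mux_capacity)
```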
20
Example VCC/VPC Configuration
This diagram provides a convenient way to think about how VCCs are grouped into VPCs. Note that VCCs 1 and 2 are subject to the performance characteristics of VPC b, VPC c, all of the virtual path switching functions (VP-Sw), and the virtual channel switching function (VC-Sw, top). One key consideration in setting up a VPC for multiple VCCs is how to allocate capacity to the VPC. If we use aggregate peak demand, i.e., the sum of the peak demand of all VCCs, we will probably end up with underutilized capacity, since most VCCs will not normally operate near their peak. If we use statistical multiplexing, we allocate VPC capacity at or somewhat above the sum of the average data rates of all VCCs. In this case we will experience greater cell delay variation (CDV) and cell transfer delay (CTD), and possibly a higher cell loss ratio, due to the inevitable buffering and queuing of cells (i.e., potentially lower QoS), but this approach has the advantage of more efficient utilization of the available capacity.
21
Connection Admission Control (CAC)
The network accepts the connection only if it can commit resources, in both directions, that satisfy the connection request:
- Service category (CBR, rt-VBR, ...)
- Connection traffic descriptor (PCR, ..., CDVT, conformance definition)
- QoS (peak-to-peak CDV, maxCTD, CLR)
- Cell loss priority (CLP bit: 0 or 0+1)

If the connection is accepted, a "traffic contract" is awarded to the user.

So, how does ATM "enforce" the characteristics we have just discussed, to ensure that users who have subscribed to a certain level of service actually get it (and others don't)? First, consider ATM's Connection Admission Control (CAC). This feature provides a policing function at connection time to ensure that the services requested/required are available in both directions; that is, that the resources required to provide the service level can be allocated. Quite simply, CAC checks at connection time whether the requested parameters can be satisfied, and grants the connection only if the answer is "yes". When the connection is granted, the user receives a traffic contract specifying that the network will satisfy the user's requirements. If the resources are not available, the connection is not established.
22
Connection Admission Control (Traffic Contract)
Figure: elements of the traffic contract; cell delay variation is also known as jitter.
23
Connection Admission Control (CAC)
Figure: the user and the ATM network agree on a contract covering traffic parameters (peak cell rate, sustainable cell rate, burst tolerance, etc.) and quality of service (delay, jitter, cell loss).
24
Procedures Used to Set Traffic Contract Parameters
Note from this chart that how traffic contract parameters are set depends on the type of connection. Certain parameters are established as defaults based on the type of service to which the user has subscribed, some are set by default rules of the network, and others are negotiated at connection establishment.
25
Usage Parameter Control
Purpose: after a connection is established, protect the network's resources from overload or abuse by violating connections.
- Monitors the connection for conformance to the traffic contract
- Detects violations of the assigned parameters based on the conformance definition agreed to in the contract
- Takes appropriate action

Usage Parameter Control (UPC) is a policing function, applied after connection setup, that monitors connections to determine whether VCCs are abiding by their individual traffic contracts. If the traffic is out of conformance, appropriate action is taken. This function ensures that all contracts can be met and that potential "contract violators" do not overload the network and prevent conforming users from getting the service they expect.
26
Usage Parameter Control
Figure: the ATM network polices a non-conforming ("rebel") application against its contract: "You are not in conformance with the contract. What should the penalty be?" Possible decisions: pass the cell, mark it (set the CLP bit), or drop it.
27
Usage Parameter Control Function Location
As shown in the figure, the UPC function can be applied at both the VPC and VCC levels, and it may be enforced at a variety of points in the network, depending on the logical flow and aggregation level of the VCC. For example, in Case A conformance is checked at entry to the VC switching function at the VCC level. In Case B, conformance is monitored on the VPC at the VP switching function and on each VCC at the VC switching function. And in Case C, UPC is performed by the network only at the VPC level, since the VPC terminates with a different network provider than the originator.
28
UPC Traffic Management
- Peak Cell Rate Algorithm: regulates the peak cell rate and the associated CDVT of a connection
- Sustainable Cell Rate Algorithm: regulates the sustainable cell rate and the associated burst tolerance of a connection
- Traffic Shaping: smooths out traffic at network entry points to reduce "clumping", reducing delays and helping ensure fair resource allocation
29
Generic Cell Rate Algorithm (GCRA): Virtual Scheduling
GCRA(I, L): I = increment, L = limit; ta(k) = arrival time of the kth cell; TAT = theoretical arrival time.
- At the arrival time ta(1) of the first cell of the connection, TAT = ta(1)
- Late arrival (the cell arrives at or after TAT, i.e., later than expected): OK; reset TAT to the actual arrival time
- Early arrival: test against the limit
  - early arrival within the limit: OK
  - early arrival beyond the limit: NOT OK (non-conforming)

The algorithm takes two arguments, I and L. With a PCR of R, I = T = 1/R and the CDVT limit τ = L. The peak cell rate algorithm is then expressed as GCRA(T, τ).
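A minimal sketch of the virtual scheduling formulation. Conforming cells advance TAT by I; the exact handling of ties at the boundaries is an assumption, not taken from the slide:

```python
class GcraVirtualScheduling:
    """GCRA(I, L) in virtual-scheduling form: I = increment, L = limit."""

    def __init__(self, increment, limit):
        self.I = increment
        self.L = limit
        self.tat = None              # theoretical arrival time, set by the first cell

    def conforms(self, ta):
        """Return True if a cell arriving at time ta is conforming."""
        if self.tat is None:         # first cell of the connection: TAT = ta(1)
            self.tat = ta + self.I
            return True
        if ta >= self.tat:           # late arrival: OK, reset TAT to the actual arrival
            self.tat = ta + self.I
            return True
        if ta >= self.tat - self.L:  # early, but within the limit: OK
            self.tat += self.I
            return True
        return False                 # early beyond the limit: non-conforming

# Peak-cell-rate policing: GCRA(T, tau) with T = 1/PCR
gcra = GcraVirtualScheduling(increment=1.0, limit=0.5)
print([gcra.conforms(t) for t in (0.0, 1.0, 1.6, 2.1, 2.3)])
# -> [True, True, True, False, False]
```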
30
Leaky Bucket Algorithm
31
GCRA: Continuous-State Leaky Bucket
Equivalent to virtual scheduling.
- Maximum bucket capacity is T + τ
- The counter X is incremented by T for each compliant cell
- The bucket drains at 1 unit per unit of time

GCRA(I, L): I = increment, L = limit; ta(k) = arrival time of the kth cell; X = value of the leaky bucket counter; X' = auxiliary variable; LCT = last compliance time. At the arrival time ta(1) of the first cell, X = 0 and LCT = ta(1).
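A sketch of the same test in continuous-state leaky-bucket form, following the variable names on the slide (X, X', LCT); it gives the same pass/fail decisions as the virtual-scheduling version above:

```python
class GcraLeakyBucket:
    """GCRA(I, L) as a continuous-state leaky bucket: capacity L + I, drains 1 unit/unit time."""

    def __init__(self, increment, limit):
        self.I = increment
        self.L = limit
        self.X = 0.0                 # bucket counter
        self.lct = None              # last compliance time

    def conforms(self, ta):
        if self.lct is None:         # first cell: X = 0, LCT = ta(1), cell conforms
            self.lct = ta
            self.X = self.I
            return True
        x_prime = max(0.0, self.X - (ta - self.lct))   # drain since last compliant cell
        if x_prime > self.L:                           # bucket would overflow: non-conforming
            return False
        self.X = x_prime + self.I                      # accept cell: add I to the bucket
        self.lct = ta
        return True

gcra = GcraLeakyBucket(increment=1.0, limit=0.5)
print([gcra.conforms(t) for t in (0.0, 1.0, 1.6, 2.1, 2.3)])
# -> [True, True, True, False, False], matching the virtual-scheduling sketch
```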
32
Depiction of GCRA
33
Effect of CDVT on Cell Arrival at UNI (example: T=4.5)
Example with T = 4.5:
- Ideal (τ = 0.5)
- Possible (τ = 1.5)
- Cell clumping possible (τ = 3.5); note: τ = T - δ, and N = 1 + ⌊τ / (T - δ)⌋
- Cell clumping possible (τ = 7); note: τ > T - δ, and N = 1 + ⌊τ / (T - δ)⌋
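A quick check of the clump-size formula for the values on the slide; taking δ as 1 cell time is an assumption consistent with the example, not stated on the slide:

```python
import math

def max_clump(tau, T, delta=1.0):
    """Maximum back-to-back cells allowed by GCRA(T, tau) when cells can be spaced delta apart."""
    if tau < T - delta:
        return 1                        # no clumping possible
    return 1 + math.floor(tau / (T - delta))

for tau in (0.5, 1.5, 3.5, 7.0):
    print(tau, max_clump(tau, T=4.5))   # -> 1, 1, 2, 3
```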
34
Sustainable Cell Rate Algorithm
Uses GCRA(Ts, τs), where:
- Ts = 1/Rs is the cell interarrival time at the sustainable cell rate Rs, and
- τs is the burst tolerance, i.e., the time scale over which cell-rate fluctuations (at PCR) are allowed

τs is derived from the burstiness of the traffic stream. The maximum burst size, MBS, that may be transmitted at the peak rate is

MBS = 1 + ⌊ τs / (Ts - T) ⌋

so τs can be any value in [ (MBS - 1)(Ts - T), MBS(Ts - T) ]. Using the minimum:

Burst Tolerance = τs = (MBS - 1)(Ts - T) = (MBS - 1)(1/SCR - 1/PCR)
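A small helper showing the relationship between MBS, SCR, PCR, and the burst tolerance, using the minimum value from the slide (exact fractions are used only to avoid floating-point rounding in the floor):

```python
from fractions import Fraction
import math

def burst_tolerance(mbs, scr, pcr):
    """Minimum burst tolerance tau_s for GCRA(Ts, tau_s): (MBS - 1)(1/SCR - 1/PCR)."""
    return (mbs - 1) * (Fraction(1, scr) - Fraction(1, pcr))

def max_burst_size(tau_s, scr, pcr):
    """MBS = 1 + floor(tau_s / (Ts - T)), with Ts = 1/SCR and T = 1/PCR."""
    return 1 + math.floor(tau_s / (Fraction(1, scr) - Fraction(1, pcr)))

tau_s = burst_tolerance(mbs=20, scr=10_000, pcr=50_000)    # rates in cells/second
print(float(tau_s))                                        # ~0.00152 s
print(max_burst_size(tau_s, scr=10_000, pcr=50_000))       # 20, recovering the MBS
```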
35
Sustainable Cell Rate Algorithm
Note that, if the traffic stream is constrained by both GCRA(T, τ) and GCRA(Ts, τs), then the maximum burst size (MBS) is

MBS = 1 + ⌊ τs / (Ts - T) ⌋

where τs is derived from the burstiness of the traffic stream: Burst Tolerance = τs = (MBS - 1)(Ts - T) = (MBS - 1)(1/SCR - 1/PCR).
36
UPC Function: Possible Actions based on CLP bit (dual CLP)
Forward the cell or discard it? (see p. 377)
P? = compliance test for parameter P
37
Token Bucket for Traffic Shaping
- Tokens are generated and fill the bucket at a constant rate ρ
- To pass, a cell removes a token from the bucket
- If the bucket is empty, the cell is queued to wait for the next token
- The departure rate is thus "smoothed" to ρ
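A simple event-driven sketch of this shaper; the token rate and bucket depth below are illustrative parameters, and arrivals are assumed to be given in non-decreasing order:

```python
class TokenBucketShaper:
    """Delay cells so the long-run departure rate is at most `rate` cells/sec."""

    def __init__(self, rate, depth):
        self.rate = rate              # token generation rate (rho), tokens per second
        self.depth = depth            # bucket capacity in tokens
        self.tokens = depth           # tokens available at time self.clock
        self.clock = 0.0

    def departure_time(self, arrival):
        """Return the time at which a cell arriving at `arrival` is released."""
        now = max(arrival, self.clock)
        # Add tokens accumulated since the last event, capped at the bucket depth.
        self.tokens = min(self.depth, self.tokens + (now - self.clock) * self.rate)
        if self.tokens < 1.0:         # bucket empty: queue until the next token is generated
            now += (1.0 - self.tokens) / self.rate
            self.tokens = 1.0
        self.tokens -= 1.0            # consume one token per departing cell
        self.clock = now
        return now

shaper = TokenBucketShaper(rate=2.0, depth=1.0)    # smooth departures to 2 cells/sec
print([shaper.departure_time(t) for t in (0.0, 0.1, 0.2, 2.0)])
# -> [0.0, 0.5, 1.0, 2.0]
```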
38
ABR Traffic Management
- CBR, rt-VBR, nrt-VBR: traffic contract with open-loop control
- UBR: best-effort sharing of unused capacity
- ABR: shares unused (available) capacity using closed-loop control of the source
  - Allowed Cell Rate (ACR): current maximum cell transmission rate
  - Minimum Cell Rate (MCR): network-guaranteed minimum cell rate
  - Peak Cell Rate (PCR): maximum value for ACR
  - Initial Cell Rate (ICR): initial value of ACR
39
ABR Traffic Management
- ACR is dynamically adjusted based on feedback to the source in the form of Resource Management (RM) cells
- RM cells contain three feedback fields:
  - Congestion Indication (CI) bit
  - No Increase (NI) bit
  - Explicit Cell Rate (ER) field
40
Flow of Data and RM Cells – ABR Connection
- The Nrm parameter is usually set to 32
- Figure shows the forward RM (FRM) cell flow and backward RM (BRM) cell flow
41
ABR Source Reaction Rules
- NI = 0, CI = 0: ACR ← max[MCR, min[ER, PCR, ACR + RIF x PCR]]
- CI = 1 (any NI): ACR ← max[MCR, min[ER, ACR(1 - RDF)]]
- NI = 1, CI = 0: ACR ← max[MCR, min[ER, ACR]]

RIF: fixed rate increase factor (default 1/16)
RDF: fixed rate decrease factor (default 1/16)
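A direct translation of these rules into a small function; the parameter defaults follow the slide, and the rule that CI = 1 forces a decrease regardless of NI follows the table above:

```python
def abr_source_reaction(acr, mcr, pcr, er, ci, ni, rif=1/16, rdf=1/16):
    """Return the new Allowed Cell Rate (ACR) after processing a backward RM cell."""
    if ci:        # CI = 1: multiplicative decrease (regardless of NI)
        return max(mcr, min(er, acr * (1 - rdf)))
    if ni:        # NI = 1, CI = 0: no increase allowed
        return max(mcr, min(er, acr))
    # NI = 0, CI = 0: additive increase by RIF x PCR, capped at PCR
    return max(mcr, min(er, pcr, acr + rif * pcr))

# Example: ACR of 10 Mbps, RM cell allows an increase, ER feedback of 40 Mbps
print(abr_source_reaction(acr=10e6, mcr=1e6, pcr=100e6, er=40e6, ci=0, ni=0))
# -> 16,250,000 (10 Mbps + 100 Mbps / 16, still below ER and PCR)
```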
42
Variations in Allowed Cell Rate
Example parameters: RIF = 1/16, RDF = 1/4.
43
ABR RM Cell Format
44
Initial Values of Cell Fields
45
ABR Parameters
46
ABR Capacity Allocation
Two functions of ATM switches:
- Congestion control: throttle back rates based on buffer dynamics
- Fairness: throttle back as required to ensure fair allocation of available capacity among connections

Two categories of switch algorithms:
- Binary: EFCI, CI, and NI bits
- Explicit rate: use of the ER field
47
Binary Feedback Schemes
- Single FIFO queue at each output port buffer: the switch sets EFCI, CI, or NI based on threshold(s) in the queue
- Multiple queues per port: a separate queue for each VC, or group of VCs, using threshold levels as above
- Selective feedback to dynamically allocate a fair share of capacity: the switch marks cells of VCs that exceed their fair share of buffer capacity
48
Explicit Rate Feedback Schemes
The basic scheme at the switch is:
- compute the fair share of capacity for each VC
- determine the current load or degree of congestion
- compute an explicit rate (ER) for each VC and send it to the source in an RM cell

Several examples of this scheme:
- Enhanced Proportional Rate Control Algorithm (EPRCA)
- Explicit Rate Indication for Congestion Avoidance (ERICA)
- Congestion Avoidance using Proportional Control (CAPC)
49
EPRCA
The switch calculates a running mean of the current load on each connection, called the MACR:

MACR(I) = (1 - α) x MACR(I - 1) + α x CCR(I)

where CCR is the connection's current cell rate (a field carried in RM cells); a typical value for α is 1/16. When the queue length at an output port exceeds the established threshold, the switch updates the ER field in RM cells for all VCs on that port as

ER ← min[ER, DPF x MACR]

where DPF is the down-pressure factor, typically set to 7/8. Effect: this lowers the ERs of VCs that are consuming more than their fair share of switch capacity.
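A sketch of the per-port EPRCA bookkeeping described above; the queue threshold and the example rates are illustrative values, and CCR is the current cell rate reported in each RM cell:

```python
class EprcaPort:
    """Per-output-port EPRCA state: exponentially weighted mean cell rate (MACR)."""

    def __init__(self, alpha=1/16, dpf=7/8, queue_threshold=100):
        self.alpha = alpha
        self.dpf = dpf
        self.queue_threshold = queue_threshold
        self.macr = 0.0

    def on_rm_cell(self, ccr, er, queue_length):
        """Process one RM cell: update MACR, and reduce ER if the port is congested."""
        self.macr = (1 - self.alpha) * self.macr + self.alpha * ccr
        if queue_length > self.queue_threshold:
            er = min(er, self.dpf * self.macr)   # squeeze VCs above their fair share
        return er

port = EprcaPort()
print(port.on_rm_cell(ccr=50e6, er=100e6, queue_length=150))   # ER pushed well below 100 Mbps
```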
50
ERICA
Makes adjustments to ER based on the switch load factor:

Load Factor (LF) = Input Rate / Target Rate

where the input rate is averaged over a fixed interval and the target rate is typically 85-90% of the link bandwidth. When LF > 1, congestion is threatened, and ERs are reduced per VC on a fair-share basis:

Fairshare = Target Rate / Number of VCs
VCshare = CCR / LF
newER = min[oldER, max[Fairshare, VCshare]]
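The corresponding ERICA computation, written out directly from these formulas; the link target rate, offered load, and VC count in the example are illustrative:

```python
def erica_new_er(old_er, ccr, input_rate, target_rate, num_vcs):
    """Compute the explicit rate ERICA would place in a backward RM cell."""
    lf = input_rate / target_rate        # load factor; > 1 means congestion threatens
    fairshare = target_rate / num_vcs    # equal split of the target rate
    vcshare = ccr / lf                   # this VC's rate scaled down by the overload
    return min(old_er, max(fairshare, vcshare))

# 10 VCs on a link with a 90 Mbps target rate, currently offered 120 Mbps in aggregate
print(erica_new_er(old_er=40e6, ccr=20e6, input_rate=120e6, target_rate=90e6, num_vcs=10))
# -> ~15 Mbps
```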
51
GFR Traffic Management
Simple, like UBR:
- no policing or shaping of traffic at the end system
- no guaranteed frame delivery
- depends on higher-level protocols (such as TCP) for reliable data transfer

Like ABR, provides capacity reservation and a traffic contract for QoS:
- guaranteed minimum rate without loss
- specified via PCR, MCR, MBS, MFS, CDVT

Requires that the network recognize frames as well as cells: in congestion, the network discards whole frames, not just individual cells.
52
GFR Mechanism
- Tagging and policing per the traffic contract
- Manage network resources to ensure fairness and avoid congestion
53
Frame-Based GCRA (F-GCRA)