Congestion in Data Networks


Data Communications: Congestion in Data Networks

What Is Congestion?
Congestion occurs when the number of packets being transmitted through the network approaches the network's packet-handling capacity.
Congestion control aims to keep the number of packets below the level at which performance falls off dramatically.
A data network is a network of queues.
Utilization of about 80% is generally considered critical.
Because queues are finite, data may be lost.
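The "80% utilization is critical" rule of thumb follows from basic queueing theory. A minimal sketch, using the M/M/1 mean-delay formula (not stated on the slide, but the standard model for a single queue):

```python
# M/M/1 queue: mean time in system T = S / (1 - rho), where S is the
# service time and rho the utilization. Delay grows without bound as
# rho -> 1, which is why ~80% utilization is treated as the practical
# ceiling for a network of queues.

def mm1_delay(rho, service_time=1.0):
    """Mean time in system for an M/M/1 queue at utilization rho."""
    if not 0 <= rho < 1:
        raise ValueError("utilization must be in [0, 1)")
    return service_time / (1.0 - rho)

for rho in (0.5, 0.8, 0.95):
    print(f"rho={rho:.2f}  mean delay = {mm1_delay(rho):.1f}x service time")
```

At 50% utilization the mean delay is 2x the service time; at 80% it is 5x; at 95% it is 20x, so a small load increase beyond 80% produces a dramatic delay increase.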

Figure 23.5 Incoming packet

Interaction of Queues

Effects of Congestion
Arriving packets are stored at input buffers (not in ATM).
A routing decision is made, and the packet moves to an output buffer.
Packets queued for output are transmitted as fast as possible (statistical time-division multiplexing).
If packets arrive too fast to be routed, or too fast to be output, the buffers fill.
Options: discard packets, use flow control, or let the congestion propagate through the network.

Ideal Performance
Top: as load increases, throughput increases proportionally.
Middle: as load increases, delay increases.
Bottom: power is the ratio of throughput to delay.

Practical Performance
The ideal case assumes infinite buffers and no overhead.
Unfortunately, buffers are finite.
Additional overhead is incurred exchanging congestion-control messages.

Effects of Congestion - No Control

Congestion Control Objectives
In general, we want to:
Minimize discards
Maintain agreed QoS (if applicable)
Minimize the probability that one end user monopolizes resources at the expense of others
Be simple to implement, with little overhead on network or user
Create minimal additional traffic
Distribute resources fairly
Limit the spread of congestion
Operate effectively regardless of the traffic flow
Have minimal impact on other systems
Minimize variance in QoS

Basic Mechanisms for Congestion Control
Open-Loop Congestion Control (prevention: policies applied by source or destination before congestion happens, without network feedback):
Retransmission policy - a good policy can reduce congestion
Window policy - selective-reject is better than go-back-N, which resends packets that may have arrived safely
Acknowledgment policy - don't acknowledge each packet individually
Discard policy - a good discard policy at routers may prevent congestion without harming the integrity of the transmission
Admission policy - a QoS mechanism: check a flow's requirements before admitting it

Basic Mechanisms for Congestion Control
Closed-Loop Congestion Control (mechanisms that react to congestion):
Backpressure - a congested router informs the upstream router to reduce its rate of outgoing packets
Choke packet or choke point - sent by a router directly to the source, similar to ICMP's source-quench packet
Implicit signaling - the source infers congestion from some other symptom, such as increased delay
Explicit signaling - a router sends an explicit signal:
Backward signaling - a bit is set in a packet moving in the direction opposite to the congestion
Forward signaling - a bit is set in a packet moving in the direction of the congestion; the receiver can respond with a policy such as slowing its acknowledgments

Basic Mechanisms for Congestion Control (visual examples)

Backpressure
If a node becomes congested, it can slow down or halt the flow of packets from other nodes.
Those nodes may in turn have to restrict their incoming packet rates, so the effect propagates back to the source.
Can be restricted to the logical connections generating the most traffic.
Used in connection-oriented networks that allow hop-by-hop congestion control (e.g., X.25).
Not used in ATM or frame relay; only recently developed for IPv6 (priority field).

Choke Packet
A control packet generated at the congested node and sent to the source node.
Example: ICMP source quench, sent by a router or the destination.
The source cuts back until it no longer receives source-quench messages.
Sent for every discarded packet, or in anticipation of congestion.
A rather crude mechanism.

Implicit Congestion Signaling
Transmission delay may increase with congestion, and packets may be discarded.
The source can detect these as implicit indications of congestion (the source is responsible, not the network).
Useful on connectionless (datagram) networks, e.g., IP-based (TCP includes congestion and flow control).
Also used in frame relay LAPF.

Explicit Congestion Signaling
The network alerts end systems to increasing congestion; end systems take steps to reduce the offered load.
Used on connection-oriented networks.
Backward: congestion-avoidance information is sent in the direction opposite to the packet travel.
Forward: congestion-avoidance information is sent in the same direction as the packet travel; when the end system receives it, it either sends it back to the source or hands it to a higher layer to take action.

Categories of Explicit Signaling
Binary - a bit set in a packet indicates congestion.
Credit-based - indicates how many packets the source may send; common for end-to-end flow control.
Rate-based - the source may transmit at a rate up to a set limit; any node along the path of the connection can reduce the rate limit in a control message to the source (e.g., ATM).
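Rate-based signaling amounts to a minimum over the path: each node may only lower the limit carried in the control message. A hypothetical helper (names are illustrative, not from any standard):

```python
# In rate-based explicit signaling, the control message starts with the
# source's requested rate, and each node on the path may reduce it, so
# the source ends up with the minimum rate any node can support.

def negotiate_rate(source_request, path_limits):
    """Rate the source may use after the control message has
    traversed every node on the path."""
    rate = source_request
    for limit in path_limits:
        rate = min(rate, limit)   # a node may only reduce the rate
    return rate

print(negotiate_rate(100, [150, 60, 80]))   # -> 60
```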

How Does TCP Handle / Avoid Congestion? (details in TDC 365 and TDC 463)
To handle congestion: TCP has a sender window whose size is the minimum of the receiver's advertised window and the network congestion window.
To avoid congestion: TCP uses slow start and additive increase - at the beginning, TCP sets the congestion window to one maximum segment size, then increases the window with each ack.
It also uses multiplicative decrease - after a timeout, the threshold is set to half the current congestion window, and the congestion window is reset to one segment (then slow start again).
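The slow-start / additive-increase / multiplicative-decrease behavior described above can be sketched as a small state machine. This counts in segments and is a simplification of real TCP, which works in bytes and also reacts to duplicate ACKs:

```python
# One RTT step of a simplified TCP congestion-window state machine:
# slow start doubles cwnd up to the threshold, congestion avoidance
# adds one segment per RTT, and a timeout halves the threshold and
# resets cwnd to one segment.

def next_cwnd(cwnd, ssthresh, timeout=False):
    """Return (new_cwnd, new_ssthresh) after one RTT."""
    if timeout:                      # multiplicative decrease
        return 1, max(cwnd // 2, 2)
    if cwnd < ssthresh:              # slow start: exponential growth
        return min(cwnd * 2, ssthresh), ssthresh
    return cwnd + 1, ssthresh        # congestion avoidance: additive

cwnd, ssthresh = 1, 16
trace = [cwnd]
for _ in range(6):
    cwnd, ssthresh = next_cwnd(cwnd, ssthresh)
    trace.append(cwnd)
print(trace)   # -> [1, 2, 4, 8, 16, 17, 18]
```

Note the shape: exponential growth to the threshold (16), then linear growth of one segment per round trip.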

Figure 23.8 Multiplicative decrease

How Does Frame Relay Handle Congestion?
Connection management, coupled with:
A discard strategy
Explicit signaling
Implicit signaling
In more detail:

Connection Management
Before a frame relay network allows a user to transmit data, the two parties agree on a connection.
Some call this an SLA (service level agreement); frame relay calls it the CIR (committed information rate).
Committed burst size - the maximum amount of data the network agrees to transfer under normal conditions.
Excess burst size - the maximum amount of data in excess of the committed burst size.
Different frame relay companies offer different agreements.

Connection Management
What happens if you exceed your CIR and the network experiences congestion?
Frame relay may start discarding your frames - those marked with the Discard Eligible bit set to 1 (the discard strategy).
Does frame relay tell you that your frames are being tossed? No. It assumes a higher-layer protocol (such as TCP) will detect lost or missing frames.
Frame relay could discard arbitrarily with no regard for source, but then there would be no reward for restraint, and end systems would simply transmit as fast as possible.
The CIR is not 100% guaranteed, but the network tries hard to honor it.
The aggregate CIR should not exceed the physical data rate.
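The discard strategy can be sketched as a three-way classification over a measurement interval (the helper name and figures below are illustrative): traffic up to the committed burst size Bc is delivered normally, traffic between Bc and Bc + Be (excess burst) is marked Discard Eligible, and anything beyond that may be dropped outright.

```python
# Classify one interval's traffic against the frame relay contract:
# committed bits are delivered, excess bits are marked DE=1 (dropped
# first under congestion), and bits beyond Bc + Be are discarded.

def classify(bits_sent, bc, be):
    """Return (committed, discard_eligible, discarded) bit counts."""
    committed = min(bits_sent, bc)
    discard_eligible = min(max(bits_sent - bc, 0), be)
    discarded = max(bits_sent - bc - be, 0)
    return committed, discard_eligible, discarded

print(classify(120_000, bc=64_000, be=32_000))   # -> (64000, 32000, 24000)
```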

Figure 23.1 Traffic descriptors

Relationship Among Congestion Parameters

Explicit Signaling
The network alerts end systems to growing congestion.
Backward explicit congestion notification (BECN): notifies the user that congestion-avoidance procedures should be initiated for traffic in the opposite direction; simpler.
Forward explicit congestion notification (FECN): the notification travels forward, so the end user must somehow get a signal back to the other end to tell it to slow down.
The frame handler monitors its queues and may notify some or all logical connections.
User response: reduce the transmission rate.

Figure 23.9 BECN

Figure 23.10 FECN

Figure 23.11 Four cases of congestion

Implicit Signaling
Implicit congestion notification, telecom definition: in frame relay, an inference by user equipment that congestion has occurred in the network. The inference is triggered when the receiving frame relay access device (FRAD) recognizes transmission delays. Based on block, frame, or packet sequence numbers, another protocol may recognize that one or more frames have been lost in transit. Control mechanisms at the upper protocol layers of the end devices then deal with frame loss by requesting retransmissions. (From: YourDictionary.com)

What About ATM?
High speed, small cell size, limited overhead bits.
Requirements (difficult to meet):
The majority of traffic is not amenable to flow control.
Feedback is slow, because transmission time is small compared with propagation delay.
Wide range of application demands and traffic patterns.
Different network services.
High-speed switching and transmission increase volatility.

Latency/Speed Effects
Consider a typical ATM transmission speed of 150 Mbps.
Inserting a single cell takes about 2.8 × 10⁻⁶ seconds.
Time to traverse the network depends on propagation delay and switching delay.
Assume propagation at two-thirds the speed of light: if source and destination are on opposite sides of the USA, propagation time is about 48 × 10⁻³ seconds.
With implicit congestion control, by the time a dropped-cell notification reaches the source, 7.2 × 10⁶ bits have already been transmitted.
So this is not a good strategy for ATM.
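The arithmetic behind these figures, reproduced as a sketch (an ATM cell is 53 bytes = 424 bits; the 48 ms one-way propagation delay is the slide's figure):

```python
# Cell insertion time = cell size / link rate; bits "in flight" before
# a congestion notification arrives = link rate x one-way delay.

CELL_BITS = 53 * 8        # 424 bits per ATM cell
RATE = 150e6              # 150 Mbps link
PROP_DELAY = 48e-3        # one-way propagation delay (slide's figure)

insertion_time = CELL_BITS / RATE      # ~2.83e-6 s per cell
bits_in_flight = RATE * PROP_DELAY     # ~7.2e6 bits

print(f"cell insertion time: {insertion_time * 1e6:.2f} us")
print(f"bits sent before feedback arrives: {bits_in_flight:.0f}")
```

The point of the comparison: the network can transmit millions of bits in the time it takes the loss signal to propagate back, so implicit (loss-based) feedback reacts far too late at ATM speeds.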

Cell Delay Variation
For ATM voice/video, the data is a stream of cells.
The delay across the network must be short AND the rate of delivery must be constant.
There will always be some variation in transit time.
Solution: delay cell delivery to the application so that a constant bit rate can be maintained.

Time Reassembly of CBR Cells
D(i) = end-to-end delay of the ith cell.
V(0) = estimate of the amount of cell delay variation that the application can tolerate.
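A sketch of the reassembly schedule: cell i is played at arrival(1) + V(0) + (i − 1) × interval, and a cell arriving after its playback time is effectively lost to the application. The helper and the sample arrival times below are illustrative:

```python
# Time reassembly of CBR cells: the playback clock starts V0 after the
# first arrival, then advances one cell interval per cell. Cells that
# arrive after their slot are late (discarded by the application).

def playback_schedule(arrivals, v0, interval):
    """Return a list of (playback_time, on_time) pairs, one per cell."""
    start = arrivals[0] + v0
    result = []
    for i, arrival in enumerate(arrivals):
        t_play = start + i * interval
        result.append((t_play, arrival <= t_play))
    return result

# cells sent every 1 ms; delay-variation budget V0 = 2 ms
sched = playback_schedule([10.0, 11.5, 14.2, 14.9], v0=2.0, interval=1.0)
print(sched)   # the third cell misses its 14.0 ms slot
```

A larger V(0) tolerates more jitter but adds delay for every cell, which is the trade-off the slide's D(i)/V(0) notation captures.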

Various Network Contributions to Cell Delay Variation
Packet-switched networks: queuing delays and routing-decision time.
Frame relay: the same, but to a lesser extent.
ATM: less than frame relay. The ATM protocol is designed to minimize processing overhead at switches, and ATM switches have very high throughput, so the only noticeable delay comes from congestion. The network must therefore not accept a load that causes congestion.

Cell Delay Variation at the UNI in ATM
Even if the application produces data at a fixed rate, processing at (potentially) three ATM layers causes delay:
Interleaving cells from different connections.
Interleaving operation and maintenance (OAM) signals.
If synchronous digital hierarchy (SDH) frames are used, further delays are inserted at the physical layer.
These delays cannot be predicted. (See figure on the next slide.)

Origins of Cell Delay Variation

Traffic and Congestion Control Objectives for ATM
ATM-layer traffic and congestion control should support QoS classes for all foreseeable network services.
It should not rely on AAL protocols that are network specific, nor on higher-level application-specific protocols.
Any traffic and congestion controls should minimize network and end-to-end system complexity.

Traffic Management and Congestion Control Techniques
ITU-T and the ATM Forum have defined a range of traffic management functions to maintain the QoS of ATM connections:
(1) Resource management using virtual paths - separate traffic flows according to service characteristics
(2) Connection admission control
(3) Usage parameter control
(4) Traffic shaping
Let's examine these in more detail.

Resource Management Using Virtual Paths (1)
An ATM network can use virtual paths to group similar virtual channels.

Connection Admission Control (2)
A good first line of defense.
The user specifies traffic characteristics for a new connection (VCC or VPC) by selecting a QoS; the network accepts the connection only if it can meet the demand.
Traffic contract parameters:
Peak cell rate - upper bound (CBR and VBR)
Cell delay variation (CBR and VBR)
Sustainable cell rate - average rate (VBR)
Burst tolerance (VBR)

Usage Parameter Control (3)
Monitors an established connection to ensure its traffic conforms to the contract.
Protects network resources from overload by a single connection.
Done on both VCCs and VPCs.
Controls peak cell rate and cell delay variation, or sustainable cell rate and burst tolerance.
Cells that do not conform to the traffic contract are discarded.
Also called traffic policing.

Traffic Shaping (4)
Smooths out the traffic flow and reduces cell clumping.
Token bucket and leaky bucket are examples of traffic shaping.
A token bucket allows bursts, while a leaky bucket maintains an even flow. (See figures on the next slides.)

Figure 23.18 Token bucket

Token Bucket

Figure 23.16 Leaky bucket

Figure 23.17 Leaky bucket implementation
The leaky bucket keeps an average flow moving. If the queue overflows, packets are discarded. Unlike the token bucket, no credit accumulates during idle periods.
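Both shapers can be sketched in a few lines (simplified to one step per time tick; this token bucket drops anything beyond its available credit, whereas a full shaper would queue it):

```python
# Token bucket: credit accumulates up to the bucket depth, so a burst
# up to `depth` can pass at once. Leaky bucket: arrivals are queued
# (overflow dropped) and drained at a fixed rate, so output is smooth.

class TokenBucket:
    def __init__(self, rate, depth):
        self.rate, self.depth, self.tokens = rate, depth, depth

    def tick(self, arriving):
        """One tick: add rate tokens (capped), send what credit allows."""
        self.tokens = min(self.tokens + self.rate, self.depth)
        sent = min(arriving, self.tokens)
        self.tokens -= sent
        return sent

class LeakyBucket:
    def __init__(self, rate, queue_size):
        self.rate, self.queue_size, self.queue = rate, queue_size, 0

    def tick(self, arriving):
        """One tick: queue arrivals (drop overflow), drain at fixed rate."""
        self.queue = min(self.queue + arriving, self.queue_size)
        sent = min(self.queue, self.rate)
        self.queue -= sent
        return sent

burst = [6, 0, 0, 0]   # 6 packets at once, then silence
tb = TokenBucket(rate=2, depth=6)
lb = LeakyBucket(rate=2, queue_size=10)
print([tb.tick(a) for a in burst])   # -> [6, 0, 0, 0]  (burst passes)
print([lb.tick(a) for a in burst])   # -> [2, 2, 2, 0]  (smoothed out)
```

The two output traces show exactly the contrast the slide draws: the token bucket lets the whole burst through at once, while the leaky bucket emits an even flow.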

ATM's Real-time Traffic Management
QoS for CBR and rt-VBR is based on a traffic contract (connection admission control) and UPC (usage parameter control).
There is no feedback in these systems; nonconforming cells are simply discarded. This is called open-loop control.
It is not used for ABR or UBR traffic.

Non-real-time Traffic Management
Some applications (Web, file transfer) do not have well-defined traffic characteristics.
Best effort: allow these applications to share unused capacity; if congestion builds, cells are dropped (e.g., UBR).
Closed-loop control: ABR connections share the available capacity, each varying between its minimum cell rate (MCR) and peak cell rate (PCR).
ABR flow is limited to the available capacity by feedback; buffers absorb excess traffic during the feedback delay, giving low cell loss.

Feedback Mechanisms
Transmission rate parameters: allowed cell rate (ACR), minimum cell rate (MCR), peak cell rate (PCR), initial cell rate (ICR).
The source starts with ACR = ICR and adjusts ACR based on feedback from the network.
Feedback is carried in resource management (RM) cells: a congestion indication (CI) bit, a no-increase (NI) bit, and an explicit cell rate (ER) field.
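One RM-cell update of the source's allowed cell rate can be sketched as follows. The rate-decrease and rate-increase factors (RDF/RIF) and the clamping order are simplified relative to the full ATM Forum source behavior:

```python
# On each returning RM cell: cut the rate if CI is set, hold if NI is
# set, otherwise ramp up; always clamp to the ER field and to the
# contracted [MCR, PCR] range.

def adjust_acr(acr, mcr, pcr, er, ci, ni, rdf=1/16, rif=1/16):
    """One RM-cell update of the allowed cell rate (cells/s)."""
    if ci:                        # congestion: multiplicative decrease
        acr -= acr * rdf
    elif not ni:                  # no congestion, increase permitted
        acr += pcr * rif
    acr = min(acr, er, pcr)       # never above ER or PCR
    return max(acr, mcr)          # never below MCR

acr = adjust_acr(10_000.0, mcr=1_000, pcr=100_000, er=50_000,
                 ci=False, ni=False)
print(acr)   # -> 16250.0 (increased by PCR/16)
```

This is the closed loop in miniature: additive increase while the path is clear, multiplicative decrease when any switch marks CI, and the ER field as a hard ceiling any node can lower.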

Variations in Allowed Cell Rate

Cell Flow (RM = resource management)

23.7 Integrated Services
Integrated Services (IntServ) is a model for providing QoS in the Internet at the IP layer.
IntServ is flow-based: a user needs to create a flow, or virtual circuit, between source and destination.
But IP is connectionless - how do you create a connection? Use RSVP.

Figure 23.19 Path messages
Path messages are sent from the sender (S1) to all receivers (multiple if multicast). This establishes the path.

Figure 23.20 Resv messages
Once the path is set, receivers return Resv messages. Note how reservations are merged (next slide).

Figure 23.21 Reservation merging
R3 takes the larger of the two reservations and sends that upstream.
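Merging is just a maximum over the downstream requests, since one reservation of the largest size satisfies every branch. A trivial sketch:

```python
# A router forwarding Resv messages upstream requests only the largest
# bandwidth any downstream branch asked for.

def merged_reservation(downstream_requests):
    """Bandwidth to request upstream for a set of downstream Resv values."""
    return max(downstream_requests)

print(merged_reservation([2.0, 3.0]))   # -> 3.0 (R3 forwards the larger)
```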

Figure 23.22 Reservation styles
Wild-card filter - the router creates a single reservation for all the senders, based on the largest request.
Fixed filter - the router creates a distinct reservation for each flow.
Shared explicit - the router creates a single reservation which can be shared by a set of flows.

23.8 Differentiated Services
An alternative to Integrated Services, produced by the IETF to create a class-based QoS model for IP.
Beyond the scope of this class.

Figure 23.24 Traffic conditioner
The meter checks whether the incoming flow matches the negotiated traffic profile.
The marker can re-mark a packet that is using best-effort delivery, or down-mark a packet, based on information received from the meter.
The shaper uses the information received from the meter to reshape the traffic.
The dropper works like a shaper with no buffer, discarding packets if the flow severely violates the negotiated profile.