CSCD 433/533 Advanced Networks

CSCD 433/533 Advanced Networks, Spring 2016
Lecture 21: Congestion Control and Queuing Strategies

Topics
- Congestion Control and Resource Allocation
- Flows
- Types of Mechanisms
- Evaluation Criteria: Effective Throughput, Delay, and Power
- Fair Queuing Strategies

Introduction
The goal of congestion control is to regulate traffic flow so as to avoid saturating or overloading intermediate nodes in the network.

Introduction
- So far (in CSCD 330) we have looked at congestion control in terms of TCP: the congestion window, which operates on the end systems.
- TCP's philosophy is to let congestion happen and then control it by sending less.
- We were taught that routers don't have much to do with congestion; they do manage queues and can let end systems know that congestion is happening.

Congestion: Effects
Congestion is undesirable because it can cause:
- Increased delay, due to queuing
- Packet loss, due to buffer overflow
- Reduced throughput, due to packet loss and retransmission
Analogy: "rush hour" traffic

Buffering: A Solution?
- Buffering in switches/routers can help alleviate short-term or transient congestion, but under sustained overload buffers will still fill up and packets will be lost.
- Buffering only defers the congestion problem; more buffers mean more queuing delay.
- Beyond a certain point, more buffering makes the congestion problem worse. Why? Because of increased delay and retransmission.

Congestion Control, Resource Allocation and Provisioning
- These are active areas of research; the problem crosses all network layers.
- Resource allocation: the process by which network elements try to meet the competing demands that applications have for network resources. It is not always possible to meet all demands; too many packets end up queued.
- Congestion control: describes how the network responds to, or controls, overload conditions.

Congestion Control, Resource Allocation and Provisioning
- Provisioning is the long-term solution to congestion.
- It involves spare routers, purchasing extra bandwidth, and allocating resources.
- It helps absorb temporary congestion conditions.

Layers Involved
- Congestion control and resource allocation are complementary concepts.
- Networks can take an active role: allocate resources well, and congestion is avoided.
- Ask: at what layers of the network are resource allocation and provisioning done? Where is congestion control implemented?

Introduction
- Resource allocation and provisioning: network layer
- Congestion control: network layer, transport layer, and link layer

Flows in Networks
- Up to now we have defined networks as being either connectionless (datagram) or connection-oriented (circuit or virtual circuit).
- In reality this is too rigid a definition. The datagram model says each datagram travels independently of other datagrams, but connections send a stream of datagrams from a pair of hosts through a set of routers.
- This is known as a flow; we have looked at these before.
- Flow: a sequence of packets sent between a source and destination pair, following the same route through the network, in a given amount of time.

Flows in Networks
- Flows can be defined between source/destination hosts or between source/destination host/port pairs.
- A flow between a source/destination port pair is the same as a channel.
- Flows are something routers can see and manipulate; a router maintains some state for flows (a sketch of such per-flow state follows below).
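As a rough illustration of the per-flow state a router might keep, here is a minimal Python sketch, assuming flows are identified by the usual 5-tuple of header fields; the dictionary-based packet representation and field names are purely illustrative, not from the slides.

```python
from collections import defaultdict

# Hypothetical per-flow soft state: 5-tuple -> packet and byte counters.
flow_table = defaultdict(lambda: {"packets": 0, "bytes": 0})

def classify(pkt):
    """Return the assumed 5-tuple identifying the packet's flow."""
    return (pkt["src_ip"], pkt["dst_ip"], pkt["proto"],
            pkt["src_port"], pkt["dst_port"])

def account(pkt):
    """Update the state a router might keep for the packet's flow."""
    entry = flow_table[classify(pkt)]
    entry["packets"] += 1
    entry["bytes"] += pkt["length"]

# Two packets between the same host/port pair fall into one flow entry.
for _ in range(2):
    account({"src_ip": "10.0.0.1", "dst_ip": "10.0.0.2", "proto": "TCP",
             "src_port": 5000, "dst_port": 80, "length": 1500})
print(len(flow_table))   # -> 1 (one flow, two packets, 3000 bytes)
```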

Figure: Flows - routers can see them

Resource Allocation (RA)
- There are many ways to classify RA mechanisms; we examine a few in the next few slides.
- Router-centric vs. host-centric:
- Router-centric: each router is responsible for deciding when packets are forwarded and selects when packets are to be dropped.
- Host-centric: end hosts observe network conditions and adjust their behavior accordingly.

Resource Allocation (RA)
- Host vs. router: in reality there is a lot of overlap between the two schemes; both are used to help with allocation of resources. Routers do some allocation, and hosts adjust their behavior in response to traffic conditions. How do hosts do this?
- Reservation-based vs. feedback-based:
- In reservation-based systems, the host asks the network for a certain capacity at the time a flow is established. Each router allocates enough resources to satisfy the request; if the request can't be satisfied because of over-committed resources, the router rejects the flow.

Resource Allocation
- Feedback-based approach: end hosts send data without first reserving any capacity, and then adjust their sending rate according to the feedback they receive.
- Window-based vs. rate-based:
- Window-based: TCP is an example, where the receiver advertises a window to the sender. How much buffer space the receiver has limits the amount of data the sender can transmit; this supports flow control (a sketch follows below).
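A minimal sketch of the window-based idea, assuming a sender that simply refuses to exceed the receiver's advertised window; the class and the byte counts are hypothetical and are not a description of TCP's actual implementation.

```python
class WindowSender:
    """Window-based flow control sketch: at most `window` unacknowledged
    bytes may be outstanding at any time."""

    def __init__(self, advertised_window):
        self.window = advertised_window   # set by the receiver's advertisement
        self.in_flight = 0                # bytes sent but not yet acknowledged

    def send(self, nbytes):
        if self.in_flight + nbytes > self.window:
            return False                  # must wait for ACKs to open the window
        self.in_flight += nbytes
        return True

    def ack(self, nbytes):
        self.in_flight = max(0, self.in_flight - nbytes)

s = WindowSender(advertised_window=4000)
print(s.send(3000))   # True: fits in the advertised window
print(s.send(3000))   # False: would exceed the receiver's buffer space
s.ack(3000)
print(s.send(3000))   # True again once data has been acknowledged
```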

Resource Allocation (RA)
- Window-based vs. rate-based (continued):
- Window-based: similar to the TCP mechanism, routers can use a window advertisement to reserve buffer space.
- Rate-based: use the number of bits per second the receiver or network is able to absorb. Several multimedia protocols are rate-based; for example, if the receiver says it can handle 1 Mbps, the sender sends frames within that limit (a sketch follows below).
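For the rate-based case, a hedged sketch of pacing frames to the 1 Mbps the receiver says it can absorb; the transmit placeholder and the frame size are assumptions for illustration.

```python
import time

def transmit(frame):
    """Placeholder for the actual link-layer send."""
    print("sent frame", frame)

def send_at_rate(frames, frame_bits, rate_bps):
    """Pace frames so the sending rate stays within rate_bps."""
    interval = frame_bits / rate_bps      # seconds between frame departures
    for frame in frames:
        transmit(frame)
        time.sleep(interval)              # never exceed the receiver's rate

# 12,000-bit (1500-byte) frames at 1 Mbps -> one frame every 12 ms.
send_at_rate(range(3), frame_bits=12_000, rate_bps=1_000_000)
```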

Evaluation Criteria
- How do you evaluate resource allocation? Two ways:
- Is it effective? Does it work?
- Is it fair? Does it distribute resources among competitors?
- Effectiveness: throughput and delay. We want as much throughput with as little delay as possible. To increase throughput, we let more packets into the network, yet there is a relationship between throughput and delay.

Evaluation Criteria
- Effectiveness, continued: a higher number of packets in the network increases the length of queues at routers, and longer queues mean more delay.
- Network designers specify the relationship, called the power of the network: Power = Throughput / Delay.
- Power might not be the best metric for judging resource allocation effectiveness, because it is based on a network model that assumes infinite queues.

Evaluation Criteria
- Power = Throughput / Delay. The objective is to maximize this ratio.
- Power is a function of how much load you place on the network; the load in turn is set by the resource allocation mechanism.

Evaluation Criteria
- Ideally, resource allocation would operate at the peak of the power curve.
- Left of the peak: too conservative, not allowing enough packets in.
- Right of the peak: too generous, increasing delay due to queues (a rough numeric sketch follows below).
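To make the shape of the power curve concrete, here is a small numeric sketch. It assumes a simple M/M/1-style model in which delay grows as 1/(capacity - load); that model, and the normalized capacity of 1.0, are assumptions for illustration and not part of the slides.

```python
CAPACITY = 1.0                        # normalized link capacity (assumed)

def power(load):
    """Power = throughput / delay under the assumed delay model."""
    if load >= CAPACITY:
        return 0.0                    # queue grows without bound
    delay = 1.0 / (CAPACITY - load)   # assumed M/M/1-style delay
    throughput = load
    return throughput / delay         # = load * (CAPACITY - load)

for load in (0.1, 0.3, 0.5, 0.7, 0.9):
    print(f"load={load:.1f}  power={power(load):.2f}")
# Power peaks near load = 0.5: left of the peak is too conservative,
# right of the peak queuing delay dominates, matching the curve described above.
```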

Fair Resource Allocation
- Fairness is another criterion for judging resource allocation.
- When several flows share a particular link, we want each flow to get an equal share of the bandwidth.
- So how do you quantify the fairness of a congestion control mechanism?

Fairness Index
- Jain's fairness index for n flows with throughputs x1, x2, ..., xn:
  f(x1, x2, ..., xn) = (x1 + x2 + ... + xn)^2 / (n * (x1^2 + x2^2 + ... + xn^2))
- If all n flows get 1 unit of data per second, the ratio becomes n^2 / (n * n) = 1; 1 is the maximum value, reached when all flows have equal throughput.
- Suppose instead one flow receives a throughput of 1 + a. The fairness index becomes (n^2 + 2na + a^2) / (n^2 + 2na + na^2); with the larger denominator, the index is less than 1.
- That one flow is getting either more than the rest or, if a is negative, less (a small computation follows below).
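A few lines of Python make the index easy to experiment with; the example throughput values are arbitrary.

```python
def fairness_index(throughputs):
    """Jain's fairness index: (sum of x_i)^2 / (n * sum of x_i^2)."""
    n = len(throughputs)
    total = sum(throughputs)
    return total * total / (n * sum(x * x for x in throughputs))

print(fairness_index([1, 1, 1, 1]))      # 1.0   : all flows equal (maximum)
print(fairness_index([1, 1, 1, 1.5]))    # ~0.96 : one flow gets 1 + a, a = 0.5
print(fairness_index([1, 1, 1, 0]))      # 0.75  : one flow starved (a = -1)
```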

Queuing Discipline
- Each router must implement some queuing discipline, which governs how packets are buffered while waiting to be transmitted.
- The queuing algorithm allocates bandwidth and buffer space, and affects the latency of a packet: how long the packet waits to get transmitted.
- Two common queuing algorithms: FIFO (First In, First Out) and FQ (Fair Queuing).

FIFO
- The packet that arrives first is the first to get transmitted.
- If the buffer is full, the last-arriving packet gets discarded: when a packet arrives and the queue is full, that packet is dropped. This is called tail drop.
- FIFO with tail drop is the simplest queuing implementation.
- Other drop policies are possible (more later).

FIFO with Tail Drop
- First in, first to transmit, as long as there is buffer space; last-arriving packets get dropped (a sketch follows below).
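A minimal sketch of FIFO with tail drop, assuming a buffer measured in packets; the capacity of three packets is arbitrary.

```python
from collections import deque

class FifoTailDrop:
    """Enqueue at the tail, transmit from the head, and drop the arriving
    packet when the buffer is already full (tail drop)."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.buffer = deque()

    def enqueue(self, pkt):
        if len(self.buffer) >= self.capacity:
            return False                 # buffer full: packet is tail-dropped
        self.buffer.append(pkt)
        return True

    def dequeue(self):
        return self.buffer.popleft() if self.buffer else None

q = FifoTailDrop(capacity=3)
for pkt in ("p1", "p2", "p3", "p4"):
    print(pkt, "queued" if q.enqueue(pkt) else "dropped (tail drop)")
```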

Is FIFO Good for Congestion?
- FIFO is the default queuing discipline in the Internet, but it does nothing for congestion control.
- It is easy to implement, but it causes packets to be lost in bursts; many packets can be lost from a single flow.
- What other queuing schemes are there?
- Priority queuing: the idea is to mark each packet with a priority; the mark could be placed in the Type of Service field of the IP packet.

Priority Queue
- The router implements a separate FIFO queue per priority class.
- Always transmit from the highest-priority non-empty queue first.
- Could there be any problems with this?

Priority Queue
- Within each priority queue, FIFO is used.
- This doesn't make QoS guarantees; it just gives high-priority packets preference.
- Problem: the high-priority queue can starve out all other queues.
- For this to work, there need to be limits on how much high-priority traffic to allow (a sketch follows below).
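A sketch of strict priority queuing with one FIFO per class, which also makes the starvation problem visible; the two classes and packet names are hypothetical.

```python
from collections import deque

class PriorityQueuing:
    """One FIFO per priority class; always serve the highest-priority
    non-empty queue first (class 0 is highest)."""

    def __init__(self, num_classes):
        self.queues = [deque() for _ in range(num_classes)]

    def enqueue(self, pkt, priority):
        self.queues[priority].append(pkt)

    def dequeue(self):
        for q in self.queues:            # scan from highest priority down
            if q:
                return q.popleft()
        return None                      # nothing to send

pq = PriorityQueuing(num_classes=2)
pq.enqueue("low-1", priority=1)
pq.enqueue("high-1", priority=0)
pq.enqueue("high-2", priority=0)
print(pq.dequeue(), pq.dequeue(), pq.dequeue())  # high-1 high-2 low-1
# If high-priority packets never stop arriving, "low-1" is never served:
# the starvation problem noted above.
```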

Queuing Disciplines
- Fair Queuing (FQ) is yet another strategy for queues.
- It tries to maintain a fair allocation of resources without under-utilizing network resources.
- What does it do? It maintains a separate queue for each flow, and the router services the queues round-robin.
- When a flow sends too much traffic, its queue fills and packets from that flow are dropped.

Fair Queuing
- A source can't increase its share of network capacity at the expense of other flows.
- FQ does not provide feedback to sources; there is no way for them to tell the router's state.
- Segregating traffic into separate queues keeps one sender from hogging all the bandwidth.
- FQ should be used together with an end-to-end algorithm that does congestion control.

Fair Queuing
- Each queue gets a turn, as long as packets are queued.

Notes on Fair Queuing
- The link is never left idle if packets are in a queue; such a scheme is called work-conserving.
- If I am the only one sending, I can use the entire bandwidth. But when other flows appear, they get a share of the bandwidth and my capacity will decrease.
- If the link is fully loaded and n flows are sending data, I can't use more than 1/n of the link bandwidth.
- If I try to send more, my packets get increasingly large timestamps, sit in the queue longer awaiting transmission, and my queue will eventually overflow (a sketch of the timestamp mechanism follows below).
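A deliberately simplified sketch of the timestamp idea: each packet is stamped with a finish time F = max(virtual time, flow's last finish) + length, and the packet with the smallest timestamp is sent next. Real fair queuing computes virtual time from the bit-by-bit round-robin rounds; advancing it to the finish time of the departing packet, as done here, is a simplification for illustration.

```python
import heapq

class FairQueuing:
    """Simplified fair-queuing scheduler based on per-packet finish timestamps."""

    def __init__(self):
        self.last_finish = {}     # flow id -> finish time of its newest packet
        self.virtual_time = 0.0   # advanced as packets depart (simplified)
        self.heap = []            # (finish_time, seq, flow, length)
        self.seq = 0              # tie-breaker for equal timestamps

    def enqueue(self, flow, length):
        start = max(self.virtual_time, self.last_finish.get(flow, 0.0))
        finish = start + length                  # later packets of a busy flow
        self.last_finish[flow] = finish          # get ever larger timestamps
        heapq.heappush(self.heap, (finish, self.seq, flow, length))
        self.seq += 1

    def dequeue(self):
        if not self.heap:
            return None
        finish, _, flow, length = heapq.heappop(self.heap)
        self.virtual_time = finish               # simplified virtual-time advance
        return flow, length

fq = FairQueuing()
for _ in range(3):
    fq.enqueue("greedy", 1000)   # a flow that tries to send more than its share
fq.enqueue("polite", 1000)
print([fq.dequeue()[0] for _ in range(4)])
# -> ['greedy', 'polite', 'greedy', 'greedy']: the polite flow is served early,
# because the greedy flow's extra packets carry increasingly large timestamps.
```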

More Queue Management
- The previous schemes were based on creative buffer management: manage buffers, but do not notify senders of congestion.
- The next schemes anticipate congestion and take steps to prevent it from happening.
- They are more aggressive in approach and more intelligent in their selection of packets to drop.

Random Early Detection (RED)
- A queuing discipline with proactive packet discard: it picks packets and drops them, using an algorithm.
- It anticipates congestion and takes early avoidance action.
- It does not penalize bursty traffic.
- It controls the average queue length (buffer size) within bounds, and therefore controls the average queuing delay.

RED Buffer Management
A discard probability is calculated for each packet arrival at the output queue, based on:
- the current weighted average queue size, and
- the number of packets sent since the previous packet discard.

Generalized RED Algorithm
Calculate the average queue size, avg
if avg < THmin: queue the packet
else if THmin <= avg < THmax: calculate probability Pa; with probability Pa, discard the packet; else, with probability 1 - Pa, queue the packet
else if avg >= THmax: discard the packet
(A sketch of this decision in code follows below.)
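A hedged Python sketch of the decision above. The thresholds, the weight used for the moving average, and the exact form of Pa (the common Pa = Pb / (1 - count * Pb) spreading rule) are assumptions chosen for illustration, not values given in the slides.

```python
import random

TH_MIN, TH_MAX = 5.0, 15.0   # queue-length thresholds in packets (assumed)
WEIGHT = 0.002               # weight for the exponentially weighted average
MAX_P = 0.1                  # drop probability as avg approaches TH_MAX

avg = 0.0                    # weighted average queue size
count = 0                    # packets queued since the previous discard

def red_arrival(queue_len):
    """Return True if the arriving packet should be discarded."""
    global avg, count
    avg = (1 - WEIGHT) * avg + WEIGHT * queue_len   # weighted running average

    if avg < TH_MIN:
        count = 0
        return False                     # low congestion: queue the packet
    if avg >= TH_MAX:
        count = 0
        return True                      # heavy congestion: discard the packet
    # TH_MIN <= avg < TH_MAX: discard with probability Pa, which grows with
    # avg and with the number of packets sent since the previous discard.
    count += 1
    p_b = MAX_P * (avg - TH_MIN) / (TH_MAX - TH_MIN)
    denom = 1 - count * p_b
    p_a = 1.0 if denom <= 0 else min(1.0, p_b / denom)
    if random.random() < p_a:
        count = 0
        return True
    return False
```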

RED Algorithm
- Maintains a running average of the queue length.
- If avgq < minth, do nothing: low congestion, send packets through.
- If avgq > maxth, drop the packet: protection from misbehaving sources.
- RED does not penalize all packets in a flow, giving a better distribution of dropped packets.

Reading: Chapter 5.3
End