
1 Buffering Strategies in ATM Switches
Carey Williamson, Department of Computer Science, University of Calgary

2 Introduction
- Up to now, we have assumed bufferless switches and bufferless switch fabrics
- When contention occurs, cells are dropped
- Not practical to do this!

3 Alternatives
- Buffering
  – a cell that cannot be transmitted on its desired path or port right now can wait in a buffer to try again later
  – several possibilities: input buffering, output buffering, crosspoint (internal) buffering, or a combination thereof

4 Alternatives (Cont’d)
- Recirculation
  – a cell that cannot be transmitted on its desired path or port right now is sent back to the input ports over a recirculation line, to try again in the next time slot (with higher priority)
  – hopefully it will get through next time
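A minimal slot-by-slot sketch of the recirculation idea (illustrative Python; the function name, parameters, and traffic model are invented for this example, not taken from the slides): cells that lose output-port contention are parked on a bounded recirculation queue and compete again, with priority, in the next slot.

import random
from collections import deque

def recirculation_slot_sim(num_ports=8, recirc_capacity=16, load=0.9,
                           slots=100_000, seed=1):
    """Toy model: each slot, recirculated cells (priority) and fresh arrivals
    compete for output ports; losers are recirculated, overflow is dropped."""
    rng = random.Random(seed)
    recirc = deque()                      # cells waiting on recirculation lines
    delivered = dropped = offered = 0
    for _ in range(slots):
        # Fresh arrivals: each input generates a cell with probability `load`,
        # destined to a uniformly random output port.
        fresh = [rng.randrange(num_ports) for _ in range(num_ports)
                 if rng.random() < load]
        offered += len(fresh)
        winners = set()                   # output ports already claimed this slot
        losers = []
        # Recirculated cells go first (higher priority), then fresh cells.
        for dest in list(recirc) + fresh:
            if dest not in winners:
                winners.add(dest)
                delivered += 1
            else:
                losers.append(dest)
        # Losers re-enter the recirculation queue, up to its capacity.
        recirc = deque(losers[:recirc_capacity])
        dropped += max(0, len(losers) - recirc_capacity)
    return delivered / offered if offered else 0.0

The returned value is the fraction of offered cells that eventually get through; shrinking recirc_capacity shows how overflow on the recirculation lines turns into cell loss.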

5 Alternatives (Cont’d)
- Deflection routing
  – a cell that cannot be transmitted on its desired path or port right now is sent out “another” (available) port instead, in the hope that it will find an alternate path to its destination
  – example: tandem banyan
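The per-element decision for deflection routing can be sketched as follows (hypothetical Python helper for a 2x2 switching element; field names such as 'want' and 'deflected' are made up for illustration): when both cells want the same output of the element, one wins and the other is sent out the wrong output, to be corrected later, for example in the next banyan of a tandem-banyan fabric.

import random

def route_2x2_element(cell_a, cell_b, rng=random):
    """Route two cells through a 2x2 element with deflection.
    Each cell is a dict with 'want' in {0, 1} (the output it needs at this
    stage), or None if that input is empty.  Returns the cells placed on
    output 0 and output 1; deflected cells are flagged so a later stage
    knows they are off their preferred path."""
    if cell_a is None or cell_b is None or cell_a['want'] != cell_b['want']:
        # No conflict: every present cell gets the output it asked for.
        out = [None, None]
        for c in (cell_a, cell_b):
            if c is not None:
                out[c['want']] = c
        return out[0], out[1]
    # Conflict: both want the same output. Pick a random winner; the loser
    # is deflected onto the other output and marked as misrouted.
    winner, loser = (cell_a, cell_b) if rng.random() < 0.5 else (cell_b, cell_a)
    loser = dict(loser, deflected=True)
    return (winner, loser) if winner['want'] == 0 else (loser, winner)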

6 Alternatives (Cont’d)
- Redundant paths
  – design a switch fabric with multiple possible paths from each input port to each output port (e.g., Benes)
  – greater freedom for path selection
  – flexible, adaptive, less contention
  – works well with deflection routing

7 Buffering Issues
- There are three main factors that affect the performance of switch buffering strategies:
  – buffer location
  – buffer size
  – buffer management strategy

8 Buffer Location
- Several choices:
  – Input buffering
  – Output buffering
  – Internal buffering
  – Combination of the above

9 Input Buffering
- In the event of output port contention (which can be detected ahead of time at the input ports), let one of the contending cells (chosen at random) go ahead, and hold the other(s) at the input ports
- The others try to go through the switch fabric the next chance they get

10 Input Buffering (Cont’d)
- Can be a poor choice!
- Input buffering suffers from the head-of-line (HOL) blocking problem
- HOL blocking can significantly degrade the performance of the switch

11 HOL Blocking Problem
- The cell at the head of an input queue cannot go because of output port contention
- Because of the FCFS nature of the queue, all cells behind the head cell are also blocked from going
- This happens even if the output ports those cells want are idle!

12–28 HOL Blocking Example (step-by-step animation)
Slides 12 through 28 walk a 2x2 switch through successive time slots: cells arrive at the two input queues labelled with the output port they want (0 or 1), and depart when they win that output port. When both head-of-line cells want the same output, only one departure occurs in that slot. Slide 24 highlights the key moment: a cell destined for an idle output port is stuck behind a blocked head-of-line cell, i.e., HOL blocking.

29 HOL Blocking: Summary
- Cells can end up waiting at input ports even if their desired output port is idle
- How often can this happen?
- For a 100% loaded 2x2 switch, HOL blocking happens 25% of the time
- Effective throughput: 0.75

30 HOL Blocking (Cont’d)
- The HOL blocking problem does NOT go away on larger switch sizes (more ports)
- In fact, it gets even worse!
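A rough simulation sketch of this effect (illustrative Python, under the usual heavy-load assumptions: every input queue is always backlogged and destinations are uniform over the outputs). Each output grants one head-of-line request per slot, with ties broken at random, and the measured saturation throughput falls as N grows.

import random
from collections import deque

def hol_saturation_throughput(num_ports, slots=50_000, seed=0):
    """Estimate saturation throughput of an input-buffered NxN FIFO switch.
    Every input queue is kept permanently backlogged (heavy load), each cell's
    destination is uniform over the N outputs, and only head-of-line cells
    compete; ties for an output are broken at random."""
    rng = random.Random(seed)
    queues = [deque(rng.randrange(num_ports) for _ in range(4))
              for _ in range(num_ports)]
    sent = 0
    for _ in range(slots):
        # Which inputs want which output (head-of-line cells only)?
        requests = {}
        for i, q in enumerate(queues):
            requests.setdefault(q[0], []).append(i)
        # Each contended output picks one winning input at random.
        for out, contenders in requests.items():
            winner = rng.choice(contenders)
            queues[winner].popleft()
            queues[winner].append(rng.randrange(num_ports))  # keep it backlogged
            sent += 1
    return sent / (slots * num_ports)

if __name__ == "__main__":
    for n in (2, 4, 8, 16, 32):
        print(n, round(hol_saturation_throughput(n), 3))

If the sketch is faithful, the printed values should fall from about 0.75 at N = 2 toward roughly 0.59 for large N, consistent with the table and plot on the next two slides.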

31–32 Maximum Throughput for Input Buffering
Slide 31 tabulates, and slide 32 plots, the maximum achievable throughput of an input-buffered switch versus the number of ports N: 0.75 for N = 2, decreasing toward a limit of about 0.586 as N grows.
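For reference, the limiting value in the table and plot above comes from the classical analysis of input queueing with FIFO buffers under uniform traffic (Karol, Hluchyj, and Morgan); the saturation throughput satisfies

$\lim_{N \to \infty} \rho_{\max} = 2 - \sqrt{2} \approx 0.586$

and the N = 2 value of 0.75 quoted on slide 29 lies on the same curve.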

33 Solutions for HOL Blocking
- Non-FIFO service discipline
- Lookahead “windowing” schemes
  – e.g., if the front cell is blocked, then try the next cell, and so on
  – maximum lookahead W (e.g., W = 8)
  – called “HOL bypass”
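A sketch of how the windowed, HOL-bypass scheduler differs from plain FIFO service (illustrative Python; function and parameter names are invented): each output still accepts at most one cell per slot, but an input whose head-of-line cell is blocked may offer the cell behind it, up to W positions deep into its queue.

def windowed_match(queues, window, rng):
    """One slot of windowed (HOL-bypass) scheduling.
    queues: list of deques of destination ports, one deque per input.
    rng: a random.Random instance, used to break ties across inputs.
    Returns {input_index: queue_position} of cells selected to depart;
    each output accepts at most one cell, each input sends at most one."""
    taken_outputs = set()
    chosen = {}
    for depth in range(window):              # pass 1 = pure HOL, then bypass
        inputs = list(range(len(queues)))
        rng.shuffle(inputs)                  # fair tie-breaking across inputs
        for i in inputs:
            if i in chosen or depth >= len(queues[i]):
                continue
            dest = queues[i][depth]
            if dest not in taken_outputs:
                taken_outputs.add(dest)
                chosen[i] = depth
    return chosen

The caller then removes each selected cell, which may no longer be at the head of its queue; with window=1 this degenerates to the plain HOL scheme of the earlier sketch.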

34 Solutions for HOL Blocking (Cont’d)
- Don’t use input buffering!
- Use output buffering instead

35 Output Buffering
- In the event of output port contention, send all the cells through the switch fabric, letting one of the contending cells (chosen at random) use the output port, but holding the other(s) in the buffers at the output ports

36 Output Buffering (Cont’d)
- Main difference: cells have already gone through the switch fabric
- As soon as the port is idle, the cells go out (i.e., work conserving)
- Nothing else can get in their way
- Achieves the maximum possible throughput
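For contrast with the input-buffered sketch earlier, one slot of an output-buffered switch might look like this (illustrative Python; the fabric speedup it implies is noted in the comments): every arriving cell crosses the fabric immediately and joins the queue at its output port, and each non-empty output sends one cell per slot, so an output port is never idle while it has work, which is the work-conserving property mentioned above.

import random
from collections import deque

def output_buffered_slot(output_queues, arrivals):
    """One time slot of an output-buffered switch.
    arrivals: list of destination ports, one entry per cell arriving this slot.
    All arrivals are delivered across the fabric at once (this is what makes
    output buffering expensive: fabric and memories must run up to N times
    faster than the line rate), then each busy output transmits one cell."""
    for dest in arrivals:
        output_queues[dest].append(dest)      # cell queued at its output port
    sent = 0
    for q in output_queues:
        if q:
            q.popleft()                       # output never idles with work queued
            sent += 1
    return sent

# Tiny usage example: a 4-port switch under uniform random traffic.
if __name__ == "__main__":
    rng = random.Random(0)
    queues = [deque() for _ in range(4)]
    total = 0
    for _ in range(10_000):
        arrivals = [rng.randrange(4) for _ in range(4) if rng.random() < 0.9]
        total += output_buffered_slot(queues, arrivals)
    print("cells delivered:", total)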

37 Buffer Sizing
- Need a buffer size large enough to keep cell loss below an acceptable threshold (i.e., a target cell loss ratio, CLR)
- The purpose of buffers is to absorb short-term statistical fluctuations in queue length

38 Buffer Sizing (Cont’d)
- Obvious fact #1: the larger the buffer size, the lower the cell loss
- Obvious fact #2: the larger the buffer size, the larger the maximum possible queuing delay (and the cost of the switch!)
- Tradeoff: cell loss versus cell delay (and cell delay jitter) (and cost)

39 Buffer Sizing (Cont’d)
- Reality: finite buffers
  – e.g., hundreds or thousands of cells per port
- Buffers need to be large enough to handle the bursty characteristics of integrated ATM traffic
- General rule of thumb: buffer size = 10 x maximum burst size
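A small back-of-the-envelope helper for the rule of thumb above (illustrative numbers only: a 500-cell maximum burst and an OC-3 port rate of 155.52 Mbps are assumed for the example). It also shows the other side of the tradeoff from slide 38: the buffer that protects against loss also bounds the worst-case queuing delay.

def buffer_sizing_example(max_burst_cells=500, port_rate_bps=155.52e6):
    """Apply the '10 x max burst size' rule of thumb and report the
    worst-case queuing delay that buffer implies on one output port."""
    cell_bits = 53 * 8                        # one ATM cell = 53 bytes
    buffer_cells = 10 * max_burst_cells       # rule of thumb from the slide
    worst_case_delay_s = buffer_cells * cell_bits / port_rate_bps
    return buffer_cells, worst_case_delay_s

cells, delay = buffer_sizing_example()
print(f"buffer = {cells} cells, worst-case delay = {delay * 1e3:.1f} ms")

With these example numbers the rule gives a 5000-cell buffer and a worst-case delay of roughly 13.6 ms on that port.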

40 Buffer Management
- In a shared memory switch, for example, there is a choice of using dedicated buffers for each port (called partitioned buffering) or using a common pool of buffers shared by all ports (called shared buffering)

41 Partitioned Buffers
(Diagram: the shared memory is statically divided into a dedicated buffer region for each output port.)

42 Shared Buffers
(Diagram: all output ports draw from one common pool of buffers in the shared memory.)

43 Buffer Mgmt (Cont’d)
- Shared buffering offers MUCH better cell loss performance
- Partitioned buffering is perhaps easier to design and build
- Shared buffering is more complicated to design, build, and control
- Shared is superior (for uniform traffic)
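A rough simulation sketch of why sharing wins under uniform traffic (illustrative Python; the arrival model, load, and buffer sizes are invented for the example): both policies get exactly the same total memory, but the shared pool lets a momentarily busy port borrow space that a quiet port is not using, so fewer cells are dropped.

import random

def buffer_policy_loss(shared, num_ports=8, per_port=32, load=0.95,
                       slots=200_000, seed=2):
    """Compare cell loss for partitioned vs. shared buffering.
    Total memory is num_ports * per_port cells in both cases; each slot every
    input offers a cell with probability `load` (destination chosen uniformly)
    and every non-empty output queue transmits one cell."""
    rng = random.Random(seed)
    queues = [0] * num_ports                  # occupancy of each output queue
    total_space = num_ports * per_port
    arrived = dropped = 0
    for _ in range(slots):
        for _ in range(num_ports):
            if rng.random() < load:
                arrived += 1
                dest = rng.randrange(num_ports)
                room = (sum(queues) < total_space) if shared \
                       else (queues[dest] < per_port)
                if room:
                    queues[dest] += 1
                else:
                    dropped += 1
        for i in range(num_ports):            # one departure per busy output
            if queues[i]:
                queues[i] -= 1
    return dropped / arrived

if __name__ == "__main__":
    print("partitioned loss:", buffer_policy_loss(shared=False))
    print("shared loss:     ", buffer_policy_loss(shared=True))

Running both cases with the same seed should show a noticeably lower loss ratio for the shared policy; the price, as the slide notes, is the extra control logic needed to manage one common pool.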

44 Summary
- There is a wide range of choices to make for buffering in ATM switches
- Main issues:
  – buffer location
  – buffer size
  – buffer management strategy
- These choices have a major impact on performance