Load-balanced AAPN

Sareh Taebi, Trevor J. Hall
Photonic Network Technology Laboratory, Centre for Research in Photonics
School of Information Technology and Engineering, University of Ottawa
800 King Edward Avenue, Ottawa ON K1N 6N5. { staebi, thall

Merits of a Load-Balanced AAPN

1. Packets within a flow are transported to their correct destinations in sequence. This is due to the 1:1 logical connection of the buffers and the use of load-balancing pollers.
2. Since a flow of traffic is spread across all wavelengths, 100% throughput is guaranteed even for very high loads and bursty traffic, with a modest buffer requirement.
3. One wavelength-independent crossbar suffices in the core node to switch the wavelength multiplex as a whole, because the load is nearly equally balanced among the wavelengths.
4. Complexity is greatly reduced, since only one schedule is needed for all the wavelengths. This schedule may be calculated once per frame, rather than once per slot, reducing the scheduling complexity even further.
5. The load-balanced AAPN is survivable: if a layer fails, its packets are distributed among the remaining layers. In this case the pollers skip over the queues labeled with the failed layer and serve only the remaining queues.

What is load-balancing?

Load-balancing in general means distributing processing and communication evenly across a device so that no single section is overwhelmed. Load balancing is especially important for networks where it is difficult to predict the number of requests that will be issued to a server. Load-balancing can therefore also be applied to network switching devices to avoid congestion in any section of a switch or network.

Architectural Derivation of a load-balanced switch

1. Take a centralised shared-memory switch with cross-point queues Q_ij for every input-output pair (i, j).
2. Split every queue further into a LAYERED cross-point queue with load-balancing pollers; the pollers move round-robin to serve the layered queues in turn. Also divide every queue into 'heads' and 'tails', labeled 1 and 2 respectively, so that Q^1_ijk is the head queue corresponding to input i, output j and layer k. (A sketch of such a poller is given after the Scheduling section below.)
3. Sector the shared-memory switch: place the queue tails in the input stage, organized as a set of Virtual Output Queues (VOQs) for each input, and place the queue heads in the output stage, organized as Virtual Input Queues (VIQs) for each output. The layered VOQs in a source node can then be connected to the associated layered VIQs in the destination nodes by a crossbar configuration (shown below).

[Figure: stacked crossbars (core node) connecting source nodes 0 ... N-1 to destination nodes.]

Resultant Architecture

In the context of AAPN, each layer is identified with one wavelength of a wavelength multiplex. The central crossbars then form the stacked crossbar switches within the core node of the AAPN network, and the inputs and outputs of the switch correspond to the source and destination edge nodes.

Scheduling

Ideally the central crossbars would be scheduled on a per-slot basis using queue-state information, but the signaling delay would mean that the queue state is out of date. A better option is to take a frame-based approach and change the configuration of the central nodes once per frame. If the load is perfectly balanced, then each crossbar has the same schedule, so one wavelength-independent crossbar can be used in the core node to switch the wavelength multiplex as a whole. This is called "Photonic Slot Routing" in the literature.
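The following is a minimal Python sketch of derivation step 2: a layered cross-point queue whose load-balancing poller spreads arriving packets round-robin over the layers, skipping any failed layer (as in merit 5). The names (LayeredVOQ, enqueue, mark_failed) are illustrative assumptions, not part of the AAPN design.

```python
from collections import deque


class LayeredVOQ:
    """Virtual output queue for one (input i, output j) pair, split into
    K per-layer (per-wavelength) tail queues Q^2_ijk served round-robin."""

    def __init__(self, num_layers):
        self.num_layers = num_layers
        self.queues = [deque() for _ in range(num_layers)]  # one tail queue per layer k
        self.next_layer = 0                                 # round-robin poller position
        self.failed = set()                                 # layers (wavelengths) marked as failed

    def enqueue(self, packet):
        """Spread arriving packets of a flow evenly over the layers,
        skipping layers marked as failed (survivability)."""
        for _ in range(self.num_layers):
            k = self.next_layer
            self.next_layer = (self.next_layer + 1) % self.num_layers
            if k not in self.failed:
                self.queues[k].append(packet)
                return k
        raise RuntimeError("all layers have failed")

    def mark_failed(self, layer):
        """Remove a failed layer; subsequent traffic is spread over the rest."""
        self.failed.add(layer)
```

Because successive packets of the same flow land on successive (surviving) layers, and the destination reads the layers back in the same order, the flow stays in sequence.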
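The frame-based scheduling can be illustrated with the sketch below. It assumes a simple greedy matching over frame-aggregated queue occupancies; the poster does not specify which matching algorithm is used, and the crossbar interface (set_permutation) is hypothetical. The point illustrated is merits 3 and 4: a single schedule, computed once per frame, applied to every wavelength layer.

```python
def frame_schedule(occupancy):
    """occupancy[i][j] = packets queued from source i to destination j,
    summed over all layers for the coming frame.
    Returns a (partial) permutation: match[i] = j, or None if i is idle."""
    n = len(occupancy)
    match = [None] * n
    used_outputs = set()
    # Greedy placeholder: pick the heaviest remaining (input, output) pairs first.
    pairs = sorted(((occupancy[i][j], i, j) for i in range(n) for j in range(n)),
                   reverse=True)
    for weight, i, j in pairs:
        if weight > 0 and match[i] is None and j not in used_outputs:
            match[i] = j
            used_outputs.add(j)
    return match


def configure_core(match, crossbars):
    """Apply the same permutation to every wavelength crossbar for the whole
    frame; with balanced load, one schedule serves the entire multiplex."""
    for xbar in crossbars:          # xbar.set_permutation is an assumed interface
        xbar.set_permutation(match)
```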
Load-balanced switch architecture

[Figure: load-balanced switch architecture.]

Note that the transmit buffers are VOQs (one for each destination edge node) and the receive buffers are VIQs (one for each source edge node). The co-ordination buffers in the source edge node are kept per wavelength (i.e. per layer), and the resequencing buffers in the destination edge node are VIQs per wavelength (layer). The flows to each destination edge node are spread evenly over the wavelengths by per-destination-edge-node pollers in the source edge node, and the flows are reconstructed by per-source pollers in the destination edge node. The pollers work independently, but serve packets in the same order to preserve synchronization (see the resequencing sketch at the end of this section).

Future Work

Currently investigating the possibility of a more sophisticated, flow-dependent slot aggregation algorithm between the two extremes of: (i) no load balancing and a per-layer scheduler, as performed by other bandwidth allocation algorithms; and (ii) load balancing with an all-layer scheduler, as presented here.

Packet loss in the switch will cause the pollers to run out of step, and packets will therefore arrive out of sequence. A simple and effective solution to this problem is being investigated.
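As a complement to the source-side poller sketched earlier, the hypothetical Resequencer below illustrates how a per-source poller in the destination edge node can read the per-wavelength resequencing buffers in the same round-robin order the source used, so packets leave in their original sequence (merit 1). All names are illustrative; it assumes both pollers start at layer 0 and stay in step.

```python
from collections import deque


class Resequencer:
    """Per-source resequencing buffers in a destination edge node:
    one VIQ per wavelength (layer), drained in the source poller's order."""

    def __init__(self, num_layers):
        self.buffers = [deque() for _ in range(num_layers)]  # VIQ per layer
        self.next_layer = 0                                  # mirrors the source poller

    def receive(self, packet, layer):
        """Store a packet that arrived on the given wavelength/layer."""
        self.buffers[layer].append(packet)

    def drain(self):
        """Release packets strictly in the order the source spread them.
        If the expected layer has not delivered yet, stop and wait.
        (A failed layer would have to be skipped here as well, mirroring
        the source poller; that case is omitted from this sketch.)"""
        out = []
        while self.buffers[self.next_layer]:
            out.append(self.buffers[self.next_layer].popleft())
            self.next_layer = (self.next_layer + 1) % len(self.buffers)
        return out
```

If the two pollers fall out of step, for example after a packet loss inside the switch, packets are released out of sequence, which is exactly the problem noted under Future Work.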