The Optical Internet WDM/TDM fiber links Backbone ISP POP, CO Backbone


1 The Optical Internet WDM/TDM fiber links Backbone ISP POP, CO Backbone
[Figure: backbone-to-access diagram. Backbone ISPs (POP, CO) interconnect over WDM/TDM fiber links; MANs and LANs attach to the backbone through access points (APs) via DS3, OC-3c, WDM OC-12, SONET OC-12, or Gigabit Ethernet; end users connect via FTTH, FTTC, PON, or others.]

2 Project Goals Determine where and how optics can be used best in the Optical Internet; Work across the traditional boundaries between networks, systems, sub-systems, device and materials research; Facilitate and foster interactions between network, system, device & material scientists.

3 Optical Internet: The Next Generation
Team: Leonid Kazovsky, Kapil Shrikhande, Ian White, Matt Rogge, Joseph Kim (ST), Akira Nakamura (SONY), Hiroshi Okagawa (Fujitsu), Yoshiaki Ikoma (Hitachi), William J. Dally, Amit K. Gupta, Arjun Singh, Brian Towles, Nick McKeown, Shang-tse Chuang, Isaac Keslassy, Sundar Iyer, Greg Watson, [Kyoungsik Yu]

4 Research Overview
(A) Optical Internet Systems:
Metropolitan Area Network design (Prof. Leonid G. Kazovsky)
Scalable switching fabrics (Prof. William J. Dally)
100 Tb/s router design (Prof. Nick McKeown)
(B) Optical Internet Devices:
All-optical MEMS switches and devices (Prof. Olav Solgaard)
2D arrays of optical modulators (Prof. David Miller)
Low-cost long-wavelength VCSELs (Prof. James S. Harris)

5 The HORNET architecture
[Figure: HORNET node. Each node places a tunable transmitter (Tx) and a fixed-wavelength receiver (Rx) on the ring, with electronic switching, scheduling, and queuing sub-systems, a MAC, and LAN-side equipment.]

6 The HORNET Architecture
Single (tunable) transceiver per node, no central switch, full-mesh connectivity
Node throughput potentially equals the bit-rate of the transceiver (minus overhead)
Big cost savings
Data-optimized packet transport
A “new” architecture with many open issues

7 Project Status: Highlights
Experimentally demonstrated:
Tunable transmitter (2.5 Gb/s)
Control-channel MAC
Burst-mode receiver (2.5 Gb/s)
Survivable architecture
Theoretical studies: optical amplification, fairness protocols
Latest experimental results: circuits over HORNET; a complete testbed demonstration with MAC, reservation protocols, and continuous-mode bit-error-rate tests at 1.25 Gb/s
Publications: 13 conference papers (3 invited), 7 journal papers, and several trade-journal articles

8 Node Implementation
[Figure: node implementation block diagram. A data and control processing sub-system (FPGAs and a PLD) sits between the MAN-side interfaces (CW/CCW control-channel O/E + SERDES at 155/622 Mb/s data-rates, and CW/CCW data SERDES & I/O at 1.25/2.5 Gb/s data-rates), the fast tunable-transmitter sub-system driven by DACs, and the LAN-side interfaces (SONET, Ethernet).]

9 Control Channel MAC
Time-slotted network:
Nodes align (data) packet transmission to slot boundaries.
The control channel signals the time slots.
Dispersion-shifted fiber keeps slots aligned on the control and data channels.
Reservations:
O/E/O conversion of the control channel at every node.
The control channel indicates (Reservation, Occupancy) for all data wavelengths (2 bits per λ).
Nodes read the control channel to determine openings on the data channels.
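The 2-bit-per-wavelength (Reservation, Occupancy) signalling can be sketched as follows. The field packing order and function names are assumptions for illustration, not the slide's actual encoding:

```python
def decode_control_word(word: int, num_wavelengths: int):
    """Return a list of (reserved, occupied) flags, one per data wavelength.

    Assumed layout: wavelength w occupies bits [2w+1, 2w], with the high bit
    meaning "reserved" and the low bit meaning "occupied".
    """
    fields = []
    for w in range(num_wavelengths):
        bits = (word >> (2 * w)) & 0b11
        fields.append((bool(bits & 0b10), bool(bits & 0b01)))
    return fields

def open_wavelengths(word: int, num_wavelengths: int):
    """Wavelengths a node may transmit on: neither reserved nor occupied."""
    return [w for w, (r, o) in enumerate(decode_control_word(word, num_wavelengths))
            if not r and not o]

# λ0 free (00), λ1 occupied (01), λ2 reserved (10) -> word 0b10_01_00
print(open_wavelengths(0b100100, 3))  # [0]
```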

10 Converged Network: Reservation Protocol
About 2000 slots live in a 100 km ring (one frame), for a 2.5 Gb/s channel bit-rate and 64-byte slots.
Reserving X slots per frame yields a fixed bit-rate circuit.
A handshake between source and destination is followed by the source reserving the required slots.
Circuit reservation completes in <= 3 RTTs; teardown in <= 2 RTTs.
Best-effort data is transmitted in unreserved slots.
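A back-of-the-envelope check of the frame sizing above. The fiber propagation speed is an assumption not stated on the slide; with it, the count comes out near 2400, consistent in magnitude with the slide's round figure of 2000 (which may account for framing overhead):

```python
RING_KM = 100
BIT_RATE = 2.5e9          # channel bit-rate, b/s
SLOT_BYTES = 64
FIBER_SPEED = 2e8         # m/s, roughly c / 1.5 (assumption)

slot_time = SLOT_BYTES * 8 / BIT_RATE          # seconds per 64-byte slot
ring_rtt = RING_KM * 1e3 / FIBER_SPEED         # one full trip around the ring
slots_in_flight = ring_rtt / slot_time         # slots "living" in the ring

print(f"slot time       : {slot_time * 1e9:.1f} ns")   # 204.8 ns
print(f"ring round trip : {ring_rtt * 1e6:.0f} us")    # 500 us
print(f"slots in flight : {slots_in_flight:.0f}")      # ~2441
```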

11 Research Overview
(A) Optical Internet Systems:
Metropolitan Area Network design (Prof. Leonid G. Kazovsky)
Scalable switching fabrics (Prof. William J. Dally)
100 Tb/s router design (Prof. Nick McKeown)
(B) Optical Internet Devices:
All-optical MEMS switches and devices (Prof. Olav Solgaard)
Quasi-transparent optical switching devices (Prof. David Miller)
Long-wavelength VCSELs for LANs/MANs (Prof. James S. Harris)

12 Scheduling Optical Switches
Emulation architecture: a switch with non-zero switching time can emulate a zero-overhead switch plus a delay.

Example for a 128x128 switch with 200-cell-time switching overhead:

Algorithm          # Configs   Speedup   Delay (cell times)
Existing "Exact"   16,310      1         3,226,000
MIN                128         44        25,600
DOUBLE             256         2         51,200

New scheduling algorithms trade a small amount of bandwidth for greatly reduced batch size and switch delay.
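As a quick sanity check on the figures above, for the MIN and DOUBLE batch algorithms the listed delay equals the number of configurations times the 200-cell-time switching overhead (the "Exact" row does not factor this simply):

```python
OVERHEAD = 200  # cell times per reconfiguration, from the slide's example

for name, configs, expected in [("MIN", 128, 25_600), ("DOUBLE", 256, 51_200)]:
    delay = configs * OVERHEAD
    assert delay == expected
    print(f"{name}: {configs} configs x {OVERHEAD} cell times = {delay}")
```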

13 Worst-case Oblivious Routing
Throughput-sensitive applications: emerging applications of interconnection networks, such as packet routing and I/O interconnect, are throughput-sensitive, and designers want to accurately characterize the worst-case throughput of these networks.
Finding the worst case: for oblivious routing, the worst case is found by “building” a bad traffic pattern using a bipartite graph, turning an intractable problem into a polynomial-time algorithm.
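The slide's bipartite-graph construction is not reproduced here, but the underlying idea (worst-case throughput is set by the traffic permutation that maximizes channel load) can be illustrated by brute force on a toy ring with minimal oblivious routing. All parameters are illustrative:

```python
from itertools import permutations

N = 6  # ring size, kept tiny so brute force over all permutations is feasible

def links_used(src, dst):
    """Directed links traversed by minimal routing (ties broken clockwise)."""
    cw = (dst - src) % N   # hops clockwise
    ccw = (src - dst) % N  # hops counter-clockwise
    if cw <= ccw:
        return [("cw", (src + k) % N) for k in range(cw)]
    return [("ccw", (src - k) % N) for k in range(ccw)]

def max_load(perm):
    """Maximum number of flows crossing any single directed link."""
    load = {}
    for s, d in enumerate(perm):
        for link in links_used(s, d):
            load[link] = load.get(link, 0) + 1
    return max(load.values(), default=0)

# exhaustively find the worst-case permutation traffic pattern
worst = max(permutations(range(N)), key=max_load)
print("worst-case max channel load:", max_load(worst))  # 3 for N = 6
```

(For real networks, exhaustive search is hopeless, which is why the polynomial-time bipartite-graph method matters.)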

14 Design Space for Tori
(Figure omitted: design space trading improved load balance, power, and cost against latency, with the region of feasible algorithms shown in gray.) Two new algorithms were developed that achieve optimal load balance without sacrificing latency.

15 RLB and GOAL
Oblivious global load balance:
Every source (S)–destination (D) pair has a short and a long direction in each dimension. Obliviously pick a direction in each dimension, favoring short over long; this defines a quadrant Q for routing to D. (Figure omitted: S and D in a torus, with per-dimension distances x, k-x and y, k-y.)
Local load balance within Q:
For RLB, route first to an intermediate node in Q, then to D, randomizing the order of dimensions traversed.
For GOAL, at each hop, pick the next dimension to traverse adaptively, based on queue length.
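A minimal sketch of the oblivious direction choice, assuming the short direction is picked with probability proportional to the opposite direction's length (one natural reading of "favoring short over long"; the exact RLB weighting may differ):

```python
import random

def pick_quadrant(src, dst, k):
    """Per-dimension directions (+1 or -1) defining the routing quadrant Q."""
    dirs = []
    for s, d in zip(src, dst):
        plus_hops = (d - s) % k     # distance travelling in the + direction
        minus_hops = k - plus_hops  # distance travelling in the - direction
        if plus_hops == 0:
            dirs.append(+1)         # already aligned in this dimension
            continue
        # pick + with probability proportional to the -direction distance,
        # so the shorter direction is chosen more often
        dirs.append(+1 if random.random() * k < minus_hops else -1)
    return dirs

# e.g. on a radix-8 torus, a destination 1 hop away in the + direction
# should get +1 with probability 7/8 under this weighting
print(pick_quadrant((1, 2), (4, 1), 8))
```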

16 Performance comparison
Throughput was compared over 1000 random traffic permutations for both oblivious and adaptive algorithms.
Among oblivious algorithms, RLB has the best average throughput; it cannot match VAL's optimal worst-case performance, but it gives much lower latency than VAL.
GOAL is the only algorithm that matches VAL in the worst case while also performing best in the average case.

17 Latency and Cost
Three topologies considered:
Torus: higher cost and latency for most configurations.
Star: slightly lower (3%) cost than Clos for large sizes, primarily due to distributed switching.
Folded Clos: lowest cost and latency over a wide range of network sizes, due to its smallest diameter.

18 Other Design Parameters
Distributed vs. centralised switching:
Distributed switching (torus and star/Cayley networks) can reduce expensive optical links; only 1 is required per node.
Centralised switching (Clos and hierarchical networks) requires 2 long links per node, to and from the central switch cabinet, but offers easier packaging and network manageability.
Example: a channel-sliced folded Clos network built with 256x256 2-bit routers.

19 Research Overview
(A) Optical Internet Systems:
Metropolitan Area Network design (Prof. Leonid G. Kazovsky)
Scalable switching fabrics (Prof. William J. Dally)
100 Tb/s router design (Prof. Nick McKeown)
(B) Optical Internet Devices:
All-optical MEMS switches and devices (Prof. Olav Solgaard)
Quasi-transparent optical switching devices (Prof. David Miller)
Long-wavelength VCSELs for LANs/MANs (Prof. James S. Harris)

20 Motivating Design: 100Tb/s Optical Router
[Figure: 100 Tb/s router. 625 electronic linecards (each terminating 160 Gb/s of line capacity, e.g. 4 x 40 Gb/s) perform line termination, IP packet processing, and packet buffering; they exchange request/grant messages with a central arbiter and connect through the switch fabric. 100 Tb/s = 625 x 160 Gb/s.]

21 Research Problems
Linecard: memory bottleneck in address lookup and packet buffering.
Architecture: computational complexity of arbitration.
Switch fabric:
Optics: fabric scalability and speed; optical modulators.
Electronics: low-power optical links; optical switch control; clock recovery for intra-system links.

22 The Arbitration Problem
A packet switch fabric is reconfigured for every packet transfer; at 160 Gb/s, a new IP packet can arrive every 2 ns. The configuration must be picked to maximize throughput without wasting capacity, and known algorithms are too slow.
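The 2 ns figure follows from minimum-size packets at line rate, assuming 40-byte minimum IP packets:

```python
LINE_RATE = 160e9        # bits per second per linecard
MIN_PACKET_BYTES = 40    # minimum-size IP packet (assumption)

arrival_time = MIN_PACKET_BYTES * 8 / LINE_RATE
print(f"a new packet can arrive every {arrival_time * 1e9:.0f} ns")  # 2 ns
```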

23 Load-Balanced Switch
[Figure: two-stage switch with N external inputs, N internal inputs, and N external outputs, connected by a load-balancing cyclic shift (first stage) and a switching cyclic shift (second stage).]
C.S. Chang's idea for making non-uniform traffic uniform is to load-balance the arriving traffic among all the linecards: each arriving cell is sent to a middle-stage linecard according to a cyclic-shift pattern that advances for every cell. Intuitively, the traffic is thus distributed uniformly among all linecards, so the arrival traffic at every middle-stage linecard has about the same shape. This led Chang to prove that this simple two-stage switch architecture has 100% throughput for a broad range of traffic types (C.S. Chang et al., 2001).

24 Implement as a Passive Optical Mesh
[Figure: implementation options, shown for 3 linecards: a full-rate (R) cyclic shift versus passive meshes carrying R/N or 2R/N per link.]
There is a range of choices: (1) a complete cyclic shift; (2) a complete passive mesh; (3) something in between, part cyclic shift and part passive mesh, such as a WGR (Waveguide Grating Router). No more arbitrations, no more reconfigurations!

25 AWGR (Arrayed Waveguide Grating Router): A Passive Optical Component
[Figure: N x N WGR with linecards 1..N on each side; wavelengths λ1..λN from each input fan out across the outputs.]
Wavelength i on input port j goes to output port (i + j - 1) mod N, so the device can shuffle information from different inputs entirely passively.
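The routing rule can be written down directly. This sketch uses 0-indexed ports and wavelengths, where the slide's (i + j - 1) mod N for 1-indexed ports becomes (i + j) mod N:

```python
def awgr_output(i: int, j: int, n: int) -> int:
    """Output port for wavelength i launched into input port j of an n x n AWGR."""
    return (i + j) % n

# For a fixed wavelength, distinct inputs land on distinct outputs (and for a
# fixed input, distinct wavelengths land on distinct outputs), which is what
# lets one passive device carry all n^2 linecard-to-linecard channels at once.
N = 4
for i in range(N):
    assert {awgr_output(i, j, N) for j in range(N)} == set(range(N))
for j in range(N):
    assert {awgr_output(i, j, N) for i in range(N)} == set(range(N))
print("AWGR permutation property holds for N =", N)
```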

26 WGR-Based Solution
[Figure: each linecard drives a fixed laser/modulator into the N x N WGR and takes a detector from it, giving N^2 channels; a linecard uses multiple wavelengths, which spread evenly across the outputs.]
Problems: we want N > 600, and what happens if some linecards are missing?

27 From Linecard Mesh to Group Mesh

28 From Linecard Mesh to Group Mesh

29 Hybrid Optical Electrical Switch Fabric
[Figure: hybrid optical/electrical switch fabric. Linecards 1..L are organized into G groups. In each source group, an L x M electronic crossbar feeds M fixed lasers; their outputs pass through M static G x G MEMS switches to optical receivers and an M x L electronic crossbar in the destination group.]

