6.888 Lecture 9: Wireless/Optical Datacenters
Mohammad Alizadeh and Dinesh Bharadia
Spring 2016
Many thanks to George Porter (UCSD) and Vyas Sekar (CMU)
Datacenter Fabrics
Leaf/spine topology with 1000s of server ports
Scale-out designs (VL2, fat-tree)
Little to no oversubscription
Cost, power, complexity
https://code.facebook.com/posts/360346274145943/introducing-data-center-fabric-the-next-generation-facebook-data-center-network/
Multiple switching layers (Why?)
Building Block: Merchant Silicon Switching Chips
Facebook Wedge and 6-pack switch ASIC (image courtesy of Facebook)
Limited radix: 16 x 40 Gbps
High power: 17 W/port
https://code.facebook.com/posts/360346274145943/introducing-data-center-fabric-the-next-generation-facebook-data-center-network/
Long cables (fiber)
Scale-Out Packet-Switched Fabrics
Large number of switches, fibers, and optical transceivers
Power hungry
Hard to expand
Beyond Packet-Switched DC Fabrics
Optical circuit switching [Helios, c-Through, Mordia, REACToR, …]
60 GHz RF [Flyways, Mirror Mirror]
Steerable links: free-space optics [FireFly]
Fig. from presentation by Xia Zhou
Integrating Microsecond Circuit Switching into the Data Center
Slides based on presentation by George Porter (UCSD)
Key Idea: Hybrid Circuit/Packet Networks
Why build a hybrid switch?
Circuit vs. Packet Switching
Electrical packet switch: $500/port; 10 Gb/s fixed rate; 12 W/port; transceivers (OEO); buffering; per-packet switching; in-band control
Optical circuit switch: $500/port; rate free (10/40/100/400+ Gb/s); 240 mW/port; no transceivers; no buffering; duty-cycle overhead; out-of-band control
Observation: correlated traffic → circuits
Disadvantages of Circuits
Despite their advantages, circuits present a different service model:
– Point-to-point connectivity
– Must wait for a circuit to be assigned (affects throughput and latency)
– Circuit is "down" while being reconfigured (affects the network duty cycle and overall efficiency)
Stability Increases with Aggregation
Inter-thread → inter-process → inter-server → inter-rack → inter-pod → inter-datacenter
Where is the sweet spot? 1. Enough stability 2. Enough traffic
Mordia OCS Model
Directly connects inputs to outputs
Reconfiguration time: 10 µs
– "Night" time (Tn): no traffic during reconfiguration
– "Day" time (Td): circuits/mapping established
Duty cycle: Td / (Td + Tn)
[Figure: bipartite graph matching input ports S0 … Sk to output ports S0 … Sk]
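To make the duty-cycle arithmetic concrete, here is a tiny illustrative Python helper (not from the slides; the example numbers are made up):

```python
def duty_cycle(day_us: float, night_us: float = 10.0) -> float:
    """Fraction of time the OCS actually carries traffic.

    day_us   -- "day" time Td: circuits established, traffic flowing
    night_us -- "night" time Tn: reconfiguration, no traffic (~10 us in Mordia)
    """
    return day_us / (day_us + night_us)

# Example: 90 us days separated by 10 us reconfigurations -> duty cycle 0.9,
# so a 10 Gb/s circuit delivers roughly 9 Gb/s of effective bandwidth.
print(duty_cycle(90.0))
```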
Previous Approaches: Hotspot Scheduling
Loop over time: 1. Observe the traffic matrix  2. Compute circuit assignments  3. Reconfigure the switch
Assign circuits to elephant flows
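A minimal sketch of one such observe/compute/reconfigure cycle, assuming the demand matrix is measured each epoch and circuits are assigned to the heaviest rack pairs with a max-weight matching; observe_traffic and reconfigure_ocs are hypothetical placeholders, and the matching-based heuristic is illustrative rather than any specific system's algorithm:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def hotspot_schedule(demand: np.ndarray) -> list[tuple[int, int]]:
    """One 'compute' step: pick circuits for the current elephants.

    demand[s, d] is the observed traffic from rack s to rack d this epoch.
    A max-weight matching gives each input at most one output, which is the
    constraint a circuit switch imposes.
    """
    rows, cols = linear_sum_assignment(demand, maximize=True)
    return list(zip(rows, cols))   # circuits to program into the OCS

# Illustrative control loop (observe_traffic / reconfigure_ocs are hypothetical):
#   while True:
#       demand = observe_traffic()            # 1. Observe
#       circuits = hotspot_schedule(demand)   # 2. Compute
#       reconfigure_ocs(circuits)             # 3. Reconfigure ("night" time)
```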
Limitations of Hotspot Scheduling
[Figure: timelines contrasting hotspot scheduling with the goal; the traffic matrix TM(t) keeps changing while a single observe/compute/reconfigure cycle is still in flight.]
Traffic Matrix Scheduling
Birkhoff–von Neumann (BvN) decomposition
BvN Decomposition
Suppose T is a scaled doubly stochastic matrix
T has to be doubly stochastic
k' (the number of matchings) could be large (up to (n − 1)² + 1 in the worst case)
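A minimal sketch of BvN decomposition, assuming T has already been scaled to be doubly stochastic; it repeatedly peels off a permutation matrix supported on the positive entries of T. The use of SciPy's assignment solver is an implementation choice for the sketch, not something the slides prescribe.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def bvn_decompose(T: np.ndarray, tol: float = 1e-9):
    """Return weights a_i and permutation matrices P_i with T ~= sum_i a_i * P_i."""
    T = T.astype(float).copy()
    weights, perms = [], []
    while T.sum() > tol:
        # Find a perfect matching restricted to the positive entries of T:
        # zero entries are heavily penalized, so the assignment avoids them
        # whenever a positive-support matching exists (Birkhoff guarantees one
        # for a doubly stochastic matrix).
        score = np.where(T > tol, T, -1e12)
        rows, cols = linear_sum_assignment(score, maximize=True)
        alpha = T[rows, cols].min()       # largest weight we can peel off
        if alpha <= tol:                  # T was not (close to) doubly stochastic
            break
        P = np.zeros_like(T)
        P[rows, cols] = 1.0
        weights.append(alpha)
        perms.append(P)
        T -= alpha * P                    # at least one entry of T becomes zero
    return weights, perms
```

Each iteration zeroes at least one entry of T, so the loop produces at most n² matchings; this count is the k' referred to above.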
Scheduling the circuit switch configuration is modeled as bipartite graph matching (n = 5 nodes, traffic matrix T).
[Figure: animation of a schedule playing out over time; each matching serves part of T for some duration, and consecutive matchings are separated by a reconfiguration delay during which no traffic flows.]
Goal: maximize throughput within a time window W.
Problem Statement
Maximize the traffic served, choosing the permutation matrices (matchings), their durations, and the number of matchings, subject to everything fitting in the scheduling window (see the formulation sketched below).
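A hedged reconstruction of that optimization, using the night/day notation from earlier (T = traffic matrix, δ = reconfiguration delay, W = scheduling window, P_i = permutation matrices, α_i = durations, k = number of matchings); the slide's exact formulation may differ in details:

```latex
\begin{aligned}
\max_{k,\;\{P_i\},\;\{\alpha_i\}} \quad
  & \sum_{s,d} \min\!\Big( T_{sd},\; \sum_{i=1}^{k} \alpha_i\,(P_i)_{sd} \Big)
    && \text{(traffic served within the window)} \\
\text{s.t.} \quad
  & \sum_{i=1}^{k} \big(\alpha_i + \delta\big) \;\le\; W
    && \text{(durations plus reconfiguration delays fit in } W\text{)} \\
  & P_i \in \{0,1\}^{n \times n} \ \text{permutation matrices}, \qquad
    \alpha_i \ge 0, \qquad k \in \mathbb{N}.
\end{aligned}
```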
Eclipse: Greedy Algorithm (with provable guarantees)
Venkatakrishnan et al., "Costly Circuits, Submodular Schedules, Hybrid Switch Scheduling for Data Centers", to appear in SIGMETRICS 2016.
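The paper's Eclipse algorithm greedily adds one (matching, duration) pair at a time. The sketch below captures that flavor only: it scans a few candidate durations, scores each by demand served per unit of time spent (including the reconfiguration delay δ), and keeps the best. It is a simplified illustration under those assumptions, not the authors' exact algorithm or its guarantees.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def greedy_schedule(T, window, delta, candidate_durations):
    """Greedy hybrid-switch schedule: a list of (matching, duration) pairs."""
    remaining = T.astype(float).copy()
    schedule, time_left = [], window
    while time_left > delta:
        best = None
        for dur in candidate_durations:
            if dur + delta > time_left:
                continue
            # Each circuit can drain at most `dur` units of remaining demand.
            gain = np.minimum(remaining, dur)
            rows, cols = linear_sum_assignment(gain, maximize=True)
            served = gain[rows, cols].sum()
            rate = served / (dur + delta)     # benefit per unit of time used
            if best is None or rate > best[0]:
                best = (rate, served, dur, list(zip(rows, cols)))
        if best is None or best[1] <= 0:
            break                              # nothing useful left to serve
        _, served, dur, matching = best
        for s, d in matching:
            remaining[s, d] -= min(remaining[s, d], dur)
        schedule.append((matching, dur))
        time_left -= dur + delta
    return schedule
```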
Discussion
FireFly
Slides based on presentation by Vyas Sekar (CMU)
Why FSO Instead of RF?
RF (e.g., 60 GHz): wide beam; faster beam steering; high interference; limited number of active links; limited throughput
FSO (free-space optics): narrow beam; slower beam steering; zero interference; no limit on active links; high throughput
Today's FSO Devices
Cost: $15K per FSO; size: 3 ft³; power: 30 W; not steerable
Current devices: bulky, power-hungry, and expensive
Required: small, low-power, and inexpensive
Why Can Size, Cost, and Power Be Reduced?
Traditional use: outdoor, long haul
– High power
– Weatherproof
Data centers: indoor, short haul
Feasible roadmap via commodity fiber optics
– E.g., small form-factor transceivers (optical SFPs)
FSO Design Overview
[Figure: an SFP with fiber-optic cable feeds a collimating lens placed at the lens focal distance, turning the diverging beam into a parallel beam; a focusing lens couples the beam back into large-core fiber (cores > 125 microns are more robust).]
FSO Link Performance
FSO link is as robust as a wired link
Tolerates vibrations, etc.: 6 mm movement tolerance
Range of up to 24 m tested
Shortcomings of Current FSOs
Cost, size, power: addressed by the SFP-based FSO design
Not steerable: addressed via switchable mirrors or galvo mirrors
Steerability via Switchable Mirror
[Figure: racks A, B, C under a ceiling mirror; a switchable mirror (SM) in "mirror" mode redirects the beam.]
Switchable mirror: glass mirror with electronic control, low latency
Steerability via Galvo Mirror
[Figure: racks A, B, C under a ceiling mirror; a galvo mirror steers the beam.]
Galvo mirror: small rotating mirror, very low latency
How to Design the FireFly Network?
Goals: robustness to current and future traffic; budget & physical constraints
Design parameters
– Number of FSOs?
– Number of steering mirrors?
– Initial mirror configuration
Performance metric
– Dynamic bisection bandwidth
Discussion
Next Time: Rack-Scale Computing