Demystifying and Controlling the Performance of Data Center Networks.


1 Demystifying and Controlling the Performance of Data Center Networks

2 Why are Data Centers Important?
Internal users
–Line-of-Business apps
–Production test beds
External users
–Web portals
–Web services
–Multimedia applications
–Chat/IM

3 Why are Data Centers Important?
Poor performance → loss of revenue
Understanding traffic is crucial
Traffic engineering is crucial

4 Road Map
Understanding data center traffic
Improving network-level performance
Ongoing work

5 Canonical Data Center Architecture
[Figure: three-tier topology with Core (L3), Aggregation (L2), and Top-of-Rack Edge (L2) switches above the application servers]

6 Dataset: Data Centers Studied

DC Role             DC Name   Location     Number of Devices
Universities        EDU1      US-Mid        22
                    EDU2      US-Mid        36
                    EDU3      US-Mid        11
Private Enterprise  PRV1      US-Mid        97
                    PRV2      US-West      100
Commercial Clouds   CLD1      US-West      562
                    CLD2      US-West      763
                    CLD3      US-East      612
                    CLD4      S. America   427
                    CLD5      S. America   427

10 data centers across 3 classes: universities, private enterprise, and clouds
–Universities and private enterprises: internal users, small, local to campus
–Clouds: external users, large, globally diverse

7 Dataset: Collection
SNMP
–Poll SNMP MIBs
–Bytes-in / bytes-out / discards
–> 10 days of data
–Averaged over 5-minute intervals
Packet traces
–Cisco port span
–12 hours
Topology
–Cisco Discovery Protocol

DC Name   SNMP   Packet Traces   Topology
EDU1      Yes    Yes             Yes
EDU2      Yes    Yes             Yes
EDU3      Yes    Yes             Yes
PRV1      Yes    Yes             Yes
PRV2      Yes    Yes             Yes
CLD1      Yes    No              No
CLD2      Yes    No              No
CLD3      Yes    No              No
CLD4      Yes    No              No
CLD5      Yes    No              No
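As a rough illustration of how a 5-minute SNMP byte counter translates into link utilization, here is a minimal sketch; the function name and example inputs are hypothetical and not taken from the study.

    # Hypothetical sketch: turn a 5-minute SNMP byte counter into link utilization.
    POLL_INTERVAL_S = 5 * 60  # counters are averaged over 5-minute windows

    def link_utilization(bytes_out, link_speed_bps):
        """Fraction of link capacity used during one polling interval."""
        bits_sent = bytes_out * 8
        return bits_sent / (POLL_INTERVAL_S * link_speed_bps)

    # Example: 30 GB sent on a 1 Gbps link over 5 minutes -> 0.8 (80% utilization)
    print(link_utilization(30 * 10**9, 1e9))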

8 Canonical Data Center Architecture
[Figure: the same three-tier topology, with packet sniffers attached to collect the packet traces]

9 Analyzing Packet Traces
Transmission patterns of the applications
Properties of packets are crucial for
–Understanding the effectiveness of techniques
ON-OFF traffic at the edges
–Binned in 15 and 100 ms bins (see the sketch below)
–We observe that the ON-OFF pattern persists
Routing must react quickly to overcome bursts
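A minimal sketch of the kind of binning described on this slide, assuming an ON period is a run of time bins that contain at least one packet; the helper and the toy timestamps are illustrative, not the authors' code.

    # Sketch (assumed approach): bin packet arrival timestamps and label a bin "ON"
    # if it contains any packets; consecutive ON bins form an ON period.
    from itertools import groupby

    def on_off_periods(timestamps, bin_ms=15):
        """Return lists of ON and OFF period lengths, in numbers of bins."""
        bin_s = bin_ms / 1000.0
        last_bin = int(max(timestamps) / bin_s)
        occupied = {int(t / bin_s) for t in timestamps}
        flags = [b in occupied for b in range(last_bin + 1)]
        on, off = [], []
        for is_on, run in groupby(flags):
            (on if is_on else off).append(len(list(run)))
        return on, off

    on, off = on_off_periods([0.001, 0.004, 0.050, 0.051, 0.200], bin_ms=15)
    print(on, off)  # ON runs and OFF gaps, measured in 15 ms bins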

10 Data-Center Traffic is Bursty
Understanding the arrival process
–Range of acceptable models
What is the arrival process?
–Heavy-tailed for all 3 distributions: ON times, OFF times, inter-arrival times
–Lognormal across all data centers
–Different from the Pareto distribution seen in WANs
Need new models to generate traffic

Data Center   Best-fit distributions (ON, OFF, inter-arrival)
PRV2_1        Lognormal
PRV2_2        Lognormal
PRV2_3        Lognormal
PRV2_4        Lognormal
EDU1          Lognormal, Weibull
EDU2          Lognormal, Weibull
EDU3          Lognormal, Weibull
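To illustrate how candidate heavy-tailed models could be tested against measured inter-arrival times, here is a sketch using SciPy's standard fitting and KS-test routines on synthetic data; it is not the fitting procedure used in the study.

    # Sketch: fit candidate heavy-tailed distributions to inter-arrival times
    # (scipy is assumed to be available; the trace data here is synthetic).
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    interarrivals = rng.lognormal(mean=-6.0, sigma=1.0, size=10_000)  # stand-in trace

    for label, dist in [("lognormal", stats.lognorm), ("weibull", stats.weibull_min)]:
        params = dist.fit(interarrivals, floc=0)  # fix location at zero
        ks_stat, p_value = stats.kstest(interarrivals, dist.name, args=params)
        print(f"{label}: KS={ks_stat:.3f}, p={p_value:.3f}")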

11 Canonical Data Center Architecture
[Figure: the three-tier topology shown earlier]

12 Intra-Rack Versus Extra-Rack
Quantify the amount of traffic using the interconnect
–Perspective for the interconnect analysis
Extra-rack traffic = sum of the ToR uplinks
Intra-rack traffic = sum of the server links – extra-rack traffic
(A small accounting sketch follows.)
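The accounting on this slide can be written down directly; the sketch below uses hypothetical per-link byte counts for one rack.

    # Sketch of the slide's accounting (hypothetical inputs):
    #   extra-rack traffic = sum of the ToR switch's uplink counters
    #   intra-rack traffic = sum of all server-facing link counters - extra-rack
    def split_rack_traffic(server_link_bytes, uplink_bytes):
        extra_rack = sum(uplink_bytes)
        intra_rack = sum(server_link_bytes) - extra_rack
        return intra_rack, extra_rack

    intra, extra = split_rack_traffic(server_link_bytes=[400, 300, 300], uplink_bytes=[250])
    print(intra, extra)  # 750 bytes stay in the rack, 250 leave it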

13 Intra-Rack Versus Extra-Rack Results
Clouds: most traffic (75%) stays within a rack
–Colocation of applications and their dependent components
Other DCs: > 50% of traffic leaves the rack
–Unoptimized placement

14 Extra-Rack Traffic on the DC Interconnect
Utilization: core > aggregation > edge
–Aggregation of traffic from many links onto few
The tail of the core utilization distribution differs
–Hot-spots: links with > 70% utilization
–The prevalence of hot-spots differs across data centers

15 Persistence of Core Hot-Spots
Low persistence: PRV2, EDU1, EDU2, EDU3, CLD1, CLD3
High persistence / low prevalence: PRV1, CLD2
–2-8% of links are hot-spots more than 50% of the time
High persistence / high prevalence: CLD4, CLD5
–15% of links are hot-spots more than 50% of the time

16 Prevalence of Core Hot-Spots
Low persistence: very few concurrent hot-spots
High persistence: few concurrent hot-spots
High prevalence: < 25% of links are hot-spots at any time
[Figure: fraction of links that are hot-spots over time (hours); annotations range from 0.0% to 24.0%]
Smart routing can better utilize the core and avoid hot-spots
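A sketch of how prevalence (how many links are hot in each snapshot) and persistence (how often each link is hot) could be computed from per-link utilization samples, using the 70% threshold from slide 14; the data structures are illustrative.

    # Sketch: given per-link utilization snapshots over time, flag hot-spots
    # (> 70% utilization) and compute prevalence and persistence.
    HOTSPOT_THRESHOLD = 0.70

    def hotspot_stats(util):
        """util: list of snapshots, each a list of per-link utilizations."""
        n_links = len(util[0])
        # Prevalence: fraction of links that are hot-spots in each snapshot.
        prevalence = [sum(u > HOTSPOT_THRESHOLD for u in snap) / n_links for snap in util]
        # Persistence: per link, fraction of snapshots in which it is a hot-spot.
        persistence = [
            sum(snap[i] > HOTSPOT_THRESHOLD for snap in util) / len(util)
            for i in range(n_links)
        ]
        return prevalence, persistence

    prev, pers = hotspot_stats([[0.9, 0.2, 0.1], [0.8, 0.75, 0.1], [0.3, 0.2, 0.1]])
    print(prev)  # fraction of links hot at each time step
    print(pers)  # fraction of time each link is hot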

17 Observations from the Interconnect
Link utilization is low at the edge and aggregation layers
The core is the most utilized layer
–Hot-spots exist (> 70% utilization)
–< 25% of links are hot-spots
–Loss occurs on less-utilized links (< 70%), implicating momentary bursts
Time-of-day variation exists
–The variation is an order of magnitude larger at the core
Apply these results to evaluate DC design requirements

18 Insights Gained
75% of traffic stays within a rack (clouds)
–Applications are not uniformly placed
Traffic is bursty at the edge
At most 25% of core links are highly utilized
–An effective routing algorithm could reduce utilization
–Load balance across paths and migrate VMs

19 Road Map
Understanding data center traffic
Improving network-level performance
Ongoing work

20 Options for TE in Data Centers?
Currently supported techniques
–Equal-Cost MultiPath (ECMP)
–Spanning Tree Protocol (STP)
Proposed
–Fat-Tree, VL2
Other existing WAN techniques
–COPE, …, OSPF link tuning

21 How do we evaluate TE?
Simulator
–Input: traffic matrix, topology, traffic engineering scheme
–Output: link utilization
Optimal TE
–Route traffic using knowledge of the future TM
Data center traces
–Cloud data center (CLD): MapReduce app, ~1500 servers
–University data center (UNV): 3-tier web apps, ~500 servers

22 Drawbacks of Existing TE
STP does not use multiple paths
–40% worse than optimal
ECMP does not adapt to burstiness
–15% worse than optimal

23 Design Goals for Ideal TE

24 Design Requirements for TE
Calculate paths & reconfigure the network
–Use all network paths
–Use a global view: avoid local optima
–Must react quickly: react to burstiness
How predictable is traffic? …

25 Is Data Center Traffic Predictable?
YES! 27% or more of the traffic matrix is predictable
Manage predictable traffic more intelligently
[Figure: chart annotated with 27% and 99%]

26 How Long is Traffic Predictable?
Different patterns of predictability
1 second of historical data is able to predict the future
[Figure annotations: 1.5 – 5.0 and 1.6 – 2.5]
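One way to make the idea of a predictability horizon concrete: treat a traffic-matrix entry as predictable for as long as it stays within a tolerance of its running mean. This is an assumed definition for illustration only, not necessarily the paper's exact metric.

    # Sketch (assumed definition): an entry of the traffic matrix is treated as
    # predictable while it stays within +/- delta of its running mean.
    def predictability_horizon(samples, delta=0.2):
        """Number of consecutive future samples within delta of the running mean."""
        if not samples:
            return 0
        mean = samples[0]
        horizon = 0
        for x in samples[1:]:
            if abs(x - mean) > delta * mean:
                break
            horizon += 1
            mean = (mean * horizon + x) / (horizon + 1)  # update running mean
        return horizon

    print(predictability_horizon([100, 105, 98, 102, 180, 90]))  # stable for 3 steps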

27 MicroTE

28 MicroTE: Architecture
Global view:
–Created by the network controller
React to predictable traffic:
–The routing component tracks demand history
All network paths:
–The routing component creates routes using all paths
[Figure labels: Monitoring Component, Routing Component, Network Controller]

29 Architectural Questions
–Efficiently gather network state?
–Determine predictable traffic?
–Generate and calculate new routes?
–Install network state?

30 Architectural Questions
–Efficiently gather network state?
–Determine predictable traffic?
–Generate and calculate new routes?
–Install network state?

31 Monitoring Component
Efficiently gather the TM
–Only one server per ToR monitors traffic
–Transfer only the changed portion of the TM
–Compress the data
Tracking predictability
–Calculate an EWMA over the TM (every second), with an empirically derived alpha of 0.2
–Use time bins of 0.1 seconds
(An EWMA sketch follows.)
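A minimal sketch of the EWMA tracking described above, folding 0.1-second bins into a per-second update with the slide's alpha of 0.2; the variable names and synthetic byte counts are illustrative, not MicroTE's code.

    # Sketch: per ToR-pair EWMA of the traffic matrix, updated once per second
    # from 0.1-second bins, with the empirically derived alpha of 0.2.
    ALPHA = 0.2
    BINS_PER_SECOND = 10  # 0.1-second time bins

    def update_ewma(prev_ewma, bins):
        """Fold one second of 0.1 s byte counts into the running EWMA."""
        second_total = sum(bins[-BINS_PER_SECOND:])
        if prev_ewma is None:
            return second_total
        return ALPHA * second_total + (1 - ALPHA) * prev_ewma

    ewma = None
    for second_of_bins in ([5] * 10, [6] * 10, [50] * 10):  # synthetic byte counts
        ewma = update_ewma(ewma, second_of_bins)
        print(round(ewma, 1))  # 50, 52.0, 141.6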

32 Routing Component
Given the new global view:
–Determine the predictable ToRs
–Calculate network routes for predictable traffic
–Set ECMP for unpredictable traffic
–Install the routes

33 Routing Predictable Traffic
LP formulation
–Constraints: flow conservation, capacity constraint, use K equal-length paths
–Objective: minimize link utilization
Bin-packing heuristic (sketch below)
–Sort flows in decreasing order
–Place each flow on the link with the greatest capacity
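A sketch of the bin-packing heuristic in the last bullet: sort predictable flows by demand and place each one on the candidate path with the most remaining capacity. The flow and path representation is a simplification for illustration, not MicroTE's implementation.

    # Sketch of the bin-packing heuristic: largest flows first, each placed on
    # the candidate path with the most residual capacity.
    def pack_flows(flows, paths):
        """flows: {flow_id: demand}; paths: {flow_id: [(path_id, capacity), ...]}."""
        residual = {pid: cap for cands in paths.values() for pid, cap in cands}
        placement = {}
        for flow_id, demand in sorted(flows.items(), key=lambda kv: kv[1], reverse=True):
            candidates = [pid for pid, _ in paths[flow_id]]
            best = max(candidates, key=lambda pid: residual[pid])
            placement[flow_id] = best
            residual[best] -= demand
        return placement

    flows = {"a": 8, "b": 5, "c": 3}
    paths = {f: [("p1", 10), ("p2", 10)] for f in flows}
    print(pack_flows(flows, paths))  # {'a': 'p1', 'b': 'p2', 'c': 'p2'}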

34 Implementation
Changes to the data center
–Switches: install OpenFlow firmware
–End hosts: add a kernel module
New component
–Network controller: C++ NOX modules

35 Evaluation

36 Evaluation: Motivating Questions
How does MicroTE compare to optimal?
How does MicroTE perform under varying levels of predictability?
How does MicroTE scale to large DCNs?
What overhead does MicroTE impose?

37 Evaluation: Motivating Questions
How does MicroTE compare to optimal?
How does MicroTE perform under varying levels of predictability?
How does MicroTE scale to large DCNs?
What overhead does MicroTE impose?

38 How do we evaluate TE?
Simulator
–Input: traffic matrix, topology, traffic engineering scheme
–Output: link utilization
Optimal TE
–Route traffic using knowledge of the future TM
Data center traces
–Cloud data center (CLD): MapReduce app, ~1500 servers
–University data center (UNV): 3-tier web apps, ~400 servers

39 Performing Under Realistic Traffic
Significantly outperforms ECMP
Slightly worse than optimal (1%-5%)
Bin-packing and LP have comparable performance

40 Performance Versus Predictability
Low predictability → performance is similar to ECMP

41 Performance Versus Predictability
Low predictability → performance is similar to ECMP
High predictability → performance is comparable to optimal
MicroTE adjusts according to predictability

42 Conclusion
Studied existing TE
–Found it lacking (15-40% worse than optimal)
Studied data center traffic
–Discovered traffic predictability (27% of the traffic matrix for 2 seconds)
Developed guidelines for ideal TE
Designed and implemented MicroTE
–Brings state of the art within 1-5% of ideal
–Efficiently scales to large DCs (16K servers)

43 Road Map
Understanding data center traffic
Improving network-level performance
Ongoing work

44 Looking Forward
Stop treating the network as a carrier of bits
Bits in the network have meaning
–Applications know this meaning
Can applications control networks?
–E.g., a MapReduce scheduler performs network-aware task placement and flow placement

