1 Energy in Networks & Data Center Networks Department of EECS University of Tennessee, Knoxville Yanjun Yao

2 Network Architecture [Diagram: end hosts connect through switches and a router to the Internet]

3 A Feasibility Study for Power Management in LAN Switches Maruti Gupta, Satyajit Grover and Suresh Singh, Computer Science Department, Portland State University

4 Motivation and Goals Motivation  Few dynamic power management schemes exist for Internet devices Goal  A power management scheme for LAN switches Why switches?  Switches make up the bulk of network devices in a LAN  They consume the largest percentage of energy among Internet devices
Device | Approximate Number Deployed | Total AEC (TW-h)
Hubs | 93.5 million | 1.6 TW-h
LAN switches | 95,… | … TW-h
WAN switches | 50,… | … TW-h
Routers | 3,… | … TW-h

5 Related Work Estimating power consumption in switch fabrics:  Developing statistical traffic models [Wassal et al. 2001]  Various analytical models [G. Essakimuthu et al. 2002, D. Langen et al. 2000, C. Patel et al. 1997, Hang et al. 2002, Ye et al. 2002] Power management schemes for interconnection network fabrics:  Using DVS with links [Li et al. 2003]  Using on/off links [L. Peh et al. 2003]  Router power throttling [Li et al. 2003]

6 Feasibility What to do?  Put LAN switch components, interfaces, or entire switches to sleep. Are there enough idle periods to justify sleeping?
[Plots: interactive (inter-packet) time at an individual switch interface and activity at the switch, as a percentage of a 2-hour trace, split into low- and high-activity periods; about 60% of the time the interactive gap is greater than 20 seconds]

7 Models for Sleeping Basic sleep components:  There is no existing sleep model for switches  Each port has a line card  Each line card contains a network processor and buffers  The sleep model for a line card is obtained from the sleep models of its constituent parts  Develop the sleep model based on the functionality of the line card
[Diagram: line card with network processor, ingress buffer, and egress buffer]

8 Models for Sleeping Interface state is preserved HABS (Hardware Assisted Buffered Sleep):  An incoming packet wakes up the interface and is buffered  The input buffer and input receiving circuits stay powered HAS (Hardware Assisted Sleep):  An incoming packet wakes up the switch interface but is lost  Only the receiver circuits stay powered Simple Sleep:  Set a sleep timer  Wakes up only when the timer expires Assumption:  Transitioning from a deeper sleep state to a lighter one takes time and causes a spike in energy consumption (states from lightest to deepest: Wake, HABS, HAS, Simple)
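As a rough illustration only, the wake state and the three sleep models could be encoded as follows; the power fractions and wake latencies are hypothetical placeholders, not values from the paper.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class SleepState:
    name: str
    relative_power: float   # fraction of full awake power (hypothetical)
    wake_latency_s: float   # time to transition back to Wake (hypothetical)
    buffers_arrivals: bool  # are packets arriving during sleep preserved?

# Ordered lightest to deepest, mirroring Wake > HABS > HAS > Simple Sleep.
WAKE   = SleepState("Wake",   1.00, 0.0, True)
HABS   = SleepState("HABS",   0.30, 0.1, True)   # receiver + ingress buffer powered
HAS    = SleepState("HAS",    0.15, 0.5, False)  # receiver powered; waking packet lost
SIMPLE = SleepState("Simple", 0.05, 1.0, False)  # timer-only wake; all arrivals lost
```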

9 Implication of Sleeping Simple Sleep:  All packets arriving during sleep are lost  Poor throughput; the energy saving will be offset by retransmissions  To use this state, we need: For an interface connected to an end host: ACPI (Advanced Configuration and Power Interface) to let the host inform the switch that it is going to sleep For an interface connecting switches: a guarantee that no packets will be sent to a sleeping interface HAS:  The packet that wakes up the interface is lost  To use it, we need: To send a dummy packet ahead of the packets destined for the sleeping interface

10 Implication of Sleeping HABS:  Lower energy saving Further simplifying the model:  Simple Sleep: switch interfaces connected to end hosts with extended ACPI  HABS: switch-to-switch links, switch-to-router links, and switch interfaces connected to hosts without extended ACPI

11 Algorithms for Sleeping Questions:  When can an interface go to sleep?  How long should each sleep interval be?  How long should the interface stay awake between consecutive sleeps? Wake and Simple Sleep:  The switch interface sleeps when the end host goes to sleep  It wakes up periodically to check whether the host has woken up: once awake, the end host sends packets to the switch interface  The interface remains awake while the end host is awake, until the host sleeps again (see the sketch below)
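A minimal sketch of the Wake and Simple Sleep loop, assuming the interface re-checks the host on a fixed timer; the check period stands in for the symbol lost from the slide.

```python
import time

def simple_sleep_loop(host_is_awake, check_period_s: float = 5.0) -> None:
    """Interface sleeps while the attached host sleeps and wakes on a
    timer to re-check; it stays awake while the host is awake."""
    while True:
        if host_is_awake():
            while host_is_awake():
                time.sleep(0.1)        # placeholder for normal forwarding
        time.sleep(check_period_s)     # timer-driven wake from Simple Sleep
```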

12 Algorithms for Sleeping Wake and HABS:  Make the decision after processing the last packet in the buffer: if the idle gap until the next packet exceeds the wake-up transition time, sleep for the remainder of the gap; otherwise, stay awake  Two simple practical algorithms: Estimated algorithm:  Uses an estimator of the next idle gap and sleeps when the estimate exceeds the wake-up transition time  Sleeps until woken up by an incoming packet Estimated and Periodic algorithm:  For periodic traffic  Gets the time y to the next periodic packet and uses it to bound the sleep interval  The interface sleeps only if the bounded interval still exceeds the wake-up transition time (a sketch follows)
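The slide's formulas were lost in extraction; below is a minimal sketch of the Estimated algorithm, assuming it tracks inter-arrival gaps with an exponentially weighted moving average and sleeps when the predicted gap exceeds the wake-up transition time. The variable names and the EWMA weight alpha are assumptions, not the paper's notation.

```python
class EstimatedSleeper:
    """Sleep when the predicted idle gap exceeds the wake-up cost t_wake."""

    def __init__(self, t_wake: float, alpha: float = 0.25):
        self.t_wake = t_wake      # time needed to transition back to Wake
        self.alpha = alpha        # EWMA weight (assumed value)
        self.gap_estimate = 0.0   # predicted inter-arrival gap
        self.last_arrival = None

    def on_packet(self, now: float) -> float:
        """Update the gap estimate; return how long to sleep (0 = stay awake)."""
        if self.last_arrival is not None:
            gap = now - self.last_arrival
            self.gap_estimate = (1 - self.alpha) * self.gap_estimate + self.alpha * gap
        self.last_arrival = now
        # Sleep only if the expected idle time covers the wake-up transition;
        # with HABS the first arriving packet wakes the interface and is buffered.
        if self.gap_estimate > self.t_wake:
            return self.gap_estimate - self.t_wake
        return 0.0
```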

13 Estimated Energy Savings Determine the energy saving as the ratio of energy with no sleeping to energy when sleeping.
[Plot: energy ratio vs. time to wake up (seconds) for an individual switch interface, during high- and low-activity periods, for sleep-state power e_s = 0.1 and e_s = 0.5]
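For intuition, here is one plausible form of the plotted ratio, assuming awake power normalized to 1, sleep-state power e_s, and a fraction f of time spent asleep; this is a reconstruction, not the paper's exact formula:

```latex
\frac{E_{\text{no sleep}}}{E_{\text{sleep}}} \approx \frac{1}{(1 - f) + e_s \, f}
```

Under this model, e_s = 0.1 with the interface asleep 90% of the time (f = 0.9) gives a ratio of about 1/0.19 ≈ 5.3.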

14 Performance of Three Algorithms
[Plots: energy ratio (energy with no sleeping / energy when sleeping) vs. time to wake up (seconds) for the host-Y-to-switch, host-M-to-switch, and switch-to-switch interfaces, under light and heavy traffic; the Optimal, Estimated, and Estimated & Periodic curves nearly coincide in every case]
The three algorithms have very similar performance

15 Simulation Setup Topology: six switches STP runs in the network in addition to the different data streams Data for the simulations is generated using a Markov Modulated Poisson Process Simulated in Opnet Evaluated interfaces:  sw0 to sw4  sw2 to mmpp22

16 Simulation Results Switch-to-switch interfaces save more energy
[Plots: energy ratio (energy with no sleeping / energy when sleeping) and percentage of packets lost vs. time to wake up (seconds), for the HABS and Simple Sleep simulations on the switch interfaces]

17 Impact of Sleeping on Protocols and Topology Design Simple Sleep's impact on protocol design:  For periodic messages, the sleep time must be finely tuned.  All interfaces must be woken up for broadcasts. Impact of network topology and VLANs on sleeping:  With redundant paths, traffic can be aggregated onto some of the paths and the rest put to sleep. However, STP already generates a single spanning tree, which determines the paths that carry traffic.

18 Conclusion Sleeping in order to save energy is a feasible option in the LAN. Three sleeping models are proposed. Two types of algorithms for transitioning between the wake and sleep states are shown. Simulations were done to evaluate the performance of HABS and Simple Sleep.

19 Critique Three sleeping models are proposed but only two of them are evaluated; HAS is eliminated without a good reason. Hardware modifications are needed to support the three sleep models. For the first simulation, HABS is said to be used for both experiments, yet different transition energies are used. Packet delay was not evaluated.

20 VL2: A Scalable and Flexible Data Center Network Microsoft Research Albert Greenberg, James R. Hamilton, Navendu Jain, Srikanth Kandula, Changhoon Kim, et al.

21 Architecture of Data Center Networks (DCN)

22 Conventional DCN Problems Static network assignment Fragmentation of resources Poor server-to-server connectivity Services' traffic affects each other Poor reliability and utilization
[Diagram: conventional hierarchy of core routers (CR), aggregation routers (AR), switches (S), and servers (A); one service wants more capacity while another has spare servers it cannot share; oversubscription ratios of 1:5, 1:80, and 1:240 at successive layers]

23 Objectives: Uniform high capacity:  The maximum rate of a server-to-server traffic flow should be limited only by the capacity of the network cards  Assigning servers to services should be independent of the network topology Performance isolation:  Traffic of one service should not be affected by the traffic of other services Layer-2 semantics:  Easily assign any server to any service  Configure a server with whatever IP address the service expects  A VM keeps the same IP address even after migration

24 Measurements and Implications of DCN Data-center traffic analysis:  The ratio of traffic between servers to traffic entering/leaving the data center is 4:1  Demand for bandwidth between servers is growing faster  The network is the bottleneck of computation Flow distribution analysis:  The majority of flows are small; the biggest flows are around 100 MB  The distribution of internal flows is simpler and more uniform  50% of the time a server has about 10 concurrent flows; 5% of the time it has more than 80

25 Measurements and Implications of DCN Traffic matrix analysis:  Traffic patterns summarize poorly  Traffic patterns are unstable Failure characteristics:  Pattern of networking equipment failures: 95% are resolved quickly, while a small fraction last more than 10 days  There is no obvious way to eliminate all failures from the top of the hierarchy

26 Virtual Layer Two Networking (VL2) Design principles:  Randomizing to cope with volatility: use Valiant Load Balancing (VLB) to spread traffic across multiple intermediate nodes, independent of the destination  Building on proven networking technology: use IP routing and forwarding technologies available in commodity switches  Separating names from locators: use a directory system to maintain the mapping between names and locations  Embracing end systems: a VL2 agent runs on each server

27 VL2 Addressing and Routing Servers use flat names (application addresses, AAs); switches run link-state routing on locator addresses (LAs) and maintain only the switch-level topology. The directory service maintains the AA-to-LA mapping (e.g., x → ToR 2, y → ToR 3, z → ToR 4) and answers lookups from the agents; when a server moves (z moving from ToR 3 to ToR 4 in the slide), only its directory entry is updated.
[Diagram: packets addressed to y and z are encapsulated to the LAs of their ToR switches and decapsulated at the destination ToR]
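A toy sketch of the AA/LA split described above: the VL2 agent resolves the destination AA through the directory and encapsulates the packet to the hosting ToR's LA. The function and field names are illustrative, not the actual VL2 implementation.

```python
# Directory mapping: application address (AA) -> LA of the hosting ToR switch.
directory = {"x": "ToR2", "y": "ToR3", "z": "ToR4"}

def encapsulate(src_aa: str, dst_aa: str, payload: bytes) -> dict:
    """VL2-agent-style encapsulation: tunnel the AA packet to the ToR's LA."""
    dst_la = directory[dst_aa]            # directory lookup (cached in practice)
    return {
        "outer_dst": dst_la,              # routed on the switch-level LA topology
        "inner": {"src": src_aa, "dst": dst_aa, "payload": payload},
    }

pkt = encapsulate("x", "y", b"hello")
assert pkt["outer_dst"] == "ToR3"         # ToR3 decapsulates and delivers to y
```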

28 Random Traffic Spreading over Multiple Paths
[Diagram: ToR switches T1-T6; a sender tunnels each flow first to a randomly chosen intermediate switch (via the anycast address I_ANY) and then to the destination ToR, with separate links used for up paths and down paths]
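A sketch of the per-flow spreading this slide illustrates: hashing the flow identifier to pick an intermediate switch keeps each flow on one path (avoiding reordering) while spreading flows uniformly and independently of the destination. The names and the use of SHA-1 here are assumptions for illustration.

```python
import hashlib

INTERMEDIATES = ["Int1", "Int2", "Int3"]  # switches answering the anycast LA

def pick_intermediate(src: str, dst: str, src_port: int, dst_port: int) -> str:
    """Per-flow hash: same flow always takes the same path; different
    flows spread uniformly across the intermediate switches."""
    key = f"{src}:{dst}:{src_port}:{dst_port}".encode()
    digest = int.from_bytes(hashlib.sha1(key).digest()[:8], "big")
    return INTERMEDIATES[digest % len(INTERMEDIATES)]

# A packet is first tunneled to the chosen intermediate, then to the dest ToR.
print(pick_intermediate("x", "y", 12345, 80))
```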

29 VL2 Directory System Agents talk to read-optimized directory servers (DS), which are backed by a small cluster of replicated state machine (RSM) servers. Lookup: 1. the agent sends a lookup; 2. a directory server replies. Update: 1. the agent sends an update to a directory server; 2. the DS sets the new mapping at the RSM; 3. the RSM servers replicate it; 4. the RSM acks the DS; 5. the DS acks the agent; (6. the DS disseminates the change to the other directory servers).
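A toy model of this two-tier design, with fast cached lookups at the directory servers and writes going through the replicated state machine; the class and method names are illustrative only.

```python
class DirectoryService:
    """Toy model of VL2's two-tier directory: read-optimized directory
    servers (DS) cache mappings; a small replicated state machine (RSM)
    cluster holds the authoritative copy for reliable updates."""

    def __init__(self):
        self.rsm = {}      # authoritative AA -> LA mappings (replicated in VL2)
        self.cache = {}    # DS-side cache serving fast lookups

    def lookup(self, aa: str) -> str:           # steps 1-2: Lookup / Reply
        if aa not in self.cache:
            self.cache[aa] = self.rsm[aa]       # miss: fetch authoritative value
        return self.cache[aa]

    def update(self, aa: str, la: str) -> None: # steps 1-6: Update ... Disseminate
        self.rsm[aa] = la                       # 2-4: Set, Replicate, Ack
        self.cache[aa] = la                     # 6: Disseminate to DS caches
```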

30 Evaluation Uniform high capacity:  All-to-all data shuffle stress test: 75 servers, each delivering 500 MB to each of the others  The maximal achievable goodput is 62.3 Gbps  VL2 achieves a goodput of 58.8 Gbps, a network efficiency of 58.8/62.3 = 94%

31 Evaluation Fairness:  75 nodes  Real data center workload  Plot Jain's fairness index for traffic across the intermediate switches
[Plot: fairness index vs. time (s) for aggregation switches Aggr1, Aggr2, Aggr3]
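Jain's fairness index used in this plot is (Σx)² / (n · Σx²), which equals 1 when all flows receive equal throughput; a quick implementation:

```python
def jain_fairness(throughputs: list[float]) -> float:
    """Jain's index: (sum x)^2 / (n * sum x^2); 1.0 = perfectly fair."""
    n = len(throughputs)
    total = sum(throughputs)
    return total * total / (n * sum(x * x for x in throughputs))

print(jain_fairness([10, 10, 10]))   # 1.0: equal shares
print(jain_fairness([30, 0, 0]))     # ~0.333: one flow takes everything
```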

32 Evaluation Performance isolation:  Two experiments with two types of services: Service one: 18 servers doing single long-lived TCP transfers all the time Service two (experiment 1): 19 servers, each starting an 8 GB TCP transfer every 2 seconds Service two (experiment 2): 19 servers bursting short TCP connections

33 Evaluation Convergence after link failures  75 servers  All-to-all data shuffle  Disconnect links between intermediate and aggregation switches

34 Conclusion Studied the traffic patterns in a production data center Designed, built, and deployed every component of VL2 on an 80-server testbed Applied VLB to randomly spread traffic over multiple paths Used flat addressing to separate server names from their locations (IP addresses)

35 Critique Extra servers are needed to support the VL2 directory system:  This adds device cost  It is hard to implement for data centers with tens of thousands of servers All links and switches stay active all the time, which is not power efficient No evaluation of real-time performance

36 Comparison
                | LAN Switch                 | VL2
Target          | Save power on LAN switches | Achieve agility on the DCN
Network         | LAN                        | DCN
Traffic pattern | Light most of the time     | Highly unpredictable
Object          | Switches                   | Whole network
Experiment      | Simulation in Opnet        | Real testbed

37 Q&A