Cloud Computing Data Centers

Cloud Computing Data Centers Dr. Sanjay P. Ahuja, Ph.D. 2010-14 FIS Distinguished Professor of Computer Science School of Computing, UNF

Warehouse-Scale Data Center (WSC) Design

Warehouse-Scale Data Center Design

The cloud is built on massive datacenters. A datacenter as large as a shopping mall can house 400,000 to 1 million servers. These datacenters exploit economies of scale: the larger the datacenter, the lower the operational cost. The network cost to operate a small datacenter is about 7 times greater, and the storage cost about 5.7 times greater, than the corresponding costs for a large datacenter.

Most datacenters are built with commercial off-the-shelf (COTS) components. A COTS server consists of a number of processor sockets, each with a multicore CPU and its internal cache hierarchy, local and shared DRAM, and a number of directly attached disk drives (DASD). The DRAM and DASD within a rack are accessible through first-level rack switches, and all resources in all racks are accessible via a cluster-level switch.

Example: a datacenter with 2,000 servers, each with 8 GB of DRAM and four 1 TB disk drives. Each group of 40 servers is connected through a 1 Gbps link to a rack-level switch that has an additional eight 1 Gbps ports for connecting the rack to the cluster-level switch. The rack-level switch thus has 40 input ports (one per server) and 8 output ports to the cluster-level switch, so 40 servers share 8 uplinks and each uplink is oversubscribed at a ratio of 5:1.
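
The oversubscription arithmetic in this example is easy to check directly. A minimal sketch in Python, using only the figures quoted above:

```python
# Oversubscription of a rack's uplinks: total server-facing bandwidth
# divided by total uplink bandwidth. All figures are from the example above.
servers_per_rack = 40
server_link_gbps = 1       # each server has a 1 Gbps link to the rack switch
uplinks = 8
uplink_gbps = 1            # eight 1 Gbps ports toward the cluster-level switch

demand = servers_per_rack * server_link_gbps   # 40 Gbps of potential demand
supply = uplinks * uplink_gbps                 # 8 Gbps toward the cluster

print(f"Oversubscription ratio: {demand / supply:.0f}:1")        # -> 5:1

# Fair-share bandwidth per server when every server talks off-rack at once:
fair_share_mbps = supply * 1000 / servers_per_rack
print(f"Per-server off-rack share: {fair_share_mbps:.0f} Mbps")  # -> 200 Mbps
```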

Warehouse-Scale Data Center Building Blocks

Typical elements in warehouse-scale systems: one server (left), a 7-foot rack with an Ethernet switch (middle), and a diagram of a small cluster with a cluster-level Ethernet switch/router (right).

Warehouse-Scale Data Center: Network Fabric

Choosing a networking fabric for WSCs involves a trade-off between speed, scale, and cost. 1-Gbps Ethernet switches with up to 48 ports are essentially a commodity component, costing less than $30/Gbps per server to connect a single rack. However, network switches with high port counts, which are needed to tie together WSC clusters, have a much different price structure and are more than ten times more expensive (per 1-Gbps port) than commodity switches. As a result of this cost discontinuity, the networking fabric of WSCs is often organized as the two-level hierarchy depicted in the previous figure. For example, a rack with 40 servers, each with a 1-Gbps port, might have eight 1-Gbps uplinks to the cluster-level switch, corresponding to an oversubscription factor of 5 for communication across racks. In general, intra-rack connectivity is much cheaper than inter-rack connectivity.
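
A rough cost sketch of this trade-off, assuming the slide's ~$30/Gbps commodity figure and taking its "more than ten times more expensive" as a 10x multiplier (the multiplier is a lower-bound assumption, not a quoted price):

```python
# Rough cost model for the two-level fabric described above.
commodity_cost_per_gbps = 30                       # intra-rack, 48-port switch
cluster_cost_per_gbps = 10 * commodity_cost_per_gbps  # assumed 10x multiplier

servers_per_rack = 40
uplinks_per_rack = 8                               # 5:1 oversubscription

intra_rack_cost = servers_per_rack * 1 * commodity_cost_per_gbps
uplink_cost = uplinks_per_rack * 1 * cluster_cost_per_gbps

print(f"Intra-rack ports per rack: ${intra_rack_cost}")   # $1200
print(f"Cluster uplinks per rack:  ${uplink_cost}")       # $2400
# Oversubscription keeps the count of expensive cluster-level ports low:
# 40 full-rate uplinks would instead cost 40 * 300 = $12000 per rack.
```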

Warehouse-Scale Data Center: Storage Hierarchy

The figure shows a programmer's view of the storage hierarchy of a typical WSC. A server consists of a number of processor sockets, each with a multicore CPU and its internal cache hierarchy, local and shared DRAM, and a number of directly attached disk drives. The DRAM and disk resources within the rack are accessible through the first-level rack switches (assuming some sort of remote procedure call API to them), and all resources in all racks are accessible via the cluster-level switch.
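
From the programmer's side, the hierarchy shows up as a choice between direct local access and a remote call across the rack or cluster switch. A hypothetical sketch of that view; every name here is invented for illustration, not an API from the slides:

```python
# Hypothetical programmer's view of the WSC storage hierarchy: local
# DRAM/disk is read directly, anything else goes through an RPC.
LOCAL_SERVER = 17

def read_local(medium, block_id):
    return f"block {block_id} from local {medium}"

def rpc(server, op, medium, block_id):
    # In a real WSC this crosses one switch (same rack) or two (other
    # rack) and shares the oversubscribed uplinks with 39 neighbors.
    return f"block {block_id} from {medium} on server {server}"

def read_block(block_id, server, medium):
    if server == LOCAL_SERVER:
        return read_local(medium, block_id)        # ~100 ns DRAM, ~10 ms disk
    return rpc(server, "read", medium, block_id)   # adds network latency

print(read_block(42, 17, "dram"))   # local access
print(read_block(42, 99, "disk"))   # remote access via the fabric
```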

Warehouse-Scale Data Center: Cooling System

Warehouse-Scale Data Center: Cooling System

CRAC units (CRAC is a 1960s-era term for computer room air conditioning) pressurize the raised-floor plenum by blowing cold air into it. This cold air escapes through perforated tiles placed in front of the server racks and flows through the servers, which expel warm air out the back. Racks are arranged in long aisles that alternate between cold aisles and hot aisles to avoid mixing hot and cold air. Eventually, the hot air produced by the servers recirculates to the intakes of the CRAC units, which cool it and exhaust the cooled air into the raised-floor plenum again.

CRAC units consist of coils through which a liquid coolant is pumped; fans push air through these coils, cooling it. A set of redundant pumps circulates cold coolant to the CRACs and warm coolant back to a chiller or cooling tower, which expels the heat to the outside environment. Typically, the incoming coolant is at 12–14°C, and the air exits the CRACs at 16–20°C, leading to cold-aisle (server intake) temperatures of 18–22°C.
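
As a back-of-the-envelope check on these numbers, the heat a CRAC removes from the air stream is Q = m_dot * cp * dT. The airflow and return-air temperature below are assumed round numbers for illustration; only the supply temperature comes from the range quoted above:

```python
# Back-of-the-envelope cooling capacity of one CRAC unit.
air_density = 1.2        # kg/m^3 near room temperature
cp_air = 1005            # J/(kg*K), specific heat of air
airflow = 10.0           # m^3/s -- assumption, not from the slides

t_return = 30.0          # deg C, warm hot-aisle return air (assumed)
t_supply = 18.0          # deg C, air exiting the CRAC (within 16-20 C above)

m_dot = air_density * airflow                    # kg/s of air moved
q_watts = m_dot * cp_air * (t_return - t_supply)
print(f"Heat removed: {q_watts/1000:.0f} kW")    # ~145 kW at this airflow
```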

Warehouse-Scale Data Center: Quantifying Latency, Bandwidth, and Capacity

Warehouse-Scale Data Center: Quantifying Latency, Bandwidth, and Capacity

The next figure attempts to quantify the latency, bandwidth, and capacity characteristics of a WSC. We assume a system with 2,000 servers, each with 8 GB of DRAM and four 1-TB disk drives. Each group of 40 servers is connected through a 1-Gbps link to a rack-level switch that has an additional eight 1-Gbps ports used for connecting the rack to the cluster-level switch (an oversubscription factor of 5). The rack-level switch therefore has 40 input ports (one per server) and 8 output ports to the cluster-level switch, so each output line is oversubscribed at a ratio of 5:1.

The graph shows the relative latency, bandwidth, and capacity of each resource pool. For example, the bandwidth available from local disks is 200 MB/s, whereas the bandwidth from off-rack disks is just 25 MB/s via the shared rack uplinks. On the other hand, total disk storage in the cluster is almost a million times larger than local DRAM. A large application that requires many more servers than can fit on a single rack must deal effectively with these large discrepancies in latency, bandwidth, and capacity.

Network latency numbers assume a socket-based TCP/IP transport, and networking bandwidth values assume that each server behind an oversubscribed set of uplinks is using its fair share of the available cluster-level bandwidth. We assume the rack- and cluster-level switches themselves are not internally oversubscribed. For disks, we show typical commodity (SATA) disk drive latencies and transfer rates.
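
The 25 MB/s off-rack figure follows directly from the oversubscription: each server's fair share of the uplinks is 1 Gbps / 5 = 200 Mbps, or about 25 MB/s. A short sketch that reproduces the slide's numbers from the stated configuration:

```python
# Deriving the slide's bandwidth and capacity figures from its assumptions.
servers = 2000
dram_per_server_gb = 8
disks_per_server, disk_tb = 4, 1
oversubscription = 5

# Off-rack bandwidth: a server's 1 Gbps NIC divided by the 5:1
# oversubscription, converted from bits to bytes.
off_rack_mb_s = (1000 / oversubscription) / 8
print(f"Off-rack disk bandwidth: {off_rack_mb_s:.0f} MB/s")   # 25 MB/s
print("Local disk bandwidth:     200 MB/s (SATA, from the slide)")

# Capacity discrepancy: total cluster disk vs. one server's DRAM.
cluster_disk_gb = servers * disks_per_server * disk_tb * 1000
ratio = cluster_disk_gb / dram_per_server_gb
print(f"Cluster disk / local DRAM: {ratio:,.0f}x")            # 1,000,000x
```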

Container Data Center Design

Container Data Center Design

Container-based datacenters go one step beyond in-rack cooling by placing the server racks into a standard shipping container and integrating heat exchange and power distribution into the container as well. As with full in-rack cooling, the container needs a supply of chilled water and uses coils to remove all heat from the air that flows over them. Air handling is similar to in-rack cooling and typically allows higher power densities than regular raised-floor datacenters. Thus, container-based datacenters provide all the functions of a typical datacenter room (racks, CRACs, cabling, lighting) in a small package. Like a regular datacenter room, they must be complemented by outside infrastructure such as chillers, generators, and UPS units to be fully functional. Container-based facilities have achieved extremely high energy-efficiency ratings compared with typical datacenters today, e.g., Google's container-based datacenter.

Google Data Center (Videos)

https://www.youtube.com/watch?v=PBx7rgqeGG8
Inside a Google Data Center: https://www.youtube.com/watch?v=XZmGGAbHqa0