Cloud Computing Data Centers Dr. Sanjay P. Ahuja, Ph.D. FIS Distinguished Professor of Computer Science, School of Computing, UNF

Warehouse-Scale Data Center (WSC) Design.

Warehouse-Scale Data Center Design The cloud is built on massive datacenters. Datacenters as large as a shopping mall can house 400,000 to 1 million servers. These data centers benefit from economies of scale, i.e., the larger the datacenter, the lower the operational cost per unit. The network cost to operate a small data center is about 7 times greater, and the storage cost about 5.7 times greater, than the cost to operate a large data center. Most datacenters are built from commercial off-the-shelf (COTS) components. A COTS server consists of a number of processor sockets, each with a multicore CPU and its internal cache hierarchy, local and shared DRAM, and a number of directly attached disk drives (DASD). The DRAM and the DASD within a rack are accessible through first-level rack switches, and all resources in all racks are accessible via a cluster-level switch. Example: a data center with 2,000 servers, each with 8 GB of DRAM and four 1 TB disk drives. Each group of 40 servers is connected through a 1 Gbps link to a rack-level switch that has an additional eight 1 Gbps ports for connecting the rack to the cluster-level switch.
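
To make the example concrete, here is a minimal Python sketch that works out what that configuration adds up to; every input figure comes from the example above, and the variable names are chosen only for illustration.

```python
# Back-of-the-envelope sizing for the example WSC above.
# All figures come from the slide: 2,000 servers, 8 GB DRAM and
# four 1 TB disks per server, 40 servers per rack.

SERVERS          = 2000
DRAM_PER_SERVER  = 8          # GB
DISKS_PER_SERVER = 4
DISK_SIZE_TB     = 1
SERVERS_PER_RACK = 40

racks      = SERVERS // SERVERS_PER_RACK                 # 50 racks
total_dram = SERVERS * DRAM_PER_SERVER // 1000           # 16 TB of DRAM
total_disk = SERVERS * DISKS_PER_SERVER * DISK_SIZE_TB   # 8,000 TB = 8 PB of disk

print(f"racks: {racks}, DRAM: {total_dram} TB, disk: {total_disk} TB (~8 PB)")
```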

Warehouse-Scale Data Center Building Blocks. Typical elements in warehouse-scale systems: one server (left), a 7′ rack with an Ethernet switch (middle), and a diagram of a small cluster with a cluster-level Ethernet switch/router (right).

Warehouse-Scale Data Center: Network Fabric Choosing a networking fabric for WSCs involves a trade-off between speed, scale, and cost. 1-Gbps Ethernet switches with up to 48 ports are essentially a commodity component, costing less than $30/Gbps per server to connect a single rack. However, network switches with high port counts, which are needed to tie together WSC clusters, have a much different price structure and are more than ten times more expensive (per 1-Gbps port) than commodity switches. As a result of this cost discontinuity, the networking fabric of WSCs is often organized as the two-level hierarchy depicted in the previous figure. For example, a rack with 40 servers, each with a 1-Gbps port, might have eight 1-Gbps uplinks to the cluster-level switch, corresponding to an oversubscription factor of 5 for communication across racks. Intra-rack connectivity is therefore generally much cheaper than inter-rack connectivity.
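
A quick sketch of how the oversubscription factor, and the resulting per-server share of cross-rack bandwidth, fall out of those numbers (rack size and uplink count as in the example above; nothing here is specific to any particular switch product):

```python
# Oversubscription at the rack level: bandwidth the servers could demand
# versus bandwidth the rack's uplinks can actually deliver.

servers_per_rack = 40
server_port_gbps = 1.0
uplinks          = 8
uplink_gbps      = 1.0

demand  = servers_per_rack * server_port_gbps   # 40 Gbps into the rack switch
supply  = uplinks * uplink_gbps                 #  8 Gbps out of the rack
oversub = demand / supply                       #  5x oversubscription

# Fair share of cross-rack bandwidth if every server talks off-rack at once:
per_server_gbps = supply / servers_per_rack     # 0.2 Gbps = 200 Mbps ~= 25 MB/s

print(f"oversubscription: {oversub:.0f}x, "
      f"off-rack share per server: {per_server_gbps * 125:.0f} MB/s")
```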

Warehouse-Scale Data Center: Storage Hierarchy. The figure shows a programmer’s view of the storage hierarchy of a typical WSC. A server consists of a number of processor sockets, each with a multicore CPU and its internal cache hierarchy, local and shared DRAM, and a number of directly attached disk drives. The DRAM and disk resources within the rack are accessible through the first-level rack switches (assuming some sort of remote procedure call API to them), and all resources in all racks are accessible via the cluster-level switch.

Warehouse-Scale Data Center: Cooling System.

CRAC units (CRAC is a 1960s-era term for computer room air conditioning) pressurize the raised-floor plenum by blowing cold air into it. This cold air escapes from the plenum through perforated tiles placed in front of server racks and then flows through the servers, which expel warm air at the back. Racks are arranged in long aisles that alternate between cold aisles and hot aisles to avoid mixing hot and cold air. Eventually, the hot air produced by the servers recirculates back to the intakes of the CRAC units, which cool it and then exhaust the cool air into the raised-floor plenum again. CRAC units consist of coils through which a liquid coolant is pumped; fans push air through these coils, thus cooling it. A set of redundant pumps circulates cold coolant to the CRACs and warm coolant back to a chiller or cooling tower, which expels the heat to the outside environment. Typically, the incoming coolant is at 12–14°C, and the air exits the CRACs at 16–20°C, leading to cold-aisle (server intake) temperatures of 18–22°C.
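
As a rough illustration of how much air such a system has to move, the sketch below applies the standard sensible-heat relation Q = ρ·V̇·c_p·ΔT. The rack power and the intake-to-exhaust temperature rise are assumed values chosen for illustration, not figures from the slides.

```python
# Rough airflow estimate for removing a rack's heat load with air cooling.
# Assumed values (not from the slide): a 10 kW rack and a 12 K temperature
# rise from server intake to server exhaust. Air properties are typical
# near-sea-level values.

rack_power_w = 10_000    # W of IT load to remove (assumption)
delta_t_k    = 12.0      # K rise across the servers (assumption)
air_density  = 1.2       # kg/m^3
air_cp       = 1005.0    # J/(kg*K), specific heat of air

# Q = rho * Vdot * cp * dT  =>  Vdot = Q / (rho * cp * dT)
airflow_m3_s = rack_power_w / (air_density * air_cp * delta_t_k)
airflow_cfm  = airflow_m3_s * 2118.88    # 1 m^3/s ~= 2118.88 CFM

print(f"required airflow: {airflow_m3_s:.2f} m^3/s (~{airflow_cfm:.0f} CFM)")
```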

Warehouse-Scale Data Center: Quantifying Latency, Bandwidth, and Capacity

The next figure attempts to quantify the latency, bandwidth, and capacity characteristics of a WSC. We assume a system with 2,000 servers, each with 8 GB of DRAM and four 1-TB disk drives. Each group of 40 servers is connected through a 1-Gbps link to a rack-level switch that has an additional eight 1-Gbps ports used for connecting the rack to the cluster-level switch (an oversubscription factor of 5). The graph shows the relative latency, bandwidth, and capacity of each resource pool. For example, the bandwidth available from local disks is 200 MB/s, whereas the bandwidth from off-rack disks is just 25 MB/s via the shared rack uplinks. On the other hand, total disk storage in the cluster is about a million times larger than the local DRAM of a single server (8 PB of disk versus 8 GB of DRAM). A large application that requires many more servers than can fit on a single rack must deal effectively with these large discrepancies in latency, bandwidth, and capacity.
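
Using the same 2,000-server configuration, the bandwidth and capacity gaps quoted above can be reproduced directly; the 200 MB/s and 25 MB/s bandwidth figures are the ones stated in the text, and the rest is arithmetic on the example configuration.

```python
# Bandwidth and capacity gaps a programmer sees in this example WSC.

local_disk_bw_mb_s   = 200   # MB/s from a local disk (figure quoted above)
offrack_disk_bw_mb_s = 25    # MB/s via the shared rack uplinks (quoted above)

servers, disks_per_server, disk_tb, dram_gb = 2000, 4, 1, 8

cluster_disk_gb = servers * disks_per_server * disk_tb * 1000  # 8,000,000 GB = 8 PB
local_dram_gb   = dram_gb                                      # DRAM of one server

bw_penalty     = local_disk_bw_mb_s / offrack_disk_bw_mb_s     # 8x slower off-rack
capacity_ratio = cluster_disk_gb / local_dram_gb               # ~1,000,000x

print(f"off-rack disk is {bw_penalty:.0f}x slower; "
      f"cluster disk is {capacity_ratio:,.0f}x larger than one server's DRAM")
```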

Container Data Center Design.

Container-based datacenters go one step beyond in-rack cooling by placing the server racks into a standard shipping container and integrating heat exchange and power distribution into the container as well. As with full in-rack cooling, the container needs a supply of chilled water and uses coils to remove all heat from the air that flows over them. Air handling is similar to in-rack cooling and typically allows higher power densities than regular raised-floor datacenters. Thus, container-based datacenters provide all the functions of a typical datacenter room (racks, CRACs, cabling, lighting) in a small package. Like a regular datacenter room, they must be complemented by outside infrastructure such as chillers, generators, and UPS units to be fully functional. Container-based facilities have achieved extremely high energy efficiency ratings compared with typical datacenters today; Google’s container-based datacenter is a well-known example.
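
Such energy efficiency ratings are commonly expressed as Power Usage Effectiveness (PUE), the ratio of total facility power to the power delivered to the IT equipment. The sketch below uses purely illustrative numbers, not values reported for Google’s or any other facility.

```python
# Power Usage Effectiveness: total facility power / IT equipment power.
# A PUE of 1.0 would mean every watt goes to the servers; lower is better.
# The numbers below are illustrative assumptions, not measured values.

it_power_kw      = 1000.0   # servers, storage, network gear (assumption)
cooling_power_kw = 150.0    # chillers, CRACs, fans, pumps (assumption)
other_power_kw   = 50.0     # power-distribution losses, lighting, etc. (assumption)

total_facility_kw = it_power_kw + cooling_power_kw + other_power_kw
pue = total_facility_kw / it_power_kw

print(f"PUE = {pue:.2f}")   # 1.20 with these assumptions
```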

Google Data Center (Video)