Commodity Data Center Design James Hamilton 2007-04-17

Presentation transcript:

Commodity Data Center Design
James Hamilton

Background and Biases
15+ years in database engine development teams
– Lead architect on IBM DB2
– Architect on SQL Server
Led core engine teams over the years, including SQL clients, optimizer, SQL compiler, XML, full-text search, execution engine, protocols, etc.
Led the Exchange Hosted Services team
– Anti-spam, anti-virus, and archiving for 2M+ seats
– ~700 servers in 10 data centers worldwide
Currently an architect on the Windows Live Core team
Automation & redundancy are the only way to:
– Reduce costs
– Improve the rate of innovation
– Reduce operational failures and downtime

Commodity Data Center Growth
Software as a Service
– Services without unique value-add are going off premise (payroll, security, etc. all went years ago)
– Substantial economies of scale: services run 10^5+ systems under management rather than ~10^2
– IT outsourcing is also centralizing compute centers
Commercial high-performance computing
– Leverage falling hardware costs for deep data analysis
– Better understand customers, optimize supply chains, …
Consumer services
– Google estimated at over 450 thousand systems in more than 25 data centers (NY Times)
Basic observation:
– No single system can reliably reach five 9's of availability, so redundant hardware is needed, with the resulting software complexity
– With software redundancy, the most economical hardware solution is large numbers of commodity systems
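
A minimal sketch of the availability argument: assuming each commodity server independently delivers about three 9's (an illustrative figure, not from the deck), software redundancy across a handful of replicas is what pushes a service past five 9's.

```python
# Minimal sketch, assuming independent failures and that any surviving
# replica can serve requests. The 0.999 per-server availability figure is
# illustrative, not from the deck.
def combined_availability(single: float, replicas: int) -> float:
    """Availability of a service that stays up while at least one replica is up."""
    return 1 - (1 - single) ** replicas

per_server = 0.999  # assumed three 9's for one commodity server
for n in (1, 2, 3):
    print(f"{n} replica(s): {combined_availability(per_server, n):.9f}")
# One server never reaches five 9's; two or three cheap replicas easily do,
# which is why commodity hardware plus software redundancy wins.
```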

An Idea Whose Time Has Come
Nortel: steel enclosure, containerized telecom equipment
Sun Project Blackbox: 242 systems in a 20' container
Rackable Systems Concentro: 1,152 systems in a 40' container (9,600 cores / 3.5 PB)
Rackable Systems container cooling model
Caterpillar portable power
Datatainer
ZoneBox
Google WillPower project (Will Whitted)
Petabox (Brewster Kahle, Internet Archive)

Cooling, Feedback, & Air Handling Gains
Tighter control of airflow increases delta-T and overall system efficiency
Expect increased use of special enclosures, variable-speed fans, and warm machine rooms
CRACs closer to the servers give a tighter temperature-control feedback loop
The container takes this one step further: very little air in motion, variable-speed fans, and tight feedback between CRAC and rack
(Examples pictured: Intel, Verari)
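
To see why a higher delta-T matters, the standard sensible-heat relation ties heat load, airflow, and temperature rise together; the sketch below (rack power figure assumed, not from the slide) shows how doubling delta-T halves the air the CRACs and fans must move.

```python
# Minimal sketch: airflow needed to remove a heat load at a given delta-T,
# from the sensible-heat relation  P = rho * cp * flow * delta_T.
RHO_AIR = 1.2    # kg/m^3, approximate density of air
CP_AIR = 1005.0  # J/(kg*K), specific heat of air

def airflow_m3_per_s(power_watts: float, delta_t_kelvin: float) -> float:
    """Volumetric airflow needed to carry away power_watts at a given delta-T."""
    return power_watts / (RHO_AIR * CP_AIR * delta_t_kelvin)

rack_power = 20_000.0  # watts; assumed rack load, illustrative only
for dt in (5.0, 10.0, 20.0):
    flow = airflow_m3_per_s(rack_power, dt)
    print(f"delta-T {dt:4.0f} K -> {flow:5.2f} m^3/s ({flow * 2118.88:5.0f} CFM)")
# Doubling delta-T halves the airflow, which is where the air-handling
# efficiency gains come from.
```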

Shipping Container as Data Center Module
Data center module
– Contains network gear, compute, storage, & cooling
– Just plug in power, network, & chilled water
Increased cooling efficiency
– Variable water & air flow
– Better airflow management (higher delta-T)
– 80% air-handling power reductions (Rackable Systems)
Bring your own data center shell
– Only a central networking, power, cooling, security & admin center is needed
– Grow beyond existing facilities
– Can be stacked 3 to 5 high
– Fewer regulatory issues (e.g. no building permit)
– Avoids (for now) building floor-space taxes
Meet seasonal load requirements
Single customs clearance on import
Single FCC compliance certification
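
The 80% air-handling figure is Rackable Systems' number; as a plausibility check only, the fan affinity laws (fan power scaling roughly with the cube of airflow) show how cutting airflow can produce savings of that order. The fractions below are illustrative assumptions, not Rackable's methodology.

```python
# Plausibility sketch: fan power scales roughly with the cube of airflow
# (fan affinity laws). If better airflow management lets the container run
# at twice the delta-T, it needs about half the airflow, and fan power drops
# to roughly 0.5**3 = 12.5% of the original, i.e. an ~87% reduction,
# in the same ballpark as the quoted ~80% figure.
def relative_fan_power(airflow_fraction: float) -> float:
    """Fan power relative to baseline for a given fraction of baseline airflow."""
    return airflow_fraction ** 3

for frac in (1.0, 0.75, 0.5):
    print(f"airflow x{frac:.2f} -> fan power x{relative_fan_power(frac):.3f}")
```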

Unit of Data Center Growth
One at a time:
– 1 system
– Racking & networking: 14 hrs ($1,330)
Rack at a time:
– ~40 systems
– Install & networking: ¾ hour ($60)
– Considerably more efficient, and now the unit of growth in efficient centers
Container at a time:
– ~1,000 systems
– No packaging to remove
– No floor space required
– Requires only power, network, & cooling
– Containers are weatherproof & transportable
Data center construction takes 24+ months
– New builds & DC expansion require regulatory approval
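
A quick back-of-the-envelope comparison of the install figures above; it assumes the ¾-hour / $60 figure covers an entire ~40-system rack, which is an interpretation rather than something the slide states explicitly.

```python
# Back-of-the-envelope install cost per server, using the slide's figures.
# Assumption: the $60 / 0.75 hr install covers the whole ~40-system rack.
one_at_a_time_cost = 1330.0          # $ to rack & network a single system
rack_cost, rack_systems = 60.0, 40   # $ per rack-at-a-time install, systems per rack

per_server_single = one_at_a_time_cost
per_server_rack = rack_cost / rack_systems
print(f"one at a time : ${per_server_single:8.2f} per server")
print(f"rack at a time: ${per_server_rack:8.2f} per server")
print(f"ratio         : {per_server_single / per_server_rack:.0f}x cheaper per server")
```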

Manufacturing & H/W Admin Savings
Factory racking, stacking & packing is much more efficient
– Robotics and/or inexpensive labor
Avoid layers of packaging
– Systems -> packing box -> pallet -> container
– Saves materials cost, wastage, and labor at the customer site
Data center power & cooling require expensive consulting contracts
– Data centers are still custom crafted rather than prefab units
– Move the skill set to the module manufacturer, who designs power & cooling once
– Installation is designed to meet the module's power, network, & cooling specs
More space efficient
– Power densities in excess of 1,250 W/sq ft
– Rooftop or parking-lot installation acceptable (with security)
– Stack 3 to 5 high
Service-free
– H/W admin contracts can exceed 25% of system cost
– Sufficient redundancy that the module just degrades over time
– At end of service, return for remanufacture & recycling
– 20% to 50% of system outages are caused by admin error (A. Brown & D. Patterson)
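
One way to read the service-free model: if failed servers are never replaced, the module simply ships with enough headroom for its service life. The sketch below uses an assumed annual failure rate purely for illustration; no failure figures appear on the slide.

```python
# Minimal sketch of a service-free module that is never repaired in the field.
# The 3% annual failure rate is an assumed, illustrative figure.
servers = 1000
annual_failure_rate = 0.03

for year in range(4):
    surviving = servers * (1 - annual_failure_rate) ** year
    print(f"year {year}: ~{surviving:.0f} servers still in service")
# Over a 3-year service life roughly 9% of servers fail, so ~10% extra
# capacity at ship time keeps the module above target with zero field repairs.
```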

DC Location Flexibility & Portability
Dynamic data center
– Inexpensive intermodal transit anywhere in the world
– Move the data center to cheap power & networking
– Install capacity where needed
– Conventional data centers cost upwards of $150M & take 24+ months to design & build
– Political/social issues: USA PATRIOT Act concerns and other national interests can require local data centers
Build out a massively distributed data center fabric
– Install satellite data centers near consumers

Systems & Power Density
Estimating data center power density is difficult (15+ year horizon)
– Power is 40% of DC costs; power + mechanical is 55% of cost
– Shell is roughly 15% of DC cost
– Cheaper to waste floor than power
– Typically 100 to 200 W/sq ft; rarely as high as 350 to 600 W/sq ft
– A modular DC eliminates the shell/power trade-off: add modules until the power is absorbed
480VAC to the container
– High-efficiency DC distribution within
– High voltage to the rack can save >5% over the 208VAC approach
Over 20% of the entire DC cost is in power redundancy
– Batteries able to supply up to 12 minutes at some facilities
– N+2 generation
– Instead, build more smaller, cheaper data centers and eliminate redundant power & the bulk of shell costs
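
Why "cheaper to waste floor than power": using the slide's cost shares (power + mechanical ~55% of facility cost, shell ~15%), guessing the density wrong strands either power or floor, and stranded power is several times more expensive. The total cost and the 2x density miss below are illustrative assumptions.

```python
# Rough sketch using the slide's cost shares: shell ~15%, power + mechanical
# ~55% of data center cost. The $200M total and the 2x density miss are
# illustrative assumptions, not from the deck.
total_cost = 200e6
shell_cost = 0.15 * total_cost       # floor space
power_mech_cost = 0.55 * total_cost  # power + mechanical plant

# Miss the density guess by 2x in either direction:
#  - servers half as dense as planned: the floor fills before the power does,
#    stranding half the power/mechanical investment
#  - servers twice as dense as planned: the power runs out first,
#    leaving half the floor empty
stranded_power = 0.5 * power_mech_cost
stranded_floor = 0.5 * shell_cost
print(f"stranded power/mechanical: ${stranded_power / 1e6:.0f}M")
print(f"stranded floor           : ${stranded_floor / 1e6:.0f}M")
# Stranding power costs ~3.7x more than stranding floor, so size the shell
# generously; a modular DC avoids the guess by adding containers until the
# power is fully absorbed.
```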

Where Do You Want to Compute Today?
Slides posted to: