Green Cloud Computing 2
CS 595, Lecture 15
The Case for Energy-Proportional Computing
Barroso and Hölzle (Google)
Introduction
- Energy-proportional computing should be a primary design goal for servers
- Cooling and provisioning costs are proportional to the average energy servers consume
- Energy efficiency benefits all components
- Computer energy consumption can be lowered by:
  - Adopting high-efficiency power supplies
  - Using the power-saving features already present in equipment
Introduction (cont.)
- More efficient CPUs based on multiprocessing have helped
- But, generally, higher performance means increased energy usage
Servers
- Datacenter server efficiency and utilization:
  - Servers are rarely completely idle
  - They seldom operate at maximum
  - They typically run at 10-50% of maximum utilization
- 100% utilization is not acceptable for meeting throughput targets, etc.; it leaves no slack time
Servers (cont.)
- A completely idle server is a waste of capital
- Servers need to stay available to:
  - Perform background tasks
  - Move data around
  - Help with crash recovery
- Applications can be restructured to create idle intervals
  - Difficult to do and hard to maintain
- The devices with the highest energy savings also have the highest wake-up penalties
  - Examples: disk spin-up, CPU throttling based on workload
Energy Efficiency at Varying Utilization Levels
- Utilization: a measure of performance normalized to performance at peak load
- Even an energy-efficient server still consumes about half its peak power when doing almost no work
- Power efficiency: server utilization divided by power consumed
- Peak energy efficiency occurs at peak utilization and drops as utilization decreases
- At 20-30% utilization, efficiency drops to less than half of its value at peak performance (see the sketch below)
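A minimal sketch of this efficiency curve, assuming a simple linear power model in which an idle server draws about half of its peak power (the 50% idle figure comes from the slide; the linear shape and the numbers below are illustrative assumptions):

```python
# Sketch: energy efficiency vs. utilization for a server whose power draw is
# modeled as P(u) = P_idle + (P_peak - P_idle) * u.  Illustrative numbers only.

P_PEAK = 100.0   # watts at full load (arbitrary scale)
P_IDLE = 50.0    # watts when nearly idle: ~half of peak, as the slide notes

def power(u):
    """Power draw (watts) at utilization u in [0, 1], linear model."""
    return P_IDLE + (P_PEAK - P_IDLE) * u

def efficiency(u):
    """Efficiency = utilization / power, per the slide's definition."""
    return u / power(u)

peak_eff = efficiency(1.0)
for u in (0.1, 0.2, 0.3, 0.5, 1.0):
    rel = efficiency(u) / peak_eff
    print(f"utilization {u:.0%}: {rel:.0%} of peak efficiency")
# At 20-30% utilization this prints roughly 33-46% of peak efficiency,
# i.e. less than half, matching the slide's claim.
```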
Toward Energy-Proportional Machines
- There is a mismatch between where servers are most energy-efficient (high utilization) and how they actually behave (mostly mid-to-low utilization)
- Hardware designers need to address this
- Design machines that consume energy in proportion to the amount of work performed:
  - No power when idle (easy)
  - Nearly no power when doing very little work (harder)
  - Gradually more power as activity increases (even harder)
CPU Power
- The share of total server power consumed by the CPU has shrunk since 2005
  - The CPU no longer dominates power at peak usage, and the trend will continue
  - Its share is even smaller when idle
- Processors are the component closest to energy-proportional
  - They consume less than 1/3 of their peak power at low activity
- The dynamic power range is smaller for other components (a rough aggregation is sketched below)
  - Less than 50% for DRAM, 25% for disk drives, and 15% for network switches
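A rough aggregation showing why the server as a whole is still far from energy-proportional. The per-component dynamic ranges follow the slide; the peak-power shares are illustrative assumptions, not figures from the slides:

```python
# Sketch: estimate server idle power from per-component dynamic ranges.
# Shares of peak power are assumed; dynamic ranges follow the slide
# (CPU ~2/3, DRAM <50%, disk ~25%, network ~15%, ~0% for the rest).

components = {
    # name: (assumed share of server peak power, fraction of power shed at idle)
    "cpu":     (0.33, 0.67),
    "dram":    (0.30, 0.50),
    "disk":    (0.10, 0.25),
    "network": (0.05, 0.15),
    "other":   (0.22, 0.00),   # fans, PSU losses, boards, etc.
}

idle_fraction = sum(share * (1.0 - shed) for share, shed in components.values())
print(f"estimated idle power: {idle_fraction:.0%} of peak")
# With these assumptions the server still idles near 50-60% of peak power,
# even though the CPU by itself is fairly energy-proportional.
```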
Dynamic Power Range
- Processors can run in lower voltage/frequency modes without much impact on performance
- No other components offer such active low-power modes
  - DRAM and disks offer only inactive modes
  - Inactive-to-active transitions incur a penalty, a problem when idle periods last only sub-milliseconds
- Servers with a 90% dynamic power range could cut data-center energy use by half (see the sketch below)
  - And lower peak power by 30%
- Energy-proportional hardware reduces the need for power-management software
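A back-of-the-envelope check on the 90% dynamic-range claim, reusing the same linear power model as before; the utilization mix below is an assumed stand-in for the 10-50% operating region described earlier, not data from the paper:

```python
# Sketch: average power for a typical server (idle at ~50% of peak) versus a
# near-proportional server with a 90% dynamic range (idle at 10% of peak).

P_PEAK = 100.0  # watts (arbitrary scale)

def avg_power(idle_fraction_of_peak, utilizations):
    """Average power over utilization samples, linear power model."""
    p_idle = idle_fraction_of_peak * P_PEAK
    return sum(p_idle + (P_PEAK - p_idle) * u for u in utilizations) / len(utilizations)

# Assumed workload samples in the low-utilization region servers operate in.
workload = [0.1, 0.2, 0.2, 0.3]

today = avg_power(0.50, workload)        # ~50% dynamic range (typical today)
proportional = avg_power(0.10, workload) # 90% dynamic range

print(f"typical server:    {today:.0f} W average")
print(f"90% dynamic range: {proportional:.0f} W average")
print(f"energy saved:      {1 - proportional / today:.0%}")
# With these assumptions the savings come out around half, consistent with the
# slide's claim that a 90% dynamic range could cut data-center energy use by ~50%.
```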
Disks: Inactive vs. Active Modes
- The penalty for transitioning from the inactive to the active state makes inactive modes less useful
  - The disk spin-up penalty is roughly 1000x higher than a regular access latency
  - Spinning down only pays off if the disk stays idle for several minutes, which rarely occurs (a break-even sketch follows)
- A smaller transition penalty is more beneficial, even if the low-power mode's energy level is higher
- Active energy-saving schemes are useful even with a higher transition penalty, because the device stays in the low-energy mode for longer periods
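A minimal break-even sketch for disk spin-down. Every number below is an illustrative assumption about a typical server drive, not a figure from the paper; the slide only gives the ~1000x latency penalty:

```python
# Sketch: how long must a disk stay idle before spinning it down saves energy?

P_SPINNING_IDLE = 8.0    # watts while spinning but idle (assumed)
P_STANDBY = 1.0          # watts while spun down (assumed)
SPINUP_TIME = 10.0       # seconds to spin back up (assumed)
SPINUP_POWER = 24.0      # watts drawn during spin-up (assumed)

# Extra energy spent on one spin-down/spin-up cycle, beyond just idling.
transition_cost_j = SPINUP_TIME * (SPINUP_POWER - P_SPINNING_IDLE)

# Spinning down pays off only when the standby savings exceed that cost:
# (P_idle - P_standby) * t_idle > transition_cost.
break_even_s = transition_cost_j / (P_SPINNING_IDLE - P_STANDBY)
print(f"break-even idle time: {break_even_s:.0f} s")
# With these assumed numbers the disk must stay idle well over 20 seconds just
# to break even on energy; once latency penalties and mechanical wear are
# factored in, practical policies wait minutes, and the slide notes that such
# long idle periods rarely occur on busy servers.
```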
Conclusions
- CPUs already exhibit roughly energy-proportional profiles; other components much less so
- Significant improvements are needed in the memory and disk subsystems
  - These subsystems are responsible for a larger fraction of energy usage
- Energy-efficiency benchmark developers need to report measurements at non-peak activity levels to give a complete picture
Green Introspection by K. Cameron
History of Green Energy
- In the 1970s:
  - Energy crisis: high gas prices, fuel shortages, pollution
  - Education and action: environmental activism, energy awareness and conservation, technological innovation
Gifts from the 70s
- The energy crisis subsided
- In the meantime, advances in computing were responsible for:
  - Innovation in energy-efficient buildings and cars
  - Identifying the causes and effects of global climate change
  - Grassroots activism, distributing information about energy consumption, carbon emissions, etc.
What Happened Next
- A call to action within the IT community (what about the 80s??)
- In the 1990s:
  - General-purpose microprocessors were built for performance
  - Competing processors pushed ever-increasing clock rates and transistor densities
  - The result: fast processing power, but exponentially increasing power consumption
  - Processors hit a power wall at about 130 watts
  - Power became a design constraint
Better, but Also Worse?
- To reduce power consumption, the industry turned to multicore architectures: higher performance within lower power budgets
- But users expect performance to double every 2 years
  - Developers must harness the parallelism of multicore architectures
- Power problems are now ubiquitous; energy-aware design is needed at all levels
More Problems
- Memory architectures consume significant amounts of power
- Energy-aware design is needed at the systems level:
  - Disks, boards, fans, switches, peripherals
- Goal: maintain the quality of computing devices while decreasing their environmental footprint
Trade-offs
- How often should aging systems be replaced?
  - 2% of solid waste comes from consumer electronic components
  - E-waste is the fastest-growing component of the waste stream
  - In the US, 130k computers are thrown away daily and 100 million cell phones annually
- Recycle e-waste? (good luck)
- Use computers as long as possible?