CoolRunnings
Dave Bradley, Rick Harper, Steve Hunter
4/28/2003

Overview
 Joint effort between Research and xSeries Development
 Predictive algorithm that measures and predicts workload and determines when to place servers in a low-power state
 Objective is to minimize energy consumption, unmet demand, and power cycles
  - Automatically adapts to short-term and seasonal workload variations
  - Automatically adapts algorithm "gains" to workload dynamics
  - Energy savings of 20% or more can be achieved
 Demo developed to illustrate the benefits
 Findings to be published in the IBM Journal of Research and Development later this year

Problem
 Power consumption and dissipation constraints are jeopardizing the ability of the IT industry to support the business demands of present-day workloads
  - It can be difficult to get power into, and heat out of, large systems
  - Power shortages such as those recently experienced in California impose further, unpredictable constraints
  - Server complex size is growing dramatically in response to electronic-commerce and web-hosting computational needs
  - Power consumption is growing to well over 100 watts per processor

Observations
 Electronic-commerce and web-serving workloads have characteristics that make them amenable to predictive system-management techniques that can effectively reduce power consumption
  - They usually carry periodic or otherwise variable workloads, with the peak workload substantially higher than the minimum or average workload
  - The peak workload has been observed at ten times the minimum (and sometimes the average) workload
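A back-of-the-envelope sketch of why that dynamic range matters; the numbers below are made up for illustration, not measurements from the slides. A farm provisioned for a peak that is ten times the minimum leaves most of its servers idle at the trough, which is the headroom a predictive power manager can reclaim.

```python
import math

def servers_needed(load, per_server_capacity):
    """Servers required to carry `load`, assuming perfect load balancing."""
    return max(1, math.ceil(load / per_server_capacity))

# Illustrative numbers only: peak is ten times the minimum, as on the slide.
peak, minimum, capacity = 1000.0, 100.0, 100.0

at_peak = servers_needed(peak, capacity)       # farm must be sized for the peak
at_trough = servers_needed(minimum, capacity)  # one server suffices at the trough
idle_fraction = 1 - at_trough / at_peak        # fraction of the farm idle at minimum load
```

Under these assumed numbers, nine of ten servers are idle at minimum load, so powering them down (rather than leaving them spinning) is where the energy savings come from.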

Observations
 These workloads are highly parallel and easy to load balance
  - A typical web-serving system has a large number of web servers fronted by a load-balancing "IP sprayer", which presents a single IP address to the outside world and dispatches incoming requests among the many web servers in the complex to balance the load
  - The IP sprayer sends a given request to the server with the lowest utilization; in turn, the servers keep the IP sprayer updated with their utilization, response time, or another load indication
  - Workload is routed around failed servers; users with transactions or sessions on a failed server can click again, and their request will go to another server
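The dispatch policy described above can be sketched in a few lines. This is an illustrative stand-in, not code from any real IP sprayer; the class and method names are invented.

```python
class Sprayer:
    """Toy "IP sprayer": servers report utilization, requests go to the
    least-utilized healthy server, and failed servers are routed around."""

    def __init__(self, servers):
        self.util = {s: 0.0 for s in servers}  # latest utilization report per server
        self.failed = set()

    def report(self, server, utilization):
        """Servers keep the sprayer updated with their utilization."""
        self.util[server] = utilization

    def mark_failed(self, server):
        """Workload is routed around failed servers."""
        self.failed.add(server)

    def dispatch(self):
        """Pick the healthy server with the lowest reported utilization."""
        healthy = {s: u for s, u in self.util.items() if s not in self.failed}
        if not healthy:
            raise RuntimeError("no servers available")
        return min(healthy, key=healthy.get)
```

For example, with three servers reporting 0.8, 0.3, and 0.5 utilization, `dispatch()` picks the second; if that server is then marked failed, the next request goes to the third, mirroring the click-again failover the slide describes.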

IP Load Balancing Application
Traditional load-balancing model (diagram: clients reach Server 1 through Server N via the Internet, a Load Balancer, and a Network Switch, behind a single IP address):
  Step 1: Document request (IP-SVA)
  Step 2: Web server selection; appropriate IP address inserted
  Step 3: Packet forwarding
  Step 4: Document response (IP-SVA); response passes back through the Load Balancer
Characteristics
  - Efficient use of server resources
  - High availability/scalability
  - Bandwidth limited on the return path (i.e., response traffic) through the Load Balancer

The CoolRunnings Algorithm
 CoolRunnings exploits this environment to manage power based on measured and predicted workload, such that both unmet demand and power consumption are minimized. It does this by:
  - Measuring and characterizing the workload on all the servers in a defined group
  - Determining whether any servers need to be powered on or off in the near future, by assessing current capacity relative to predicted capacity needs
  - Manipulating the existing system and workload-management functions to remove load from servers about to be turned off
  - Physically turning designated servers on or off (or transitioning them into and out of standby or hibernation) using existing systems-management interfaces
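The measure-and-predict steps above can be sketched as a simple control loop. The exponential-smoothing forecast, headroom factor, and capacity numbers below are assumptions chosen for illustration; they are not the published CoolRunnings algorithm, whose actual predictor and "gains" adaptation are described in the paper.

```python
import math

def predict(history, alpha=0.5):
    """One-step-ahead workload forecast via exponential smoothing,
    an illustrative stand-in for the predictive algorithm."""
    forecast = history[0]
    for demand in history[1:]:
        forecast = alpha * demand + (1 - alpha) * forecast
    return forecast

def target_active_servers(history, per_server_capacity, total, headroom=1.2):
    """Servers to keep powered on: predicted demand plus a safety margin,
    clamped to the size of the defined group."""
    needed = math.ceil(predict(history) * headroom / per_server_capacity)
    return min(max(needed, 1), total)

# Each control period: measure demand, recompute target_active_servers(),
# drain load (via the IP sprayer) from servers slated to power off, then
# actuate power state through the systems-management interface.
```

Keeping a headroom margin above the forecast is what trades a little extra power for low unmet demand; shrinking it saves more energy at the cost of occasionally running short of capacity.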

Application of CoolRunnings
Power management in an IP-spraying schema (diagram slide)

Application of CoolRunnings
Power management with BladeCenter:
  - Up to 14 processor blades: modular, scalable density with performance in a 7U mechanical chassis
  - Integrated network infrastructure: switching with point-to-point blade connections
  - Affordable availability: redundant, hot-swappable blades and modules
  - Advanced systems management: integrated service processor

Application of CoolRunnings
Power management with virtual machines (diagram slide)

Demo

Summary
 CoolRunnings is a real-time power-management algorithm applicable to a complex of servers with a parallelizable and migratable workload
 The algorithm is designed to minimize power consumption, unmet demand, and server power cycles
 Experiments with the CoolRunnings algorithm on database and web-serving workloads show that energy savings of 20% or more can readily be achieved, with typically less than 1% of demand unmet or deferred and an average of one power cycle per server per day
 If the workload has a high dynamic range, or if the system is substantially overprovisioned, much higher energy savings are obtained