Analysis of Energy Efficiency in Clouds
H. AbdelSalam, K. Maly (maly@cs.odu.edu), R. Mukkamala, M. Zubair
Department of Computer Science, Old Dominion University
D. Kaminsky, IBM, Raleigh, North Carolina
SERVICE COMPUTATION 2009, November 15 - 19, 2009
Outline
Cloud Computing
Change Management
Power Management
– Pro-active approach
– Minimize total power consumption
– Constraints: SLAs, prior change management commitments
– Compute possible time slots for change management tasks
Cloud Computing
A cloud can be defined as a pool of computer resources that can host a variety of different workloads, including batch-style back-end jobs and interactive user applications.
A cloud computing platform dynamically provisions, configures, reconfigures, and deprovisions servers as needed.
Servers in the cloud can be physical machines or virtual machines.
Customers buy computing services from the cloud manager through Service Level Agreements (SLAs).
Change Management
Managing large IT environments such as computing clouds is expensive and labor intensive.
Servers go through several software and hardware upgrades.
IT organizations handle change management through human group interactions and coordination.
Pro-active Approach
We earlier proposed and implemented an infrastructure-aware autonomic manager for change management: a scheduler that computes possible open time slots in which changes can be applied without violating any SLA reservations.
Here we propose a pro-active, energy-aware technique for change management in a cloud computing environment.
Cloud Computing Architecture (figure)
Job distribution
Applications in a cloud computing environment fall into two classes:
– compute-intensive, non-interactive applications
– user-interactive applications: Web applications and Web services are typical examples.
Non-interactive applications
Dedicate one or more servers to each of these applications.
The number of dedicated servers depends on the underlying SLA and on the availability of servers in the cloud.
These servers should run at their top speed (frequency) so the application can finish as soon as possible.
Job distribution
Assume that, based on its SLA, Job X requires an s-second response time for u users.
From the historical data for Job X, we estimate the average processing required for a user query to be l instructions.
Assume that Job X is to be run on a server that runs at frequency f and on average requires CPI clock ticks (CPU cycles) to execute an instruction.
The server can then execute q = (s * f) / (l * CPI) user queries within s seconds.
If q < u, the remaining (u - q) user requests should be routed to another server.
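To make the capacity formula concrete, here is a minimal Python sketch (an illustration added here, not part of the original slides); the function name and the example numbers are assumptions:

```python
def query_capacity(s, f_hz, l_instructions, cpi):
    """Number of user queries a server can execute within s seconds.

    s: response-time budget in seconds (from the SLA)
    f_hz: server clock frequency in cycles per second
    l_instructions: average instructions per user query
    cpi: average clock cycles per instruction
    """
    return (s * f_hz) / (l_instructions * cpi)


# Hypothetical numbers, for illustration only.
s = 1.0                 # 1-second response time
f_hz = 2.0e9            # 2 GHz server
l_instructions = 50e6   # 50 million instructions per query
cpi = 1.5               # average cycles per instruction

q = query_capacity(s, f_hz, l_instructions, cpi)
u = 40                                 # concurrent users required by the SLA
overflow = max(0, u - int(q))          # requests to route to another server
print(f"capacity q = {q:.1f} queries, overflow = {overflow}")
```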
System model
Estimate the computing power (in MIPS) needed to achieve the required response time.
The client provides a histogram that shows the frequency of each expected query.
Replace the minimum average response time constraint in the SLA by the minimum number of instructions that the application must be able to execute every second.
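One possible way to carry out this conversion is sketched below in Python; the histogram, the per-query instruction counts, and the helper name are hypothetical, not values from the paper:

```python
def required_mips(histogram, instructions_per_query, response_time_s):
    """Estimate the instruction rate (MIPS) needed so that the average
    query meets the SLA response time.

    histogram: {query_type: relative frequency} supplied by the client
    instructions_per_query: {query_type: average instructions} from historical data
    response_time_s: required average response time from the SLA
    """
    total = sum(histogram.values())
    # Weighted average instruction count per query.
    avg_instructions = sum(
        (count / total) * instructions_per_query[q] for q, count in histogram.items()
    )
    # Instructions that must execute every second, expressed in MIPS.
    return avg_instructions / response_time_s / 1e6


# Hypothetical client-supplied data.
histogram = {"search": 700, "update": 200, "report": 100}
instructions_per_query = {"search": 20e6, "update": 60e6, "report": 400e6}

print(f"required rate: {required_mips(histogram, instructions_per_query, 0.5):.1f} MIPS")
```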
Distribution of jobs onto servers (figure)
System model
Conversion of response time to MIPS:
– If a user query has an average response time of t1 seconds when it runs alone on a server configuration rated at x MIPS (million instructions per second; this rating can be benchmarked for each server configuration), then
– to achieve an average response time of t2 seconds, the query must be run so that it can execute a minimum of (t1 * x) / t2 million instructions per second.
Power management of a server:
– minimum frequency Fmin
– maximum frequency Fmax
– discrete frequency values in between
– power-frequency relation
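The conversion and the discrete-frequency constraint could be combined as in the following Python sketch; the helper names, the benchmark figures, and the frequency list are assumptions, not values from the paper:

```python
def mips_needed(t1, x_mips, t2):
    """MIPS required to bring the average response time from t1 down to t2,
    given that the query was benchmarked alone on an x_mips server."""
    return t1 * x_mips / t2


def pick_frequency(required, frequencies_mips):
    """Choose the lowest available discrete frequency setting (expressed here
    as the MIPS it delivers) that still satisfies the requirement; return
    None if even the maximum setting is not enough."""
    for f in sorted(frequencies_mips):
        if f >= required:
            return f
    return None


# Hypothetical numbers: a query takes 0.9 s alone on a 4000-MIPS server,
# and the SLA asks for a 0.3 s average response time.
need = mips_needed(0.9, 4000, 0.3)                        # 12000 MIPS
print(pick_frequency(need, [4000, 8000, 12000, 16000]))   # -> 12000
```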
Mathematical analysis
Given k servers running at frequencies f1, f2, ..., fk respectively, such that together they cover the total compute load, the total energy consumption is obtained by summing each server's power consumption according to its power-frequency relation.
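The load and energy expressions on this slide were images and did not survive extraction. The following is a reconstruction under a commonly used cubic power-frequency model; the constants c0 (idle power) and c1 are assumptions, not necessarily the authors' notation:

```latex
% Assumed per-server power model: P(f) = c_0 + c_1 f^3 (idle power plus a cubic dynamic term).
\begin{align}
  L &= \sum_{i=1}^{k} f_i
      && \text{total compute load covered by the $k$ active servers} \\
  E &= \sum_{i=1}^{k} \left( c_0 + c_1 f_i^{3} \right)
      && \text{total power consumption (energy per unit time)}
\end{align}
```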
Mathematical analysis
The number of servers k that should run in order to minimize total power consumption (assuming a continuous frequency spectrum) is obtained by minimizing the above expression with respect to k.
Each server should then run at the corresponding optimal frequency.
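The closed-form results were likewise images in the original slide. Under the same assumed cubic model, with the load L split equally across the k active servers, the optimization sketches out as follows:

```latex
% With the load split equally, each active server runs at f = L/k, so
%   E(k) = k*c_0 + c_1*L^3 / k^2.
% Setting dE/dk = 0 gives the optimal number of servers and per-server frequency:
\[
  \frac{dE}{dk} = c_0 - \frac{2 c_1 L^{3}}{k^{3}} = 0
  \quad\Longrightarrow\quad
  k^{*} = L \left(\frac{2 c_1}{c_0}\right)^{1/3},
  \qquad
  f^{*} = \frac{L}{k^{*}} = \left(\frac{c_0}{2 c_1}\right)^{1/3}
\]
```

Under this assumed model the optimal per-server frequency f* does not depend on the load; only the number of active servers scales with L, which is consistent with computing a per-segment server count k_t on the later slides.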
Sample cloud load
Actual and approximated load due to several SLAs (figure).
Servers available for change management
In each time segment:
– the number of idle servers in the cloud equals the difference between the total number of cloud servers and k_t, the number of servers needed for that segment's load;
– every idle server is a candidate for change management.
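A small Python sketch of this bookkeeping, reusing the assumed cubic-model result from above; the total server count, the model constants, and the per-segment loads are hypothetical:

```python
import math

# Assumed cubic power model P(f) = c0 + c1 * f**3; the constants and the
# per-segment load values below are illustrative only.
C0, C1 = 150.0, 2.0e-10        # idle power (W) and dynamic coefficient
TOTAL_SERVERS = 60


def servers_needed(load_mips):
    """k_t under the assumed model: load * (2*c1/c0)**(1/3), rounded up."""
    return math.ceil(load_mips * (2 * C1 / C0) ** (1.0 / 3.0))


def idle_servers(load_per_segment):
    """Servers available for change management in each time segment."""
    return [TOTAL_SERVERS - servers_needed(load) for load in load_per_segment]


# Approximated cloud load (MIPS) for a few time segments of the day.
load = [12000, 20000, 35000, 28000, 15000]
print(idle_servers(load))
```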
Servers available for changes as a function of time (figure)
Scenario comparison
Total energy consumption during one period (one day) using the pro-active approach is 37305 Watt-hours, for an average of 1554 Watts.
For comparison, the slide tabulates the total and average energy consumption when using 5% over-provisioning at various fixed frequencies.
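As a quick arithmetic check of the quoted average (added here, not part of the slide), spreading the daily total over a 24-hour period gives:

```latex
\[
  \frac{37305\ \text{Wh}}{24\ \text{h}} \approx 1554\ \text{W}
\]
```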
Conclusion
Pro-active management means computing when servers will be idle so that they can be scheduled for change maintenance.
Pro-active power management leads to considerable savings in total energy consumed; for the specific examples studied, savings range from 5% to 75%.
The approach can be modified to include compute-intensive jobs.
The approach can be modified to include hardware failure rates.