1
XI HE
Computing and Information Science, Rochester Institute of Technology, Rochester, NY, USA
xi.he@mail.rit.edu
Service Oriented Cyberinfrastructure Lab, Rochester Institute of Technology
2
Outline: Background, Problem Definition, Related Work, Data Collection, System Model, Analysis Results, Conclusion
3
US data centers consumed 61 billion kilowatt-hours of power in 2006, 1.5 percent of all US electricity use, worth $4.5 billion. Energy usage doubled between 2000 and 2006 and was projected to double again by 2011 [1].
4
Hardware Level: Dynamic Voltage Scaling, Dynamic Frequency Scaling, Fan Speed Scaling. Software Level: Platform Virtualization, Application Power Management. Middleware Level: Job Scheduling, Virtual Machine Scheduling. Data Center Level: Cooling System, Water Management. The basic idea is to make the servers use as little electricity as possible.
5
Platform Virtualization (Software Level) makes server consolidation possible.
6
My research focus: Job Scheduling and Virtual Machine Scheduling at the Middleware Level.
7
Cooling System and Water Management (Data Center Level): save cooling power and recycle water.
8
We define the problem of thermal-aware scheduling as follows: given a set of jobs, find an optimal schedule that assigns each job to nodes so as to minimize the temperature increase in the data center.
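As a minimal illustration of this problem statement, the sketch below greedily places each job on the coolest node with enough free processors. The node names, the per-CPU temperature-rise constant, and the job format are illustrative assumptions, not part of the CCR data or the authors' scheduler.

```python
def schedule(jobs, nodes, temp_rise_per_cpu=0.5):
    """Assign each job to the coolest node that still has capacity.

    jobs  -- list of (job_id, cpus_needed) tuples
    nodes -- list of dicts with 'name', 'temp', and 'free_cpus'
    """
    plan = {}
    for job_id, cpus_needed in jobs:
        # Only nodes with enough idle processors are candidates.
        candidates = [n for n in nodes if n["free_cpus"] >= cpus_needed]
        coolest = min(candidates, key=lambda n: n["temp"])
        coolest["free_cpus"] -= cpus_needed
        # Crude assumed thermal model: temperature rises linearly with load.
        coolest["temp"] += temp_rise_per_cpu * cpus_needed
        plan[job_id] = coolest["name"]
    return plan

nodes = [{"name": "n1", "temp": 45.0, "free_cpus": 2},
         {"name": "n2", "temp": 40.0, "free_cpus": 2}]
plan = schedule([("j1", 1), ("j2", 1)], nodes)
```

Both jobs land on n2 here because even after the first placement it remains cooler than n1; a real scheduler would also account for queue wait and job runtimes.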
9
The data is collected and provided by the Center for Computational Research (CCR) [3] at the State University of New York at Buffalo from the U2 cluster. The U2 cluster is composed of 1056 Dell PowerEdge SC1425 nodes, each of which has 2 processors. The data from CCR covers a 30-day period, from 2009-02-20 to 2009-03-22.
10
We model the data center as Γ = {C, J}, where C denotes the U2 cluster and J represents the jobs submitted to the data center to be scheduled over a period of time.
11
The cluster C is composed of 1056 nodes, where n_i stands for the i-th node in the cluster. Each node n_i contains two processors, p_1 and p_2, and T indicates the temperature of the node.
12
Here j_i denotes the i-th job, which runs on a set of nodes. Job j_i was submitted to the queue at time t_submit and queued at time t_q; it started execution at t_s and ended at t_e. It required cpus processors and consumed t_cpu of CPU time.
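The model Γ = {C, J} and the job attributes above can be transcribed directly into Python. The class and field names are hypothetical conveniences; only the symbols (T, p_1, p_2, t_submit, t_q, t_s, t_e, cpus, t_cpu) come from the slides.

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    """One of the 1056 U2 nodes: temperature T and two processors p1, p2."""
    temp: float           # T, the node temperature
    p1_busy: bool = False
    p2_busy: bool = False

@dataclass
class Job:
    """A job j_i with its timestamps and resource usage."""
    t_submit: float  # time the job was submitted to the queue
    t_q: float       # time the job was queued
    t_s: float       # time execution started
    t_e: float       # time execution ended
    cpus: int        # processors required
    t_cpu: float     # CPU time consumed

@dataclass
class DataCenter:
    """Γ = {C, J}: the cluster C and the submitted jobs J."""
    cluster: list = field(default_factory=list)  # C, a list of Node
    jobs: list = field(default_factory=list)     # J, a list of Job

    def response_time(self, j: Job) -> float:
        # Response time as measured from submission to completion.
        return j.t_e - j.t_submit
```

This structure is enough to reproduce the distributions plotted on the following slides (execution time t_e - t_s, size cpus, response time t_e - t_submit).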
13
Job distribution in terms of its execution time.
14
Job distribution in terms of its size.
15
Job distribution in terms of its response time.
16
Job arrival rate distribution.
17
Temperature distribution.
18
Relation between workload and temperature.
19
The research is ongoing. The next step is to predict future temperatures, and then schedule the jobs according to those predicted temperatures.
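One minimal way to sketch that next step: forecast each node's near-future temperature from its recent readings and hand the forecast to the scheduler. The linear extrapolation below is an assumption for illustration, not the prediction method the author has chosen.

```python
def predict_next(temps):
    """Extrapolate the next temperature reading from the last two samples.

    temps -- list of past temperature readings for one node, oldest first
    """
    if len(temps) < 2:
        # Not enough history to estimate a trend; repeat the last reading.
        return temps[-1]
    # Assume the most recent rate of change continues for one more step.
    return temps[-1] + (temps[-1] - temps[-2])

history = [44.0, 44.5, 45.2]
forecast = predict_next(history)  # last value plus the last observed delta
```

A thermal-aware scheduler would then rank nodes by `forecast` rather than by the current reading, steering jobs away from nodes that are trending hot.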
20
Thermal-aware Task Placement in Data Centers [2]
21
Thermal-aware Task Placement in Data Centers [2]
22
Different task assignments lead to different power consumption distributions. Different power consumption distributions lead to different temperature distributions. Different temperature distributions lead to different total energy costs: for example, cooling costs differ because you must ensure the highest temperature stays below the redline. In short: server task distribution → power consumption distribution → temperature distribution → energy cost.
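The chain above can be made concrete with a toy model in which every constant (ambient temperature, degrees per watt) is an assumption chosen for illustration: the hottest node determines how hard the cooling system must work, so two assignments with the same total power can still have different cooling costs.

```python
def peak_temp(per_node_watts, ambient=25.0, degrees_per_watt=0.1):
    """Temperature of the hottest node, assuming temperature rises
    linearly with that node's power draw (illustrative constants)."""
    return max(ambient + degrees_per_watt * p for p in per_node_watts)

# Two assignments of the same total 200 W across two servers.
balanced = [100, 100]  # work spread evenly
skewed   = [180, 20]   # same total power, concentrated on one server

hot_balanced = peak_temp(balanced)
hot_skewed = peak_temp(skewed)
```

The skewed assignment creates a hotter hot spot, so keeping its peak below the redline requires more cooling even though the compute power drawn is identical.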
23
Questions? Comments? Suggestions?
25
[1] http://www.energystar.gov/ia/partners/prod_development/downloads/EPA_Datacenter_Report_Congress_Final1.pdf
[2] Q. Tang, S. K. S. Gupta, and G. Varsamopoulos, IEEE Transactions on Parallel and Distributed Systems, vol. 19, no. 11, Nov. 2008, pp. 1458-1472.
[3] "The Center for Computational Research." [Online]. Available: http://www.ccr.buffalo.edu/display/WEB/Home
26
Diagram: a data center with Node 1 (Temp: 113 °F) and Node 2 (Temp: 115 °F), and two tasks to place: Task 1 (2G × 1 hour) and Task 2 (1G × 3 hours).
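A worked version of this diagram, under assumptions not stated on the slide: "2G" is read as a 2 GHz sustained load, and each GHz of load is assumed to raise its host node's temperature by a constant K degrees while the task runs. Under that toy model, placing the heavier instantaneous load (Task 1) on the cooler node lowers the peak temperature.

```python
K = 2.0  # assumed °F temperature rise per GHz of sustained load

def peak(node_temps, placement, loads):
    """Peak temperature across nodes, given which task runs on which node.

    node_temps -- starting temperature of each node, °F
    placement  -- task name -> node index
    loads      -- task name -> sustained load in GHz (assumed reading of '2G'/'1G')
    """
    temps = list(node_temps)
    for task, node in placement.items():
        temps[node] += K * loads[task]
    return max(temps)

nodes = [113.0, 115.0]                 # Node 1 and Node 2 from the diagram
loads = {"task1": 2.0, "task2": 1.0}   # Task 1 (2G), Task 2 (1G)

a = peak(nodes, {"task1": 1, "task2": 0}, loads)  # Task 1 on the hotter node
b = peak(nodes, {"task1": 0, "task2": 1}, loads)  # Task 1 on the cooler node
```

Placement b yields the lower peak, which is exactly the intuition behind thermal-aware task placement: route the most heat-producing work to the coolest spot in the room.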