
Dynamic Power Management Using Online Learning
Gaurav Dhiman, Tajana Simunic Rosing (CSE-UCSD)

Motivation
- Existing DPM policies do not adapt optimally to changing workloads.
- Timeout and predictive policies are heuristic (no guarantees).
- Stochastic policies are optimal only for stationary workloads.
- Different policies outperform each other for different devices and workloads.

Objectives
- Take a set of DPM policies, each optimized for a different workload.
- Perform dynamic selection at run time to pick the policy best suited to the current workload.

Challenges
- How to identify the best suited policy?
- How to identify changing workloads?

Solution: Use Online Learning
- Perform dynamic selection and evaluation of policies at run time.
- A control algorithm is needed to perform this activity; we use an online learning algorithm [1] as this controller.

Why Online Learning?
- It guarantees performance close to that of the best available policy in the set.

System Model (block diagram)
- A set of experts Expert1, Expert2, Expert3, ..., ExpertN (the candidate DPM policies), a controller, the working set, and the device.
- The controller performs expert selection: dormant experts stay idle while the operational expert manages the device's power.

Dynamic Selection and Evaluation (diagram)
- Input to the controller: the weight vector for the experts.
- The controller computes a loss (based on power savings and performance delay) and outputs an updated weight vector for the experts.

How it works?
Controller parameters: an initial weight vector w^1 = <w^1_1, ..., w^1_N> with the weights summing to 1.
Do for t = 1, 2, 3, ...
1. Choose the expert with the highest probability factor in r^t.
2. Idle period starts -> the operational expert performs DPM.
3. Idle period ends -> evaluate the performance of all experts.
4. Set the new weight vector w^{t+1} from the experts' losses (a code sketch of this loop appears after the transcript).

Selection
- w^t consists of weight factors corresponding to each expert.
- r^t is obtained by normalizing w^t (referred to as the probability vector).
- The expert with the highest probability factor is selected as the operational expert.

Evaluation
- Considers both the energy savings and the performance delay incurred by the experts.
- Loss is evaluated with respect to an ideal offline policy: zero delay and maximum energy savings.
- l^t consists of loss factors corresponding to each expert.
- l^t_i = α·l^t_ie + (1 - α)·l^t_ip, where l^t_ie is the energy loss, l^t_ip is the performance (delay) loss of expert i in idle period t, and α sets the energy/performance trade-off.

Performance Bound
- The average loss incurred by the scheme in a given idle period t is the probability-weighted expert loss r^t · l^t.
- The controller attempts to minimize the net loss L_G - min_i L_i, where L_G is the cumulative loss of the controller and L_i is the cumulative loss of expert i.
- It can be shown that the net loss of the controller is bounded, so the average net loss per period decreases as the number of idle periods grows (a worked form of this bound appears after the transcript).

Experiments and Results
- Performed experiments on 2 devices: a hard disk drive (HDD) and a WLAN card.
- Used workloads with varying characteristics.
- Power/Performance Results for HDD: HP-1 trace, comparison with fixed-timeout experts [figure].
- Power/Performance Results for WLAN: WWW trace, comparison with different experts [figure].
- Working Set Characteristics [figure].

Frequency of selection of different fixed-timeout experts:

Trace      | Value of α | Timeout Tbe-30s | Timeout 30s-90s | Timeout 90s-180s
HP-1 trace | low        | 4.8%            | 69%             | 26.2%
HP-1 trace | medium     | 32.3%           | 65.4%           | 2.3%
HP-1 trace | high       | 100%            | 0%              | 0%
HP-2 trace | low        | 5.4%            | 48.8%           | 45.8%
HP-2 trace | medium     | 20.7%           | 43.9%           | 35.4%
HP-2 trace | high       | 100%            | 0%              | 0%

Experts compared (HDD configuration):

Expert                 | Configuration
Fixed Timeout          | Timeout = Tbe (the break-even time)
Adaptive Timeout       | Initial timeout = Tbe; adjustment = +0.1·Tbe / -0.2·Tbe
Exponential Predictive | I_{n+1} = a·i_n + (1 - a)·I_n, with a = 0.5
TISMDP                 | Optimized for a delay constraint of 7.5%

[1] Based on Freund and Schapire's online allocation algorithm ("A decision-theoretic generalization of on-line learning and an application to boosting").
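To make the controller loop above concrete, here is a minimal Python sketch of a Hedge-style controller, assuming the multiplicative weight update w_i <- w_i * beta^(l_i) from Freund and Schapire's online allocation algorithm [1]. The expert interface (manage, energy_loss, perf_loss) and the parameter names alpha and beta are illustrative assumptions, not the poster's implementation.

```python
# Minimal sketch of the online-learning DPM controller.
# Assumption: Hedge-style multiplicative update from Freund and Schapire's
# online allocation algorithm; the expert interface and loss computation
# below are illustrative, not the authors' code.

class Controller:
    def __init__(self, experts, alpha=0.5, beta=0.5):
        self.experts = experts            # the set of DPM policies ("experts")
        self.alpha = alpha                # trade-off: energy loss vs. performance loss
        self.beta = beta                  # learning-rate parameter, 0 < beta < 1
        n = len(experts)
        self.w = [1.0 / n] * n            # initial weight vector, sums to 1

    def probabilities(self):
        # r^t: weight vector normalized into a probability vector
        total = sum(self.w)
        return [wi / total for wi in self.w]

    def select(self):
        # operational expert = expert with the highest probability factor
        r = self.probabilities()
        return max(range(len(self.experts)), key=lambda i: r[i])

    def update(self, losses):
        # losses[i] in [0, 1], measured against an ideal offline policy
        # (zero delay, maximum energy savings); higher loss -> smaller weight
        self.w = [wi * (self.beta ** li) for wi, li in zip(self.w, losses)]


def run_idle_period(controller, idle_period):
    """One iteration t of the algorithm described in the transcript."""
    op = controller.select()                        # 1. pick highest-probability expert
    controller.experts[op].manage(idle_period)      # 2. operational expert performs DPM
    losses = [controller.alpha * e.energy_loss(idle_period)
              + (1 - controller.alpha) * e.perf_loss(idle_period)
              for e in controller.experts]          # 3. evaluate all experts
    controller.update(losses)                       # 4. multiplicative weight update
```

In this sketch only the operational expert actually controls the device during the idle period; all experts are evaluated afterwards on the observed idle time, which is what allows the weights of dormant experts to keep adapting.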

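The performance bound summarized in the transcript is the standard online allocation guarantee. As a hedged sketch (the poster's exact formula was lost in extraction; the constants below are those of the standard Freund and Schapire bound):

```latex
% Standard online-allocation (Hedge) guarantee, assumed to match the poster's
% dropped formula; \tilde{L} is any upper bound on \min_i L_i and N is the
% number of experts.
\[
  L_G - \min_i L_i \;\le\; \sqrt{2\,\tilde{L}\,\ln N} + \ln N,
  \qquad
  L_G = \sum_t r^t \cdot l^t, \quad L_i = \sum_t l_i^t .
\]
\[
  \text{Hence, over } T \text{ idle periods, the average net loss per period is }
  O\!\left(\sqrt{\ln N / T}\right).
\]
```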
