An Evaluation of Linear Models for Host Load Prediction
Peter A. Dinda and David R. O’Hallaron
Carnegie Mellon University
2 Motivating Questions
– What are the properties of host load?
– Is host load predictable?
– What predictive models are appropriate?
– Are host load predictions useful?
3 Overview of Answers
– Host load exhibits complex behavior: self-similarity, epochal behavior
– Host load is predictable on a 1 to 30 second timeframe
– Simple linear models are sufficient; recommend AR(16) or better
– Predictions lead to useful estimates of task execution times
– Statistically rigorous approach
4 Outline
Context: predicting task execution times
– Mean squared load prediction error
Offline trace-based evaluation
– Host load traces
– Linear models
– Randomized methodology
– Results of data-mining
Online prediction of task execution times
Related work
Conclusion
5 Prediction-based Best-effort Distributed Real-time Scheduling
[Diagram: a task with a nominal time and a deadline; each candidate host’s predicted exec time is compared against the deadline]
– Task notifies scheduler of its CPU requirements (nominal time) and its deadline
– Scheduler acquires predicted task execution times for all hosts
– Scheduler assigns task to a host where its deadline can be met
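To make the decision step concrete, here is a minimal sketch of the host-selection logic this slide describes. It is illustrative only: choose_host, hosts, and predict_exec_time are hypothetical names, and predict_exec_time stands in for whatever predictor supplies per-host execution time estimates.

def choose_host(hosts, nominal_time, deadline, predict_exec_time):
    # Gather (predicted exec time, host) pairs for every candidate host.
    candidates = [(predict_exec_time(h, nominal_time), h) for h in hosts]
    # Keep only hosts whose predicted execution time meets the deadline.
    feasible = [(t, h) for (t, h) in candidates if t <= deadline]
    if not feasible:
        return None  # best-effort: no host is predicted to meet the deadline
    # Pick the host with the earliest predicted finish among feasible ones.
    return min(feasible, key=lambda th: th[0])[1]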
6 Predicting Task Execution Times
[Diagram: Load Sensor → Load Predictor → Exec Time Model → Predicted Exec Time, for a task with a nominal time and a deadline]
– DEC Unix 5 second load average, sampled at 1 Hz
– 1 to 30 second predictions
7 Confidence Intervals
[Diagram: predicted exec times with confidence intervals against a deadline; a bad predictor leaves no obvious choice of host, while a good predictor reveals two good choices]
– Good predictors provide smaller confidence intervals
– Smaller confidence intervals simplify scheduling decisions
8 Load Prediction Focus
[Diagram: the pipeline of slide 6 with the Load Predictor stage highlighted]
– CI length is determined by the mean squared error of the predictor
9 Load Predictor Operation
[Diagram: a Modeler fits a Model of the requested type to measurements in the fit interval (one-time use); the Load Predictor then consumes the production measurement stream z_t, z_{t+1}, ..., z_{t+n-1} and, at each step t, emits predictions z'_{t,t+1}, z'_{t,t+2}, ..., z'_{t,t+w} for the next w steps; an Evaluator compares the prediction stream against measurements in the test interval to produce error metrics and error estimates]
10 Mean Squared Error
From the prediction stream, collect the k step ahead predictions for each k = 1, 2, ..., w. The k step ahead mean squared error is
  a_k = average over i of (z'_{t+i,t+i+k} - z_{t+i+k})^2
For reference, the variance of z is the average of (z̄ - z_{t+i})^2, i.e., the error of a predictor that always predicts the mean. A good load predictor achieves a_1, a_2, ..., a_w well below the variance of z.
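As a concrete illustration of the computation above, a minimal Python sketch, assuming a layout in which preds[i, k-1] holds z'_{i,i+k}, the prediction made at step i for step i+k (the array layout and names are assumptions, not from the talk):

import numpy as np

def k_step_mse(z, preds):
    # z     : 1-D array of load measurements
    # preds : preds[i, k-1] = prediction made at step i for step i+k
    n, w = preds.shape
    a = np.empty(w)
    for k in range(1, w + 1):
        m = min(n, len(z) - k)              # targets must lie inside the trace
        errs = preds[:m, k - 1] - z[k:k + m]
        a[k - 1] = np.mean(errs ** 2)       # k step ahead mean squared error
    return a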
11 CIs From Mean Squared Error
[Chart: 95% CI for the exec time available in the next second, at predicted load = 1.0]
12 Example of Improving the Confidence Interval
[Chart]
– Massive reduction in confidence interval length using prediction
– Do such benefits consistently occur?
13 Outline
Context: predicting task execution times
– Mean squared load prediction error
Offline trace-based evaluation
– Host load traces
– Linear models
– Randomized methodology
– Results of data-mining
Online prediction of task execution times
Related work
Conclusion
14 Host Load Traces
– DEC Unix 5 second exponential average
– Full bandwidth captured (1 Hz sample rate)
– Long durations
– Also looked at “deconvolved” traces
15 Salient Properties of Load Traces (+ encouraging for prediction, - discouraging for prediction)
+/- Extreme variation
+ Significant autocorrelation (suggests appropriateness of linear models)
+ Significant average mutual information
- Self-similarity / long-range dependence
+/- Epochal behavior
  + Stable spectrum during an epoch
  - Abrupt transitions between epochs
(Detailed study in LCR98, SciProg99)
16 Linear Models (2000 sample fits, largest models in study, 30 steps ahead)
17 AR(p) Models
  z_{t+1} = φ_1 z_t + φ_2 z_{t-1} + ... + φ_p z_{t-p+1} + a_{t+1}
(the next value is a weighted sum of the p previous values plus an error term; the weights φ_j are chosen to minimize the mean squared error over the fit interval)
– Fast to fit (4.2 ms, AR(32), 2000 points)
– Fast to use (<0.15 ms, AR(32), 30 steps ahead)
– Potentially less parsimonious than other models
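For illustration, a minimal least-squares AR(p) fit and multi-step predictor in Python. This is a sketch of the model form above, not the RPS toolkit’s implementation; the function names are made up.

import numpy as np

def fit_ar(z, p):
    # Row t holds [z_t, z_{t-1}, ..., z_{t-p+1}]; the target is z_{t+1}.
    X = np.column_stack([z[p - 1 - j : len(z) - 1 - j] for j in range(p)])
    y = z[p:]
    phi, *_ = np.linalg.lstsq(X, y, rcond=None)   # minimize fit-interval MSE
    return phi

def predict_ahead(z, phi, w):
    # Feed predictions back in to reach 1, 2, ..., w steps ahead.
    hist = list(z[-len(phi):][::-1])              # most recent value first
    out = []
    for _ in range(w):
        nxt = float(np.dot(phi, hist))
        out.append(nxt)
        hist = [nxt] + hist[:-1]
    return out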
18 Evaluation Methodology
Ran ~152,000 randomly chosen testcases on the traces
– Evaluate models independently of prediction/evaluation framework
– ~30 testcases per trace, model class, parameter set
Data-mine results
Offline and online systems implemented using the RPS Toolkit
19 Testcases
Models
– MEAN, LAST/BM(32)
– Randomly chosen model from: AR(1..32), MA(1..8), ARMA(1..8,1..8), ARIMA(1..8,1..2,1..8), ARFIMA(1..8,d,1..8)
20 Evaluating a Testcase
[Diagram: the same modeler/predictor/evaluator pipeline as slide 9, run once per testcase: fit the model on the fit interval, predict over the test interval, and reduce the prediction stream to error metrics and error estimates]
21 Error Metrics
Summary statistics for the 1, 2, ..., 30 step ahead prediction errors of all three models
– Mean squared error
– Min, median, max, mean, and mean absolute errors
IID tests for 1 step ahead errors
– Significant residual autocorrelations, Portmanteau Q (power of residuals), turning point test, sign test
Normality test (R^2 of QQ plot) for 1 step ahead errors (sketched below)
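A minimal sketch of that normality check, assuming it amounts to the squared correlation of a normal quantile-quantile plot of the residuals (the study’s exact procedure may differ):

import numpy as np
from scipy import stats

def qq_r_squared(errors):
    # probplot returns the QQ points plus a least-squares line with correlation r
    (osm, osr), (slope, intercept, r) = stats.probplot(errors, dist="norm")
    return r ** 2   # near 1.0 means the errors look close to normal

rng = np.random.default_rng(0)
print(qq_r_squared(rng.normal(size=1000)))       # high, close to 1
print(qq_r_squared(rng.exponential(size=1000)))  # noticeably lower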
22 Database
54 values characterize each testcase and lead time; SQL queries answer questions such as “How much do AR(16) models reduce the variability of 1 second ahead predictions?”:

select count(*), 100*avg((testvar-msqerr)/testvar) as avgpercentimprove
from big where p=16 and q=0 and d=0 and lead=1

+----------+-------------------+
| count(*) | avgpercentimprove |
+----------+-------------------+
|     1164 |     66.7681346166 |
+----------+-------------------+
23 Comparisons
Paired
– MEAN vs. BM/LAST vs. another model
Unpaired
– All models
– Unpaired t-test to compare expected mean squared errors (sketched below)
– Box plots to determine consistency
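A minimal sketch of the unpaired t-test step, with made-up stand-in data for two models’ per-testcase mean squared errors:

import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
mse_ar16 = rng.gamma(2.0, 0.05, size=30)   # hypothetical AR(16) MSEs, one per testcase
mse_last = rng.gamma(2.0, 0.12, size=30)   # hypothetical LAST MSEs

# Welch's unpaired t-test on the expected mean squared errors
t_stat, p_value = stats.ttest_ind(mse_ar16, mse_last, equal_var=False)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")  # small p: expected MSEs differ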
24 AR(16) vs. LAST
25 AR(16), BM(32)
26 Unpaired Box Plot Comparisons
[Box plots of mean squared error for three models, marked at 2.5%, 25%, 50%, mean, 75%, and 97.5%: Model A shows inconsistent low error, Model B consistent low error, Model C consistent high error]
– Good models achieve consistently low error
27 1 Second Predictions, All Hosts
[Box plots with 2.5%, 25%, 50%, mean, 75%, and 97.5% marks]
– Predictive models clearly worthwhile
28 15 Second Predictions, All Hosts
[Box plots with 2.5%, 25%, 50%, mean, 75%, and 97.5% marks]
– Predictive models clearly worthwhile
– Begin to see differentiation between models
29 30 Second Predictions, All Hosts
[Box plots with 2.5%, 25%, 50%, mean, 75%, and 97.5% marks]
– Predictive models clearly beneficial even at long prediction horizons
30 1 Second Predictions, Dynamic Host
[Box plots with 2.5%, 25%, 50%, mean, 75%, and 97.5% marks]
– Predictive models clearly worthwhile
31 15 Second Predictions, Dynamic Host
[Box plots with 2.5%, 25%, 50%, mean, 75%, and 97.5% marks]
– Predictive models clearly worthwhile
– Begin to see differentiation between models
32 30 Second Predictions, Dynamic Host
[Box plots with 2.5%, 25%, 50%, mean, 75%, and 97.5% marks]
– Predictive models clearly worthwhile
– Begin to see differentiation between models
33 Outline
Context: predicting task execution times
– Mean squared load prediction error
Offline trace-based evaluation
– Host load traces
– Linear models
– Randomized methodology
– Results of data-mining
Online prediction of task execution times
Related work
Conclusion
34 Online Prediction of Task Execution Times
Replay selected load trace on host
Continuously run a 1 Hz AR(16)-based host load predictor
Select random tasks
– 5 to 15 second intervals
– 0.1 to 10 second nominal times
Estimate exec time using predictions
– Assume priority-less round-robin scheduler (see the sketch after this list)
Execute task
– Record nominal, predicted, and actual exec times
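A minimal sketch of that estimate, assuming (as on the CI slide later in the deck) that under a priority-less round-robin scheduler a task receives a 1/(1 + load) share of the CPU each second; the function name and array layout are illustrative, not from the paper:

def predicted_exec_time(nominal, predicted_loads):
    # nominal: CPU seconds of work; predicted_loads[k]: load k+1 seconds ahead
    done, t = 0.0, 0
    for z in predicted_loads:
        share = 1.0 / (1.0 + z)      # CPU seconds obtained in this wall-clock second
        if done + share >= nominal:
            return t + (nominal - done) / share   # task finishes mid-second
        done += share
        t += 1
    return None                       # would need predictions beyond the horizon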
35 On-line Prediction Results
[Two scatter plots: nominal time as prediction vs. load prediction based; measurement of 1000 0.1-30 second tasks on a lightly loaded host]
– Nominal time as prediction: 10% of tasks drastically mispredicted
– Load prediction based: all tasks usefully predicted
– Prediction is beneficial even on lightly loaded hosts
36 On-line Prediction Results
[Two scatter plots: nominal time as prediction vs. load prediction based; measurement of 3000 0.1-30 second tasks on a heavily loaded, dynamic host]
– Nominal time as prediction: 74% of tasks mispredicted
– Load prediction based: 3% of tasks mispredicted
– Prediction is beneficial on heavily loaded, dynamic hosts
37 Related Work
Workload studies for load balancing
– Mutka, et al. [PerfEval ’91]
– Harchol-Balter, et al. [SIGMETRICS ’96]
Host load measurement and studies
– Network Weather Service [HPDC ’97, HPDC ’99]
– Remos [HPDC ’98]
– Dinda [LCR98, SciProg99]
Host load prediction
– Wolski, et al. [HPDC ’99] (NWS)
– Samadani, et al. [PODC ’95]
38 Conclusions
– Rigorous study of host load prediction
– Host load is predictable despite its complex behavior
– Simple linear models are sufficient; recommend AR(16) or better
– Predictions lead to useful estimates of task running time
39 Availability
RPS Toolkit
– http://www.cs.cmu.edu/~pdinda/RPS.html
– Includes on-line and off-line prediction tools
Load traces and tools
– http://www.cs.cmu.edu/~pdinda/LoadTraces/
Prediction testcase database
– Available by request (pdinda@cs.cmu.edu)
Remos
– http://www.cs.cmu.edu/~cmcl/remulac/remos.html
40 Linear Time Series Models
[Diagram: Unpredictable Random Sequence → Fixed Linear Filter → Partially Predictable Load Sequence]
– Choose the filter weights ψ_j to minimize σ_a^2, the variance of the driving error sequence
– σ_a determines the confidence interval for t+1 predictions
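In symbols, a minimal statement of this picture in standard Box-Jenkins form (the notation is assumed, not copied from the slide): the load sequence z_t is white noise a_t passed through the fixed linear filter

z_t = \mu + \sum_{j=0}^{\infty} \psi_j a_{t-j}, \qquad \psi_0 = 1, \qquad \operatorname{Var}(a_t) = \sigma_a^2 .

Since the one step ahead prediction error is exactly a_{t+1}, σ_a^2 is the smallest achievable one step ahead mean squared error, which is what sets the width of the t+1 confidence interval.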
41 Online Resource Prediction System
[Diagram: Sensor, Buffer, Predictor, and Evaluator components; a measurement stream flows in, a prediction stream flows out to the application, the Evaluator raises a refit signal, and the user controls the system through a request/response stream]
42 Execution Time Model
43-45 Prediction Errors (backup, built up across three slides)
[Diagram: from the measurement stream ..., z_t, z_{t+1} the Load Predictor emits 1, 2, ..., w step ahead predictions; subtracting the later measurements gives, for i = 0, 1, ..., the 1 step ahead prediction errors, then the 2 step ahead errors, ..., up to the w step ahead errors]

46 Mean Squared Error (backup)
[Diagram: averaging the squared k step ahead errors (z'_{t+i,t+i+k} - z_{t+i+k})^2 over i = 0, 1, ... yields the k step ahead mean squared errors a_1, a_2, ..., a_w]
47 Load Predictor Operation (backup)
[Diagram: from the measurement stream ..., z_t, z_{t+1} the Load Predictor emits, at each step t, the 1, 2, ..., w step ahead predictions z'_{t,t+1}, z'_{t,t+2}, ..., z'_{t,t+w}]
48 CIs From Mean Squared Error (backup)
– “load in the next second is predicted to be 1.0”: z'_{t,t+1} = 1.0
– “one second ahead predictions are this bad”: a_1 = 0.1 (mean squared error)
– 95% CI: z'_{t,t+1} in [1.0 - 1.96 √a_1, 1.0 + 1.96 √a_1] = [0.38, 1.62]
– “your task will execute this long in the next second”: t_exec = 1/(1 + z'_{t,t+1})
– Point estimate: t_exec = 1/(1 + 1.0) = 0.5 seconds
– t_exec = 1/(1 + [0.38, 1.62]) = [0.38, 0.72] seconds with 95% confidence
– With a_1 = 0.01 instead: t_exec = 1/(1 + [0.8, 1.2]) = [0.45, 0.56] seconds with 95% confidence
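The arithmetic above, as a runnable check (a sketch; the 1.96 factor assumes normally distributed prediction errors, consistent with the slide’s 95% bands):

import math

def exec_time_ci(pred_load, a1):
    half = 1.96 * math.sqrt(a1)             # 95% half-width from the MSE a1
    lo, hi = pred_load - half, pred_load + half
    # t_exec = 1/(1+z) is decreasing in z, so the endpoints swap
    return 1.0 / (1.0 + hi), 1.0 / (1.0 + lo)

print(exec_time_ci(1.0, 0.1))    # ≈ (0.38, 0.72) seconds
print(exec_time_ci(1.0, 0.01))   # ≈ (0.45, 0.56) seconds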
49 AR(1), LAST (big)
50 AR(2), LAST (big)
51 AR(4), LAST (big)
52 AR(8), LAST (big)
53 AR(16), LAST (big)
54 AR(32), LAST (big)
55 AR(1), BM(32)
56 AR(2), BM(32)
57 AR(4), BM(32)
58 AR(8), BM(32)
59 AR(16), BM(32)
60 AR(32), BM(32)
61 AR(p), LAST, +1 (big)
62 AR(p), LAST, +2 (big)
63 AR(p), LAST, +4 (big)
64 AR(p), LAST, +8 (big)
65 AR(p), LAST, +16 (big)
66 AR(p), LAST, +30 (big)
67 AR(p), BM(32), +1
68 AR(p), BM(32), +2
69 AR(p), BM(32), +4
70 AR(p), BM(32), +8
71 AR(p), BM(32), +16
72 AR(p), BM(32), +30