Proactive Prediction Models for Web Application Resource Provisioning in the Cloud
Samuel A. Ajila & Bankole A. Akindele
Presentation Outline
Introduction to problem area
Motivation
Goals and scope
Contributions
Related work
Machine learning algorithms
Implementation setup
Evaluation metrics
Selected results
Conclusion
Introduction
Cloud computing refers to both the applications delivered as services over the Internet and the hardware and systems software in the data centers that provide those services (SaaS, PaaS and IaaS)
Challenges include: data security threats, performance unpredictability and prompt (quick) resource scaling
Accurate virtual machine (VM) resource provisioning is difficult: both under- and over-provisioning are costly
Current techniques include control theory, constraint programming and machine learning
Motivation
VM instantiation (scaling) takes time, ranging from 5 to 12 minutes
Challenges that can result from this instantiation delay:
- Possibility of Service Level Agreement (SLA) violations – cloud providers
- Poor customer Quality of Experience (QoE) – cloud clients
- Reputation loss – cloud clients and providers
Presently, the monitoring metrics made available to clients are limited to CPU, memory and network utilization
The motivation is to "predict resource usage so that cloud providers can make adequate provisioning ahead of time", and to extend the monitoring metrics by including response time and throughput
Goals and Scope
Design and develop a cloud client prediction model for cloud resource provisioning in a multitier web application environment
The model should be capable of forecasting future resource usage to enable timely VM provisioning
To achieve this goal, SVM, NN and LR learning techniques are analysed using the Java implementation of the TPC-W benchmark (workload)
The scope of this work is limited to IaaS
The prediction model is built around the web server tier; it is possible to extend it to other tiers
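Forecasting future resource usage from historical data is a supervised learning problem over a sliding window. A minimal sketch of that setup (the window size, horizon, and sample values here are illustrative assumptions, not the authors' settings):

```python
# Sketch: turning a resource-usage time series into a supervised
# learning problem with lag features.  Each training row holds the
# n_lags most recent observations; the target is the value
# `horizon` steps ahead.
def make_lag_features(series, n_lags=3, horizon=1):
    X, y = [], []
    for t in range(n_lags, len(series) - horizon + 1):
        X.append(series[t - n_lags:t])        # recent history
        y.append(series[t + horizon - 1])     # future value to predict
    return X, y

cpu = [20, 22, 25, 30, 28, 35, 40]  # illustrative CPU-utilization samples
X, y = make_lag_features(cpu, n_lags=3, horizon=1)
# First row: use samples [20, 22, 25] to predict the next value, 30
```

Any of the three learners (LR, NN, SVM) can then be trained on the (X, y) pairs; a larger `horizon` corresponds to forecasting further ahead of the VM instantiation delay.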
Contributions
Design and development of a cloud client prediction model that uses historical data to forecast future resource usage
Evaluation of the resource usage prediction capability of SVM, NN and LR using three benchmark workloads from TPC-W
Extension of the prediction model to include throughput and response time, providing wider and better scaling decision options for cloud clients
Comparison of the prediction capability of SVM, NN and LR models under random and steady traffic patterns
Related works
Table 1: Auto-scaling techniques

Technique              | Class              | Comments
Threshold/rule based   | Reactive           | Reacts to system changes but does not anticipate them
Control theory         | Reactive/Proactive | Except where used with a proactive approach, it suffers reactive issues
Queuing theory         | Reactive           | As the complexity of the system grows, the analytic formulas become difficult
Reinforcement learning | Reactive/Proactive | Good, but converging to the optimal policy can be infeasibly long (state-action pairs)
Time series analysis   | Proactive          | Good and promising
Machine learning algorithms
Machine learning algorithms (cont'd)
Figure 1: Single hidden layer, feed-forward neural network
Machine learning algorithms (cont'd)
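The three learners compared in this work can be sketched side by side. This is a minimal illustration assuming scikit-learn; the synthetic workload, lag construction, and hyperparameters (RBF kernel, one hidden layer of 8 units) are assumptions for the sketch, not the authors' configuration:

```python
# Fit LR, SVR and a single-hidden-layer feed-forward NN on the same
# lagged series, mirroring the 60% train / 40% held-out test split.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.neural_network import MLPRegressor
from sklearn.svm import SVR

rng = np.random.default_rng(0)
t = np.arange(200, dtype=float)
cpu = 50 + 20 * np.sin(t / 10) + rng.normal(0, 2, t.size)  # synthetic load

n_lags = 3  # each row: the 3 most recent samples; target: the next one
X = np.column_stack([cpu[i:len(cpu) - n_lags + i] for i in range(n_lags)])
y = cpu[n_lags:]
split = int(0.6 * len(y))

models = {
    "LR": LinearRegression(),
    "SVR": SVR(kernel="rbf", C=10.0),
    "NN": MLPRegressor(hidden_layer_sizes=(8,), max_iter=2000, random_state=0),
}
preds = {}
for name, model in models.items():
    model.fit(X[:split], y[:split])
    preds[name] = model.predict(X[split:])  # forecasts on held-out data
```

Each model's held-out predictions can then be scored with the evaluation metrics described later.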
Architecture
Figure 2: Implementation architecture
Implementation setup (cont'd)
Total length of the experiment was about 10 hours.
Selected experimental workload mix:

Time (minutes)      | 1-7 | 56-63 | 154-161 | 350-357 | 490-497 | 504-511
Shopping mix users  |  84 |   168 |      16 |     180 |     248 |     160
Browsing mix users  |  52 |   112 |      36 |     320 |     192 |     160
Ordering mix users  |  52 |   108 |      28 |     224 |     268 |     160
Total user requests | 188 |   388 |      80 |     724 |     708 |     480
Evaluation metrics
Evaluation is done on the 60% training and 40% held-out test datasets
The held-out dataset is used to forecast up to a maximum interval of 12 minutes (the VM instantiation time reported by other authors)
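The four metrics reported in the results tables (MAPE, RMSE, MAE, PRED(25)) can be computed as follows; the sample values are illustrative, not from the experiments:

```python
# The four evaluation metrics used on the held-out test data.
# PRED(25) is the fraction of predictions within 25% of the actual value.
import math

def mape(actual, predicted):
    """Mean absolute percentage error, in percent."""
    return 100.0 / len(actual) * sum(
        abs((a - p) / a) for a, p in zip(actual, predicted))

def rmse(actual, predicted):
    """Root mean squared error."""
    return math.sqrt(
        sum((a - p) ** 2 for a, p in zip(actual, predicted)) / len(actual))

def mae(actual, predicted):
    """Mean absolute error."""
    return sum(abs(a - p) for a, p in zip(actual, predicted)) / len(actual)

def pred25(actual, predicted):
    """Fraction of predictions with relative error <= 25%."""
    within = sum(1 for a, p in zip(actual, predicted)
                 if abs((a - p) / a) <= 0.25)
    return within / len(actual)

actual = [40.0, 50.0, 60.0]       # illustrative values
predicted = [44.0, 45.0, 90.0]
# relative errors: 10%, 10%, 50%, so two of three fall within 25%
```

Lower MAPE, RMSE and MAE are better, while a higher PRED(25) is better, which is how the result tables should be read.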
CPU Training & Testing Performance
Selected results (cont'd)
CPU utilization test performance metrics

Model | MAPE  | RMSE  | MAE   | PRED(25)
LR    | 36.19 | 22.13 | 15.98 | 0.36
NN    | 50.46 | 31.08 | 19.82 | 0.34
SVR   | 22.84 | 11.84 |  8.74 | 0.64

Figure 5: CPU utilization, actual and predicted test model results
Selected results (cont'd)
Throughput test performance metrics

Model | MAPE  | RMSE | MAE  | PRED(25)
LR    | 24.62 | 3.72 | 2.87 | 0.63
NN    | 38.90 | 6.12 | 4.46 | 0.47
SVR   | 22.07 | 3.22 | 2.41 | 0.67

Figure 6: Throughput, actual and predicted test model results
Selected results (cont'd)
Response time test performance metrics

Model | MAPE  | RMSE | MAE  | PRED(25)
LR    | 12.35 | 1.39 | 1.11 | 0.91
NN    | 17.84 | 2.02 | 1.64 | 0.75
SVR   |  9.92 | 1.21 | 0.87 | 0.93

Figure 7: Response time, actual and predicted test model results
CPU – Comparison of Prediction Models
Throughput – Comparison of Prediction Models
Sensitivity Measurement Using Little's Law
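Little's Law states L = λW: the mean number of requests in the system equals throughput times mean response time. Since the model predicts both throughput and response time, the law offers a consistency check on the predictions; the figures below are hypothetical examples, not measurements from the experiments:

```python
# Little's Law: L = lambda * W
#   L      -- mean number of requests concurrently in the system
#   lambda -- throughput (requests per second)
#   W      -- mean response time (seconds)
def concurrency(throughput_rps, response_time_s):
    """Mean in-flight requests implied by Little's Law."""
    return throughput_rps * response_time_s

# e.g. a predicted 80 req/s at 0.5 s mean response time implies
# about 40 requests in the system at any instant
implied_load = concurrency(80.0, 0.5)
```

If the implied concurrency diverges sharply from the observed user load, one of the two predicted metrics is likely off, which is useful when deciding whether to trust a scaling decision.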
Conclusion
SVR displayed superior prediction accuracy over both LR and NN on a typically nonlinear workload that is not defined a priori:
- In the CPU utilization prediction model, SVR outperformed LR and NN by 58% and 120% respectively
- For the throughput prediction model, SVR again outperformed LR and NN by 12% and 76% respectively
- The response time prediction model saw SVR outperforming LR and NN by 26% and 80% respectively
Based on these experimental results, SVR may be accepted as the best prediction model for a nonlinear system that is not defined a priori
Future works
SVR and other machine learning algorithms are good for forecasting; however, training and retraining remain a challenge
Parameter selection is still empirical
Combining SVR with other prediction techniques may mitigate this challenge
Other future directions include:
- Inclusion of the database tier for a more robust scaling decision
- Resource prediction on other, non-web application workloads
Questions
Thank You