Cloud benchmarking, tools and challenges
Author: Elham Hojati; Adviser: Dr. Yong Chen
Computer Science Department, Texas Tech University

Abstract
Benchmarking a system is the process of assessing its performance and other characteristics so that they can be compared with those of other systems. Benchmark tools try to answer the question "which is the best system in a given domain?" The cloud is one of the systems that needs benchmarking, but the cloud is a very complex and dynamic environment. There are therefore important differences between benchmarking a dynamic cloud environment and traditional benchmarking of static systems.

Motivation and Goals
As future work we are going to improve the benchmarking of resource allocation and develop an advanced isolation approach that considers all the aspects of the fairness property. Our goal is to consider various properties such as performance isolation, together with fairness and elasticity, for dynamic shared-resource environments.

Methods and Techniques: Quantifying performance isolation [7]
Sharing resources causes potential interference between participants, which affects the performance of the cloud system for its customers. Reducing the effect of resource sharing on the performance of each user is one of the major goals of cloud service providers. The article [7] introduces three different types of representative metrics for measuring the level of isolation in cloud systems.

Figure 1: Fictitious isolation curve including upper and lower bounds [7].

Limitations of this method
This method does not consider all three aspects of the fairness property. As future work we are going to improve this method into an advanced isolation approach that covers all aspects of fairness.

Methods and Techniques: Resource allocation using game theory [6]
Game theory can be used to solve the problem of resource allocation, and a practical approximated solution has been proposed in [6].

Limitations of this method
There are several limitations to this method. First, the authors aim to provide fairness in their resource-allocation method by using algorithms that avoid starvation, but they do not consider all three aspects of the fairness property. Second, the proposed model is static, while the cloud is a very complex and dynamic environment; it is not practical to use a static model to represent a dynamic environment.
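To make the starvation-avoidance idea concrete, the following is a minimal sketch of a proportional-share allocator with a guaranteed minimum per tenant. It is not the game-theoretic algorithm of [6], which this poster describes only at a high level; all function names and numbers are hypothetical.

    # A minimal proportional-share allocator with a guaranteed minimum per
    # tenant. NOT the game-theoretic algorithm of [6]; it only illustrates
    # the starvation-avoidance property that [6] aims for.

    def allocate(capacity, demands, min_share):
        """Grant each tenant up to `min_share` first (the starvation floor),
        then split the remaining capacity in proportion to unmet demand."""
        base = [min(min_share, d) for d in demands]
        remaining = capacity - sum(base)
        residual = [d - b for d, b in zip(demands, base)]
        total_residual = sum(residual)
        if remaining <= 0 or total_residual == 0:
            return base
        # Note: a production allocator would also cap each tenant at its demand.
        return [b + remaining * r / total_residual
                for b, r in zip(base, residual)]

    # Hypothetical example: 100 units of capacity, four competing tenants.
    # Without the floor, the smallest tenant could starve under contention.
    print(allocate(100.0, [80.0, 40.0, 20.0, 5.0], min_share=10.0))
    # -> [51.36..., 27.72..., 15.90..., 5.0]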
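The measurement side discussed above can be sketched in the same spirit. The isolation function below follows the general form described in [7] (relative QoS degradation of well-behaved "abiding" tenants per relative load increase of "disruptive" tenants), and Jain's index is one standard way to quantify the fairness aspect; neither function is taken verbatim from the cited papers, and all inputs are hypothetical measurements.

    # Assumed-input sketch of an isolation metric in the general form of [7]
    # plus Jain's fairness index as one standard fairness measure.

    def isolation_influence(qos_ref, qos_disr, load_ref, load_disr):
        """Relative QoS change of abiding tenants divided by the relative
        load change of disruptive tenants; 0 means perfect isolation.
        Here QoS is response time, so larger values mean worse isolation."""
        return ((qos_disr - qos_ref) / qos_ref) / ((load_disr - load_ref) / load_ref)

    def jain_fairness(shares):
        """Jain's fairness index: 1.0 is perfectly fair, 1/n means one
        tenant receives everything."""
        n = len(shares)
        return sum(shares) ** 2 / (n * sum(s * s for s in shares))

    # Hypothetical run: abiding tenants' mean response time rises from 100 ms
    # to 180 ms while disruptive tenants double their request rate.
    print(isolation_influence(100.0, 180.0, 500.0, 1000.0))   # 0.8
    print(jain_fairness([25.0, 25.0, 25.0, 25.0]))            # 1.0
    print(jain_fairness([70.0, 10.0, 10.0, 10.0]))            # ~0.48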
Comparing Tools and Frameworks (part of this table is from [3])

Tool                           | Target application                      | Test environment                                          | Service
CloudCmp                       | Legacy applications                     | Multiple instance types                                   | IaaS
CloudStone                     | Social web applications                 | Amazon EC2 instances                                      | PaaS
HiBench                        | Hadoop (MapReduce)                      | Hadoop cluster                                            |
YCSB                           | Database/performance comparisons        | Data-serving systems                                      |
CloudSuite                     | Media streaming/server                  | Characterizes scale-out workloads                         |
PerfKit                        | Server functionality and performance    | Public and private clouds                                 |
Price-performance benchmarking | Server price-performance functionality  | Amazon, Google, Microsoft, Rackspace, IBM, HP, and Linode |

Conclusion and Future Project Plan
We have performed a survey of cloud benchmarking and of the various challenges in creating and deploying cloud benchmarks, and we have studied and compared several cloud-benchmarking tools and frameworks. Existing cloud benchmarks and metrics mostly focus on traditional metrics such as throughput in a virtualized environment, consider only single aspects such as databases, or cover only some cloud features. For example, there are methods for benchmarking resource allocation that consider fairness or elasticity, but there is no method that considers both. Our goal is to consider various aspects such as performance isolation, together with fairness and elasticity, for dynamic shared-resource environments.

Challenges in Building a Cloud Benchmark
Step 1: Meaningful metrics
• Challenge 1: Defining meaningful metrics that account for elasticity and fairness
Step 2: Workload design
• Challenge 2: Resource allocation/scalability and performance isolation
Step 3: Workload implementation
• Challenge 3: Workload generation
• Challenge 4: Fairness
Step 4: Creating trust
• Challenge 5: Location
• Challenge 6: Ownership
• Challenge 7: Security

Acknowledgements
This research is supported by the Cloud and Autonomic Computing site at Texas Tech University via the StackVelocity membership contribution and the Aerospace Corporation technical partnerships.

References
[1] Alexandru Iosup, Radu Prodan, Dick Epema, "IaaS Cloud Benchmarking: Approaches, Challenges, and Experience", Cloud Computing for Data-Intensive Applications, 2014.
[2] Enno Folkerts, Alexander Alexandrov, Kai Sachs, Alexandru Iosup, Volker Markl, Cafer Tosun, "Benchmarking in the Cloud: What it Should, Can, and Cannot Be", Lecture Notes in Computer Science, Volume 7755, 2013.
[3] C. Vazquez, R. Krishnan, and E. John, "Cloud Computing Benchmarking: A Survey", 2014.
[4] Edward Wustenhoff (CTO, Burstorm), "Cloud Computing Benchmark RB-A, the 1st Step to Continuous Price-Performance Benchmarking of the Cloud", Rice Burstorm Price Performance Benchmark Report (RB-A), June 2015.
[5] Bin Sun, Brian Hall, Hu Wang, Da Wei Zhang, Kai Ding, "Benchmarking Private Cloud Performance with User-Centric Metrics", 2014 IEEE International Conference on Cloud Engineering.
[6] Guiyi Wei, Athanasios V. Vasilakos, Yao Zheng, Naixue Xiong, "A Game-Theoretic Method of Fair Resource Allocation for Cloud Computing Services", Springer Science+Business Media, 2009.
[7] Rouven Krebs, Christof Momm, Samuel Kounev, "Metrics and Techniques for Quantifying Performance Isolation in Cloud Environments", Elsevier, 2013.