
1 Performance Analysis of Cloud Computing Services for Many-Tasks Scientific Computing
Berkeley, CA, USA
Alexandru Iosup, Nezih Yigitbasi, Dick Epema (Parallel and Distributed Systems Group, Delft University of Technology, The Netherlands)
Simon Ostermann, Radu Prodan, Thomas Fahringer (Distributed and Parallel Systems, University of Innsbruck, Austria)

2 About the Team
Speaker: Alexandru Iosup
Team's recent work in performance:
- The Grid Workloads Archive (Nov 2006)
- The Failure Trace Archive (Nov 2009)
- The Peer-to-Peer Trace Archive (Apr 2010)
- Tools: GrenchMark workload-based grid benchmarking, other monitoring and performance evaluation tools
- Grid and peer-to-peer workload characterization and modeling
Systems work: Tribler (P2P file sharing), Koala (grid scheduling), POGGI and CAMEO (massively multiplayer online gaming)

3 Many-Tasks Scientific Computing
- Jobs comprising many tasks (1,000s) necessary to achieve some meaningful scientific goal
- Jobs submitted as bags-of-tasks or over short periods of time
- High-volume users over long periods of time
- Common in grid workloads [Ios06][Ios08]
- No practical definition (ranges from "many" to "10,000 tasks/hour")

4 The Real Cloud
"The path to abundance":
- On-demand capacity
- Cheap for short-term tasks
- Great for web apps (EIP, web crawl, DB ops, I/O)
vs. "The killer cyclone":
- Not-so-great performance for scientific applications [1] (compute- or data-intensive)
- Long-term performance variability [2]
(Image: Tropical Cyclone Nargis, NASA, ISSS, 04/29/08; http://www.flickr.com/photos/dimitrisotiropoulos/4204766418/)
[1] Iosup et al., Performance Analysis of Cloud Computing Services for Many-Tasks Scientific Computing (under submission).
[2] Iosup et al., On the Performance Variability of Production Cloud Services, Technical Report PDS-2010-002. Available: http://pds.twi.tudelft.nl/reports/2010/PDS-2010-002.pdf

5 Research Question and Previous Work
Do clouds and Many-Tasks Scientific Computing fit well, performance-wise?
Virtualization overhead:
- Loss below 5% for computation [Barham03] [Clark04]
- Loss below 15% for networking [Barham03] [Menon05]
- Loss below 30% for parallel I/O [Vetter08]
- Negligible for compute-intensive HPC kernels [You06] [Panda06]
Cloud performance evaluation:
- Performance and cost of executing scientific workflows [Dee08]
- Study of Amazon S3 [Palankar08]
- Amazon EC2 for the NPB benchmark suite [Walker08] or selected HPC benchmarks [Hill08]
In theory, the virtualization-overhead results should suffice. In practice?

6 Agenda
1. Introduction & Motivation
2. Proto-Many-Task Users
3. Performance Evaluation of Four Clouds
4. Clouds vs. Other Environments
5. Take-Home Message

7 Proto-Many-Task Users
MTC user: at least J jobs in B bags-of-tasks
Trace-based analysis:
- 6 grid traces, 4 parallel production environment traces
- Various criteria (combinations of values for J and B)
Result: "number of BoTs submitted ≥ 1,000 and number of tasks submitted ≥ 10,000"
- Easy to grasp
- These users dominate most traces (in jobs and CPU time)
- 1-CPU jobs
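As a rough illustration of the trace-analysis step above, the sketch below counts, per user, the distinct bags-of-tasks and the total submitted tasks, and flags users that meet the stated thresholds. The trace format (one (user_id, bot_id) record per task) and the function name are assumptions for this example, not the format of any specific archive.

```python
from collections import defaultdict

# Minimal sketch: flag "proto-MTC" users in a workload trace.
# Assumption: the trace is an iterable of (user_id, bot_id) records, one per
# submitted task; the thresholds are the criteria from the slide above.
MIN_BOTS = 1_000      # bags-of-tasks submitted
MIN_TASKS = 10_000    # tasks submitted

def proto_mtc_users(trace):
    bots_per_user = defaultdict(set)    # distinct bag-of-tasks IDs per user
    tasks_per_user = defaultdict(int)   # total submitted tasks per user
    for user_id, bot_id in trace:
        bots_per_user[user_id].add(bot_id)
        tasks_per_user[user_id] += 1
    return {
        user for user, n_tasks in tasks_per_user.items()
        if len(bots_per_user[user]) >= MIN_BOTS and n_tasks >= MIN_TASKS
    }
```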

8 Agenda
1. Introduction & Motivation
2. Proto-Many-Task Users
3. Performance Evaluation of Four Clouds
   1. Experimental Setup
   2. Selected Results
4. Clouds vs. Other Environments
5. Take-Home Message

9 Experimental Setup: Environments
Four commercial IaaS clouds (per the NIST definitions):
- Amazon EC2
- GoGrid
- Elastic Hosts
- Mosso
No Cluster instances (not yet released in Dec '08 - Jan '09)

10 Experimental Setup: Experiment Design
Principles:
- Use complete test suites
- Repeat 10 times
- Use defaults, not tuning
- Use common benchmarks, so results can be compared with results for other systems
Types of experiments:
- Resource acquisition and release
- Single-Instance (SI) benchmarking
- Multiple-Instance (MI) benchmarking
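A minimal sketch of the repeat-and-report-spread discipline listed above; the benchmark command is a placeholder, since the actual experiments used complete standard suites rather than a single command.

```python
import statistics
import subprocess
import time

def run_repeated(cmd, repetitions=10):
    """Run a benchmark command with its defaults, repeated; report the spread."""
    timings = []
    for _ in range(repetitions):
        start = time.time()
        subprocess.run(cmd, check=True)   # no tuning: run the command as-is
        timings.append(time.time() - start)
    return min(timings), statistics.mean(timings), max(timings)

# Hypothetical usage: lo, avg, hi = run_repeated(["./benchmark", "--defaults"])
```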

11 Resource Acquisition: Can Matter
- Acquisition time can be significant:
  - for single instances (GoGrid)
  - for multiple instances (all four clouds)
- Short-term variability can be high (GoGrid)
- Slow long-term growth
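For illustration only, a sketch of how acquisition and release times can be measured on one cloud. It uses boto3 (the current AWS SDK for Python, which postdates these experiments); the AMI ID and instance type are placeholders, and the other clouds in the study have different APIs.

```python
import time
import boto3  # assumption: modern AWS SDK; the original study predates boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Acquisition: from the request until the instance reports 'running'.
t0 = time.time()
resp = ec2.run_instances(ImageId="ami-xxxxxxxx",   # placeholder AMI
                         InstanceType="m1.small",  # placeholder type
                         MinCount=1, MaxCount=1)
instance_id = resp["Instances"][0]["InstanceId"]
ec2.get_waiter("instance_running").wait(InstanceIds=[instance_id])
acquisition_s = time.time() - t0

# Release: from the terminate call until the instance reports 'terminated'.
t1 = time.time()
ec2.terminate_instances(InstanceIds=[instance_id])
ec2.get_waiter("instance_terminated").wait(InstanceIds=[instance_id])
release_s = time.time() - t1

print(f"acquisition: {acquisition_s:.1f}s, release: {release_s:.1f}s")
```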

12 Single Instances: Compute Performance Lower Than Expected
- 1 ECU = 4.4 GFLOPS at 100%-efficient code: 1.1 GHz (2007 Opteron) x 4 FLOPS/cycle (full pipeline)
- In our tests: 0.6-0.8 GFLOPS
- Sharing of the same physical machines (working set)
- Lack of code optimizations beyond -O3 -funroll-loops
- Metering requires more clarification
- Instances with excellent float/double addition performance may have poor multiplication performance (c1.medium, c1.xlarge)
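The arithmetic behind the figures above, written out as a small check; the clock, FLOPS/cycle, and measured range are taken from this slide.

```python
# Theoretical peak of one ECU, per the slide's definition.
clock_ghz = 1.1            # 1.1 GHz 2007 Opteron
flops_per_cycle = 4        # full floating-point pipeline
peak_gflops = clock_ghz * flops_per_cycle          # = 4.4 GFLOPS per ECU

# Measured range reported above.
measured_low, measured_high = 0.6, 0.8
print(f"peak: {peak_gflops:.1f} GFLOPS/ECU")
print(f"measured: {measured_low / peak_gflops:.0%}-{measured_high / peak_gflops:.0%} of peak")
```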

13 Multi-Instance: Low Efficiency in HPL
Peak performance:
- 2 x c1.xlarge (16 cores) @ 176 GFLOPS; HPCC-227 (Cisco, 16 cores) @ 102; HPCC-286 (Intel, 16 cores) @ 180
- 16 x c1.xlarge (128 cores) @ 1,408 GFLOPS; HPCC-224 (Cisco, 128 cores) @ 819; HPCC-289 (Intel, 128 cores) @ 1,433
Efficiency:
- Cloud: 15-50%, even for small (<128) instance counts
- HPC systems: 60-70%
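HPL efficiency here is achieved throughput divided by the theoretical peak of the allocation. A small helper illustrating the computation, using the 16-core peak figure from this slide and a hypothetical achieved value chosen only to fall in the reported 15-50% cloud band:

```python
def hpl_efficiency(achieved_gflops, peak_gflops):
    """Fraction of theoretical peak actually delivered by the HPL run."""
    return achieved_gflops / peak_gflops

peak_16_cores = 176.0       # 2 x c1.xlarge (16 cores), GFLOPS, from the slide
achieved_example = 50.0     # hypothetical HPL result, for illustration only
print(f"{hpl_efficiency(achieved_example, peak_16_cores):.0%}")   # ~28%
```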

14 Cloud Performance Variability
Performance variability of production cloud services:
- Infrastructure: Amazon Web Services
- Platform: Google App Engine
- Year-long performance information for nine services
Finding: about half of the cloud services investigated in this work exhibit yearly and daily patterns; the impact of performance variability depends on the application.
A. Iosup, N. Yigitbasi, and D. Epema, On the Performance Variability of Production Cloud Services (under submission).
(Figure: Amazon S3, GET US HI operations.)
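One simple way to surface a daily pattern in a year-long series of service probes is to group samples by hour of day, as sketched below. The data layout ((timestamp, value) pairs) is an assumption for this example; the cited study uses its own datasets and a more thorough analysis.

```python
from collections import defaultdict
from datetime import datetime, timezone
from statistics import mean

def hourly_profile(samples):
    """Average probe value per hour of day; a large spread across hours
    hints at a daily pattern worth a closer look."""
    by_hour = defaultdict(list)
    for ts, value in samples:   # ts: Unix timestamp, value: e.g. GET latency
        hour = datetime.fromtimestamp(ts, tz=timezone.utc).hour
        by_hour[hour].append(value)
    return {hour: mean(vals) for hour, vals in sorted(by_hour.items())}
```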

15 Agenda
1. Introduction & Motivation
2. Proto-Many-Task Users
3. Performance Evaluation of Four Clouds
4. Clouds vs. Other Environments
5. Take-Home Message

16 Clouds vs Other Environments
Setup:
- Trace-based simulation with the DGSim grid simulator
- Compute-intensive workloads, no data I/O
- Compared: source environment vs. cloud with source-like performance vs. cloud with real (measured) performance
- Slowdown: sequential jobs 7x, parallel jobs 1-10x
Results:
- Response time 4-10 times higher in real clouds
- Clouds are good for short-term, deadline-driven projects
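A toy version of the trace-replay idea behind this comparison: inflate the runtimes recorded in a source-environment trace by the measured cloud slowdown before simulating. The slowdown factors are the ranges quoted on the slide; the job format is illustrative, and the actual study uses the DGSim simulator rather than this direct scaling.

```python
SEQUENTIAL_SLOWDOWN = 7.0          # slowdown quoted for sequential jobs
PARALLEL_SLOWDOWN_DEFAULT = 5.0    # one value in the 1-10x range for parallel jobs

def cloud_runtime(runtime_s, cpus, parallel_slowdown=PARALLEL_SLOWDOWN_DEFAULT):
    """Scale a source-trace runtime to an estimated 'real cloud' runtime."""
    factor = SEQUENTIAL_SLOWDOWN if cpus == 1 else parallel_slowdown
    return runtime_s * factor

# Hypothetical trace entries: (runtime in seconds, CPU count)
jobs = [(3600, 1), (1800, 32)]
print([cloud_runtime(r, c) for r, c in jobs])   # [25200.0, 9000.0]
```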

17 Take-Home Message
Many-Tasks Scientific Computing:
- Quantitative definition: J jobs and B bags-of-tasks
- Extracted proto-MTC users from grid and parallel production environments
Performance evaluation of four commercial clouds:
- Amazon EC2, GoGrid, Elastic Hosts, Mosso
- Resource acquisition, Single- and Multi-Instance benchmarking
- Low compute and networking performance
Clouds vs other environments:
- An order of magnitude better performance is needed for clouds
- Clouds are already good for short-term, deadline-driven scientific computing

18 Potential for Collaboration
- Other performance-evaluation studies of clouds
- The new Amazon EC2 instance type (Cluster Compute)
- Other clouds?
- Data-intensive benchmarks
- General logs: Failure Trace Archive, Grid Workloads Archive, ...

19 Thank you! Questions? Observations?
More information:
- The Grid Workloads Archive: gwa.ewi.tudelft.nl
- The Failure Trace Archive: fta.inria.fr
- The GrenchMark performance evaluation tool: grenchmark.st.ewi.tudelft.nl
- Cloud research: www.st.ewi.tudelft.nl/~iosup/research_cloud.html
- See the PDS publication database at: www.pds.twi.tudelft.nl/
- Email: A.Iosup@tudelft.nl
Big thanks to our collaborators: U. Wisc.-Madison, U Chicago, U Dortmund, U Innsbruck, LRI/INRIA Paris, INRIA Grenoble, U Leiden, Politehnica University of Bucharest, Technion, ...

