Workshop on Empirical Methods for the Analysis of Algorithms


Evaluation of Parallel Metaheuristics
Enrique Alba and Gabriel Luque

Introduction
Parallelism is an approach that makes it possible to reduce execution time and also to improve the quality of the solutions found. Clear metrics are needed to measure the performance of a parallel optimization procedure and to compare it with other (parallel) approaches. Several parallel metrics currently exist, but their meaning and use in the metaheuristics community is not homogeneous. We are interested in revising, proposing, and applying parallel performance metrics and guidelines that ensure the correctness of our conclusions.

Parallel Performance Metrics (I): Speedup
The most important parallel measure is the speedup, the ratio between the sequential and the parallel execution times. Speedup taxonomy (the basic definition is sketched below):
- Strong speedup: parallel run time compared against the best known sequential algorithm.
- Weak speedup: parallel run time compared against the same algorithm's own sequential baseline.
  - Speedup with solution stop (stop on reaching a solution of a given quality):
    - Versus panmixia: the baseline is the sequential panmictic algorithm.
    - Orthodox: the baseline is the same parallel algorithm run on one processor.
  - Speedup with predefined effort.
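Since the slide's formulas were lost with the original graphics, here is a minimal sketch of the basic definition in Python. The function name and the averaging over runs are our own illustration; for nondeterministic metaheuristics the times are usually mean wall-clock times over many independent runs.

    def speedup(times_1, times_m):
        # Weak, orthodox speedup: mean time of the SAME parallel algorithm
        # on 1 processor divided by its mean time on m processors, both
        # runs stopped on reaching the same solution quality.
        # (For the panmictic variant, times_1 would instead come from the
        # sequential panmictic algorithm.)
        mean = lambda xs: sum(xs) / len(xs)
        return mean(times_1) / mean(times_m)

    # Example: 1-processor runs vs. 8-processor runs (hypothetical seconds)
    s_8 = speedup([118.0, 124.0, 121.0], [16.2, 15.8, 16.5])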

Parallel Performance Metrics (II): Other Measures
- Efficiency
- Incremental efficiency
- Generalized incremental efficiency
- Scaleup
- Serial fraction
The formulas for these measures are reconstructed in the sketch below.
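The formulas on this slide were also images in the original deck; the following sketch reconstructs them from the standard definitions in the parallel-performance literature (the serial fraction is the Karp-Flatt metric). Treat the exact forms as our reconstruction, not a quotation of the slide.

    def efficiency(s_m, m):
        # e_m = s_m / m: fraction of the ideal linear speedup achieved.
        return s_m / m

    def incremental_efficiency(t_prev, t_m, m):
        # ie_m = ((m - 1) * T_{m-1}) / (m * T_m): gain from adding the m-th processor.
        return ((m - 1) * t_prev) / (m * t_m)

    def generalized_incremental_efficiency(t_n, n, t_m, m):
        # gie_{n,m} = (n * T_n) / (m * T_m): compares any two processor counts n and m.
        return (n * t_n) / (m * t_m)

    def scaleup(t_base, t_scaled):
        # Time for (m processors, problem size n) divided by time for
        # (k*m processors, problem size k*n); values near 1 indicate good scaling.
        return t_base / t_scaled

    def serial_fraction(s_m, m):
        # Karp-Flatt metric: f_m = (1/s_m - 1/m) / (1 - 1/m).
        return (1.0 / s_m - 1.0 / m) / (1.0 - 1.0 / m)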

Inadequate Utilization of Parallel Measures (I)
Computational effort evaluation:
- Eliminates the effects of implementation, software, and hardware.
- Misleading in the field of parallel methods: evaluation time is not constant, and the goal of parallel methods is not (only) the reduction of the number of evaluations but the reduction of time.
Comparing means/medians:
- We cannot compare two averages or medians directly; we must compare the statistical distributions of the data.
- Statistical tests: for normal data, Student's t-test or ANOVA; otherwise, a non-parametric test (e.g., Kruskal-Wallis). See the sketch after this list.
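As a concrete illustration of the test-selection rule above, here is a minimal sketch using SciPy; the significance level, helper name, and sample data are hypothetical.

    from scipy import stats

    def compare_runs(sample_a, sample_b, alpha=0.05):
        # Check normality of both samples first (Shapiro-Wilk), then apply
        # Student's t-test for normal data and Kruskal-Wallis otherwise.
        normal = all(stats.shapiro(s).pvalue > alpha for s in (sample_a, sample_b))
        if normal:
            name, result = "Student t-test", stats.ttest_ind(sample_a, sample_b)
        else:
            name, result = "Kruskal-Wallis", stats.kruskal(sample_a, sample_b)
        return name, result.pvalue

    # Example: final fitness values of two configurations (hypothetical)
    name, p = compare_runs([0.91, 0.88, 0.93, 0.90], [0.84, 0.86, 0.82, 0.85])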

Inadequate Utilization of Parallel Measures (II)
Comparing algorithms with different accuracy: we are comparing different things, e.g., methods solving different problems or different instances.
Comparing parallel versions against canonical serial ones: we are comparing different algorithms, e.g., different methods such as a pGA versus a pSA.
Using a predefined effort: we are indirectly imposing the execution time, so it is incorrect to use it to measure speedup.

Examples (I): Panmictic vs. Orthodox Speedup
The panmictic speedup yields superlinear values. The orthodox speedup is worse (sublinear) than the panmictic one, but it is fair and realistic. Both cases show the same trend here, but in other experiments the trends could even be contradictory.

Examples (II): Speedup with Predefined Effort
The termination condition is based on a predefined effort (a maximum number of evaluations). Calculating speedup in this setting is not appropriate, since the accuracy of the solutions found differs. We have indirectly fixed the execution time, and then we can only calculate a theoretical speedup: t_m = c_m * eval_m * t_eval (where c_m ∝ 1/m). With a predefined effort, eval_1 = eval_m, so s_m = c_1 / c_m.
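To make the cancellation explicit, here is the arithmetic as a tiny sketch (the cost constants c_1 and c_m are the per-evaluation time factors from the slide; the numbers are hypothetical):

    def theoretical_speedup(c_1, c_m):
        # s_m = t_1 / t_m = (c_1 * evals * t_eval) / (c_m * evals * t_eval):
        # with eval_1 == eval_m the effort and t_eval cancel, so the "speedup"
        # depends only on cost constants fixed in advance, not on measurement.
        return c_1 / c_m

    # E.g., with c_m proportional to 1/m, the result is ~m by construction.
    s_8 = theoretical_speedup(1.0, 1.0 / 8)   # -> 8.0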

Examples (III): Other Parallel Metrics
Efficiency makes the analysis of the speedup easier. We observe a moderate loss of efficiency as the number of processors increases. Since the serial fraction is almost constant, that loss of efficiency is due to the limited parallelism.
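The kind of table behind this slide can be recomputed from raw timings; here is a small sketch with hypothetical data, reusing the formulas from the earlier slides:

    # Hypothetical mean run times (seconds) for 1..8 processors.
    timings = {1: 120.0, 2: 63.0, 4: 34.0, 8: 19.5}
    t1 = timings[1]
    for m, tm in sorted(timings.items()):
        if m == 1:
            continue
        s = t1 / tm                         # speedup
        e = s / m                           # efficiency
        f = (1 / s - 1 / m) / (1 - 1 / m)   # serial fraction (Karp-Flatt)
        print(f"m={m}: speedup={s:.2f}, efficiency={e:.2f}, serial fraction={f:.3f}")

A roughly constant serial fraction alongside a falling efficiency is precisely the pattern the slide interprets as limited parallelism rather than growing overhead.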

Conclusions
In this work we have considered the issue of reporting experimental research with parallel metaheuristics. We have observed that speedup is the most important metric in this field, but several definitions of it are in use. Speedup can only be applied when all the methods find solutions of similar quality, and in those cases the most appropriate definition is the orthodox one. The use of other metrics (efficiency and serial fraction) is an interesting complement for performing a complete and fair comparison.

Reykjavik, Iceland, September 2006. Málaga (SPAIN). Questions?