MOEA Testing and Analysis

Performance Evaluation of Multiobjective Evolutionary Algorithms: Metrics

MOEA Testing and Analysis. Specifically, they suggest that a well-designed experiment follows these steps:
1. Define experimental goals;
2. Choose measures of performance (metrics);
3. Design and execute the experiment;
4. Analyze data and draw conclusions;
5. Report experimental results.

Choose measures of performance (metrics). Every algorithm maintains a group of nondominated individuals at the end of the run. Sometimes the result of one algorithm fully dominates that of another, which is the simplest case to judge. More often, however, some results from one algorithm dominate some from another, and vice versa. Another reason performance evaluation needs special consideration is that we are interested not only in convergence to PF* but also in the distribution of the individuals along PF*. Adequately evaluating convergence and distribution together is still an open problem in the field of MOEAs. Benchmark problem design is also an interesting field, because we want to conveniently generate problems with different shapes of PF*, different convergence difficulties, different dimensions, etc.

Performance Indices. After deciding which benchmark MOPs to optimize, we must decide carefully how to evaluate the performance of different MOEAs. These evaluation criteria are called performance indices (PI).

There are normally three issues to take into consideration when designing a good metric in a given domain (Zitzler, 2000):
1. Minimize the distance of the Pareto front produced by our algorithm to the true Pareto front (assuming we know its location).
2. Maximize the spread of solutions found, so that the distribution of vectors is as smooth and uniform as possible.
3. Maximize the number of elements of the Pareto-optimal set found.

The Need for Quality Measures. Is A better than B? Independent of user preferences, the answer is yes (strictly) or no. Dependent on user preferences, we can further ask: by how much, and in what aspects? Ideally, quality measures allow us to make both types of statements.

Independent of User Preferences. A Pareto set approximation (an algorithm's outcome) is a set of mutually incomparable solutions. For two approximation sets A and B:
- A weakly dominates B: A is not worse than B in all objectives;
- A is better than B: A weakly dominates B and the sets are not equal;
- A dominates B: A is better than B in at least one objective (and not worse in the others);
- A strictly dominates B: A is better than B in all objectives;
- A is incomparable to B: neither set is weakly better than the other.
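
These relations translate directly into code. Below is a minimal Python sketch (function names are my own, for illustration) of the objective-wise and set-wise relations for a minimization problem:

```python
def weakly_dominates(a, b):
    """a is not worse than b in any objective (minimization)."""
    return all(x <= y for x, y in zip(a, b))

def dominates(a, b):
    """a is not worse in any objective and better in at least one."""
    return weakly_dominates(a, b) and any(x < y for x, y in zip(a, b))

def strictly_dominates(a, b):
    """a is better than b in every objective."""
    return all(x < y for x, y in zip(a, b))

def set_weakly_dominates(A, B):
    """Every solution in B is weakly dominated by some solution in A."""
    return all(any(weakly_dominates(a, b) for a in A) for b in B)

def set_is_better(A, B):
    """A weakly dominates B and the sets are not equal."""
    return set_weakly_dominates(A, B) and set(map(tuple, A)) != set(map(tuple, B))

def sets_incomparable(A, B):
    """Neither set weakly dominates the other."""
    return not set_weakly_dominates(A, B) and not set_weakly_dominates(B, A)
```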

Dependent on User Preferences. Goal: quality measures compare two Pareto set approximations A and B. Applying the quality measures might give, for example:

Quality measure   A        B
hypervolume       432.34   420.13
distance          0.3308   0.4532
diversity         0.3637   0.3463
spread            0.3622   0.3601
cardinality       6        5

Comparison and interpretation of the quality values then yields a statement such as "A better".

1-Cardinality-based Performance Indices. The research produced in the last few years includes a wide variety of metrics that assess the performance of an MOEA in one of the three aspects above. Cardinality-based examples include:
- Number of obtained solutions
- Error ratio
- Coverage

The Number of Obtained Solutions |Sj| (Cardinality). (K. Deb: Multi-Objective Optimization Using Evolutionary Algorithms, Wiley, Chichester, U.K., 2001.) Let Sj be a solution set (j = 1, 2, 3, ..., J) and let S be the union of the J solution sets. For comparing the J solution sets (S1, S2, ..., SJ), we simply use the number of obtained solutions |Sj| in each set. [Figure: a solution set Sj plotted in (f1, f2) objective space; x marks a current solution, r a reference solution.]

Error Ratio (ER): This metric was proposed by Van Veldhuizen to indicate the percentage of solutions (among the nondominated vectors found so far) that are not members of the true Pareto-optimal set:

ER = (Σ_{i=1}^{n} e_i) / n,

where n is the number of solutions in the current nondominated set, e_i = 0 if solution i is a member of the Pareto-optimal set, and e_i = 1 otherwise. ER = 0 therefore indicates ideal behavior, since it means that all the solutions generated by our MOEA belong to the Pareto-optimal set of the problem. This metric addresses the third issue from the list previously provided.

Example: if two of three obtained solutions are not members of the true Pareto-optimal set, ER = 2/3.
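
As a sketch, the error ratio can be computed by membership testing against the true Pareto-optimal set, assuming both sets are given as objective vectors (the data below are made up to reproduce an ER of 2/3):

```python
def error_ratio(approximation, pareto_optimal_set):
    """ER: fraction of obtained solutions not in the true Pareto-optimal set."""
    optimal = set(map(tuple, pareto_optimal_set))
    errors = [0 if tuple(x) in optimal else 1 for x in approximation]
    return sum(errors) / len(errors)

# Hypothetical data: only the first of the three solutions is truly optimal.
approx = [(1.0, 4.0), (2.5, 2.5), (4.0, 1.5)]
true_pf = [(1.0, 4.0), (2.0, 2.0), (3.0, 1.0)]
print(error_ratio(approx, true_pf))  # 2/3 ≈ 0.667
```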

In 1999, Zitzler suggested a binary PI called coverage (C). C(S1, S2) is the fraction of the individuals in S2 that are weakly dominated by individuals in S1:

C(S1, S2) = |{ b ∈ S2 : ∃ a ∈ S1 that weakly dominates b }| / |S2|.

The larger C(S1, S2) is, the more S1 outperforms S2 under C.
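
A small self-contained Python sketch of the coverage indicator for minimization (names are illustrative):

```python
def coverage(S1, S2):
    """C(S1, S2): fraction of solutions in S2 weakly dominated by some solution in S1."""
    def weakly_dominates(a, b):
        return all(x <= y for x, y in zip(a, b))
    covered = sum(1 for b in S2 if any(weakly_dominates(a, b) for a in S1))
    return covered / len(S2)

# Note: C is not symmetric, so C(S1, S2) and C(S2, S1) should both be reported.
```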

2-Distance-based Performance Indices. Distance-based PIs evaluate the performance of the solutions according to their distance to PF*. Generational Distance (GD): In 1999, Van Veldhuizen suggested a unary PI called generational distance (GD). First we define the minimum distance from a solution x to a reference set S* as d_x = min_{r ∈ S*} d_{xr}.

The GD measure can then be written as

GD(Sj) = (1/|Sj|) Σ_{x ∈ Sj} min_{r ∈ S*} d_{xr},

where S* is a reference solution set for evaluating the solution set Sj, and d_{xr} is the distance between a current solution x and a reference solution r. GD(Sj) is thus the average distance from each solution in Sj to its nearest reference solution in S*; it is referred to as the generational distance. While the generational distance can only evaluate the convergence of the solution set Sj to S*, its inverted counterpart IGD(Sj), the average distance from each reference solution in S* to its nearest solution in Sj, can evaluate the distribution of Sj as well as the proximity of Sj to S*. [Figure: current solutions x and reference solutions r in (f1, f2) objective space, with the distances d_{xr} marked.]
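
A minimal Python sketch of GD (and of IGD by role reversal), using Euclidean distance in objective space. It follows the averaged form above; note that Van Veldhuizen's original formulation uses a quadratic mean, a detail that varies across papers:

```python
import math

def generational_distance(S, reference):
    """GD: average distance from each solution in S to its nearest reference point."""
    def dist(x, r):
        return math.sqrt(sum((xi - ri) ** 2 for xi, ri in zip(x, r)))
    return sum(min(dist(x, r) for r in reference) for x in S) / len(S)

def inverted_generational_distance(S, reference):
    """IGD: the same computation with the roles of S and the reference set swapped."""
    return generational_distance(reference, S)
```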

It should be clear that a value of GD = 0 indicates that all the elements generated are in the Pareto-optimal set. Therefore, any other value will indicate how “far” we are from the global Pareto front of our problem. This metric addresses the first issue from the list previously provided.

Metrics for Diversity. Spacing (SP): Here one wants to measure the spread (distribution) of vectors throughout the nondominated vectors found so far. Since the "beginning" and "end" of the current Pareto front are known, a suitably defined metric can judge how well the solutions in this front are distributed. Schott proposed such a metric, measuring the variance of the distances between neighboring vectors in the nondominated set. It is defined as

SP = sqrt( (1/(n−1)) Σ_{i=1}^{n} (d̄ − d_i)² ),

where d_i = min_{j≠i} Σ_k |f_k(x_i) − f_k(x_j)| is the L1 distance from solution i to its nearest neighbor, d̄ is the mean of all d_i, and n is the number of nondominated vectors found so far. SP = 0 means the members of the front are equidistantly spaced.
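
A self-contained Python sketch of Schott's spacing metric as defined above (L1 nearest-neighbor distances; names are illustrative):

```python
import math

def spacing(front):
    """SP: standard deviation of nearest-neighbor (L1) distances on the front.

    Assumes the front contains at least two solutions.
    """
    n = len(front)
    d = [
        min(
            sum(abs(fi - fj) for fi, fj in zip(front[i], front[j]))
            for j in range(n) if j != i
        )
        for i in range(n)
    ]
    d_mean = sum(d) / n
    return math.sqrt(sum((d_mean - di) ** 2 for di in d) / (n - 1))
```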

Hyperarea and Ratio (HA, HR): The hyperarea (hypervolume) and hyperarea-ratio metrics, which are Pareto compliant, relate to the area of objective space covered by PFknown for a two-objective MOP. This equates to the summation of all the rectangular areas bounded by some reference point and each (f1(x), f2(x)).

Mathematically, the hyperarea of a nondominated set PF with respect to a reference point ref is

HA(PF) = area( ∪_{x ∈ PF} rect(ref, (f1(x), f2(x))) ).

Also proposed is a hyperarea-ratio metric defined as HR = HA1 / HA2, where HA1 is the hyperarea of PFknown and HA2 is the hyperarea of PFtrue.

Hypervolume in 2D. [Figure: five nondominated points A–E plotted against Objective 1 and Objective 2, with a reference point; the shaded area gives Hypervolume {A, B, C, D, E} = 11.]
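
For two objectives (minimization), the hypervolume reduces to a sum of rectangular slabs after sorting the front by the first objective. A sketch, assuming the reference point is weakly dominated by every point in the front (the example data are made up):

```python
def hypervolume_2d(front, reference):
    """Area dominated by a 2-objective (minimization) front w.r.t. a reference point."""
    rx, ry = reference
    area, prev_y = 0.0, ry
    for x, y in sorted(set(map(tuple, front))):  # ascending in objective 1
        if y < prev_y:                       # skip dominated points
            area += (rx - x) * (prev_y - y)  # horizontal slab between y-levels
            prev_y = y
    return area

# Hypothetical front of four points against reference (5, 5): 4 + 3 + 2 + 1 = 10.
print(hypervolume_2d([(1, 4), (2, 3), (3, 2), (4, 1)], (5, 5)))  # 10.0
```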

Example: PFtrue's HA = 16 + 6 + 4 + 3 = 29 units², and PFknown's HA = 20 + 6 + 7.5 = 33.5 units², so HR = 33.5/29 = 1.155.