1 EASTERN MEDITERRANEAN UNIVERSITY
Computer Engineering Department CMPE538 EVOLUTIONARY MULTI-OBJECTIVE OPTIMIZATION cmpe.emu.edu.tr/cmpe538 Asst. Prof. Dr. Ahmet ÜNVEREN Office: CMPE212 Office Tel.:

2 TEXTBOOK
Coello Coello, Carlos A.; Van Veldhuizen, David A. & Lamont, Gary B., “Evolutionary Algorithms for Solving Multi-Objective Problems”, 2nd Ed., Springer, 2007

3 Alternative Text Books: APPLICATIONS OF MULTI-OBJECTIVE EVOLUTIONARY ALGORITHMS edited by Carlos A Coello Coello (CINVESTAV-IPN, Mexico) & Gary B Lamont (Air Force Institute of Technology, Wright-Patterson AFB, USA)

4 Multiobjective Evolutionary Algorithms and Applications (Advanced information and Knowledge Processing) (Hardcover) by K. C. Tan (Author), E. F. Khor (Author), T. H. Lee (Author)  

5 METHOD OF ASSESSMENT
Midterm I 30%
Midterm II 30%
Final 30%
Assignments 10%

6 AIMS & OBJECTIVES To present the motivation for and basic concepts of optimization according to multiple criteria as opposed to traditional single-criterion optimization To introduce evolutionary algorithms as a well-suited methodology for multiobjective optimization To teach the participants how to apply the methodology to multiobjective optimization problems and evaluate the obtained results

7 GENERAL LEARNING OUTCOMES (COMPETENCES)
On successful completion of this course, all students will have developed the knowledge and understanding needed to: use evolutionary multiobjective algorithms; choose the most appropriate optimization method for the problem at hand; and interpret multiobjective optimization outcomes.

8 SCHEDULE
Week 1: Basic Concepts
Week 2: MOP Evolutionary Algorithm Approaches
Week 3: MOP Evolutionary Algorithm Approaches (Cont.)
Week 4: MOEA Local Search and Coevolution
Week 5: MOEA Local Search and Coevolution (Cont.)
Week 6: MOEA Test Suites
Week 7: Metrics
Week 8: MOEA Theory and Issues
Week 9: MOEA Theory and Issues (Cont.)
Week 10: Applications
Week 11: Parallel Multiobjective Evolutionary Algorithms
Week 12: Multi-Criteria Decision Making
Week 13: Multi-Criteria Decision Making (Cont.)
Week 14: Alternative Metaheuristics for Multiobjective Optimization
Week 15: Some Promising Paths of Future Research
Week 16: Review

9 Basic Concepts
Introduction. Attributes, goals, criteria and objectives. Defining a multiobjective optimization problem. Pareto optimality. Pareto dominance and Pareto optimal set. Pareto front.

10 Introduction
Motivation: Real-world applications very often involve multiple conflicting objectives. Recently there has been growing interest in evolutionary multiobjective optimization algorithms, which combine two major disciplines: evolutionary computation and the theoretical frameworks of multicriteria decision making. In the first part, we define some fundamental concepts of multiobjective optimization, emphasizing the motivation for and advantages of using evolutionary algorithms.

11 Evolutionary algorithms (EAs) are heuristics that use natural selection as their search engine to solve problems. The use of EAs for search and optimization tasks has become very popular in the last few years, with constant development of new algorithms, theoretical achievements and novel applications. One of the emergent research areas in which EAs have become increasingly popular is multiobjective optimization. In multiobjective optimization problems, we have two or more objective functions to be optimized at the same time, instead of having only one. As a consequence, there is no unique solution to multiobjective optimization problems; instead, we aim to find all of the good trade-off solutions available (the so-called Pareto optimal set).

12 What is a multiobjective optimization problem?
The Multiobjective Optimization Problem (MOP) (also called multicriteria optimization, multiperformance or vector optimization problem) can be defined (in words) as the problem of finding (Osyczka, 1985): a vector of decision variables which satisfies constraints and optimizes a vector function whose elements represent the objective functions. These functions form a mathematical description of performance criteria which are usually in conflict with each other. Hence, the term “optimize” means finding such a solution which would give the values of all the objective functions acceptable to the decision maker.

13 Even though some real-world problems can be reduced to a single objective, it is often hard to define all aspects of the task in terms of a single objective. Defining multiple objectives often gives a better idea of the task. In single-objective optimization, the search space is often well defined. As soon as there are several possibly contradicting objectives to be optimized simultaneously, there is no longer a single optimal solution but rather a whole set of possible solutions of equivalent quality.
Definition 1 (General Single-Objective Optimization Problem): minimize (or maximize) f(x) subject to gi(x) ≤ 0, i = 1, …, m, and hj(x) = 0, j = 1, …, p, with x in Ω. A solution minimizes (or maximizes) the scalar f(x), where x is an n-dimensional decision variable vector x = (x1, …, xn) from some universe Ω.
[Figure: plot of f(x) showing the optimum at x = x*]

14 The method for finding the global optimum (which may not be unique) of any function is referred to as Global Optimization. Definition: (Single-Objective Global Minimum Optimization)
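The formula on this slide did not survive transcription; reconstructed from the surrounding definitions (Definition 1 and its universe Ω), the standard statement reads approximately:

```latex
\textbf{Definition (Single-Objective Global Minimum):}\quad
\text{Given } f:\Omega \to \mathbb{R},\ \Omega \subseteq \mathbb{R}^{n},\ \Omega \neq \emptyset,
\text{ find } \vec{x}^{\,*} \in \Omega \text{ such that }
f(\vec{x}^{\,*}) \le f(\vec{x}) \ \ \forall\, \vec{x} \in \Omega .
```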

15 What is a multiobjective optimization problem?
The Multiobjective Optimization Problem (MOP) (also called multicriteria optimization, multiperformance or vector optimization problem) can be defined (in words) as the problem of finding (Osyczka, 1985): “a vector of decision variables which satisfies constraints and optimizes a vector function whose elements represent the objective functions. These functions form a mathematical description of performance criteria which are usually in conflict with each other. Hence, the term “optimize” means finding such a solution which would give the values of all the objective functions acceptable to the decision maker”.

16 Decision Variables OR

17 Constraints
In most optimization problems there are restrictions imposed by the particular characteristics of the environment or by the available resources (e.g., physical limitations, time restrictions). These restrictions must be satisfied for a solution to be considered acceptable. Collectively they are called constraints, and they describe dependences among the decision variables and the constants (or parameters) involved in the problem. Constraints are expressed as mathematical inequalities or equalities, where p < n; if p ≥ n the problem is overconstrained, and n − p gives the number of degrees of freedom.
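The inequality and equality forms referred to on the slide (images in the original) are, in the notation of Definition 1:

```latex
g_i(\vec{x}) \le 0, \quad i = 1, \dots, m \qquad \text{(inequality constraints)}
```
```latex
h_j(\vec{x}) = 0, \quad j = 1, \dots, p \qquad \text{(equality constraints)}
```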

18 Commensurable vs. Non-Commensurable
In order to know how “good” a certain solution is, it is necessary to have some criteria to evaluate it. These criteria are expressed as computable functions of the decision variables, called objective functions. In real-world problems, some functions are in conflict with others, and some must be minimized while others are maximized. These objective functions may be commensurable (measured in the same units) or non-commensurable (measured in different units). The multiple objectives being optimized almost always conflict, placing a partial, rather than total, ordering on the search space. In fact, finding the global optimum of a general MOP is an NP-Complete problem.

19 When we try to optimize several objectives at the same time, the search space also becomes partially ordered. Instead of a single optimum, there is a set of optimal trade-offs between the conflicting objectives. A multiobjective optimization problem is defined by a function f which maps a set of decision variables to a set of objective values.

22 A Formal Definition
MOPs generally take the following form: minimize (or maximize) a vector of k objective functions subject to the problem's constraints.
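The formal statement on this slide did not survive transcription; a reconstruction consistent with the constraint notation used on the earlier slides:

```latex
\min_{\vec{x}\,\in\,\Omega} \ \vec{F}(\vec{x}) =
\bigl(f_1(\vec{x}),\, f_2(\vec{x}),\, \dots,\, f_k(\vec{x})\bigr)^{T}
\quad \text{subject to} \quad
g_i(\vec{x}) \le 0,\ i = 1,\dots,m; \qquad
h_j(\vec{x}) = 0,\ j = 1,\dots,p .
```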

23 What is the notion of optimum in multiobjective optimization?
Having several objective functions, the notion of “optimum” changes, because in MOPs, we are really trying to find good compromises (or “trade-offs”) rather than a single solution as in global optimization. The notion of “optimum” that is most commonly adopted is that originally proposed by Francis Ysidro Edgeworth in 1881.

24 This notion was later generalized by Vilfredo Pareto (in 1896). Although some authors apply the name Edgeworth–Pareto optimum to this notion, we will use the most commonly accepted term: Pareto optimum.

25 Pareto Optimality
Simply: a state A dominates a state B if A is better than B in at least one objective function and not worse with respect to all other objective functions.
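The dominance test above translates directly into a few lines of code. A minimal sketch (the function name and the maximization convention are illustrative, not from the course):

```python
def dominates(a, b):
    """True if objective vector `a` Pareto-dominates `b` (maximization):
    a is at least as good as b everywhere and strictly better somewhere."""
    return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

print(dominates((4, 5), (3, 5)))  # better in f1, equal in f2 -> True
print(dominates((4, 2), (3, 5)))  # a trade-off: neither dominates -> False
```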

26 Multiobjective Optimization Problem
Maximize two objectives subject to constraints. [Figure: objective space with many Pareto-optimal solutions and the dominated solutions]

27 Pareto Dominance
[Figure: maximize both objectives; A dominates B, i.e., B is dominated by A (A is better than B)]

28 Pareto Dominance
[Figure: maximize both objectives; A and C are non-dominated with respect to each other]

29 Pareto Optimal Solutions
A Pareto optimal solution is a solution that is not dominated by any other solution. [Figure: Pareto optimal solutions in a maximize–maximize objective space]
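For a finite set of candidate objective vectors, the non-dominated subset can be extracted with a simple filter. An illustrative sketch (assuming both objectives are maximized; O(n²), fine for small sets):

```python
def dominates(a, b):
    # a dominates b (maximization): no worse everywhere, better somewhere
    return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

def pareto_set(points):
    """Keep only the points not dominated by any other point."""
    return [p for p in points if not any(dominates(q, p) for q in points)]

pts = [(1, 5), (2, 4), (3, 3), (2, 2), (4, 1), (1, 1)]
print(pareto_set(pts))  # -> [(1, 5), (2, 4), (3, 3), (4, 1)]
```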

32 MOEA Goals: Preserve nondominated points in objective space and the associated solution points in decision space. Continue to make algorithmic progress towards the Pareto front in objective function space. Maintain diversity of points on the Pareto front (phenotype space) and/or of Pareto optimal solutions in decision space (genotype space). Provide the decision maker (DM) with a sufficient but limited number of Pareto points from which to select, resulting in decision variable values.

33 Two Goals of EMO
EMO algorithms are designed to simultaneously improve diversity (D) and convergence (C). [Figure: desired evolution of a population toward the Pareto front]

34 The Knapsack Problem
Items: weight = 750g, profit = 5; weight = 1500g, profit = 8; weight = 300g, profit = 7; weight = 1000g, profit = 3.
Goal: choose the subset that maximizes overall profit and minimizes total weight.
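With only four items, the whole solution space (2⁴ = 16 subsets) can be enumerated and the trade-off front extracted directly. A sketch using the item data from the slide (helper names are illustrative); note that dominance here mixes directions: lower weight and higher profit are both better.

```python
from itertools import combinations

# (weight in grams, profit) for the four items on the slide
items = [(750, 5), (1500, 8), (300, 7), (1000, 3)]

def all_subsets(seq):
    for r in range(len(seq) + 1):
        yield from combinations(seq, r)

# Each subset becomes a point: total weight (minimize), total profit (maximize)
points = [(sum(w for w, _ in s), sum(p for _, p in s)) for s in all_subsets(items)]

def dominates(a, b):
    # a dominates b: no heavier AND no less profitable, and not identical
    return a[0] <= b[0] and a[1] >= b[1] and a != b

front = sorted({p for p in points if not any(dominates(q, p) for q in points)})
print(front)
# -> [(0, 0), (300, 7), (1050, 12), (1800, 15), (2550, 20), (3550, 23)]
```

The empty subset (0g, profit 0) is itself non-dominated, which is why it appears on the front.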

35 The Solution Space
[Figure: all subsets plotted as points in profit–weight space, with weights ranging up to about 3000g]
36 The Trade-off Front
Observations: there is no single optimal solution, but some solutions are better than others. This leaves two tasks: finding the good solutions, and selecting one of them. [Figure: trade-off front in profit–weight space]

37 Decision Making: Selecting a Solution
Approaches: profit more important than cost (ranking); weight must not exceed 2400g (constraint). [Figure: solutions above 2400g marked as too heavy]

38 When to Make the Decision
Before optimization: rank the objectives, define constraints, then search for one solution. After optimization: search for a set of solutions, then select one considering constraints, etc.; decision making is often easier this way, and evolutionary algorithms are well suited to it. [Figure: profit–weight plot with the too-heavy region marked]

39 General Optimization Algorithm Overview
General search and optimization techniques are classified into three categories: enumerative, deterministic, and stochastic (random).


41 Enumerative schemes are perhaps the simplest search strategy: within some defined finite search space, each possible solution is evaluated. Deterministic algorithms attempt to limit the search by incorporating problem domain knowledge; many of these are considered graph/tree search algorithms. Stochastic methods require a function assigning fitness values to possible (or partial) solutions, and an encode/decode (mapping) mechanism between the problem and algorithm domains. Many MOPs are high-dimensional, discontinuous, multimodal, and/or NP-Complete. Deterministic methods are often ineffective when applied to NP-Complete or other high-dimensional problems because they are handicapped by their requirement for problem domain knowledge (heuristics) to direct or limit search in these exceptionally large search spaces. Problems exhibiting one or more of these characteristics are termed irregular.

42 Deterministic Approaches:
Greedy algorithms make locally optimal choices, assuming optimal subsolutions are always part of the globally optimal solution; these algorithms fail when that assumption does not hold. Hillclimbing algorithms search in the direction of steepest ascent from the current position. They work best on unimodal functions, but the presence of local optima, plateaus, or ridges in the fitness (search) landscape reduces their effectiveness. Greedy and hillclimbing strategies are irrevocable: they repeatedly expand a node, examine all possible successors (then expand the “most promising” node), and keep no record of past expanded nodes.
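A stochastic variant of the hillclimbing idea fits in a few lines (names and parameter values are illustrative, not from the course). It only ever accepts improving moves, which is exactly why it stalls on local optima:

```python
import random

def hill_climb(f, x, step=0.1, iters=1000, seed=0):
    """Maximize f by repeatedly trying a random nearby point and
    moving there only if it improves f (irrevocable uphill moves)."""
    rng = random.Random(seed)
    for _ in range(iters):
        cand = x + rng.uniform(-step, step)
        if f(cand) > f(x):
            x = cand
    return x

# Unimodal function: hillclimbing works well and approaches the optimum at x = 2
print(round(hill_climb(lambda x: -(x - 2) ** 2, x=0.0), 1))
```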

43 Many real-world scientific and engineering MOPs are irregular, making enumerative and deterministic search techniques unsuitable. Stochastic search and optimization approaches such as Simulated Annealing (SA), Monte Carlo methods, Tabu search, and Evolutionary Computation (EC) were developed as alternatives for solving these irregular problems. Stochastic methods require a function assigning fitness values to possible (or partial) solutions, and an encode/decode (mapping) mechanism between the problem and algorithm domains. Although some are shown to “eventually” find an optimum, most cannot guarantee the optimal solution; in general they provide good solutions to a wide range of optimization problems which traditional deterministic search methods find difficult.

44 A random search is the simplest stochastic search strategy: it simply evaluates a given number of randomly selected solutions. A random walk is very similar, except that the next solution evaluated is randomly selected using the last evaluated solution as a starting point.
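Pure random search can be sketched in a few lines (illustrative names; the random-walk variant would instead draw each candidate near the previously evaluated solution):

```python
import random

def random_search(f, bounds, n=1000, seed=1):
    """Evaluate n uniformly random points in `bounds`; keep the best (maximize f)."""
    rng = random.Random(seed)
    best = None
    for _ in range(n):
        x = rng.uniform(*bounds)
        if best is None or f(x) > f(best):
            best = x
    return best

print(round(random_search(lambda x: -(x - 3) ** 2, bounds=(0, 10)), 1))  # near 3
```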

45 Simulated Annealing Search
Idea: escape local maxima by allowing some “bad” moves but gradually decreasing their frequency.
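That idea can be sketched as follows (parameter choices are illustrative): worse moves are accepted with probability exp(Δ/T), and lowering the temperature T makes them progressively rarer.

```python
import math
import random

def simulated_annealing(f, x, t0=1.0, cooling=0.995, iters=2000, seed=0):
    """Maximize f: always accept uphill moves; accept a downhill move with
    probability exp(delta / T), where the temperature T decays each step."""
    rng = random.Random(seed)
    t = t0
    for _ in range(iters):
        cand = x + rng.uniform(-0.5, 0.5)
        delta = f(cand) - f(x)
        if delta > 0 or rng.random() < math.exp(delta / t):
            x = cand
        t *= cooling  # geometric cooling schedule
    return x

print(round(simulated_annealing(lambda x: -(x - 2) ** 2, x=10.0), 1))  # near 2
```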

46 Monte Carlo methods involve simulations dealing with stochastic events;
they employ a pure random search where any selected trial solution is fully independent of any previous choice and its outcome. The current “best” solution and associated decision variables are stored as a comparator.

47 Tabu search is a meta-strategy developed to avoid getting “stuck” at local optima.
It keeps a record of both visited solutions and the “paths” by which they were reached in different “memories.” This information restricts the choice of solutions to evaluate next. Tabu search is often integrated with other optimization methods.
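A minimal sketch of that mechanism, keeping only a short memory of visited solutions (the landscape and names are illustrative). Because the best non-tabu neighbour is taken even when it is worse than the current point, the search can walk out of a local optimum that would trap hillclimbing:

```python
def tabu_search(f, start, neighbors, iters=30, tenure=5):
    """Maximize f: move to the best neighbor not in the tabu list,
    even if it is worse than the current solution."""
    current = best = start
    tabu = [start]                       # short-term memory of visited points
    for _ in range(iters):
        cands = [n for n in neighbors(current) if n not in tabu]
        if not cands:
            break                        # every neighbor is tabu: stop
        current = max(cands, key=f)
        tabu.append(current)
        if len(tabu) > tenure:
            tabu.pop(0)                  # oldest entry leaves the memory
        if f(current) > f(best):
            best = current
    return best

# Toy landscape: local optimum at x = 1 (value 3), global optimum at x = 6 (value 9)
values = [0, 3, 1, 0, 2, 4, 9, 5]
f = lambda x: values[x]
step = lambda x: [n for n in (x - 1, x + 1) if 0 <= n < len(values)]
print(tabu_search(f, start=1, neighbors=step))  # -> 6 (escapes the local optimum)
```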

48 EC is a generic term for several stochastic search methods which computationally simulate the natural evolutionary process. EC embodies the techniques of genetic algorithms (GAs), evolution strategies (ESs), and evolutionary programming (EP), collectively known as evolutionary algorithms (EAs). These techniques are loosely based on natural evolution and the Darwinian concept of “survival of the fittest.” Common to all of them are the reproduction, random variation, competition, and selection of contending individuals within some population. In general, an EA consists of a population of encoded solutions (individuals) manipulated by a set of operators and evaluated by some fitness function.
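The last sentence describes the generic EA skeleton. A minimal single-objective sketch on the classic OneMax problem (all names and parameter values are illustrative, not from the course): binary individuals, binary tournament selection, one-point crossover, and bit-flip mutation.

```python
import random

def evolve(fitness, pop_size=30, n_bits=16, gens=60, pm=0.05, seed=0):
    """Generational EA: a population of bit-string individuals is manipulated
    by selection, crossover, and mutation, and evaluated by `fitness`."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
    for _ in range(gens):
        def tournament():                 # binary tournament selection
            a, b = rng.sample(pop, 2)
            return a if fitness(a) >= fitness(b) else b
        nxt = []
        while len(nxt) < pop_size:
            p1, p2 = tournament(), tournament()
            cut = rng.randrange(1, n_bits)                    # one-point crossover
            child = [b ^ (rng.random() < pm) for b in p1[:cut] + p2[cut:]]  # bit-flip
            nxt.append(child)
        pop = nxt
    return max(pop, key=fitness)

# OneMax: fitness is the number of 1-bits; the optimum is the all-ones string
best = evolve(sum)
print(sum(best))
```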

49 EVOLUTIONARY ALGORITHM BASICS

