Tamaki Okuda ● Tomoyuki Hiroyasu   Mitsunori Miki   Shinya Watanabe  

Presentation transcript:

DCMOGA: Distributed Cooperation model of Multi-Objective Genetic Algorithm Tamaki Okuda ● Tomoyuki Hiroyasu, Mitsunori Miki, Shinya Watanabe, Doshisha University, Kyoto, Japan. Thank you, Chairperson. I'm Tamaki Okuda, a graduate student at Doshisha University in Kyoto, Japan. I will now present our study, titled "DCMOGA: Distributed Cooperation model of Multi-Objective Genetic Algorithm".

Multi-objective Optimization Problems Multi-objective Optimization Problems (MOPs): when an optimization problem has several objective functions, it is called a multi-objective or multi-criterion problem. Design variables X = { x1, x2, … , xn } Objective functions F = { f1(x), f2(x), … , fm(x) } Constraints Gi(x) < 0 ( i = 1, 2, … , k ) f1(x): Maximize f2(x): Maximize Pareto-optimal front Non-dominated solutions Feasible region. When an optimization problem has several objective functions, it is called a multi-objective or multi-criterion problem, an MOP. There are often trade-off relations between the objective functions, so there is no single optimal solution. Instead, MOPs use the concept of the Pareto-optimal solution. This figure shows the Pareto-optimal front of such a problem: the horizontal axis represents f1(x) and the vertical axis f2(x), so solutions toward the upper right are better. The Pareto-optimal front is drawn as a gray line and the non-dominated solutions as green points. The first goal in solving an MOP is to find the Pareto-optimal solutions.
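The dominance relation behind this figure can be made concrete with a short sketch (hypothetical helper names; both objectives are maximized, as on this slide):

```python
def dominates(a, b):
    """True if objective vector a dominates b (all objectives maximized):
    a is no worse in every objective and strictly better in at least one."""
    return (all(ai >= bi for ai, bi in zip(a, b))
            and any(ai > bi for ai, bi in zip(a, b)))

def non_dominated(points):
    """Return the non-dominated subset of a list of objective vectors."""
    return [p for p in points
            if not any(dominates(q, p) for q in points if q != p)]

# (2, 2) is dominated by (3, 3); (4, 0) is dominated by (5, 1)
front = non_dominated([(1, 5), (3, 3), (5, 1), (2, 2), (4, 0)])
```

The surviving points form the non-dominated front drawn in green on the slide.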

EMOs: Evolutionary Multi-criterion Optimization Typical methods in EMO: VEGA: Schaffer (1985); MOGA: Fonseca (1993); SPEA2: Zitzler (2001); NPGA2: Erickson, Mayer, Horn (2001); NSGA-II: Deb (2001). Good non-dominated solutions: minimal distance to the Pareto-optimal front; uniform distribution; maximum spreading. >> We propose a new model of EMOs. This model searches for non-dominated solutions that are widespread and close to the Pareto-optimal front, and it can make existing algorithms more efficient. Evolutionary algorithms applied to MOPs are often called EMOs, and these are some of the leading studies in this category. The goal of these methods is to find a good non-dominated set, which satisfies three points. First, the distance of the resulting non-dominated front to the Pareto-optimal front is minimized. Second, the non-dominated solutions are uniformly distributed, that is, the found solutions are well dispersed. Third, the spread is maximized, that is, the extent of the non-dominated front is as wide as possible. In this presentation, we propose a new model of EMOs that searches for non-dominated solutions that are widespread and close to the Pareto-optimal front, and that makes existing algorithms more efficient.

DCMOGA DCMOGA: Distributed Cooperation model of Multi-Objective GA (the DC-scheme for MOGA). The features of the DC-scheme: N+1 sub-populations (for N objectives); 1 group searches for the Pareto optimum by MOGA; N groups search for the optimum of each objective by SOGA (Single-Objective GA); cooperative search: solutions are exchanged between the groups, and each sub-population size is adjusted. The proposed model is called the "Distributed Cooperation model of MOGA", DCMOGA, and in this presentation its framework is called the DC-scheme. The DC-scheme has the following features. When there are N objective functions, N+1 sub-populations are generated. One group searches for the Pareto-optimal solutions by a Multi-Objective GA, MOGA. The other N groups each search for the optimum of one objective function by a single-objective GA, SOGA. The other feature is the cooperative search, which consists of the exchange of solutions between the groups and the adjustment of each sub-population size. The next slides describe these features in detail.

N+1 groups (sub-populations) MOGA group: the Pareto-optimal solutions are searched by MOGA. SOGA groups: the optimum of the ith objective function is searched by SOGA. In the DC-scheme, there are N+1 groups. One of them, called the MOGA group, searches for the Pareto-optimal solutions by MOGA. The other N groups, called the SOGA groups, each search for the optimum of the ith objective function by SOGA, so each SOGA group has its own objective function. This figure shows the N+1 groups in the DC-scheme: in the MOGA group, the Pareto-optimal solutions are searched by MOGA; in the SOGA F1 group, the optimum of the first objective function F1 is searched by SOGA; in the SOGA F2 group, the optimum of the second objective function F2 is searched by SOGA.

Cooperative search (1) Exchange of solutions: the best solutions are exchanged between the MOGA group and each SOGA group. In the DC-scheme, a cooperative search is used. After some GA iterations, the best solutions are exchanged between the MOGA group and each SOGA group. This figure shows the exchange. In the MOGA group, the green point indicates the solution exchanged from the SOGA f1 group, that is, the best solution of the SOGA F1 group, and the blue point indicates the solution exchanged from the SOGA f2 group, that is, the best solution of the SOGA F2 group. In the SOGA f1 group, the purple point indicates the solution exchanged from the MOGA group, namely the MOGA group's best solution on the first objective. In the SOGA f2 group, the purple point likewise indicates the MOGA group's best solution on the second objective. Solutions are not exchanged between the SOGA groups themselves.

Cooperative search (2) Adjustment of each sub-population size: some individuals move to another group. The adjustment is based on the objective values of the best solution in each group. Each sub-population size changes, but the whole population size does not change. >> The difference between the search abilities of the groups is reduced. In the DC-scheme, each sub-population size is adjusted: some individuals move to another group according to the objective values of the best solution in each group. Each sub-population size changes, but the whole population size does not, so the difference between the search abilities of the groups is reduced and the solutions are derived through the cooperation of the MOGA and SOGA groups. This figure shows the adjustment. When the best solution of the MOGA group is better than that of the SOGA F1 group, some individuals in the MOGA group move to the SOGA F1 group. When the best solution of the SOGA F2 group is better than that of the MOGA group, some individuals in the SOGA F2 group move to the MOGA group.

The algorithm of the DC-scheme >> N objective functions: all individuals are initialized; all individuals are divided into N+1 groups; in the MOGA group the Pareto-optimal solutions are searched, and in each SOGA group the optimum is searched; after some iterations, the solutions are exchanged between the MOGA group and each SOGA group; the exchanged solutions are compared and each sub-population size is adjusted; the terminal condition is checked. >> 2 objective functions. In this slide, I describe the algorithm of DCMOGA for an MOP with N objective functions. Step 1: all individuals are initialized. Step 2: all individuals are divided into N+1 groups. Step 3: in the MOGA group, the Pareto-optimal solutions are searched by MOGA, and in each SOGA group, the optimum of its objective function is searched by a single-objective GA. Step 4: after some GA iterations, the best solutions in each group are exchanged between the MOGA group and each SOGA group, the exchanged solutions are compared, and each sub-population size is adjusted. Step 5: the terminal condition is checked; if it is not satisfied, the process returns to step 3. This figure shows the DC-scheme algorithm with 2 objective functions: all individuals are initialized and divided into 3 groups; in each SOGA group the optimum of one objective function is searched by SOGA, and in the MOGA group the Pareto-optimal solutions are searched by MOGA.
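As a rough illustration, the five steps above can be sketched for 2 objectives. Everything in this sketch is hypothetical: per-individual hill climbing stands in for the actual GA operators, a scalarized sum stands in for the MOGA group's Pareto-based search, the toy objectives f1 and f2 are made up, and the one-individual migration rule is a simplification of the size-adjustment scheme.

```python
import random

# Toy 2-objective problem (both maximized); not one of the paper's test problems
def f1(x): return -sum((xi - 1.0) ** 2 for xi in x)
def f2(x): return -sum((xi + 1.0) ** 2 for xi in x)

def evolve(group, fitness, step=0.3):
    """Stand-in for one GA generation: mutate each individual, keep the better one."""
    out = []
    for x in group:
        y = [xi + random.uniform(-step, step) for xi in x]
        out.append(y if fitness(y) > fitness(x) else x)
    return out

random.seed(0)
n, dim = 30, 3
# Step 1: initialize; Step 2: divide into N+1 = 3 groups
pop = [[random.uniform(-2.0, 2.0) for _ in range(dim)] for _ in range(3 * n)]
moga, soga1, soga2 = pop[:n], pop[n:2 * n], pop[2 * n:]

for _ in range(20):
    # Step 3: each group searches with its own criterion
    moga = evolve(moga, lambda x: f1(x) + f2(x))
    soga1 = evolve(soga1, f1)
    soga2 = evolve(soga2, f2)
    # Step 4: exchange best solutions between the MOGA group and each SOGA group
    moga[0], soga1[0] = max(soga1, key=f1), max(moga, key=f1)
    moga[1], soga2[0] = max(soga2, key=f2), max(moga, key=f2)
    # ...then adjust sub-population sizes: the group whose best value on an
    # objective is better hands an individual to the other group
    for soga, f in ((soga1, f1), (soga2, f2)):
        if f(max(moga, key=f)) > f(max(soga, key=f)) and len(moga) > 2:
            soga.append(moga.pop())
        elif len(soga) > 2:
            moga.append(soga.pop())
# Step 5 (terminal condition) is the fixed iteration count above; only the
# sub-population sizes change, never the whole population size.
```

The invariant to notice is the one from the previous slide: individuals migrate between groups, but the total population size stays constant.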

Combined MOGA and SOGA method DCMOGA: Distributed Cooperation model of MOGA (DC-scheme) >> MOGA and SOGA are combined, and the DC-scheme can combine any MOGA with any SOGA. Algorithms used: SOGA: DGA. MOGA: MOGA (Fonseca), SPEA2 (Zitzler), NSGA-II (Deb). We propose the Distributed Cooperation model of MOGA, the Distributed Cooperation scheme. In the DC-scheme, a multi-objective GA and a single-objective GA are combined, and any MOGA can be combined with any SOGA. In this presentation, DGA is used as the single-objective GA, and three multi-objective methods are used: the improved MOGA proposed by Fonseca, SPEA2 proposed by Zitzler, and NSGA-II proposed by Deb.

Test Problem: KP750-m 0/1 Knapsack Problem (750 items, m knapsacks) - a combinatorial problem. Objectives: Constraints: profit of item j according to knapsack i; weight of item j according to knapsack i; capacity of knapsack i. From this slide, I describe the numerical examples. KP750-m is the multi-objective 0/1 knapsack problem, a combinatorial problem with 750 items and m knapsacks; the equation on the slide shows its formulation. The knapsack instance used in this presentation is KP750-3.
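The formulation described on the slide matches the standard multi-objective 0/1 knapsack model: maximize the profit sum for each knapsack, subject to each knapsack's weight capacity. A minimal evaluation sketch with a tiny made-up instance (not the actual KP750-3 data):

```python
def knapsack_objectives(x, profit, weight, capacity):
    """Evaluate the m objectives of the multi-objective 0/1 knapsack problem.

    x            : 0/1 selection vector over the items
    profit[i][j] : profit of item j according to knapsack i
    weight[i][j] : weight of item j according to knapsack i
    capacity[i]  : capacity of knapsack i
    Returns the m profit sums, or None if any capacity constraint is violated.
    """
    for w_i, c_i in zip(weight, capacity):
        if sum(w * xj for w, xj in zip(w_i, x)) > c_i:
            return None  # infeasible: this knapsack is over capacity
    return tuple(sum(p * xj for p, xj in zip(p_i, x)) for p_i in profit)

# Tiny made-up instance (m = 2 knapsacks, 4 items)
profit = [[10, 5, 8, 3], [2, 9, 4, 7]]
weight = [[4, 3, 5, 2], [3, 4, 2, 5]]
capacity = [9, 8]
objs = knapsack_objectives((1, 1, 0, 0), profit, weight, capacity)
```

KP750-3 is the same structure with 750 items and 3 knapsacks, so each candidate bit string of length 750 maps to a 3-dimensional objective vector.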

Test Problem: KUR KUR (2 objective functions, 100 design variables) - continuous - multi-modal. KUR is a continuous function; f2 is multi-modal, and f1 has pair-wise interactions among the variables.
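Assuming the standard Kursawe definition of KUR (both objectives minimized), the two objectives can be written as:

```python
import math

def kur(x):
    """Kursawe's test function (both objectives minimized).
    f1 couples neighbouring variables pairwise; f2 is multi-modal via sin^3."""
    f1 = sum(-10.0 * math.exp(-0.2 * math.sqrt(x[i] ** 2 + x[i + 1] ** 2))
             for i in range(len(x) - 1))
    f2 = sum(abs(xi) ** 0.8 + 5.0 * math.sin(xi) ** 3 for xi in x)
    return f1, f2

vals = kur([0.0] * 100)  # the 100-variable instance used on this slide
```

At the origin, each of the 99 pairwise terms of f1 contributes -10, so vals is (-990.0, 0.0).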

Test Problem: ZDT4 ZDT4 (2 objective functions, 10 design variables) - continuous - multi-modal. ZDT4 is also a continuous function, and its main feature is multi-modality.
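Assuming the standard ZDT4 definition (both objectives minimized; x[0] in [0, 1], the other nine variables in [-5, 5]), a sketch:

```python
import math

def zdt4(x):
    """ZDT4 (2 objectives minimized, 10 design variables).
    The g term has many local optima, which makes the function multi-modal."""
    f1 = x[0]
    g = 1.0 + 10.0 * (len(x) - 1) + sum(
        xi ** 2 - 10.0 * math.cos(4.0 * math.pi * xi) for xi in x[1:])
    f2 = g * (1.0 - math.sqrt(f1 / g))
    return f1, f2

# On the Pareto-optimal front, x[1:] are all zero, g == 1, and f2 = 1 - sqrt(f1)
front_point = zdt4([0.25] + [0.0] * 9)
```

The many local optima of g are what trap a plain MOGA in local solutions on this problem, which is the behaviour discussed in the ZDT4 result slides below.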

Performance Metrics Function C, the coverage of two sets: A: front 1, B: front 2; C(A,B) = 1/5 = 0.2, C(B,A) = 2/4 = 0.5. In this presentation, the coverage of two sets, proposed by Zitzler, is used as the performance metric. This metric compares a pair of non-dominated sets; the function C is defined by the equation on the slide. This figure shows the coverage of two sets for a maximization problem with sets A and B. The green line represents front 1, made by set A, and the purple line represents front 2, made by set B. The orange points are non-dominated solutions, the blue points are solutions dominated by other solutions, and the gray line is the Pareto-optimal front. Set B has 5 members, 1 of which is dominated by set A, so C(A,B) = 0.2. Set A has 4 members, 2 of which are dominated by set B, so C(B,A) = 0.5.
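A small sketch of this coverage metric, using weak dominance and maximization as on this slide. The two fronts below are made up for illustration; they are not the sets in the slide's figure:

```python
def covers(a, b):
    """a weakly dominates b (maximization): at least as good in every objective."""
    return all(ai >= bi for ai, bi in zip(a, b))

def coverage(A, B):
    """Zitzler's C(A, B): the fraction of B weakly dominated by some member of A."""
    return sum(1 for b in B if any(covers(a, b) for a in A)) / len(B)

# Made-up example fronts (2 objectives, both maximized)
A = [(2, 8), (5, 5), (8, 2)]
B = [(1, 7), (6, 6), (9, 3)]
cAB, cBA = coverage(A, B), coverage(B, A)
```

Note that C is not symmetric, which is why the slide reports both C(A,B) and C(B,A).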

Applied models and Parameters MOGA / DCMOGA, SPEA2 / DCSPEA2, NSGA-II / DCNSGA-II. GA operators - Crossover: 2-point crossover; Mutation: bit flip. Parameters: Chromosome length: 750 (KP750-2), 2000 (KUR), 200 (ZDT4). Population size: 250 / 100. Crossover rate: 1.0. Mutation rate: 1/L (L: chromosome length). Terminal condition: 5 x 10^5 (KP750-2), 10 x 10^6 (KUR), 2.5 x 10^4 (ZDT4). Number of trials: 30. This slide shows the applied models and parameters. Six models are applied to the problems: MOGA, SPEA2, NSGA-II, and the models combined with the DC-scheme, DCMOGA, DCSPEA2, and DCNSGA-II. The table shows the parameters and GA operators used in these experiments.

Results: KP750-3 DCMOGA, MOGA << with DC-scheme: more widespread << without DC-scheme: closer to the Pareto optimum. These figures show the non-dominated solutions derived by each model on KP750-3 over 30 trials; the three axes represent f1(x), f2(x), and f3(x), so solutions toward that side are better. In the projected figure, the horizontal axis represents F1 and the vertical axis F2 of KP750-3, so solutions toward the upper right are better. The black points are the result of DCMOGA and the blue points the result of MOGA. With the DC-scheme, the algorithm derives more widespread non-dominated solutions; without it, the non-dominated solutions are closer to the Pareto-optimal front.

Results: KP750-3 DCSPEA2, SPEA2 << with DC-scheme: more widespread. These figures show the results of SPEA2 and DCSPEA2. The horizontal axis represents F1 and the vertical axis F2 of KP750-3, so solutions toward the upper right are better. The black points are the result of DCSPEA2 and the blue points the result of SPEA2. With the DC-scheme, the algorithm derives more widespread non-dominated solutions.

Results: KP750-3 DCNSGA-II, NSGA-II << with DC-scheme: more widespread << without DC-scheme: closer to the Pareto optimum. These figures show the results of NSGA-II and DCNSGA-II. The horizontal axis represents F1 and the vertical axis F2 of KP750-3, so solutions toward the upper right are better. The black points are the result of DCNSGA-II and the blue points the result of NSGA-II. With the DC-scheme, the algorithm derives more widespread non-dominated solutions; without it, the non-dominated solutions are closer to the Pareto-optimal front.

Results: KP750-3 The coverage of two sets << without DC is superior to with DC << with DC is superior to without DC << without DC is superior to with DC >> both results are at almost the same level. This figure shows the coverage results, function C. The gray bars represent the results without the DC-scheme, and the blue bars the results with it. For some algorithms, without DC is superior to with DC, and for others the opposite holds. Overall, this result indicates that the algorithms with and without the DC-scheme perform at almost the same level on this problem.

Results: KUR DCMOGA, MOGA, DCSPEA2, SPEA2. These are the results on KUR. In these figures, the horizontal axis represents f1(x) and the vertical axis f2(x), so solutions toward the lower left are better. The figures show that the algorithms combined with the DC-scheme derive non-dominated solutions that are both closer to the Pareto-optimal front and more widespread than those of the algorithms without it.

Results: KUR DCNSGA-II, NSGA-II >> with the DC-scheme is superior << with DC is superior to without DC (for each algorithm) >> with the DC-scheme is superior. These figures show the results of NSGA-II and DCNSGA-II, which show the same tendency as the previous slide. This figure shows the result of the coverage of two sets, function C: the algorithms combined with the DC-scheme derive a better non-dominated set than those without it.

Results: ZDT4 These are the results on ZDT4. In these figures, the horizontal axis represents f1 and the vertical axis f2, so solutions toward the lower left are better. The figures show that the algorithms combined with the DC-scheme often derive non-dominated solutions closer to the Pareto-optimal front than those without it. With the DC-scheme there are several sub-populations, so improvement over local solutions is obtained.

Results: ZDT4 >> with the DC-scheme is superior << with DC is superior to without DC (for each algorithm) >> with the DC-scheme is superior. These figures show the results of NSGA-II and DCNSGA-II, which show the same tendency as the previous slide. This figure shows the result of the coverage of two sets, function C: the algorithms combined with the DC-scheme derive a better non-dominated set than those without it.

Conclusion >> The DC-scheme is an efficient scheme for EMOs. We proposed a new model of EMOs, DCMOGA (the DC-scheme): the Distributed Cooperation model of MOGA. We compared the algorithms combined with the DC-scheme against the algorithms without it, and the algorithms combined with the DC-scheme derive efficient results. The DC-scheme can be combined with any other EMO, making the combined algorithm more efficient than the one without it. >> The DC-scheme is an efficient scheme for EMOs. To conclude: in this study, we proposed a new model of EMOs, the Distributed Cooperation model of Multi-Objective Genetic Algorithm; the concept of DCMOGA is called the DC-scheme. In the DC-scheme, three algorithms, MOGA, SPEA2, and NSGA-II, were combined with DGA. We compared the algorithms with and without the DC-scheme on three test functions, and on all of them the algorithms combined with the DC-scheme derived efficient results. Moreover, the DC-scheme can be combined with any other EMO, and the combined algorithm is more efficient than the one without it. Therefore, the DC-scheme, DCMOGA, is an efficient model of EMOs. The rocket in the figure represents the concept of DCMOGA: the rocket body indicates the EMO and the boosters indicate the DC, so together they represent the DC-scheme. That's all. Thank you.

Performance Metrics RNI: Ratio of Non-dominated Individuals, derived from the two sets of non-dominated solutions. Set A: front 1, Set B: front 2. Set A: 3/5 = 0.6; Set B: 2/5 = 0.4.
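RNI can be sketched as follows: merge the two sets, take the non-dominated front of the union, and report each set's share of that front. The sets below are illustrative, chosen so the shares reproduce the 0.6 and 0.4 ratios on this slide:

```python
def dominates(a, b):
    """a dominates b (maximization): no worse everywhere, better somewhere."""
    return (all(x >= y for x, y in zip(a, b))
            and any(x > y for x, y in zip(a, b)))

def rni(A, B):
    """Ratio of Non-dominated Individuals: each set's share of the
    non-dominated front of the merged population A + B."""
    union = A + B
    front = [p for p in union if not any(dominates(q, p) for q in union)]
    return (sum(1 for p in front if p in A) / len(front),
            sum(1 for p in front if p in B) / len(front))

# Illustrative fronts: (2, 2) and (1, 1) are dominated in the merged set,
# leaving a 5-member front with 3 points from A and 2 from B
A = [(1, 9), (5, 5), (9, 1), (2, 2)]
B = [(3, 7), (7, 3), (1, 1)]
ratios = rni(A, B)
```

Unlike the coverage metric C, the two RNI shares always sum to 1, so a single pair summarizes the comparison.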

Results: ZDT6

Results: ZDT6

Results: KP750-2

Results: KP750-2

Performance Metrics: Coverage(D) Coverage difference of two sets: D

The improved MOGA MOGA was proposed by Fonseca. Improved MOGA: MOGA with sharing and a Pareto archive added.

SOGA and MOGA The DC-scheme is compared with SOGA and MOGA alone. SOGA: 300 generations x 2; MOGA: 400 generations.