1 Reasons for parallelization Can we make a GA faster? One of the most promising options is a parallel implementation. There are two reasons for parallelization: 1) the nature of the problem (fitness evaluations are often computationally expensive) 2) the nature of the GA itself (a population-based search is inherently parallel)

2 A classification of parallel GAs Basic idea: divide and conquer 1) Global parallelization: only one population; the behavior of the algorithm remains unchanged; easy to implement 2) Coarse-grained parallel GA: the population is divided into multiple subpopulations; each subpopulation evolves in isolation and exchanges individuals occasionally 3) Fine-grained parallel GA: in the ideal case there is just one individual per processing element 4) Hybrid parallel GA: combinations of the above

3 Global parallelization
1) Initialization
2) Repeat the following steps:
2.1) Selection
2.2) Crossover
2.3) Mutation
2.4) Calculate the fitness

4 Is there any difference?
1) Initialization
2) Repeat the following steps:
2.1) Selection
2.2) Crossover
2.3) Mutation
2.4) Calculate the fitness:
for i = 1 to N par_do
    calculate the fitness of the i-th individual
endfor
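The parallel loop above can be sketched in Python with a worker pool. This is a minimal sketch under assumptions: the sphere-style fitness function is a placeholder, and a thread pool is used here for brevity (a process pool is the usual choice for CPU-bound fitness functions).

```python
from concurrent.futures import ThreadPoolExecutor

def fitness(individual):
    # Placeholder objective (an assumption): negated sum of squares,
    # so higher fitness means closer to the origin.
    return -sum(x * x for x in individual)

def evaluate_population(population, workers=4):
    # "for i = 1 to N par_do calculate the fitness" maps directly onto a
    # worker pool, because each evaluation is independent of the others.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(fitness, population))
```

Selection, crossover, and mutation stay exactly as in the serial GA; only step 2.4 is farmed out to the workers.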

5 The basic characteristics This method maintains a single population, and the evaluation of the individuals is done in parallel. Each individual competes with all the other individuals and also has a chance to mate with any of them. The genetic operations are still global. Communication occurs only when each processor receives its subset of individuals to evaluate and when the processors return the fitness values.

6 Implementation The model does not assume anything about the underlying computer architecture. On a shared memory multiprocessor, the population can be stored in shared memory and each processor can read the individuals assigned to it and write the evaluation results back without any conflicts. It may be necessary to balance the computational load among the processors. On a distributed memory computer, the population can be stored in one processor. This “master” processor will be responsible for sending the individuals to the other processors (the slaves) for evaluation, collecting the results, and applying the genetic operators to produce the next generation.

7 The genetic operators Crossover and mutation can be parallelized using the same idea of partitioning the population and distributing the work among multiple processors. However, these operators are so simple that the time required to send individuals back and forth is very likely to offset any performance gains. Communication overhead is also a problem when selection is parallelized, because most forms of selection need information about the entire population and thus require some communication.

8 Conclusion A global parallel GA is easy to implement, and it can be a very efficient method of parallelization when the evaluation requires considerable computation. This method also has the advantage of preserving the search behavior of the GA, so all the theory for the simple GA applies directly. Its main drawback is the risk of an unbalanced load among processors.

9 Coarse-grained parallel GA
1) Initialization; divide all the individuals into p subpopulations
2) for i = 1 to p par_do
2.1) for j = 1 to n do
        selection, crossover, mutation
        calculate the fitness
2.2) select some individuals as the migrants
2.3) send emigrants and receive immigrants
3) Go to 2)
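The island-model loop above can be sketched as follows. This is a toy sketch under assumptions: the fitness function, tournament selection with Gaussian mutation, a ring migration topology, and a migration interval of five generations are all illustrative choices, not prescribed by the slides.

```python
import random

def fitness(ind):
    # Toy objective (an assumption): maximise the negated sum of squares.
    return -sum(x * x for x in ind)

def evolve_step(pop, rate=0.1):
    # One generation of a tiny serial GA: binary tournament + Gaussian mutation.
    new = []
    for _ in pop:
        a, b = random.sample(pop, 2)
        parent = a if fitness(a) > fitness(b) else b
        new.append([x + random.gauss(0, rate) for x in parent])
    return new

def island_ga(n_islands=4, island_size=10, dims=3, generations=20):
    random.seed(1)
    islands = [[[random.uniform(-1, 1) for _ in range(dims)]
                for _ in range(island_size)] for _ in range(n_islands)]
    for g in range(generations):
        # Step 2.1: each island evolves independently (par_do in the slide).
        islands = [evolve_step(pop) for pop in islands]
        if g % 5 == 4:                               # steps 2.2-2.3: occasional migration
            for i, pop in enumerate(islands):
                best = max(pop, key=fitness)          # emigrant: the island's best
                dest = islands[(i + 1) % n_islands]   # ring topology
                worst = min(range(len(dest)), key=lambda j: fitness(dest[j]))
                dest[worst] = list(best)              # immigrant replaces the worst
    return max((ind for pop in islands for ind in pop), key=fitness)
```

The list comprehension over islands is where a parallel machine would run one island per node; everything else is the migration bookkeeping.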

10 The basic characteristics A coarse-grained GA seems like a simple extension of the serial GA. The recipe is simple: take a few conventional (serial) GAs, run each of them on a node of a parallel computer, and at some predetermined times exchange a few individuals. Coarse-grain parallel computers are easily available, and even if no parallel computer is available, it is easy to simulate one with a network of workstations or even on a single-processor machine. Relatively little extra effort is needed to convert a serial GA into a coarse-grained parallel GA: most of the serial GA's program remains the same, and only a few subroutines need to be added to implement migration.

11 The basic characteristics Strong capability for avoiding premature convergence while still exploiting good individuals, provided the migration rates and patterns are well chosen.

12 Migrant Selection Policy Who should migrate? The best individual? One random individual? The best plus some random individuals? An individual very different from the best of the receiving subpopulation ("similarity reduction")? If a large percentage of the population migrates each generation, the system acts like one big population, but with extra replacements – this could actually SPEED premature convergence.
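The selection policies listed above can be expressed as small functions. This is a sketch under assumptions: individuals are binary strings so that the "similarity reduction" variant can use Hamming distance, and the function names are mine, not from the slides.

```python
import random

def hamming(a, b):
    # Hamming distance between two equal-length binary strings.
    return sum(x != y for x, y in zip(a, b))

def select_best(pop, fitness):
    # Policy: send the subpopulation's best individual.
    return max(pop, key=fitness)

def select_random(pop):
    # Policy: send one random individual.
    return random.choice(pop)

def select_most_different(pop, receiving_best):
    # "Similarity reduction": send the individual most unlike
    # the best of the receiving subpopulation.
    return max(pop, key=lambda ind: hamming(ind, receiving_best))
```

Combinations (the best plus a few random individuals) just compose these pieces.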

13 Migrant Replacement Policy Who should a migrant replace? A random individual? The worst individual? The most similar individual (in the Hamming sense)? A similar individual via crowding?
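Likewise, the replacement policies can be sketched as functions that choose which resident the migrant overwrites. Again an illustrative sketch: binary-string individuals and Hamming distance are assumptions, and the crowding variant is simplified to "replace the closest resident".

```python
def hamming(a, b):
    # Hamming distance between two equal-length binary strings.
    return sum(x != y for x, y in zip(a, b))

def replace_worst(pop, migrant, fitness):
    # Policy: the migrant overwrites the least-fit resident.
    worst = min(range(len(pop)), key=lambda i: fitness(pop[i]))
    pop[worst] = migrant
    return pop

def replace_most_similar(pop, migrant):
    # Crowding-style policy: overwrite the resident closest to the migrant,
    # which disturbs the subpopulation's diversity less than replacing the worst.
    closest = min(range(len(pop)), key=lambda i: hamming(pop[i], migrant))
    pop[closest] = migrant
    return pop
```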

14 How Many Subpopulations? How many total evaluations can you afford? The total population size, the number of generations, and the "generation gap" determine the run time. What should the minimum subpopulation size be? Going smaller than that USUALLY spells trouble – rapid convergence of the subpopulation – though it can be better for some problems. Divide the affordable total population by this minimum size to get how many subpopulations you can afford.
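The budgeting logic above amounts to simple arithmetic; the figures in the example are illustrative assumptions, not recommendations from the slides.

```python
def affordable_subpopulations(total_evaluations, generations, min_subpop_size):
    # Total population you can afford per generation, then divide by the
    # minimum viable subpopulation size to get the number of islands.
    total_pop = total_evaluations // generations
    return total_pop // min_subpop_size

# e.g. a budget of 100,000 evaluations over 200 generations gives a total
# population of 500; with a minimum subpopulation size of 50, that is
# 500 // 50 = 10 subpopulations.
```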

15 Fine-grained parallel GA
1) Partition the initial population of N individuals across N processors
2) for i = 1 to N par_do
2.1) Each processor selects one individual from itself and its neighbours
2.2) Crossover with one individual from its neighbourhood, retaining one offspring
2.3) Mutation
2.4) Calculate the fitness
3) Go to 2)
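One generation of the fine-grained (cellular) scheme can be sketched on a 2-D grid with one individual per cell. This is a minimal sketch under assumptions: real-valued individuals, a toy fitness, a von Neumann neighbourhood on a toroidal grid, and averaging crossover are all illustrative choices.

```python
import random

def fitness(ind):
    # Toy objective (an assumption): prefer values near zero.
    return -abs(ind)

def neighbours(i, j, rows, cols):
    # Von Neumann neighbourhood on a toroidal (wrap-around) grid.
    return [((i - 1) % rows, j), ((i + 1) % rows, j),
            (i, (j - 1) % cols), (i, (j + 1) % cols)]

def cellular_step(grid):
    # One synchronous generation: every cell would be one processing element.
    rows, cols = len(grid), len(grid[0])
    new = [[0.0] * cols for _ in range(rows)]
    for i in range(rows):
        for j in range(cols):
            # Steps 2.1-2.2: mate only within the local neighbourhood.
            mi, mj = max(neighbours(i, j, rows, cols),
                         key=lambda ij: fitness(grid[ij[0]][ij[1]]))
            child = (grid[i][j] + grid[mi][mj]) / 2     # crossover: averaging
            new[i][j] = child + random.gauss(0, 0.01)   # step 2.3: mutation
    return new
```

The nested loops are what a massively parallel machine would run all at once, one cell per processing element, which is why communication is intensive but strictly local.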

16 The basic characteristics This scheme offers the largest degree of parallelism, but there is intensive communication between the processors. It is common to place the individuals in a 2-D grid, because in many massively parallel computers the processing elements are connected using this topology.

17 Hybrid parallel algorithms Combining the previous methods of parallelizing a GA results in hybrid parallel GAs.

18 Some examples This hybrid GA combines a coarse-grained GA (at the high level) and a fine-grained GA (at the low level).

19 Some examples This hybrid GA uses a coarse-grained GA at the high level, where each node is itself a global parallel GA.

20 Some examples This hybrid uses coarse-grained GAs at both the high and low levels. At the low level, the migration rate is faster and the communication topology is much denser than at the high level.

21 Network model Here, k independent GAs run with independent memories, operators and function evaluations. At each generation, the best individuals discovered are broadcast to all the sub-populations.

22 Community model Here, the GA is mapped to a set of interconnected communities, consisting of a set of homes connected to a centralised town. Reproduction and function evaluations take place at home. Offspring are sent to town to find mates. After mating, "new couples" are assigned a home either in their existing community or in another community.

23 Why introduce PGAs? They allow more extensive coverage of the search space and an increased probability of finding the global optimum. They can also be used for multi-objective optimisation, with each sub-population responsible for a specific objective, and for co-evolution, with each sub-population responsible for a specific trait.