PGA – Parallel Genetic Algorithms
Hsuan Lee
Reference
E. Cantú-Paz, "A Survey of Parallel Genetic Algorithms," Calculateurs Parallèles, Réseaux et Systèmes Répartis, 1998.
Classes of Parallel Genetic Algorithms
Three major classes of PGAs:
- Global single-population master-slave PGA
- Single-population fine-grained PGA
- Multiple-population coarse-grained PGA
A hybrid of the above classes is called a hierarchical PGA.
Classes of Parallel Genetic Algorithms
Global single-population master-slave PGA
- The lowest level of parallelism
- Parallelizes the calculation of fitness, selection, crossover, …
- Also known as a global PGA
Classes of Parallel Genetic Algorithms
Single-population fine-grained PGA
- Consists of one spatially structured population
- Selection and crossover are restricted to a small neighborhood, but neighborhoods overlap, permitting some interaction among all the individuals
- Similar to the idea of niching
- Suited for massively parallel computers
Classes of Parallel Genetic Algorithms
Multiple-population coarse-grained PGA
- Consists of several subpopulations that occasionally exchange individuals; the exchange operation is called migration
- Also known as a multiple-deme PGA, distributed GA, coarse-grained PGA, or "island" PGA
- The most popular class of PGA, and the most difficult to analyze
- Suited for fewer but more powerful parallel processors
Classes of Parallel Genetic Algorithms
Three major classes of PGAs:
- Global single-population master-slave PGA
- Single-population fine-grained PGA
- Multiple-population coarse-grained PGA
The first class does not affect the behavior of the GA, but the latter two do.
A hybrid of the above classes is called a hierarchical PGA.
Classes of Parallel Genetic Algorithms
Hierarchical PGA
- Combines a multiple-population PGA (at the higher level) with a master-slave PGA or a fine-grained PGA (at the lower level)
- Combines the advantages of its components
Master-Slave Parallelization
The master does the global, population-wide work and assigns local tasks to its slaves.
What can be parallelized?
- Evaluation of fitness
- Selection
  - Some selection schemes require population-wide calculation and therefore cannot be parallelized.
  - Selection schemes that do not require global computation are usually too simple to be worth parallelizing, e.g. tournament selection.
- Crossover
  - Usually too simple to parallelize
  - For a complex crossover that involves, for example, finding a min-cut, parallelization may be an option.
A minimal sketch of parallel fitness evaluation appears after this list.
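A minimal sketch, assuming Python's multiprocessing module as the pool of slaves; the OneMax fitness function, bit-string encoding, population size, and pool size are illustrative assumptions, not taken from the survey.

    # Master-slave sketch: the master holds the population and farms out
    # fitness evaluation to slave processes; tournament selection stays local.
    import random
    from multiprocessing import Pool

    def fitness(individual):
        # Placeholder fitness (assumption): OneMax, the count of 1-bits.
        return sum(individual)

    def tournament_select(population, scores, k=2):
        # Tournament selection needs no population-wide computation,
        # so the master performs it directly.
        contestants = random.sample(range(len(population)), k)
        best = max(contestants, key=lambda i: scores[i])
        return population[best]

    if __name__ == "__main__":
        population = [[random.randint(0, 1) for _ in range(32)] for _ in range(100)]
        with Pool(processes=4) as pool:              # the "slaves"
            scores = pool.map(fitness, population)   # parallel fitness evaluation
        parent = tournament_select(population, scores)
        print(max(scores), parent)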
Master-Slave Parallelization
Computer architecture makes a difference:
- Shared memory: simpler. The population can be stored in shared memory, and each slave processor can work on its individuals without conflict.
- Distributed memory: the individuals to be processed are sent to the slave processors, creating communication overhead. This discourages parallelizing tasks that are too cheap to justify the communication.
Fine-Grained Parallel GAs
Neighborhood size
- The performance of the algorithm degrades as the size of the neighborhood increases.
- The ratio of the radius of the neighborhood to the radius of the whole grid is a critical parameter.
Fine-Grained Parallel GAs
Topology
Different ways of placing individuals on the grid can result in different performance:
1. 2D mesh – the most commonly used, because this is usually the physical topology of the processors
2. Ring
3. Cube
4. Torus (doughnut) – converges the fastest on some problems, due to the high connectivity of the structure
5. Hypercube
A minimal sketch of neighborhood-restricted selection on a torus follows.
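A minimal sketch of fine-grained selection, assuming a 2D torus grid with one individual per cell; the grid dimensions, neighborhood radius, and OneMax fitness are illustrative assumptions.

    # Fine-grained sketch: one individual per cell on a wrap-around (torus) grid,
    # with selection restricted to a small overlapping neighborhood.
    import random

    WIDTH, HEIGHT, RADIUS = 8, 8, 1   # radius 1 -> a 3x3 neighborhood (assumption)

    def fitness(individual):
        return sum(individual)        # placeholder: OneMax (assumption)

    def neighbors(x, y):
        # Torus neighborhood of the cell at (x, y), excluding the cell itself.
        return [((x + dx) % WIDTH, (y + dy) % HEIGHT)
                for dx in range(-RADIUS, RADIUS + 1)
                for dy in range(-RADIUS, RADIUS + 1)
                if (dx, dy) != (0, 0)]

    def local_parent(grid, x, y):
        # Pick the best individual from the local neighborhood only.
        return max((grid[c] for c in neighbors(x, y)), key=fitness)

    if __name__ == "__main__":
        grid = {(x, y): [random.randint(0, 1) for _ in range(16)]
                for x in range(WIDTH) for y in range(HEIGHT)}
        parent = local_parent(grid, 0, 0)
        print(fitness(parent))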
Multiple-Deme Parallel GAs
Subpopulation size
- A small population converges faster, but it is more likely to converge to a local optimum rather than the global optimum.
- The idea is to use many small subpopulations that communicate occasionally, speeding up the GA while preventing premature convergence to a local optimum.
Multiple-Deme Parallel GAs
Migration timing
- Synchronous
  - What is the optimal frequency of migration?
  - Is the communication cost small enough to make this PGA a good alternative to a traditional GA?
- Asynchronous
  - When is the right time to migrate?
A minimal sketch of a synchronous island model follows.
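A minimal sketch of synchronous migration, assuming a ring of demes that exchange their best individual every fixed number of generations; the deme count, migration interval, and the toy generation step are illustrative assumptions.

    # Island-model sketch with synchronous migration: every MIGRATION_INTERVAL
    # generations each deme sends its best individual to the next deme on a ring,
    # replacing that deme's worst individual.
    import random

    NUM_DEMES, DEME_SIZE, GENOME_LEN = 4, 20, 16
    MIGRATION_INTERVAL, GENERATIONS = 5, 20

    def fitness(ind):
        return sum(ind)               # placeholder: OneMax (assumption)

    def evolve_one_generation(deme):
        # Toy stand-in for selection/crossover/mutation (assumption):
        # mutate a copy of a random individual and replace the worst one.
        child = list(random.choice(deme))
        child[random.randrange(GENOME_LEN)] ^= 1
        worst = min(range(len(deme)), key=lambda j: fitness(deme[j]))
        deme[worst] = child
        return deme

    if __name__ == "__main__":
        demes = [[[random.randint(0, 1) for _ in range(GENOME_LEN)]
                  for _ in range(DEME_SIZE)] for _ in range(NUM_DEMES)]
        for gen in range(1, GENERATIONS + 1):
            demes = [evolve_one_generation(d) for d in demes]
            if gen % MIGRATION_INTERVAL == 0:            # synchronous migration
                migrants = [max(d, key=fitness) for d in demes]
                for i, m in enumerate(migrants):
                    target = demes[(i + 1) % NUM_DEMES]  # ring topology
                    worst = min(range(len(target)), key=lambda j: fitness(target[j]))
                    target[worst] = list(m)
        print([max(map(fitness, d)) for d in demes])

A shorter migration interval mixes the demes faster but increases communication cost, which is exactly the trade-off the synchronous questions above ask about.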
Multiple-Deme Parallel GAs
Topology – migration destination
- Static
  1. Any topology with high connectivity and a small diameter
  2. Random destination
- Dynamic
  - According to the destination subpopulation's diversity?
A minimal sketch of the destination rules follows.
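A minimal sketch of destination rules, assuming a fixed ring for the static case, a uniformly random deme for the random case, and a crude bit-diversity measure for the dynamic case; all of these choices are illustrative assumptions, not prescribed by the survey.

    # Three ways to pick a migration destination: static ring, random deme,
    # and a dynamic rule based on a crude diversity measure (all assumptions).
    import random

    def ring_destination(source, num_demes):
        # Static: fixed ring topology, always send to the next deme.
        return (source + 1) % num_demes

    def random_destination(source, num_demes):
        # Static rule with a random target: any deme other than the source.
        return random.choice([d for d in range(num_demes) if d != source])

    def diversity(deme):
        # Crude per-gene diversity (assumption): average fraction of minority bits.
        n, length = len(deme), len(deme[0])
        ones_per_gene = (sum(ind[i] for ind in deme) for i in range(length))
        return sum(min(c, n - c) for c in ones_per_gene) / (n * length)

    def least_diverse_destination(source, demes):
        # Dynamic: send migrants to the deme whose diversity is currently lowest.
        candidates = [d for d in range(len(demes)) if d != source]
        return min(candidates, key=lambda d: diversity(demes[d]))

    if __name__ == "__main__":
        demes = [[[random.randint(0, 1) for _ in range(8)] for _ in range(10)]
                 for _ in range(4)]
        print(ring_destination(0, 4), random_destination(0, 4),
              least_diverse_destination(0, demes))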
Conclusion
There is still a lot to be investigated in the field of PGAs; theoretical work is scarce.