1 Genetic Programming on General Purpose Graphics Processing Units (GPGPGPU) Muhammad Iqbal Evolutionary Computation Research Group School of Engineering and Computer Sciences

2 Overview
- GPUs are no longer limited to graphics: they offer a high degree of programmability and fast floating-point operations. GPUs are now GPGPUs (general-purpose GPUs).
- Genetic programming is a computationally intensive methodology, which makes it a prime candidate for GPUs.

3 Outline
- Genetic Programming
- Genetic Programming Resource Demands
- GPU Programming
- Genetic Programming on GPU
- Automatically Defined Functions

4 Genetic Programming (GP)
- An evolutionary-algorithm-based methodology that optimizes a population of computer programs.
- Uses a tree-based representation.
- Example, with target function Output = X^2 + X + 1:

X | Output
--|-------
0 |   1
1 |   3
2 |   7
3 |  13
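
To make the representation concrete, here is a minimal sketch of how such a tree could be represented and evaluated on the four fitness cases above. This is illustrative CUDA/C++ host code with hypothetical names, not the slide's own code:

```cuda
#include <cstdio>

// A minimal expression-tree node: '+' and '*' are function nodes,
// 'x' is the input terminal, 'c' is a constant terminal.
struct Node {
    char op;          // '+', '*', 'x', or 'c'
    double value;     // used only when op == 'c'
    Node *left, *right;
};

// Recursively evaluate the tree for a given input x.
double eval(const Node *n, double x) {
    switch (n->op) {
        case '+': return eval(n->left, x) + eval(n->right, x);
        case '*': return eval(n->left, x) * eval(n->right, x);
        case 'x': return x;
        default:  return n->value;   // constant terminal
    }
}

int main() {
    // Hand-built tree for (x * x) + (x + 1), i.e. X^2 + X + 1.
    Node one  = {'c', 1.0, nullptr, nullptr};
    Node x1   = {'x', 0.0, nullptr, nullptr}, x2 = x1, x3 = x1;
    Node mul  = {'*', 0.0, &x1, &x2};
    Node add1 = {'+', 0.0, &x3, &one};
    Node root = {'+', 0.0, &mul, &add1};

    for (int x = 0; x <= 3; ++x)                    // the four fitness cases
        printf("x=%d -> %g\n", x, eval(&root, x));  // prints 1, 3, 7, 13
    return 0;
}
```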

5 GP Resource Demands
- GP is notoriously resource consuming, in both CPU cycles and memory.
- Standard GP system at 1 μs per node: a full binary tree of depth 17 has 2^17 - 1 ≈ 131,071 nodes, i.e. about 131 ms per tree.
- With 1,000 fitness cases, a population of 1,000, 1,000 generations, and 100 runs, the runtime is about 10 Gs ≈ 317 years.
- Even at 1 ns per node, the same experiment takes about 116 days.
- This puts limits on what we can approach with GP. [Banzhaf and Harding, GECCO 2009]
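
Written out, the arithmetic behind the 317-year figure (the product comes to about 1.3 × 10^10 s; the slide rounds to 10 Gs):

```latex
(2^{17}-1) \times 1\,\mu\text{s} \approx 131\,071\,\mu\text{s}
  \approx 131\ \text{ms per tree}

131\ \text{ms} \times 1000\ \text{cases} \times 1000\ \text{individuals}
  \times 1000\ \text{generations} \times 100\ \text{runs}
  \approx 1.3 \times 10^{10}\ \text{s} \approx 317\ \text{years}
```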

6 Sources of Speed-up
- Fast machines
- Vector processors
- Parallel machines (MIMD/SIMD)
- Clusters
- Loose networks
- Multi-core
- Graphics Processing Units (GPUs)

7 General Purpose Computation on GPU
- GPUs are not just for graphics operations: high degree of programmability, fast floating-point operations, useful for many numeric calculations.
- Examples: physical simulations (e.g. fluids and gases), protein folding, image processing.

8 Why is the GPU faster than the CPU?
The GPU devotes more transistors to data processing, and fewer to caching and flow control. [CUDA C Programming Guide, Version 3.2]

9 GPU Programming APIs
- A number of toolkits are available for programming GPUs: CUDA, MS Accelerator, RapidMind, shader programming.
- So far, researchers in GP have not converged on one platform.

10 CUDA Programming
CUDA programs launch a massive number (>10,000) of lightweight threads.

11 CUDA Memory Model
[Diagram: the CUDA memory hierarchy. Each thread has its own registers and local memory; each thread block has shared memory; the whole grid (device) has global, constant, and texture memory, which the host can also access.]
CUDA exposes all the different types of memory on the GPU. [CUDA C Programming Guide, Version 3.2]

12 CUDA Programming Model
- The GPU is viewed as a computing device operating as a coprocessor to the main CPU (the host).
- Data-parallel, computationally intensive functions should be off-loaded to the device.
- Functions that are executed many times, but independently on different data, are prime candidates, e.g. the bodies of for-loops.
- A function compiled for the device is called a kernel.

14 Stop Thinking About What to Do and Start Doing It!
- Memory transfer time is expensive; computation is cheap.
- Rather than calculating a value once and storing it in memory, just recalculate it when needed.
- Built-in variables: threadIdx, blockIdx, gridDim, blockDim (all used in the example on the next slide).

15 Example: Increment Array Elements
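
The code on this slide was an image and did not survive in the transcript. A minimal sketch of the standard CUDA increment-array example, assuming it follows the usual SDK pattern (names are illustrative):

```cuda
#include <cuda_runtime.h>

// Kernel: each thread increments one array element.
__global__ void increment(float *a, float b, int n) {
    int idx = blockIdx.x * blockDim.x + threadIdx.x;  // global thread index
    if (idx < n)                // guard: the grid may overshoot the array
        a[idx] += b;
}

int main() {
    const int n = 1 << 20;
    float *d_a;
    cudaMalloc(&d_a, n * sizeof(float));
    cudaMemset(d_a, 0, n * sizeof(float));

    int threads = 256;                          // threads per block
    int blocks = (n + threads - 1) / threads;   // enough blocks to cover n
    increment<<<blocks, threads>>>(d_a, 1.0f, n);
    cudaDeviceSynchronize();
    cudaFree(d_a);
    return 0;
}
```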

16 Example: Matrix Addition

17 Example: Matrix Addition
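
These two slides' code was also an image. A sketch of the canonical CUDA matrix-addition kernel, one thread per element on a 2D grid (illustrative, assuming row-major storage):

```cuda
// Kernel: one thread computes one element of C = A + B.
__global__ void matAdd(const float *A, const float *B, float *C,
                       int width, int height) {
    int col = blockIdx.x * blockDim.x + threadIdx.x;
    int row = blockIdx.y * blockDim.y + threadIdx.y;
    if (col < width && row < height) {
        int i = row * width + col;   // row-major element index
        C[i] = A[i] + B[i];
    }
}

// Host-side launch: 16x16 threads per block, enough blocks to tile
// the whole matrix.
//   dim3 threads(16, 16);
//   dim3 blocks((width + 15) / 16, (height + 15) / 16);
//   matAdd<<<blocks, threads>>>(dA, dB, dC, width, height);
```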

18 Parallel Genetic Programming
While most GP work is conducted on sequential computers, the following computationally intensive features make it well suited to parallel hardware:
- Individuals are run on multiple independent training examples.
- The fitness of each individual could be calculated in parallel on independent hardware.
- Multiple independent runs of the GP are needed for statistical confidence, because of the stochastic element of the results.
[Langdon and Banzhaf, EuroGP-2008]

19 A Many-Threaded CUDA Interpreter for Genetic Programming
- Running tree GP on the GPU: 8692 times faster than a PC without a GPU.
- Solved the 20-bit multiplexor (2^20 = 1,048,576 fitness cases). It had never been solved by tree GP before; the previously estimated time was more than 4 years, yet the GPU has consistently done it in less than an hour.
- Solved the 37-bit multiplexor (2^37 = 137,438,953,472 fitness cases). It had never been attempted before; the GPU solves it in under a day.
[W. B. Langdon, EuroGP-2010]

20 Boolean Multiplexor
- A multiplexor with a address lines has d = 2^a data lines, n = a + d inputs in total, and 2^n test cases.
- 20-mux (a = 4, d = 16): about 1 million test cases.
- 37-mux (a = 5, d = 32): about 137 billion test cases.
[W. B. Langdon, EuroGP-2010]
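
The function being evolved is easy to state directly. A hypothetical helper (not Langdon's kernel) showing what a multiplexor computes:

```cuda
// n-input Boolean multiplexor: bits[0..a-1] are the address lines,
// bits[a..a+d-1] are the data lines, where d = 2^a and n = a + d.
__host__ __device__ bool mux(const bool *bits, int a) {
    int address = 0;
    for (int i = 0; i < a; ++i)               // pack address bits, MSB first
        address = (address << 1) | (bits[i] ? 1 : 0);
    return bits[a + address];                 // output the selected data line
}
// 6-mux:  a = 2, d = 4  ->  2^6  = 64 test cases
// 20-mux: a = 4, d = 16 ->  2^20 = 1,048,576 test cases
// 37-mux: a = 5, d = 32 ->  2^37 = 137,438,953,472 test cases
```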

21 Genetic Programming Parameters for Solving the 20- and 37-Multiplexors
- Terminals: 20 or 37 Boolean inputs, D0-D19 or D0-D36 respectively.
- Functions: AND, OR, NAND, NOR.
- Fitness: pseudo-random sample of 2048 of the 1,048,576 (20-mux) or 8192 of the 137,438,953,472 (37-mux) fitness cases.
- Tournament: 4 members, run on the same random sample; new samples for each tournament and each generation.
- Population: 262,144.
- Initial population: ramped half-and-half, depths 4:5 (20-mux) or 5:7 (37-mux).
- Operators: 50% subtree crossover, 5% subtree mutation, 45% point mutation. Max depth 15, max size 511 (20-mux) or 1023 (37-mux).
- Termination: 5000 generations.
Solutions are found in generations 423 (20-mux) and 2866 (37-mux). [W. B. Langdon, EuroGP-2010]

22 AND, OR, NAND, NOR
In the evolved trees AND is printed as &, OR as |, NAND as d, and NOR as r.

X Y | X AND Y | X OR Y | X NAND Y | X NOR Y
0 0 |    0    |   0    |    1     |    1
0 1 |    0    |   1    |    1     |    0
1 0 |    0    |   1    |    1     |    0
1 1 |    1    |   1    |    0     |    0
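
One reason this function set suits GPUs: all four functions are plain bitwise operations, so a single machine word can evaluate many independent fitness cases at once (32 per 32-bit word), a standard trick in Boolean GP. A sketch, assuming fitness cases are packed one per bit position:

```cuda
// One bit position per fitness case: a single bitwise instruction
// evaluates 32 test cases in parallel on 32-bit words.
typedef unsigned int word;
__host__ __device__ word AND (word x, word y) { return   x & y;  }
__host__ __device__ word OR  (word x, word y) { return   x | y;  }
__host__ __device__ word NAND(word x, word y) { return ~(x & y); }
__host__ __device__ word NOR (word x, word y) { return ~(x | y); }
```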

23 Evolution of 20-Mux and 37-Mux [W. B. Langdon, EuroGP-2010]

24 6-Mux Tree I [W. B. Langdon, EuroGP-2010]

25 6-Mux Tree II [W. B. Langdon, EuroGP-2010]

26 6-Mux Tree III [W. B. Langdon, EuroGP-2010]

27 Ideal 6-Mux Tree

28 Automatically Defined Functions (ADFs)
- Genetic programming trees often contain repeated patterns; repeated subtrees can be treated as subroutines.
- ADFs are a methodology for automatically selecting and implementing modularity in GP.
- This modularity can reduce the size of the GP tree and improve readability, as the toy example below shows.
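
A toy illustration of the idea (hypothetical, not from the slides): the repeated subtree (D0 AND D1) is factored into a subroutine that the main tree calls twice:

```cuda
// Without an ADF: the subtree (d0 AND d1) appears twice in the tree.
__host__ __device__ bool mainNoAdf(bool d0, bool d1, bool d2) {
    return ((d0 && d1) || d2) && !(d0 && d1);
}

// With an ADF: the repeated subtree becomes a named subroutine, so the
// main tree is smaller and its structure is easier to read.
__host__ __device__ bool adf0(bool d0, bool d1) { return d0 && d1; }
__host__ __device__ bool mainWithAdf(bool d0, bool d1, bool d2) {
    bool a = adf0(d0, d1);          // evaluate the shared subtree once
    return (a || d2) && !a;
}
```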

29 Langdon's CUDA Interpreter with ADFs
- ADFs slow the interpreter down: the 20-mux takes 9 hours instead of less than an hour, and the 37-mux takes more than 3 days instead of less than a day.
- Improved ADF implementation: previously one thread per GP program, now one thread block per GP program. This increases the level of parallelism and reduces divergence.
- Result: the 20-mux takes 8 to 15 minutes, the 37-mux 7 to 10 hours.

30 ThreadGP Scheme
- Every GP program is interpreted by its own thread, and all fitness cases for a program evaluation are computed on the same stream processor.
- Since several threads interpreting different programs run on each multiprocessor, a higher level of divergence is to be expected than with the BlockGP scheme.
[Robilliard et al., Genetic Programming and Evolvable Machines, 2009]

31 BlockGP Scheme
- Every GP program is interpreted by all threads running on a given multiprocessor.
- There is no divergence due to differences between GP programs, since multiprocessors are independent.
- However, divergence can still occur between stream processors on the same multiprocessor, when an if structure resolves into the execution of different branches within the set of fitness cases processed in parallel.
A sketch of this mapping follows. [Robilliard et al., Genetic Programming and Evolvable Machines, 2009]
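
A minimal sketch of the BlockGP mapping, assuming one linearized program per block and a stub interpreter (evalProgram and all names here are hypothetical, not Robilliard's code):

```cuda
// Stub interpreter: evaluates one linearized GP program on one fitness
// case and returns 1 for a hit. A real version would walk the program's
// opcodes; placeholder logic here.
__device__ int evalProgram(const int *prog, unsigned fitnessCase) {
    return (prog[0] ^ (int)fitnessCase) & 1;
}

// BlockGP: one thread block per GP program. The block's threads stride
// over the fitness cases, so every thread in a warp executes the same
// program instruction at the same time: no divergence between programs.
__global__ void blockGP(const int *programs, int progLen,
                        const unsigned *cases, int numCases, int *hits) {
    const int *prog = programs + blockIdx.x * progLen;  // this block's program
    int myHits = 0;
    for (int c = threadIdx.x; c < numCases; c += blockDim.x)
        myHits += evalProgram(prog, cases[c]);
    atomicAdd(&hits[blockIdx.x], myHits);  // hits[] zero-initialized by host
}

// Launch: one block per population member, e.g.
//   blockGP<<<popSize, 128>>>(d_progs, progLen, d_cases, numCases, d_hits);
```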

32 6-Mux with ADF

33 6-Mux with ADF

34 6-Mux with ADF

35 Conclusion 1: GP
- A powerful machine learning algorithm, capable of searching through trillions of states to find a solution.
- Evolved trees often have repeated patterns and can be compacted with ADFs.
- But GP is computationally expensive.

36 Conclusion 2: GPU
- Computationally fast and relatively low cost.
- Needs a new, but practical, programming paradigm.
- Accelerates processing by up to 3000 times on computationally intensive problems.
- But not well suited to memory-intensive problems.

37 Acknowledgements
- Dr Will Browne and Dr Mengjie Zhang, for supervision.
- Kevin Buckley, for technical support.
- Eric, for help with CUDA compilation.
- Victoria University of Wellington, for awarding the Victoria PhD Scholarship.
- All of you, for coming.

38 Thank You. Questions?

