Presentation on theme: "Particle swarm optimization" — Presentation transcript:

1 Particle swarm optimization
or Genetic algorithm? Yuzhao Wang, Zhibin Yu, Qixiao Liu, JunQing Yu (SIAT, CAS; HUST). Hello everyone, it is a great honor to be here to present my work. I am Wang Yuzhao. In this work, we focus on comparing two heuristics for Spark parameter tuning, namely the genetic algorithm and particle swarm optimization.

2 Motivation Big Data Framework — parameter tuning
Hundreds of configuration parameters (multivalued). Critical for performance, so optimization is needed! Tuning parameters is time-consuming and requires expert knowledge. Three phases: data collecting; model training (Random Forest, Support Vector Machine . . .); optimum searching (Genetic Algorithm (GA), Particle Swarm Optimization (PSO)). As we all know, Spark is a famous in-memory computing framework with one hundred and sixty parameters, and these parameters are critical for performance, so we need to tune them before running different kinds of programs. Manual tuning can be time-consuming and, besides, needs expert knowledge. So we propose a formalism to automatically tune the parameters. This formalism consists of three parts: first, data collecting, which gathers the training data; second, model training, which uses the data collected in the first phase to construct the model; third, optimum searching, which searches the configuration space for the best configuration for a given app. In this work, we focus on the third phase, optimum searching, and conduct several experiments to compare the performance of two searching algorithms, PSO and GA. A toy sketch of the three phases follows.
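As a toy end-to-end sketch of the three-phase formalism: every name and the synthetic "runtime" below are hypothetical placeholders, not the authors' code.

```python
import random

def measure_runtime(conf):                       # stand-in for timing a Spark run
    return sum((x - 0.3) ** 2 for x in conf)     # synthetic runtime surface

# Phase 1: data collecting -- sample random configurations and measure them.
samples = [[random.random() for _ in range(6)] for _ in range(100)]
runtimes = [measure_runtime(c) for c in samples]

# Phase 2: model training -- fit a performance model on (samples, runtimes);
# the talk uses a random forest (slide 8). Placeholder here:
model = measure_runtime

# Phase 3: optimum searching -- GA or PSO minimizes model(conf); as a naive
# baseline, just take the best sampled point.
best_conf = min(samples, key=model)
```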

3 Outline Background Proposal and implementation Experimental evaluation

4 Background Natural, efficient representation for heuristics
GA - floating-point representation; PSO - location vector: feature parameter, gene, chromosome. To solve a real-world problem with these heuristics, we first need to transform it into a form the heuristics can operate on. For GA, we use a floating-point representation, as the picture depicts: each block represents a feature; we call a block a gene, and several genes make up a chromosome. For PSO, we directly use the feature parameters as the location vector. Thus we build a bridge between the problem space and the heuristic representation; a sketch of this encoding follows. (Figure: performance vs. parameter values.)
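To make the encoding concrete, a minimal sketch; the parameter names, ranges, and [0, 1] normalization are illustrative assumptions, not the configuration set used in the work.

```python
# Each gene is one normalized parameter value in [0, 1]; the full vector is a
# chromosome (GA) or, equally, a particle's location (PSO).
PARAM_RANGES = {                                  # illustrative parameters
    "spark.executor.memory":     (1.0, 16.0),     # GB
    "spark.executor.cores":      (1.0, 8.0),
    "spark.default.parallelism": (8.0, 256.0),
}

def decode(chromosome):
    """Map a vector in [0, 1]^n back to concrete parameter values."""
    return {
        name: lo + gene * (hi - lo)
        for gene, (name, (lo, hi)) in zip(chromosome, PARAM_RANGES.items())
    }

# decode([0.5, 0.25, 0.0]) -> {'spark.executor.memory': 8.5,
#                              'spark.executor.cores': 2.75,
#                              'spark.default.parallelism': 8.0}
```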

5 Genetic Algorithm A genetic analogy
Selection: choose superior chromosomes according to the fitness function. Crossover: generate offspring from parent chromosomes. Mutation: introduce randomness into the evolution. The genetic algorithm is a genetic analogy; it maintains a population of chromosomes, as the picture shows. To evolve, three operators are performed, namely selection, crossover, and mutation, as the sketch below illustrates.
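A minimal sketch of one GA generation (minimizing predicted runtime); the elitist selection scheme and single-point crossover are conventional choices assumed here, not necessarily the paper's exact operators.

```python
import random

def ga_generation(population, fitness, elite_size=2, mutation_chance=0.01):
    """One generation: selection, crossover, mutation (assumes population >= 4)."""
    # Selection: rank chromosomes by fitness, keep the best as elites.
    ranked = sorted(population, key=fitness)
    next_gen = ranked[:elite_size]
    while len(next_gen) < len(population):
        # Crossover: single-point crossover between two parents drawn from
        # the better half of the ranked population.
        p1, p2 = random.sample(ranked[:len(ranked) // 2], 2)
        cut = random.randrange(1, len(p1))
        child = p1[:cut] + p2[cut:]
        # Mutation: replace each gene with small probability.
        child = [g if random.random() > mutation_chance else random.random()
                 for g in child]
        next_gen.append(child)
    return next_gen
```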

6 Particle Swarm Optimization
Swarm intelligence. Location: a vector representing a feasible solution. Velocity: an addend used to update the location. As for PSO, each particle has two components, namely a location and a velocity. As the animation shows, the update makes use of three kinds of information about the swarm: the inertia velocity, the local optimum, and the global optimum. Together these move each individual to a better location, as the update sketch below shows.
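The standard update the animation illustrates, as a sketch; the inertia weight is a conventional default (an assumption), while the learning factors use the (2.5, 2.5) setting found later in the talk.

```python
import random

def pso_step(x, v, pbest, gbest, w=0.7, c1=2.5, c2=2.5):
    """Move one particle using inertia, local optimum, and global optimum."""
    r1, r2 = random.random(), random.random()
    new_v = [w * vi                          # inertia velocity
             + c1 * r1 * (pi - xi)           # pull toward the particle's own best
             + c2 * r2 * (gi - xi)           # pull toward the swarm's global best
             for xi, vi, pi, gi in zip(x, v, pbest, gbest)]
    new_x = [xi + vi for xi, vi in zip(x, new_v)]   # velocity is an addend
    return new_x, new_v
```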

7 Outline Background Model and implementation Experimental evaluation

8 Performance model Random Forest (RF)
1) Collect data to form the searching space. 2) Train the RF. As we know, the heuristics need a fitness function, and we choose the random forest algorithm to build the model due to its better performance in practice. The training process is the same as on the previous slide; a sketch follows.
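A minimal training sketch; that the model is built with scikit-learn, and the specific hyperparameters, are assumptions (the talk only states that a random forest is used).

```python
from sklearn.ensemble import RandomForestRegressor

def train_performance_model(X, y):
    """X: one row per measured run (configuration values); y: measured runtimes."""
    model = RandomForestRegressor(n_estimators=100, random_state=0)  # assumed settings
    model.fit(X, y)
    return model

# The trained model then acts as the heuristics' fitness function:
#   fitness(conf) = model.predict([conf])[0]
```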

9 Searching heuristics Implementation details
GA: extend the 'genalg' library of R in Python. PSO: Python package 'pyswarm'. For the implementation of PSO and GA, we make use of existing packages, namely the R library 'genalg' and the Python package 'pyswarm'. Both algorithms were rewritten in Python; a usage sketch follows.
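On the PSO side, 'pyswarm' exposes a single pso() entry point; a minimal usage sketch, where the toy fitness function stands in for the trained random-forest model and the bounds are assumptions.

```python
from pyswarm import pso

def predict_runtime(conf):
    """Fitness: predicted runtime; a toy stand-in for the trained RF model."""
    return sum((x - 0.3) ** 2 for x in conf)

lb = [0.0] * 6                 # lower bounds (the 6-dimension case study)
ub = [1.0] * 6                 # upper bounds (normalized parameters)
best_conf, best_fit = pso(predict_runtime, lb, ub, swarmsize=25, maxiter=100)
```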

10 Outline Background Model and implementation Experimental evaluation

11 Experimental setup Platform Metric
Desktop: AMD quad-core processor, 4 GB DDR4 memory. Benchmark: PageRank. Metric: evaluation of the convergent state via standard deviation. Our experimental platform is equipped with a quad-core processor and 4 GB of DDR4 memory. As a case study, we use PageRank as our benchmark throughout the experiments. In addition, we use the standard deviation to evaluate the convergent state of the swarm, since it describes the variance more sensitively than the mean value; a sketch of this check follows.
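A sketch of the convergence check this metric implies: the swarm is treated as converged once the standard deviation of the particles' fitness values drops below a threshold. The tolerance value is an assumption.

```python
import statistics

def has_converged(swarm_fitness, tol=1e-3):
    """Convergent state: the fitness spread across the swarm is nearly zero."""
    return statistics.stdev(swarm_fitness) < tol   # assumed tolerance

# Example: has_converged([210.1, 210.2, 210.1, 210.15], tol=0.1) -> True
```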

12 GA & PSO Settings Find the optimal parameter set for GA and PSO respectively
Case study with the 6-dimension model. GA: z axis: convergence time (higher = worse); x axis: mutation chance; y axis: elitism size; optimum at (16, 0.01). PSO: z axis: convergence time; x axis: local learning factor C1; y axis: global learning factor C2; optimum at (2.5, 2.5). As we know, GA and PSO themselves have several parameters that impact their searching ability. To ensure a fair comparison, we first try to find the best parameters for GA and PSO respectively. As an example, we analysed the 6-dimensional model. For GA, the x and y axes are the mutation chance and the elitism size respectively; as the landscape depicts, the convergence time decreases markedly as the elitism size increases. For PSO, the x and y axes are the two learning factors C1 and C2, and C2 has more influence on the PSO convergence time. After this analysis, we were able to find the optimal parameter set for GA and PSO respectively.

13 GA & PSO Settings Find the optimal Popsize for GA and PSO
Convergence time; solution fitness. As for the population size, we examined its impact on the convergence time and solution quality. As the pictures show, both heuristics' convergence time exhibits a linear relation against the swarm size, while the searching accuracy, measured by the fitness value, exhibits an exponential form. Finally, we chose 25 as the popsize. With the best parameters chosen for GA and PSO, we then compare their performance as searching heuristics.

14 GA & PSO Evaluation Convergence time comparison
1) In convergence time, PSO is faster: ga.mean = 250, pso.mean = 210. 2) Robustness: steadiness across 50 runs. We compare the performance from two aspects: the convergence time and the solution quality. As the first picture shows, PSO's mean convergence time is lower than GA's. From the second picture, GA's fitness curve overlaps with PSO's, which indicates that both heuristics' robustness is similar to a degree. However, the mean convergence time of PSO is 210 ms versus 250 ms for GA, which means PSO is slightly better.

15 GA & PSO Evaluation Scalability when dimension increases
As for scalability, the picture shows: 1) PSO is faster across all tested dimensions. 2) The convergence time of PSO across different dimensions shows less variation, hence better scalability. Thus, in conclusion, PSO is better in terms of scalability.

16 GA & PSO Evaluation Solution quality: Dev = (pso.s − ga.s) / ga.s
From the table, it is easy to see that PSO's solution quality is slightly poorer than GA's. Here quality (the .s terms) is the solution's fitness value, and Dev measures PSO's relative deviation from GA. Above all, we recommend using PSO in the Spark performance-tuning context, since it performs much faster while its solution quality is only slightly poorer than that of GA; a worked instance of the metric follows.
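For concreteness, a worked instance of the deviation metric; the numbers are illustrative, not taken from the paper's table.

```python
# Dev = (pso.s - ga.s) / ga.s; positive Dev means PSO's solution is worse
# (a higher fitness value) than GA's.
pso_s, ga_s = 215.0, 210.0          # illustrative fitness values, not paper data
dev = (pso_s - ga_s) / ga_s         # = 0.0238..., i.e. PSO about 2.4% poorer
```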

17 Conclusion PSO converges much faster, while its solution quality is only slightly poorer than that of GA.
