1
The Particle Swarm: Theme and Variations on Computational Social Learning
James Kennedy
Washington, DC
Kennedy.Jim@gmail.com
2
The Particle Swarm
- A stochastic, population-based algorithm for problem-solving
- Based on a social-psychological metaphor
- Used by engineers, computer scientists, applied mathematicians, etc.
- First reported in 1995 by Kennedy and Eberhart
- Constantly evolving
3
The Particle Swarm Paradigm is a Particle Swarm
A kind of program comprising a population of individuals that interact with one another according to simple rules in order to solve problems, which may be very complex.
It is also an apt description of the process of science itself.
4
Two Spaces
- The social network topology, and
- The state of the individual as a point in a Cartesian coordinate system
Moving point = a particle
A bunch of them = a swarm
Note memes: "evolution of ideas" vs. "changes in people who hold ideas"
5
Cognition as Optimization
- Cognitive consistency theories, including dissonance
- Feedforward neural nets
- Minimizing or maximizing a function result by adjusting parameters
- Parallel constraint satisfaction
The particle swarm describes the dynamics of the network, as opposed to its equilibrium properties.
6
Mind and Society
Minsky: Minds are simply what brains do.
No: minds are what socialized human brains do.
… solipsism …
7
Dynamic Social Impact Theory
- i = f(S, I, N): impact is a function of the Strength, Immediacy, and Number of sources
- Computer simulation: a 2-D cellular automaton
- Each individual is both a target and a source of influence
- "Euclidean" neighborhoods
- Binary, univariate individuals
- "Strength" randomly assigned
- Polarization
Nowak, A., Szamrej, J., & Latané, B. (1990). From private attitude to public opinion: A dynamic theory of social impact. Psychological Review, 97, 362-376.
8
Particle Swarms: The Population
- To understand the particle swarm, you have to understand the interactions of the particles
- One particle is the stupidest thing in the world
- The population learns
- Every particle is a teacher and a learner
- Social learning, norms, conformity, group dynamics, social influence, persuasion, self-presentation, cognitive dissonance, symbolic interactionism, cultural evolution …
9
Neighborhoods: Population Topology
[Diagrams of gbest and lbest topologies, all N = 20]
Particles learn from one another. Their communication structure determines how solutions propagate through the population.
10
von Neumann ("Square") Topology
Regular, easy to understand, and works adequately with a variety of versions – perhaps not the best for any version, but not the worst.
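To make the topology concrete, here is a small Python sketch of one way to build von Neumann neighborhoods on a toroidal grid. The 4 x 5 grid size and the inclusion of the particle in its own neighborhood are illustrative assumptions, not specifications from the slides.

def von_neumann_neighbors(rows, cols):
    """For each particle on a rows x cols torus, return the indices of its
    four lattice neighbors (up, down, left, right) plus itself."""
    def idx(r, c):
        return (r % rows) * cols + (c % cols)
    neighborhoods = []
    for r in range(rows):
        for c in range(cols):
            neighborhoods.append([idx(r, c), idx(r - 1, c), idx(r + 1, c),
                                  idx(r, c - 1), idx(r, c + 1)])
    return neighborhoods

# Example: a 4 x 5 grid gives the N = 20 population used in the topology slides.
print(von_neumann_neighbors(4, 5)[0])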
11
The Particle Swarm: Pseudocode

Initialize population and constants
Repeat
  For i = 1 to population size
    CurrentEval_i = eval(x_i)
    If CurrentEval_i < pbest_i then
      pbest_i = CurrentEval_i
      For d = 1 to Dimension
        p_id = x_id
      Next d
      If CurrentEval_i < pbest_gbest then gbest = i
    End if
    g = index of best neighbor
    For d = 1 to Dimension
      v_id = W*v_id + U(0, AC)*(p_id - x_id) + U(0, AC)*(p_gd - x_id)
      x_id = x_id + v_id
    Next d
  Next i
Until termination criterion is met
12
Pseudocode Chunk #1

CurrentEval_i = eval(x_i)
If CurrentEval_i < pbest_i then
  pbest_i = CurrentEval_i
  For d = 1 to Dimension
    p_id = x_id
  Next d
  If CurrentEval_i < pbest_gbest then gbest = i
End if

- Can come at the top or bottom of the loop
- In the gbest topology, g = gbest
- It's useful to track the population best
13
Pseudocode Chunk #2

g = index of best neighbor
For d = 1 to Dimension
  v_id = W*v_id + U(0, AC)*(p_id - x_id) + U(0, AC)*(p_gd - x_id)
  x_id = x_id + v_id
Next d

- Constriction Type 1″: W = 0.7298, AC = W*2.05 = 1.496
- Clerc 2006: W = 0.7, AC = 1.43
- Inertia: W might vary with time, e.g. in (0.4, 0.9); AC = 2.0 typically
- Note the three components of velocity
- Vmax: not necessary, but might help
Q: Are these formulas arbitrary?
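To make the pseudocode concrete, here is a minimal runnable sketch in Python of the canonical constricted particle swarm. The lbest ring topology, the sphere objective, and the population size, dimensionality, and iteration count are illustrative choices, not values prescribed by the slides.

import random

W, AC = 0.7298, 1.496          # constriction Type 1'' coefficients
N, D, ITERS = 20, 30, 1000     # population size, dimensions, iterations

def sphere(x):                 # objective: minimize the sum of squares
    return sum(xi * xi for xi in x)

# ring (lbest) topology: each particle is informed by itself and its two ring neighbors
neighbors = [[(i - 1) % N, i, (i + 1) % N] for i in range(N)]

x = [[random.uniform(-100, 100) for _ in range(D)] for _ in range(N)]  # positions
v = [[0.0] * D for _ in range(N)]                                      # velocities
p = [xi[:] for xi in x]                                                # previous bests
pbest = [sphere(xi) for xi in x]                                       # their evaluations

for _ in range(ITERS):
    for i in range(N):
        g = min(neighbors[i], key=lambda k: pbest[k])   # best neighbor's index
        for d in range(D):
            v[i][d] = (W * v[i][d]
                       + random.uniform(0, AC) * (p[i][d] - x[i][d])
                       + random.uniform(0, AC) * (p[g][d] - x[i][d]))
            x[i][d] += v[i][d]
        f = sphere(x[i])
        if f < pbest[i]:        # update the personal best (bottom of the loop here)
            pbest[i] = f
            p[i] = x[i][:]

print("best value found:", min(pbest))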
14
Some Standard Test Functions
- Sphere
- Griewank
- Rosenbrock
- Rastrigin
- Schaffer's f6
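For reference, here are the standard definitions of these benchmarks in Python; Schaffer's f6 is two-dimensional, the others take a vector of any length.

import math

def sphere(x):
    return sum(xi * xi for xi in x)

def rastrigin(x):
    return 10 * len(x) + sum(xi * xi - 10 * math.cos(2 * math.pi * xi) for xi in x)

def griewank(x):
    s = sum(xi * xi for xi in x) / 4000.0
    p = math.prod(math.cos(xi / math.sqrt(i + 1)) for i, xi in enumerate(x))
    return 1 + s - p

def rosenbrock(x):
    return sum(100 * (x[i + 1] - x[i] ** 2) ** 2 + (1 - x[i]) ** 2
               for i in range(len(x) - 1))

def schaffer_f6(x):
    r2 = x[0] ** 2 + x[1] ** 2
    return 0.5 + (math.sin(math.sqrt(r2)) ** 2 - 0.5) / (1 + 0.001 * r2) ** 2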
15
Test Functions: Typical Results
16
Step-Size Depends on Neighbors
[Trajectory plots for p_i = p_g = 0; p_i = +2, p_g = -2; p_i = +0.1, p_g = -0.1]
Movement of the particle through the search space is centered on the mean of p_i and p_g on each dimension, and its amplitude is scaled to their difference.
Exploration vs. exploitation: automatic
… "the box" …
17
Search Distribution – Scaled to the Neighborhood
Q: What is the distribution of points that are tested by the particle?
Previous bests held constant at +10; a million iterations.
18
"Bare Bones" Particle Swarm
x_id = G((p_id + p_gd)/2, |p_id - p_gd|)
where G(mean, s.d.) is a Gaussian RNG.
Simplified (!): the velocity update is replaced by sampling around the previous bests.
Works pretty well, but not as well as the canonical version.
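A minimal sketch of the bare-bones update for one particle on one dimension, using random.gauss from the standard library; the surrounding loop over particles and dimensions is the same as in the canonical version above.

import random

def bare_bones_step(p_id, p_gd):
    """Sample the particle's next coordinate directly from a Gaussian centered on
    the mean of the two previous bests, with s.d. equal to their absolute
    difference (no velocity term)."""
    mean = (p_id + p_gd) / 2.0
    sd = abs(p_id - p_gd)
    return random.gauss(mean, sd)

# example: previous bests at +2 and -2 on this dimension
print(bare_bones_step(2.0, -2.0))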
19
Kurtosis
[Histograms of sampled points: tails trimmed vs. not trimmed]
Empirical observations with the p's held constant: the distribution is peaked, with fat tails.
20
Kurtosis: High Peak, Fat Tails
Mean moments of the canonical particle swarm algorithm with previous bests set at +20, varying the number of iterations.

Iterations   Mean     S.D.     Skewness  Kurtosis
1,000        0.0970   37.7303  -0.0617   8.008
3,000        0.0214   41.5281   0.0814   18.813
10,000      -0.0080   41.6614  -0.0679   40.494
100,000      0.0022   41.7229   0.2116   170.204
1,000,000    0.0080   41.3048   0.3808   342.986
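A sketch of how such moments might be measured: iterate the canonical update for a single particle with the previous bests held fixed and compute the sample moments of the visited positions. Here the two bests are pinned at +20 and -20 (reading the slide's "+20" as their magnitude, which matches the table's near-zero mean and s.d. near 40 – that reading is an assumption); scipy.stats is assumed for skewness and kurtosis.

import random
from scipy.stats import skew, kurtosis

W, AC = 0.7298, 1.496      # constriction coefficients
P1, P2 = 20.0, -20.0       # pinned previous bests (assumed magnitudes)

x, v = random.uniform(-20, 20), 0.0
samples = []
for _ in range(100_000):
    v = W * v + random.uniform(0, AC) * (P1 - x) + random.uniform(0, AC) * (P2 - x)
    x += v
    samples.append(x)

n = len(samples)
mean = sum(samples) / n
sd = (sum((s - mean) ** 2 for s in samples) / (n - 1)) ** 0.5
print(mean, sd, skew(samples), kurtosis(samples, fisher=False))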
21
Bursts of Outliers “Volatility clustering” seems to typify the particle’s trajectory
22
Adding Bursts of Outliers to Bare Bones PSO

Center = (p_id + p_gd)/2
SD = |p_id - p_gd|
x_id = G(0,1)
If Burst = 0 and U(0,1) < PBurstStart then
  Burst = U(0, maxpower)
Else if Burst > 0 and U(0,1) < PBurstEnd then
  Burst = 0
End if
If Burst > 0 then x_id = x_id ^ Burst
x_id = Center + x_id * SD

[Results plotted for Sphere, Rosenbrock, Rastrigin, Griewank30, Griewank10, and f6; the bubbled line is the canonical PS]
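A sketch of this burst mechanism in Python. PBurstStart, PBurstEnd, and maxpower are free parameters whose values the slide does not give, so the numbers below are placeholders; the sign of the Gaussian draw is preserved when raising it to the Burst power so the result stays real, which is also an assumption.

import random

def burst_bare_bones_step(p_id, p_gd, state,
                          p_burst_start=0.01, p_burst_end=0.1, maxpower=3.0):
    """One-dimensional burst bare-bones update. `state` carries the current Burst
    exponent between calls (0 means no burst in progress). The probability and
    power constants are placeholders, not values from the slide."""
    center = (p_id + p_gd) / 2.0
    sd = abs(p_id - p_gd)
    z = random.gauss(0, 1)

    if state["burst"] == 0 and random.random() < p_burst_start:
        state["burst"] = random.uniform(0, maxpower)   # start a burst
    elif state["burst"] > 0 and random.random() < p_burst_end:
        state["burst"] = 0                             # end the burst
    if state["burst"] > 0:
        # raise the magnitude to the Burst power, keeping the sign
        z = (1 if z >= 0 else -1) * abs(z) ** state["burst"]

    return center + z * sd

state = {"burst": 0}
print(burst_bare_bones_step(2.0, -2.0, state))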
23
"The Box"
Where the particle goes next depends on which way it was already going, the random numbers, and the sign and magnitude of the differences.
The area where it can go is crisply bounded, but the probability inside the box is not uniformly dense.
v_id = W*v_id + U(0, AC)*(p_id - x_id) + U(0, AC)*(p_gd - x_id)
x_id = x_id + v_id
24
Empirical distribution of the means of random numbers from different ranges
Simulate with a uniform RNG, trim the tails.
25
TUPS: Truncated-Uniform Particle Swarm
1. Start at the current position, x(t).
2. Move a weighted amount in the same direction: W1 × (x(t) - x(t-1)).
3. Find the midpoint of the extremes, and the difference between them, on each dimension (the sides of the box).
4. Weight that, add it to where you are, and call it the "center".
5. Generate a uniformly distributed random number around the center, with range slightly less than the length of the side.
6. That's x(t+1).
26
TUPS: Truncated-Uniform Particle Swarm
x(t+1) = x(t) + W1 × (x(t) - x(t-1)) + W2 × (U(-1,+1) × (width/2.5) + center)
W1 = 0.729; W2 = 1.494
Width is the difference between the highest and lowest (p - x); center is width/2.
This generates a point less than width/2 from the center.
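A sketch of the TUPS update on one dimension, under the reading that "highest and lowest (p - x)" means the larger and smaller of the two difference terms (p_i - x) and (p_g - x); that interpretation is mine, not spelled out on the slide.

import random

W1, W2 = 0.729, 1.494

def tups_step(x_t, x_prev, p_i, p_g):
    """Truncated-uniform update on one dimension: persistence plus a uniform
    draw in a box defined by the differences between the previous bests and x."""
    diffs = (p_i - x_t, p_g - x_t)
    width = max(diffs) - min(diffs)        # length of the box's side
    center = min(diffs) + width / 2.0      # midpoint of the extremes
    social = random.uniform(-1, 1) * (width / 2.5) + center
    return x_t + W1 * (x_t - x_prev) + W2 * social

print(tups_step(0.0, -0.5, 2.0, -2.0))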
27
Binary Particle Swarms
S(x) = 1 / (1 + exp(-x))
v = [the usual]
if rand() < S(v) then x = 1 else x = 0
- Transform the velocity with the sigmoid function into (0, 1)
- Use it as a probability threshold
Though this is a radically different concept, the principles of particle interaction still apply (because the power is in the interactions).
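A sketch of the binary update for a single bit. The velocity is computed with the usual formula over p_id and p_gd (here 0/1 values); clamping v at ±4 is a common convention for the binary swarm and is included as an assumption.

import math
import random

W, AC, VMAX = 0.7298, 1.496, 4.0

def binary_step(v, x, p_id, p_gd):
    """Update one bit: the usual velocity rule, then sigmoid(v) used as the
    probability that the bit is set to 1."""
    v = (W * v
         + random.uniform(0, AC) * (p_id - x)
         + random.uniform(0, AC) * (p_gd - x))
    v = max(-VMAX, min(VMAX, v))            # clamp (assumed convention)
    s = 1.0 / (1.0 + math.exp(-v))          # sigmoid squashing
    x = 1 if random.random() < s else 0
    return v, x

print(binary_step(0.0, 0, 1, 1))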
28
FIPS – The "Fully-Informed" Particle Swarm (Rui Mendes)
v(t+1) = W1 × v(t) + Σ_k (rand() × (W2/K) × (p_k - x(t)))
x(t+1) = x(t) + v(t+1)
(K = number of neighbors, k = index of a neighbor; W2 is the sum of the acceleration coefficients, divided equally among the neighbors.)
- Note that p_i is not a source of influence in FIPS.
- Doesn't select the best neighbor.
- Orbits around the mean of the neighborhood bests.
- This version is more dependent on topology.
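A sketch of the FIPS update for one particle. `neighbors[i]` lists particle i's neighbor indices (excluding i itself, per the slide); taking W1 = 0.7298 and W2 = 2.992, i.e. the sum of the two canonical acceleration coefficients, is an assumption.

import random

W1, W2 = 0.7298, 2.992   # W2 assumed to be the total acceleration, split among neighbors

def fips_step(i, x, v, p, neighbors):
    """Fully-informed update: every neighbor's previous best pulls on the particle."""
    K = len(neighbors[i])
    for d in range(len(x[i])):
        social = sum(random.uniform(0, 1) * (W2 / K) * (p[k][d] - x[i][d])
                     for k in neighbors[i])
        v[i][d] = W1 * v[i][d] + social
        x[i][d] += v[i][d]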
29
Deconstructing Velocity
x_id(t+1) = x_id(t) + W1*(x_id(t) - x_id(t-1)) + rand()*W2*(p_id - x_id(t)) + rand()*W2*(p_gd - x_id(t))
Because x(t+1) = x(t) + v(t+1), we know that on the previous iteration x(t) = x(t-1) + v(t).
So we can find v(t): v(t) = x(t) - x(t-1), and substitute that into the formula to put it all on one line:
30
Generalization and Verbal Representation
We can generalize the canonical and FIPS versions:
x_id(t+1) = x_id(t) + W1*(x_id(t) - x_id(t-1)) + Σ_k (rand()*(W2/K)*(p_kd - x_id(t)))
… or in words …
NEW POSITION = CURRENT POSITION + PERSISTENCE + SOCIAL INFLUENCE
31
Social influence has two components: central tendency and dispersion.
NEW POSITION = CURRENT POSITION + PERSISTENCE + SOCIAL CENTRAL TENDENCY + SOCIAL DISPERSION
… Hmm, this gives us something to play with …!
32
Gaussian "Essential" Particle Swarm
NEW POSITION = CURRENT POSITION + PERSISTENCE + SOCIAL CENTRAL TENDENCY + SOCIAL DISPERSION
meanp = (p_id + p_gd)/2
disp = |p_id - p_gd|/2
x_id(t+1) = x_id(t) + W1*(x_id(t) - x_id(t-1)) + W2*(meanp - x_id) + G(0,1)*disp
where G(0,1) is a Gaussian RNG.
Note that only the last term has randomness in it – the rest is deterministic.
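A sketch of the Gaussian "essential" update on one dimension, borrowing W1 = 0.729 and W2 = 1.494 from the TUPS slide as an assumption, since this slide does not restate the weights.

import random

W1, W2 = 0.729, 1.494   # persistence and social weights (assumed)

def gaussian_essential_step(x_t, x_prev, p_id, p_gd):
    """Persistence + deterministic pull toward the mean of the two bests
    + Gaussian noise scaled to half their separation."""
    meanp = (p_id + p_gd) / 2.0
    disp = abs(p_id - p_gd) / 2.0
    return (x_t
            + W1 * (x_t - x_prev)
            + W2 * (meanp - x_t)
            + random.gauss(0, 1) * disp)

print(gaussian_essential_step(0.0, -0.5, 2.0, -2.0))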
33
Gaussian Essential Particle Swarm
x_id(t+1) = x_id(t) + W1*(x_id(t) - x_id(t-1)) + W2*(meanp - x_id) + G(0,1)*disp

Function     Trials  Canonical  Gaussian
F6           20      0.0015     13E-10
GRIEWANK     20      0.0086     0.0103
GRIEWANK10   20      0.0508     0.045
RASTRIGIN    20      56.862     49.151
ROSENBROCK   20      41.836     44.197
SPHERE       20      88E-16     38E-24
34
Various Probability Distributions

Function     Trials  Canonical  Gaussian  Triangular  Double-Exponential  Cauchy
F6           20      0.0015     13E-10    0.0057      0.001               0.0029
GRIEWANK     20      0.0086     0.0103    0.0275      0.0149              0.0253
GRIEWANK10   20      0.0508     0.045     0.0694      0.0541              0.0768
RASTRIGIN    20      56.862     49.151    140.944     7.263               3.829
ROSENBROCK   20      41.836     44.197    67.894      41.308              42.054
SPHERE       20      88E-16     38E-24    1.70E-18    2.40E-22            1.60E-17

There is clearly room for exploration here.
35
Gaussian FIPS
Fully informed – uses all neighbors.
FIPScenter = mean over neighbors k of (p_kd - x_id)
FIPSrange = mean over neighbors k of |p_id - p_kd|
x_id = x_id + W1 × (x_id(t) - x_id(t-1)) + W2 × FIPScenter + G(0,1) × (FIPSrange/2)
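A sketch of the Gaussian FIPS update, again with W1 = 0.729 and W2 = 1.494 as assumed weights; `neighbors[i]` lists particle i's neighbor indices, and `x_prev` holds the previous positions.

import random

W1, W2 = 0.729, 1.494   # assumed weights, as in the earlier Gaussian version

def gaussian_fips_step(i, d, x, x_prev, p, neighbors):
    """One-dimensional Gaussian FIPS update: deterministic pull toward the mean
    neighbor offset, plus Gaussian noise scaled to the mean spread of the bests."""
    nbrs = neighbors[i]
    K = len(nbrs)
    center = sum(p[k][d] - x[i][d] for k in nbrs) / K
    rng = sum(abs(p[i][d] - p[k][d]) for k in nbrs) / K
    return (x[i][d]
            + W1 * (x[i][d] - x_prev[i][d])
            + W2 * center
            + random.gauss(0, 1) * (rng / 2.0))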
36
Gaussian FIPS
Gaussian FIPS compared to Canonical PSO, square topology.

Function     Trials  Canonical  Gaussian FIPS
F6           20      0.0015     0.001
GRIEWANK     20      0.0086     0.0007
GRIEWANK10   20      0.0508     0.0215
RASTRIGIN    20      56.862     36.858
ROSENBROCK   20      41.836     40.365
SPHERE       20      88E-16     41E-29
37
Gaussian FIPS vs. Canonical PSO: t(38), alpha = 0.05, modified Bonferroni correction

Function     t      p-value  Rank  Inv. rank  New alpha  Sig.
SPHERE       3.17   0.0030   1     6          0.008333   *
GRIEWANK10   2.99   0.0048   2     5          0.010000   *
GRIEWANK     2.64   0.0118   3     4          0.012500   *
RASTRIGIN    2.21   0.0333   4     3          0.016667   .
F6           0.45   0.6583   5     2          0.025000   .
ROSENBROCK   0.15   0.8782   6     1          0.050000   .

Ref: Jaccard, J. & Wan, C. K. (1996). LISREL approaches to interaction effects in multiple regression. Thousand Oaks, CA: Sage Publications.
38
Mendes: Two Measures of Performance
[Scatter plot; color and shape indicate parameters of the social network – degree, clustering, etc.]
39
Best Topologies
[Best-neighbor versions vs. FIPS versions]
40
Worst FIPS Sociometries and Proportions Successful
41
Understanding the Particle Swarm
- Lots of variations in particle movement formulas
- Teachers and learners
- Propagation of knowledge is central
- Interaction method and social network topology … interact
- It's simple, but difficult to understand
42
In Sum
There is some fundamental process:
- Uses information from neighbors
- Seems to require a balance between "persistence" and "influence"
Decomposing a version we know is OK:
- We can understand it
- We can improve it
Particle trajectories:
- Arbitrary? – not quite
- Can be replaced by an RNG (the trajectory is not the essence)
How it works:
- It works by sharing successes among individuals
- Need to look more closely at the sharing phenomenon itself
43
Send me a note
Jim Kennedy
Kennedy.Jim@gmail.com