
Theory Chapter 11
A.E. Eiben and J.E. Smith, Introduction to Evolutionary Computing

Overview
- Motivations and problems
- Holland's Schema Theorem: derivation, implications, refinements
- Dynamical Systems & Markov Chain models
- Statistical Mechanics
- Reductionist techniques
- Techniques for continuous spaces
- No Free Lunch?

Why Bother with Theory?
- Might provide performance guarantees
  - convergence to the global optimum can be guaranteed providing certain conditions hold
- Might aid better algorithm design
  - increased understanding can be gained about operator interplay etc.
- Mathematical models of EAs also inform theoretical biologists
- Because you never know ...

Problems with Theory?
- EAs are vast, complex dynamical systems with many degrees of freedom
- The types of problem on which they do well are precisely those that are hard to model
- The degree of randomness involved means stochastic analysis techniques must be used
- Results tend to describe average behaviour
- After 100 years of work in theoretical biology, researchers are still using fairly crude models of very simple systems ...

Holland's Schema Theorem
- A schema (pl. schemata) is a string over the ternary alphabet {0, 1, #} (# = "don't care") representing a hyperplane within the solution space, e.g. 0001##1##0#, ##1##0##, etc.
- Two values can be used to describe schemata (sketched in code below):
  - the order o(H): the number of defined positions (6 and 2 for the examples above)
  - the defining length d(H): the length of the sub-string between the outermost defined positions (9 and 3 for the examples above)
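
A minimal Python sketch of these definitions (my own illustration, not from the slides; the helper names are hypothetical):

def schema_order(H):
    """Order o(H): the number of defined (non-#) positions."""
    return sum(1 for c in H if c != '#')

def schema_defining_length(H):
    """Defining length d(H): distance between the outermost defined positions."""
    defined = [i for i, c in enumerate(H) if c != '#']
    return defined[-1] - defined[0] if defined else 0

def matches(H, x):
    """True if the bit string x is an instance of schema H."""
    return all(h == '#' or h == b for h, b in zip(H, x))

assert schema_order('0001##1##0#') == 6
assert schema_defining_length('0001##1##0#') == 9
assert schema_order('##1##0##') == 2
assert schema_defining_length('##1##0##') == 3
assert matches('##1##0##', '10100011')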

Example Schemata

[Figure: the three-bit hypercube with vertices 000 ... 111; schemata such as 1##, 01# and 1#0 appear as faces and edges of the cube]

H    o(H)  d(H)
001  3     2
01#  2     1
1#0  2     2
1##  1     0

Schema Fitnesses
- The true "fitness" of a schema H is taken by averaging over all possible values in the "don't care" positions, but this is effectively sampled by the population, giving an estimated fitness f(H,t)
- With fitness proportionate selection, the probability of selecting an instance of H is
  Ps(instance of H) = n(H,t) * f(H,t) / (<f> * μ)
  where μ is the population size and n(H,t) the number of instances of H at time t
- Therefore the proportion of H in the next parent pool (estimated in code below) is
  m'(H,t+1) = m(H,t) * f(H,t) / <f>
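
The sampling idea fits in a few lines of Python (my own sketch; OneMax is an assumed toy problem and estimated_schema_fitness a hypothetical helper):

import random

def estimated_schema_fitness(pop, fitness, H):
    """Mean fitness of the population members that are instances of H."""
    inst = [x for x in pop if all(h in ('#', b) for h, b in zip(H, x))]
    return sum(fitness(x) for x in inst) / len(inst) if inst else 0.0

# Toy setup: OneMax fitness (count of ones) on random 8-bit strings.
fitness = lambda x: x.count('1')
pop = [''.join(random.choice('01') for _ in range(8)) for _ in range(100)]

H = '1#######'
f_H = estimated_schema_fitness(pop, fitness, H)
f_mean = sum(fitness(x) for x in pop) / len(pop)
m_H = sum(1 for x in pop if x[0] == '1') / len(pop)

# Expected proportion of H in the next parent pool: m(H,t) * f(H,t) / <f>
print(m_H * f_H / f_mean)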

Schema Disruption I
- One-point crossover selects a crossover point at random from the l-1 possible points
- For a schema with defining length d(H), the random point will fall inside the schema with probability d(H) / (l-1)
- If recombination is applied with probability Pc, the survival probability is 1 - Pc * d(H) / (l-1) (checked empirically below)
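
A quick Monte Carlo check of the d(H)/(l-1) figure (my own illustration): draw random one-point crossover sites and count how often the cut falls between the schema's outermost defined positions.

import random

H = '0001##1##0#'                    # d(H) = 9, l = 11
l = len(H)
defined = [i for i, c in enumerate(H) if c != '#']
first, last = defined[0], defined[-1]

# A cut between positions i-1 and i (i = 1..l-1) disrupts H
# exactly when it separates the outermost defined positions.
trials = 100_000
hits = sum(first < random.randint(1, l - 1) <= last for _ in range(trials))

print(hits / trials)                 # empirical disruption probability
print((last - first) / (l - 1))      # theory: d(H)/(l-1) = 9/10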

Schema Disruption II
- The probability that bit-wise mutation with probability Pm will NOT disrupt the schema is simply the probability that mutation does NOT occur in any of the defining positions:
  Psurvive(mutation) = (1 - Pm)^o(H) = 1 - o(H) * Pm + terms in Pm^2 + ...
- For low mutation rates, this survival probability under mutation approximates to 1 - o(H) * Pm

The Schema Theorem
- Put together, the proportion of a schema H in successive generations varies as:
  m(H,t+1) >= m(H,t) * (f(H,t) / <f>) * [1 - Pc * d(H)/(l-1)] * [1 - Pm * o(H)]
- The condition for a schema to increase its representation is:
  (f(H,t) / <f>) * [1 - Pc * d(H)/(l-1)] * [1 - Pm * o(H)] > 1
- The inequality is due to convergence affecting crossover disruption; exact versions have been developed
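
As a worked illustration (mine, with assumed parameter values), the bound can be iterated to track a lower bound on a schema's proportion while its fitness advantage persists:

# Schema-theorem lower bound for H = '1#######' (o(H) = 1, d(H) = 0).
# Assumed values: constant 10% fitness advantage, Pc = 0.7, Pm = 0.001.
l, Pc, Pm = 8, 0.7, 0.001
o_H, d_H = 1, 0
fitness_ratio = 1.1                  # f(H,t)/<f>, held constant for illustration

m = 0.5                              # initial proportion of instances of H
for t in range(5):
    growth = fitness_ratio * (1 - Pc * d_H / (l - 1)) * (1 - Pm * o_H)
    m = min(1.0, m * growth)         # a proportion cannot exceed 1
    print(t + 1, round(m, 4))

In practice the fitness ratio shrinks as the population converges on H, which is one reason the theorem says nothing beyond the next generation.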

Implications 1: Operator Bias
- One-point crossover is less likely to disrupt schemata which have short defining lengths relative to their order, as it tends to keep adjacent genes together
  - this is an example of Positional Bias
- Uniform crossover has no positional bias, since the choices are independent
  - BUT it is far more likely to pick 50% of the bits from each parent, and less likely to pick (say) 90% from one
  - this is called Distributional Bias
- Mutation also shows Distributional Bias, but not Positional Bias

Operator Biases (ctd)
- Operator bias has been extensively studied, empirically by Eshelman and Schaffer and theoretically by Spears & De Jong
- The results emphasise the importance of utilising all available problem-specific knowledge when choosing a representation and operators for a new problem

Implications 2: The Building Block Hypothesis
- Closely related to the Schema Theorem is the "Building Block Hypothesis" (Goldberg 1989)
- This suggests that genetic algorithms work by discovering and exploiting "building blocks" - groups of closely interacting genes - and successively combining these (via crossover) to produce larger building blocks until the problem is solved
- It has motivated the study of Deceptive problems
  - based on the notion that the lower-order schemata within a partition lead the search in the opposite direction to the global optimum
  - i.e. for a k-bit partition there are dominant epistatic interactions of order k-1

Deception and Walsh Transforms
- A simple deceptive function of unitation (figure below)
- Much analysis has concentrated on Walsh Transforms (sketched in code below)
  - similar to the Fourier transform in the continuous domain
  - natural link between schemata and Walsh partitions

[Figure: a deceptive function of unitation, F(u) plotted against u = 1..4, with the values 0.6 and 1.0 marked on the F(u) axis]
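
A minimal sketch of a Walsh transform over binary strings (my own illustration, not from the slides), computed with the Hadamard matrix; each schema's average fitness is a linear combination of a subset of these coefficients:

import numpy as np

def walsh_coefficients(f_values):
    """Walsh transform of a function tabulated over {0,1}^l,
    where index i encodes the bit string in binary."""
    n = len(f_values)
    H = np.array([[1.0]])
    while H.shape[0] < n:                      # build the Hadamard matrix
        H = np.kron(np.array([[1, 1], [1, -1]]), H)
    return H @ np.asarray(f_values, float) / n

# 2-bit toy function (assumed values): f(00)=3, f(01)=1, f(10)=1, f(11)=0
print(walsh_coefficients([3, 1, 1, 0]))        # [1.25, 0.75, 0.75, 0.25]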

Criticisms of the Schema Theorem
- It presents an inequality that does not take into account the constructive effects of crossover and mutation
  - exact versions have been derived
  - these have links to Price's theorem in biology
- Because the mean population fitness, and the estimated fitness of a schema, vary from generation to generation, it says nothing about generation t+2 etc.
- "Royal Road" problems, constructed to be GA-easy on the basis of the schema theorem, turned out to be better solved by random mutation hill-climbers
- BUT it remains a useful conceptual tool and has historical importance

Other Landscape Metrics
- As well as epistasis and deception, several other features of search landscapes have been proposed as explanations of what sorts of problems will prove hard for GAs:
  - fitness-distance correlation (computed in the sketch below)
  - the number of peaks present in the landscape
  - the existence of plateaus
- All of these imply a neighbourhood structure on the search space
- It must be emphasised that these only hold for one operator
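
Fitness-distance correlation is easy to estimate from a sample (a sketch of mine; OneMax is an assumed toy problem). For maximisation, values near -1 indicate a landscape where fitness reliably guides search towards the optimum:

import random
import statistics

def fdc(sample, fitness, optimum):
    """Correlation between fitness and Hamming distance to a known optimum."""
    f = [fitness(x) for x in sample]
    d = [sum(a != b for a, b in zip(x, optimum)) for x in sample]
    return statistics.correlation(f, d)

# OneMax: fitness = number of ones, optimum = all ones, so fitness is
# exactly l minus the distance and the correlation is exactly -1.
l = 10
sample = [''.join(random.choice('01') for _ in range(l)) for _ in range(200)]
print(fdc(sample, lambda x: x.count('1'), '1' * l))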

Vose's Dynamical Systems Model
- Let n be the size of a finite search space
- Construct a population vector p with n elements giving the proportion of the population in each possible state
- An n x n mixing matrix M represents the operation of crossover and mutation on the population
- An n x n selection matrix S represents the action of selection

Dynamical Systems 2
- The existence, location and stability of fixed points or attractors for this system is given by the set of coupled equations defining G
- For selection-mutation algorithms using fitness proportionate selection, G is linear and the fixed points are given by its eigenvectors (see the sketch below)
- Note that these are infinite population models
  - extensions to finite populations are possible but computationally intractable
- There is a lot of interest in ways of aggregating states
  - schemata are one option; unitation is another that permits analysis of some well-known problems, e.g. deception, royal road
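
A minimal numpy sketch of the selection-mutation case (my own, on an assumed 2-bit space with toy fitness and mutation values): one generation maps p to U F p / (f . p), so the stable fixed point is the leading eigenvector of U F.

import numpy as np

f = np.array([1.0, 2.0, 2.0, 4.0])             # fitness of states 00, 01, 10, 11
F = np.diag(f)                                  # selection matrix

pm = 0.05                                       # per-bit mutation rate
def mut_prob(i, j, l=2):
    """Probability that state j mutates into state i."""
    k = bin(i ^ j).count('1')                   # Hamming distance
    return pm**k * (1 - pm)**(l - k)
U = np.array([[mut_prob(i, j) for j in range(4)] for i in range(4)])

G = U @ F                                       # one generation, up to normalisation
vals, vecs = np.linalg.eig(G)
p_star = vecs[:, np.argmax(vals.real)].real
print(p_star / p_star.sum())                    # the fixed-point population vector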

Markov Chain Analysis
- A system is called a Markov chain if:
  - it can exist only in one of a finite number of states, so it can be described by a variable Xt
  - the probability of being in any state at time t+1 depends only on the state at time t
- Frequently these probabilities can be defined in a transition matrix, and the theory of stochastic processes allows us to reason with them
- This has been used to provide convergence proofs (see the worked example below)
- It can be used with F and M to create exact probabilistic models for binary-coded GAs, but these are huge
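
A small worked example of reasoning from a transition matrix (mine, not from the slides): the exact expected optimisation time of randomised local search on OneMax, via the fundamental matrix of the absorbing chain.

import numpy as np

# State = number of ones in an l-bit string; each step flips one
# uniformly chosen bit and keeps the result only if it improves.
l = 10
P = np.zeros((l + 1, l + 1))
for k in range(l):
    P[k, k + 1] = (l - k) / l        # flipped a 0: improvement accepted
    P[k, k] = k / l                  # flipped a 1: rejected, stay put
P[l, l] = 1.0                        # the optimum is absorbing

# Expected steps to absorption: t = (I - Q)^-1 * 1, Q = transient part of P.
Q = P[:l, :l]
t = np.linalg.solve(np.eye(l) - Q, np.ones(l))
print(t[0])    # from all zeros: l * (1 + 1/2 + ... + 1/l), about 29.3 for l = 10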

Statistical Mechanics Models
- Use techniques borrowed from physics
- Just as with schemata and equivalence classes such as unitation, these use a few statistics to model the behaviour of a complex system
- The statistics chosen are the cumulants of the fitness distribution
  - related to the mean, variance, skewness etc. of population fitness
  - cannot model e.g. best fitness
- Can provide more accurate predictions of the short-term (rather than steady-state) behaviour of finite populations than dynamical systems approaches

Reductionist Approaches
- An "engineering"-type approach of studying different operators and processes in isolation:
  - analysis of takeover times for different selection operators via Markov chains (simulated below)
  - analysis of "mixing times" for crossover to put together building blocks to create new solutions
  - analysis of the population sizes needed, based on schema variance, signal-to-noise ratios, etc.
- Can provide useful pointers for designing EAs in practice, e.g. T(takeover) < T(mixing) => premature convergence
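
Takeover time is simple to estimate by simulation (my own sketch, with assumed parameters): run binary tournament selection alone and count generations until the initial best individual fills the population.

import random
import statistics

def takeover_time(pop_size=100, k=2):
    """Generations until selection alone fills the population with copies
    of the single initial best individual (fitness 1; all others 0)."""
    pop = [1] + [0] * (pop_size - 1)
    gens = 0
    while 0 < sum(pop) < pop_size:
        pop = [max(random.choices(pop, k=k)) for _ in range(pop_size)]
        gens += 1
    return gens if sum(pop) == pop_size else None   # None: best lost by drift

runs = [t for t in (takeover_time() for _ in range(200)) if t is not None]
print(statistics.mean(runs))   # roughly ln(n)/ln(k) generations (Goldberg & Deb)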

Continuous Space Models
- Theory is probably more advanced than for discrete spaces, and includes self-adaptation
- Most analyses model two variables:
  - Progress Rate: the distance of the centre of mass of the population from the global optimum, as a function of time
  - Quality Gain: the expected improvement in fitness between generations
- There is lots of theory describing behaviour on simple (sphere, corridor) models (see the sketch below)
  - these are often good descriptors of the local properties of landscapes
- The theory has been extended to noisy environments
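
A sketch of measuring progress rate empirically (mine, with assumed parameter values): a (1+1)-ES with a fixed step size on the sphere model. Because sigma is not adapted, progress stalls once the step size is large relative to the remaining distance, which is precisely what self-adaptation addresses.

import math
import random

def one_plus_one_es(n=10, sigma=0.1, gens=2000):
    """(1+1)-ES minimising the sphere f(x) = sum of x_i^2; returns the
    distance to the optimum at each generation."""
    x = [1.0] * n
    dists = []
    for _ in range(gens):
        y = [xi + random.gauss(0, sigma) for xi in x]
        if sum(v * v for v in y) <= sum(v * v for v in x):
            x = y                        # accept if not worse
        dists.append(math.sqrt(sum(v * v for v in x)))
    return dists

d = one_plus_one_es()
print((d[0] - d[-1]) / len(d))           # average progress rate per generation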

No Free Lunch Theorems
- In layman's terms: averaged over all problems,
  - for any performance metric related to the number of distinct points seen,
  - all non-revisiting black-box algorithms will display the same performance
- Implications:
  - a new black-box algorithm that is good for one problem is probably poor for another
  - it makes sense not to use "black-box" algorithms
  - there is lots of ongoing work showing counter-examples (the claim can be verified exhaustively on tiny spaces, as below)
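
On a space small enough to enumerate every possible function, the NFL result can be checked directly (my own illustration): every non-revisiting fixed search order has exactly the same average cost to first hit the maximum.

from itertools import product

X_SIZE, Y_VALUES = 4, 3                   # 4 search points, 3 fitness levels
orders = [(0, 1, 2, 3), (3, 1, 0, 2), (2, 3, 1, 0)]   # three fixed search orders

def cost(order, f):
    """Distinct points evaluated before first finding the maximum of f."""
    best = max(f)
    for steps, x in enumerate(order, start=1):
        if f[x] == best:
            return steps

for order in orders:
    total = sum(cost(order, f) for f in product(range(Y_VALUES), repeat=X_SIZE))
    print(order, total / Y_VALUES**X_SIZE)   # identical average for every order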