Biologically Inspired Computation Really finishing off EC.


But first:
- Finishing off encodings
- Information about mandatory reading
- Information about CW2

E.g. encoding a timetable I: 4, 5, 13, 1, 1, 7, 13, 2. Generate any string of 8 numbers between 1 and 16, and we have a timetable! Exam1 in the 4th slot, Exam2 in the 5th slot, etc.

         mon      tue      wed      thur
 9:00    E4, E5   E2                E3, E7
 11:00   E8
 2:00             E6
 4:00    E1

Fitness may be + + etc … Figure out an encoding, and a fitness function, and you can try to evolve solutions.

E.g. encoding a timetable II: 4, 5, 13, 1, 1, 7, 13, 2. Use the 4th clash-free slot for exam1, the 5th clash-free slot for exam2 (it clashes with E4, E8), the 13th clash-free slot for exam3, etc.

         mon      tue      wed      thur
 9:00    E4, E5                     E3, E7
 11:00   E8
 2:00             E6, E2
 4:00    E1

So, a common approach is to build an encoding around an algorithm that builds a solution. Don't encode a candidate solution directly; instead, encode parameters/features for a constructive algorithm that builds a candidate solution.

e.g. bin-packing: given a collection of items, pack them into the fewest possible bins.

Engineering Constructive Algorithms A typical constructive algorithm for bin-packing:
  Put the items in a specific sequence (e.g. smallest to largest)
  Initialise the solution
  Repeat nitems times:
    choose the next item, place it in the first bin it fits
    (create a new empty bin if necessary)
Indirect encodings often involve using a constructive algorithm.
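The first-fit algorithm above can be sketched as follows (a minimal illustration, not the lecture's own code; bin capacity and item sizes are assumptions). Sorting ascending gives First-Fit Ascending (FFA), descending gives First-Fit Descending (FFD):

```python
def first_fit(items, capacity, ascending=True):
    """Pack items with first-fit; the sort order gives FFA or FFD."""
    order = sorted(items, reverse=not ascending)
    bins = []   # each bin is a list of item sizes
    loads = []  # current load of each bin
    for item in order:
        for i, load in enumerate(loads):
            if load + item <= capacity:  # first bin with room
                bins[i].append(item)
                loads[i] += item
                break
        else:                            # no bin fits: open a new one
            bins.append([item])
            loads.append(item)
    return bins
```

On a small example such as `first_fit([4, 8, 1, 4, 2, 1], 10)`, FFA and FFD can return different numbers of bins, which is exactly why the choice of constructive heuristic matters.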

Example using the 'First-Fit Ascending' (FFA) constructive algorithm for bin-packing [worked figure not preserved in transcript]

Example using First-Fit Descending (FFD) [worked figure not preserved in transcript]

Notes: In other problems, FFA gets better results than FFD. There are many other constructive heuristics for bin packing, e.g. using formulae that choose the next item based on the distribution of unplaced item sizes and empty spaces. There are constructive algorithms for most problems (e.g. timetabling, scheduling, etc.). Often, 'indirect encodings' for EAs use constructive algorithms. A common approach: the encoding is a permutation, and the solution is built using the items in that order. READ THE FALKENAUER PAPER TO SEE A GOOD EA ENCODING FOR BIN-PACKING.

Encodings that use constructive algorithms The indirect encoding for timetabling, a few slides ago, is an example. The 'underlying' constructive algorithm is: line up the exams in order e1, e2, …, eN; repeat until all exams are scheduled: take the next exam in the list, and put it in the first place it can go without clashes. This provides only a single solution, the same every time we run it. That solution may be very bad in terms of other factors, such as consecutive exams, time allowed for marking, etc. How did we modify it so that it could generate a big space of different solutions?


Encodings that use constructive algorithms Line up the exams in order e1, e2, …, eN. Repeat until all exams are scheduled: take the next exam in the list, and put it in the Nth place it can go without clashes. The chromosome encodes each of the Ns. The original constructive algorithm corresponds to running the above on the chromosome "1, 1, 1, …, 1". We could also engineer the original constructive algorithm into an encoding in a quite different way. How?
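The "Nth clash-free place" decoder can be sketched like this (an illustrative sketch: `clashes` is a hypothetical set of exam pairs that share students, and wrapping with `%` keeps any gene value legal, which is one design choice among several):

```python
def decode(chromosome, clashes, n_slots):
    """Gene i says which clash-free slot (1st, 2nd, ...) exam i goes into."""
    timetable = {s: [] for s in range(n_slots)}
    for exam, n in enumerate(chromosome):
        # slots where this exam clashes with no already-placed exam
        free = [s for s in range(n_slots)
                if not any(frozenset((exam, e)) in clashes
                           for e in timetable[s])]
        slot = free[(n - 1) % len(free)]  # wrap so any gene value is legal
        timetable[slot].append(exam)
    return timetable
```

Running it on an all-1s chromosome reproduces the original "first clash-free place" constructive algorithm.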


Encodings that use constructive algorithms Randomly permute the exams e1, …, eN. Repeat until all exams are scheduled: take the next exam in the list, and put it in the first place it can go without clashes. This is a fine constructive algorithm, which will provide a different solution depending on the permutation. It is easily used as an encoding: the chromosome provides the permutation.
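The permutation encoding above might be sketched like this (again illustrative: `clashes` is a hypothetical set of exam pairs that share students, and we assume there are enough slots for every exam):

```python
def decode_permutation(perm, clashes, n_slots):
    """The chromosome is an ordering of the exams; decode by placing each
    exam, in that order, into the first clash-free slot."""
    timetable = {s: [] for s in range(n_slots)}
    for exam in perm:
        for s in range(n_slots):
            if not any(frozenset((exam, e)) in clashes for e in timetable[s]):
                timetable[s].append(exam)
                break
    return timetable
```

Two different permutations of the same exams can decode to quite different timetables, which is what gives the EA a space to search.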

Other well-known constructive algorithms Prim's algorithm for building the minimum spanning tree (see an earlier lecture) is an example. Dijkstra's shortest-path algorithm is also an example. In both of these cases, the optimal solution is guaranteed to be found, since MST and SP are easy problems. But usually we see constructive methods used to give very fast 'OK' solutions to hard problems.

On engineering constructive methods Some constructive heuristics are deterministic, i.e. they give the same answer each time. Some are stochastic, i.e. they may give a different solution in different runs. Usually, if we have a deterministic constructive method such as FFD, we can engineer a stochastic version of it. E.g. instead of choosing the next-heaviest item at each step, we might choose randomly between the heaviest three unplaced items.
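The stochastic variant just described might be sketched like this (a sketch of the ordering step only; `k=3` and the data are illustrative, and the resulting order would then be fed into a first-fit placement step):

```python
import random

def stochastic_ffd_order(items, k=3, rng=random):
    """Order items for FFD, but pick each next item at random
    from the k heaviest unplaced items instead of always the heaviest."""
    remaining = sorted(items, reverse=True)  # heaviest first
    order = []
    while remaining:
        i = rng.randrange(min(k, len(remaining)))
        order.append(remaining.pop(i))
    return order
```

With k=1 this collapses back to the deterministic FFD ordering; larger k gives a different solution on different runs, which is what an EA needs.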

Bin packing example, direct encoding: 2, 3, 2, 3, … means item 1 is in bin 2, item 2 is in bin 3, item 3 is in bin 2, etc. (Often a bin will be over capacity, so the fitness function will have to include penalties.) Bin packing example, indirect encoding: the candidate solution is a permutation of the items, e.g. 4, 2, 1, 3, 5, … meaning: first place item 4 in the first available bin it fits in, then place item 2 in the first available bin, etc.
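Evaluating the direct encoding above might look like this (a sketch: the penalty weight of 10 is an illustrative choice, not from the lecture; lower fitness is better here):

```python
def direct_fitness(chromosome, sizes, capacity, penalty=10.0):
    """Gene i is the bin of item i; fitness = bins used + overflow penalty."""
    loads = {}
    for item, b in enumerate(chromosome):
        loads[b] = loads.get(b, 0) + sizes[item]
    overflow = sum(max(0, load - capacity) for load in loads.values())
    return len(loads) + penalty * overflow
```

Note how the direct encoding can represent infeasible solutions (over-full bins), which is why the penalty term is needed, whereas the indirect permutation encoding only ever builds feasible packings.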

Direct vs Indirect Encodings Direct: straightforward genotype (encoding) → phenotype (actual solution) mapping. Easy to estimate the effects of mutation; fast interpretation of the chromosome (hence speedier fitness evaluation). Indirect/Hybrid: easier to exploit domain knowledge (e.g. use it in the constructive heuristic); hence possible to 'encode away' undesirable features; hence can seriously cut down the size of the search space. But: slow interpretation, and neighbourhoods are highly rugged.

Example real-number encoding (and: how EAs can innovate, rather than just optimize) Design shape for a two-phase jet nozzle: D1, D2, D3, D4, D5, D6, with D1 >= D2 >= D3 and D4 <= D5 <= D6. Fixed at six diameters, five sections.

A simple encoding: 2, 1.8, 1.1, … The encoding enforces these constraints: D1 >= D2 >= D3, D4 <= D5 <= D6. Fixed at six diameters, five sections.
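One way such constraints can be built into an encoding (an assumption for illustration, not necessarily how the slide's encoding works) is to store two anchor diameters plus non-negative step sizes, so any chromosome decodes to a valid shape:

```python
def decode_nozzle(genes):
    """genes = (d3, a, b, d4, c, e) with a, b, c, e >= 0 being step sizes.
    Reconstruction guarantees D1 >= D2 >= D3 and D4 <= D5 <= D6."""
    d3, a, b, d4, c, e = genes
    D3 = d3
    D2 = D3 + b
    D1 = D2 + a
    D4 = d4
    D5 = D4 + c
    D6 = D5 + e
    return [D1, D2, D3, D4, D5, D6]
```

This is the 'encode away undesirable features' idea from the direct-vs-indirect slide: mutation can never produce an invalid diameter ordering.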

A more complex encoding, with a bigger search space: slower, but with potential for innovative solutions. Z1, Z2, D1, D2, D3, …, Dsmall, …, Dn, Dn+1, …: Z1 is the number of sections before the smallest, Z2 the number of sections after the smallest, followed by the section diameters. The middle section is constrained to be the smallest; that's all. Mutations can change diameters, add sections, and delete sections.

Mandatory reading: slides for Operators (typical mutation and crossover operators for different types of encoding), Selection (various standard selection methods), and More encodings.

About CW2

Pamela Hardaker, Benjamin N. Passow and David Elizondo, "Walking State Detection from Electromyographic Signals towards the Control of Prosthetic Limbs", UKCI 2013. They got signals like this, but on her thigh just above the knee: standing, walking, running [signal figure not preserved in transcript].

Current knee-joint prostheses need manual intervention to change between standing/walking/running modes (the wearer presses a button). Can we train a neural network to automatically detect when to change, on the basis of nerve signals from the last 30 ms?

About CW2 Snapshot of Pamela's data: a table of (Time, signal, state) rows, with the state column labelled standing, walking or running [numeric values not preserved in transcript].

About CW2 The same snapshot of Pamela's data, with nine derived inputs: max signal strength in the last 30 ms, last 20 ms and last 10 ms; range of the signal in the last 30/20/10 ms; and mean of the signal in the last 30/20/10 ms. Outputs: 1 0 0, 0 1 0 or 0 0 1, for standing, walking and running respectively.

About CW2 [same data snapshot and nine inputs as the previous slide]. CW2: evolve a neural network that predicts the state.
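The nine network inputs described above might be computed like this (a sketch assuming one sample per millisecond; the real sampling rate and exact feature code are not given in the slides):

```python
def features(signal):
    """Max, range and mean of the signal over the last 30/20/10 ms windows:
    nine inputs for the neural network."""
    windows = [signal[-w:] for w in (30, 20, 10)]
    maxes = [max(w) for w in windows]
    ranges = [max(w) - min(w) for w in windows]
    means = [sum(w) / len(w) for w in windows]
    return maxes + ranges + means
```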

What you will do From me, you get working NN code that does the job already. What you will do: implement new mutation and crossover operators within my code, and test them on Pamela's data; then write a report comparing the performance of the different operators.

If time, a look at the key bits of those operator slides …

Operators for real-valued k-ary encodings Here the chromosome is a string of k real numbers, each of which may or may not have a fixed range (e.g. between −5 and 5), e.g. 0.7, 2.8, −1.9, 1.1, 3.4, −4.0, −0.1, −5.0, … All of the mutation operators for k-ary encodings, as previously described in these slides, can be used. But we need to be clear about what it means to randomly change a value. In the previous slides for k-ary mutation operators, we assumed that a change to a gene meant producing a random (new) value anywhere in the range of possible values …

Operators for real-valued k-ary encodings that’s fine … we can do that with a real encoding, but this means we are choosing the new value for a gene uniformly at random.

Mutating a real-valued gene using a uniform distribution Range of allowed values for the gene: 0–10. The new value can be anywhere in the range, with any number equally likely.

But, in real-valued encodings, we usually use Gaussian ('Normal') distributions, so that the new value is more likely than not to be close to the old value. Typically we generate a perturbation from a Gaussian distribution, like this one, and add that perturbation to the old value [figure: probability of each perturbation, centred on 0, not preserved in transcript].

Mutation in real-valued encodings Most common is to use the previously described mutation operators (e.g. single-gene, genewise) but with Gaussian perturbations rather than uniformly chosen new values.
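Genewise Gaussian mutation as described above might be sketched like this (the mutation rate, sigma, and range are illustrative parameter choices; clipping to the allowed range is one common way to handle bounds):

```python
import random

def gaussian_mutate(chrom, sigma=0.1, rate=0.2, lo=-5.0, hi=5.0, rng=random):
    """Perturb each gene with probability `rate` by a draw from N(0, sigma),
    then clip to the allowed range [lo, hi]."""
    child = []
    for g in chrom:
        if rng.random() < rate:
            g = min(hi, max(lo, g + rng.gauss(0.0, sigma)))
        child.append(g)
    return child
```

Small sigma gives local search behaviour; a larger sigma behaves more like the uniform random-reset mutation from the previous slide.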

Crossover in real-valued encodings All of the k-ary crossover operators previously described can be used. But there are other common ones that are only feasible for real-valued encodings:

Box and line crossover operators for real-valued chromosomes (figure is from this paper: ). Treat the parents like vectors. The figure assumes you are crossing over two 2-gene parents, x = (x1, x2) and y = (y1, y2). Box: the child is (x1 + u1(y1 − x1), x2 + u2(y2 − x2)), where u1 and u2 are uniform random numbers between 0 and 1. Line: the child is x + u(y − x), i.e. (x1 + u(y1 − x1), x2 + u(y2 − x2)), where u is a single uniform random number between 0 and 1.

Box and line crossover: general form. Parent 1: x1, x2, x3, …, xN. Parent 2: y1, y2, y3, …, yN. Given a parameter α (typical values are 0, 0.1 or 0.25), the general form is: child = (x1 + (u − α)(y1 − x1), x2 + (u − α)(y2 − x2), …), where u is a uniform random number between 0 and 1 + 2α (so u − α ranges from −α to 1 + α). Line crossover: α = 0; u is generated once per crossover and is the same for every gene. Extended line crossover: α > 0; u is generated once per crossover and is the same for every gene. Box crossover: α = 0; u is different for every gene. Extended box crossover: α > 0; u is different for every gene.
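The whole family of operators can be captured in one sketch (illustrative code, not from the lecture; with α = 0 and a shared u it is line crossover, per-gene u gives box crossover, and α > 0 gives the extended variants):

```python
import random

def box_line_crossover(x, y, alpha=0.0, per_gene_u=True, rng=random):
    """child_i = x_i + (u - alpha) * (y_i - x_i), u uniform in [0, 1 + 2*alpha]."""
    u_shared = rng.uniform(0.0, 1.0 + 2 * alpha)
    child = []
    for xi, yi in zip(x, y):
        u = rng.uniform(0.0, 1.0 + 2 * alpha) if per_gene_u else u_shared
        child.append(xi + (u - alpha) * (yi - xi))
    return child
```

With a shared u and α = 0, the child always lies on the line segment between the two parents, which is where the operator gets its name.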