153 Linear Programming Problem

154 which can be written in the standard form: maximise f(x) = cᵀx subject to Ax ≤ b, x ≥ 0. Assuming a feasible solution exists, it will occur at a corner of the feasible region, that is, at an intersection of constraints holding with equality. The simplex linear programming algorithm systematically searches the corners of the feasible region in order to locate the optimum.

155 Example: Consider the problem:

156 Plotting contours of f(x) and the constraints produces: [contour plot showing constraints (a), (b) and (c), the feasible region, the direction of increasing f, and the solution corner]

157 The maximum occurs at the intersection of constraints (a) and (b). Evaluating f at the other intersections (corners) of the feasible region confirms that this corner gives the maximum.

158 Solution using MATLAB Optimisation Toolbox Routine LP DEMO
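For readers without the MATLAB Optimisation Toolbox, the same corner-point idea can be sketched with SciPy's linprog. The coefficients below are illustrative placeholders, not the problem of slide 155 (which is not reproduced in this transcript):

    import numpy as np
    from scipy.optimize import linprog

    # Hypothetical 2-variable LP (illustrative coefficients only):
    #   maximise  f(x) = 3*x1 + 2*x2
    #   subject to  x1 + x2 <= 4,   x1 + 3*x2 <= 6,   x1, x2 >= 0
    c = np.array([-3.0, -2.0])              # linprog minimises, so negate c to maximise
    A_ub = np.array([[1.0, 1.0],
                     [1.0, 3.0]])
    b_ub = np.array([4.0, 6.0])

    res = linprog(c, A_ub=A_ub, b_ub=b_ub,
                  bounds=[(0, None), (0, None)], method="highs")
    print("optimal corner:", res.x)         # the optimum lies at a corner of the feasible region
    print("maximum of f :", -res.fun)

As with the simplex method, the solver returns a vertex (corner) of the feasible region as the optimum.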

159 GENETIC ALGORITHMS Refs: Goldberg, D.E.: 'Genetic Algorithms in Search, Optimization and Machine Learning' (Addison-Wesley, 1989); Michalewicz, Z.: 'Genetic Algorithms + Data Structures = Evolution Programs' (Springer-Verlag, 1992).

160 Genetic Algorithms are search algorithms based on the mechanics of natural selection and natural genetics. They start with a group of knowledge structures which are usually coded as binary strings (chromosomes). These structures are evaluated within some environment and the strength (fitness) of a structure is defined. The fitness of each chromosome is calculated and a new set of chromosomes is then formed by random selection and reproduction. Each chromosome is selected with a probability determined by its fitness and, hence, chromosomes with higher fitness values will tend to survive and those with lower fitness values will tend to become extinct.

161 The selected chromosomes then undergo certain genetic operations such as crossover, where chromosomes are paired and randomly exchange information, and mutation, where individual chromosomes are altered. The resulting chromosomes are re-evaluated and the process is repeated until no further improvement in overall fitness is achieved. In addition, there is often a mechanism to preserve the current best chromosome (elitism). "Survival of the fittest".

162 Genetic Algorithm Flow Diagram: [Initial Population and Coding → Selection ("survival of the fittest") → Mating → Crossover → Mutation → Elitism, looping back to Selection]

163 Components of a Genetic Algorithm (GA): a genetic representation; a way to create an initial population of potential solutions; an evaluation function rating solutions in terms of their "fitness"; genetic operators that alter the composition of children during reproduction; values of various parameters (population size, probabilities of applying genetic operators, etc.).
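How these components fit together can be sketched as the following Python skeleton. It is illustrative only (the notes themselves use MATLAB, and the operator names here are assumptions); the concrete operators are defined on the following slides.

    def genetic_algorithm(fitness, init_population, select, crossover, mutate,
                          n_generations=100):
        # generic GA loop: the concrete operators are supplied as functions
        population = init_population()
        best = max(population, key=fitness)            # remember the best (elitism)
        for _ in range(n_generations):
            population = select(population, fitness)   # roulette-wheel selection
            population = crossover(population)         # pair chromosomes and exchange bits
            population = mutate(population)            # flip bits with small probability
            current_best = max(population, key=fitness)
            if fitness(current_best) > fitness(best):
                best = current_best
            else:
                # elitism: replace the weakest member with the stored best
                weakest = min(range(len(population)),
                              key=lambda i: fitness(population[i]))
                population[weakest] = best
        return best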

164 Differences from Conventional Optimisation: GAs work with a coding of the parameter set, not the parameters themselves; GAs search from a population of points, not a single point; GAs use probabilistic transition rules, not deterministic rules; GAs have the capability of finding a global optimum within a set of local optima.

165 Initial Population and Coding Consider the problem: maximise f(x1, ..., xn) subject to ai ≤ xi ≤ bi, i = 1, ..., n, where, without loss of generality, we assume that f is always positive (achieved by adding a positive constant if necessary). Suppose we wish to represent xi to d decimal places; that is, each range [ai, bi] needs to be cut into (bi − ai)·10^d equal sizes. Let mi be the smallest integer such that (bi − ai)·10^d ≤ 2^mi − 1. Then xi can be coded as a binary string of length mi. To interpret the string, we use xi = ai + decimal(string)·(bi − ai)/(2^mi − 1), where decimal(string) is the integer value of the binary substring.

166 Each chromosome (population member) is represented by a binary string of length m = m1 + m2 + ... + mn, where the first m1 bits map x1 into a value from the range [a1, b1], the next group of m2 bits map x2 into a value from the range [a2, b2], etc.; the last mn bits map xn into a value from the range [an, bn]. To initialise a population, we need to decide upon the number of chromosomes (pop_size). We then initialise the bit patterns, often randomly, to provide an initial set of potential solutions.
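A small Python sketch of this coding and decoding (the function names, bounds and precision below are illustrative assumptions, not taken from the notes):

    import math
    import random

    def chromosome_length(a, b, d):
        # smallest m such that (b - a) * 10**d <= 2**m - 1
        return math.ceil(math.log2((b - a) * 10**d + 1))

    def decode(bits, a, b):
        # map a binary string onto the interval [a, b]
        m = len(bits)
        return a + int(bits, 2) * (b - a) / (2**m - 1)

    def random_chromosome(m):
        return ''.join(random.choice('01') for _ in range(m))

    # Example: represent x in [-1, 2] to d = 2 decimal places
    m = chromosome_length(-1.0, 2.0, 2)        # (2 - (-1)) * 100 = 300, so m = 9 bits
    population = [random_chromosome(m) for _ in range(10)]    # pop_size = 10
    print(m, decode(population[0], -1.0, 2.0))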

167 Selection (roulette wheel principle) We mathematically construct a 'roulette wheel' with slots sized according to fitness values. Spinning this wheel will then select a new population according to these fitness values, with the chromosomes with the highest fitness having the greatest chance of selection. The procedure is:

168 1) Calculate the fitness value eval(vi) for each chromosome vi (i = 1, ..., pop_size). 2) Find the total fitness of the population, F = eval(v1) + eval(v2) + ... + eval(v_pop_size). 3) Calculate the probability of selection pi for each chromosome vi (i = 1, ..., pop_size): pi = eval(vi)/F. 4) Calculate a cumulative probability qi for each chromosome vi (i = 1, ..., pop_size): qi = p1 + p2 + ... + pi.

169 The selection process is based on spinning the roulette wheel pop_size times; each time we select a single chromosome for the new population as follows: 1) Generate a random number r in the range [0, 1]. 2) If r < q1, select the first chromosome v1; otherwise select the i-th chromosome vi (2 ≤ i ≤ pop_size) such that q(i−1) < r ≤ qi. Note that some chromosomes may be selected more than once: the best chromosomes get more copies and the worst die off ("survival of the fittest"). All the chromosomes selected then replace the previous set to obtain a new population.
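A sketch of roulette-wheel selection in Python, following slides 168-169 (names are illustrative; fitnesses is the list of eval(vi) values):

    import random

    def roulette_selection(population, fitnesses):
        # select pop_size chromosomes with probability proportional to fitness
        total = sum(fitnesses)                       # total fitness F
        probs = [f / total for f in fitnesses]       # p_i
        cumulative, running = [], 0.0
        for p in probs:                              # q_i
            running += p
            cumulative.append(running)

        new_population = []
        for _ in range(len(population)):
            r = random.random()                      # spin the wheel
            for chrom, q in zip(population, cumulative):
                if r <= q:
                    new_population.append(chrom)
                    break
            else:
                new_population.append(population[-1])    # guard against rounding at q = 1
        return new_population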

170 Example: [roulette-wheel figure with 12 segments, the area of each segment proportional to pi, i = 1, ..., 12]

171 Crossover We choose a parameter value pc as the probability of crossover. The expected number of chromosomes to undergo the crossover operation is then pc · pop_size. We proceed as follows (for each chromosome in the new population): 1) Generate a random number r from the range [0, 1]. 2) If r < pc, then select the given chromosome for crossover, ensuring that an even number of chromosomes is selected overall. We then mate the selected chromosomes randomly:

172 For each pair of chromosomes we generate a random integer pos from the range [1, m − 1], where m is the number of bits in each chromosome. The number pos indicates the position of the crossing point: the two parent chromosomes exchange their bits after position pos and are replaced by the resulting pair of offspring (children).
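A sketch of single-point crossover in Python, as described on slides 171-172 (illustrative code, not from the notes):

    import random

    def crossover_pair(parent1, parent2, pos):
        # swap all bits after the crossing point pos
        return (parent1[:pos] + parent2[pos:],
                parent2[:pos] + parent1[pos:])

    def crossover(population, p_c):
        # select chromosomes for crossover with probability p_c, keeping an even count
        selected = [i for i in range(len(population)) if random.random() < p_c]
        if len(selected) % 2 == 1:
            selected.pop()
        random.shuffle(selected)                     # mate the selected chromosomes randomly
        new_population = list(population)
        for i, j in zip(selected[::2], selected[1::2]):
            pos = random.randint(1, len(population[i]) - 1)
            new_population[i], new_population[j] = crossover_pair(
                population[i], population[j], pos)
        return new_population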

173 Mutation We choose a parameter value pm as the probability of mutation. Mutation is performed on a bit-by-bit basis, giving the expected number of mutated bits as pm · m · pop_size. Every bit, in all chromosomes in the whole population, has an equal chance to undergo mutation, that is, to change from 0 to 1 or vice versa. The procedure is: for each chromosome in the current population, and for each bit within the chromosome: 1) Generate a random number r from the range [0, 1]. 2) If r < pm, mutate the bit.
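A sketch of bit-by-bit mutation in Python (illustrative, not from the notes):

    import random

    def mutate(population, p_m):
        # each bit is flipped (0 -> 1 or 1 -> 0) with probability p_m
        mutated = []
        for chrom in population:
            bits = [('1' if b == '0' else '0') if random.random() < p_m else b
                    for b in chrom]
            mutated.append(''.join(bits))
        return mutated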

174 Elitism It is usual to have a means for ensuring that the best value in a population is not lost in the selection process. One way is to store the best value before selection and, after selection, replace the poorest value with this stored best value.
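A sketch of the elitism mechanism of slide 174 (illustrative; fitness is a function returning eval(v)):

    def apply_elitism(population, fitness, stored_best):
        # replace the poorest chromosome with the best one stored before selection
        population = list(population)
        weakest = min(range(len(population)), key=lambda i: fitness(population[i]))
        if fitness(stored_best) > fitness(population[weakest]):
            population[weakest] = stored_best
        return population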

175 Example: [plot of the objective function f(x) to be maximised, with the global maximum marked]

176 Let us work to a precision of two decimal places; the chromosome length m must then satisfy (b − a)·10² ≤ 2^m − 1 for the chosen range [a, b]. Also let pop_size = 10, pc = 0.25, pm = 0.04. To ensure that a positive fitness value is always achieved we will work on val = f(x) + 2.

177 Consider that the initial population has been randomly selected as follows, giving also the corresponding values of x, val, the selection probabilities and the accumulated probabilities; * marks the fittest member of the population. [Population table and the decoding calculation for v1 are not reproduced in this transcript.]

178 Selection Assume 10 random numbers in the range [0, 1] have been obtained [values not reproduced]. These will select v4, v6, v8, v1, v2, v7, v9, v7, v5, v9, giving the new population [table not reproduced].

179 Note that the best chromosome v3 in the original population has not been selected and would be destroyed unless elitism is applied.

180 Crossover (pc = 0.25) Assume 10 random numbers [values not reproduced]: these will select v1, v6, v8, v9 for crossover. Now assume 2 more random numbers in the range [1, 8] are obtained [values not reproduced] to give the crossing points.

181 Mating v1 and v6, crossing over at bit 8, produces no change. Mating v8 and v9, crossing over at bit 4, produces two offspring [bit strings not reproduced], giving the new population.

182 [Population table with the bit selected for mutation highlighted; not reproduced.]

183 Mutation (pm = 0.04) Suppose a random number generator selects bit 2 of v2 and bit 8 of v9 to mutate, resulting in the new population [not reproduced]; ** marks the weakest member. Total fitness F = 30.54.

184 Elitism So far the iteration has resulted in a decrease in overall fitness (from its value for the original population down to 30.54). However, if we now apply elitism we replace v8 (the weakest) in the current population by v3 from the original population, to produce the new population [not reproduced], with total fitness F = 32.61,

185 resulting now in an increase of overall fitness (from its value for the original population up to 32.61) at the end of the iteration. The GA would now start again by computing a new roulette wheel and repeating selection, crossover, mutation and elitism, continuing for a pre-selected number of iterations.

186 Final results from a MATLAB GA program using parameters: pop_size = 30, m = 22, pc = 0.25, pm = 0.01

187 Tabulated results of x against val [table not reproduced]. The optimum val, and the x at which it occurs, are read from the table; hence, remembering that val(x) = f(x) + 2, the maximum of f(x) follows.

188 DEMO
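As a stand-in for the MATLAB demo, the pieces above can be assembled into a compact, self-contained Python sketch. The objective function, bounds and precision below are illustrative assumptions (the function used in the notes is not reproduced in this transcript); only the overall structure mirrors slides 160-185:

    import math
    import random

    a, b, d = -1.0, 2.0, 2                               # assumed bounds and precision
    f = lambda x: x * math.sin(10 * math.pi * x)         # hypothetical test function
    val = lambda x: f(x) + 2                             # shift to keep fitness positive
    pop_size, p_c, p_m, n_gen = 30, 0.25, 0.01, 100

    m = math.ceil(math.log2((b - a) * 10**d + 1))        # chromosome length
    decode = lambda bits: a + int(bits, 2) * (b - a) / (2**m - 1)
    fit = lambda bits: val(decode(bits))

    pop = [''.join(random.choice('01') for _ in range(m)) for _ in range(pop_size)]
    for _ in range(n_gen):
        best = max(pop, key=fit)                         # stored for elitism
        # selection (roulette wheel)
        F = sum(fit(c) for c in pop)
        q, running = [], 0.0
        for c in pop:
            running += fit(c) / F
            q.append(running)
        new_pop = []
        for _ in range(pop_size):
            r = random.random()
            for c, qi in zip(pop, q):
                if r <= qi:
                    new_pop.append(c)
                    break
            else:
                new_pop.append(pop[-1])
        pop = new_pop
        # crossover (single point)
        chosen = [i for i in range(pop_size) if random.random() < p_c]
        if len(chosen) % 2:
            chosen.pop()
        for i, j in zip(chosen[::2], chosen[1::2]):
            pos = random.randint(1, m - 1)
            pop[i], pop[j] = pop[i][:pos] + pop[j][pos:], pop[j][:pos] + pop[i][pos:]
        # mutation (bit by bit)
        pop = [''.join(bit if random.random() >= p_m else ('1' if bit == '0' else '0')
                       for bit in c) for c in pop]
        # elitism: restore the stored best over the current weakest
        weakest = min(range(pop_size), key=lambda i: fit(pop[i]))
        if fit(best) > fit(pop[weakest]):
            pop[weakest] = best

    x_best = decode(max(pop, key=fit))
    print('x =', round(x_best, d), ' val =', round(val(x_best), 2))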

189 ON-LINE OPTIMISATION - INTEGRATED SYSTEM OPTIMISATION AND PARAMETER ESTIMATION (ISOPE) An important application of numerical optimisation is the determination and maintenance of optimal steady-state operation of industrial processes, achieved through selection of regulatory controller set-point values. Often, the optimisation criterion is chosen in terms of maximising profit, minimising costs, achieving a desired quality of product, minimising energy usage, etc. The scheme has a two-layer hierarchical structure:

190 [Block diagram: OPTIMISATION (based on steady-state model) sends set points to REGULATORY CONTROL (e.g. PID controllers), which sends control signals (inputs) to the INDUSTRIAL PROCESS; measurements of the process outputs are fed back.] Note that the steady-state values of the outputs are determined by the controller set-points assuming, of course, that the regulatory controllers maintain stability.

191 The set points are calculated by solving an optimisation problem, usually based on the optimisation of a performance criterion (index) subject to a steady-state mathematical model of the industrial process. Note that it is not practical to adjust the set points directly using a 'trial and error' technique because of process uncertainty and non-repeatability of measurements of the outputs. Inevitably, the steady-state model will be an approximation of the real industrial process, the approximation being both in structure and parameters. We call this the model-reality difference problem.

192 ISOPE Principle ROP - Real Optimisation Problem: complex, intractable. MOP - Model-Based Optimisation Problem: simplified (e.g. linear-quadratic), tractable. Question: can we find the correct solution of ROP by iterating on MOP in an appropriate way? YES - by applying Integrated System Optimisation and Parameter Estimation (ISOPE).

193 Iterative Optimisation and Parameter Estimation In order to cope with model-reality differences, parameter estimation can be used, giving the following standard two-step approach:

194 1. Apply the current set-point values and, once transients have died away, take measurements of the real process outputs. Use these measurements to estimate the steady-state model parameters corresponding to these set-point values. This is the parameter estimation step. 2. Solve the optimisation problem of determining the extremum of the performance index subject to the steady-state model with the current parameter values. This is the optimisation step and the solution will provide new values of the controller set points. The method is iterative, applied through repetition of steps 1 and 2 until convergence is achieved.
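A minimal sketch of this standard two-step iteration in Python. The process, model, estimator and performance index below are deliberately generic placeholders (the equations of the example on slides 196-198 are not reproduced in this transcript); scipy.optimize.minimize_scalar performs the model-based optimisation step:

    from scipy.optimize import minimize_scalar

    def standard_two_step(real_process, model, estimate, optimise, c0,
                          tol=1e-6, max_iter=50):
        # alternate parameter estimation and model-based optimisation
        c = c0
        for _ in range(max_iter):
            y_star = real_process(c)        # step 1: measure real outputs at current set point
            alpha = estimate(c, y_star)     #         fit model parameters to the measurement
            c_new = optimise(alpha)         # step 2: optimise the performance index on the model
            if abs(c_new - c) < tol:
                break
            c = c_new
        return c

    # Hypothetical scalar illustration (not the notes' example):
    real = lambda c: 2.0 * c + 1.0                       # "real" process output y*(c)
    model = lambda c, alpha: c + alpha                   # simplified model y(c, alpha)
    estimate = lambda c, y_star: y_star - c              # choose alpha so the model matches y*
    J = lambda c, y: (c - 1.0)**2 + (y - 2.0)**2         # performance index (illustrative)
    optimise = lambda alpha: minimize_scalar(
        lambda c: J(c, model(c, alpha)), bounds=(-5.0, 5.0), method='bounded').x

    print(standard_two_step(real, model, estimate, optimise, c0=0.0))

In this toy case the iteration converges, but the point it converges to is not, in general, the optimum of the real process, which is the difficulty illustrated on the following slides.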

195 Standard Two-Step Approach [Block diagram: MODEL BASED OPTIMISATION produces set points c, applied through REGULATORY CONTROL to the REAL PROCESS; the measured outputs y* feed PARAMETER ESTIMATION, whose estimated model parameters are returned to the optimisation.]

196 Example [the real process, model and performance index are given by equations not reproduced in this transcript; the real solution, restated on slide 210, is c = 0.4, y* = 1.8, J* = 0.2]. Now consider the two-step approach: parameter estimation [equation not reproduced].

197 Optimisation [equation not reproduced]. Hence, at iteration k, the set point satisfies a first-order difference equation [not reproduced]. This difference equation will converge (i.e. it is stable), since the stated conditions on its coefficient hold [expressions not reproduced].

198 HENCE, THE STANDARD TWO-STEP APPROACH DOES NOT CONVERGE TO THE CORRECT SOLUTION!!! [The final (converged) solution values are not reproduced.]

199 Integrated Approach The standard two-step approach fails, in general, to converge to the correct solution because it does not properly take account of the interaction between the parameter estimation problem and the system optimisation problem. Initially, we use an equality v = c to decouple the set points used in the estimation problem from those in the optimisation problem. We then consider an equivalent integrated problem:-

200 This is clearly equivalent to the real optimisation problem ROP

201 If we also write the model-based optimisation problem as shown (by eliminating y in J(c, y)), we obtain the equivalent problem:

202 Form the Lagrangian: with associated optimality conditions: together with:-

203 Condition (1) gives rise to the modified optimisation problem, which is the same as the model-based optimisation problem with a modifier term added to the performance index; the modifier is given from conditions (2) and (3).

204 Modified Two-Step Approach [Block diagram: as for the standard two-step approach, but with an additional MODIFIER block that uses the set points, the measured outputs y* and the estimated model parameters to compute the modifier supplied to the MODEL BASED OPTIMISATION.]

205 Modified two-step algorithm The starting point of iteration k is an estimated set-point vector v_k. Step 1: parameter estimation. Apply the current set points v_k and measure the corresponding real process outputs y*_k. Also, compute the model outputs y_k = f(v_k, α) and determine the model parameters α_k such that y_k = y*_k. Step 2: modified optimisation. (i) Compute the modifier vector λ_k [expression not reproduced].

206 (ii) Solve the modified optimisation problem to produce a new estimated set-point vector c_k. (iii) Update the set points using relaxation: v_{k+1} = v_k + K(c_k − v_k), where the matrix K, usually diagonal, is chosen to regulate stability. (Note: if K = I, then v_{k+1} = c_k.) We then repeat from Step 1 until convergence is achieved.
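A sketch of the modified (ISOPE) two-step iteration for a scalar set point, continuing the generic placeholders used earlier. The modifier below is computed so that the gradient of the modified model-based objective matches the gradient of the real objective at the current set point, which is the intent of the (unreproduced) expression on slide 203; the derivatives are obtained by finite differences, one of the options listed on slide 208. Signs and details should be checked against the notes:

    from scipy.optimize import minimize_scalar

    def isope(real_process, model, estimate, J, dJ_dy, v0,
              K=0.5, h=1e-4, tol=1e-6, max_iter=200):
        v = v0
        for _ in range(max_iter):
            # Step 1: parameter estimation at the current set point
            y_star = real_process(v)
            alpha = estimate(v, y_star)
            # modifier: (model derivative - real derivative) * dJ/dy, by finite differences
            dy_model = (model(v + h, alpha) - model(v - h, alpha)) / (2 * h)
            dy_real = (real_process(v + h) - real_process(v - h)) / (2 * h)
            lam = (dy_model - dy_real) * dJ_dy(v, y_star)
            # Step 2: modified model-based optimisation of J_model(c) - lam * c
            c = minimize_scalar(lambda s: J(s, model(s, alpha)) - lam * s,
                                bounds=(-5.0, 5.0), method='bounded').x
            v_new = v + K * (c - v)              # relaxation update with gain K
            if abs(v_new - v) < tol:
                break
            v = v_new
        return v

With the placeholder process, model, estimator and performance index from the previous sketch, isope(real, model, estimate, J, lambda c, y: 2 * (y - 2.0), v0=0.0) converges to the optimum of the real process rather than to the model-based fixed point.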

207 Acquisition of derivatives The computation of the modifier requires the derivatives of the real process outputs (and of the model outputs) with respect to the set points.

208 Some methods for obtaining the real-process derivatives: (i) applying perturbations to the set points and computing the derivatives by finite differences; (ii) using a dynamic model to estimate the derivatives (recommended method); (iii) estimating the derivative matrix using Broyden's method (recommended method).
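A sketch of option (iii), Broyden's rank-one update for building up an estimate of the derivative matrix of the real process outputs with respect to the set points from successive operating points (illustrative Python, not from the notes):

    import numpy as np

    def broyden_update(D, dc, dy):
        # rank-one Broyden update of D, the current estimate of dy*/dc
        # dc: change in set points, dy: corresponding change in measured outputs
        dc = np.asarray(dc, dtype=float).reshape(-1, 1)
        dy = np.asarray(dy, dtype=float).reshape(-1, 1)
        denom = float(dc.T @ dc)
        if denom < 1e-12:
            return D                 # set points barely changed; keep the old estimate
        return D + (dy - D @ dc) @ dc.T / denom

    # usage, between two successive iterations:
    #   D = broyden_update(D, v_new - v_old, y_star_new - y_star_old)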

209 Note: digital filtering can be used to good effect to smooth the computed values of the modifier λ. Note: when λ = 0 we arrive back at the standard two-step approach. From the expression for λ we see that the standard two-step approach will only achieve the correct solution when the model structure is chosen such that the model output derivatives match those of the real process, a condition rarely achieved in practice.

210 Example: Consider the same example as used previously, where the real solution is c = 0.4, y* = 1.8, J* = 0.2. Parameter estimation: this step is unchanged from the standard two-step approach. Hence the modifier can be computed [expressions not reproduced].

211 modified optimisation

212 At iteration k (with K = 1) the modified optimisation yields a first-order difference equation for the set point [not reproduced]. IF this difference equation converges, the result will be:

213 which is the correct solution. However, it does not converge, since the eigenvalue of the difference equation lies outside the unit circle. Hence, it is necessary to apply relaxation in the algorithm to produce the iterative scheme v_{k+1} = v_k + g(c_k − v_k), where g is a gain parameter (g > 0).

214 Then the relaxed difference equation will converge provided g lies within the stated range [condition not reproduced]; hence, typically, g = 0.4 is used. The iteration then converges to c = 0.4, y* = 1.8 and J* = 0.2, i.e. THE CORRECT REAL SOLUTION IS OBTAINED.

215 ISOPE (THE MODIFIED TWO-STEP APPROACH) ACHIEVES THE CORRECT STEADY-STATE REAL PROCESS OPTIMUM IN SPITE OF MODEL-REALITY DIFFERENCES

216

217 [Plot of the iterations with the final solution marked; not reproduced.]

218 When g = 0.4, convergence is achieved in a single iteration. This is because the eigenvalue of the relaxed difference equation is then zero.

219 Example:

220

221 DEMO