Linear Programming Problem
The linear programming problem can be written in a standard form. Assuming a feasible solution exists, it will occur at a corner of the feasible region, that is, at an intersection of equality constraints. The simplex linear programming algorithm systematically searches the corners of the feasible region in order to locate the optimum.
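The slide's worked example is not reproduced here, but the corner-search idea can be sketched on a small hypothetical problem (the objective and constraints below are invented for illustration, not taken from the slides): enumerate every intersection of constraint boundaries, discard the infeasible ones, and pick the corner with the largest objective value.

```python
from itertools import combinations

# Hypothetical LP (illustrative only): maximise f = 3x + 2y
# subject to  x + y <= 4,  x <= 3,  y <= 3,  x >= 0,  y >= 0.
# Each constraint is stored as (a, b, c) meaning a*x + b*y <= c.
constraints = [(1, 1, 4), (1, 0, 3), (0, 1, 3), (-1, 0, 0), (0, -1, 0)]

def intersect(c1, c2):
    """Intersection of the two boundary lines a*x + b*y = c (None if parallel)."""
    (a1, b1, r1), (a2, b2, r2) = c1, c2
    det = a1 * b2 - a2 * b1
    if abs(det) < 1e-12:
        return None
    return ((r1 * b2 - r2 * b1) / det, (a1 * r2 - a2 * r1) / det)

def feasible(p):
    """A point is feasible if it satisfies every inequality constraint."""
    return all(a * p[0] + b * p[1] <= c + 1e-9 for a, b, c in constraints)

# Evaluate f at every feasible corner; the optimum must be one of them.
corners = [p for c1, c2 in combinations(constraints, 2)
           if (p := intersect(c1, c2)) is not None and feasible(p)]
best = max(corners, key=lambda p: 3 * p[0] + 2 * p[1])
```

The simplex algorithm is far more efficient than this exhaustive enumeration, but both exploit the same fact: the optimum lies at a corner.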
Example: Consider the problem:
Plotting contours of f(x) and the constraints produces a figure showing the feasible region bounded by constraints (a), (b) and (c), the direction of increasing f, and the solution.
The maximum occurs at the intersection of constraints (a) and (b). The values of f at the other intersections (corners) of the feasible region are smaller.
Solution using the MATLAB Optimisation Toolbox routine lp (DEMO).
GENETIC ALGORITHMS
References:
Goldberg, D.E.: 'Genetic Algorithms in Search, Optimization and Machine Learning' (Addison-Wesley, 1989)
Michalewicz, Z.: 'Genetic Algorithms + Data Structures = Evolution Programs' (Springer-Verlag, 1992)
Genetic Algorithms are search algorithms based on the mechanics of natural selection and natural genetics. They start with a group of knowledge structures, usually coded into binary strings (chromosomes). These structures are evaluated within some environment and the strength (fitness) of a structure is defined. The fitness of each chromosome is calculated, and a new set of chromosomes is then formulated by random selection and reproduction. Each chromosome is selected with a probability determined by its fitness; hence, chromosomes with higher fitness values will tend to survive and those with lower fitness values will tend to become extinct.
The selected chromosomes then undergo certain genetic operations such as crossover, where chromosomes are paired and randomly exchange information, and mutation, where individual chromosomes are altered. The resulting chromosomes are re-evaluated and the process is repeated until no further improvement in overall fitness is achieved. In addition, there is often a mechanism to preserve the current best chromosome (elitism). "Survival of the fittest".
Genetic Algorithm Flow Diagram: Initial Population and Coding → Selection ("survival of the fittest", with Elitism) → Mating → Crossover → Mutation → (repeat).
Components of a Genetic Algorithm (GA):
- a genetic representation
- a way to create an initial population of potential solutions
- an evaluation function rating solutions in terms of their "fitness"
- genetic operators that alter the composition of children during reproduction
- values of various parameters (population size, probabilities of applying the genetic operators, etc.)
Differences from Conventional Optimisation:
- GAs work with a coding of the parameter set, not the parameters themselves
- GAs search from a population of points, not a single point
- GAs use probabilistic transition rules, not deterministic rules
- GAs have the capability of finding a global optimum within a set of local optima
Initial Population and Coding
Consider the problem of maximising f(x_1,...,x_n) subject to bounds a_i <= x_i <= b_i, where, without loss of generality, we assume that f is always positive (achieved by adding a positive constant if necessary). Suppose we wish to represent x_i to d decimal places; that is, each range needs to be cut into (b_i - a_i)*10^d equal sizes. Let m_i be the smallest integer such that (b_i - a_i)*10^d <= 2^(m_i) - 1. Then x_i can be coded as a binary string of length m_i. To interpret the string, we use: x_i = a_i + decimal(string)*(b_i - a_i)/(2^(m_i) - 1).
Each chromosome (population member) is represented by a binary string of length m = m_1 + m_2 + ... + m_n, where the first m_1 bits map x_1 into a value from the range [a_1, b_1], the next group of m_2 bits map x_2 into a value from the range [a_2, b_2], etc.; the last m_n bits map x_n into a value from the range [a_n, b_n]. To initialise a population, we need to decide upon the number of chromosomes (pop_size). We then initialise the bit patterns, often randomly, to provide an initial set of potential solutions.
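The coding and decoding rules above can be sketched as follows (the range [-1, 2] to 6 decimal places is an illustrative choice; note that it reproduces the 22-bit chromosome length used in the MATLAB run later):

```python
import random

def bits_needed(a, b, d):
    """Smallest m with (b - a) * 10**d <= 2**m - 1."""
    m = 1
    while (b - a) * 10**d > 2**m - 1:
        m += 1
    return m

def decode(bits, a, b):
    """Map a binary string onto [a, b]: x = a + decimal(bits)*(b - a)/(2**m - 1)."""
    m = len(bits)
    return a + int(bits, 2) * (b - a) / (2**m - 1)

# Illustrative range [-1, 2] represented to 6 decimal places:
m = bits_needed(-1.0, 2.0, 6)                              # gives m = 22
chromosome = ''.join(random.choice('01') for _ in range(m))  # random initialisation
x = decode(chromosome, -1.0, 2.0)                           # always lies in [-1, 2]
```

The all-zeros string decodes to a and the all-ones string to b, so the full range is covered.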
Selection (roulette wheel principle)
We mathematically construct a 'roulette wheel' with slots sized according to fitness values. Spinning this wheel will then select a new population according to these fitness values, with the chromosomes of highest fitness having the greatest chance of selection. The procedure is:
1) Calculate the fitness value eval(v_i) for each chromosome v_i (i = 1,..., pop_size).
2) Find the total fitness of the population: F = sum of eval(v_i), i = 1,..., pop_size.
3) Calculate the probability of selection p_i = eval(v_i)/F for each chromosome v_i (i = 1,..., pop_size).
4) Calculate the cumulative probability q_i = p_1 + ... + p_i for each chromosome v_i (i = 1,..., pop_size).
The selection process is based on spinning the roulette wheel pop_size times; each time we select a single chromosome for the new population as follows:
1) Generate a random number r in the range [0,1].
2) If r < q_1, select the first chromosome v_1; otherwise select the i-th chromosome v_i (2 <= i <= pop_size) such that q_(i-1) < r <= q_i.
Note that some chromosomes may be selected more than once: the best chromosomes get more copies and the worst die off. "Survival of the fittest". All the chromosomes selected then replace the previous set to obtain a new population.
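The two steps above can be sketched directly; a binary search over the cumulative probabilities q_i implements the rule q_(i-1) < r <= q_i (the fitness values in the example are invented for illustration):

```python
import random
from bisect import bisect_left

def roulette_select(fitness, rng=random.random):
    """Spin the roulette wheel len(fitness) times; return the indices selected."""
    total = sum(fitness)
    # cumulative probabilities q_1, ..., q_pop_size
    q, running = [], 0.0
    for f in fitness:
        running += f / total
        q.append(running)
    # select chromosome i such that q_(i-1) < r <= q_i
    # (min() guards against r landing just past q[-1] through rounding)
    return [min(bisect_left(q, rng()), len(fitness) - 1) for _ in fitness]

# Example: the fitter a chromosome, the more copies it tends to receive.
fitness = [1.0, 4.0, 2.0, 3.0]
survivors = roulette_select(fitness)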
Example: a roulette wheel with 12 segments, the area of segment i being proportional to p_i, i = 1,...,12.
Crossover
We choose a parameter value p_c as the probability of crossover. The expected number of chromosomes to undergo the crossover operation is then p_c * pop_size. We proceed as follows, for each chromosome in the new population:
1) Generate a random number r from the range [0,1].
2) If r < p_c, select the given chromosome for crossover,
ensuring that an even number of chromosomes is selected. We then mate the selected chromosomes randomly.
For each pair of chromosomes we generate a random integer pos from the range [1, m-1], where m is the number of bits in each chromosome. The number pos indicates the position of the crossing point: the two chromosomes exchange all bits after position pos and are replaced by the resulting pair of their offspring (children).
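Single-point crossover as described above is a few lines of code (the bit strings in the example are arbitrary illustrations):

```python
import random

def crossover(parent1, parent2, rng=random.randint):
    """Single-point crossover: swap all bits after a random position pos in [1, m-1]."""
    m = len(parent1)
    pos = rng(1, m - 1)
    child1 = parent1[:pos] + parent2[pos:]
    child2 = parent2[:pos] + parent1[pos:]
    return child1, child2

# Example with the crossing point fixed at pos = 4 for clarity:
c1, c2 = crossover('11111111', '00000000', rng=lambda a, b: 4)
# c1 = '11110000', c2 = '00001111'
```

Note that crossover only recombines existing bits; no bit value is created or lost across the pair.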
Mutation
We choose a parameter value p_m as the probability of mutation. Mutation is performed on a bit-by-bit basis, giving the expected number of mutated bits as p_m * m * pop_size. Every bit, in all chromosomes in the whole population, has an equal chance to undergo mutation, that is, to change from a 0 to a 1 or vice versa. The procedure is: for each chromosome in the current population, and for each bit within the chromosome:
1) Generate a random number r from the range [0,1].
2) If r < p_m, mutate the bit.
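Bit-by-bit mutation can be sketched as:

```python
import random

def mutate(chromosome, p_m, rng=random.random):
    """Flip each bit independently with probability p_m."""
    return ''.join(
        ('1' if bit == '0' else '0') if rng() < p_m else bit
        for bit in chromosome
    )

# With p_m = 1 every bit flips; with p_m = 0 nothing changes.
flipped = mutate('1010', 1.0)     # '0101'
unchanged = mutate('1010', 0.0)   # '1010'
```

Mutation is what lets the GA reach bit patterns that crossover alone could never assemble from the current population.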
Elitism
It is usual to have a means of ensuring that the best value in a population is not lost in the selection process. One way is to store the best value before selection and, after selection, replace the poorest value with this stored best value.
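The replace-the-poorest form of elitism described above can be sketched as (the population and fitness lists in the example are illustrative):

```python
def apply_elitism(old_pop, old_fit, new_pop, new_fit):
    """Replace the weakest member of the new population with the previous best."""
    best_idx = max(range(len(old_fit)), key=old_fit.__getitem__)
    worst_idx = min(range(len(new_fit)), key=new_fit.__getitem__)
    new_pop, new_fit = list(new_pop), list(new_fit)
    new_pop[worst_idx] = old_pop[best_idx]
    new_fit[worst_idx] = old_fit[best_idx]
    return new_pop, new_fit

# The old best ('a', fitness 5.0) displaces the new worst ('c', fitness 2.0):
pop, fit = apply_elitism(['a', 'b'], [5.0, 1.0], ['c', 'd'], [2.0, 3.0])
```

This guarantees the best fitness seen so far never decreases from one generation to the next.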
Example: a one-dimensional function f(x) with several local maxima and a global maximum (figure).
Let us work to a precision of two decimal places; the chromosome length m must then satisfy (b - a)*10^2 <= 2^m - 1. Also let pop_size = 10, p_c = 0.25, p_m = 0.04. To ensure that a positive fitness value is always achieved we will work with val = f(x) + 2.
Suppose the initial population has been randomly selected as follows, giving also the corresponding values of x, val, the probabilities p_i and the cumulative probabilities q_i (table; * marks the fittest member of the population).
Selection
Assume 10 random numbers in the range [0,1] have been obtained:
0.47 0.61 0.72 0.03 0.18 0.69 0.83 0.68 0.54 0.83
These select, respectively:
v_4 v_6 v_8 v_1 v_2 v_7 v_9 v_7 v_5 v_9
giving the new population:
Note that the best chromosome, v_3 in the original population, has not been selected and would be destroyed unless elitism is applied.
Crossover (p_c = 0.25)
Assume the 10 random numbers (one per chromosome v_1,...,v_10):
0.07 0.94 0.57 0.36 0.31 0.14 0.60 0.07 0.07 1.00
These select v_1, v_6, v_8 and v_9 for crossover (r < 0.25). Now assume 2 more random numbers in the range [1,8] are obtained:
Mating v_1 and v_6 with crossover at bit 8 produces no change. Mating v_8 and v_9 with crossover at bit 4 produces two new chromosomes, giving the new population:
(The new population is shown with the bits selected for mutation marked.)
Mutation (p_m = 0.04)
Suppose a random number generator selects bit 2 of v_2 and bit 8 of v_9 to mutate, resulting in a population with total fitness F = 30.54 (** marks the weakest member).
Elitism
So far the iteration has resulted in a decrease in overall fitness (from 32.08 to 30.54). However, if we now apply elitism we replace v_8 in the current population by v_3 from the original population, to produce a population with total fitness F = 32.61.
This results in an increase in overall fitness (from 32.08 to 32.61) at the end of the iteration. The GA would now start again by computing a new roulette wheel and repeating selection, crossover, mutation and elitism, for a pre-selected number of iterations.
Final results from a MATLAB GA program using parameters: pop_size = 30, m = 22, p_c = 0.25, p_m = 0.01.
Tabulated results:
x       val
1.8500  4.8500
1.8503  4.8502
1.8500  4.8500
1.8496  4.8495
1.8500  4.8500
1.8500  4.8500
0.3503  2.6497
1.8504  4.8502
1.8269  4.3663
1.8504  4.8502
1.8503  4.8502
1.8500  4.8500
1.8265  4.3520
1.8503  4.8502
1.8386  4.7222
1.8500  4.8500
1.8496  4.8495
1.8500  4.8500
1.8503  4.8502
1.8504  4.8502
1.8500  4.8500
1.8500  4.8500
1.8503  4.8502
1.8500  4.8500
1.8496  4.8495
1.8496  4.8495
1.8503  4.8502
1.8500  4.8500
1.8500  4.8500
1.8968  3.1880
The optimum val = 4.8502 at x = 1.8504. Hence, remembering that val(x) = f(x) + 2, the maximum of f is 2.8502 at x = 1.8504.
DEMO
ON-LINE OPTIMISATION - INTEGRATED SYSTEM OPTIMISATION AND PARAMETER ESTIMATION (ISOPE)
An important application of numerical optimisation is the determination and maintenance of optimal steady-state operation of industrial processes, achieved through selection of regulatory controller set-point values. Often, the optimisation criterion is chosen in terms of maximising profit, minimising costs, achieving a desired quality of product, minimising energy usage, etc. The scheme is of a two-layer hierarchical structure:
(Block diagram) OPTIMISATION (based on a steady-state model) sends set points to the REGULATORY CONTROL layer (e.g. PID controllers), whose control signals drive the inputs of the INDUSTRIAL PROCESS; measurements of the outputs are fed back. Note that the steady-state values of the outputs are determined by the controller set points assuming, of course, that the regulatory controllers maintain stability.
The set points are calculated by solving an optimisation problem, usually based on the optimisation of a performance criterion (index) subject to a steady-state mathematical model of the industrial process. Note that it is not practical to adjust the set points directly using a 'trial and error' technique because of process uncertainty and non-repeatability of measurements of the outputs. Inevitably, the steady-state model will be an approximation of the real industrial process, the approximation being both in structure and in parameters. We call this the model-reality difference problem.
ISOPE Principle
ROP - Real Optimisation Problem: complex, intractable.
MOP - Model-Based Optimisation Problem: simplified (e.g. linear-quadratic), tractable.
Can we find the correct solution of ROP by iterating on MOP in an appropriate way? YES - by applying Integrated System Optimisation And Parameter Estimation (ISOPE).
Iterative Optimisation and Parameter Estimation
In order to cope with model-reality differences, parameter estimation can be used, giving the following standard two-step approach:
1) Apply the current set point values and, once transients have died away, take measurements of the real process outputs. Use these measurements to estimate the steady-state model parameters corresponding to these set point values. This is the parameter estimation step.
2) Solve the optimisation problem of determining the extremum of the performance index subject to the steady-state model with the current parameter values. This is the optimisation step, and its solution provides new values of the controller set points.
The method is iterative, applied through repetition of steps 1 and 2 until convergence is achieved.
Standard Two-Step Approach (block diagram): MODEL-BASED OPTIMISATION supplies set points c to the REGULATORY CONTROL of the REAL PROCESS; the measured outputs y* are used by PARAMETER ESTIMATION to match the model outputs y, and the estimated parameters are returned to the optimisation.
Example
Consider a simple example whose real solution is known, and apply the two-step approach.
Parameter estimation:
Optimisation:
Hence, at iteration k we obtain a first-order difference equation for the set point. This difference equation will converge (i.e. is stable), since its eigenvalue lies inside the unit circle.
HENCE, THE STANDARD TWO-STEP APPROACH DOES NOT CONVERGE TO THE CORRECT SOLUTION! (Figure: final solution.)
Integrated Approach
The standard two-step approach fails, in general, to converge to the correct solution because it does not properly take account of the interaction between the parameter estimation problem and the system optimisation problem. Initially, we use an equality v = c to decouple the set points used in the estimation problem from those used in the optimisation problem. We then consider an equivalent integrated problem:
This is clearly equivalent to the real optimisation problem ROP.
If we also write the model-based optimisation problem (by eliminating y in J(c, y)) as shown, we obtain the equivalent problem:
Form the Lagrangian, with associated optimality conditions (1)-(3):
Condition (1) gives rise to the modified optimisation problem, and the modifier is obtained from conditions (2) and (3):
Modified Two-Step Approach (block diagram): as in the standard scheme, but a MODIFIER block uses the estimated parameters and the measured outputs y* to modify the model-based optimisation problem that produces the set points c.
Modified two-step algorithm
The starting point of iteration k is an estimated set point vector v_k.
Step 1: parameter estimation. Apply the current set points v_k and measure the corresponding real process outputs y*_k. Also, compute the model outputs y_k and determine the model parameters such that y_k = y*_k.
Step 2: modified optimisation.
(i) Compute the modifier vector.
(ii) Solve the modified optimisation problem to produce a new estimated set point vector c_k.
(iii) Update the set points by relaxation: v_(k+1) = v_k + K(c_k - v_k), where the matrix K, usually diagonal, is chosen to regulate stability. (Note: if K = I, then v_(k+1) = c_k.)
We then repeat from Step 1 until convergence is achieved.
Acquisition of derivatives
The computation of the modifier requires the derivatives of the real process outputs with respect to the set points:
Some methods for obtaining the derivatives:
(i) applying perturbations to the set points and computing the derivatives by finite differences;
(ii) using a dynamic model to estimate the derivatives (recommended method);
(iii) estimating the derivative matrix using Broyden's method (recommended method).
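Method (i) can be sketched as follows; the steady-state process map `proc` below is a hypothetical stand-in for the real plant (in practice each perturbed evaluation means applying perturbed set points and waiting for a new steady state):

```python
def output_derivatives(process, v, delta=1e-4):
    """Estimate d y*_i / d v_j by perturbing each set point (central differences).
    `process` maps a list of set points to a list of steady-state outputs."""
    n = len(v)
    base = process(v)
    deriv = []  # deriv[i][j] = d y_i / d v_j
    for i in range(len(base)):
        row = []
        for j in range(n):
            up = list(v); up[j] += delta
            down = list(v); down[j] -= delta
            row.append((process(up)[i] - process(down)[i]) / (2 * delta))
        deriv.append(row)
    return deriv

# Hypothetical steady-state process map (for illustration only):
proc = lambda v: [v[0] ** 2 + 2 * v[1]]
D = output_derivatives(proc, [1.0, 3.0])   # approx [[2.0, 2.0]]
```

Each derivative costs two extra steady-state experiments per set point, which is why the dynamic-model and Broyden alternatives are recommended for real plants.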
Note: digital filtering can be used to good effect to smooth the computed values of the derivatives.
Note: when the modifier is zero we arrive back at the standard two-step approach. From the expression for the modifier we see that the standard two-step approach will only achieve the correct solution when the model structure is chosen such that the model output derivatives match those of the real process, a condition rarely achieved in practice.
Example: Consider the same example as used previously, where the real solution is c = 0.4, y* = 1.8, J* = 0.2.
Parameter estimation: this is unchanged.
Modifier:
Modified optimisation:
At iteration k (with K = 1) we again obtain a first-order difference equation. If this difference equation converges, the result will be:
which is the correct solution. However, it does not converge, since the eigenvalue is outside the unit circle. Hence, it is necessary to apply relaxation in the algorithm to produce the iterative scheme v_(k+1) = v_k + g(c_k - v_k), where g is a gain parameter (g > 0).
This will converge provided |1 - 2.5g| < 1, i.e. 0 < g < 0.8 (hence, typically, use g = 0.4). Then v converges to 0.4, with y* = 1.8 and J* = 0.2, i.e. THE CORRECT REAL SOLUTION IS OBTAINED.
ISOPE (THE MODIFIED TWO-STEP APPROACH) ACHIEVES THE CORRECT STEADY-STATE REAL PROCESS OPTIMUM IN SPITE OF MODEL-REALITY DIFFERENCES.
(Figure: final solution.)
When g = 0.4, convergence is achieved in a single iteration. This is because the eigenvalue is zero: |1 - 2.5g| = 0.
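The relaxed iteration can be checked numerically. The scalar recursion below is a reconstruction from the eigenvalue |1 - 2.5g| quoted above, with the correct set point 0.4 taken from the example; it is an illustrative sketch of the convergence behaviour, not the full ISOPE computation:

```python
def iterate_setpoint(v0, g, c_star=0.4, iterations=20):
    """Iterate v_(k+1) = (1 - 2.5*g)*v_k + 2.5*g*c_star (fixed point v = c_star)."""
    v = v0
    history = [v]
    for _ in range(iterations):
        v = (1 - 2.5 * g) * v + 2.5 * g * c_star
        history.append(v)
    return history

# With g = 0.4 the eigenvalue 1 - 2.5*g is zero, so one iteration reaches 0.4:
h = iterate_setpoint(v0=0.0, g=0.4)
```

Any g in (0, 0.8) keeps the eigenvalue inside the unit circle, so the recursion still converges to 0.4, only more slowly.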
Example:
DEMO