Numerical Optimization

General framework:
- objective function $f(x_1,\dots,x_n)$ to be minimized or maximized
- constraints: $g_i(x_1,\dots,x_n) \le 0$ or $g_i(x_1,\dots,x_n) = 0$ ($i=1,\dots,m$)
- $x_i \ge 0$, $i=1,\dots,n$ (optional)

Approaches:

Classical: differentiate the function and find the points with a gradient of 0:
- problem: f has to be differentiable
- does not cope with constraints
- the equation systems that have to be solved are frequently "nasty" (iterative algorithms such as the Newton-Raphson method can be used)
- Lagrange multipliers are employed to cope with constraints

If $g_1,\dots,g_m$ and f are linear, linear programming can be used. If at least one function is non-linear, general analytical solutions no longer exist, and iterative algorithms have to be used.
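A minimal sketch of the classical approach on an unconstrained example, using SymPy; the objective $f(x,y) = (x-1)^2 + (y+2)^2$ is an illustrative assumption, not a function from the slides:

    # Differentiate f and solve grad f = 0 symbolically.
    import sympy as sp

    x, y = sp.symbols('x y')
    f = (x - 1)**2 + (y + 2)**2

    grad = [sp.diff(f, v) for v in (x, y)]    # partial derivatives of f
    print(sp.solve(grad, (x, y), dict=True))  # [{x: 1, y: -2}] -- the minimum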
Popular Numerical Methods

Newton-Raphson's method, to solve $f(x) = 0$: f(x) is approximated by its tangent at the point $(x_n, f(x_n))$, and $x_{n+1}$ is taken as the abscissa of the point of intersection of the tangent with the x-axis; that is, $x_{n+1}$ is determined using
$f(x_n) + (x_{n+1} - x_n)\,f'(x_n) = 0$, i.e. $x_{n+1} = x_n + h_n$ with $h_n = -f(x_n)/f'(x_n)$.
The iterations are broken off when $|h_n|$ is less than the largest tolerable error.

The Simplex method is used to optimize a linear function subject to a set of linear constraints (linear programming). Quadratic programming [31] optimizes a quadratic function with linear constraints.

Other iteration methods (similar to Newton's method) rely on $x_{v+1} = x_v + \lambda_v d_v$, where $d_v$ is a direction and $\lambda_v$ denotes the "jump" performed in that direction. They use quadratic/linear approximations of the optimization problem and solve the optimization problem in the approximated space.

Other popular optimization methods: the penalty trajectory method [220], the sequential quadratic penalty function method, and the SOLVER method [80].
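The Newton-Raphson iteration above, with its $|h_n|$ stopping rule, as a minimal Python sketch; the test function $x^2 - 2$ and the names (newton_raphson, tol) are illustrative assumptions:

    def newton_raphson(f, f_prime, x0, tol=1e-10, max_iter=100):
        x = x0
        for _ in range(max_iter):
            h = -f(x) / f_prime(x)   # h_n = -f(x_n) / f'(x_n)
            x += h                   # x_{n+1} = x_n + h_n
            if abs(h) < tol:         # break off once |h_n| is tolerable
                return x
        return x

    print(newton_raphson(lambda x: x**2 - 2, lambda x: 2*x, x0=1.0))  # ~1.414214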
Numerical Optimization with GAs

Coding alternatives include:
- binary coding
- Gray codes
- real-valued GAs
Usually, lower and upper bounds for the variables have to be provided as part of the optimization problem.

Typical operators include:
- standard mutation and crossover
- non-uniform and boundary mutation (sketched below)
- arithmetical, simple, and heuristic crossover

Constraints are a major challenge for function optimization. Ideas to cope with the problem include:
- elimination of equations through variable reduction
- dynamic values in a solution: the variables are no longer independent of each other; rather, their contents are constrained by the contents of the other variables of the solution. In some cases a bound for possible changes can be computed (e.g., for convex search spaces (GENOCOP)).
- penalty functions
- repair algorithms (GENETIC2)
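A sketch of the two real-valued mutation operators named above, in the style of Michalewicz's operators; the parameter names (bounds lo/hi, shape parameter b) are illustrative assumptions:

    import random

    def boundary_mutation(x, lo, hi):
        """Set one randomly chosen variable to its lower or upper bound."""
        y = list(x)
        k = random.randrange(len(y))
        y[k] = random.choice((lo[k], hi[k]))
        return y

    def non_uniform_mutation(x, lo, hi, t, T, b=5.0):
        """Perturb one variable; the step shrinks as generation t approaches T."""
        def delta(span):    # fraction of span, tending to 0 as t -> T
            return span * (1.0 - random.random() ** ((1.0 - t / T) ** b))
        y = list(x)
        k = random.randrange(len(y))
        if random.random() < 0.5:
            y[k] += delta(hi[k] - y[k])   # move toward the upper bound
        else:
            y[k] -= delta(y[k] - lo[k])   # move toward the lower bound
        return y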
Penalty Function Approach

Problem: $f(x_1,\dots,x_n)$ has to be maximized subject to constraints $g_i(x_1,\dots,x_n) \le 0$ or $= 0$ ($i=1,\dots,m$). Define a new function
$$f'(x_1,\dots,x_n) = f(x_1,\dots,x_n) + \sum_{i=1}^{m} w_i\, h_i(x_1,\dots,x_n)$$
with:
- for constraints $g_i(x_1,\dots,x_n) = 0$: $h_i(x_1,\dots,x_n) := g_i(x_1,\dots,x_n)$
- for constraints $g_i(x_1,\dots,x_n) \le 0$: $h_i(x_1,\dots,x_n) := 0$ if $g_i(x_1,\dots,x_n) < 0$, else $g_i(x_1,\dots,x_n)$

Remarks on the penalty function approach:
- It needs a lot of fine-tuning; especially the selection of the weights $w_i$ is very critical for the performance of the optimizer.
- Frequently, the GA gets deceived into exploring only the space of illegal solutions, especially if penalties are too low. On the other hand, premature convergence can arise when the GA terminates in a local optimum that is surrounded by illegal solutions: the GA cannot escape the local optimum because the penalty for traversing illegal solutions is too high.
- A special approach, called the sequential quadratic penalty function method [9,39], has gained significant popularity (discussed below).
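A minimal sketch of the penalized fitness $f'$ above; the builder function, the example objective, and the weight values are illustrative assumptions:

    def penalized(f, eq, ineq, w_eq, w_ineq):
        """f'(x) = f(x) + sum_i w_i * h_i(x), as defined above.
        eq:   constraints g_i(x) =  0  -> h_i(x) = g_i(x)
        ineq: constraints g_i(x) <= 0  -> h_i(x) = max(0, g_i(x))
        For maximization, negative weights make violations lower the fitness."""
        def f_prime(x):
            p = sum(w * g(x) for w, g in zip(w_eq, eq))
            p += sum(w * max(0.0, g(x)) for w, g in zip(w_ineq, ineq))
            return f(x) + p
        return f_prime

    # Maximize f(x,y) = x + y subject to x^2 + y^2 - 28 <= 0:
    fitness = penalized(lambda p: p[0] + p[1],
                        eq=[], ineq=[lambda p: p[0]**2 + p[1]**2 - 28],
                        w_eq=[], w_ineq=[-100.0])
    print(fitness((1.0, 2.0)), fitness((10.0, 10.0)))  # legal vs. heavily penalized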
Sequential Quadratic Penalty Function Method

Idea: instead of optimizing the constrained function f(x), optimize
$$F(x,r) = f(x) + \frac{1}{2r}\left(h_1(x)^2 + \dots + h_m(x)^2\right)$$
It has been shown by Fiacco et al. [189] that the solutions of optimizing the constrained function f and the solutions of optimizing F are identical for $r \to 0$. However, it turned out to be difficult to minimize F in the limit with Newton's method (see Murray [220]). More recently, Broyden and Attia [39,40] found a more efficient method; GENOCOP II, which is discussed in our textbook, employs this method.
Basic Loop of the SQPF Method

1) Differentiate F(x,r), yielding F'(x,r).
2) Choose a starting vector $x_0$ and a starting value $r_0 > 0$.
3) $r' := r_0$; $x' := x_0$;
   REPEAT
      solve $F'(x,r') = G(x) = 0$ for starting vector $x'$, yielding vector $x_1$;
      $x' := x_1$;
      decrease $r'$ by dividing it by a constant $\tau > 1$
   UNTIL $r'$ is sufficiently close to 0;
   RETURN($x'$);
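A sketch of this loop, assuming the equality-constraint functions $h_i$ are given; it uses SciPy's general minimizer in place of an explicit Newton solve of $F'(x,r) = 0$, and the names (sqpf, tau, r_min) are illustrative assumptions:

    import numpy as np
    from scipy.optimize import minimize

    def sqpf(f, hs, x0, r0=1.0, tau=10.0, r_min=1e-6):
        """Minimize f subject to h_i(x) = 0 via F(x,r) = f(x) + (1/2r) sum h_i(x)^2."""
        def F(x, r):
            return f(x) + (1.0 / (2.0 * r)) * sum(h(x)**2 for h in hs)
        x, r = np.asarray(x0, dtype=float), r0
        while r > r_min:                      # UNTIL r' is sufficiently close to 0
            x = minimize(F, x, args=(r,)).x   # solve, warm-started from the previous x'
            r /= tau                          # decrease r' by division by tau > 1
        return x

    # Example: minimize (x-2)^2 + (y-2)^2 subject to x + y - 2 = 0; optimum is (1, 1).
    print(sqpf(lambda p: (p[0] - 2)**2 + (p[1] - 2)**2,
               [lambda p: p[0] + p[1] - 2.0], x0=[0.0, 0.0]))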
Various Numerical Crossover Operators

Let $p_1 = (x_1, y_1)$ and $p_2 = (x_2, y_2)$; crossover operators crossover($p_1$,$p_2$) include:
- simple crossover: exchange the coordinates after the crossover point, yielding $(x_1,\, a\,y_2 + (1-a)\,y_1)$ and $(x_2,\, a\,y_1 + (1-a)\,y_2)$, each with the maximal $a \in [0,1]$ that keeps the offspring feasible
- whole arithmetical crossover: $a\,p_1 + (1-a)\,p_2$ with $a \in [0,1]$
- heuristic crossover (Wright [312]): $p_1 + a\,(p_1 - p_2)$ with $a \in [0,1]$, if $f(p_1) > f(p_2)$

Example: let $p_1 = (1,2)$ and $p_2 = (5,1)$ be points in the convex 2D space $x^2 + y^2 \le 28$, with $f(p_1) > f(p_2)$:
- simple crossover yields $p_{sc2} = (1,1)$ and $p_{sc1} = (5, \sqrt{3}) \approx (5, 1.7)$ (since $25 + 3 = 28$)
- arithmetical crossover yields all points along the line between $p_1$ and $p_2$
- heuristic crossover yields all points along the line between $p_1$ and $p_{hc} = (-3,3)$ (the point for $a = 1$; e.g., $a = 0.25$ gives $p_{hc'} = (0, 2.25)$)
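A sketch of the three operators above for 2D points; the feasibility test and the scanning search for the maximal a in simple crossover follow the slide's convex-space example and are illustrative assumptions:

    def arithmetical_crossover(p1, p2, a):
        """Whole arithmetical crossover: a*p1 + (1-a)*p2, a in [0,1]."""
        return tuple(a * u + (1 - a) * v for u, v in zip(p1, p2))

    def heuristic_crossover(p1, p2, a):
        """Wright's heuristic crossover: p1 + a*(p1 - p2), assuming f(p1) > f(p2)."""
        return tuple(u + a * (u - v) for u, v in zip(p1, p2))

    def simple_crossover(p1, p2, feasible, steps=1000):
        """Exchange y-coordinates, taking the largest a in [0,1] that keeps the
        offspring feasible (found here by simple scanning)."""
        (x1, y1), (x2, y2) = p1, p2
        def child(xa, ya, yb):              # offspring (x_a, a*y_b + (1-a)*y_a)
            best = (xa, ya)                 # a = 0 reproduces the feasible parent
            for i in range(steps + 1):
                a = i / steps
                c = (xa, a * yb + (1 - a) * ya)
                if feasible(c):
                    best = c                # keep the feasible child with largest a
            return best
        return child(x1, y1, y2), child(x2, y2, y1)

    in_disc = lambda p: p[0]**2 + p[1]**2 <= 28       # the slide's convex space
    print(simple_crossover((1, 2), (5, 1), in_disc))  # ((1, 1.0), (5, ~1.732))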
Another Example (Crossover Operators)

Let $p_1 = (0,0,0)$ and $p_2 = (1,1,1)$ in an unconstrained search space:
- arithmetical crossover produces $(a,a,a)$ with $a \in [0,1]$
- simple crossover produces (0,0,1), (0,1,1), (1,0,0), and (1,1,0)
- heuristic crossover produces $(a,a,a)$ with $a \in [1,2]$ if $f((1,1,1)) > f((0,0,0))$, and $(a,a,a)$ with $a \in [-1,0]$ if $f((1,1,1)) < f((0,0,0))$
Problems of Optimization with Constraints

[Figure: a search space partitioned into alternating regions of legal and illegal solutions; several solutions S lie in legal regions, while the optimal solution S+ lies in a legal region separated from them by illegal solutions. Legend: S := a solution, S+ := the optimal solution.]
A Harder Optimization Problem

[Figure: a search space in which the region of legal solutions is enclosed by illegal solutions on both sides.]
A Friendly Convex Search Space

[Figure: a convex region of legal solutions surrounded by illegal solutions, showing a point p with its two border points $p_u$ and $p_l$, and a segment between two points $p_1$ and $p_2$.]

Convexity:
(1) $p_1$ and $p_2$ in S => all points between $p_1$ and $p_2$ are in S.
(2) $p$ in S => along a given direction through p, exactly two border points can be found: $p_u$ and $p_l$ (see the sketch below).
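A sketch of how the two border points can be located in a convex space, using bisection along a fixed direction; this illustrates property (2) and is not GENOCOP's actual code — the names and the disc example are assumptions:

    def border_point(p, direction, feasible, t_max=1e6, iters=60):
        """Return the point p + t*direction with the largest feasible t >= 0."""
        lo, hi = 0.0, t_max        # p is feasible; p + t_max*direction is assumed not to be
        for _ in range(iters):
            mid = (lo + hi) / 2.0
            q = tuple(pi + mid * di for pi, di in zip(p, direction))
            if feasible(q):
                lo = mid           # still legal: the border lies farther out
            else:
                hi = mid           # illegal: the border lies closer to p
        return tuple(pi + lo * di for pi, di in zip(p, direction))

    in_disc = lambda q: q[0]**2 + q[1]**2 <= 28   # the convex space from the crossover example
    p_u = border_point((1, 2), (0, 1), in_disc)   # border point above p = (1, 2)
    p_l = border_point((1, 2), (0, -1), in_disc)  # border point below p
    print(p_u, p_l)  # approx (1, 5.196) and (1, -5.196), since sqrt(27) = 5.196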