Slide 1: Unconstrained Optimization
Objective: find the minimum of F(X), where X is a vector of design variables.
We may know lower and upper bounds for the optimum.
No constraints.
Slide 2: Outline
General optimization strategy
Optimization of second-degree polynomials
Zero-order methods
– Random search
– Powell's method
First-order methods
– Steepest descent
– Conjugate gradient
Second-order methods
Slide 3: General optimization strategy
Start: q = 0.
Repeat:
– q = q + 1
– Pick a search direction S_q.
– One-dimensional search: x_q = x_{q-1} + α*_q S_q
– Converged? If yes, exit; if not, repeat.
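This loop can be written as a small template in which the direction rule and the one-dimensional search are left abstract; the sketch below is an illustration (the function names pick_direction and line_search are assumptions, not from the slides) that the later methods fill in with specific choices.

```python
import numpy as np

def iterative_search(F, x0, pick_direction, line_search, max_iter=100, tol=1e-6):
    """General strategy: pick a direction S_q, find the step alpha* by a 1-D search,
    update x_q = x_{q-1} + alpha* S_q, and stop when the design change is small."""
    x = np.asarray(x0, dtype=float)
    for q in range(1, max_iter + 1):
        S = pick_direction(F, x)             # e.g. negative gradient, a conjugate direction, ...
        alpha = line_search(F, x, S)         # one-dimensional minimization of F(x + alpha*S)
        x_new = x + alpha * S
        if np.linalg.norm(x_new - x) < tol:  # convergence test on the design change
            return x_new
        x = x_new
    return x
```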
Slide 4: Optimization of second-degree polynomials
Quadratic: F(X) = a_11 x_1^2 + a_12 x_1 x_2 + … + a_nn x_n^2 = {X}^T [A] {X}
[A] is equal to one half of the Hessian matrix, [H].
There is a linear transformation {X} = [S]{Y} such that F(Y) = λ_1 y_1^2 + … + λ_n y_n^2 (no coupling terms).
[S]: its columns are the eigenvectors of [A], S_1, …, S_n.
S_1, …, S_n are also eigenvectors of [H].
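As a concrete illustration of this decoupling (the specific coefficients below are chosen for the example and are not from the slides), consider a two-variable quadratic:

```latex
% Illustrative 2-variable quadratic (coefficients chosen for this example)
F(X) = x_1^2 + x_1 x_2 + x_2^2 = \{X\}^T [A] \{X\},
\qquad
[A] = \begin{bmatrix} 1 & 1/2 \\ 1/2 & 1 \end{bmatrix},
\quad
[H] = 2[A].
% Eigenvalues of [A]: \lambda_1 = 3/2, \lambda_2 = 1/2;
% eigenvectors: S_1 = (1,1)/\sqrt{2}, \; S_2 = (1,-1)/\sqrt{2}.
% With \{X\} = [S]\{Y\} the coupling term vanishes:
F(Y) = \tfrac{3}{2}\, y_1^2 + \tfrac{1}{2}\, y_2^2 .
```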
Slide 5: Optimization of second-degree polynomials
Define conjugate directions S_1, …, S_n.
S_1, …, S_n are orthogonal (i.e. their dot products are zero) because the matrix [A] is symmetric.
– Note that conjugate directions are also linearly independent. Orthogonality is the stronger property: orthogonal vectors are always linearly independent, but linearly independent vectors are not necessarily orthogonal.
λ_i: eigenvalues of [A], which are equal to one half of the eigenvalues of the Hessian matrix.
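A quick numerical check of these statements, using the same illustrative matrix as in the example above (a sketch, assuming NumPy is available):

```python
import numpy as np

# Illustrative quadratic F(X) = {X}^T [A] {X} with [A] = [H]/2 (example values, not from the slides)
A = np.array([[1.0, 0.5],
              [0.5, 1.0]])
H = 2.0 * A                      # Hessian of the quadratic

lam_A, S = np.linalg.eigh(A)     # eigenvalues and eigenvectors of the symmetric matrix [A]
lam_H, _ = np.linalg.eigh(H)

print(S.T @ S)                   # ~ identity: the eigenvector directions are orthogonal
print(S.T @ A @ S)               # ~ diagonal: the directions are conjugate (no coupling terms)
print(lam_H / lam_A)             # ~ [2, 2]: Hessian eigenvalues are twice those of [A]
```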
Slide 6: Optimization of second-degree polynomials
We can find the exact minimum of a second-degree polynomial by performing n one-dimensional searches along the conjugate directions S_1, …, S_n.
If all eigenvalues of [A] are positive, then the second-degree polynomial has a unique minimum.
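The sketch below demonstrates this property on an illustrative second-degree polynomial with a linear term, F(x) = x^T A x + b^T x (the matrix, vector, and starting point are assumptions for the example): n exact line searches along the eigenvector directions of A land on the exact minimum.

```python
import numpy as np

# Illustrative second-degree polynomial F(x) = x^T A x + b^T x (values chosen for the example)
A = np.array([[1.0, 0.5],
              [0.5, 1.0]])
b = np.array([-2.0, 1.0])

def grad(x):
    return 2.0 * A @ x + b       # gradient of the quadratic

_, S = np.linalg.eigh(A)         # conjugate directions: eigenvectors of A

x = np.array([5.0, -3.0])        # arbitrary starting design
for i in range(A.shape[0]):      # n exact one-dimensional searches
    s = S[:, i]
    alpha = -(s @ grad(x)) / (2.0 * s @ A @ s)   # exact minimizer of F(x + alpha*s)
    x = x + alpha * s

print(x, np.linalg.solve(2.0 * A, -b))           # both give the exact minimum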
Slide 7: Zero-order methods: random search
Random number generator: generates samples of the variables drawn from a specified probability distribution. Available in most programming languages.
Idea: for F(x_1, …, x_n), generate N random n-tuples {x_1^1, …, x_n^1}, {x_1^2, …, x_n^2}, …, {x_1^N, …, x_n^N}, evaluate F at each, and keep the minimum.
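A minimal sketch of this idea, assuming uniform sampling between known lower and upper bounds (the test function, bounds, and sample count are illustrative):

```python
import numpy as np

def random_search(F, lower, upper, N=10_000, seed=0):
    """Evaluate F at N random points drawn uniformly within the bounds and keep the best."""
    rng = np.random.default_rng(seed)
    lower, upper = np.asarray(lower, dtype=float), np.asarray(upper, dtype=float)
    best_x, best_f = None, np.inf
    for _ in range(N):
        x = rng.uniform(lower, upper)          # one random n-tuple of design variables
        f = F(x)
        if f < best_f:
            best_x, best_f = x, f
    return best_x, best_f

# Example: a simple quadratic with minimum at (1, -2)
F = lambda x: (x[0] - 1.0) ** 2 + (x[1] + 2.0) ** 2
print(random_search(F, lower=[-5, -5], upper=[5, 5]))
```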
Slide 8: Powell's method
Efficient, reliable, and popular.
Based on conjugate directions, although it does not use the Hessian matrix.
Slide 9: Searching for the optimum in Powell's method
[Figure: successive search directions S_1 through S_6 in the design space]
First iteration: S_1 to S_3. Second iteration: S_4 to S_6.
Directions S_3 and S_6 are conjugate.
Present iteration: use the last two search directions from the previous iteration.
Slide 10: Powell's method: algorithm
Start from x_0. Define a set of n search directions S_q as the coordinate unit vectors, q = 1, …, n. Set x = x_0, y = x, q = 0.
Repeat:
– q = q + 1
– Find α* to minimize F(x_{q-1} + α S_q) and set x_q = x_{q-1} + α* S_q.
– If q < n, continue the loop. Once q = n:
– Find the conjugate direction S_{n+1} = x_n − y.
– Find α* to minimize F(x_n + α S_{n+1}) and set x_{n+1} = x_n + α* S_{n+1}.
– Converged? If yes, exit. If not, update the search directions (S_q = S_{q+1}, q = 1, …, n), set y = x_{n+1}, reset q = 0, and repeat.
One iteration requires n + 1 one-dimensional searches.
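A compact sketch of one way to code this loop, using SciPy's scalar minimizer for the one-dimensional searches; this is a simplified illustration of the iteration structure above (function names, the test quadratic, and the convergence test are assumptions), not a robust Powell implementation.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def powell_sketch(F, x0, n_iterations=20, tol=1e-8):
    """Simplified Powell iteration: n coordinate searches, one search along the
    conjugate direction x_n - y, then a shift of the direction set."""
    n = len(x0)
    S = list(np.eye(n))                 # initial directions: coordinate unit vectors
    x = np.asarray(x0, dtype=float)
    for _ in range(n_iterations):
        y = x.copy()
        for q in range(n):              # n one-dimensional searches
            s = S[q]
            alpha = minimize_scalar(lambda a: F(x + a * s)).x
            x = x + alpha * s
        s_new = x - y                   # conjugate direction built from this iteration
        if np.linalg.norm(s_new) < tol:
            break                       # no progress over the iteration: converged
        alpha = minimize_scalar(lambda a: F(x + a * s_new)).x
        x = x + alpha * s_new           # the (n+1)-th one-dimensional search
        S = S[1:] + [s_new]             # drop the oldest direction, append the new one
    return x

# Example on an illustrative coupled quadratic
F = lambda x: (x[0] - 1.0) ** 2 + 2.0 * (x[1] + 2.0) ** 2 + 0.5 * x[0] * x[1]
print(powell_sketch(F, [4.0, 3.0]))
```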
Slide 11: Powell's method
For a second-degree polynomial, the optimum is reached in n iterations.
Each iteration involves n + 1 one-dimensional searches.
Total: n(n + 1) one-dimensional searches.
Slide 12: First-order methods: steepest descent
Idea: search in the direction of the negative gradient, −∇F.
– Starting from a design, if we move by a small amount, the objective function decreases most along the direction of −∇F.
Slide 13: Algorithm
Start from x_0.
Repeat:
– Determine the steepest descent direction: S = −∇F(x).
– Perform a one-dimensional minimization in the steepest descent direction: find α* to minimize F(x + α S).
– Update the design: x = x + α* S.
– Converged? If yes, stop; if not, repeat.
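A minimal sketch of this loop, assuming the gradient is supplied by the caller and using SciPy's scalar minimizer for the line search (the test problem and tolerances are illustrative):

```python
import numpy as np
from scipy.optimize import minimize_scalar

def steepest_descent(F, grad_F, x0, max_iter=200, tol=1e-6):
    """Repeatedly line-search along the negative gradient until the gradient is small."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        S = -grad_F(x)                  # steepest descent direction
        if np.linalg.norm(S) < tol:
            break                       # gradient ~ 0: converged
        alpha = minimize_scalar(lambda a: F(x + a * S)).x
        x = x + alpha * S               # update the design
    return x

# Illustrative test problem: quadratic with minimum at (1, -2)
F = lambda x: (x[0] - 1.0) ** 2 + 4.0 * (x[1] + 2.0) ** 2
grad_F = lambda x: np.array([2.0 * (x[0] - 1.0), 8.0 * (x[1] + 2.0)])
print(steepest_descent(F, grad_F, [5.0, 5.0]))
```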
Slide 14: Steepest descent
Pros: easy to implement, robust, makes quick progress at the beginning of the optimization.
Cons: too slow toward the end of the optimization.