Gradient Methods In Optimization


1 Gradient Methods In Optimization
University of Tehran, Faculty of Engineering, School of Mechanical Engineering
Advanced Numerical Methods
Gradient Methods In Optimization
Prof. M. Raisee Dehkordi
By Meysam Rezaei Barmi
Fall 2005

2 Gradient Methods In Optimization
- Steepest Descent Method
- Conjugate Gradient Method
- Generalized Reduced Gradient Method

3 Steepest Descent Method
Gradient of the function: ∇f(X) = (∂f/∂x1, ∂f/∂x2, …, ∂f/∂xn)ᵀ. If we move along the gradient direction from any point in n-dimensional space, the function value increases at the fastest rate. The gradient vector therefore represents the direction of steepest ascent, and the negative of the gradient vector denotes the direction of steepest descent.
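To make this claim concrete, here is a small numerical check (a sketch with a hypothetical test function, not one taken from the slides): over a tiny step, the change in f is largest along +∇f and most negative along −∇f.

```python
import numpy as np

# Hypothetical test function (not from the slides).
def f(x):      return x[0]**2 + 3.0 * x[1]**2 + x[0] * x[1]
def grad_f(x): return np.array([2.0 * x[0] + x[1], 6.0 * x[1] + x[0]])

x0   = np.array([1.0, 2.0])
g    = grad_f(x0)
step = 1e-3

# Compare the gradient directions against a few random unit directions.
directions = {"+grad": g / np.linalg.norm(g),
              "-grad": -g / np.linalg.norm(g)}
rng = np.random.default_rng(0)
for k in range(3):
    d = rng.standard_normal(2)
    directions[f"random {k}"] = d / np.linalg.norm(d)

for name, d in directions.items():
    print(f"{name:10s} change in f: {f(x0 + step * d) - f(x0): .6e}")
# The largest increase occurs along +grad and the largest decrease along -grad.
```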

4 Steepest Descent Algorithm
1. Start with an initial trial point X1 and set the iteration counter i = 1.
2. Find the search direction Si = −∇f(Xi).
3. Find the step length λi* that minimizes f(Xi + λ Si) and set Xi+1 = Xi + λi* Si.
4. Is Xi+1 optimum (||∇f(Xi+1)|| ≈ 0)? If yes, stop; if no, set i = i + 1 and return to step 2.
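A minimal Python sketch of this loop, assuming SciPy is available for the one-dimensional line search; the quadratic test function and starting point are hypothetical, not the slide's example.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def steepest_descent(f, grad, x0, tol=1e-6, max_iter=1000):
    """Minimize f by repeatedly stepping along S = -grad f(X),
    with the step length found by a 1-D line search."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        s = -grad(x)                                     # steepest-descent direction
        if np.linalg.norm(s) < tol:                      # optimality test: ||grad f|| ~ 0
            break
        lam = minimize_scalar(lambda t: f(x + t * s)).x  # lambda* minimizing f(X + lam S)
        x = x + lam * s                                  # move to the new point
    return x

# Hypothetical quadratic test problem.
f    = lambda x: x[0]**2 + 4.0 * x[1]**2 - 2.0 * x[0] * x[1] - x[0]
grad = lambda x: np.array([2.0 * x[0] - 2.0 * x[1] - 1.0,
                           8.0 * x[1] - 2.0 * x[0]])
print(steepest_descent(f, grad, [2.0, 2.0]))   # approaches the minimizer (2/3, 1/6)
```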

5 Example

6 Conjugate Gradient Method
The convergence characteristics of the steepest descent method can be greatly improved by modifying it into the conjugate gradient method. Any minimization method that makes use of conjugate directions is quadratically convergent: it minimizes a quadratic function in n steps or fewer. Since a general function is well approximated by a quadratic near its optimum (via a Taylor series expansion), a quadratically convergent method locates the optimum in a finite number of iterations.

7 Conjugate Gradient Algorithm
1. Start with an initial trial point X1.
2. Find the first search direction S1 = −∇f(X1).
3. Find λ1* to minimize f(X1 + λ S1), set X2 = X1 + λ1* S1, and set i = 2.
4. Find the next search direction Si = −∇f(Xi) + ( |∇f(Xi)|² / |∇f(Xi−1)|² ) Si−1.
5. Find λi* to minimize f(Xi + λ Si) and set Xi+1 = Xi + λi* Si.
6. Is Xi+1 optimum? If yes, stop; if no, set i = i + 1 and return to step 4.
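A minimal Python sketch of this (Fletcher-Reeves) update, again assuming SciPy for the line search; the test function is the same hypothetical quadratic as before, not the slide's example.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def fletcher_reeves(f, grad, x0, tol=1e-8, max_iter=200):
    """Conjugate gradient (Fletcher-Reeves) with a numerical line search."""
    x = np.asarray(x0, dtype=float)
    g = grad(x)
    s = -g                                               # first direction: steepest descent
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:                      # optimality test
            break
        lam = minimize_scalar(lambda t: f(x + t * s)).x  # line search along S
        x = x + lam * s
        g_new = grad(x)
        beta = (g_new @ g_new) / (g @ g)                 # |grad f_i|^2 / |grad f_{i-1}|^2
        s = -g_new + beta * s                            # next conjugate direction
        g = g_new
    return x

# Hypothetical quadratic: CG reaches the minimum in essentially two line searches.
f    = lambda x: x[0]**2 + 4.0 * x[1]**2 - 2.0 * x[0] * x[1] - x[0]
grad = lambda x: np.array([2.0 * x[0] - 2.0 * x[1] - 1.0,
                           8.0 * x[1] - 2.0 * x[0]])
print(fletcher_reeves(f, grad, [2.0, 2.0]))              # ~ (0.6667, 0.1667)
```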

8 Example

9 We can use the conjugate gradient method to solve a linear system of equations A X = b
by minimizing f(X) = (1/2) Xᵀ A X − bᵀ X (with A symmetric positive definite), because at the optimum point we have ∇f(X) = A X − b = 0.
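A sketch of the resulting linear conjugate gradient solver: for this quadratic the exact line search and the conjugation coefficient have closed forms, so no external line search is needed. The 2x2 test system is made up for illustration.

```python
import numpy as np

def cg_solve(A, b, x0=None, tol=1e-10):
    """Solve A x = b (A symmetric positive definite) by minimizing
    f(x) = 0.5 x^T A x - b^T x with the conjugate gradient method."""
    n = len(b)
    x = np.zeros(n) if x0 is None else np.asarray(x0, dtype=float)
    r = b - A @ x                  # residual = -grad f(x)
    p = r.copy()                   # first search direction
    rs = r @ r
    for _ in range(n):             # at most n steps in exact arithmetic
        Ap = A @ p
        alpha = rs / (p @ Ap)      # exact line search along p
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p  # conjugate direction update
        rs = rs_new
    return x

A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
print(cg_solve(A, b), np.linalg.solve(A, b))   # both ~ [0.0909, 0.6364]
```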

10 Generalized Reduced Gradient Method
One of the most popular search methods for optimizing constrained functions is the Generalized Reduced Gradient (GRG) method.
Cost function: f(X), X = (x1, x2, …, xn)
Constraints: hj(X) = 0, j = 1, 2, …, m
n: number of variables
m: number of constraints
n − m: number of decision variables

11 Assume n = 4 and m = 2. We want to optimize the cost function f(x1, x2, x3, x4) subject to h1(x1, x2, x3, x4) = 0 and h2(x1, x2, x3, x4) = 0. Choose x1 and x2 as the two decision variables; the constraints then determine the remaining (state) variables x3 and x4.

12 Write the total differentials of the cost function and the constraints:
df = (∂f/∂x1) dx1 + (∂f/∂x2) dx2 + (∂f/∂x3) dx3 + (∂f/∂x4) dx4
dhj = (∂hj/∂x1) dx1 + (∂hj/∂x2) dx2 + (∂hj/∂x3) dx3 + (∂hj/∂x4) dx4 = 0,  j = 1, 2
where dY = (dx1, dx2)ᵀ collects the decision variables and dZ = (dx3, dx4)ᵀ the state variables. Therefore the constraint differentials can be written as [∂h/∂Y] dY + [∂h/∂Z] dZ = 0, and thus dZ = −[∂h/∂Z]⁻¹ [∂h/∂Y] dY.

13 Substituting for dZ from the previous slide into df, we have
df = [ (∂f/∂Y)ᵀ − (∂f/∂Z)ᵀ [∂h/∂Z]⁻¹ [∂h/∂Y] ] dY

14 Therefore we obtain the Generalized Reduced Gradient
GR = df/dY = ∂f/∂Y − [∂h/∂Y]ᵀ [∂h/∂Z]⁻ᵀ ∂f/∂Z
If GR,i > 0, for minimization choose Δyi < 0; if GR,i < 0, for minimization choose Δyi > 0. In other words, each decision variable is stepped opposite the sign of its reduced-gradient component.
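As a sanity check on the formula above, the following sketch builds a small hypothetical n = 4, m = 2 problem (not the one on the slides), computes GR analytically, and compares it with a finite-difference derivative of the reduced objective f(Y, Z(Y)).

```python
import numpy as np

# Hypothetical problem, used only to verify the reduced-gradient formula:
#   f(x)  = x1^2 + 2 x2^2 + 3 x3^2 + 4 x4^2
#   h1(x) = x1 + x2 + x3 + x4 - 1 = 0
#   h2(x) = x1 - x2 + 2 x3 - x4   = 0
# Decision variables Y = (x1, x2), state variables Z = (x3, x4).

def f(x):      return x[0]**2 + 2*x[1]**2 + 3*x[2]**2 + 4*x[3]**2
def grad_f(x): return np.array([2*x[0], 4*x[1], 6*x[2], 8*x[3]])

dh_dY = np.array([[1.0,  1.0], [1.0, -1.0]])   # d h_j / d y_i
dh_dZ = np.array([[1.0,  1.0], [2.0, -1.0]])   # d h_j / d z_i

def solve_state(y):
    """Solve the (linear) constraints h(Y, Z) = 0 for Z given Y."""
    rhs = np.array([1.0 - y[0] - y[1], -y[0] + y[1]])
    return np.linalg.solve(dh_dZ, rhs)

def reduced_gradient(y):
    z = solve_state(y)
    g = grad_f(np.concatenate([y, z]))
    f_Y, f_Z = g[:2], g[2:]
    # GR = df/dY - (dh/dY)^T (dh/dZ)^-T df/dZ
    return f_Y - dh_dY.T @ np.linalg.solve(dh_dZ.T, f_Z)

def fd_gradient(y, eps=1e-6):
    """Central-difference gradient of the reduced objective f(Y, Z(Y))."""
    g = np.zeros(2)
    for i in range(2):
        yp, ym = y.copy(), y.copy()
        yp[i] += eps; ym[i] -= eps
        g[i] = (f(np.concatenate([yp, solve_state(yp)])) -
                f(np.concatenate([ym, solve_state(ym)]))) / (2 * eps)
    return g

y0 = np.array([0.3, 0.2])
print(reduced_gradient(y0))   # analytical generalized reduced gradient
print(fd_gradient(y0))        # should agree to roughly 1e-6
```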

15 Generalized Reduced Gradient Algorithm
1. Choose the n − m decision variables and their step sizes.
2. Initialize the decision variables.
3. Move towards the constraints: solve h(X) = 0 for the m state variables to obtain a feasible point.
4. Calculate the generalized reduced gradient GR.
5. Step each decision variable opposite the sign of its GR component, then move back onto the constraints.
6. If the cost function has decreased, return to step 4; otherwise reduce the step sizes (stop when they become sufficiently small).
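A minimal sketch of this loop on a hypothetical two-variable problem (one decision variable, one state variable, one nonlinear constraint), with a fixed step size instead of the adaptive step-size logic above; the constraint-restoration step uses Newton's method on the state variable.

```python
import numpy as np

# Hypothetical test problem (not from the slides):
#   minimize   f(x1, x2) = (x1 - 2)^2 + (x2 - 1)^2
#   subject to h(x1, x2) = x1^2 / 4 + x2^2 - 1 = 0
# x1 is the decision variable, x2 > 0 the state variable.

def f(x1, x2):  return (x1 - 2.0) ** 2 + (x2 - 1.0) ** 2
def h(x1, x2):  return x1 ** 2 / 4.0 + x2 ** 2 - 1.0
def df(x1, x2): return np.array([2 * (x1 - 2.0), 2 * (x2 - 1.0)])   # [df/dx1, df/dx2]
def dh(x1, x2): return np.array([x1 / 2.0, 2 * x2])                 # [dh/dx1, dh/dx2]

def restore(x1, x2, tol=1e-12):
    """Move back onto the constraint: Newton's method on h(x1, x2) = 0 in x2."""
    for _ in range(50):
        hx = h(x1, x2)
        if abs(hx) < tol:
            break
        x2 -= hx / dh(x1, x2)[1]
    return x2

def grg(x1=0.5, step=0.2, tol=1e-8, max_iter=500):
    x2 = restore(x1, 1.0)                       # initial feasible point
    for _ in range(max_iter):
        gf, gh = df(x1, x2), dh(x1, x2)
        GR = gf[0] - gh[0] / gh[1] * gf[1]      # generalized reduced gradient
        if abs(GR) < tol:
            break
        x1 = x1 - step * GR                     # step opposite the reduced gradient
        x2 = restore(x1, x2)                    # project back onto h = 0
    return x1, x2

print(grg())   # converges to the point of the ellipse closest to (2, 1)
```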

16 Example Initial step size Last step size

