Optimization Methods One-Dimensional Unconstrained Optimization

Presentation transcript:

Optimization Multi-Dimensional Unconstrained Optimization Part II: Gradient Methods

Optimization Methods
  One-Dimensional Unconstrained Optimization
    Golden-Section Search
    Quadratic Interpolation
    Newton's Method
  Multi-Dimensional Unconstrained Optimization
    Non-gradient or direct methods
    Gradient methods
  Linear Programming (Constrained)
    Graphical Solution
    Simplex Method

Gradient
The gradient vector of a function f, denoted ∇f, tells us, from an arbitrary point:
  Which direction is the steepest ascent/descent? i.e., the direction that will yield the greatest change in f.
  How much will we gain by taking that step? Indicated by the magnitude of ∇f, i.e., ||∇f||2.
For a 2-D function, ∇f = (∂f/∂x) i + (∂f/∂y) j.

Gradient – Example
Problem: Employ the gradient to evaluate the steepest-ascent direction for the function f(x, y) = xy² at the point (2, 2).
Solution: ∂f/∂x = y² = 4 and ∂f/∂y = 2xy = 8, so ∇f = 4i + 8j. The steepest-ascent direction makes an angle θ = tan⁻¹(8/4) ≈ 1.107 radians (≈ 63.4°) with the x axis, and its magnitude is ||∇f||2 = √(4² + 8²) ≈ 8.944.
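
A quick numerical check of this example, written as a short Python sketch (the variable names gx, gy, theta, norm are illustrative, not from the slides):

    import math

    # f(x, y) = x*y**2, so df/dx = y**2 and df/dy = 2*x*y
    x, y = 2.0, 2.0
    gx, gy = y**2, 2*x*y              # gradient components: (4, 8)
    theta = math.atan2(gy, gx)        # direction of steepest ascent, ~1.107 rad (63.4 degrees)
    norm = math.hypot(gx, gy)         # ||grad f||_2 = sqrt(80) ~ 8.944
    print(gx, gy, theta, norm)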

The direction of steepest ascent (gradient) is generally perpendicular, or orthogonal, to the elevation contour.

Detecting Optimum Point
For 1-D problems, if f'(x') = 0, then:
  If f''(x') < 0, then x' is a maximum point.
  If f''(x') > 0, then x' is a minimum point.
  If f''(x') = 0, the test is inconclusive; x' may be a saddle (inflection) point.
What about for multi-dimensional problems?

Detecting Optimum Point
For 2-D problems, if a point (a, b) is an optimum point, then
  ∂f/∂x = 0 and ∂f/∂y = 0 at (a, b).
In addition, if the point is a maximum point, then
  ∂²f/∂x² < 0 and ∂²f/∂y² < 0 at (a, b).
Question: If both of these conditions are satisfied for a point, can we conclude that the point is a maximum point?

Detecting Optimum Point
No. A point can look like a maximum when viewed along the x and y directions, yet look like a minimum when viewed along the y = x direction. Such a point (a, b) is a saddle point.

Detecting Optimum Point
For 2-D functions, we also have to take into consideration the mixed second partial derivative ∂²f/∂x∂y. That is, whether a maximum or a minimum occurs involves both first partial derivatives with respect to x and y and all of the second partial derivatives.

Hessian Matrix (or Hessian of f)
Also known as the matrix of second partial derivatives:
  H = [ ∂²f/∂x²    ∂²f/∂x∂y ]
      [ ∂²f/∂y∂x   ∂²f/∂y²  ]
Its determinant, |H|, provides a way to discern whether a function has reached an optimum or not.

Detecting Optimum Point
Assuming that the partial derivatives are continuous at and near the point being evaluated:
  If |H| > 0 and ∂²f/∂x² > 0, then f has a local minimum at the point.
  If |H| > 0 and ∂²f/∂x² < 0, then f has a local maximum at the point.
  If |H| < 0, then f has a saddle point at the point.
The quantity |H| is equal to the determinant of the Hessian matrix of f:
  |H| = (∂²f/∂x²)(∂²f/∂y²) - (∂²f/∂x∂y)²
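
As an illustrative Python sketch (not from the slides), the |H| test can be applied to f(x, y) = 2xy + 2x - x² - 2y², the example function used later in this deck, whose critical point is at (2, 1):

    # Second partials of f(x, y) = 2*x*y + 2*x - x**2 - 2*y**2 (constant for this quadratic)
    fxx, fyy, fxy = -2.0, -4.0, 2.0
    detH = fxx*fyy - fxy**2           # |H| = (-2)(-4) - 2**2 = 4

    if detH > 0 and fxx > 0:
        print("local minimum")
    elif detH > 0 and fxx < 0:
        print("local maximum")        # this branch fires: (2, 1) is a maximum
    elif detH < 0:
        print("saddle point")
    else:
        print("test inconclusive")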

Finite-Difference Approximation
Using the centered-difference approach, with small perturbations δx and δy:
  ∂f/∂x ≈ [ f(x + δx, y) - f(x - δx, y) ] / (2 δx)
  ∂f/∂y ≈ [ f(x, y + δy) - f(x, y - δy) ] / (2 δy)
The second partial derivatives needed for the Hessian can be approximated in the same way. Used when evaluating the partial derivatives analytically is inconvenient.
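
A minimal Python sketch of the centered-difference gradient (the helper name grad_centered and the step size d are assumptions for illustration):

    def grad_centered(f, x, y, d=1e-5):
        # Centered differences: second-order accurate approximations of the first partials
        dfdx = (f(x + d, y) - f(x - d, y)) / (2*d)
        dfdy = (f(x, y + d) - f(x, y - d)) / (2*d)
        return dfdx, dfdy

    f = lambda x, y: x*y**2
    print(grad_centered(f, 2.0, 2.0))  # approximately (4.0, 8.0), matching the earlier example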

Steepest Ascent Method
  Start at an initial point x1 = { x1, x2, …, xn }
  i = 0
  Repeat
    i = i + 1
    Si = ∇f evaluated at xi
    Find h such that f(xi + h Si) is maximized
    xi+1 = xi + h Si
  Until ( |(f(xi+1) - f(xi)) / f(xi+1)| < es1  or  ||xi+1 - xi|| / ||xi+1|| < es2 )
The steepest ascent method converges linearly.
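
A Python sketch of this loop, assuming SciPy is available for the 1-D line search (the function names and tolerances below are illustrative, not from the slides):

    import numpy as np
    from scipy.optimize import minimize_scalar

    def steepest_ascent(f, grad, x0, es=1e-6, max_it=100):
        # Follow the gradient; at each iteration line-search for the best step h.
        x = np.asarray(x0, dtype=float)
        for _ in range(max_it):
            s = grad(x)                                    # S_i = grad f at x_i
            h = minimize_scalar(lambda h: -f(x + h*s)).x   # maximize g(h) = f(x_i + h*S_i)
            x_new = x + h*s
            if abs(f(x_new) - f(x)) < es * abs(f(x_new)):  # relative-change stopping criterion
                return x_new
            x = x_new
        return x

    # Example function from the slides that follow:
    f = lambda x: 2*x[0]*x[1] + 2*x[0] - x[0]**2 - 2*x[1]**2
    grad = lambda x: np.array([2*x[1] + 2 - 2*x[0], 2*x[0] - 4*x[1]])
    print(steepest_ascent(f, grad, [-1.0, 1.0]))           # zigzags toward the maximum at (2, 1)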

Steepest Ascent Method – Maximizing f(xi + h Si)
Let g(h) = f(xi + h Si). g(h) is a parameterized version of f along the direction Si and has only one variable, h. If g(h') is optimal, then f(xi + h' Si) is also optimal. Thus, to find the h that maximizes f(xi + h Si), we can find the h that maximizes g(h) using any method for optimizing a 1-D function (bisection applied to g'(h) = 0, golden-section search, Newton's method, etc.).
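
For instance, a golden-section search (one of the 1-D techniques listed in the outline) can maximize g(h); this Python sketch assumes the maximum lies in a known interval [a, b] and re-evaluates g at both interior points each pass for simplicity:

    def golden_max(g, a, b, tol=1e-6):
        # Golden-section search for the maximum of a unimodal g on [a, b].
        R = (5**0.5 - 1) / 2              # golden ratio conjugate, ~0.618
        c, d = b - R*(b - a), a + R*(b - a)
        while abs(b - a) > tol:
            if g(c) > g(d):               # maximum lies in [a, d]
                b, d = d, c
                c = b - R*(b - a)
            else:                         # maximum lies in [c, b]
                a, c = c, d
                d = a + R*(b - a)
        return (a + b) / 2

    g = lambda h: -180*h**2 + 72*h - 7    # the g(h) from the example on the next slides
    print(golden_max(g, 0.0, 1.0))        # ~0.2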

Example: Suppose f(x, y) = 2xy + 2x - x² - 2y². Use the steepest ascent method to find the next point if we are moving from the point (-1, 1).
  ∂f/∂x = 2y + 2 - 2x = 2(1) + 2 - 2(-1) = 6
  ∂f/∂y = 2x - 4y = 2(-1) - 4(1) = -6
So ∇f = 6i - 6j at (-1, 1). Points along the ascent direction are x = -1 + 6h and y = 1 - 6h, so
  g(h) = f(-1 + 6h, 1 - 6h) = -180h² + 72h - 7
The next step is to find the h that maximizes g(h).

Setting g'(h) = -360h + 72 = 0 gives h = 0.2. Since h = 0.2 maximizes g(h), x = -1 + 6(0.2) = 0.2 and y = 1 - 6(0.2) = -0.2 maximize f(x, y) along the gradient direction. So, moving along the direction of the gradient from the point (-1, 1), we reach the optimum along that line, which is our next point, at (0.2, -0.2).
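
The arithmetic can be verified with a few lines of Python (purely illustrative):

    f = lambda x, y: 2*x*y + 2*x - x**2 - 2*y**2

    x0, y0 = -1.0, 1.0
    dfdx = 2*y0 + 2 - 2*x0                 # = 6
    dfdy = 2*x0 - 4*y0                     # = -6

    h = 72/360                             # g'(h) = -360h + 72 = 0  =>  h = 0.2
    x1, y1 = x0 + dfdx*h, y0 + dfdy*h      # next point
    print(x1, y1, f(x1, y1))               # 0.2, -0.2, f = 0.2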

Conjugate Gradient Approaches (Fletcher-Reeves)
Methods moving in conjugate directions are quadratically convergent (they locate the optimum of a quadratic function exactly in a finite number of steps). Idea: calculate the conjugate direction at each point based on the gradient, e.g. (Fletcher-Reeves)
  Si = ∇f(xi) + βi Si-1,  with  βi = ||∇f(xi)||² / ||∇f(xi-1)||²
Converges faster than Powell's method.
Ref: Engineering Optimization (Theory & Practice), 3rd ed., by Singiresu S. Rao.
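
A Python sketch of the Fletcher-Reeves update in an ascent form (the sign convention, helper names, and SciPy line search are my assumptions, mirroring the steepest ascent framing of these slides):

    import numpy as np
    from scipy.optimize import minimize_scalar

    def fletcher_reeves_ascent(f, grad, x0, n_iter=10, tol=1e-8):
        # Conjugate-direction ascent: the first direction is the gradient; later
        # directions mix in the previous one via the Fletcher-Reeves ratio.
        x = np.asarray(x0, dtype=float)
        g = grad(x)
        s = g.copy()
        for _ in range(n_iter):
            h = minimize_scalar(lambda h: -f(x + h*s)).x   # 1-D line search along s
            x = x + h*s
            g_new = grad(x)
            if g_new @ g_new < tol**2:                     # gradient ~ 0: stationary point
                break
            beta = (g_new @ g_new) / (g @ g)               # Fletcher-Reeves ratio
            s = g_new + beta*s                             # new (conjugate) direction
            g = g_new
        return x

    f = lambda x: 2*x[0]*x[1] + 2*x[0] - x[0]**2 - 2*x[1]**2
    grad = lambda x: np.array([2*x[1] + 2 - 2*x[0], 2*x[0] - 4*x[1]])
    print(fletcher_reeves_ascent(f, grad, [-1.0, 1.0]))    # essentially reaches (2, 1) in two line searches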

Newton's Method
One-dimensional optimization:  xi+1 = xi - f'(xi) / f''(xi)
Multi-dimensional optimization: at the optimum, ∇f(x) = 0. Applying the same idea with the Hessian gives
  xi+1 = xi - Hi⁻¹ ∇f(xi)
where Hi is the Hessian matrix (the matrix of 2nd partial derivatives) of f evaluated at xi.
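
A Python sketch of the multi-dimensional Newton iteration (analytic gradient and Hessian supplied by the caller; the names and tolerances are illustrative):

    import numpy as np

    def newton_opt(grad, hess, x0, n_iter=20, tol=1e-10):
        # x_{i+1} = x_i - H_i^{-1} grad f(x_i); solve the linear system instead of inverting H_i.
        x = np.asarray(x0, dtype=float)
        for _ in range(n_iter):
            g = grad(x)
            if np.linalg.norm(g) < tol:          # stationary point found
                break
            x = x - np.linalg.solve(hess(x), g)
        return x

    # The quadratic example from the earlier slides: Newton reaches (2, 1) in one step.
    grad = lambda x: np.array([2*x[1] + 2 - 2*x[0], 2*x[0] - 4*x[1]])
    hess = lambda x: np.array([[-2.0, 2.0], [2.0, -4.0]])
    print(newton_opt(grad, hess, [-1.0, 1.0]))   # [2. 1.]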

Newton's Method
Converges quadratically. May diverge if the starting point is not close enough to the optimum point.

Marquardt Method
Idea: When the guessed point is far away from the optimum point, use the Steepest Ascent method. As the guessed point gets closer and closer to the optimum point, gradually switch to Newton's method.

Marquardt Method
The Marquardt method achieves this by modifying the Hessian matrix H in Newton's method with a weighted identity matrix; for maximization this can be written as
  H̃i = Hi - αi I
(so that -H̃i⁻¹ behaves like (1/αi) I when αi is large).
  Initially, set α0 to a huge number.
  Decrease the value of αi in each iteration.
  When xi is close to the optimum point, make αi zero (or close to zero).

Marquardt Method
When αi is large, H̃i ≈ -αi I and the step behaves like the Steepest Ascent method (i.e., move in the direction of the gradient).
When αi is close to zero, H̃i ≈ Hi and the step becomes Newton's method.
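
A Python sketch of this blending for maximization (the shrinking schedule for alpha and the Hi - αi I sign convention are my assumptions, chosen so that a large alpha reproduces a small steepest-ascent step):

    import numpy as np

    def marquardt_ascent(grad, hess, x0, alpha0=1e4, shrink=0.5, n_iter=60, tol=1e-8):
        # Large alpha: step ~ (1/alpha) * grad f   (steepest ascent).
        # Alpha near zero: step ~ -H^{-1} grad f   (Newton).
        x = np.asarray(x0, dtype=float)
        alpha = alpha0
        I = np.eye(len(x))
        for _ in range(n_iter):
            g = grad(x)
            if np.linalg.norm(g) < tol:
                break
            x = x - np.linalg.solve(hess(x) - alpha*I, g)
            alpha *= shrink                        # decrease alpha every iteration
        return x

    grad = lambda x: np.array([2*x[1] + 2 - 2*x[0], 2*x[0] - 4*x[1]])
    hess = lambda x: np.array([[-2.0, 2.0], [2.0, -4.0]])
    print(marquardt_ascent(grad, hess, [-1.0, 1.0]))   # approaches the maximum at (2, 1)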