Chapter 14.


Chapter 14

Multidimensional Unconstrained Optimization. Techniques to find the minimum and maximum of a function of several variables are described. These techniques are classified as:
Gradient (descent or ascent) methods, which require derivative evaluation.
Non-gradient (direct) methods, which do not require derivative evaluation.

Figure 14.1

DIRECT METHODS
Random Search. Based on evaluating the function at randomly selected values of the independent variables. If a sufficient number of samples is taken, the optimum will eventually be located. Example: the maximum of the function f(x, y) = y - x - 2x² - 2xy - y² can be found using a random number generator.
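
A minimal sketch of random search on this example function. The search region, sample count, and seed below are assumptions, not taken from the slides:

```python
import random

def f(x, y):
    """Example function from the slide: f(x, y) = y - x - 2x^2 - 2xy - y^2."""
    return y - x - 2 * x ** 2 - 2 * x * y - y ** 2

def random_search(f, x_range, y_range, n_samples=10000, seed=0):
    """Evaluate f at uniformly random points and keep the best one seen."""
    rng = random.Random(seed)
    best_x = best_y = None
    best_f = float("-inf")
    for _ in range(n_samples):
        x = rng.uniform(*x_range)
        y = rng.uniform(*y_range)
        fxy = f(x, y)
        if fxy > best_f:
            best_x, best_y, best_f = x, y, fxy
    return best_x, best_y, best_f

# Search an assumed region -2 <= x <= 2, 1 <= y <= 3; the true maximum
# f = 1.25 occurs at x = -1, y = 1.5, so the best sample should land nearby.
print(random_search(f, (-2, 2), (1, 3)))
```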

Figure 14.2

Advantages: Works even for discontinuous and nondifferentiable functions. The optimum found is a global optimum rather than a local optimum.
Disadvantages: As the number of independent variables grows, the task can become onerous. Not efficient; it does not account for the behavior of the underlying function.

Univariate and Pattern Searches. More efficient than random search, and they still do not require derivative evaluation. The basic strategy is to change one variable at a time while the other variables are held constant. The problem is thus reduced to a sequence of one-dimensional searches that can be solved by a variety of methods. The search becomes less efficient as you approach the maximum.
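
A sketch of the univariate (cyclic coordinate) search idea, using a simple golden-section routine as the one-dimensional maximizer and reusing the example function above. The bracketing window and cycle count are assumptions:

```python
def f(x, y):
    """Example function from the random-search slide."""
    return y - x - 2 * x ** 2 - 2 * x * y - y ** 2

def golden_max(g, a, b, tol=1e-6):
    """Golden-section search for the maximum of a 1-D function g on [a, b]."""
    R = (5 ** 0.5 - 1) / 2  # golden-ratio factor, about 0.618
    x1 = b - R * (b - a)
    x2 = a + R * (b - a)
    while b - a > tol:
        if g(x1) < g(x2):   # maximum lies in [x1, b]
            a, x1 = x1, x2
            x2 = a + R * (b - a)
        else:               # maximum lies in [a, x2]
            b, x2 = x2, x1
            x1 = b - R * (b - a)
    return (a + b) / 2

def univariate_search(f, x0, y0, n_cycles=20, window=2.0):
    """Cyclic coordinate search: optimize x with y fixed, then y with x fixed."""
    x, y = x0, y0
    for _ in range(n_cycles):
        x = golden_max(lambda xv: f(xv, y), x - window, x + window)
        y = golden_max(lambda yv: f(x, yv), y - window, y + window)
    return x, y, f(x, y)

print(univariate_search(f, 0.0, 1.0))  # approaches the maximum at (-1, 1.5)
```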

Figure 14.3

Pattern directions can be used to shoot directly along the ridge towards the maximum. Figure 14.4

Figure 14.5 The best-known algorithm, Powell's method, is based on the observation that if points 1 and 2 are obtained by one-dimensional searches in the same direction but from different starting points, then the line joining 1 and 2 will be directed toward the maximum (exactly so for a quadratic function). Such directions are called conjugate directions.
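
Powell's method is available in standard numerical libraries; for instance, SciPy's general minimizer can be asked to use it, with the maximum of the example function found by minimizing -f. A minimal usage sketch, assuming SciPy is installed:

```python
from scipy.optimize import minimize

def neg_f(v):
    """Negative of the example function, so that minimization finds its maximum."""
    x, y = v
    return -(y - x - 2 * x ** 2 - 2 * x * y - y ** 2)

# Powell's method needs no derivatives; only function evaluations are used.
result = minimize(neg_f, [0.0, 1.0], method="Powell")
print(result.x, -result.fun)  # approximately (-1.0, 1.5) and 1.25
```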

GRADIENT METHODS
Gradients and Hessians
The Gradient. If f(x, y) is a two-dimensional function, the gradient vector tells us in what direction the steepest ascent lies and how much we gain by taking a step in that direction. The directional derivative of f(x, y) at the point x = a and y = b, along a direction making an angle θ with the x axis, is g'(0) = (∂f/∂x) cos θ + (∂f/∂y) sin θ, with the partial derivatives evaluated at (a, b); the gradient itself is ∇f = (∂f/∂x) i + (∂f/∂y) j.

Figure 14.6 For n dimensions, the gradient generalizes to ∇f = [∂f/∂x1, ∂f/∂x2, ..., ∂f/∂xn]^T.
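
A small sketch of estimating the gradient numerically by central differences, useful when analytical partial derivatives are inconvenient; the step size h is an assumption:

```python
import numpy as np

def grad(f, x, h=1e-6):
    """Central-difference approximation of the gradient of f at the point x."""
    x = np.asarray(x, dtype=float)
    g = np.zeros_like(x)
    for i in range(x.size):
        e = np.zeros_like(x)
        e[i] = h
        g[i] = (f(x + e) - f(x - e)) / (2 * h)
    return g

# Example function from the random-search slide, written for a vector argument.
f_vec = lambda v: v[1] - v[0] - 2 * v[0] ** 2 - 2 * v[0] * v[1] - v[1] ** 2

print(grad(f_vec, [-1.0, 1.5]))  # approximately [0, 0] at the maximum
```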

The Hessian. For one-dimensional functions, both the first and second derivatives provide valuable information for searching out optima. The first derivative (a) gives the steepest trajectory of the function and (b) tells us when we have reached an optimum, since it is zero there. The second derivative tells us whether that optimum is a maximum or a minimum. For two-dimensional functions, determining whether a maximum or a minimum occurs involves not only the partial derivatives with respect to x and y but also the second partial derivatives with respect to x and y.

Figure 14.7 Figure 14.8

Assuming that the partial derivatives are continuous at and near the point being evaluated, the quantity |H| is the determinant of the Hessian matrix of second derivatives: |H| = (∂²f/∂x²)(∂²f/∂y²) - (∂²f/∂x∂y)². If |H| > 0 and ∂²f/∂x² < 0, f has a local maximum; if |H| > 0 and ∂²f/∂x² > 0, f has a local minimum; if |H| < 0, f has a saddle point.
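
A minimal sketch of this second-derivative test for a two-variable function; the helper name and the numerical check against the earlier example function are illustrative:

```python
def classify_stationary_point(fxx, fyy, fxy):
    """Classify a stationary point of f(x, y) from its second partial derivatives."""
    det_H = fxx * fyy - fxy ** 2  # determinant of the 2x2 Hessian
    if det_H > 0 and fxx < 0:
        return "local maximum"
    if det_H > 0 and fxx > 0:
        return "local minimum"
    if det_H < 0:
        return "saddle point"
    return "test inconclusive (|H| = 0)"

# For f(x, y) = y - x - 2x^2 - 2xy - y^2: fxx = -4, fyy = -2, fxy = -2,
# so |H| = 8 - 4 = 4 > 0 with fxx < 0, confirming the maximum at (-1, 1.5).
print(classify_stationary_point(-4.0, -2.0, -2.0))
```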

The Steepest Ascent Method. Figure 14.9 Start at an initial point (x0, y0) and determine the direction of steepest ascent, that is, the gradient. Then search along the direction of the gradient, h0, until the maximum along that line is found. The process is then repeated from the new point.

The problem has two parts: determining the "best direction" and determining the "best value" along that search direction. The steepest ascent method uses the gradient as its choice of the "best" direction. To transform a function of x and y into a function of h along the gradient direction, set x = x0 + (∂f/∂x) h and y = y0 + (∂f/∂y) h, where h is the distance along the h axis and the partial derivatives are evaluated at (x0, y0).
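
A minimal sketch of the resulting steepest-ascent loop, reusing the example function from the random-search slide; the starting point, iteration count, and step-length bracket are assumptions:

```python
import numpy as np
from scipy.optimize import minimize_scalar

def f(x, y):
    """Example function from the random-search slide."""
    return y - x - 2 * x ** 2 - 2 * x * y - y ** 2

def grad_f(x, y):
    """Analytical partial derivatives of the example function."""
    return np.array([-1 - 4 * x - 2 * y, 1 - 2 * x - 2 * y])

def steepest_ascent(x0, y0, n_iter=10):
    """At each iteration, line-search along the gradient direction for the best h."""
    x, y = x0, y0
    for _ in range(n_iter):
        fx, fy = grad_f(x, y)
        # g(h) = f(x + fx*h, y + fy*h): a 1-D function of the step length h.
        g = lambda h: f(x + fx * h, y + fy * h)
        # Maximize g(h) by minimizing -g(h); the bracket (0, 2) is an assumption
        # that comfortably contains the optimal step for this example.
        h_best = minimize_scalar(lambda h: -g(h), bounds=(0.0, 2.0),
                                 method="bounded").x
        x, y = x + fx * h_best, y + fy * h_best
    return x, y, f(x, y)

print(steepest_ascent(0.0, 0.0))  # converges toward (-1, 1.5) with f = 1.25
```

With an exact line search, each new gradient is orthogonal to the previous search direction, which produces the characteristic zig-zag path toward the optimum.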

If x0 = 1 and y0 = 2. Figure 14.10