Part 4 - Chapter 13.

Part 4: Optimization. Root finding and optimization are related: both involve guessing and searching for a point on a function. The fundamental difference is that root finding searches for the zeros of a function or functions, whereas optimization searches for the minimum or the maximum of a function of several variables.

Figure PT4.1

Mathematical Background. An optimization or mathematical programming problem can generally be stated as: Find x, which minimizes or maximizes f(x), subject to

di(x) ≤ ai,  i = 1, 2, ..., m
ei(x) = bi,  i = 1, 2, ..., p     (*)

where x is an n-dimensional design vector, f(x) is the objective function, the di(x) are inequality constraints, the ei(x) are equality constraints, and the ai and bi are constants.
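
As a concrete illustration (a made-up instance, not from the original slides), a two-variable problem in this form might read:

\[
\begin{aligned}
\text{minimize } & f(\mathbf{x}) = (x_1 - 1)^2 + (x_2 - 2)^2 \\
\text{subject to } & d_1(\mathbf{x}) = x_1 + x_2 \le 3, \\
& e_1(\mathbf{x}) = x_1 - x_2 = 0,
\end{aligned}
\]

so that n = 2, with one inequality constraint (m = 1, a1 = 3) and one equality constraint (p = 1, b1 = 0).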

Optimization problems can be classified on the basis of the form of f(x): If f(x) and the constraints are linear, we have linear programming. If f(x) is quadratic and the constraints are linear, we have quadratic programming. If f(x) is not linear or quadratic and/or the constraints are nonlinear, we have nonlinear programming. When the constraints (*) are included, we have a constrained optimization problem; otherwise, it is an unconstrained optimization problem.

Figure PT4.5

One-Dimensional Unconstrained Optimization (Chapter 13). In multimodal functions, both local and global optima can occur. In almost all cases, we are interested in finding the absolute highest or lowest value of a function, that is, the global optimum. Figure 13.1

How do we distinguish a global optimum from a local one? By graphing the function to gain insight into its behavior. By using several randomly generated starting guesses and picking the largest (or smallest) of the optima found as the global one, as in the sketch below. By perturbing the starting point to see whether the routine returns a better point or the same local optimum.
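
A minimal multistart sketch in Python (illustrative, not from the original slides; local_maximize stands for any local routine, such as golden-section search or Newton's method):

import random

def multistart_maximize(f, xl, xu, local_maximize, n_starts=20, seed=0):
    """Run a local optimizer from random starting guesses in [xl, xu]
    and keep the best result; a simple heuristic for multimodal functions."""
    rng = random.Random(seed)
    best_x, best_f = None, float("-inf")
    for _ in range(n_starts):
        x0 = rng.uniform(xl, xu)      # randomly generated starting guess
        x = local_maximize(f, x0)     # placeholder for a local optimizer
        fx = f(x)
        if fx > best_f:               # keep the largest optimum seen so far
            best_x, best_f = x, fx
    return best_x, best_f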

Golden-Section Search. A unimodal function has a single maximum or a single minimum in a given interval. For a unimodal function: First pick two points [xl, xu] that bracket the extremum. Pick a third point within this interval to determine whether a maximum has occurred. Then pick a fourth point to determine whether the maximum lies within the first three or the last three points. The key to making this approach efficient is choosing the intermediate points wisely, so that function evaluations are minimized by reusing old values instead of recomputing them.

Figure 13.2

The first condition specifies that the sum of the two sub-lengths l1 and l2 must equal the original interval length: l0 = l1 + l2. The second says that the ratios of the lengths must be equal: l1/l0 = l2/l1. Solving these two conditions together yields the golden ratio, R = l2/l1 = (sqrt(5) - 1)/2 ≈ 0.61803.
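
Written out, with R = l2/l1, the standard derivation of this value is:

\[
\ell_0 = \ell_1 + \ell_2, \qquad \frac{\ell_1}{\ell_0} = \frac{\ell_2}{\ell_1}
\;\Longrightarrow\; \frac{1}{1+R} = R
\;\Longrightarrow\; R^2 + R - 1 = 0
\;\Longrightarrow\; R = \frac{\sqrt{5}-1}{2} \approx 0.61803.
\]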

Figure 13.4

The method starts with two initial guesses, xl and xu, that bracket one local extremum of f(x). Next, two interior points x1 and x2 are chosen according to the golden ratio:

d = R (xu - xl), with R = (sqrt(5) - 1)/2
x1 = xl + d
x2 = xu - d

The function is then evaluated at these two interior points.

Two results can occur: If f(x1) > f(x2), then the domain of x to the left of x2, from xl to x2, can be eliminated because it does not contain the maximum. Then x2 becomes the new xl for the next round. If f(x2) > f(x1), then the domain of x to the right of x1, from x1 to xu, can be eliminated. In this case, x1 becomes the new xu for the next round. In either case, only one new interior point needs to be determined, as before.

The real benefit of using the golden ratio is that, because the original x1 and x2 were chosen with it, we do not need to recalculate all the function values for the next iteration: one of the old interior points and its function value carry over unchanged, as the sketch below illustrates.
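
A minimal Python sketch of golden-section search for a maximum (an illustrative implementation of the scheme above, not code from the original slides):

import math

def golden_section_max(f, xl, xu, tol=1e-6, max_iter=200):
    """Golden-section search for the maximum of a unimodal f on [xl, xu]."""
    R = (math.sqrt(5.0) - 1.0) / 2.0   # golden ratio, ~0.61803
    d = R * (xu - xl)
    x1, x2 = xl + d, xu - d            # interior points (x2 < x1)
    f1, f2 = f(x1), f(x2)
    for _ in range(max_iter):
        if f1 > f2:                    # maximum lies in [x2, xu]
            xl = x2                    # eliminate [xl, x2]
            x2, f2 = x1, f1            # old x1 carries over as the new x2
            x1 = xl + R * (xu - xl)    # only one new function evaluation
            f1 = f(x1)
        else:                          # maximum lies in [xl, x1]
            xu = x1                    # eliminate [x1, xu]
            x1, f1 = x2, f2            # old x2 carries over as the new x1
            x2 = xu - R * (xu - xl)
            f2 = f(x2)
        if xu - xl < tol:
            break
    return (xl + xu) / 2.0

For example, golden_section_max(lambda x: 2 * math.sin(x) - x**2 / 10, 0.0, 4.0) returns approximately x = 1.4276, the maximum of the classic textbook test function f(x) = 2 sin x - x^2/10 on [0, 4].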

Newton's Method. An approach similar to the Newton-Raphson method can be used to find an optimum of f(x) by defining a new function, g(x) = f'(x). Because the same optimal value x* satisfies f'(x*) = g(x*) = 0, we can apply Newton-Raphson to g(x) and use the following iteration to find the extremum of f(x):

x_{i+1} = x_i - f'(x_i) / f''(x_i)
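
A small Python sketch of this iteration (illustrative; the derivatives are supplied analytically here):

def newton_optimize(fprime, fsecond, x0, tol=1e-8, max_iter=50):
    """Find an extremum of f by applying Newton-Raphson to g(x) = f'(x)."""
    x = x0
    for _ in range(max_iter):
        step = fprime(x) / fsecond(x)  # x_{i+1} = x_i - f'(x_i)/f''(x_i)
        x -= step
        if abs(step) < tol:            # stop when the update is negligible
            break
    return x

For the same test function as above, newton_optimize(lambda x: 2 * math.cos(x) - x / 5, lambda x: -2 * math.sin(x) - 0.2, 2.5) converges to roughly x = 1.4276. Note that the iteration converges to whichever stationary point is nearest the starting guess; the sign of f''(x) at the result distinguishes a maximum (f'' < 0) from a minimum (f'' > 0).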