ENCI 303 Lecture PS-19 Optimization 2


Overview of lecture Linear optimization problems. Unconstrained optimization. Constrained optimization. Looking ahead… Next Monday: Optimization case study. Next Wednesday and Thursday: Network analysis.

Linear optimization problems (1) Linear optimization problem: Objective function and constraints are all linear in the design variables. Find x1,…,xn to maximize c1x1 + ⋯ + cnxn subject to ai1x1 + ⋯ + ainxn ≤ bi (inequality constraints) and aj1x1 + ⋯ + ajnxn = bj (equality constraints). Example: The textile example and transportation example are linear optimization problems.

Linear optimization problems (2) Structure of linear optimization problems: For a linear optimization problem with two design variables, the feasible region is a polygon and a global optimum occurs at a corner or along an edge of the polygon. If the global optimum occurs only at a corner, it is unique; if it occurs along an edge, then every point on that edge is also a global optimum.

Linear optimization problems (3) Example: Find x1 and x2 to maximize 2x1 + x2 subject to 2x1 − x2 ≤ 8, x1 + 2x2 ≤ 14, x1 + x2 ≥ 4, x1, x2 ≥ 0. Using the Excel Solver, the solution is x1 = 6 and x2 = 4.
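
The same problem can be reproduced outside Excel; this is a minimal sketch (not part of the original lecture) using SciPy's linprog, with the inequality signs as reconstructed above.

```python
# Minimal check of the example with SciPy's linprog (the lecture itself uses
# the Excel Solver); the inequality directions are as reconstructed above.
from scipy.optimize import linprog

c = [-2.0, -1.0]          # linprog minimizes, so negate "maximize 2*x1 + x2"
A_ub = [[ 2.0, -1.0],     # 2*x1 -   x2 <= 8
        [ 1.0,  2.0],     #   x1 + 2*x2 <= 14
        [-1.0, -1.0]]     #   x1 +   x2 >= 4, rewritten as -x1 - x2 <= -4
b_ub = [8.0, 14.0, -4.0]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)])
print(res.x)              # expected: [6. 4.]
```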

Linear optimization problems (4) (…continued) Graph of the feasible region for the example, showing the global maximum at a corner. (figure)

Linear optimization problems (5) Exercise: In a factory producing electronic components, x1 is the number of batches of resistors and x2 the number of batches of capacitors produced per week. Each batch of resistors makes 7 units of profit and each batch of capacitors makes 13 units of profit. Both resistors and capacitors require a two-stage process to produce. In any given week, at most 18 units of time may be allocated to processes in stage 1, and at most 54 units of time to processes in stage 2. A batch of resistors requires 1 unit of time in stage 1 and 5 units of time in stage 2. A batch of capacitors requires 3 units of time in stage 1 and 6 units of time in stage 2. How many units of resistors and capacitors should be produced each week so as to maximize profit?

Linear optimization problems (6) Exercise: (…continued) What are the design variables? What is the objective function? What are the constraints?

Linear optimization problems (7) Exercise: (…continued) Show the feasible region on a graph and use it to find the optimum solution. Solution is x1 = 6, x2 = 4.
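
As a quick numerical companion to the graphical approach (an addition, not in the original slides), the profit can simply be evaluated at each corner of the feasible region, since for a two-variable linear problem the optimum lies at a corner; the corner coordinates below follow from the constraints x1 + 3x2 ≤ 18, 5x1 + 6x2 ≤ 54 and x1, x2 ≥ 0.

```python
# Evaluate the weekly profit 7*x1 + 13*x2 at each corner of the feasible region.
corners = [(0, 0), (10.8, 0), (6, 4), (0, 6)]   # corners of the feasible region
for x1, x2 in corners:
    print((x1, x2), 7 * x1 + 13 * x2)
# The largest profit, 94, occurs at x1 = 6, x2 = 4.
```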

Linear optimization problems (8) With n design variables, the feasible region is a polytope in n dimensions, whose boundaries are (n−1)-dimensional hyperplanes. If a unique global optimum exists, it will occur at one of the corners of the polytope. An algorithm for finding a global optimum for a linear optimization problem is the simplex method. It works by moving from one corner of the feasible region polytope to another along the boundaries, to locate one that optimizes the objective function. If one or more design variables are integer-valued, the branch and bound algorithm is used to solve a sequence of linear optimization problems using the simplex method, with additional constraints imposed at each stage to force integer design variables to take integer values.
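
Since the batch counts in the exercise are naturally integer-valued, that problem can illustrate the idea; the sketch below (an addition, not lecture material) uses SciPy's mixed-integer interface, whose HiGHS backend performs a branch-and-bound-type search on the integer variables internally.

```python
# Hedged sketch: the stage-time exercise with integer batch counts, solved via
# SciPy's MILP interface (the solver branches on the integer variables, in the
# spirit of the branch and bound algorithm described above).
import numpy as np
from scipy.optimize import milp, LinearConstraint, Bounds

c = np.array([-7.0, -13.0])                    # maximize 7*x1 + 13*x2
time_limits = LinearConstraint([[1.0, 3.0],    # stage 1: x1 + 3*x2 <= 18
                                [5.0, 6.0]],   # stage 2: 5*x1 + 6*x2 <= 54
                               ub=[18.0, 54.0])
res = milp(c, constraints=time_limits,
           integrality=np.ones(2),             # both variables must be integer
           bounds=Bounds(lb=0.0))
print(res.x)                                   # expected: [6. 4.]
```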

Unconstrained optimization (1) Unconstrained optimization problem: Objective function is, in general, a nonlinear function of the design variables, and there are no constraints on the design variables. Find x1,…,xn to maximize f(x1,…,xn). Example: Least squares estimation in linear regression is an example of unconstrained nonlinear optimization.
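
As a small added illustration of the least-squares remark (with made-up data), the regression coefficients can be found by minimizing the sum of squared residuals directly:

```python
# Fit a straight line y = b0 + b1*x by unconstrained minimization of the sum
# of squared residuals (hypothetical data, for illustration only).
import numpy as np
from scipy.optimize import minimize

x_data = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y_data = np.array([1.1, 1.9, 3.2, 3.8, 5.1])

def sse(beta):
    return np.sum((y_data - (beta[0] + beta[1] * x_data)) ** 2)

res = minimize(sse, x0=[0.0, 0.0])   # default method is a quasi-Newton (BFGS)
print(res.x)                         # estimated intercept and slope
```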

Unconstrained optimization (2) Example: (figure) The displacements, dx and dy, of a nonlinear spring system with two springs, under an applied load, can be obtained by minimizing the potential energy, where Fx and Fy are the forces in the x and y directions resulting from the applied load, k1 and k2 are the spring constants, and δ1 and δ2 are the extensions of the springs, which are related to the displacements through the spring geometry shown in the figure.

Unconstrained optimization (3) Example: (…continued) If k1 = 1, k2 = 2, Fx = 0 and Fy = 2, find dx and dy. This is an unconstrained nonlinear optimization problem: find dx and dy to minimize the potential energy above. Using the Excel Solver, the solution is dx = 0.46, dy = 1.35.

Unconstrained optimization (4) Notations and definitions: Let x = (x1,…,xn) be the vector of design variables. The gradient vector, ∇f, and Hessian, ∇²f, of f(x) are the column vector of first partial derivatives, ∇f = (∂f/∂x1, …, ∂f/∂xn)T, and the n×n symmetric matrix of second partial derivatives, whose (i, j) entry is ∂²f/∂xi∂xj.
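
A small added example (not from the lecture): the gradient and Hessian of a hypothetical two-variable function, computed symbolically to make the definitions concrete.

```python
# Gradient vector and Hessian of f(x1, x2) = x1**2 + 2*x2**2 + x1*x2,
# a hypothetical function chosen only to illustrate the definitions.
import sympy as sp

x1, x2 = sp.symbols('x1 x2')
f = x1**2 + 2*x2**2 + x1*x2
grad = sp.Matrix([sp.diff(f, v) for v in (x1, x2)])   # column vector of first derivatives
hess = sp.hessian(f, (x1, x2))                        # symmetric matrix of second derivatives
print(grad)   # [2*x1 + x2, x1 + 4*x2] as a column vector
print(hess)   # [[2, 1], [1, 4]]
```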

Unconstrained optimization (5) Result: Sufficient conditions for a point x* to be a local optimum of f(x) are ∇f(x*) = 0 and ∇²f(x*) positive definite (for a local minimum) or negative definite (for a local maximum).

Unconstrained optimization (6) Methods for solving unconstrained optimization problems are iterative in nature, i.e. they move from one point to another until they get to an optimum solution or close to one. All of the methods have four basic components: A starting point, x0. Search direction d = (d1,…,dn). Step size α > 0. Stopping rule.

Unconstrained optimization (7) In the first iteration, a search direction vector d0 and step size α0 are computed. The algorithm moves from the starting point x0 to a new point x1 according to x1 = x0 + α0 d0. The search direction and step size are chosen so that f(x1) < f(x0) for a minimization problem, or f(x1) > f(x0) for a maximization problem. f(x1) is computed and the stopping rule is checked to see whether to stop the algorithm.

Unconstrained optimization (8) The steps for iteration k are: Compute the search direction vector dk−1. Compute the step size αk−1. Compute the new point: xk = xk−1 + αk−1 dk−1. Compute f(xk). Check the stopping rule: if it is satisfied, stop and take xk as the solution; otherwise, do another iteration. The search direction and step size are chosen so that f(xk) < f(xk−1) for a minimization problem, or f(xk) > f(xk−1) for a maximization problem.
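
The skeleton below (an added sketch, not lecture code) shows how these components fit together for a minimization problem; the search-direction and step-size rules are left as plug-in functions, since the following slides fill them in.

```python
# Generic improving-search skeleton for minimization: supply a rule for the
# search direction and a rule for the step size.
import numpy as np

def minimize_iteratively(f, grad, x0, direction_rule, step_rule,
                         k_max=100, eps=1e-6):
    x = np.asarray(x0, dtype=float)
    for k in range(1, k_max + 1):
        d = direction_rule(x, grad)          # e.g. steepest descent: -grad(x)
        alpha = step_rule(f, x, d)           # e.g. a line search along d
        x_new = x + alpha * d
        # Stopping rule: small relative change in the objective function value.
        if abs(f(x_new) - f(x)) <= eps * max(abs(f(x)), 1.0):
            return x_new
        x = x_new
    return x                                 # stop after k_max iterations
```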

Unconstrained optimization (9) Choice of starting point x0 is important because the methods converge to a local optimum near the starting point, which need not be the global optimum. Even though the design variables are unconstrained, it is usually possible in practice to specify upper and lower bounds for the variables. A grid can then be defined between those bounds and a grid search can be performed to obtain a starting point, i.e. compute the value of the objective function at each grid point and choose the point with the smallest (for a min problem) or largest (for a max problem) objective function value as the starting point.
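
A possible grid-search helper for choosing the starting point (an added sketch; the bounds and grid resolution are assumptions supplied by the user):

```python
# Grid search between assumed lower and upper bounds to pick a starting point
# for a minimization problem: evaluate f on the grid and keep the best point.
import numpy as np

def grid_search_start(f, lower, upper, points_per_axis=11):
    axes = [np.linspace(lo, hi, points_per_axis) for lo, hi in zip(lower, upper)]
    grid = np.meshgrid(*axes, indexing='ij')
    candidates = np.stack([g.ravel() for g in grid], axis=1)  # one row per grid point
    values = np.array([f(x) for x in candidates])
    return candidates[np.argmin(values)]   # use argmax instead for a max problem
```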

Unconstrained optimization (10) Some common stopping rules: Upper bound on computation time: stop if t > tmax. Upper bound on number of iterations: stop if k > kmax. Lower bound on relative change in objective function values: stop if |f(xk) − f(xk−1)| / |f(xk−1)| < ε for some small positive ε. Lower bound on the norm of the gradient vector: stop if ‖∇f(xk)‖ < ε.

Unconstrained optimization (11) The methods differ in the way the search direction and step size are computed. We shall look at four methods: Steepest descent method. Conjugate gradient method. Newton method. Quasi-Newton methods. We shall describe these methods in the context of a minimization problem. In the Excel Solver, the conjugate gradient method and a quasi-Newton method are available.

Unconstrained optimization: Steepest descent method (1) At iteration k, must choose dk−1 and αk−1 to get xk = xk−1 + αk−1 dk−1, so that f(xk) < f(xk−1). By Taylor’s expansion, f(xk) = f(xk−1 + αk−1 dk−1) ≈ f(xk−1) + αk−1 ∇f(xk−1)T dk−1, and so to achieve f(xk) < f(xk−1), must have f(xk) − f(xk−1) ≈ αk−1 ∇f(xk−1)T dk−1 < 0. Since the step size must be positive, must have ∇f(xk−1)T dk−1 < 0.

Unconstrained optimization: Steepest descent method (2) (…continued) Choose dk−1 = −∇f(xk−1) as the search direction. This is called the steepest descent direction. (figure) Example: For the function f given in the figure, find the steepest descent direction at xk−1 = (1, 2).

Unconstrained optimization: Steepest descent method (3) After finding the search direction dk−1, the step size can be found by searching along dk−1 for an α that minimizes f(xk−1 + α dk−1). This is called line search and is itself an optimization problem in a single variable: Find α to minimize f(xk−1 + α dk−1) subject to α > 0.

Unconstrained optimization: Steepest descent method (4) Example: If xk−1 = (2, 1) and dk−1 = (1, 0), find the step size αk−1. Here xk−1 + α dk−1 = (2 + α, 1), and so we find α to minimize f(2 + α, 1) subject to α > 0. Using the Excel Solver, αk−1 = 1. Illustration of the steepest descent method. (figure)
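
Because the objective in this example appears only in the slide figure, the sketch below uses a hypothetical test function to show one steepest-descent step with the line search done numerically rather than in the Solver.

```python
# One steepest-descent iteration with a numerical line search, on a
# hypothetical test function f(x1, x2) = x1**2 + 2*x2**2.
import numpy as np
from scipy.optimize import minimize_scalar

def f(x):
    return x[0] ** 2 + 2.0 * x[1] ** 2

def grad_f(x):
    return np.array([2.0 * x[0], 4.0 * x[1]])

x_prev = np.array([2.0, 1.0])
d = -grad_f(x_prev)                                   # steepest descent direction
line = lambda alpha: f(x_prev + alpha * d)            # objective along the search direction
alpha = minimize_scalar(line, bounds=(0.0, 1.0), method='bounded').x
x_new = x_prev + alpha * d
print(d, alpha, x_new)                                # alpha is about 1/3 for this function
```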

Unconstrained optimization: Conjugate gradient method The initial search direction is the steepest descent direction: d0 = −∇f(x0). For k ≥ 2, the search direction is dk−1 = −∇f(xk−1) + βk−1 dk−2, with βk−1 = ∇f(xk−1)T [∇f(xk−1) − ∇f(xk−2)] / ‖∇f(xk−2)‖² (Polak-Ribière conjugate direction) or βk−1 = ‖∇f(xk−1)‖² / ‖∇f(xk−2)‖² (Fletcher-Reeves conjugate direction). Step size: Use line search. Illustration of the conjugate gradient method. (figure)
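
For comparison (an addition to the slides), SciPy's minimize with method='CG' implements a nonlinear conjugate gradient method of this family; here it is applied to the same hypothetical test function used in the steepest-descent sketch.

```python
# Nonlinear conjugate gradient (SciPy's 'CG' method, a Polak-Ribiere variant)
# on the hypothetical test function f(x1, x2) = x1**2 + 2*x2**2.
import numpy as np
from scipy.optimize import minimize

f = lambda x: x[0] ** 2 + 2.0 * x[1] ** 2
grad_f = lambda x: np.array([2.0 * x[0], 4.0 * x[1]])

res = minimize(f, x0=[2.0, 1.0], jac=grad_f, method='CG')
print(res.x)   # expected: close to [0, 0]
```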

Unconstrained optimization: Newton method The quadratic approximation of f(x) about xk−1 using Taylor’s expansion is f(x) ≈ f(xk−1) + ∇f(xk−1)T (x − xk−1) + ½ (x − xk−1)T ∇²f(xk−1) (x − xk−1). The minimum of this approximation is found by setting its gradient to zero, giving x = xk−1 − [∇²f(xk−1)]⁻¹ ∇f(xk−1). Putting x = xk, we have xk = xk−1 − [∇²f(xk−1)]⁻¹ ∇f(xk−1), and so the search direction is dk−1 = −[∇²f(xk−1)]⁻¹ ∇f(xk−1). Step size = 1.
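
An added sketch of one Newton step on the same hypothetical quadratic test function; because the function is quadratic, a single step with unit step size reaches the minimum exactly.

```python
# One Newton step: d = -inverse(Hessian) @ gradient, with step size 1,
# for the hypothetical test function f(x1, x2) = x1**2 + 2*x2**2.
import numpy as np

def grad_f(x):
    return np.array([2.0 * x[0], 4.0 * x[1]])    # gradient of the test function

def hess_f(x):
    return np.array([[2.0, 0.0], [0.0, 4.0]])    # Hessian (constant for a quadratic)

x_prev = np.array([2.0, 1.0])
d = -np.linalg.solve(hess_f(x_prev), grad_f(x_prev))   # Newton search direction
x_new = x_prev + 1.0 * d                                # step size 1
print(x_new)   # expected: [0. 0.], the exact minimum of this quadratic
```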

Unconstrained optimization: Quasi-Newton methods Search direction: Replace [∇²f(xk−1)]⁻¹ in the search direction for the Newton method by a symmetric, positive definite matrix Hk−1, i.e. dk−1 = −Hk−1 ∇f(xk−1). Hk−1 must satisfy the quasi-Newton condition, so that it serves as an approximation to [∇²f(xk−1)]⁻¹. Step size: Use line search.
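
Quasi-Newton methods are what general-purpose solvers most often use by default; an added sketch with SciPy's BFGS implementation (one member of this family) on the same hypothetical test function:

```python
# BFGS, a quasi-Newton method: the inverse Hessian is approximated and updated
# from gradient differences, so no Hessian needs to be supplied.
import numpy as np
from scipy.optimize import minimize

f = lambda x: x[0] ** 2 + 2.0 * x[1] ** 2
grad_f = lambda x: np.array([2.0 * x[0], 4.0 * x[1]])

res = minimize(f, x0=[2.0, 1.0], jac=grad_f, method='BFGS')
print(res.x)   # expected: close to [0, 0]
```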

Constrained optimization (1) Constrained optimization problem: Objective function is, in general, a nonlinear function of the design variables. Constraints may also involve nonlinear functions of the design variables. Find x1,…,xn to maximize f(x1,…,xn) subject to gi(x1,…,xn) ≤ 0 (inequality constraints) and hj(x1,…,xn) = 0 (equality constraints).

Constrained optimization (2) The Excel Solver uses the generalized reduced gradient method for constrained optimization. The method has the same basic components (i.e. starting point, search direction, step size and stopping rule) as any unconstrained optimization method, but differs in the details, which enable it to handle the constraints.
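
The Solver's generalized reduced gradient algorithm is not available in SciPy, but the same standard-form problem can be sketched with a different constrained solver (SLSQP) to show how the pieces map onto code; the objective and constraint functions below are hypothetical placeholders, and the problem is written as a minimization.

```python
# Hedged sketch of a constrained nonlinear problem in the standard form above,
# solved with SLSQP (a different algorithm from the Solver's GRG method).
# The objective and constraint functions are hypothetical placeholders.
import numpy as np
from scipy.optimize import minimize

f = lambda x: (x[0] - 1.0) ** 2 + (x[1] - 2.0) ** 2      # objective to minimize
g = lambda x: x[0] + x[1] - 2.0                          # inequality: g(x) <= 0
h = lambda x: x[0] - 0.5 * x[1]                          # equality:   h(x) = 0

constraints = [{'type': 'ineq', 'fun': lambda x: -g(x)},  # SLSQP expects >= 0, so negate g
               {'type': 'eq',   'fun': h}]
res = minimize(f, x0=[0.0, 0.0], constraints=constraints, method='SLSQP')
print(res.x)   # expected: approximately [0.67, 1.33]
```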

Reading assignment Next lecture: Sec. 11.1, 11.5