1 ENCI 303 Lecture PS-19 Optimization 2

2 Overview of lecture
Linear optimization problems. Unconstrained optimization. Constrained optimization. Looking ahead… Next Monday: Optimization case study. Next Wednesday and Thursday: Network analysis.

3 Linear optimization problems (1)
Linear optimization problem: Objective function and constraints are all linear in the design variables. Find x1,…,xn to maximize c1x1 + ⋯ + cnxn subject to ai1x1 + ⋯ + ainxn ≤ bi (inequality constraints) and aj1x1 + ⋯ + ajnxn = bj (equality constraints). Example: The textile example and transportation example are linear optimization problems.

4 Linear optimization problems (2)
Structure of linear optimization problems: For a linear optimization problem with two design variables, the feasible region is a polygon, and a global optimum occurs at a corner or along an edge of the polygon. If the global optimum occurs at a single corner, it is unique; if it occurs along an edge, every point on that edge is also a global optimum.

5 Linear optimization problems (3)
Example: Find x1 and x2 to maximize 2x1 + x2 subject to 2x1 − x2 ≤ 8, x1 + 2x2 ≤ 14, x1 + x2 ≥ 4, x1, x2 ≥ 0. Using the Excel Solver, the solution is x1 = 6 and x2 = 4.
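For readers who want to check this outside Excel, here is a minimal sketch of the same problem in Python using scipy.optimize.linprog; linprog minimizes, so the objective is negated and the ≥ constraint is rewritten as a ≤ constraint.

from scipy.optimize import linprog

c = [-2, -1]              # linprog minimizes, so negate the objective 2*x1 + x2
A_ub = [[2, -1],          # 2*x1 - x2 <= 8
        [1, 2],           # x1 + 2*x2 <= 14
        [-1, -1]]         # x1 + x2 >= 4, rewritten as -x1 - x2 <= -4
b_ub = [8, 14, -4]
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)])
print(res.x)              # approximately [6, 4]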

6 Linear optimization problems (4)
(…continued) Graph of the feasible region, with the global maximum at the corner x1 = 6, x2 = 4. (figure)

7 Linear optimization problems (5)
Exercise: In a factory producing electronic components, x1 is the number of batches of resistors and x2 the number of batches of capacitors produced per week. Each batch of resistors makes 7 units of profit and each batch of capacitors makes 13 units of profit. Both resistors and capacitors require a two-stage process to produce. In any given week, at most 18 units of time may be allocated to processes in stage 1, and at most 54 units of time to processes in stage 2. A batch of resistors requires 1 unit of time in stage 1 and 5 units of time in stage 2. A batch of capacitors requires 3 units of time in stage 1 and 6 units of time in stage 2. How many batches of resistors and capacitors should be produced each week so as to maximize profit?

8 Linear optimization problems (6)
Exercise: (…continued) What are the design variables? What is the objective function? What are the constraints?

9 Linear optimization problems (7)
Exercise: (…continued) Show the feasible region on a graph and use it to find the optimum solution. The solution is x1 = 6 batches of resistors and x2 = 4 batches of capacitors per week.
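A quick check of this exercise in Python, again with scipy.optimize.linprog (a sketch added for comparison, not part of the original slides):

from scipy.optimize import linprog

c = [-7, -13]             # negate the profit 7*x1 + 13*x2 because linprog minimizes
A_ub = [[1, 3],           # stage 1 time: x1 + 3*x2 <= 18
        [5, 6]]           # stage 2 time: 5*x1 + 6*x2 <= 54
b_ub = [18, 54]
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)])
print(res.x, -res.fun)    # approximately [6, 4], with profit 94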

10 Linear optimization problems (8)
With n design variables, the feasible region is a polytope in n dimensions, whose boundaries are (n−1)-dimensional hyperplanes. If a unique global optimum exists, it will occur at one of the corners of the polytope. An algorithm for finding a global optimum for a linear optimization problem is the simplex method. It works by moving from one corner of the feasible region polytope to another along the boundaries, to locate one that optimizes the objective function. If one or more design variables are integer-valued, the branch and bound algorithm is used to solve a sequence of linear optimization problems using the simplex method, with additional constraints imposed at each stage to force integer design variables to take integer values.
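As an illustration of the integer-constrained case, here is a sketch that re-solves the factory exercise with both variables forced to be integers, using scipy.optimize.milp (SciPy 1.9+), whose underlying HiGHS solver applies branch-and-bound style techniques; the problem data are taken from the exercise above.

import numpy as np
from scipy.optimize import milp, LinearConstraint, Bounds

c = np.array([-7, -13])                      # milp minimizes, so negate the profit
constraints = LinearConstraint([[1, 3], [5, 6]], ub=[18, 54])
res = milp(c, constraints=constraints,
           integrality=np.ones(2),           # 1 = each variable must be an integer
           bounds=Bounds(0, np.inf))
print(res.x)                                 # here the LP optimum [6, 4] is already integer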

11 Unconstrained optimization (1)
Unconstrained optimization problem: Objective function is, in general, a nonlinear function of the design variables, and there are no constraints on the design variables. Find x1,…,xn to maximize f(x1,…,xn). Example: Least squares estimation in linear regression is an example of unconstrained nonlinear optimization.

12 Unconstrained optimization (2)
Example: (figure) The displacements, dx and dy, of a nonlinear spring system with two springs under an applied load can be obtained by minimizing the potential energy (expression given in the figure), where Fx and Fy are the forces in the x and y directions resulting from the applied load, k1 and k2 are the spring constants, and Δ1 and Δ2 are the extensions of the springs, which are related to the displacements by the geometric relations given in the figure.

13 Unconstrained optimization (3)
Example: (…continued) If k1 = 1, k2 = 2, Fx = 0 and Fy = 2, find dx and dy. This is an unconstrained nonlinear optimization problem: Find dx and dy to minimize the potential energy PE(dx, dy). Using the Excel Solver, the solution is dx = 0.46, dy = 1.35.

14 Unconstrained optimization (4)
Notations and definitions: Let x = (x1,…,xn) be the vector of design variables. The gradient vector, ∇f, and Hessian, ∇²f, of f(x) are the column vector and n×n symmetric matrix defined by ∇f = (∂f/∂x1, …, ∂f/∂xn)T and [∇²f]ij = ∂²f/(∂xi ∂xj).

15 Unconstrained optimization (5)
Result: The sufficient conditions for a point x* to be a local optimum of f(x) are ∇f(x*) = 0 and ∇²f(x*) positive definite (for a local minimum) or negative definite (for a local maximum).
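A small numerical sketch of these conditions for a local minimum, using an illustrative function f(x) = x1² + 2x2² (the function is an assumption for the example, not from the slides): the gradient should vanish and the Hessian should be positive definite at x*.

import numpy as np

def grad_f(x):                 # gradient of f(x) = x1**2 + 2*x2**2
    return np.array([2 * x[0], 4 * x[1]])

def hess_f(x):                 # Hessian of the same function
    return np.array([[2.0, 0.0], [0.0, 4.0]])

x_star = np.array([0.0, 0.0])
print(np.allclose(grad_f(x_star), 0))                  # first-order condition: gradient is zero
print(np.all(np.linalg.eigvalsh(hess_f(x_star)) > 0))  # positive definite Hessian -> local minimum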

16 Unconstrained optimization (6)
Methods for solving unconstrained optimization problems are iterative in nature, i.e. they move from one point to another until they reach an optimum solution or get close to one. All of the methods have four basic components: A starting point x0. A search direction d = (d1,…,dn). A step size α > 0. A stopping rule.

17 Unconstrained optimization (7)
In the first iteration, a search direction vector d0 and step size α0 are computed. The algorithm moves from the starting point x0 to a new point x1 according to x1 = x0 + α0 d0. The search direction and step size are chosen so that f(x1) < f(x0) for a minimization problem and f(x1) > f(x0) for a maximization problem. f(x1) is computed and the stopping rule is checked to see whether to stop the algorithm.

18 Unconstrained optimization (8)
The steps for iteration k are: Compute the search direction vector dk−1. Compute the step size αk−1. Compute the new point xk = xk−1 + αk−1 dk−1. Compute f(xk). Check the stopping rule: if it is satisfied, stop and take xk as the solution; otherwise, do another iteration. The search direction and step size are chosen so that f(xk) < f(xk−1) for a minimization problem and f(xk) > f(xk−1) for a maximization problem.
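The generic iteration can be sketched in Python as follows; compute_direction and compute_step_size are placeholder names for whichever method (steepest descent, conjugate gradient, Newton, quasi-Newton) supplies the direction and step size, and the stopping rule shown is the relative-change rule listed on a later slide.

import numpy as np

def minimize_iteratively(f, compute_direction, compute_step_size,
                         x0, k_max=100, eps=1e-6):
    x = np.asarray(x0, dtype=float)
    for k in range(1, k_max + 1):
        d = compute_direction(x)                    # search direction d(k-1)
        alpha = compute_step_size(f, x, d)          # step size alpha(k-1) > 0
        x_new = x + alpha * d                       # new point xk
        if abs(f(x_new) - f(x)) <= eps * max(abs(f(x)), 1.0):   # relative-change stopping rule
            return x_new
        x = x_new
    return x                                        # give up after k_max iterations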

19 Unconstrained optimization (9)
Choice of starting point x0 is important because these methods typically find only a local optimum near the starting point. Even though the design variables are unconstrained, it is usually possible in practice to specify upper and lower bounds for the variables. A grid can then be defined between those bounds and a grid search performed to obtain a starting point, i.e. compute the value of the objective function at each grid point and choose the point with the smallest (for a minimization problem) or largest (for a maximization problem) objective function value as the starting point.
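A sketch of such a grid search for a minimization problem, assuming lower and upper bounds have been specified for each design variable (the function and bounds in the usage line are illustrative):

import numpy as np
from itertools import product

def grid_search_start(f, lower, upper, points_per_dim=11):
    grids = [np.linspace(lo, hi, points_per_dim) for lo, hi in zip(lower, upper)]
    points = (np.array(p) for p in product(*grids))
    return min(points, key=f)      # use max(...) instead for a maximization problem

x0 = grid_search_start(lambda x: (x[0] - 1)**2 + (x[1] + 2)**2,
                       lower=[-5, -5], upper=[5, 5])
print(x0)                          # grid point closest to the true minimum at (1, -2)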

20 Unconstrained optimization (10)
Some common stopping rules: Upper bound on computation time: stop if t > tmax. Upper bound on number of iterations: stop if k > kmax. Lower bound on the relative change in objective function values: stop if |f(xk) − f(xk−1)| / |f(xk−1)| < ε for some small positive ε. Lower bound on the norm of the gradient vector: stop if ‖∇f(xk)‖ < ε.

21 Unconstrained optimization (11)
The methods differ in the way the search direction and step size are computed. We shall look at four methods: Steepest descent method. Conjugate gradient method. Newton method. Quasi-Newton methods. We shall describe these methods in the context of a minimization problem. In the Excel Solver, the conjugate gradient method and a quasi-Newton method are available.

22 Unconstrained optimization: Steepest descent method (1)
At iteration k, we must choose dk−1 and αk−1 to get xk = xk−1 + αk−1 dk−1 so that f(xk) < f(xk−1). By Taylor's expansion, f(xk) = f(xk−1 + αk−1 dk−1) ≈ f(xk−1) + αk−1 ∇f(xk−1)T dk−1, and so to achieve f(xk) < f(xk−1) we must have f(xk) − f(xk−1) < 0, i.e. αk−1 ∇f(xk−1)T dk−1 < 0. Since the step size must be positive, we must have ∇f(xk−1)T dk−1 < 0.

23 Unconstrained optimization: Steepest descent method (2)
(…continued) Choose dk−1 = −∇f(xk−1) as the search direction. This is called the steepest descent direction. (figure) Example: For the function given in the figure, find the steepest descent direction at xk−1 = (1, 2).

24 Unconstrained optimization: Steepest descent method (3)
After finding the search direction dk−1, the step size can be found by searching along dk−1 for an α that minimizes f(xk−1 + α dk−1). This is called a line search and is itself an optimization problem in a single variable: Find α to minimize f(xk−1 + α dk−1) subject to α > 0.

25 Unconstrained optimization: Steepest descent method (4)
Example: If xk−1 = (2, 1) and dk−1 = (1, 0), find the step size αk−1. For the objective function given in the figure, this becomes: Find α to minimize f(xk−1 + α dk−1) subject to α > 0. Using the Excel Solver, αk−1 = 1. Illustration of the steepest descent method. (figure)
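A line search can be sketched in Python with scipy.optimize.minimize_scalar; the quadratic in the usage line is an illustrative assumption (the slide's actual function is in its figure), chosen so that the optimal step size from x = (2, 1) along d = (1, 0) also works out to 1.

import numpy as np
from scipy.optimize import minimize_scalar

def line_search(f, x, d, alpha_max=10.0):
    phi = lambda alpha: f(x + alpha * d)                     # objective along the search direction
    res = minimize_scalar(phi, bounds=(0.0, alpha_max), method='bounded')
    return res.x

alpha = line_search(lambda x: (x[0] - 3)**2 + x[1]**2,       # illustrative quadratic
                    np.array([2.0, 1.0]), np.array([1.0, 0.0]))
print(alpha)                                                 # approximately 1.0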

26 Unconstrained optimization: Conjugate gradient method
The initial search direction is the steepest descent direction: d0 = −∇f(x0). For k ≥ 2, dk−1 = −∇f(xk−1) + βk−1 dk−2, where βk−1 = ∇f(xk−1)T(∇f(xk−1) − ∇f(xk−2)) / ‖∇f(xk−2)‖² (Polak-Ribière conjugate direction) or βk−1 = ‖∇f(xk−1)‖² / ‖∇f(xk−2)‖² (Fletcher-Reeves conjugate direction). Step size: use line search. Illustration of the conjugate gradient method. (figure)
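A sketch of the conjugate gradient iteration with the Fletcher-Reeves ratio (the Polak-Ribière variant differs only in how beta is computed, as noted in the comment); line_search here is the helper sketched above, so this block assumes that definition is in scope.

import numpy as np

def conjugate_gradient(f, grad_f, x0, k_max=100, eps=1e-6):
    x = np.asarray(x0, dtype=float)
    g = grad_f(x)
    d = -g                                       # d0: steepest descent direction
    for k in range(k_max):
        alpha = line_search(f, x, d)             # step size from a line search
        x = x + alpha * d
        g_new = grad_f(x)
        if np.linalg.norm(g_new) < eps:          # gradient-norm stopping rule
            break
        beta = (g_new @ g_new) / (g @ g)         # Fletcher-Reeves ratio
        # Polak-Ribiere alternative: beta = g_new @ (g_new - g) / (g @ g)
        d = -g_new + beta * d
        g = g_new
    return x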

27 Unconstrained optimization: Newton method
The quadratic approximation of f(x) from Taylor's expansion about xk−1 is f(x) ≈ f(xk−1) + ∇f(xk−1)T(x − xk−1) + ½(x − xk−1)T ∇²f(xk−1)(x − xk−1). The minimum of this approximation is found by setting its gradient to zero, giving x = xk−1 − [∇²f(xk−1)]⁻¹ ∇f(xk−1). Putting x = xk, we have xk = xk−1 − [∇²f(xk−1)]⁻¹ ∇f(xk−1), and so the search direction is dk−1 = −[∇²f(xk−1)]⁻¹ ∇f(xk−1). Step size = 1.
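One Newton step can be sketched as below; numerically, the Hessian system is solved rather than the inverse being formed explicitly.

import numpy as np

def newton_step(grad_f, hess_f, x):
    d = np.linalg.solve(hess_f(x), -grad_f(x))   # d = -[Hessian]^(-1) * gradient
    return x + d                                 # step size fixed at 1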

28 Unconstrained optimization: Quasi-Newton methods
Search direction: Replace [∇²f(xk−1)]⁻¹ in the Newton search direction by a symmetric, positive definite matrix Hk−1, i.e. dk−1 = −Hk−1 ∇f(xk−1). Hk−1 must satisfy the quasi-Newton condition, so that it serves as an approximation to [∇²f(xk−1)]⁻¹. Step size: use line search.
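The BFGS update is a widely used quasi-Newton choice; a minimal sketch via scipy.optimize.minimize, applied to an illustrative quadratic (the function is an assumption, not from the slides):

import numpy as np
from scipy.optimize import minimize

f = lambda x: (x[0] - 1)**2 + 2 * (x[1] + 3)**2   # illustrative objective
res = minimize(f, x0=np.zeros(2), method='BFGS')  # BFGS builds an approximation to the inverse Hessian
print(res.x)                                      # approximately [1, -3]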

29 Constrained optimization (1)
Constrained optimization problem: Objective function is, in general, a nonlinear function of the design variables. Constraints may also involve nonlinear functions of the design variables. Find x1,…,xn to maximize f(x1,…,xn) subject to gi(x1,…,xn) ≤ 0 (inequality constraints) and hj(x1,…,xn) = 0 (equality constraints).

30 Constrained optimization (2)
The Excel Solver uses the generalized reduced gradient method for constrained optimization. The method has the same basic components (i.e. starting point, search direction, step size and stopping rule) as any unconstrained optimization method, but differs in the details, which enable it to handle the constraints.
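SciPy does not provide the generalized reduced gradient method itself, but a constrained problem of the same general form can be sketched with scipy.optimize.minimize and the SLSQP method as a comparable gradient-based alternative; the objective and constraints below are illustrative assumptions, and note that SLSQP expects inequality constraints written as g(x) ≥ 0 rather than the g(x) ≤ 0 form used on the earlier slide.

import numpy as np
from scipy.optimize import minimize

f = lambda x: (x[0] - 2)**2 + (x[1] - 1)**2                  # illustrative objective
constraints = [
    {'type': 'ineq', 'fun': lambda x: 1 - x[0] - x[1]},      # x1 + x2 <= 1, written as g(x) >= 0
    {'type': 'eq',   'fun': lambda x: x[0] - 2 * x[1]},      # equality constraint x1 = 2*x2
]
res = minimize(f, x0=np.zeros(2), method='SLSQP', constraints=constraints)
print(res.x)                                                 # approximately [0.667, 0.333]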

31 Reading assignment
Next lecture: Sec. 11.1, 11.5

