1 OR II GSLM 52800

2 Outline
- classical optimization: unconstrained optimization
- dimensions of optimization conditions
- feasible direction

3 Classical Optimization Results: Unconstrained Optimization
- different dimensions of optimization conditions
- nature of conditions
  - necessary conditions (必要條件): satisfied by any minimum (and possibly by some non-minimum points)
  - sufficient conditions (充分條件): if satisfied by a point, the point is a minimum (though some minima may not satisfy the conditions)
- order of conditions
  - first-order conditions: stated in terms of the first derivatives of f and the gⱼ
  - second-order conditions: stated in terms of the second derivatives of f and the gⱼ
- general assumptions: f and the gⱼ are C¹ (i.e., once continuously differentiable) or C² (i.e., twice continuously differentiable), as required by the conditions

4 Feasible Direction
- S ⊆ ℝⁿ: the feasible region
- x ∈ S: a feasible point
- d is a feasible direction at x if there exists ᾱ > 0 such that x + αd ∈ S for all 0 < α < ᾱ
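
A minimal numerical sketch of this definition (not from the slides): for an assumed box-shaped feasible region, d is a feasible direction at x when x + αd stays in S for all sufficiently small α > 0.

import numpy as np

def is_feasible_direction(x, d, in_S, alphas=np.logspace(-8, -1, 8)):
    """Numerically test whether d is a feasible direction at x:
    x + alpha*d must stay in S for all sufficiently small alpha > 0."""
    return all(in_S(x + a * d) for a in alphas)

# Illustrative feasible region S = {x in R^2 : 0 <= x_i <= 2} (an assumption, not from the slides)
in_box = lambda x: np.all(x >= 0) and np.all(x <= 2)

x = np.array([0.0, 1.0])                                         # a boundary point of S
print(is_feasible_direction(x, np.array([ 1.0, 0.0]), in_box))   # True: moves into the box
print(is_feasible_direction(x, np.array([-1.0, 0.0]), in_box))   # False: leaves the box immediately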

5 Two Key Concepts for Classical Results
- ∇f: the direction of steepest ascent
- the gradient of f at x₀ is orthogonal to the tangent of the contour f(x) = c at x₀

6 The Direction of Steepest Ascent
- ∇f: the direction of steepest ascent
- in some sense, the increment of a unit move depends on the angle it makes with ∇f
  - within 90° of ∇f: f increases; the closer the angle is to 0°, the larger the increase
  - beyond 90° of ∇f: f decreases; the closer the angle is to 180°, the larger the decrease
- the above results hold generally for any f
[figure: contours of f(x₁, x₂) = c in the (x₁, x₂) plane with ∇f shown]
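
A small sketch of this angle dependence (with an assumed f, since the slide's contour function is not specified): the first-order change per unit step is ∇f(x₀)ᵀd = ‖∇f‖ cos θ, largest along ∇f, positive within 90° of it, and negative beyond.

import numpy as np

# f(x1, x2) = x1^2 + x2^2 is an illustrative choice (an assumption, not from the slide)
grad_f = lambda x: 2 * x            # gradient of f at x

x0 = np.array([1.0, 1.0])
g = grad_f(x0)

for theta_deg in (0, 45, 89, 91, 180):
    theta = np.deg2rad(theta_deg)
    c, s = np.cos(theta), np.sin(theta)
    # unit direction making angle theta with the gradient (rotate the normalized gradient)
    d = np.array([[c, -s], [s, c]]) @ (g / np.linalg.norm(g))
    # first-order change of f per unit step: grad_f(x0) . d = |grad| cos(theta)
    print(theta_deg, float(g @ d))
# positive (f increases) within 90 degrees of the gradient, largest at 0 degrees,
# negative beyond 90 degrees, most negative at 180 degrees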

7 Gradient of f at x₀ Being Orthogonal to the Tangent of the Contour f(x) = c at x₀
- the contour of f(x₁, x₂) through x₀ = (x₁₀, x₂₀): f(x₁₀, x₂₀) = c
- for d on the tangent plane at x₀, f(x₀ + αd) ≈ c for small α
- roughly speaking, with f(x₀) = c, f(x₀ + αd) = c for small α when d is on the tangent plane at x₀

8 First-Order Necessary Condition (FONC)
- f ∈ C¹ on S and x* a local minimum of f
- then for any feasible direction d at x*, ∇f(x*)ᵀd ≥ 0
  - i.e., f is non-decreasing along every feasible direction
- examples (figures on the slide): f(x) = x² for 2 ≤ x ≤ 5; f(x, y) = x² + y² for 0 ≤ x, y ≤ 2; f(x, y) = x² + y² for x ≥ 3, y ≥ 3
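
A quick check of the FONC on the slide's first example, f(x) = x² on 2 ≤ x ≤ 5: at the minimum x* = 2 every feasible direction points into the interval (d > 0), and f′(2)d = 4d ≥ 0.

def grad(x):            # f'(x) = 2x for f(x) = x^2
    return 2 * x

x_star = 2.0
for d in (1.0, 0.5, 2.0):            # feasible directions at the boundary point x* = 2
    assert grad(x_star) * d >= 0     # FONC: gradient times feasible direction is nonnegative
print("FONC holds at x* = 2")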

9 FONC for Unconstrained NLP
- f ∈ C¹ on S and x* an interior local minimum (i.e., not touching any boundary)
- then ∇f(x*) = 0
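
A tiny sketch with an assumed smooth function whose interior minimum is known, showing the gradient vanishing there as the FONC requires.

import numpy as np

# Illustrative interior minimum: f(x, y) = (x - 1)^2 + (y + 2)^2 (an assumed example)
grad = lambda v: np.array([2 * (v[0] - 1), 2 * (v[1] + 2)])

x_star = np.array([1.0, -2.0])
print(grad(x_star))        # [0. 0.] -> FONC for an interior local minimum: gradient = 0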

10 FONC Not Sufficient
- Example 3.2.2: f(x, y) = -(x² + y²) for 0 ≤ x, y
  - ∇f((0, 0))ᵀd = 0 for all feasible directions d, so the FONC holds
  - yet (0, 0) is a maximum point
- Example 3.2.3: f(x) = x³
  - f′(0) = 0, so x = 0 is a stationary point, but it is not a minimum
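
The Example 3.2.3 case in a couple of lines: the derivative vanishes at x = 0, yet f(x) = x³ keeps decreasing to the left, so the stationary point is not a minimum.

f = lambda x: x**3
df = lambda x: 3 * x**2

print(df(0.0))                     # 0.0 -> x = 0 satisfies the FONC (stationary point)
print(f(-0.1) < f(0.0) < f(0.1))   # True -> f is still smaller just to the left, so 0 is not a minimum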

11 Feasible Region with Non-negativity Constraints
- Example 3.2.4 (Example 10.8 of JB): find candidates for the minimum points by the FONC
- min f(x) = … subject to x₁ ≥ 0, x₂ ≥ 0, x₃ ≥ 0, or, equivalently, …

12 Second-Order Conditions
- another form of Taylor's Theorem:
  f(x) = f(x*) + ∇f(x*)ᵀ(x - x*) + 0.5 (x - x*)ᵀ H(x*) (x - x*) + ε,
  where ε is small and dominated by the other terms
- if ∇f(x*)ᵀ(x - x*) = 0, then (ignoring ε) f(x) ≥ f(x*) ⇔ (x - x*)ᵀ H(x*) (x - x*) ≥ 0
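
A numerical check of this second-order expansion for an assumed C² function: after the gradient and Hessian terms, the remainder ε is tiny for a small step.

import numpy as np

# Assumed smooth example: f(x1, x2) = exp(x1) + x1*x2 + x2^2
f    = lambda v: np.exp(v[0]) + v[0] * v[1] + v[1] ** 2
grad = lambda v: np.array([np.exp(v[0]) + v[1], v[0] + 2 * v[1]])
hess = lambda v: np.array([[np.exp(v[0]), 1.0], [1.0, 2.0]])

x_star = np.array([0.5, -0.3])
dx = np.array([1e-3, -2e-3])
x = x_star + dx

taylor = f(x_star) + grad(x_star) @ dx + 0.5 * dx @ hess(x_star) @ dx
print(abs(f(x) - taylor))   # tiny remainder epsilon, dominated by the first- and second-order terms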

13 Second-Order Necessary Condition (SONC)
- f ∈ C² on S
- if x* is a local minimum of f, then for any feasible direction d ∈ ℝⁿ at x*:
  (i) ∇f(x*)ᵀd ≥ 0, and
  (ii) if ∇f(x*)ᵀd = 0, then dᵀH(x*)d ≥ 0

14 Example 3.3.1(a)
- the SONC is satisfied in each case:
  f(x) = x² for 2 ≤ x ≤ 5; f(x, y) = x² + y² for 0 ≤ x, y ≤ 2; f(x, y) = x² + y² for x ≥ 3, y ≥ 3

15 Example 3.3.1(b)
- the SONC is more discriminating than the FONC
- f(x, y) = -(x² + y²) for 0 ≤ x, y in Example 3.2.2
- (0, 0), a maximum point, fails the SONC: ∇f((0, 0)) = 0, but dᵀH((0, 0))d = -2‖d‖² < 0

16 SONC for Unconstrained NLP
- f ∈ C² on S and x* an interior local minimum of f, then
  (i) ∇f(x*) = 0, and
  (ii) dᵀH(x*)d ≥ 0 for all d
- (ii) ⇔ H(x*) is positive semi-definite
- a convex f satisfies (ii) (and actually more)
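
A sketch of how one might test the unconstrained SONC numerically (the helper name and test matrices are illustrative, not from the slides): zero gradient plus a positive semi-definite Hessian, checked through its eigenvalues.

import numpy as np

def satisfies_sonc(grad_at_x, hess_at_x, tol=1e-10):
    """SONC for an interior point: gradient = 0 and Hessian positive semi-definite."""
    gradient_zero = np.allclose(grad_at_x, 0.0, atol=tol)
    psd = np.all(np.linalg.eigvalsh(hess_at_x) >= -tol)   # symmetric Hessian assumed
    return gradient_zero and psd

# Illustrative checks with assumed data:
print(satisfies_sonc(np.zeros(2), np.array([[2.0, 0.0], [0.0, 0.0]])))    # True: PSD Hessian
print(satisfies_sonc(np.zeros(2), np.array([[2.0, 0.0], [0.0, -1.0]])))   # False: indefinite Hessian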

17 Example 3.3.2
- identify candidates for the minimum points of f(x) = …
- setting ∇f(x) = 0 gives x = (1, -1) or (-1, -1)
- H(x) = …
- (1, -1) satisfies the SONC, but (-1, -1) does not

18 SONC Not Sufficient
- f(x, y) = -(x⁴ + y⁴)
- ∇f((0, 0))ᵀd = 0 for all d, and H((0, 0)) = 0 is positive semi-definite, so the SONC holds
- yet (0, 0) is a maximum
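
The slide's counterexample in code: at (0, 0) both the gradient and the Hessian of f(x, y) = -(x⁴ + y⁴) vanish, so the SONC holds, yet every nearby point has a smaller function value.

import numpy as np

f    = lambda v: -(v[0] ** 4 + v[1] ** 4)
grad = lambda v: np.array([-4 * v[0] ** 3, -4 * v[1] ** 3])
hess = lambda v: np.array([[-12 * v[0] ** 2, 0.0], [0.0, -12 * v[1] ** 2]])

x0 = np.zeros(2)
print(grad(x0), hess(x0))               # zero gradient and zero (hence PSD) Hessian -> SONC holds
print(f(np.array([0.1, 0.1])) < f(x0))  # True: f drops in every direction, so (0, 0) is a maximum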

19 SOSC for Unconstrained NLP
- f ∈ C² on S ⊆ ℝⁿ and x* an interior point
- if
  (i) ∇f(x*) = 0, and
  (ii) H(x*) is positive definite,
- then x* is a strict local minimum of f
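
A minimal sketch of an SOSC check (the helper name and test matrix are assumptions for illustration): positive definiteness of the Hessian can be verified through a Cholesky factorization.

import numpy as np

def satisfies_sosc(grad_at_x, hess_at_x, tol=1e-10):
    """SOSC for an interior point: gradient = 0 and Hessian positive definite."""
    if not np.allclose(grad_at_x, 0.0, atol=tol):
        return False
    try:
        np.linalg.cholesky(hess_at_x)   # succeeds iff the symmetric Hessian is positive definite
        return True
    except np.linalg.LinAlgError:
        return False

# Illustrative use with assumed data: a PD Hessian certifies a strict local minimum.
print(satisfies_sosc(np.zeros(2), np.array([[2.0, 0.0], [0.0, 3.0]])))   # True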

20 SOSC Not Necessary
- Example 3.3.4: x = 0 is a minimum of f(x) = x⁴
- yet the SOSC is not satisfied: f″(0) = 0, so H(0) is not positive definite
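
Example 3.3.4 in two lines: f(x) = x⁴ has f″(0) = 0, so the Hessian is not positive definite and the SOSC fails, even though x = 0 is the global minimum.

f   = lambda x: x ** 4
d2f = lambda x: 12 * x ** 2

print(d2f(0.0))                                               # 0.0 -> Hessian not positive definite
print(all(f(0.0) <= f(t) for t in (-0.5, -0.1, 0.1, 0.5)))    # True: x = 0 is still a minimum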

21 Example 3.3.5
- In Example 3.2.4, is (1, 1, 1) a minimum?
- … 6 > 0; the Hessian is positive definite, i.e., the SOSC is satisfied

22 Effect of Convexity
- suppose ∇f(x*)ᵀ(y - x*) ≥ 0 for all y in the neighborhood of x* ∈ S
- convexity of f implies f(y) ≥ f(x*) + ∇f(x*)ᵀ(y - x*) ≥ f(x*)
- so x*, a local min of f in the neighborhood of x*, is a global minimum of f

23 Effect of Convexity
- f ∈ C² convex ⇔ H positive semi-definite everywhere
- by Taylor's Theorem, when ∇f(x*)ᵀ(x - x*) = 0, for some λ ∈ (0, 1):
  f(x) = f(x*) + ∇f(x*)ᵀ(x - x*) + 0.5 (x - x*)ᵀ H(λx* + (1-λ)x) (x - x*)
       = f(x*) + 0.5 (x - x*)ᵀ H(λx* + (1-λ)x) (x - x*)
       ≥ f(x*)
- so x*, a local min, is a global min

24 Effect of Convexity
- facts about convex functions
  (i) a local min is a global min
  (ii) H(x) is positive semi-definite everywhere
  (iii) if H(x) is positive definite everywhere, then f is strictly convex
- implications
  - for a convex f ∈ C², the FONC ∇f(x*) = 0 is sufficient for x* to be a global minimum
  - if f is strictly convex, x* is the unique global min
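
A small sketch of the implication for a convex C² function (the quadratic below is an assumed example): once ∇f(x*) = 0, x* is the global minimum, and strict convexity makes it unique.

import numpy as np

# Assumed strictly convex quadratic: f(x) = 0.5 x^T Q x - b^T x with Q symmetric positive definite
Q = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
f = lambda x: 0.5 * x @ Q @ x - b @ x

x_star = np.linalg.solve(Q, b)             # solves grad f(x) = Qx - b = 0 (the FONC)
print(np.allclose(Q @ x_star - b, 0))      # True: FONC holds at x_star

# strict convexity makes x_star the unique global minimum: f increases for any nonzero step
steps = (np.array([1.0, 0.0]), np.array([0.0, -1.0]), np.array([2.0, 2.0]))
print(all(f(x_star) < f(x_star + d) for d in steps))   # True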

