Linear Programming 2011: (Convex) Cones




2 (Convex) Cones
Def: K is a cone provided it is closed under nonnegative linear combinations, i.e. a_1, …, a_p ∈ K ⊆ R^n, λ_1, …, λ_p ≥ 0 ⇒ Σ_{i=1}^p λ_i a_i ∈ K. (Note: usually cones are defined as closed only under nonnegative scalar multiplication.)
Observations (characteristics):
- Subspaces are cones.
- For any family of cones {K_i : i ∈ I}, ∩_{i ∈ I} K_i is also a cone.
- Any nonempty cone contains 0.
- If K_1, K_2 are cones, then so is K_1 + K_2 = {x + y : x ∈ K_1, y ∈ K_2}.
- Halfspaces through the origin are cones: H = {x ∈ R^n : a'x ≤ 0}.
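The closure property in the definition can be sanity-checked numerically. A minimal sketch (the normal vector a and the sampling scheme are arbitrary choices for illustration): it draws points of a halfspace H = {x : a'x ≤ 0} and confirms that nonnegative combinations of them stay in H.

```python
import random

# H = {x in R^2 : a'x <= 0} should be a cone: any nonnegative combination
# of points of H stays in H. A randomized sanity check, not a proof.
a = (1.0, 2.0)  # normal vector of H (hypothetical choice)

def in_H(x, tol=0.0):
    return a[0] * x[0] + a[1] * x[1] <= tol

random.seed(0)
for _ in range(1000):
    # rejection-sample two points of H
    pts = []
    while len(pts) < 2:
        x = (random.uniform(-1, 1), random.uniform(-1, 1))
        if in_H(x):
            pts.append(x)
    l1, l2 = random.uniform(0, 5), random.uniform(0, 5)  # lambda_i >= 0
    combo = (l1 * pts[0][0] + l2 * pts[1][0], l1 * pts[0][1] + l2 * pts[1][1])
    assert in_H(combo, tol=1e-9)  # small tolerance for float rounding
print("halfspace closed under nonnegative combinations on all samples")
```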

3 Description of cones: any subset A ⊆ R^n generates a cone K(A) (define K(∅) = {0}):
K(A) = {λ_1 a_1 + … + λ_p a_p : p ≥ 1, λ_i ∈ R_+, a_i ∈ A}, called the "conical span of A".
The "conical hull of A" = ∩ {K_i : K_i ⊇ A, K_i a cone} (an outside description). The two sets are the same.
The finite basis result is false for cones (e.g. ice cream cones). Hence we restrict our attention to cones with a finite conical basis.

4 Conical dual: for any A ⊆ R^n, define the (conical) dual of A to be A+ = {x ∈ R^n : Ax ≤ 0}, where Ax ≤ 0 means a'x ≤ 0 for all a ∈ A. (Some people use Ax ≥ 0.) It is called a constrained cone since it is the solution set of a system of homogeneous inequalities. When A is a cone, A+ is called the dual cone (or polar cone) of A. Note that A+ is always a cone, namely a constrained cone. For A an m×n matrix, A+ (with the rows of A regarded as the vectors in the set A) is finitely constrained (a polyhedron).
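For a finite set A, membership in the constrained cone A+ reduces to checking finitely many inner products. A small sketch (the generators in A are hypothetical):

```python
# Membership test for A+ = {x : a'x <= 0 for all a in A}, with A given
# as a finite list of vectors (an illustrative sketch).
A = [(1.0, 0.0), (1.0, 1.0), (0.0, 1.0)]  # hypothetical generators

def in_dual(x, A, tol=1e-9):
    """x is in A+ iff every a in A has a nonpositive inner product with x."""
    return all(sum(ai * xi for ai, xi in zip(a, x)) <= tol for a in A)

# For these generators, A+ is the third quadrant {x : x1 <= 0, x2 <= 0}:
print(in_dual((-1.0, -1.0), A))  # True
print(in_dual((1.0, -1.0), A))   # False: a = (1, 0) gives a'x = 1 > 0
```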

5 Figure: the (conical) dual of A ⊆ R^n for A = {a_1, a_2, a_3}, showing the halfspaces {x : a_1'x ≤ 0} and {x : a_3'x ≤ 0} and the constrained cone A+ = {x : Ax ≤ 0}.

6 Prop: Suppose A, B ⊆ R^n. Then
(1) B ⊆ A ⇒ A+ ⊆ B+
(2) A ⊆ A++
(3) A+ = A+++
(4) A = A++ ⇔ A is a constrained cone
(5) If B ⊆ A and B generates A conically, then A+ = B+.
Pf) Parallels the cases for subspaces. □

7 Figure: A ⊆ A++, illustrated with A = {a_1, a_2, a_3} and A+ = {x : Ax ≤ 0}.

8 Figure: A+ = A+++, illustrated with A = {a_1, a_2, a_3}; A+ = {x : Ax ≤ 0} and A+++ = {x : A++ x ≤ 0}.

9 Figure: A = A++ ⇔ A is a constrained cone; when A is itself a constrained cone, A+ = {x : Ax ≤ 0} and A++ = A.

10 Thm (Weyl): Any nonempty finitely generated cone is polyhedral (finitely constrained).
Pf) Use Fourier-Motzkin elimination (later). □
Cor 1: Among all subsets A ⊆ R^n with a finite conical basis, A = A++ ⇔ A is a nonempty cone.
Cor 2: Given A: m×n, consider K = {y'A : y ≥ 0} and L = {x : Ax ≤ 0}. Then K+ = L and L+ = K.

11 Figure: K+ = L and L+ = K, illustrated with A = {a_1, a_2, a_3} and A+ = {x : Ax ≤ 0}.

12 Cor 3 (Farkas' lemma): Given A: m×n and c ∈ R^n, exactly one of the following holds:
(I) there exists y ∈ R_+^m s.t. y'A = c'
(II) there exists x ∈ R^n s.t. Ax ≤ 0, c'x > 0.
Pf) Show ¬(I) ⇔ (II):
¬(I) ⇔ c ∉ K ≡ {y'A : y ≥ 0} ⇔ c ∉ K++ (by Cor 1) ⇔ ∃ x ∈ K+ (i.e. Ax ≤ 0) s.t. c'x > 0 ⇔ (II) holds. □
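Either alternative of Farkas' lemma comes with an easily checkable certificate: a y for (I) or an x for (II). A sketch of the two certificate checks on a hypothetical 2×2 instance (A = I, so K is the nonnegative quadrant):

```python
# Verifying Farkas certificates for given A and c (finding them needs an
# LP solver; checking them is just arithmetic).
def check_I(A, c, y, tol=1e-9):
    """y >= 0 and y'A = c' certify alternative (I)."""
    if any(yi < -tol for yi in y):
        return False
    yA = [sum(y[i] * A[i][j] for i in range(len(A))) for j in range(len(c))]
    return all(abs(v - cj) <= tol for v, cj in zip(yA, c))

def check_II(A, c, x, tol=1e-9):
    """Ax <= 0 and c'x > 0 certify alternative (II)."""
    Ax_ok = all(sum(ai * xi for ai, xi in zip(row, x)) <= tol for row in A)
    return Ax_ok and sum(ci * xi for ci, xi in zip(c, x)) > tol

A = [[1.0, 0.0], [0.0, 1.0]]
# c = (2, 3) lies in K = {y'A : y >= 0}: y = (2, 3) works, so (I) holds.
print(check_I(A, [2.0, 3.0], [2.0, 3.0]))     # True
# c = (-1, 0) does not: x = (-1, 0) gives Ax <= 0 and c'x = 1 > 0.
print(check_II(A, [-1.0, 0.0], [-1.0, 0.0]))  # True
```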

13 Figure (Farkas' lemma), A: m×n, c ∈ R^n, K+ = {x : Ax ≤ 0}. Case (1): c ∈ K (∃ y ≥ 0 such that y'A = c').

14 Figure (Farkas' lemma), K+ = {x : Ax ≤ 0}. Case (2): c ∉ K (∃ x such that Ax ≤ 0, c'x > 0).

15 Farkas' lemma is the core of LP duality theory (details later). There are many other forms of theorems of the alternative, and they are important and powerful tools in optimization theory.
ex) Verifying that c* (c'x* = c* for some feasible x*) is the optimal value of an LP (in minimization form) is the same as verifying that the following system has no solution:
-c'x + c* > 0
Ax - b ≥ 0
The truth of the claim can be certified by giving a solution to the alternative system. Question: is a similar result possible in integer form?
Other uses:
- finding the projection of a polyhedron onto a lower dimensional space (later)
- the absence-of-arbitrage condition in finance theory
- the KKT optimality conditions for nonlinear programs, ...

16 Absence of Arbitrage (text Chapter 4, pp. 167-169)
The text uses the form:
(I) there exists some x ≥ 0 such that Ax = b
(II) there exists some vector p such that p'A ≥ 0' and p'b < 0
(here the columns of A are the generators of a cone). Compare with the form above:
(I) there exists y ∈ R_+^m s.t. y'A = c'
(II) there exists x ∈ R^n s.t. Ax ≤ 0, c'x > 0.

17 Setting: n different assets are traded in a market (single period); there are m possible states at the end of the period.
r_si: return on an investment of 1 dollar in asset i when the state at the end of the period is s; payoff matrix R: m×n.
x_i: amount held of asset i (x_i can be negative).
x_i > 0: the investor has bought x_i units of asset i and receives r_si x_i if state s occurs.
x_i < 0: a "short position", selling |x_i| units of asset i at the beginning with the promise to buy them back at the end (the seller's position in a futures contract; the payout is r_si |x_i|, i.e. a payoff of r_si x_i is received if state s occurs).

18 Given a portfolio x, the resulting wealth when state s occurs is w_s = Σ_{i=1}^n r_si x_i, i.e. w = Rx. Let p_i be the price of asset i at the beginning; then the cost of acquiring portfolio x is p'x. What are the fair prices for the assets?
Absence of arbitrage condition: asset prices should always be such that no investor can get a guaranteed nonnegative payoff out of a negative investment (no free lunch). Hence, if Rx ≥ 0, then we must have p'x ≥ 0, i.e. there exists no vector x such that x'R' ≥ 0' and x'p < 0. So, by Farkas' lemma, there exists q ≥ 0 such that R'q = p, i.e. p_i = Σ_{s=1}^m q_s r_si. (Here R' = A and p = b in Farkas' lemma.)
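The easy direction above (prices of the form p = R'q with q ≥ 0 rule out free lunches, since Rx ≥ 0 implies p'x = q'(Rx) ≥ 0) can be illustrated numerically. The payoff matrix R and state prices q below are made-up numbers for a hypothetical 2-state, 2-asset market:

```python
import random

# If p = R'q for some q >= 0, then Rx >= 0 forces p'x = q'(Rx) >= 0.
# Randomized sanity check on a made-up market.
R = [[1.0, 2.0],   # returns in state 1
     [1.0, 0.5]]   # returns in state 2
q = [0.4, 0.6]     # state prices, q >= 0
p = [sum(q[s] * R[s][i] for s in range(2)) for i in range(2)]  # p = R'q

random.seed(1)
for _ in range(10000):
    x = [random.uniform(-1, 1), random.uniform(-1, 1)]             # portfolio
    w = [sum(R[s][i] * x[i] for i in range(2)) for s in range(2)]  # w = Rx
    if all(ws >= 0 for ws in w):                       # guaranteed payoff ...
        assert sum(p[i] * x[i] for i in range(2)) >= -1e-9  # ... costs >= 0
print("prices p =", p, "admit no free lunch on sampled portfolios")
```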

19 Fourier-Motzkin Elimination
Solving systems of inequalities (refer to text section 2.8). The idea is similar to Gaussian elimination: eliminate one variable at a time, with some mechanism reserved for recovering the feasible values later.
Given a system of inequalities and equations (I), eliminate x_n in (I) to obtain a system (II) consisting of linear equations and inequalities in the variables x_1, x_2, …, x_{n-1}, such that
(x_1, x_2, …, x_n) satisfies (I) for some x_n ⇔ (x_1, x_2, …, x_{n-1}) satisfies (II)
(i.e. we want a system (II) that does not miss any feasible solution of (I) and does not include any vector that cannot be extended to a solution of (I)). Then (I) is consistent ⇔ (II) is consistent (has a solution).

20 Related concept: projection of vectors onto a lower dimensional space.
Def: If x = (x_1, x_2, …, x_n) ∈ R^n and k ≤ n, the projection mapping π_k : R^n → R^k is defined as π_k(x) = π_k(x_1, x_2, …, x_n) = (x_1, …, x_k). For S ⊆ R^n, π_k(S) = {π_k(x) : x ∈ S}. Equivalently, π_k(S) = {(x_1, …, x_k) : ∃ x_{k+1}, …, x_n s.t. (x_1, …, x_n) ∈ S}.
To determine whether a polyhedron P is nonempty, one may compute π_{n-1}(P), …, π_1(P) in turn and check whether the resulting one dimensional polyhedron is nonempty. (But this is inefficient.)
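For a finite set S the projection π_k is a one-liner; it is only for implicitly described sets such as polyhedra that the FM machinery of the following slides is needed. A minimal sketch:

```python
# pi_k drops the trailing coordinates; applied to a finite set it can
# collapse distinct points onto the same image.
def pi(k, x):
    return tuple(x[:k])

S = {(1, 2, 3), (1, 2, 4), (5, 6, 7)}
# (1, 2, 3) and (1, 2, 4) collapse to the same projected point (1, 2):
print(sorted({pi(2, x) for x in S}))  # [(1, 2), (5, 6)]
```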

21 Elimination algorithm:
(0) If all coefficients of x_n in (I) are 0, take (II) the same as (I).
(1) If some relation, say the i-th, has a_in ≠ 0 and is an equation '=', derive (II) from (I) by Gauss-Jordan elimination:
a_i1 x_1 + … + a_in x_n = b_i ⇒ x_n = (1/a_in)(b_i - a_i1 x_1 - … - a_{i,n-1} x_{n-1}); substitute this into (I). Clearly (x_1, …, x_n) solves (I) ⇔ (x_1, …, x_{n-1}) solves (II). (continued)

22 (continued)
(2) Rewrite each constraint Σ_{j=1}^n a_ij x_j ≤ b_i as a_in x_n ≤ -Σ_{j=1}^{n-1} a_ij x_j + b_i, i = 1, …, m. If a_in ≠ 0, divide both sides by a_in. Letting x = (x_1, …, x_{n-1}), we obtain
x_n ≤ d_i + f_i'x, if a_in > 0
d_j + f_j'x ≤ x_n, if a_jn < 0
0 ≤ d_k + f_k'x, if a_kn = 0,
where d_i, d_j, d_k ∈ R and f_i, f_j, f_k ∈ R^{n-1}. Let (II) be the system defined by
d_j + f_j'x ≤ d_i + f_i'x, if a_in > 0 and a_jn < 0
0 ≤ d_k + f_k'x, if a_kn = 0
(and the remaining equations). □

23 Ex)
x_1 + x_2 ≤ 1
x_1 + x_2 + 2x_3 ≤ 2
2x_1 + 3x_3 ≤ 3
x_1 - 4x_3 ≤ 4
-2x_1 + x_2 - x_3 ≤ 5
⇒
0 ≤ 1 - x_1 - x_2
x_3 ≤ 1 - (x_1/2) - (x_2/2)
x_3 ≤ 1 - (2x_1/3)
-1 + (x_1/4) ≤ x_3
-5 - 2x_1 + x_2 ≤ x_3
⇒
0 ≤ 1 - x_1 - x_2
-1 + (x_1/4) ≤ 1 - (x_1/2) - (x_2/2)
-1 + (x_1/4) ≤ 1 - (2x_1/3)
-5 - 2x_1 + x_2 ≤ 1 - (x_1/2) - (x_2/2)
-5 - 2x_1 + x_2 ≤ 1 - (2x_1/3)
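Step (2) of the elimination algorithm is mechanical enough to code directly. The sketch below eliminates x_3 from the example system using exact rational arithmetic; it produces the five constraints listed above (one from the row with zero x_3-coefficient, plus 2 upper bounds × 2 lower bounds), with denominators cleared.

```python
from fractions import Fraction

# One Fourier-Motzkin step: eliminate the last variable from Ax <= b.
# Each row is (coefficient list, right-hand side).
def fm_eliminate_last(rows):
    pos, neg, zero = [], [], []
    for a, b in rows:
        (pos if a[-1] > 0 else neg if a[-1] < 0 else zero).append((a, b))
    out = [(a[:-1], b) for a, b in zero]
    for au, bu in pos:          # gives an upper bound on x_n
        for al, bl in neg:      # gives a lower bound on x_n
            cu, cl = au[-1], -al[-1]
            # "lower bound <= upper bound", cleared of denominators:
            out.append(([cu * al[j] + cl * au[j] for j in range(len(au) - 1)],
                        cu * bl + cl * bu))
    return out

# The example system from the slide (all constraints written as <=):
F = Fraction
system = [([F(1), F(1), F(0)],  F(1)),
          ([F(1), F(1), F(2)],  F(2)),
          ([F(2), F(0), F(3)],  F(3)),
          ([F(1), F(0), F(-4)], F(4)),
          ([F(-2), F(1), F(-1)], F(5))]
projected = fm_eliminate_last(system)
print(len(projected), "inequalities after eliminating x3")  # 1 + 2*2 = 5
```

For instance, combining the second and fourth rows yields 6x_1 + 4x_2 ≤ 16, which is the denominator-cleared form of -1 + (x_1/4) ≤ 1 - (x_1/2) - (x_2/2).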

24 Thm 2.10: The polyhedron Q (defined by system (II)) constructed by the elimination algorithm is equal to π_{n-1}(P).
Pf) If x ∈ π_{n-1}(P), then ∃ x_n such that (x, x_n) ∈ P. In particular, (x, x_n) satisfies system (I), hence x also satisfies system (II). This shows π_{n-1}(P) ⊆ Q.
Conversely, let x ∈ Q. Then x satisfies max_{j : a_jn < 0} (d_j + f_j'x) ≤ min_{i : a_in > 0} (d_i + f_i'x). Let x_n be any number between the two sides of this inequality. Then (x, x_n) ∈ P, which shows Q ⊆ π_{n-1}(P). □
Observe that for x = (x_1, …, x_n) we have π_{n-2}(π_{n-1}(x)) = π_{n-2}(x), and likewise π_{n-2}(π_{n-1}(P)) = π_{n-2}(P). Hence obtain π_1(P) recursively to determine whether P is empty or to find a solution.
A solution in P can be recovered recursively starting from π_1(P), at each stage choosing x_n in the interval [max_{j : a_jn < 0} (d_j + f_j'x), min_{i : a_in > 0} (d_i + f_i'x)].

25 Cor 2.4: Let P ⊆ R^{n+k} be a polyhedron. Then the set {x ∈ R^n : there exists y ∈ R^k such that (x, y) ∈ P} is also a polyhedron. (This will be used to prove Weyl's Theorem; no other proof technique is apparent.)
Cor 2.5: Let P ⊆ R^n be a polyhedron and A an m×n matrix. Then the set Q = {Ax : x ∈ P} is also a polyhedron.
Pf) Q = {y ∈ R^m : y = Ax, x ∈ P}. Hence Q is the projection of the polyhedron {(x, y) ∈ R^{n+m} : y = Ax, x ∈ P} onto the y coordinates. □
Cor 2.6: The convex hull of a finite number of vectors (called a polytope) is a polyhedron.
Pf) The convex hull {Σ_{i=1}^k λ_i x^i : Σ_i λ_i = 1, λ_i ≥ 0} is the image of the polyhedron {(λ_1, …, λ_k) : Σ_i λ_i = 1, λ_i ≥ 0} under the mapping that sends (λ_1, …, λ_k) to Σ_i λ_i x^i. (The mapping can be expressed as Aλ, where the columns of the matrix A are the vectors x^i. We will see a different proof later.) □

26 Remarks
- FM elimination is not efficient as an algorithm: the number of inequalities can grow exponentially as we eliminate variables.
- It can also handle strict inequalities.
- It can solve the LP problem max{c'x : Ax ≤ b}: consider the system Ax ≤ b, z = c'x, eliminate x, and find z as large as possible in the resulting one dimensional polyhedron. A solution can be recovered by backtracking.
- FM gives an algorithm to find the projection of P = {(x, y) ∈ R^{n+p} : Ax + Gy ≤ b} onto the x space, Pr_x(P) = {x ∈ R^n : (x, y) ∈ P for some y ∈ R^p}. But how can we characterize Pr_x(P) for an arbitrary P?
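The third remark (solving max{c'x : Ax ≤ b} by elimination) can be sketched end to end: append z = c'x as two inequalities, eliminate every x_j, and read off the largest z the one-variable system allows. A minimal implementation in exact rational arithmetic, assuming the LP is feasible and bounded (no infeasibility or unboundedness handling):

```python
from fractions import Fraction

def fm_eliminate_last(rows):
    """One FM step on rows (coeffs, rhs) of a system Ax <= b."""
    pos = [r for r in rows if r[0][-1] > 0]
    neg = [r for r in rows if r[0][-1] < 0]
    out = [(a[:-1], b) for a, b in rows if a[-1] == 0]
    for au, bu in pos:
        for al, bl in neg:
            cu, cl = au[-1], -al[-1]
            out.append(([cu * al[j] + cl * au[j] for j in range(len(au) - 1)],
                        cu * bl + cl * bu))
    return out

def lp_max_by_fm(A, b, c):
    """max{c'x : Ax <= b} via FM; assumes a finite optimum exists."""
    n = len(c)
    # variable order (z, x_1, ..., x_n); eliminate x_n down to x_1
    rows = [([Fraction(0)] + [Fraction(v) for v in Ai], Fraction(bi))
            for Ai, bi in zip(A, b)]
    rows.append(([Fraction(1)] + [Fraction(-v) for v in c], Fraction(0)))   # z <= c'x
    rows.append(([Fraction(-1)] + [Fraction(v) for v in c], Fraction(0)))   # z >= c'x
    for _ in range(n):
        rows = fm_eliminate_last(rows)
    # remaining rows read a*z <= b; the binding upper bounds give the optimum
    return min(bi / ai[0] for ai, bi in rows if ai[0] > 0)

# max x1 + x2 s.t. x1 <= 1, x2 <= 2, x1 >= 0, x2 >= 0  (optimum 3)
print(lp_max_by_fm([[1, 0], [0, 1], [-1, 0], [0, -1]], [1, 2, 0, 0], [1, 1]))
```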

27 The concept of projection has become important in recent optimization theory (especially in integer programming) as new techniques using projections have been developed (e.g. RLT, the reformulation-linearization technique). A formulation in a higher dimensional space, projected back to the lower dimensional space, may give a stronger formulation in integer programming (in terms of the strength of the LP relaxation). (e.g. a node + edge variable formulation is stronger than an edge-only formulation for the weighted maximal b-clique problem: given a complete undirected graph G = (V, E) with weights c_e, e ∈ E, find a clique (complete subgraph) with at most b nodes whose sum of edge weights is maximum.)

28 Weyl's Theorem: Any nonempty finitely generated cone is polyhedral (i.e. finitely constrained).
Pf) K = {y'A : y ≥ 0} for A: m×n
= {x : x' - y'A = 0', y ≥ 0 is a consistent system in (x, y)}.
Use FM elimination to get rid of the y's:
= {x : some homogeneous linear system in x is consistent}.
Write these relations as Bx ≤ 0. Then K = {x : Bx ≤ 0}, which is polyhedral. □
Note that we obtain a homogeneous system Bx ≤ 0 when we apply FM.

29 Minkowski's Theorem: Any polyhedral cone is nonempty and finitely generated.
Pf) Let L be a polyhedral cone. Clearly L ≠ ∅. We know that L = L++ from the earlier Prop. Part 2 of Cor. 2 says L+ is finitely generated (L+ = K). By Weyl's Thm, L+ itself is polyhedral. By part 2 of Cor. 2, (L+)+ is finitely generated. Since L++ = L from above, L is finitely generated. □
FM elimination thus leads to the Weyl-Minkowski cone representation: a nonempty cone is finitely generated ⇔ it is finitely constrained (polyhedral).
We now extend this result to the affine version.
Def: The set of all convex combinations of a finite point set is called a polytope.

30 Affine Weyl Theorem (finitely generated ⇒ finitely constrained): Suppose P = {x ∈ R^n : x' = y'B + z'C, y ≥ 0, z ≥ 0, Σ_i z_i = 1}, B: p×n, C: q×n. Then ∃ a matrix A: m×n and b ∈ R^m s.t. P = {x ∈ R^n : Ax ≤ b}. (The special case where B is vacuous says that a polytope is a polyhedron.)
Pf) (uses the technique called homogenization) If P = ∅ (i.e. B, C vacuous, i.e. p = q = 0), take A = [0, …, 0], b = -1. If P ≠ ∅, consider P' ⊆ R^{n+1} defined as
P' = {(x, x_{n+1}) ∈ R^{n+1} : (x', x_{n+1}) = y'[B 0] + z'[C 1], y ≥ 0, z ≥ 0},
where 0 and 1 denote the all-zero and all-one column vectors appended to B and C. Observe that x ∈ P ⇔ (x, 1) ∈ P'

31 and P' is a nonempty finitely generated cone in R^{n+1}. Apply Weyl's Thm to P' in R^{n+1} to get P' = {(x, x_{n+1}) : A'(x, x_{n+1})' ≤ 0} for some A': m×(n+1), i.e. A' = [A d] with d the last column of A'. Define b = -d; then A' = [A -b]. Observe that
x ∈ P ⇔ (x, 1) ∈ P' ⇔ A'(x, 1)' ≤ 0 ⇔ [A -b](x, 1)' ≤ 0 ⇔ Ax ≤ b. □
Note that we recast the problem as one involving a cone in R^{n+1}, for which we know more properties, and used the results for cones to prove the theorem.

32 Affine Minkowski Theorem (finitely constrained ⇒ finitely generated): Suppose P = {x ∈ R^n : Ax ≤ b}, A: m×n, b ∈ R^m. Then ∃ matrices B: p×n, C: q×n such that P = {x ∈ R^n : x' = y'B + z'C, y, z ≥ 0, Σ_i z_i = 1}.
Pf) For P = ∅, take p = q = 0, i.e. B, C vacuous. Otherwise, again homogenize and consider
P' = {(x, x_{n+1}) ∈ R^{n+1} : Ax - x_{n+1} b ≤ 0, x_{n+1} ≥ 0}.
Then x ∈ P ⇔ (x, 1) ∈ P'. P' is a polyhedral cone, so Minkowski's Thm applies. Hence ∃ a matrix B': l×(n+1) such that P' = {(x, x_{n+1}) ∈ R^{n+1} : (x, x_{n+1})' = y'B', y ∈ R_+^l}.

33 (continued) Break B' into two parts so that all rows of B' with last component 0 come as the top rows and the rows with nonzero last component come as the bottom rows. Note that all nonzero values in the last column of B' must be > 0, and scaling those rows so that the last component equals 1 doesn't change P'. Writing the top block as [B 0] and the rescaled bottom block as [C 1], we have
x ∈ P ⇔ (x, 1) ∈ P', i.e. x ∈ P ⇔ x' = y'B + z'C, y ≥ 0, z ≥ 0, Σ_i z_i = 1. □

34 Figure: geometric view of homogenization in the Affine Minkowski Thm: P = {x : Ax ≤ b} lies in the hyperplane x_{n+1} = 1 of R^{n+1} = R^n × R, and P' is the polyhedral cone over it with apex at the origin.

35 Think about the analogous picture for the Affine Weyl Thm.
The Affine Weyl and Minkowski Thms together provide the "Double Description Thm": we can describe a polyhedron either as
- a (finite) intersection of halfspaces, or
- a (finite) conical combination of points + a convex combination of points (i.e. P = C + Q, where C is a cone and Q is a polytope).
The existence of the two representations has been shown. The next question is how to actually find such a representation.
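The two descriptions can be compared numerically on a toy example. For the unit square the cone part C is trivial, so the double description is "halfspaces" vs. "convex hull of the four vertices". The convex-combination weights used below are the box-specific bilinear shortcut (a special-case trick, not a general membership test):

```python
import random

# Double description of Q = [0,1]^2: Ax <= b form vs. convex hull of vertices.
halfspaces = [((1, 0), 1), ((-1, 0), 0), ((0, 1), 1), ((0, -1), 0)]  # a'x <= b
vertices = [(0, 0), (1, 0), (0, 1), (1, 1)]

def in_by_halfspaces(x):
    return all(a[0] * x[0] + a[1] * x[1] <= b + 1e-9 for a, b in halfspaces)

def in_by_vertices(x):
    # bilinear weights: for a box these give an explicit convex combination
    z = [(1 - x[0]) * (1 - x[1]), x[0] * (1 - x[1]),
         (1 - x[0]) * x[1], x[0] * x[1]]
    if any(zi < -1e-9 for zi in z):       # not a convex combination
        return False
    px = sum(zi * v[0] for zi, v in zip(z, vertices))
    py = sum(zi * v[1] for zi, v in zip(z, vertices))
    return abs(px - x[0]) < 1e-9 and abs(py - x[1]) < 1e-9

random.seed(2)
for _ in range(1000):
    x = (random.uniform(-0.5, 1.5), random.uniform(-0.5, 1.5))
    assert in_by_halfspaces(x) == in_by_vertices(x)
print("both descriptions agree on all sampled points")
```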

