Tutorial 12: Linear programming, quadratic programming

Slide 1: Tutorial 12. Linear programming, quadratic programming

Slide 2 (M4CS 2005, Tutorial 14): Linear case

We have already discussed that the role of the constraints in optimization is to define a search region Ω within the space R^n on which f(x) is defined. In general, each equality constraint reduces the dimensionality by one, while each inequality constraint carves out a region of the space without reducing its dimensionality. Now consider minimization of the linear function f(x) = c^T x + b over a search region Ω defined by linear constraints, written in the standard form Ax = a, Bx ≤ d.
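The slides later solve such problems in Matlab; as a minimal runnable sketch of the same setup, here is a Python equivalent using SciPy's `linprog` (SciPy is my substitution, not part of the slides; the particular c, B, d below are an illustrative toy region, not data from the tutorial):

```python
import numpy as np
from scipy.optimize import linprog

# Minimize f(x) = c^T x over the polyhedral region Omega = {x : B x <= d}.
c = np.array([1.0, 2.0])            # linear objective coefficients
B = np.array([[-1.0,  0.0],         # -x1 <= 0   (i.e. x1 >= 0)
              [ 0.0, -1.0],         # -x2 <= 0   (i.e. x2 >= 0)
              [ 1.0,  1.0]])        #  x1 + x2 <= 1
d = np.array([0.0, 0.0, 1.0])

res = linprog(c, A_ub=B, b_ub=d, bounds=(None, None))
print(res.x)  # the optimum lands at a vertex of Omega
```

As the next slide illustrates geometrically, the solver returns a point on the boundary of Ω (here a vertex of the triangle).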

Slide 3: Linear case, illustration

[Figure: level lines of c^T x + b over the region Ω, with the minimizer x* on the boundary of Ω.]

The figure above illustrates that in this linear case the minimum is attained on the boundary of the region Ω. We leave the proof to you, and proceed to the more general, and slightly harder, case of convex functions.

Slide 4: Convexity

Definition 1: The region Ω is called convex if for every x1, x2 ∈ Ω and every λ ∈ [0, 1],

λx1 + (1 − λ)x2 ∈ Ω.

Definition 2: The function f(x) is called convex if for every x1, x2 and every λ ∈ (0, 1),

λf(x1) + (1 − λ)f(x2) ≥ f(λx1 + (1 − λ)x2).

If '≥' is replaced with '>' (for x1 ≠ x2), the function is called strictly convex. Note that a linear function is convex, but not strictly convex.
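Definition 2 can be checked numerically by sampling random x1, x2, λ. The sketch below (the helper name `is_convex_sample` and the three test functions are mine, for illustration only) confirms that a linear function satisfies the convexity inequality while a concave one violates it:

```python
import numpy as np

rng = np.random.default_rng(0)

def is_convex_sample(f, n=2, trials=1000):
    """Sample the convexity inequality
    lam*f(x1) + (1-lam)*f(x2) >= f(lam*x1 + (1-lam)*x2)
    at random points; any failure disproves convexity."""
    for _ in range(trials):
        x1, x2 = rng.normal(size=n), rng.normal(size=n)
        lam = rng.uniform()
        lhs = lam * f(x1) + (1 - lam) * f(x2)
        rhs = f(lam * x1 + (1 - lam) * x2)
        if lhs < rhs - 1e-12:
            return False
    return True

linear    = lambda x: 3.0 * x[0] - x[1] + 2.0  # convex, but not strictly
quadratic = lambda x: x @ x                    # strictly convex
concave   = lambda x: -(x @ x)                 # not convex

print(is_convex_sample(linear), is_convex_sample(quadratic), is_convex_sample(concave))
```

Note that such sampling can only refute convexity, never prove it; it is a sanity check, not a proof.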

Slide 5: Convexity, illustration

[Figure: nested classes of functions: within the general functions, the convex functions; within those, the strictly convex functions; the linear functions are convex but not strictly convex.]

The figure above illustrates the relations between linear, convex, and strictly convex functions.

Slide 6: Linear programming, definition

Minimization of a linear function over a convex region Ω is called linear programming. Note that with an appropriate and sufficiently 'tall' matrix B (sufficiently many inequality constraints), an arbitrary convex region can be approximated to arbitrarily high accuracy. (Can you prove it?) Many linear programming problems carry the additional constraint that the components of x be non-negative: x_i ≥ 0. We will prove another, related, claim: a set of linear equality and inequality constraints defines a convex region.

Slide 7: Quadratic programming, definition

The only difference between quadratic programming and linear programming is that the objective can be a quadratic form:

f(x) = ½ x^T H x + c^T x + b.
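A small quadratic program of exactly this shape can be solved in Python with SciPy's SLSQP solver (SciPy stands in here for the Matlab `quadprog` used later on the slides; the particular H, f, B, d are an illustrative toy problem of mine):

```python
import numpy as np
from scipy.optimize import minimize

# Minimize 0.5*x^T H x + f^T x  subject to  B x <= d.
H = np.array([[2.0, 0.0],
              [0.0, 2.0]])          # positive definite -> strictly convex
f = np.array([-2.0, -5.0])
B = np.array([[1.0, 1.0]])          # x1 + x2 <= 2
d = np.array([2.0])

obj = lambda x: 0.5 * x @ H @ x + f @ x
# SLSQP expects inequality constraints as g(x) >= 0, so write d - B x >= 0.
cons = {'type': 'ineq', 'fun': lambda x: d - B @ x}
res = minimize(obj, x0=np.zeros(2), constraints=cons, method='SLSQP')
print(res.x)
```

The unconstrained minimizer (1, 2.5) violates x1 + x2 ≤ 2, so the solution lies on the constraint boundary at (0.25, 1.75), consistent with the linear-case picture on slide 3.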

Slide 8: Linear constraints define convex regions (1/3)

Claim: A region Ω defined by a set of linear equality and inequality constraints is convex.

Proof: 1. If Ω is empty, or contains a single point, the definition of convexity is satisfied trivially. 2. Consider any x1, x2 ∈ Ω and λ ∈ [0, 1]. We need to prove that x3 = λx1 + (1 − λ)x2 ∈ Ω.

Slide 9: Proof (equality constraints) (2/3)

Assume the contrary: x3 ∉ Ω. This means that x3 violates at least one of the constraints. First, assume that it is one of the equality constraints, say a^T x = c. Note that since x1, x2 ∈ Ω, they satisfy this constraint: a^T x1 = c and a^T x2 = c. Applying the definition of x3 and the linearity of the scalar product, we obtain:

a^T x3 = λ a^T x1 + (1 − λ) a^T x2 = λc + (1 − λ)c = c.

Thus x3 satisfies all the equality constraints, contrary to the assumption.

Slide 10: Proof (inequality constraints) (3/3)

Now assume that x3 violates one of the inequality constraints, say b^T x ≤ d; this means that b^T x3 > d. On the other hand, x1, x2 ∈ Ω, therefore b^T x1 ≤ d and b^T x2 ≤ d. Let us write

b^T x3 = λ b^T x1 + (1 − λ) b^T x2.

Since λ ∈ [0, 1], we obtain b^T x3 ≤ λd + (1 − λ)d = d, contrary to the assumption. Thus x3 satisfies all the inequality constraints as well. We have proven that for any x1, x2 satisfying the linear equality and inequality constraints, x3 = λx1 + (1 − λ)x2 also satisfies these constraints. Therefore, linear constraints define a convex region.
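The argument of slides 8-10 can be exercised numerically: take a region given by linear equalities and inequalities, two feasible points, and check that every sampled convex combination stays feasible. (The simplex region, the points x1 and x2, and the helper `in_omega` below are illustrative choices of mine.)

```python
import numpy as np

rng = np.random.default_rng(1)

# Omega: A x = a (equality) and B x <= b (inequalities) -- here the 2-simplex.
A = np.array([[1.0, 1.0, 1.0]]); a = np.array([1.0])   # components sum to 1
B = -np.eye(3);                  b = np.zeros(3)       # components >= 0

def in_omega(x):
    return np.allclose(A @ x, a) and np.all(B @ x <= b + 1e-12)

x1 = np.array([1.0, 0.0, 0.0])   # feasible
x2 = np.array([0.2, 0.3, 0.5])   # feasible

# Every convex combination x3 = lam*x1 + (1-lam)*x2 stays in Omega.
for lam in rng.uniform(size=20):
    x3 = lam * x1 + (1 - lam) * x2
    assert in_omega(x3)
print("all sampled convex combinations are feasible")
```

This is of course only a spot check of the theorem just proven, not a replacement for the proof.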

Slide 11: Example: Support Vector Machines (1/6)

Given a set of labeled points which is linearly separable, find the vector w defining the pair of parallel hyperplanes that separate the set with maximum margin.

[Figure: two classes of points separated by the parallel hyperplanes x^T w = ±γ, with normal vector w.]

Slide 12: Example: Support Vector Machines (2/6)

For the sake of an elegant mathematical description, we define the data matrix A (whose rows are the data points x_i^T) and the diagonal label matrix D (whose diagonal entries are the labels d_i = ±1). We are looking for a vector w and an appropriate constant γ such that x_i^T w ≥ γ for points labeled +1 and x_i^T w ≤ −γ for points labeled −1. Note that these two cases can be combined:

DAw ≥ γe,   (1)

where e is the vector of all ones.

Slide 13: Example: Support Vector Machines (3/6)

Note that by multiplying w and γ by some factor, we can seemingly increase the separation between the planes in (1). Therefore, the best separation has to maintain inequality (1), with γ scaled to 1, while minimizing the length of w:

min ½‖w‖²  subject to  DAw ≥ e.   (2)

This is a constrained minimization problem with a quadratic objective and linear inequality constraints. It is called quadratic programming.

Slide 14: Example: SVM, solution in Matlab (4/6)

We will use Matlab's quadprog:

>> help quadprog
w = QUADPROG(H,f,A,b) attempts to solve the quadratic programming problem:
    min 0.5*w'*H*w + f'*w   subject to:  A*w <= b

In our case, H = I and f = 0. For clarity, we bring constraint (2) to the form compatible with Matlab's notation: −DAw ≤ −e.

Slide 15: Example: SVM, solution in Matlab (5/6)

Thus, in quadprog's notation A*w <= b, we pass the inequality matrix AA = −D*A and the right-hand side b = −e (a vector of −1's).

Slide 16: Example: SVM, solution in Matlab (6/6)

n = 2; PN = 20;
A = [rand(PN,2)+.1; -rand(PN,2)-.1];          % the data: two separated clusters
D = diag([ones(1,PN), -ones(1,PN)]);          % the labels
plot(A(1:PN,1), A(1:PN,2), 'g*');             % plot the first class
hold on;
plot(A(PN+1:2*PN,1), A(PN+1:2*PN,2), 'bo');   % plot the second class
% adjust the input to quadprog()
H = eye(n); f = zeros(n,1);
AA = -D*A; b = -ones(PN*2,1);
w = quadprog(H,f,AA,b)                        % quadratic programming - takes milliseconds
% plot the separating plane
W_orth = [-.3:.01:.3]' * [w(2), -w(1)];       % direction orthogonal to w
plot(W_orth(:,1), W_orth(:,2), 'k.')
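For readers without Matlab, the same quadratic program (2) can be reproduced in Python; this is my SciPy-based sketch of the slide's script (SLSQP stands in for quadprog, plotting omitted), not part of the original tutorial:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
PN = 20
# Two linearly separable clusters, mirroring the slide's rand(...)+.1 data.
A = np.vstack([rng.random((PN, 2)) + 0.1,
               -rng.random((PN, 2)) - 0.1])            # the data
D = np.diag(np.concatenate([np.ones(PN), -np.ones(PN)]))  # the labels

# Quadratic program (2): min 0.5*||w||^2  subject to  D A w >= 1.
obj = lambda w: 0.5 * w @ w
cons = {'type': 'ineq', 'fun': lambda w: D @ A @ w - 1.0}
res = minimize(obj, x0=np.ones(2), constraints=cons, method='SLSQP')
w = res.x
print(w)
```

With `w` in hand, the separating plane is {x : x^T w = 0}, and the margin planes x^T w = ±1 pass through the active (support) constraints, exactly as on slide 11.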

