A Randomized Polynomial-Time Simplex Algorithm for Linear Programming (CS3150 Course Presentation)

Presentation transcript:

A Randomized Polynomial-Time Simplex Algorithm for Linear Programming (CS3150 Course Presentation)

Linear Programming Example: Find the maximum value of p = 3x - 2y + 4z subject to 4x + 3y - z ≥ 3, x + 2y + z ≤ 4, and x ≥ 0, y ≥ 0, z ≥ 0.
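As a side note not taken from the slides, the small LP above is easy to check with an off-the-shelf solver. The Python sketch below uses scipy.optimize.linprog, which minimizes, so the objective is negated and the ≥ row is flipped; worked out by hand, the answer should be roughly p = 14.6 at (x, y, z) = (1.4, 0, 2.6).

# Minimal sketch (illustrative, not part of the presentation): solve the example LP with SciPy.
import numpy as np
from scipy.optimize import linprog

c = np.array([-3.0, 2.0, -4.0])            # maximize 3x - 2y + 4z  ->  minimize its negation
A_ub = np.array([[-4.0, -3.0, 1.0],        # 4x + 3y - z >= 3  ->  -4x - 3y + z <= -3
                 [ 1.0,  2.0, 1.0]])       # x + 2y + z <= 4
b_ub = np.array([-3.0, 4.0])

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * 3, method="highs")
print(res.x, -res.fun)                     # should report roughly (1.4, 0, 2.6) and 14.6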

Linear Programming. Objective function: max z^T x, subject to the constraints A x ≤ y.

Simplex Method - Intuition. Objective: min C = 3x + 4y. Constraints: 3x - 4y ≤ 12, x + 2y ≥ 4, x ≥ 1, y ≥ 0.

Simplex Method - Intuition. max z^T x subject to A x ≤ y. Worst case: exponential time. Average case: polynomial time. Widely used in practice.

Shadow Vertices

Shadow vertex pivot rule (figure: the polytope's two-dimensional shadow, with the start vertex and the objective direction z marked).
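As a hedged illustration not drawn from the slides: a "shadow" is the projection of the polytope onto a two-dimensional plane, typically the plane spanned by the objective direction and a direction that selects the start vertex, and the shadow-vertex rule walks along the boundary of the resulting polygon. The numpy/scipy sketch below projects a stand-in set of vertices and takes the convex hull of the projections.

# Illustrative only: compute the 2-D "shadow" of a polytope given by a list of vertices.
import numpy as np
from scipy.spatial import ConvexHull

rng = np.random.default_rng(0)
vertices = rng.standard_normal((50, 5))   # stand-in points; their convex hull plays the polytope

objective = rng.standard_normal(5)        # direction being maximized
start = rng.standard_normal(5)            # direction that selects the start vertex

# Orthonormal basis for span{objective, start}; the convex hull of the projected vertices
# is the shadow polygon whose edges the shadow-vertex simplex walks.
Q, _ = np.linalg.qr(np.column_stack([objective, start]))
shadow = ConvexHull(vertices @ Q)
print("the shadow polygon has", len(shadow.vertices), "vertices")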

Complexity Landscape

Complexity Landscape of Perturbed Problem

Some issues. Problem: max z^T x subject to A x ≤ y. The perturbed problem is no longer the original problem we want to solve! Solution: reduce the original problem to another problem on which the perturbation does not affect the answer, namely deciding whether the constraint set A'w ≤ 1 is bounded.
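As a hedged aside (this is not the paper's reduction or method): boundedness of a polytope of the form {w : A'w ≤ 1}, which always contains w = 0, can be tested directly by maximizing and minimizing each coordinate over it; the polytope is bounded exactly when all of these values are finite. A small SciPy sketch:

# Illustrative only: test boundedness of {w : A'w <= 1} with 2n small LPs.
import numpy as np
from scipy.optimize import linprog

def polytope_is_bounded(A_prime):
    m, n = A_prime.shape
    b = np.ones(m)
    for i in range(n):
        for sign in (+1.0, -1.0):
            c = np.zeros(n)
            c[i] = sign                     # minimize +w_i and -w_i, i.e. find max/min of w_i
            res = linprog(c, A_ub=A_prime, b_ub=b,
                          bounds=[(None, None)] * n, method="highs")
            if res.status == 3:             # 3 = "problem appears to be unbounded" in SciPy
                return False
    return True

print(polytope_is_bounded(np.array([[1.0, 0.0], [-1.0, 0.0], [0.0, 1.0], [0.0, -1.0]])))  # True: a box
print(polytope_is_bounded(np.array([[1.0, 0.0], [0.0, 1.0]])))                            # False: unbounded set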

Intuition of the Algorithm. Since the right-hand side does not affect the answer, we can choose it carefully so that the shadow-vertex simplex runs in polynomial time with high probability.

Intuition of the Proof. Let P be the polytope defined by A'w ≤ 1. Case 1: the polytope P is in k-near-isotropic position.

k-near-isotropic position

Intuition of the Proof. Case 1: the polytope is in k-near-isotropic position. Case 2: the polytope is not in k-near-isotropic position.

k-near-isotropic Case. Upper bound the total shadow length (the shadow size). Lower bound the expected length of each edge. Conclude that the number of edges of the shadow is polynomial in the input size w.h.p.
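The counting argument that connects these three bullets can be sketched as follows (this is only the shape of the argument; the precise quantities and constants are in the Kelner-Spielman paper):

% Hedged sketch of the edge-counting argument, not the paper's exact statement.
\[
  \mathbb{E}\bigl[\#\text{edges of the shadow}\bigr]
  \;\lesssim\;
  \frac{\text{upper bound on the total shadow length } L}
       {\text{lower bound on the expected edge length } \ell},
\]
% which is polynomial whenever L is at most polynomially large and \ell is at least
% inverse-polynomially small.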

k-near-isotropic position

Randomization. Each entry of the right-hand-side vector is an independent, exponentially distributed random variable (with a prescribed expectation). Project onto a random plane.
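A hedged numpy sketch of the two sources of randomness this slide mentions; the dimensions and the expectation of the exponential variables are placeholders, not values from the paper.

# Illustrative only: an exponentially distributed right-hand side and a random 2-D
# projection plane, the two ingredients of the randomization described above.
import numpy as np

rng = np.random.default_rng(1)
m, n = 40, 5                                   # number of constraints and dimension (made up)

A = rng.standard_normal((m, n))                # stand-in constraint matrix
b = rng.exponential(scale=1.0, size=m)         # i.i.d. exponential right-hand side
                                               # (scale is the expectation; the actual value is in the paper)

# Orthonormal basis of a random 2-D plane onto which the shadow is taken.
plane, _ = np.linalg.qr(rng.standard_normal((n, 2)))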

Non-k-near-isotropic Case. By running the shadow-vertex simplex for a limited amount of time we can either find the optimum, or find a way to eliminate the bad events w.h.p.

Non-k-near-isotropic Case

k-near-isotropic Case. Upper bound the total shadow length. Lower bound the expected length of each edge. Conclude that the number of edges of the shadow is polynomial in the input size w.h.p.

Upper Bound of Shadow Size. A'w ≤ 1 versus A'w ≤ 1 + r.

Shadow Size in Case 1 The expected shadow size is at most:

Upper Bound of Shadow Size

k-near-isotropic Case. Upper bound the total shadow length. Lower bound the expected length of each edge. Conclude that the number of edges of the shadow is polynomial in the input size w.h.p.

Expected Edge Length The Expected Edge Length is at least:

Case 1 Main Theorem The expected number of edges is at most

Non-k-near-isotropic Case. The expected shadow size inside any given ball is small.

Non-k-near-isotropic Case. Upper bound the total shadow length within the given ball. Lower bound the expected length of each edge within the given ball. Conclude that the number of edges of the shadow is polynomial in the input size w.h.p.

Outline of the Algorithm. Run the shadow-vertex simplex on the randomized input. If it finds the optimum, halt; otherwise, transform the coordinates and run the shadow-vertex simplex on the transformed input. The algorithm halts in polynomial time w.h.p.
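A hedged, high-level sketch of this loop in Python-style pseudocode; every helper name below (draw_exponential_rhs, shadow_vertex_simplex, transform_coordinates, poly_bound) is a hypothetical placeholder for a subroutine described in the paper, not a real library call.

# Pseudocode-style sketch only: the helper functions are hypothetical stand-ins for the
# subroutines on the slides, so this is not a runnable implementation of the paper.
def randomized_simplex(A, z, max_rounds=100):
    b = draw_exponential_rhs(A)                 # randomize the right-hand side (see the Randomization slide)
    for _ in range(max_rounds):
        status, x = shadow_vertex_simplex(A, b, z, step_limit=poly_bound(A))
        if status == "optimal":
            return x                            # the optimum of the randomized problem was found
        # The limited run did not finish: use what it revealed to rule out the bad event
        # (e.g. move the polytope toward near-isotropic position) and try again.
        A, b = transform_coordinates(A, b)
    raise RuntimeError("round budget exceeded (happens only with small probability)")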

Summary and Intuitions. Deterministic simplex algorithms take exponential time on some "bad" inputs. Introducing some randomness into the algorithm fixes the problem: the randomized algorithm runs in polynomial time on every input with high probability. Start with a restricted setting in which the polynomial bound is easy to prove, then eliminate the bad events in polynomial time.