
1 Smoothed Analysis of Algorithms: Why The Simplex Method Usually Takes Polynomial Time. Shang-Hua Teng, Boston University/Akamai. Joint work with Daniel Spielman (MIT). Gaussian perturbation with variance σ².

2 Remarkable Algorithms and Heuristics: they work well in practice, but the worst case is bad (exponential, contrived), and the average case is good (polynomial) but is it meaningful?

3 Random is not typical

4 Smoothed Analysis of Algorithms: worst case: max_x T(x); average case: avg_r T(r); smoothed complexity: max_x avg_r T(x + σr).
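A minimal numerical sketch of these three quantities for a toy cost function (the names smoothed_cost and T below are illustrative, not from the talk; the perturbation is Gaussian with standard deviation sigma):

```python
import numpy as np

def smoothed_cost(T, inputs, sigma, trials=1000, rng=None):
    """Estimate max_x avg_r T(x + sigma*r) over a finite set of candidate inputs.

    T      -- cost function on a numpy vector (toy stand-in for running time)
    inputs -- candidate input vectors (the "max" is taken over these)
    sigma  -- standard deviation of the Gaussian perturbation
    """
    rng = np.random.default_rng() if rng is None else rng
    worst = -np.inf
    for x in inputs:
        x = np.asarray(x, dtype=float)
        # average the cost over Gaussian perturbations of x
        perturbed = x + sigma * rng.standard_normal((trials, x.size))
        avg = np.mean([T(p) for p in perturbed])
        worst = max(worst, avg)
    return worst

# toy example: the cost spikes near the origin, so the worst case
# is much larger than the smoothed cost for any sigma > 0
T = lambda v: 1.0 / (0.01 + np.linalg.norm(v))
candidates = [np.zeros(2), np.ones(2)]
print(smoothed_cost(T, candidates, sigma=0.5))
```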

5 Smoothed Analysis of Algorithms: interpolate between worst case and average case. Consider the neighborhood of every input instance; if the smoothed complexity is low, one has to be unlucky to hit a bad input instance.

6 Complexity Landscape

7 Classical Example: Simplex Method for Linear Programming. max z^T x s.t. A x ≤ y. Worst case: exponential. Average case: polynomial. Widely used in practice.

8 The Diet Problem: minimize the cost of a diet of bread, yogurt, and peanut butter (x_1, x_2, x_3 servings) subject to meeting the US RDA minimums for carbs, protein, fat, and iron, with x_1, x_2, x_3 ≥ 0. (Slide shows a table of nutrient contents and per-serving costs in cents, and the resulting linear program.)

9 Linear Programming: max z^T x s.t. A x ≤ y. Example: max x_1 + x_2 s.t. x_1 ≤ 1, x_2 ≤ 1, -x_1 - 2x_2 ≤ 1.
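The small example above can be checked with an off-the-shelf LP solver; this is a sketch using scipy.optimize.linprog (which minimizes, so the objective is negated):

```python
from scipy.optimize import linprog

# max x1 + x2  s.t.  x1 <= 1,  x2 <= 1,  -x1 - 2*x2 <= 1
c = [-1, -1]                        # linprog minimizes, so negate the objective
A_ub = [[1, 0], [0, 1], [-1, -2]]
b_ub = [1, 1, 1]
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(None, None)] * 2)
print(res.x, -res.fun)              # optimum at x1 = x2 = 1, value 2
```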

10 Smoothed Analysis of Simplex Method: perturb max z^T x s.t. A x ≤ y into max z^T x s.t. (A + σG) x ≤ y, where G is Gaussian.

11 Smoothed Analysis of Simplex Method: max z^T x s.t. a_i^T x ≤ 1, ||a_i|| ≤ 1, perturbed to max z^T x s.t. (a_i + σg_i)^T x ≤ 1. Worst case: exponential. Average case: polynomial. Smoothed complexity: polynomial.
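A sketch of the perturbation model on a concrete instance (the numbers below are illustrative, not from the talk): each entry of A receives independent Gaussian noise of standard deviation sigma before the LP is solved.

```python
import numpy as np
from scipy.optimize import linprog

def solve_perturbed_lp(z, A, y, sigma, rng=None):
    """Solve max z^T x s.t. (A + sigma*G) x <= y, with G i.i.d. standard Gaussian."""
    rng = np.random.default_rng() if rng is None else rng
    G = rng.standard_normal(A.shape)
    return linprog(-z, A_ub=A + sigma * G, b_ub=y,
                   bounds=[(None, None)] * A.shape[1])

# tiny instance (same shape as the example on slide 9)
z = np.array([1.0, 1.0])
A = np.array([[1.0, 0.0], [0.0, 1.0], [-1.0, -2.0]])
y = np.array([1.0, 1.0, 1.0])
print(solve_perturbed_lp(z, A, y, sigma=0.05).x)
```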

12 Perturbation yields approximation, for polytopes of good aspect ratio.

13 But, combinatorially

14 The Simplex Method

15 History of Linear Programming: Simplex Method (Dantzig '47); exponential worst case (Klee-Minty '72); average-case analyses (Borgwardt '77, Smale '82, Haimovich, Adler, Megiddo, Shamir, Karp, Todd); Ellipsoid Method (Khachiyan '79); Interior-Point Method (Karmarkar '84); randomized simplex methods in m^O(√d) time (Kalai '92, Matoušek-Sharir-Welzl '92).

16 Shadow Vertices

17 Another shadow

18 Shadow vertex pivot rule (figure: start vertex and objective z).

19 Theorem: For every plane, the expected size of the shadow of the perturbed polytope is poly(m, d, 1/σ).
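One way to see a shadow numerically (a sketch, not the method used in the proof): sweep objectives through the plane spanned by two directions u and v, solve each LP, and count the distinct optimal vertices; these are exactly the vertices of the shadow polygon.

```python
import numpy as np
from scipy.optimize import linprog

def shadow_vertex_count(A, y, u, v, N=360):
    """Estimate the number of vertices of the projection (shadow) of
    {x : A x <= y} onto span(u, v) by sweeping N objectives in that plane
    and counting the distinct LP optima."""
    opts = set()
    for t in np.linspace(0, 2 * np.pi, N, endpoint=False):
        c = np.cos(t) * u + np.sin(t) * v
        res = linprog(-c, A_ub=A, b_ub=y, bounds=[(None, None)] * A.shape[1])
        if res.success:
            opts.add(tuple(np.round(res.x, 6)))
    return len(opts)

# a random unit-row instance, then a Gaussian perturbation of its rows
rng = np.random.default_rng(0)
d, m, sigma = 3, 20, 0.1
A = rng.standard_normal((m, d))
A /= np.linalg.norm(A, axis=1, keepdims=True)
y = np.ones(m)
u, v = np.eye(d)[0], np.eye(d)[1]
print(shadow_vertex_count(A + sigma * rng.standard_normal((m, d)), y, u, v))
```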

20 Theorem: For every z, the two-phase algorithm runs in expected time poly(m, d, 1/σ).

21 A local condition for optimality: the vertex determined by a_1, …, a_d maximizes z iff z ∈ cone(a_1, …, a_d).
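A numerical test of this condition (a sketch; in_cone is an illustrative helper, not from the talk): ask for nonnegative coefficients expressing z in the a_i via nonnegative least squares and check that the residual is essentially zero.

```python
import numpy as np
from scipy.optimize import nnls

def in_cone(z, A_cols, tol=1e-9):
    """True if z is (numerically) a nonnegative combination of the columns of A_cols."""
    coeffs, residual = nnls(A_cols, z)
    return residual <= tol * max(1.0, np.linalg.norm(z))

a1, a2 = np.array([1.0, 0.0]), np.array([1.0, 1.0])
print(in_cone(a1 + 2 * a2, np.column_stack([a1, a2])))            # True: inside the cone
print(in_cone(np.array([-1.0, 0.0]), np.column_stack([a1, a2])))  # False: outside the cone
```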

22 Primal: a_1^T x ≤ 1, a_2^T x ≤ 1, …, a_m^T x ≤ 1. Polar: ConvexHull(a_1, a_2, …, a_m).

23

24 Polar Linear Program: max λ s.t. λz ∈ ConvexHull(a_1, a_2, …, a_m).
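A sketch of the polar LP written as an ordinary linear program, assuming the reading above (the variables are λ together with convex-combination weights μ_i):

```python
import numpy as np
from scipy.optimize import linprog

def polar_lp(z, points):
    """Solve max lambda s.t. lambda * z in ConvexHull(points).
    Variables: (lambda, mu_1, ..., mu_m), where mu is a convex combination."""
    m, d = points.shape
    c = np.zeros(m + 1)
    c[0] = -1.0                                   # maximize lambda
    # equalities: sum_i mu_i * a_i - lambda * z = 0  and  sum_i mu_i = 1
    A_eq = np.zeros((d + 1, m + 1))
    A_eq[:d, 0] = -z
    A_eq[:d, 1:] = points.T
    A_eq[d, 1:] = 1.0
    b_eq = np.zeros(d + 1)
    b_eq[d] = 1.0
    bounds = [(None, None)] + [(0, None)] * m     # lambda free, mu >= 0
    res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
    return res.x[0] if res.success else None

rng = np.random.default_rng(1)
a = rng.standard_normal((10, 2))                  # illustrative point set
print(polar_lp(np.array([1.0, 0.0]), a))
```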

25 Initial simplex and optimal simplex (figure).

26 Shadow vertex pivot rule

27

28 Count facets by discretizing to N directions, N → ∞.

29 Count pairs of consecutive directions that land in different facets: if Pr[different facets] < c/N, then the expected number of facets is at most c.
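This counting argument can be replayed numerically in 2D (a sketch, not part of the talk): discretize the circle into N directions, record which point maximizes each direction, and count how often consecutive directions switch maximizers; for large N that count equals the number of hull vertices.

```python
import numpy as np

rng = np.random.default_rng(2)
points = rng.standard_normal((50, 2))     # Gaussian point set; its hull plays the polar role
N = 10_000
thetas = np.linspace(0, 2 * np.pi, N, endpoint=False)
dirs = np.column_stack([np.cos(thetas), np.sin(thetas)])

# for each direction, the index of the point maximizing the inner product
winners = np.argmax(dirs @ points.T, axis=1)

# count consecutive direction pairs whose maximizers differ (wrapping around the circle)
switches = np.count_nonzero(winners != np.roll(winners, -1))
print(switches)   # equals the number of hull vertices once N is large enough
```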

30 Expect a cone of large angle.

31 Angle Distance

32 Isolate on one Simplex

33 Integral Formulation

34 Example: For Gaussian distributed points a and b, conditioned on the segment ab intersecting the x-axis, Prob[θ < ε] = O(ε²), where θ is the angle the segment makes with the x-axis.
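A Monte Carlo check of this O(ε²) behavior, assuming θ is the angle the segment ab makes with the x-axis (an illustrative reading of the slide's figure):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 1_000_000
a = rng.standard_normal((n, 2))
b = rng.standard_normal((n, 2))

# keep only pairs whose segment ab crosses the x-axis (opposite-sign y-coordinates)
crosses = a[:, 1] * b[:, 1] < 0
d = b[crosses] - a[crosses]

# angle between the segment and the x-axis, in (0, pi/2]
theta = np.arctan2(np.abs(d[:, 1]), np.abs(d[:, 0]))

for eps in (0.2, 0.1, 0.05):
    p = np.mean(theta < eps)
    print(eps, p, p / eps**2)   # the ratio p / eps^2 stays roughly constant
```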

35–40 (figures only: the points a and b in successive positions).

41 Claim: Pr[θ < ε] = O(ε²).

42 Change of variables: (a, b) → (z, u, v, θ); da db = |(u+v) sin θ| du dv dz dθ.
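A numerical check of this Jacobian, assuming the parametrization z = crossing point on the x-axis, u and v = distances from it to a and b, and θ = the segment's angle (an assumption about the slide's figure):

```python
import numpy as np

def ab(params):
    """Map (z, u, v, theta) to the stacked pair (a, b): z is the crossing point
    on the x-axis, u and v the distances from it to a and b, theta the angle."""
    z, u, v, t = params
    a = np.array([z + u * np.cos(t), u * np.sin(t)])
    b = np.array([z - v * np.cos(t), -v * np.sin(t)])
    return np.concatenate([a, b])

def numerical_jacobian(f, x, h=1e-6):
    """Central-difference Jacobian of f at x."""
    x = np.asarray(x, dtype=float)
    J = np.zeros((len(f(x)), len(x)))
    for j in range(len(x)):
        e = np.zeros_like(x)
        e[j] = h
        J[:, j] = (f(x + e) - f(x - e)) / (2 * h)
    return J

z, u, v, theta = 0.3, 1.2, 0.7, 0.9
J = numerical_jacobian(ab, [z, u, v, theta])
print(abs(np.linalg.det(J)), (u + v) * np.sin(theta))   # the two values agree
```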

43 Analysis of Pr[θ < ε] = O(ε²): a slight change in θ has little effect for all but very rare u, v, z.

44 Distance: Gaussian distributed corners a_1, a_2, a_3 and a point p (figure).

45 Idea: fix by perturbation

46 Trickier in 3d

47 Future Research – Simplex Method: smoothed analysis of other pivot rules; analysis under relative perturbations; tracing solutions as the perturbation is removed; a strongly polynomial algorithm for linear programming?

48 A Theory Closer to Practice: optimization algorithms and heuristics such as Newton's Method, Conjugate Gradient, Simulated Annealing, Differential Evolution, etc.; computational geometry, scientific computing, and numerical analysis; heuristics solving instances of NP-hard problems; discrete problems? Shrink the intuition gap between theory and practice.