1 The Smoothed Analysis of Algorithms: Simplex Methods and Beyond Shang-Hua Teng Boston University/Akamai Joint work with Daniel Spielman (MIT)
2 Outline: Why; What; Simplex Method; Condition Numbers/Gaussian Elimination; Numerical Analysis; Conjectures and Open Problems
3 Motivation for Smoothed Analysis: wonderful algorithms and heuristics work well in practice, but their performance cannot be understood through traditional analyses. Worst-case analysis: if good, is wonderful; but it is often exponential for these heuristics, and it examines the most contrived inputs. Average-case analysis: may be good, but it studies a very special class of inputs; is it meaningful?
4 Random is not typical
5 Analyses of Algorithms: worst case: max_x T(x); average case: E_r[T(r)]; smoothed complexity: max_x E_r[T(x + r)]
6 Instance of the smoothed framework: x is a real n-vector; r is a Gaussian random vector of variance σ²; measure smoothed complexity as a function of n and σ
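A toy Monte Carlo sketch of the three measures above; the cost function T is a hypothetical stand-in for an algorithm's running time, not anything from the talk:

```python
# Toy Monte Carlo sketch of worst-case, average-case, and smoothed measures.
# T is a hypothetical stand-in for an algorithm's running time.
import numpy as np

rng = np.random.default_rng(0)

def T(x):
    # Hypothetical cost: expensive when x is nearly sorted, cheap otherwise.
    return int(np.sum(np.sort(x) == x)) ** 2 + len(x)

n, sigma, trials = 20, 0.1, 200
base_inputs = [np.sort(rng.uniform(size=n)), rng.uniform(size=n)]

worst = max(T(x) for x in base_inputs)                    # max_x T(x)
average = np.mean([T(rng.normal(size=n)) for _ in range(trials)])  # E_r T(r)
smoothed = max(                                           # max_x E_r T(x + r)
    np.mean([T(x + sigma * rng.normal(size=n)) for _ in range(trials)])
    for x in base_inputs)
print(worst, average, smoothed)
```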
7 Complexity Landscape (figure: run time plotted over the input space)
8 Complexity Landscape (figure: the worst case marked at the peak of the landscape)
9 Complexity Landscape (figure: worst case and average case marked on the landscape)
10 Smoothed Complexity Landscape (figure: run time over the input space after local smoothing)
11 Smoothed Complexity Landscape (figure: the smoothed complexity marked at the peak of the smoothed landscape)
12 Smoothed Analysis of Algorithms: interpolate between worst case and average case by considering the neighborhood of every input instance. If the smoothed complexity is low, one has to be unlucky to find a bad input instance.
13 Motivating Example: Simplex Method for Linear Programming. max zᵀx s.t. Ax ≤ y. Worst case: exponential. Average case: polynomial. Widely used in practice.
14 The Diet Problem: choose quantities x₁, x₂, x₃ of foods (peanut butter at 20¢/tsp, yogurt at 80¢/cup, bread at 30¢/slice) that meet the US RDA minimums for iron, fat, protein, and carbs at minimum cost: minimize the total cost subject to one constraint per nutrient (e.g., 5x₁ + 9x₂ + 8x₃ ≥ its RDA minimum) and x₁, x₂, x₃ ≥ 0.
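A minimal sketch of solving such a diet LP with SciPy; since only the prices and one constraint row survive above, the nutrient matrix and RDA values below are hypothetical stand-ins:

```python
# Hedged sketch of the diet LP. The nutrient contents N and minimums rda
# are hypothetical; only the prices and one row are given on the slide.
from scipy.optimize import linprog

c = [20, 80, 30]            # cents: peanut butter/tsp, yogurt/cup, bread/slice
N = [[5, 9, 8],             # nutrient content per unit of each food
     [10, 2, 1],            # (hypothetical values)
     [4, 8, 3],
     [30, 10, 15]]
rda = [300, 70, 100, 100]   # hypothetical RDA minimums per nutrient

# linprog minimizes c^T x s.t. A_ub @ x <= b_ub; flip signs to get N x >= rda.
res = linprog(c, A_ub=[[-v for v in row] for row in N],
              b_ub=[-r for r in rda], bounds=[(0, None)] * 3)
print(res.x, res.fun)
```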
15 The Simplex Method (figure: pivot path from start to opt along the polytope)
16 History of Linear Programming: Simplex Method (Dantzig ‘47); exponential worst case (Klee-Minty ‘72); average-case analysis (Borgwardt ‘77, Smale ‘82, Haimovich, Adler, Megiddo, Shamir, Karp, Todd); Ellipsoid Method (Khachiyan ‘79); Interior-Point Method (Karmarkar ‘84); Randomized Simplex Method, m^O(√d) (Kalai ‘92, Matoušek-Sharir-Welzl ‘92)
17 Smoothed Analysis of Simplex Method: max zᵀx s.t. Ax ≤ y becomes max zᵀx s.t. (A + G)x ≤ y, where G is a Gaussian matrix of variance σ². Theorem [Spielman-Teng 01]: for all A, the simplex method (with the shadow-vertex pivot rule) takes expected time polynomial in m, d, and 1/σ.
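A hedged experimental sketch of this setup: perturb A by a Gaussian of standard deviation σ and record the iteration count of SciPy's dual simplex. SciPy uses its own pivot rule, not shadow-vertex, and the base instance is an arbitrary assumption, so this only illustrates the experiment, not the proof:

```python
# Smoothed-setup sketch: Gaussian-perturb the constraint matrix and record
# simplex iteration counts from SciPy's dual simplex (method='highs-ds').
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(1)
m, d, sigma = 30, 10, 0.1
A = rng.uniform(-1, 1, size=(m, d))   # arbitrary base instance (assumption)
y = np.ones(m)
z = rng.uniform(size=d)

iters = []
for _ in range(20):
    G = sigma * rng.normal(size=(m, d))        # the Gaussian perturbation
    res = linprog(-z, A_ub=A + G, b_ub=y,      # max z^T x == min -z^T x
                  bounds=[(-10, 10)] * d, method='highs-ds')
    if res.status == 0:
        iters.append(res.nit)
print("mean simplex iterations:", np.mean(iters))
```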
18 Shadow Vertices
19 Another shadow
20 Shadow vertex pivot rule (figure: pivot path from start, guided by the objective z)
21 Theorem: For every plane, the expected size of the shadow of the perturbed polytope is poly(m, d, 1/σ)
22 Polar Linear Program: max α s.t. α z ∈ ConvexHull(a₁, a₂, ..., a_m)
23 Initial simplex and optimal simplex (figure)
24 Shadow vertex pivot rule
26 Count facets by discretizing to N directions, letting N → ∞
27 Count pairs of adjacent directions that land in different facets: Pr[different facets] < c/N, so the expected number of facets crossed is at most N · (c/N) = c
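A sketch of this counting argument: sweep N directions in a fixed 2-plane, solve the LP for each, and count adjacent pairs optimized at different vertices. The LP instance and the choice of plane are arbitrary assumptions, not from the talk:

```python
# Discretization sketch: sweep N directions in a 2-plane; the number of
# adjacent direction pairs whose optima differ counts the shadow vertices.
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(2)
m, d, N = 40, 6, 400
A = rng.normal(size=(m, d))           # Gaussian rows: a fully perturbed instance
u, v = np.linalg.qr(rng.normal(size=(d, 2)))[0].T   # orthonormal basis of a 2-plane

opts = []
for k in range(N):
    theta = 2 * np.pi * k / N
    c = np.cos(theta) * u + np.sin(theta) * v       # k-th discretized direction
    res = linprog(-c, A_ub=A, b_ub=np.ones(m), bounds=[(None, None)] * d)
    opts.append(tuple(np.round(res.x, 6)) if res.status == 0 else None)

# Adjacent directions landing at different vertices mark facet crossings.
changes = sum(opts[k] != opts[k - 1] for k in range(N))
print("shadow vertices seen:", changes)
```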
28 Expect cone of large angle
29 Intuition for Smoothed Analysis of Simplex Method: after perturbation, “most” corners have angle bounded away from flat (most: in some appropriate measure; angle: measured by the condition number of the defining matrix)
30 Condition number at a corner: the corner is given by Cx = b. The condition number measures the sensitivity of x to changes in C and b, and 1/‖C⁻¹‖ is the distance of C to the nearest singular matrix.
31 Condition number at a corner: the corner is given by Cx = b; the condition number is κ(C) = ‖C‖ ‖C⁻¹‖.
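A quick numerical check of these two facts with NumPy: κ(C) bounds the relative sensitivity of the corner x, and the smallest singular value of C is its distance to singularity:

```python
# Check: kappa(C) = ||C||*||C^{-1}|| bounds the sensitivity of x solving
# C x = b, and sigma_min(C) is the distance from C to a singular matrix.
import numpy as np

rng = np.random.default_rng(3)
d = 5
C = rng.normal(size=(d, d))
b = rng.normal(size=d)

x = np.linalg.solve(C, b)
kappa = np.linalg.cond(C)
dist_to_singular = np.linalg.svd(C, compute_uv=False)[-1]   # sigma_min(C)

dC = 1e-8 * rng.normal(size=(d, d))   # tiny perturbation of the corner's matrix
x2 = np.linalg.solve(C + dC, b)
amplification = (np.linalg.norm(x2 - x) / np.linalg.norm(x)) / \
                (np.linalg.norm(dC, 2) / np.linalg.norm(C, 2))
print(kappa, dist_to_singular, amplification)   # amplification <~ kappa
```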
32 Connection to Numerical Analysis: measure the performance of algorithms in terms of the condition number of the input. Average-case framework of Smale: 1. Bound the running time of an algorithm solving a problem in terms of its condition number. 2. Prove it is unlikely that a random problem instance has a large condition number.
33 Connection to Numerical Analysis: measure the performance of algorithms in terms of the condition number of the input. Smoothed suggestion: 1. Bound the running time of an algorithm solving a problem in terms of its condition number. 2’. Prove it is unlikely that a perturbed problem instance has a large condition number.
34 Condition Number. Edelman ‘88: for a standard Gaussian random matrix, E[log κ(A)] = log n + O(1). Theorem [Sankar-Spielman-Teng 02]: for a Gaussian random matrix with variance σ², centered anywhere, E[log κ(A)] = O(log(n/σ)).
35 Condition Number (continued). Conjecture: the tail bound behind the theorem can be improved to match Edelman’s average case.
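A Monte Carlo sketch of the theorem's setting: center the Gaussian perturbations at a singular matrix, the worst possible center, and observe that κ nevertheless stays moderate. The center and parameters are arbitrary choices:

```python
# Smoothed condition number sketch: perturb a rank-one (singular) center
# with Gaussian noise of variance sigma^2 and watch the tail of kappa.
import numpy as np

rng = np.random.default_rng(4)
n, sigma, trials = 50, 0.01, 200

A_center = np.ones((n, n))        # rank one: kappa(A_center) is infinite
kappas = [np.linalg.cond(A_center + sigma * rng.normal(size=(n, n)))
          for _ in range(trials)]
print(np.median(kappas), np.max(kappas))   # moderate, not infinite
```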
36 Gaussian Elimination: A = LU. Growth factor ρ: the largest entry appearing during elimination relative to the largest entry of A; the precision needed grows with log ρ. With partial pivoting, ρ can be 2ⁿ, requiring Ω(n) bits of precision. For every A, the growth factor of the perturbed matrix is polynomial in n and 1/σ with high probability.
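A sketch contrasting the worst case with the smoothed case, using Wilkinson's classic example that attains growth 2^(n-1) under partial pivoting; ρ here is the simplified ratio max|U|/max|A|:

```python
# Worst case vs. smoothed case for the partial-pivoting growth factor,
# with rho approximated by max|U| / max|A| from the factorization P A = L U.
import numpy as np
from scipy.linalg import lu

def growth_factor(A):
    _, _, U = lu(A)               # LU with partial pivoting
    return np.abs(U).max() / np.abs(A).max()

rng = np.random.default_rng(5)
n, sigma = 60, 0.01

# Wilkinson's example: 1 on the diagonal, -1 below, last column all ones.
W = np.eye(n) - np.tril(np.ones((n, n)), -1)
W[:, -1] = 1.0

print(growth_factor(W))                                    # about 2^(n-1)
print(growth_factor(W + sigma * rng.normal(size=(n, n))))  # modest after perturbation
```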
37 Condition Number and Iterative LP Solvers: Renegar defined a condition number C(A, b, c) for the LP max cᵀx s.t. Ax ≤ b: the reciprocal of the (relative) distance of (A, b, c) to the nearest ill-posed linear program; it is related to the sensitivity of x to changes in (A, b, c). The number of iterations of many LP solvers is bounded by a function of this condition number: Ellipsoid, Perceptron, Interior Point, von Neumann.
38 Smoothed Analysis of Perceptron Algorithm. Theorem [Blum-Dunagan 01]: with probability at least 1 − δ, the perceptron algorithm solves the perturbed linear program in poly(m, d, 1/σ, 1/δ) iterations. Note: this is slightly weaker than a bound on expectation. The bound goes through the “wiggle room” (margin), a condition number.
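A minimal perceptron sketch illustrating the "wiggle room" bound: the classic convergence theorem caps the number of updates at (R/margin)², so data with a healthy margin terminates quickly. The separable data model below is an assumption, not Blum-Dunagan's setting:

```python
# Perceptron sketch: updates are bounded by (R/margin)^2, where the margin
# is the "wiggle room" acting as a condition number.
import numpy as np

def perceptron(X, y, max_iter=100000):
    w = np.zeros(X.shape[1])
    for it in range(max_iter):
        wrong = [(xi, yi) for xi, yi in zip(X, y) if yi * (w @ xi) <= 0]
        if not wrong:
            return w, it                 # all points classified correctly
        xi, yi = wrong[0]
        w = w + yi * xi                  # classic perceptron update
    return w, max_iter

rng = np.random.default_rng(6)
m, d = 200, 10
w_true = rng.normal(size=d)
X = rng.normal(size=(m, d))              # Gaussian data: fully perturbed
y = np.sign(X @ w_true)                  # separable labels by construction

w, iters = perceptron(X, y)
margin = np.min(y * (X @ w_true)) / np.linalg.norm(w_true)
R = np.max(np.linalg.norm(X, axis=1))
print(iters, (R / margin) ** 2)          # iterations <= (R/margin)^2
```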
39 Smoothed Analysis of Renegar’s Condition Number. Theorem [Dunagan-Spielman-Teng 02]: for every (A, b, c), the expectation of log C of the perturbed program is O(log(md/σ)). Corollary: the smoothed complexity of the interior point method is O(√m · log(md/(σε))) iterations for accuracy ε. Compare: the worst-case complexity of IPM is O(√m · L) iterations; note that the bit length L can be much larger than log(md/(σε)).
40 Perturbations of Structured and Sparse Problems: structured perturbations of structured inputs (perturb within the structure); zero-preserving perturbations of sparse inputs (perturb only the non-zero entries); or, perturb the discrete structure…
41 Goals of Smoothed Analysis Relax worst-case analysis Maintain mathematical rigor Provide plausible explanation for practical behavior of algorithms Develop a theory closer to practice
42 Geometry of ‖A⁻¹‖: ‖A⁻¹‖ ≥ x exactly when ‖Av‖ ≤ 1/x for some unit vector v, i.e., 1/‖A⁻¹‖ is the smallest singular value of A
43 Geometry of ‖A⁻¹‖: a union bound over directions loses a factor of d; it should be d^{1/2}
44 For a fixed unit vector v, a lemma bounds Pr[‖Av‖ ≤ 1/x]; applying it to a random v improves the bound toward the d^{1/2} conjecture
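A sketch of the quantity in question: 1/‖A⁻¹‖ = min over unit v of ‖Av‖ = σ_min(A). The experiment below estimates the tail Pr[‖A⁻¹‖ ≥ x] for Gaussian perturbations of a fixed center and compares it against the conjectured √d/(xσ) rate; the center and parameters are arbitrary assumptions:

```python
# Tail of ||A^{-1}|| under perturbation: 1/||A^{-1}|| = sigma_min(A).
# Compare the empirical tail with the conjectured sqrt(d)/(x*sigma) rate.
import numpy as np

rng = np.random.default_rng(7)
d, sigma, trials, x = 30, 0.1, 2000, 500.0

A_center = np.ones((d, d))               # fixed, badly conditioned center
hits = 0
for _ in range(trials):
    A = A_center + sigma * rng.normal(size=(d, d))
    smin = np.linalg.svd(A, compute_uv=False)[-1]   # min_v ||Av|| over unit v
    hits += smin <= 1.0 / x              # i.e. ||A^{-1}|| >= x
print(hits / trials, np.sqrt(d) / (x * sigma))
```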
45 Smoothed Analysis of Renegar’s Condition Number (revisited). Theorem [Dunagan-Spielman-Teng 02]: the expectation of log C of the perturbed program is O(log(md/σ)). Corollary: the smoothed complexity of the interior point method is O(√m · log(md/(σε))) iterations for accuracy ε. Compare: the worst-case bound of O(√m · L) iterations; improving the smoothed bound further remains a conjecture.