Slide 1: The Smoothed Analysis of Algorithms: Simplex Methods and Beyond
Shang-Hua Teng (Boston University / Akamai)
Joint work with Daniel Spielman (MIT)
Slide 2: Outline
- Why: the simplex method
- What: condition numbers / Gaussian elimination
- Numerical analysis
- Conjectures and open problems
Slide 3: Motivation for Smoothed Analysis
There are wonderful algorithms and heuristics that work well in practice, but whose performance cannot be understood through traditional analyses.
- Worst-case analysis: if good, is wonderful. But it is often exponential for these heuristics, and it examines the most contrived inputs.
- Average-case analysis: a very special class of inputs. May be good, but is it meaningful?
Slide 4: Random is not typical
Slide 5: Analyses of Algorithms
- worst case: max_x T(x)
- average case: E_r T(r)
- smoothed complexity: max_x E_g T(x + g)
Slide 6: Instance of the smoothed framework
x is a real n-vector; g is a Gaussian random vector of variance σ². Measure the smoothed complexity, max_x E_g T(x + g), as a function of n and σ.
Slide 7: Complexity Landscape [figure: run time over the input space]
Slide 8: Complexity Landscape [figure: worst case marked]
Slide 9: Complexity Landscape [figure: worst case and average case marked]
Slide 10: Smoothed Complexity Landscape [figure]
Slide 11: Smoothed Complexity Landscape [figure: smoothed complexity marked]
Slide 12: Smoothed Analysis of Algorithms
- Interpolates between worst case and average case.
- Considers the neighborhood of every input instance.
- If the smoothed complexity is low, you have to be unlucky to find a bad input instance.
Slide 13: Motivating Example: Simplex Method for Linear Programming
max z^T x  s.t.  Ax ≤ y
- Worst case: exponential
- Average case: polynomial
- Widely used in practice
Slide 14: The Diet Problem

Food            Serving   Cost  Iron  Fat   Protein  Carbs
bread           1 slice   30¢   10    1.5   5        30
yogurt          1 cup     80¢   0     2.5   9        10
peanut butter   2 tsp     20¢   6     18    8        6
US RDA minimum                  100   70    50       300

Minimize 30x1 + 80x2 + 20x3  (cost in cents)
s.t.  30x1 + 10x2 + 6x3  ≥ 300   (carbs)
      5x1  + 9x2  + 8x3  ≥ 50    (protein)
      1.5x1 + 2.5x2 + 18x3 ≥ 70  (fat)
      10x1 + 6x3 ≥ 100           (iron)
      x1, x2, x3 ≥ 0
Slide 15: The Simplex Method [figure: pivot path from start vertex to opt]
Slide 16: History of Linear Programming
- Simplex method (Dantzig '47)
- Exponential worst case (Klee-Minty '72)
- Average-case analysis (Borgwardt '77, Smale '82, Haimovich, Adler, Megiddo, Shamir, Karp, Todd)
- Ellipsoid method (Khachiyan '79)
- Interior-point method (Karmarkar '84)
- Randomized simplex method, m^O(√d) (Kalai '92, Matoušek-Sharir-Welzl '92)
Slide 17: Smoothed Analysis of the Simplex Method
Original program: max z^T x s.t. Ax ≤ y
Perturbed program: max z^T x s.t. (A + σG)x ≤ y, where G is a Gaussian matrix
Theorem [Spielman-Teng 01]: For all A, the simplex method takes expected time polynomial in m, d, and 1/σ.
Slide 18: Shadow Vertices [figure]
Slide 19: Another shadow [figure]
Slide 20: Shadow vertex pivot rule [figure: objective z, path from start]
Slide 21: Theorem: For every plane, the expected size of the shadow of the perturbed polytope is poly(m, d, 1/σ).
Slide 22: Polar Linear Program
max α  s.t.  αz ∈ ConvexHull(a1, a2, ..., am)
Slide 23: Initial simplex, optimal simplex [figure]
Slide 24: Shadow vertex pivot rule [figure]
Slide 26: Count facets by discretizing into N directions, N → ∞
Slide 27: Count pairs of adjacent directions that optimize in different facets: Pr[different facets] < c/N, so we expect at most c facets.
Slide 28: Expect a cone of large angle [figure]
Slide 29: Intuition for the Smoothed Analysis of the Simplex Method
After perturbation, "most" corners have angle bounded away from flat.
- most: in some appropriate measure
- angle: measured by the condition number of the defining matrix
[figure: path from start to opt]
Slide 30: Condition number at a corner
A corner is given by the solution of Cx = b. Its condition number measures the sensitivity of x to changes in C and b, governed by the distance of C to the nearest singular matrix.
Slide 31: Condition number at a corner
A corner is given by x = C⁻¹b; its condition number is κ(C) = ‖C‖·‖C⁻¹‖.
Slide 32: Connection to Numerical Analysis
Measure the performance of algorithms in terms of the condition number of the input. Average-case framework of Smale:
1. Bound the running time of an algorithm solving a problem in terms of its condition number.
2. Prove it is unlikely that a random problem instance has a large condition number.
Slide 33: Connection to Numerical Analysis
Measure the performance of algorithms in terms of the condition number of the input. Smoothed suggestion:
1. Bound the running time of an algorithm solving a problem in terms of its condition number.
2'. Prove it is unlikely that a perturbed problem instance has a large condition number.
Slide 34: Condition Number
Edelman '88: bound on the condition number of a standard Gaussian random matrix.
Theorem [Sankar-Spielman-Teng 02]: a comparable bound holds for a Gaussian random matrix of variance σ² centered anywhere; κ is then polynomial in n and 1/σ with high probability.
Slide 35: Condition Number
Same bounds as Slide 34, together with a conjectured sharper bound [Sankar-Spielman-Teng 02].
Slide 36: Gaussian Elimination
A = LU. Growth factor: ρ = max |u_ij| / max |a_ij|. With partial pivoting, ρ can be 2^(n-1), so Ω(n) bits of precision are needed in the worst case. For every A, …
Slide 37: Condition Number and Iterative LP Solvers
Renegar defined a condition number for the linear program max c^T x s.t. Ax ≤ b:
- the distance of (A, b, c) to an ill-posed linear program,
- related to the sensitivity of x to changes in (A, b, c).
The number of iterations of many LP solvers is bounded by a function of this condition number: ellipsoid, perceptron, interior-point, von Neumann.
Slide 38: Smoothed Analysis of the Perceptron Algorithm
Theorem [Blum-Dunagan 01]: a smoothed polynomial bound for the perceptron algorithm, proved through its "wiggle room", a condition number.
Note: the result is slightly weaker than a bound on expectation.
Slide 39: Smoothed Analysis of Renegar's Condition Number
Theorem [Dunagan-Spielman-Teng 02]: the expected logarithm of Renegar's condition number of the perturbed program is small (logarithmic in m, d, and 1/σ).
Corollary: the smoothed complexity of the interior-point method is polynomial, with iteration count depending on log(1/ε) for accuracy ε.
Compare: the worst-case iteration count of IPM depends on the bit length of the input.
Slide 40: Perturbations of Structured and Sparse Problems
- Structured perturbations of structured inputs: perturb within the structured class.
- Zero-preserving perturbations of sparse inputs: perturb only the non-zero entries.
- Or, perturb the discrete structure…
Slide 41: Goals of Smoothed Analysis
- Relax worst-case analysis while maintaining mathematical rigor.
- Provide a plausible explanation for the practical behavior of algorithms.
- Develop a theory closer to practice.
http://math.mit.edu/~spielman/SmoothedAnalysis
Slide 42: Geometry of the bound [equations/figure]
Slide 43: Geometry of the bound: by a union bound, the factor should be d^(1/2) [equations]
Slide 44: Improving the bound toward the conjecture: apply the lemma to a random vector [equations]
Slide 45: Smoothed Analysis of Renegar's Condition Number (repeats Slide 39, with the conjectured bound)