1 Smoothed Analysis of Algorithms Shang-Hua Teng Boston University Akamai Technologies Inc Joint work with Daniel Spielman (MIT)
2 Outline Part I: Introduction to Algorithms Part II: Smoothed Analysis of Algorithms Part III: Geometric Perturbation
3 Part I: Introduction to Algorithms Type of Problems Complexity of Algorithms Randomized Algorithms Approximation Worst-case analysis Average-case analysis
4 Algorithmic Problems Decision Problem: –Can we 3-color a given graph G? Search Problem: –Given a matrix A and a vector b, find an x such that Ax = b Optimization Problem: –Given a matrix A, a vector b, and an objective vector c, find an x that maximizes c^T x subject to Ax ≤ b
5 The size and family of a problem Instance of a problem –Input: for example, a graph, a matrix, a set of points –Desired output: {yes, no}, a coloring, a solution vector, a convex hull –Input and output size: the amount of memory needed to store the input and output; for example, the number of vertices in a graph, the dimensions of a matrix, the cardinality of a point set A problem is a family of instances.
6 An Example Median of a set of numbers Input: a set of numbers {a_1, a_2, …, a_n} Output: the element a_i of rank ⌈n/2⌉, i.e., the median
7 Quick Selection Quick_Selection({a_1, a_2, …, a_n}, k) 1. Choose a_1 as the pivot 2. Divide the remaining elements into L = {a_j : a_j < a_1} and R = {a_j : a_j > a_1} 3. Cases: 1. If k = |L| + 1, return a_1; 2. If k ≤ |L|, recursively apply Quick_Selection(L, k); 3. If k > |L| + 1, recursively apply Quick_Selection(R, k − |L| − 1)
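A minimal Python sketch of the procedure above, assuming k is 1-based and the first element serves as the pivot; L and R are the elements smaller and larger than the pivot, which is what the "Divide the set into" step refers to.

```python
def quick_selection(a, k):
    """Return the k-th smallest element (1-based) of the list a."""
    pivot = a[0]                          # step 1: choose a_1 as the pivot
    L = [x for x in a[1:] if x < pivot]   # elements smaller than the pivot
    R = [x for x in a[1:] if x >= pivot]  # remaining elements
    if k == len(L) + 1:                   # the pivot is exactly the k-th smallest
        return pivot
    elif k <= len(L):                     # the answer lies among the smaller elements
        return quick_selection(L, k)
    else:                                 # the answer lies among the larger elements
        return quick_selection(R, k - len(L) - 1)

print(quick_selection([3, 1, 4, 1, 5, 9, 2, 6, 7], 5))  # 5th smallest -> 4
```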
8 Worst-Case Time Complexity Let T({a_1, a_2, …, a_n}) be the number of basic steps Quick Selection needs on input {a_1, a_2, …, a_n}. We classify inputs by their size: let A_n be the set of all inputs of size n, and let T(n) be the maximum of T over A_n. In the worst case, T(n) = n² − n.
9 Better Algorithms from the Worst-Case Viewpoint Divide and Conquer –Linear-time algorithm –Blum-Floyd-Pratt-Rivest-Tarjan
10 Average-Case Time Complexity Let A_n be the set of all inputs of size n. Choose {a_1, a_2, …, a_n} uniformly at random; then E(T(n)) = O(n).
11 Randomized Algorithms Quick_Selection({a_1, a_2, …, a_n}, k) 1. Choose a random element s in {a_1, a_2, …, a_n} as the pivot 2. Divide the remaining elements into L = {a_j : a_j < s} and R = {a_j : a_j > s} 3. Cases: 1. If k = |L| + 1, return s; 2. If k ≤ |L|, recursively apply Quick_Selection(L, k); 3. If k > |L| + 1, recursively apply Quick_Selection(R, k − |L| − 1)
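The randomized version changes only the pivot choice. A hedged sketch (the three-way split is my addition to handle repeated values):

```python
import random

def randomized_selection(a, k):
    """Quick selection with a uniformly random pivot; expected O(n) time."""
    pivot = random.choice(a)             # step 1: pick a random element s
    L = [x for x in a if x < pivot]
    E = [x for x in a if x == pivot]     # copies of the pivot value
    if k <= len(L):
        return randomized_selection(L, k)
    elif k <= len(L) + len(E):
        return pivot
    else:
        R = [x for x in a if x > pivot]
        return randomized_selection(R, k - len(L) - len(E))
```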
12 Expected Worst-Case Complexity of Randomized Algorithms E(T(n)) = O(n), where the expectation is over the algorithm's random choices, for every input.
13 Approximation Algorithms Sampling-Selection({a_1, a_2, …, a_n}) 1. Choose a random element a_i from {a_1, a_2, …, a_n} 2. Return a_i. a_i is a δ-median if its rank lies between δn and (1 − δ)n. Prob[a_i is a (1/4)-median] = 0.5
14 Approximation Algorithms Sampling-Selection({a_1, a_2, …, a_n}) 1. Choose a set S of k random elements from {a_1, …, a_n} 2. Return the median a_i of S. Complexity: O(k)
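A sketch of the sampling idea in Python; the function name and the sorting of the sample are my choices, not from the slides.

```python
import random

def sampling_selection(a, k):
    """Approximate median: the median of k elements sampled uniformly at random."""
    sample = random.sample(a, k)      # work depends on k, not on n
    sample.sort()                     # sorting the sample costs O(k log k)
    return sample[len(sample) // 2]

# With modest k the result is a delta-median of a with high probability.
approx = sampling_selection(list(range(1000)), k=31)
```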
15 When k = 3
16 Iterative Middle-of-3 (Miller-Teng) Randomly assign the elements of {a_1, a_2, …, a_n} to the leaves
17 Summary Algorithms and their complexity Worst-case complexity Average-case complexity Design better worst-case algorithms Design better algorithms with randomization Design faster algorithms with approximation
18 Sad and Exciting Reality Most interesting optimization problems are hard P vs NP NP-complete problems –Coloring, maximum independent set, graph partitioning –Scheduling, optimal VLSI layout, optimal web-traffic assignment, data mining and clustering, optimal DNS and TCP/IP protocols, integer programming… Some are of unknown status: –Graph isomorphism –Factorization
19 Good News Some fundamental problems are solvable in polynomial time –Sorting, selection, low-dimensional computational geometry –Matrix problems Eigenvalue problems Linear systems –Linear programming (interior-point method) –Mathematical programming
20 Better News I Randomization helps –Primality testing (essential to RSA) –VC-dimension and sampling for computational geometry and machine learning –Random walks for various statistical problems –Quicksort –Random routing on parallel networks –Hashing
21 Better News II Approximation algorithms –On-line scheduling –Lattice basis reduction (e.g. in cryptanalysis) –Approximate Euclidean TSP and Steiner trees –Graph partitioning –Data clustering –Divide-and-conquer method for VLSI layout
22 Real Stories Practical algorithms and heuristics –Great performance empirically –Used daily by millions and millions of people –Work routinely, from chip design to airline scheduling Applications –Internet routing and searching –Scientific simulation –Optimization
23 PART II: Smoothed Analysis of Algorithms Introduction to Smoothed Analysis Why smoothed analysis? Smoothed analysis of the Simplex Method for Linear Programming
24 Smoothed Analysis of Algorithms: Why The Simplex Method Usually Takes Polynomial Time Daniel A. Spielman (MIT) Shang-Hua Teng (Boston University) Gaussian perturbation with variance σ²
25 Remarkable Algorithms and Heuristics Work well in practice, but Worst case: bad, exponential, contrived. Average case: good, polynomial, meaningful?
26 Random is not typical
27 Smoothed Analysis of Algorithms worst case: max_x T(x) average case: avg_r T(r) smoothed complexity: max_x avg_r T(x + σr)
28 Smoothed Analysis of Algorithms Interpolate between worst case and average case. Consider the neighborhood of every input instance. If the smoothed complexity is low, one has to be unlucky to hit a bad input instance.
29 Smoothed Complexity
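The defining formula on this slide was an image; a sketch of the standard Spielman-Teng definition, with A_n the inputs of size n and g a Gaussian perturbation of standard deviation σ (notation assumed, not taken from the slide):

```latex
\mathrm{Smoothed}_A(n, \sigma) \;=\; \max_{x \in A_n} \; \mathbb{E}_{g \sim \mathcal{N}(0,\, \sigma^2 I)} \big[\, T_A(x + g) \,\big]
```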
30 Classical Example: Simplex Method for Linear Programming max z^T x s.t. Ax ≤ y Worst case: exponential Average case: polynomial Widely used in practice
31 The Diet Problem
                      Carbs  Protein  Fat   Iron  Cost
1 slice bread           30      5     1.5    10   30¢
1 cup yogurt            10      9     2.5     0   80¢
2 tsp peanut butter      6      8    18       6   20¢
US RDA minimum         300     50    70     100
Minimize 30x_1 + 80x_2 + 20x_3 s.t.
30x_1 + 10x_2 + 6x_3 ≥ 300
5x_1 + 9x_2 + 8x_3 ≥ 50
1.5x_1 + 2.5x_2 + 18x_3 ≥ 70
10x_1 + 6x_3 ≥ 100
x_1, x_2, x_3 ≥ 0
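A hedged sketch of solving this diet LP with scipy.optimize.linprog, which minimizes c^T x subject to A_ub x ≤ b_ub, so the ≥ constraints are negated; the variable names are mine.

```python
import numpy as np
from scipy.optimize import linprog

# cost per unit: bread 30c, yogurt 80c, peanut butter 20c
c = np.array([30.0, 80.0, 20.0])

# nutrient content per unit (rows: carbs, protein, fat, iron)
A = np.array([[30.0, 10.0,  6.0],
              [ 5.0,  9.0,  8.0],
              [ 1.5,  2.5, 18.0],
              [10.0,  0.0,  6.0]])
rda = np.array([300.0, 50.0, 70.0, 100.0])

# flip signs to turn the >= constraints into <= constraints
res = linprog(c, A_ub=-A, b_ub=-rda, bounds=[(0, None)] * 3)
print(res.x, res.fun)   # optimal servings and minimum cost in cents
```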
32 Linear Programming max z^T x s.t. Ax ≤ y Example: max x_1 + x_2 s.t. x_1 ≤ 1, x_2 ≤ 1, −x_1 − 2x_2 ≤ 1
33 Smoothed Analysis of Simplex Method max z^T x s.t. Ax ≤ y is perturbed to max z^T x s.t. (A + G)x ≤ y, where G is Gaussian
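A minimal sketch of the perturbation step implied here: add independent Gaussian noise of standard deviation σ to the constraint matrix and re-solve (max z^T x is rewritten as min −z^T x for scipy.optimize.linprog); the function and parameter names are assumptions.

```python
import numpy as np
from scipy.optimize import linprog

def solve_perturbed_lp(z, A, y, sigma, rng=None):
    """Solve max z^T x s.t. (A + G) x <= y, with G having i.i.d. N(0, sigma^2) entries."""
    rng = rng or np.random.default_rng()
    A = np.asarray(A, dtype=float)
    G = sigma * rng.standard_normal(A.shape)       # the Gaussian perturbation
    res = linprog(-np.asarray(z, dtype=float),     # max z^T x == min -z^T x
                  A_ub=A + G, b_ub=y,
                  bounds=[(None, None)] * len(z))
    return res.x
```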
34 Smoothed Analysis of Simplex Method max z^T x s.t. a_i^T x ≤ 1, ‖a_i‖ ≤ 1, perturbed to max z^T x s.t. (a_i + g_i)^T x ≤ 1 Worst case: exponential Average case: polynomial Smoothed complexity: polynomial
35 Perturbation yields approximation for polytopes of good aspect ratio
36 But, combinatorially
37 The Simplex Method
38 History of Linear Programming Simplex Method (Dantzig, '47) Exponential worst case (Klee-Minty '72) Average-case analysis (Borgwardt '77, Smale '82, Haimovich, Adler, Megiddo, Shamir, Karp, Todd) Ellipsoid Method (Khachiyan '79) Interior-Point Method (Karmarkar '84) Randomized Simplex Method (m^O(√d)) (Kalai '92, Matousek-Sharir-Welzl '92)
39 Shadow Vertices
40 Another shadow
41 Shadow vertex pivot rule (figure: starting vertex and objective z)
42 Theorem: For every plane, the expected size of the shadow of the perturbed polytope is poly(m, d, 1/σ)
43 Theorem: For every z, the two-phase algorithm runs in expected time poly(m, d, 1/σ)
44 A local condition for optimality: the vertex determined by a_1, …, a_d maximizes z iff z ∈ cone(a_1, …, a_d)
45 Primal: a_1^T x ≤ 1, a_2^T x ≤ 1, …, a_m^T x ≤ 1. Polar: ConvexHull(a_1, a_2, …, a_m)
47 Polar Linear Program: max λ s.t. λz ∈ ConvexHull(a_1, a_2, …, a_m)
48 Initial simplex and optimal simplex (figure)
49 Shadow vertex pivot rule
51 Count facets by discretizing to N directions, N → ∞
52 Count pairs of nearby directions that land in different facets: Pr[different facets] < c/N, so we expect at most c facets
53 Expect a cone of large angle
54 Angle and distance
55 Isolate on one Simplex
56 Integral Formulation
57 Example: for Gaussian distributed points a and b, given that the segment ab intersects the x-axis, Prob[the crossing angle is less than ε] = O(ε²)
62 Claim: for small ε, Prob[angle < ε] < c·ε²
63 Change of variables: da db = (u + v) sin θ du dv dz dθ
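This Jacobian comes from reparametrizing the endpoints a and b by the crossing point z on the x-axis, the crossing angle θ, and the distances u and v from a and b to z; a sketch of the intended parametrization (my reconstruction of the figure):

```latex
a = (z + u\cos\theta,\ u\sin\theta), \quad
b = (z - v\cos\theta,\ -v\sin\theta), \quad
\mathrm{d}a\,\mathrm{d}b = (u + v)\sin\theta \ \mathrm{d}u\,\mathrm{d}v\,\mathrm{d}z\,\mathrm{d}\theta .
```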
64 Analysis: a slight change in θ has little effect on the Gaussian densities of a and b for all but very rare u, v, z
65 Distance: Gaussian distributed corners a_1, a_2, a_3 and a point p (figure)
66 Idea: fix by perturbation
67 Trickier in 3D
68 Future Research – Simplex Method Smoothed analysis of other pivot rules Analysis under relative perturbations Trace solutions as we un-perturb Strongly polynomial algorithm for linear programming?
69 A Theory Closer to Practice Optimization algorithms and heuristics, such as Newton's Method, Conjugate Gradient, Simulated Annealing, Differential Evolution, etc. Computational geometry, scientific computing, and numerical analysis Heuristics for solving instances of NP-hard problems Discrete problems? Shrink the intuition gap between theory and practice.
70 Part III: Geometric Perturbation Three Dimensional Mesh Generation
Delaunay Triangulations for Well-Shaped 3D Mesh Generation Shang-Hua Teng Boston University Akamai Technologies Inc.
Collaborators: Siu-Wing Cheng, Tamal Dey, Herbert Edelsbrunner, Michael Facello, Damrong Guoy, Gary Miller, Dafna Talmor, Noel Walkington, Xiang-Yang Li, and Alper Üngör
73 3D Unstructured Meshes
74 Surface and 2D Unstructured Meshes (courtesy N. Amenta, UT Austin; NASA; Ghattas, CMU)
Numerical Methods (pipeline): Formulation (Math + Engineering): domain, boundary, and PDEs → Approximation (Numerical Analysis): finite element, finite difference, finite volume → Mesh Generation (geometric structures): point set, triangulation (ad hoc, octree, Delaunay) → Linear System Ax = b (algorithms, data structures): direct method, iterative method, multigrid
Outline Mesh Generation in 2D –Mesh Qualities –Meshing Methods –Meshes and Circle Packings Mesh Generation in 3D –Slivers –Numerical Solution: Control Volume Method –Geometric Solution: Sliver Removal by Weighted Delaunay Triangulations –Smoothed Solution: Sliver Removal by Perturbation
Badly Shaped Triangles
Aspect Ratio (R/r)
Meshing Methods Advancing Front Quadtree and Octree Refinement Delaunay Based –Delaunay Refinement –Sphere Packing –Weighted Delaunay Triangulation –Smoothing by Perturbation The goal of a meshing algorithm is to generate a well-shaped mesh that is as small as possible.
Balanced Quadtree Refinements (Bern-Eppstein-Gilbert)
Quadtree Mesh
Delaunay Triangulations
83 Why Delaunay? Maximizes the smallest angle in 2D. Has efficient algorithms and data structures. Delaunay refinement: –In 2D, it generates optimal-size, natural-looking meshes with angles of at least 20.7° (Jim Ruppert)
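A small sketch of computing a 2D Delaunay triangulation and measuring its smallest angle with scipy.spatial; the point set and helper name are mine.

```python
import numpy as np
from scipy.spatial import Delaunay

def min_angle_deg(points):
    """Smallest angle (degrees) over all triangles of the Delaunay triangulation."""
    tri = Delaunay(points)
    smallest = 180.0
    for i, j, k in tri.simplices:
        p, q, r = points[i], points[j], points[k]
        for a, b, c in ((p, q, r), (q, r, p), (r, p, q)):
            u, v = b - a, c - a                      # edges leaving vertex a
            cos_angle = u @ v / (np.linalg.norm(u) * np.linalg.norm(v))
            smallest = min(smallest, np.degrees(np.arccos(np.clip(cos_angle, -1, 1))))
    return smallest

points = np.random.default_rng(1).random((50, 2))
print(min_angle_deg(points))
```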
84 Delaunay Refinement (Jim Ruppert): 2D insertion and 1D insertion (figure)
85 Delaunay Mesh
86 Local Feature Spacing (f) f maps each point p to the radius of the smallest sphere centered at p that intersects or contains two non-incident input features
87 Well-Shaped Meshes and f
88 f is 1-Lipschitz and Optimal
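For reference, the 1-Lipschitz condition asserted here is, in standard form (symbols assumed):

```latex
|f(p) - f(q)| \;\le\; \|p - q\| \quad \text{for all points } p, q .
```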
89 Sphere-Packing
90 β-Packing of a Function f No large empty gap: the radius of the largest empty sphere passing through q is at most f(q)
91 The Packing Lemma (2D) (Miller-Talmor-Teng-Walkington) The Delaunay triangulation of a β-packing is a well-shaped mesh of optimal size. Every well-shaped mesh defines a β-packing.
92 Part I: Meshes to Packings
93 Part II: Packings to Meshes
94 3D Challenges Delaunay fails on aspect ratio Quadtree becomes octree (Mitchell-Vavasis) Meshes become much larger Research is more challenging!
Badly Shaped Tetrahedra
Slivers
Radius-Edge Ratio (Miller-Talmor-Teng-Walkington): R/L, where R is the circumradius and L is the length of the shortest edge
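A hedged numpy sketch of the radius-edge ratio R/L of a single tetrahedron, with R the circumradius and L the shortest edge; the function name is mine.

```python
import numpy as np
from itertools import combinations

def radius_edge_ratio(tet):
    """R/L for a 3D tetrahedron given as a (4, 3) array of vertex coordinates."""
    v0, v1, v2, v3 = np.asarray(tet, dtype=float)
    # circumcenter c satisfies 2 (v_i - v_0) . c = |v_i|^2 - |v_0|^2 for i = 1, 2, 3
    A = 2.0 * np.array([v1 - v0, v2 - v0, v3 - v0])
    b = np.array([v1 @ v1 - v0 @ v0, v2 @ v2 - v0 @ v0, v3 @ v3 - v0 @ v0])
    c = np.linalg.solve(A, b)
    R = np.linalg.norm(c - v0)                                       # circumradius
    L = min(np.linalg.norm(p - q) for p, q in combinations([v0, v1, v2, v3], 2))
    return R / L

# a regular tetrahedron attains the smallest ratio, sqrt(6)/4 ~ 0.612
print(radius_edge_ratio([[1, 1, 1], [1, -1, -1], [-1, 1, -1], [-1, -1, 1]]))
```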
98 The Packing Lemma (3D) (Miller-Talmor-Teng-Walkington) The Delaunay triangulation of a β-packing is a well-shaped mesh (using the radius-edge ratio) of optimal size. Every well-shaped (aspect-ratio or radius-edge ratio) mesh defines a β-packing.
99 Uniform Ball Packing In any dimension, if P is a maximal packing of unit balls, then the Delaunay triangulation of P has radius-edge ratio at most 1: every edge length ‖e‖ is at least 2, and every circumradius r is at most 2.
100 Constant Degree Lemma (3D) (Miller-Talmor-Teng-Walkington) The vertex degree of a Delaunay triangulation with constant radius-edge ratio is bounded by a constant.
101 Delaunay Refinement in 3D (Shewchuk)
Slivers
Sliver: the geo-roach
Coping with Slivers: Control Volume Method (Miller-Talmor-Teng-Walkington)
Sliver Removal by Weighted Delaunay (Cheng-Dey-Edelsbrunner-Facello-Teng)
Weighted Points and Distance
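The weighted (power) distance that the next slides rely on is, in the usual notation (the slide's own formula was an image): a weighted point is a pair (p, w_p), and

```latex
\pi_{(p, w_p)}(z) = \|z - p\|^2 - w_p , \qquad
(p, w_p) \perp (q, w_q) \iff \|p - q\|^2 = w_p + w_q .
```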
Orthogonal Circles and Spheres
Weighted Bisectors
Weighted Delaunay
Weighted Delaunay and Convex Hull
Parametrizing Slivers (D, Y, L)
Interval Lemma. Constant Degree: the union of all weighted Delaunay triangulations satisfying the two weight properties has a constant vertex degree
Pumping Lemma (Cheng-Dey-Edelsbrunner-Facello-Teng)
Sliver Removal by Flipping One by one, in an arbitrary ordering, fix the weight of each point. Implementation: flip and keep the best configuration.
115 Experiments (Damrong Guoy, UIUC)
116 Initial tetrahedral mesh: 12,838 vertices, all vertices are on the boundary surface
117 Dihedral angles < 5 degrees: 13,471
118 Slivers after Delaunay refinement: 881
119 Slivers after sliver-exudation: 12
120 15,503 slivers → 1,183 slivers → 142 slivers (fewer elements, better distribution)
121 Triceratops: 5,636 slivers → 563 slivers → 18 slivers (fewer elements, better distribution)
122 Heart: 4,532 slivers → 173 slivers → 1 sliver (fewer elements, better distribution)
123 Smoothing and Perturbation Perturb mesh vertices Re-compute the Delaunay triangulation
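A minimal sketch of the perturb-and-retriangulate idea with scipy.spatial.Delaunay; the perturbation scale and point set are assumptions, and boundary handling is omitted.

```python
import numpy as np
from scipy.spatial import Delaunay

def perturb_and_retriangulate(points, scale, rng=None):
    """Jitter mesh vertices by a small random offset and rebuild the Delaunay triangulation."""
    rng = rng or np.random.default_rng()
    jitter = scale * rng.standard_normal(points.shape)   # small random perturbation
    moved = points + jitter
    return moved, Delaunay(moved)

pts = np.random.default_rng(0).random((200, 3))          # 3D point cloud
moved, tri = perturb_and_retriangulate(pts, scale=1e-3)
```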
124 Well-Shaped Delaunay Refinement (Li and Teng) Add a point near the circumcenter of each bad element Avoids creating small slivers Yields well-shaped meshes