Distance Scales, Embeddings, and Metrics of Negative Type
By James R. Lee
Presented by Andy Drucker, Mar. 8, 2007
CSE 254: Metric Embeddings
Negative Type Metrics
Also, L_1 is NEG… Finally, NEG metrics arise in SDP relaxations of important problems like Sparsest Cut.
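To make the negative-type condition concrete: a metric d is NEG when, for every coefficient vector c with sum zero, the quadratic form Σ_{i,j} c_i c_j d(x_i, x_j) is nonpositive. Here is a small numerical sanity check that L_1 metrics satisfy this; the points, dimension, and tolerance are arbitrary choices for illustration, not anything from the talk.

```python
import numpy as np

rng = np.random.default_rng(0)

# Six random points in R^4; their pairwise L1 distances define the metric.
pts = rng.integers(-5, 6, size=(6, 4)).astype(float)
n = len(pts)
D = np.abs(pts[:, None, :] - pts[None, :, :]).sum(axis=2)  # D[i, j] = ||pts[i] - pts[j]||_1

# Negative-type condition: c^T D c <= 0 for every c with sum(c) = 0.
for _ in range(1000):
    c = rng.normal(size=n)
    c -= c.mean()                 # enforce sum(c) = 0
    assert c @ D @ c <= 1e-9      # never fires: L1 metrics are NEG
print("c^T D c <= 0 held for 1000 random zero-sum vectors")
```

Swapping D for an arbitrary symmetric zero-diagonal matrix will typically make the assertion fire, which is one way to see that NEG is a genuine restriction.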
Embedding NEG metrics NEG metrics are restrictive. It was conjectured that they embed with O(1) distortion into L1, but this was disproved by Khot and Vishnoi [KV].
Lee’s Methodology (bird’s-eye view)
The Gluing Lemma
Proof Methodology (2) Now let's see how to deal with one particular scale [tau, 2 tau].
Here is a sketch of how we find well-separated sets (A, B) in the image f(X) of a NEG metric X. Given a direction vector u, define L_u, R_u as the points that lie (very) 'approximately' in the directions of u and -u respectively. (With fair probability, both sets are 'big'.) If L_u and R_u aren't well-separated, match off nearby points from the two sets until the remainder is well-separated. If what's left are big sets, we've succeeded.
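The steps above can be sketched in code. This is only a toy illustration: the projection threshold, the separation parameter, the notion of 'big', and the use of random points on a sphere are all hypothetical choices for the demo, not parameters from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

def separated_sets(points, sep, proj_thresh=0.2, min_frac=8, tries=50):
    """Toy sketch: try random directions u; take L = points leaning toward u,
    R = points leaning toward -u; match off close cross pairs; succeed if
    both leftover sets are still 'big' (>= n / min_frac points)."""
    n, dim = points.shape
    for _ in range(tries):
        u = rng.normal(size=dim)
        u /= np.linalg.norm(u)
        proj = points @ u
        L = [i for i in range(n) if proj[i] >= proj_thresh]   # roughly toward u
        R = [i for i in range(n) if proj[i] <= -proj_thresh]  # roughly toward -u
        removed = True
        while removed and L and R:
            removed = False
            for i in list(L):
                for j in list(R):
                    if np.linalg.norm(points[i] - points[j]) < sep:
                        L.remove(i)   # match off a close cross pair
                        R.remove(j)
                        removed = True
                        break
                if removed:
                    break
        if len(L) >= n // min_frac and len(R) >= n // min_frac:
            return L, R
    return None

# Random points on the unit sphere in R^16 stand in for the image f(X).
pts = rng.normal(size=(64, 16))
pts /= np.linalg.norm(pts, axis=1, keepdims=True)
res = separated_sets(pts, sep=0.5)
```

By construction, whenever the function returns two sets, every cross pair is at distance at least `sep` and both sets are 'big'.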
But suppose that for most u, we don't succeed. This means that for most u, we find a sizeable matching in X such that the matched points x, y are close and have (x - y) pointing approximately in the direction of u. By carefully studying the geometry of obtuse-angle-free point sets, we show that this property cannot hold so strongly, a contradiction.
Specifically, we derive a subset C of X called a 'matching core', which for a substantial fraction of directions u can find a sizeable matching within itself, such that matched points are close and point in the approximate direction of u. We develop a lower bound on the size of (obtuse-angle-free) matching cores, which in this case allows us to conclude that C must be impossibly large: larger than X itself! This contradiction tells us that for a certain fraction of u, we must succeed.
Some fundamental notions
Example of Parameter Tradeoffs
Cover Size Lower Bounds. Why? Each point in C can only 'cover' x in a small fraction of directions. Here's where we're going: in an obtuse-angle-free matching core C, we can exhibit a point x that's very well-covered by C; hence, C is very large.
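As a toy illustration of the 'small fraction of directions' phenomenon, suppose (hypothetically, for the demo) that y covers x in direction u only when the unit vector from x to y has inner product at least eps with u. A Monte Carlo estimate shows this fraction is already tiny in moderate dimension; the counting intuition is then that a well-covered x forces C to contain many points.

```python
import numpy as np

rng = np.random.default_rng(3)
d, eps, trials = 20, 0.5, 200_000

v = rng.normal(size=d)
v /= np.linalg.norm(v)           # fixed unit vector, standing in for (y - x) normalized

u = rng.normal(size=(trials, d))
u /= np.linalg.norm(u, axis=1, keepdims=True)   # random unit directions

frac = np.mean(u @ v >= eps)     # fraction of directions in which y 'covers' x
print(frac)                      # a small fraction, shrinking exponentially with d
```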
Transferring a Cover
Boosting cover probability (delta)
([ARV]) The proof uses Lévy's spherical isoperimetric inequality (concentration of measure for Lipschitz functions).
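A quick numerical illustration of this concentration (a standalone demo, not a step from [ARV]): the coordinate function f(x) = x_1 is 1-Lipschitz on the unit sphere, and its fluctuations around its mean shrink like 1/sqrt(d) as the dimension grows.

```python
import numpy as np

rng = np.random.default_rng(2)

# f(x) = x[0] is 1-Lipschitz on S^{d-1}; concentration of measure says its
# values cluster within ~1/sqrt(d) of the mean (here 0) as d grows.
for d in (10, 100, 1000):
    x = rng.normal(size=(20000, d))
    x /= np.linalg.norm(x, axis=1, keepdims=True)   # uniform points on S^{d-1}
    vals = x[:, 0]
    print(f"d={d:5d}  std of f = {np.std(vals):.4f}  1/sqrt(d) = {1/np.sqrt(d):.4f}")
```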
‘Matching Covers’
‘Matching Covers’ (cont’d)
NEG metrics and Big Cores
The proof starts with the set C and alternately whittles it down and builds it up (to a subset S of C that is covered by C), boosting the sigma parameter of S while maintaining delta and slowly degrading l.
Where does the obtuse angle-free property of f(X) get used? Suppose point y of C helps to cover x, and point x’ is at most k hops in X away from x, where each hop is of length at most 1. The NEG angle…
y helps to cover x’, somewhat, but the parameters (in Lemma 4.3) are weakened to the extent that x’ is far away from x. How far apart are x, x’?
x, x’ are separated by k hops in X of length 1… could that put them at a distance k? Not without obtuse angles!
By the triangle inequality on X’s own metric, ||x – x’||^2 <= k, so ||x – x’|| <= sqrt(k).
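The sqrt(k) bound is tight, and orthogonality is exactly the extreme case: walking k unit hops along mutually orthogonal directions produces a path with no obtuse angles anywhere, yet the endpoints end up only sqrt(k) apart. A small check (not from the talk, just an illustration of the bound):

```python
import numpy as np

k = 9
steps = np.eye(k)                       # hop i moves one unit along e_i
path = np.vstack([np.zeros(k), np.cumsum(steps, axis=0)])  # x = path[0], ..., x' = path[-1]
x, x_prime = path[0], path[-1]

hop_lengths = np.linalg.norm(np.diff(path, axis=0), axis=1)
print(hop_lengths.max())                # every hop has length exactly 1
print(np.linalg.norm(x - x_prime))      # total displacement: sqrt(9) = 3.0
```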
This phenomenon is crucial in bounding the degradation of our parameters during each phase in the inductive procedure.
And that’s all I’m going to say…
Bibliography
[ALN] Sanjeev Arora, James R. Lee, Assaf Naor. Euclidean distortion and the Sparsest Cut. STOC '05.
[ARV] Sanjeev Arora, Satish Rao, Umesh Vazirani. Expander Flows, Geometric Embeddings and Graph Partitioning. STOC '04.
[KV] Subhash Khot, Nisheeth Vishnoi. The unique games conjecture, integrality gap for cut problems and embeddability of negative type metrics into L1. FOCS '05.
NEG Metrics Posse—L2, not L7
The Fuller Story… There is one unmentioned ingredient, Prop. 4.6, in the proof of the Big Core Theorem. First, we need to be familiar with the ‘set neighborhood operator’ Tau and its iterates…
Boosting Sigma
Strength relies on set neighborhood not being too large! (beta)
Boosting Sigma (cont’d)
NEG metrics and Big Cores
Proof of BCT by Claim 4.8
Sketch: Proof by Induction
Proof by Induction
Induction—Step 1
Induction—Step 1 (boosting sigma)
Induction—Step 2
Analysis
What’s Next?
Finding Separated Sets
Finding A Matching Core
Parameters of the Core
Exploiting the Core