First passage percolation on rotationally invariant fields


First passage percolation on rotationally invariant fields. Allan Sly, Princeton University, September 2016. Joint work with Riddhipratim Basu (Stanford) and Vladas Sidoravicius (NYU Shanghai).

First Passage Percolation. Model: $X_{i,j}$ an IID random field of numbers; $T_{x,y}$ the minimum sum of the weights along lattice paths from $x$ to $y$. (The slide shows a small example grid of weights.) By the Subadditive Ergodic Theorem, $\lim_n \frac{1}{n} T_{0,nx} = \mu(x)$ a.s.
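As an aside for the reader (not part of the talk), here is a minimal runnable sketch of this model on a finite grid: IID site weights and the passage time computed by Dijkstra's algorithm. The helper name passage_time, the exponential weights, and the convention of counting the starting site are all choices made for this illustration.

```python
import heapq
import numpy as np

def passage_time(X, source, target):
    """Dijkstra on the grid graph: the passage time of a path is the sum of the
    site weights X[i, j] it visits (the starting site is counted by convention)."""
    n, m = X.shape
    dist = np.full((n, m), np.inf)
    dist[source] = X[source]
    heap = [(dist[source], source)]
    while heap:
        d, (i, j) = heapq.heappop(heap)
        if (i, j) == target:
            return d
        if d > dist[i, j]:
            continue
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            u, v = i + di, j + dj
            if 0 <= u < n and 0 <= v < m and d + X[u, v] < dist[u, v]:
                dist[u, v] = d + X[u, v]
                heapq.heappush(heap, (dist[u, v], (u, v)))
    return dist[target]

rng = np.random.default_rng(0)
X = rng.exponential(size=(50, 50))          # IID random field of weights
print(passage_time(X, (0, 0), (49, 49)))    # T(0, (n, n)) for n = 49
```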

Variance. Central question: what is the variance? By the Poincaré inequality [Kesten '91], $\mathrm{Var}(T_{nx}) = O(n)$. Using hypercontractivity in the Boolean case, $\mathrm{Var}(T_{nx}) = O(n/\log n)$ [Benjamini, Kalai, Schramm '03], extended to a wider range of distributions in [Damron, Hanson, Sosoe '15]. For oriented last passage percolation with exponential or geometric entries, $\mathrm{Var}(T_{nx}) \sim C n^{2/3}$ [Johansson '00].
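Continuing the illustration above (again not from the talk), the central question can be probed numerically with a crude Monte Carlo estimate of the variance; this reuses the hypothetical passage_time helper from the previous sketch, and the weight distribution and sample sizes are arbitrary.

```python
import numpy as np

# Assumes the passage_time helper from the previous sketch is in scope.
def var_estimate(n, samples=200, seed=1):
    """Crude Monte Carlo estimate of Var T(0, (n, n)) for IID exponential weights."""
    rng = np.random.default_rng(seed)
    times = [passage_time(rng.exponential(size=(n + 1, n + 1)), (0, 0), (n, n))
             for _ in range(samples)]
    return np.var(times, ddof=1)

# Comparing the estimates against n, n / log n and the conjectured n^(2/3)
# growth gives a feel for the bounds quoted on this slide.
for n in (20, 40, 80):
    print(n, var_estimate(n))
```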

Rotationally invariant models. Our model: take $\Phi: \mathbb{R}^2 \to \mathbb{R}$ rotationally invariant, smooth and compactly supported. Let $\Gamma: \mathbb{R} \to (a,b)$ be continuous and strictly increasing. Set $X_{x,y} = \Gamma\!\left(\int \Phi(u-x, v-y)\, dB(u,v)\right)$, where $dB$ is white noise. Define the distance as $T_{x,y} = \min_\gamma \int_\gamma X$.
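A rough discretized sketch of how one might simulate such a field, purely for illustration: white noise on a grid is convolved with a compactly supported rotationally invariant bump and pushed through an increasing bounded map. The specific bump, the logistic choice of $\Gamma$, the grid spacing, and the use of scipy.signal.fftconvolve are assumptions of this example, not part of the model's definition.

```python
import numpy as np
from scipy.signal import fftconvolve

def smoothed_field(n, radius=5, seed=0):
    """Discrete stand-in for X = Gamma( integral of Phi(u - x, v - y) dB(u, v) ):
    white noise, convolved with a compactly supported rotationally invariant
    bump Phi, then mapped by an increasing Gamma onto a bounded interval."""
    rng = np.random.default_rng(seed)
    noise = rng.standard_normal((n, n))            # discretized white noise dB
    ax = np.arange(-radius, radius + 1)
    r = np.hypot(*np.meshgrid(ax, ax))             # radial coordinate => rotational invariance
    phi = np.zeros_like(r)
    supp = r < radius
    phi[supp] = np.exp(-1.0 / (1.0 - (r[supp] / radius) ** 2))  # smooth bump, compact support
    smooth = fftconvolve(noise, phi, mode="same")  # Phi * dB
    return 1.0 / (1.0 + np.exp(-smooth))           # Gamma: logistic map onto (0, 1)

X = smoothed_field(200)   # plays the role of the field X_{x,y} on the slide
```

Passage times $T_{x,y}$ through this discretized field can then be approximated with the Dijkstra sketch above.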

Main result (Basu, Sidoravicius, S. '16). For some $\epsilon > 0$, $\mathrm{Var}(T_n) = O(n^{1-\epsilon})$. The specifics of the model are not that important; the argument should apply to models with rotational invariance, the FKG property, and short-range dependence, e.g. graph distances for supercritical random geometric graphs.

Basic approach. Multi-scale: let $V_n := \mathrm{Var}(T_n)$ and set $Z_n = n^{1-\epsilon}$. We show that $V_{M^k} \le Z_{M^k}$ for every $k$. It is enough to show that, for all $n$, $V_n \le Z_n \Rightarrow V_{Mn} \le Z_{Mn} = M^{1-\epsilon} Z_n$. The two ingredients: a block version of Kesten's bounds, $V_{\ell n} \le C\,\ell\,V_n$, and a chaos estimate: the optimal path is highly sensitive to noise.
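A short sketch (not on the slide) of how the scale-by-scale bound, combined with the block bound, gives the variance estimate for every $n$:

```latex
% Once V_{M^k} \le Z_{M^k} = M^{k(1-\epsilon)} for every k, an arbitrary n with
% M^k \le n < M^{k+1} is handled by the block bound V_{\ell m} \le C \ell V_m:
\[
  V_n \;\lesssim\; C M \, V_{M^k}
  \;\le\; C M \, M^{k(1-\epsilon)}
  \;\le\; C M \, n^{1-\epsilon},
\]
% so Var(T_n) = O(n^{1-\epsilon}) for all n, with a constant depending on M.
```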

Kesten’s martingale argument. Reveal the sites one by one: $M_i = \mathbb{E}[T_n \mid \mathcal{F}_i]$. Then $\mathrm{Var}(T_n) = \sum_i \mathrm{Var}(M_i - M_{i-1}) \le \sum_i \mathbb{E}\,\mathrm{Var}(T_n \mid \mathcal{F}_i^c)$. The value of site $i$ only matters if it is on the optimal path, so $\sum_i \mathbb{E}\,\mathrm{Var}(T_n \mid \mathcal{F}_i^c) \asymp C\,\mathbb{E}\,\#\{i : i \in \gamma\} \asymp Cn$. With some extra tricks one can also get concentration bounds.
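As background (a standard fact, not specific to the talk), the first display uses the orthogonality of Doob martingale increments:

```latex
% Doob martingale increments D_i = M_i - M_{i-1}, with M_0 = E[T_n] and
% M_N = T_n once all N sites have been revealed.
\[
  T_n - \mathbb{E}\,T_n \;=\; \sum_i D_i,
  \qquad \mathbb{E}[D_i D_j] = 0 \ \ (i \ne j),
\]
\[
  \text{so}\quad
  \operatorname{Var}(T_n) \;=\; \sum_i \mathbb{E}[D_i^2]
  \;=\; \sum_i \operatorname{Var}(M_i - M_{i-1}),
\]
% and each term is bounded by the expected conditional variance of T_n given
% everything except site i (the Efron--Stein / Poincare-type bound quoted earlier).
```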

Multiscale version of Kesten’s argument. Split the grid into blocks of length $n$ and height $W_n = n^{1/2} Z_n^{1/4}$. Revealing blocks one at a time, analyze the Doob martingale of $T_{\ell n}$. What we need: relate point-to-point with side-to-side passage times; variance $V_{\ell n} \le C\,\ell\,Z_n$; concentration $\mathbb{P}\big(|T_{\ell n} - \mathbb{E}\,T_{\ell n}| \ge x\sqrt{\ell Z_n}\big) \le C e^{-c x^{2/3}}$ and $|\mathbb{E}\,T_{\ell n} - \mu\,\ell n| \le C\sqrt{\ell Z_n}$; transversal fluctuations of order $\ell^{3/4} W_n$.

Side to side. Diagonal length: $\sqrt{n^2 + W_n^2} = \sqrt{n^2 + n Z_n^{1/2}} \approx n + \tfrac{1}{2} Z_n^{1/2}$, and $Z_n^{1/2}$ is the bound on the standard deviation.
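The approximation is a first-order Taylor expansion of the square root, using $W_n^2 = n Z_n^{1/2}$:

```latex
% Diagonal of an n-by-W_n block, with W_n^2 = n Z_n^{1/2} much smaller than n^2:
\[
  \sqrt{n^2 + W_n^2}
  \;=\; n \sqrt{1 + \frac{Z_n^{1/2}}{n}}
  \;\approx\; n \Bigl(1 + \frac{Z_n^{1/2}}{2n}\Bigr)
  \;=\; n + \tfrac{1}{2}\, Z_n^{1/2},
\]
% so crossing a block diagonally rather than straight across costs only O(Z_n^{1/2}),
% the same order as the standard deviation.
```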

Transversal fluctuations. To move up $k$ blocks (a vertical displacement of $k W_n$), the extra length is $\tfrac{1}{2} k^2 Z_n^{1/2}$. For the midpoint, $\mathbb{P}(k\text{-block fluctuation}) \le C e^{-c k^{4/3}}$. For other dyadic points use chaining: at least one segment must deviate from its mean by at least $\tfrac{1}{2} k^2 Z_n^{1/2}$.
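A back-of-the-envelope check (our reading of the slide, not a quoted computation) of where the $k^{4/3}$ exponent comes from, combining the extra length with the concentration bound of the previous slides:

```latex
% A vertical displacement of k blocks is k W_n over horizontal distance n,
% with W_n^2 = n Z_n^{1/2}:
\[
  \sqrt{n^2 + (k W_n)^2} - n \;\approx\; \frac{k^2 W_n^2}{2n}
  \;=\; \tfrac{1}{2}\, k^2\, Z_n^{1/2},
\]
% i.e. x = k^2/2 standard-deviation units, and the tail bound C e^{-c x^{2/3}} gives
\[
  \mathbb{P}\bigl(\text{$k$-block fluctuation at the midpoint}\bigr)
  \;\le\; C\, e^{-c (k^2/2)^{2/3}} \;\le\; C\, e^{-c' k^{4/3}}.
\]
```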

Side to side. To compare the maximum side-to-side length $T_n^+$ with the point-to-point time $T_n$, use chaining: $|\mathbb{E}\,T_n^+ - \mathbb{E}\,T_n| \le C Z_n^{1/2}$.

Side to side. To compare the minimum side-to-side length $T_n^-$ with $T_n^+$, split up the path: $\mathbb{E}\,T_n^+ \le \mathbb{E}\,T_{4n/5}^- + 2\,\mathbb{E}\,\max_{i,j} T_{n/10,\,i,\,j}^+ + C Z_n^{1/2}$. (The slide’s figure splits the path into a Min segment flanked by Max segments.)

Relating the mean to $\mu$. By subadditivity, $\mathbb{E}\,T_n > n\mu$. By enumerating over long paths we show that, for $C$ large, if $\mathbb{E}\,T_n^- \ge n\mu + C Z_n^{1/2}$ then $\lim \frac{1}{\ell n} T_{\ell n} > \mu$, contradicting the subadditive ergodic theorem limit.

Concentration. Let $\tau = \sum_i \#\{\text{blocks of the path in column } i\}^2$. Similarly to the transversal fluctuations, $\mathbb{P}\big(\tau > (C+x)\ell\big) \le e^{-(x\ell)^{2/3}}$. Apply the Doob martingale and Kesten’s concentration argument, revealing columns one at a time. One cannot take a union bound over all paths because of the sub-exponential tails.

Proof by contradiction. Case 1: for some $1 \le \ell \le M$ we have $V_{\ell n} \le \delta\,\ell\,Z_n$, in which case we show that $V_{Mn} \le C\,\delta\,M\,Z_n \le Z_{Mn}$. Case 2: for all $\ell \le M$, $V_{\ell n} \ge \delta\,\ell\,Z_n$; here we use the chaos argument. (This case never actually happens, as we believe $V_n \asymp n^{2/3}$.)

Super-concentration and chaos. In the context of FPP: super-concentration means doing better than the Poincaré inequality, i.e. $V_n = o(n)$. Chaos: with $\gamma$ the optimal path and $\gamma'$ the optimal path after resampling an $\epsilon$ fraction of the field, $|\gamma \cap \gamma'| = o(n)$. Super-concentration $\Leftrightarrow$ chaos [e.g. Chatterjee ’14]. This works well for the block version.

Proving chaos. Aim: resample an $\epsilon$ fraction of the blocks and find good alternatives to the original path. We need to understand the field conditioned on the path, before and after resampling. Similar to [BSS ’14].

Percolation-type estimates. We have control of the transversal fluctuations of the path. A percolation estimate says that all paths with reasonable fluctuations spend most of their time in “typical” regions. (The slide’s figure marks an atypical region.)

FKG-type estimates. Conditioning on the location and value of the path is a positive event for the rest of the field. We can use FKG to create regions where the field is very large, which the optimal path must avoid (before and after resampling).

Planting a configuration. For a region $A$, suppose $(X_A, X_{A^c}) = (\mathcal{X}_A, \mathcal{X}_{A^c})$ is such that $\gamma$ does not intersect $A$; then $\mathbb{P}[X_A = \mathcal{X}_A \mid X_{A^c} = \mathcal{X}_{A^c}, \gamma] \ge \mathbb{P}[X_A = \mathcal{X}_A]$. So we can plant configurations provided they avoid $A$.

Big changes. Using our assumption that $\mathrm{Var}(T_{\ell n}) \ge \delta Z_{\ell n}$, we show that we can find regions with very long and very short geodesics. By interpolating between them in $1/\epsilon$ steps, we find regions that undergo a large change with positive probability after resampling an $\epsilon$ fraction. We look for regions which become much shorter.

Pulling the paths apart. We design a collection of events which together separate the old and new paths, each with positive probability at each location. Concentration estimates then separate the paths at a $\delta$ fraction of locations w.h.p.

Multi-scale improvements. We look for improvements on a range of scales and show that $|\gamma \cap \gamma'| \le \delta|\gamma|$. Conclude that $V_{Mn} \le M^{1-\epsilon} Z_n = Z_{Mn}$.

Lattice models Can rotational invariance be relaxed? Should be sufficient that the limiting shape is smooth and has positive curvature in a neighbourhood of the direction.

Thank you for listening