Chris Beck, Russell Impagliazzo, Shachar Lovett

History of Sampling Problems

Earliest uses of randomness in algorithms were for sampling, not decision problems:
– "Equation of State Calculations by Fast Computing Machines" [Metropolis, Rosenbluth, Rosenbluth, Teller, Teller '53]
– Complexity theory of randomized sampling [Jerrum, Valiant, Vazirani '86]
– Markov Chain Monte Carlo method [Jerrum, Sinclair '89]
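The Metropolis algorithm from the '53 paper can be sketched in a few lines. Below is a minimal, illustrative random-walk Metropolis sampler in Python; the target density and all names are examples for exposition, not taken from the talk.

```python
import math
import random

def metropolis(log_density, x0, steps, width=1.0, seed=0):
    """Random-walk Metropolis: approximate samples from a density
    known only up to its normalizing constant (given via log_density)."""
    rng = random.Random(seed)
    x = x0
    out = []
    for _ in range(steps):
        y = x + rng.uniform(-width, width)  # symmetric proposal
        # Accept with probability min(1, p(y)/p(x)), done in log space.
        if math.log(rng.random() or 1e-300) < log_density(y) - log_density(x):
            x = y
        out.append(x)
    return out

# Example target: standard normal, log-density up to an additive constant.
samples = metropolis(lambda x: -0.5 * x * x, x0=3.0, steps=20000)
kept = samples[5000:]  # discard burn-in
mean = sum(kept) / len(kept)
var = sum((s - mean) ** 2 for s in kept) / len(kept)
```

The key point is that only density ratios are needed, so the normalizing constant never has to be computed.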

History of Sampling Problems

The task of explicitly exhibiting a distribution that is hard to sample for concrete models was raised more recently:
– [Ambainis, Schulman, Ta-Shma, Vazirani, Wigderson '03] "The Quantum Communication Complexity of Sampling"
– [Goldreich, Goldwasser, Nussboim '10] "On the Implementation of Huge Random Objects"
– [Viola '10] "On the Complexity of Distributions"

Prior Work

Applications to Data Structures

Second Main Result

Sketch of Proof

Detour: Regularity in Circuit Complexity Given a circuit, we get a small decision tree. The circuit at each leaf is the restriction of the original circuit according to the path to that leaf. All or "most" leaves are "balanced" / "regular" / "pseudorandom".
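The restriction step can be made concrete. In the toy Python sketch below (the example function and the "bias" measure of balance are illustrative choices, not the talk's definitions), querying one variable splits a function into two restricted sub-functions, one per branch of a depth-1 decision tree.

```python
from itertools import product

def restrict(f, i, b):
    """Fix input variable i of f to the bit b (f still takes full n-tuples)."""
    return lambda x: f(x[:i] + (b,) + x[i + 1:])

def bias(f, n):
    """Fraction of the 2^n inputs on which f outputs 1; 1/2 means balanced."""
    inputs = list(product((0, 1), repeat=n))
    return sum(f(x) for x in inputs) / len(inputs)

# Toy "circuit": f(x) = x0 AND (x1 OR x2), on n = 3 variables.
f = lambda x: x[0] & (x[1] | x[2])

# Querying x0 gives a depth-1 decision tree; each leaf holds a restriction.
f0 = restrict(f, 0, 0)  # x0 = 0 branch: constant 0 (maximally unbalanced)
f1 = restrict(f, 0, 1)  # x0 = 1 branch: x1 OR x2 (bias 3/4, closer to balanced)
```

Here one query already moves the x0 = 1 leaf closer to balance, mirroring the idea that "most" leaves of the tree become regular.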

Actual Sketch of Proof

Balanced Decision Forests

Contrast with Other Results

Handy Corollary of Main Result

Conclusion: “Tail-Bound LMN”

Open Questions

Thanks!

Second Main Result In proving the strengthened sampling lower bound, we were led to discover a new concentration bound for sums of random variables computed by decision trees. This concentration bound extends a long line of work aimed at obtaining Chernoff-like concentration despite limited independence. We believe it will have other applications.
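To get a feel for the setting (this is an illustrative simulation, not the paper's bound, and all parameters are arbitrary): let each of m variables be computed by a shallow decision tree reading a couple of bits of one shared random input. The variables are not independent, yet their sum can still concentrate sharply, in the spirit of a Chernoff bound.

```python
import math
import random

rng = random.Random(1)
n_bits, m, trials, t = 200, 100, 2000, 25

# Each "decision tree" queries just two shared input bits and outputs their XOR.
trees = [tuple(rng.sample(range(n_bits), 2)) for _ in range(m)]

def tree_sum():
    x = [rng.randint(0, 1) for _ in range(n_bits)]
    return sum(x[a] ^ x[b] for a, b in trees)

mean = m / 2  # each XOR of two uniform bits is 1 with probability 1/2
big_dev = sum(abs(tree_sum() - mean) >= t for _ in range(trials))
empirical_tail = big_dev / trials

# Hoeffding-style bound 2*exp(-2 t^2 / m) that would hold for independent bits.
hoeffding = 2 * math.exp(-2 * t * t / m)
```

In this toy case large deviations are essentially never observed, even though the trees share input bits; the question addressed by the new bound is when such Chernoff-like behavior is guaranteed.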

Decision Forests

Balanced Decision Trees

Applications of [LV'11] Lovett and Viola also gave several interesting applications of their work: – Lower Bounds for Succinct Data Structures

Prior Work