Component-by-Component Construction of Low-Discrepancy Point Sets
Benjamin Doerr, Max-Planck-Institut für Informatik, Saarbrücken
Joint work with Michael Gnewuch (Kiel), Peter Kritzer (Salzburg), and Friedrich Pillichshammer (Linz)

Reminder: Star Discrepancy
• Definition:
  – s ∈ ℕ "dimension" (Austrian notation)
  – P = {p_0, p_1, ..., p_{N−1}} multi-set of N points in [0,1)^s
  – Discrepancy function: for x ∈ [0,1]^s,
      Δ(x,P) := λ([0,x)) − #{i : p_i ∈ [0,x)} / N
    "(how many points should be in [0,x), minus how many actually are there, normalized by N)"
  – Star discrepancy: D*(P) := sup{ |Δ(x,P)| : x ∈ [0,1]^s }
• D*(P) measures how evenly P is distributed in [0,1)^s.
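For concreteness, a small Python sketch of the definitions above: it evaluates Δ(x,P) directly and lower-bounds D*(P) by scanning a finite grid of test corners. The grid resolution and the toy point set are arbitrary choices, so this only gives a lower bound on the supremum.

```python
from itertools import product

def vol(x):
    """Volume of the anchored box [0, x)."""
    v = 1.0
    for xi in x:
        v *= xi
    return v

def disc(x, P):
    """Discrepancy function Delta(x,P) = lambda([0,x)) - #{i : p_i in [0,x)} / N."""
    inside = sum(all(pi < xi for pi, xi in zip(p, x)) for p in P)
    return vol(x) - inside / len(P)

def star_disc_lower_bound(P, grid=20):
    """Max |Delta| over a finite grid of corners -- a lower bound on D*(P)."""
    s = len(P[0])
    ticks = [k / grid for k in range(1, grid + 1)]
    return max(abs(disc(x, P)) for x in product(ticks, repeat=s))

P = [(0.25, 0.25), (0.75, 0.75)]  # toy 2-point set in [0,1)^2
print(star_disc_lower_bound(P))
```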

Reminder: Star Discrepancy
• Application: numerical integration
  – Given f : [0,1]^s → ℝ
  – Compute/estimate ∫_{[0,1]^s} f(x) dx
• Hope: ∫_{[0,1]^s} f(x) dx ≈ (1/N) ∑_i f(p_i)
• Koksma-Hlawka inequality:
    | ∫_{[0,1]^s} f(x) dx − (1/N) ∑_i f(p_i) | ≤ V(f) · D*(P)
  – V(f): variation of f in the sense of Hardy and Krause
• Low star discrepancy = good integration.
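A minimal sketch of the estimate above, using a Halton point set (bases 2 and 3) as the low-discrepancy set; the integrand and point counts are illustrative choices, not from the talk.

```python
def van_der_corput(i, base):
    """Radical inverse of the integer i in the given base: a value in [0, 1)."""
    x, denom = 0.0, 1.0
    while i > 0:
        denom *= base
        i, rem = divmod(i, base)
        x += rem / denom
    return x

def halton(n, bases=(2, 3)):
    """First n Halton points in [0,1)^s, with s = len(bases)."""
    return [tuple(van_der_corput(i, b) for b in bases) for i in range(n)]

def qmc_estimate(f, points):
    """The quasi-Monte Carlo estimate (1/N) * sum_i f(p_i) from the slide."""
    return sum(f(p) for p in points) / len(points)

# Example: integrate f(x, y) = x * y over [0,1)^2; the exact value is 1/4.
f = lambda p: p[0] * p[1]
for n in (16, 256, 4096):
    print(n, abs(qmc_estimate(f, halton(n)) - 0.25))
```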

Reminder: Star Discrepancy
• How good?

Reminder: Star Discrepancy
• Very good! There are N-point sets P with
    D*(P) ≤ C_s (log N)^{s−1} / N
• "More points = drastically reduced integration error."
• Really? No!
• Note: all constants 'C' may be different. They never depend on N; if they depend on s, I call them C_s.

Problem: Only good for many points!
• The bound C_s (log N)^{s−1} / N is
  – increasing for N ≤ e^{10} (more points = worse integration?)
  – ≥ 1 (the trivial bound) if N ≤ …
  – ≥ D*(random point set) if N ≤ 10^{2·10}
• Need for "small" low-discrepancy point sets!
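To see this numerically, a quick check of (log N)^{s−1}/N with the constant dropped and s = 11 (so e^{s−1} = e^{10} ≈ 22026) chosen purely for illustration:

```python
import math

s = 11  # illustrative dimension, so that e^(s-1) = e^10
for N in (10, 100, 1_000, 10_000, 22_026, 100_000):
    print(f"N = {N:>7}: {math.log(N) ** (s - 1) / N:.3e}")
```

The printed values keep growing up to N ≈ e^{10} and decay only very slowly afterwards.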

Motivation, Outline
• Previous slides:
  – The O((log N)^{s−1}/N) bounds are only good for many points; "many" means at least exponential in the dimension.
  – Otherwise, random points have better guarantees.
• Plan for this talk:
  – Be inspired by random points...
  – ...and use this to construct better point sets.
• Note: almost all ugly details are omitted in this talk! For the many technicalities, the sharpest bounds, and more results, see the full paper (Monte Carlo Methods and Applications, to appear).

Previous Work (1)
• Heinrich, Novak, Wasilkowski, Woźniakowski (Acta Arith., 2002):
  – There are point sets with D*(P) ≤ C (s/N)^{1/2}.
  – Randomized construction, via the Talagrand inequality.
  – Good bounds already for N polynomial in s.
  – But: an existential result only; the implicit constants are not known.

Previous Work (2)
• We build on previous results by D., Gnewuch, Srivastav (J. Complexity '05, MCQMC '06):
  – D*(P) ≤ C (s/N)^{1/2} (log N)^{1/2}, with C small.
  – Via randomized rounding: the discrepancy guarantee holds with high probability.
  – Derandomization: deterministic construction of P in run-time (CN)^{s+2}, computing the exact star discrepancy along the way.
  – Wait for Michael's talk (the next talk) to see how difficult computing the star discrepancy can be...

Rounding Approach
• Task: put N = 16 points in the unit cube nicely.
• Partition the cube into small rectangles ("boxes").
• Compute the 'fair' number x_B of points for each box B: x_B = N · vol(B).
• Round these numbers to integers y_B such that for all aligned corners C,
    y_C := ∑_{B⊆C} y_B is close to x_C := ∑_{B⊆C} x_B
  (e.g., x_C = 12.25 rounds to y_C = 12).
• Then put y_B points in B arbitrarily.

Classical Rounding Theory
• Let x_1, ..., x_n be numbers, N := ||x||_1, and I_1, ..., I_m ⊆ {1, ..., n}.
• Randomized rounding:
  – If x_i is an integer, y_i := x_i.
  – If not, then y_i := ⌈x_i⌉ with probability equal to the fractional part of x_i, and y_i := ⌊x_i⌋ otherwise.
• Theorem: with probability 1−ε, we have for all 1 ≤ k ≤ m
    (*) | ∑_{i∈I_k} y_i − ∑_{i∈I_k} x_i | ≤ (0.5 N log(2m/ε))^{1/2}
• Derandomization: a rounding (y_i) satisfying (*) with ε = 1 can be computed deterministically in time O(mn).
• Experiment: derandomization yields smaller rounding errors.
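A minimal sketch of the randomized rounding step, with arbitrary test data for the vector x and the index sets I_k; it checks the (0.5 N log(2m/ε))^{1/2} guarantee empirically. The derandomized O(mn) version is not shown.

```python
import math
import random

def randomized_rounding(x):
    """Round each x_i up with probability equal to its fractional part,
    down otherwise. Integers stay as they are; E[y_i] = x_i."""
    y = []
    for xi in x:
        lo = math.floor(xi)
        y.append(lo + (1 if random.random() < xi - lo else 0))
    return y

# Arbitrary test data: a fractional vector and a few index sets I_k.
x = [random.uniform(0, 1) for _ in range(200)]
sets = [list(range(a, b)) for a, b in [(0, 50), (25, 150), (100, 200)]]

y = randomized_rounding(x)
N, m, eps = sum(x), len(sets), 0.5   # eps is an arbitrary failure probability
bound = math.sqrt(0.5 * N * math.log(2 * m / eps))
for I in sets:
    err = abs(sum(y[i] for i in I) - sum(x[i] for i in I))
    print(f"error {err:.2f}  vs bound {bound:.2f}")
```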

Rounding Approach (continued)
• Same approach as before (boxes, fair counts x_B = N · vol(B)), but now with the explicit guarantee from the rounding theorem (with ε = 1): round to integers y_B such that for all aligned corners C,
    | ∑_{B⊆C} y_B − ∑_{B⊆C} x_B | ≤ (0.5 N log(2 · #boxes))^{1/2}
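Putting the pieces together, a toy 2-D version of the rounding approach. The grid resolution, N, and the uniform placement inside each box are illustrative choices; here each fair count is rounded independently, whereas the derandomized rounding controls all corner sums simultaneously.

```python
import random

def rounding_point_set(N, grid):
    """Toy 2-D construction: partition [0,1)^2 into grid x grid boxes,
    compute the fair count x_B = N * vol(B) for each box, round it
    randomly with the right expectation, and put y_B points into B."""
    points = []
    x_B = N / grid**2                  # fair count N * vol(B), equal for all boxes here
    for i in range(grid):
        for j in range(grid):
            y_B = int(x_B) + (1 if random.random() < x_B - int(x_B) else 0)
            for _ in range(y_B):       # arbitrary placement inside box B
                points.append(((i + random.random()) / grid,
                               (j + random.random()) / grid))
    return points

P = rounding_point_set(16, 3)          # 9 boxes, fair count 16/9 per box
print(len(P))                          # close to 16 in expectation
```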

New Result
• The same can be done in a component-by-component way:
  – Component-by-component: given an (s−1)-dimensional low-discrepancy point set, add an s-th component to each point.
  – Adjust the randomized-rounding approach accordingly.
• Advantage:
  – Fewer variables have to be rounded in each iteration.
  – Total run-time (over all s iterations) roughly N^{(s+3)/2} instead of N^{s+2}.
• Surprise: the discrepancy increases only by a factor of s.
  – Roughly C s^{3/2} N^{−1/2} (log N)^{1/2} instead of C s^{1/2} N^{−1/2} (log N)^{1/2}.
• That's OK, because we can now compute N^2 points in roughly the same time as needed for N points before.
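The actual construction is in the paper; purely as a hedged illustration of "add an s-th component to each point", here is a Latin-hypercube-style toy extension in which the new coordinates are a random permutation of an equispaced grid. The paper instead chooses this assignment by the adjusted (derandomized) rounding, which is what keeps the discrepancy loss down to a factor of about s.

```python
import random

def extend_by_one_component(P):
    """Given N points in [0,1)^(s-1), append an s-th coordinate to each.
    Toy version: a random one-per-cell assignment of {(j + 0.5)/N : j}."""
    N = len(P)
    new_coords = [(j + 0.5) / N for j in range(N)]
    random.shuffle(new_coords)  # the paper picks this assignment by rounding
    return [p + (c,) for p, c in zip(P, new_coords)]

# Build a 3-D point set component by component from a 1-D start.
P = [((i + 0.5) / 16,) for i in range(16)]
for _ in range(2):
    P = extend_by_one_component(P)
print(P[:3])
```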

Summary and Conclusion
• Result: the component-by-component derandomized construction is much faster and yields only slightly higher discrepancies than the "all at once" construction.
• Outlook: it could also be useful if the components are of different importance, e.g., doing the expensive derandomization only for a few components.
• Open problem: come up with something really efficient... (instead of N^{Cs}).
Merci/Thanks!