Two-point Sampling

Speaker: Chuang-Chieh Lin
Advisor: Professor Maw-Shang Chang
National Chung Cheng University
2018/12/10

References

Professor S. C. Tsai's lecture slides.
[MR95] Rajeev Motwani and Prabhakar Raghavan. Randomized Algorithms. Cambridge University Press, 1995.

Joint probability density function

X, Y: discrete random variables defined over the same probability sample space.
$p(x, y) = \Pr[\{X = x\} \cap \{Y = y\}]$: the joint probability density function (pdf) of X and Y. Thus

$\Pr[Y = y] = \sum_x p(x, y)$ and $\Pr[X = x \mid Y = y] = \dfrac{p(x, y)}{\Pr[Y = y]}$.
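To make these formulas concrete, here is a tiny worked example in Python (not from the slides); the joint pmf values are made up purely for illustration.

```python
from fractions import Fraction as F

# A made-up joint pmf p(x, y) for X, Y taking values in {0, 1};
# the four entries sum to 1.
p = {(0, 0): F(1, 4), (0, 1): F(1, 4),
     (1, 0): F(1, 8), (1, 1): F(3, 8)}

def marginal_Y(y):
    """Pr[Y = y] = sum over x of p(x, y)."""
    return sum(q for (x, yy), q in p.items() if yy == y)

def conditional_X_given_Y(x, y):
    """Pr[X = x | Y = y] = p(x, y) / Pr[Y = y]."""
    return p[(x, y)] / marginal_Y(y)

print(marginal_Y(1))                # 1/4 + 3/8 = 5/8
print(conditional_X_given_Y(1, 1))  # (3/8) / (5/8) = 3/5
```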

A sequence of random variables $X_1, X_2, \ldots$ is called pairwise independent if for all $i \neq j$ and all $x, y \in \mathbb{R}$,

$\Pr[X_i = x \mid X_j = y] = \Pr[X_i = x]$.
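A classical example satisfying this definition without full independence (not in the slides): two independent fair bits and their XOR. The exhaustive check below verifies the equivalent product form $\Pr[X_i = x, X_j = y] = \Pr[X_i = x]\Pr[X_j = y]$.

```python
from itertools import product

# X1, X2: independent fair bits; X3 = X1 XOR X2.  The three variables
# are pairwise independent but not mutually independent.
outcomes = [(x1, x2, x1 ^ x2) for x1, x2 in product((0, 1), repeat=2)]

def pr(event):
    # Probability under the uniform choice of (x1, x2).
    return sum(1 for o in outcomes if event(o)) / len(outcomes)

for i, j in [(0, 1), (0, 2), (1, 2)]:
    for x, y in product((0, 1), repeat=2):
        # The joint probability factorizes for every pair (i, j).
        assert pr(lambda o: o[i] == x and o[j] == y) == \
               pr(lambda o: o[i] == x) * pr(lambda o: o[j] == y)
print("all three pairs factorize: pairwise independent")
```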

Randomized Polynomial time (RP)

The class RP (for Randomized Polynomial time) consists of all languages L that have a randomized algorithm A running in worst-case polynomial time such that for any input $x \in \Sigma^*$ ($\Sigma$ is the alphabet):

$x \in L \Rightarrow \Pr[A(x) \text{ accepts}] \geq 1/2$;
$x \notin L \Rightarrow \Pr[A(x) \text{ accepts}] = 0$.

Try to reduce the random bits…

We now consider how to reduce the number of random bits used by RP algorithms. Let L be a language and let A be a randomized algorithm for deciding whether an input string x belongs to L.

Given x, A picks a random number r from the range $\mathbb{Z}_n = \{0, 1, \ldots, n-1\}$, with the following property:

If $x \in L$, then $A(x, r) = 1$ for at least half the possible values of r.
If $x \notin L$, then $A(x, r) = 0$ for all possible choices of r.

Observe that for any $x \in L$, a random choice of r is a witness with probability at least 1/2.
Goal: We want to increase this probability, i.e., decrease the error probability.

A strategy for error reduction

There is a strategy as follows: Pick t > 1 values $r_1, r_2, \ldots, r_t \in \mathbb{Z}_n$ independently at random. Compute $A(x, r_i)$ for $i = 1, \ldots, t$. If $A(x, r_i) = 1$ for some i, then declare $x \in L$.

The error probability of this strategy is at most $2^{-t}$. Yet it still uses $\Theta(t \log n)$ random bits. Why? (Each of the t independent draws from $\mathbb{Z}_n$ costs $\log_2 n$ fresh random bits.)
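As a minimal sketch of this strategy (not from the slides), the following Python function treats the verifier A(x, r) as a hypothetical black box satisfying the witness property above.

```python
import random

def naive_amplify(A, x, n, t):
    """Run the one-sided-error verifier A on t independent values from
    Z_n.  Declares x in L as soon as a witness is found.  Consumes
    about t * log2(n) random bits; errs with probability <= 2**(-t)."""
    for _ in range(t):
        r = random.randrange(n)   # log2(n) fresh random bits per trial
        if A(x, r) == 1:          # r is a witness for x in L
            return True           # correct with certainty
    return False                  # declare x not in L (may be wrong)
```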

Strategy of two-point sampling

Actually, we can use fewer random bits. Choose a and b independently and uniformly at random from $\mathbb{Z}_n$ (taking n to be prime, so that $\mathbb{Z}_n$ is a field). Let $r_i = a \cdot i + b \bmod n$ for $i = 1, \ldots, t$, then compute $A(x, r_i)$. If $A(x, r_i) = 1$ for some i, then declare $x \in L$. This uses only $2 \log_2 n$ random bits, for a and b. Now what is the error probability?
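For comparison, a minimal sketch of two-point sampling under the same assumptions (A is the same hypothetical black box; n is assumed prime):

```python
import random

def two_point_amplify(A, x, n, t):
    """Two-point sampling: only two random draws (a and b), i.e. about
    2 * log2(n) random bits in total.  Assumes n is prime, so that the
    values r_i = a*i + b mod n are uniform and pairwise independent."""
    a = random.randrange(n)
    b = random.randrange(n)
    for i in range(1, t + 1):
        r_i = (a * i + b) % n     # pairwise independent, uniform on Z_n
        if A(x, r_i) == 1:
            return True
    return False                  # errs with probability <= 1/t (shown next)
```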

$r_i = a \cdot i + b \bmod n$, $i = 1, \ldots, t$. The $r_i$'s are pairwise independent (see [MR95], Exercise 3.7, page 52). Let $Y = \sum_{i=1}^{t} A(x, r_i)$. For $x \in L$, suppose $\mathrm{E}[A(x, r_i)] = \mu \geq 1/2$; then $\mathrm{Var}[A(x, r_i)] = \mu(1 - \mu)$. By a simple calculus argument, $\mathrm{Var}[A(x, r_i)] \leq 1/4$, and since pairwise independence suffices for variances to add,

$\mathrm{Var}[Y] = \sum_{i=1}^{t} \mathrm{Var}[A(x, r_i)] \leq \dfrac{t}{4}$.
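Spelling out the "simple calculus analysis" behind the bound $\mathrm{Var}[A(x, r_i)] \leq 1/4$:

```latex
\mathrm{Var}[A(x, r_i)] = \mu(1 - \mu), \qquad
\frac{d}{d\mu}\,\mu(1 - \mu) = 1 - 2\mu = 0 \;\Longrightarrow\; \mu = \tfrac{1}{2},
\qquad \max_{0 \le \mu \le 1} \mu(1 - \mu) = \tfrac{1}{2}\cdot\tfrac{1}{2} = \tfrac{1}{4}.
```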

What is the probability of the event {Y = 0}? Since $\mathrm{E}[Y] = \mu t \geq t/2$, the event $Y = 0$ implies $|Y - \mathrm{E}[Y]| \geq t/2$. Thus, by Chebyshev's inequality ($\Pr[|X - \mu_X| \geq k\,\sigma_X] \leq 1/k^2$),

$\Pr[Y = 0] \leq \Pr[|Y - \mathrm{E}[Y]| \geq t/2] \leq \dfrac{\mathrm{Var}[Y]}{(t/2)^2} \leq \dfrac{t/4}{t^2/4} = \dfrac{1}{t}$.

So two-point sampling errs with probability at most 1/t while using only $2 \log_2 n$ random bits.
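As a sanity check (not from the slides), the small experiment below estimates $\Pr[Y = 0]$ for two-point sampling against a toy verifier; the verifier and parameters are hypothetical choices for illustration.

```python
import random

def estimate_failure(n, t, trials=100_000):
    """Empirically estimate Pr[Y = 0] for two-point sampling against a
    toy verifier: r is a witness iff r is even, so mu = 51/101 >= 1/2
    when n = 101.  The estimate should respect the 1/t bound above."""
    witness = lambda r: r % 2 == 0   # hypothetical verifier A(x, r)
    failures = 0
    for _ in range(trials):
        a, b = random.randrange(n), random.randrange(n)
        if not any(witness((a * i + b) % n) for i in range(1, t + 1)):
            failures += 1
    return failures / trials

if __name__ == "__main__":
    n, t = 101, 10                   # n prime
    print(estimate_failure(n, t), "vs. bound 1/t =", 1 / t)
```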

Thank you.