Two-point Sampling. Speaker: Chuang-Chieh Lin
1 Two-point Sampling

2 X, Y: discrete random variables defined over the same probability sample space. p(x,y) = Pr[{X=x} ∩ {Y=y}]: the joint density function of X and Y. Thus Σ_x Σ_y p(x,y) = 1, and the marginals are p_X(x) = Σ_y p(x,y) and p_Y(y) = Σ_x p(x,y).
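A minimal numeric illustration of these definitions (the joint pmf values below are invented for the example, not taken from the slides): marginals are obtained by summing the joint pmf over the other variable, and the joint pmf must sum to 1.

```python
# Toy joint pmf p(x, y) for two discrete random variables X and Y
# (illustrative values only).
p = {
    (0, 0): 0.1, (0, 1): 0.3,
    (1, 0): 0.2, (1, 1): 0.4,
}

def marginal_x(p, x):
    # p_X(x) = sum over y of p(x, y)
    return sum(v for (xi, yi), v in p.items() if xi == x)

def marginal_y(p, y):
    # p_Y(y) = sum over x of p(x, y)
    return sum(v for (xi, yi), v in p.items() if yi == y)

total = sum(p.values())  # a valid joint pmf sums to 1
```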

3 A sequence of random variables X_1, X_2, ... is called pairwise independent if for all i ≠ j and all x, y ∈ R, Pr[X_i = x | X_j = y] = Pr[X_i = x]. Let L be a language and A a randomized algorithm for deciding whether an input string x belongs to L.
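Pairwise independence is strictly weaker than full (mutual) independence. A classic example, not from the slides, makes the distinction concrete: if X1 and X2 are independent fair bits, then (X1, X2, X1 XOR X2) is pairwise independent but not mutually independent. Enumerating the four equally likely outcomes verifies both claims:

```python
from itertools import product

# The four equally likely outcomes of (X1, X2, X1 XOR X2).
outcomes = [(x1, x2, x1 ^ x2) for x1, x2 in product((0, 1), repeat=2)]

def pr(pred):
    # Probability of an event under the uniform distribution on outcomes.
    return sum(1 for o in outcomes if pred(o)) / len(outcomes)

# Pairwise independence: Pr[Xi=a, Xj=b] = 1/4 for every pair i < j and bits a, b.
pairwise = all(
    pr(lambda o: o[i] == a and o[j] == b) == 0.25
    for i in range(3) for j in range(i + 1, 3)
    for a in (0, 1) for b in (0, 1)
)

# Mutual independence would require Pr[X1=X2=X3=1] = 1/8, but XOR forces it to 0.
mutual = pr(lambda o: o == (1, 1, 1)) == 0.125
```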

4 Given x, A picks a random number r from the range Z_n = {0, 1, ..., n-1}, with the following property: If x ∈ L, then A(x,r) = 1 for at least half the possible values of r. If x ∉ L, then A(x,r) = 0 for all possible choices of r.
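This interface can be sketched with a contrived stand-in for A (the "language" of odd numbers, the witness set, and the range size N below are all invented for illustration; they merely satisfy the one-sided guarantee from this slide):

```python
import random

N = 64  # range size; r is drawn from Z_N = {0, ..., N-1}

def A(x, r):
    # Contrived stand-in for the slides' algorithm A (illustrative only):
    # for x in L (here: odd x), exactly half of Z_N are witnesses;
    # for x not in L, no r is ever a witness.
    return 1 if x % 2 == 1 and r < N // 2 else 0

def decide(x, rng=random):
    # One random trial: may wrongly answer "no" with probability <= 1/2
    # when x is in L; never wrongly answers "yes".
    r = rng.randrange(N)
    return A(x, r) == 1
```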

5 Using k-wise independence, we can improve the bound further while still staying within polynomial resources. Observe that for any x ∈ L, a random choice of r is a witness with probability at least 1/2. Goal: we want to increase this probability, i.e., decrease the error probability.

6 Strategy: Pick t > 1 values r_1, r_2, ..., r_t ∈ Z_n independently. Compute A(x, r_i) for i = 1, ..., t. If A(x, r_i) = 1 for any i, declare x ∈ L. The error probability is at most 2^(-t), but this uses Θ(t log n) random bits.
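The repetition strategy above can be sketched as follows, reusing a contrived stand-in for A (odd x with witnesses in the lower half of Z_N; both are illustrative assumptions, not part of the slides):

```python
import random

N = 64

def toy_A(x, r):
    # Hypothetical A with the one-sided guarantee from slide 4:
    # witnesses for x in L (odd x) are exactly the lower half of Z_N.
    return 1 if x % 2 == 1 and r < N // 2 else 0

def decide_repeated(A, x, n, t, rng=random):
    # Run A on t independently drawn r's; accept iff some run finds a
    # witness. For x in L the error probability is at most 2**-t, at the
    # cost of t independent samples from Z_n.
    return any(A(x, rng.randrange(n)) == 1 for _ in range(t))
```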

7 Actually, we can use fewer random bits. Choose a and b uniformly at random from Z_n. Let r_i = a·i + b mod n, for i = 1, ..., t. Compute A(x, r_i); if A(x, r_i) = 1 for any i, declare x ∈ L. Now what is the error probability?
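A sketch of this two-point scheme: all the randomness goes into a and b, and the t samples are derived deterministically from them. (One assumption is made explicit here that the slides leave implicit by writing Z_n generically: the pairwise-independence argument used later needs n to be prime, so that Z_n is a field.)

```python
import random

def two_point_samples(n, t, rng=random):
    # Spend randomness only on a and b, then derive t correlated samples
    # r_i = (a*i + b) mod n.  Only 2*log2(n) random bits are consumed.
    a = rng.randrange(n)
    b = rng.randrange(n)
    return [(a * i + b) % n for i in range(1, t + 1)]

def decide_two_point(A, x, n, t, rng=random):
    # Accept iff some derived sample is a witness for x.
    return any(A(x, r) == 1 for r in two_point_samples(n, t, rng))
```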

8 Ex: with r_i = a·i + b mod n, i = 1, ..., t, the r_i's are pairwise independent. Let Y_i = A(x, r_i) and Y = Y_1 + Y_2 + ... + Y_t. For x ∈ L, E[Y] = Σ_i E[Y_i] ≥ t/2.
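The pairwise-independence claim can be checked exhaustively for a small prime modulus (the primality requirement is an assumption made explicit here): for fixed i ≠ j, the map (a, b) → (r_i, r_j) is a bijection on Z_n × Z_n, so over uniform (a, b) the pair (r_i, r_j) is uniform, and hence r_i and r_j are independent.

```python
from collections import Counter
from itertools import product

# Verify pairwise independence of r_i = (a*i + b) mod n exhaustively:
# over all n^2 seeds (a, b), the pair (r_i, r_j) for fixed i != j should
# hit every value of Z_n x Z_n exactly once.
n = 7       # prime, so Z_n is a field and the seed-to-pair map is a bijection
i, j = 1, 3  # any fixed pair of distinct indices

counts = Counter(
    ((a * i + b) % n, (a * j + b) % n) for a, b in product(range(n), repeat=2)
)
uniform = all(counts[(u, v)] == 1 for u, v in product(range(n), repeat=2))
```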

9 What is the probability of the event {Y = 0}? Y = 0 ⇒ |Y − E[Y]| ≥ t/2. Recall Chebyshev's inequality: Pr[|X − μ_X| ≥ k·σ_X] ≤ 1/k². Since the Y_i are pairwise independent, Var[Y] = Σ_i Var[Y_i] ≤ t/4, so Pr[Y = 0] ≤ Pr[|Y − E[Y]| ≥ t/2] ≤ Var[Y]/(t/2)² ≤ 1/t. Thus the error probability is at most 1/t, using only the two random choices a and b, i.e., Θ(log n) random bits.
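The 1/t guarantee can be checked exhaustively in a toy setup (the prime n, the value of t, and the witness set W below are illustrative assumptions, not from the slides; W has density at least 1/2, matching the promise on slide 4):

```python
from itertools import product

n = 11                      # prime modulus
t = 4                       # number of derived samples r_1, ..., r_t
W = set(range(n // 2 + 1))  # toy witness set, |W| = 6 >= n/2

# Enumerate all n^2 seeds (a, b) and count how often two-point sampling
# sees no witness at all (Y = 0), i.e., how often the algorithm errs on
# an input whose witness set is W.
failures = 0
for a, b in product(range(n), repeat=2):
    samples = [(a * i + b) % n for i in range(1, t + 1)]
    if not any(r in W for r in samples):  # Y = 0: every sample missed W
        failures += 1

error = failures / (n * n)  # exact error probability in this toy setup
```

By the Chebyshev argument, this exact error probability must come out at most 1/t.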