Chernoff bounds
The Chernoff bound for a random variable X is obtained as follows: for any t > 0,
Pr[X ≥ a] = Pr[e^{tX} ≥ e^{ta}] ≤ E[e^{tX}] / e^{ta}.
Similarly, for any t < 0,
Pr[X ≤ a] = Pr[e^{tX} ≥ e^{ta}] ≤ E[e^{tX}] / e^{ta}.
The value of t that minimizes E[e^{tX}] / e^{ta} gives the best possible bound.
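To make the minimization over t concrete, here is a minimal Python sketch (not part of the original slides; the standard normal MGF and the grid of t values are chosen purely for illustration) that searches for the best bound numerically:

```python
import math

def chernoff_bound(mgf, a, t_grid):
    """Numerically minimize E[e^{tX}] / e^{ta} over a grid of t > 0."""
    return min(mgf(t) / math.exp(t * a) for t in t_grid)

# A standard normal X has the closed-form MGF M_X(t) = exp(t^2 / 2),
# so the optimal choice is t = a, giving the bound exp(-a^2 / 2).
mgf_normal = lambda t: math.exp(t * t / 2)
a = 2.0
t_grid = [i / 1000 for i in range(1, 5000)]
print(chernoff_bound(mgf_normal, a, t_grid))  # ~0.1353
print(math.exp(-a * a / 2))                   # exact optimum, ~0.1353
```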
Moment generating functions
Def: The moment generating function of a random variable X is M_X(t) = E[e^{tX}].
E[X^n] = M_X^{(n)}(0), the n-th derivative of M_X(t) evaluated at t = 0.
Fact: If M_X(t) = M_Y(t) for all t in (−c, c) for some c > 0, then X and Y have the same distribution.
Fact: If X and Y are independent random variables, then M_{X+Y}(t) = M_X(t) · M_Y(t).
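A quick Monte Carlo sanity check of the product rule for independent variables (a sketch; the Bernoulli/uniform choices and variable names are ours, for illustration only):

```python
import numpy as np

# Check M_{X+Y}(t) = M_X(t) * M_Y(t) empirically for independent
# X ~ Bernoulli(0.3) and Y ~ Uniform(0, 1).
rng = np.random.default_rng(0)
n = 200_000
X = rng.binomial(1, 0.3, n)
Y = rng.uniform(0.0, 1.0, n)

def mgf_est(sample, t):
    """Empirical estimate of E[e^{t * sample}]."""
    return np.exp(t * sample).mean()

t = 0.7
print(mgf_est(X + Y, t))                # agree up to Monte Carlo error
print(mgf_est(X, t) * mgf_est(Y, t))
```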
Chernoff bounds for the sum of Poisson trials
Poisson trials: the distribution of a sum of independent 0-1 random variables, which need not be identically distributed.
Bernoulli trials: the same, except that all the random variables are identically distributed.
Let X_i, i = 1, …, n, be mutually independent 0-1 random variables with Pr[X_i = 1] = p_i. Let X = X_1 + … + X_n and μ = E[X] = p_1 + … + p_n. Then
M_{X_i}(t) = E[e^{tX_i}] = p_i e^t + (1 − p_i) = 1 + p_i(e^t − 1) ≤ exp(p_i(e^t − 1)),
using the inequality 1 + x ≤ e^x.
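The last step is just 1 + x ≤ e^x with x = p_i(e^t − 1); a short numeric spot-check (the grid of p and t values is arbitrary):

```python
import math

# Spot-check M_{X_i}(t) = 1 + p*(e^t - 1) <= exp(p*(e^t - 1)),
# i.e. the elementary inequality 1 + x <= e^x.
for p in (0.1, 0.5, 0.9):
    for t in (-1.0, 0.5, 1.0, 2.0):
        x = p * (math.exp(t) - 1.0)
        assert 1.0 + x <= math.exp(x) + 1e-12, (p, t)
print("1 + p(e^t - 1) <= exp(p(e^t - 1)) on all tested points")
```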
Chernoff bound for a sum of Poisson trials
By independence,
M_X(t) = M_{X_1}(t) · M_{X_2}(t) ⋯ M_{X_n}(t) ≤ exp{(p_1 + … + p_n)(e^t − 1)} = exp{(e^t − 1)μ}.
Theorem: Let X = X_1 + … + X_n, where X_1, …, X_n are independent Poisson trials with Pr[X_i = 1] = p_i, and let μ = E[X]. Then:
1. For any δ > 0, Pr[X ≥ (1 + δ)μ] ≤ (e^δ / (1 + δ)^{1+δ})^μ.
2. For 0 < δ ≤ 1, Pr[X ≥ (1 + δ)μ] ≤ e^{−μδ²/3}.
3. For R ≥ 6μ, Pr[X ≥ R] ≤ 2^{−R}.
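A simulation comparing the empirical upper tail of a sum of Poisson trials with bound 2 of the theorem (the heterogeneous p_i and δ = 0.3 are arbitrary illustrative choices):

```python
import numpy as np

# Empirical upper tail of a sum of heterogeneous Poisson trials
# versus the Chernoff bound exp(-mu * delta^2 / 3) (part 2).
rng = np.random.default_rng(1)
p = rng.uniform(0.1, 0.9, size=100)           # arbitrary p_i
mu = p.sum()
delta = 0.3

X = (rng.random((100_000, p.size)) < p).sum(axis=1)
print((X >= (1 + delta) * mu).mean())         # empirical tail
print(np.exp(-mu * delta**2 / 3))             # bound, always larger
```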
Proof: By Markov's inequality, for any t > 0,
Pr[X ≥ (1 + δ)μ] = Pr[e^{tX} ≥ e^{t(1+δ)μ}] ≤ E[e^{tX}] / e^{t(1+δ)μ} ≤ exp((e^t − 1)μ) / e^{t(1+δ)μ}.
Setting t = ln(1 + δ) > 0 yields part 1:
Pr[X ≥ (1 + δ)μ] ≤ (e^δ / (1 + δ)^{1+δ})^μ.
For part 2, it suffices to show that δ − (1 + δ) ln(1 + δ) ≤ −δ²/3 for 0 < δ ≤ 1, which follows by comparing derivatives. For part 3, write R = (1 + δ)μ with 1 + δ = R/μ ≥ 6; then
Pr[X ≥ R] ≤ (e^δ / (1 + δ)^{1+δ})^μ ≤ (e / (1 + δ))^{(1+δ)μ} ≤ (e/6)^R ≤ 2^{−R}.
Similarly, we have:
Theorem: Let X = X_1 + … + X_n, where X_1, …, X_n are independent Poisson trials with Pr[X_i = 1] = p_i, and let μ = E[X]. Then for 0 < δ < 1:
1. Pr[X ≤ (1 − δ)μ] ≤ (e^{−δ} / (1 − δ)^{1−δ})^μ.
2. Pr[X ≤ (1 − δ)μ] ≤ e^{−μδ²/2}.
Corollary: For 0 < δ < 1,
Pr[|X − μ| ≥ δμ] ≤ 2e^{−μδ²/3}.
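The two-sided corollary in simulation (a sketch with identical p_i = 0.25 and δ = 0.3, chosen only for illustration):

```python
import numpy as np

# Two-sided deviation of a Bernoulli sum versus the corollary's
# bound 2 * exp(-mu * delta^2 / 3).
rng = np.random.default_rng(4)
n, p, delta = 200, 0.25, 0.3
mu = n * p

X = (rng.random((100_000, n)) < p).sum(axis=1)
print((np.abs(X - mu) >= delta * mu).mean())  # empirical, far below bound
print(2 * np.exp(-mu * delta**2 / 3))         # ~0.446
```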
Example: Let X be the number of heads in n independent fair coin flips
Let μ = E[X] = n/2. Applying the Corollary above with δ = √(6 ln n / n), i.e. δμ = √(6n ln n)/2:
Pr[|X − n/2| ≥ √(6n ln n)/2] ≤ 2 exp(−(1/3)(n/2)(6 ln n / n)) = 2/n.
By Chebyshev's inequality, with Var[X] = n/4, the same deviation is bounded only by (n/4) / ((6n ln n)/4) = 1/(6 ln n), so the Chernoff bound is far stronger.
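Plugging in a specific n makes the gap concrete (n = 1000 is an arbitrary illustrative choice; the function name is ours):

```python
import math

def tail_bounds(n):
    """Chernoff vs Chebyshev bounds on Pr[|X - n/2| >= sqrt(6 n ln n)/2]."""
    a = math.sqrt(6 * n * math.log(n)) / 2
    chernoff = 2 / n                    # 2 * exp(-ln n)
    chebyshev = (n / 4) / (a * a)       # Var[X] / a^2 = 1 / (6 ln n)
    return chernoff, chebyshev

print(tail_bounds(1000))  # (0.002, ~0.024): Chernoff is far tighter
```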
Application: Estimating a parameter
Given a DNA sample, a lab test can determine whether it carries a particular mutation. Since the test is expensive, we would like a reliable estimate from a small number of samples.
Let p be the unknown parameter we want to estimate. Assume we have n samples, of which X = p̃n carry the mutation. For a sufficiently large number of samples, we expect p̃ to be close to p.
Def: A 1 − γ confidence interval for a parameter p is an interval [p̃ − δ, p̃ + δ] such that
Pr[p ∈ [p̃ − δ, p̃ + δ]] ≥ 1 − γ.
Among the n samples, we find X = p̃n with the mutation; X is binomial with parameters (n, p), so E[X] = np. If p ∉ [p̃ − δ, p̃ + δ], then one of two events holds:
if p < p̃ − δ, then X = p̃n > (p + δ)n = E[X](1 + δ/p);
if p > p̃ + δ, then X = p̃n < (p − δ)n = E[X](1 − δ/p).
Applying the Chernoff bounds above,
Pr[p ∉ [p̃ − δ, p̃ + δ]] ≤ Pr[X < E[X](1 − δ/p)] + Pr[X > E[X](1 + δ/p)] ≤ e^{−nδ²/(2p)} + e^{−nδ²/(3p)}.
Since p ≤ 1, this is at most e^{−nδ²/2} + e^{−nδ²/3}; setting γ equal to this value gives a 1 − γ confidence interval.
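Given the final bound, one can solve for the half-width δ that achieves a target confidence 1 − γ. A sketch using the p ≤ 1 relaxation above (the function name and binary-search approach are ours):

```python
import math

def interval_halfwidth(n, gamma):
    """Smallest delta with exp(-n*d^2/2) + exp(-n*d^2/3) <= gamma,
    found by binary search (uses the p <= 1 relaxation above)."""
    lo, hi = 0.0, 1.0
    for _ in range(100):
        mid = (lo + hi) / 2
        g = math.exp(-n * mid**2 / 2) + math.exp(-n * mid**2 / 3)
        lo, hi = (mid, hi) if g > gamma else (lo, mid)
    return hi

# e.g. 1000 samples, 95% confidence: half-width ~0.098
print(interval_halfwidth(1000, 0.05))
```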
Better bounds for special cases
Theorem: Let X = X_1 + … + X_n, where X_1, …, X_n are independent random variables with Pr[X_i = 1] = Pr[X_i = −1] = 1/2. For any a > 0,
Pr[X ≥ a] ≤ e^{−a²/(2n)}.
Proof: E[e^{tX_i}] = (e^t + e^{−t})/2 = Σ_{j≥0} t^{2j}/(2j)! ≤ Σ_{j≥0} (t²/2)^j / j! = e^{t²/2}. Hence E[e^{tX}] ≤ e^{nt²/2}, so Pr[X ≥ a] ≤ e^{nt²/2 − ta}; setting t = a/n gives the bound.
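A quick simulation of the ±1 tail bound (n = 400 and a = 40, i.e. two standard deviations, are illustrative choices):

```python
import numpy as np

# Empirical tail of a sum of n independent +-1 variables versus
# the special-case bound exp(-a^2 / (2n)).
rng = np.random.default_rng(2)
n, a = 400, 40
X = rng.choice([-1, 1], size=(200_000, n)).sum(axis=1)
print((X >= a).mean())            # ~0.023 empirically
print(np.exp(-a**2 / (2 * n)))    # bound e^{-2} ~ 0.135
```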
Better bounds for special cases
Corollary: For X as in the theorem above and any a > 0,
Pr[|X| ≥ a] ≤ 2e^{−a²/(2n)}.
Corollary: Let Y = Y_1 + … + Y_n, where Y_1, …, Y_n are independent with Pr[Y_i = 1] = Pr[Y_i = 0] = 1/2, and let μ = E[Y] = n/2. Writing Y_i = (X_i + 1)/2 and applying the theorem above gives: for any a > 0, Pr[Y ≥ μ + a] ≤ e^{−2a²/n}, and for any δ > 0, Pr[Y ≥ (1 + δ)μ] ≤ e^{−δ²μ}.
Better bounds for special cases
Corollary: For Y as above, Pr[Y ≤ μ − a] ≤ e^{−2a²/n} for any a > 0, and Pr[Y ≤ (1 − δ)μ] ≤ e^{−δ²μ} for any 0 < δ < 1.
Application: Set balancing. Given an n×m matrix A with entries in {0, 1}, find a vector b with entries in {−1, 1} minimizing ‖Ab‖_∞.
Theorem: For a random vector b whose m entries are independently and equally likely to be +1 or −1,
Pr[‖Ab‖_∞ ≥ √(4m ln n)] ≤ 2/n.
Proof of set balancing:
Consider the i-th row a_i = (a_{i,1}, …, a_{i,m}), and suppose it contains k ones. If k ≤ √(4m ln n), then clearly |a_i · b| ≤ √(4m ln n). If k > √(4m ln n), then Z_i = a_i · b is a sum of k independent ±1 random variables, and by the Corollary above,
Pr[|Z_i| > √(4m ln n)] ≤ 2e^{−4m ln n / (2k)} ≤ 2e^{−2 ln n} = 2/n²,
since k ≤ m. A union bound over the n rows gives Pr[‖Ab‖_∞ ≥ √(4m ln n)] ≤ n · 2/n² = 2/n.
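A simulation of the set-balancing guarantee (the random 0-1 matrix and the sizes n = m = 200 are illustrative assumptions, not from the slides):

```python
import numpy as np

# For random +-1 vectors b, ||Ab||_inf should exceed sqrt(4 m ln n)
# with probability at most 2/n.
rng = np.random.default_rng(3)
n = m = 200
A = (rng.random((n, m)) < 0.5).astype(int)     # random 0-1 matrix
threshold = np.sqrt(4 * m * np.log(n))

B = rng.choice([-1, 1], size=(1000, m))        # 1000 random b vectors
norms = np.abs(B @ A.T).max(axis=1)            # ||Ab||_inf per trial
print((norms >= threshold).mean())             # empirical failure rate ~0
print(2 / n)                                   # theoretical bound 0.01
```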