Chernoff Bounds
Presentation for use with the textbook Algorithm Design and Applications, by M. T. Goodrich and R. Tamassia, Wiley, 2015.
[Image: Andrey Markov, public domain]
© 2015 Goodrich and Tamassia

Markov’s Inequality Let X be a nonnegative random variable and let a > 0. Then
Pr(X ≥ a) ≤ E[X] / a.
For example, if X has mean 1, the probability that X is 10 or larger is at most 1/10. Markov’s inequality is loose, but it is the starting point for the sharper tail bounds that follow.
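A minimal Python sketch (not from the slides) that checks Markov’s inequality by simulation; the choice of X as the number of heads in four fair coin flips and the threshold a = 3 are illustrative:

```python
import random

def markov_check(trials=100_000, a=3.0):
    # X = number of heads in 4 fair coin flips, a nonnegative variable with E[X] = 2.
    samples = [sum(random.random() < 0.5 for _ in range(4)) for _ in range(trials)]
    mean = sum(samples) / trials                   # empirical estimate of E[X]
    tail = sum(s >= a for s in samples) / trials   # empirical Pr(X >= a)
    print(f"Pr(X >= {a}) ~ {tail:.4f}  <=  E[X]/a ~ {mean / a:.4f}")

markov_check()
```

The empirical tail (about 0.31 here) sits well below the Markov bound (about 0.67), which illustrates why the sharper Chernoff bounds below are worth the extra work.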

Sums of Indicator Random Variables Let X = X1 + X2 + ... + Xn be the sum of n independent 0-1 indicator random variables, such that Xi = 1 with probability pi. In the language of probability theory, X follows a binomial distribution when all the pi are equal (and, in general, a Poisson binomial distribution). Intuitively, X is the number of heads one gets by flipping n coins such that the i-th coin comes up heads with probability pi. By linearity of expectation, the mean, or expected value, of X is
μ = E[X] = p1 + p2 + ... + pn.
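To make the setup concrete, here is a small Python sketch (illustrative, not from the slides) that draws samples of such an X and confirms that its empirical mean matches μ = p1 + ... + pn:

```python
import random

def sample_indicator_sum(ps):
    # One draw of X = X1 + ... + Xn, where Xi = 1 with probability ps[i].
    return sum(random.random() < p for p in ps)

ps = [0.2, 0.5, 0.9, 0.3]   # arbitrary example probabilities
mu = sum(ps)                 # E[X], by linearity of expectation
trials = 100_000
avg = sum(sample_indicator_sum(ps) for _ in range(trials)) / trials
print(f"empirical mean ~ {avg:.3f}, mu = {mu:.3f}")
```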

A Chernoff (upper) Bound Let X be as above, with mean μ = E[X]. Then, for any δ > 0,
Pr(X > (1 + δ)μ) < ( e^δ / (1 + δ)^(1+δ) )^μ.
That is, the probability that X exceeds its mean by a (1 + δ) factor falls off sharply as δ or μ grows.

A Chernoff (lower) Bound Similarly, for any 0 < δ < 1,
Pr(X < (1 − δ)μ) < e^(−δ²μ/2).
So X is also very unlikely to fall far below its mean.
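The following Python sketch (again illustrative, not from the slides) evaluates both tail bounds for a binomial example and compares them with simulated tail frequencies; the parameters n, p, and δ are arbitrary:

```python
import math
import random

def chernoff_upper(mu, delta):
    # Pr(X > (1 + delta) * mu) < (e^delta / (1 + delta)^(1 + delta))^mu
    return (math.exp(delta) / (1 + delta) ** (1 + delta)) ** mu

def chernoff_lower(mu, delta):
    # Pr(X < (1 - delta) * mu) < e^(-delta^2 * mu / 2)
    return math.exp(-delta * delta * mu / 2)

n, p, trials = 100, 0.5, 100_000
mu, delta = n * p, 0.3
samples = [sum(random.random() < p for _ in range(n)) for _ in range(trials)]
up = sum(x > (1 + delta) * mu for x in samples) / trials
lo = sum(x < (1 - delta) * mu for x in samples) / trials
print(f"upper tail ~ {up:.5f}  <=  bound {chernoff_upper(mu, delta):.5f}")
print(f"lower tail ~ {lo:.5f}  <=  bound {chernoff_lower(mu, delta):.5f}")
```

Both empirical tails come out well under their bounds, and far smaller than anything Markov’s inequality alone would give for the same thresholds.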

Application: Load Balancing Suppose we have a set of n processors and a set of n jobs for them to perform, but no good way of assigning jobs to processors. So we just assign each job to a processor chosen independently and uniformly at random. What is a good high-probability upper bound on the number of jobs assigned to any one processor? We can answer this question using a Chernoff bound. Let X be a random variable representing the number of jobs assigned to processor 1. Then X can be written as X = X1 + X2 + ... + Xn, where Xi is the 0-1 indicator random variable for the event that job i is assigned to processor 1. Thus, Pr(Xi = 1) = 1/n and μ = E[X] = 1.
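Before invoking the bound, we can watch the phenomenon directly. A short Python sketch (illustrative) assigns n jobs to n processors uniformly at random and reports the busiest processor’s load:

```python
import random
from collections import Counter

def max_load(n):
    # Assign n jobs to n processors uniformly at random; return the largest load.
    counts = Counter(random.randrange(n) for _ in range(n))
    return max(counts.values())

n = 1000
loads = [max_load(n) for _ in range(100)]
print(f"n = {n}: max load over 100 runs ranged from {min(loads)} to {max(loads)}")
```

Even for n = 1000, the maximum load stays small (typically in the single digits), matching the O(log n / log log n) behavior derived next.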

Applying a Chernoff Bound Since the Xi’s are clearly independent, we can apply the Chernoff bound from Theorem 19.12, with μ = 1 and 1 + δ = m, to get, for any integer m > 1:
Pr(X > m) < e^(m−1) / m^m < (e/m)^m.
Choosing m = Θ(log n / log log n) makes this at most 1/n². Thus, the probability that processor 1 has more than m jobs assigned to it by this random assignment is at most 1/n². Therefore, by a union bound over all n processors, the probability that any processor is assigned more than m jobs is at most n/n² = 1/n. In other words, the number of jobs assigned to any processor is O(log n / log log n) with high probability.
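To see the O(log n / log log n) growth numerically, a small Python sketch (illustrative) finds the smallest m with (e/m)^m ≤ 1/n², using the equivalent condition m(ln m − 1) ≥ 2 ln n:

```python
import math

def smallest_m(n):
    # Smallest integer m > 1 with (e/m)^m <= 1/n^2,
    # i.e., with m * (ln m - 1) >= 2 * ln n.
    m = 2
    while m * (math.log(m) - 1) < 2 * math.log(n):
        m += 1
    return m

for n in [10, 100, 10**4, 10**6, 10**9]:
    m = smallest_m(n)
    scale = math.log(n) / math.log(math.log(n))
    print(f"n = {n:>10}: m = {m:2d}, m / (ln n / ln ln n) ~ {m / scale:.2f}")
```

The required m grows very slowly with n, and its ratio to ln n / ln ln n stays small, as the theorem promises.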