COMPSCI 102 Introduction to Discrete Mathematics.

CPS 102 Classics

Today, we will learn about a formidable tool in probability that will allow us to solve problems that seem really really messy…

If I randomly put 100 letters into 100 addressed envelopes, on average how many letters will end up in their correct envelopes? Hmm… Σ_k k · Pr(k letters end up in correct envelopes) = Σ_k k · (…aargh!!…)

On average, in a class of size m, how many pairs of people will have the same birthday? Σ_k k · Pr(exactly k collisions) = Σ_k k · (…aargh!!!!…)

The new tool is called “Linearity of Expectation”

Random Variable. To use this new tool, we will also need to understand the concept of a Random Variable. Today’s lecture: not too much material, but we need to understand it well.

Random Variable. Let S be a sample space in a probability distribution. A Random Variable is a real-valued function on S. Examples (two-dice roll): X = value of the white die: X(3,4) = 3, X(1,6) = 1. Y = sum of the values of the two dice: Y(3,4) = 7, Y(1,6) = 7. W = (value of white die)^(value of black die): W(3,4) = 3^4, W(1,6) = 1^6.

Tossing a Fair Coin n Times. S = all sequences in {H, T}^n. D = uniform distribution on S: D(x) = (½)^n for all x ∈ S. Random Variables (say n = 10): X = # of heads: X(HHHTTHTHTT) = 5. Y = (1 if #heads = #tails, 0 otherwise): Y(HHHTTHTHTT) = 1, Y(THHHHTTTTT) = 0.

Notational Conventions Use letters like A, B, E for events Use letters like X, Y, f, g for R.V.’s R.V. = random variable

Two Views of Random Variables. Think of a R.V. as a function from S to the reals ℝ (the input to the function is random), or think of the induced distribution on ℝ (the randomness is “pushed” to the values of the function).

Two Coins Tossed. X: {TT, TH, HT, HH} → {0, 1, 2} counts the number of heads. Sample space S: each of TT, HT, TH, HH has probability ¼. Induced distribution on the reals: Pr(X = 0) = ¼, Pr(X = 1) = ½, Pr(X = 2) = ¼.

It’s a Floor Wax and a Dessert Topping. It’s a function on the sample space S. It’s a variable with a probability distribution on its values. You should be comfortable with both views.

From Random Variables to Events. For any random variable X and value a, we can define the event A that X = a. Pr(A) = Pr(X = a) = Pr({x ∈ S | X(x) = a}).

Two Coins Tossed. X: {TT, TH, HT, HH} → {0, 1, 2} counts # of heads; each outcome TT, HT, TH, HH has probability ¼, so the distribution on X is ¼, ½, ¼. Pr(X = a) = Pr({x ∈ S | X(x) = a}); for example, Pr(X = 1) = Pr({x ∈ S | X(x) = 1}) = Pr({TH, HT}) = ½.
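
To make the two views concrete, here is a minimal sketch in Python (not from the slides; the outcome labels and probabilities are the ones above) that computes the induced distribution of X from the definition Pr(X = a) = Pr({x ∈ S | X(x) = a}):

```python
from fractions import Fraction
from collections import defaultdict

# Sample space for two fair coin tosses; each outcome has probability 1/4
S = {"TT": Fraction(1, 4), "HT": Fraction(1, 4),
     "TH": Fraction(1, 4), "HH": Fraction(1, 4)}

def X(outcome):
    """Random variable: the number of heads in the outcome."""
    return outcome.count("H")

# Induced distribution on the values of X: Pr(X = a) = Pr({x in S | X(x) = a})
dist = defaultdict(Fraction)
for outcome, p in S.items():
    dist[X(outcome)] += p

print(dict(dist))  # {0: Fraction(1, 4), 1: Fraction(1, 2), 2: Fraction(1, 4)}
```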

From Events to Random Variables. For any event A, we can define the indicator random variable for A: X_A(x) = 1 if x ∈ A, 0 if x ∉ A.

Definition: Expectation. The expectation, or expected value, of a random variable X is written E[X], and is E[X] = Σ_{x ∈ S} Pr(x) · X(x) = Σ_k k · Pr[X = k]. (X is a function on the sample space S; X has a distribution on its values.)

A Quick Calculation… What if I flip a coin 3 times? What is the expected number of heads? E[X] = (1/8)×0 + (3/8)×1 + (3/8)×2 + (1/8)×3 = 1.5 But Pr[ X = 1.5 ] = 0 Moral: don’t always expect the expected. Pr[ X = E[X] ] may be 0 !
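
A small sketch (Python, not part of the original lecture) that computes E[X] both ways from the definition above, for the three fair flips just discussed, and confirms that Pr[X = 1.5] = 0:

```python
from fractions import Fraction
from itertools import product
from collections import defaultdict

# The 2^3 equally likely outcomes of three fair flips
outcomes = ["".join(seq) for seq in product("HT", repeat=3)]
p = Fraction(1, len(outcomes))

def X(outcome):
    return outcome.count("H")  # number of heads

# E[X] as a sum over the sample space: sum over x of Pr(x) * X(x)
E_over_S = sum(p * X(x) for x in outcomes)

# E[X] as a sum over values: sum over k of k * Pr[X = k]
dist = defaultdict(Fraction)
for x in outcomes:
    dist[X(x)] += p
E_over_values = sum(k * pk for k, pk in dist.items())

print(E_over_S, E_over_values)      # 3/2 3/2
print(dist.get(Fraction(3, 2), 0))  # 0 -- don't always expect the expected
```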

Type Checking A Random Variable is the type of thing you might want to know an expected value of If you are computing an expectation, the thing whose expectation you are computing is a random variable

Indicator R.V.s: E[X_A] = Pr(A). For any event A, we can define the indicator random variable for A: X_A(x) = 1 if x ∈ A, 0 if x ∉ A. Then E[X_A] = 1 × Pr(X_A = 1) = Pr(A).

Adding Random Variables. If X and Y are random variables (on the same set S), then Z = X + Y is also a random variable: Z(x) = X(x) + Y(x). E.g., rolling two dice: X = 1st die, Y = 2nd die, Z = sum of the two dice.

Adding Random Variables Example: Consider picking a random person in the world. Let X = length of the person’s left arm in inches. Y = length of the person’s right arm in inches. Let Z = X+Y. Z measures the combined arm lengths

Independence. Two random variables X and Y are independent if, for every a and b, the events X = a and Y = b are independent. How about the case of X = 1st die, Y = 2nd die? X = left arm, Y = right arm?

Linearity of Expectation If Z = X+Y, then E[Z] = E[X] + E[Y] Even if X and Y are not independent

E[Z] = Σ_{x ∈ S} Pr[x] · Z(x) = Σ_{x ∈ S} Pr[x] · (X(x) + Y(x)) = Σ_{x ∈ S} Pr[x] · X(x) + Σ_{x ∈ S} Pr[x] · Y(x) = E[X] + E[Y]

Linearity of Expectation. E.g., 2 fair flips: X = 1st coin (1 if heads, 0 if tails), Y = 2nd coin, Z = X + Y = total # heads. Values (X, Y, Z) per outcome: HH: 1, 1, 2; HT: 1, 0, 1; TH: 0, 1, 1; TT: 0, 0, 0. What is E[X]? E[Y]? E[Z]?

Linearity of Expectation. E.g., 2 fair flips: X = at least one coin is heads, Y = both coins are heads, Z = X + Y. Values (X, Y, Z) per outcome: HH: 1, 1, 2; HT: 1, 0, 1; TH: 1, 0, 1; TT: 0, 0, 0. Are X and Y independent? What is E[X]? E[Y]? E[Z]?
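
For the curious, a quick check of this slide in Python (a sketch, not from the lecture; X, Y, Z are the random variables named above): linearity gives E[Z] = E[X] + E[Y] even though X and Y are clearly not independent.

```python
from fractions import Fraction

outcomes = ["HH", "HT", "TH", "TT"]   # two fair flips, each with probability 1/4
p = Fraction(1, 4)

X = lambda o: 1 if "H" in o else 0    # at least one coin is heads
Y = lambda o: 1 if o == "HH" else 0   # both coins are heads

E = lambda f: sum(p * f(o) for o in outcomes)

print(E(X), E(Y), E(lambda o: X(o) + Y(o)))   # 3/4 1/4 1, so E[Z] = E[X] + E[Y]

# Not independent: Pr(X=1 and Y=1) = 1/4, while Pr(X=1) * Pr(Y=1) = 3/16
print(sum(p for o in outcomes if X(o) == 1 and Y(o) == 1), E(X) * E(Y))
```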

By Induction: E[X_1 + X_2 + … + X_n] = E[X_1] + E[X_2] + … + E[X_n]. The expectation of the sum = the sum of the expectations.

It is finally time to show off our probability prowess…

If I randomly put 100 letters into 100 addressed envelopes, on average how many letters will end up in their correct envelopes? Hmm… Σ_k k · Pr(k letters end up in correct envelopes) = Σ_k k · (…aargh!!…)

Use Linearity of Expectation. Let A_i be the event that the i-th letter ends up in its correct envelope. Let X_i be the indicator R.V. for A_i: X_i = 1 if A_i occurs, 0 otherwise. Let Z = X_1 + … + X_100; we are asking for E[Z]. E[X_i] = Pr(A_i) = 1/100, so E[Z] = E[X_1] + … + E[X_100] = 1.

So, in expectation, 1 letter will end up in its correct envelope. Pretty neat: it doesn’t depend on how many letters! Question: were the X_i independent? No! E.g., think of n = 2.
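
As a sanity check (my own sketch, not from the slides), a short Monte Carlo simulation: shuffle the envelopes many times and average the number of letters that land in the right place; the average hovers around 1 regardless of how many letters there are.

```python
import random

def average_correct(n_letters=100, trials=50_000):
    """Average number of letters that end up in their own envelope."""
    total = 0
    for _ in range(trials):
        envelopes = list(range(n_letters))
        random.shuffle(envelopes)               # a uniformly random assignment
        total += sum(1 for i, e in enumerate(envelopes) if i == e)
    return total / trials

print(average_correct())               # close to 1.0
print(average_correct(n_letters=10))   # still close to 1.0
```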

Use Linearity of Expectation. General approach: view the thing you care about as the expected value of some RV; write this RV as a sum of simpler RVs (typically indicator RVs); solve for their expectations and add them up!

Example. We flip n coins of bias p. What is the expected number of heads? We could do this by summing Σ_k k · Pr(X = k) = Σ_k k · C(n, k) · p^k · (1 - p)^(n - k). But now we know a better way!

Let X = number of heads when n independent coins of bias p are flipped. Break X into n simpler RVs: X_j = 1 if the j-th coin is heads, 0 if the j-th coin is tails. Linearity of Expectation! E[X] = E[Σ_j X_j] = Σ_j E[X_j] = np.
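
A sketch (Python, with example values n = 50 and p = 0.3 that are mine, not the lecture's) comparing a simulation against the exact answer np:

```python
import random

def simulate_heads(n, p, trials=20_000):
    """Estimate E[number of heads] for n independent coins of bias p."""
    total = 0
    for _ in range(trials):
        total += sum(1 for _ in range(n) if random.random() < p)
    return total / trials

n, p = 50, 0.3
print(simulate_heads(n, p), n * p)   # the estimate should be close to n*p = 15.0
```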

What About Products? If Z = XY, then E[Z] = E[X] × E[Y]? No! X = indicator for “1st flip is heads”, Y = indicator for “1st flip is tails”: E[XY] = 0, but E[X] × E[Y] = ¼.

But It’s True If RVs Are Independent. Proof: E[X] = Σ_a a × Pr(X = a), E[Y] = Σ_b b × Pr(Y = b). E[XY] = Σ_c c × Pr(XY = c) = Σ_c Σ_{a,b: ab = c} c × Pr(X = a ∧ Y = b) = Σ_{a,b} ab × Pr(X = a ∧ Y = b) = Σ_{a,b} ab × Pr(X = a) Pr(Y = b) = E[X] E[Y].

Example: 2 fair flips. X = indicator for 1st coin heads, Y = indicator for 2nd coin heads, XY = indicator for “both are heads”. E[X] = ½, E[Y] = ½, E[XY] = ¼. Is E[X·X] = E[X]^2? No: E[X^2] = ½, E[X]^2 = ¼. In fact, E[X^2] – E[X]^2 is called the variance of X.
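
A small exact check of both claims on this two-flip example (a sketch in Python; the names mirror the slide):

```python
from fractions import Fraction

outcomes = ["HH", "HT", "TH", "TT"]
p = Fraction(1, 4)

X = lambda o: 1 if o[0] == "H" else 0   # 1st coin heads
Y = lambda o: 1 if o[1] == "H" else 0   # 2nd coin heads (independent of X)

E = lambda f: sum(p * f(o) for o in outcomes)

# Independent RVs: E[XY] = E[X] * E[Y]
print(E(lambda o: X(o) * Y(o)), E(X) * E(Y))   # 1/4 1/4

# But E[X*X] is not E[X]^2; the gap E[X^2] - E[X]^2 is the variance of X
print(E(lambda o: X(o) ** 2), E(X) ** 2, E(lambda o: X(o) ** 2) - E(X) ** 2)  # 1/2 1/4 1/4
```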

Most of the time, though, the power will come from using sums, mostly because Linearity of Expectation holds even if the RVs are not independent.

On average, in a class of size m, how many pairs of people will have the same birthday? Σ_k k · Pr(exactly k collisions) = Σ_k k · (…aargh!!!!…)

Use linearity of expectation Suppose we have m people each with a uniformly chosen birthday from 1 to 366 X = number of pairs of people with the same birthday E[X] = ?

X = number of pairs of people with the same birthday. E[X] = ? Use m(m-1)/2 indicator variables, one for each pair of people: X_jk = 1 if person j and person k have the same birthday, else 0. E[X_jk] = (1/366) · 1 + (1 – 1/366) · 0 = 1/366.

X = number of pairs of people with the same birthday. X_jk = 1 if person j and person k have the same birthday, else 0. E[X_jk] = (1/366) · 1 + (1 – 1/366) · 0 = 1/366. E[X] = E[Σ_{1 ≤ j < k ≤ m} X_jk] = Σ_{1 ≤ j < k ≤ m} E[X_jk] = m(m-1)/2 × 1/366. For m = 28, E[X] ≈ 1.03.
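
A sketch (my own Python, following the setup above: 366 equally likely birthdays and m = 28 people) checking the formula m(m-1)/2 × 1/366 against a simulation:

```python
import random
from itertools import combinations

def average_shared_pairs(m=28, days=366, trials=20_000):
    """Average number of pairs of people with the same birthday."""
    total = 0
    for _ in range(trials):
        bdays = [random.randrange(days) for _ in range(m)]
        total += sum(1 for j, k in combinations(range(m), 2) if bdays[j] == bdays[k])
    return total / trials

m = 28
print(average_shared_pairs(m), m * (m - 1) / 2 / 366)   # both close to 1.03
```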

Step Right Up… You pick a number n ∈ [1..6]. You roll 3 dice. If any match n, you win $1. Else you pay me $1. Want to play? Hmm… let’s see.

Analysis. A_i = event that the i-th die matches; X_i = indicator RV for A_i. Expected number of dice that match: E[X_1 + X_2 + X_3] = 1/6 + 1/6 + 1/6 = ½. But this is not the same as Pr(at least one die matches).

Analysis. Pr(at least one die matches) = 1 – Pr(none match) = 1 – (5/6)^3 = 91/216 ≈ 0.42.
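
To see whether the game is worth playing, a short sketch (Python, exact fractions; not from the slides) computing the expected winnings per play:

```python
from fractions import Fraction

# Pr(at least one of 3 fair dice matches your chosen number)
p_win = 1 - Fraction(5, 6) ** 3               # 91/216

# You win $1 with probability p_win and lose $1 otherwise
expected_winnings = p_win * 1 + (1 - p_win) * (-1)

print(p_win, float(p_win))                           # 91/216 ≈ 0.4213
print(expected_winnings, float(expected_winnings))   # -17/108 ≈ -0.157, so don't play
```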

Study Bee: Random Variables (definition, two views of R.V.s); Expectation (definition, linearity, how to solve problems using it).