COMPSCI 230 Discrete Mathematics for Computer Science

Probability Refresher
What's a Random Variable? A random variable is a real-valued function on a sample space S.
Linearity of expectation: E[X + Y] = E[X] + E[Y].

Conditional Expectation
What does this mean: E[X | A]?
Is this true: E[X] = E[X | A] Pr[A] + E[X | ¬A] Pr[¬A]?
Yes! Similarly: Pr[A] = Pr[A | B] Pr[B] + Pr[A | ¬B] Pr[¬B].
Also: E[X | A] = Σ_k k · Pr[X = k | A].

Random Walks

How to walk home drunk

Abstraction of Student Life
(A state diagram: states such as "No new ideas", "Solve HW problem", "Hungry", and "Eat", with transitions such as "Work" and "Wait", each taken with some probability.)

Like finite automata, but instead of a deterministic or non-deterministic action, we have a probabilistic action.
Example question: "What is the probability of reaching the goal on the string Work, Eat, Work?"

Simpler: Random Walks on Graphs
At any node, go to one of the neighbors of the node with equal probability.
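
To make this concrete, here is a minimal Python sketch of such a walk; the adjacency-list graph and step count are invented for illustration.

```python
import random

def random_walk(graph, start, steps):
    """Take `steps` steps from `start`, moving to a uniformly random neighbor."""
    node, path = start, [start]
    for _ in range(steps):
        node = random.choice(graph[node])  # every neighbor equally likely
        path.append(node)
    return path

# Hypothetical 4-cycle as an adjacency list.
cycle = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
print(random_walk(cycle, start=0, steps=10))
```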

Random Walk on a Line
(A line with positions 0 through n; you start at position k.)
You go into a casino with $k, and at each time step you bet $1 on a fair game. You leave when you are broke or have $n.
Question 1: what is your expected amount of money at time t?
Let X_t be a random variable for the amount of money at time t.

Random Walk on a Line
X_t = k + δ_1 + δ_2 + … + δ_t, where δ_i is a random variable for the change in your money at time i.
Since E[δ_i] = 0 for a fair game, linearity of expectation gives E[X_t] = k.

Random Walk on a Line
Question 2: what is the probability that you leave with $n?

Random Walk on a Line
Question 2: what is the probability that you leave with $n?
Allow play to continue to infinity, but set δ_i = 0 after reaching 0 or n. Then E[X_t] = k for all t, and
E[X_t] = E[X_t | X_t = 0] × Pr(X_t = 0) + E[X_t | X_t = n] × Pr(X_t = n) + E[X_t | X_t is neither] × Pr(X_t is neither)
= n × Pr(X_t = n) + (something_t) × Pr(neither), where something_t < n.
As t → ∞, Pr(neither) → 0. Hence Pr(X_t = n) → E[X_t]/n = k/n.
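
As a sanity check, a small Monte Carlo simulation (a sketch; k = 3, n = 10, and the trial count are arbitrary choices) agrees with the k/n answer:

```python
import random

def gamblers_ruin(k, n, trials=100_000):
    """Estimate Pr[reach $n before $0] starting with $k, betting $1 on a fair game."""
    wins = 0
    for _ in range(trials):
        x = k
        while 0 < x < n:
            x += random.choice((-1, 1))  # fair bet: win or lose $1
        wins += (x == n)
    return wins / trials

print(gamblers_ruin(k=3, n=10))  # the analysis above predicts 3/10
```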

Another Way To Look At It
You go into a casino with $k, and at each time step you bet $1 on a fair game. You leave when you are broke or have $n.
Question 2: what is the probability that you leave with $n?
= the probability that I hit the green endpoint (n) before I hit the red endpoint (0).

Random Walks and Electrical Networks
What is the chance I reach green before red? It equals the voltage at my starting node if the edges are identical resistors and we put a 1-volt battery between green and red.

Random Walks and Electrical Networks
Let p_x = Pr(reach green first, starting from x). Then p_green = 1, p_red = 0, and for every other node x:
p_x = Average over y ∈ Nbr(x) of p_y.
These are exactly the equations for voltage when all edges have the same resistance!
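
These harmonic equations can be solved numerically by simple iteration. A minimal sketch, using a hypothetical path graph 0–1–2–3–4 with green = 4 and red = 0, where the answer should come out to k/4 at node k:

```python
def hitting_probabilities(graph, green, red, iters=1000):
    """Iteratively solve p_x = average of p_y over neighbors y of x,
    with boundary conditions p_green = 1 and p_red = 0."""
    p = {v: 0.0 for v in graph}
    p[green] = 1.0
    for _ in range(iters):
        for v in graph:
            if v != green and v != red:
                p[v] = sum(p[u] for u in graph[v]) / len(graph[v])
    return p

# Hypothetical path graph 0-1-2-3-4; theory says p_k = k/4.
path = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3]}
print(hitting_probabilities(path, green=4, red=0))
```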

Another Way To Look At It
voltage(k) = k/n = Pr[ hitting n before 0, starting at k ] !!!

Getting Back Home
Lost in a city, you want to get back to your hotel. How should you do this?
Depth-first search! But it requires a good memory and a piece of chalk.

Getting Back Home
How about walking randomly?

Will this work? When will I get home? Is Pr[ reach home ] = 1? What is E[ time to reach home ]?

Pr[ will reach home ] = 1

We Will Eventually Get Home
Look at the first n steps. There is a non-zero chance p_1 that we get home; in fact p_1 ≥ (1/n)^n, since some path of at most n steps leads home and each of its steps is taken with probability at least 1/n.
Suppose we fail. Then, wherever we are, there is a chance p_2 ≥ (1/n)^n that we hit home in the next n steps from there.
Probability of failing to reach home by time kn = (1 − p_1)(1 − p_2) ⋯ (1 − p_k) ≤ (1 − (1/n)^n)^k → 0 as k → ∞.

By the geometric decay above, E[ time to reach home ] is finite.
Furthermore (no proof provided): if the graph has n nodes and m edges, then
E[ time to visit all nodes ] ≤ 2m × (n − 1).

Cover Times
Cover time from u: C_u = E[ time to visit all vertices | start at u ].
Cover time of the graph (worst-case expected time to see all vertices): C(G) = max_u { C_u }.

Cover Time Theorem
If the graph G has n nodes and m edges, then the cover time of G is C(G) ≤ 2m(n − 1).
Any graph on n vertices has < n²/2 edges, hence C(G) < n³ for all graphs G.
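
A quick empirical check of the bound (a sketch; the 5-cycle and trial count are made-up choices): for a cycle with n = m = 5, the theorem guarantees C(G) ≤ 40, while the observed cover time is much smaller.

```python
import random

def cover_time_estimate(graph, start, trials=1000):
    """Average number of steps a random walk from `start` needs to visit
    every vertex of `graph` (adjacency-list dict)."""
    total = 0
    for _ in range(trials):
        node, seen, steps = start, {start}, 0
        while len(seen) < len(graph):
            node = random.choice(graph[node])
            seen.add(node)
            steps += 1
        total += steps
    return total / trials

# Hypothetical 5-cycle: n = 5 nodes, m = 5 edges, so C(G) <= 2*5*(5-1) = 40.
cycle = {i: [(i - 1) % 5, (i + 1) % 5] for i in range(5)}
print(cover_time_estimate(cycle, start=0))
```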

Actually, we get home pretty fast…
Chance that we don't hit home within 2k × 2m(n − 1) steps is at most (½)^k.

A Simple Calculation
True or False: if the average income of people is $100, then more than 50% of the people can be earning more than $200 each.
False! Otherwise the average would be higher!!!

Markov's Inequality (Andrei A. Markov)
If X is a non-negative random variable with mean E[X], then
Pr[ X > 2 E[X] ] < ½, and in general Pr[ X > k E[X] ] < 1/k.

Markov's Inequality
A non-negative random variable X has expectation A = E[X].
A = E[X] = E[X | X > 2A] Pr[X > 2A] + E[X | X ≤ 2A] Pr[X ≤ 2A]
  ≥ E[X | X > 2A] Pr[X > 2A]    (since X is non-negative)
Also, E[X | X > 2A] > 2A, so A > 2A × Pr[X > 2A], i.e. Pr[X > 2A] < ½.
The same argument shows Pr[ X > k × expectation ] < 1/k.
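
A small numerical illustration of the inequality (a sketch, using an arbitrary non-negative distribution, here exponential with mean 1): the observed tail Pr[X > k·E[X]] should sit below the Markov bound 1/k.

```python
import random

def markov_tail_check(k, trials=100_000):
    """Compare the observed Pr[X > k*E[X]] with the Markov bound 1/k,
    for a non-negative X (here exponential with mean 1)."""
    samples = [random.expovariate(1.0) for _ in range(trials)]
    mean = sum(samples) / trials
    observed = sum(x > k * mean for x in samples) / trials
    return observed, 1 / k

print(markov_tail_check(2))  # observed tail ~0.135, bound 0.5
```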

Actually, we get home pretty fast…
Chance that we don't hit home within 2k × 2m(n − 1) steps is at most (½)^k.

An Averaging Argument
Suppose I start at u. Then E[ time to hit all vertices | start at u ] ≤ C(G).
Hence, by Markov's Inequality:
Pr[ time to hit all vertices > 2C(G) | start at u ] ≤ ½.

So Let's Walk Some More!
Pr[ time to hit all vertices > 2C(G) | start at u ] ≤ ½.
Suppose at time 2C(G) I'm at some node v, with more nodes still to visit. Then
Pr[ haven't hit all vertices in 2C(G) more time | start at v ] ≤ ½.
Chance that you failed both times ≤ ¼ = (½)².
Hence, Pr[ haven't hit everyone in time k × 2C(G) ] ≤ (½)^k.

Hence, since the expected cover time satisfies C(G) ≤ 2m(n − 1), we get
Pr[ home by time 4km(n − 1) ] ≥ 1 − (½)^k.

Random walks on infinite graphs

"To steal a joke from Kakutani (U.C.L.A. colloquium talk): 'A drunk man will eventually find his way home but a drunk bird may get lost forever.'"
— Shizuo Kakutani, quoted in R. Durrett, Probability: Theory and Examples, 4th ed., Cambridge University Press, New York, 2010, p. 191.

Random Walk On a Line
(Starting at 0, what is the chance of being at position i at time t?)
Flip an unbiased coin and go left/right. Let X_t be the position at time t.
Pr[ X_t = i ] = Pr[ #heads − #tails = i ] = Pr[ #heads − (t − #heads) = i ] = C(t, (t+i)/2) / 2^t.
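
This formula is easy to evaluate directly; a short sketch (the arguments in the print calls are arbitrary):

```python
from math import comb

def prob_at(t, i):
    """Pr[X_t = i] for the +-1 walk started at 0; zero unless t and i
    have the same parity and |i| <= t."""
    if (t + i) % 2 != 0 or abs(i) > t:
        return 0.0
    return comb(t, (t + i) // 2) / 2 ** t

print(prob_at(10, 0))  # 252/1024
print(prob_at(10, 2))  # 210/1024
```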

Random Walk On a Line
Pr[ X_2t = 0 ] = C(2t, t) / 2^(2t) = Θ(1/√t)    (by Stirling's approximation).
Let Y_2t be the indicator for the event (X_2t = 0), so E[ Y_2t ] = Θ(1/√t).
Let Z_2n be the number of visits to the origin in 2n steps. Then
E[ Z_2n ] = E[ Σ_{t=1…n} Y_2t ] = Θ(1/√1 + 1/√2 + … + 1/√n) = Θ(√n).

In n steps, you expect to return to the origin Θ(√n) times!
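
A simulation sketch of this claim (step counts and trial count are arbitrary): quadrupling n should roughly double the number of returns.

```python
import random

def origin_visits(n, trials=2000):
    """Average number of returns to the origin during n steps of a +-1 walk."""
    total = 0
    for _ in range(trials):
        pos = 0
        for _ in range(n):
            pos += random.choice((-1, 1))
            total += (pos == 0)
    return total / trials

for n in (100, 400, 1600):  # quadrupling n should roughly double the count
    print(n, origin_visits(n))
```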

How About a 2-d Grid?
Let us simplify our 2-d random walk: at each step, move in both the x-direction and the y-direction…

In The 2-d Walk
Returning to the origin in the grid ⇔ both "line" random walks return to their origins.
Pr[ visit origin at time t ] = Θ(1/√t) × Θ(1/√t) = Θ(1/t).
E[ # of visits to origin by time n ] = Θ(1/1 + 1/2 + 1/3 + … + 1/n) = Θ(log n).
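
The logarithmic growth is slow but visible in simulation. A sketch (step counts and trial count chosen arbitrarily):

```python
import random

def grid_origin_visits(n, trials=500):
    """Average number of visits to (0,0) in n steps of the simplified
    2-d walk (x and y each move +-1 independently every step)."""
    total = 0
    for _ in range(trials):
        x = y = 0
        for _ in range(n):
            x += random.choice((-1, 1))
            y += random.choice((-1, 1))
            total += (x == 0 and y == 0)
    return total / trials

for n in (100, 1000, 10000):  # growth should look logarithmic, not polynomial
    print(n, grid_origin_visits(n))
```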

But In 3D
Pr[ visit origin at time t ] = Θ(1/√t)³ = Θ(1/t^(3/2)).
lim_{n→∞} E[ # of visits by time n ] < K (a constant).
E[ visits in total ] = Pr[ return to origin ] × (1 + E[ visits in total ]) + 0 × Pr[ never return to origin ].
Solving: Pr[ return to origin ] < K/(K+1), hence Pr[ never return to origin ] > 1/(K+1).
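
A simulation sketch of transience in 3-d (cutoff and trial count are arbitrary): the fraction of walks that ever revisit the origin stays bounded away from 1, even with a much longer cutoff.

```python
import random

def returned_to_origin_3d(steps, trials=500):
    """Fraction of simplified 3-d walks (each coordinate moves +-1 every
    step) that revisit the origin within `steps` steps."""
    hits = 0
    for _ in range(trials):
        x = y = z = 0
        for _ in range(steps):
            x += random.choice((-1, 1))
            y += random.choice((-1, 1))
            z += random.choice((-1, 1))
            if x == y == z == 0:
                hits += 1
                break
    return hits / trials

print(returned_to_origin_3d(2000))   # stays well below 1
print(returned_to_origin_3d(20000))  # barely increases with a longer cutoff
```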

A Tale of Two Brothers
Carl and Hal: as competitive as they are similar. They decided to play a fair game once an hour! Each brother wins any event with probability 1/2, independent of any other events.
Amazingly, Hal was ahead for the last 50 years of his life! Should we have expected this?

A Random Walk
Let x_k denote the difference in victories after k events. We can view x_0, x_1, x_2, …, x_n as a "random walk" that depends on the outcomes of the individual events.

Theorem: For even n, the number of sequences in which x_k is never negative is equal to the number that start and end at 0.

Proof: Put the two sets in one-to-one and onto correspondence.

Lemma: Suppose that Y is an integer random variable in [0, m] such that Pr[Y = i] = Pr[Y = m − i] for all i. Then E[Y] = m/2.
Proof: Substituting j = m − i, E[Y] = Σ_i i · Pr[Y = i] = Σ_j (m − j) · Pr[Y = m − j] = m − E[Y], so 2E[Y] = m.

Lemma: For odd n, the number of sequences that last touch 0 at position i is equal to the number that last touch 0 at position n − i − 1.
Proof: Put these sets in one-to-one and onto correspondence, using the previous theorem to map A to A′ and B to B′.

Theorem: Let Y be the length of the longest suffix that does not touch 0. For odd n, E[Y] = (n − 1)/2.
Proof: Y is a random variable in the range [0, n − 1]. By the previous lemma, Pr[Y = i] = Pr[Y = (n − 1) − i], so the earlier lemma gives E[Y] = (n − 1)/2.

A seemingly contradictory observation: the expected length of the final suffix that does not touch 0 is (n − 1)/2, but the expected number of times the sequence touches 0 in the last n/2 steps is Θ(√n).

Berthold's Triangle
The number of sequences with k 1's and n − k 0's in which the number of 1's is greater than or equal to the number of 0's in every prefix.
Examples: for n = 4 and k = 2, the sequences are 1100 and 1010 (but not 1001, whose prefix 100 has more 0's than 1's).
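
A brute-force sketch that builds rows of the triangle by enumeration (the helper name and the range printed are arbitrary); its row sums also confirm the identity on the next slide.

```python
from itertools import product

def triangle_row(n):
    """counts[k] = number of length-n binary sequences with k 1's in which
    every prefix has at least as many 1's as 0's (brute-force enumeration)."""
    counts = [0] * (n + 1)
    for seq in product((0, 1), repeat=n):
        ones, ok = 0, True
        for i, bit in enumerate(seq, start=1):
            ones += bit
            if ones < i - ones:  # this prefix has more 0's than 1's
                ok = False
                break
        if ok:
            counts[sum(seq)] += 1
    return counts

for n in range(1, 6):
    row = triangle_row(n)
    print(n, row, "row sum =", sum(row))
```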

Notice that the n'th row sums to C(n, ⌈n/2⌉) — the total number of length-n sequences in which every prefix has at least as many 1's as 0's.

Theorem: (a closed form for the entries of the triangle; the formulas on this slide did not survive transcription).
Proof: By induction on n, with base cases and an inductive step.

Another strange fact…
Let Z be the number of steps before the sequence returns to 0, and let Z′ be the number of steps before the sequence drops below 0.
This looks like trouble! E[Z′] isn't finite!

Here's What You Need to Know…
Random Walk on a Line
Cover Time of a Graph
Markov's Inequality