1.1- Random walk on Z: 1.1.1- Simple random walk: Let X_1, X_2, … be independent identically distributed random variables (iid) with only two possible values {1, −1}.


Random walk on Z — simple random walk: Let X_1, X_2, … be independent identically distributed random variables (iid) with only two possible values {1, −1}, such that P(X_i = 1) = p and P(X_i = −1) = q = 1 − p. Define the simple random walk process (S_n) by S_0 = 0 and S_n = X_1 + X_2 + ⋯ + X_n. Define T = min{n ≥ 1 : S_n = 1}, the waiting time of the first visit to state 1.
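A quick way to make the definition concrete is to simulate a path. The following sketch (the helper name `simple_random_walk` is illustrative, not from the slides) draws iid ±1 steps and accumulates the partial sums S_n:

```python
import random

def simple_random_walk(n_steps, p=0.5, seed=None):
    """Path S_0, S_1, ..., S_n of a simple random walk on Z.

    Each increment X_i equals +1 with probability p and -1 with
    probability q = 1 - p, and S_n = X_1 + ... + X_n with S_0 = 0.
    """
    rng = random.Random(seed)
    path = [0]
    for _ in range(n_steps):
        step = 1 if rng.random() < p else -1
        path.append(path[-1] + step)
    return path

walk = simple_random_walk(1000, p=0.5, seed=42)
```

Every realized path starts at 0 and moves by exactly one unit per step, which is all the structure the later recurrence arguments rely on.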

2 A state i is called recurrent if the chain returns to i with probability 1 in a finite number of steps; otherwise the state is transient. If we define the waiting time variable T_i = min{n ≥ 1 : X_n = i}, then state i is recurrent if P(T_i < ∞) = 1. That is, the returns to the state i are sure events. The state i is transient if P(T_i < ∞) < 1; in this case there is positive probability of never returning to state i. A recurrent state i is positive recurrent if E[T_i] < ∞; hence if E[T_i] = ∞ then the state i is null recurrent.

3 We say that i leads to j, and write i → j, if p_ij^(n) > 0 for some n ≥ 0. We say i communicates with j, and write i ↔ j, if both i → j and j → i. A Markov chain with state space S is said to be irreducible if i ↔ j for all i, j in S. If x is recurrent and x → y, then y is recurrent.

4 If the Markov chain is irreducible, and one of the states is recurrent, then all the states are recurrent. The simple random walk on Z is recurrent iff p = q = 1/2: 1) if p > q, then S_n → +∞ almost surely; 2) if p < q, then S_n → −∞ almost surely.

5 We define the state or position of the random walk at time zero as S_0 = 0, and the position at time n as S_n = X_1 + ⋯ + X_n. Then by the strong law of large numbers, S_n/n → E[X_1] almost surely, because the X_i are independent identically distributed, and from the definition E[X_1] = p − q. Hence if p ≠ q, then S_n/n → p − q ≠ 0. This means that |S_n| → ∞ almost surely, and as a consequence the number of visits to zero is finite almost surely. The state 0 is transient if the number of visits to zero is finite almost surely, which means that with probability 1 the number of visits is finite.
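The drift p − q in the law-of-large-numbers argument can be checked numerically. A minimal sketch, assuming an asymmetric walk with p = 0.7 (an illustrative value, not from the slides):

```python
import random

def endpoint(n, p, seed):
    """Final position S_n of a simple random walk with step probability p."""
    rng = random.Random(seed)
    return sum(1 if rng.random() < p else -1 for _ in range(n))

n = 100_000
s_n = endpoint(n, 0.7, seed=1)
drift = s_n / n  # by the SLLN this should be close to p - q = 0.4
```

With p ≠ q the empirical average S_n/n settles near p − q, so the walk escapes to infinity and 0 is visited only finitely often.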

6 Hence zero is a transient state. Now if p = q = 1/2, we claim recurrence. It is enough to show that 0 is recurrent (by Theorem 1.2.4): the state 0 is recurrent iff Σ_n p_00^(n) = ∞. Define p_00^(2n) = P(S_2n = 0) = C(2n, n) p^n q^n, where the walk must take n steps up and n steps down to return to 0.

7 Now, using Stirling's formula n! ∼ √(2πn)(n/e)^n, we obtain p_00^(2n) = C(2n, n)/4^n ∼ 1/√(πn). Hence Σ_n p_00^(2n) = ∞, and 0 is recurrent; consequently the simple symmetric random walk on Z is recurrent. Now we will show that the symmetric simple random walk is null recurrent, which means that E[T_0] = ∞, where T_0 = min{n ≥ 1 : S_n = 0}.
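The Stirling estimate can be verified directly, since P(S_2n = 0) = C(2n, n)/4^n is exactly computable. A small sketch comparing the exact value with 1/√(πn) and checking that the partial sums of the return probabilities keep growing:

```python
import math

def return_prob(n):
    """Exact P(S_{2n} = 0) = C(2n, n) / 4^n for the symmetric walk."""
    return math.comb(2 * n, n) / 4 ** n

def stirling_approx(n):
    """Asymptotic value 1 / sqrt(pi * n) from Stirling's formula."""
    return 1.0 / math.sqrt(math.pi * n)

ratio = return_prob(100) / stirling_approx(100)

# Partial sums of p_00^(2n) grow without bound (like 2*sqrt(N/pi)),
# which is the divergence that makes 0 recurrent.
partial_sum = sum(return_prob(n) for n in range(1, 2001))
```

The ratio is already within one percent of 1 at n = 100, and the partial sum over n ≤ 2000 is near 2√(2000/π) ≈ 50, consistent with divergence.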

8 The probability generating function of T_0 is defined as F(s) = E[s^T_0] = Σ_n P(T_0 = n) s^n. Then, writing U(s) = Σ_n p_00^(n) s^n, a first-return decomposition gives U(s) = 1/(1 − F(s)). Also U(s) = Σ_n C(2n, n)(s/2)^(2n) = 1/√(1 − s²). Hence F(s) = 1 − √(1 − s²), and E[T_0] = F′(1).

9 Put s → 1⁻: F(1) = 1, so P(T_0 < ∞) = 1, and F′(s) = s/√(1 − s²) → ∞ as s → 1⁻, hence E[T_0] = ∞. Hence zero is a null recurrent state, and the simple symmetric random walk on Z is null recurrent. We proved that the simple random walk on Z is recurrent iff p = q = 1/2.
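Null recurrence is visible numerically: the first-return probabilities P(T_0 = 2n) = C(2n, n)/((2n − 1)·4^n) sum to 1, yet the partial sums of 2n·P(T_0 = 2n) keep growing without bound. A sketch (the formula for the first-return probability is the standard one for the symmetric walk, consistent with the generating function above):

```python
import math

def first_return_prob(n):
    """P(T_0 = 2n) for the symmetric walk: C(2n, n) / ((2n - 1) * 4^n)."""
    return math.comb(2 * n, n) / ((2 * n - 1) * 4 ** n)

# Return is certain: the probabilities sum to 1 (slowly).
total_prob = sum(first_return_prob(n) for n in range(1, 1001))

# ...but the expected return time diverges: the truncated means grow
# roughly like sqrt(N), so E[T_0] = infinity (null recurrence).
partial_mean_small = sum(2 * n * first_return_prob(n) for n in range(1, 101))
partial_mean_large = sum(2 * n * first_return_prob(n) for n in range(1, 1601))
```

The truncated mean over n ≤ 1600 is several times the one over n ≤ 100, exactly the behavior of a divergent series.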

10 (Borel–Cantelli lemmas) If (A_n) is an infinite sequence of events, then: 1. If Σ_n P(A_n) < ∞, then P(A_n infinitely often) = 0. 2. If Σ_n P(A_n) = ∞, then P(A_n infinitely often) = 1, provided that the A_n are independent events.

11 It is well known that if X, Y are independent random variables, then E[XY] = E[X]E[Y], but this may fail in the case of an infinite product, E[∏_{n=1}^∞ X_n] = ∏_{n=1}^∞ E[X_n]. To show this we can introduce the following counterexample.

12 * Consider X_1, X_2, … to be independent random variables such that P(X_n = 0) = P(X_n = 2) = 1/2. That is, we have an infinite sequence of independent identically distributed random variables with E[X_n] = 1. We define a new sequence Y_n = X_1 X_2 ⋯ X_n; then E[Y_n] = 1 for every n. Now define the waiting time T = min{n : X_n = 0}, that is, the first time the sequence equals zero. Then P(T > n) = 2^{−n}.

13 On the other hand, P(T < ∞) = 1 − lim_n P(T > n) = 1; hence T is finite almost surely. Therefore ∏_{n=1}^∞ X_n = 0 almost surely, so E[∏ X_n] = 0 ≠ 1 = ∏ E[X_n].

14 i. X is a finite random variable iff P(|X| < ∞) = 1. ii. X is bounded iff there exists a constant M such that P(|X| ≤ M) = 1. Now we introduce the notions of Lebesgue measure and the Borel σ-field.

15 · A σ-field F is a collection of subsets of Ω such that Ω ∈ F, A ∈ F implies A^c ∈ F, and F is closed under countable unions. The elements of F are called measurable sets or events. * The intersection of all σ-fields that contain all open intervals is called the Borel σ-field and is denoted by B. It is known that Lebesgue measure on B is the only measure that assigns to every interval (a, b] the measure b − a. Linearity of expectation, E[X + Y] = E[X] + E[Y], holds provided that EX and EY are finite.

16 If E[N] is finite, then E[S_N] = E[N] E[X_1]. Note: if X_1, X_2, … are independent identically distributed random variables, and N is a positive integer-valued random variable independent of the X_i, then for S_N = X_1 + ⋯ + X_N we have E[S_N | N = n] = n E[X_1], and hence E[S_N] = E[N] E[X_1] (Wald's identity).
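Wald's identity can be verified by exhaustive enumeration on a tiny model. The sketch below uses an illustrative toy case (not from the slides): N takes the values 1 or 2 with probability 1/2 each, each X_i takes 0 or 1 with probability 1/2, and everything is independent:

```python
from itertools import product

n_values = [1, 2]   # possible values of N, each with probability 1/2
x_values = [0, 1]   # possible values of each X_i, each with probability 1/2

# E[S_N] computed by summing over the whole (finite) product space
e_sn = 0.0
for n in n_values:
    for xs in product(x_values, repeat=n):
        # P(N = n) = 1/2 and each X-outcome has probability (1/2)^n
        e_sn += 0.5 * (0.5 ** n) * sum(xs)

e_n = sum(n * 0.5 for n in n_values)   # E[N] = 1.5
e_x = sum(x * 0.5 for x in x_values)   # E[X] = 0.5
```

The enumeration gives E[S_N] = 0.75 = E[N]·E[X], exactly as the identity predicts.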

17 If N is a stopping time for the sequence X_1, X_2, … with E[N] < ∞, and the X_i are independent identically distributed with finite mean, then Wald's identity E[S_N] = E[N] E[X_1] still holds for the sequence S_n = X_1 + ⋯ + X_n. Now we ask what happens when these hypotheses fail.

18 · But is this theorem true if E[N] = ∞? The following example shows that this theorem may fail. Define X_1, X_2, … as independent identically distributed random variables such that P(X_i = 1) = P(X_i = −1) = 1/2, and define S_n = X_1 + ⋯ + X_n and T = min{n ≥ 1 : S_n = 1}.

19 Then E[S_T] = 1, since S_T = 1. On the other hand, E[T] E[X_1] = E[T] · 0 = 0 ≠ 1 (in fact E[T] = ∞). Because we have a symmetric simple random walk, recurrence implies that P(T < ∞) = 1, and then S_T is well defined. Define also:

20 The event {T = n} does occur or does not depending only on X_1, …, X_n, which means that {T = n} ∈ σ(X_1, …, X_n). Hence the corresponding indicators are independent random variables, and from (2) & (3) we conclude the claim. The σ-field generated by a random variable X is σ(X) = {X^{−1}(B) : B ∈ B}, where B is the Borel σ-field.

21 A common error is repeated in some books about linear correlation, namely that Corr(X, Y) = 0 implies that X and Y are independent (1). But this relation must be written in one direction only: independence implies Corr(X, Y) = 0, while the converse may fail. The following example shows that formula (1) may not be true. * Consider the triple (Ω, F, P) where Ω = (0, 1), F is the Borel σ-field restricted to (0, 1), and P = Lebesgue measure on (0, 1).

22 Now define X and Y on this space. From (1) & (2) we get a = 0, y = b. Suppose, for the sake of contradiction, that X and Y are independent; this yields a contradiction, so the assumption is not true: X and Y are uncorrelated but not independent.

23 It is already known that if the moment generating function exists, then all the moments exist; but it can happen that all the moments exist while the moment generating function does not exist. The following counterexample explains this. We know that if the moment generating function is defined, then M_X(t) = E[e^{tX}] < ∞ for all t in a neighborhood of 0. Let X ~ N(0, 1).

24 And define Y = e^X, that is, Y follows the lognormal distribution. Then E[Y^k] = E[e^{kX}] = e^{k²/2} < ∞ for every k, hence all the moments of Y exist, whereas the moment generating function of Y does not exist, as we show now: E[e^{tY}] = ∞ for every t > 0. Then the moment generating function does not exist.
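The two halves of the claim can be illustrated numerically. All moments e^{k²/2} are finite, while the exponent of the mgf integrand, which up to constants behaves like t·y − (ln y)²/2, tends to +∞ as y → ∞ for any t > 0, so the integral E[e^{tY}] diverges. A sketch:

```python
import math

# k-th moment of Y = e^X with X ~ N(0, 1): E[Y^k] = exp(k^2 / 2), finite for all k
moments = [math.exp(k * k / 2) for k in range(1, 6)]

def mgf_exponent(t, y):
    """Leading exponent of the mgf integrand exp(t*y - (ln y)^2 / 2)."""
    return t * y - math.log(y) ** 2 / 2

# Even for a tiny t > 0 the exponent eventually blows up, so E[exp(tY)] = inf.
t = 0.01
exponents = [mgf_exponent(t, 10.0 ** j) for j in range(1, 7)]
```

The linear term t·y always overtakes the quadratic-in-log term, which is exactly why the lognormal has every moment but no moment generating function.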

25 The sequence of moments does not determine the distribution uniquely; the following example shows this. We have two distributions with identical sequences of moments: the classical construction perturbs the lognormal density f(x) into f_a(x) = f(x)[1 + a sin(2π ln x)] for |a| ≤ 1.

26 Now one checks that ∫₀^∞ x^k f(x) sin(2π ln x) dx = 0 for every k = 0, 1, 2, …. Obviously the two densities are different; also, by the identity above, all their moments coincide.

27 It is known from the literature that the sequence of moments of a random variable uniquely determines the distribution if it satisfies one of several equivalent conditions, for example Carleman's condition Σ_k m_{2k}^{−1/(2k)} = ∞.

28 The joint distribution of random variables determines the marginal distributions uniquely, but the converse is not necessarily true. The following is a joint probability function of two random variables X and Y, each taking the values 0 and 1.

29 These two marginal functions correspond to the joint function for any choice of the free parameter. Conclusion: the marginal distributions don't determine the joint distribution. The marginal distribution of X is P(X = 0) = P(X = 1) = 1/2, and the marginal distribution of Y is P(Y = 0) = P(Y = 1) = 1/2.
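The conclusion is easy to verify on two concrete joints. The sketch below uses an illustrative pair (not the slide's lost table): the independent joint and a perfectly correlated one, both with P(X = i) = P(Y = j) = 1/2:

```python
# Two different joint pmfs for (X, Y) on {0,1}^2 with identical marginals.
independent = {(0, 0): 0.25, (0, 1): 0.25, (1, 0): 0.25, (1, 1): 0.25}
correlated  = {(0, 0): 0.5,  (0, 1): 0.0,  (1, 0): 0.0,  (1, 1): 0.5}

def marginals(joint):
    """Marginal pmfs of X and Y obtained by summing the joint pmf."""
    px = {x: sum(p for (a, _), p in joint.items() if a == x) for x in (0, 1)}
    py = {y: sum(p for (_, b), p in joint.items() if b == y) for y in (0, 1)}
    return px, py
```

Both joints collapse to the same pair of Bernoulli(1/2) marginals, even though the joints themselves differ.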

30 "Positively correlated" is not a transitive property. A matrix is said to be positive definite if it satisfies one of the following equivalent conditions: i. x^T A x > 0 for every x ≠ 0. ii. All the eigenvalues are positive. iii. The determinants of all leading principal submatrices are positive.

31 Any positive definite matrix can be the variance-covariance matrix of some random vector. · Now consider a 3×3 matrix Σ with positive entries in the (X, Y) and (Y, Z) positions and a negative entry in the (X, Z) position. We can verify that Σ is the variance-covariance matrix of a random vector (X, Y, Z). Then X and Y are positively correlated, and Y and Z as well, but X and Z are negatively correlated.
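The slide's matrix entries were lost, so the sketch below uses illustrative values: corr(X, Y) = corr(Y, Z) = 0.5 but corr(X, Z) = −0.1. Checking the leading principal minors confirms the matrix is positive definite, hence a legitimate covariance matrix:

```python
sigma = [[1.0, 0.5, -0.1],
         [0.5, 1.0,  0.5],
         [-0.1, 0.5, 1.0]]

def det2(m):
    return m[0][0] * m[1][1] - m[0][1] * m[1][0]

def det3(m):
    return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
            - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
            + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

# All leading principal minors positive => sigma is positive definite,
# so some random vector (X, Y, Z) has exactly these correlations.
minors = [sigma[0][0], det2([row[:2] for row in sigma[:2]]), det3(sigma)]
```

Since all three minors are positive, a random vector with these correlations exists, and it exhibits the non-transitivity: X~Y and Y~Z positive, X~Z negative.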

32 We say that X_n converges weakly to X if F_n(x) → F(x) at all continuity points of F. This is denoted by X_n ⇒ X, or X_n → X in distribution. The following example shows that a sequence of continuous random variables does not necessarily converge weakly to a continuous random variable.

33 Let X ≡ 0. Then X is a degenerate random variable with cumulative distribution function F(x) = 0 for x < 0 and F(x) = 1 for x ≥ 0. Consider a sequence of cumulative distribution functions F_n of continuous random variables X_n concentrating around 0, for example X_n uniform on (0, 1/n). Then F_n(x) → F(x) at every continuity point of F. Obviously each F_n is the cumulative distribution function of a continuous random variable, whereas F is the cumulative distribution function of a discrete random variable X. Nevertheless, X_n ⇒ X.

34 The median is any x such that P(X ≤ x) ≥ 1/2 and P(X ≥ x) ≥ 1/2. If F is continuous and strictly increasing, then the median is unique. Let X be a random variable such that:

35 Consider a random variable X taking the values 0, 1, 2, with P(X = 0) = 1/2 and P(X = 1) + P(X = 2) = 1/2. Thus the median is any a ≤ x ≤ b, which means that a = 0 and b = 1. That is, the median is not unique. This is a disadvantage of the median as a measure of location compared with the mean. The cumulative distribution function of X is a step function with jumps at 0, 1 and 2.

36 The median does not satisfy the relation Med(X + Y) = Med(X) + Med(Y). Let X ~ Exp(1); then F(x) = 1 − e^{−x} for x ≥ 0. Assume that Y is an independent copy of X, so we can find Med(X) and Med(Y) as follows: F(m) = 1/2 gives e^{−m} = 1/2, that is, m = log 2.

37 This implies that Med(X) = Med(Y) = log 2. Now, to calculate Med(Z) for Z = X + Y, with probability density function f_Z(z) = z e^{−z}, suppose for contradiction that Med(X + Y) = Med(X) + Med(Y). This means that Med(Z) = 2 log 2.

38 If the median of Z were 2 log 2, then F_Z(2 log 2) = 1/2. However, F_Z(z) = 1 − e^{−z}(1 + z), so F_Z(2 log 2) = 1 − (1/4)(1 + 2 log 2) ≈ 0.403 ≠ 1/2. Hence Med(Z) ≠ 2 log 2.
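The exact Gamma(2, 1) distribution function of Z = X + Y makes this check, and the true median, computable. A sketch using bisection (the helper names are illustrative):

```python
import math

def gamma2_cdf(z):
    """CDF of Z = X + Y with X, Y iid Exp(1): F(z) = 1 - e^{-z}(1 + z)."""
    return 1.0 - math.exp(-z) * (1.0 + z)

# CDF evaluated at Med(X) + Med(Y) = 2*log(2): strictly below 1/2
at_sum_of_medians = gamma2_cdf(2 * math.log(2))

# Locate the true median of Z by bisection on [0, 10]
lo, hi = 0.0, 10.0
for _ in range(100):
    mid = (lo + hi) / 2
    if gamma2_cdf(mid) < 0.5:
        lo = mid
    else:
        hi = mid
median_z = (lo + hi) / 2
```

The CDF at 2 log 2 ≈ 1.386 is about 0.403, and the true median of Z is about 1.678, so Med(X + Y) > Med(X) + Med(Y) here.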

39 The mode is the value of X that maximizes the density (or probability mass) function. The mode is not unique: a random variable X with a distribution on {0, 1, 2} that assigns equal maximal mass to 0 and 1 has modes 0 and 1.

40 We show in this example that the mode is not a linear operator; this is a disadvantage of the mode. Suppose that X and Y follow the distribution above and they are independent, and let Z = X + Y; the table of the distribution of Z shows that Mode(Z) ≠ Mode(X) + Mode(Y).

41 Let X ~ Exp(1) with density f(x) = e^{−x} for x ≥ 0, and let Y be an independent copy of X. Since f(0) = 1 is the maximum value of f(x), Mode(X) = Mode(Y) = 0. Now let Z = X + Y; then the probability density function of Z is f_Z(z) = z e^{−z}, whose maximum is at z = 1, so Mode(Z) = 1, and from (1) & (2) we conclude that the mode is not a linear operator.

42 We defined the simple random walk in Chapter 1, and we know that this Markov chain is recurrent iff p = q = 1/2. This walk is called the simple symmetric random walk (SSRW).

43 Consider a state i and a state j with i < j. Then i → i + 1 → ⋯ → j, since each one-step transition has positive probability (transitive property). In the same way j → i. Hence the simple symmetric random walk (SSRW) on Z is irreducible.

44 A sequence (M_n) of random variables is a martingale (with respect to a filtration F_n) if E|M_n| < ∞ and E[M_{n+1} | F_n] = M_n; a sub-martingale if E[M_{n+1} | F_n] ≥ M_n; and a super-martingale if E[M_{n+1} | F_n] ≤ M_n.

45 Consider independent random variables X_1, X_2, … such that E[X_n] = 0 for all n. We claim that S_n = X_1 + ⋯ + X_n is a martingale, for E[S_{n+1} | F_n] = S_n + E[X_{n+1}] = S_n.

46 Consider independent random variables X_1, X_2, … with E[X_n] = 1 for all n. Define a sequence M_n = X_1 X_2 ⋯ X_n. We show that M_n is a martingale: E[M_{n+1} | F_n] = M_n E[X_{n+1}] = M_n.
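The product-martingale property can be checked exhaustively on a small model. The sketch below assumes an illustrative two-point distribution with mean 1 (X takes 0.5 or 1.5, each with probability 1/2) and verifies E[M_{n+1} | prefix] = M_n for every possible prefix:

```python
from itertools import product

values = [0.5, 1.5]  # illustrative distribution with E[X] = 1

def cond_expectation_check(depth):
    """Check E[M_{n+1} | x_1..x_n] == M_n for every prefix of given length,
    where M_n = x_1 * ... * x_n."""
    for prefix in product(values, repeat=depth):
        m_n = 1.0
        for x in prefix:
            m_n *= x
        # Average of M_{n+1} over the next step (each value w.p. 1/2)
        e_next = sum(m_n * x * 0.5 for x in values)
        if abs(e_next - m_n) > 1e-12:
            return False
    return True

ok = all(cond_expectation_check(d) for d in range(0, 4))
```

Every conditional average of M_{n+1} lands back on M_n, which is the martingale property.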

47 Consider a family tree where Z_0 = 1. Let Z_n be the number of individuals at generation n, and let X_{n,k} be the number of children of the k-th individual of the n-th generation; then Z_{n+1} = Σ_{k=1}^{Z_n} X_{n,k}. Thus {X_{n,k}} is a doubly indexed family of independent random variables, and for fixed n they are independent identically distributed. We assume that E[X_{n,k}] = μ < ∞.

48 Consider W_n = Z_n / μ^n; we show that E[W_{n+1} | F_n] = W_n, so (W_n) is a martingale. We use the fact that if X_1, X_2, … are independent identically distributed random variables and N is an integer-valued random variable independent of them, then E[Σ_{i=1}^N X_i] = E[N] E[X_1].

49 Since the X_{n,k} are independent random variables, identically distributed for fixed n, and independent of Z_n, we get E[Z_{n+1} | F_n] = E[Σ_{k=1}^{Z_n} X_{n,k} | F_n] = Z_n μ. Then E[W_{n+1} | F_n] = E[Z_{n+1} | F_n] / μ^{n+1} = Z_n μ / μ^{n+1} = Z_n / μ^n = W_n.
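The relation E[Z_{n+1}] = μ E[Z_n] behind this martingale can be verified exactly by convolving distributions. The sketch assumes an illustrative offspring law (not from the slides): 0, 1 or 2 children with probabilities 0.2, 0.5, 0.3, so μ = 1.1:

```python
offspring = {0: 0.2, 1: 0.5, 2: 0.3}
mu = sum(k * p for k, p in offspring.items())

def next_generation(dist):
    """Exact distribution of Z_{n+1} given the distribution of Z_n."""
    new = {}
    for z, pz in dist.items():
        # Sum of z iid offspring counts: z-fold convolution of the offspring law
        conv = {0: 1.0}
        for _ in range(z):
            step = {}
            for s, ps in conv.items():
                for k, pk in offspring.items():
                    step[s + k] = step.get(s + k, 0.0) + ps * pk
            conv = step
        for s, ps in conv.items():
            new[s] = new.get(s, 0.0) + pz * ps
    return new

z1 = next_generation({1: 1.0})   # distribution of Z_1, starting from Z_0 = 1
z2 = next_generation(z1)         # distribution of Z_2

def mean(dist):
    return sum(k * p for k, p in dist.items())
```

The exact means come out as μ and μ², so E[Z_n / μ^n] = 1 for each n, consistent with W_n being a martingale.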

50 (Not every martingale has an almost sure limit.) In a symmetric simple random walk on Z we have E[X_i] = 0, so (S_n) is a martingale, since E[S_{n+1} | F_n] = S_n. But lim_n S_n does not exist, because this symmetric simple random walk (SSRW) is recurrent: it keeps oscillating among all the integers.

51 (Doob's inequality) If (M_n) is a non-negative sub-martingale, then for every λ > 0, P(max_{k ≤ n} M_k ≥ λ) ≤ E[M_n] / λ.

52 We define A_k as the event that the process is greater than or equal to λ for the first time at time k, and A = ⋃_{k ≤ n} A_k, the event that the process is greater than or equal to λ by time n. We want to show that P(A) ≤ E[M_n] / λ. Since M_k ≥ λ on A_k, and the A_k are disjoint, λ P(A) = λ Σ_k P(A_k) ≤ Σ_k E[M_k 1_{A_k}] ≤ Σ_k E[M_n 1_{A_k}] ≤ E[M_n], using the sub-martingale property for the middle inequality.

53 Then P(max_{k ≤ n} M_k ≥ λ) ≤ E[M_n] / λ. In the following example we can see that if the sequence is not a sub-martingale, then the last theorem may fail.

54 Consider independent identically distributed random variables X_1, X_2, …. It is clear that such a sequence need not be a sub-martingale, since E[X_{n+1} | F_n] = E[X_{n+1}] need not dominate X_n. Doob's inequality would then require P(max_{k ≤ n} X_k ≥ λ) ≤ E[X_n] / λ, which fails for large n, because the maximum of n iid variables grows while E[X_n] stays fixed. The inequality fails precisely because the sequence is not a sub-martingale.

55 In this chapter we describe convergence of sequences of random variables: almost sure convergence, because of its relationship to pointwise convergence; convergence in distribution, because it is the easiest to establish; convergence in probability, which is significant for weak laws of large numbers and, in statistics, for certain forms of consistency; and mean convergence, which is used to establish convergence of moments.

56 Let X, X_1, X_2, … be random variables on (Ω, F, P). We have these modes of convergence: 1. Almost sure convergence 2. Convergence in probability 3. Convergence in distribution 4. Convergence in mean. Almost sure convergence, also known as convergence with probability one, is the probabilistic version of pointwise convergence. The sequence converges to X almost surely, denoted by X_n → X a.s., if P(lim_n X_n = X) = 1.

57 The sequence converges to X in probability, denoted by X_n → X in probability, if for every ε > 0, lim_n P(|X_n − X| > ε) = 0. The sequence converges to X in mean, denoted by X_n → X in L¹, if lim_n E|X_n − X| = 0.

58 The sequence converges to X in distribution, denoted by X_n → X in distribution, if F_n(t) → F(t) at all continuity points t of F. This convergence is sometimes called weak convergence. We summarize these modes in the following table.

Table 5.1. Definitions of convergence for random variables
Mode — Defining condition
Almost sure — P(lim_n X_n = X) = 1
In probability — P(|X_n − X| > ε) → 0 for every ε > 0
In mean — E|X_n − X| → 0
In distribution — F_n(t) → F(t) at all continuity points t of F

59 Figure 5.2. Implications among forms of convergence: almost sure convergence ⇒ convergence in probability; convergence in mean ⇒ convergence in probability; convergence in probability ⇒ convergence in distribution.

60 The last figure depicts the implications that are always valid; none of the other implications holds in general. If X_n → X almost surely, then for each ε > 0 the events {|X_n − X| > ε} occur only finitely often with probability one, and convergence in probability follows.

61 If X_n → X in mean, then X_n → X in probability: by Markov's inequality, for each ε > 0, P(|X_n − X| > ε) ≤ E|X_n − X| / ε. Therefore E|X_n − X| → 0 implies P(|X_n − X| > ε) → 0. If X_n → X in probability, then X_n → X in distribution; the reader can see the proof in the book of Alan F. Karr, p. 141.

62 Convergence in probability does not always imply almost sure convergence, and this counterexample shows it. Consider a sequence X_1, X_2, … of independent random variables such that P(X_n = 1) = 1/n and P(X_n = 0) = 1 − 1/n. We can see that for every ε ∈ (0, 1), P(|X_n| > ε) = 1/n → 0, hence X_n → 0 in probability. Now we claim that X_n does not converge to zero almost surely: since Σ_n P(X_n = 1) = Σ_n 1/n = ∞ and the events are independent, the Borel–Cantelli lemma gives P(X_n = 1 infinitely often) = 1. This means that X_n does not converge to zero almost surely.
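The two ingredients of this counterexample are checkable: the probabilities 1/n shrink to 0, yet their sum (the harmonic series) diverges, which is what triggers the second Borel–Cantelli lemma. A sketch, with an illustrative seeded simulation of the independent events:

```python
import random

# Partial sums of P(X_n = 1) = 1/n: the harmonic numbers keep growing,
# so Borel-Cantelli II applies and the events occur infinitely often.
harmonic = [0.0]
for n in range(1, 10001):
    harmonic.append(harmonic[-1] + 1.0 / n)

# Seeded simulation of the independent events {X_n = 1}
rng = random.Random(7)
occurrences = [n for n in range(1, 100001) if rng.random() < 1.0 / n]
```

The harmonic sums pass every bound (about 9.79 already at n = 10000), and in any simulated run the event indices keep appearing; the first event is certain since P(X_1 = 1) = 1.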

63 (Lévy) If (X_n) is a sequence of independent random variables and the partial sums S_n converge in probability, then they converge almost surely. This theorem might suggest that convergence in probability is equivalent to almost sure convergence in the case of martingales. But this is false in general, as we show in the following example.

64 Consider a sequence of independent random variables Y_1, Y_2, … with the indicated distribution, and define from them a new sequence (X_n) as stated.

65 We claim that (X_n) is a martingale, which means that E[X_{n+1} | F_n] = X_n. Now, since Y_{n+1} is independent of F_n, we get E[X_{n+1} | F_n] = X_n.

66 Notice that for every ε > 0, P(|X_n| > ε) → 0, so X_n → 0 in probability. But with probability one the sequence takes values far from 0 infinitely often, which is equivalent to saying that X_n does not converge to zero almost surely. That is, convergence in probability does not imply almost sure convergence even for martingales.

67 This example shows that convergence in distribution does not imply convergence in probability. Let X put mass 1/2 on each of 0 and 1, and define X_n = 1 − X for every n. Hence the cumulative distribution function of X_n is the same as that of X, so X_n → X in distribution. On the other hand, |X_n − X| = |1 − 2X| = 1 for every n, which does not converge to 0. Hence X_n does not converge to X in probability.

68 Define X_n as in example (13), with P(X_n = 1) = 1/n and P(X_n = 0) = 1 − 1/n, independent. We can see that E|X_n| = 1/n → 0, hence X_n → 0 in mean. And in example (13) we showed that X_n does not converge to 0 almost surely. This example shows that convergence in mean does not imply almost sure convergence.

69 This example shows that the converse implication of the last example is not true in general. Define on the probability space ((0, 1), B(0,1), P), where P = Lebesgue measure, a sequence of random variables X_n = n · 1_{(0, 1/n)}. It is clear that X_n → 0 almost surely. On the other hand, E|X_n| = n · (1/n) = 1 for every n; hence X_n does not converge to 0 in mean.
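Both claims about X_n = n·1_{(0,1/n)} are exactly computable: for any fixed ω the values vanish once n > 1/ω, while the Lebesgue integral is n·(1/n) = 1 for every n. A sketch:

```python
def x_n(n, omega):
    """X_n = n * 1_{(0, 1/n)} evaluated at a sample point omega in (0, 1)."""
    return n if 0 < omega < 1.0 / n else 0

def expectation(n):
    """Exact Lebesgue integral of X_n: value n on an interval of length 1/n."""
    return n * (1.0 / n)

pointwise = [x_n(n, 0.3) for n in range(1, 20)]   # vanishes once 1/n < 0.3
means = [expectation(n) for n in range(1, 20)]     # stuck at 1 forever
```

The pointwise values at ω = 0.3 are 0 from n = 4 onward (almost sure convergence to 0), yet every mean equals 1 (no convergence in mean).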

70 X_n converges to X in the p-th mean if E|X_n − X|^p → 0. This example shows that convergence in probability does not imply convergence in the p-th mean. Define P(X_n = e^n) = 1/n and P(X_n = 0) = 1 − 1/n. For every ε > 0, P(|X_n| > ε) = 1/n → 0.

71 This means X_n → 0 in probability, while E|X_n|^p = e^{np}/n → ∞. Hence for every p > 0, X_n does not converge to 0 in the p-th mean.

References:
1- Adventures in Stochastic Processes, by Sidney I. Resnick.
2- Introduction to Probability Theory, by Paul G. Hoel, Sidney C. Port and Charles J. Stone.
3- Markov Chains, by J.R. Norris.
4- Probability, by Alan F. Karr.
5- Random Walks and Electric Networks, by Peter G. Doyle and J. Laurie Snell.
6- The Integrals of Lebesgue, Denjoy, Perron, and Henstock, by Russell A. Gordon.