
1 Transformation Techniques
In probability theory, various transformation techniques are used to simplify moment calculations. We will discuss four of these functions:
1. Probability Generating Function
2. Moment Generating Function
3. Characteristic Function
4. Laplace Transform of the probability density function

2 Probability Generating Function
A tool that simplifies computations involving non-negative integer-valued discrete random variables.
Let $X$ be a non-negative integer-valued random variable with $P(X = k) = p_k$. Define the probability generating function (PGF) of $X$ by
$$G_X(z) = E[z^X] = \sum_{k=0}^{\infty} p_k z^k = p_0 + p_1 z + p_2 z^2 + \cdots + p_k z^k + \cdots$$
where $z$ is a complex number with $|z| \le 1$. $G_X(z)$ is nothing more than the z-transform of the sequence $p_k$. Note that $G_X(1) = \sum_k p_k = 1$.

3 Generating Functions
Let $K$ be a non-negative integer-valued random variable with probability distribution $p_j$, where $p_j = \Pr[K = j]$ for $j = 0, 1, 2, \ldots$
$$g(z) = p_0 + p_1 z + p_2 z^2 + p_3 z^3 + \cdots$$
$g(z)$ is a power series whose coefficient of $z^j$ is the probability $p_j$; it is the probability generating function of the random variable $K$.
A few properties:
$g(1) = 1$, since $\sum_j p_j = 1$; the series converges absolutely for $|z| \le 1$.
Expected value: $E[K] = \sum_j j\, p_j$. Since $\frac{d}{dz} g(z) = \sum_{j \ge 1} j\, p_j z^{j-1}$, evaluating at $z = 1$ gives $E[K] = g^{(1)}(1)$.
Similarly, $V[K] = g^{(2)}(1) + g^{(1)}(1) - [g^{(1)}(1)]^2$.
Reference: Robert B. Cooper, Introduction to Queueing Theory.
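As a quick check of these identities, here is a minimal sketch in Python (assuming SymPy is available; the fair-die pmf is an illustrative choice, not from the slides):

```python
# Verify g(1) = 1, E[K] = g'(1), and V[K] = g''(1) + g'(1) - [g'(1)]^2
# for a fair six-sided die (an illustrative, hypothetical example).
import sympy as sp

z = sp.symbols('z')
p = {k: sp.Rational(1, 6) for k in range(1, 7)}     # P(K = k) = 1/6, k = 1..6
g = sum(pk * z**k for k, pk in p.items())           # g(z) = sum_k p_k z^k

assert g.subs(z, 1) == 1                            # g(1) = sum_k p_k = 1
mean = sp.diff(g, z).subs(z, 1)                     # E[K] = g'(1)
var = sp.diff(g, z, 2).subs(z, 1) + mean - mean**2  # V[K] formula from slide 3
print(mean, var)                                    # 7/2, 35/12
```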

4 Moment Generating Function
$m_X(t)$: the moment generating function is the expected value of $e^{tX}$, where $t$ is a real variable and $X$ is the random variable:
$$m_X(t) = E[e^{tX}] = \sum_{x_i \in R_X} p(x_i)\, e^{t x_i} = \int_{R_X} f(x)\, e^{tx}\, dx$$
(the sum for a discrete $X$, the integral for a continuous one). If $m_X(t)$ exists for all real values of $t$ in some small interval $(-d, d)$, $d > 0$, about the origin, it can be shown that the probability distribution function can be recovered from $m_X(t)$. We assume $m_X(t)$ exists in such a small region about the origin.

5 Moment Generating Function-2
$$e^{tX} = 1 + tX + \frac{t^2 X^2}{2!} + \frac{t^3 X^3}{3!} + \cdots$$
Assume $X$ is a continuous random variable. Then
$$m_X(t) = E[e^{tX}] = \int_{R_X} f(x)\, e^{tx}\, dx = \int_{R_X} \sum_{i=0}^{\infty} \frac{t^i x^i}{i!}\, f(x)\, dx = \sum_{i=0}^{\infty} \frac{t^i}{i!} \int_{R_X} x^i f(x)\, dx$$
$$= \sum_{i=0}^{\infty} \frac{t^i}{i!}\, E[X^i] = E[X^0] + t\, E[X] + \frac{t^2}{2!}\, E[X^2] + \cdots$$

6 Moment Generating Function-3
$$m_X(t) = E[X^0] + t\, E[X] + \frac{t^2}{2!}\, E[X^2] + \cdots$$
$$m_X^{(1)}(t) = E[X] + t\, E[X^2] + \frac{t^2}{2!}\, E[X^3] + \cdots$$
$$m_X^{(2)}(t) = E[X^2] + t\, E[X^3] + \frac{t^2}{2!}\, E[X^4] + \cdots$$
At $t = 0$:
$$m_X^{(1)}(0) = E[X], \qquad m_X^{(2)}(0) = E[X^2]$$
$$\mathrm{Var}[X] = E[X^2] - (E[X])^2 = m_X^{(2)}(0) - [m_X^{(1)}(0)]^2$$
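This differentiate-at-zero recipe is easy to check symbolically. A minimal sketch (assuming SymPy; the standard normal pdf is an illustrative choice, not from the slides):

```python
# Obtain E[X] and E[X^2] from the MGF m_X(t) = E[e^{tX}] by differentiating
# at t = 0, for X ~ N(0, 1). SymPy should evaluate this Gaussian integral
# to exp(t**2 / 2).
import sympy as sp

t, x = sp.symbols('t x', real=True)
f = sp.exp(-x**2 / 2) / sp.sqrt(2 * sp.pi)               # pdf of N(0, 1)
m = sp.integrate(sp.exp(t * x) * f, (x, -sp.oo, sp.oo))  # m_X(t)

EX  = sp.diff(m, t).subs(t, 0)     # m'(0)  = E[X]   = 0
EX2 = sp.diff(m, t, 2).subs(t, 0)  # m''(0) = E[X^2] = 1
print(sp.simplify(m), EX, EX2)
```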

7 Characteristic Function
The characteristic function of a random variable $X$ is
$$\Phi_X(u) = E[e^{juX}] = \int_{-\infty}^{\infty} e^{jux} f_X(x)\, dx$$
where $j = \sqrt{-1}$ and $u$ is an arbitrary real variable.
Note: except for the sign of the exponent, the characteristic function is the Fourier transform of the pdf of $X$.
$$\Phi_X(u) = \int_{-\infty}^{\infty} f_X(x) \left[ 1 + jux + \frac{(jux)^2}{2!} + \frac{(jux)^3}{3!} + \cdots \right] dx = 1 + ju\, E[X] + \frac{(ju)^2}{2!}\, E[X^2] + \frac{(ju)^3}{3!}\, E[X^3] + \cdots$$
Let $u = 0$. Then
$$\Phi_X(0) = 1$$
$$\Phi_X^{(1)}(0) = \left. \frac{d\Phi_X(u)}{du} \right|_{u=0} = j\, E[X]$$
$$\Phi_X^{(2)}(0) = \left. \frac{d^2\Phi_X(u)}{du^2} \right|_{u=0} = j^2\, E[X^2]$$
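These relations can also be checked numerically. A minimal sketch (assuming NumPy and SciPy; the Uniform(0,1) pdf and the finite-difference derivative are illustrative choices, not from the slides):

```python
# Evaluate phi_X(u) = E[e^{juX}] numerically for X ~ Uniform(0, 1)
# (f(x) = 1 on (0, 1)) and check phi'(0) ~ j*E[X] = 0.5j by a central
# finite difference.
import numpy as np
from scipy.integrate import quad

def phi(u):
    re, _ = quad(lambda x: np.cos(u * x), 0.0, 1.0)  # real part of E[e^{juX}]
    im, _ = quad(lambda x: np.sin(u * x), 0.0, 1.0)  # imaginary part
    return re + 1j * im

h = 1e-5
print(phi(0.0))                      # (1+0j): phi_X(0) = 1
print((phi(h) - phi(-h)) / (2 * h))  # ~ 0.5j = j*E[X]
```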

8 Laplace Transform
Let the CDF of the traffic arrival process be defined as $A(x)$, where $X$ is the random variable for the inter-arrival time between two customers:
$$A(x) = P[X \le x]$$
The pdf (probability density function) is denoted by $a(x)$. The Laplace transform of $a(x)$ is denoted by $A^*(s)$ and is given by
$$A^*(s) = E[e^{-sX}] = \int_{-\infty}^{\infty} e^{-sx} a(x)\, dx$$
Since such random variables take only non-negative values, we can write the transform as
$$A^*(s) = \int_{0}^{\infty} e^{-sx} a(x)\, dx$$
The same technique used for the moment generating function or the characteristic function shows that
$$A^{*(n)}(0) = (-1)^n E[X^n]$$
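A minimal sketch of this moment relation (assuming SymPy; the Erlang-2 density is a hypothetical example, not from the slides):

```python
# Verify A*^(n)(0) = (-1)^n E[X^n] for the Erlang-2 density
# a(x) = lam^2 * x * exp(-lam*x), x > 0 (a hypothetical example).
import sympy as sp

x, s = sp.symbols('x s', real=True)
lam = sp.symbols('lam', positive=True)
a = lam**2 * x * sp.exp(-lam * x)

# conds='none' asks SymPy to drop convergence conditions on s.
Astar = sp.integrate(sp.exp(-s * x) * a, (x, 0, sp.oo), conds='none')
for n in (1, 2):
    lhs = sp.diff(Astar, s, n).subs(s, 0)
    rhs = (-1)**n * sp.integrate(x**n * a, (x, 0, sp.oo))  # (-1)^n E[X^n]
    print(sp.simplify(lhs - rhs))   # 0, 0
```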

9 Example
For a continuous random variable the pdf is given as follows:
$$f_X(x) = \begin{cases} \lambda e^{-\lambda x} & x > 0 \\ 0 & x \le 0 \end{cases}$$
Laplace transform: $A^*(s) = \dfrac{\lambda}{\lambda + s}$
Characteristic function: $\Phi_X(u) = \dfrac{\lambda}{\lambda - ju}$
Moment generating function: $m_X(v) = \dfrac{\lambda}{\lambda - v}$
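All three closed forms follow mechanically from the pdf. A minimal sketch (assuming SymPy; conds='none' drops the convergence conditions SymPy would otherwise attach):

```python
# Derive the three transforms of f(x) = lam*exp(-lam*x), x > 0,
# directly from their defining integrals.
import sympy as sp

x, s, u, v = sp.symbols('x s u v', real=True)
lam = sp.symbols('lam', positive=True)
f = lam * sp.exp(-lam * x)

A   = sp.integrate(sp.exp(-s * x) * f, (x, 0, sp.oo), conds='none')        # lam/(lam + s)
phi = sp.integrate(sp.exp(sp.I * u * x) * f, (x, 0, sp.oo), conds='none')  # lam/(lam - I*u)
m   = sp.integrate(sp.exp(v * x) * f, (x, 0, sp.oo), conds='none')         # lam/(lam - v)
print(sp.simplify(A), sp.simplify(phi), sp.simplify(m))
```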

10 Expected Value
Laplace transform:
$$E[X] = (-1)\, A^{*(1)}(0) = -\left. \frac{d}{ds}\!\left[\frac{\lambda}{\lambda + s}\right] \right|_{s=0} = -\left[\frac{-\lambda}{(\lambda + s)^2}\right]_{s=0} = \frac{\lambda}{\lambda^2} = \frac{1}{\lambda}$$
Characteristic function:
$$E[X] = j^{-1}\, \Phi_X^{(1)}(0) = j^{-1} \left. \frac{d}{du}\!\left[\frac{\lambda}{\lambda - ju}\right] \right|_{u=0} = j^{-1}\left[\frac{\lambda j}{(\lambda - ju)^2}\right]_{u=0} = \frac{\lambda}{\lambda^2} = \frac{1}{\lambda}$$
Moment generating function:
$$E[X] = m_X^{(1)}(0) = \left. \frac{d}{dv}\!\left[\frac{\lambda}{\lambda - v}\right] \right|_{v=0} = \left[\frac{\lambda}{(\lambda - v)^2}\right]_{v=0} = \frac{\lambda}{\lambda^2} = \frac{1}{\lambda}$$

11 Variance
Laplace transform:
$$E[X^2] = (-1)^2 A^{*(2)}(0) = \left. \frac{d^2}{ds^2}\!\left[\frac{\lambda}{\lambda + s}\right] \right|_{s=0} = \left[\frac{2\lambda}{(\lambda + s)^3}\right]_{s=0} = \frac{2\lambda}{\lambda^3} = \frac{2}{\lambda^2}$$
$$\mathrm{Var}[X] = E[X^2] - (E[X])^2 = \frac{2}{\lambda^2} - \left(\frac{1}{\lambda}\right)^2 = \frac{1}{\lambda^2}$$
Characteristic function:
$$E[X^2] = j^{-2}\, \Phi_X^{(2)}(0) = j^{-2} \left. \frac{d^2}{du^2}\!\left[\frac{\lambda}{\lambda - ju}\right] \right|_{u=0} = j^{-2}\left[\frac{2\lambda j^2}{(\lambda - ju)^3}\right]_{u=0} = \frac{2\lambda}{\lambda^3} = \frac{2}{\lambda^2}$$
Moment generating function:
$$E[X^2] = m_X^{(2)}(0) = \left. \frac{d^2}{dv^2}\!\left[\frac{\lambda}{\lambda - v}\right] \right|_{v=0} = \left[\frac{2\lambda}{(\lambda - v)^3}\right]_{v=0} = \frac{2\lambda}{\lambda^3} = \frac{2}{\lambda^2}$$
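Slides 10 and 11 can be reproduced by differentiating the three closed forms at the origin. A minimal sketch (assuming SymPy):

```python
# Take derivatives of the three transforms of Exp(lam) at the origin
# to recover E[X] = 1/lam and Var[X] = 1/lam**2 from each of them.
import sympy as sp

s, u, v = sp.symbols('s u v', real=True)
lam = sp.symbols('lam', positive=True)

A   = lam / (lam + s)          # Laplace transform A*(s)
phi = lam / (lam - sp.I * u)   # characteristic function Phi_X(u)
m   = lam / (lam - v)          # moment generating function m_X(v)

EX  = [-sp.diff(A, s).subs(s, 0),
       sp.diff(phi, u).subs(u, 0) / sp.I,
       sp.diff(m, v).subs(v, 0)]
EX2 = [sp.diff(A, s, 2).subs(s, 0),
       sp.diff(phi, u, 2).subs(u, 0) / sp.I**2,
       sp.diff(m, v, 2).subs(v, 0)]
print([sp.simplify(e) for e in EX])                # [1/lam, 1/lam, 1/lam]
print([sp.simplify(e2 - EX[0]**2) for e2 in EX2])  # Var[X] = 1/lam**2, thrice
```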


13 Sum of Random Variables
$K_1$ and $K_2$ are two independent random variables with GFs $g_1(z)$ and $g_2(z)$. Find the probability distribution $P\{K = k\}$, where $K = K_1 + K_2$:
$$P\{K = k\} = \sum_{j=0}^{k} P\{K_1 = j\}\, P\{K_2 = k - j\}$$
$$g_1(z) = \sum_{j=0}^{\infty} P\{K_1 = j\}\, z^j, \qquad g_2(z) = \sum_{j=0}^{\infty} P\{K_2 = j\}\, z^j$$
$$g_1(z)\, g_2(z) = \sum_{k=0}^{\infty} \left[ \sum_{j=0}^{k} P\{K_1 = j\}\, P\{K_2 = k - j\} \right] z^k$$
If $K$ has a generating function $g(z)$, then
$$g(z) = \sum_{k=0}^{\infty} P\{K = k\}\, z^k = \sum_{k=0}^{\infty} \left[ \sum_{j=0}^{k} P\{K_1 = j\}\, P\{K_2 = k - j\} \right] z^k$$
Hence $g(z) = g_1(z)\, g_2(z)$.
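Because multiplying generating functions convolves their coefficient sequences, the pmf of a sum can be computed as a discrete convolution. A minimal sketch (assuming NumPy; two fair dice as an illustrative choice, not from the slides):

```python
# The pmf of K1 + K2 is the convolution of the pmfs, i.e. the coefficient
# sequence of g1(z)*g2(z). Example: the sum of two fair dice.
import numpy as np

p = np.full(6, 1/6)          # pmf of one die on the values 1..6
p_sum = np.convolve(p, p)    # pmf of the sum on the values 2..12
print(np.round(p_sum, 4))    # 1/36, 2/36, ..., 6/36, ..., 2/36, 1/36
print(p_sum.sum())           # 1.0
```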

14 Example: Bernoulli Distribution
Bernoulli distribution: $X = 0$ with probability $q$, $X = 1$ with probability $p$, where $p + q = 1$.
$$g(z) = q + pz, \qquad g'(1) = p, \qquad g''(1) = 0$$
$$E[X] = g'(1) = p$$
$$V[X] = g''(1) + g'(1) - [g'(1)]^2 = p - p^2 = p(1 - p) = pq$$
A coin is tossed $n$ times, with $X_j = 0$ for tails and $X_j = 1$ for heads. What is the probability of $k$ heads in $n$ tosses? $S_n$ is the sum of $n$ independent Bernoulli random variables:
$$S_n = X_1 + X_2 + \cdots + X_n$$
The GF of one toss is $g(z) = q + pz$, so the GF of $S_n$ is
$$\sum_{k} P\{S_n = k\}\, z^k = g(z)\, g(z) \cdots g(z) = [g(z)]^n = (q + pz)^n = \sum_{k=0}^{n} \binom{n}{k} (pz)^k q^{n-k}$$
Binomial distribution:
$$P\{S_n = k\} = \binom{n}{k} p^k q^{n-k} \quad \text{for } k = 0, \ldots, n; \qquad P\{S_n = k\} = 0 \quad \text{for } k > n$$
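The expansion can be done symbolically and the binomial probabilities read off as coefficients of $z^k$. A minimal sketch (assuming SymPy; $n = 4$ is an arbitrary illustrative choice):

```python
# Expand [g(z)]^n = (q + p*z)^n; the coefficient of z^k is
# P{S_n = k} = C(n, k) * p^k * q^(n-k).
import sympy as sp

z, p, q = sp.symbols('z p q')
n = 4
gf = sp.expand((q + p * z) ** n)
print([gf.coeff(z, k) for k in range(n + 1)])
# [q**4, 4*p*q**3, 6*p**2*q**2, 4*p**3*q, p**4]
```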

15 Example: Poisson Distribution
Poisson distribution:
$$P[N(t) = j] = \frac{(\lambda t)^j}{j!}\, e^{-\lambda t} \quad \text{for } j = 0, 1, 2, \ldots$$
Generating function:
$$g(z) = \sum_{j=0}^{\infty} \frac{(\lambda t)^j}{j!}\, e^{-\lambda t} z^j = e^{-\lambda t} \sum_{j=0}^{\infty} \frac{(\lambda t z)^j}{j!} = e^{-\lambda t}\, e^{\lambda t z} = e^{-\lambda t (1 - z)}$$
Expectation:
$$g'(z) = \lambda t\, e^{-\lambda t (1 - z)}, \qquad E[N(t)] = g'(1) = \lambda t$$
Variance:
$$g''(z) = (\lambda t)^2 e^{-\lambda t (1 - z)}, \qquad g''(1) = (\lambda t)^2$$
$$V[N(t)] = g''(1) + g'(1) - [g'(1)]^2 = \lambda t$$
Sum of two independent Poisson distributions with rates $\lambda_1$ and $\lambda_2$:
$$g(z) = e^{-\lambda_1 t (1 - z)}\, e^{-\lambda_2 t (1 - z)} = e^{-(\lambda_1 + \lambda_2) t (1 - z)}$$
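A sketch checking both the Poisson PGF and the closure of the Poisson family under sums (assuming SymPy, which should recognize the exponential series):

```python
# Sum the Poisson PGF series term by term, then multiply two Poisson PGFs.
import sympy as sp

z, t, j = sp.symbols('z t j')
lam1, lam2 = sp.symbols('lam1 lam2', positive=True)

# PGF of a Poisson(lam1*t) count, summed term by term:
g1 = sp.exp(-lam1 * t) * sp.summation((lam1 * t * z)**j / sp.factorial(j),
                                      (j, 0, sp.oo))
print(sp.simplify(g1))   # exp(-lam1*t*(1 - z)), up to rearrangement

# Product of two Poisson PGFs is again a Poisson PGF, rate lam1 + lam2:
g = sp.exp(-lam1 * t * (1 - z)) * sp.exp(-lam2 * t * (1 - z))
print(sp.simplify(g))    # exp(-(lam1 + lam2)*t*(1 - z)), up to rearrangement
```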

16 Use of GF for Probability: M/M/1 System
Birth and death equations:
$$0 = -(\lambda + \mu)\, p_n + \mu\, p_{n+1} + \lambda\, p_{n-1} \quad (n \ge 1)$$
$$0 = -\lambda\, p_0 + \mu\, p_1$$
so
$$p_{n+1} = \frac{\lambda + \mu}{\mu}\, p_n - \frac{\lambda}{\mu}\, p_{n-1} \quad (n \ge 1), \qquad p_1 = \frac{\lambda}{\mu}\, p_0$$
With $\rho = \lambda / \mu$:
$$p_{n+1} = (1 + \rho)\, p_n - \rho\, p_{n-1} \quad (n \ge 1), \qquad p_1 = \rho\, p_0$$
Use a GF to solve this equation: multiply by $z^n$ and sum,
$$z^{-1}\, p_{n+1} z^{n+1} = (1 + \rho)\, p_n z^n - \rho z\, p_{n-1} z^{n-1}$$
$$z^{-1} \sum_{n=1}^{\infty} p_{n+1} z^{n+1} = (1 + \rho) \sum_{n=1}^{\infty} p_n z^n - \rho z \sum_{n=1}^{\infty} p_{n-1} z^{n-1}$$

17 GF for Prob z -1 [  p n+1 z n+1 – p 1 z – p 0 ] = (  )[  z n p n - p 0 ] -  z  p n-1 z n-1    n=-1 n=1n=0 But  p n+1 z n+1 =  p n z n =  p n-1 z n-1 = P(z) n=-1 n=0n=1    z -1 [P(z) – p 1 z – p 0 ] = (  )[ P(z) - p 0 ] -  z P(z) z -1 [P(z) –  p 0 z – p 0 ] = (  )[ P(z) - p 0 ] -  z P(z) z -1 P(z) –  p 0 – z -1 p 0 =  P(z) -  p 0 + P(z) - p 0 -  z P(z) P(z) = p 0 /(1 –  z) To Find p we use the boundary condition P(1) =1 P(1) = p 0 /(1 –  ) = 1 p 0 = 1 –  P(z) = (1 –  /(1 –  z) 1 /(1 –  z) = 1 + z  + z  2 + ……. P(z) =  (1-  )  n z n n=0  p n  (1-  )  n