
1 Transformation Techniques
In probability theory, various transformation techniques are used to simplify moment calculations. We will discuss four of them:
1. Probability Generating Function
2. Moment Generating Function
3. Characteristic Function
4. Laplace Transform of the probability density function

2 Probability Generating Function
A tool that simplifies computations for non-negative integer-valued discrete random variables.
Let X be a non-negative integer-valued random variable with P(X = k) = p_k. The probability generating function (PGF) of X is
G_X(z) = E[z^X] = Σ_k p_k z^k = p_0 + p_1 z + p_2 z^2 + ... + p_k z^k + ...
where z is a complex number with |z| ≤ 1.
G_X(z) is nothing more than the z-transform of the sequence p_k (up to the sign convention of the exponent).
Note that G_X(1) = Σ_k p_k = 1.
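The series definition is easy to check numerically. A minimal sketch — the geometric pmf p_k = (1 − r)r^k and the value r = 0.4 are illustrative assumptions, not from the slides; that pmf's PGF has the closed form (1 − r)/(1 − rz):

```python
# PGF as a truncated series: G_X(z) ~= sum over k < terms of p_k z^k.
def pgf(pmf, z, terms=200):
    return sum(pmf(k) * z ** k for k in range(terms))

r = 0.4
geom = lambda k: (1 - r) * r ** k   # illustrative geometric pmf

# Property G_X(1) = sum of all probabilities = 1
print(round(pgf(geom, 1.0), 10))                              # -> 1.0
# Series agrees with the closed form (1 - r)/(1 - r z) at z = 0.7
print(abs(pgf(geom, 0.7) - (1 - r) / (1 - r * 0.7)) < 1e-12)  # -> True
```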

3 Generating Functions
Let K be a non-negative integer-valued random variable with probability distribution p_j, where p_j = Prob[K = j] for j = 0, 1, 2, ...
g(z) = p_0 + p_1 z + p_2 z^2 + p_3 z^3 + ...
g(z), a power series in z whose coefficient of z^j is the probability p_j, is the probability generating function of the random variable K.
A few properties:
g(1) = 1, since Σ_j p_j = 1; z is a complex number and the series converges for |z| ≤ 1.
Expected value: E[K] = Σ_j j p_j for j = 0, 1, 2, ...
Since (d/dz) g(z) = Σ_j j p_j z^(j−1) for j = 1, 2, ..., evaluating at z = 1 gives E[K] = g^(1)(1).
Similarly, Var[K] = g^(2)(1) + g^(1)(1) − [g^(1)(1)]^2.
Reference: Robert B. Cooper, Introduction to Queueing Theory.
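The derivative formulas E[K] = g^(1)(1) and Var[K] = g^(2)(1) + g^(1)(1) − [g^(1)(1)]^2 can be sketched with finite differences. The Poisson(2.0) pmf below is an illustrative assumption, chosen because its mean and variance are both λ:

```python
import math

lam = 2.0
p = lambda j: math.exp(-lam) * lam ** j / math.factorial(j)  # Poisson pmf

def g(z, terms=60):
    """Truncated PGF g(z) = sum_j p_j z^j."""
    return sum(p(j) * z ** j for j in range(terms))

h = 1e-5
g1 = (g(1 + h) - g(1 - h)) / (2 * h)             # ~ g'(1)  = E[K]
g2 = (g(1 + h) - 2 * g(1) + g(1 - h)) / h ** 2   # ~ g''(1) = E[K(K-1)]
var = g2 + g1 - g1 ** 2

# For Poisson(lam): E[K] = lam and Var[K] = lam
print(abs(g1 - lam) < 1e-5)   # -> True
print(abs(var - lam) < 1e-3)  # -> True
```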

4 Moment Generating Function
m_g(t), the moment generating function, is the expected value of e^(tX), where t is a real variable and X is the random variable:
m_g(t) = E[e^(tX)] = Σ_{x_i ∈ R_X} p(x_i) e^(t x_i)  (discrete case)
m_g(t) = ∫_{R_X} f(x) e^(tx) dx  (continuous case)
If m_g(t) exists for all real t in some small interval (−d, d), d > 0, about the origin, it can be shown that the probability distribution function can be recovered from m_g(t). We assume m_g(t) exists in such a region about the origin.

5 Moment Generating Function (2)
e^(tX) = 1 + tX + t^2 X^2/2! + t^3 X^3/3! + ...
Assume X is a continuous random variable. Then
m_g(t) = E[e^(tX)] = ∫_{R_X} f(x) e^(tx) dx
= ∫_{R_X} Σ_{i=0..∞} (t^i x^i / i!) f(x) dx
= Σ_{i=0..∞} (t^i / i!) ∫_{R_X} x^i f(x) dx
= Σ_{i=0..∞} (t^i / i!) E[X^i]
= E[X^0] + t E[X^1] + (t^2/2!) E[X^2] + ...

6 Moment Generating Function (3)
m_g(t) = E[X^0] + t E[X^1] + (t^2/2!) E[X^2] + ...
m_g^(1)(t) = E[X^1] + t E[X^2] + (t^2/2!) E[X^3] + ...
m_g^(2)(t) = E[X^2] + t E[X^3] + (t^2/2!) E[X^4] + ...
At t = 0:
m_g^(1)(0) = E[X^1]
m_g^(2)(0) = E[X^2]
Var[X] = E[X^2] − [E[X]]^2 = m_g^(2)(0) − [m_g^(1)(0)]^2
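As a numeric sketch of extracting moments at t = 0, take the exponential MGF m_g(t) = λ/(λ − t), which this deck derives later for the pdf λe^(−λx); λ = 3 is an arbitrary choice:

```python
lam = 3.0
m = lambda t: lam / (lam - t)   # exponential MGF, valid for t < lam

h = 1e-4
m1 = (m(h) - m(-h)) / (2 * h)            # ~ m'(0)  = E[X]   = 1/lam
m2 = (m(h) - 2 * m(0) + m(-h)) / h ** 2  # ~ m''(0) = E[X^2] = 2/lam^2
var = m2 - m1 ** 2

print(abs(m1 - 1 / lam) < 1e-6)        # -> True
print(abs(var - 1 / lam ** 2) < 1e-6)  # -> True
```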

7 Characteristic Function
The characteristic function of a random variable X is
φ_X(u) = E[e^(juX)] = ∫_{−∞..∞} e^(jux) f_X(x) dx
where j = √(−1) and u is an arbitrary real variable.
Note: except for the sign of the exponent, the characteristic function is the Fourier transform of the pdf of X.
Expanding the exponential:
φ_X(u) = ∫_{−∞..∞} f_X(x) [1 + jux + (jux)^2/2! + (jux)^3/3! + ...] dx
= 1 + ju E[X] + ((ju)^2/2!) E[X^2] + ((ju)^3/3!) E[X^3] + ...
Setting u = 0: φ_X(0) = 1
φ_X^(1)(0) = dφ_X(u)/du |_{u=0} = j E[X]
φ_X^(2)(0) = d^2 φ_X(u)/du^2 |_{u=0} = j^2 E[X^2]
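The defining integral can be evaluated directly with complex arithmetic. A sketch for the exponential pdf f_X(x) = λe^(−λx), whose characteristic function has the closed form λ/(λ − ju); the grid spacing, cutoff, λ = 2, and u = 1.5 are all arbitrary assumptions:

```python
import cmath

lam, u = 2.0, 1.5
dx, xmax = 1e-3, 30.0
n = int(xmax / dx)

# Midpoint-rule approximation of integral_0^inf e^{jux} lam e^{-lam x} dx
phi = sum(cmath.exp((1j * u - lam) * ((i + 0.5) * dx)) * lam * dx
          for i in range(n))

closed = lam / (lam - 1j * u)
print(abs(phi - closed) < 1e-4)  # -> True
```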

8 Laplace Transform
Let the CDF of the traffic arrival process be A(x), where X is the random variable for the inter-arrival time between two customers:
A(x) = P[X ≤ x]
The pdf (probability density function) is denoted a(x).
The Laplace transform of a(x), denoted A*(s), is
A*(s) = E[e^(−sX)] = ∫_{−∞..∞} e^(−sx) a(x) dx
Since such random variables take only non-negative values, this reduces to
A*(s) = ∫_{0..∞} e^(−sx) a(x) dx
Techniques similar to those used for the moment generating function or the characteristic function show that
A*^(n)(0) = (−1)^n E[X^n]
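The moment property A*^(n)(0) = (−1)^n E[X^n] can be sketched with finite differences, assuming the exponential inter-arrival pdf a(x) = λe^(−λx) worked out on the next slide, whose transform is A*(s) = λ/(λ + s); λ = 2 is arbitrary:

```python
lam = 2.0
A = lambda s: lam / (lam + s)   # Laplace transform of a(x) = lam*e^{-lam x}

h = 1e-4
d1 = (A(h) - A(-h)) / (2 * h)            # ~ A*'(0)  = -1/lam
d2 = (A(h) - 2 * A(0) + A(-h)) / h ** 2  # ~ A*''(0) =  2/lam^2

print(abs((-1) * d1 - 1 / lam) < 1e-6)            # E[X]   = (-1)^1 A*'(0)
print(abs((-1) ** 2 * d2 - 2 / lam ** 2) < 1e-6)  # E[X^2] = (-1)^2 A*''(0)
```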

9 Example
For a continuous random variable, the pdf is given as
f_X(x) = λ e^(−λx) for x ≥ 0
f_X(x) = 0 for x < 0
Laplace transform: A*(s) = λ/(λ + s)
Characteristic function: φ_X(u) = λ/(λ − ju)
Moment generating function: m_g(v) = λ/(λ − v)

10 Expected Value
Laplace transform:
E[X] = (−1) A*^(1)(0) = (−1) d[λ/(λ + s)]/ds |_{s=0} = (−1)[−λ/(λ + s)^2] |_{s=0} = λ/λ^2 = 1/λ
Characteristic function:
E[X] = j^(−1) φ_X^(1)(0) = j^(−1) d[λ/(λ − ju)]/du |_{u=0} = j^(−1)[λj/(λ − ju)^2] |_{u=0} = λ/λ^2 = 1/λ
Moment generating function:
E[X] = m_g^(1)(0) = d[λ/(λ − v)]/dv |_{v=0} = [λ/(λ − v)^2] |_{v=0} = λ/λ^2 = 1/λ

11 Variance
Laplace transform:
E[X^2] = (−1)^2 A*^(2)(0) = d^2[λ/(λ + s)]/ds^2 |_{s=0} = [2λ/(λ + s)^3] |_{s=0} = 2λ/λ^3 = 2/λ^2
Var[X] = E[X^2] − [E[X]]^2 = 2/λ^2 − [1/λ]^2 = 1/λ^2
Characteristic function:
E[X^2] = j^(−2) φ_X^(2)(0) = j^(−2) d^2[λ/(λ − ju)]/du^2 |_{u=0} = j^(−2)[2λj^2/(λ − ju)^3] |_{u=0} = 2λ/λ^3 = 2/λ^2
Moment generating function:
E[X^2] = m_g^(2)(0) = d^2[λ/(λ − v)]/dv^2 |_{v=0} = [2λ/(λ − v)^3] |_{v=0} = 2λ/λ^3 = 2/λ^2
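All three transforms give E[X] = 1/λ and Var[X] = 1/λ^2 for the exponential example. A simulation cross-check (λ = 2, the seed, and the sample size are arbitrary choices):

```python
import random

random.seed(42)
lam, n = 2.0, 200_000
xs = [random.expovariate(lam) for _ in range(n)]  # Exp(lam) samples

mean = sum(xs) / n
var = sum((x - mean) ** 2 for x in xs) / n

print(abs(mean - 1 / lam) < 0.01)      # -> True (E[X]   = 0.5)
print(abs(var - 1 / lam ** 2) < 0.01)  # -> True (Var[X] = 0.25)
```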


13 Sum of Random Variables
K1 and K2 are two independent random variables with GFs g1(z) and g2(z). Find the probability distribution P{K = k}, where K = K1 + K2.
P{K = k} = Σ_{j=0..k} P{K1 = j} P{K2 = k − j}
g1(z) = Σ_{j=0..∞} P{K1 = j} z^j
g2(z) = Σ_{j=0..∞} P{K2 = j} z^j
g1(z) g2(z) = Σ_{k=0..∞} [ Σ_{j=0..k} P{K1 = j} P{K2 = k − j} ] z^k
If K has a generating function g(z), then
g(z) = Σ_{k=0..∞} P{K = k} z^k = Σ_{k=0..∞} [ Σ_{j=0..k} P{K1 = j} P{K2 = k − j} ] z^k
Hence g(z) = g1(z) g2(z).
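The identity g(z) = g1(z) g2(z) is exactly the convolution rule for the coefficients. A sketch with two small made-up pmfs:

```python
p1 = [0.2, 0.5, 0.3]   # P{K1 = 0, 1, 2} (illustrative values)
p2 = [0.6, 0.4]        # P{K2 = 0, 1}

# pmf of K = K1 + K2 by direct convolution:
# P{K = k} = sum_j P{K1 = j} P{K2 = k - j}
conv = [0.0] * (len(p1) + len(p2) - 1)
for j, a in enumerate(p1):
    for i, b in enumerate(p2):
        conv[j + i] += a * b

# The GF built from the convolved pmf equals g1(z) * g2(z) at any z
z = 0.5
g1 = sum(q * z ** j for j, q in enumerate(p1))
g2 = sum(q * z ** j for j, q in enumerate(p2))
g = sum(q * z ** k for k, q in enumerate(conv))

print([round(c, 10) for c in conv])  # -> [0.12, 0.38, 0.38, 0.12]
print(abs(g - g1 * g2) < 1e-12)      # -> True
```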

14 Example: Bernoulli Distribution
Bernoulli distribution: X = 0 with probability q, X = 1 with probability p, where p + q = 1.
g(z) = q + pz
g'(1) = p, g''(1) = 0
E[X] = g'(1) = p
Var[X] = g''(1) + g'(1) − [g'(1)]^2 = p − p^2 = p(1 − p) = pq
A coin is tossed n times, with X_j = 0 for a tail and X_j = 1 for a head. What is the probability of k heads in n tosses?
S_n, the sum of n independent Bernoulli random variables: S_n = X_1 + X_2 + ... + X_n
GF of one toss: g(z) = q + pz
GF of S_n: Σ_k P{S_n = k} z^k = g(z) g(z) ... g(z) = [g(z)]^n = (q + pz)^n = Σ_{k=0..n} C(n, k) (pz)^k q^(n−k)
Matching coefficients of z^k gives the binomial distribution:
P{S_n = k} = C(n, k) p^k q^(n−k) for k = 0, ..., n
P{S_n = k} = 0 for k > n
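Expanding [g(z)]^n = (q + pz)^n by repeated polynomial multiplication and reading off the coefficient of z^k reproduces the binomial pmf; p = 0.3 and n = 5 are arbitrary choices:

```python
import math

p, n = 0.3, 5
q = 1 - p

# Coefficients of (q + p z)^n via repeated multiplication by (q + p z)
coeffs = [1.0]
for _ in range(n):
    nxt = [0.0] * (len(coeffs) + 1)
    for k, c in enumerate(coeffs):
        nxt[k] += c * q       # the q term keeps the power of z
        nxt[k + 1] += c * p   # the p*z term raises it by one
    coeffs = nxt

binom = [math.comb(n, k) * p ** k * q ** (n - k) for k in range(n + 1)]
print(all(abs(a - b) < 1e-12 for a, b in zip(coeffs, binom)))  # -> True
```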

15 Example: Poisson Distribution
Poisson distribution: P[N(t) = j] = [(λt)^j / j!] e^(−λt) for j = 0, 1, 2, ...
Generating function:
g(z) = Σ_j [(λt)^j / j!] e^(−λt) z^j = e^(−λt) Σ_j [(λtz)^j / j!] = e^(−λt) e^(λtz) = e^(−λt(1−z))
Expectation:
g'(z) = λt e^(−λt(1−z)), so E[N(t)] = g'(1) = λt
Variance:
g''(z) = (λt)^2 e^(−λt(1−z)), g''(1) = (λt)^2
Var[N(t)] = g''(1) + g'(1) − [g'(1)]^2 = (λt)^2 + λt − (λt)^2 = λt
Sum of independent Poisson variables with rates λ1 and λ2:
g(z) = e^(−λ1 t(1−z)) e^(−λ2 t(1−z)) = e^(−(λ1+λ2) t(1−z)), i.e. Poisson with rate λ1 + λ2.
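Both Poisson facts above can be sketched numerically (the λt values and z are arbitrary): the series sums to e^(−λt(1−z)), and multiplying two Poisson GFs gives the GF of a Poisson with the summed rate:

```python
import math

def poisson_gf(lt, z, terms=80):
    """Truncated series sum_j [(lt)^j / j!] e^{-lt} z^j."""
    return sum((lt ** j / math.factorial(j)) * math.exp(-lt) * z ** j
               for j in range(terms))

lt, z = 2.5, 0.7
print(abs(poisson_gf(lt, z) - math.exp(-lt * (1 - z))) < 1e-12)  # -> True

lt1, lt2 = 1.0, 2.0
lhs = math.exp(-lt1 * (1 - z)) * math.exp(-lt2 * (1 - z))
rhs = math.exp(-(lt1 + lt2) * (1 - z))
print(abs(lhs - rhs) < 1e-12)  # -> True
```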

16 Use of GF for Probability: M/M/1 System
Birth-and-death balance equations:
0 = −(λ + μ) p_n + μ p_{n+1} + λ p_{n−1}  (n ≥ 1)
0 = −λ p_0 + μ p_1
Rearranging:
p_{n+1} = [(λ + μ)/μ] p_n − [λ/μ] p_{n−1}  (n ≥ 1)
p_1 = [λ/μ] p_0
If ρ = λ/μ:
p_{n+1} = (1 + ρ) p_n − ρ p_{n−1}  (n ≥ 1)
p_1 = ρ p_0
Use the GF to solve this recurrence. Multiply both sides by z^n:
z^n p_{n+1} = (1 + ρ) z^n p_n − ρ z^n p_{n−1}  (n ≥ 1)
z^(−1) p_{n+1} z^(n+1) = (1 + ρ) p_n z^n − ρ z p_{n−1} z^(n−1)
Sum over n = 1, 2, ...:
z^(−1) Σ_{n≥1} p_{n+1} z^(n+1) = (1 + ρ) Σ_{n≥1} p_n z^n − ρ z Σ_{n≥1} p_{n−1} z^(n−1)

17 GF for Probability (continued)
With P(z) = Σ_{n≥0} p_n z^n, note that
Σ_{n≥1} p_{n+1} z^(n+1) = P(z) − p_1 z − p_0
Σ_{n≥1} p_n z^n = P(z) − p_0
Σ_{n≥1} p_{n−1} z^(n−1) = P(z)
so the summed equation becomes
z^(−1) [P(z) − p_1 z − p_0] = (1 + ρ)[P(z) − p_0] − ρ z P(z)
Substituting p_1 = ρ p_0:
z^(−1) [P(z) − ρ p_0 z − p_0] = (1 + ρ)[P(z) − p_0] − ρ z P(z)
z^(−1) P(z) − ρ p_0 − z^(−1) p_0 = ρ P(z) − ρ p_0 + P(z) − p_0 − ρ z P(z)
Solving for P(z):
P(z) = p_0 / (1 − ρ z)
To find p_0, use the boundary condition P(1) = 1:
P(1) = p_0 / (1 − ρ) = 1, so p_0 = 1 − ρ
P(z) = (1 − ρ) / (1 − ρ z)
Since 1/(1 − ρz) = 1 + ρz + ρ^2 z^2 + ...,
P(z) = Σ_{n≥0} (1 − ρ) ρ^n z^n, i.e. p_n = (1 − ρ) ρ^n.
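The GF result p_n = (1 − ρ)ρ^n can be sanity-checked by iterating the original recurrence p_{n+1} = (1 + ρ)p_n − ρ p_{n−1} directly; ρ = 0.6 is an arbitrary utilization below 1:

```python
rho = 0.6
p = [1 - rho, rho * (1 - rho)]   # p_0 = 1 - rho, p_1 = rho * p_0
for n in range(1, 20):
    p.append((1 + rho) * p[n] - rho * p[n - 1])  # birth-death recurrence

closed = [(1 - rho) * rho ** n for n in range(len(p))]
print(all(abs(a - b) < 1e-9 for a, b in zip(p, closed)))  # -> True
```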

