Generating Functions.

The Moments of Y
We have referred to E(Y) and E(Y^2) as the first and second moments of Y, respectively. In general, E(Y^k) is the kth moment of Y. Consider the power series in which the moments of Y are incorporated into the coefficients. Expanding e^{tY} and taking expectations term by term gives

    1 + t E(Y) + t^2 E(Y^2)/2! + t^3 E(Y^3)/3! + ...
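This "moments as Taylor coefficients" idea can be checked symbolically. The sketch below uses a fair die as a concrete choice of Y (an illustrative example, not one from the slides) and compares the Taylor coefficients of E(e^{tY}) with the moments E(Y^k):

```python
import sympy as sp

t = sp.symbols('t')

# Fair die as a concrete Y (illustrative choice): m(t) = E(e^{tY})
m = sum(sp.exp(t * y) for y in range(1, 7)) / 6

# Taylor-expand m(t) about t = 0 and strip the O(t^4) remainder
series = sp.series(m, t, 0, 4).removeO()

# The k-th Taylor coefficient times k! should equal the k-th moment E(Y^k)
moments = [sp.Rational(sum(y**k for y in range(1, 7)), 6) for k in range(4)]
coeffs = [series.coeff(t, k) * sp.factorial(k) for k in range(4)]
print(coeffs == moments)  # True
```

The first moment recovered this way is E(Y) = 7/2, the familiar mean of a die roll.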

Moment Generating Function
If the sum converges for all t in some interval |t| < b, the series

    m(t) = E(e^{tY}) = 1 + t E(Y) + t^2 E(Y^2)/2! + t^3 E(Y^3)/3! + ...

is called the moment-generating function, m(t), for the random variable Y. And we may note that for each k, the coefficient of t^k is E(Y^k)/k!.

Moment Generating Function
Hence, for a discrete Y with probability function p(y), the moment-generating function is given by

    m(t) = E(e^{tY}) = Σ_y e^{ty} p(y) = Σ_y [1 + ty + (ty)^2/2! + ...] p(y).

We may rearrange the order of summation, since the series converges for |t| < b, obtaining

    m(t) = Σ_k (t^k / k!) Σ_y y^k p(y) = Σ_k (t^k / k!) E(Y^k).

Moment Generating Function
That is,

    m(t) = Σ_{k ≥ 0} E(Y^k) t^k / k!

is the power series whose coefficients involve the moments of Y.

The kth moment
To retrieve the kth moment from the MGF, evaluate the kth derivative at t = 0. Differentiating the series k times leaves

    m^(k)(t) = E(Y^k) + t E(Y^{k+1}) + t^2 E(Y^{k+2})/2! + ...

And so, letting t = 0, every term after the first vanishes:

    m^(k)(0) = E(Y^k).

Common MGFs
The MGFs for some of the discrete distributions we've seen (writing q = 1 - p) include:

    Binomial(n, p):   m(t) = (p e^t + q)^n
    Geometric(p):     m(t) = p e^t / (1 - q e^t),  for q e^t < 1
    Poisson(λ):       m(t) = e^{λ(e^t - 1)}
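As a quick numerical sanity check on one of these closed forms, we can sum the defining series E(e^{tY}) for a Poisson random variable directly and compare it with e^{λ(e^t - 1)}; the values λ = 2 and t = 0.5 below are arbitrary test choices:

```python
from math import exp, factorial

lam, t = 2.0, 0.5  # arbitrary test values

# Direct summation of E(e^{tY}) = sum over y of e^{ty} * e^{-lam} * lam^y / y!
# (terms beyond y = 60 are negligibly small for these parameters)
direct = sum(exp(t * y) * exp(-lam) * lam**y / factorial(y) for y in range(60))

# Closed-form Poisson MGF
closed = exp(lam * (exp(t) - 1))

print(abs(direct - closed) < 1e-10)  # True
```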

Geometric MGF
Consider the MGF

    m(t) = e^t / (3 - 2 e^t).

Use derivatives to determine the first and second moments. And so,

    m'(t) = 3 e^t / (3 - 2 e^t)^2,   giving E(Y) = m'(0) = 3.

Geometric MGF
Since

    m'(t) = 3 e^t / (3 - 2 e^t)^2,

we have

    m''(t) = 3 e^t (3 + 2 e^t) / (3 - 2 e^t)^3.

And so,

    E(Y^2) = m''(0) = 15,   V(Y) = E(Y^2) - [E(Y)]^2 = 15 - 9 = 6.

Geometric MGF
Since

    m(t) = e^t / (3 - 2 e^t) = (1/3) e^t / (1 - (2/3) e^t)

is the MGF for a geometric random variable with p = 1/3, our prior results tell us E(Y) = 1/p = 3 and V(Y) = (1 - p)/p^2 = 6, which do agree with our current results.
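The derivative computations above can also be checked symbolically; a minimal sketch using SymPy:

```python
import sympy as sp

t = sp.symbols('t')
p = sp.Rational(1, 3)

# Geometric MGF with p = 1/3: m(t) = p e^t / (1 - (1-p) e^t) = e^t / (3 - 2 e^t)
m = p * sp.exp(t) / (1 - (1 - p) * sp.exp(t))

EY = sp.diff(m, t).subs(t, 0)        # E(Y) = m'(0)
EY2 = sp.diff(m, t, 2).subs(t, 0)    # E(Y^2) = m''(0)
VY = sp.simplify(EY2 - EY**2)

print(EY, EY2, VY)  # 3 15 6
```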

All the moments
Although the mean and variance help to describe a distribution, they alone do not uniquely determine a distribution. All the moments are necessary to uniquely describe a probability distribution. That is, if two random variables have equal MGFs (i.e., m_Y(t) = m_Z(t) for |t| < b), then they have the same probability distribution.

m(aY + b)?
For the random variable Y with MGF m(t), consider W = aY + b. Then

    m_W(t) = E(e^{tW}) = E(e^{t(aY + b)}) = e^{bt} E(e^{(at)Y}) = e^{bt} m(at).

E(aY + b)
Now, based on the MGF, we could again consider E(W) = E(aY + b). Differentiating m_W(t) = e^{bt} m(at),

    m_W'(t) = b e^{bt} m(at) + a e^{bt} m'(at).

And so, letting t = 0 (recall m(0) = 1),

    E(W) = m_W'(0) = b + a E(Y),

as expected.

V(aY + b)
Now, based on the MGF, we can again consider V(W) = V(aY + b). Differentiating a second time and letting t = 0 gives

    E(W^2) = m_W''(0) = b^2 + 2ab E(Y) + a^2 E(Y^2),

and so

    V(W) = E(W^2) - [E(W)]^2 = a^2 (E(Y^2) - [E(Y)]^2),

that is, V(W) = V(aY + b) = a^2 V(Y).
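These two identities can be verified exactly for a small discrete distribution. The sketch below uses a fair die for Y and arbitrary constants a = 2, b = 5 (illustrative choices, not from the slides), with exact rational arithmetic so the comparison is equality, not approximation:

```python
from fractions import Fraction

# Fair die Y; a and b are arbitrary illustrative constants
pmf = {y: Fraction(1, 6) for y in range(1, 7)}
a, b = 2, 5

def E(g):
    """Expected value of g(Y) under the pmf."""
    return sum(g(y) * prob for y, prob in pmf.items())

EY = E(lambda y: y)                     # 7/2
VY = E(lambda y: y**2) - EY**2          # 35/12
EW = E(lambda y: a*y + b)               # E(aY + b)
VW = E(lambda y: (a*y + b)**2) - EW**2  # V(aY + b)

print(EW == a*EY + b, VW == a**2 * VY)  # True True
```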

Tchebysheff's Theorem
For "bell-shaped" distributions, the empirical rule gave us a 68-95-99.7% rule for the probability that a value falls within 1, 2, or 3 standard deviations of the mean, respectively. When the distribution is not so bell-shaped, Tchebysheff tells us the probability of being within k standard deviations of the mean is at least 1 - 1/k^2, for any k > 0. Remember, it's just a lower bound.

A Skewed Distribution
Consider a binomial experiment with n = 10 and p = 0.1. Then μ = np = 1 and σ = √(np(1 - p)) = √0.9 ≈ 0.9487, and the distribution is strongly skewed to the right.

A Skewed Distribution
Verify Tchebysheff's lower bound for k = 2: the interval μ ± 2σ = 1 ± 1.8974 contains the values Y = 0, 1, 2, so

    P(|Y - μ| < 2σ) = P(Y ≤ 2) = p(0) + p(1) + p(2) ≈ 0.3487 + 0.3874 + 0.1937 = 0.9298,

which is indeed at least the bound 1 - 1/2^2 = 0.75.
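The check can be reproduced with a few lines of code (a sketch using only the standard library):

```python
from math import comb, sqrt

n, p = 10, 0.1
mu = n * p                      # 1.0
sigma = sqrt(n * p * (1 - p))   # ~0.9487

# Binomial pmf over the full support 0..n
pmf = [comb(n, y) * p**y * (1 - p)**(n - y) for y in range(n + 1)]

# Probability of falling within k = 2 standard deviations of the mean
k = 2
within = sum(pmf[y] for y in range(n + 1) if abs(y - mu) < k * sigma)

print(round(within, 4), within >= 1 - 1/k**2)  # 0.9298 True
```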