Probability theory 2008 Conditional probability mass function  Discrete case  Continuous case.


Conditional probability mass function
 Discrete case: p_{X|Y}(x | y) = P(X = x | Y = y) = p_{X,Y}(x, y) / p_Y(y), defined when p_Y(y) > 0
 Continuous case: f_{X|Y}(x | y) = f_{X,Y}(x, y) / f_Y(y), defined when f_Y(y) > 0

Conditional probability mass function - examples
 Throwing two dice
   Let Z_1 = the number on the first die
   Let Z_2 = the number on the second die
   Set Y = Z_1 and X = Z_1 + Z_2
 Radioactive decay
   Let X = the number of atoms decaying within 1 unit of time
   Let Y = the time of the first decay
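The dice example can be worked out exactly by tabulating the joint pmf of (X, Y) and dividing by the marginal of X. A minimal sketch (the function name and the conditioning value X = 7 are illustrative choices, not from the slides):

```python
from fractions import Fraction
from collections import defaultdict

# Joint pmf of (X, Y) = (Z1 + Z2, Z1) for two fair dice.
joint = defaultdict(Fraction)
for z1 in range(1, 7):
    for z2 in range(1, 7):
        joint[(z1 + z2, z1)] += Fraction(1, 36)

def conditional_pmf_y_given_x(x):
    """p_{Y|X}(y | x) = p_{X,Y}(x, y) / p_X(x)."""
    p_x = sum(p for (xs, _), p in joint.items() if xs == x)
    return {y: joint[(x, y)] / p_x for (xs, y) in joint if xs == x}

pmf = conditional_pmf_y_given_x(7)
# Given X = 7, the outcomes (1,6), ..., (6,1) are equally likely,
# so the conditional pmf of Y is uniform on {1, ..., 6}.
```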

Conditional expectation
 Discrete case: E[Y | X = x] = Σ_y y p_{Y|X}(y | x)
 Continuous case: E[Y | X = x] = ∫ y f_{Y|X}(y | x) dy
 Notation: E[Y | X] denotes the random variable that takes the value E[Y | X = x] when X = x

Conditional expectation - rules
 Linearity: E[aY + bZ | X] = a E[Y | X] + b E[Z | X]
 Taking out what is known: E[g(X) Y | X] = g(X) E[Y | X]
 Independence: if X and Y are independent, then E[Y | X] = E[Y]
 Tower property: E[E[Y | X]] = E[Y]

Calculation of expected values through conditioning
 Discrete case: E[Y] = Σ_x E[Y | X = x] P(X = x)
 Continuous case: E[Y] = ∫ E[Y | X = x] f_X(x) dx
 General formula: E[Y] = E[E[Y | X]]

Calculation of expected values through conditioning - example
 Primary and secondary events
 Let N denote the number of primary events
 Let X_1, X_2, … denote the number of secondary events for each primary event
 Set Y = X_1 + X_2 + … + X_N
 Assume that X_1, X_2, … are i.i.d. and independent of N
 Conditioning on N gives E[Y] = E[E[Y | N]] = E[N E[X_1]] = E[N] E[X_1]
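The conclusion E[Y] = E[N] E[X_1] can be sanity-checked by simulation. The specific distributions below (N uniform on {0, …, 4}, so E[N] = 2, and X_i exponential with mean 3) are illustrative choices, not from the slides:

```python
import random

random.seed(1)

def compound_sum():
    # N: number of primary events, uniform on {0, ..., 4}, so E[N] = 2.
    n = random.randint(0, 4)
    # X_i: contribution of each primary event, exponential with mean 3.
    return sum(random.expovariate(1 / 3) for _ in range(n))

samples = [compound_sum() for _ in range(200_000)]
mean_y = sum(samples) / len(samples)
# Conditioning on N predicts E[Y] = E[N] * E[X_1] = 2 * 3 = 6.
```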

Calculation of variances through conditioning
 Var(Y) = Var(E[Y | X]) + E[Var(Y | X)]
 Var(E[Y | X]): variation in the expected value of Y induced by variation in X
 E[Var(Y | X)]: average remaining variation in Y after X has been fixed
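The decomposition can be verified exactly on a small discrete example with rational arithmetic (the joint pmf below is an arbitrary illustrative choice):

```python
from fractions import Fraction

# A small joint distribution: X in {0, 1}, Y in {0, 1, 2}; probabilities chosen arbitrarily.
joint = {(0, 0): Fraction(1, 4), (0, 1): Fraction(1, 4),
         (1, 1): Fraction(1, 4), (1, 2): Fraction(1, 4)}

def e(f):
    """Expectation of f(X, Y) under the joint distribution."""
    return sum(p * f(x, y) for (x, y), p in joint.items())

p_x = {x: sum(p for (xs, _), p in joint.items() if xs == x) for x in (0, 1)}
e_y_given_x = {x: sum(p * y for (xs, y), p in joint.items() if xs == x) / p_x[x]
               for x in (0, 1)}
var_y_given_x = {x: sum(p * (y - e_y_given_x[x]) ** 2
                        for (xs, y), p in joint.items() if xs == x) / p_x[x]
                 for x in (0, 1)}

var_y = e(lambda x, y: y ** 2) - e(lambda x, y: y) ** 2
var_of_cond_mean = (sum(p_x[x] * e_y_given_x[x] ** 2 for x in p_x)
                    - sum(p_x[x] * e_y_given_x[x] for x in p_x) ** 2)
mean_of_cond_var = sum(p_x[x] * var_y_given_x[x] for x in p_x)
# Law of total variance: Var(Y) = Var(E[Y|X]) + E[Var(Y|X)], with exact equality here.
```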

Variance decomposition in linear regression
 For Y = α + βX + ε with E[ε | X] = 0 and Var(ε | X) = σ²:
 Var(E[Y | X]) = β² Var(X) and E[Var(Y | X)] = σ², so Var(Y) = β² Var(X) + σ²

Proof of the variance decomposition
We shall prove that Var(Y) = Var(E[Y | X]) + E[Var(Y | X)]
It can easily be seen that
 E[Var(Y | X)] = E[E[Y² | X]] - E[(E[Y | X])²] = E[Y²] - E[(E[Y | X])²]
 Var(E[Y | X]) = E[(E[Y | X])²] - (E[E[Y | X]])² = E[(E[Y | X])²] - (E[Y])²
Adding the two identities gives E[Y²] - (E[Y])² = Var(Y)

Regression and prediction
Regression function: m(x) = E[Y | X = x]
Theorem: The regression function is the best predictor of Y based on X, i.e. m minimizes E[(Y - g(X))²] over all functions g of X
Proof: …….

Best linear predictor
Theorem: The best linear predictor of Y based on X is Ŷ = μ_Y + ρ (σ_Y / σ_X)(X - μ_X)
Proof: …….
This is the population version of ordinary linear regression

Expected quadratic prediction error of the best linear predictor
Theorem: E[(Y - Ŷ)²] = σ_Y² (1 - ρ²)
Proof: …….
This is the residual variance in ordinary linear regression

Martingales
The sequence X_1, X_2, … is called a martingale if E[X_{n+1} | X_1, …, X_n] = X_n
Example 1: Partial sums of independent variables with mean zero
Example 2: Gambler's fortune if he doubles the stake as long as he loses and leaves as soon as he wins
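Example 2 can be made concrete: with a fair coin, doubling the stake until the first win, the fortune after at most n rounds has expectation zero at every horizon, as the martingale property predicts. A sketch with an exact calculation (the horizon n = 10 is an arbitrary choice):

```python
from fractions import Fraction

def fortune_distribution(n):
    """Exact distribution of the gambler's fortune after at most n fair rounds:
    he stakes 1, doubles the stake after every loss, and stops at the first win."""
    dist = {}
    for k in range(n):
        # First win at round k+1 (probability 2^-(k+1)): net gain is exactly +1.
        dist[1] = dist.get(1, Fraction(0)) + Fraction(1, 2 ** (k + 1))
    # All n rounds lost: total loss 1 + 2 + ... + 2^(n-1) = 2^n - 1.
    dist[-(2 ** n - 1)] = Fraction(1, 2 ** n)
    return dist

dist = fortune_distribution(10)
expected = sum(p * x for x, p in dist.items())
# The game is fair at every horizon: E[fortune] = 0, as for a martingale started at 0,
# even though the gambler wins +1 with probability 1 - 2^-n.
```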

Exercises: Chapter II
2.6, 2.9, 2.12, 2.16, 2.22, 2.26, 2.28
Use conditional distributions/probabilities to explain why the envelope rejection method works

Transforms

The probability generating function
Let X be an integer-valued nonnegative random variable. The probability generating function of X is g_X(t) = E[t^X] = Σ_{k≥0} P(X = k) t^k
 Defined at least for | t | ≤ 1
 Determines the probability function of X uniquely
 Adding independent variables corresponds to multiplying their generating functions
Example 1: X ∈ Be(p): g(t) = 1 - p + pt
Example 2: X ∈ Bin(n; p): g(t) = (1 - p + pt)^n
Example 3: X ∈ Po(λ): g(t) = e^{λ(t - 1)}
Addition theorems for binomial and Poisson distributions
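Both multiplication facts can be checked numerically: the pgf of Bin(n; p) is the n-fold product of Be(p) pgfs, and multiplying Poisson pgfs gives the Poisson addition theorem. A sketch (the evaluation point t = 0.7 and the parameters are arbitrary):

```python
import math

def pgf_bernoulli(p, t):
    # g(t) = 1 - p + p*t for X ~ Be(p)
    return 1 - p + p * t

def pgf_poisson(lam, t):
    # g(t) = exp(lam*(t - 1)) for X ~ Po(lam)
    return math.exp(lam * (t - 1))

t, p, n = 0.7, 0.3, 5
# pgf of Bin(n, p) computed directly from its pmf.
binom_pgf = sum(math.comb(n, k) * p**k * (1 - p)**(n - k) * t**k
                for k in range(n + 1))
product_pgf = pgf_bernoulli(p, t) ** n  # n-fold product of Bernoulli pgfs
# Addition theorem: Po(2) + Po(3) (independent) has the pgf of Po(5).
poisson_product = pgf_poisson(2.0, t) * pgf_poisson(3.0, t)
poisson_sum = pgf_poisson(5.0, t)
```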

The moment generating function
Let X be a random variable. The moment generating function of X is ψ_X(t) = E[e^{tX}], provided that this expectation is finite for | t | < h for some h > 0
 Determines the distribution of X uniquely
 Adding independent variables corresponds to multiplying their moment generating functions

The moment generating function and the Laplace transform
Let X be a non-negative random variable. Then ψ_X(-s) = E[e^{-sX}], s ≥ 0, is the Laplace transform of the distribution of X

The moment generating function - examples
Example 1: X ∈ Be(p): ψ(t) = 1 - p + pe^t
Example 2: X ∈ Exp(a) (mean a): ψ(t) = 1 / (1 - at) for t < 1/a
Example 3: X ∈ Γ(2; a): ψ(t) = 1 / (1 - at)² for t < 1/a

The moment generating function - calculation of moments
If ψ_X is finite in a neighbourhood of 0, it has derivatives of all orders at 0 and ψ_X^{(k)}(0) = E[X^k]
In particular, E[X] = ψ_X'(0) and Var(X) = ψ_X''(0) - (ψ_X'(0))²
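For a discrete random variable the rule ψ^{(k)}(0) = E[X^k] can be verified term by term, since the k-th derivative of e^{tx} at t = 0 is x^k. A sketch with an arbitrary three-point distribution:

```python
from fractions import Fraction

# An arbitrary three-point distribution: X takes the values 0, 1, 2.
pmf = {0: Fraction(1, 2), 1: Fraction(1, 3), 2: Fraction(1, 6)}

def mgf_derivative_at_zero(k):
    """k-th derivative of psi(t) = sum_x p(x) e^(t x) at t = 0.
    Differentiating term by term: d^k/dt^k e^(t x) at t = 0 equals x^k,
    so the derivative is sum_x p(x) x^k = E[X^k]."""
    return sum(p * x ** k for x, p in pmf.items())

mean = mgf_derivative_at_zero(1)           # E[X] = psi'(0)
second_moment = mgf_derivative_at_zero(2)  # E[X^2] = psi''(0)
variance = second_moment - mean ** 2       # Var(X) = psi''(0) - psi'(0)^2
```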

The moment generating function - uniqueness
If ψ_X(t) = ψ_Y(t) for all t in some neighbourhood of 0, then X and Y have the same distribution

Normal approximation of a binomial distribution
Let X_1, X_2, … be independent and Be(p), and let S_n = X_1 + … + X_n
Then (S_n - np) / √(np(1 - p)) converges in distribution to N(0, 1) as n → ∞
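The approximation can be inspected numerically; with a continuity correction it is already accurate for moderate n. A sketch (the choices n = 400, p = 0.5, k = 210 are illustrative):

```python
import math

def norm_cdf(z):
    # Standard normal cdf via the error function.
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))

def binom_cdf(k, n, p):
    # Exact P(X <= k) for X ~ Bin(n, p).
    return sum(math.comb(n, j) * p**j * (1 - p)**(n - j) for j in range(k + 1))

n, p = 400, 0.5
mu, sigma = n * p, math.sqrt(n * p * (1 - p))
k = 210
exact = binom_cdf(k, n, p)
# Continuity-corrected normal approximation: P(X <= k) ~ Phi((k + 0.5 - np) / sqrt(np(1-p))).
approx = norm_cdf((k + 0.5 - mu) / sigma)
```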

Distributions for which the moment generating function does not exist
Let X = e^Y, where Y ∈ N(μ; σ)
Then E[X^n] = e^{nμ + n²σ²/2} < ∞ for every n, but E[e^{tX}] = ∞ for every t > 0

The characteristic function
Let X be a random variable. The characteristic function of X is φ_X(t) = E[e^{itX}]
 Exists for all random variables
 Determines the distribution of X uniquely
 Adding independent variables corresponds to multiplying their characteristic functions

Comparison of the characteristic function and the moment generating function
Example 1: Exp(λ) (mean λ): φ(t) = 1 / (1 - iλt)
Example 2: Po(λ): φ(t) = e^{λ(e^{it} - 1)}
Example 3: N(μ; σ): φ(t) = e^{iμt - σ²t²/2}
Is it always true that φ_X(t) = ψ_X(it)? (No: the moment generating function need not exist)

The characteristic function - uniqueness
For integer-valued discrete distributions we have P(X = k) = (1 / 2π) ∫_{-π}^{π} e^{-itk} φ_X(t) dt
For continuous distributions with ∫ |φ_X(t)| dt < ∞ we have f_X(x) = (1 / 2π) ∫ e^{-itx} φ_X(t) dt

The characteristic function - calculation of moments
If the k-th moment exists we have φ_X^{(k)}(0) = i^k E[X^k]

Using a normal distribution to approximate a Poisson distribution
Let X ∈ Po(m) and set Z = (X - m) / √m
Then Z converges in distribution to N(0, 1) as m → ∞

Using a Poisson distribution to approximate a binomial distribution
Let X ∈ Bin(n; p). Then g_X(t) = (1 - p + pt)^n
If p = 1/n we get g_X(t) = (1 + (t - 1)/n)^n → e^{t - 1} as n → ∞, the probability generating function of Po(1)
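The quality of the approximation can be measured by comparing the two pmfs directly. The sketch below uses the slide's special case p = 1/n with n = 1000, so the approximating distribution is Po(1):

```python
import math

def binom_pmf(k, n, p):
    return math.comb(n, k) * p**k * (1 - p)**(n - k)

def poisson_pmf(k, m):
    return math.exp(-m) * m**k / math.factorial(k)

n = 1000
p = 1 / n  # the slide's special case, so np = 1
# Total variation distance between Bin(n, 1/n) and Po(1); terms beyond k = 20
# are negligibly small for both distributions.
total_var_dist = 0.5 * sum(abs(binom_pmf(k, n, p) - poisson_pmf(k, 1))
                           for k in range(21))
# Le Cam's inequality bounds this by n * p^2 = 1/n = 0.001.
```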

Sums of a stochastic number of stochastic variables
Let Y = X_1 + … + X_N, where X_1, X_2, … are i.i.d. and independent of N. Then
Probability generating function: g_Y(t) = g_N(g_X(t))
Moment generating function: ψ_Y(t) = g_N(ψ_X(t))
Characteristic function: φ_Y(t) = g_N(φ_X(t))
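The composition formula g_Y(t) = g_N(g_X(t)) can be verified exactly with rational arithmetic for a small discrete N and Bernoulli X_i (both distributions below are arbitrary illustrative choices):

```python
from fractions import Fraction
from math import comb

# N in {0, 1, 2} with arbitrary probabilities; X_i ~ Be(p) i.i.d., independent of N.
pmf_n = {0: Fraction(1, 4), 1: Fraction(1, 2), 2: Fraction(1, 4)}
p = Fraction(1, 3)

def pgf_n(t):
    return sum(q * t ** k for k, q in pmf_n.items())

def pgf_x(t):
    return 1 - p + p * t

# Distribution of Y = X_1 + ... + X_N computed directly: condition on N = n,
# where Y is then Bin(n, p).
pmf_y = {}
for n, q in pmf_n.items():
    for k in range(n + 1):
        pmf_y[k] = pmf_y.get(k, Fraction(0)) + q * comb(n, k) * p**k * (1 - p)**(n - k)

def pgf_y(t):
    return sum(q * t ** k for k, q in pmf_y.items())

t = Fraction(2, 5)
# Composition formula: g_Y(t) = g_N(g_X(t)), here an exact identity of fractions.
```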

Branching processes
 Suppose that each individual produces j new offspring with probability p_j, j ≥ 0, independently of the number produced by any other individual
 Let X_n denote the size of the n-th generation
 Then X_n = Σ_{i=1}^{X_{n-1}} Z_i, where Z_i represents the number of offspring of the i-th individual of the (n - 1)-st generation

Generating function of a branching process
Let X_n denote the number of individuals in the n-th generation of a population, and assume that X_{n+1} = Σ_{k=1}^{X_n} Y_k, where Y_k, k = 1, 2, … are i.i.d. and independent of X_n
Then g_{X_{n+1}}(t) = g_{X_n}(g_Y(t))
Example: …….

Branching processes - mean and variance of generation size
 Consider a branching process for which X_0 = 1, and let μ and σ respectively denote the expectation and standard deviation of the offspring distribution
 Then E[X_n] = μ^n and, for μ ≠ 1, Var(X_n) = σ² μ^{n-1} (μ^n - 1) / (μ - 1); for μ = 1, Var(X_n) = n σ²

Branching processes - extinction probability
 Let π_0 = P(population dies out) and assume that X_0 = 1
 Then π_0 is the smallest root in [0, 1] of π_0 = g(π_0), where g is the probability generating function of the offspring distribution
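Since the iterates g(0), g(g(0)), … increase to the smallest root of g(s) = s, the extinction probability can be computed by fixed-point iteration. A sketch with the illustrative offspring distribution p_0 = 1/4, p_1 = 1/4, p_2 = 1/2, for which the root is π_0 = 1/2:

```python
def extinction_probability(pmf, tol=1e-12):
    """Smallest nonnegative root of g(s) = s, found by iterating s <- g(s)
    from s = 0; the iterates increase monotonically to the extinction probability."""
    def g(s):
        # Probability generating function of the offspring distribution.
        return sum(p * s ** j for j, p in enumerate(pmf))
    s = 0.0
    while abs(g(s) - s) > tol:
        s = g(s)
    return s

# Illustrative offspring distribution: p_0 = 1/4, p_1 = 1/4, p_2 = 1/2 (mean 1.25 > 1).
pi0 = extinction_probability([0.25, 0.25, 0.5])
# g(s) = s reads 1/4 + s/4 + s^2/2 = s, i.e. 2s^2 - 3s + 1 = 0, with roots 1/2 and 1;
# the smaller root 1/2 is the extinction probability.
```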

Exercises: Chapter III
3.1, 3.2, 3.3, 3.7, 3.15, 3.25, 3.26, 3.27, 3.32