Statistical Inference I


Statistical Inference I Brought to you by Ron Reeder

About me - work
- PhD in 2011 from University of Utah (Math)
- 2 years as a statistician at Actavis (pharmaceutical company)
- At the U since 2013: Assistant professor in the Department of Pediatrics
- Lead teams of statisticians and others to design, implement, and analyze studies with the intent of improving medical care for children
- Basically, I save babies.

About me - fun
- I ride my bike to work
- I like to ski, hike, camp, bike (mountain and road), and read
- I'm married and have 3 kids: Cameron (12), Maddie (9), and Ellie (7)

This class
Read the syllabus; ask questions afterward.
What we cover:
- Chapter 6 – Functions of random variables
- Chapter 7 – Limiting distributions
- Chapter 8 – Statistics
- Chapter 9 – Point estimation
- Chapter 10 – Sufficiency and completeness

Chapter 6 – Functions of random variables
We begin with the ‘cdf technique.’
Example 6.2.1: Let Y = e^X, where X has the density given in the text. Find F_Y(y), then find f_Y(y).
Discuss applying monotonic functions to inequalities. Discuss derivatives of piecewise functions.
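A minimal numerical sketch of the cdf technique. The density of X isn't captured in this transcript, so the sketch assumes X ~ EXP(1) purely for illustration; under that assumption F_Y(y) = P(e^X ≤ y) = P(X ≤ log y) = 1 - 1/y for y > 1, hence f_Y(y) = 1/y^2 there.

    import numpy as np

    # Assumed for illustration only: X ~ EXP(1), so that Y = e^X has
    # F_Y(y) = F_X(log y) = 1 - 1/y for y > 1 (the cdf technique).
    rng = np.random.default_rng(0)
    y = np.exp(rng.exponential(size=100_000))   # draws of Y = e^X
    for t in [1.5, 2.0, 5.0]:
        print(t, (y <= t).mean(), 1 - 1/t)      # empirical cdf vs derived F_Y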

Example 6.2.2: Repeat the previous example with Y = X^2.

Example 6.2.3 Let Θ ~ UNIF(0, 2π). Let Y = tan(Θ). Find the cdf and density function for Y.
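The answer works out to the standard Cauchy distribution (tan has period π, and Θ covers two full periods), so F_Y(y) = 1/2 + arctan(y)/π. A quick simulation check of that claim:

    import numpy as np

    # tan of a UNIF(0, 2*pi) angle: compare the empirical cdf of Y = tan(Theta)
    # with the standard Cauchy cdf 1/2 + arctan(y)/pi.
    rng = np.random.default_rng(0)
    y = np.tan(rng.uniform(0, 2*np.pi, size=100_000))
    for t in [-2.0, 0.0, 1.0, 3.0]:
        print(t, (y <= t).mean(), 0.5 + np.arctan(t)/np.pi)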

End of Day 1 (Jan 8, 2019). Homework: 6.1, 6.2, 6.3, 6.4.

Example 6.2.4 Let X1, X2 ~ i.i.d. EXP(1). Let Y = X1 + X2. Find the cdf and density function for Y.
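The answer the example is after is the convolution of two EXP(1) densities, i.e. Y ~ GAM(2,1) with F_Y(y) = 1 - e^(-y)(1 + y) for y > 0. A small numerical check:

    import numpy as np

    # X1 + X2 with X1, X2 iid EXP(1) is Gamma(2, 1): F_Y(y) = 1 - exp(-y)*(1 + y).
    rng = np.random.default_rng(0)
    y = rng.exponential(size=100_000) + rng.exponential(size=100_000)
    for t in [0.5, 1.0, 2.0, 4.0]:
        print(t, (y <= t).mean(), 1 - np.exp(-t)*(1 + t))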

Example: Let X ~ UNIF(0,2). Let Y = 1{X>1} and W = max(0, X-1). What type of distribution does Y have? What type of distribution does W have? Nasty things can happen when the function is not 1-to-1; caution is needed.
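A sketch of what gets interesting here: Y is Bernoulli(1/2), while W is neither purely discrete nor purely continuous; it has an atom at 0 of mass 1/2 plus a uniform density on (0,1).

    import numpy as np

    # Y = 1{X > 1} is Bernoulli(1/2); W = max(0, X - 1) is a mixed distribution
    # with P(W = 0) = 1/2 and a uniform density on (0, 1).
    rng = np.random.default_rng(0)
    x = rng.uniform(0, 2, size=100_000)
    y = (x > 1).astype(int)
    w = np.maximum(0.0, x - 1)
    print("P(Y = 1) ~", y.mean())               # about 0.5
    print("P(W = 0) ~", (w == 0).mean())        # atom at zero, about 0.5
    print("P(W <= 0.5) ~", (w <= 0.5).mean())   # about 0.75 = 1/2 + (1/2)(0.5)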

Transformation method For now, assume h(.) is 1-to-1. Since Y = h(X), I will use the notation h(x) and y(x) interchangeably.

Theorem 6.3.1: If
- X is discrete,
- Y = h(X), and
- h is 1-to-1,
then f_Y(y) = f_X(x(y)). Note that f_Y(y) is the mass function for Y. Prove this.

Example 6.3.1: Let X ~ GEO(p), so that f_X(x) = (1-p)^(x-1) p 1{x = 1, 2, ...}. X represents the number of tosses of a p-coin needed to obtain the first success (heads). Let Y = X - 1. Y is also often called geometric; it represents the number of failures before the first success. Find the distribution (mass function) of Y using the transformation method. Want to hate life? Try the cdf technique on this one. =)

End of Day 2 (Jan 10, 2019). Homework: 6.10, 6.11, 6.12, 6.13. Also do the past quiz from Spring 2015 labeled ‘ConvolutionFormula5’; you can find it under ‘extra practice’ and then ‘Ron Reeder.’

Theorem 6.3.2: If
- X is continuous,
- Y = h(X),
- h is 1-to-1, and
- d/dy x(y) exists, is continuous, and is never zero on B = {y | f_Y(y) > 0},
then f_Y(y) = f_X[x(y)] |d/dy x(y)| 1{y ∈ B}. Note that B = {h(x) | f_X(x) > 0}. Prove this when h is increasing and B is an interval.

Example: Repeat Example 6.2.1, this time using the transformation method to find the density of Y.

Example: Let X ~ N(μ, σ^2). Find the distribution of Y = e^X. This is referred to as a lognormal distribution. Note that the factor 1{y ∈ B} from the theorem statement is important: the density is zero for y ≤ 0.
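A quick check that exponentiating normal draws matches the lognormal, here against scipy's parameterization (shape s = σ, scale = e^μ); the parameter values below are illustrative only.

    import numpy as np
    from scipy import stats

    # exp of N(mu, sigma^2) draws vs scipy's lognorm(s=sigma, scale=exp(mu)).
    mu, sigma = 0.3, 0.8                        # illustrative values
    rng = np.random.default_rng(0)
    y = np.exp(rng.normal(mu, sigma, size=100_000))
    for t in [0.5, 1.0, 2.0, 4.0]:
        print(t, (y <= t).mean(), stats.lognorm.cdf(t, s=sigma, scale=np.exp(mu)))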

Example: Let X ~ PAR(1,1), so that f(x) = 1{x>0}/(1+x)^2. Find the distribution of Y = log(X).
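Here F_X(x) = x/(1+x) on x > 0, so P(log X ≤ y) = F_X(e^y) = e^y/(1+e^y): Y is standard logistic. A simulation check (which also previews Theorem 6.3.3, sampling X by inverting its cdf):

    import numpy as np

    # F_X(x) = x/(1+x) inverts to x = u/(1-u); then Y = log(X) should be
    # standard logistic with cdf e^y/(1+e^y).
    rng = np.random.default_rng(0)
    u = rng.uniform(size=100_000)
    y = np.log(u/(1 - u))
    for t in [-1.0, 0.0, 1.5]:
        print(t, (y <= t).mean(), np.exp(t)/(1 + np.exp(t)))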

Theorem 6.3.3: If X is a continuous random variable with cdf F, then F(X) ~ UNIF(0,1). Prove this when F is invertible. This theorem allows us to generate data from any continuous distribution we like, by first generating a uniform and then applying a generalized inverse of the cdf.
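A sketch of the sampling recipe for EXP(λ): F(x) = 1 - e^(-λx) inverts to F^(-1)(u) = -log(1-u)/λ (the rate λ = 2 below is illustrative only).

    import numpy as np

    # Inverse-cdf sampling: U ~ UNIF(0,1), X = -log(1 - U)/lam ~ EXP(lam).
    lam = 2.0                                   # illustrative rate
    rng = np.random.default_rng(0)
    x = -np.log(1 - rng.uniform(size=100_000))/lam
    print("mean ~", x.mean(), "; theory:", 1/lam)
    print("P(X <= 1) ~", (x <= 1).mean(), "; theory:", 1 - np.exp(-lam))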

Example (with a non-1-to-1 function): Let X ~ UNIF(-1,4). Find the density function of Y = X^4. Option 1 – the cdf technique. Option 2 – generalize the transformation method. Do Option 1 today and Option 2 next class.
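For reference, the cdf technique gives F_Y(y) = 2y^(1/4)/5 for 0 < y < 1 (both roots ±y^(1/4) land in (-1,4)) and F_Y(y) = (y^(1/4) + 1)/5 for 1 ≤ y < 256 (only the positive root contributes). A quick check of that piecewise answer:

    import numpy as np

    # X ~ UNIF(-1, 4), Y = X^4: compare the empirical cdf with the piecewise cdf
    # derived above; note the kink at y = 1 where the negative branch drops out.
    rng = np.random.default_rng(0)
    y = rng.uniform(-1, 4, size=100_000)**4
    for t in [0.5, 1.0, 16.0, 81.0]:
        exact = 2*t**0.25/5 if t < 1 else (t**0.25 + 1)/5
        print(t, (y <= t).mean(), exact)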

End of Day 3 (Jan 15, 2019). Homework: 6.8 parts a–c, 6.9 parts a–c.

Example (with a non-1-to-1 function): Let X ~ UNIF(-1,4). Find the density function of Y = X^4. Option 1 – the cdf technique (we did this last class). Option 2 – generalize the transformation method.

Theorem 6.3.5: If
- X = (X1, X2, ..., Xk) is discrete,
- Y = (Y1, Y2, ..., Yk) with Y = h(X), i.e. Y1 = h1(X1, X2, ..., Xk), Y2 = h2(X1, X2, ..., Xk), etc., and
- h is 1-to-1,
then f_Y(y) = f_X[x(y)] 1{y ∈ B}, where B = {h(x) | f_X(x) > 0}. Prove this.

Theorem 6.3.6: If
- X = (X1, X2, ..., Xk) is continuous,
- Y = (Y1, Y2, ..., Yk) with Y = h(X), i.e. Y1 = h1(X1, X2, ..., Xk), Y2 = h2(X1, X2, ..., Xk), etc.,
- h is 1-to-1 on B = {h(x) | f_X(x) > 0}, and
- det(∂x(y)/∂y) is continuous and non-zero on B,
then f_Y(y) = f_X[x(y)] |det(∂x(y)/∂y)| 1{y ∈ B}.

Example: Let X1, X2 ~ i.i.d. EXP(1). Let Y1 = X1 and Y2 = X1 + X2. Find the joint density of Y1 and Y2, then find the marginals of Y1 and Y2.

Example: Let X1, X2 ~ i.i.d. EXP(1). Let Y1 = 2X1 – X2 and Y2 = X1 + 2X2. Find the joint density of Y1 and Y2, and find P[Y2 ∈ (1,2)].
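A sketch of the mechanics: the inverse map is x1 = (2y1 + y2)/5, x2 = (2y2 - y1)/5, with Jacobian determinant (2/5)(2/5) - (1/5)(-1/5) = 1/5, so f_Y(y1, y2) = (1/5) e^(-(y1 + 3y2)/5) on the image of the positive quadrant. For P[Y2 ∈ (1,2)], note Y2 = X1 + 2X2 is a sum of independent exponentials with rates 1 and 1/2, giving cdf F(t) = 1 - 2e^(-t/2) + e^(-t); a Monte Carlo check:

    import numpy as np

    # Check P(Y2 in (1, 2)) against F(t) = 1 - 2*exp(-t/2) + exp(-t),
    # the cdf of a sum of independent EXP(rate 1) and EXP(rate 1/2) variables.
    rng = np.random.default_rng(0)
    x1 = rng.exponential(size=100_000)
    x2 = rng.exponential(size=100_000)
    y2 = x1 + 2*x2
    F = lambda t: 1 - 2*np.exp(-t/2) + np.exp(-t)
    print(((1 < y2) & (y2 < 2)).mean(), F(2) - F(1))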

End of Day 4 (Jan 22, 2019). Homework: 6.8 part d, 6.9 part d, 6.14, 6.16, 6.17, 6.18, 6.21.

Review Quiz 4

A third approach: In addition to the cdf and transformation methods, one can use moment generating functions. This works for sums (or differences, or linear combinations) of independent random variables.
Example: If X and Y are independent BIN(5, ½) and BIN(7, ½) respectively, find the distribution of their sum. Recall that for a binomial, M(t) = (pe^t + q)^n. Show why this works using the definition of the moment generating function and independence.
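A sketch of the conclusion: M_{X+Y}(t) = M_X(t)M_Y(t) = (pe^t + q)^5 (pe^t + q)^7 = (pe^t + q)^12, which is the BIN(12, ½) mgf, so X + Y ~ BIN(12, ½). The same fact, checked by convolving the pmfs directly:

    import numpy as np
    from scipy import stats

    # pmf of X + Y via convolution vs the BIN(12, 1/2) pmf.
    px = stats.binom.pmf(np.arange(6), 5, 0.5)       # pmf of BIN(5, 1/2)
    py = stats.binom.pmf(np.arange(8), 7, 0.5)       # pmf of BIN(7, 1/2)
    psum = np.convolve(px, py)                       # pmf of the sum on 0..12
    print(np.allclose(psum, stats.binom.pmf(np.arange(13), 12, 0.5)))  # True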

Random samples
- Random sample (general definition): a sample from a population that is ‘randomly’ selected.
- Simple random sample: a subset of the population in which every member of the population has equal probability of inclusion.
- Random sample (as defined for this class): a set of i.i.d. random variables, e.g. X1, X2, ..., Xn ~ i.i.d. N(0,1). Basically, we're imagining that the population is infinite.

Random samples and their outcomes
A random sample: X1, X2, ..., Xn. An outcome of the random sample: x1, x2, ..., xn.
Examples of outcomes of random samples: 1, 7, ..., 2; 5, 3, ..., 6; 12, 1, ..., 1.

(Outcomes) of order statistics: Suppose that the outcome of a random sample is 1, 7, 4, 2. The outcome of the kth order statistic, xk:4, is the kth smallest observation. Find x1:4, x3:4, x4:4. Note that x1:4 is called the outcome of the ‘smallest order statistic’ and x4:4 the outcome of the ‘largest order statistic.’

Order statistics: Suppose you have a random sample X1, X2, X3, X4. The order statistics are defined similarly, e.g. X1:4 = min(X1, X2, X3, X4). Note that X1:4 is a random variable, and that the order statistics are a transformation of the random sample. If the outcome of the random sample is 1, 7, 4, 2, then the outcome of X1:4 is x1:4 = 1.

Example Toss a fair coin 5 times. Let X1, X2, …, X5 represent the five tosses, with Xi = 1 if the ith toss is heads and zero otherwise. Note that X1, X2, …, X5 is a random sample from BER(1/2). The outcome of this random sample is determined when you toss the coin 5 times. 1, 1, 0, 0, 0 is one possible outcome What is the outcome of the largest order statistic?

Example Suppose X1, X2, …, X5 is a random sample from POI(100). Suppose the outcome of the random sample is 5, 1, 50, 77, 57. What is the outcome of the smallest order statistic?

Theorem: If X1, X2, ..., Xn is a random sample from a continuous density function f, then the joint density function of the order statistics is n! f(y1) f(y2) ... f(yn) 1{y1 < y2 < ... < yn}.

Proof? The proof can be completed with the transformation method, after noting that the order statistics are a transformation of the random sample. In the case of a random sample of size 2: Y1 = min(X1, X2), Y2 = max(X1, X2). Is the transformation 1-to-1? The proof is left for the advanced student.

How about some non-proofy intuition? Discrete case, n = 3:
P(X1:3 = 3, X2:3 = 4, X3:3 = 6)
= 1{3 ≤ 4 ≤ 6} [P(X1 = 3) P(X2 = 4) P(X3 = 6) + P(X2 = 3) P(X1 = 4) P(X3 = 6) + ...]
= 1{3 ≤ 4 ≤ 6} 3! p(3) p(4) p(6)
FYI: the theorem doesn't actually hold in the discrete case. Why? Because you would either double count or not count at all when two of the arguments are the same. Consider p(3,3,3), which is actually just p(3)p(3)p(3), with no 3 factorial.

Example 6.5.1: Suppose X1, X2, X3 ~ i.i.d. f(x) = 2x 1{x ∈ (0,1)}. Find the joint density function of the order statistics, and find the distribution of the median.
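For the median X2:3, the answer can be checked numerically: here F(x) = x^2, and P(X2:3 ≤ t) = 3F(t)^2(1 - F(t)) + F(t)^3 = 3t^4 - 2t^6 (at least two of the three observations fall at or below t), i.e. density 12t^3(1 - t^2).

    import numpy as np

    # Median of a sample of size 3 from f(x) = 2x on (0,1); sample X by
    # inverting F(x) = x^2 (X = sqrt(U)) and compare with 3t^4 - 2t^6.
    rng = np.random.default_rng(0)
    x = np.sqrt(rng.uniform(size=(100_000, 3)))   # each row is one sample of size 3
    med = np.median(x, axis=1)                    # outcome of X2:3
    for t in [0.3, 0.5, 0.8]:
        print(t, (med <= t).mean(), 3*t**4 - 2*t**6)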

End of Day 5 (Jan 24, 2019). Homework: 6.23, 6.25, 6.35.

Theorem 6.5.2

Theorem 6.5.3

Example