1 STATISTICAL INFERENCE PART I EXPONENTIAL FAMILY & POINT ESTIMATION



2 EXPONENTIAL CLASS OF PDFS

X is a continuous (discrete) rv with pdf f(x; θ), θ ∈ Ω. If the pdf can be written in the form

f(x; θ) = c(θ) h(x) exp{ w1(θ) t1(x) + … + wk(θ) tk(x) },

then the pdf is a member of the exponential class of pdfs of the continuous (discrete) type. (Here, k is the number of parameters.)

3 REGULAR CASE OF THE EXPONENTIAL CLASS OF PDFS

We have a regular case of the exponential class of pdfs of the continuous type if
a) the range of X does not depend on θ;
b) c(θ) ≥ 0 and w1(θ), …, wk(θ) are real-valued functions of θ for θ ∈ Ω;
c) h(x) ≥ 0 and t1(x), …, tk(x) are real-valued functions of x.

If the range of X depends on θ, it is called an irregular (or range-dependent) exponential class.

4 EXAMPLES

X~Bin(n, p), where n is known. Is this pmf a member of the exponential class of pdfs? Why? The binomial family is a member of the exponential family of distributions.
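The membership claim can be checked directly by factoring the binomial pmf into the exponential-class form of slide 2 (a standard derivation, with k = 1):

```latex
f(x;p) = \binom{n}{x} p^{x} (1-p)^{n-x}
       = \underbrace{(1-p)^{n}}_{c(p)}\,
         \underbrace{\binom{n}{x}}_{h(x)}\,
         \exp\Big\{\underbrace{\ln\tfrac{p}{1-p}}_{w_1(p)}\cdot
                   \underbrace{x}_{t_1(x)}\Big\},
       \qquad x = 0, 1, \dots, n.
```

The range {0, 1, …, n} does not depend on p, so (with n known) this is also a regular case.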

5 EXAMPLES

X~Cauchy(1, θ). Is this pdf a member of the exponential class of pdfs? Why? The Cauchy distribution is not a member of the exponential family: its pdf cannot be factored into the form c(θ) h(x) exp{ w(θ) t(x) }.

6 STATISTICAL INFERENCE

Statistical inference: determining certain unknown properties of a probability distribution on the basis of a sample (usually a r.s.) obtained from that distribution. Its three main problems are point estimation, interval estimation, and hypothesis testing.

7 STATISTICAL INFERENCE

Parameter space (Ω or Θ): the set of all possible values of an unknown parameter θ; θ ∈ Ω.
A pdf with an unknown parameter: f(x; θ), θ ∈ Ω.
The family of pdfs: { f(x; θ), θ ∈ Ω }.
Estimation: where in Ω is θ likely to be?

8 STATISTICAL INFERENCE

Statistic: a function of rvs (usually the sample rvs in an estimation problem) that does not contain any unknown parameters.
Estimator of an unknown parameter θ: a statistic used for estimating θ. An observed value of an estimator is called an estimate.

9 POINT ESTIMATION

θ: a parameter of interest; unknown. Goal: find good estimator(s) for θ or a function of it, g(θ).

10 SOME METHODS OF POINT ESTIMATION Method of Moments Estimation, Maximum Likelihood Estimation, Least Squares Estimation

11 METHOD OF MOMENTS ESTIMATION (MME)

Let X1, X2, …, Xn be a r.s. from a population with pmf or pdf f(x; θ1, θ2, …, θk). The MMEs are found by equating the first k population moments to the corresponding sample moments and solving the resulting system of equations.

Population moments: μ'j = E(X^j), j = 1, 2, …, k.
Sample moments: mj = (1/n) Σ Xi^j, j = 1, 2, …, k.

12 METHOD OF MOMENTS ESTIMATION (MME)

Set E(X) = (1/n) Σ Xi, E(X²) = (1/n) Σ Xi², and so on. Continue until there are enough equations to solve for the unknown parameters.

13 EXAMPLES

Let X~Exp(θ). For a r.s. of size n, find the MME of θ. For the following sample (assuming it is from Exp(θ)), find the estimate of θ: 11.37, 3, 0.15, 4.27, 2.56, 0.59.
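A quick numerical sketch of this example, assuming the mean parameterization E(X) = θ (under the rate parameterization E(X) = 1/θ, the MME would be the reciprocal of the sample mean):

```python
# Method-of-moments estimate for Exp(theta), assuming E[X] = theta.
sample = [11.37, 3, 0.15, 4.27, 2.56, 0.59]

# Equate the first population moment E[X] = theta to the first sample moment.
theta_mme = sum(sample) / len(sample)
print(round(theta_mme, 4))  # 3.6567
```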

14 EXAMPLES

Let X~N(μ, σ²). For a r.s. of size n, find the MMEs of μ and σ². For the following sample (assuming it is from N(μ, σ²)), find the estimates of μ and σ²: 4.93, 6.82, 3.12, 7.57, 3.04, 4.98, 4.62, 4.84, 2.95, 4.22.
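A sketch of the computation for this sample. The MMEs are the sample mean and the uncorrected sample variance:

```python
# Method-of-moments estimates for N(mu, sigma^2).
sample = [4.93, 6.82, 3.12, 7.57, 3.04, 4.98, 4.62, 4.84, 2.95, 4.22]
n = len(sample)

mu_hat = sum(sample) / n                                  # matches E[X]
sigma2_hat = sum((x - mu_hat) ** 2 for x in sample) / n   # matches Var(X) = E[X^2] - (E[X])^2

print(round(mu_hat, 3), round(sigma2_hat, 3))  # 4.709 2.143
```

Note the divisor n, not n − 1: the MME of σ² is the uncorrected sample variance.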

15 DRAWBACKS OF MMES

Although parameters are sometimes positive-valued, MMEs can be negative. If the required moments do not exist, MMEs cannot be found.

16 MAXIMUM LIKELIHOOD ESTIMATION (MLE)

Let X1, X2, …, Xn be a r.s. from a population with pmf or pdf f(x; θ1, θ2, …, θk). The likelihood function is defined by

L(θ1, …, θk | x1, …, xn) = ∏(i=1 to n) f(xi; θ1, …, θk).

17 MAXIMUM LIKELIHOOD ESTIMATION (MLE)

For each sample point (x1, …, xn), let (θ̂1, …, θ̂k) be a parameter value at which L(θ1, …, θk | x1, …, xn) attains its maximum as a function of (θ1, …, θk), with (x1, …, xn) held fixed. A maximum likelihood estimator (MLE) of the parameters (θ1, …, θk) based on a sample (X1, …, Xn) is (θ̂1(X1, …, Xn), …, θ̂k(X1, …, Xn)). The MLE is the parameter point for which the observed sample is most likely.

18 EXAMPLES

Let X~Bin(n, p), where both n and p are unknown. One observation on X is available, and it is known that n is either 2 or 3 and p = 1/2 or 1/3. Our objective is to estimate the pair (n, p).

x    (2,1/2)   (2,1/3)   (3,1/2)   (3,1/3)   Max. Prob.
0      1/4       4/9       1/8       8/27       4/9
1      1/2       4/9       3/8      12/27       1/2
2      1/4       1/9       3/8       6/27       3/8
3       0         0        1/8       1/27       1/8
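The table can be reproduced by brute force: evaluate the binomial pmf at each candidate (n, p) for every observed x and keep the maximizer. A sketch in Python (math.comb returns 0 when x > n, which handles the impossible cells automatically):

```python
from math import comb

def binom_pmf(x, n, p):
    # comb(n, x) is 0 when x > n, so impossible outcomes get probability 0.
    return comb(n, x) * p ** x * (1 - p) ** (n - x)

candidates = [(2, 1/2), (2, 1/3), (3, 1/2), (3, 1/3)]

# For each observed x, the MLE of (n, p) maximizes the pmf over the candidates.
mle = {x: max(candidates, key=lambda c: binom_pmf(x, *c)) for x in range(4)}
for x, pair in mle.items():
    print(x, pair)
```

The maximizers agree with the Max. Prob. column: (2, 1/3) for x = 0, (2, 1/2) for x = 1, and (3, 1/2) for x = 2 and x = 3.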

19 MAXIMUM LIKELIHOOD ESTIMATION (MLE)

It is usually convenient to work with the logarithm of the likelihood function. Suppose that f(x; θ1, θ2, …, θk) is a positive, differentiable function of θ1, θ2, …, θk. If a supremum exists, it must satisfy the likelihood equations

∂ ln L(θ1, …, θk | x1, …, xn) / ∂θj = 0, j = 1, …, k.

An MLE occurring at the boundary of Ω cannot be obtained by differentiation, so use inspection.

20 MLE

Moreover, you need to check that you are in fact maximizing the log-likelihood (or likelihood) by checking that the second derivative is negative.

21 EXAMPLES

1. X~Exp(θ), θ > 0. For a r.s. of size n, find the MLE of θ.
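A worked sketch of example 1, assuming the mean parameterization f(x; θ) = (1/θ) e^{−x/θ}, x > 0 (under the rate parameterization the MLE would be 1/x̄ instead):

```latex
\ln L(\theta) = -n\ln\theta - \frac{1}{\theta}\sum_{i=1}^{n} x_i,
\qquad
\frac{d\,\ln L}{d\theta} = -\frac{n}{\theta} + \frac{1}{\theta^{2}}\sum_{i=1}^{n} x_i = 0
\;\Longrightarrow\; \hat{\theta} = \bar{x}.
```

The second derivative at θ̂ is n/θ̂² − 2nθ̂/θ̂³ = −n/θ̂² < 0, confirming a maximum.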

22 EXAMPLES

2. X~N(μ, σ²). For a r.s. of size n, find the MLEs of μ and σ².

23 EXAMPLES

3. X~Uniform(0, θ), θ > 0. For a r.s. of size n, find the MLE of θ.
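Example 3 is the boundary case where differentiation fails: L(θ) = θ^(−n) for θ ≥ max xi and 0 otherwise, so the likelihood is decreasing in θ and is maximized at the smallest admissible value. A minimal numerical sketch (the sample values are made up for illustration):

```python
# MLE for Uniform(0, theta): L(theta) = theta**(-n) for theta >= max(sample),
# and 0 otherwise, so by inspection the MLE is the sample maximum.
sample = [0.82, 2.31, 1.47, 0.05, 1.90]  # hypothetical observations

theta_mle = max(sample)
print(theta_mle)  # 2.31
```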

24 INVARIANCE PROPERTY OF THE MLE

If θ̂ is the MLE of θ, then for any function τ(θ), the MLE of τ(θ) is τ(θ̂).
Example: X~N(μ, σ²). For a r.s. of size n, the MLE of μ is X̄. By the invariance property of the MLE, the MLE of μ² is X̄².

25 ADVANTAGES OF MLE

Often yields good estimates, especially for large sample sizes. Has the invariance property. The asymptotic distribution of the MLE is normal. It is the most widely used estimation technique. MLEs are usually consistent estimators (consistency will be defined later).

26 DISADVANTAGES OF MLE

Requires that the pdf or pmf be known except for the values of the parameters. The MLE may not exist or may not be unique. The MLE may not be obtainable explicitly (numerical or search methods may be required). It may be sensitive to the choice of starting values when numerical optimization is used. MLEs can be heavily biased for small samples.

27 LEAST SQUARES ERROR

Minimize the sum of squared errors. Mostly used in regression. As an example, consider the simple linear model

Yi = β0 + β1 Xi + εi, i = 1, …, n.

LSE says: minimize Σ(i=1 to n) (Yi − β0 − β1 Xi)².

28 Problems

1. Let Xi (i = 1, …, n) be a random sample from a gamma distribution s.t. Xi ~ Gamma(2, θ). The p.d.f. of X1 is given by

f(x; θ) = (1/θ²) x e^{−x/θ}, x > 0, θ > 0.

a) Is this a member of the exponential class?
b) Find the MLE of θ.
c) Find the MME of θ.
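Assuming the scale parameterization of Gamma(2, θ) written above (an assumption, since the slide's formula did not survive transcription), parts (b) and (c) can be sketched as follows:

```latex
\ln L(\theta) = \sum_{i=1}^{n}\ln x_i \;-\; 2n\ln\theta \;-\; \frac{1}{\theta}\sum_{i=1}^{n} x_i,
\qquad
\frac{d\,\ln L}{d\theta} = -\frac{2n}{\theta} + \frac{1}{\theta^{2}}\sum_{i=1}^{n} x_i = 0
\;\Longrightarrow\; \hat{\theta}_{\mathrm{MLE}} = \frac{\bar{x}}{2}.
```

For (c), E(X) = 2θ, so equating the first population moment to x̄ gives θ̂_MME = x̄/2: here the MME and the MLE coincide.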

29 Problems

2. Let X1, …, Xn be a random sample from an inverse gamma distribution with the following probability density function:

f(x; α, β) = (β^α / Γ(α)) x^{−α−1} e^{−β/x}, x > 0, α > 0, β > 0.

a) Does the distribution of X1 belong to the exponential family of distributions?
b) Find the kth moment, i.e. E(X^k), for k = 1, 2, …
c) Find the MMEs of α and β.
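For part (b), with the standard inverse-gamma density written above, the kth moment follows by recognizing a gamma integral after the substitution u = 1/x (it exists only for k < α):

```latex
E(X^{k}) = \int_{0}^{\infty} x^{k}\,\frac{\beta^{\alpha}}{\Gamma(\alpha)}\,x^{-\alpha-1}e^{-\beta/x}\,dx
         = \frac{\beta^{\alpha}}{\Gamma(\alpha)}\cdot\frac{\Gamma(\alpha-k)}{\beta^{\alpha-k}}
         = \frac{\beta^{k}\,\Gamma(\alpha-k)}{\Gamma(\alpha)}, \qquad k < \alpha.
```

In particular E(X) = β/(α − 1) and E(X²) = β²/((α − 1)(α − 2)), which part (c) equates to the first two sample moments.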

30 Problems

3. Sometimes the regression function is known to be linear and to pass through the origin, i.e., Y = 0 when X = 0. For example, let Y be the volume of beer sales in supermarkets, and X the number of beer bottles stocked in these supermarkets. If X = 0, then obviously Y = 0. In such cases, the model can be written as in Equation (1):

Yi = β1 Xi + εi, εi ~ N(0, σ²), for i = 1, 2, …, n.   (1)

Here both β1 and σ² are unknown, and the X's are fixed values.

31 Problems

a) Find the least squares estimator of β1 for the model given in Equation (1).
b) Find the maximum likelihood estimator of β1 for the model given in Equation (1).
c) Find the expected value of the estimator of β1.
d) Find the variance of the estimator of β1.
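For part (a), setting the derivative of Σ(Yi − β1 Xi)² with respect to β1 to zero gives b1 = Σ XiYi / Σ Xi²; under the normal errors of Equation (1), the same expression maximizes the likelihood, answering part (b). A numerical sketch with made-up data:

```python
# Least squares estimator for regression through the origin:
# b1 = sum(x_i * y_i) / sum(x_i ** 2) minimizes sum((y_i - b1 * x_i) ** 2).
x = [1.0, 2.0, 3.0, 4.0]   # hypothetical fixed X values
y = [2.1, 3.9, 6.2, 7.8]   # hypothetical responses, roughly y = 2x

b1 = sum(xi * yi for xi, yi in zip(x, y)) / sum(xi ** 2 for xi in x)
print(round(b1, 3))  # 1.99
```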

32 Problems

e) In an experiment involving 12 similar branch offices of a bank, new customers were offered gifts for opening accounts. The value of the gift was proportional to the minimum cash deposited in that branch. We would like to study whether such gifts were helpful in bringing in new accounts. The following table provides the data.

33 Problems

Branch | Size of Minimum Deposit (dollars) | Number of New Accounts
[table values not recoverable from the transcript]

Calculate the estimate of β1 for these data. Write down the estimated regression equation given in Equation (1) for these data. Interpret your findings.