6 Point Estimation
Introduction Given a parameter of interest, such as a population mean μ or population proportion p, the objective of point estimation is to use a sample to compute a number that represents, in some sense, a good guess for the true value of the parameter. The resulting number is called a point estimate. In Section 6.2, we describe and illustrate two important methods for obtaining point estimates: the method of moments and the method of maximum likelihood.
6.1 Some General Concepts of Point Estimation
DEFINITION A point estimate of a parameter θ is a single number that can be regarded as a sensible value for θ. A point estimate is obtained by selecting a suitable statistic and computing its value from the given sample data. The selected statistic is called the point estimator of θ.
Example 6.1 An automobile manufacturer has developed a new type of bumper, which is supposed to absorb impacts with less damage than previous bumpers. The manufacturer has used this bumper in a sequence of 25 controlled crashes against a wall, each at 10 mph, using one of its compact car models. Let X = the number of crashes that result in no visible damage to the automobile. The parameter to be estimated is p = the proportion of all such crashes that result in no damage [alternatively, p = P(no damage in a single crash)].
If X is observed to be x = 15, the most reasonable estimator and estimate are the sample proportion p̂ = X/n and the estimate p̂ = x/n = 15/25 = .60.
If for each parameter of interest there were only one reasonable point estimator, there would not be much to point estimation. In most problems, though, there will be more than one reasonable estimator.
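As a quick sketch (the 0/1 vector below is hypothetical, arranged only so that 15 of the 25 crashes show no visible damage), the point estimate is just the observed sample proportion:

```python
import numpy as np

# Hypothetical outcomes for the 25 crashes: 1 = no visible damage, 0 = visible damage.
x = np.array([1] * 15 + [0] * 10)

n = len(x)                # number of controlled crashes
p_hat = x.sum() / n       # point estimate of p: the sample proportion

print(f"p_hat = {p_hat:.2f}")   # 0.60
```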
Unbiased Estimators
DEFINITION A point estimator θ̂ is said to be an unbiased estimator of θ if E(θ̂) = θ for every possible value of θ. If θ̂ is not unbiased, the difference E(θ̂) − θ is called the bias of θ̂.
Thus, θ̂ is unbiased if its probability distribution is always "centered" at the true value of the parameter. Figure 6.1 pictures the distributions of several biased and unbiased estimators. Note that "centered" here means that the expected value, not the median, of the distribution of θ̂ is equal to θ.
Figure 6.1 The pdf's of a biased estimator and an unbiased estimator for a parameter θ
PROPOSITION When X is a binomial rv with parameters n and p, the sample proportion p̂ = X/n is an unbiased estimator of p. Please read Example 6.4 yourself.
PROPOSITION Let X1, X2,…,Xn be a random sample from a distribution with mean μ and variance σ². Then the estimator σ̂² = S² = Σ(Xi − X̄)²/(n − 1) is an unbiased estimator of σ². The proof is omitted; please read it yourself.
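A small simulation sketch (not from the slides; the normal population, sample size, and replication count are arbitrary choices) illustrating the proposition: averaging S² over many samples comes out close to σ², while dividing by n instead of n − 1 is biased low:

```python
import numpy as np

rng = np.random.default_rng(42)
mu, sigma2, n, reps = 5.0, 4.0, 10, 100_000    # arbitrary population and sample size

samples = rng.normal(mu, np.sqrt(sigma2), size=(reps, n))

s2_unbiased = samples.var(axis=1, ddof=1)      # divides by n - 1
s2_biased = samples.var(axis=1, ddof=0)        # divides by n

print(f"average of S^2:          {s2_unbiased.mean():.3f}   (true sigma^2 = {sigma2})")
print(f"average of (1/n)*sum sq: {s2_biased.mean():.3f}   (biased low by the factor (n-1)/n)")
```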
Estimators with Minimum Variance
Suppose θ̂1 and θ̂2 are two estimators of θ that are both unbiased. Then, although the distribution of each estimator is centered at the true value of θ, the spreads of the distributions about the true value may be different.
Principle of Minimum Variance Unbiased Estimation Among all estimators of θ that are unbiased, choose the one that has minimum variance. The resulting θ̂ is called the minimum variance unbiased estimator (MVUE) of θ.
Example 6.5 We argued in Example 6.4 that when X1, X2,…,Xn is a random sample from a uniform distribution on [0, θ], the estimator θ̂1 = ((n + 1)/n) · max(Xi) is unbiased for θ (we previously denoted this estimator by θ̂). This is not the only unbiased estimator of θ. The expected value of a uniformly distributed rv is just the midpoint of the interval of positive density, so E(Xi) = θ/2. This implies that E(X̄) = θ/2, from which E(2X̄) = θ. That is, the estimator θ̂2 = 2X̄ is unbiased for θ.
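The following simulation sketch (not part of the example; θ, n, and the number of replications are arbitrary) compares the two unbiased estimators: both average out near θ, but the estimator based on max(Xi) has much smaller variance, which motivates the minimum variance principle:

```python
import numpy as np

rng = np.random.default_rng(0)
theta, n, reps = 10.0, 20, 50_000              # arbitrary true value, sample size, replications

x = rng.uniform(0.0, theta, size=(reps, n))

theta1_hat = (n + 1) / n * x.max(axis=1)       # unbiased estimator based on the sample maximum
theta2_hat = 2.0 * x.mean(axis=1)              # unbiased estimator based on the sample mean

for name, est in [("theta1_hat", theta1_hat), ("theta2_hat", theta2_hat)]:
    print(f"{name}: mean = {est.mean():.3f}, variance = {est.var():.4f}")
```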
THEOREM Let X1, X2,…,Xn be a random sample from a normal distribution with parameters μ and σ. Then the estimator μ̂ = X̄ is the MVUE for μ. Whenever we are convinced that the population being sampled is normal, the result says that X̄ should be used to estimate μ. In Example 6.2, then, our estimate would be μ̂ = x̄. In some situations, it is possible to obtain an estimator with small bias that would be preferred to the best unbiased estimator. This is illustrated in Figure 6.3. However, MVUEs are often easier to obtain than the type of biased estimator whose distribution is pictured.
Figure 6.3 A biased estimator that is preferable to the MVUE (the two curves shown are the pdf of the biased estimator and the pdf of the MVUE, both near θ)
6.2 Methods of Point Estimation
The definition of unbiasedness does not in general indicate how unbiased estimators can be derived. We now discuss two “constructive” methods for obtaining point estimators: the method of moments and the method of maximum likelihood.
By constructive we mean that the general definition of each type of estimator suggests explicitly how to obtain the estimator in any specific problem. Although maximum likelihood estimators are generally preferable to moment estimators because of certain efficiency properties, they often require significantly more computation than do moment estimators. It is sometimes the case that these methods yield unbiased estimators.
The Method of Moments
DEFINITION Let X1, X2,…,Xn be a random sample from a pmf or pdf f(x). For k = 1, 2, 3,…, the kth population moment, or kth moment of the distribution f(x), is E(X^k). The kth sample moment is (1/n) Σ Xi^k.
DEFINITION Let X1, X2,…,Xn be a random sample from a distribution with pmf or pdf f(x; θ1,…,θm), where θ1,…,θm are parameters whose values are unknown. Then the moment estimators θ̂1,…,θ̂m are obtained by equating the first m sample moments to the corresponding first m population moments and solving for θ1,…,θm.
For the population X, the kth origin moment is E(X^k) and the kth central moment is E[(X − E(X))^k]. For a sample X1, X2,…,Xn, the kth sample origin moment is (1/n) Σ Xi^k and the kth sample central moment is (1/n) Σ (Xi − X̄)^k.
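A minimal sketch (my own illustration, with an arbitrary data vector) computing the first few sample origin and central moments exactly as defined above:

```python
import numpy as np

def sample_origin_moment(x, k):
    """kth sample origin moment: (1/n) * sum(x_i**k)."""
    return np.mean(x ** k)

def sample_central_moment(x, k):
    """kth sample central moment: (1/n) * sum((x_i - x_bar)**k)."""
    return np.mean((x - x.mean()) ** k)

x = np.array([2.3, 3.7, 1.5, 0.4, 3.2])        # any observed sample
for k in (1, 2, 3):
    print(k, sample_origin_moment(x, k), sample_central_moment(x, k))
```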
Suppose that we wish to estimate the parameters θ1, θ2,…,θm.
Then equate the first m population moments, each of which is a function of θ1, θ2,…,θm, to the corresponding sample moments:
E(X^k) = (1/n) Σ Xi^k,  k = 1, 2,…, m.
So the method of moments estimates are the solutions θ̂1, θ̂2,…,θ̂m of this system of equations,
where θ̂1, θ̂2,…,θ̂m are the estimates of the parameters θ1, θ2,…,θm.
Example A Poisson Distribution The first origin moment of the Poisson distribution is the parameter λ = E(X). The first sample origin moment is X̄, which is therefore the method of moments estimate of λ: λ̂ = X̄.
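For instance, a sketch with a simulated Poisson sample standing in for real count data (the true λ = 4 is arbitrary):

```python
import numpy as np

rng = np.random.default_rng(1)
counts = rng.poisson(lam=4.0, size=200)   # simulated count data; true lambda = 4 is arbitrary

lambda_hat = counts.mean()                # method of moments estimate: lambda_hat = x_bar
print(f"lambda_hat = {lambda_hat:.3f}")
```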
Example B Normal Distribution The first and second origin moments of the normal distribution are E(X) = μ and E(X²) = μ² + σ². Therefore, μ = E(X) and σ² = E(X²) − [E(X)]².
The corresponding estimates of μ and σ² from the sample origin moments are
μ̂ = X̄ and σ̂² = (1/n) Σ Xi² − X̄² = (1/n) Σ (Xi − X̄)² = ((n − 1)/n) S²,
where S is the sample standard deviation.
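A short sketch (simulated data; μ, σ, and n are arbitrary) computing the moment estimates and checking the (n − 1)/n relationship with S²:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 30
x = rng.normal(loc=10.0, scale=3.0, size=n)     # simulated sample

mu_hat = x.mean()                               # moment estimate of mu
sigma2_hat = np.mean(x**2) - x.mean()**2        # moment estimate of sigma^2 (divides by n)
s2 = x.var(ddof=1)                              # sample variance S^2 (divides by n - 1)

# The last two values agree: sigma2_hat = (n - 1)/n * S^2.
print(round(mu_hat, 3), round(sigma2_hat, 3), round((n - 1) / n * s2, 3))
```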
Example 6.11 Let X1, X2,…,Xn represent a random sample of service times of n customers at a certain facility, where the underlying distribution is assumed exponential with parameter λ. Since there is only one parameter to be estimated, the estimator is obtained by equating E(X) to X̄. Since E(X) = 1/λ for an exponential distribution, this gives λ̂ = 1/X̄.
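A corresponding sketch for Example 6.11, with simulated service times standing in for real data (the true rate is an arbitrary choice):

```python
import numpy as np

rng = np.random.default_rng(3)
true_rate = 0.5                                          # arbitrary true lambda
times = rng.exponential(scale=1 / true_rate, size=500)   # simulated service times, E(X) = 1/lambda

lambda_hat = 1.0 / times.mean()                          # moment estimate: lambda_hat = 1 / x_bar
print(f"lambda_hat = {lambda_hat:.3f} (true value {true_rate})")
```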
The estimation of parameters of some common distributions
Maximum Likelihood Estimation
The method of maximum likelihood was first introduced by R. A. Fisher, a geneticist and statistician, in the 1920s. Most statisticians recommend this method, at least when the sample is large, since the resulting estimators have certain desirable efficiency properties.
Example 6.14 A sample of ten new bike helmets manufactured by a certain company is obtained. Upon testing, it is found that the first, third, and tenth helmets are flawed, whereas the others are not. Let p = P(flawed helmet) and define X1,…,X10 by Xi = 1 if the ith helmet is flawed and zero otherwise. Then the observed xi's are 1, 0, 1, 0, 0, 0, 0, 0, 0, 1, so the joint pmf of the sample is
f(x1,…,x10; p) = p(1 − p)p ⋯ p = p^3 (1 − p)^7.   (6.4)
We now ask: For what value of p is the observed sample most likely to have occurred? That is, we wish to find the value of p that maximizes the pmf (6.4), or, equivalently, maximizes the natural log of the pmf,
ln[f(x1,…,x10; p)] = 3 ln(p) + 7 ln(1 − p).   (6.5)
Since this is a differentiable function of p, equating the derivative of (6.5) to zero gives the maximizing value p̂ = 3/10 = x/n,
where x is the observed number of successes (flawed helmets). The estimate of p is now p̂ = 3/10 = .30. It is called the maximum likelihood estimate because for fixed x1,…,x10, it is the parameter value that maximizes the likelihood of the observed sample. Note that if we had been told only that among the ten helmets there were three that were flawed, Equation (6.4) would be replaced by the binomial pmf (10 choose 3) p^3 (1 − p)^7, which is also maximized for p̂ = 3/10.
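As a numerical check (a sketch, not from the text), evaluating the likelihood p^3 (1 − p)^7 on a grid of p values confirms that it peaks at p = x/n = .30:

```python
import numpy as np

x, n = 3, 10                                 # 3 flawed helmets observed out of 10
p = np.linspace(0.001, 0.999, 999)           # grid of candidate values of p
likelihood = p**x * (1 - p)**(n - x)         # joint pmf p^3 * (1 - p)^7 as a function of p

print(f"grid value of p maximizing the likelihood: {p[np.argmax(likelihood)]:.3f}")   # about 0.300
```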
DEFINITION Let X1, X2,…,Xn have joint pmf or pdf f(x1,…,xn; θ1,…,θm), where the parameters θ1,…,θm have unknown values. When x1,…,xn are the observed sample values and f is regarded as a function of θ1,…,θm, it is called the likelihood function. The maximum likelihood estimates θ̂1,…,θ̂m are those values of the θi's that maximize the likelihood function, so that
f(x1,…,xn; θ̂1,…,θ̂m) ≥ f(x1,…,xn; θ1,…,θm) for all θ1,…,θm.
When the Xi's are substituted in place of the xi's, the maximum likelihood estimators result.
Let X1, X2,…,Xn be a sample in which X1, X2,…,Xn are independent, and suppose that we wish to estimate the parameters θ1, θ2,…,θm. Set
L(θ1,…,θm) = f(x1; θ1,…,θm) · f(x2; θ1,…,θm) ⋯ f(xn; θ1,…,θm),   (1)
the likelihood function.
Setting the partial derivatives of the likelihood function equal to zero gives the system of likelihood equations
∂L/∂θi = 0,  i = 1, 2,…, m.   (2)
The solutions of system (2) are the maximum likelihood estimators. Why work with logarithms? Because ln is an increasing function, L and ln L are maximized by the same parameter values, and the log-likelihood ln L is usually easier to differentiate, so in practice we solve
∂ ln L/∂θi = 0,  i = 1, 2,…, m.   (3)
Example A Poisson Distribution
The likelihood function is
L(λ) = ∏ [e^(−λ) λ^xi / xi!] = e^(−nλ) λ^(x1+…+xn) / (x1! ⋯ xn!).
The log-likelihood is
ln L(λ) = −nλ + (Σ xi) ln(λ) − Σ ln(xi!).
Setting the first derivative of the log-likelihood equal to zero, we find
d ln L(λ)/dλ = −n + (Σ xi)/λ = 0,
so λ̂ = (Σ Xi)/n = X̄, the same as the method of moments estimator.
Example B Normal Distribution
The likelihood function is
L(μ, σ²) = ∏ (1/√(2πσ²)) e^(−(xi − μ)²/(2σ²)) = (2πσ²)^(−n/2) exp[−Σ (xi − μ)²/(2σ²)].
The log-likelihood is
ln L(μ, σ²) = −(n/2) ln(2πσ²) − Σ (xi − μ)²/(2σ²).
Setting the partial derivatives of the log-likelihood with respect to μ and σ² equal to zero and solving gives
μ̂ = X̄ and σ̂² = (1/n) Σ (Xi − X̄)²,
so σ̂² is not the unbiased estimator of σ² (it divides by n rather than by n − 1).
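A sketch (simulated data, arbitrary parameters) that evaluates the log-likelihood above at the closed-form mle's and at nearby parameter values, confirming that (X̄, σ̂²) gives the largest value:

```python
import numpy as np

rng = np.random.default_rng(4)
x = rng.normal(loc=5.0, scale=2.0, size=15)      # simulated sample, arbitrary mu and sigma

def log_lik(mu, sigma2):
    n = len(x)
    return -n / 2 * np.log(2 * np.pi * sigma2) - np.sum((x - mu) ** 2) / (2 * sigma2)

mu_mle, sigma2_mle = x.mean(), x.var(ddof=0)     # the closed-form mle's from above
print(log_lik(mu_mle, sigma2_mle))               # largest value
print(log_lik(mu_mle + 0.5, sigma2_mle))         # perturbing either parameter lowers it
print(log_lik(mu_mle, sigma2_mle * 1.5))
```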
Exercise Let the probability distribution of a random variable X be given by a pmf or pdf with unknown parameter θ.
Find the moment estimator and the maximum likelihood estimator of θ.
Example 6.15 Suppose X1, X2,…,Xn is a random sample from an exponential distribution with parameter λ. Because of independence, the likelihood function is a product of the individual pdf's:
f(x1,…,xn; λ) = (λ e^(−λx1)) ⋯ (λ e^(−λxn)) = λ^n e^(−λ Σ xi).
The ln(likelihood) is
ln[f(x1,…,xn; λ)] = n ln(λ) − λ Σ xi.
Equating (d/dλ)[ln(likelihood)] to zero results in n/λ − Σ xi = 0, or λ = n/Σ xi = 1/x̄.
Thus, the mle is λ̂ = 1/X̄. It is identical to the method of moments estimator. Examples 6.16 and 6.17 are omitted; please read them yourself.
Estimating Functions of Parameters
PROPOSITION The Invariance Principle Let θ̂1,…,θ̂m be the mle's of the parameters θ1,…,θm. Then the mle of any function h(θ1,…,θm) of these parameters is the function h(θ̂1,…,θ̂m) of the mle's.
Example 6.19 In the normal case, the mle's of μ and σ² are μ̂ = X̄ and σ̂² = (1/n) Σ (Xi − X̄)². To obtain the mle of the function h(μ, σ²) = √(σ²) = σ, substitute the mle's into the function:
σ̂ = √(σ̂²) = [(1/n) Σ (Xi − X̄)²]^(1/2).
The mle of σ is not the sample standard deviation S, though they are close unless n is quite small. Example 6.20 (omitted).
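A sketch of the invariance principle in code (simulated data; the cutoff c and the tail probability h(μ, σ) = P(X ≤ c) are my own added illustration, not from the slide):

```python
import math
import numpy as np

rng = np.random.default_rng(5)
x = rng.normal(loc=100.0, scale=15.0, size=40)   # simulated normal sample

mu_mle = x.mean()
sigma2_mle = x.var(ddof=0)                       # mle of sigma^2 (divides by n)
sigma_mle = math.sqrt(sigma2_mle)                # invariance: mle of sigma = sqrt(mle of sigma^2)

# Invariance again: the mle of h(mu, sigma) = P(X <= c) is h evaluated at the mle's.
c = 120.0                                        # hypothetical cutoff
phi = lambda z: 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))   # standard normal cdf
print(sigma_mle, x.std(ddof=1), phi((c - mu_mle) / sigma_mle))   # mle of sigma, S, mle of P(X <= c)
```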
A Desirable Property of the Maximum Likelihood Estimate
Although the principle of maximum likelihood estimation has considerable intuitive appeal, the following proposition provides additional rationale for the use of mle’s.
PROPOSITION Under very general conditions on the joint distribution of the sample, when the sample size n is large, the maximum likelihood estimator of any parameter θ is approximately unbiased and has variance that is nearly as small as can be achieved by any estimator. Stated another way, the mle is approximately the MVUE of θ.
Because of this result and the fact that calculus-based techniques can usually be used to derive the mle’s (though often numerical methods, such as Newton’s method, are necessary), maximum likelihood estimation is the most widely used estimation technique among statisticians. Many of the estimators used in the remainder of the book are mle’s. Obtaining an mle, however, does require that the underlying distribution be specified.
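As an illustration of the numerical side (a sketch using scipy; the gamma model and simulated data are my own choice, not from the book), the shape parameter of a gamma distribution has no closed-form mle, so the log-likelihood is maximized numerically:

```python
import numpy as np
from scipy import optimize, stats

rng = np.random.default_rng(6)
x = rng.gamma(shape=3.0, scale=2.0, size=500)    # simulated data; true (alpha, beta) = (3, 2)

def neg_log_lik(params):
    alpha, beta = params                         # gamma shape and scale
    if alpha <= 0 or beta <= 0:                  # keep the search inside the valid region
        return np.inf
    return -np.sum(stats.gamma.logpdf(x, a=alpha, scale=beta))

res = optimize.minimize(neg_log_lik, x0=[1.0, 1.0], method="Nelder-Mead")
print("numerical mle (alpha, beta):", res.x)
```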
Some Complications
Example 6.21 Suppose my waiting time for a bus is uniformly distributed on [0, θ] and the results x1,…,xn of a random sample from this distribution have been observed. Since f(x; θ) = 1/θ for 0 ≤ x ≤ θ and 0 otherwise, the likelihood function is
f(x1,…,xn; θ) = 1/θ^n if 0 ≤ xi ≤ θ for every i, and 0 otherwise.
As long as max(xi) ≤ θ, the likelihood is 1/θ^n, which is positive, but as soon as θ < max(xi), the likelihood drops to 0. This is illustrated in Figure 6.4. Calculus will not work because the maximum of the likelihood occurs at a point of discontinuity, but the figure shows that θ̂ = max(Xi). Thus, if my waiting times are 2.3, 3.7, 1.5, .4, and 3.2, then the mle is θ̂ = max(xi) = 3.7.
Figure 6.4 The likelihood function for Example 6.21 (the likelihood is plotted against θ, with its maximum at θ = max(xi)). Example 6.22 is omitted; please read it yourself.
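A sketch of Figure 6.4's message for the waiting-time data in Example 6.21: evaluating the likelihood 1/θ^n (zero whenever θ < max(xi)) on a grid of θ values puts the maximum exactly at max(xi) = 3.7:

```python
import numpy as np

x = np.array([2.3, 3.7, 1.5, 0.4, 3.2])          # observed waiting times
n = len(x)

theta = np.linspace(0.1, 10.0, 991)              # grid of candidate theta values
likelihood = np.where(theta >= x.max(), theta**(-n), 0.0)   # 1/theta^n if theta >= max(x_i), else 0

print(f"grid value of theta maximizing the likelihood: {theta[np.argmax(likelihood)]:.2f}")   # 3.70
```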