CLASS: B.Sc. II PAPER-I ELEMENTARY INFERENCE
TESTING OF HYPOTHESIS
THEORY OF ESTIMATION
DEFINITION
1. UNBIASEDNESS
2. CONSISTENCY
3. EFFICIENCY
4. SUFFICIENCY
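As a supplement to the list above (not part of the original slides), the short Python simulation below illustrates the first two properties for a familiar estimator: the sample mean of an exponential sample is unbiased (its average over many repeated samples sits at the true mean) and consistent (its spread shrinks as the sample size grows). The distribution, sample sizes and random seed are illustrative choices only.

```python
import numpy as np

rng = np.random.default_rng(0)
true_mean = 0.5                      # an exponential with rate 2 has mean 1/2

for n in (10, 100, 1000):
    # 5000 independent samples of size n; the sample mean of each estimates true_mean
    estimates = rng.exponential(scale=true_mean, size=(5000, n)).mean(axis=1)
    print(f"n={n:5d}  average estimate={estimates.mean():.4f}  "
          f"spread of estimates={estimates.std():.4f}")
```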
METHODS OF ESTIMATION
MAXIMUM LIKELIHOOD ESTIMATION
Definition
Let X₁, X₂, …, Xₙ have joint p.m.f. or p.d.f. f(x₁, x₂, …, xₙ; θ₁, θ₂, …, θₘ), where the parameters θ₁, θ₂, …, θₘ have unknown values. When x₁, x₂, …, xₙ are the observed sample values and f is regarded as a function of θ₁, θ₂, …, θₘ, it is called the likelihood function; for an independent sample it is the product of the individual p.m.f.'s or p.d.f.'s,
L(θ) = p(x₁; θ) p(x₂; θ) ⋯ p(xₙ; θ).
The maximum likelihood estimates (m.l.e.'s) θ̂₁, θ̂₂, …, θ̂ₘ are those values of the θᵢ's that maximize the likelihood function, so that
f(x₁, x₂, …, xₙ; θ̂₁, …, θ̂ₘ) ≥ f(x₁, x₂, …, xₙ; θ₁, …, θₘ) for all θ₁, …, θₘ.
When the Xᵢ's are substituted in place of the xᵢ's, the maximum likelihood estimators result.
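To make the definition concrete, here is a minimal numerical sketch (an illustration added to the slides, with hypothetical data): the likelihood L(θ) is evaluated as the product of the individual densities over a grid of parameter values, and the maximizer is reported. The model is Normal(μ, σ = 1) with unknown mean μ, for which the exact MLE is the sample mean.

```python
import numpy as np
from scipy.stats import norm

x = np.array([4.2, 5.1, 3.8, 4.9, 5.4])    # hypothetical observed sample
grid = np.linspace(3.0, 6.0, 601)           # candidate values of the unknown mean mu

# L(mu) = product over i of f(x_i; mu), with f the Normal(mu, 1) density
likelihood = np.array([norm.pdf(x, loc=mu, scale=1.0).prod() for mu in grid])
mu_hat = grid[likelihood.argmax()]
print("grid-search MLE:", mu_hat, "  sample mean (exact MLE):", x.mean())
```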
Example 1: Illustrating the MLE Method Using the Exponential Distribution
Suppose that X₁, …, Xₙ is a random sample from an exponential distribution with parameter λ. Because of independence, the likelihood function is the product of the individual p.d.f.'s:
L(λ) = (λe^(−λx₁)) (λe^(−λx₂)) ⋯ (λe^(−λxₙ)) = λⁿ e^(−λ Σxᵢ).
Example 1 (cont'd)
The log-likelihood is ln L(λ) = n ln(λ) − λ Σxᵢ. Setting d[ln L(λ)]/dλ = n/λ − Σxᵢ = 0 gives the MLE
λ̂ = n / Σxᵢ = 1/x̄.
This is identical to the method-of-moments estimator, but it is not an unbiased estimator, since E(1/X̄) ≠ 1/E(X̄).
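The following simulation sketch (added for illustration, not from the slides) checks both claims: the MLE λ̂ = 1/x̄ is easy to compute, and for a small sample size its average over many repeated samples sits above the true λ, which is the bias referred to above. The true λ, sample size and seed are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(1)
lam, n = 2.0, 5
# 20000 simulated samples of size n from an exponential with rate lam
samples = rng.exponential(scale=1.0 / lam, size=(20000, n))
lam_hat = 1.0 / samples.mean(axis=1)         # MLE n / sum(x_i) for each sample
print("true lambda:", lam, "  average of the MLEs:", round(lam_hat.mean(), 3))
# For this small n the average is near n*lam/(n-1) = 2.5 rather than 2.0.
```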
Notes on lambda
Note that λ̂ is an estimate: if we obtained another sample from the same population and re-estimated λ, the new value would differ from the one previously calculated. In plain language, λ̂ is an estimate of the true value of λ. How close is our estimate to the true value? To answer this question, one must first determine the distribution of the estimator λ̂. This leads to the idea of a confidence level, which allows us to specify a range for the estimate that covers the true value with a stated confidence. The treatment of confidence intervals is integral to simulation, reliability engineering and to all of statistics (e.g., coefficients in regression models).
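The slides do not specify a procedure for building such an interval. One common approach, sketched below purely as an illustration (a percentile bootstrap, not necessarily the method intended by the slides), is to resample the data with replacement, re-estimate λ each time, and use the spread of the re-estimates as an approximate 95% confidence interval. The data here are simulated with a hypothetical true λ = 2.

```python
import numpy as np

rng = np.random.default_rng(2)
x = rng.exponential(scale=0.5, size=30)      # hypothetical data, true lambda = 2

# Re-estimate lambda on 5000 resamples drawn with replacement from x
boot = np.array([1.0 / rng.choice(x, size=x.size, replace=True).mean()
                 for _ in range(5000)])
low, high = np.percentile(boot, [2.5, 97.5])
print("lambda_hat =", round(1.0 / x.mean(), 3),
      "  approximate 95% bootstrap CI:", (round(low, 3), round(high, 3)))
```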
Example 2: The Binomial Distribution
A sample of 10 new CD-ROMs pressed by a manufacturer: the 1st, 3rd and 10th are warped, the rest are OK. Let p = P(a CD-ROM is warped) and define X₁, …, X₁₀ by
Xᵢ = 1 if the ith CD-ROM is warped,
Xᵢ = 0 if the ith CD-ROM is not warped.
Then the observed xᵢ's are 1, 0, 1, 0, 0, 0, 0, 0, 0, 1, so the joint p.m.f. of the sample is
f(x₁, x₂, …, x₁₀; p) = p(1−p)p(1−p)⋯p = p³(1−p)⁷.
Example 2 (cont'd)
Q: For what value of p is the observed sample most likely to have occurred? That is, we wish to find the value of p that maximizes f(x₁, x₂, …, x₁₀; p) = p³(1−p)⁷, or equivalently maximizes
ln f(x₁, …, x₁₀; p) = 3 ln(p) + 7 ln(1−p).
Setting the derivative to zero gives p̂ = 3/10 = x/n, where x is the observed number of successes (warped CD-ROMs). The estimate p̂ = 3/10 is the MLE because, for fixed x₁, x₂, …, x₁₀, it is the parameter value that maximizes the likelihood (joint p.m.f.) of the observed sample.
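As a quick numerical check (an added sketch, not part of the slides), the code below maximizes 3 ln(p) + 7 ln(1 − p) with a bounded one-dimensional optimizer and confirms that the answer agrees with the closed form x/n = 3/10.

```python
import numpy as np
from scipy.optimize import minimize_scalar

x_successes, n = 3, 10

def neg_log_lik(p):
    # negative of ln f = 3 ln(p) + 7 ln(1 - p)
    return -(x_successes * np.log(p) + (n - x_successes) * np.log(1.0 - p))

res = minimize_scalar(neg_log_lik, bounds=(1e-6, 1.0 - 1e-6), method="bounded")
print("numerical MLE:", round(res.x, 4), "  x/n:", x_successes / n)
```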
Example 2 (cont'd)
Note: if we were told only that among the 10 CD-ROMs there were 3 which are warped, we could write the likelihood as the binomial p.m.f.
b(3; 10, p) = (10 choose 3) p³(1−p)⁷,
which is also maximized at p̂ = 3/10.
Desirable Properties
1. For most of the common distributions, the MLE is unique; that is, L(θ̂) is strictly greater than L(θ) for any other value of θ.
2. Although MLEs need not be unbiased, in general the asymptotic distribution (as n → ∞) of θ̂ has mean equal to θ (see property 4 below).
3. MLEs are invariant; that is, if Φ = h(θ) for some function h, then the MLE of Φ is h(θ̂). (Unbiasedness is not invariant.) For instance, the variance of an expo(β) random variable is β², so the MLE of this variance is β̂².
Desirable Properties (cont'd)
4. MLEs are asymptotically normally distributed; that is, √n (θ̂ₙ − θ) →_D N(0, δ(θ)), where δ(θ) = −n / E[d² ln L(θ)/dθ²] (the expectation is with respect to Xᵢ, assuming that Xᵢ has the hypothesized distribution) and →_D denotes convergence in distribution. Further, if θ̃ₙ is any other estimator such that √n (θ̃ₙ − θ) →_D N(0, σ²), then δ(θ) ≤ σ². (Thus MLEs are called best asymptotically normal.)
5. MLEs are strongly consistent; that is, θ̂ₙ → θ with probability 1 as n → ∞.
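Property 4 can be seen in a small simulation (an illustration added to the slides, using the exponential MLE from Example 1 and arbitrary settings): for an exponential with rate λ, the inverse Fisher information is λ², and as n grows the standardized error √n (λ̂ − λ) has mean and variance approaching 0 and λ².

```python
import numpy as np

rng = np.random.default_rng(3)
lam = 2.0
for n in (10, 100, 1000):
    samples = rng.exponential(scale=1.0 / lam, size=(20000, n))
    z = np.sqrt(n) * (1.0 / samples.mean(axis=1) - lam)   # sqrt(n) * (MLE - lambda)
    print(f"n={n:5d}  mean={z.mean():+.3f}  variance={z.var():.3f}"
          f"  (limit: 0 and lambda**2 = {lam**2})")
```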
METHOD OF MOMENTS
ASSIGNMENT
1. EXPLAIN: i) NULL HYPOTHESIS ii) ALTERNATIVE HYPOTHESIS iii) CRITICAL REGION iv) LEVEL OF SIGNIFICANCE
2. EXPLAIN THE METHOD OF MAXIMUM LIKELIHOOD
3. WRITE IN DETAIL, WITH PROCEDURE, ABOUT THE METHOD OF MOMENTS
TEST
1. EXPLAIN: i) NULL HYPOTHESIS ii) ALTERNATIVE HYPOTHESIS iii) TYPE I AND TYPE II ERRORS
2. EXPLAIN THE METHOD OF MAXIMUM LIKELIHOOD