INTRODUCTION TO MAXIMUM LIKELIHOOD ESTIMATION (presentation transcript)

1 INTRODUCTION TO MAXIMUM LIKELIHOOD ESTIMATION. This sequence introduces the principle of maximum likelihood estimation and illustrates it with some simple examples.

2 Suppose that you have a normally distributed random variable X with unknown population mean μ and standard deviation σ, and that you have a sample of two observations, 4 and 6. For the time being, we will assume that σ is equal to 1.

3 Suppose initially you consider the hypothesis μ = 3.5. Under this hypothesis the probability density at 4 would be 0.3521 and that at 6 would be 0.0175.

4 The joint probability density, shown in the bottom chart, is the product of these, 0.0062.

5 Next consider the hypothesis μ = 4.0. Under this hypothesis the probability densities associated with the two observations are 0.3989 and 0.0540, and the joint probability density is 0.0215.

6 Under the hypothesis μ = 4.5, the probability densities are 0.3521 and 0.1295, and the joint probability density is 0.0456.

7 Under the hypothesis μ = 5.0, the probability densities are both 0.2420 and the joint probability density is 0.0585.

8 Under the hypothesis μ = 5.5, the probability densities are 0.1295 and 0.3521, and the joint probability density is 0.0456.

9 The complete joint density function for all values of μ has now been plotted in the lower diagram. We see that it peaks at μ = 5. The values computed so far:

μ     p(4)     p(6)     L
3.5   0.3521   0.0175   0.0062
4.0   0.3989   0.0540   0.0215
4.5   0.3521   0.1295   0.0456
5.0   0.2420   0.2420   0.0585
5.5   0.1295   0.3521   0.0456
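These numbers are easy to reproduce. A minimal sketch, assuming NumPy and SciPy are installed (the grid of trial means is just the one used in the slides):

import numpy as np
from scipy.stats import norm

x = np.array([4.0, 6.0])                # the two sample observations
for mu in [3.5, 4.0, 4.5, 5.0, 5.5]:    # trial values of the population mean
    p = norm.pdf(x, loc=mu, scale=1.0)  # densities at X = 4 and X = 6, sigma = 1
    print(f"{mu:3.1f}  {p[0]:.4f}  {p[1]:.4f}  {p.prod():.4f}")

Running this prints the table above, with the joint density peaking at μ = 5.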

10 Now we will look at the mathematics of the example. If X is normally distributed with mean μ and standard deviation σ, its density function is as shown below.
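The equation on the slide is not reproduced in this transcript; it is the standard normal density:

f(X) = \frac{1}{\sigma\sqrt{2\pi}} \exp\!\left( -\frac{(X-\mu)^2}{2\sigma^2} \right)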

11 For the time being, we are assuming σ is equal to 1, so the density function simplifies to the second expression, shown below.
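Reconstructing the simplified expression: setting σ = 1 in the density above gives

f(X) = \frac{1}{\sqrt{2\pi}} \exp\!\left( -\frac{(X-\mu)^2}{2} \right)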

12 Hence we obtain the probability densities for the observations where X = 4 and X = 6.

13 The joint probability density for the two observations in the sample is just the product of their individual densities.
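The slide's own equation is missing from the transcript; reconstructing it from the σ = 1 density above:

\text{joint density} = f(4)\,f(6) = \frac{1}{\sqrt{2\pi}} e^{-(4-\mu)^2/2} \times \frac{1}{\sqrt{2\pi}} e^{-(6-\mu)^2/2}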

14 In maximum likelihood estimation we choose as our estimate of μ the value that gives us the greatest joint density for the observations in our sample. This value is associated with the greatest probability, or maximum likelihood, of obtaining the observations in the sample.

15 In the graphical treatment we saw that this occurs when μ is equal to 5. We will prove that this must be the case mathematically.

16 To do this, we treat the sample values X = 4 and X = 6 as given and use calculus to determine the value of μ that maximizes the expression.

17 When it is regarded in this way, the expression is called the likelihood function for μ, given the sample observations 4 and 6. This is the meaning of L(μ | 4, 6).
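In symbols, combining the two exponents of the product above:

L(\mu \mid 4, 6) = f(4)\,f(6) = \frac{1}{2\pi} \exp\!\left( -\frac{(4-\mu)^2 + (6-\mu)^2}{2} \right)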

18 To maximize the expression, we could differentiate with respect to μ and set the result equal to 0. This would be a little laborious. Fortunately, we can simplify the problem with a trick.

19 log L is a monotonically increasing function of L (meaning that log L increases if L increases and decreases if L decreases).

20 It follows that the value of μ which maximizes log L is the same as the one that maximizes L. As it so happens, it is easier to maximize log L with respect to μ than it is to maximize L.

21 The logarithm of the product of the density functions can be decomposed as the sum of their logarithms.

22 Using the product rule a second time, we can decompose each term as shown.

23 Now one of the basic rules for manipulating logarithms allows us to rewrite the second term as shown.

24 log e is equal to 1, another basic logarithm result. (Remember, as always, we are using natural logarithms, that is, logarithms to base e.)

25 Hence the second term reduces to a simple quadratic in X. And so does the fourth.
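The equations for slides 21 to 25 are missing from the transcript; reconstructing the chain from the σ = 1 density:

\log L = \log f(4) + \log f(6)

\log f(4) = \log\frac{1}{\sqrt{2\pi}} + \log e^{-(4-\mu)^2/2} = \log\frac{1}{\sqrt{2\pi}} - \frac{1}{2}(4-\mu)^2

and similarly for f(6), so that

\log L = 2\log\frac{1}{\sqrt{2\pi}} - \frac{1}{2}(4-\mu)^2 - \frac{1}{2}(6-\mu)^2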

26 We will now choose μ so as to maximize this expression.

27 Quadratic terms of the type in the expression can be expanded as shown.

28 Thus we obtain the differential of the quadratic term.
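Filling in the missing slide algebra, where a stands for a fixed observation (4 or 6):

-\frac{1}{2}(a-\mu)^2 = -\frac{1}{2}a^2 + a\mu - \frac{1}{2}\mu^2, \qquad \frac{d}{d\mu}\!\left[ -\frac{1}{2}(a-\mu)^2 \right] = a - \mu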

29 Applying this result, we obtain the differential of log L with respect to μ. (The first term in the expression for log L disappears completely, since it is not a function of μ.)

30 Thus from the first-order condition we confirm that 5 is the value of μ that maximizes the log-likelihood function, and hence the likelihood function.
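Reconstructing the missing equations:

\frac{d\log L}{d\mu} = (4-\mu) + (6-\mu) = 10 - 2\mu = 0 \quad\Longrightarrow\quad \hat{\mu} = 5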

31 Note that a caret mark has been placed over μ, because we are now talking about the specific value of μ that maximizes the log-likelihood.

32 Note also that the second differential of log L with respect to μ is –2. Since this is negative, we have found a maximum, not a minimum.

33 We will generalize this result to a sample of n observations X_1, ..., X_n. The probability density for X_i is given by the first line.

34 The joint density function for a sample of n observations is the product of their individual densities.

35 Now, treating the sample values as fixed, we can re-interpret the joint density function as the likelihood function for μ, given this sample. We will find the value of μ that maximizes it.

36 We will do this indirectly, as before, by maximizing log L with respect to μ. The logarithm decomposes as shown below.
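Reconstructed by extending the two-observation decomposition to n terms:

\log L = n \log\frac{1}{\sqrt{2\pi}} - \frac{1}{2}\sum_{i=1}^{n}(X_i - \mu)^2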

37 We differentiate log L with respect to μ.

38 The first-order condition for a maximum is that the differential be equal to zero.

39 Thus we have demonstrated that the maximum likelihood estimator of μ is the sample mean. The second differential, –n, is negative, confirming that we have maximized log L.
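The reconstructed condition and its solution:

\frac{d\log L}{d\mu} = \sum_{i=1}^{n}(X_i - \mu) = 0 \quad\Longrightarrow\quad \hat{\mu} = \frac{1}{n}\sum_{i=1}^{n} X_i = \bar{X}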

40 So far we have assumed that σ, the standard deviation of the distribution of X, is equal to 1. We will now relax this assumption and find the maximum likelihood estimator of it.

41 We will illustrate the process graphically with the two-observation example, keeping μ fixed at 5. We will start with σ equal to 2.

42 With σ equal to 2, the probability density is 0.1760 for both X = 4 and X = 6, and the joint density is 0.0310.

43 Now try σ equal to 1. The individual densities are 0.2420, and so the joint density, 0.0585, has increased.

44 Now try putting σ equal to 0.5. The individual densities have fallen and the joint density is only 0.0117.

45 The joint density has now been plotted as a function of σ in the lower diagram. You can see that in this example it is greatest for σ equal to 1. The values computed:

σ     p(4)     p(6)     L
2.0   0.1760   0.1760   0.0310
1.0   0.2420   0.2420   0.0585
0.5   0.1080   0.1080   0.0117
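As before, a minimal sketch reproduces this table, assuming NumPy and SciPy are installed:

import numpy as np
from scipy.stats import norm

x = np.array([4.0, 6.0])             # sample observations, with mu fixed at 5
for s in [2.0, 1.0, 0.5]:            # trial values of the standard deviation
    p = norm.pdf(x, loc=5.0, scale=s)
    print(f"{s:3.1f}  {p[0]:.4f}  {p[1]:.4f}  {p.prod():.4f}")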

46 We will now look at this mathematically, starting with the probability density function for X given μ and σ.

47 The joint density function for the sample of n observations is given by the second line.

48 As before, we can re-interpret this function as the likelihood function for μ and σ, given the sample of observations.

49 We will find the values of μ and σ that maximize this function. We will do this indirectly by maximizing log L.

50 We can decompose the logarithm as shown below. To maximize it, we will set the partial derivatives with respect to μ and σ equal to zero.
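Reconstructing the decomposition from the full density with σ free:

\log L = n \log\frac{1}{\sigma\sqrt{2\pi}} - \frac{1}{2\sigma^2}\sum_{i=1}^{n}(X_i - \mu)^2 = -n\log\sigma + n\log\frac{1}{\sqrt{2\pi}} - \frac{\sigma^{-2}}{2}\sum_{i=1}^{n}(X_i - \mu)^2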

51 When differentiating with respect to μ, the first two terms disappear. We have already seen how to differentiate the other terms.

52 Setting the first differential equal to 0, the maximum likelihood estimate of μ is the sample mean, as before.

53 Next, we take the partial differential of the log-likelihood function with respect to σ.

54 Before doing so, it is convenient to rewrite the equation.

55 The derivative of log σ with respect to σ is 1/σ. The derivative of σ⁻² is –2σ⁻³.

56 Setting the first derivative of log L to zero gives us a condition that must be satisfied by the maximum likelihood estimator.
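The reconstructed condition, using the derivatives stated on the previous slide:

\frac{\partial \log L}{\partial \sigma} = -\frac{n}{\sigma} + \sigma^{-3}\sum_{i=1}^{n}(X_i - \mu)^2 = 0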

57 We have already demonstrated that the maximum likelihood estimator of μ is the sample mean.

58 Hence the maximum likelihood estimator of the population variance is the mean square deviation of X.
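Substituting the sample mean for μ and solving the condition above for σ²:

\hat{\sigma}^2 = \frac{1}{n}\sum_{i=1}^{n}(X_i - \bar{X})^2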

59 Note that it is biased. The unbiased estimator is obtained by dividing by n – 1, not n.

60 However, it can be shown that the maximum likelihood estimator is asymptotically efficient, in the sense of having a smaller mean square error than the unbiased estimator in large samples.

61 Copyright Christopher Dougherty 2012. These slideshows may be downloaded by anyone, anywhere for personal use. Subject to respect for copyright and, where appropriate, attribution, they may be used as a resource for teaching an econometrics course. There is no need to refer to the author. The content of this slideshow comes from Section 10.6 of C. Dougherty, Introduction to Econometrics, fourth edition, 2011, Oxford University Press. Additional (free) resources for both students and instructors may be downloaded from the OUP Online Resource Centre, http://www.oup.com/uk/orc/bin/9780199567089/. Individuals studying econometrics on their own who feel that they might benefit from participation in a formal course should consider the London School of Economics summer school course EC212 Introduction to Econometrics, http://www2.lse.ac.uk/study/summerSchools/summerSchool/Home.aspx, or the University of London International Programmes distance learning course Elements of Econometrics, www.londoninternational.ac.uk/lse. 2012.12.16

