Likelihood Ratio Tests
The origin and properties of using the likelihood ratio in hypothesis testing
Teresa Wollschied and Colleen Kenney

Outline
- Background/History
- Likelihood Function
- Hypothesis Testing
- Introduction to Likelihood Ratio Tests
- Examples
- References

Jerzy Neyman (1894–1981)
- April 16, 1894: Born in Bendery, Russia (now Moldova); Russian version of his name: Yuri Czeslawovich.
- 1906: Father died; Neyman and his mother moved to Kharkov.
- 1912: Began studying both physics and mathematics at the University of Kharkov, where Professor Sergei Bernstein introduced him to probability.
- 1919: Traveled south to Crimea and met Olga Solodovnikova. In 1920, ten days after their wedding, he was imprisoned for six weeks in Kharkov.
- 1921: Moved to Poland and worked as an assistant statistical analyst at the Agricultural Institute in Bromberg, then at the State Meteorological Institute in Warsaw.

Neyman biography (continued)
- Became an assistant at Warsaw University and taught at the College of Agriculture; earned a doctorate for a thesis that applied probability to agricultural experimentation.
- 1925: Received a Rockefeller fellowship to study at University College London with Karl Pearson (where he met Egon Pearson).
- 1926–1927: Went to Paris. Visited by Egon Pearson in 1927; they began collaborative work on testing hypotheses.
- 1934–1938: Took a position at University College London.
- 1938: Offered a position at UC Berkeley. Set up the Statistical Laboratory within the Department of Mathematics; Statistics became a separate department in 1955.
- Died on August 5, 1981.

Egon Pearson (1895–1980)
- August 11, 1895: Born in Hampstead, England; middle child of Karl Pearson.
- Attended the Dragon School, Oxford, then Winchester College.
- 1914: Started at Cambridge; his studies were interrupted by influenza.
- 1915: Joined the war effort at the Admiralty and the Ministry of Shipping.
- 1920: Awarded a B.A. by taking the Military Special Examination; began research in solar physics, attending lectures by Eddington.
- 1921: Became a lecturer at University College London alongside his father.
- 1924: Became assistant editor of Biometrika.

Pearson biography (continued)
- 1925: Met Neyman and corresponded with him by letter while Neyman was in Paris; also corresponded with Gosset during the same period.
- 1933: After his father retired, became Head of the Department of Applied Statistics.
- 1935: Won the Weldon Prize for work done with Neyman and began work on revising the Tables for Statisticians and Biometricians (editions published 1954 and 1972).
- 1939: Did war work, eventually receiving a C.B.E.
- 1961: Retired from University College London.
- 1966: Retired as Managing Editor of Biometrika.
- Died June 12, 1980.

Likelihood and Hypothesis Testing
"On the Use and Interpretation of Certain Test Criteria for Purposes of Statistical Inference, Part I" (Biometrika, 1928): likelihood ratio tests are explained in detail by Neyman and Pearson.
"Probability is a ratio of frequencies and this relative measure cannot be termed the ratio of probabilities of the hypotheses, unless we speak of probability a posteriori and postulate some a priori frequency distribution of sampled populations. Fisher has therefore introduced the term likelihood, and calls this comparative measure the ratio of the two hypotheses."

Likelihood and Hypothesis Testing
"On the Problem of the Most Efficient Tests of Statistical Hypotheses" (Philosophical Transactions of the Royal Society of London, 1933): the concept of developing an "efficient" test is expanded upon.
"Without hoping to know whether each hypothesis is true or false, we may search for rules to govern our behavior with regard to them, in following which we insure that, in the long run of experience, we shall not be too often wrong."

Likelihood Function
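A sketch of the standard definition (following Casella and Berger, cited in the references): for observed data x = (x1, ..., xn) with joint pdf or pmf f(x | θ), the likelihood is the joint density viewed as a function of the parameter θ,

\[ L(\theta \mid \mathbf{x}) = f(\mathbf{x} \mid \theta) = \prod_{i=1}^{n} f(x_i \mid \theta) \quad \text{for an i.i.d. sample}, \]

and a maximum likelihood estimator (MLE) \(\hat{\theta}\) is any value of θ that maximizes L(θ | x).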

Hypothesis Testing
- Define a test statistic T = r(x).
- Rejection region: R = {x : T > c} for some constant c.
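As a concrete instance of this setup (an assumed example, not from the slides): for X1, ..., Xn i.i.d. N(μ, σ²) with σ² known, to test H0: μ = μ0 against Ha: μ > μ0 one may take T = r(x) = x̄ and reject H0 on R = {x : x̄ > c}, with c chosen to control the probability of rejecting a true H0.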

Power Function
- The probability that a test will reject H0 is given by the power function β(θ), defined below.
- Size α test and level α test: both are defined by the behavior of β(θ) over the null parameter space Θ0, as shown below.
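In the standard notation (following Casella and Berger), for a test with rejection region R:

\[ \beta(\theta) = P_\theta(\mathbf{X} \in R), \qquad \text{size } \alpha:\ \sup_{\theta \in \Theta_0} \beta(\theta) = \alpha, \qquad \text{level } \alpha:\ \sup_{\theta \in \Theta_0} \beta(\theta) \le \alpha. \]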

Types of Error
- Type I error: rejecting H0 when H0 is true.
- Type II error: accepting H0 when H0 is false.
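In terms of the power function β(θ) defined above, these error probabilities are

\[ P(\text{Type I error}) = \beta(\theta) \ \text{for } \theta \in \Theta_0, \qquad P(\text{Type II error}) = 1 - \beta(\theta) \ \text{for } \theta \notin \Theta_0. \]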

Likelihood Ratio Test (LRT)
- The LRT statistic for testing H0: θ ∈ Θ0 vs. Ha: θ ∈ Θa is defined below.
- An LRT is any test that has a rejection region of the form {x : λ(x) ≤ c}, where c is any number such that 0 ≤ c ≤ 1.
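A sketch of the statistic in its standard form (following Casella and Berger); here L(θ | x) is the likelihood function and Θ = Θ0 ∪ Θa:

\[ \lambda(\mathbf{x}) = \frac{\sup_{\theta \in \Theta_0} L(\theta \mid \mathbf{x})}{\sup_{\theta \in \Theta} L(\theta \mid \mathbf{x})}, \qquad 0 \le \lambda(\mathbf{x}) \le 1. \]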

Uniformly Most Powerful (UMP) Test
Let δ be a test procedure for testing H0: θ ∈ Θ0 vs. Ha: θ ∈ Θa, with level of significance α0. Then δ, with power function π(θ | δ), is a UMP level α0 test if:
(1) π(θ | δ) ≤ α0 for every θ ∈ Θ0, and
(2) for every test procedure δ′ with π(θ | δ′) ≤ α0 for every θ ∈ Θ0, we have π(θ | δ′) ≤ π(θ | δ) for every θ ∈ Θa.

Neyman-Pearson Lemma
Consider testing H0: θ = θ0 vs. Ha: θ = θ1, where the pdf or pmf corresponding to θi is f(x | θi), i = 0, 1, using a test with rejection region R that satisfies
(1) x ∈ R if f(x | θ1) > k f(x | θ0) and x ∈ R^c if f(x | θ1) < k f(x | θ0), for some k ≥ 0, and
(2) α = P_θ0(X ∈ R).

Neyman-Pearson Lemma (continued)
Then:
(a) Any test that satisfies (1) and (2) is a UMP level α test.
(b) If there exists a test satisfying (1) and (2) with k > 0, then every UMP level α test is a size α test (satisfies (2)), and every UMP level α test satisfies (1) except perhaps on a set A satisfying P_θ0(X ∈ A) = P_θ1(X ∈ A) = 0.
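As an illustrative sketch (an assumed example, not taken from the slides): for X1, ..., Xn i.i.d. N(θ, 1) and H0: θ = 0 vs. Ha: θ = 1, the ratio

\[ \frac{f(\mathbf{x} \mid 1)}{f(\mathbf{x} \mid 0)} = \exp\!\Big(n\bar{x} - \tfrac{n}{2}\Big) \]

is increasing in x̄, so condition (1) gives a rejection region of the form {x : x̄ > c}; choosing c so that P_{θ=0}(X̄ > c) = α satisfies (2), and by the lemma this test is UMP level α.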

Proof: Neyman-Pearson Lemma

Proof: Neyman-Pearson Lemma (cont’d)
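A sketch of the usual argument for part (a), assuming the slides follow the test-function proof in Casella and Berger: let φ(x) = 1 if x ∈ R and 0 otherwise for a test satisfying (1) and (2), and let φ′ be the test function of any other level α test, with power functions β and β′. Condition (1) forces

\[ \big(\varphi(\mathbf{x}) - \varphi'(\mathbf{x})\big)\big(f(\mathbf{x} \mid \theta_1) - k\, f(\mathbf{x} \mid \theta_0)\big) \ge 0 \quad \text{for all } \mathbf{x}, \]

since φ = 1 wherever f(x | θ1) > k f(x | θ0) and φ = 0 wherever f(x | θ1) < k f(x | θ0). Integrating (or summing) over the sample space gives

\[ 0 \le \big[\beta(\theta_1) - \beta'(\theta_1)\big] - k\big[\beta(\theta_0) - \beta'(\theta_0)\big], \]

and since β(θ0) = α by (2) and β′(θ0) ≤ α for a level α test, it follows that β(θ1) − β′(θ1) ≥ k[β(θ0) − β′(θ0)] ≥ 0, so the test satisfying (1) and (2) is at least as powerful as any other level α test.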

LRTs and MLEs
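A sketch of the standard connection (following Casella and Berger): if θ̂ is the unrestricted MLE of θ (maximizing L(θ | x) over Θ) and θ̂0 is the MLE restricted to Θ0, then the LRT statistic can be computed as

\[ \lambda(\mathbf{x}) = \frac{L(\hat{\theta}_0 \mid \mathbf{x})}{L(\hat{\theta} \mid \mathbf{x})}. \]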

Example: Normal LRT
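A sketch of the usual setup for this example, assuming (as is standard) X1, ..., Xn i.i.d. N(θ, 1) and H0: θ = θ0 vs. Ha: θ ≠ θ0. The unrestricted MLE is θ̂ = x̄, so

\[ \lambda(\mathbf{x}) = \frac{\exp\!\big[-\tfrac{1}{2}\sum_{i=1}^{n}(x_i-\theta_0)^2\big]}{\exp\!\big[-\tfrac{1}{2}\sum_{i=1}^{n}(x_i-\bar{x})^2\big]} = \exp\!\big[-\tfrac{n}{2}(\bar{x}-\theta_0)^2\big]. \]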

Example: Normal LRT (continued)
We reject H0 if λ(x) ≤ c. Therefore, the LRTs are those tests that reject H0 if the sample mean differs from the value θ0 by more than a fixed amount determined by c, as made explicit below.
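Continuing the sketch above (with unit variance assumed): λ(x) ≤ c is equivalent to

\[ |\bar{x} - \theta_0| \ \ge\ \sqrt{-2(\log c)/n}, \]

so smaller values of c correspond to a larger rejection threshold on |x̄ − θ0|.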

Example: Size of the Normal LRT
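Under the same assumptions, the size of the test that rejects when |x̄ − θ0| ≥ k follows from X̄ ~ N(θ0, 1/n) under H0:

\[ \alpha = P_{\theta_0}\big(|\bar{X} - \theta_0| \ge k\big) = P\big(|Z| \ge k\sqrt{n}\big), \qquad Z \sim N(0,1), \]

so a size α test takes k = z_{α/2}/√n, where z_{α/2} is the upper α/2 standard normal quantile.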

Sufficient Statistics and LRTs
Theorem: If T(X) is a sufficient statistic for θ, and λ*(t) and λ(x) are the LRT statistics based on T and X, respectively, then λ*(T(x)) = λ(x) for every x in the sample space.
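For instance (an assumed illustration): in the normal LRT above, T(X) = X̄ is sufficient for θ, and the LRT based on X̄ ~ N(θ, 1/n) alone is

\[ \lambda^*(\bar{x}) = \frac{\exp\!\big[-\tfrac{n}{2}(\bar{x}-\theta_0)^2\big]}{\sup_{\theta}\exp\!\big[-\tfrac{n}{2}(\bar{x}-\theta)^2\big]} = \exp\!\big[-\tfrac{n}{2}(\bar{x}-\theta_0)^2\big] = \lambda(\mathbf{x}), \]

the same statistic obtained from the full sample.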

Example: Normal LRT with unknown variance

Example: Normal LRT with unknown variance (cont’d)
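A sketch of what this example typically covers (the slides' algebra is assumed, not transcribed): X1, ..., Xn i.i.d. N(μ, σ²) with both parameters unknown, testing H0: μ = μ0 vs. Ha: μ ≠ μ0. Maximizing the likelihood over Θ0 (μ = μ0 fixed) and over the full parameter space gives

\[ \lambda(\mathbf{x}) = \left(\frac{\hat{\sigma}^2}{\hat{\sigma}_0^2}\right)^{n/2} = \left(1 + \frac{t^2}{n-1}\right)^{-n/2}, \qquad \hat{\sigma}^2 = \tfrac{1}{n}\sum_i (x_i-\bar{x})^2,\ \ \hat{\sigma}_0^2 = \tfrac{1}{n}\sum_i (x_i-\mu_0)^2,\ \ t = \frac{\bar{x}-\mu_0}{s/\sqrt{n}}, \]

which is a decreasing function of |t|, so rejecting for small λ is equivalent to rejecting for large |t|: the two-sided one-sample t-test.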

Asymptotic Distribution of the LRT – Simple H 0

Asymptotic Distribution of the LRT – Simple H 0 (cont’d)
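A sketch of the standard result (Wilks) presumably stated on these slides: for testing a simple H0: θ = θ0 against Ha: θ ≠ θ0, under regularity conditions and under H0,

\[ -2 \log \lambda(\mathbf{X}) \ \xrightarrow{d}\ \chi^2_1 \quad \text{as } n \to \infty, \]

so an approximate size α test rejects H0 when −2 log λ(x) exceeds the upper α quantile of the χ²₁ distribution. More generally, the degrees of freedom equal the difference between the number of free parameters under Θ and under Θ0.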

Restrictions
When a UMP test does not exist, other methods must be used: restrict attention to a subset of tests (for example, unbiased tests) and search for a test that is UMP within that class.

References
Casella, G. and Berger, R. L. (2002). Statistical Inference. Duxbury: Pacific Grove, CA.
Neyman, J. and Pearson, E. (1928). "On the Use and Interpretation of Certain Test Criteria for Purposes of Statistical Inference, Part I." Biometrika, Vol. 20A, No. 1/2, pp. 175-240.
Neyman, J. and Pearson, E. (1933). "On the Problem of the Most Efficient Tests of Statistical Hypotheses." Philosophical Transactions of the Royal Society of London, Vol. 231, pp. 289-337.