STATISTICAL INFERENCE PART II SOME PROPERTIES OF ESTIMATORS

SOME PROPERTIES OF ESTIMATORS
θ: a parameter of interest; unknown. Previously, we found good(?) estimator(s) for θ or its function g(θ). Goal: check how good these estimators are. Are they good at all? If more than one good estimator is available, which one is better?

SOME PROPERTIES OF ESTIMATORS
UNBIASED ESTIMATOR (UE): An estimator $\hat{\theta}$ is an UE of the unknown parameter θ if $E(\hat{\theta}) = \theta$. Otherwise, it is a biased estimator of θ. The bias of $\hat{\theta}$ for estimating θ is $Bias(\hat{\theta}) = E(\hat{\theta}) - \theta$. If $\hat{\theta}$ is an UE of θ, then $Bias(\hat{\theta}) = 0$.
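
To make bias concrete, here is a minimal simulation sketch (not from the slides; it assumes NumPy is available) comparing the divisor-n and divisor-(n−1) variance estimators:

```python
import numpy as np

# Minimal sketch (not from the slides): estimate by simulation the
# expectation of two variance estimators. True variance is sigma^2 = 4.
rng = np.random.default_rng(0)
n, reps, mu, sigma = 10, 100_000, 0.0, 2.0

samples = rng.normal(mu, sigma, size=(reps, n))
var_n  = samples.var(axis=1, ddof=0)   # divisor n:   biased low
var_n1 = samples.var(axis=1, ddof=1)   # divisor n-1: unbiased

print("divisor n   :", var_n.mean())   # ~ sigma^2*(n-1)/n = 3.6
print("divisor n-1 :", var_n1.mean())  # ~ sigma^2         = 4.0
```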

SOME PROPERTIES OF ESTIMATORS
ASYMPTOTICALLY UNBIASED ESTIMATOR (AUE): An estimator $\hat{\theta}$ is an AUE of the unknown parameter θ if $\lim_{n\to\infty} E(\hat{\theta}) = \theta$.

SOME PROPERTIES OF ESTIMATORS
CONSISTENT ESTIMATOR (CE): An estimator that converges in probability to the unknown parameter θ, i.e. $\hat{\theta} \xrightarrow{p} \theta$, for all θ is called a CE of θ. For large n, a CE tends to be closer to the unknown population parameter. MLEs are generally CEs.

EXAMPLE
For a r.s. of size n, consider the sample mean $\bar{X}_n$. By the WLLN, $\bar{X}_n \xrightarrow{p} \mu$, so the sample mean is a CE of the population mean μ.
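
A one-line justification of the WLLN step, via Chebyshev's inequality (assuming Var(Xi) = σ² < ∞):

```latex
P\!\left(|\bar{X}_n - \mu| \ge \varepsilon\right)
  \le \frac{\mathrm{Var}(\bar{X}_n)}{\varepsilon^2}
  = \frac{\sigma^2}{n\varepsilon^2}
  \xrightarrow[n \to \infty]{} 0
  \quad \text{for every } \varepsilon > 0,
  \qquad \text{i.e. } \bar{X}_n \xrightarrow{p} \mu .
```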

MEAN SQUARED ERROR (MSE)
The Mean Squared Error (MSE) of an estimator $\hat{\theta}$ for estimating θ is $MSE(\hat{\theta}) = E[(\hat{\theta} - \theta)^2] = Var(\hat{\theta}) + [Bias(\hat{\theta})]^2$. If $MSE(\hat{\theta}_1) < MSE(\hat{\theta}_2)$, then $\hat{\theta}_1$ is a better estimator of θ than $\hat{\theta}_2$.
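
The bias–variance decomposition used above follows by adding and subtracting $E(\hat{\theta})$ inside the square; the cross term has expectation zero:

```latex
\mathrm{MSE}(\hat{\theta})
  = E\!\left[\left(\hat{\theta} - E(\hat{\theta}) + E(\hat{\theta}) - \theta\right)^2\right]
  = \underbrace{E\!\left[\left(\hat{\theta} - E(\hat{\theta})\right)^2\right]}_{\mathrm{Var}(\hat{\theta})}
  + \underbrace{\left(E(\hat{\theta}) - \theta\right)^2}_{[\mathrm{Bias}(\hat{\theta})]^2}.
```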

MEAN SQUARED ERROR CONSISTENCY
$\hat{\theta}$ is called mean squared error consistent (or consistent in quadratic mean) if $E[(\hat{\theta} - \theta)^2] \to 0$ as $n \to \infty$.
Theorem: $\hat{\theta}$ is consistent in MSE iff i) $Var(\hat{\theta}) \to 0$ and ii) $\lim_{n\to\infty} E(\hat{\theta}) = \theta$.
If $E[(\hat{\theta} - \theta)^2] \to 0$ as $n \to \infty$, $\hat{\theta}$ is also a CE of θ.
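
A minimal simulation sketch (not from the slides; assumes NumPy) showing the empirical MSE of $\bar{X}$ shrinking like σ²/n:

```python
import numpy as np

# Minimal sketch (not from the slides): empirical MSE of the sample mean
# for growing n, illustrating MSE consistency. True mean is mu = 5.
rng = np.random.default_rng(1)
mu, sigma, reps = 5.0, 3.0, 50_000

for n in (10, 100, 1000):
    xbar = rng.normal(mu, sigma, size=(reps, n)).mean(axis=1)
    mse = np.mean((xbar - mu) ** 2)  # should be close to sigma^2 / n
    print(f"n={n:5d}  empirical MSE={mse:.4f}  sigma^2/n={sigma**2/n:.4f}")
```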

EXAMPLES
X ~ Exp(θ), θ > 0. For a r.s. of size n, consider estimators of θ and discuss their bias and consistency.
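
The slide's list of candidate estimators did not survive extraction, so the following sketch assumes two common choices, with the parametrization E(X) = θ:

```latex
\hat{\theta}_1 = \bar{X}:\quad
  E(\bar{X}) = \theta \ \text{(unbiased)},\qquad
  \mathrm{Var}(\bar{X}) = \frac{\theta^2}{n} \to 0
  \ \Rightarrow\ \text{MSE consistent.}

\hat{\theta}_2 = X_1:\quad
  E(X_1) = \theta \ \text{(unbiased)},\qquad
  \mathrm{Var}(X_1) = \theta^2 \not\to 0
  \ \Rightarrow\ \text{not consistent.}
```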

SUFFICIENT STATISTICS
Let X have pdf/pmf f(x; θ), θ ∈ Ω, and let X1, X2, …, Xn be a r.s. on X. Y = U(X1, X2, …, Xn) is a statistic. A sufficient statistic, Y, is a statistic which contains all the information in the sample for the estimation of θ.

SUFFICIENT STATISTICS
Given the value of Y, the sample contains no further information for the estimation of θ. Y is a sufficient statistic (ss) for θ if the conditional distribution h(x1, x2, …, xn | y) does not depend on θ for every given Y = y. A ss for θ is not unique: if Y is a ss for θ, then any 1-1 transformation of Y, say Y1 = g(Y), is also a ss for θ.

SUFFICIENT STATISTICS
The conditional distribution of the sample r.v.s given the value y of Y is defined as $h(x_1, x_2, \ldots, x_n \mid y) = \frac{f(x_1, x_2, \ldots, x_n; \theta)}{g(y; \theta)}$, where g(y; θ) is the pdf/pmf of Y. If Y is a ss for θ, then $h(x_1, x_2, \ldots, x_n \mid y)$ does not depend on θ for every given y: it may involve y or constants, but not θ. Also, the conditional range of the Xi given y must not depend on θ.

SUFFICIENT STATISTICS
EXAMPLE: X ~ Ber(p). For a r.s. of size n, show that $Y = \sum_{i=1}^{n} X_i$ is a ss for p.
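
A sketch of the argument: with $y = \sum_i x_i$, we have $Y \sim Bin(n, p)$, and the conditional distribution of the sample given Y = y is

```latex
P(X_1 = x_1, \ldots, X_n = x_n \mid Y = y)
  = \frac{p^{y}(1-p)^{n-y}}{\binom{n}{y} p^{y} (1-p)^{n-y}}
  = \frac{1}{\binom{n}{y}},
```

which does not depend on p, so Y is a ss for p.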

SUFFICIENT STATISTICS
Neyman's Factorization Theorem: Y is a ss for θ iff the likelihood function factors as $L(\theta; x_1, \ldots, x_n) = k_1(y; \theta)\, k_2(x_1, \ldots, x_n)$, where k1 and k2 are non-negative functions: k1 does not depend on the xi except through y, and k2 does not depend on θ (also in the range of the xi).

EXAMPLES
1. X ~ Ber(p). For a r.s. of size n, find a ss for p if one exists.
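
A sketch via the factorization theorem, with $y = \sum_i x_i$:

```latex
L(p) = \prod_{i=1}^{n} p^{x_i}(1-p)^{1-x_i}
     = \underbrace{p^{y}(1-p)^{n-y}}_{k_1(y;\,p)}
       \cdot \underbrace{1}_{k_2(x_1,\ldots,x_n)}
  \quad\Rightarrow\quad
  Y = \sum_{i=1}^{n} X_i \ \text{is a ss for } p.
```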

EXAMPLES
2. X ~ Beta(θ, 2). For a r.s. of size n, find a ss for θ.
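
A sketch: using $\Gamma(2) = 1$ and $\Gamma(\theta+2)/\Gamma(\theta) = \theta(\theta+1)$, the Beta(θ, 2) likelihood factors as

```latex
L(\theta) = \prod_{i=1}^{n} \theta(\theta+1)\, x_i^{\theta-1}(1-x_i)
  = \underbrace{\left[\theta(\theta+1)\right]^n \left(\prod_i x_i\right)^{\theta-1}}_{k_1\left(\prod_i x_i;\, \theta\right)}
    \cdot \underbrace{\prod_i (1-x_i)}_{k_2(x_1,\ldots,x_n)},
```

so $Y = \prod_{i=1}^{n} X_i$ is a ss for θ.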

SUFFICIENT STATISTICS
A ss that reduces the dimension may not exist; jointly ss (Y1, Y2, …, Yk) may be needed. Example: in Example 10.2.5 in Bain and Engelhardt (page 342 in 2nd edition), X(1) and X(n) are jointly ss for the unknown parameter(s). If the MLE of θ exists and is unique, and if a ss for θ exists, then the MLE is a function of a ss for θ.

EXAMPLE
X ~ N(μ, σ²). For a r.s. of size n, find jointly ss for μ and σ².
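
A sketch via factorization:

```latex
L(\mu, \sigma^2)
  = (2\pi\sigma^2)^{-n/2}
    \exp\!\left\{-\frac{1}{2\sigma^2}
      \left(\sum_i x_i^2 - 2\mu \sum_i x_i + n\mu^2\right)\right\},
```

which depends on the data only through $\left(\sum_i x_i, \sum_i x_i^2\right)$, so these are jointly ss for (μ, σ²); equivalently, $(\bar{X}, S^2)$ are jointly ss.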

MINIMAL SUFFICIENT STATISTICS
If $Y = \sum_{i=1}^{n} X_i$ is a ss for θ, then the whole sample (X1, X2, …, Xn) is also a ss for θ; but the first one does a better job in data reduction. A minimal ss achieves the greatest possible reduction.

MINIMAL SUFFICIENT STATISTICS
A ss T(X) is called a minimal ss if, for any other ss T′(X), T(x) is a function of T′(x).
THEOREM: Let f(x; θ) be the pmf or pdf of a sample X1, X2, …, Xn. Suppose there exists a function T(x) such that, for any two sample points x = (x1, x2, …, xn) and y = (y1, y2, …, yn), the ratio $\frac{f(x; \theta)}{f(y; \theta)}$ is constant with respect to θ iff T(x) = T(y). Then T(X) is a minimal sufficient statistic for θ.

EXAMPLE
X ~ N(μ, σ²) where σ² is known. For a r.s. of size n, find a minimal ss for μ. Note: a minimal ss is also not unique; any 1-to-1 function of a minimal ss is also a minimal ss.
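
A sketch using the ratio criterion above, with σ² known:

```latex
\frac{f(\mathbf{x}; \mu)}{f(\mathbf{y}; \mu)}
  = \exp\!\left\{-\frac{1}{2\sigma^2}\left(\sum_i x_i^2 - \sum_i y_i^2\right)
                + \frac{\mu}{\sigma^2}\left(\sum_i x_i - \sum_i y_i\right)\right\},
```

which is constant in μ iff $\sum_i x_i = \sum_i y_i$; hence $T = \sum_i X_i$ (equivalently $\bar{X}$) is a minimal ss for μ.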

ANCILLARY STATISTIC
A statistic S(X) whose distribution does not depend on the parameter θ is called an ancillary statistic. Unlike a ss, an ancillary statistic contains no information about θ.

Example
Example 6.1.8 in Casella & Berger, page 257: let Xi ~ Unif(θ, θ+1) for i = 1, 2, …, n. Then the range R = X(n) − X(1) is an ancillary statistic, because its pdf does not depend on θ.
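
A sketch of why R is ancillary: write each observation as a shifted uniform,

```latex
X_i = \theta + U_i, \quad U_i \sim \mathrm{Unif}(0, 1)
\ \Rightarrow\
R = X_{(n)} - X_{(1)} = U_{(n)} - U_{(1)},
```

whose distribution involves only the Ui and hence is free of θ.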

COMPLETENESS
Let {f(x; θ); θ ∈ Ω} be a family of pdfs (or pmfs) and let U(x) be an arbitrary function of x not depending on θ. If $E_\theta[U(X)] = 0$ for all θ ∈ Ω requires that U(x) = 0 for all possible values of x, then we say that this family is a complete family of pdfs (or pmfs). I.e., the only unbiased estimator of 0 is 0 itself.

EXAMPLES
1. Show that the family {Bin(n = 2, θ); 0 < θ < 1} is complete.
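
A sketch: for X ~ Bin(2, θ), write the expectation of an arbitrary u(X) as a polynomial in θ:

```latex
E_\theta[u(X)]
  = u(0)(1-\theta)^2 + 2u(1)\theta(1-\theta) + u(2)\theta^2
  = u(0) + 2\left[u(1)-u(0)\right]\theta + \left[u(0) - 2u(1) + u(2)\right]\theta^2 .
```

A polynomial that vanishes for all 0 < θ < 1 must have all coefficients zero, which forces u(0) = u(1) = u(2) = 0; hence the family is complete.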

EXAMPLES
2. X ~ Uniform(−θ, θ). Show that the family {f(x; θ); θ > 0} is not complete.
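
With the family read as Uniform(−θ, θ) (the slide's symbols did not extract, so this form is an assumption), a sketch: take u(x) = x. By symmetry,

```latex
E_\theta[u(X)] = E_\theta(X) = \int_{-\theta}^{\theta} \frac{x}{2\theta}\, dx = 0
\quad \text{for all } \theta > 0,
```

yet u(x) = x is not identically 0, so the family is not complete.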

COMPLETE AND SUFFICIENT STATISTICS (css)
Y is a complete and sufficient statistic (css) for θ if:
1) Y is a ss for θ, and
2) the family {g(y; θ); θ ∈ Ω} of pdfs of Y is complete: for u(Y) an arbitrary function of Y, E(u(Y)) = 0 for all θ implies that u(y) = 0 for all possible values y of Y.

BASU THEOREM
If T(X) is a complete and minimal sufficient statistic, then T(X) is independent of every ancillary statistic.
Example: X ~ N(μ, σ²) with σ² known. $\bar{X}$ is a css for μ; $(n-1)S^2/\sigma^2 \sim \chi^2_{n-1}$, so S² is an ancillary statistic for μ. By Basu's theorem, $\bar{X}$ and S² are independent.

BASU THEOREM
Example: Let X1, X2 ~ N(μ, σ²) be independent with σ² known, and let T = X1 + X2 and U = X1 − X2. We know that T is a complete minimal ss for μ. U ~ N(0, 2σ²), whose distribution is free of μ, so U is ancillary. By Basu's theorem, T and U are independent.

Problems
Let X1, X2, …, Xn be a random sample from a Bernoulli distribution with parameter p. Find the maximum likelihood estimator (MLE) of p. Is this an unbiased estimator of p?
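
A solution sketch (not part of the original slides):

```latex
L(p) = p^{\sum x_i}(1-p)^{\,n - \sum x_i}
\ \Rightarrow\
\frac{d}{dp}\ln L(p) = \frac{\sum x_i}{p} - \frac{n - \sum x_i}{1-p} = 0
\ \Rightarrow\ \hat{p} = \bar{X}.
```

Since $E(\hat{p}) = E(\bar{X}) = p$, the MLE is unbiased.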

Problems
If X1, X2, …, Xn are normally distributed random variables with mean μ and variance σ², what is an unbiased estimator of σ²?
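
A solution sketch (not part of the original slides): the sample variance with divisor n − 1 is unbiased, since

```latex
E\!\left[\sum_i (X_i - \bar{X})^2\right]
  = \sum_i E(X_i - \mu)^2 - n\,E(\bar{X} - \mu)^2
  = n\sigma^2 - \sigma^2 = (n-1)\sigma^2
\ \Rightarrow\
E(S^2) = E\!\left[\frac{\sum_i (X_i - \bar{X})^2}{n-1}\right] = \sigma^2 .
```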

Problems
Suppose that X1, X2, …, Xn are i.i.d. random variables on the interval [0, 1] with density function f(x; α), where α > 0 is a parameter to be estimated from the sample. Find a sufficient statistic for α.
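
The density itself did not survive extraction; assuming the common textbook form $f(x; \alpha) = \alpha x^{\alpha-1}$ on [0, 1], a sketch via factorization:

```latex
L(\alpha) = \prod_{i=1}^{n} \alpha\, x_i^{\alpha-1}
  = \underbrace{\alpha^{n} \left(\prod_i x_i\right)^{\alpha-1}}_{k_1\left(\prod_i x_i;\, \alpha\right)}
    \cdot \underbrace{1}_{k_2}
\ \Rightarrow\
T = \prod_{i=1}^{n} X_i \ \left(\text{or } \sum_i \ln X_i\right)
\ \text{is sufficient for } \alpha .
```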