Confidence Interval & Unbiased Estimator Review and Foreword.

Central limit theorem vs. the weak law of large numbers
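The two results can be contrasted in a small simulation (a sketch with made-up settings: Uniform(0,1) draws, n = 400). The weak law says the sample mean concentrates near μ; the central limit theorem says the standardized sample mean is approximately N(0,1).

```python
import math
import random

random.seed(0)

def sample_mean(n):
    """Mean of n i.i.d. Uniform(0,1) draws (mu = 0.5, sigma^2 = 1/12)."""
    return sum(random.random() for _ in range(n)) / n

mu, sigma = 0.5, math.sqrt(1 / 12)
n, reps = 400, 2000

means = [sample_mean(n) for _ in range(reps)]

# Weak law: the sample mean concentrates around mu as n grows.
print(abs(sum(means) / reps - mu))  # close to 0

# CLT: sqrt(n)*(Xbar - mu)/sigma is approximately standard normal,
# so about 95% of the standardized means fall inside +/-1.96.
z = [math.sqrt(n) * (m - mu) / sigma for m in means]
coverage = sum(1 for v in z if abs(v) <= 1.96) / reps
print(coverage)  # roughly 0.95
```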

Weak law vs. strong law Personal research: search the web or the library, compare the two laws, and explain the difference.


Maximum likelihood estimator Suppose the i.i.d. random variables X1, X2, …, Xn, whose joint distribution is given except for an unknown parameter θ, are to be observed and constitute a random sample. By independence the joint density factors as f(x1, x2, …, xn | θ) = f(x1 | θ) f(x2 | θ) … f(xn | θ). For the observed sample (x1, x2, …, xn), this likelihood function depends only on θ, and the maximum likelihood estimate is the value of θ that makes the observed sample most likely. In practice: differentiate the (log-)likelihood with respect to θ, set the first-order condition equal to zero, solve for θ, and then replace the observed values x1, …, xn by the random variables X1, …, Xn to obtain the estimator.
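As a concrete case, the recipe above can be carried out for Bernoulli trials with unknown success probability p (a sketch with simulated data; setting the derivative of the log-likelihood to zero gives the closed form p̂ = k/n):

```python
import math
import random

random.seed(1)

# Bernoulli(p) trials with unknown p; the joint pmf factors by independence:
# f(x1,...,xn | p) = p^k (1-p)^(n-k), where k = number of successes.
p_true = 0.3
x = [1 if random.random() < p_true else 0 for _ in range(1000)]
n, k = len(x), sum(x)

def log_likelihood(p):
    return k * math.log(p) + (n - k) * math.log(1 - p)

# First-order condition: d/dp log L = k/p - (n-k)/(1-p) = 0  =>  p_hat = k/n.
p_hat = k / n

# Sanity check: a grid search over p should not beat the closed form.
grid = [i / 1000 for i in range(1, 1000)]
p_grid = max(grid, key=log_likelihood)
print(p_hat, p_grid)  # both close to p_true = 0.3
```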

Confidence interval

Confidence vs. probability Probability describes the distribution of a random variable; before sampling, a confidence interval is such a random interval. Confidence (trust) describes how far a specific sampling outcome can be relied on to capture the reality (the population parameter): once the sample is drawn, the computed interval either covers the true value or it does not, so we speak of 95% confidence rather than 95% probability.
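The distinction can be made concrete with a repeated-sampling sketch (hypothetical settings, σ known): across many samples, about 95% of the computed intervals cover the true μ, which is the sense in which a single computed interval carries 95% confidence.

```python
import math
import random

random.seed(2)
mu, sigma, n, reps = 10.0, 2.0, 50, 2000
z = 1.96  # 97.5% standard normal quantile

hits = 0
for _ in range(reps):
    sample = [random.gauss(mu, sigma) for _ in range(n)]
    xbar = sum(sample) / n
    half = z * sigma / math.sqrt(n)  # known-sigma z-interval half-width
    if xbar - half <= mu <= xbar + half:
        hits += 1

# "95% confidence" is a statement about the procedure, not about one interval:
print(hits / reps)  # roughly 0.95
```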

100(1-α)% Confidence intervals
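The slide's formulas are not reproduced in this transcript; as a sketch, the standard 100(1−α)% z-interval for μ with known σ, x̄ ± z(α/2)·σ/√n, can be computed as follows (the data values here are made up):

```python
import math
import statistics

# Hypothetical measurements; sigma is assumed known, so the z-interval applies.
data = [10.2, 9.8, 10.5, 10.1, 9.9, 10.3, 10.0, 9.7]
sigma = 0.3
z = 1.96  # z(alpha/2) for alpha = 0.05

n = len(data)
xbar = statistics.mean(data)
half = z * sigma / math.sqrt(n)
print((xbar - half, xbar + half))  # 95% confidence interval for mu
```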

100(1-α)% confidence intervals for (μ 1 - μ 2 )
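Again as a sketch with made-up summary statistics: when both population variances are known, the interval for the difference of means is (x̄1 − x̄2) ± z(α/2)·√(σ1²/n1 + σ2²/n2).

```python
import math

# Hypothetical summary statistics from two independent samples
# (population variances assumed known, so the z-interval applies).
xbar1, var1, n1 = 5.2, 1.0, 40
xbar2, var2, n2 = 4.7, 1.5, 50
z = 1.96  # z(alpha/2) for alpha = 0.05

half = z * math.sqrt(var1 / n1 + var2 / n2)
ci = (xbar1 - xbar2 - half, xbar1 - xbar2 + half)
print(ci)  # 95% confidence interval for mu1 - mu2
```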

Approximate 100(1-α)% confidence intervals for p
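The large-sample (normal-approximation) interval for a proportion is p̂ ± z(α/2)·√(p̂(1 − p̂)/n); a sketch with a hypothetical 420 successes in 1000 trials:

```python
import math

# Hypothetical poll: 420 successes out of 1000 independent trials.
successes, n = 420, 1000
z = 1.96  # z(alpha/2) for alpha = 0.05

p_hat = successes / n
half = z * math.sqrt(p_hat * (1 - p_hat) / n)
print((p_hat - half, p_hat + half))  # approximate 95% CI for p
```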

Unbiased estimators

Linear combination of several unbiased estimators If d1, d2, …, dn are independent unbiased estimators of θ, then any weighted combination of the form d = λ1 d1 + λ2 d2 + … + λn dn with λ1 + λ2 + … + λn = 1 is also an unbiased estimator of θ. The mean square error of any estimator equals its variance plus the square of its bias:
r(d, θ) = E[(d(X) − θ)²] = E[(d − E[d])²] + (E[d] − θ)²
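The variance-plus-squared-bias decomposition can be checked numerically (a sketch using a deliberately biased estimator, 0.9·x̄, of a normal mean):

```python
import random

random.seed(3)
theta = 2.0
n, reps = 30, 4000

# A deliberately biased estimator of the mean: shrink xbar toward 0.
estimates = []
for _ in range(reps):
    sample = [random.gauss(theta, 1.0) for _ in range(n)]
    xbar = sum(sample) / n
    estimates.append(0.9 * xbar)  # E[d] = 0.9*theta, so bias = -0.1*theta

m = sum(estimates) / reps
mse = sum((d - theta) ** 2 for d in estimates) / reps      # E[(d - theta)^2]
var = sum((d - m) ** 2 for d in estimates) / reps          # E[(d - E[d])^2]
bias_sq = (m - theta) ** 2                                 # (E[d] - theta)^2

print(mse, var + bias_sq)  # the two sides of the decomposition agree
```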

The Bayes estimator

The value of additional information The Bayes estimator combines the prior distribution of θ with the data: the observed sample revises the prior into the posterior distribution of θ, which has smaller variance. Ref. pp
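A standard concrete case (not necessarily the one in the original slides) is the Beta-Bernoulli model: with a Beta(a, b) prior and k successes in n Bernoulli trials, the posterior is Beta(a + k, b + n − k), the Bayes estimator (posterior mean) is (a + k)/(a + b + n), and the posterior variance is smaller than the prior variance.

```python
# Beta prior + Bernoulli data: the posterior is Beta(a + k, b + n - k),
# and the Bayes estimator (posterior mean) is (a + k) / (a + b + n).
def beta_mean_var(a, b):
    mean = a / (a + b)
    var = a * b / ((a + b) ** 2 * (a + b + 1))
    return mean, var

a, b = 2, 2    # prior Beta(2, 2): mean 0.5 (made-up prior)
k, n = 14, 40  # hypothetical data: 14 successes in 40 trials

prior_mean, prior_var = beta_mean_var(a, b)
post_mean, post_var = beta_mean_var(a + k, b + n - k)

print(post_mean)            # Bayes estimate of p
print(prior_var, post_var)  # the data shrink the variance
```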