Statistical Background


1 Statistical Background
Chapter 2 Statistical Background

2 2.3 Random Variables and Probability Distributions
A variable X is said to be a random variable (rv) if for every real number a there exists a probability P(X ≤ a) that X takes on a value less than or equal to a. Thus P(X = x) is the probability that the random variable X takes the value x, and P(x₁ ≤ X ≤ x₂) is the probability that the random variable X takes values between x₁ and x₂, both inclusive.

3 2.3 Random Variables and Probability Distributions
A formula giving the probabilities for the different values of the random variable X is called a probability distribution in the case of discrete random variables, and a probability density function (denoted p.d.f.) in the case of continuous random variables. The p.d.f. is usually denoted by f(x).

4 2.3 Random Variables and Probability Distributions
In general, for a continuous random variable, the occurrence of any exact value of X may be regarded as having a zero probability. Hence probabilities are discussed in terms of some ranges. These probabilities are obtained by integrating f(x) over the desired range.

5 2.3 Random Variables and Probability Distributions
For instance, if we want Prob(a ≤ X ≤ b), this is given by
P(a ≤ X ≤ b) = ∫ₐᵇ f(x) dx
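To make this concrete, here is a minimal Python sketch (numpy/scipy are our assumption; the slides mention no software) that obtains P(a ≤ X ≤ b) for a standard normal X by numerically integrating its density, and cross-checks against the cumulative distribution function:

```python
# Minimal sketch: P(a <= X <= b) by integrating the p.d.f. over [a, b].
from scipy import integrate, stats

a, b = -1.0, 1.0
prob, _ = integrate.quad(stats.norm.pdf, a, b)   # integral of f(x) over [a, b]
print(prob)                                      # ~0.6827
print(stats.norm.cdf(b) - stats.norm.cdf(a))     # same value via F(b) - F(a)
```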

6 2.4 The Normal Probability Distribution and Related Distributions
There are some probability distributions for which the probabilities have been tabulated and which are considered suitable descriptions for a wide variety of phenomena. These are the normal distribution and the χ², t, and F distributions.

7 2.4 The Normal Probability Distribution and Related Distributions
There is also a question of whether the normal distribution is an appropriate one to use to describe economic variables. However, even if the variables are not normally distributed, one can consider transformations of the variables so that the transformed variables are normally distributed.

8 2.4 The Normal Probability Distribution and Related Distributions
The Normal Distribution (an example) The normal distribution is a bell-shaped distribution which is used most extensively in statistical applications in a wide variety of fields. Its probability density function is given by
f(x) = (1/(σ√(2π))) exp(−(x − μ)²/(2σ²))
where μ is the mean and σ² is the variance of the distribution.
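A short sketch (assuming Python with numpy/scipy; the function name normal_pdf is ours, purely illustrative) that codes this density directly and checks it against scipy's built-in version:

```python
# Minimal sketch: the normal p.d.f. written out from the formula above.
import numpy as np
from scipy import stats

def normal_pdf(x, mu=0.0, sigma=1.0):
    # f(x) = (1 / (sigma * sqrt(2*pi))) * exp(-(x - mu)^2 / (2 * sigma^2))
    return np.exp(-(x - mu) ** 2 / (2 * sigma ** 2)) / (sigma * np.sqrt(2 * np.pi))

x = np.linspace(-4, 4, 9)
print(np.allclose(normal_pdf(x), stats.norm.pdf(x)))  # True
```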

9 2.4 The Normal Probability Distribution and Related Distributions
An important property of the normal distribution is that if x₁ ~ N(μ₁, σ₁²) and x₂ ~ N(μ₂, σ₂²) and the correlation between x₁ and x₂ is ρ, then
a₁x₁ + a₂x₂ ~ N(a₁μ₁ + a₂μ₂, a₁²σ₁² + 2ρa₁a₂σ₁σ₂ + a₂²σ₂²)
In particular, x₁ + x₂ ~ N(μ₁ + μ₂, σ₁² + 2ρσ₁σ₂ + σ₂²) and x₁ − x₂ ~ N(μ₁ − μ₂, σ₁² − 2ρσ₁σ₂ + σ₂²).
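A simulation sketch of this property (assuming numpy; the parameter values are arbitrary illustrations): draw correlated normal pairs and check the mean and variance of a₁x₁ + a₂x₂ against the formulas above.

```python
# Minimal sketch: verify the mean/variance of a linear combination of
# correlated normals by Monte Carlo.
import numpy as np

rng = np.random.default_rng(0)
mu1, mu2, s1, s2, rho = 1.0, 2.0, 3.0, 4.0, 0.5
a1, a2 = 2.0, -1.0
cov = [[s1**2, rho*s1*s2], [rho*s1*s2, s2**2]]
x1, x2 = rng.multivariate_normal([mu1, mu2], cov, size=200_000).T

y = a1*x1 + a2*x2
print(y.mean())  # ~ a1*mu1 + a2*mu2 = 0.0
print(y.var())   # ~ a1^2*s1^2 + 2*rho*a1*a2*s1*s2 + a2^2*s2^2 = 28.0
```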

10 2.4 The Normal Probability Distribution and Related Distributions
If x₁, x₂, …, xₙ are independent normal variables with mean zero and variance 1, that is, xᵢ ~ N(0, 1), then
Z = x₁² + x₂² + ⋯ + xₙ²
is said to have the χ²-distribution with degrees of freedom (d.f.) n, and we will write this as Z ~ χ²ₙ.

11 2.4 The Normal Probability Distribution and Related Distributions
The subscript n denotes the d.f. The χ²ₙ-distribution is the distribution of the sum of squares of n independent standard normal variables.
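This definition can be checked by simulation. The following sketch (assuming numpy/scipy; n = 5 is arbitrary) sums squares of n standard normal draws and compares the result with the χ²ₙ distribution:

```python
# Minimal sketch: sum of squares of n independent N(0,1) draws is chi2(n).
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 5
z = rng.standard_normal((100_000, n))
chi2_samples = (z ** 2).sum(axis=1)   # Z = x1^2 + ... + xn^2

print(chi2_samples.mean())            # ~ n, the chi-square mean
print(chi2_samples.var())             # ~ 2n, the chi-square variance
# Kolmogorov-Smirnov check; the p-value should typically not be tiny,
# since the sample really does follow chi2(n).
print(stats.kstest(chi2_samples, "chi2", args=(n,)).pvalue)
```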

12 2.4 The Normal Probability Distribution and Related Distributions
If xᵢ ~ N(μ, σ²), then Z should be defined as
Z = Σᵢ ((xᵢ − μ)/σ)²
The χ²-distribution also has an “additive property,” although it is different from the property of the normal distribution and is much more restrictive.

13 2.4 The Normal Probability Distribution and Related Distributions
The property is: If Z₁ ~ χ²ₘ and Z₂ ~ χ²ₙ, and Z₁ and Z₂ are independent, then Z₁ + Z₂ ~ χ²ₘ₊ₙ.
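A quick simulation check of the additive property (assuming numpy/scipy; the d.f. values m = 3 and n = 4 are arbitrary):

```python
# Minimal sketch: independent chi2(m) and chi2(n) variables sum to chi2(m+n).
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
m, n = 3, 4
z1 = rng.chisquare(m, 100_000)
z2 = rng.chisquare(n, 100_000)
# KS test against chi2(m+n); typically a non-small p-value.
print(stats.kstest(z1 + z2, "chi2", args=(m + n,)).pvalue)
```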

14 2.4 The Normal Probability Distribution and Related Distributions
t-Distribution If x ~ N(0, 1) and y ~ χ²ₙ, and x and y are independent, then
t = x / √(y/n)
has a t-distribution with d.f. n. We write this as t ~ tₙ. The subscript n again denotes the d.f.

15 2.4 The Normal Probability Distribution and Related Distributions
Thus the t-distribution is the distribution of a standard normal variable divided by the square root of an independent averaged χ² variable (a χ² variable divided by its degrees of freedom). The t-distribution is a symmetric probability distribution like the normal distribution, but it is flatter than the normal and has longer tails. As the d.f. n approaches infinity, the t-distribution approaches the normal distribution.
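The construction t = x/√(y/n) can likewise be verified by simulation (a sketch assuming numpy/scipy, with n = 8 chosen arbitrarily):

```python
# Minimal sketch: build t from a standard normal and an independent
# chi-square, then compare with the t-distribution with n d.f.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
n = 8
x = rng.standard_normal(100_000)
y = rng.chisquare(n, 100_000)
t_samples = x / np.sqrt(y / n)

# Typically a non-small p-value: the constructed sample is exactly t(n).
print(stats.kstest(t_samples, "t", args=(n,)).pvalue)
```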

16 2.4 The Normal Probability Distribution and Related Distributions
F-Distribution If y₁ ~ χ²ₙ₁ and y₂ ~ χ²ₙ₂, and y₁ and y₂ are independent, then
F = (y₁/n₁) / (y₂/n₂)
has the F-distribution with d.f. n₁ and n₂. We write this as F ~ Fₙ₁,ₙ₂.

17 2.4 The Normal Probability Distribution and Related Distributions
F-Distribution The first subscript, n₁, refers to the d.f. of the numerator, and the second subscript, n₂, refers to the d.f. of the denominator. The F-distribution is thus the distribution of the ratio of two independent averaged χ² variables.
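A simulation sketch of this ratio construction (assuming numpy/scipy; n₁ = 5 and n₂ = 10 are arbitrary):

```python
# Minimal sketch: the ratio of two independent averaged chi-square
# variables follows F(n1, n2).
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
n1, n2 = 5, 10
y1 = rng.chisquare(n1, 100_000)
y2 = rng.chisquare(n2, 100_000)
f_samples = (y1 / n1) / (y2 / n2)

# Typically a non-small p-value: the sample is exactly F(n1, n2).
print(stats.kstest(f_samples, "f", args=(n1, n2)).pvalue)
```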

18 2.5 Classical Statistical Inference
Statistical inference is the area that describes the procedures by which we use the observed data to draw conclusions about the population from which the data came or about the process by which the data were generated.

19 2.5 Classical Statistical Inference
Broadly speaking, statistical inference can be classified under two headings: classical inference and Bayesian inference.

20 2.5 Classical Statistical Inference
In Bayesian inference we combine sample information with prior information. Suppose that we draw a random sample y₁, y₂, …, yₙ of size n from a normal population with mean μ and variance σ² (assumed known), and we want to make inferences about μ.

21 2.5 Classical Statistical Inference
In classical inference we take the sample mean ȳ as our estimate of μ. Its variance is σ²/n. The inverse of this variance is known as the sample precision. Thus the sample precision is n/σ².

22 2.5 Classical Statistical Inference
In Bayesian inference we have prior information on μ. This is expressed in terms of a probability distribution known as the prior distribution. Suppose that the prior distribution is normal with mean μ₀ and variance σ₀², that is, with precision 1/σ₀².

23 2.5 Classical Statistical Inference
We now combine this with the sample information to obtain what is known as the posterior distribution of μ. This distribution can be shown to be normal. Its mean is a weighted average of the sample

24 2.5 Classical Statistical Inference
mean ȳ and the prior mean μ₀, weighted by the sample precision and the prior precision, respectively. Thus
μ* = (w₁ȳ + w₂μ₀) / (w₁ + w₂)
where w₁ = n/σ² (the sample precision) and w₂ = 1/σ₀² (the prior precision). Also, the precision (or inverse of the variance) of the posterior distribution of μ is w₁ + w₂, that is, the sum of the sample precision and the prior precision.

25 2.5 Classical Statistical Inference
For instance, if the sample mean is 20 with variance 4 and the prior mean is 10 with variance 2, we have w₁ = 1/4 and w₂ = 1/2, so the posterior mean is (¼·20 + ½·10)/(¾) = 40/3 ≈ 13.33 and the posterior variance is 1/(¾) = 4/3 ≈ 1.33. The posterior mean will lie between the sample mean and the prior mean. The posterior variance will be less than both the sample and the prior variance.
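The arithmetic of this example in code (a plain-Python sketch; the variable names are ours):

```python
# Minimal sketch: precision-weighted Bayesian update with the numbers
# from the example (sample mean 20, variance 4; prior mean 10, variance 2).
sample_mean, sample_var = 20.0, 4.0
prior_mean, prior_var = 10.0, 2.0

w1 = 1.0 / sample_var          # sample precision = 0.25
w2 = 1.0 / prior_var           # prior precision  = 0.50
post_mean = (w1 * sample_mean + w2 * prior_mean) / (w1 + w2)
post_var = 1.0 / (w1 + w2)     # posterior precision = w1 + w2

print(post_mean)  # 13.33..., between the prior mean 10 and sample mean 20
print(post_var)   # 1.33..., smaller than both 4 and 2
```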

26 2.5 Classical Statistical Inference
Classical statistical inference covers three main problems: 1. Point estimation. 2. Interval estimation. 3. Testing of hypotheses.

27 2.6 Properties of Estimators
There are some desirable properties of estimators that are often mentioned in the book. These are: 1. Unbiasedness. 2. Efficiency. 3. Consistency. The first two are small-sample properties. The third is a large-sample property.

28 2.6 Properties of Estimators
Unbiasedness An estimator g is said to be unbiased for θ if E(g) = θ, that is, the mean of the sampling distribution of g is equal to θ. What this says is that if we calculate g for each sample and repeat this process infinitely many times, the average of all these estimates will be equal to θ.
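A simulation sketch of this thought experiment (assuming numpy; θ = 5, n = 30, and the normal population are arbitrary choices): the average of many repeated estimates settles at θ when g is unbiased.

```python
# Minimal sketch: average the sample-mean estimate over many repeated
# samples; for an unbiased estimator this average approaches theta.
import numpy as np

rng = np.random.default_rng(4)
theta, n, reps = 5.0, 30, 20_000
samples = rng.normal(theta, 2.0, size=(reps, n))
estimates = samples.mean(axis=1)     # g = sample mean, one per sample

print(estimates.mean())              # ~ 5.0, i.e. E(g) = theta
```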

29 2.6 Properties of Estimators
If E(g) ≠ θ, then g is said to be biased and we refer to E(g) − θ as the bias. Unbiasedness is a desirable property but not at all costs. Suppose that we have two estimators g₁ and g₂: g₁ can assume values far away from θ and yet have its mean equal to θ, whereas g₂ always ranges close to θ but has its mean slightly away from θ.

30 2.6 Properties of Estimators
Then we might prefer g₂ to g₁ because it has smaller variance even though it is biased. If the variance of the estimator is large, we can have some unlucky samples where our estimate is far from the true value. Thus the second property we want our estimators to have is a small variance. One criterion that is often suggested is the mean-squared error (MSE), which is defined by
MSE = E(g − θ)² = (bias)² + variance
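A sketch (assuming numpy) checking the decomposition MSE = (bias)² + variance by Monte Carlo; the shrinkage factor n/(n + 1) is purely illustrative, chosen only to make the estimator biased:

```python
# Minimal sketch: MSE decomposition for a deliberately biased estimator
# g = (n/(n+1)) * ybar.
import numpy as np

rng = np.random.default_rng(5)
theta, n, reps = 5.0, 10, 200_000
ybar = rng.normal(theta, 2.0, size=(reps, n)).mean(axis=1)
g = n / (n + 1) * ybar               # biased (shrunken) estimator

mse = ((g - theta) ** 2).mean()
bias2_plus_var = (g.mean() - theta) ** 2 + g.var()
print(mse, bias2_plus_var)           # agree (exactly, for sample moments)
```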

31 2.6 Properties of Estimators
Efficiency The property of efficiency is concerned with the variance of estimators. Obviously, it is a relative concept and we have to confine ourselves to a particular class of estimators. If g is an unbiased estimator and it has the minimum variance in the class of unbiased estimators, g is said to be an efficient estimator. We say that g is an MVUE (a minimum-variance unbiased estimator).

32 2.6 Properties of Estimators
If we confine ourselves to linear estimators, that is, g = c₁y₁ + c₂y₂ + ⋯ + cₙyₙ, where the c’s are constants which we choose so that g is unbiased and has minimum variance, g is called a BLUE (a best linear unbiased estimator).
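A small sketch (assuming numpy; all values arbitrary) comparing two linear unbiased estimators, that is, weight vectors with Σcᵢ = 1: equal weights (the sample mean) give the smaller variance, illustrating why the sample mean is the BLUE in this setting.

```python
# Minimal sketch: among linear unbiased estimators sum(c_i * y_i) with
# sum(c_i) = 1, equal weights have the smallest variance.
import numpy as np

rng = np.random.default_rng(6)
n, reps = 10, 200_000
y = rng.normal(5.0, 2.0, size=(reps, n))

c_equal = np.full(n, 1.0 / n)              # the sample mean
c_other = np.linspace(0.5, 1.5, n) / n     # unequal weights, still sum to 1

print((y @ c_equal).var())   # ~0.40 = sigma^2 / n, the smallest
print((y @ c_other).var())   # ~0.44, larger, though still unbiased
```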

33 2.6 Properties of Estimators
Consistency Often it is not possible to find estimators that have desirable small-sample properties such as unbiasedness and minimum variance. In such cases, it is customary to look at desirable properties in large samples. These are called asymptotic properties.

34 Three such properties often mentioned are consistency, asymptotic unbiasedness, and asymptotic efficiency.

35 2.6 Properties of Estimators
Suppose that θ̂ₙ is the estimator of θ based on a sample of size n. Then the sequence of estimators {θ̂ₙ} is called a consistent sequence if for any arbitrarily small positive numbers ε and δ there is a sample size n₀ such that
P(|θ̂ₙ − θ| < ε) > 1 − δ for all n > n₀

36 2.6 Properties of Estimators
That is, by increasing the sample size n the estimator θ̂ₙ can be made to lie arbitrarily close to the true value of θ with probability arbitrarily close to 1. This statement is also written as
lim(n→∞) P(|θ̂ₙ − θ| < ε) = 1
and more briefly we write it as plim θ̂ₙ = θ.

37 2.6 Properties of Estimators
A sufficient condition for θ̂ₙ to be consistent is that the bias and the variance should both tend to zero as the sample size increases. This condition is often useful to check in practice, but it should be noted that the condition is not necessary: an estimator can be consistent even if the bias does not tend to zero. An example of an unbiased and consistent estimator is the sample mean ȳ as an estimator of μ (its bias is zero and its variance σ²/n tends to zero).
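A simulation sketch of consistency (assuming numpy; all parameter values arbitrary): the estimated probability that the sample mean lies within ε of θ climbs toward 1 as n grows, as the definition requires.

```python
# Minimal sketch: P(|theta_hat_n - theta| < eps) -> 1 as n grows,
# estimated by simulation for the sample mean.
import numpy as np

rng = np.random.default_rng(7)
theta, eps, reps = 5.0, 0.1, 2_000
for n in (10, 100, 1_000, 10_000):
    est = rng.normal(theta, 2.0, size=(reps, n)).mean(axis=1)
    print(n, (np.abs(est - theta) < eps).mean())  # climbs toward 1
```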

