
1 Brief Review Probability and Statistics

2 Probability distributions: Continuous distributions

3 Defn (Density function) Let x denote a continuous random variable; then f(x) is called the density function of x if:
1) f(x) ≥ 0
2) ∫_{-∞}^{∞} f(x) dx = 1
3) P[a ≤ x ≤ b] = ∫_a^b f(x) dx
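As a quick numerical illustration (not part of the original slides), the sketch below checks conditions 2) and 3) for the standard Normal density with scipy; the choice of density and of the interval [-1, 2] is arbitrary.

# A minimal sketch, assuming the standard Normal as f(x).
import numpy as np
from scipy import integrate, stats

f = stats.norm(loc=0.0, scale=1.0).pdf           # f(x) >= 0 everywhere

total, _ = integrate.quad(f, -np.inf, np.inf)    # condition 2: integrates to 1
prob, _ = integrate.quad(f, -1.0, 2.0)           # condition 3: P[-1 <= x <= 2]

print(total)   # ~1.0
print(prob)    # matches stats.norm.cdf(2) - stats.norm.cdf(-1)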

4 Defn (Joint density function) Let x = (x₁, x₂, x₃, …, xₙ) denote a vector of continuous random variables; then f(x) = f(x₁, x₂, x₃, …, xₙ) is called the joint density function of x = (x₁, x₂, x₃, …, xₙ) if:
1) f(x) ≥ 0
2) ∫…∫ f(x) dx₁ … dxₙ = 1 (integrating over all of n-dimensional space)
3) P[x ∈ A] = ∫…∫_A f(x) dx₁ … dxₙ

5 Note:

6 Defn (Marginal density function) The marginal density of x₁ = (x₁, x₂, x₃, …, xₚ) (p < n) is defined by:
f₁(x₁) = ∫…∫ f(x) dx_{p+1} … dxₙ, where x₂ = (x_{p+1}, x_{p+2}, x_{p+3}, …, xₙ).
The marginal density of x₂ = (x_{p+1}, x_{p+2}, x_{p+3}, …, xₙ) is defined by:
f₂(x₂) = ∫…∫ f(x) dx₁ … dxₚ, where x₁ = (x₁, x₂, x₃, …, xₚ).

7 Defn (Conditional density function) The conditional density of x₁ given x₂ (defined on the previous slide) (p < n) is defined by:
f_{1|2}(x₁|x₂) = f(x₁, x₂) / f₂(x₂)
The conditional density of x₂ given x₁ is defined by:
f_{2|1}(x₂|x₁) = f(x₁, x₂) / f₁(x₁)

8 Marginal densities describe how the subvector xᵢ behaves ignoring xⱼ; conditional densities describe how the subvector xᵢ behaves when the subvector xⱼ is held fixed.

9 Defn (Independence) The two sub-vectors x₁ and x₂ are called independent if:
f(x) = f(x₁, x₂) = f₁(x₁) f₂(x₂) = product of marginals,
or equivalently, if the conditional density of xᵢ given xⱼ equals the marginal density of xᵢ:
f_{i|j}(xᵢ|xⱼ) = fᵢ(xᵢ)

10 Example (p-variate Normal) The random vector x (p × 1) is said to have the p-variate Normal distribution with mean vector μ (p × 1) and covariance matrix Σ (p × p) (written x ~ N_p(μ, Σ)) if:
f(x) = (2π)^{-p/2} |Σ|^{-1/2} exp{ -(1/2)(x − μ)′ Σ⁻¹ (x − μ) }

11 Example (bivariate Normal) The random vector x = (x₁, x₂)′ is said to have the bivariate Normal distribution with mean vector μ = (μ₁, μ₂)′ and covariance matrix
Σ = [ σ₁²     ρσ₁σ₂ ]
    [ ρσ₁σ₂   σ₂²   ]
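A hedged illustration (the mean, variances, and correlation below are made up for the demo): evaluating the bivariate Normal density both via scipy and directly from the formula on slide 10.

import numpy as np
from scipy.stats import multivariate_normal

mu = np.array([1.0, 2.0])                     # assumed mean vector (2 x 1)
rho, s1, s2 = 0.5, 1.0, 2.0
Sigma = np.array([[s1**2,     rho*s1*s2],     # assumed covariance matrix
                  [rho*s1*s2, s2**2    ]])

dist = multivariate_normal(mean=mu, cov=Sigma)
print(dist.pdf([1.0, 2.0]))                   # density at the mean

# Direct evaluation of the slide-10 formula (p = 2) for comparison:
x = np.array([1.0, 2.0])
d = x - mu
pdf = np.exp(-0.5 * d @ np.linalg.solve(Sigma, d)) / (
    2*np.pi * np.sqrt(np.linalg.det(Sigma)))
print(pdf)                                    # same value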


15 Theorem (Transformations) Let x = (x₁, x₂, x₃, …, xₙ) denote a vector of continuous random variables with joint density function f(x₁, x₂, x₃, …, xₙ) = f(x). Let
y₁ = φ₁(x₁, x₂, x₃, …, xₙ)
y₂ = φ₂(x₁, x₂, x₃, …, xₙ)
…
yₙ = φₙ(x₁, x₂, x₃, …, xₙ)
define a 1-1 transformation of x into y.

16 Then the joint density of y is g(y) given by: g(y) = f(x)|J|, where
J = det[∂xᵢ/∂yⱼ] = the Jacobian of the transformation
(the xᵢ are expressed in terms of y by inverting the transformation).

17 Corollary (Linear Transformations) Let x = (x₁, x₂, x₃, …, xₙ) denote a vector of continuous random variables with joint density function f(x₁, x₂, x₃, …, xₙ) = f(x). Let
y₁ = a₁₁x₁ + a₁₂x₂ + a₁₃x₃ + … + a₁ₙxₙ
y₂ = a₂₁x₁ + a₂₂x₂ + a₂₃x₃ + … + a₂ₙxₙ
…
yₙ = aₙ₁x₁ + aₙ₂x₂ + aₙ₃x₃ + … + aₙₙxₙ
define a 1-1 transformation of x into y (i.e. y = Ax with A = (aᵢⱼ) nonsingular).

18 Then the joint density of y is g(y) given by:
g(y) = f(A⁻¹y) / |det A|

19 Corollary (Linear Transformations for Normal Random variables) Let x = (x₁, x₂, x₃, …, xₙ) denote a vector of continuous random variables having an n-variate Normal distribution with mean vector μ and covariance matrix Σ, i.e. x ~ Nₙ(μ, Σ). Let
y₁ = a₁₁x₁ + a₁₂x₂ + a₁₃x₃ + … + a₁ₙxₙ
y₂ = a₂₁x₁ + a₂₂x₂ + a₂₃x₃ + … + a₂ₙxₙ
…
yₙ = aₙ₁x₁ + aₙ₂x₂ + aₙ₃x₃ + … + aₙₙxₙ
define a 1-1 transformation of x into y. Then y = (y₁, y₂, y₃, …, yₙ)′ ~ Nₙ(Aμ, AΣA′).
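A sketch checking the corollary empirically, with an arbitrary μ, Σ, and nonsingular A: the sample mean and covariance of y = Ax should approach Aμ and AΣA′.

import numpy as np

rng = np.random.default_rng(0)
mu = np.array([1.0, -1.0, 0.5])
Sigma = np.array([[2.0, 0.3, 0.0],
                  [0.3, 1.0, 0.2],
                  [0.0, 0.2, 0.5]])
A = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 1.0],
              [1.0, 0.0, 1.0]])          # nonsingular, so the map is 1-1

x = rng.multivariate_normal(mu, Sigma, size=200_000)
y = x @ A.T                              # each row is A x

print(y.mean(axis=0), A @ mu)            # empirical vs A mu
print(np.cov(y.T))                       # empirical vs A Sigma A'
print(A @ Sigma @ A.T)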

20 Defn (Expectation) Let x = (x₁, x₂, x₃, …, xₙ) denote a vector of continuous random variables with joint density function f(x) = f(x₁, x₂, x₃, …, xₙ). Let U = h(x) = h(x₁, x₂, x₃, …, xₙ). Then
E[U] = E[h(x)] = ∫…∫ h(x) f(x) dx₁ … dxₙ

21 Defn (Conditional Expectation) Let x = (x₁, x₂, x₃, …, xₙ) = (x₁, x₂) denote a vector of continuous random variables with joint density function f(x) = f(x₁, x₂, x₃, …, xₙ) = f(x₁, x₂). Let U = h(x₁) = h(x₁, x₂, x₃, …, xₚ). Then the conditional expectation of U given x₂ is
E[U|x₂] = ∫…∫ h(x₁) f_{1|2}(x₁|x₂) dx₁ … dxₚ

22 Defn (Variance) Let x = (x₁, x₂, x₃, …, xₙ) denote a vector of continuous random variables with joint density function f(x) = f(x₁, x₂, x₃, …, xₙ). Let U = h(x) = h(x₁, x₂, x₃, …, xₙ). Then
Var[U] = E[(U − E[U])²] = ∫…∫ (h(x) − E[U])² f(x) dx₁ … dxₙ

23 Defn (Conditional Variance) Let x = (x₁, x₂, x₃, …, xₙ) = (x₁, x₂) denote a vector of continuous random variables with joint density function f(x) = f(x₁, x₂, x₃, …, xₙ) = f(x₁, x₂). Let U = h(x₁) = h(x₁, x₂, x₃, …, xₚ). Then the conditional variance of U given x₂ is
Var[U|x₂] = E[(U − E[U|x₂])² | x₂]

24 Defn (Covariance, Correlation) Let x = (x₁, x₂, x₃, …, xₙ) denote a vector of continuous random variables with joint density function f(x) = f(x₁, x₂, x₃, …, xₙ). Let U = h(x) = h(x₁, x₂, x₃, …, xₙ) and V = g(x) = g(x₁, x₂, x₃, …, xₙ). Then the covariance of U and V is
Cov[U, V] = E[(U − E[U])(V − E[V])]
and the correlation of U and V is
ρ_{UV} = Cov[U, V] / √(Var[U] Var[V])

25 Properties: Expectation, Variance, Covariance, Correlation

26 1. E[a₁x₁ + a₂x₂ + a₃x₃ + … + aₙxₙ] = a₁E[x₁] + a₂E[x₂] + a₃E[x₃] + … + aₙE[xₙ], or E[a′x] = a′E[x]

27 2. E[UV] = E[h(x₁)g(x₂)] = E[U]E[V] = E[h(x₁)]E[g(x₂)] if x₁ and x₂ are independent

28 3. Var[a₁x₁ + a₂x₂ + a₃x₃ + … + aₙxₙ] = Σᵢ Σⱼ aᵢaⱼ Cov[xᵢ, xⱼ] = Σᵢ aᵢ² Var[xᵢ] + 2 Σ_{i<j} aᵢaⱼ Cov[xᵢ, xⱼ], or Var[a′x] = a′Σa, where Σ is the covariance matrix of x

29 4. Cov[a₁x₁ + a₂x₂ + … + aₙxₙ, b₁x₁ + b₂x₂ + … + bₙxₙ] = Σᵢ Σⱼ aᵢbⱼ Cov[xᵢ, xⱼ], or Cov[a′x, b′x] = a′Σb
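A Monte Carlo sanity check of properties 3 and 4 (a sketch; the covariance matrix and weight vectors are made up): Var[a′x] should match a′Σa and Cov[a′x, b′x] should match a′Σb.

import numpy as np

rng = np.random.default_rng(1)
Sigma = np.array([[1.0, 0.4],
                  [0.4, 2.0]])
a = np.array([2.0, -1.0])
b = np.array([0.5, 1.0])

x = rng.multivariate_normal([0.0, 0.0], Sigma, size=500_000)
u, v = x @ a, x @ b                       # u = a'x, v = b'x for each sample

print(u.var(), a @ Sigma @ a)             # Var[a'x] vs a' Sigma a
print(np.cov(u, v)[0, 1], a @ Sigma @ b)  # Cov[a'x, b'x] vs a' Sigma b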

30 5. −1 ≤ ρ_{UV} ≤ 1
6. ρ_{UV} = ±1 if and only if V is a linear function of U

31 Statistical Inference: making decisions from data

32 There are two main areas of Statistical Inference:
Estimation – deciding on the value of a parameter
– Point estimation
– Confidence interval / confidence region estimation
Hypothesis testing – deciding if a statement (hypothesis) about a parameter is True or False

33 The general statistical model. Most data fit this situation.

34 Defn (The Classical Statistical Model) The data vector x = (x₁, x₂, x₃, …, xₙ). The model: let f(x|θ) = f(x₁, x₂, …, xₙ|θ₁, θ₂, …, θₚ) denote the joint density of the data vector x = (x₁, x₂, x₃, …, xₙ) of observations, where the unknown parameter vector θ ∈ Ω (a subset of p-dimensional space).

35 An Example The data vector x = (x₁, x₂, x₃, …, xₙ), a sample from the normal distribution with mean μ and variance σ². The model: then f(x|μ, σ²) = f(x₁, x₂, …, xₙ|μ, σ²), the joint density of x = (x₁, x₂, x₃, …, xₙ), takes on the form:
f(x|μ, σ²) = (2πσ²)^{-n/2} exp{ −(1/(2σ²)) Σᵢ (xᵢ − μ)² }
where the unknown parameter vector θ = (μ, σ²) ∈ Ω = {(x, y) | −∞ < x < ∞, 0 ≤ y < ∞}.

36 Defn (Sufficient Statistics) Let x have joint density f(x|θ) where the unknown parameter vector θ ∈ Ω. Then S = (S₁(x), S₂(x), S₃(x), …, S_k(x)) is called a set of sufficient statistics for the parameter vector θ if the conditional distribution of x given S = (S₁(x), S₂(x), S₃(x), …, S_k(x)) is not functionally dependent on the parameter vector θ. A set of sufficient statistics contains all of the information concerning the unknown parameter vector.

37 A Simple Example illustrating Sufficiency Suppose that we observe a Success-Failure experiment n = 3 times. Let θ denote the probability of Success. The data collected are x₁, x₂, x₃, where xᵢ takes on the value 1 if the i-th trial is a Success and 0 if the i-th trial is a Failure.

38 The following table gives the possible values of (x₁, x₂, x₃), grouped by S = x₁ + x₂ + x₃:

(x₁, x₂, x₃)                  S   f(x₁, x₂, x₃|θ)   g(S|θ)      f(x₁, x₂, x₃|S)
(0,0,0)                       0   (1−θ)³            (1−θ)³      1
(1,0,0), (0,1,0), (0,0,1)     1   θ(1−θ)²           3θ(1−θ)²    1/3
(1,1,0), (1,0,1), (0,1,1)     2   θ²(1−θ)           3θ²(1−θ)    1/3
(1,1,1)                       3   θ³                θ³          1

The data can be generated in two equivalent ways:
1. Generating (x₁, x₂, x₃) directly from f(x₁, x₂, x₃|θ), or
2. Generating S from g(S|θ), then generating (x₁, x₂, x₃) from f(x₁, x₂, x₃|S).
Since the second step does not involve θ, no additional information about θ is obtained by knowing (x₁, x₂, x₃) once S is determined.
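The example can be verified by brute-force enumeration; a short sketch (θ = 0.3 is an arbitrary choice, and any value gives the same conditional probabilities):

from itertools import product
from math import comb

theta = 0.3   # arbitrary; the conditional probabilities below do not depend on it

for s in range(4):
    outcomes = [x for x in product([0, 1], repeat=3) if sum(x) == s]
    f_joint = theta**s * (1 - theta)**(3 - s)    # f(x | theta), equal for each outcome
    g_S = comb(3, s) * f_joint                   # g(S | theta)
    print(s, [f_joint / g_S for _ in outcomes])  # each equals 1 / C(3, s)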

39 The Sufficiency Principle Any decision regarding the parameter θ should be based on a set of Sufficient statistics S₁(x), S₂(x), …, S_k(x) and not otherwise on the value of x.

40 A useful approach in developing a statistical procedure:
1. Find sufficient statistics.
2. Develop estimators, tests of hypotheses, etc., using only these statistics.

41 Defn (Minimal Sufficient Statistics) Let x have joint density f(x|θ) where the unknown parameter vector θ ∈ Ω. Then S = (S₁(x), S₂(x), S₃(x), …, S_k(x)) is a set of Minimal Sufficient statistics for the parameter vector θ if S = (S₁(x), S₂(x), S₃(x), …, S_k(x)) is a set of Sufficient statistics and can be calculated from any other set of Sufficient statistics.

42 Theorem (The Factorization Criterion) Let x have joint density f(x|θ) where the unknown parameter vector θ ∈ Ω. Then S = (S₁(x), S₂(x), S₃(x), …, S_k(x)) is a set of Sufficient statistics for the parameter vector θ if and only if f(x|θ) = h(x) g(S, θ) = h(x) g(S₁(x), S₂(x), S₃(x), …, S_k(x), θ). This is useful for finding Sufficient statistics: if you can factor out the θ-dependence with a set of statistics, then these statistics are a set of Sufficient statistics.

43 Defn (Completeness) Let x have joint density f(x|θ) where the unknown parameter vector θ ∈ Ω. Then S = (S₁(x), S₂(x), S₃(x), …, S_k(x)) is a set of Complete Sufficient statistics for the parameter vector θ if S = (S₁(x), S₂(x), S₃(x), …, S_k(x)) is a set of Sufficient statistics and whenever E[φ(S₁(x), S₂(x), S₃(x), …, S_k(x))] = 0 for all θ ∈ Ω, then P[φ(S₁(x), S₂(x), S₃(x), …, S_k(x)) = 0] = 1.

44 Defn (The Exponential Family) Let x have joint density f(x|θ) where the unknown parameter vector θ ∈ Ω. Then f(x|θ) is said to be a member of the exponential family of distributions if:
f(x|θ) = g(θ) h(x) exp{ p₁(θ)S₁(x) + … + p_k(θ)S_k(x) },  aᵢ < xᵢ < bᵢ, θ ∈ Ω, where

45 1) −∞ ≤ aᵢ < bᵢ ≤ ∞ are not dependent on θ.
2) Ω contains a nondegenerate k-dimensional rectangle.
3) g(θ), aᵢ, bᵢ and pᵢ(θ) are not dependent on x.
4) h(x), aᵢ, bᵢ and Sᵢ(x) are not dependent on θ.

46 If in addition:
5) The Sᵢ(x) are functionally independent for i = 1, 2, …, k.
6) ∂[Sᵢ(x)]/∂xⱼ exists and is continuous for all i = 1, 2, …, k and j = 1, 2, …, n.
7) pᵢ(θ) is a continuous function of θ for all i = 1, 2, …, k.
8) R = {[p₁(θ), p₂(θ), …, p_k(θ)] | θ ∈ Ω} contains a nondegenerate k-dimensional rectangle.
Then the set of statistics S₁(x), S₂(x), …, S_k(x) form a Minimal Complete set of Sufficient statistics.

47 Defn (The Likelihood function) Let x have joint density f(x|θ) where the unknown parameter vector θ ∈ Ω. Then for a given value of the observation vector x, the Likelihood function, L_x(θ), is defined by: L_x(θ) = f(x|θ) with θ ∈ Ω. The log-likelihood function l_x(θ) is defined by: l_x(θ) = ln L_x(θ) = ln f(x|θ) with θ ∈ Ω.

48 The Likelihood Principle Any decision regarding the parameter θ should be based on the likelihood function L_x(θ) and not otherwise on the value of x. If two data sets result in the same likelihood function, the decision regarding θ should be the same.

49 Some statisticians find it useful to plot the likelihood function L_x(θ) given the value of x. It summarizes the information contained in x regarding the parameter vector θ.

50 An Example The data vector x = (x₁, x₂, x₃, …, xₙ), a sample from the normal distribution with mean μ and variance σ². The joint distribution of x: then f(x|μ, σ²) = f(x₁, x₂, …, xₙ|μ, σ²), the joint density of x = (x₁, x₂, x₃, …, xₙ), takes on the form:
f(x|μ, σ²) = (2πσ²)^{-n/2} exp{ −(1/(2σ²)) Σᵢ (xᵢ − μ)² }
where the unknown parameter vector θ = (μ, σ²) ∈ Ω = {(x, y) | −∞ < x < ∞, 0 ≤ y < ∞}.

51 The Likelihood function Assume the data vector x = (x₁, x₂, x₃, …, xₙ) is known. The Likelihood function is then
L(μ, σ) = f(x|μ, σ²) = f(x₁, x₂, …, xₙ|μ, σ²) = (2πσ²)^{-n/2} exp{ −(1/(2σ²)) Σᵢ (xᵢ − μ)² }

52 or, expanding the sum,
L(μ, σ) = (2πσ²)^{-n/2} exp{ −(1/(2σ²)) [Σᵢ xᵢ² − 2μ Σᵢ xᵢ + nμ²] }

53 hence L(μ, σ) depends on the data only through Σᵢ xᵢ and Σᵢ xᵢ². Now consider the following data (n = 10):

54–55 [Plots of the likelihood function L(μ, σ) for the n = 10 data; μ and σ axes, with μ marked at 0, 20, 50, 70.]

56 Now consider the following data (n = 100):

57–58 [Plots of the likelihood function L(μ, σ) for the n = 100 data; μ and σ axes, with μ marked at 0, 20, 50, 70.]
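A sketch of how plots like those on slides 54–58 can be produced: evaluate the Normal log-likelihood on a (μ, σ) grid. The data are simulated here (an assumption) because the original data values were not preserved in the transcript.

import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(2)
x = rng.normal(loc=35.0, scale=10.0, size=10)    # stand-in for the n = 10 data

mu = np.linspace(0.0, 70.0, 200)
sigma = np.linspace(1.0, 30.0, 200)
M, S = np.meshgrid(mu, sigma)

n = x.size
loglik = (-n/2)*np.log(2*np.pi*S**2) - ((x[:, None, None] - M)**2).sum(0)/(2*S**2)

plt.contour(M, S, loglik, levels=30)             # likelihood surface as contours
plt.xlabel("mu"); plt.ylabel("sigma")
plt.title("Normal log-likelihood, n = 10 (simulated data)")
plt.show()

With n = 100 the contours concentrate much more tightly around the maximizing (μ, σ), which is the point the slides' side-by-side plots make.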

59 The Sufficiency Principle Any decision regarding the parameter θ should be based on a set of Sufficient statistics S₁(x), S₂(x), …, S_k(x) and not otherwise on the value of x. If two data sets result in the same values for the set of Sufficient statistics, the decision regarding θ should be the same.

60 Theorem (Birnbaum – Equivalency of the Likelihood Principle and Sufficiency Principle) L_{x¹}(θ) ∝ L_{x²}(θ) if and only if S₁(x¹) = S₁(x²), …, and S_k(x¹) = S_k(x²).

61 The following table gives the possible values of (x₁, x₂, x₃) together with the Likelihood function L(θ) = θ^S (1−θ)^{3−S}, where S = x₁ + x₂ + x₃:

(x₁, x₂, x₃)                  S   L(θ)
(0,0,0)                       0   (1−θ)³
(1,0,0), (0,1,0), (0,0,1)     1   θ(1−θ)²
(1,1,0), (1,0,1), (0,1,1)     2   θ²(1−θ)
(1,1,1)                       3   θ³

Data sets with the same value of S have the same likelihood, so here the Likelihood and Sufficiency Principles lead to the same decisions.

62 Estimation Theory Point Estimation

63 Defn (Estimator) Let x = (x₁, x₂, x₃, …, xₙ) denote the vector of observations having joint density f(x|θ) where the unknown parameter vector θ ∈ Ω. Then an estimator of the parameter τ(θ) = τ(θ₁, θ₂, …, θₚ) is any function T(x) = T(x₁, x₂, x₃, …, xₙ) of the observation vector.

64 Defn (Mean Square Error) Let x = (x₁, x₂, x₃, …, xₙ) denote the vector of observations having joint density f(x|θ) where the unknown parameter vector θ ∈ Ω. Let T(x) be an estimator of the parameter τ(θ). Then the Mean Square Error of T(x) is defined to be:
MSE_T(θ) = E[(T(x) − τ(θ))²]

65 Defn (Uniformly Better) Let x = (x₁, x₂, x₃, …, xₙ) denote the vector of observations having joint density f(x|θ) where the unknown parameter vector θ ∈ Ω. Let T(x) and T*(x) be estimators of the parameter τ(θ). Then T(x) is said to be uniformly better than T*(x) if:
MSE_T(θ) ≤ MSE_{T*}(θ) for all θ ∈ Ω, with strict inequality for at least one θ.

66 Defn (Unbiased) Let x = (x₁, x₂, x₃, …, xₙ) denote the vector of observations having joint density f(x|θ) where the unknown parameter vector θ ∈ Ω. Let T(x) be an estimator of the parameter τ(θ). Then T(x) is said to be an unbiased estimator of the parameter τ(θ) if:
E[T(x)] = τ(θ) for all θ ∈ Ω.

67 Theorem (Cramér–Rao Lower Bound) Let x = (x₁, x₂, x₃, …, xₙ) denote the vector of observations having joint density f(x|θ) where the unknown parameter vector θ ∈ Ω. Suppose that the usual regularity conditions hold:
i) ∂f(x|θ)/∂θᵢ exists for all x and for all θ ∈ Ω.
ii) ∫…∫ f(x|θ) dx can be differentiated with respect to θ under the integral sign.
iii) ∫…∫ T(x) f(x|θ) dx can be differentiated with respect to θ under the integral sign.
iv) The information matrix defined on the next slide exists and is nonsingular.

68 Let M denote the p × p matrix with ij-th element
Mᵢⱼ = −E[∂² ln f(x|θ) / ∂θᵢ∂θⱼ]   (the Fisher information matrix).
Then V = M⁻¹ is the lower bound for the covariance matrix of unbiased estimators of θ. That is, var(c′θ̂) = c′ var(θ̂) c ≥ c′M⁻¹c = c′Vc, where θ̂ is a vector of unbiased estimators of θ.

69 Defn (Uniformly Minimum Variance Unbiased Estimator) Let x = (x₁, x₂, x₃, …, xₙ) denote the vector of observations having joint density f(x|θ) where the unknown parameter vector θ ∈ Ω. Then T*(x) is said to be the UMVU (Uniformly Minimum Variance Unbiased) estimator of τ(θ) if:
1) E[T*(x)] = τ(θ) for all θ ∈ Ω.
2) Var[T*(x)] ≤ Var[T(x)] for all θ ∈ Ω whenever E[T(x)] = τ(θ).

70 Theorem (Rao–Blackwell) Let x = (x₁, x₂, x₃, …, xₙ) denote the vector of observations having joint density f(x|θ) where the unknown parameter vector θ ∈ Ω. Let S₁(x), S₂(x), …, S_k(x) denote a set of sufficient statistics. Let T(x) be any unbiased estimator of τ(θ). Then T*[S₁(x), S₂(x), …, S_k(x)] = E[T(x)|S₁(x), S₂(x), …, S_k(x)] is an unbiased estimator of τ(θ) such that: Var[T*(S₁(x), S₂(x), …, S_k(x))] ≤ Var[T(x)] for all θ ∈ Ω.

71 Theorem (Lehmann–Scheffé) Let x = (x₁, x₂, x₃, …, xₙ) denote the vector of observations having joint density f(x|θ) where the unknown parameter vector θ ∈ Ω. Let S₁(x), S₂(x), …, S_k(x) denote a set of complete sufficient statistics. Let T*[S₁(x), S₂(x), …, S_k(x)] be an unbiased estimator of τ(θ). Then T*(S₁(x), S₂(x), …, S_k(x)) is the UMVU estimator of τ(θ).

72 Defn (Consistency) Let x = (x₁, x₂, x₃, …, xₙ) denote the vector of observations having joint density f(x|θ) where the unknown parameter vector θ ∈ Ω. Let Tₙ(x) be an estimator of τ(θ). Then Tₙ(x) is called a consistent estimator of τ(θ) if for any ε > 0:
lim_{n→∞} P[|Tₙ(x) − τ(θ)| > ε] = 0

73 Defn (M.S.E. Consistency) Let x = (x₁, x₂, x₃, …, xₙ) denote the vector of observations having joint density f(x|θ) where the unknown parameter vector θ ∈ Ω. Let Tₙ(x) be an estimator of τ(θ). Then Tₙ(x) is called a M.S.E. consistent estimator of τ(θ) if:
lim_{n→∞} E[(Tₙ(x) − τ(θ))²] = 0

74 Methods for Finding Estimators
1. The Method of Moments
2. Maximum Likelihood Estimation


76 Method of Moments. Let x₁, …, xₙ denote a sample from the density function f(x; θ₁, …, θₚ) = f(x; θ). The k-th moment of the distribution being sampled is defined to be:
μ_k = μ_k(θ₁, …, θₚ) = E[x^k] = ∫ x^k f(x; θ) dx

77 The k-th sample moment is defined to be:
m_k = (1/n) Σ_{i=1}^{n} xᵢ^k
To find the method of moments estimators of θ₁, …, θₚ we set up the equations:
μ_k(θ₁, …, θₚ) = m_k,  k = 1, 2, …, p

78 We then solve these p equations for θ₁, …, θₚ. The solutions θ̂₁, …, θ̂ₚ are called the method of moments estimators.
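A short sketch for the Normal model (the true parameter values 5 and 2 are made up for the simulation): with two parameters we match the first two moments, μ₁ = m₁ and μ₂ = m₂, and solve.

import numpy as np

rng = np.random.default_rng(3)
x = rng.normal(loc=5.0, scale=2.0, size=1_000)

m1 = x.mean()          # first sample moment
m2 = (x**2).mean()     # second sample moment

# For N(mu, sigma^2): mu_1 = mu and mu_2 = sigma^2 + mu^2, so:
mu_hat = m1
sigma2_hat = m2 - m1**2

print(mu_hat, sigma2_hat)   # ~5 and ~4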

79 The Method of Maximum Likelihood Suppose that the data x₁, …, xₙ has joint density function f(x₁, …, xₙ; θ₁, …, θₚ), where θ = (θ₁, …, θₚ) are unknown parameters assumed to lie in Ω (a subset of p-dimensional space). We want to estimate the parameters θ₁, …, θₚ.

80 Definition: Maximum Likelihood Estimation Suppose that the data x₁, …, xₙ has joint density function f(x₁, …, xₙ; θ₁, …, θₚ). Then the Likelihood function is defined to be L(θ) = L(θ₁, …, θₚ) = f(x₁, …, xₙ; θ₁, …, θₚ). The Maximum Likelihood estimators of the parameters θ₁, …, θₚ are the values that maximize L(θ) = L(θ₁, …, θₚ).

81 The Maximum Likelihood estimators of the parameters θ₁, …, θₚ are the values θ̂₁, …, θ̂ₚ such that
L(θ̂₁, …, θ̂ₚ) = max_{θ∈Ω} L(θ₁, …, θₚ)
Note: maximizing L(θ) is equivalent to maximizing the log-likelihood function l(θ) = ln L(θ).
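A sketch of maximizing the log-likelihood numerically, here for the Normal model where the closed-form MLEs (the sample mean and the ddof = 0 sample standard deviation) serve as a check; the simulated data and starting point are arbitrary choices.

import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(4)
x = rng.normal(loc=5.0, scale=2.0, size=500)

def neg_loglik(params):
    mu, log_sigma = params          # optimize log(sigma) so that sigma > 0
    sigma = np.exp(log_sigma)
    n = x.size
    # negative log-likelihood, dropping the constant (n/2) log(2 pi)
    return n*np.log(sigma) + ((x - mu)**2).sum() / (2*sigma**2)

res = minimize(neg_loglik, x0=np.array([0.0, 0.0]))
mu_hat, sigma_hat = res.x[0], np.exp(res.x[1])
print(mu_hat, sigma_hat)            # ~ x.mean() and ~ x.std(ddof=0)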

82 Application: The General Linear Model

83 Consider the random variable Y with
1. E[Y] = g(U₁, U₂, …, U_k) = β₁φ₁(U₁, U₂, …, U_k) + β₂φ₂(U₁, U₂, …, U_k) + … + βₚφₚ(U₁, U₂, …, U_k) = Σⱼ βⱼφⱼ(U₁, U₂, …, U_k), and
2. var(Y) = σ²,
where β₁, β₂, …, βₚ are unknown parameters and φ₁, φ₂, …, φₚ are known functions of the nonrandom variables U₁, U₂, …, U_k. Assume further that Y is normally distributed.

84 Thus the density of Y is: f(Y|β₁, β₂, …, βₚ, σ²) = f(Y|β, σ²) =
(2πσ²)^{-1/2} exp{ −(1/(2σ²)) (Y − Σⱼ βⱼφⱼ(U₁, …, U_k))² }

85 Now suppose that n independent observations of Y, (y₁, y₂, …, yₙ), are made corresponding to n sets of values of (U₁, U₂, …, U_k): (u₁₁, u₁₂, …, u₁ₖ), (u₂₁, u₂₂, …, u₂ₖ), …, (uₙ₁, uₙ₂, …, uₙₖ). Let xᵢⱼ = φⱼ(uᵢ₁, uᵢ₂, …, uᵢₖ), j = 1, 2, …, p; i = 1, 2, …, n. Then the joint density of y = (y₁, y₂, …, yₙ) is: f(y₁, y₂, …, yₙ|β₁, β₂, …, βₚ, σ²) = f(y|β, σ²)

86 f(y|β, σ²) = (2πσ²)^{-n/2} exp{ −(1/(2σ²)) Σᵢ (yᵢ − Σⱼ xᵢⱼβⱼ)² }
= (2πσ²)^{-n/2} exp{ −(1/(2σ²)) (y − Xβ)′(y − Xβ) }
= (2πσ²)^{-n/2} exp{ −(1/(2σ²)) [y′y − 2β′X′y + β′X′Xβ] }
where X = (xᵢⱼ) is the n × p design matrix.

87 Thus f(y|β, σ²) is a member of the exponential family of distributions, and S = (y′y, X′y) is a Minimal Complete set of Sufficient Statistics.
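A sketch tying this to estimation (the design, coefficients, and noise level are made up): the least-squares / maximum-likelihood estimate of β depends on the data only through the sufficient statistics (y′y, X′y).

import numpy as np

rng = np.random.default_rng(5)
n, p = 100, 3
X = rng.normal(size=(n, p))              # stands in for x_ij = phi_j(u_i1, ..., u_ik)
beta = np.array([1.0, -2.0, 0.5])
y = X @ beta + rng.normal(scale=0.7, size=n)

XtX, Xty = X.T @ X, X.T @ y              # X'y is the data-dependent part of S
beta_hat = np.linalg.solve(XtX, Xty)     # beta_hat = (X'X)^{-1} X'y

rss = y @ y - beta_hat @ Xty             # uses y'y, the other component of S
sigma2_hat = rss / n                     # ML estimate of sigma^2
print(beta_hat, sigma2_hat)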

88 Hypothesis Testing

89 Defn (Test of size α) Let x = (x₁, x₂, x₃, …, xₙ) denote the vector of observations having joint density f(x|θ) where the unknown parameter vector θ ∈ Ω. Let ω be any subset of Ω. Consider testing the Null Hypothesis H₀: θ ∈ ω against the alternative hypothesis H₁: θ ∉ ω.

90 Let A denote the acceptance region for the test (all values x = (x₁, x₂, x₃, …, xₙ) of x such that the decision to accept H₀ is made), and let C denote the critical region for the test (all values x = (x₁, x₂, x₃, …, xₙ) of x such that the decision to reject H₀ is made). Then the test is said to be of size α if
max_{θ∈ω} P[x ∈ C|θ] = α

91 Defn (Power) Let x = (x₁, x₂, x₃, …, xₙ) denote the vector of observations having joint density f(x|θ) where the unknown parameter vector θ ∈ Ω. Consider testing the Null Hypothesis H₀: θ ∈ ω against the alternative hypothesis H₁: θ ∉ ω, where ω is any subset of Ω. Then the Power of the test at θ is defined to be:
Power(θ) = P[x ∈ C|θ]

92 Defn (Uniformly Most Powerful (UMP) test of size α) Let x = (x₁, x₂, x₃, …, xₙ) denote the vector of observations having joint density f(x|θ) where the unknown parameter vector θ ∈ Ω. Consider testing the Null Hypothesis H₀: θ ∈ ω against the alternative hypothesis H₁: θ ∉ ω, where ω is any subset of Ω. Let C denote the critical region for the test. Then the test is called the UMP test of size α if:

93 max_{θ∈ω} P[x ∈ C|θ] = α

94 and for any other critical region C* such that: max_{θ∈ω} P[x ∈ C*|θ] ≤ α, then P[x ∈ C*|θ] ≤ P[x ∈ C|θ] for all θ ∉ ω.

95 Theorem (Neyman–Pearson Lemma) Let x = (x₁, x₂, x₃, …, xₙ) denote the vector of observations having joint density f(x|θ) where the unknown parameter vector θ ∈ Ω = {θ₀, θ₁}. Consider testing the Null Hypothesis H₀: θ = θ₀ against the alternative hypothesis H₁: θ = θ₁. Then the UMP test of size α has critical region:
C = { x : f(x|θ₀) / f(x|θ₁) ≤ K }
where K is chosen so that P[x ∈ C|θ₀] = α.
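A worked sketch for Normal data with known σ = 1, testing H₀: θ = 0 against H₁: θ = 1 (all numbers below are illustrative choices): the ratio f(x|θ₀)/f(x|θ₁) is small exactly when the sample mean is large, so the UMP size-α test rejects when the sample mean exceeds a Normal quantile.

import numpy as np
from scipy import stats

n, alpha = 25, 0.05
theta0, theta1 = 0.0, 1.0

# f(x|theta0)/f(x|theta1) <= K  <=>  xbar >= c; pick c so P[xbar >= c | theta0] = alpha
c = theta0 + stats.norm.ppf(1 - alpha) / np.sqrt(n)

rng = np.random.default_rng(6)
x0 = rng.normal(theta0, 1.0, size=(100_000, n)).mean(axis=1)
x1 = rng.normal(theta1, 1.0, size=(100_000, n)).mean(axis=1)
print((x0 >= c).mean())   # ~alpha (the size of the test)
print((x1 >= c).mean())   # the power at theta1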

96 Defn (Likelihood Ratio Test of size α) Let x = (x₁, x₂, x₃, …, xₙ) denote the vector of observations having joint density f(x|θ) where the unknown parameter vector θ ∈ Ω. Consider testing the Null Hypothesis H₀: θ ∈ ω against the alternative hypothesis H₁: θ ∉ ω, where ω is any subset of Ω. Then the Likelihood Ratio (LR) test of size α has critical region:
C = { x : λ(x) = max_{θ∈ω} L_x(θ) / max_{θ∈Ω} L_x(θ) ≤ K }
where K is chosen so that max_{θ∈ω} P[x ∈ C|θ] = α.

97 Theorem (Asymptotic distribution of the Likelihood Ratio test criterion) Let x = (x₁, x₂, x₃, …, xₙ) denote the vector of observations having joint density f(x|θ) where the unknown parameter vector θ ∈ Ω. Consider testing the Null Hypothesis H₀: θ ∈ ω against the alternative hypothesis H₁: θ ∉ ω, where ω is any subset of Ω. Then, under suitable regularity conditions and under H₀, U = −2 ln λ(x) possesses an asymptotic Chi-square distribution with degrees of freedom equal to the difference between the number of independent parameters in Ω and ω.
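A simulation sketch of the theorem for one concrete model (chosen here for illustration): Normal data with unknown mean and variance, testing H₀: μ = μ₀. Ω has 2 free parameters and ω has 1, so −2 ln λ should be approximately Chi-square with 1 degree of freedom under H₀.

import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
n, mu0, reps = 50, 0.0, 20_000

x = rng.normal(mu0, 1.0, size=(reps, n))       # data generated under H0
s2_hat = x.var(axis=1)                         # MLE of sigma^2 under Omega (ddof=0)
s2_0 = ((x - mu0)**2).mean(axis=1)             # MLE of sigma^2 under omega

U = n * (np.log(s2_0) - np.log(s2_hat))        # -2 ln lambda for this model
print(np.quantile(U, [0.5, 0.9, 0.95]))
print(stats.chi2(df=1).ppf([0.5, 0.9, 0.95]))  # close for moderate n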

