
1 CIS 2033 based on Dekking et al
CIS 2033, based on Dekking et al., A Modern Introduction to Probability and Statistics. Slides by Kier Heilman. Instructor: Longin Jan Latecki. C22: The Method of Least Squares

2 22.1 – Least Squares Consider the random variables Yi = α + βxi + Ui for i = 1, 2, …, n, where the random variables U1, U2, …, Un have zero expectation and variance σ². Method of least squares: choose values for α and β such that S(α, β) = Σi (yi − α − βxi)² is minimal.
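A minimal Python sketch (not from the slides) of the objective S(α, β); the data arrays x and y are hypothetical:

```python
import numpy as np

def S(alpha, beta, x, y):
    # Sum of squared deviations: S(alpha, beta) = sum_i (y_i - alpha - beta*x_i)^2.
    return np.sum((y - alpha - beta * x) ** 2)

# Hypothetical data for illustration.
x = np.array([1.0, 2.0, 3.0, 4.0])
y = np.array([2.1, 3.9, 6.2, 7.8])

print(S(0.0, 2.0, x, y))  # the objective evaluated at alpha = 0, beta = 2
```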

3 22.1 – Regression Figure: the observed value yi corresponding to xi, and the value α + βxi on the regression line y = α + βx.

4 22.1 – Estimation After some calculus magic (setting the partial derivatives of S(α, β) to zero), we have the following two simultaneous equations to estimate α and β:
nα + β Σi xi = Σi yi
α Σi xi + β Σi xi² = Σi xi yi
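A short sketch (reusing the hypothetical x and y from above) that solves these two normal equations as a 2×2 linear system:

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0])
y = np.array([2.1, 3.9, 6.2, 7.8])
n = len(x)

# The two normal equations written as A @ [alpha, beta] = b.
A = np.array([[n,       x.sum()],
              [x.sum(), (x ** 2).sum()]])
b = np.array([y.sum(), (x * y).sum()])

alpha_hat, beta_hat = np.linalg.solve(A, b)
print(alpha_hat, beta_hat)
```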

5 22.1 – Estimation After some simple algebraic rearranging, we solve the equations for α and β:
β̂ = (n Σi xi yi − (Σi xi)(Σi yi)) / (n Σi xi² − (Σi xi)²) (slope)
α̂ = ȳ − β̂x̄ (intercept)
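The same estimates in closed form, as a minimal sketch with the hypothetical data above:

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0])
y = np.array([2.1, 3.9, 6.2, 7.8])
n = len(x)

# Slope: (n*sum(x*y) - sum(x)*sum(y)) / (n*sum(x^2) - sum(x)^2)
beta_hat = (n * (x * y).sum() - x.sum() * y.sum()) / (n * (x ** 2).sum() - x.sum() ** 2)
# Intercept: mean(y) - beta_hat * mean(x)
alpha_hat = y.mean() - beta_hat * x.mean()

print(alpha_hat, beta_hat)  # matches np.linalg.solve above
```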

6 22.1 – Least Squares Estimators Are Unbiased
The least squares estimators for α and β are unbiased. For the simple linear regression model, the random variable
σ̂² = (1 / (n − 2)) Σi (Yi − α̂ − β̂xi)²
is an unbiased estimator for σ².
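A quick simulation sketch (all parameters illustrative) checking that the n − 2 divisor makes σ̂² unbiased:

```python
import numpy as np

rng = np.random.default_rng(0)
alpha, beta, sigma2, n = 1.0, 2.0, 0.5, 20
x = np.linspace(0.0, 1.0, n)

estimates = []
for _ in range(10_000):
    y = alpha + beta * x + rng.normal(0.0, np.sqrt(sigma2), n)
    b = (n * (x * y).sum() - x.sum() * y.sum()) / (n * (x ** 2).sum() - x.sum() ** 2)
    a = y.mean() - b * x.mean()
    estimates.append(((y - a - b * x) ** 2).sum() / (n - 2))

print(np.mean(estimates))  # close to the true sigma2 = 0.5
```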

7 22.2 – Residuals Residual: the vertical distance between the ith point and the estimated regression line, ri = yi − α̂ − β̂xi. The sum of the residuals is zero: Σi ri = 0.
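Continuing the hypothetical example, a sketch verifying that the residuals sum to (numerically) zero:

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0])
y = np.array([2.1, 3.9, 6.2, 7.8])
n = len(x)

beta_hat = (n * (x * y).sum() - x.sum() * y.sum()) / (n * (x ** 2).sum() - x.sum() ** 2)
alpha_hat = y.mean() - beta_hat * x.mean()

# Residuals r_i = y_i - alpha_hat - beta_hat * x_i.
residuals = y - alpha_hat - beta_hat * x
print(residuals.sum())  # ~0, up to floating-point rounding
```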

8 22.2 – Heteroscedasticity Homoscedasticity: the assumption that the Ui (and therefore the Yi) all have equal variance. Heteroscedasticity occurs when this assumption fails; for instance, when Yi with a large expected value have a larger variance than those with small expected values.
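A minimal sketch (illustrative parameters) generating homoscedastic versus heteroscedastic data:

```python
import numpy as np

rng = np.random.default_rng(1)
x = np.linspace(1.0, 10.0, 200)

# Homoscedastic: the noise variance is constant across all x.
y_homo = 1.0 + 2.0 * x + rng.normal(0.0, 1.0, x.size)

# Heteroscedastic: the noise standard deviation grows with E[Y_i].
y_hetero = 1.0 + 2.0 * x + rng.normal(0.0, 0.5 * x, x.size)
```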

9 22.3 – Relation with Maximum Likelihood
What are the maximum likelihood estimates for α and β? To apply the method of least squares, no assumption is needed about the type of distribution of the Ui. In case the type of distribution of the Ui is known, the maximum likelihood principle can be applied. Consider, for instance, the classical situation where the Ui are independent with an N(0, σ²) distribution. Then Yi has an N(α + βxi, σ²) distribution, with probability density function
fi(y) = (1 / (σ√(2π))) e^(−(y − α − βxi)² / (2σ²)).

10 22.3 – Maximum Likelihood For fixed σ > 0 the loglikelihood ℓ(α, β, σ) attains its maximum when Σi (yi − α − βxi)² is minimal. Hence, when the random variables Ui are independent with an N(0, σ²) distribution, the maximum likelihood principle and the least squares method yield the same estimators for α and β. The maximum likelihood estimator for σ² is
σ̂² = (1/n) Σi (yi − α̂ − β̂xi)².
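A short sketch (same hypothetical data) comparing the maximum likelihood estimate of σ² with the unbiased n − 2 version from Section 22.1:

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0])
y = np.array([2.1, 3.9, 6.2, 7.8])
n = len(x)

beta_hat = (n * (x * y).sum() - x.sum() * y.sum()) / (n * (x ** 2).sum() - x.sum() ** 2)
alpha_hat = y.mean() - beta_hat * x.mean()
ssr = ((y - alpha_hat - beta_hat * x) ** 2).sum()  # sum of squared residuals

print(ssr / n)        # maximum likelihood estimate of sigma^2
print(ssr / (n - 2))  # unbiased estimate from Section 22.1
```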

