1
EC339: Lecture 7, Chapter 4-5: Analytical Solutions to OLS
2
The Linear Regression Model
Postulate: The dependent variable, Y, is a function of the explanatory variable, X, or Yi = ƒ(Xi).
However, the relationship is not deterministic: the value of Y is not completely determined by the value of X.
Thus, we incorporate an error term (residual) into the model, which gives a statistical relationship: Yi = ƒ(Xi) + ui.
3
The Simple Linear Regression Model (SLR)
Remember we are trying to predict Y for a given X. We assume a relationship that is linear in the parameters (the betas), holding all else equal (ceteris paribus).
To account for our error in prediction, we add an error term to the model. If Y is a linear function of X, the errors are typically written as u (or epsilon) and represent anything else that might cause a deviation between actual and predicted values.
We are interested in determining the intercept (β0) and slope (β1).
4
SLR Uses Multivariate Expectations
Univariate distributions: means, variances, standard deviations.
Multivariate distributions: correlation, covariance; marginal, joint, and conditional probabilities.
Key tools: the joint probability density function, the marginal probability density function, the conditional probability density function, and the conditional expectation.
5
Joint Distributions Joint Distribution Probability Density Functions
Now we want to consider how Y and X are distributed when considered together.
Independence: when the outcomes of X and Y have no influence on one another, the joint probability density function equals the product of the marginal probability density functions.
Think of binomial distributions: each trial is independent and has no effect on the subsequent trial. Also, think of a marginal distribution much like a histogram of a single variable.
6
Conditional Distributions
Conditional probability density functions: now we want to consider how Y is distributed given a certain value of X.
The conditional probability of Y occurring given X equals the joint probability of X and Y divided by the marginal probability of X occurring in the first place.
Independence: if X and Y are independent, the conditional distribution reduces to the marginal distribution, just as if knowing X provides no new information.
A joint probability is like finding the probability of a "high school graduate" with an hourly wage between $8 and $10 when looking at education and wage data.
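In symbols, the conditional density is the joint density divided by the marginal density of the conditioning variable:

f_{Y|X}(y \mid x) = \frac{f_{X,Y}(x, y)}{f_X(x)}, \qquad f_X(x) > 0

and under independence f_{X,Y}(x, y) = f_X(x)\, f_Y(y), so f_{Y|X}(y \mid x) = f_Y(y): conditioning on X adds no information about Y.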
7
Discrete Bivariate Distributions—Joint Probability Function
For example, assume we flip a coin 3 times, recording the number of heads (H).
X = number of heads on the last (3rd) flip; Y = total number of heads in three flips.
S = {HHH, HHT, HTH, HTT, THH, THT, TTH, TTT}
X takes on the values {0, 1}; Y takes on the values {0, 1, 2, 3}.
There are 8 possible different joint outcomes: (X = 0, Y = 0), (X = 0, Y = 1), (X = 0, Y = 2), (X = 0, Y = 3), (X = 1, Y = 0), (X = 1, Y = 1), (X = 1, Y = 2), (X = 1, Y = 3).
Attaching a probability to each of the different joint outcomes gives us a discrete bivariate probability distribution, or joint probability function. Thus, ƒ(x, y) gives us the probability that the random variables X and Y assume the joint outcome (x, y).
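A short Python sketch, purely illustrative, that tabulates this joint probability function by enumerating the eight equally likely flip sequences:

from itertools import product
from collections import Counter
from fractions import Fraction

# Enumerate all 2^3 equally likely sequences of three fair-coin flips.
outcomes = list(product("HT", repeat=3))

# X = 1 if the 3rd flip is heads, Y = total number of heads in the sequence.
counts = Counter((int(seq[2] == "H"), sum(flip == "H" for flip in seq)) for seq in outcomes)

# Each sequence has probability 1/8, so f(x, y) = count / 8.
for (x, y), c in sorted(counts.items()):
    print(f"f(X={x}, Y={y}) = {Fraction(c, 8)}")

Joint outcomes that never occur, such as (X = 1, Y = 0), simply do not appear in the tabulation, so their joint probability is zero.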
8
Properties of Covariance
Covariance is defined whether X and Y are discrete or continuous; the standard expressions are written out below. If X and Y are independent, then cov(X, Y) = 0.
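Written out, the standard definitions behind these statements are:

\mathrm{cov}(X, Y) = E\big[(X - \mu_X)(Y - \mu_Y)\big]
= \sum_x \sum_y (x - \mu_X)(y - \mu_Y)\, f_{X,Y}(x, y) \quad \text{(discrete case)}
= \int\!\!\int (x - \mu_X)(y - \mu_Y)\, f_{X,Y}(x, y)\, dx\, dy \quad \text{(continuous case)}

and when X and Y are independent, E(XY) = E(X)E(Y), which forces cov(X, Y) = 0.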
9
Properties of Conditional Expectations
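For reference, the standard properties of conditional expectations used in what follows are:

E\big[c(X) \mid X\big] = c(X) \quad \text{for any function } c(X)
E\big[a(X)\,Y + b(X) \mid X\big] = a(X)\,E[Y \mid X] + b(X)
X, Y \text{ independent} \;\Rightarrow\; E[Y \mid X] = E[Y]
E\big[E[Y \mid X]\big] = E[Y] \quad \text{(law of iterated expectations)}
E[Y \mid X] = E[Y] \;\Rightarrow\; \mathrm{cov}(X, Y) = 0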
10
The Linear Regression Model
Ceteris paribus: all else held equal. Conditional expectations can be linear or nonlinear; we will only examine linear functions here.
11
The Linear Regression Model
For any given level of X, many possible values of Y can exist.
If Y is a linear function of X, then Yi = β0 + β1Xi + ui.
The error u represents the deviation between the actual value of Y and the predicted value of Y (that is, β0 + β1Xi).
We are interested in determining the intercept (β0) and slope (β1).
12
The Simple Linear Regression Model (SLR)
Thus, what we are looking for is the conditional expectation of Y given values of X, E(Y|X). This is what we have called Y-hat thus far. We are trying to predict values of Y given values of X, and to do this we must hold all other factors fixed (ceteris paribus).
13
The Simple Linear Regression Model (SLR)
Linear population regression function. We can assume that the expected value of the error term is zero: if it were not, we could make it zero simply by adjusting the intercept. By itself this says nothing about how X and the errors are related. If u and X are unrelated linearly, their correlation equals zero, but zero correlation is not sufficient, since they could be related nonlinearly. The conditional expectation gives the stronger condition we need, because it restricts the mean of u at every value of X. This is the zero conditional mean assumption.
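Putting the two pieces together, the zero conditional mean assumption and the linear population regression function it delivers are:

E(u \mid x) = E(u) = 0 \quad\Longrightarrow\quad E(y \mid x) = \beta_0 + \beta_1 x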
14
The Linear Regression Model: Assumptions
Several assumptions must be made about the random error term.
The mean error is zero, or E(ui) = 0: errors above and below the regression line tend to balance out.
Errors can arise from human behavior (which may be unpredictable), from the many explanatory variables left out of the model, and from imperfect measurement of the dependent variable.
15
The Simple Linear Regression Model (SLR)
Beginning with the simple linear regression, taking conditional expectations, and using our current assumptions gives us the population regression function. (Notice there are no hats over the betas, and that y equals the predicted value plus an error.)
16
The Linear Regression Model
The regression model asserts that the expected value of Y is a linear function of X: E(Yi) = β0 + β1Xi, known as the population regression function.
From a practical standpoint, not all of a population's observations are available, so we typically estimate the slope and intercept using sample data.
17
The Simple Linear Regression Model (SLR)
Knowing that E[u|x] = 0, we can derive two further population conditions (shown below). We now have two equations in two unknowns, the betas. This is how the method of moments estimator is constructed.
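The two population conditions implied by E[u|x] = 0 are E(u) = 0 and E(xu) = 0; substituting u = y − β0 − β1x gives the two equations in the two unknown betas:

E\big(y - \beta_0 - \beta_1 x\big) = 0
E\big[x\,(y - \beta_0 - \beta_1 x)\big] = 0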
18
The Linear Regression Model: Assumptions
Additional assumptions are necessary to develop confidence intervals and perform hypothesis tests.
var(ui) = σu² for all i: errors are drawn from a distribution with a constant variance (heteroskedasticity exists if this assumption fails).
ui and uj are independent: one observation's error does not influence another observation's error, so the errors are uncorrelated; Cov(ui, uj) = 0 for all i ≠ j (serial correlation of the errors exists if this assumption fails).
19
The Linear Regression Model: Assumptions
Cov(Xi, ui) = 0 for all i: the error term is uncorrelated with the explanatory variable, X.
ui ~ N(0, σu²): the error term follows a normal distribution.
21
Ordinary Least Squares-Fit
24
Estimation (Three Ways-We will not discuss Maximum Likelihood)
We need a formal method to determine the line that "fits" the data well: the distance of the line from the observations should be minimized.
Let Ŷi = β̂0 + β̂1Xi. The deviation of an observation from the line is the estimated error, or residual: ûi = Yi − Ŷi.
25
Ordinary Least Squares
The most popular method, known as ordinary least squares, is designed to minimize the magnitude of the estimated residuals by selecting the estimated slope and estimated intercept that minimize the sum of the squared errors.
26
Ordinary Least Squares—Minimize Sum of Squared Errors
Identifying the parameters (estimated slope and estimated y-intercept) that minimize the sum of the squared errors is a standard optimization problem in multivariable calculus: take first derivatives with respect to the estimated slope coefficient and the estimated y-intercept coefficient, set both equal to zero, and solve the two resulting equations.
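Written out, the minimization problem and its two first-order conditions are:

\min_{\hat\beta_0,\,\hat\beta_1}\; \sum_{i=1}^{n}\big(y_i - \hat\beta_0 - \hat\beta_1 x_i\big)^2

-2\sum_{i=1}^{n}\big(y_i - \hat\beta_0 - \hat\beta_1 x_i\big) = 0 \qquad \text{(derivative with respect to } \hat\beta_0\text{)}
-2\sum_{i=1}^{n} x_i\big(y_i - \hat\beta_0 - \hat\beta_1 x_i\big) = 0 \qquad \text{(derivative with respect to } \hat\beta_1\text{)}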
27
Ordinary Least Squares
28
Ordinary Least Squares-Derived
29
Ordinary Least Squares
This results in the normal equations, which suggest an estimator for the intercept (shown below). The means of X and Y always lie on the regression line.
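A standard way to write this intercept estimator, using the first normal equation, is:

\hat\beta_0 = \bar{y} - \hat\beta_1\,\bar{x}

which is exactly the statement that the point of means (x̄, ȳ) lies on the fitted line.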
30
Ordinary Least Squares
Substituting this into the second normal equation yields an estimator for the slope of the line: β̂1 = Σ(xi − x̄)(yi − ȳ) / Σ(xi − x̄)². No other estimators will result in a smaller sum of squared errors.
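A minimal numerical sketch of the two estimators; the data here are made up purely for illustration:

import numpy as np

# Hypothetical sample data, for illustration only.
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([2.1, 2.9, 4.2, 4.8, 6.1])

# Slope: sum of cross-deviations over sum of squared deviations of x.
beta1_hat = np.sum((x - x.mean()) * (y - y.mean())) / np.sum((x - x.mean()) ** 2)

# Intercept: forces the fitted line through the point of means.
beta0_hat = y.mean() - beta1_hat * x.mean()

# The normal equations: residuals sum to zero and are uncorrelated with x,
# up to floating-point error.
residuals = y - (beta0_hat + beta1_hat * x)
print(beta0_hat, beta1_hat)
print(residuals.sum(), (x * residuals).sum())

Any standard routine, for example np.polyfit(x, y, 1), should reproduce the same slope and intercept.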
31
SLR Assumption 1 Linear in Parameters SLR.1
Defines the population model: the dependent variable y is related to the independent variable x and the error (or disturbance) u as y = β0 + β1x + u (SLR.1). β0 and β1 are population parameters.
32
SLR Assumption 2 Random Sampling
Use a random sample of size n, {(xi, yi): i = 1, 2, …, n}, from the population model. This lets us restate SLR.1 for each observation: yi = β0 + β1xi + ui. We want to use data to estimate the parameters; β0 and β1 are population parameters to be estimated.
33
SLR Assumption 3 Sample variation in independent variable
The sample values of x must vary: the sample variance of x cannot equal zero.
34
SLR Assumption 4 Zero Conditional Mean
E(u|x) = 0. For a random sample, the implication is that no independent variable is correlated with any unobservable (remember, the error collects everything unobserved).
35
SLR Theorem 1: Unbiasedness of OLS. The estimators equal the population values in expectation, E(β̂0) = β0 and E(β̂1) = β1. This holds because x and u are assumed to be unrelated, so in expectation our estimator equals the actual value of beta.
36
SLR Theorem 1 (continued): Unbiasedness of OLS. Because the expected value of the error term is zero, the intercept estimator is also unbiased, so in expectation the estimators equal the actual values of the betas.
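A compressed version of the argument, conditioning on the sample x values:

\hat\beta_1 = \beta_1 + \frac{\sum_i (x_i - \bar{x})\,u_i}{\sum_i (x_i - \bar{x})^2} \;\Rightarrow\; E(\hat\beta_1 \mid x) = \beta_1 \quad\text{since } E(u_i \mid x) = 0

\hat\beta_0 = \bar{y} - \hat\beta_1\,\bar{x} = \beta_0 + (\beta_1 - \hat\beta_1)\,\bar{x} + \bar{u} \;\Rightarrow\; E(\hat\beta_0 \mid x) = \beta_0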
37
SLR Assumption 5 Homoskedasticity
The variance of the errors does not depend on the value of x: Var(u|x) = σ².
38
Method of Moments. Seeks to equate the moments implied by a statistical model of the population distribution to the actual moments found in the sample. Certain restrictions are implied in the population: E(u) = 0 and Cov(Xi, uj) = 0 for all i, j. This results in the same estimators as the least squares method (see below).
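Replacing the two population restrictions with their sample counterparts gives exactly the OLS normal equations:

\frac{1}{n}\sum_{i=1}^{n}\big(y_i - \hat\beta_0 - \hat\beta_1 x_i\big) = 0 \qquad
\frac{1}{n}\sum_{i=1}^{n} x_i\big(y_i - \hat\beta_0 - \hat\beta_1 x_i\big) = 0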
39
Interpretation of the Regression Slope Coefficient
The coefficient β1 tells us the effect X has on Y: increasing X by one unit will change the mean value of Y by β1 units.
40
Units of Measurement and Regression Coefficients
The magnitude of regression coefficients depends on the units in which the dependent and explanatory variables are measured. For example, measuring the explanatory variable in cents rather than dollars makes its coefficient 100 times smaller. Changing the units of both Y and X by the same factor will not affect the slope, although it will change the y-intercept.
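A small sketch of these unit effects, reusing the made-up data from the earlier example (np.polyfit returns the slope first, then the intercept):

import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])   # hypothetical data, say in dollars
y = np.array([2.1, 2.9, 4.2, 4.8, 6.1])

slope, intercept = np.polyfit(x, y, 1)

# Measuring x in cents instead of dollars divides the slope by 100.
slope_cents, _ = np.polyfit(100 * x, y, 1)

# Rescaling BOTH variables by the same factor leaves the slope unchanged
# but rescales the intercept by that factor.
slope_both, intercept_both = np.polyfit(10 * x, 10 * y, 1)

print(slope, slope_cents * 100)        # equal
print(slope, slope_both)               # equal
print(intercept * 10, intercept_both)  # equal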
41
Models Including Logarithms
For a log-linear model, the slope represents the proportionate (percentage-like) change in Y arising from a unit change in X; the regression coefficient is the semi-elasticity of Y with respect to X.
For a log-log model, the slope represents the proportionate change in Y arising from a proportionate change in X; the regression coefficient is the elasticity of Y with respect to X. This is the constant elasticity model.
For a linear-log model, the slope is the unit change in Y arising from a proportionate change in X.
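In equations, the three functional forms and the usual approximate interpretation of β1 are:

\log(y) = \beta_0 + \beta_1 x + u: \quad \%\Delta y \approx (100\,\beta_1)\,\Delta x \quad\text{(semi-elasticity)}
\log(y) = \beta_0 + \beta_1 \log(x) + u: \quad \%\Delta y \approx \beta_1\,(\%\Delta x) \quad\text{(elasticity)}
y = \beta_0 + \beta_1 \log(x) + u: \quad \Delta y \approx (\beta_1 / 100)\,(\%\Delta x)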
42
Regression in Excel. Step 1: Reorganize the data so that the variables are in adjacent columns. Step 2: Data Analysis → Regression.
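For readers working outside Excel, the same kind of regression can be run in Python; the file and column names below are placeholders for whatever dataset the example uses.

import pandas as pd
import statsmodels.api as sm

# Hypothetical file and variable names; substitute your own data.
df = pd.read_csv("ceosal2.csv")

X = sm.add_constant(df["ceoten"])       # adds the intercept column
model = sm.OLS(df["lsalary"], X).fit()  # log(salary) regressed on CEO tenure

print(model.summary())                  # coefficients, t-stats, p-values, R-squared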
43
Regression in Excel-Ex. 2.11
44
Regression in Excel
46
Regression in Excel. The t-statistics show that the coefficient on ceoten is insignificant at the 5% level. The p-value for ceoten is about 0.13, which is greater than .05, meaning a t-statistic this large would be seen roughly 13% of the time if the true coefficient were zero. Each t-statistic inherently tests the null hypothesis that the corresponding coefficient equals zero; you fail to reject that null hypothesis for β1 here.