1
Things to do in Lecture 1
Outline basic concepts of causality
Go through the ideas & principles underlying ordinary least squares (OLS) estimation
Derive formally how to estimate the values of the coefficients we are interested in using OLS
Run an OLS regression
2
DERIVING OLS REGRESSION COEFFICIENTS
Regression analysis is primarily concerned with quantifying the relationship between variables. It does more than measures of association such as the correlation coefficient, since it implies that one variable depends on another: there is a causal relationship between the variable whose behaviour we would like to explain - the dependent variable (which we label Y) - and the explanatory (or independent) variable that we think can explain this behaviour (which we label X)
3
Given data on both the dependent and explanatory variables we can estimate that causal relationship using regression analysis. How? Fit a straight line through the data that best summarises the relationship. Since the equation of a straight line is given by Y = b0 + b1X, we can obtain estimates of the values of the intercept b0 (the constant) and the slope b1 that together give the "line of best fit". How?
4
Suppose we plot a set of observations on the Y variable and the X variable of interest - a scatter diagram.
[Figure: scatter of Y against X, with candidate straight lines of differing intercepts (b0) and slopes (b1)]
Which line (and hence which slope & intercept) to choose? - The one that minimises the sum of squared residuals
5
Using the principle of Ordinary Least Squares (OLS)
- Try to fit a (straight) line through the data based on minimising the sum of squared residuals
What is a residual? If we fit a line through some data, this gives a predicted value for the dependent variable based on the value of the X variable and the values of the constant and the slope
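In symbols (a restatement of the definitions on this slide, using the same b0, b1 notation and a hat to denote fitted quantities):

```latex
\hat{Y}_i = b_0 + b_1 X_i \quad \text{(predicted value)}, \qquad
\hat{u}_i = Y_i - \hat{Y}_i \quad \text{(residual)}
```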
6
Given any straight line, we can always read off actual and predicted values of the variable of interest for every individual i in the data set.
[Figure: fitted line with intercept b0 and slope b1; for a given Xi, the actual value Yi and the predicted value on the line can both be read off]
7
For example, given the equation of a straight line
Y = b0 + b1X, we know that when X = 1 the predicted value of Y is Ŷ = b0 + b1, and when X = 2 the predicted value of Y is Ŷ = b0 + 2b1
8
We can then compare this predicted value with the actual value of the dependent variable; the difference between the actual and predicted value gives the residual û = Y - Ŷ, which, since Ŷ = b0 + b1X, gives û = Y - b0 - b1X
9
The difference between the actual and predicted value is the residual
[Figure: fitted line with intercept b0 and slope b1; at Xi the predicted value lies on the line and the actual value Yi lies off it - the vertical distance between them is the residual]
10
We can then compare this predicted value with the actual value of the dependent variable, and the difference between the actual and predicted value gives the residual ûi = Yi - Ŷi (where the i subscript refers to the ith individual or firm or time period in the data set)
11
Things to know about residuals
1. The larger the residual, the worse the prediction. So, intuitively, the line of best fit should be the one that delivers the smallest residual values for each observation in the data set
12
2. Since the residual is the difference between the actual and predicted value, a positive residual (actual larger than predicted) means the model under-predicts, and similarly a negative residual (predicted larger than actual) means the model over-predicts
13
Given this, suppose we tried to minimise the sum of all the residuals in the data in an attempt to get the line of best fit. Whilst this might seem intuitive, it will not work, because any positive residual can be offset by a negative residual in the summation, and so the sum could be close to zero even if the overall fit of the regression were poor
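A quick numerical illustration of this cancellation problem (the residual values below are made up purely for illustration):

```python
# Made-up residual values: two large errors of opposite sign sum to zero,
# but their squares do not - squaring reveals the poor fit.
residuals = [5.0, -5.0]

sum_of_residuals = sum(residuals)                          # 0.0  - looks "perfect"
sum_of_squared_residuals = sum(u ** 2 for u in residuals)  # 50.0 - reveals the poor fit

print(sum_of_residuals, sum_of_squared_residuals)
```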
14
We can avoid this problem if we use instead the principle of Ordinary Least Squares (OLS)
Rather than minimise the sum of residuals, minimise the sum of squared residuals
- Squaring ensures that values are always positive and so can never cancel each other out
- Squaring also gives more "weight" to larger residuals, so it is harder to get away with a poor fit (the larger any one residual in absolute value, the larger is the resulting sum of squared residuals (RSS))
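Written out, the quantity OLS chooses b0 and b1 to minimise is the sum of squared residuals (this anticipates the general formula derived below):

```latex
RSS = \sum_{i=1}^{N} \hat{u}_i^{2} = \sum_{i=1}^{N}\bigl(Y_i - b_0 - b_1 X_i\bigr)^{2}
```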
15
Things to do in Lecture 2
Derive formally how to estimate the values of the coefficients we are interested in using OLS
Run an OLS regression
Interpret the regression output
Measure how well the model fits the data
16
The Idea Behind Ordinary Least Squares (OLS)
[Figure: scatter of Y against X, with candidate straight lines of differing intercepts (b0) and slopes (b1)]
Which line (and hence which slope & intercept) to choose? - The one that minimises the sum of squared residuals
17
The difference between the actual and predicted value is the residual
[Figure: fitted line with intercept b0 and slope b1; at Xi the predicted value lies on the line and the actual value Yi lies off it - the vertical distance between them is the residual]
18
Consider the following simple example
N = 2 and we want to fit a straight line Y = b0 + b1X through the following data points using the principle of OLS (minimise the sum of squared residuals): (Y1 = 3, X1 = 1) and (Y2 = 5, X2 = 2). It follows that we can write the estimated residual for the 1st observation, using Ŷ1 = b0 + b1X1 = b0 + b1 and Y1 = 3, as û1 = 3 - b0 - b1, and similarly for the 2nd observation û2 = 5 - b0 - 2b1
19
OLS: minimise the sum of squared residuals
RSS = û1² + û2² = (3 - b0 - b1)² + (5 - b0 - 2b1)²
Expanding the terms in brackets:
RSS = 9 - 6b0 - 6b1 + b0² + 2b0b1 + b1² + 25 - 10b0 - 20b1 + b0² + 4b0b1 + 4b1²
Adding together like terms:
RSS = 34 - 16b0 - 26b1 + 2b0² + 5b1² + 6b0b1   (A)
20
Now we need to find the values of b0 and b1 which minimise this sum. Using the rules of calculus, we know the first order conditions for minimisation are:
∂RSS/∂b0 = 0  →  -16 + 4b0 + 6b1 = 0
∂RSS/∂b1 = 0  →  -26 + 6b0 + 10b1 = 0
21
This gives 2 simultaneous equations
which we can solve for the unknown values of b0 and b1 using the rules for simultaneous equations: 4b0 + 6b1 = 16 and 6b0 + 10b1 = 26, giving b0 = 1 and b1 = 2. So the estimated regression line becomes Ŷ = 1 + 2X, ie the intercept (constant) with the Y axis is at 1 and the slope of the straight line is 2
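As a cross-check (not part of the lecture derivation), the same answer comes out of a least-squares fit in numpy, assuming numpy is available:

```python
# Verify the N = 2 example: np.polyfit with degree 1 returns [slope, intercept].
import numpy as np

X = np.array([1.0, 2.0])
Y = np.array([3.0, 5.0])

slope, intercept = np.polyfit(X, Y, 1)
print(intercept, slope)  # expected: intercept = 1.0, slope = 2.0, ie Y = 1 + 2X
```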
22
Basic idea underlying OLS is to choose a “line of best fit”
Choose a straight line that passes through the data and minimises the sum of squared residuals. Now we need to do this more generally, so that we can apply the technique to any possible combination of (X, Y) data pairs and any number of observations
23
If we wish to fit a (straight) line through N (rather than 2) observations, then the OLS principle is still the same, ie choose b0 and b1 to minimise
RSS = Σ ûi²
(where now the summation runs from 1 to N rather than 1 to 2)
24
Sub. in ûi = Yi - b0 - b1Xi to get
RSS = Σ (Yi - b0 - b1Xi)²
This is just a generalised version of (A) above
25
Again, find the values of b0 and b1 which minimise this sum, using the same simple calculus rules:
∂RSS/∂b0 = 0 and ∂RSS/∂b1 = 0
26
Now these two (1st order) minimisation conditions give
(1) Σ Yi = N·b0 + b1 Σ Xi
(2) Σ XiYi = b0 Σ Xi + b1 Σ Xi²
and again we have 2 simultaneous equations (called the normal equations) which we can again solve for b0 and b1
27
Using the fact that the sample means of Y and X are Ȳ = (1/N) Σ Yi and X̄ = (1/N) Σ Xi,
we can re-write (1), dividing through by N, as Ȳ = b0 + b1X̄ and so obtain the formula to calculate the OLS estimate of the intercept
(3) b0 = Ȳ - b1X̄   (*** learn this ***)
28
Sub. (3) into (2) gives
Σ XiYi = (Ȳ - b1X̄) Σ Xi + b1 Σ Xi²
and simplifying, collecting terms in b1:
Σ XiYi - Ȳ Σ Xi = b1 (Σ Xi² - X̄ Σ Xi)
29
Dividing both sides by N and re-arranging gives the formula to calculate the OLS estimate of the slope
b1 = [ (1/N) Σ XiYi - X̄Ȳ ] / [ (1/N) Σ Xi² - X̄² ] = Cov(X, Y) / Var(X)
or equivalently b1 = Σ (Xi - X̄)(Yi - Ȳ) / Σ (Xi - X̄)²   (**** learn this ****)
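As a check, applying these formulas to the two-observation example used earlier (X = 1, 2 and Y = 3, 5) reproduces the answer obtained from the calculus derivation:

```latex
\bar{X} = 1.5, \quad \bar{Y} = 4, \qquad
\hat{b}_1 = \frac{(1-1.5)(3-4) + (2-1.5)(5-4)}{(1-1.5)^2 + (2-1.5)^2} = \frac{1}{0.5} = 2,
\qquad \hat{b}_0 = 4 - 2(1.5) = 1
```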
30
So b0 = Ȳ - b1X̄ and b1 = Cov(X, Y) / Var(X) are how the computer determines the size of the intercept and the slope respectively in an OLS regression. The OLS equations also give a nice, clear intuitive meaning to the influence of the variable X on the size of the slope, since they show that:
i) the greater the covariance between X and Y, the larger the (absolute value of) the slope
ii) the smaller the variance of X, the larger the (absolute value of) the slope
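The sketch below applies the two formulas exactly as derived above; the X and Y values are made up purely for illustration, and np.polyfit is used only as an independent check:

```python
# OLS "by hand" using the formulas derived above:
#   b1_hat = Cov(X, Y) / Var(X)      b0_hat = Ybar - b1_hat * Xbar
import numpy as np

# hypothetical data, chosen only for illustration
X = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
Y = np.array([2.1, 3.9, 6.2, 8.1, 9.8])

Xbar, Ybar = X.mean(), Y.mean()
b1_hat = np.sum((X - Xbar) * (Y - Ybar)) / np.sum((X - Xbar) ** 2)
b0_hat = Ybar - b1_hat * Xbar

print(b0_hat, b1_hat)
# np.polyfit(X, Y, 1) returns [slope, intercept] and should agree
print(np.polyfit(X, Y, 1))
```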
31
It is equally important to be able to interpret the effect of an estimated regression coefficient
Given that OLS essentially passes a straight line through the data, then, given Ŷ = b0 + b1X, it follows that ΔŶ/ΔX = b1. So the OLS estimate of the slope will give an estimate of the unit change in the dependent variable Y following a unit change in the level of the explanatory variable X (so you need to be aware of the units of measurement of your variables in order to be able to interpret what the OLS coefficient is telling you). For example, if Y were earnings measured in £ and X were years of schooling, a slope of 2 would say that one extra year of schooling is associated with £2 higher earnings.