Multiple Linear Regression - Matrix Formulation

Multiple Linear Regression - Matrix Formulation

Let x = (x_1, x_2, …, x_n)′ be an n × 1 column vector and let g(x) be a scalar function of x. Then, by definition,

  ∂g/∂x = ( ∂g/∂x_1, ∂g/∂x_2, …, ∂g/∂x_n )′

For example, let a = (a_1, a_2, …, a_n)′ be an n × 1 column vector of constants. It is easy to verify that

  ∂(a′x)/∂x = a

and that, for symmetric A (n × n),

  ∂(x′Ax)/∂x = 2Ax
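These two identities can be checked numerically. Below is a minimal sketch in R (the language used later in these slides), comparing a central finite-difference gradient against the closed forms; all values in it are illustrative, not from the slides.

    # Numerical check of d(a'x)/dx = a and d(x'Ax)/dx = 2Ax (A symmetric),
    # using central finite differences. Illustrative values only.
    a <- c(1, 2, 3)
    A <- matrix(c(2, 1, 0,
                  1, 3, 1,
                  0, 1, 4), 3, 3)   # symmetric 3 x 3
    x <- c(0.5, -1, 2)

    num_grad <- function(g, x, h = 1e-6) {
      sapply(seq_along(x), function(i) {
        e <- rep(0, length(x)); e[i] <- h
        (g(x + e) - g(x - e)) / (2 * h)
      })
    }

    num_grad(function(x) sum(a * x), x)                    # should print a
    num_grad(function(x) as.numeric(t(x) %*% A %*% x), x)  # should print 2Ax
    as.numeric(2 * A %*% x)                                # closed form, for comparison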

Theory of Multiple Regression

Suppose we have response variables Y_i, i = 1, 2, …, n, and k explanatory variables (predictors) X_1, X_2, …, X_k:

  Y_i = b_0 + b_1 X_i1 + b_2 X_i2 + … + b_k X_ik + ε_i,   i = 1, 2, …, n

There are k + 2 parameters: b_0, b_1, b_2, …, b_k and σ².

In matrix form this is Y = Xb + ε, where b = (b_0, b_1, …, b_k)′ and X is the n × (k + 1) matrix whose first column is all 1s and whose remaining columns hold the values of the predictors. X is called the design matrix.

OLS (ordinary least-squares) estimation: the estimates minimise the sum of squared errors (Y − Xb)′(Y − Xb), giving

  b = (X′X)⁻¹ X′Y
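As a minimal sketch of this formula in R (with simulated data, since it comes before the slides' own example):

    # OLS by the matrix formula, checked against lm(). Simulated data.
    set.seed(1)
    n  <- 50
    x1 <- runif(n); x2 <- runif(n)
    X  <- cbind(1, x1, x2)                    # design matrix with intercept column
    y  <- 2 + 3*x1 - x2 + rnorm(n, sd = 0.5)  # simulated responses
    b  <- solve(t(X) %*% X) %*% t(X) %*% y    # b = (X'X)^{-1} X'Y
    b
    coef(lm(y ~ x1 + x2))                     # agrees with b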

Fitted values are given by Ŷ = Xb = X(X′X)⁻¹X′Y = HY. H = X(X′X)⁻¹X′ is called the “hat matrix” (… it puts the hats on the Y’s).

The error sum of squares, SS_RES, is

  SS_RES = (Y − Ŷ)′(Y − Ŷ) = Y′Y − b′X′Y

The estimate of σ² is based on this: s² = SS_RES / (n − k − 1).
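Continuing the simulated-data sketch above, the hat matrix, fitted values, SS_RES and s² can be computed as:

    H    <- X %*% solve(t(X) %*% X) %*% t(X)  # hat matrix
    yhat <- H %*% y                           # fitted values (= X %*% b)
    rss  <- sum((y - yhat)^2)                 # SS_RES
    k    <- 2                                 # number of predictors
    s2   <- rss / (n - k - 1)                 # estimate of sigma^2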

Example: Find a model of the form y = b_0 + b_1 x_1 + b_2 x_2 for the data below. [The y, x_1, x_2 data table and several numeric results on the following slides did not survive transcription; lost values are shown as “…”. The x_1 and x_2 values can be read from the R input below.]

For this example the design matrix X has a column of 1s, a column of x_1 values and a column of x_2 values:

      ( 1  3.1  30 )
      ( 1  3.4  25 )
      ( 1  3.0  20 )
  X = ( 1  3.4  30 )
      ( 1  3.9  40 )
      ( 1  2.8  25 )
      ( 1  2.2  30 )

The model in matrix form is given by Y = Xb + ε. We have already seen that b = (X′X)⁻¹X′Y. Now calculate this for our example.

R can be used to calculate X′X, and the answer is: …

To input the matrix in R use:

    X = matrix(c(1,1,1,1,1,1,1,
                 3.1,3.4,3.0,3.4,3.9,2.8,2.2,
                 30,25,20,30,40,25,30),
               7, 3)   # 7 = number of rows, 3 = number of columns

X′X is then obtained with t(X) %*% X. Notice the command for matrix multiplication in R: %*% (a plain * would multiply element by element).

The inverse of X′X can also be obtained using R, with solve(t(X) %*% X).

We also need to calculate X′Y, using t(X) %*% Y. Now b = (X′X)⁻¹X′Y can be computed as solve(t(X) %*% X) %*% t(X) %*% Y.
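Putting the example's computations together in R: a sketch using the X entered above. The slide's y values were lost in transcription, so a placeholder vector is generated purely to make the code runnable; it is not the slides' data.

    X <- matrix(c(1,1,1,1,1,1,1,
                  3.1,3.4,3.0,3.4,3.9,2.8,2.2,
                  30,25,20,30,40,25,30), 7, 3)
    y <- rnorm(7)                  # placeholder only: the slides' y values were not captured
    XtX_inv <- solve(t(X) %*% X)   # (X'X)^{-1}
    XtY     <- t(X) %*% y          # X'Y
    b       <- XtX_inv %*% XtY     # least-squares estimates b0, b1, b2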

Notice that this is the same result as obtained previously using lm() in R.

So y = … + … x_1 + … x_2 + e.

The “hat matrix” is given by H = X(X′X)⁻¹X′; in R, X %*% solve(t(X) %*% X) %*% t(X).

The fitted Y values are obtained by Ŷ = HY (equivalently Ŷ = Xb).

Recall once more that we are looking at the model Y = Xb + ε.

Compare the fitted values Ŷ with the observed Y values.

Error Terms and Inference

A useful result is:

  SS_RES / σ² ~ χ²_(n−k−1)

where n is the number of points and k is the number of explanatory variables.

In addition we can show that each estimate b_i, standardised by its standard error, follows a t distribution with n − k − 1 degrees of freedom, where

  s.e.(b_i) = σ̂ √c_(i+1)(i+1)

and c_(i+1)(i+1) is the (i+1)th diagonal element of (X′X)⁻¹.
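Continuing the example sketch (still with the placeholder y), the standard errors and t statistics follow directly:

    n  <- 7; k <- 2
    s2 <- sum((y - X %*% b)^2) / (n - k - 1)  # estimate of sigma^2
    se <- sqrt(s2 * diag(XtX_inv))            # s.e.(b_i) = sigma_hat * sqrt(c_(i+1)(i+1))
    tstat <- b / se                           # t statistics on n - k - 1 = 4 df
    qt(0.975, df = 4)                         # 2.5% cut-off point, = 2.776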

For our example, (X′X)⁻¹ was calculated as: …

This means that c_11 = 6.683, c_22 = 0.7600 and c_33 = … . Note that c_11 is associated with b_0, c_22 with b_1 and c_33 with b_2. We will calculate the standard error for b_1. This is σ̂ × √0.7600 = … .

The value of b_1 is … . Now carry out a hypothesis test:

  H_0: b_1 = 0
  H_1: b_1 ≠ 0

The standard error of b_1 is … (as calculated above).

The test statistic is t = (b_1 − 0) / s.e.(b_1). This calculates as (… − 0)/… = 3.55.

t tables using 4 degrees of freedom give a cut-off point of 2.776 for 2.5%.

Since 3.55 > 2.776 we reject H_0 in favour of H_1: there is evidence at the 5% level that b_1 is not zero. The process can be repeated for the other b values, and confidence intervals calculated in the usual way.

A CI for σ², based on the χ²_4 distribution of SS_RES/σ² = 4s²/σ², is

  ( 4s²/11.14, 4s²/0.4844 )   i.e.  (0.030, 0.695)
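In R, the same interval can be formed with qchisq, continuing the example sketch; the 11.14 and 0.4844 on the slide are the upper and lower 2.5% points of χ²_4:

    df <- n - k - 1                         # 4 degrees of freedom here
    c(lower = df * s2 / qchisq(0.975, df),  # qchisq(0.975, 4) = 11.14
      upper = df * s2 / qchisq(0.025, df))  # qchisq(0.025, 4) = 0.4844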

The sum of squares of the residuals can also be calculated: SS_RES = (Y − Ŷ)′(Y − Ŷ) = Y′Y − b′X′Y.
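Continuing the sketch, SS_RES can be obtained in either of the two equivalent ways:

    rss1 <- sum((y - X %*% b)^2)                          # from the residuals directly
    rss2 <- as.numeric(t(y) %*% y - t(b) %*% t(X) %*% y)  # Y'Y - b'X'Y
    all.equal(rss1, rss2)                                 # TRUE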