Correlation 2: Computations, and the best fitting line.


Computing r from a more realistic set of data: A study was performed to investigate whether the quality of an image affects reading time. The experimental hypothesis was that reduced quality would slow down reading time. Quality was measured on a scale of 1 to 10. Reading time was in seconds.

Quality vs. reading time data: compute the correlation. [Table of seven paired scores, Quality (scale 1-10) by Reading time (seconds); the raw values did not survive transcription.] Is there a relationship? Check for linearity. Compute r.

Calculate t scores for X (quality). ΣX = 39.25 and n = 7, so X-bar = 39.25/7 = 5.61. For each score, compute the deviation X - X-bar. SS_W = Σ(X - X-bar)² = 4.73, so MS_W = SS_W/(n - 1) = 4.73/6 = 0.79 and s_X = √0.79 = 0.89. Each t score is then t_X = (X - X-bar)/s_X.

Calculate t scores for Y (reading time). ΣY = 52.5 and n = 7, so Y-bar = 52.5/7 = 7.50. SS_W = Σ(Y - Y-bar)² = 3.78, so MS_W = 3.78/6 = 0.63 and s_Y = √0.63 = 0.79. Each t score is then t_Y = (Y - Y-bar)/s_Y.
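To make the two t score slides concrete, here is a minimal Python sketch. The seven raw data pairs did not survive transcription, so the quality and reading_time arrays below are hypothetical stand-ins; the t_scores function itself follows the slides' formula.

```python
import numpy as np

# Hypothetical stand-ins for the slide's seven data pairs (the raw
# values were lost in transcription; only the summary statistics survive).
quality = np.array([5.0, 4.5, 6.0, 5.5, 6.5, 5.75, 6.0])
reading_time = np.array([8.5, 9.0, 7.0, 7.5, 6.5, 7.25, 7.0])

def t_scores(scores):
    """t score = (score - mean) / s, with s = sqrt(SS_W / (n - 1))."""
    scores = np.asarray(scores, dtype=float)
    dev = scores - scores.mean()                  # X - X-bar
    ms_w = np.sum(dev ** 2) / (len(scores) - 1)   # MS_W = SS_W / (n - 1)
    return dev / np.sqrt(ms_w)                    # t = (X - X-bar) / s

t_x = t_scores(quality)        # t scores for X (quality)
t_y = t_scores(reading_time)   # t scores for Y (reading time)
```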

Plot the t scores: t_Y (vertical axis) against t_X (horizontal axis). [Scatter plot not preserved in the transcript.]

t score plot with best fitting line: linear? YES!

Calculate r. For each pair, compute t_X - t_Y and square it. Σ(t_X - t_Y)² = 21.48, so Σ(t_X - t_Y)²/(n_P - 1) = 21.48/6 = 3.580. Then r = 1 - (1/2 × 3.580) = 1 - 1.790 = -0.790.
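Continuing the sketch above, the same computation in code, with a sanity check: the slides' t score formula for r is algebraically identical to the usual Pearson correlation coefficient.

```python
n_p = len(quality)                   # n_P = number of pairs
ss_diff = np.sum((t_x - t_y) ** 2)   # sum of (t_X - t_Y)^2
r = 1 - 0.5 * ss_diff / (n_p - 1)    # r = 1 - (1/2) * [sum / (n_P - 1)]

# Check against the standard Pearson r; the two always agree.
assert np.isclose(r, np.corrcoef(quality, reading_time)[0, 1])
```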

Best fitting line

The definition of the best fitting line plotted on t axes: A "best fitting line" minimizes the average squared vertical distance of the Y scores in the sample (expressed as t_Y scores) from the line. The best fitting line is a least squares, unbiased estimate of the values of Y in the sample. The generic formula for a line is Y = mX + b, where m is the slope and b is the Y intercept. Thus, any specific line, such as the best fitting line, can be defined by its slope and its intercept. (A small check of the least squares property appears below.)
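To make "minimizes the average squared vertical distance" concrete, here is a small sketch (again with hypothetical data) showing that nudging the least squares line in any direction only increases that distance:

```python
import numpy as np

x = np.array([4.0, 5.0, 5.5, 6.0, 6.25, 6.5, 6.0])   # hypothetical X scores
y = np.array([8.5, 8.0, 7.5, 7.25, 7.0, 6.75, 7.5])  # hypothetical Y scores

m, b = np.polyfit(x, y, 1)   # least squares slope m and intercept b

def avg_sq_dist(slope, intercept):
    """Average squared vertical distance of the Y scores from the line."""
    return np.mean((y - (slope * x + intercept)) ** 2)

# Any perturbation of the best fitting line does worse (or no better).
for dm, db in [(0.1, 0.0), (-0.1, 0.0), (0.0, 0.1), (0.0, -0.1)]:
    assert avg_sq_dist(m, b) <= avg_sq_dist(m + dm, b + db)
```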

The intercept of the best fitting line plotted on t axes: The origin is the point where both t_X and t_Y = 0.000, so the origin represents the mean of both the X and the Y variable. When plotted on t axes, all best fitting lines go through the origin. Thus, the t_Y intercept of the best fitting line = 0.000.

The slope of, and formula for, the best fitting line: When plotted on t axes, the slope of the best fitting line = r, the correlation coefficient. To define a line we need its slope and Y intercept; here r = the slope and the t_Y intercept = 0.000. The formula for the best fitting line is therefore t_Y = r·t_X.
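In code, the best fitting line on t axes is just multiplication by r; note what happens when r = 0.000 (a sketch):

```python
import numpy as np

def predict_t_y(r, t_x):
    """Best fitting line on t axes: t_Y = r * t_X (intercept = 0.000)."""
    return r * np.asarray(t_x, dtype=float)

print(predict_t_y(0.79, [1.0, -0.5]))  # predictions pulled toward the line
print(predict_t_y(0.00, [1.0, -0.5]))  # all 0.000: the mean of Y
```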

Here's how a visual representation of the best fitting line (slope = r, Y intercept = 0.000) and the dots representing t_X and t_Y scores might be described. (Whether the correlation is positive or negative doesn't matter.) Perfect - scores fall exactly on a straight line. Strong - most scores fall near the line. Moderate - some are near the line, some not. Weak - the scores are only mildly linear. Independent - the scores are not linear at all.

Strength of a relationship: Perfect. [Scatter plot not preserved.]

Strength of a relationship: Strong, r about .800. [Scatter plot not preserved.]

Strength of a relationship: Moderate, r about .500. [Scatter plot not preserved.]

Strength of a relationship: Independent, r about 0.000. [Scatter plot not preserved.]

r = .800; the formula for the best fitting line = ???

r = -.800; the formula for the best fitting line = ???

r = 0.000; the formula for the best fitting line is: t_Y = 0.000·t_X = 0.000.

Notice what that formula for independent variables says: t_Y = r·t_X = 0.000·(t_X) = 0.000. When t_Y = 0.000, you are at the mean of Y. So, when variables are independent, the best fitting line says that the best estimate of the Y scores in the sample is back at the mean of Y, regardless of your score on X. Thus, when variables are independent, we go back to saying everyone will score right at the mean.

A note of caution: Watch out for the plot for which the best fitting line is a curve.

Confidence intervals around rho_T: relation to Chapter 6. In Chapter 6 we learned to create confidence intervals around mu_T that allowed us to test a theory. To test our theory about mu, we took a random sample, computed the sample mean and standard deviation, and determined whether the sample mean fell into that interval. If it did not, we had shown that the theory that led us to predict mu_T was false. We then discarded the theory and mu_T, and used the sample mean as our best estimate of the true population mean.

If we discard mu_T, what do we use as our best estimate of mu? Generally, our best estimate of a population parameter is the sample statistic that estimates it. Our best estimate of mu has been, and is, the sample mean, X-bar. Since we have discarded our theory, we go back to using X-bar as our best (least squares, unbiased, consistent) estimate of mu.

More generally, we can test a theory (hypothesis) about any population parameter using a similar confidence interval. We theorize about what the value of the population parameter is. We get an estimate of the variability of the parameter. We construct a confidence interval (usually a 95% confidence interval) in which our hypothesis says that the sample statistic should fall. We then obtain a random sample and determine whether the sample statistic falls inside or outside our confidence interval.

The sample statistic will fall inside or outside of the CI.95. If the sample statistic falls inside the confidence interval, our theory has received some support and we hold on to it. But the more interesting case is when the sample statistic falls outside the confidence interval. Then we must discard the theory and the theory-based estimate of the population parameter. In that case, our best estimate of the population parameter is the sample statistic. Remember, the sample statistic is a least squares, unbiased, consistent estimate of its population parameter.

We are going to do the same thing with a theory about rho. Rho is the correlation coefficient for the population. If we have a theory about rho, we can create a 95% confidence interval into which we expect r to fall. An r computed from a random sample will then fall inside or outside the confidence interval.

When r falls inside or outside of the CI.95 around rho_T: If r falls inside the confidence interval, our theory about rho has received some support and we hold on to it. But the more interesting case is when r falls outside the confidence interval. Then we must discard the theory and the theory-based estimate of the population parameter. In that case, our best estimate of rho is the r we found in our random sample. Thus, when r falls outside the CI.95, we can go back to using it as a least squares, unbiased estimate of rho.
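The slides do not show how the CI.95 around rho_T is actually computed. One common method, assumed here purely for illustration, is the Fisher z transformation; a minimal sketch:

```python
import math

def r_inside_ci95(rho_t, r, n, z_crit=1.960):
    """Check whether a sample r falls inside the CI.95 around a theorized
    rho_T, using the Fisher z transformation (an assumed method; the
    slides do not specify one)."""
    se = 1.0 / math.sqrt(n - 3)   # standard error on the Fisher z scale
    center = math.atanh(rho_t)    # rho_T transformed to the z scale
    z_r = math.atanh(r)           # sample r transformed the same way
    return abs(z_r - center) <= z_crit * se

print(r_inside_ci95(rho_t=-0.50, r=-0.790, n=7))  # True: n = 7 gives a wide CI
```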

Chapter 7 slides end here. The rest of the slides are for other chapters and should not be reviewed here. (RK, 10/24)

Why is it so important to determine whether r fits a theory? In Chapter 8 we go on to predict values of Y from values of X and r. The formula we use is called the regression equation; it is very much like the formula for the best fitting line. The only difference is that the best fitting line describes the relationship among the Y scores in the sample, while in Chapter 8 we move to predicting scores for people who are in the population from which the sample was drawn, but not in the sample.

That’s dangerous. Let me give you an example.

Assume you are the personnel officer for a mid-size company. You need to hire a typist. There are two applicants for the job. You give the applicants a typing test. Which would you hire: someone who types 6 words a minute with 12 mistakes, or someone who types 100 words a minute with 1 mistake?

Who would you hire? Of course, you would predict that the second person will be a better typist and hire that person. Notice that we never gave the person with 6 words/minute a chance to be a typist in our firm. We prejudged her on the basis of the typing test. That is probably valid in this case – a typing test probably predicts fairly well how good a typist someone will be.

But say the situation is a little more complicated! You have several applicants for a leadership position in your firm. But it is not 2002, it is 1957, when we "knew" that only white males were capable of leadership in corporate America. That is, we all "knew" that leadership ability was correlated with both gender and skin color: white and male were associated with high leadership ability, and darker skin color and female gender with lower leadership ability. We now know this is absurd, but lots of people were never…

Confidence intervals around mu_T

Confidence intervals and hypothetical means: We frequently have a theory about what the mean of a distribution should be. To be scientific, that theory about mu must be falsifiable, that is, able to be proved wrong. One way to test a theory about a mean is to state a range where sample means should fall if the theory is correct. We usually state that range as a 95% confidence interval.

To test our theory, we take a random sample from the appropriate population and see if the sample mean falls where the theory says it should, inside the confidence interval. If the sample mean falls outside the 95% confidence interval established by the theory, the evidence suggests that our theoretical population mean, and the theory that led to its prediction, are wrong. When that happens, our theory has been falsified. We must discard it and look for an alternative explanation of our data.

For example, let's say that we had a new antidepressant drug we wanted to peddle. Before we can do that, we must show that the drug is safe. Drugs like ours can cause problems with body temperature: people can get chills or fever. We want to show that body temperature is not affected by our new drug.

Testing a theory: "Everyone" knows that normal body temperature for healthy adults is 98.6°F. Therefore, it would be nice if we could show that after taking our drug, healthy adults still had an average body temperature of 98.6°F. So we might test a sample of 16 healthy adults, first giving them a standard dose of our drug and, when enough time had passed, taking their temperature to see whether it was 98.6°F on the average.

Testing a theory - 2: Of course, even if we are right and our drug has no effect on body temperature, we wouldn't expect a sample mean to be precisely 98.6°F. We would expect some sampling fluctuation around a population mean of 98.6°F. So, if our drug does not cause a change in body temperature, the sample mean should be close to 98.6. It should, in fact, be within the 95% confidence interval around mu_T, SO WE MUST CONSTRUCT A 95% CONFIDENCE INTERVAL AROUND 98.6° AND SEE WHETHER OUR SAMPLE MEAN FALLS INSIDE OR OUTSIDE THE CI.

To create a confidence interval around mu_T, we must estimate sigma from a sample. We randomly select a group of 16 healthy individuals from the population. We administer a standard clinical dose of our new drug for 3 days. We carefully measure body temperature. RESULTS: We find that the average body temperature in our sample is 99.5°F with an estimated standard deviation of 1.40° (s = 1.40). IS 99.5°F IN THE 95% CI AROUND MU_T???

Knowing s and n, we can easily compute the estimated standard error of the mean. Let's say that s = 1.40° and n = 16: s_X-bar = s/√n = 1.40/√16 = 1.40/4.00 = 0.35. Using this estimated standard error, we can construct a 95% confidence interval for the body temperature of a sample of 16 healthy adults.
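The same arithmetic as a short sketch:

```python
import math

s, n = 1.40, 16
se_mean = s / math.sqrt(n)   # estimated standard error: 1.40 / 4.00 = 0.35
print(se_mean)
```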

We learned how to create confidence intervals with the Z distribution in Chapter 4: 95% of sample means will fall in a symmetrical interval around mu that goes from 1.960 standard errors below mu to 1.960 standard errors above mu. A way to write that fact in statistical language is: CI.95: mu ± Z_CRIT · sigma_X-bar, or CI.95: mu - Z_CRIT · sigma_X-bar < X-bar < mu + Z_CRIT · sigma_X-bar. For a 95% CI, Z_CRIT = 1.960.

But when we must estimate sigma with s, we must use the t distribution to define critical intervals around mu or mu_T. Here is how we would write the formulae, substituting t for Z and s for sigma: CI.95: mu_T ± t_CRIT · s_X-bar, or CI.95: mu_T - t_CRIT · s_X-bar < X-bar < mu_T + t_CRIT · s_X-bar. Notice that the critical value of t that includes 95% of the sample means changes with the number of degrees of freedom for s, our estimate of sigma, and must be taken from the t table. If n = 16 in a single sample, df_W = n - k = 15.

[Excerpt from the t table: critical values of t by degrees of freedom. For df = 15, t_CRIT = 2.131 for a 95% confidence interval.]

So, mu_T = 98.6, t_CRIT = 2.131, s = 1.40, n = 16. Here is the confidence interval: CI.95: mu_T ± t_CRIT · s_X-bar = 98.6 ± (2.131)(1.40/√16) = 98.6 ± (2.131)(1.40/4) = 98.6 ± (2.131)(0.35) = 98.6 ± 0.746. CI.95: 97.854 < X-bar < 99.346. Our sample mean (99.5°F) fell outside the CI.95 and falsifies the theory that our drug has no effect on body temperature. Our drug may cause a slight fever.
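A sketch that reproduces this interval end to end, pulling t_CRIT from scipy.stats.t.ppf instead of the t table; the numbers match the slide's hand computation.

```python
import math
from scipy import stats

mu_t, s, n = 98.6, 1.40, 16
se = s / math.sqrt(n)                  # s_X-bar = 0.35
t_crit = stats.t.ppf(0.975, df=n - 1)  # about 2.131 for df = 15

lo, hi = mu_t - t_crit * se, mu_t + t_crit * se
print(round(lo, 3), round(hi, 3))      # about 97.854 and 99.346

sample_mean = 99.5
print(lo < sample_mean < hi)           # False: 99.5 falls outside the CI.95
```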