Correlation 2 Computations, and the best fitting line.


Correlation Topics

Correlational research – what is it and how do you do "co-relational" research? The three questions:
– Is it a linear or curvilinear correlation?
– Is it a positive or negative relationship?
– How strong is the relationship?
Solving these questions with t scores and r, the estimated correlation coefficient derived from the tX and tY scores of individuals in a random sample.

Correlational research – how to start

To begin a correlational study, we select a population or, far more frequently, select a random sample from a population. (Since we use samples most of the time, we will, for the most part, use the formulae and symbols for computing a correlation from a sample.) We then obtain two scores from each individual, one score on each of two variables (usually variables that we think might be related to each other for interesting reasons). We call one variable X and the other Y.

Correlational research: comparing tX and tY scores

We translate the raw scores on the X variable to t scores (called tX scores) and the raw scores on the Y variable to tY scores. So each individual has a pair of scores, a tX score and a tY score. You determine how similar or different the tX and tY scores in the pairs are, on average, by subtracting tY from tX, then squaring, summing, and averaging the tX − tY differences.

The estimated correlation coefficient, Pearson's r

With a simple formula, you transform the average squared difference between the t scores into Pearson's correlation coefficient, r. Pearson's r indicates, with a single number, both the direction and strength of the relationship between the two variables in your sample. r also estimates the correlation in the population from which the sample was drawn; in Chapter 8, you will learn when you can use r that way.

r, strength and direction

Perfect, positive: +1.000
Strong, positive: +.750
Moderate, positive: +.500
Weak, positive: +.250
Independent: .000
Weak, negative: −.250
Moderate, negative: −.500
Strong, negative: −.750
Perfect, negative: −1.000

Calculating Pearson's r

Select a random sample from a population; obtain scores on two variables, which we will call X and Y. Convert all the scores into t scores.
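As a sketch of this conversion step, here is a small function that turns raw scores into t scores using the sample (n − 1) standard deviation. The data are hypothetical, not the example data from these slides:

```python
import math

def t_scores(scores):
    """Convert raw scores to t scores: t = (X - mean) / s,
    where s is estimated with the n - 1 (sample) denominator."""
    n = len(scores)
    mean = sum(scores) / n
    ss = sum((x - mean) ** 2 for x in scores)   # sum of squared deviations
    s = math.sqrt(ss / (n - 1))                 # estimated standard deviation
    return [(x - mean) / s for x in scores]

# Hypothetical raw X scores (not from the slides)
xs = [2, 4, 6, 8, 10]
print([round(t, 3) for t in t_scores(xs)])   # [-1.265, -0.632, 0.0, 0.632, 1.265]
```

Note that t scores computed this way always sum to 0, and their squared values sum to n − 1.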

Calculating Pearson's r

First, subtract the tY score from the tX score in each pair. Then square all of the differences and add them up; that is, compute Σ(tX − tY)².

Calculating Pearson's r

Estimate the average squared distance between tX and tY by dividing the sum of squared differences between the t scores by (nP − 1):

Σ(tX − tY)² / (nP − 1)

To turn this estimate into Pearson's r, use the formula:

r = 1 − (1/2)(Σ(tX − tY)² / (nP − 1))
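This t-score-difference formula can be checked numerically against the conventional deviation-score formula for Pearson's r; the two always agree. The paired data below are hypothetical:

```python
import math

def pearson_r_from_t(xs, ys):
    """r = 1 - (1/2) * sum((tX - tY)^2) / (n - 1), as in the slides."""
    n = len(xs)
    def t(scores):
        m = sum(scores) / n
        s = math.sqrt(sum((v - m) ** 2 for v in scores) / (n - 1))
        return [(v - m) / s for v in scores]
    tx, ty = t(xs), t(ys)
    return 1 - 0.5 * sum((a - b) ** 2 for a, b in zip(tx, ty)) / (n - 1)

def pearson_r_standard(xs, ys):
    """Conventional formula: sum of deviation cross-products over (n-1)*sx*sy."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs) / (n - 1))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys) / (n - 1))
    return sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / ((n - 1) * sx * sy)

# Hypothetical paired scores
xs = [2, 4, 6, 8, 10]
ys = [10, 11, 11, 12, 13]
print(round(pearson_r_from_t(xs, ys), 3), round(pearson_r_standard(xs, ys), 3))
```

The agreement is algebraic, not a coincidence: Σ(tX − tY)² expands to 2(n − 1)(1 − r), so 1 − ½Σ(tX − tY)²/(n − 1) = r.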

Example: calculate t scores for X

DATA: [table of X, X − X-bar, (X − X-bar)², and tX = (X − X-bar)/sX values; the individual scores do not survive in this transcript]

ΣX = 30, N = 5, X-bar = 6.00
SSW = 40.00
MSW = 40.00/(5 − 1) = 10
sX = 3.16

Calculate t scores for Y

DATA: [table of Y, Y − Y-bar, (Y − Y-bar)², and tY = (Y − Y-bar)/sY values; the individual scores do not survive in this transcript]

ΣY = 55, N = 5, Y-bar = 11.00
SSW = 10.00
MSW = 10.00/(5 − 1) = 2.50
sY = 1.58

Calculate r

[Table of tY, tX, tX − tY, and (tX − tY)² values omitted in this transcript.]

Σ(tX − tY)² = 0.80
Σ(tX − tY)² / (nP − 1) = 0.200
r = 1 − (1/2 × Σ(tX − tY)² / (nP − 1))
r = 1 − (1/2 × .200) = 1 − .100 = .900

This is a very strong, positive relationship.

Computing r from a more realistic set of data

A study was performed to investigate whether the quality of an image affects reading time. The experimental hypothesis was that reduced quality would slow down reading time. Quality was measured on a scale of 1 to 10. Reading time was in seconds.

Quality vs. reading time data: compute the correlation

[Paired data table omitted: Quality (scale 1–10) and Reading time (seconds).]

Is there a relationship? Check for linearity. Compute r.

Calculate t scores for X

[Table of X, X − X-bar, and tX = (X − X-bar)/sX values omitted.]

ΣX = 39.25, n = 7, X-bar = 5.61
SSW = 4.73
MSW = 4.73/(7 − 1) = 0.79
sX = 0.89

Calculate t scores for Y

[Table of Y, Y − Y-bar, and tY = (Y − Y-bar)/sY values omitted.]

ΣY = 52.5, n = 7, Y-bar = 7.50
SSW = 3.78
MSW = 3.78/(7 − 1) = 0.63
sY = 0.79

Plot t scores

[Scatter plot of the tY scores against the tX scores omitted.]

t score plot with best fitting line: linear? Yes! [plot omitted]

Calculate r

[Table of tY, tX, tY − tX, and (tY − tX)² values omitted in this transcript.]

Σ(tX − tY)² = 21.48
Σ(tX − tY)² / (nP − 1) = 3.580
r = 1 − (1/2 × 3.580) = 1 − 1.790 = −.790

Best fitting line

The definition of the best fitting line plotted on t axes

A "best fitting line" minimizes the average squared vertical distance of the Y scores in the sample (expressed as tY scores) from the line. The best fitting line is a least squares, unbiased estimate of the values of Y in the sample. As you probably remember from high school or pre-calculus, the generic formula for a line is Y = mX + b, where m is the slope and b is the Y intercept. Thus, any specific line, such as the best fitting line, can be defined by its slope and its intercept.
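The least-squares property can be checked numerically: for t-score data, the line through the origin whose slope equals r has a smaller sum of squared vertical errors than any nearby candidate slope. The data below are hypothetical:

```python
import math

def t_scores(scores):
    """Standardize with the n - 1 (sample) denominator."""
    n = len(scores)
    m = sum(scores) / n
    s = math.sqrt(sum((v - m) ** 2 for v in scores) / (n - 1))
    return [(v - m) / s for v in scores]

# Hypothetical paired scores, converted to t scores
tx = t_scores([2, 4, 6, 8, 10])
ty = t_scores([10, 11, 11, 12, 13])
n = len(tx)
r = sum(a * b for a, b in zip(tx, ty)) / (n - 1)   # Pearson's r from t scores

def sse(slope):
    """Sum of squared vertical distances of the tY scores from the line tY = slope * tX."""
    return sum((y - slope * x) ** 2 for x, y in zip(tx, ty))

# The line with slope = r beats nearby candidate slopes
for candidate in (r - 0.2, r, r + 0.2):
    print(round(candidate, 3), round(sse(candidate), 4))
```

The middle candidate (slope = r) always produces the smallest sum of squared errors, which is the sense in which the line of the following slides "best fits" the points.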

The intercept of the best fitting line plotted on t axes

The Y intercept of a line is the value of Y at the point where the line crosses the Y axis. A straight line just keeps on going, so any straight line can cross the Y axis only once. The origin is the point where both tX and tY = 0.000, so the origin represents the mean of both the X and the Y variable. When plotted on t axes, all best fitting lines go through the origin. Thus, the tY intercept of the best fitting line = 0.000.

The slope of the best fitting line

The slope of a line is how fast it rises as it moves one unit along the X axis. The formula is slope = rise/run. If a line rises from your left to your right, it has a positive rise; if it falls, it has a negative rise. Run is always positive. So, the slope of a rising line is a positive rise divided by a positive run, and thus is positive. The slope of a falling line is a negative rise divided by a positive run, and thus is negative. When plotted on t axes, the slope of the best fitting line equals r, the correlation coefficient.

The formula for the best fitting line

To define a line, we need its slope and Y intercept. Here the slope = r and the tY intercept = 0.00, so the formula for the best fitting line is tY = rtX. We call this formula "Artie Ex"; you use him for samples. His sister is "Rosie Ex" (as in ZY = rhoZX); you use her for populations.
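A minimal sketch of using tY = rtX for prediction (the r values and t scores below are hypothetical):

```python
def predict_ty(r, t_x):
    """Best-fitting-line prediction on t axes: tY = r * tX."""
    return r * t_x

# With r = .800, someone one standard deviation above the mean on X
# (tX = 1.0) is predicted to score 0.8 standard deviations above the mean on Y.
print(predict_ty(0.800, 1.0))   # 0.8

# When the variables are independent (r = 0), the prediction is tY = 0,
# i.e. the mean of Y, no matter what the X score is.
print(predict_ty(0.000, 2.5))   # 0.0
```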

How the best fitting line would appear (slope = r, Y intercept = 0.000) when accompanied by the dots representing the actual tX and tY scores (whether the correlation is positive or negative doesn't matter):
– Perfect: scores fall exactly on a straight line.
– Strong: most scores fall near the line.
– Moderate: some are near the line, some not.
– Weak: the scores are only mildly linear.
– Independent: the scores are not linear at all.

Strength of a relationship: perfect [scatter plot omitted]

Strength of a relationship: strong, r about .800 [scatter plot omitted]

Strength of a relationship: moderate, r about .500 [scatter plot omitted]

Strength of a relationship: independent, r about .000 [scatter plot omitted]

r = .800; the formula for the best fitting line is tY = .800tX.

r = −.800; the formula for the best fitting line is tY = −.800tX.

r=0.000, the formula for the best fitting line is:

Notice what that formula for independent variables says:

tY = rtX = 0.000(tX) = 0.000

When tY = 0.000, you are at the mean of Y. So, when variables are independent, the best fitting line says that the best estimate of Y scores in the sample is the mean of Y, regardless of the score on X. Thus, when variables are independent, we go back to saying that everyone will score right at the mean.

A note of caution: watch out for the plot for which the best fitting line is a curve.

Confidence intervals around rhoT – relation to Chapter 6

In Chapter 6 we learned to create confidence intervals around muT that allowed us to test a theory. To test our theory about mu, we took a random sample, computed the sample mean and standard deviation, and determined whether the sample mean fell into that interval. If it did not, we had shown that the theory that led us to predict muT was false. We then discarded the theory and muT, and used the sample mean as our best estimate of the true population mean.

If we discard muT, what do we use as our best estimate of mu?

Generally, our best estimate of a population parameter is the sample statistic that estimates it. Our best estimate of mu has been, and is, the sample mean, X-bar. Since we have discarded our theory, we go back to using X-bar as our best (least squares, unbiased, consistent) estimate of mu.

More generally, we can test a theory (hypothesis) about any population parameter using a similar confidence interval. We theorize about what the value of the population parameter is. We get an estimate of the variability of the parameter. We construct a confidence interval (usually a 95% confidence interval) in which our hypothesis says the sample statistic should fall. We obtain a random sample and determine whether the sample statistic falls inside or outside our confidence interval.

The sample statistic will fall inside or outside of the CI.95

If the sample statistic falls inside the confidence interval, our theory has received some support and we hold on to it. But the more interesting case is when the sample statistic falls outside the confidence interval. Then we must discard the theory and the theory-based estimate of the population parameter. In that case, our best estimate of the population parameter is the sample statistic. Remember, the sample statistic is a least squares, unbiased, consistent estimate of its population parameter.

We are going to do the same thing with a theory about rho

Rho is the correlation coefficient for the population. If we have a theory about rho, we can create a 95% confidence interval into which we expect r to fall. An r computed from a random sample will then fall inside or outside the confidence interval.
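A sketch of the decision rule described here, with hypothetical interval bounds (how the interval around rho is actually constructed is not covered in these slides):

```python
def r_falls_inside(r_sample, ci_lower, ci_upper):
    """Return True if the sample r falls inside the theory-based CI.95 around rhoT."""
    return ci_lower <= r_sample <= ci_upper

# Hypothetical: a theory about rhoT yields a CI.95 of (.300, .700)
ci = (0.300, 0.700)

# r inside the interval: the theory receives some support and is retained
print(r_falls_inside(0.550, *ci))   # True

# r outside the interval: discard the theory; the sample r becomes
# the best (least squares, unbiased) estimate of rho
print(r_falls_inside(0.850, *ci))   # False
```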

When r falls inside or outside of the CI.95 around rhoT

If r falls inside the confidence interval, our theory about rho has received some support and we hold on to it. But the more interesting case is when r falls outside the confidence interval. Then we must discard the theory and the theory-based estimate of the population parameter. In that case, our best estimate of rho is the r we found in our random sample. Thus, when r falls outside the CI.95, we can go back to using it as a least squares, unbiased estimate of rho.