Multivariate Data.

Multivariate Data

Descriptive techniques for Multivariate data In most research situations data is collected on more than one variable (usually many variables)

Graphical Techniques: the scatter plot; the two-dimensional histogram.

The Scatter Plot. For two variables X and Y we have a measurement of each variable for each case: xi = the value of X for case i, and yi = the value of Y for case i.

To construct a scatter plot we plot the point (xi, yi) for each case on the X-Y plane.

Data Set #3. The following table gives data on Verbal IQ, Math IQ, Initial Reading Achievement Score, and Final Reading Achievement Score for 23 students who have recently completed a reading improvement program.

Student  Verbal IQ  Math IQ  Initial RA  Final RA
   1        86        94        1.1        1.7
   2       104       103        1.5        1.7
   3        86        92        1.5        1.9
   4       105       100        2.0        2.0
   5       118       115        1.9        3.5
   6        96       102        1.4        2.4
   7        90        87        1.5        1.8
   8        95       100        1.4        2.0
   9       105        96        1.7        1.7
  10        84        80        1.6        1.7
  11        94        87        1.6        1.7
  12       119       116        1.7        3.1
  13        82        91        1.2        1.8
  14        80        93        1.0        1.7
  15       109       124        1.8        2.5
  16       111       119        1.4        3.0
  17        89        94        1.6        1.8
  18        99       117        1.6        2.6
  19        94        93        1.4        1.4
  20        99       110        1.4        2.0
  21        95        97        1.5        1.3
  22       102       104        1.7        3.1
  23       102        93        1.6        1.9

(RA = Reading Achievement)

[Scatter plot of the data; the point (84, 80), student 10, is marked.]

Some Scatter Patterns

Circular No relationship between X and Y Unable to predict Y from X

Ellipsoidal Positive relationship between X and Y Increases in X correspond to increases in Y (but not always) Major axis of the ellipse has positive slope

Example: Verbal IQ, Math IQ

Some More Patterns

Ellipsoidal (thinner ellipse) Stronger positive relationship between X and Y Increases in X correspond to increases in Y (more frequently) Major axis of the ellipse has positive slope Minor axis of the ellipse much smaller

Increased strength in the positive relationship between X and Y Increases in X correspond to increases in Y (almost always) Minor axis of the ellipse extremely small in relationship to the Major axis of the ellipse.

Perfect positive relationship between X and Y Y perfectly predictable from X Data falls exactly along a straight line with positive slope

Ellipsoidal Negative relationship between X and Y Increases in X correspond to decreases in Y (but not always) Major axis of the ellipse has negative slope

The strength of the relationship can increase until changes in Y can be perfectly predicted from X

Some Non-Linear Patterns

In a linear pattern Y increases with respect to X at a constant rate. In a non-linear pattern the rate at which Y increases with respect to X is variable.

Growth Patterns

Growth patterns frequently follow a sigmoid curve: growth at the start is slow, it then speeds up, and slows down again as it reaches its limiting size.

Review the scatter plot

Some Scatter Patterns

Non-Linear Patterns

Measures of strength of a relationship (Correlation): Pearson's correlation coefficient (r); Spearman's rank correlation coefficient (rho, ρ).

Assume that we have collected data on two variables X and Y. Let (x1, y1), (x2, y2), (x3, y3), …, (xn, yn) denote the pairs of measurements on the two variables X and Y for n cases in a sample (or population).

From this data we can compute summary statistics for each variable: the means

x̄ = Σxi / n   and   ȳ = Σyi / n

The standard deviations

sx = √[ Σ(xi − x̄)² / (n − 1) ]   and   sy = √[ Σ(yi − ȳ)² / (n − 1) ]

These statistics: give information for each variable separately but give no information about the relationship between the two variables

Consider the statistics:

Sxx = Σ(xi − x̄)²,   Syy = Σ(yi − ȳ)²,   Sxy = Σ(xi − x̄)(yi − ȳ)

The first two statistics, Sxx and Syy, are used to measure variability in each variable; they are used to compute the sample standard deviations sx and sy.

The third statistic, Sxy, is used to measure correlation. If two variables are positively related the sign of (xi − x̄) will agree with the sign of (yi − ȳ).

When (xi − x̄) is positive, (yi − ȳ) will be positive: when xi is above its mean, yi will be above its mean. When (xi − x̄) is negative, (yi − ȳ) will be negative: when xi is below its mean, yi will be below its mean. The product (xi − x̄)(yi − ȳ) will be positive for most cases.

This implies that the statistic Sxy = Σ(xi − x̄)(yi − ȳ) will be positive: most of the terms in this sum will be positive.

On the other hand, if two variables are negatively related the sign of (xi − x̄) will be opposite to the sign of (yi − ȳ).

When (xi − x̄) is positive, (yi − ȳ) will be negative: when xi is above its mean, yi will be below its mean. When (xi − x̄) is negative, (yi − ȳ) will be positive: when xi is below its mean, yi will be above its mean. The product (xi − x̄)(yi − ȳ) will be negative for most cases.

Again this implies that the statistic Sxy will be negative: most of the terms in this sum will be negative.

Pearson’s correlation coefficient r A statistic measuring the strength of the relationship between two variables - X and Y

Pearson's correlation coefficient is defined as follows:

r = Sxy / √(Sxx · Syy)

The denominator, √(Sxx · Syy), is always positive.

The numerator, Sxy, is positive if there is a positive relationship between X and Y, and negative if there is a negative relationship between X and Y. This property carries over to Pearson's correlation coefficient r.

Properties of Pearson’s correlation coefficient r The value of r is always between –1 and +1. If the relationship between X and Y is positive, then r will be positive. If the relationship between X and Y is negative, then r will be negative. If there is no relationship between X and Y, then r will be zero. The value of r will be +1 if the points, (xi, yi) lie on a straight line with positive slope. The value of r will be -1 if the points, (xi, yi) lie on a straight line with negative slope.
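The properties above can be checked with a minimal Python sketch (added for illustration, not from the slides): r is computed directly from the defining sums Sxx, Syy and Sxy, and points lying exactly on a straight line give r = +1 or −1.

```python
from math import sqrt

def pearson_r(x, y):
    """Pearson's correlation coefficient r = Sxy / sqrt(Sxx * Syy)."""
    n = len(x)
    xbar = sum(x) / n
    ybar = sum(y) / n
    sxx = sum((xi - xbar) ** 2 for xi in x)
    syy = sum((yi - ybar) ** 2 for yi in y)
    sxy = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y))
    return sxy / sqrt(sxx * syy)

# Points exactly on the line Y = 2X + 1: r should be +1 (up to rounding)
x = [1.0, 2.0, 3.0, 4.0, 5.0]
y_pos = [2 * xi + 1 for xi in x]
print(pearson_r(x, y_pos))

# Points exactly on a line with negative slope: r should be -1
y_neg = [-2 * xi + 1 for xi in x]
print(pearson_r(x, y_neg))
```

The same function is reused below for the worked examples; any helper name such as `pearson_r` is ours, not part of the original material.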

r =1

r = 0.95

r = 0.7

r = 0.4

r = 0

r = -0.4

r = -0.7

r = -0.8

r = -0.95

r = -1

Computing formulae for the statistics Sxx, Syy and Sxy:

To compute Sxx, Syy and Sxy, first compute Σxi, Σyi, Σxi², Σyi² and Σxiyi. Then

Sxx = Σxi² − (Σxi)²/n,   Syy = Σyi² − (Σyi)²/n,   Sxy = Σxiyi − (Σxi)(Σyi)/n

Example: Verbal IQ, Math IQ

Data Set #3 (the Verbal IQ, Math IQ and Reading Achievement table for 23 students given earlier).

Now compute Σxi, Σyi, Σxi², Σyi² and Σxiyi for Verbal IQ (x) and Math IQ (y); hence Sxx, Syy and Sxy follow from the computing formulae.

Thus Pearson's correlation coefficient is:

r = Sxy / √(Sxx · Syy) = 0.769

Thus r = 0.769. Verbal IQ and Math IQ are positively correlated: if Verbal IQ is above (below) the mean, then for most cases Math IQ will also be above (below) the mean.
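The value r = 0.769 can be verified with a short Python sketch (an addition for checking, not part of the slides), using the computing formulae and the 23 (Verbal IQ, Math IQ) pairs from Data Set #3:

```python
from math import sqrt

verbal = [86, 104, 86, 105, 118, 96, 90, 95, 105, 84, 94, 119,
          82, 80, 109, 111, 89, 99, 94, 99, 95, 102, 102]
math_iq = [94, 103, 92, 100, 115, 102, 87, 100, 96, 80, 87, 116,
           91, 93, 124, 119, 94, 117, 93, 110, 97, 104, 93]
n = len(verbal)

# Computing formulae: Sxx = sum(x^2) - (sum x)^2 / n, etc.
sx, sy = sum(verbal), sum(math_iq)
sxx = sum(v * v for v in verbal) - sx * sx / n
syy = sum(m * m for m in math_iq) - sy * sy / n
sxy = sum(v * m for v, m in zip(verbal, math_iq)) - sx * sy / n

r = sxy / sqrt(sxx * syy)
print(round(r, 3))  # the slides report r = 0.769
```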

Is the improvement in reading achievement (RA) related to either Verbal IQ or Math IQ? improvement in RA = Final RA – Initial RA

The Data Correlation between Math IQ and RA Improvement Correlation between Verbal IQ and RA Improvement

Scatterplot: Math IQ vs RA Improvement

Scatterplot: Verbal IQ vs RA Improvement
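The two improvement correlations can be computed the same way (a Python sketch added for illustration; the slides show the scatterplots but the numeric values were lost in transcription, so none are asserted here):

```python
from math import sqrt

def pearson_r(x, y):
    """Pearson's r via the computing formulae."""
    n = len(x)
    sx, sy = sum(x), sum(y)
    sxx = sum(v * v for v in x) - sx * sx / n
    syy = sum(v * v for v in y) - sy * sy / n
    sxy = sum(a * b for a, b in zip(x, y)) - sx * sy / n
    return sxy / sqrt(sxx * syy)

verbal = [86, 104, 86, 105, 118, 96, 90, 95, 105, 84, 94, 119,
          82, 80, 109, 111, 89, 99, 94, 99, 95, 102, 102]
math_iq = [94, 103, 92, 100, 115, 102, 87, 100, 96, 80, 87, 116,
           91, 93, 124, 119, 94, 117, 93, 110, 97, 104, 93]
initial = [1.1, 1.5, 1.5, 2.0, 1.9, 1.4, 1.5, 1.4, 1.7, 1.6, 1.6, 1.7,
           1.2, 1.0, 1.8, 1.4, 1.6, 1.6, 1.4, 1.4, 1.5, 1.7, 1.6]
final = [1.7, 1.7, 1.9, 2.0, 3.5, 2.4, 1.8, 2.0, 1.7, 1.7, 1.7, 3.1,
         1.8, 1.7, 2.5, 3.0, 1.8, 2.6, 1.4, 2.0, 1.3, 3.1, 1.9]

# Improvement in RA = Final RA - Initial RA
improvement = [f - i for f, i in zip(final, initial)]
r_verbal = pearson_r(verbal, improvement)
r_math = pearson_r(math_iq, improvement)
print(round(r_verbal, 3), round(r_math, 3))
```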

Spearman's rank correlation coefficient ρ (rho)

Spearman's rank correlation coefficient ρ (rho) is computed as follows: Arrange the observations on X in increasing order and assign them the ranks 1, 2, 3, …, n. Arrange the observations on Y in increasing order and assign them the ranks 1, 2, 3, …, n. For any case i let (xi, yi) denote the observations on X and Y, and let (ri, si) denote the ranks on X and Y.

If the variables X and Y are strongly positively correlated, the ranks on X should generally agree with the ranks on Y (the largest X should be the largest Y, the smallest X should be the smallest Y). If the variables X and Y are strongly negatively correlated, the ranks on X should be in the reverse order to the ranks on Y (the largest X should be the smallest Y, the smallest X should be the largest Y). If the variables X and Y are uncorrelated, the ranks on X should be randomly distributed relative to the ranks on Y.

Spearman's rank correlation coefficient is defined as follows: for each case let di = ri − si = the difference in the two ranks. Then Spearman's rank correlation coefficient ρ is

ρ = 1 − 6 Σdi² / [n(n² − 1)]

Properties of Spearman's rank correlation coefficient ρ: The value of ρ is always between –1 and +1. If the relationship between X and Y is positive, then ρ will be positive. If the relationship between X and Y is negative, then ρ will be negative. If there is no relationship between X and Y, then ρ will be zero. The value of ρ will be +1 if the ranks of X completely agree with the ranks of Y. The value of ρ will be −1 if the ranks of X are in reverse order to the ranks of Y.

Example

xi  25.0  33.9  16.7  37.4  24.6  17.3  40.2
yi  24.3  38.7  13.4  32.1  28.0  12.5  44.9

Ranking the X's and the Y's we get:

ri   4     5     1     6     3     2     7
si   3     6     2     5     4     1     7

Computing the differences in ranks gives us:

di   1    -1    -1     1    -1     1     0

so Σdi² = 6 and ρ = 1 − 6(6) / [7(7² − 1)] = 1 − 36/336 = 0.893.

Computing Pearson's correlation coefficient, r, for the same problem:

To compute r first compute: Σxi = 195.1, Σyi = 193.9, Σxi² = 5972.35, Σyi² = 6254.41, Σxiyi = 6053.78.

Then Sxx = 5972.35 − (195.1)²/7 = 534.63, Syy = 6254.41 − (193.9)²/7 = 883.38, and Sxy = 6053.78 − (195.1)(193.9)/7 = 649.51.

and r = 649.51 / √(534.63 × 883.38) = 0.945. Compare with Spearman's ρ = 0.893, computable from the same data.
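Both coefficients for the seven (x, y) pairs above can be checked with a Python sketch (added for verification; the helper `ranks` is ours):

```python
from math import sqrt

x = [25.0, 33.9, 16.7, 37.4, 24.6, 17.3, 40.2]
y = [24.3, 38.7, 13.4, 32.1, 28.0, 12.5, 44.9]
n = len(x)

def ranks(v):
    """Assign ranks 1..n in increasing order (no ties in this example)."""
    order = sorted(range(len(v)), key=lambda i: v[i])
    r = [0] * len(v)
    for rank, i in enumerate(order, start=1):
        r[i] = rank
    return r

# Spearman's rho from the rank differences
r_x, s_y = ranks(x), ranks(y)
d2 = sum((r - s) ** 2 for r, s in zip(r_x, s_y))
rho = 1 - 6 * d2 / (n * (n * n - 1))
print(round(rho, 3))  # 0.893

# Pearson's r from the raw values, via the computing formulae
sx, sy = sum(x), sum(y)
sxx = sum(v * v for v in x) - sx * sx / n
syy = sum(v * v for v in y) - sy * sy / n
sxy = sum(a * b for a, b in zip(x, y)) - sx * sy / n
r = sxy / sqrt(sxx * syy)
print(round(r, 3))
```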

Comments: Spearman's rank correlation coefficient ρ and Pearson's correlation coefficient r. The value of ρ can also be computed by applying Pearson's formula to the ranks: Spearman's ρ is Pearson's r computed from the ranks (ri, si).

Spearman's ρ is less sensitive to extreme observations (outliers). The value of Pearson's r is much more sensitive to extreme outliers. This is similar to the comparison between the median and the mean, or the standard deviation and the pseudo-standard deviation: the mean and standard deviation are more sensitive to outliers than the median and pseudo-standard deviation.

Scatter plots

Some Scatter Patterns

Non-Linear Patterns

Measuring correlation: Pearson's correlation coefficient r; Spearman's rank correlation coefficient ρ.

Simple Linear Regression Fitting straight lines to data

The Least Squares Line (the Regression Line). When data is correlated it falls roughly about a straight line.

In this situation one wants to find the equation of the straight line through the data that yields the best fit. The equation of any straight line is of the form Y = a + bX, where b = the slope of the line and a = the intercept of the line.

b = Rise / Run = (y2 − y1) / (x2 − x1)

a is the value of Y when X is zero; b is the rate at which Y increases per unit increase in X. For a straight line this rate is constant. For non-linear curves the rate at which Y increases per unit increase in X varies with X.

Linear

Non-linear

Example: In the following example both blood pressure and age were measured for each female subject. Subjects were grouped into age classes and the median blood pressure measurement was computed for each age class. The data are summarized below:

Age Class         30-40  40-50  50-60  60-70  70-80
Midpoint Age (X)    35     45     55     65     75
Median BP (Y)      114    124    143    158    166

Graph:

Interpretation of the slope and intercept Intercept – value of Y at X = 0. Predicted Blood pressure of a newborn (65.1). This interpretation remains valid only if linearity is true down to X = 0. Slope – rate of increase in Y per unit increase in X. Blood Pressure increases 1.38 units each year.
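The fitted values 65.1 and 1.38 can be reproduced with a Python sketch (added for illustration), using the least squares formulae b = Sxy/Sxx and a = ȳ − b·x̄:

```python
age = [35, 45, 55, 65, 75]       # midpoint ages (X)
bp = [114, 124, 143, 158, 166]   # median blood pressures (Y)
n = len(age)

# Computing formulae for Sxx and Sxy
sx, sy = sum(age), sum(bp)
sxx = sum(x * x for x in age) - sx * sx / n
sxy = sum(x * y for x, y in zip(age, bp)) - sx * sy / n

b = sxy / sxx             # slope
a = sy / n - b * sx / n   # intercept
print(round(b, 2), round(a, 1))  # 1.38 65.1
```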

The Least Squares Line: fitting the best straight line to "linear" data.

Reasons for fitting a straight line to data: It provides a precise description of the relationship between Y and X. The interpretation of the parameters of the line (slope and intercept) leads to an improved understanding of the phenomenon under study. The equation of the line is useful for prediction of the dependent variable (Y) from the independent variable (X).

Assume that we have collected data on two variables X and Y. Let (x1, y1), (x2, y2), (x3, y3), …, (xn, yn) denote the pairs of measurements on the two variables X and Y for n cases in a sample (or population).

Let Y = a + bX denote an arbitrary equation of a straight line, where a and b are known values. This equation can be used to predict, for each value of X, the value of Y. For example, if X = xi (as for the ith case) then the predicted value of Y is: ŷi = a + b xi.

For example, if Y = a + bX = 25.2 + 2.0X is the equation of the straight line, and if X = xi = 20 (for the ith case), then the predicted value of Y is: ŷi = 25.2 + 2.0(20) = 65.2.

If the actual value of Y is yi = 70.0 for case i, then the difference ri = yi − ŷi = 70.0 − 65.2 = 4.8 is the error in the prediction for case i. ri is also called the residual for case i.

The residual ri = yi − (a + b xi) can be computed for each case in the sample. The residual sum of squares, RSS = Σ ri² = Σ [yi − (a + b xi)]², is a measure of the "goodness of fit" of the line Y = a + bX to the data.

[Figure: data points (x1, y1), …, (x4, y4) with their residuals r1, …, r4 measured vertically from the line Y = a + bX.]

The optimal choice of a and b will result in the residual sum of squares attaining a minimum. If this is the case than the line: Y = a + bX is called the Least Squares Line

The equation for the least squares line. Let

Sxx = Σ(xi − x̄)²,   Syy = Σ(yi − ȳ)²,   Sxy = Σ(xi − x̄)(yi − ȳ)

Computing formulae:

Sxx = Σxi² − (Σxi)²/n,   Syy = Σyi² − (Σyi)²/n,   Sxy = Σxiyi − (Σxi)(Σyi)/n

Then the slope of the least squares line can be shown to be: b = Sxy / Sxx.

and the intercept of the least squares line can be shown to be: a = ȳ − b x̄.

The following data show the per capita consumption of cigarettes per month (X) in various countries in 1930, and the death rates from lung cancer for men in 1950.

TABLE: Per capita consumption of cigarettes per month (Xi) in n = 11 countries in 1930, and the death rates, Yi (per 100,000), from lung cancer for men in 1950.

Country (i)      Xi    Yi
Australia         48    18
Canada            50    15
Denmark           38    17
Finland          110    35
Great Britain    110    46
Holland           49    24
Iceland           23     6
Norway            25     9
Sweden            30    11
Switzerland       51    25
USA              130    20

Fitting the Least Squares Line

First compute the following quantities: Σxi = 664, Σyi = 226, Σxi² = 54404, Σyi² = 6018, Σxiyi = 16914, so that Sxx = 54404 − (664)²/11 = 14322.55, Syy = 6018 − (226)²/11 = 1374.73, and Sxy = 16914 − (664)(226)/11 = 3271.82.

Computing the estimates of slope and intercept: b = Sxy / Sxx = 3271.82 / 14322.55 = 0.228, and a = ȳ − b x̄ = 20.545 − (0.2284)(60.364) = 6.756.

Y = 6.756 + (0.228)X
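The fit Y = 6.756 + 0.228X can be reproduced with a Python sketch (added for verification, not part of the slides):

```python
cigs = [48, 50, 38, 110, 110, 49, 23, 25, 30, 51, 130]   # X: cigarettes, 1930
deaths = [18, 15, 17, 35, 46, 24, 6, 9, 11, 25, 20]      # Y: death rate, 1950
n = len(cigs)

# Computing formulae for Sxx and Sxy
sx, sy = sum(cigs), sum(deaths)
sxx = sum(x * x for x in cigs) - sx * sx / n
sxy = sum(x * y for x, y in zip(cigs, deaths)) - sx * sy / n

b = sxy / sxx             # slope
a = sy / n - b * sx / n   # intercept
print(round(b, 3), round(a, 3))  # 0.228 6.756
```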

Interpretation of the slope and intercept. Intercept: the value of Y at X = 0, i.e. the predicted death rate from lung cancer (6.756) for men in 1950 in countries with no smoking in 1930 (X = 0). Slope: the rate of increase in Y per unit increase in X, i.e. the death rate from lung cancer for men in 1950 increases 0.228 units for each increase of 1 cigarette in per capita consumption in 1930.

Correlation & Linear Regression A review

The Scattergram Pearson’s Correlation Coefficient Spearman’s Rank Correlation Coefficient

The Least squares Line Slope and intercept

Comment: Regression to the mean. The Least Squares line is not the major axis of the ellipse that covers the data; it is less steep than the major axis.

If the slope of the major axis is positive, the slope of the Least Squares line will also be positive, but not as large. This fact is sometimes referred to as regression towards the mean. Suppose X is the father's height and Y is the son's height. If the father is tall within his population, then the son will also likely be tall, but not as tall as the father was within his population.

Relationship between correlation and linear regression. Pearson's correlation, r = Sxy / √(Sxx · Syy), takes values between –1 and +1.

The least squares line Y = a + bX minimises the residual sum of squares, Σ(yi − a − b xi)²: the sum of squares that measures the variability in Y that is unexplained by X. This can also be denoted by SS(unexplained).

Some other sums of squares: the sum of squares that measures the total variability in Y (ignoring X) is SS(total) = Σ(yi − ȳ)² = Syy.

The sum of squares that measures the variability in Y that is explained by X is SS(explained) = Σ(ŷi − ȳ)², where ŷi = a + b xi.

It can be shown that SS(total) = SS(explained) + SS(unexplained): (total variability in Y) = (variability in Y explained by X) + (variability in Y unexplained by X).

It can also be shown that r² = SS(explained) / SS(total) = the proportion of the variability in Y explained by X. r² is called the coefficient of determination.

Further, 1 − r² = SS(unexplained) / SS(total) = the proportion of the variability in Y that is unexplained by X.

Example: the cigarette consumption and lung cancer death rate data (the table of n = 11 countries given earlier).

Fitting the least squares line: first compute Σxi = 664, Σyi = 226, Σxi² = 54404, Σyi² = 6018, Σxiyi = 16914.

Computing the estimates of slope and intercept: Sxx = 14322.55, Syy = 1374.73, Sxy = 3271.82, so b = Sxy / Sxx = 0.228 and a = ȳ − b x̄ = 6.756.

Computing r and r²: r = Sxy / √(Sxx · Syy) = 3271.82 / √(14322.55 × 1374.73) = 0.737, so r² = 0.544. That is, 54.4% of the variability in Y (death rate due to lung cancer in 1950) is explained by X (per capita cigarette smoking in 1930).
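The sum-of-squares decomposition and the 54.4% figure can be verified numerically (a Python sketch added for illustration) on the cigarette data:

```python
cigs = [48, 50, 38, 110, 110, 49, 23, 25, 30, 51, 130]
deaths = [18, 15, 17, 35, 46, 24, 6, 9, 11, 25, 20]
n = len(cigs)
xbar = sum(cigs) / n
ybar = sum(deaths) / n

# Least squares fit
sxx = sum((x - xbar) ** 2 for x in cigs)
sxy = sum((x - xbar) * (y - ybar) for x, y in zip(cigs, deaths))
b = sxy / sxx
a = ybar - b * xbar

# Sum-of-squares decomposition: SS(total) = SS(explained) + SS(unexplained)
yhat = [a + b * x for x in cigs]
ss_total = sum((y - ybar) ** 2 for y in deaths)
ss_explained = sum((yh - ybar) ** 2 for yh in yhat)
ss_unexplained = sum((y - yh) ** 2 for y, yh in zip(deaths, yhat))
print(round(ss_total, 2), round(ss_explained + ss_unexplained, 2))

# Coefficient of determination
r2 = ss_explained / ss_total
print(round(r2, 3))  # 0.544
```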

Y = 6.756 + (0.228)X

Comments: Correlation will be +1 or −1 if the data lie on a straight line. Correlation can be zero or close to zero if the data are either not related, or, in some situations, non-linearly related.

Example: [scatter plot of data with a strong non-linear pattern but a correlation near zero].

One should be careful in interpreting zero correlation. It does not necessarily imply that Y is not related to X. It could happen that Y is non-linearly related to X. One should plot Y vs X before concluding that Y is not related to X.

Next topic: Categorical Data