4/9/2005 11:38 AM Department of Epidemiology and Health Statistics, Tongji Medical College (Dr. Chuanhua Yu) http://statdtedm.6to23.


Principles of Biostatistics: Simple Linear Regression (slides based on Dr. Chuanhua Yu and Wikipedia)

Terminology

Moments, skewness, kurtosis
Analysis of variance (ANOVA)
Response (dependent) variable; explanatory (independent) variable
Linear regression model
Method of least squares; normal equations
Sum of squares, error (SSE); sum of squares, regression (SSR); sum of squares, total (SST)
Coefficient of determination R²
F-value, P-value; t-test, F-test
Homoscedasticity, heteroscedasticity

Contents

Normal distribution and terms
18.1 An Example
18.2 The Simple Linear Regression Model
18.3 Estimation: The Method of Least Squares
18.4 Error Variance and the Standard Errors of Regression Estimators
18.5 Confidence Intervals for the Regression Parameters
18.6 Hypothesis Tests about the Regression Relationship
18.7 How Good is the Regression?
18.8 Analysis of Variance Table and an F Test of the Regression Model
18.9 Residual Analysis
18.10 Prediction Interval and Confidence Interval

Normal Distribution

The probability density function of the normal distribution is the Gaussian function

f(x; μ, σ) = (1 / (σ√(2π))) exp(−(x − μ)² / (2σ²)),

where σ > 0 is the standard deviation and the real parameter μ is the expected value. Writing φ for the density function of the "standard" normal distribution (i.e., the normal distribution with μ = 0 and σ = 1),

φ(z) = (1/√(2π)) exp(−z²/2),

the general density can be written f(x; μ, σ) = φ((x − μ)/σ) / σ.
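As a quick numerical check of the rescaling identity above, here is a minimal Python sketch (the function name is illustrative, not from the slides):

```python
import math

def normal_pdf(x, mu=0.0, sigma=1.0):
    """Density of N(mu, sigma^2) via the Gaussian function."""
    z = (x - mu) / sigma
    return math.exp(-0.5 * z * z) / (sigma * math.sqrt(2 * math.pi))

# The general density is the standard density phi, rescaled:
#   f(x; mu, sigma) = phi((x - mu) / sigma) / sigma
x, mu, sigma = 3.0, 1.0, 2.0
lhs = normal_pdf(x, mu, sigma)
rhs = normal_pdf((x - mu) / sigma) / sigma
assert abs(lhs - rhs) < 1e-12
```

At the peak x = μ the density is 1/(σ√(2π)), about 0.3989/σ.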

Normal Distribution (figure)

Moment About the Mean

The kth moment about the mean (or kth central moment) of a real-valued random variable X is the quantity μ_k = E[(X − E[X])^k], where E is the expectation operator. For a continuous univariate probability distribution with probability density function f(x), the kth moment about the mean μ is

μ_k = ∫ (x − μ)^k f(x) dx.

The first moment about zero, if it exists, is the expectation of X, i.e. the mean of the probability distribution of X, designated μ. In higher orders, the central moments are more interesting than the moments about zero:
μ_1 is 0.
μ_2 is the variance, the positive square root of which is the standard deviation, σ.
μ_3 / σ³ is the skewness, often written γ_1.
μ_4 / σ⁴ − 3 is the (excess) kurtosis.

Skewness

Consider the distribution in the figure. The bars on the right side of the distribution taper differently than the bars on the left side. These tapering sides are called tails, and they provide a visual means for determining which of the two kinds of skewness a distribution has:

1. Negative skew: the left tail is longer; the mass of the distribution is concentrated on the right of the figure. The distribution is said to be left-skewed.
2. Positive skew: the right tail is longer; the mass of the distribution is concentrated on the left of the figure. The distribution is said to be right-skewed.

Skewness

Skewness, the third standardized moment, is written γ_1 and defined as

γ_1 = μ_3 / σ³,

where μ_3 is the third moment about the mean and σ is the standard deviation. For a sample of n values, the sample skewness is

g_1 = m_3 / m_2^(3/2), where m_k = (1/n) Σ_{i=1..n} (x_i − x̄)^k.

Kurtosis

Kurtosis is the degree of peakedness of a distribution. A normal distribution is a mesokurtic distribution. A pure leptokurtic distribution has a higher peak than the normal distribution and has heavier tails. A pure platykurtic distribution has a lower peak than a normal distribution and lighter tails.

Kurtosis

The fourth standardized moment is defined as

β_2 = μ_4 / σ⁴,

where μ_4 is the fourth moment about the mean and σ is the standard deviation. For a sample of n values, the sample excess kurtosis is

g_2 = m_4 / m_2² − 3, where m_k = (1/n) Σ_{i=1..n} (x_i − x̄)^k.
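The sample versions of these standardized moments follow directly from the definitions above. A minimal Python sketch, using the biased (1/n) moment estimators as in the formulas (function names are illustrative):

```python
def central_moment(xs, k):
    """k-th sample central moment m_k = (1/n) * sum((x - mean)^k)."""
    n = len(xs)
    m = sum(xs) / n
    return sum((x - m) ** k for x in xs) / n

def sample_skewness(xs):
    # g1 = m3 / m2^(3/2): zero for a perfectly symmetric sample
    return central_moment(xs, 3) / central_moment(xs, 2) ** 1.5

def sample_excess_kurtosis(xs):
    # g2 = m4 / m2^2 - 3: negative for flat (platykurtic) samples
    return central_moment(xs, 4) / central_moment(xs, 2) ** 2 - 3

xs = [1, 2, 3, 4, 5]                      # symmetric, flat sample
assert abs(sample_skewness(xs)) < 1e-12   # symmetric -> skewness 0
assert sample_excess_kurtosis(xs) < 0     # flatter than normal
```

For the sample above, m_2 = 2 and m_4 = 6.8, so g_2 = 6.8/4 − 3 = −1.3, consistent with a platykurtic shape.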

An Example

Table 18.1 IL-6 levels in brain and serum (pg/ml) of 10 patients with subarachnoid hemorrhage. Columns: Patient i; Serum IL-6 (pg/ml), x; Brain IL-6 (pg/ml), y.

Scatterplot

This scatterplot locates pairs of observations of serum IL-6 on the x-axis and brain IL-6 on the y-axis. We notice that:
1. Larger (smaller) values of brain IL-6 tend to be associated with larger (smaller) values of serum IL-6.
2. The scatter of points tends to be distributed around a positively sloped straight line.
3. The pairs of values of serum IL-6 and brain IL-6 are not located exactly on a straight line. The scatter plot reveals a more or less strong tendency rather than a precise linear relationship. The line represents the nature of the relationship on average.

Examples of Other Scatterplots (figure: six scatterplots of X versus Y showing a variety of patterns)

Model Building

The inexact nature of the relationship between serum and brain IL-6 suggests that a statistical model might be useful in analyzing the relationship. A statistical model separates the systematic component of a relationship from the random component:

Data = Systematic component + Random errors

In ANOVA, the systematic component is the variation of means between samples or treatments (SSTR) and the random component is the unexplained variation (SSE). In regression, the systematic component is the overall linear relationship, and the random component is the variation around the line.

The Simple Linear Regression Model

The population simple linear regression model:

y = α + βx + ε, or equivalently μ_y|x = α + βx,

where α + βx is the nonrandom (systematic) component and ε is the random component. Here:
y is the dependent (response) variable, the variable we wish to explain or predict;
x is the independent (explanatory) variable, also called the predictor variable;
ε is the error term, the only random component in the model, and thus the only source of randomness in y;
μ_y|x is the mean of y when x is specified, also called the conditional mean of Y;
α is the intercept of the systematic component of the regression relationship;
β is the slope of the systematic component.

Picturing the Simple Linear Regression Model

The simple linear regression model posits an exact linear relationship between the expected (average) value of the dependent variable Y and the independent or predictor variable X:

μ_y|x = α + βx.

Actual observed values of Y differ from the expected value μ_y|x by an unexplained or random error ε:

y = μ_y|x + ε = α + βx + ε.

(Regression plot: a line with intercept α and slope β; at each observed point, the vertical deviation from the line is the error ε.)

Assumptions of the Simple Linear Regression Model

The LINE assumptions of the simple linear regression model:
1. The relationship between X and Y is a straight-line (linear) relationship.
2. The values of the independent variable X are assumed fixed (not random); the only randomness in the values of Y comes from the error term ε.
3. The errors ε are uncorrelated (i.e., independent) in successive observations.
4. The errors ε are normally distributed with mean 0 and equal variance σ². That is: ε ~ N(0, σ²).

(Figure: identical normal distributions of errors, all centered on the regression line μ_y|x = α + βx.)

Estimation: The Method of Least Squares

Estimation of a simple linear regression relationship involves finding estimated or predicted values of the intercept and slope of the linear regression line. The estimated regression equation is

y = a + bx + e,

where a estimates the intercept α of the population regression line; b estimates its slope β; and e stands for the observed errors, the residuals from fitting the estimated regression line a + bx to a set of n points. The estimated regression line is

ŷ = a + bx,

where ŷ (y-hat) is the value of Y lying on the fitted regression line for a given value of X.

Fitting a Regression Line (figure, four panels: the data; three errors from a fitted line; three errors from the least squares regression line; errors from the least squares regression line are minimized)

Errors in Regression (figure: the error at a data point (x_i, y_i) is the vertical distance from y_i to the fitted line)

Least Squares Regression

The sum of squared errors in regression is

SSE = Σ_{i=1..n} e_i² = Σ_{i=1..n} (y_i − ŷ_i)².

The least squares regression line is the line that minimizes SSE with respect to the estimates a and b. As a function of a and b, SSE is a convex quadratic (a paraboloid), so the minimum is unique.

Normal Equations

S is minimized when its gradient with respect to each parameter is equal to zero. The elements of the gradient vector are the partial derivatives of S with respect to the parameters:

∂S/∂β_j = 2 Σ_i r_i (∂r_i/∂β_j), j = 1, …, m.

Since r_i = y_i − Σ_k X_ik β_k, the derivatives are ∂r_i/∂β_j = −X_ij. Substitution of the expressions for the residuals and the derivatives into the gradient equations gives

−2 Σ_i X_ij (y_i − Σ_k X_ik β̂_k) = 0, j = 1, …, m.

Upon rearrangement, the normal equations

Σ_i Σ_k X_ij X_ik β̂_k = Σ_i X_ij y_i, j = 1, …, m,

are obtained. The normal equations are written in matrix notation as

(XᵀX) β̂ = Xᵀy.

The solution of the normal equations yields the vector β̂ of the optimal parameter values.

Normal Equations (figure)

Sums of Squares, Cross Products, and Least Squares Estimators
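For simple linear regression the normal equations reduce to the familiar scalar estimators b = SS_xy / SS_xx and a = ȳ − b·x̄. A minimal sketch (the helper name is illustrative, and the toy data below are not the slides' Example 18-1):

```python
def least_squares(xs, ys):
    """Least squares slope and intercept for simple linear regression."""
    n = len(xs)
    xbar = sum(xs) / n
    ybar = sum(ys) / n
    ss_xx = sum((x - xbar) ** 2 for x in xs)                       # Sum of squares of X
    ss_xy = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))   # cross products
    b = ss_xy / ss_xx        # slope estimate
    a = ybar - b * xbar      # intercept estimate
    return a, b

# Points lying exactly on y = 1 + 2x are recovered exactly:
a, b = least_squares([0, 1, 2, 3], [1, 3, 5, 7])
assert abs(a - 1.0) < 1e-12 and abs(b - 2.0) < 1e-12
```

This is the same fit the matrix normal equations (XᵀX)β̂ = Xᵀy would produce with a column of ones and a column of x values.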

Example 18-1

New Normal Distributions

Since each coefficient estimator is a linear combination of Y (normal random variables), each b_i (i = 0, 1, …, k) is normally distributed. In the two-dimensional (simple regression) special case:

b ~ N(β, σ² / SS_xx), and for the intercept (j = 0), a ~ N(α, σ² (1/n + x̄² / SS_xx)).

Total Variance and Error Variance (figure: what you see when looking at the total variation of Y, versus what you see when looking along the regression line at the error variance of Y)

Error Variance and the Standard Errors of Regression Estimators (figure: square and sum all regression errors to find SSE)

Standard Errors of Estimates in Regression
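The textbook standard-error formulas, s² = SSE/(n − 2), s_b = s/√SS_xx, and s_a = s·√(1/n + x̄²/SS_xx), can be sketched as follows (the function name is illustrative; a perfect linear fit gives s = 0, as the assertion checks):

```python
import math

def regression_standard_errors(xs, ys):
    """Return (s, s_b, s_a) for a simple linear regression fit."""
    n = len(xs)
    xbar = sum(xs) / n
    ybar = sum(ys) / n
    ss_xx = sum((x - xbar) ** 2 for x in xs)
    b = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys)) / ss_xx
    a = ybar - b * xbar
    sse = sum((y - (a + b * x)) ** 2 for x, y in zip(xs, ys))
    s = math.sqrt(sse / (n - 2))                      # sqrt of MSE, n-2 df
    s_b = s / math.sqrt(ss_xx)                        # standard error of slope
    s_a = s * math.sqrt(1 / n + xbar ** 2 / ss_xx)    # standard error of intercept
    return s, s_b, s_a

# Exact line -> zero residual variance and zero standard errors:
s, s_b, s_a = regression_standard_errors([0, 1, 2, 3], [1, 3, 5, 7])
assert s < 1e-12 and s_b < 1e-12 and s_a < 1e-12
```

With noisy data the same formulas give positive standard errors, which feed the confidence intervals and t test below.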

t Distribution

Student's t distribution arises when the population standard deviation is unknown and has to be estimated from the data.

Confidence Intervals for the Regression Parameters

18.6 Hypothesis Tests about the Regression Relationship

A hypothesis test for the existence of a linear relationship between X and Y:

H_0: β = 0 (no linear relationship)
H_1: β ≠ 0 (a linear relationship exists)

Test statistic for the existence of a linear relationship between X and Y:

t = b / s_b,

where b is the least-squares estimate of the regression slope and s_b is the standard error of b. When the null hypothesis is true, the statistic has a t distribution with n − 2 degrees of freedom. (Figure: under H_0, the data may show a constant Y, purely unsystematic variation, or a nonlinear relationship — in each case, no linear relationship.)

t-Test

A test of the null hypothesis that the means of two normally distributed populations are equal. Given two data sets, each characterized by its mean, standard deviation, and number of data points, we can use some kind of t test to determine whether the means are distinct, provided that the underlying distributions can be assumed to be normal. All such tests are usually called Student's t tests.

t-Test

t Test Table

How Good is the Regression?

The coefficient of determination, R², is a descriptive measure of the strength of the regression relationship, a measure of how well the regression line fits the data. At each point, the total deviation splits into an explained deviation and an unexplained deviation, and R² is the percentage of total variation explained by the regression:

R² = SSR / SST = 1 − SSE / SST.
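The decomposition above translates directly into code. A sketch computing R² as 1 − SSE/SST (helper name and toy data are illustrative):

```python
def r_squared(xs, ys):
    """Coefficient of determination for a simple linear regression fit."""
    n = len(xs)
    xbar = sum(xs) / n
    ybar = sum(ys) / n
    ss_xx = sum((x - xbar) ** 2 for x in xs)
    b = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys)) / ss_xx
    a = ybar - b * xbar
    sst = sum((y - ybar) ** 2 for y in ys)                      # total variation
    sse = sum((y - (a + b * x)) ** 2 for x, y in zip(xs, ys))   # unexplained
    return 1 - sse / sst                                        # = SSR / SST

# A perfect linear relationship gives R^2 = 1:
assert abs(r_squared([0, 1, 2, 3], [1, 3, 5, 7]) - 1.0) < 1e-12
```

For data scattered loosely around a line, the same function returns a value strictly between 0 and 1.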

The Coefficient of Determination (figure: three scatterplots with R² = 0, R² = 0.50, and R² = 0.90, showing how SST splits into SSR and SSE)

Another Test

Earlier in this section you saw how to perform a t-test to compare a sample mean to an accepted value, or to compare two sample means. In this section, you will see how to use the F-test to compare two variances or standard deviations. When using the F-test, you again require a hypothesis, but this time it is to compare standard deviations. That is, you will test the null hypothesis H_0: σ_1² = σ_2² against an appropriate alternate hypothesis.

F-Test

A t test is used for every single parameter. If there are many dimensions, all parameters are independent. To verify the combination of all the parameters, we can use an F-test. The formula for an F-test in multiple-comparison ANOVA problems is:

F = (between-group variability) / (within-group variability)

F Test Table

Analysis of Variance Table and an F Test of the Regression Model

F-Test, t-Test, and R²

1. In the 2D case, the F-test and the t-test are the same; it can be proved that F = t². So in the 2D case either the F or the t test is enough. This is not true for more variables.
2. The F-test and R² have the same purpose: to measure the whole regression. They are related as F = (R²/k) / ((1 − R²)/(n − k − 1)); in simple regression (k = 1), F = (n − 2) R² / (1 − R²).
3. The F-test is better than R² because it has a known distribution under the null hypothesis, which supports a formal hypothesis test.

Approach:
1. First the F-test. If it passes, continue.
2. A t-test for every parameter; if some parameter cannot pass, we can delete it and re-evaluate the regression.
3. Note: we can delete only one parameter (the one with the least effect on the regression) at a time, until all remaining parameters have a strong effect.
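The F = t² identity for the 2D case can be verified numerically. A sketch computing both statistics from the ANOVA decomposition (names and toy data are illustrative):

```python
import math

def slope_t_and_F(xs, ys):
    """t statistic for H0: beta = 0, and the regression F statistic."""
    n = len(xs)
    xbar = sum(xs) / n
    ybar = sum(ys) / n
    ss_xx = sum((x - xbar) ** 2 for x in xs)
    b = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys)) / ss_xx
    a = ybar - b * xbar
    sse = sum((y - (a + b * x)) ** 2 for x, y in zip(xs, ys))   # error SS
    ssr = sum((a + b * x - ybar) ** 2 for x in xs)              # regression SS
    mse = sse / (n - 2)
    t = b / math.sqrt(mse / ss_xx)   # t with n - 2 df
    F = ssr / mse                    # F with (1, n - 2) df
    return t, F

t, F = slope_t_and_F([1, 2, 3, 4, 5], [2.1, 3.9, 6.2, 7.8, 10.1])
assert abs(F - t * t) < 1e-6         # F = t^2 in simple regression
```

With more than one predictor the F statistic aggregates all slopes and the identity no longer holds, which is exactly the point of item 1 above.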

Residual Analysis

Example 18-1: Using Computer — Excel

Residual analysis: the plot shows a curved relationship between the residuals and the X values (serum IL-6).

Prediction Interval

Suppose one has a sample from a normally distributed population. The mean and standard deviation of the population are unknown except insofar as they can be estimated based on the sample. It is desired to predict the next observation. Let n be the sample size; let μ and σ be, respectively, the unobservable mean and standard deviation of the population. Let X_1, …, X_n be the sample, and let X_{n+1} be the future observation to be predicted. Let

X̄_n = (X_1 + ⋯ + X_n) / n and s_n² = Σ_{i=1..n} (X_i − X̄_n)² / (n − 1).

Prediction Interval

Then it is fairly routine to show that

(X_{n+1} − X̄_n) / (s_n √(1 + 1/n))

has a Student's t-distribution with n − 1 degrees of freedom. Consequently,

P(X̄_n − T_a s_n √(1 + 1/n) ≤ X_{n+1} ≤ X̄_n + T_a s_n √(1 + 1/n)) = p,

where T_a is the 100((1 + p)/2)th percentile of Student's t-distribution with n − 1 degrees of freedom. Therefore the numbers

X̄_n ± T_a s_n √(1 + 1/n)

are the endpoints of a 100p% prediction interval for X_{n+1}.
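The endpoint formula can be sketched directly; the t percentile must be looked up externally (the 2.776 below is the 97.5th percentile of t with 4 degrees of freedom, quoted from standard tables, so it yields a 95% interval for a sample of size 5):

```python
import math
import statistics

def prediction_interval(sample, t_quantile):
    """Endpoints xbar -/+ t * s * sqrt(1 + 1/n) for the next observation.

    t_quantile is the 100((1+p)/2)th percentile of Student's t with
    n - 1 degrees of freedom, supplied by the caller (e.g. from a table).
    """
    n = len(sample)
    xbar = statistics.mean(sample)
    s = statistics.stdev(sample)    # sample sd with n - 1 denominator
    half = t_quantile * s * math.sqrt(1 + 1 / n)
    return xbar - half, xbar + half

lo, hi = prediction_interval([1, 2, 3, 4, 5], 2.776)
assert lo < hi                                  # a genuine interval
assert abs((lo + hi) / 2 - 3.0) < 1e-12         # centered on the sample mean
```

Note the √(1 + 1/n) factor: the interval must cover both the uncertainty in X̄_n and the spread of the new observation itself, so it is wider than a confidence interval for the mean.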

Prediction Interval and Confidence Interval

Point prediction: a single-valued estimate of Y for a given value of X, obtained by inserting the value of X into the estimated regression equation.

Prediction interval (for a value of Y given a value of X) accounts for:
1. variation in the regression line estimate;
2. variation of points around the regression line.

Confidence interval (for an average value of Y given a value of X) accounts for:
1. variation in the regression line estimate only.

Confidence Interval of an Average Value of Y Given a Value of X

Confidence Interval for the Average Value of Y

Prediction Interval for a Value of Y Given a Value of X

Prediction Interval for a Value of Y

Confidence Interval for the Average Value of Y and Prediction Interval for the Individual Value of Y
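The two bands use the standard half-width formulas t·s·√(1/n + (x₀ − x̄)²/SS_xx) for the mean response and t·s·√(1 + 1/n + (x₀ − x̄)²/SS_xx) for an individual response; they differ only by the extra 1 under the root. A sketch comparing them at a given x₀ (names are illustrative; 4.303 is the 97.5th t percentile for n − 2 = 2 df, from standard tables):

```python
import math

def interval_half_widths(xs, ys, x0, t_quantile):
    """Half-widths of the mean-response CI and individual-response PI at x0."""
    n = len(xs)
    xbar = sum(xs) / n
    ybar = sum(ys) / n
    ss_xx = sum((x - xbar) ** 2 for x in xs)
    b = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys)) / ss_xx
    a = ybar - b * xbar
    s = math.sqrt(sum((y - (a + b * x)) ** 2 for x, y in zip(xs, ys)) / (n - 2))
    lev = 1 / n + (x0 - xbar) ** 2 / ss_xx      # leverage-type term
    ci = t_quantile * s * math.sqrt(lev)        # band for the average Y
    pi = t_quantile * s * math.sqrt(1 + lev)    # band for an individual Y
    return ci, pi

ci, pi = interval_half_widths([0, 1, 2, 3], [1.1, 2.9, 5.2, 6.8], 1.0, 4.303)
assert pi > ci > 0   # the individual band is always the wider one
```

Both bands are narrowest at x₀ = x̄ and flare outward as x₀ moves away from the center of the data, because the (x₀ − x̄)²/SS_xx term grows.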

Summary

1. Regression analysis is applied for prediction while controlling the effect of the independent variable X.
2. The principle of least squares in the solution of the regression parameters is to minimize the residual sum of squares.
3. The coefficient of determination, R², is a descriptive measure of the strength of the regression relationship.
4. There are two confidence bands: one for mean predictions and the other for individual prediction values.
5. Residual analysis is used to check goodness of fit for models.