Inference About 2 or More Normal Populations, Part 1

Presentation transcript:

Inference About 2 or More Normal Populations, Part 1 BMTRY 726 6/12/2018

Paired Samples Common when we want to compare responses before and after treatment in the same subject. Helps control subject-to-subject variation. Univariate case:
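The slide's display does not appear in the transcript; as a sketch, the standard univariate paired analysis it refers to is: with differences d_j = x_{1j} - x_{2j}, j = 1, ..., n, and H_0: \delta = 0,

t = \frac{\bar{d}}{s_d / \sqrt{n}} \sim t_{n-1} \text{ under } H_0,

and we reject H_0 at level \alpha when |t| > t_{n-1}(\alpha/2).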

Paired Samples Multivariate case: Notation
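The notation itself is missing from the transcript; a minimal version consistent with the rest of the slides: for subject j = 1, ..., n let X_{1j} and X_{2j} be the p-variate responses under the two conditions, define D_j = X_{1j} - X_{2j}, and let \delta = E(D_j) with \Sigma_d = Cov(D_j).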

Paired Samples Multivariate case: (2) Result 6.1: Assuming the differences D1, D2, …, Dn are a random sample from an Np(δ, Σd) distribution, then the T2 statistic has the exact null distribution given below. This follows directly from the one-sample Hotelling's T2 test in Chapter 5.
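Stated here in its standard form, since the slide's display is not in the transcript:

T^2 = n(\bar{D} - \delta)' S_d^{-1} (\bar{D} - \delta) \sim \frac{(n-1)p}{n-p} F_{p,\, n-p},

so H_0: \delta = 0 is rejected at level \alpha when T^2 = n\bar{D}' S_d^{-1} \bar{D} > \frac{(n-1)p}{n-p} F_{p, n-p}(\alpha).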

Paired Samples Thus it is easy to see that the 100(1−α)% confidence region for δ, the 100(1−α)% simultaneous confidence intervals for the individual δi's, and the Bonferroni 100(1−α)% CIs for the individual δi's follow directly (see the sketch below).
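A sketch of the three standard forms, matching the R code used later in the lecture:

Confidence region: \{\delta : n(\bar{d} - \delta)' S_d^{-1} (\bar{d} - \delta) \le \frac{(n-1)p}{n-p} F_{p,n-p}(\alpha)\}

Simultaneous (T^2) intervals: \bar{d}_i \pm \sqrt{\frac{(n-1)p}{n-p} F_{p,n-p}(\alpha)}\,\sqrt{s_{d_i}^2 / n}

Bonferroni intervals: \bar{d}_i \pm t_{n-1}\!\left(\frac{\alpha}{2p}\right)\sqrt{s_{d_i}^2 / n}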

Example: Faculty Grading Two dental medicine faculty are interested in whether they are consistent in their grading of student crown preparations. They each evaluate 5 different student preparations based on 2 measures: surface preparation for the crown, and how well the student bonded the crown.

Example: Faculty Grading What is the hypothesis?
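The hypothesis is not written out in the transcript; a natural statement in the paired-difference notation above: let δ = (δ1, δ2)' be the mean Faculty 1 minus Faculty 2 difference in (surface, bonding) scores, and test H0: δ = 0 against H1: δ ≠ 0.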

Example
Tooth ID  Surface F1  Bonding F1  Surface F2  Bonding F2
1         80          90          70          85
2         65          75
3
4         95
5         60

Example Find T2

Example What are the 95% simultaneous CIs for δ1 and δ2 (F2,3(0.05) = 9.55)?
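A worked piece of the calculation, as a sketch: with p = 2 and n = 5 the simultaneous intervals are

\bar{d}_i \pm \sqrt{\frac{p(n-1)}{n-p} F_{p,n-p}(0.05)}\sqrt{\frac{s_{d_i}^2}{n}} = \bar{d}_i \pm \sqrt{\frac{2\cdot 4}{3}(9.55)}\sqrt{\frac{s_{d_i}^2}{5}} \approx \bar{d}_i \pm 5.05\sqrt{\frac{s_{d_i}^2}{5}}.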

2nd Example: Missing Incisor Repair Recall our example of the use of braces to fix the gap caused by a congenitally missing incisor. The initial question was whether or not use of braces made facial appearance closer to "normal". The same PI is also interested in whether or not use of braces significantly changes a person's facial features. Recall there are 3 measures: 2 hard tissue and 1 soft tissue. The PI also has data on these measures pre- and post-braces.

2nd Example: Missing Incisor Repair What is our hypothesis?

2nd Example: Missing Incisor Repair
> inc<-read.csv("H:\\public_html\\BMTRY726_Summer2018\\Data\\MissingIncisor.csv")
> head(inc)
  ID pre_M1 pre_M2 pre_M3 post_M1 post_M2 post_M3
1  1   79.9   78.3    8.9    82.2    77.5     6.1
2  2   82.2   79.4    8.9    90.1    84.3    10.7
3  3   65.6   69.4    5.8    81.7    80.1     5.2
4  4   77.6   79.9    5.5    86.4    84.2     3.7
5  5   86.0   82.8   10.0    83.2    78.0     5.8
6  6   75.5   76.0   10.6    81.6    78.0     9.5

2nd Example: Missing Incisor Repair
> dinc<-cbind(c(inc[,2]-inc[,5]), c(inc[,3]-inc[,6]), c(inc[,4]-inc[,7]))
> library(ICSNP)
> T2b<-HotellingsT2(dinc, mu=c(0,0,0), test="f")
> T2b

        Hotelling's one sample T2-test

data:  dinc
T.2 = 91.125, df1 = 3, df2 = 30, p-value = 3.553e-15
alternative hypothesis: true location is not equal to c(0,0,0)

2nd Example: Missing Incisor Repair
### 95% Simultaneous CIs
> mu<-colMeans(dinc); mu
[1] -4.1303 -0.0970  2.3545
> S<-var(dinc); p<-ncol(dinc); n<-nrow(dinc)
> round(c(mu[1]-sqrt(p*(n-1)/(n-p)*qf(0.95, p, n-p))*sqrt(S[1,1]/n),
          mu[1]+sqrt(p*(n-1)/(n-p)*qf(0.95, p, n-p))*sqrt(S[1,1]/n)), 3)
[1] -6.684 -1.577
> round(c(mu[2]-sqrt(p*(n-1)/(n-p)*qf(0.95, p, n-p))*sqrt(S[2,2]/n),
          mu[2]+sqrt(p*(n-1)/(n-p)*qf(0.95, p, n-p))*sqrt(S[2,2]/n)), 3)
[1] -2.381  2.187
> round(c(mu[3]-sqrt(p*(n-1)/(n-p)*qf(0.95, p, n-p))*sqrt(S[3,3]/n),
          mu[3]+sqrt(p*(n-1)/(n-p)*qf(0.95, p, n-p))*sqrt(S[3,3]/n)), 3)
[1] 0.737 3.972

2nd Example: Missing Incisor Repair
### 95% Bonferroni CIs
> c(mu[1]-qt(1-0.05/6, n-1)*sqrt(S[1,1]/n), mu[1]+qt(1-0.05/6, n-1)*sqrt(S[1,1]/n))
[1] -6.240 -2.021
> c(mu[2]-qt(1-0.05/6, n-1)*sqrt(S[2,2]/n), mu[2]+qt(1-0.05/6, n-1)*sqrt(S[2,2]/n))
[1] -1.984  1.790
> c(mu[3]-qt(1-0.05/6, n-1)*sqrt(S[3,3]/n), mu[3]+qt(1-0.05/6, n-1)*sqrt(S[3,3]/n))
[1] 1.018 3.691

Repeated Measures Consider a design with 1 response variable but multiple treatments for the same subject. For example…
-Measure outcome at different time points
-Measure response under different treatments
-Measure response for different raters

Repeated Measures Cognitive performance in Parkinson's rats:
-Three treatments to improve cognitive performance
-Each rat receives each treatment for 1 week
-Measure average cognitive performance at the end of each week of treatment
Identification of the seizure onset center in patients with epilepsy:
-70 intracranial electrodes in different regions of the brain
-Record electrode activity during 10 seizures
-Measure average connectivity for all electrode pairs

Repeated Measures & Contrasts Test the hypothesis that the q treatment (or time) means are all equal. Write the null hypothesis in matrix form (see below).
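A sketch of the matrix form, since the slide's display is not in the transcript: with X_j ~ N_q(\mu, \Sigma) for subjects j = 1, …, n,

H_0: \mu_1 = \mu_2 = \cdots = \mu_q \iff H_0: C\mu = 0,

where C is any (q−1) × q contrast matrix of full row rank, e.g. with rows (1, −1, 0, …, 0), (0, 1, −1, …, 0), …, (0, …, 0, 1, −1).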

Repeated Measures & Contrasts Transform the vector of observed values: Yj = CXj. Then Y1, Y2, …, Yn is a random sample from a (q−1)-dimensional normal with mean Cμ and covariance CΣC'. So our Hotelling T2 test statistic is given below.
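Written out here in its standard form (the display is not in the transcript):

T^2 = n(C\bar{x})'(CSC')^{-1}(C\bar{x}),

and under H_0: C\mu = 0,

T^2 \sim \frac{(n-1)(q-1)}{n-q+1} F_{q-1,\, n-q+1},

so we reject H_0 at level \alpha when T^2 exceeds \frac{(n-1)(q-1)}{n-q+1} F_{q-1, n-q+1}(\alpha).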

Repeated Measures & Contrasts Proof Solve for S_Y in terms of S_X in class
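The key step in the in-class exercise: since Y_j = CX_j, the sample quantities satisfy \bar{y} = C\bar{x} and S_Y = C S_X C', which is what turns the one-sample T^2 for the transformed data into the contrast form above.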

Repeated Measures & Contrasts So consider data Test: Reject if: Note, for

Example Reaction times of n = 20 people to visual stimuli while driving in a simulator were measured at 0, 30, and 60 minutes after consumption of 2 alcoholic beverages. Have the class calculate T2.

Example The investigator in the study wants to examine whether there is a difference in reaction times across the different measurement times. What is the hypothesis? Have the class calculate T2.

Example Are the mean reaction times the same at all 3 time points? Have the class calculate T2 (a sketch of the calculation in R follows).
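The reaction-time data are not reproduced in the transcript, so this is only a sketch with simulated values; the variable names and the numbers in rnorm() are hypothetical, but the contrast-then-Hotelling step mirrors the ICSNP code used earlier in the lecture.

## Hypothetical data: 20 subjects, reaction time (seconds) at 0, 30, and 60 minutes
set.seed(726)
rt <- cbind(t0  = rnorm(20, mean = 0.60, sd = 0.08),
            t30 = rnorm(20, mean = 0.70, sd = 0.08),
            t60 = rnorm(20, mean = 0.78, sd = 0.10))

## Contrast matrix: 0 vs 30 minutes, and 30 vs 60 minutes
C <- rbind(c(1, -1, 0),
           c(0, 1, -1))

## Within-subject contrasts Y = X C'
Y <- rt %*% t(C)

## H0: C mu = 0, i.e., the mean reaction time is the same at all 3 times
library(ICSNP)
HotellingsT2(Y, mu = c(0, 0), test = "f")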

More about Contrasts Note the choice of C is not unique. Any full (row) rank contrast matrix C will do and defines the null hypothesis in the same way. In the example we could write either of the two matrices shown below; these C matrices will give the same T2 value.
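Two such choices for the 3-time-point example, written out here as a sketch since the slide's matrices are not in the transcript:

C_1 = \begin{pmatrix} 1 & -1 & 0 \\ 0 & 1 & -1 \end{pmatrix}, \qquad C_2 = \begin{pmatrix} 1 & -1 & 0 \\ 1 & 0 & -1 \end{pmatrix}.

Both satisfy C\mu = 0 exactly when \mu_1 = \mu_2 = \mu_3, and since C_2 = BC_1 for a nonsingular B, the quantity (C\bar{x})'(CSC')^{-1}(C\bar{x}) is unchanged, so T^2 is the same.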

More about Contrasts Proof

Example 6.2 in the book: Four anesthetic treatments given to n = 19 dogs:
-T1 = High CO2, no Halothane
-T2 = Low CO2, no Halothane
-T3 = High CO2, with Halothane
-T4 = Low CO2, with Halothane
Milliseconds between heartbeats was measured for each treatment.

We want to know: is the average time between heartbeats the same for the four treatments? We could use something similar to our last example…

We could use C for more focused comparisons… What specific hypotheses might we be interested in? What about C? (One possibility is sketched below.)
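One way to write C for the three effects discussed on the following slides (halothane, CO2, and their interaction); the sign conventions are an assumption, since the slide's matrix is not in the transcript. With the treatment ordering (T1, T2, T3, T4) above,

C = \begin{pmatrix} -1 & -1 & 1 & 1 \\ 1 & -1 & 1 & -1 \\ 1 & -1 & -1 & 1 \end{pmatrix},

whose rows contrast halothane vs. no halothane, high vs. low CO2, and the halothane-by-CO2 interaction, respectively.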

If we use the C we just described, what do we decide…

Repeated Measures: CIs For Cμ, the 100(1−α)% confidence region, the 100(1−α)% simultaneous confidence intervals for single contrasts, and the Bonferroni 100(1−α)% CIs for the individual contrasts take the forms sketched below.
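A sketch of the standard forms (the slide's displays are not in the transcript):

Confidence region for C\mu: n(C\bar{x} - C\mu)'(CSC')^{-1}(C\bar{x} - C\mu) \le \frac{(n-1)(q-1)}{n-q+1} F_{q-1, n-q+1}(\alpha)

Simultaneous interval for a single contrast c'\mu: c'\bar{x} \pm \sqrt{\frac{(n-1)(q-1)}{n-q+1} F_{q-1, n-q+1}(\alpha)}\sqrt{\frac{c'Sc}{n}}

Bonferroni intervals for m pre-specified contrasts: c'\bar{x} \pm t_{n-1}\!\left(\frac{\alpha}{2m}\right)\sqrt{\frac{c'Sc}{n}}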

We can construct 95% simultaneous confidence intervals for these 3 contrasts…
-Halothane versus no Halothane
-High CO2 versus low CO2
-Interaction between Halothane and CO2

However, if a priori we decide we are only interested in these three effects, we can construct Bonferroni CIs:
-Halothane versus no Halothane
-High CO2 versus low CO2
-Interaction between Halothane and CO2

Conclusions Use of halothane produces longer times between heartbeats. This is consistent across CO2 levels since the interaction was not significant. Lower levels of CO2 result in longer times between heartbeats whether or not halothane is used.
-Note, we did not need T2 to come to this conclusion.
-The investigators in this study also did not randomize the order in which the dogs received each treatment combination. Thus time or carry-over effects may be confounded with the halothane or CO2 effects.

What About ANOVA? ANOVA table (below). Use of the F-test in this ANOVA is based on the assumptions:
(1) The 4 observations for each dog are independent
(2) Observations taken on different dogs are independent
(3) Homogeneous variance

Source of Variation   d.f.   Sums of Squares   Mean Square
Halothane              1          208101
CO2                    1           17129
Interaction            1             777
Error                 72          405229          5628
Corrected Total       75
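A quick arithmetic check of what this table implies, filling in values the slide leaves to the reader under the independence assumptions above: each mean square is SS/d.f., so MS(Halothane) = 208101, MS(CO2) = 17129, MS(Interaction) = 777, and MS(Error) = 405229/72 ≈ 5628, giving F ratios against MS(Error) of roughly 37.0, 3.04, and 0.14 on (1, 72) d.f. for halothane, CO2, and the interaction, respectively.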

Repeated Measures Random Sample: Linear model: random errors can have different variances and can also be correlated
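A sketch of the model the slide refers to (the display is not in the transcript): for treatment i = 1, …, p on subject j = 1, …, n,

X_{ij} = \mu_i + e_{ij}, \qquad e_j = (e_{1j}, \ldots, e_{pj})' \sim N_p(0, \Sigma),

with \Sigma left unrestricted, so the errors within a subject may have different variances and may be correlated.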

Mixed Model Analysis What happens if we have the following model, with a fixed treatment effect, a random subject effect, and a random within-subject effect (see the sketch below)? We also assume the {sj} are independent of the {rij}. Note: observations taken on the same subject are correlated, leading to…
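A sketch of the usual random-subjects form, with symbols matching the slide's {sj} and {rij} (the display itself is not in the transcript):

X_{ij} = \mu + \tau_i + s_j + r_{ij}, \qquad s_j \sim N(0, \sigma_s^2), \quad r_{ij} \sim N(0, \sigma_r^2),

with fixed treatment effects \tau_i, random subject effects s_j, and within-subject errors r_{ij}, all random terms mutually independent.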

Mixed Model Analysis For the jth subject

Correlation between any 2 obs on the same unit Then the covariance structure implied by the mixed model, and hence the correlation between any 2 observations on the same unit, is as sketched below.
-Note: the T2 test doesn't assume this special covariance structure.
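A sketch of the implied (compound symmetry) structure, since the displays are not in the transcript:

Var(X_{ij}) = \sigma_s^2 + \sigma_r^2, \qquad Cov(X_{ij}, X_{i'j}) = \sigma_s^2 \ (i \ne i'),

so the correlation between any two observations on the same subject is \rho = \sigma_s^2 / (\sigma_s^2 + \sigma_r^2), the same for every pair.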

Mixed Model ANOVA

Source of Variation        d.f.          Sums of Squares   Mean Square
Treatment w/in subject     p−1
Subjects                   n−1
Error                      (p−1)(n−1)
Corrected Total            np−1

The test statistic and rejection rule are sketched below.
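A sketch of the F-test this table leads to (the slide's displays are not in the transcript):

F = \frac{MS_{Trt}}{MS_{Error}} \sim F_{p-1,\ (p-1)(n-1)} \text{ under } H_0: \tau_1 = \cdots = \tau_p = 0,

and we reject H_0 at level \alpha if F > F_{p-1, (p-1)(n-1)}(\alpha).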

When the mixed model covariance structure is correct, this F-test is exact and has the stated null distribution.

When the mixed model F-test is applied but that covariance structure does not hold, the type I error level often exceeds the specified α. When H0 is true, the numerator and denominator of F behave like weighted sums of independent chi-square terms built from the zj's, where the zj's are NID(0,1) random variables.

Otherwise, F is only approximately F-distributed, with reduced degrees of freedom (p−1)q and (p−1)(n−1)q, where q is defined below.
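A sketch of the usual definition of q (this is Box's epsilon; the slide's formula is not in the transcript): if λ1, …, λ_{p−1} are the eigenvalues of CΣC' for an orthonormal contrast matrix C, then

q = \frac{\left(\sum_i \lambda_i\right)^2}{(p-1)\sum_i \lambda_i^2},

which satisfies 1/(p−1) ≤ q ≤ 1.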

Note that q will be close to its lower bound 1/(p−1) when Σ has a large diversity in variance, or Σ has some unequal eigenvalues near zero, or both. q = 1 if the mixed model assumptions are correct, or more generally when CΣC' is proportional to the identity (sphericity).

Since Σ is unknown, a conservative approach is to make q as small as possible, i.e. equal to its lower bound 1/(p−1). Then reject H0: μ1 = μ2 = … = μp if F > F1,n−1(α), the conservative degrees of freedom test. Alternative approach:
-Estimate q from the data

Analysis of Repeated Measures Studies Mixed model ANOVA:
-Familiar random subjects model
-Simple, well known
-Assumes equal variances and equal correlations
-Good power
-Other covariance structures are available in PROC MIXED (see the R sketch below)
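The slides point to SAS PROC MIXED; a rough R analogue is sketched here since the rest of the lecture uses R. The data frame long and its columns response, treatment (a factor), and subject are assumed names, not taken from the slides.

## Random-subjects mixed model (compound symmetry), an analogue of the ANOVA above
library(nlme)
fit.cs <- lme(response ~ treatment, random = ~ 1 | subject, data = long)
anova(fit.cs)   # F-test for the treatment effect

## A more general within-subject covariance: unequal variances, unstructured correlation
fit.un <- gls(response ~ treatment, data = long,
              correlation = corSymm(form = ~ 1 | subject),
              weights = varIdent(form = ~ 1 | treatment))
anova(fit.un)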

Analysis of Repeated Measures Studies Conservative degrees of freedom:
-Familiar random subjects mixed model
-Arbitrary covariance structure
-Conservative
-Actual type I error is less than the nominal α
-Loss of power
-Could estimate a correction to the degrees of freedom

Analysis of Repeated Measures Studies Hotelling T2 (or MANOVA):
-Arbitrary covariance structure
-Exact type I error level
-Less familiar
-Power:
 -More power than the conservative degrees of freedom approach
 -Less power than the random subjects mixed model