First example

Solution No, main effects appear in the MARGINAL means. No, any treatment effect will appear in patterns of MEANS in the table, not VARIANCES. Yes, main effects show themselves in differences among the MARGINAL MEANS. We already have a respectable candidate, so we choose C.

Second example

Drawing the graph

Multiple line graph The Category Axis is the horizontal scale. This will produce a Right-handedness line and a Left-handedness line.

The handedness profiles

An interaction No main effect of either factor is evident. But the effect of Handedness reverses its direction from Task 1 to Task 3. There is a striking interaction pattern. So C is the right answer.

Adding a main effect Create a new variable ScoreTwo by adding thirty points to each of the scores of the left-handed group. This will add a main effect of the Handedness factor to the data. The interaction between Task and Handedness will remain.
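A minimal numpy sketch (with invented scores for the two Handedness groups across three tasks) illustrates the point: adding a constant to one group's scores shifts that group's marginal mean, creating a main effect, but leaves the interaction pattern untouched:

```python
import numpy as np

# Invented scores: rows = Handedness (Right, Left), columns = Tasks 1-3
score = np.array([[20., 15., 10.],
                  [10., 15., 20.]])

# Create ScoreTwo by adding 30 points to every left-handed score
score_two = score.copy()
score_two[1] += 30

# The marginal (row) means now differ: a main effect of Handedness
print(score_two.mean(axis=1))                 # [15. 45.]

# But the non-parallelism of the profiles is unchanged: the
# interaction is still there, exactly as before
print(np.diff(score[1] - score[0]))           # [10. 10.]
print(np.diff(score_two[1] - score_two[0]))   # [10. 10.]
```
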

Interaction plus main effect

Separation of the lines The lines no longer cross. Their non-parallelism indicates an interaction. Interactions and main effects are quite separate and either can occur without the other.

A three-factor experiment When another factor (in this case Gender) is added to an experimental design, all THREE factors may interact.

The results

Simple two-way interactions A two-way interaction at one level of a third factor is known as a SIMPLE TWO-WAY INTERACTION. In the Female data, the profiles are parallel: there is no simple Drug×Alertness interaction. In the data from the Male participants, the profiles are not parallel, indicating the presence of a simple Drug×Alertness interaction.

Three-way interaction A THREE-WAY INTERACTION is said to occur when the simple interactions between two of the factors are not homogeneous across all levels of a third factor. As with main effects and two-way interactions, a three-way interaction can occur irrespective of whether other effects are present.

Summary table for the 3-way ANOVA

Some observations There are now three kinds of effect: MAIN EFFECTS, TWO-WAY INTERACTIONS and a THREE-WAY INTERACTION. The same error term is used for all the F tests: the MSwithin, which is the mean of the cell variances, as in the two-way ANOVA.

Warning You can have as many factors as you like in your experimental design. But I suggest you have no more than three. A three-way interaction can be difficult enough to explain; a four-way interaction is often impossible to explain. KEEP YOUR DESIGN SIMPLE!

Lecture 4 WITHIN SUBJECTS EXPERIMENTS

Between subjects experiments ALL factors are between subjects. We have considered three types of between subjects experiment: the one-factor between subjects experiment; the two-factor between subjects factorial experiment; the three-factor between subjects factorial experiment. Each participant (subject) is tested only once, either (as in the one-factor experiment) under one CONDITION or (as in factorial experiments) under one COMBINATION of conditions.

Within subjects experiments In a WITHIN SUBJECTS experiment, each participant is tested at ALL levels of EVERY treatment factor. In a within subjects experiment, ALL factors are within subjects. Such an experiment is also said to have REPEATED MEASURES on all of its treatment factors.

Stroop interference You are asked not to READ the words, but to NAME the COLOUR of the print each time. It’s difficult to do, because the natural tendency is to read the word. The more words there are, the more difficult it gets.

A within subjects experiment Stroop (1935) tested the SAME PARTICIPANTS under all three conditions. So each person was serving as his or her own control.

The results Hardest task Easiest task Harder than reading Participants took longer to name colours than to read colour words printed in black and white. But they took MUCH longer to name the print colours of the colour words in the ‘conflicting’ condition.

The dependent and independent variables The dependent variable is the time taken by the participant (subject) to go through all the items on a card, either naming the print colour of each item or (in the reading condition) reading the words. The independent variable (or factor) is the condition under which the task is attempted. This Stroop factor has THREE levels.

The experimental design Since each subject (participant) performs under all three conditions, i.e., at all ‘levels’ of the Stroop factor, this is a ONE-FACTOR WITHIN SUBJECTS experiment.

Comparing designs In the within subjects experiment, you can obtain each participant’s mean score across all conditions. You can’t do that in the between subjects experiment.

Subjects (participants) as a factor in the experiment Since you can obtain each participant’s mean score, you can have ‘subjects’ as a source in the ANOVA summary table. This will leave a smaller estimate of error variance known as the RESIDUAL.

Sums of squares and degrees of freedom Recall that, for many purposes, what is true of sums of squares SS is also true of their degrees of freedom df. For example, in the one-way ANOVA, the PARTITION OF THE TOTAL SUM OF SQUARES is SStotal = SSbetween + SSwithin We can also write dftotal = dfbetween + dfwithin
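This partition can be checked numerically. The following sketch uses invented scores for three groups of four participants; the numbers are assumptions chosen purely for illustration:

```python
import numpy as np

# Invented scores for three groups of four participants (N = 12)
groups = [np.array([3., 5., 4., 6.]),
          np.array([7., 6., 8., 7.]),
          np.array([10., 9., 11., 12.])]

all_scores = np.concatenate(groups)
grand_mean = all_scores.mean()

ss_total   = ((all_scores - grand_mean) ** 2).sum()
ss_between = sum(len(g) * (g.mean() - grand_mean) ** 2 for g in groups)
ss_within  = sum(((g - g.mean()) ** 2).sum() for g in groups)

# The partition holds: SStotal = SSbetween + SSwithin
print(np.isclose(ss_total, ss_between + ss_within))   # True

# The same partition holds for the degrees of freedom
N, k = len(all_scores), len(groups)
print(N - 1 == (k - 1) + (N - k))                     # True
```
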

Suppose … Suppose the experimental design were between subjects, rather than within subjects. Suppose we had three different groups of four people (12 participants in all), each group being tested under one condition. A one-way ANOVA would be appropriate.

A comparison of the ANOVAs Within subjects ANOVA One-way ANOVA In the within subjects analysis, the Within degrees of freedom (9) has been split into Subjects and Residual components. What’s true of the df is true also of the SS.

Partition of SStotal In the partition of the total sum of squares, the total number of scores is N = nk, where n is the number of subjects per group (one-way ANOVA) or the number of subjects (within subjects ANOVA) and k is the number of groups or treatments.

Implications For a given number of observations, the error term in the within subjects ANOVA will be smaller than it would have been for the between subjects (one-way) ANOVA. In the within subjects ANOVA, the main effect of individual differences in average ability is factored out.

Advantages of within subjects experiments The within subjects experiment has several advantages over the between subjects experiment. You cannot argue that people of different abilities were tested under different conditions. You gather as much data with fewer participants. You make maximum use of the participants while they are available.

Order effects On the other hand, the performance of a task may be affected by the order in which the tasks are performed. Perhaps it’s easier to name the colours of non-word objects after you have tried the conflicting colour-word task than after you have tried the reading task? If all participants were to perform the three tasks in the same order, therefore, the results might be confounded with ORDER EFFECTS.

Counterbalancing We vary the order of presentation, so that each task is presented equally often in First, Second and Third positions. This procedure is known as COUNTERBALANCING.

Cyclic permutation Counterbalancing can be achieved by CYCLIC PERMUTATION. A, B, C and D represent the different conditions in the experiment. The condition on the right is continually moved to the leftmost position each time, so that all the other conditions ‘slide’ to the right. Each condition occurs in the same serial position with approximately equal frequency.
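A short sketch of cyclic permutation for four hypothetical conditions A, B, C and D (each successive order moves the last condition to the front):

```python
def cyclic_orders(conditions):
    """Counterbalance by cyclic permutation: each new order moves the
    last (rightmost) condition to the leftmost position."""
    orders = [list(conditions)]
    for _ in range(len(conditions) - 1):
        prev = orders[-1]
        orders.append([prev[-1]] + prev[:-1])
    return orders

for order in cyclic_orders(["A", "B", "C", "D"]):
    print(order)
# ['A', 'B', 'C', 'D']
# ['D', 'A', 'B', 'C']
# ['C', 'D', 'A', 'B']
# ['B', 'C', 'D', 'A']
```

With four conditions and four orders, each condition occupies each serial position exactly once.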

Homogeneity of covariance The ANOVA model for within subjects experiments assumes that the data have a property known as SPHERICITY or HOMOGENEITY OF COVARIANCE. This property is tested for by the MAUCHLY SPHERICITY TEST. (There are other tests too.)

Mauchly test… In the Mauchly test, the null hypothesis is that the data have ‘sphericity’. So you hope that the null hypothesis will be accepted: the p-value should be high – certainly greater than .05. If the Mauchly test shows significance, you have to adjust the degrees of freedom for your F-test downwards to make the test more stringent. SPSS runs a Mauchly test on the data.

Replicating the Stroop experiment Suppose a researcher decides to replicate Stroop’s experiment. (It’s been done many times. The effect is very robust.) Suppose the results are as follows.

The results They look promising – we see some comparatively long times in the Conflict condition. But we need to get to know our data by calculating some statistics and drawing a graph.

In Variable View Assign clear names and labels to the variables. Dispense with the decimals. Note that we haven’t yet told SPSS about the Stroop factor. We can’t do that until we get into the ANOVA dialog boxes.

Data View Note how the display in Data View benefits from your earlier good work in Variable View. The numbers are easier to read. The variables are clearly labelled. Notice that there is NO GROUPING VARIABLE, because the participants have not been grouped in any way. In this data set, we only have THREE SEPARATE VARIABLES.

Ordering a line graph Choose a simple line graph. There are no groups, so choose Summaries of separate variables. Click the Define button.

The Simple Line dialog Since you chose Summaries of Separate Variables, you can now transfer all three variables in your data set to the Line Represents panel on the right. We are happy that the points represent mean scores. Click OK.

Profile of means A line graph of the mean scores, with Experimental condition on the horizontal axis.

The profile plot As expected, the Conflict condition proved to be the most difficult. Naming the colours of the zeros was more difficult than reading the words.

The within subjects ANOVA

Defining the treatment factor Enter the name of the DV. The Add buttons come alive. Click the live Add buttons to transfer the names to the appropriate panels.

Assign the variables SPSS knows only that there is a within subjects factor, ‘Stroop’, with three levels. Now you must assign the three variables in your data set to the three different levels.

The assignment is made

The Options button Click the Options button to order descriptive statistics – otherwise, you will just get the ANOVA summary table. We have ordered the Bonferroni test for multiple pairwise comparisons later on. The Bonferroni is an equivalent of the Tukey test in the one-way ANOVA.

Profile plots You can order profile plots; but watch out! The scale on the vertical axis will be microscopic, making the graph appear as if there were huge differences among the means. Double-click on the image and adjust the vertical scale in the manner I described in Lecture 2, Slide 53.

The output The output is extensive and rather daunting. An extensive pruning operation is required.

Pruning the output To remove this item (left), click on the image. A rectangular frame around the item appears. Press Delete to remove the item.

Remove the following

Retain: the Descriptive Statistics table; the Mauchly Test; the Within Subjects Effects table; the Comparisons table.

Summary of the results As expected, the Conflict condition was the toughest. There is a substantial difference between the mean times for the Conflict and Naming conditions. As expected, the Reading condition was much the easiest. You must request this Descriptive Statistics table when you are completing the dialogs for the ANOVA routine.

The basic ANOVA summary table Notice the additional Subjects factor. As expected, the Stroop factor has had a significant effect. Write this result as ‘F(2, 6) = 55.353; p < .01’.

The SPSS ANOVA summary table

The Mauchly test The p-value is high (.900). There is therefore no evidence to question the assumption of homogeneity of covariance (sphericity). The ordinary ANOVA F-test can be made. Just keep the first rows of the Stroop and Error(Stroop) sections of the table.

The SPSS summary table

The F-ratio In the within subjects ANOVA, the variance attributable to individual differences is removed from the analysis. A separate error term is calculated: MSresidual.

If you divide the Stroop MS by the Error MS, you will obtain the value of F.
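The whole calculation can be sketched in a few lines of numpy. The scores below are invented (4 participants × 3 Stroop conditions), so the resulting F value is purely illustrative:

```python
import numpy as np

# Invented times (seconds): 4 participants (rows) x 3 Stroop conditions
# (columns: Reading, Naming, Conflict)
scores = np.array([[ 9., 14., 30.],
                   [11., 16., 34.],
                   [ 8., 15., 29.],
                   [10., 13., 33.]])

n, k = scores.shape
grand = scores.mean()

ss_total    = ((scores - grand) ** 2).sum()
ss_stroop   = n * ((scores.mean(axis=0) - grand) ** 2).sum()   # conditions
ss_subjects = k * ((scores.mean(axis=1) - grand) ** 2).sum()   # participants
ss_residual = ss_total - ss_stroop - ss_subjects               # error term

ms_stroop   = ss_stroop / (k - 1)
ms_residual = ss_residual / ((n - 1) * (k - 1))
F = ms_stroop / ms_residual                     # Stroop MS / Error MS
print(round(F, 2))                              # 299.25 for these invented data
```

Note how the subjects variance is subtracted out before the error term is formed, which is exactly why the within subjects error term is smaller than its between subjects counterpart.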

The table of pairwise comparisons

EDITED TABLE Naming isn’t significantly different from Reading. But Naming is significantly different from Conflicting. And Reading is also significantly different from Conflicting.

Rationale of the Bonferroni test With unplanned multiple comparisons, there is a heightened risk of at least one test showing significance, even when the null hypothesis is true. The risk increases dramatically with the number of conditions. Suppose we have 8 treatment means. If the null hypothesis is TRUE, the probability of AT LEAST ONE comparison showing significance is .76.
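This figure can be reproduced with a short calculation. With 8 means there are 28 possible pairs; assuming independent tests at α = .05 (a simplifying assumption, since the comparisons are not strictly independent), the familywise probability is:

```python
from math import comb

alpha = 0.05
k = 8                                # number of treatment means
m = comb(k, 2)                       # number of possible pairwise comparisons
p_at_least_one = 1 - (1 - alpha) ** m
print(m, round(p_at_least_one, 2))   # 28 0.76
```
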

Bonferroni rationale… The Bonferroni method of unplanned or ‘post hoc’ comparisons works by multiplying the p-value of t by the number of paired comparisons possible with an array of means of that particular size. We have three means. Three possible pairs can be chosen (1&2), (1&3), (2&3). So you multiply the p-value by three and compare the product with .05. This procedure makes the Bonferroni test very conservative with large arrays of means.
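A sketch of the adjustment itself, using invented uncorrected p-values for the three Stroop comparisons (the numbers are assumptions, chosen only to match the pattern of results described here):

```python
from itertools import combinations

conditions = ["Reading", "Naming", "Conflict"]
pairs = list(combinations(conditions, 2))   # the 3 possible pairs

# Invented uncorrected p-values for the three related-samples t tests
raw_p = {("Reading", "Naming"):   0.030,
         ("Reading", "Conflict"): 0.001,
         ("Naming", "Conflict"):  0.002}

for pair, p in raw_p.items():
    # Bonferroni: multiply the p-value by the number of possible pairs,
    # cap it at 1, then compare the product with .05
    p_bonf = min(p * len(pairs), 1.0)
    verdict = "significant" if p_bonf < 0.05 else "not significant"
    print(pair, f"adjusted p = {p_bonf:.3f}", verdict)
```

With these invented values, only the Reading vs Naming comparison fails to reach significance after adjustment.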

Compare with related t tests If you ask for related-samples t-tests on the same data, you will see that the p-values given are a third of the corresponding values in the Bonferroni table. TRY THIS YOURSELF!

Suppose the Mauchly had shown significance The Greenhouse-Geisser test is a more conservative F test, in which the degrees of freedom are adjusted downwards. With our data, this test gives the same result as the ordinary F test because the data are ‘spherical’.

In our ANOVA table … Instead of reading the rows labelled ‘Sphericity assumed’, read the rows labelled ‘Greenhouse-Geisser’. The table now reads: ‘F(1.818, 5.455) = 55.35, p < .01’. Although the values for the degrees of freedom have been adjusted, the value of F remains unchanged.

When do conservative F-tests make a difference? Only in situations where one has a marginally significant result (just beyond the .05 level). A conservative test will not overturn an ordinary test showing a really small p-value of, say, .01.

Example

Third question