Math Psych 2000 Turning the Process-Dissociation Procedure Inside Out:


Turning the Process-Dissociation Procedure Inside Out: A New Approach for Investigating the Relation Between Controlled and Automatic Influences. Math Psych 2000. Steve Joordens, Daryl Wilson & Thomas Spalek, University of Toronto at Scarborough

Be Gentle [Figure: a light-hearted chart comparing "Coggies" and "Mathies" on comfort with mathematical terms and concepts]

An Example for Context
Study Phase: SPICE ...
Test Phase: SPI___, under either Inclusion instructions (try TO use study items) or Exclusion instructions (try NOT to use study items)

Performance Assumptions [Figure: overlapping regions of controlled (C) and automatic (A) influences]
Inclusion = C + A - C&A
Exclusion = A - C&A
Therefore, C = Inclusion - Exclusion

Assumptions Concerning the Relation Between Controlled and Automatic Processes
A precise estimate of automatic influences (A) cannot be obtained from the performance assumptions alone: there are three unknowns and only two formulae, so one of the unknowns must somehow become known (or another formula must be created).
Assuming independence, C&A = C x A
Therefore, A = Exclusion / (1 - C)

Redundancy: An Alternative Assumption
Joordens & Merikle (1993) suggested that one could instead assume a redundancy relation between controlled and automatic influences, such that controlled influences arise from the same processes that give rise to automatic influences.
Assuming redundancy, C&A = C
Therefore, A = Inclusion
Thus, the estimate of A depends critically on which relation between controlled and automatic influences is assumed.
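The two sets of formulas above can be written as a small sketch that turns observed inclusion and exclusion scores into estimates of C and A under each assumption (function names are ours, chosen for illustration):

```python
# Estimate controlled (C) and automatic (A) influences from observed
# inclusion/exclusion scores, under each of the two assumptions above.

def estimate_independence(inclusion, exclusion):
    """C = I - E; assuming C&A = C*A, Exclusion = A*(1-C), so A = E/(1-C)."""
    c = inclusion - exclusion
    a = exclusion / (1 - c) if c < 1 else float("nan")
    return c, a

def estimate_redundancy(inclusion, exclusion):
    """C = I - E; assuming C&A = C, Inclusion = C + A - C = A."""
    c = inclusion - exclusion
    a = inclusion
    return c, a

# Hypothetical scores: inclusion = 0.70, exclusion = 0.30
print(estimate_independence(0.70, 0.30))  # C ~ 0.40, A ~ 0.50
print(estimate_redundancy(0.70, 0.30))    # C ~ 0.40, A ~ 0.70
```

With the same hypothetical data, the two assumptions yield quite different values of A, which is exactly the point of the slide above.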

Approach/Avoidance How does one proceed from here? One alternative is to avoid the issue altogether by finding other ways of roughly estimating automatic influences, or of assessing the interplay between automatic and controlled influences. In contrast, our view is that the issue of the underlying relation between controlled and automatic influences is critical, as it gets at the underlying architecture of cognition. Rather than being a problem for the process-dissociation procedure, the procedure may in fact provide a tool for getting at this critical issue. Thus, we vote for approach!

Independence vs. Redundancy Most studies trying to discriminate between the models have really just tried to support or refute the independence assumption. Two versions of this have been promoted. The Jacoby camp has tried to support the independence assumption by highlighting manipulations that affect estimates of controlled influences but leave the estimates of automatic influences invariant. In contrast, Hintzman, Curran and others have sought to refute the independence assumption by highlighting manipulations that give rise to bizarre effects on the estimates of automatic influences; so-called paradoxical dissociations. Both of these rely on what we would call an outside-in logic.

The Outside-In Logic
1. Conduct a study involving some critical manipulation, obtaining inclusion and exclusion scores under the different levels of that manipulation
2. Based on the performance assumptions and the independence assumption, compute estimates of C and A
3. See whether the estimates give rise to either invariances or paradoxical dissociations
We have two major problems with this approach. First, it can only really be used to support or refute independence (and is limited even in that respect). Second, it requires an assumption of independence in order to support or refute independence, which is just weird.

The Inside-Out Logic
We propose a different (inverted) logic for discriminating between these models, one that begins from the theoretical relations and makes predictions about empirical data. Specifically:
1. Arbitrarily set values of C and A across wide ranges
2. Based on these values, and either the independence or the redundancy assumption, calculate predicted inclusion and exclusion scores
3. Examine how those predicted scores should respond to manipulations that affect C or A
4. See whether the real estimates (from the literature) dance in a manner more consistent with the independence or the redundancy predictions
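The forward (inside-out) direction of steps 1 and 2 can be sketched as follows. This is a minimal illustration under our reading of the two models; the function names and the particular C values swept are ours:

```python
# Inside-out direction: start from assumed values of C and A, and
# compute the inclusion/exclusion scores each model predicts.

def predict_independence(c, a):
    inclusion = c + a - c * a   # I = C + A - C&A, with C&A = C*A
    exclusion = a * (1 - c)     # E = A - C*A
    return inclusion, exclusion

def predict_redundancy(c, a):
    inclusion = a               # with C&A = C: I = C + A - C = A
    exclusion = a - c           # E = A - C
    return inclusion, exclusion

# Fix A = 0.5 and sweep C downward, in the spirit of the example below
for c in (0.50, 0.25, 0.00):
    print(c, predict_independence(c, 0.5), predict_redundancy(c, 0.5))
```

Note that under redundancy the predicted inclusion score never moves when only C changes, whereas under independence both scores move; this asymmetry is what the later slides exploit.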

For Example As an illustration of this approach, consider the following Monte Carlo simulation in which we fixed A at 0.5 and allowed C to vary from 0.50 down to 0.00.

The Size of A Matters When A is high (0.8), the models make similar predictions: both predict that exclusion scores will change more than inclusion scores. However, when A is low (0.2), the predictions of the two models diverge, because the independence model shifts to predicting a greater change on inclusion than on exclusion scores.

Generally Speaking [Figure: predicted Inclusion, Exclusion, and Delta Ratio values under the Independence vs. Redundancy models]

Evaluation Procedure
We started by looking at 17 studies containing 76 manipulations of controlled influences (instructional manipulations). We then:
1. Chose only those manipulations in which the absolute value of the change in A was less than 0.1 according to the independence assumption
2. Plotted a scatter-plot showing the delta ratio for the empirical data as a function of the overall level of A, with the function predicted by the model overlaid on the plot
3. Calculated the mean and standard deviation of the squared residuals, which were compared across models to ascertain which provided the better account of the data
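Step 3 of the procedure can be sketched as below. The data values here are placeholders for illustration only, not the 49 points analysed in the talk:

```python
# Squared residuals between observed delta ratios and each model's
# predicted curve, summarised by mean and SD per model.
from statistics import mean, stdev

def squared_residuals(observed, predicted):
    return [(o - p) ** 2 for o, p in zip(observed, predicted)]

observed = [1.2, 0.8, 2.5, 0.4]   # hypothetical empirical delta ratios
pred_ind = [1.0, 1.0, 2.0, 0.5]   # hypothetical independence predictions
pred_red = [1.1, 0.9, 2.4, 0.4]   # hypothetical redundancy predictions

for name, pred in (("independence", pred_ind), ("redundancy", pred_red)):
    r = squared_residuals(observed, pred)
    print(name, round(mean(r), 4), round(stdev(r), 4))
```

The model with the smaller mean squared residual provides the better account of the data; the next slides report these summaries for the real data set.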

Independence Squared Residuals: Mean = 8.61, SD = 18.69, N = 49

Redundancy Squared Residuals: Mean = 6.58, SD = 14.10, N = 49

Comparison of Models
The following comparisons are t-tests comparing the mean squared residuals associated with the two models:

             Independence   Redundancy   t-value
Overall          8.61          6.58        0.86
High A - I       3.80          4.09       -0.38
Low A - I       13.78          9.28        0.95
High A - R       1.73          2.32       -1.12
Low A - R       13.48          9.36        0.87

Conclusions
We see this new "inside-out" approach as having several advantages over previous attempts to get at the underlying relation between controlled and automatic influences:
1. It is far more conducive to a discriminative comparison of the models in question, rather than focusing on one model or the other
2. It does not require one to assume some underlying model in order to test that model
3. It truly should have the potential to convincingly show one model to be superior to the other in terms of predicting empirical data, and thus should allow stronger conclusions