Analysis of RT distributions with R. Emil Ratko-Dehnert, WS 2010/2011, Session 09 – 18.01.2011


1 Analysis of RT distributions with R. Emil Ratko-Dehnert, WS 2010/2011, Session 09 – 18.01.2011

2 Last time...
– Recap of contents so far (Chapters 1 + 2)
– Hierarchical interference (Townsend's system)
– Functional forms of RVs: density function (TAFKA "distribution"), cumulative distribution function, quantiles, Kolmogorov-Smirnov test

3 Part II: RT DISTRIBUTIONS IN THE FIELD

4 RTs in visual search

5 Why analyze distributions?
1. The normality assumption is almost always violated
2. Experimental manipulations might affect only parts of the RT distribution
3. RT distributions can be used to constrain models, e.g. of visual search (model fitting and testing)

6 RT distributions
– Typically unimodal and positively skewed
– Can be characterized by, e.g., the following distributions: Ex-Gauss, Ex-Wald, Gamma, Weibull
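This typical shape is easy to produce in R: adding an exponential component to a Gaussian one yields the unimodal, positively skewed histogram described above (the parameter values 400, 40 and 100 ms are illustrative assumptions, not taken from the slides):

```r
set.seed(1)
n <- 10000

# Positively skewed "RTs" (in ms): Gaussian component plus exponential component
rt <- rnorm(n, mean = 400, sd = 40) + rexp(n, rate = 1 / 100)

mean(rt)   # close to 400 + 100
hist(rt, breaks = 50, main = "Simulated RTs: unimodal, right-skewed")
```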

7 Ex-Gauss distribution
– Introduced by Burbeck and Luce (1982)
– Is the convolution of a normal and an exponential distribution
– Density, with Φ the CDF of N(0,1):
  f(x; μ, σ, τ) = (1/τ) · exp(σ²/(2τ²) − (x − μ)/τ) · Φ((x − μ)/σ − σ/τ)
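The density can be written directly in R; pnorm() is the CDF of N(0,1) that appears in the formula (the function name dexgauss is my own, not an existing base R function):

```r
# Ex-Gaussian density: normal (mu, sigma) convolved with exponential (mean tau)
dexgauss <- function(x, mu, sigma, tau) {
  (1 / tau) * exp(sigma^2 / (2 * tau^2) - (x - mu) / tau) *
    pnorm((x - mu) / sigma - sigma / tau)
}

# Sanity check: the density should integrate to ~1
area <- integrate(dexgauss, 0, Inf, mu = 400, sigma = 40, tau = 100)$value
area
```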

8 Convolution
– ... produces a modified ("blended") version of the two original functions
– It is the integral of the product of the two functions after one is reversed and shifted:
  (f ∗ g)(t) = ∫ f(s) · g(t − s) ds
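The definition can be checked numerically: discretize the integral on a grid and convolve an exponential density with a normal density (grid and parameter values are arbitrary choices for illustration):

```r
# (f * g)(t) is approximated by the sum over s of f(s) * g(t - s) * ds
ds <- 1
s  <- seq(0, 2000, by = ds)            # grid covering the exponential's support
f  <- dexp(s, rate = 1 / 100)          # f: exponential density, mean 100

conv <- function(t) sum(f * dnorm(t - s, mean = 400, sd = 40)) * ds
dens <- sapply(seq(0, 1500, by = ds), conv)   # the convolved density

area <- sum(dens) * ds                  # ~1: the result is again a density
area
```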


13 Why popular?
– Components of the Ex-Gauss might correspond to different mental processes: exponential → decision; Gaussian → residual perceptual and response-generating processes
– It is known to fit RT distributions very well (particularly for hard search tasks)
– One can look at parameter dynamics and draw inferences about trade-offs
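One simple way to get at those parameter dynamics is a method-of-moments fit (a sketch of one common approach, not necessarily the method used in the cited papers): for an ex-Gaussian, mean = μ + τ, variance = σ² + τ², and the third central moment = 2τ³.

```r
# Method-of-moments estimates for the ex-Gaussian parameters
exgauss_moments <- function(rt) {
  m3  <- mean((rt - mean(rt))^3)   # third central moment = 2 * tau^3
  tau <- (m3 / 2)^(1/3)
  c(mu    = mean(rt) - tau,
    sigma = sqrt(max(var(rt) - tau^2, 0)),
    tau   = tau)
}

# Recover known parameters from simulated data
set.seed(2)
rt  <- rnorm(50000, mean = 400, sd = 40) + rexp(50000, rate = 1 / 100)
est <- exgauss_moments(rt)
round(est)   # approximately mu = 400, sigma = 40, tau = 100
```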

14 Further reading (Ex-Gauss)
– Overviews: Schwarz (2001); Van Zandt (2002); Palmer et al. (2009)
– Others: McGill (1963); Hohle (1965); Ratcliff (1978, 1979); Burbeck & Luce (1982); Hockley (1984); Luce (1986); Spieler et al. (1996); McElree & Carrasco (1999); Spieler et al. (2000); Wagenmakers & Brown (2007)

15 Ex-Wald distribution
– Is the convolution of an exponential and a Wald distribution
– Represents decision and response components as a diffusion process (Schwarz, 2001)

16 Ex-Wald density
– The density is the convolution of the exponential and the Wald densities (Schwarz, 2001, gives the closed form):
  f_ExW(t) = ∫₀ᵗ f_Exp(t − s) · f_Wald(s) ds
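The Ex-Wald density can also be evaluated numerically from the convolution integral; the Wald density below uses barrier a, drift m and diffusion coefficient s (parameter values and function names are my own illustration; for the analytic expression see Schwarz, 2001):

```r
# Wald (inverse Gaussian) density: first-passage time to barrier a
# for a diffusion with drift m and diffusion coefficient s
dwald <- function(t, a, m, s) {
  ifelse(t <= 0, 0,
         a / (s * sqrt(2 * pi * t^3)) * exp(-(a - m * t)^2 / (2 * s^2 * t)))
}

# Ex-Wald density via numerical convolution with an Exp(rate) density
dexwald <- function(t, rate, a, m, s) {
  sapply(t, function(tt)
    if (tt <= 0) 0 else
      integrate(function(u) dexp(tt - u, rate) * dwald(u, a, m, s),
                lower = 0, upper = tt)$value)
}

# Sanity check: mass over a generous range should be ~1
area <- integrate(function(t) dexwald(t, rate = 1/100, a = 100, m = 0.5, s = 1),
                  0, 3000)$value
area
```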

17 Diffusion process
(Figure: evidence in information space plotted over time; the process starts at point z between two boundaries whose distance is the boundary separation; drift rate ~ N(ν, η) with mean drift ν; crossing the upper boundary → respond "A", crossing the lower → respond "B")
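The mechanism in the figure can be sketched as a discrete random walk (all parameter values below are illustrative assumptions, not values from the slides):

```r
set.seed(3)

# One trial: evidence starts at z and drifts by nu per step (plus noise sd s)
# until it crosses the upper boundary A ("A") or the lower boundary 0 ("B")
one_trial <- function(A = 1, z = 0.5, nu = 0.002, s = 0.05) {
  x <- z
  t <- 0
  while (x > 0 && x < A) {
    x <- x + nu + rnorm(1, mean = 0, sd = s)
    t <- t + 1
  }
  list(steps = t, response = if (x >= A) "A" else "B")
}

trials <- replicate(2000, one_trial(), simplify = FALSE)
pA <- mean(sapply(trials, `[[`, "response") == "A")
pA   # positive drift makes "A" the more frequent response
```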

18 Qualitative behaviour
(Figure: decision times for a lax criterion (boundary A1) vs a strict criterion (boundary A2, further from the baseline 0), and for larger vs smaller drift rates)


21 Why popular?
– Parameters can be interpreted psychologically
– Very successful in modelling RTs for a number of cognitive and perceptual tasks
– Neurally plausible: neuronal firing behaves like a diffusion process, as observed via single-cell recordings

22 Further reading (Ex-Wald)
– Theoretical papers: Schwarz (2001, 2002); Ratcliff (1978); Heathcote (2004); Palmer et al. (2005); Wolfe et al. (2009)
– Cognitive and perceptual tasks: Palmer, Huk & Shadlen (2005)
– Visual search: Reeves, Santhi & Decaro (2005); Palmer et al. (2009)

23 Gamma distribution
– Arises as the sum of a series of exponentially distributed processes
– α = average scale of the processes
– β = reflects the approximate number of processes
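In the slides' notation this corresponds to R's Gamma(shape = β, scale = α): β exponential stages with mean α each sum to one Gamma-distributed RT (the stage parameters below are invented for illustration):

```r
set.seed(4)
alpha <- 50    # average scale (mean duration) of each stage
beta  <- 3     # number of exponential stages

# beta exponential stages per trial, summed trial-wise
stages <- matrix(rexp(beta * 10000, rate = 1 / alpha), nrow = beta)
rt     <- colSums(stages)

mean(rt)                                            # ~ alpha * beta = 150
ks.test(rt, pgamma, shape = beta, scale = alpha)    # compare against Gamma(3, 50)
```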


26 Why popular?
– In fact, not too popular (publication-wise)
– It fits very decently when one assumes a model that sees RT distributions as composed of three exponentially distributed processes (initial feed-forward → search → response selection)

27 Further reading (Gamma)
– Dolan, van der Maas & Molenaar (2002): A framework for ML estimation of parameters of (mixtures of) common reaction time distributions given optional truncation or censoring. Behavior Research Methods, Instruments, & Computers, 34(3), 304-323

28 Weibull distribution
– Like a series of races (bounded by 0 and ∞): the Weibull distribution renders an asymptotic description of their minima
– Johnson's (1994) version has 3 parameters: α, γ, ξ
– For γ = 1 → exponential distribution; for γ ≈ 3.6 → approximately normal; hence for RT data γ should lie somewhere in between
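Both claims are easy to check in R (R's dweibull/rweibull use a shape/scale parameterization rather than Johnson's α, γ, ξ; shape plays the role of γ here):

```r
# gamma = 1 (shape = 1) reduces the Weibull to an exponential
x <- seq(0.1, 5, by = 0.1)
same <- all.equal(dweibull(x, shape = 1, scale = 2),
                  dexp(x, rate = 1 / 2))
same   # TRUE

# Races: the minimum of n iid Weibull(shape k, scale s) racers is again
# Weibull, with shape k and scale s * n^(-1/k)
set.seed(5)
races   <- matrix(rweibull(8 * 10000, shape = 2, scale = 1), nrow = 8)
winners <- apply(races, 2, min)
c(observed  = mean(winners),
  predicted = 8^(-1/2) * gamma(1 + 1/2))   # mean of Weibull(2, 8^(-1/2))
```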


31 Why popular?
– Has been used in a variety of cognitive tasks
– Excels in those that can be modeled as a race among competing units (e.g. memory search RTs)
– Has decent functional fits

32 Further reading (Weibull)
– Logan (1992); Johnson et al. (1994); Dolan et al. (2002); Chechile (2003); Rouder, Lu, Speckman, Sun & Jiang (2005); Cousineau, Goodman & Shiffrin (2002); Palmer, Horowitz, Torralba & Wolfe (2009)

33 Comparing functional fits
– The null hypothesis is a fit of the data with a normal distribution (the standard assumption for mean/variance analyses)
– All proposed distributions beat the Gaussian, but not equally well: 1) Ex-Gauss, 2) Ex-Wald, 3) Gamma, 4) Weibull
– The first three also show similar parameter trends
– For further reading, see the simulation study by Palmer, Horowitz, Torralba & Wolfe (2009)
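The comparison logic can be sketched with log-likelihoods; the simulated skewed data and moment-matched parameters below stand in for the real ML fits of the cited study:

```r
set.seed(6)
rt <- rgamma(5000, shape = 4, scale = 50)   # skewed, RT-like data

# Null: Gaussian fit (sample mean and SD)
ll_norm <- sum(dnorm(rt, mean(rt), sd(rt), log = TRUE))

# Alternative: moment-matched Gamma fit
shape    <- mean(rt)^2 / var(rt)
ll_gamma <- sum(dgamma(rt, shape = shape, rate = shape / mean(rt), log = TRUE))

ll_gamma - ll_norm   # positive: the skewed Gamma beats the Gaussian here
```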

34 EXCURSION: BOOTSTRAPPING

35 Basic idea
– In statistics, bootstrapping is a method for assigning accuracy measures to sample estimates
– This is done by resampling with replacement from the original dataset
– From the resamples one can estimate properties of an estimator (such as its variance)
– It assumes IID data

36 Ex.: Bootstrapping the sample mean
– Original data: X = x1, x2, x3, ..., x10
– Sample mean: x̄ = (1/10) · (x1 + x2 + x3 + ... + x10)
– Resample the data (with replacement) to obtain a bootstrap mean, e.g. X1* = x2, x5, x10, x10, x2, x8, x3, x10, x6, x7 → μ1*
– Repeat this 100 times to get μ1*, ..., μ100*
– Now one has an empirical bootstrap distribution of μ
– From this one can derive, e.g., a bootstrap CI for μ
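The steps above translate almost line by line into R (the original data here are simulated):

```r
set.seed(7)
x <- rnorm(10, mean = 500, sd = 50)       # original data x1, ..., x10

# 100 resamples with replacement, each giving one bootstrap mean
boot_means <- replicate(100, mean(sample(x, replace = TRUE)))

sd(boot_means)                             # bootstrap SE of the mean
quantile(boot_means, c(0.025, 0.975))      # percentile bootstrap CI for mu
```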

37 Pro bootstrapping
It is...
– simple and easy to implement
– a straightforward way to derive SEs and CIs for complex estimators of distribution parameters (percentile points, odds ratios, correlation coefficients)
– an appropriate way to control and check the stability of the results

38 Contra bootstrapping
– It is only asymptotically consistent, and only under some conditions, so it does not provide general finite-sample guarantees
– It has a tendency to be overly optimistic (it under-estimates the real error)
– Application is not always possible because of the IID restriction

39 Situations to use bootstrapping
1. When the theoretical distribution of a statistic is complicated or unknown
2. When the sample size is insufficient for straightforward statistical inference
3. When power calculations have to be performed and only a small pilot sample is available
How many samples should be computed? → As many as your hardware allows for...

40 AND NOW TO...

41 Creating own functions

# "inputs": arg1, arg2, arg3; "output": the value of the last expression
new.fun <- function(arg1, arg2, arg3) {
  x <- exp(arg1)
  y <- sin(arg2)
  z <- mean(c(arg2, arg3))  # note: mean(arg2, arg3) would silently ignore arg3
  result <- x + y + z
  result                    # returned as the function's value
}

# Usage of new.fun
A <- new.fun(12, 0.4, -4)


