Slide 1
New Results from the Bayesian and Frequentist MPT Multiverse
Henrik Singmann (University of Warwick) Daniel W. Heck (University of Mannheim) Marius Barth (Universität zu Köln) Julia Groß (University of Mannheim) Beatrice G. Kuhlmann (University of Mannheim)
Slide 2
Multiverse Approach

Statistical analysis usually requires several (more or less) arbitrary decisions between reasonable alternatives:
- Data processing and preparation (e.g., exclusion criteria, aggregation levels)
- Analysis framework (e.g., statistical vs. cognitive model, frequentist vs. Bayesian)
- Statistical analysis (e.g., testing vs. estimation, fixed vs. random effects, pooling)

The combination of decisions spans a multiverse of data and results (Steegen, Tuerlinckx, Gelman, & Vanpaemel, 2016); see the sketch after this slide:
- Usually only one path through the multiverse (or "garden of forking paths", Gelman & Loken, 2013) is reported.
- Credible conclusions cannot be contingent on arbitrary decisions.

Multiverse analysis attempts to explore the space of possible results:
- It reduces the problem of selective reporting by making the fragility or robustness of results transparent.
- Conclusions arising from many paths are more credible than conclusions arising from few paths.
- It helps identify the most consequential choices.

Limits of the multiverse approach:
- Implementation is not trivial (usually a problem-specific implementation is necessary).
- Results must be commensurable across the multiverse (e.g., estimation versus hypothesis testing).
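As an aside, a minimal sketch in R of how crossing such decisions spans a multiverse: each row of the fully crossed grid is one analysis path. The specific decision options named below are illustrative placeholders, not the choices examined in this project.

    # Sketch: enumerate a multiverse by fully crossing arbitrary analysis decisions.
    # The options below are illustrative placeholders.
    multiverse <- expand.grid(
      exclusion   = c("none", "strict"),               # data processing / preparation
      aggregation = c("individual", "aggregated"),     # data preparation
      framework   = c("frequentist", "Bayesian"),      # analysis framework
      pooling     = c("complete", "none", "partial"),  # statistical analysis
      stringsAsFactors = FALSE
    )
    nrow(multiverse)  # 2 * 2 * 2 * 3 = 24 candidate analysis paths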
Slide 3
Our Multiverse: MPT Models
Statistical framework:
- Frequentist (i.e., maximum likelihood)
- Bayesian (i.e., MCMC)

Pooling:
- Complete pooling (aggregated data)
- No pooling (individual-level data)
- Partial pooling (hierarchical modelling): individual-level parameters with a group-level distribution

Results:
- Parameter point estimates: MLE and posterior mean
- Parameter uncertainty: ML-SE and MCMC-SE
- Model adequacy: G² p-value and posterior predictive p-value (Klauer, 2010)

Part of a DFG Scientific Network grant to Julia Groß and Beatrice Kuhlmann
Slide 4
Our Multiverse Pipeline (in R)
Frequentist (uses MPTinR; Singmann & Kellen, 2013):
(1) Traditional approach: frequentist asymptotic complete pooling
(2) Frequentist asymptotic no pooling
(3) Frequentist no pooling with parametric bootstrap
(4) Frequentist no pooling with non-parametric bootstrap

Bayesian (uses TreeBUGS; Heck, Arnold, & Arnold, 2018):
(5) Bayesian complete pooling (custom C++ sampler)
(6) Bayesian no pooling (unique method; custom C++ sampler)
(7) Bayesian partial pooling I: beta-MPT (Smith & Batchelder, 2010; JAGS)
(8) Bayesian partial pooling II: latent-trait MPT (Klauer, 2010; JAGS)
(9) Bayesian partial pooling III: latent-trait MPT without correlation parameters (JAGS)

Implemented in the R package MPTmultiverse. One function fits all:
fit_mpt("model.eqn", "Heathcote_2006", dh06)
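A self-contained sketch of such a call, assuming a toy two-high-threshold model and simulated frequencies: the .eqn file, category names, and data below are illustrative and not part of the project, and only the positional form of the fit_mpt() call follows the slide's example.

    # Sketch only: a toy two-high-threshold model in the classic .eqn format and
    # a call to MPTmultiverse::fit_mpt() following the slide's example.
    library(MPTmultiverse)

    eqn <- c(
      "6",                          # classic .eqn header: number of equation lines
      "old hit  Do",
      "old hit  (1-Do)*g",
      "old miss (1-Do)*(1-g)",
      "new fa   (1-Dn)*g",
      "new cr   Dn",
      "new cr   (1-Dn)*(1-g)"
    )
    writeLines(eqn, "2ht.eqn")

    # Toy individual-level frequencies: one row per participant, columns named
    # after the response categories in the .eqn file (assumed data layout).
    set.seed(1)
    n_subj <- 20
    dat <- data.frame(
      hit = rbinom(n_subj, 50, .75),
      fa  = rbinom(n_subj, 50, .25)
    )
    dat$miss <- 50 - dat$hit
    dat$cr   <- 50 - dat$fa

    # One function fits all nine methods (positional arguments as on the slide);
    # the Bayesian methods require JAGS, and depending on the package version
    # named arguments or an id/condition column may be needed.
    results <- fit_mpt("2ht.eqn", "toy_2ht_example", dat)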
Slide 5
Preregistered Meta-Analytic Approach
Models, contributors, and counts per model as shown on the slide (three numeric columns; column labels not preserved in the transcript, last row gives the totals):
- 2-high-threshold source memory (2HTSM; Bayen, Murnane, & Erdfelder, 1996): Marie Luisa Schaper & Beatrice G. Kuhlmann (34 / 93 / 2557)
- Confidence-rating 2-high-threshold memory model (c2HTM; Bröder, Kellen, Schütz, & Rohrmeier, 2013): Henrik Singmann, David Kellen, Georgia Eleni Kapetaniou, & Karl Christoph Klauer (16 / 16 / 866)
- Recognition heuristic (r-model; Hilbig, Erdfelder, & Pohl, 2010): Fabian Gümüsdagli, Edgar Erdfelder, & Martha Michalkiewicz (42 / 66 / 2680)
- IAT ReAL model (Meissner & Rothermund, 2013): Franziska Meissner, Christoph Klauer, & Jimmy Calanchini (10 / 22 / 786)
- Hindsight bias (Erdfelder & Buchner, 1998): Julia Groß (9 / 26 / 963)
- Prospective memory model (Smith & Bayen, 2004): Nina Arnold & Sebastian Horn (12 / 28 / 1267)
- Total: 123 / 251 / 9119

Analysis ongoing: Pair-clustering model (Batchelder & Riefer, 1980, 1986), Quad model (Conrey et al., 2005), process-dissociation models (Jacoby, 1991; Buchner & Erdfelder, 1996)
Slide 6
Results per study

Number of results obtained (out of the 9 estimation methods) and the corresponding share of studies:

Results  Studies
9        81%
8        12%
7         6%
6         0%
5         1%

[Accompanying figure panels labelled "Bayes" and "Frequentist" are not preserved in the transcript.]
Slide 7
[Results figure; only the panel annotations survive: max. abs. diff. of .2 to .3 in one comparison and > .5 in two others.]
Slide 12
[Figure slide; only the curve label "4th-degree polynomial" is preserved.]
Slide 14
Which Covariates Affect Absolute Deviation?
Model: alone yes, but adding parameter nested in model (categorical variable) provides a clearly better account (a sketch of this kind of analysis follows after this slide).
Worst offenders: M & P (Prospective memory, .05 to .15), c & b (hindsight bias, ≈ .05), Dn (c2HTM, up to .06), d (2HTSM, up to .05)

Structural aspects:
- Number of parameters
- Number of participants ↑
- Model fit (log p-value) ↑
- Maximal fungibility (structural parameter trade-offs) ↑
- Average across-participants parameter trade-off (latent-trait rho) ↓

Covariates still missing:
- Number of trials
- Heterogeneity across participants
- Branch depth (product of previous branches)
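A sketch of how such a covariate analysis might look in R, using simulated stand-in data; all variable and object names are hypothetical, and the linear model below illustrates the general idea (structural covariates versus additionally including parameter nested within model) rather than the project's actual analysis.

    # Sketch with simulated stand-in data: do structural covariates explain the
    # absolute deviation between methods, and does adding parameter nested
    # within model (a categorical predictor) give a clearly better account?
    set.seed(42)
    dev_data <- data.frame(
      model           = rep(paste0("model_", 1:6), each = 20),
      parameter       = rep(paste0("par_", 1:4), times = 30),
      n_parameters    = rep(sample(4:12, 6, replace = TRUE), each = 20),
      n_participants  = rep(sample(30:120, 6, replace = TRUE), each = 20),
      log_p_value     = rnorm(120),
      max_fungibility = runif(120),
      trade_off_rho   = runif(120, -1, 1),
      abs_deviation   = rexp(120, rate = 20)
    )

    m_structural <- lm(
      abs_deviation ~ n_parameters + n_participants + log_p_value +
        max_fungibility + trade_off_rho,
      data = dev_data
    )
    # model/parameter expands to model + model:parameter, i.e. parameter nested
    # within model; covariates that are constant within a model become redundant
    # once the model factor is included.
    m_nested <- update(m_structural, . ~ . + model / parameter)
    anova(m_structural, m_nested)   # does the nested factor improve the account?
    AIC(m_structural, m_nested)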
Slide 15
Conclusions

Model misspecification affects cognitive modelling (at least) twice:
- in the cognitive-structural model
- in the data ("statistical") model

In 20% of data sets, at least one estimation method fails (usually a Bayesian one).
Agreement between estimation methods depends on the specific model parameter.
The influence of the structural aspects tested so far (e.g., log p-value, trade-offs) is relatively small compared to the effect of model parameter.

Recommendation: use a multiverse approach to take misspecification of the data model into account.
Slide 17
[Figure slide; only the curve label "4th-degree polynomial" is preserved.]