GETTING PAPERS ACCEPTED IN SOCIAL/PERSONALITY JOURNALS POST-REPLICABILITY CRISIS
Simine Vazire, UC Davis
SPSP, 1.28.16
PRE-2011
Counterintuitive is cool
Three-way interactions are 'sophisticated'
p = .04 is just as good as p = .01
Six small-sample studies all showing significant results is impressive
2016
We now know some things that predict replication:
  Low p-value (close to 0)
  Larger samples
  Main effects / low-order interactions
  Pre-registration
  Internal replication (direct or pre-registered)
We kinda always knew this, but now we know it more.
HOW DOES THIS AFFECT JOURNALS?
How much do we care about replicability?
How do we balance it with other values?
Different journals are trying different approaches.
CHOOSING A JOURNAL
Society/Publisher that owns the journal: SPSP, ARP, EASP, SESP
Publication committee/board: SPPS Consortium
Editor in Chief: Me
Associate editors: 10 AEs
Reviewers: You
SPPS POLICIES
Replications accepted
Effect sizes, 95% CIs, exact p-values
Tables and figures embedded; don't count towards 5,000-word limit
Handling editor's name will be published with each article
SPPS POLICIES
Upon submission:
  Confirm that you have reported how sample size was determined for each study, and a discussion of statistical power.
  Confirm that you have reported all data exclusions (e.g., dropped outliers) and how decisions about data exclusions were made.
  Confirm that you have reported all measures or conditions for variables of interest to the research question(s), whether they were included in the analyses or not.
  Confirm that all key results are accompanied by exact p-values, effect sizes, and 95% confidence intervals, or an explanation of why this is not possible.
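Not from the slides, but to make the last checklist item concrete: a minimal sketch of computing an exact p-value, Cohen's d, and a 95% CI for a two-group comparison with SciPy/NumPy. The function name and the simulated data are hypothetical.

```python
import numpy as np
from scipy import stats

def report_two_groups(x, y, conf=0.95):
    """Exact p-value (Welch t-test), Cohen's d, and a CI for the raw mean difference."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    nx, ny = len(x), len(y)
    t, p = stats.ttest_ind(x, y, equal_var=False)              # exact p-value
    vx, vy = x.var(ddof=1), y.var(ddof=1)
    sp = np.sqrt(((nx - 1) * vx + (ny - 1) * vy) / (nx + ny - 2))
    d = (x.mean() - y.mean()) / sp                             # pooled-SD Cohen's d
    se = np.sqrt(vx / nx + vy / ny)                            # Welch SE of the difference
    df = se**4 / ((vx / nx) ** 2 / (nx - 1) + (vy / ny) ** 2 / (ny - 1))
    tcrit = stats.t.ppf(1 - (1 - conf) / 2, df)
    diff = x.mean() - y.mean()
    return {"p": p, "d": d, "ci": (diff - tcrit * se, diff + tcrit * se)}

rng = np.random.default_rng(1)                                 # hypothetical data
print(report_two_groups(rng.normal(0.4, 1, 90), rng.normal(0.0, 1, 90)))
```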
SPPS SINCE JULY 2015
About 300 submissions
38% desk rejected
40% rejected after review
22% revise & resubmit, of which we anticipate 80% will get accepted → 17% acceptance rate
Average number of days to decision: 30 (46 excluding desk rejections)
Impact factor: 2.56
POP QUIZ
What proportion of the desk rejections at SPPS have something to do with power/sample size?
A. 25%
B. 50%
C. 75%
D. 100%
SPPS DESK REJECTIONS
[Chart: reasons for desk rejection (N = 92): Power, Self-report, Design issues, Other. "Unimportant": 0%. "Brick in the wall": 0%.]
SPPS DESK REJECTIONS
[Chart: desk rejections (N = 92) by reason (Power, Self-report, Design issues, Other) and by number of problems per paper: single, double, or triple whammy.]
COMMON PROBLEMS
Power is ignored
Power analysis uses unrealistic effect size:
  A priori expectation of huge effect not justified
  Justification is based on one or a few underpowered studies (imprecise estimates)
  Justification is based on selective slice of literature (ignores failed replications, controversy)
  Justification is based on meta-analysis that doesn't adequately take into account publication bias & p-hacking
Power analysis uses observed or post-hoc power
Authors cite Simmons et al., 2011 to justify n = 20
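A side note on the observed/post-hoc power problem above (my illustration, not a slide): observed power is just a re-expression of the p-value, roughly 50% whenever p = .05, so it can never tell you whether the study was adequately powered. A sketch for a two-sided z-test:

```python
from scipy import stats

def observed_power(p_two_sided, alpha=0.05):
    """Power computed by plugging the observed effect back in (two-sided z-test)."""
    z_obs = stats.norm.ppf(1 - p_two_sided / 2)      # |z| implied by the p-value
    z_crit = stats.norm.ppf(1 - alpha / 2)
    return stats.norm.sf(z_crit - z_obs) + stats.norm.cdf(-z_crit - z_obs)

for p in (0.20, 0.05, 0.01):
    print(f"p = {p:.2f} -> 'observed power' = {observed_power(p):.2f}")
# p = .05 always gives ~0.50; the number carries no information beyond p itself
```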
YOU CAN'T WIN
I encouraged people to conduct power analyses
I was wrong
Don't conduct a power analysis for your specific effect unless there is a large, unbiased meta-analysis
So, don't conduct a power analysis
THE ONLY POWER ANALYSIS YOU'LL EVER NEED*
Here is a power analysis for the field:
Average published effect size: d = .43 (r = .21)
80% power → 90 people per condition (N = 180 for a correlational study)
Maybe total sample size matters more than sample size per condition, but it's complicated
THE ONLY POWER ANALYSIS YOU'LL EVER NEED*
N = 180 (90/condition): THIS IS FOR THE AVERAGE PUBLISHED EFFECT SIZE!
Average = half of published effect sizes are smaller than this
Published = definitely inflated
Consider planning for a smaller effect: d = .25 (r = .12) → 250 people per condition (500 total)
If you are looking for a two-way interaction or mediation or partial correlation, assume a much smaller effect!
If you are looking for a three-way interaction, pre-register and replicate.
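These per-condition numbers can be reproduced with off-the-shelf power routines. A sketch assuming an independent-groups t-test, two-sided alpha = .05, and 80% power (statsmodels); the exact n shifts a little with those assumptions:

```python
from statsmodels.stats.power import TTestIndPower

solver = TTestIndPower()

# Average published effect, d = .43: ~86 per condition (the slide rounds up to 90)
n_avg = solver.solve_power(effect_size=0.43, power=0.80, alpha=0.05)
# More conservative planning value, d = .25: ~252 per condition (~500 total)
n_small = solver.solve_power(effect_size=0.25, power=0.80, alpha=0.05)

print(f"d = .43 -> {n_avg:.0f} per condition")
print(f"d = .25 -> {n_small:.0f} per condition, {2 * n_small:.0f} total")
```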
*UNLESS
You're not doing a traditional, between-person study
Then, do a power analysis, but be conservative in your effect size estimate
And report all assumptions you're making (e.g., correlation between repeated measures)
If you want to interpret null effects, use Cumming's planning for precision (need Very Large Sample)
If you aren't concerned about effect size, you can use sequential analysis
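As a hypothetical illustration of the "report your assumptions" point: in a within-person design the required sample size depends heavily on the assumed correlation between repeated measures. A sketch (my numbers, not from the talk), again with statsmodels:

```python
from math import sqrt
from statsmodels.stats.power import TTestPower

d = 0.25                                   # assumed between-condition effect in SD units
for r in (0.3, 0.5, 0.7):                  # assumed correlation between repeated measures
    dz = d / sqrt(2 * (1 - r))             # effect size on the difference scores
    n = TTestPower().solve_power(effect_size=dz, power=0.80, alpha=0.05)
    print(f"r = {r:.1f} -> dz = {dz:.2f}, ~{n:.0f} participants for 80% power")
```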
WHAT IF YOU CAN'T?
Hard to collect data:
  Unusual sample/population
  Intensive procedure
  Intensive coding
  Expensive
  Extraordinary event
  High risk
In that case:
  Consider sequential analysis
  Pay attention to the confidence intervals
  Adjust your conclusions
  Definitely don't interpret null results
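To make "pay attention to the confidence intervals" concrete, here is a rough sketch (my own, using the standard large-sample approximation to the standard error of Cohen's d) of how wide the 95% CI stays when the sample is small:

```python
import math
from scipy import stats

def ci_for_d(d, n_per_group, conf=0.95):
    """Approximate CI for Cohen's d from a two-group design."""
    n1 = n2 = n_per_group
    se = math.sqrt((n1 + n2) / (n1 * n2) + d**2 / (2 * (n1 + n2)))
    z = stats.norm.ppf(1 - (1 - conf) / 2)
    return d - z * se, d + z * se

for n in (20, 50, 200):
    lo, hi = ci_for_d(0.43, n)
    print(f"n = {n:3d} per group: d = 0.43, 95% CI [{lo:.2f}, {hi:.2f}]")
# With 20 per group the interval runs from about -0.20 to 1.06: it can't rule out
# a null effect or a huge one, so conclusions have to stay modest.
```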
ALSO
Pre-registration will save your butt
Truly large effects and p-hacked small-N studies look the same to the observer. You can prove it's the former and not the latter with pre-registration.
I'll believe almost anything if it's pre-registered
Direct replications have many of the benefits of pre-registration
WHAT ABOUT CONCEPTUAL REPLICATIONS?
They're nice, but they don't help with the problem of false positives/low power because of:
  Possibility of file drawer
  Undisclosed flexibility in data collection and analysis
  HARKing is still possible
So, readers can't tell if a conceptual replication was truly a strong test
Direct replication eliminates most, but not all, of these problems
Pre-registered direct replication is great
We can't all do pre-registered direct replications…
How else can I convince you that my study isn't likely to be a false positive?
GIVE YOURSELF CREDIT
If you aren't p-hacking, show us by being open and transparent
21-word solution: disclose all flexibility in data collection and analysis
Tell us what's in your file drawer
Tell us what predictions were a priori and what was HARKed
Make your data and materials publicly available
I'll forgive a lot if you show me you're being extra open
IT'S NOT THAT I'M ASSUMING THAT YOU'RE P-HACKING
I'm assuming everyone is p-hacking
Even without p-hacking, small samples are flukey; the risk of false positives is high.
If I tell you I'm not sure your result will replicate, you're in good company.
ARE JOURNALS GOING TO BE FULL OF BORING, OBVIOUS STUDIES?
Only if the only things that are true are boring, obvious things
Maybe that's the case
The fact that something would be really important if true is not a good reason to publish preliminary evidence when it wouldn't be that hard to collect more conclusive evidence.
When it would be hard, then it makes sense, but conclusions still have to be calibrated to the strength of the evidence.
CONCLUSIONS
Power is almost necessary
Often you don't need a power analysis; just get a large sample
Pre-registration is very helpful
Direct replication is great
Conceptual replication doesn't address replicability unless pre-registered
Transparency always helps
What gets published might look quite different than in the past
If your effect is real, you should still be able to get it in
If you're willing to be extra open, you'll have a better chance*
Submit your work to journals that reward your practices
WHAT CAN YOU DO TO MAKE THESE PRACTICES MORE COMMON?
Do them
Submit your papers to journals that have explicitly expressed these values
As a reviewer, use these considerations when evaluating manuscripts
THE END