The P-hacking Phenomenon

1 The P-hacking Phenomenon
Megan Head, Luke Holman, Andrew Kahn, Rob Lanfear, Michael Jennions
OK, so I'm going to talk about a study that my colleagues here and I recently published in PLoS Biology about publication bias and p-hacking. Before I get started, though, I first want to ask you a few questions.

2 Have you ever… Not reported variables that you've measured?
Decided whether to collect more data after looking at your results? Back-engineered your predictions?
So don't worry, I'm not going to make you raise your hands; I just want you to answer these questions in your head. Have you ever not reported variables that you've measured, perhaps because after analysing the data it was clear that some of them weren't very important? Or have you ever decided whether to collect more data after analysing your results, perhaps because it became clear that a greater sample size would give you more power to detect a marginal effect? Or have you ever back-engineered your predictions to tell a good story, either changing the focus of your paper to match the most exciting findings or reporting unexpected findings as having been predicted from the start?

3 * If you said yes to any of these questions, you aren't alone. In this study, John and colleagues found that over 60% of psychologists surveyed have failed to report all dependent variables, over 55% have decided whether to collect more data after looking at the results, and around 30% have back-engineered their predictions. They also show that these practices are generally deemed acceptable by researchers. And I must admit that, prior to doing this research, I would probably have been amongst them. But what I want to do today is demonstrate that taking part in seemingly innocuous practices like these can actually be quite detrimental to scientific progress. * A score of 2 means all researchers thought the practice was defensible (John et al. 2012).

4 The Replication Crisis
So… I don't know if you've heard, but some fields of science are undergoing what's being called a "replication crisis": many published results cannot be reproduced. This inability to replicate findings is bad, because reproducibility is an essential part of the scientific method. But I don't think it's necessarily being caused by intentional misconduct; rather, it's caused by the way we do our science and the unintentional biases we introduce into our work.

5 Publication Bias Types of publication bias: the file drawer effect
p-hacking
One such bias is publication bias, and there are two types. The first is selection bias, also known as the "file drawer effect". This is a bias in which research gets published, and which doesn't, that arises because journal editors and reviewers place a higher value on significant findings. Because of this, and because researchers are judged on the number of papers they have and the prestige of the journals they publish in, researchers often don't publish studies that yield non-significant results. When someone later reviews the literature, these missing non-significant results can lead to an overestimation of the true effect of a treatment or relationship. P-hacking is a little different, but probably arises at least in part for the same reasons. P-hacking is the misreporting of true effect sizes in published studies. It occurs when researchers analyse their data multiple times, or in multiple ways, and then selectively report the analyses that produce significant results. Like the file drawer effect, p-hacking can lead to an overestimation of the true effect size, but in this case it's not because effect sizes aren't being published; it's because the individual effect sizes that are published are inflated.

6 How to p-hack Recording many response variables and deciding which to report post analysis
[Slide figure: effect sizes (blue points) for Studies 1–9]
So let me give an example of how this works, using a practice that seems common and is deemed acceptable by most: recording many response variables and deciding which to report post analysis. Let's say a group of researchers are all interested in understanding how body size is influenced by temperature. Each study estimates body size in a few different ways, simply because they can. When they relate each body size measurement to temperature they get a bunch of different effect sizes: the blue data points on the slide. If these studies reported all of the effects they recorded, you would see a mean effect size across studies of around 0.5. If they each reported one of these variables at random, or decided before the experiment to measure just one, you would get a similar mean effect size. However, if all these studies report only the variable that gives them the greatest effect, the points over on the right, then in the published literature it will look as though the true effect size is around 0.7.
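A rough way to see the size of this bias is to simulate it. The sketch below is not from the paper; the study count, number of measures, true effect, and noise level are all made-up numbers chosen only to mirror the example above.

```python
import numpy as np

rng = np.random.default_rng(42)

n_studies = 9        # hypothetical number of studies
n_measures = 4       # body-size measures recorded per study
true_effect = 0.5    # assumed true effect of temperature on body size
noise_sd = 0.15      # sampling noise around the true effect for each measure

# Each study observes one noisy effect size per body-size measure.
effects = true_effect + rng.normal(0.0, noise_sd, size=(n_studies, n_measures))

report_all = effects.mean()                  # every measured effect is reported
report_one = effects[:, 0].mean()            # one pre-specified measure per study
report_best = effects.max(axis=1).mean()     # only the largest effect per study

print(f"all variables reported:  {report_all:.2f}")   # close to 0.5
print(f"one pre-specified each:  {report_one:.2f}")   # also close to 0.5
print(f"largest effect only:     {report_best:.2f}")  # inflated, roughly 0.6-0.7
```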

7 What is the problem with p-hacking?
And this is a big problem for people coming along later who want to weigh up all the evidence and come to some kind of general conclusion about an effect. If the p-values and effect sizes presented in the literature are a biased sample of what is out there, reviews and meta-analyses will overestimate the strength of a relationship, and this could influence policy and decision making.

8 How to detect p-hacking
So clearly it is important to prevent p-hacking, but before investing too much time into this it's probably a good idea to check whether p-hacking is really happening. So how can we detect p-hacking? We'll never be able to know whether a particular p-value has been hacked or not, but we can look at the distribution of a large number of p-values and look for anomalies that suggest p-hacking is going on. Just to demonstrate the kind of anomalies I'm talking about: on the left is what we expect the distribution of p-values to look like when the true effect size is zero; every p-value is equally likely to be observed, so the expected distribution is uniform. On the other hand, when the true effect size is non-zero, the expected distribution is right-skewed, falling away roughly exponentially, because researchers are more likely to obtain low p-values when studying strong effects. If researchers p-hack and turn truly non-significant results into significant ones, we get an overabundance of p-values just below 0.05: when there is no real effect, the distribution shifts from being flat to left-skewed, and when there is a true effect, the distribution is still right-skewed but now has a small hump just below 0.05.
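A small simulation makes these expected shapes concrete. The sketch below is illustrative only: the two-sample t-test, the sample sizes, the effect size of 0.5, and the "collect more data if the first result is non-significant" rule are all assumptions standing in for whatever analyses real studies run.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
N, n = 10_000, 30          # number of simulated studies, sample size per group

def honest_p(true_diff):
    """p-value from a single two-sample t-test."""
    a = rng.normal(0.0, 1.0, n)
    b = rng.normal(true_diff, 1.0, n)
    return stats.ttest_ind(a, b).pvalue

def hacked_p(true_diff, extra=15):
    """Crude p-hack: if the first analysis is non-significant, collect more
    data and report whichever p-value is smaller."""
    a = rng.normal(0.0, 1.0, n)
    b = rng.normal(true_diff, 1.0, n)
    p = stats.ttest_ind(a, b).pvalue
    if p >= 0.05:
        a = np.concatenate([a, rng.normal(0.0, 1.0, extra)])
        b = np.concatenate([b, rng.normal(true_diff, 1.0, extra)])
        p = min(p, stats.ttest_ind(a, b).pvalue)
    return p

bins = np.linspace(0, 0.05, 6)   # five 0.01-wide bins below 0.05
for label, fn, d in [("no effect, honest  ", honest_p, 0.0),
                     ("no effect, p-hacked", hacked_p, 0.0),
                     ("real effect, honest", honest_p, 0.5)]:
    ps = np.array([fn(d) for _ in range(N)])
    counts, _ = np.histogram(ps[ps < 0.05], bins)
    # roughly: flat; extra mass towards 0.05; piling up towards 0
    print(label, counts)
```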

9 How to detect p-hacking
And using this knowledge we can test for two things. First, to test whether there is evidence for a non-zero effect size, we can use a binomial test to ask whether more of the significant p-values fall in the lower half of the distribution than in the upper half. Second, to test for p-hacking, we can use a binomial test to ask whether there are more p-values in the bin just below 0.05 than in the bin just below that.
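In code, both checks boil down to a one-sided binomial test against a 50:50 split. The counts below are invented purely to show the mechanics, and the exact bin boundaries used in the paper may differ from the coarse split sketched here.

```python
from scipy import stats

# Hypothetical counts of significant p-values (not from the study):
lower_half, upper_half = 820, 430   # below vs above the midpoint of the significant range
near_05, next_bin = 260, 210        # bin just below 0.05 vs the bin just below that

# Evidential value: are low p-values over-represented among significant results?
evidence = stats.binomtest(lower_half, lower_half + upper_half,
                           p=0.5, alternative="greater")

# P-hacking: is there an excess of p-values immediately below 0.05?
hacking = stats.binomtest(near_05, near_05 + next_bin,
                          p=0.5, alternative="greater")

print("evidential value test p =", evidence.pvalue)
print("p-hacking test p        =", hacking.pvalue)
```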

10 How widespread is p-hacking?
Text-mining of open access papers in PubMed
Analysed one p-value per results section
Classified according to FOR code
So, using these tests, we looked at how widespread p-hacking is. To do this we used text mining to extract p-values from the results sections of all open access papers available in the PubMed database. We then randomly selected one p-value per paper, to ensure the p-values were independent, and we assigned each p-value to a scientific discipline.
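The extraction step is conceptually simple, even though the paper's actual text-mining pipeline is more involved. The sketch below is only an illustration of the idea: a hypothetical regular expression pulls reported p-values out of a Results section, and one is then sampled at random per paper.

```python
import random
import re

# Hypothetical pattern for reported p-values such as "p = 0.009" or "P < 0.05".
P_VALUE = re.compile(r"[Pp]\s*[=<>]\s*(0?\.\d+)")

results_text = (
    "Body size increased with temperature (F = 7.2, p = 0.009), "
    "but condition did not differ between treatments (p = 0.41)."
)

p_values = [float(v) for v in P_VALUE.findall(results_text)]
print(p_values)                 # [0.009, 0.41]

# One p-value per results section is kept, chosen at random,
# so the p-values analysed across papers are independent.
print(random.choice(p_values))
```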

11 How widespread is p-hacking?
What we found is that there is evidence that researchers are predominantly studying questions with non-zero effect sizes. This is reassuring, particularly given recent concerns about the lack of reproducibility of findings. Across all disciplines, however, there was also strong evidence for p-hacking. And when we look at the disciplines individually, in most there were more p-values in the upper bin than the lower bin, and in every discipline where we had good statistical power this difference was significant. [Slide: overall test results, p < 0.001 and p < 0.001]

12 How widespread is p-hacking?
So our text-mining suggests that p-hacking is widespread. But, as you can see from the effect sizes presented here for each discipline, the proportion of p-values in the right-hand bin often isn't that much greater than 50%, so although p-hacking is occurring, its effects on general conclusions may be expected to be negligible.
[Slide figure: proportion of p-values in the upper bin ± 95% CI, by discipline]
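For any one discipline, that proportion and its confidence interval can be computed directly from the bin counts. The numbers below are hypothetical, chosen only to show how a proportion close to, but above, 50% looks once the uncertainty is attached.

```python
from scipy import stats

# Hypothetical discipline: 540 of 1,000 significant p-values fall in the upper bin.
k, n = 540, 1000
res = stats.binomtest(k, n, p=0.5)               # two-sided test against 50%
ci = res.proportion_ci(confidence_level=0.95)    # exact (Clopper-Pearson) interval

print(f"proportion = {k / n:.2f}, "
      f"95% CI = ({ci.low:.3f}, {ci.high:.3f}), p = {res.pvalue:.3f}")
```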

13 Does p-hacking affect meta-analyses?
Evolutionary biology as a case study
Re-extracted p-values from original papers
So, in addition to looking at how widespread p-hacking is, we also wanted to have a closer look at how p-hacking influences meta-analyses. To do this we re-extracted p-values from papers that had been included in previous meta-analyses, as a kind of case study.

14 Does p-hacking affect meta-analyses?
Overall, we again found strong evidence that these meta-analyses focus on questions for which there appear to be real effects. Looking at the individual studies, there were three that didn't show this, but these all had very low sample sizes, so it is likely that this is a power issue. When we look at p-hacking, we again found a significant effect across all the studies, but for individual studies this effect was only significant for the study with the largest sample size. [Slide: overall test results, p < 0.001 and p = 0.033]

15 How can we stop p-hacking?
What can researchers do?
Educate themselves and others
Clearly label research as pre-specified or exploratory
Adhere to common analysis standards
Perform blind analyses whenever possible
Place greater emphasis on quality of methods than novelty of findings
So, in summary, our results suggest that p-hacking is widespread and is having noticeable effects on the distribution of p-values, although small sample sizes can make p-hacking difficult to detect in individual meta-analyses. So now that we know p-hacking is occurring, what can we do to prevent it? One key thing is simply better education of researchers: as is clear from the research by John et al. that I presented earlier, many researchers don't realise that the things they do are problematic. We can also clearly label research as pre-specified or exploratory, so that readers can treat results with appropriate caution; exploratory research can be useful for identifying fruitful research directions, but pre-specified studies offer far more convincing evidence for a specific effect. We can adhere to common analysis standards, for instance measuring only response variables that are known (or predicted) to be important, and concentrating research effort on increasing sample sizes rather than measuring more variables. We can perform data analyses blind whenever possible, since not knowing which treatment is which makes it difficult to p-hack. And finally, and perhaps most importantly, when reviewing or assessing research we can place greater emphasis on the quality of the research methods and data collection than on the significance or novelty of the subsequent findings.

16 Thank you!

