1
Transparency increases credibility and relevance of research
David Mellor, Center for Open Science
2
Norms and Counternorms
- Communality (open sharing with colleagues) vs. Secrecy (closed)
- Universalism (research evaluated only on its merit) vs. Particularism (research evaluated by reputation or past productivity)
- Disinterestedness (scientists motivated by knowledge and discovery, not personal gain) vs. Self-interestedness (treating science as a competition with other scientists)
- Organized skepticism (consider all new evidence, theory, and data, even if it contradicts one's prior work or point of view) vs. Organized dogmatism (investing one's career in promoting one's own most important findings, theories, and innovations)
- Quality (seek quality contributions) vs. Quantity (seek high volume)
7
Anderson, Martinson, & DeVries, 2007
8
Incentives for individual success are focused on getting it published, not getting it right
Nosek, Spies, & Motyl, 2012
9
Problems: low power, flexibility in analysis, selective reporting, ignoring null results, lack of replication. Examples from Button et al. (neuroscience), Ioannidis ("Why Most Published Research Findings Are False," medicine), GWAS, biology.
Two possibilities are that the percentage of positive results is inflated because negative results are much less likely to be published, and that we are pursuing our analysis freedoms to produce positive results that are not really there. Both would inflate the rate of false-positive results in the published literature. Some evidence from biomedical research suggests that this is occurring. Two industrial laboratories attempted to replicate 40-50 basic-science studies that showed positive evidence for markers for new cancer treatments or other issues in medicine. They did not select at random; they picked studies considered landmark findings. The replication success rates were about 25% in one effort and about 10% in the other. Further, some of the findings they could not replicate had spurred literatures of hundreds of follow-up articles on the finding and its implications, without anyone testing whether the evidence for the original finding was solid. This is a massive waste of resources.
Across the sciences, evidence like this has spurred discussion and proposed actions to improve research efficiency, to keep erroneous results from entering and staying in the literature, and to address a culture of scientific practice that rewards publishing, perhaps at the expense of knowledge building. There have been a variety of suggestions for what to do; for example, the Nature article on the right proposes raising publishing standards for basic-science research.
[It is not in my interest to replicate, my own work or others', to evaluate validity and improve precision in effect estimates. Replication is worth next to zero (see Makel's data on published replications); novelty is supreme, so there is essentially zero error checking. It is not in my interest to check my work, and not in your interest to check my work, so we each do our own thing and get rewarded for that. Irreproducible results will get into and stay in the literature (see the Prinz and Begley articles in biomedicine). The Nature article by researchers in biomedicine offers the solution popular among commentators from the other sciences: raise publishing standards.]
Sterling, 1959; Cohen, 1962; Lykken, 1968; Tukey, 1969; Greenwald, 1975; Meehl, 1978; Rosenthal, 1979
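The cost of low power compounds with publication bias, and the arithmetic is easy to check. Below is a minimal sketch in the spirit of Ioannidis and Button et al.; the prior odds, power, and alpha are illustrative assumptions, not figures from the talk.

```python
# Positive predictive value (PPV) of a "significant" result under assumed
# prior odds, power, and alpha. Illustrative numbers, not from the slides.

def ppv(prior_true, power, alpha):
    """Share of significant findings that reflect a real effect."""
    true_pos = prior_true * power            # real effects correctly detected
    false_pos = (1 - prior_true) * alpha     # null effects crossing the alpha threshold
    return true_pos / (true_pos + false_pos)

# One real effect per five hypotheses tested, power ~35%, alpha = .05:
print(round(ppv(prior_true=0.20, power=0.35, alpha=0.05), 2))  # ~0.64
# Same prior with 90% power (the Registered Reports standard):
print(round(ppv(prior_true=0.20, power=0.90, alpha=0.05), 2))  # ~0.82
```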
10
Papers reporting support for the tested hypothesis
[Figure: percentage of papers reporting support for the tested hypothesis, by discipline.]
Fanelli D (2010). "Positive" Results Increase Down the Hierarchy of the Sciences. PLoS ONE 5(4).
12
Reported vs. unreported tests (Franco, Malhotra, & Simonovits, 2015, SPPS; N = 32 studies in psychology)
- Reported tests (122): median p-value = .02, median effect size (d) = .29, 63% with p < .05
- Unreported tests (147): median p-value = .35, median effect size (d) = .13, 23% with p < .05
About 40% of studies fail to fully report all experimental conditions, and about 70% of studies do not report all outcome variables included in the questionnaire. Reported effect sizes are about twice as large as unreported effect sizes and are about three times more likely to be statistically significant.
13
False-Positive Psychology: Undisclosed Flexibility in Data Collection and Analysis Allows Presenting Anything as Significant
16
Flexibility in data analysis and bias toward significant findings provide the ability to fool others. Fooling ourselves is the bigger problem!
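How easily flexibility fools us can be shown with a small simulation; the sketch below uses assumed parameters (sample sizes, number of outcome variables, an optional-stopping rule) rather than any analysis from the talk.

```python
# Sketch: how much undisclosed flexibility inflates false positives.
# There is no true effect in these data; sample sizes, the number of outcome
# variables, and the optional-stopping rule are illustrative assumptions.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n_sims, n_start, n_extra, n_outcomes = 2000, 20, 10, 3

def min_p(treatment, control):
    """Smallest two-sided Welch t-test p-value across all outcome variables."""
    return min(stats.ttest_ind(treatment[:, k], control[:, k], equal_var=False).pvalue
               for k in range(n_outcomes))

hits = 0
for _ in range(n_sims):
    control = rng.normal(size=(n_start, n_outcomes))
    treatment = rng.normal(size=(n_start, n_outcomes))
    p = min_p(treatment, control)            # flexibility 1: report the outcome that "worked"
    if p >= 0.05:                            # flexibility 2: if not significant, collect more data
        control = np.vstack([control, rng.normal(size=(n_extra, n_outcomes))])
        treatment = np.vstack([treatment, rng.normal(size=(n_extra, n_outcomes))])
        p = min_p(treatment, control)
    hits += p < 0.05

print(f"'Significant' findings with no true effect: {hits / n_sims:.0%}")
# Prints well above the nominal 5% (roughly 15-25% with these settings).
```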
17
Ba Ba? Da Da? Ga Ga? The McGurk Effect
The visible mouth is actually making a /ga/ sound: it sounds like "da da" when watching the face, and like "ba ba" with eyes closed. McGurk & MacDonald, 1976, Nature
19
Adelson, 1995
20
Adelson, 1995
21
“The human understanding when it has once adopted an opinion… draws all things else to support and agree with it. And though there be a greater number and weight of instances to be found on the other side, yet these it either neglects and despises, or else by some distinction sets aside and rejects; in order that by this great and pernicious predetermination the authority of its former conclusions may remain inviolate.” Sir Francis Bacon, 1620
22
The Recipe for a Replication Crisis
- It is easier to publish with clean and positive results
- Almost any dataset can be used to find such results
- Motivated reasoning and confirmation bias are powerful forces
23
97% of original studies reported significant results; only 37% of replications did. Open Science Collaboration, 2015, Science
24
“Nullius in verba” ~ “Take nobody's word for it”
25
Evidence to encourage change. Incentives to embrace change. Technology to enable change. Improving the scientific ecosystem.
26
Infrastructure Metascience Community
28
Data sharing standard, three levels:
Level 1: Article states whether data are available and, if so, where to access them.
Level 2: Data must be posted to a trusted repository. Exceptions must be identified at article submission.
Level 3: Data must be posted to a trusted repository, and reported analyses will be reproduced independently prior to publication.
29
Three Tiers: Disclose, Require, Verify
Eight Standards: Data citation, Materials transparency, Data transparency, Code transparency, Design transparency, Study preregistration, Analysis preregistration, Replication
32
Nearly 3,000 Journals and Organizations
AAAS/Science, American Academy of Neurology, American Geophysical Union, American Heart Association, American Meteorological Society, American Society for Cell Biology, Association for Psychological Science, Association for Research in Personality, Association of Research Libraries, Behavioral Science and Policy Association, BioMed Central, Committee on Publication Ethics, Electrochemical Society, Frontiers, MDPI, Nature Publishing Group, PeerJ, Pensoft Publishers, Psychonomic Society, Public Library of Science, The Royal Society, Society for Personality and Social Psychology, Society for a Science of Clinical Psychology, Ubiquity Press, Wiley
33
Signals: Making Behaviors Visible Promotes Adoption
Badges: Open Data, Open Materials, Preregistration. Adopted by Psychological Science (Jan 2014). Kidwell et al., 2016
34
[Figure: percentage of articles reporting data available in a repository, 0%-40%, over time.]
38
What problems does preregistration fix?
- The file drawer
- P-hacking: unreported flexibility in data analysis
- HARKing: Hypothesizing After the Results are Known (Dataset → Hypothesis)
Kerr, 1998
43
Preregistration makes the distinction between confirmatory (hypothesis testing) and exploratory (hypothesis generating) research more clear.
44
Confirmatory versus exploratory analysis

Context of confirmation:
- Traditional hypothesis testing
- Results held to the highest standards of rigor
- Goal is to minimize false positives
- P-values interpretable

Context of discovery:
- Pushes knowledge into new areas / data-led discovery
- Finds unexpected relationships
- Goal is to minimize false negatives
- P-values meaningless!

"In statistics, hypotheses suggested by a given dataset, when tested with the same dataset that suggested them, are likely to be accepted even when they are not true. This is because circular reasoning (double dipping) would be involved: something seems true in the limited data set, therefore we hypothesize that it is true in general, therefore we (wrongly) test it on the same limited data set, which seems to confirm that it is true. Generating hypotheses based on data already observed, in the absence of testing them on new data, is referred to as post hoc theorizing (from Latin post hoc, 'after this'). The correct procedure is to test any hypothesis on a data set that was not used to generate the hypothesis."

Presenting exploratory results as confirmatory increases the publishability of results at the expense of the credibility of results.
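The circularity in that quote is easy to demonstrate with simulated null data. The sketch below (sample sizes and the number of candidate variables are assumptions) picks the variable that looks most promising in one dataset and then tests it either on that same dataset (double dipping) or on an independent confirmatory dataset.

```python
# Sketch: hypotheses suggested by a dataset cannot be fairly tested on that
# same dataset. Every effect below is null; the sizes are illustrative.
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
n_sims, n, n_vars = 2000, 40, 10
group = np.repeat([0, 1], n // 2)
sig_double_dip = sig_holdout = 0

for _ in range(n_sims):
    x = rng.normal(size=(n, n_vars))   # exploratory sample
    y = rng.normal(size=(n, n_vars))   # independent confirmatory sample
    # "Discovery": pick the variable with the largest group difference in x.
    diffs = np.abs(x[group == 1].mean(axis=0) - x[group == 0].mean(axis=0))
    best = int(np.argmax(diffs))
    # Double dipping: test the suggested hypothesis on the data that suggested it.
    sig_double_dip += stats.ttest_ind(x[group == 1, best], x[group == 0, best]).pvalue < 0.05
    # Correct: test it on held-out data that played no role in the discovery.
    sig_holdout += stats.ttest_ind(y[group == 1, best], y[group == 0, best]).pvalue < 0.05

print(f"Double dipping: {sig_double_dip / n_sims:.0%} 'significant'")   # far above 5%
print(f"Holdout test:   {sig_holdout / n_sims:.0%} 'significant'")      # ~5%, as it should be
```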
48
Example Workflow 1: theory-driven, a priori expectations
Collect new data → Confirmation phase (hypothesis testing) → Discovery phase (exploratory research, hypothesis generating)
51
Example Workflow 2: few a priori expectations
Collect data → Split the data (keep the confirmation half secret!) → Discovery phase on one half (exploratory research, hypothesis generating) → Confirmation phase on the held-out half (hypothesis testing)
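A minimal sketch of this split-and-hold-out workflow is below; the file names and the 50/50 split are illustrative assumptions, not part of the talk.

```python
# Sketch of Example Workflow 2: split once, explore freely on one half, and
# leave the confirmation half untouched until the analysis plan is registered.
# File names and the 50/50 split are illustrative assumptions.
import pandas as pd

df = pd.read_csv("full_dataset.csv")               # hypothetical raw data file
explore = df.sample(frac=0.5, random_state=2016)   # discovery half
confirm = df.drop(explore.index)                   # confirmation half

explore.to_csv("exploratory_half.csv", index=False)
confirm.to_csv("confirmatory_half_DO_NOT_OPEN.csv", index=False)

# Discovery phase: generate hypotheses from `explore` only.
# Then preregister the resulting analysis plan, and only afterwards run that
# exact plan, once, on `confirm`.
```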
54
Incentives to Preregister
cos.io/prereg cos.io/badges
55
How to preregister?
56
https://osf.io/prereg
60
Reporting preregistered work
Include a link to your preregistration. Report the results of ALL preregistered analyses. ANY unregistered analyses must be transparent.
61
Registered Reports
62
Registered Reports: Stage 1 review criteria
- Are the hypotheses well founded?
- Are the methods and proposed analyses feasible and sufficiently detailed?
- Is the study well powered? (≥90%)
- Have the authors included sufficient positive controls to confirm that the study will provide a fair test?
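Reviewers can check the ≥90% power criterion directly. A minimal sketch with statsmodels is below; the assumed effect size (Cohen's d = 0.4) is an illustration, not a value from the talk.

```python
# Sketch: per-group sample size for 90% power in a two-sided, two-sample t-test.
# The assumed effect size (Cohen's d = 0.4) is illustrative.
from statsmodels.stats.power import TTestIndPower

n_per_group = TTestIndPower().solve_power(effect_size=0.4, alpha=0.05, power=0.90)
print(round(n_per_group))  # about 133 participants per group
```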
63
Registered Reports: Stage 2 review criteria
- Did the authors follow the approved protocol?
- Did positive controls succeed?
- Are the conclusions justified by the data?
64
None of these things matter
Chambers, 2017
65
Other benefits of Registered Reports
- Full protection of exploratory analyses / serendipity: confirmatory (registered) and exploratory (unregistered) outcomes are simply distinguished in the Results
- Constructive review process in which reviewers can help authors address methodological problems before it is too late
66
A few FAQs about RRs (see more at cos.io/rr)
67
1. "Are Registered Reports suitable for my journal?"
Applicable to any field engaged in hypothesis-driven research where one or more of the following problems apply:
- Publication bias
- Significance chasing (e.g., p-hacking)
- Post hoc hypothesizing (hindsight bias)
- Low statistical power
- Lack of direct replication
Not applicable for purely exploratory science or methods development (no hypothesis testing).
68
2. “What’s to stop researchers from ‘pre-registering’ a study that they have already conducted?”
- Time-stamped raw data files must be submitted at Stage 2, with a basic lab log and certification from all authors that the data were collected after provisional acceptance.
- Submitting a completed study at Stage 1 would therefore be fraud.
- The strategy would backfire anyway when reviewers ask for amendments at Stage 1.
- Registered Reports aren't designed to prevent fraud but to incentivize good practice.

3. "If accepting papers before results exist, how can we know that the studies will be conducted to a high standard?"
- Stage 1 review criteria include the a priori specification of data quality checks / positive controls that, subject to editorial discretion, must be passed at Stage 2 for the final paper to be accepted.
- To prevent publication bias, all such tests must be independent of the main hypotheses.
- Running experiments poorly or sloppily, or with errors, would therefore jeopardize final publication.
69
This highlights the importance of the journal adopting specific review criteria
Stage 1 (protocol); Stage 2 (full paper with results). From the Registered Report author guidelines at the European Journal of Neuroscience.
70
6. "What happens if the authors need to change something about their experimental procedures after they are provisionally accepted?"
- Minor changes (e.g., replacing equipment) can be footnoted in the Stage 2 manuscript as protocol deviations.
- Major changes (e.g., changing data exclusion criteria) would require withdrawal.
- The editorial team decides whether a deviation is sufficiently minor to continue.

7. "How will this work when some of the authors' proposed analyses will depend on the results?" (e.g., assumptions about statistical distributions)
- Preregistration doesn't require each decision to be specified, only the decision tree.
- Authors can preregister the contingencies / rules for future decisions.

8. "Is this appropriate for secondary analysis of existing data sets?"
- Yes, several journals offer this feature, so long as authors can certify that they have not yet observed the data in question.
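The "decision tree" in FAQ 7 can be preregistered quite literally as analysis code. The sketch below shows one hypothetical contingency rule; the normality check and the fallback test are assumptions for illustration, not requirements of any journal.

```python
# Sketch of a preregistered contingency: the rule is fixed in advance,
# even though the branch taken depends on the data. Test choices and the
# normality threshold are illustrative assumptions.
from scipy import stats

def preregistered_comparison(group_a, group_b):
    """Registered rule: if both groups pass Shapiro-Wilk at p >= .05, run
    Welch's t-test; otherwise fall back to a two-sided Mann-Whitney U test."""
    normal = (stats.shapiro(group_a).pvalue >= 0.05 and
              stats.shapiro(group_b).pvalue >= 0.05)
    if normal:
        return "welch_t", stats.ttest_ind(group_a, group_b, equal_var=False).pvalue
    return "mann_whitney", stats.mannwhitneyu(group_a, group_b,
                                              alternative="two-sided").pvalue
```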
71
9. “Are Registered Reports suitable only for low-risk research?”
No – they are ideal for any hypothesis-driven question where the answer is important to know either way, regardless of study risk. Adoption by the Nature group attests to this.

10. "Will Registered Reports lower my journal's impact factor?"
No reason to think so – at Cortex the first 6 RRs have been cited 12.5% above the current JIF.

11. "Are you suggesting Registered Reports as a replacement of existing article types?"
No – at most journals they are being added as a new option for authors.

12. "How complicated is the implementation?"
Straightforward – it took approximately one week at Cortex and Royal Society Open Science to make the necessary changes to the manuscript handling software. Most major publishers have one adopting journal already, making it even easier. We have a dedicated repository of support materials to assist editors: cos.io/rr

13. "How many submissions have there been?"
At Cortex there have been 33 so far; other journals have had more or fewer. See our online database for a full listing of published RRs across journals.
73
Make registrations discoverable across all registries
Provide tools for communities to create and manage their own registry. Allows them to devote resources to community building and standards creation.
74
Three layers (let experts be experts, reduce redundancy and cost, accelerate innovation):
- Top: content and interfaces → content experts (researchers) care
- Middle: services → scholarly communication experts and innovators care
- Bottom: toolkit → technical experts (developers) care
75
Big, Mundane Challenges for Reproducibility
Forgetting; losing materials and data
77
Planning > Execution > Reporting > Archiving > Discovery
Create new projects. Register a research plan.
First: planning your research – namely, creating a project and then registering your plan. Let's first talk about creating your project:
- Create a project on the OSF. Maybe you'd like to break it down into various components – you can nest these components within your overall project.
- On the OSF, you can add your collaborators. Give them access to the components that are relevant to them, or give them access to the entire project.
- This brings us to another topic: public and private workflows. The OSF allows management of privacy options at the component level – keep private what you want private, and make public what you want public. The OSF meets researchers where they are in terms of privacy and sharing, but provides incentives to open up your research.
79
Managing a research workflow
Collaboration. Version control. Hub for services. Project management.
After planning a project, it's time for execution and managing your research workflow. We've already discussed the ways in which you can give collaborators access to your documents, but there are other features of the OSF that aid in project management. Built-in version control maintains a change history and access to prior versions of files without burdening the user. The OSF is also a hub for other services you already use; we'll get to that in a minute.
80
Collaboration
81
Add collaborators within your own organization or across institutions/organizations
82
Put data, materials, and code on the OSF
Quite simply, the OSF is a file repository that allows any sort of file to be stored, and many types to be rendered in the browser without any special software. This is very important for increasing the accessibility of research.
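Because everything on the OSF is also exposed through a public REST API, files can be listed or fetched programmatically. The sketch below assumes the OSF v2 API layout documented at developer.osf.io; the project GUID "abcde" is a hypothetical placeholder.

```python
# Sketch: list the files stored in a public OSF project's OSF Storage via the
# v2 REST API (JSON:API responses). The GUID "abcde" is a placeholder.
import requests

project_id = "abcde"  # hypothetical public project GUID
url = f"https://api.osf.io/v2/nodes/{project_id}/files/osfstorage/"

resp = requests.get(url, timeout=30)
resp.raise_for_status()
for item in resp.json()["data"]:
    attrs = item["attributes"]
    print(attrs.get("kind"), attrs.get("name"))  # e.g. "file data.csv"
```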
83
Automatic File Versioning
The OSF has built-in versioning for file changes, making it easy to keep your research up to date. When you or any collaborators add new versions of a file with the same name to your OSF project, the OSF automatically updates the file and maintains an accessible record of all previous versions stored in OSF Storage and many of our add-ons.
85
Connects Services Researchers Use
The OSF integrates with services that researchers already use including storage providers (like Amazon S3, Dropbox, Google Drive, etc) and others.
86
OpenSesame
88
The OSF can serve as a lab notebook and living archive for any research project.
All of these features allow the OSF to be a lab notebook and living archive for any research project.
89
Use view-only links to share your research with others
Review and publish:
- Use view-only links to share your research with others
- Unique, persistent URLs can be used in citations
- Attach a DOI to your public OSF project
Okay – you've planned your study, you've run your study, and now it's time to report what you've done. The OSF supports this, for one, by offering version control for your manuscript: never again will you wonder whether "Final FINAL final paper" or "Final FINAL Really Final paper" is the actual final draft. But it helps in other ways too. The OSF enables creation of view-only links for sharing materials. If your project is private, you can create a view-only link, allowing one-time, read-only sharing with non-collaborators, for example to aid the peer review process. If your domain follows double-blind peer review, your view-only link can be created to anonymize all contributor logs, keeping the creators' identities a secret. As mentioned before, each project, component, and file on the OSF has its own unique, persistent URL, making each research object citable. This is helpful if, for example, you're reusing data you've previously posted to the OSF and want to cite it in your new publication; it also means others can use your data and cite it in their publications. You can also attach a DOI to your research object. We're also working on a "badge bakery" in the OSF, allowing for digital verification of badges that acknowledge open practices (awarded by several journals currently, with several more on the way). With this badge bakery in place, you'll be able to link your projects to their publications and your badge, and others can discover the work for which you've received a badge.
90
Making a Project Public
Privacy controls allow you to share what you want, when you want, with whom you want, and ensure that private content stays private. You can choose to make your projects and components public at any time: right from the start, after you've published your manuscript, or any time in between.
91
Persistent, Citable Identifiers
The OSF offers persistent identifiers for every project, component, and file on the OSF.
92
The last five-character combination in the URL is the persistent identifier. Everything on the OSF has a persistent identifier: whole projects, individual components of projects, and even files. These identifiers can be used in a citation.
93
See the Impact: file downloads, forks
We can do small things like offer rich analytics to incentivize more open practices. Public projects gain immediate access to analytics showing visits over time, sources of traffic, and download counts for files. This is a much faster reward for one's effort than waiting months or longer until something is published, and then even longer until it is cited.
94
The ecosystem: universities, publishing, funders, societies
95
What can you do?
- Adopt the TOP Guidelines
- Adopt badges
- Adopt Registered Reports
- Partner on the Preregistration Challenge
- Connect services with the OSF for linking, data archiving, etc.
SHARE 2016 Community Meeting and hackathon, week of July 11, 2016, at the Center for Open Science in Charlottesville, VA. Themes: (1) pairing automatic enhancement with expert curation, and creating tools to support these efforts; (2) pedagogy to develop expert curation of local data and the technical skills to get SHARE data into local services (and then give back to SHARE); (3) accessing (meta)data across the research workflow. If you're unfamiliar with SHARE or the free, open dataset we are creating, you can read more at share-research.org. The meeting includes a hackathon and a working meeting, and you are welcome to register for one or both. We welcome a diversity of skills, skill levels, backgrounds, and interests at the hackathon; this diversity will result in a better, more impactful event, so please consider attending both events whether or not you are an (experienced) programmer. There is no registration fee, but participants cover their own travel and hotel costs in Charlottesville. There is a room block at the Omni Charlottesville at $149/night if booked by May 27; make your reservation by calling the Omni directly and mentioning the Center for Open Science. A limited budget for travel support is available if needed. Please fill out the registration form by Friday, April 15.
96
Literature cited
Chambers, C. D., Feredoes, E., Muthukumaraswamy, S. D., & Etchells, P. (2014). Instead of "playing the game" it is time to change the rules: Registered Reports at AIMS Neuroscience and beyond. AIMS Neuroscience, 1(1), 4–17.
Franco, A., Malhotra, N., & Simonovits, G. (2014). Publication bias in the social sciences: Unlocking the file drawer. Science, 345(6203), 1502–.
Franco, A., Malhotra, N., & Simonovits, G. (2015). Underreporting in Political Science Survey Experiments: Comparing Questionnaires to Published Results. Political Analysis, 23(2), 306–312.
Gelman, A., & Loken, E. (2013). The garden of forking paths: Why multiple comparisons can be a problem, even when there is no "fishing expedition" or "p-hacking" and the research hypothesis was posited ahead of time. Department of Statistics, Columbia University.
Kaplan, R. M., & Irvin, V. L. (2015). Likelihood of Null Effects of Large NHLBI Clinical Trials Has Increased over Time. PLoS ONE, 10(8), e.
Kerr, N. L. (1998). HARKing: Hypothesizing After the Results are Known. Personality and Social Psychology Review, 2(3), 196–217.
Nosek, B. A., Spies, J. R., & Motyl, M. (2012). Scientific Utopia: II. Restructuring Incentives and Practices to Promote Truth Over Publishability. Perspectives on Psychological Science, 7(6), 615–631.
Ratner, K., Burrow, A. L., & Thoemmes, F. (2016). The effects of exposure to objective coherence on perceived meaning in life: a preregistered direct replication of Heintzelman, Trent & King (2013). Royal Society Open Science, 3(11).
Rowhani-Farid, A., Allen, M., & Barnett, A. G. (2017). What incentives increase data sharing in health and medical research? A systematic review. Research Integrity and Peer Review, 2(1).
Simmons, J. P., Nelson, L. D., & Simonsohn, U. (2011). False-Positive Psychology: Undisclosed Flexibility in Data Collection and Analysis Allows Presenting Anything as Significant. Psychological Science, 22(11), 1359–.