Sam Creavin1, Liz Ewins2, Dane Rayment2, Anna Noel-Storr3, Sarah Cullum4


Crowdsourcing citation screening for a Cochrane diagnostic test accuracy review on the Mini-Mental State Examination

Sam Creavin1, Liz Ewins2, Dane Rayment2, Anna Noel-Storr3, Sarah Cullum4
1 NIHR ACF General Practice, University of Bristol; 2 Specialist Registrar, Severn Deanery, Bristol; 3 Cochrane Dementia and Cognitive Improvement Group, Oxford University; 4 Consultant Psychiatrist/Hon Senior Clinical Lecturer, Academic Unit of Psychiatry, Bristol University

Background
The Cochrane Dementia and Cognitive Improvement Group (CDCIG) is conducting systematic reviews of diagnostic test accuracy (DTA) studies of biomarkers and pen-and-paper tests, including the Mini-Mental State Examination (MMSE).1 DTA studies investigate the clinical utility of an index test for the diagnosis of a disease in a particular population, reporting parameters such as sensitivity, specificity, predictive values and likelihood ratios. The MMSE is widely used for the screening and recognition of cognitive disorders2 and is extensively cited. For Cochrane DTA reviews, particularly of the MMSE, many (>10,000) candidate citations must be assessed. We describe a novel method of expediting the completion of Cochrane DTA systematic reviews, using the MMSE review as a test case. This work examines the hypothesis that crowds (groups of willing contributors) can be an effective means of screening large numbers of citations.

Results
Screeners reported that they felt more confident in classifying citations appropriately.

Conclusions
Non-experts can be trained to screen large numbers of citations successfully for systematic reviewers. A second training session to refine the inclusion criteria increased the confidence and accuracy of the screeners in identifying relevant citations. The second workshop may also have led to fewer dropouts in the second round. We intend to assess the accuracy and inter-rater reliability of the screeners as we continue with the MMSE DTA reviews.
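The inter-rater reliability the authors plan to assess for pairs of screeners is conventionally quantified with Cohen's kappa, which corrects raw agreement for chance. A minimal sketch of that calculation; the screener labels below are hypothetical illustrations, not data from the study:

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Chance-corrected agreement between two raters over the same items."""
    n = len(rater_a)
    # Proportion of items on which the two raters gave the same label
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Agreement expected by chance, from each rater's marginal label frequencies
    counts_a, counts_b = Counter(rater_a), Counter(rater_b)
    expected = sum(counts_a[c] * counts_b[c] for c in counts_a) / (n * n)
    return (observed - expected) / (1 - expected)

# Hypothetical classifications by two screeners of the same four citations,
# using the poster's three categories
a = ["Probable", "Not Relevant", "Possible", "Not Relevant"]
b = ["Probable", "Not Relevant", "Not Relevant", "Not Relevant"]
print(round(cohens_kappa(a, b), 2))  # 0.56
```

Kappa of 1 indicates perfect agreement and 0 indicates agreement no better than chance, which is why it is preferred over raw percent agreement when categories are imbalanced (here, most citations are 'Not Relevant').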
The first-round results would have generated thousands of full-text papers for each review, which is unmanageable for most reviewers. This approach attempted to navigate the line between pragmatism and methodological rigour while engaging users of research in the Cochrane process.

Method
Lay screeners, comprising junior doctors and psychology students, were recruited and attended a training workshop. The 20,823 retrieved citations were divided into 19 batches of 1,096 citations, and each batch was assessed by two screeners. Citations were classified as 'Probable', 'Possible' or 'Not Relevant'. Participants who successfully completed the first round were invited to a second training workshop, where the inclusion and exclusion criteria were refined. The remaining citations were subsequently allocated and screened using the new criteria. The results of the two rounds are compared.

References and acknowledgements
1. Davis D, Creavin ST, Noel-Storr A, Quinn T, Smailagic N, Hyde C, Brayne C, McShane R, Cullum S. Neuropsychological tests for the diagnosis of Alzheimer's disease dementia and other dementias: a generic protocol for cross-sectional and delayed-verification studies (Protocol). The Cochrane Library 2013, Issue 3.
2. Folstein M, Folstein S, McHugh P. "Mini-mental state": a practical method for grading the cognitive state of patients for the clinician. Journal of Psychiatric Research 1975;12:189–98.

The authors would like to thank the non-expert screeners who participated in the study.

Photo: Discussing papers at a regional training workshop
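The allocation step in the Method — splitting the retrieved citations into fixed-size batches and assigning each batch to two screeners — can be sketched as below. The round-robin pairing and the pool of ten named screeners are assumptions for illustration only; the poster does not state how batches were actually matched to people:

```python
import itertools

def make_batches(ids, batch_size):
    """Split citation IDs into consecutive batches (the last may be short)."""
    return [ids[i:i + batch_size] for i in range(0, len(ids), batch_size)]

def assign_double_screening(batches, screeners):
    """Pair each batch with two different screeners, rotating through the pool."""
    pool = itertools.cycle(screeners)
    return [(batch, (next(pool), next(pool))) for batch in batches]

citations = list(range(20_823))                        # placeholder citation IDs
batches = make_batches(citations, 1_096)               # yields 19 batches, as in the poster
screeners = [f"screener_{i:02d}" for i in range(10)]   # hypothetical pool size
assignments = assign_double_screening(batches, screeners)
print(len(batches))  # 19
```

Double screening means every citation receives two independent judgements, which is what later makes the inter-rater reliability analysis mentioned in the conclusions possible.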

