USING DATA MANAGEMENT PLANS as a RESEARCH TOOL for IMPROVING DATA SERVICES in ACADEMIC LIBRARIES | Jake Carlson, Patricia Hswe, Susan Wells Parham, Amanda Whitmire, Lizzy Rolando & Brian Westra

Presentation transcript:

USING DATA MANAGEMENT PLANS as a RESEARCH TOOL for IMPROVING DATA SERVICES in ACADEMIC LIBRARIES | Jake Carlson, Patricia Hswe & Susan Wells Parham; Amanda Whitmire, Lizzy Rolando & Brian Westra | IASSIST 2015, Minneapolis, MN, 2-6 June 2015

DART Project | DART Team: Amanda Whitmire, Jake Carlson, Patricia M. Hswe, Susan Wells Parham, Lizzy Rolando, Brian Westra | 5 June 2015

Acknowledgements
Amanda Whitmire | Oregon State University Libraries
Jake Carlson | University of Michigan Library
Patricia M. Hswe | Pennsylvania State University Libraries
Susan Wells Parham | Georgia Institute of Technology Library
Lizzy Rolando | Georgia Institute of Technology Library
Brian Westra | University of Oregon Libraries
This project was made possible in part by the Institute of Museum and Library Services, grant number LG


(transition slide)

Levels of data services
- the basics: DMP review, workshops, website
- mid-level: dedicated "research services", metadata support, facilitate deposit in DRs, consults
- high level: infrastructure, data curation
From: Reznik-Zellen, Rebecca C.; Adamick, Jessica; and McGinty, Stephen (2012). "Tiers of Research Data Support Services." Journal of eScience Librarianship 1(1).

Informed data services development | Survey, DCPs, DMPs

DART Premise (diagram): a DMP reflects the researcher's research data management needs, practices, capabilities, and knowledge.


DART Premise (diagram): research data management needs, practices, capabilities, and knowledge inform Research Data Services.

DART Premise (diagram)

We need a tool.

Solution: an analytic rubric. Performance criteria (Thing 1, Thing 2, Thing 3, ...) are each scored against performance levels (High, Medium, Low).

NSF Directorate or Division
BIO Biological Sciences: DBI Biological Infrastructure; DEB Environmental Biology; EF Emerging Frontiers Office; IOS Integrative Organismal Systems; MCB Molecular & Cellular Biosciences
CISE Computer & Information Science & Engineering: ACI Advanced Cyberinfrastructure; CCF Computing & Communication Foundations; CNS Computer & Network Systems; IIS Information & Intelligent Systems
EHR Education & Human Resources: DGE Division of Graduate Education; DRL Research on Learning in Formal & Informal Settings; DUE Undergraduate Education; HRD Human Resources Development
ENG Engineering: CBET Chemical, Bioengineering, Environmental, & Transport Systems; CMMI Civil, Mechanical & Manufacturing Innovation; ECCS Electrical, Communications & Cyber Systems; EEC Engineering Education & Centers; EFRI Emerging Frontiers in Research & Innovation; IIP Industrial Innovation & Partnerships
GEO Geosciences: AGS Atmospheric & Geospace Sciences; EAR Earth Sciences; OCE Ocean Sciences; PLR Polar Programs
MPS Mathematical & Physical Sciences: AST Astronomical Sciences; CHE Chemistry; DMR Materials Research; DMS Mathematical Sciences; PHY Physics
Many directorates and divisions provide division-specific DMP guidance.

Source | Guidance text
NSF guidelines | The standards to be used for data and metadata format and content (where existing standards are absent or deemed inadequate, this should be documented along with any proposed solutions or remedies)
BIO | Describe the data that will be collected, and the data and metadata formats and standards used.
CSE | The DMP should cover the following, as appropriate for the project: ... other types of information that would be maintained and shared regarding data, e.g. the means by which it was generated, detailed analytical and procedural information required to reproduce experimental results, and other metadata
ENG | Data formats and dissemination. The DMP should describe the specific data formats, media, and dissemination approaches that will be used to make data available to others, including any metadata
GEO AGS | Data Format: Describe the format in which the data or products are stored (e.g. hardcopy logs and/or instrument outputs, ASCII, XML files, HDF5, CDF, etc.).

Rubric development: project team testing & revisions, with feedback & iteration from the Advisory Board, yielding the rubric.

Rubric excerpt. Each performance criterion is scored at one of three performance levels (Complete/detailed; Addressed issue, but incomplete; Did not address issue) and is mapped to the NSF directorates or divisions it applies to.

General assessment criteria
Criterion: Describes what types of data will be captured, created or collected (Directorates: All)
- Complete/detailed: Clearly defines data type(s), e.g. text, spreadsheets, images, 3D models, software, audio files, video files, reports, surveys, patient records, samples, final or intermediate numerical results from theoretical calculations, etc. Also defines data as observational, experimental, simulation, model output or assimilation.
- Addressed issue, but incomplete: Some details about data types are included, but the DMP is missing details or wouldn't be well understood by someone outside of the project.
- Did not address issue: No details included; fails to adequately describe data types.

Directorate- or division-specific assessment criteria
Criterion: Describes how data will be collected, captured, or created (whether new observations, results from models, reuse of other data, etc.) (Directorates: GEO AGS, GEO EAR SGP, MPS AST)
- Complete/detailed: Clearly defines how data will be captured or created, including methods, instruments, software, or infrastructure where relevant.
- Addressed issue, but incomplete: Missing some details regarding how some of the data will be produced; makes assumptions about reviewer knowledge of methods or practices.
- Did not address issue: Does not clearly address how data will be captured or created.

Criterion: Identifies how much data (volume) will be produced (Directorates: GEO EAR SGP, GEO AGS)
- Complete/detailed: Amount of expected data (MB, GB, TB, etc.) is clearly specified.
- Addressed issue, but incomplete: Amount of expected data is vaguely specified.
- Did not address issue: Amount of expected data is NOT specified.
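To make the scoring concrete, here is a minimal illustrative sketch (not the DART team's actual workflow or data) of how rubric scores from two reviewers might be laid out for the reliability analysis that follows. The numeric coding (2 = complete/detailed, 1 = addressed but incomplete, 0 = did not address) and the object name ratingsData are assumptions for illustration only.

# Hypothetical rubric scores: one row per DMP, one column per rater.
# Assumed coding: 2 = complete/detailed, 1 = incomplete, 0 = did not address.
ratingsData <- data.frame(
  rater1 = c(2, 1, 0, 2, 1),  # scores assigned by the first reviewer
  rater2 = c(2, 2, 0, 1, 1)   # scores assigned by the second reviewer
)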

Mini-reviews 1 & 2


Inter-rater reliability

Inter-rater reliability: wherein I try not to put you to sleep.

A primer on scoring: X = T + E. Very helpful excerpts from: Hallgren, Kevin A. "Computing Inter-Rater Reliability for Observational Data: An Overview and Tutorial." Tutorials in Quantitative Methods for Psychology 8, no. 1 (2012): 23–34.

A primer on scoring: X = T + E, where X is the observed score, T is the true score, and E is the measurement error.

A primer on scoring: X = T + E. If there were no error (noise), the observed score would equal the true score.

A primer on scoring: X = T + E. The measurement error could reflect issues of internal consistency, test-retest reliability, or inter-rater reliability.

A primer on scoring: Var(X) = Var(T) + Var(E), i.e. the variance in observed scores equals the variance in true scores plus the variance in errors.

Inter-rater reliability: "IRR analysis aims to determine how much of the variance in the observed scores is due to variance in the true scores after the variance due to measurement error between coders has been removed." (Hallgren, 2012)

Inter-rater reliability: given Var(X) = Var(T) + Var(E), if IRR = 0.80 then 80% of Var(X) is due to Var(T) and 20% of Var(X) is due to Var(E). (Hallgren, 2012)
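As a small illustrative sketch (not from the slides; the variances are invented), the decomposition can be simulated in R to recover the 80/20 split:

# Simulate observed scores as true scores plus independent error, then
# check what share of the observed variance comes from the true scores.
set.seed(42)
trueScore <- rnorm(1000, mean = 10, sd = 2)  # Var(T) is about 4
error     <- rnorm(1000, mean = 0,  sd = 1)  # Var(E) is about 1
observed  <- trueScore + error               # X = T + E
var(trueScore) / var(observed)               # roughly 0.80: share of Var(X) due to Var(T)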

Measures of IRR
1. Percentage agreement | not for ordinal data; overestimates agreement
2. Cronbach's alpha | works for 2 raters only
3. Cohen's kappa | used for nominal data; works for 2 raters only
4. Fleiss's kappa | for nominal variables
5. Intra-class correlation (ICC) | perfect!
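The slides do not show code for these alternatives, but as a sketch, several of them can be computed with the R irr package on a subjects-by-raters table like the hypothetical ratingsData above (the data are invented for illustration):

# Sketch: comparing a few IRR measures on hypothetical two-rater data.
library(irr)
ratingsData <- data.frame(rater1 = c(2, 1, 0, 2, 1),
                          rater2 = c(2, 2, 0, 1, 1))
agree(ratingsData)          # percentage agreement
kappa2(ratingsData)         # Cohen's kappa (2 raters, nominal data)
kappam.fleiss(ratingsData)  # Fleiss's kappa (nominal data, 2 or more raters)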

Intra-class correlation (ICC)
ICC = (variance due to rated subjects, i.e. DMPs) / (variance due to DMPs + variance due to raters + residual variance)
There are 6 variations of ICC; the right one must be chosen carefully based on study design.
Shrout PE, Fleiss JL. Intraclass correlations: uses in assessing rater reliability. Psychological Bulletin. 1979; 86(2): 420–428.
McGraw KO, Wong SP. Forming inferences about some intraclass correlation coefficients. Psychological Methods. 1996; 1(1): 30–46.

Intra-class correlation (ICC)
library(irr)  # icc() as called here matches the irr package's interface
ICC_results <- icc(ratingsData, model="twoway", type="agreement", unit="single")
"twoway" | vs. one-way; raters are random & DMPs are random
"agreement" | vs. consistency; looking for absolute agreement b/w raters
"single" | vs. average; single ratings are used, not averages of ratings

ICC: consistency vs. agreement (example plots): Rater 2 = 1.5 x Rater 1; Rater 2 always rates 4 points higher than Rater 1; Rater 2 = Rater 1. Consistency ICC rewards raters whose scores track each other even with a systematic offset; agreement ICC is high only when the raters' scores actually match.
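A minimal sketch of the additive-offset case from the plots (invented data, not from the slides): a constant +4 offset yields a high consistency ICC but a lower agreement ICC.

library(irr)
rater1 <- c(1, 3, 5, 7, 9, 11)
rater2 <- rater1 + 4                      # Rater 2 always rates 4 points higher
offsetData <- data.frame(rater1, rater2)  # hypothetical ratings table
icc(offsetData, model = "twoway", type = "consistency", unit = "single")  # near 1.0
icc(offsetData, model = "twoway", type = "agreement",   unit = "single")  # noticeably lower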


Inter-rater reliability
Mean = | Median = | Standard Deviation =
Mean = | Median = | Standard Deviation = 0.112

Interpretation scale: poor | fair | good | excellent
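As a sketch (not from the slides), the labels above can be attached to an ICC estimate in R using Cicchetti's (1994) commonly cited cutoffs; the cutoff values and the hypothetical data below are assumptions, not part of the original presentation.

# Compute an agreement ICC on hypothetical ratings and map the point estimate
# onto qualitative labels (assumed cutoffs: <0.40 poor, 0.40-0.59 fair,
# 0.60-0.74 good, >=0.75 excellent).
library(irr)
ratingsData <- data.frame(rater1 = c(2, 1, 0, 2, 1),
                          rater2 = c(2, 2, 0, 1, 1))
ICC_results <- icc(ratingsData, model = "twoway", type = "agreement", unit = "single")
est <- ICC_results$value
cut(est, breaks = c(-Inf, 0.40, 0.60, 0.75, Inf),
    labels = c("poor", "fair", "good", "excellent"), right = FALSE)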