Data Consistency: SEND Datasets and Study Reports: Request for Collaboration in the Dynamic SEND Landscape

Presentation transcript:

ABSTRACT
The "Data Consistency: SEND Datasets and the Study Report" Working Group was formed at PhUSE CSS 2016 to identify inconsistencies that were being discovered between nonclinical datasets and study reports and to provide recommendations for action. Team members drew from experience, as well as from scenarios that surfaced through the Nonclinical FDA Fit-for-Use Pilot, to draft a list of discrepancies, which was then assessed and categorized based on each item's potential impact on study data interpretation. Conclusions drawn from the assessment became the basis for recommendations. As work progressed, the team realized that the identification of new inconsistencies and our ever-changing standards landscape would quickly make the paper obsolete. With this in mind, the paper is being transformed into a live wiki page where the community will be invited to contribute new scenarios and possible solutions. For the same reasons, this approach may be applicable to other nonclinical projects. Our poster provides a look into the process, progress, and future of the team's work.

BACKGROUND
SEND datasets are intended to reflect the study design and individual data listings of a nonclinical report. However, differences between the report and the SEND datasets can occur. These differences generally stem from the separate processes and systems used to collect and report raw data and to generate SEND datasets. Many companies, CROs, and vendors are creating parallel paths for the generation of SEND datasets and of the data that go into study reports. Consequently, there are and will continue to be inconsistencies between study reports and SEND datasets. The effort required to understand, correct, and eliminate differences is still under evaluation and may be significant. Feedback received from the agency based on actual SEND submissions may improve our understanding and may change the initial recommendations provided here. During PhUSE discussions, it became apparent that best practices were needed to help SEND implementers decide which methods should be used to address these differences.

METHODS
SEND dataset and study report discrepancies were categorized by impact. Low impact discrepancies are expected to have minimal effect on study data interpretation and conclusions. High impact discrepancies could lead to misinterpretation of conclusions drawn from reviews of SEND datasets versus those presented in study reports, and/or to data collection process changes by sponsors and CROs. Based on this assessment, the team identified the recommendations described in the next section.

ISSUE 3: Data/metadata for permissible or occasionally expected variables may be in the report but not in the datasets.
BACKGROUND: There may be method or qualifier variables (typically permissible) that are covered in the report and/or recorded somewhere, but are not collected in the data collection system in a way that the system can output into SEND (e.g., LBMETHOD); inclusion of the metadata requires manual intervention.
QUESTION: In what scenarios is it acceptable for permissible variables included in the study report not to be submitted electronically (e.g., LBMETHOD), considering the effort that manual intervention would require?
Contributors: K. Brown; L. Eickhoff; D. Fish; M. Francomacaro; M. Rosentreter; C. Roy; C. Spicer; W. Wang; F. Wood
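To make the manual-intervention question concrete, here is one way a sponsor could flag permissible variables (such as LBMETHOD) that appear in the study report but are absent or empty in the corresponding SEND domain. This is a minimal sketch, not a working group recommendation: the file name lb.xpt and the expected-variable list are illustrative assumptions, and it presumes the domain is delivered as a SAS transport file readable with pandas.

```python
import pandas as pd

# Permissible variables whose values appear in the study report for this
# hypothetical study; the list and the file name "lb.xpt" are illustrative.
EXPECTED_PERMISSIBLE = ["LBMETHOD"]

def unpopulated_permissible(xpt_path, expected):
    """Return the expected permissible variables that are absent or never populated."""
    # SEND datasets are exchanged as SAS transport (XPORT) files.
    lb = pd.read_sas(xpt_path, format="xport", encoding="latin-1")
    gaps = []
    for var in expected:
        if var not in lb.columns:
            gaps.append(var)  # the collection system did not output the variable
        elif lb[var].fillna("").astype(str).str.strip().eq("").all():
            gaps.append(var)  # the column exists but is entirely blank
    return gaps

# Any variable reported here is a candidate for manual population or,
# failing that, an explanation in the nSDRG.
print(unpopulated_permissible("lb.xpt", EXPECTED_PERMISSIBLE))
```

A check like this only finds the dataset side of the gap; whether the report-side information should then be keyed in manually is exactly the cost/benefit question posed above.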
RECOMMENDATIONS
Low Impact Discrepancies: Resolution Is Not Necessary, or the Data Inconsistency Is Acceptable
- Minimal or no impact on reviewer interpretation of SEND datasets versus study reports.
- The discrepancies should still be called out in the nSDRG.

High Impact Discrepancies: Resolution Is Necessary, or the Data Inconsistency Should Be Resolved
- Could potentially lead to differing interpretations of the SEND dataset versus the report for a given study.
- Sponsors and CROs should reconcile differences between datasets and study reports using the tool(s) or manual processes available to them, when possible.
- Data reconciliation processes must be sustainable so as not to put an undue burden on industry; the vendors providing the technical tools can have the most impact here and are encouraged to mature their products to help industry with this reconciliation.
- Sponsors and CROs need to consider moving toward collecting data in SEND format wherever possible: changing lexicons/libraries to use controlled terminology instead of mapping, collecting all data electronically, and setting up studies in ways that will allow consistent, automated population of the trial design domains.

EXAMPLES OF DATA DISCREPANCIES*

Table 1. Low Impact Discrepancies – Explain the discrepancy in the Nonclinical Study Data Reviewer's Guide (nSDRG)

Scenario: Some pretest data may be present in SEND datasets but not in the study report.
  SEND dataset: Pretest data may be present, since some systems are unable to filter out these data.
  Study report: Pretest data may or may not be present, or only a subset is present.

Scenario: Values for body weight gain are represented differently between the SEND dataset and the study report.
  SEND dataset: Values will be reported interval to interval, but intervals may vary.
  Study report: Values could be reported cumulatively or in another customized way. The report may include additional cumulative gains (e.g., from start to termination, or for the recovery phase), or may contain no body weight gain data at all.

Scenario: CLSTRESC is less detailed than the study report.*
  SEND dataset: The SENDIG has no single (standard) representation for CL data.
  Study report: Report format may vary by testing facility and/or data collection/reporting system.

Recommended reconciliation process:
1. Compare the study report and the SEND dataset and identify discrepancies.
2. Reconcile in the lab data capture system, reporting system, or SEND dataset generation system; report to the vendor if a technical solution is not available.
3. Explain in the nSDRG if not reconciled.
4. Include in this Wiki.
[Figure: Lab Data Capture System feeding the Reporting Database and the SEND Database]

*From the Technical Conformance Guide: Clinical Observations (CL) Domain: Only Findings should be provided in CL; ensure that Events and Interventions are not included. Sponsors should ensure that the standardization of findings in CLSTRESC closely adheres to the SENDIG. The information in CLTEST and CLSTRESC, along with CLLOC and CLSEV when appropriate, should be structured to permit grouping of similar findings and thus support the creation of scientifically interpretable incidence tables. Differences between the representation in CL and the presentation of Clinical Observations in the Study Report which impact traceability, to the extent that terms or counts in incidence tables created from CL cannot be easily reconciled to those in the Study Report, should be mentioned in the nSDRG.
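To make that Technical Conformance Guide passage concrete, the following sketch builds a simple incidence table from a CL dataset by grouping standardized findings. It is a hedged illustration, not a validated tool: the file names are assumptions, grouping uses only CLSTRESC and CLLOC (CLSEV could be added the same way), and dose groups are labeled with the raw SETCD code from DM rather than the trial design metadata a reviewer would actually see.

```python
import pandas as pd

# Illustrative file names; a real pipeline would also use the trial design
# domains (e.g., TX) to label dose groups rather than the raw SETCD code.
cl = pd.read_sas("cl.xpt", format="xport", encoding="latin-1")
dm = pd.read_sas("dm.xpt", format="xport", encoding="latin-1")

# Attach each subject's trial set (group) code from DM to its observations.
obs = cl.merge(dm[["USUBJID", "SETCD"]], on="USUBJID", how="left")

# Count distinct subjects per standardized finding and location, mirroring
# how incidence tables in the study report are typically tabulated.
incidence = (
    obs.groupby(["SETCD", "CLSTRESC", "CLLOC"], dropna=False)["USUBJID"]
       .nunique()
       .rename("N_SUBJECTS")
       .reset_index()
)

# One row per finding, one column per group, for side-by-side comparison
# against the report; terms or counts that cannot be reconciled belong
# in the nSDRG.
print(incidence.pivot_table(index=["CLSTRESC", "CLLOC"],
                            columns="SETCD",
                            values="N_SUBJECTS",
                            fill_value=0))
```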
Table 2. High Impact Discrepancies – Reconcile if possible; if not, explain the discrepancy in the nSDRG

Scenario: Modifiers for pathology data may or may not be listed in the study report but should be in the SEND dataset (MA/MI domains).
  SEND dataset: Modifiers will be listed in --SUPP and/or --STRESC, in addition to --ORRES, depending on how data are collected and how base processes are defined by the sponsor. The --STRESC value must be scientifically meaningful but may not match the value in the incidence tables.
  Study report: Modifiers may or may not be listed as part of the base finding, which may lead to differences in incidence counts.

Scenario: Nomenclature of severity may differ between the SEND dataset and the study report.
  SEND dataset: Severity will be present as a controlled term because it is mapped to Controlled Terminology (CT).
  Study report: Severity will be listed as defined by the sponsor.

Scenario: Textual differences (controlled terminology) between reports and SEND datasets.
  SEND dataset: Uses standard Controlled Terminology.
  Study report: May or may not use standard Controlled Terminology.

If the differences are impactful to data interpretation, it is recommended that they be listed in the nSDRG.
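One way to address the severity-nomenclature row at the source, in line with the recommendation above to change lexicons/libraries toward controlled terminology, is a translation table applied when the SEND dataset is generated. The sketch below is illustrative only: the sponsor-side terms are invented, and the target terms must be taken from the current CDISC Controlled Terminology package rather than from this example.

```python
# Illustrative sponsor-lexicon -> CDISC Controlled Terminology translation for
# pathology severity grades. The left-hand terms are invented examples, and the
# right-hand terms must be verified against the current CDISC CT package.
SEVERITY_MAP = {
    "VERY SLIGHT":       "MINIMAL",
    "SLIGHT":            "MILD",
    "MODERATE":          "MODERATE",
    "MODERATELY SEVERE": "MARKED",
    "SEVERE":            "SEVERE",
}

def standardize_severity(sponsor_term):
    """Map a sponsor severity term to CT, surfacing anything unmapped for review."""
    term = sponsor_term.strip().upper()
    if term not in SEVERITY_MAP:
        # Unmapped terms are exactly the discrepancies that belong in the nSDRG.
        raise ValueError(f"No CT mapping for severity term: {sponsor_term!r}")
    return SEVERITY_MAP[term]

print(standardize_severity("very slight"))  # -> MINIMAL
```

Failing loudly on unmapped terms, rather than passing them through, keeps silent severity mismatches out of the submitted dataset.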
INTERACTIVE & COLLABORATIVE WIKI CONCEPT
The "Data Consistency: SEND Datasets and the Study Report" Working Group strongly encourages your participation in our Wiki page. This will:
- Keep the wiki page live
- Add additional discrepancies between SEND datasets and study reports
- Provide a forum to ask questions when issues are encountered
- Provide answers and feedback
Instructions on how to edit the Wiki page are provided. Please share this Wiki page with your teams!
http://www.phusewiki.org/wiki/index.php?title=Data_Consistency:_SEND_Datasets_and_Study_Report_Wiki

OUTSTANDING ISSUES
What are the PROs of an interactive wiki page?
- Real-time collaboration of SEND experts
- An increase in overall knowledge of data discrepancies
- A first step toward harmonizing approaches to avoid data inconsistencies
- A dynamic forum to stay current with the evolution of SEND

Are there any CONs?
- Contributing to the Wiki page requires some HTML skills
- Incoming additions must be maintained to avoid confusion

Recommendations have been made for many of the report/dataset consistency issues identified by our team. An immediate-term solution is typically needed (referencing the anomaly in the nSDRG), while documentation and collection practices, as well as collection and reporting systems, continue to adapt to the requirement for electronic data submissions in a standardized format.

Direct feedback from the FDA would provide definitive direction for the industry on a few items, categorized and summarized below. An open conversation with a panel of FDA reviewers would be useful in order to develop a go-forward recommendation.
- Is referencing this difference in the nSDRG sufficient, or is there an expectation that the datasets will have the same number of records as the report with respect to pretest/acclimation data?
- Is it acceptable to simply state in the nSDRG that rounding differences exist?
- Are there cases where these differences are acceptable, considering the effort that manual intervention would require?
- Is the inclusion of replaced-animal data meaningful to a reviewer? Is there a difference as to timing (i.e., before first dose versus after first dose)? Is an explanation in the nSDRG sufficient?
- Does the FDA expect manual correlation of RELREC data for MA/MI? For other domains?
- Why do we have to submit BG when BW data would allow the FDA, with Janus, to analyze the data as desired? How important is it that the intervals in the SEND dataset match those in the study report exactly?

CONCLUSIONS
Differences between CDISC Standard for the Exchange of Nonclinical Data (SEND) datasets and the study report are likely to occur and should be listed and explained in the Nonclinical Study Data Reviewer's Guide (nSDRG). Data discrepancies are largely a function of the immaturity of SEND generation tools and of overall industry inexperience with this new technology. The SEND landscape is changing rapidly as experience increases and tools develop. The wiki page concept allows the industry to stay abreast of developments and to share information as well as solutions.

* See the Wiki Page for the full list of discrepancies identified.