
Risk-Based Monitoring Quantitative Metrics Toolkit 04 November 2016

Contents
- Purpose of this Document (slide 3)
- Objectives (slide 4)
- Process (slide 5)
- Core Metrics (slides 6-7)
- Recommendations for Historical Control Calculations (slide 8)
- Recommendations for Historical Control Periods (slide 9)
- Guidance for Potential Surrogates (slide 10)
- Establishing Target Ranges (slides 11-12)
- Expected Observations and Potential Alternative Observations (slides 13-14)
- Final Considerations (slide 15)
- RBM Metrics Report 1Q2016, bi-annual (slides 16-26)
  - Trial Inventory (RBM uptake, trials planned or in progress) (slide 17)
  - How to read the metrics (stacked charts) (slide 18)
  - Actual metrics report from 1Q2016 (slides 19-26)

Purpose of this Document

As the TransCelerate RBM initiative developed the methodology and framework for voluntary RBM implementation, a need was recognized to quantitatively evaluate progress and impact. The team developed eight core metrics that can be used to evaluate the impact of the TransCelerate RBM methodology on clinical development programs across three broad categories: Quality, Efficiency, and Cycle Time. A process was developed to support sponsors in determining historical controls, setting target values, measuring the metrics, and assigning a dashboard rating. All TransCelerate RBM metrics have been reported anonymously to a neutral third party for aggregation.

Over time, it was recognized that companies needed the flexibility to define the metrics differently because of internal systems, procedures, metrics, etc. Guidance was provided to companies to move toward consistent measurement of some metrics, in accordance with regulatory agency requests; companies may determine the extent to which they follow the recommendations. Additional guidance is also provided to assist with historical control calculations and periods, potential surrogates, and setting target ranges. All information, recommendations, and guidance contained herein are voluntary for sponsors to utilize.

Quantitative Metric Collection Objectives

Determine the impact and effectiveness of the proposed RBM methodology on managing quality and risks associated with the conduct of clinical trials. Determine whether the RBM methodology works from the standpoint of operational impact on an organization, clinical sites, and investigators.

Keep in mind: any benefits realized must be accompanied by either an improvement in, or maintenance of, current levels of data quality and subject safety.

RBM Quantitative Metric Analysis Process

Step 1: Determine Historical Control. Recommendation: generate the historical control at one of four levels: (1) IP level, (2) therapy area level, (3) cumulative, or (4) split study. The control will remain constant throughout the assessment.
Step 2: Set Target Values. Recommendation: determine target values on three levels: (1) improvement at X%, (2) worsening at Y%, and (3) negligible change ("about the same") between X% and Y%.
Step 3: Measure Metrics. Measure the metrics that are feasible to quantify.
Step 4: Assign Dashboard Rating. Compare each quarterly metric to the target values and assign a dashboard rating of better, about the same, or worse (see the sketch below).
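To make the rating step concrete, here is a minimal sketch in Python. The function names, the percent-change convention, and the treatment of X% and Y% as signed thresholds are illustrative assumptions, not part of the toolkit; each sponsor sets its own targets and directionality.

```python
def percent_change(current: float, control: float) -> float:
    """Percent change of the quarterly metric versus the historical control."""
    return 100.0 * (current - control) / control

def dashboard_rating(current: float, control: float,
                     improve_pct: float, worsen_pct: float,
                     lower_is_better: bool = True) -> str:
    """Assign 'Better' / 'About the Same' / 'Worse' per the target values.

    improve_pct / worsen_pct play the role of the X% / Y% targets;
    lower_is_better flags metrics where a decrease is favorable
    (e.g., audit findings) versus ones where an increase is favorable
    (e.g., interval between on-site visits).
    """
    change = percent_change(current, control)
    if not lower_is_better:
        change = -change  # flip so a negative change is always an improvement
    if change <= -improve_pct:
        return "Better"
    if change >= worsen_pct:
        return "Worse"
    return "About the Same"

# Example: query open-to-close came down from a 10-day control to 8 days,
# with +/-10% targets -> a 20% improvement, rated "Better".
print(dashboard_rating(current=8, control=10, improve_pct=10, worsen_pct=10))
```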

Core RBM Quantitative Metrics (1 of 2)
Optional guidance is offered for consistency amongst sponsors.

Quality
- Number and classification of major/critical audit findings per site audit. Guidance: Better +25%; Worse -25%; About the Same > -25% and < +25%.
- Number of unreported, confirmed SAEs as discovered through any method. Guidance: Better +10%; Worse -10%; About the Same > -10% and < +10%.
- Significant Protocol Deviation rate per treated subject (total # of deviations / total # of subjects for the protocol). Guidance: recommend normalizing to per treated subject or patient; do not include bio or stat programmed reports; include only Significant PDs identified by central or site monitoring or data cleaning activities.

Efficiency
- Average monitoring (all types) cost per site.
- Average interval between on-site monitoring visits per site. Guidance: recommend measuring start date to start date.

Core RBM Quantitative Metrics (2 of 2)

Cycle Time
- Median number of days from visit to eCRF data entry. Guidance: primary recommendation is to use first data entered. Better +10%; Worse -10%; About the Same > -10% and < +10%.
- Median number of days from query open to close. Guidance: recommend focusing on site activity; use the response by site for consistency; exclude any auto-generated queries.
- Median days from significant/major issue open to close. Guidance: recommend focusing on significant or major findings if able to; companies should define what they consider significant.
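The thresholds above lend themselves to a small, table-driven configuration. A sketch, assuming the dashboard_rating helper from the earlier snippet is in scope; the metric keys, the lower_is_better flags, and the numbers in the example call are illustrative assumptions:

```python
# Illustrative per-metric targets drawn from the optional guidance;
# the direction flags are assumptions, not part of the toolkit.
METRIC_TARGETS = {
    "audit_findings_per_site_audit": {"improve_pct": 25, "worsen_pct": 25, "lower_is_better": True},
    "unreported_confirmed_saes":     {"improve_pct": 10, "worsen_pct": 10, "lower_is_better": True},
    "days_visit_to_ecrf_entry":      {"improve_pct": 10, "worsen_pct": 10, "lower_is_better": True},
}

def rate_all(quarterly: dict, controls: dict) -> dict:
    """Rate every configured metric for the quarter against its control."""
    return {
        name: dashboard_rating(quarterly[name], controls[name], **targets)
        for name, targets in METRIC_TARGETS.items()
        if name in quarterly and name in controls
    }

# Example quarter versus controls (made-up numbers):
print(rate_all(
    quarterly={"audit_findings_per_site_audit": 1.4, "days_visit_to_ecrf_entry": 9},
    controls={"audit_findings_per_site_audit": 2.0, "days_visit_to_ecrf_entry": 10},
))
```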

Recommendations for Historical Control Calculation
Metric determination, for both historical controls and ongoing metrics:
- Average number of major/critical audit findings per site audit: calculate the number of findings and divide by the number of site audits per quarter. Potential sources: QA audit reports.
- Percentage of unreported, confirmed SAEs as compared to total SAEs as discovered through any method: calculate the number of findings and divide by the total number of SAEs. Potential sources: monitoring reports.
- Significant Protocol Deviation rate per treated subject (total # of deviations / total # of subjects for the protocol): calculate the number of findings and divide by the total number of treated subjects. The definition of "significant" is to be set by each sponsor. Potential sources: EDC platform.
- Average monitoring (all types) cost per site: compile all costs associated with monitoring the trial and divide by the number of sites. Potential sources: CTMS, finance.
- Average interval between on-site monitoring visits per site: determine the interval between on-site monitoring visits for all sites and divide by the number of sites. If a site has not had a second visit in the quarter, omit that site from the analysis. Potential sources: CTMS.
- Median number of days from patient visit to eCRF data entry: calculate the median.
- Median number of days from query open to close: calculate the median.
- Median number of days from significant/major issue open to close: calculate the median. The terms "issue", "open", and "close" are to be defined by each sponsor. Potential sources: issue tracking system.
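As a minimal illustration of these calculations (the function names and example inputs are hypothetical; real inputs would come from QA audit reports, monitoring reports, EDC, CTMS, or finance systems):

```python
from statistics import median

def avg_findings_per_site_audit(findings_count: int, site_audits: int) -> float:
    """Average number of major/critical audit findings per site audit."""
    return findings_count / site_audits

def pct_unreported_saes(unreported_confirmed: int, total_saes: int) -> float:
    """Percentage of unreported, confirmed SAEs versus total SAEs."""
    return 100.0 * unreported_confirmed / total_saes

def significant_pd_rate(significant_pds: int, treated_subjects: int) -> float:
    """Significant Protocol Deviation rate per treated subject."""
    return significant_pds / treated_subjects

def median_days(day_counts: list[int]) -> float:
    """Median for the cycle-time metrics (visit-to-entry, query, issue)."""
    return median(day_counts)

# Examples with made-up numbers:
print(avg_findings_per_site_audit(12, 6))   # 2.0 findings per audit
print(median_days([3, 5, 8, 2, 10]))        # 5 days
```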

Recommendations for Historical Control Periods
- Average number of major/critical audit findings per site audit: due to lower frequencies, consider at least a 2-year sample.
- Percentage of unreported, confirmed SAEs as compared to total SAEs; Significant Protocol Deviation rate per treated subject: consider at least a 1-year sample.
- Average monitoring (all types) cost per site; average interval between on-site monitoring visits per site; median days from patient visit to eCRF data entry; median days from query open to close; median days from significant/major issue open to close: due to fluctuations in costs and the time value of money, consider at most a 1-year sample.
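A minimal sketch of carving out such a sample period, assuming records carry a date field (the record shape and field names are hypothetical):

```python
from datetime import date, timedelta

def lookback_window(records: list[dict], years: float,
                    as_of: date, date_key: str = "date") -> list[dict]:
    """Keep only records inside the historical-control sample period."""
    cutoff = as_of - timedelta(days=round(365.25 * years))
    return [r for r in records if cutoff <= r[date_key] <= as_of]

# Example: a 2-year sample for audit findings.
audits = [{"date": date(2015, 3, 1), "findings": 3},
          {"date": date(2013, 6, 1), "findings": 1}]
print(lookback_window(audits, years=2, as_of=date(2016, 1, 1)))
# -> only the 2015 record remains
```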

Guidance for Potential Surrogates
- Percentage of unreported, confirmed SAEs as compared to total SAEs as discovered through any method: if a field does not exist on the monitoring report, consider comparing the data-entry date of SAEs with the on-site monitoring visit date, e.g., calculate the number of SAEs with a start date that were data-entered 2 or more days after an on-site monitoring visit, and divide by the total number of SAEs.
- Significant Protocol Deviation rate per treated subject (total # of deviations / total # of subjects for the protocol): consider defining "significant" as those protocol deviations impacting primary or secondary endpoints.
- Average monitoring (all types) cost per site: if direct costs cannot be obtained, consider collaborating with finance to estimate; also consider determining the average cost of a visit and using the decreased visit frequency to estimate.
- Average interval between on-site monitoring visits per site: determine the interval between on-site monitoring visits for all sites and divide by the number of sites; if a site has not had a second visit in the quarter, omit that site from the analysis.
No surrogates are suggested for the audit findings metric or the three cycle-time medians.
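The SAE surrogate above reduces to a date comparison. A minimal sketch, where the entered_on and last_visit field names are hypothetical placeholders for the eCRF entry date and the most recent on-site monitoring visit date:

```python
from datetime import date

def surrogate_unreported_sae_pct(saes: list[dict]) -> float:
    """Surrogate: share of SAEs data-entered 2+ days after an on-site visit.

    Each record is assumed to carry 'entered_on' (eCRF entry date) and
    'last_visit' (most recent on-site monitoring visit date); both field
    names are illustrative.
    """
    late = sum(1 for s in saes
               if (s["entered_on"] - s["last_visit"]).days >= 2)
    return 100.0 * late / len(saes)

saes = [
    {"entered_on": date(2016, 2, 10), "last_visit": date(2016, 2, 1)},  # late
    {"entered_on": date(2016, 2, 2),  "last_visit": date(2016, 2, 1)},  # on time
]
print(surrogate_unreported_sae_pct(saes))  # 50.0
```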

Establishment of Target Ranges (1 of 2)
Using historical controls as a guide, set ranges that will indicate "better", "about the same", or "worse" since last quarter. If a historical control cannot be calculated for any reason, the quantitative target ranges will have to be set qualitatively, using best judgment.
Example: median number of days from query open to close, with a historical control of 10 days. The metric, when compared to the target ranges, informs the dashboard ranking:
- </= 8 days would be better (e.g., could be the top 10%, or -2 standard deviations)
- 8-12 days would be about the same (e.g., could be the middle 80%, or +/- 1 standard deviation)
- >/= 12 days would be worse (e.g., could be the bottom 10%, or +2 standard deviations)
[Diagram: a range anchored at the historical control of 10 days, with "Better" below 8 days, "About the Same" between 8 and 12 days, and "Worse" above 12 days]
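One way to derive such cut points from historical data is sketched below; whether to use percentiles or standard deviations (both floated in the example above) is left to the sponsor, and the sample data are invented:

```python
from statistics import mean, stdev

def target_ranges_sd(history: list[float], k: float = 1.0) -> tuple:
    """Cut points at +/- k standard deviations around the historical mean.

    For a lower-is-better metric, values below the first cut point rate
    "better" and values above the second rate "worse".
    """
    m, s = mean(history), stdev(history)
    return (m - k * s, m + k * s)

# Example: historical query open-to-close durations clustering near 10 days.
history = [9, 10, 11, 8, 12, 10, 10, 9, 11, 10]
low, high = target_ranges_sd(history)
print(round(low, 1), round(high, 1))  # roughly 8.8 and 11.2 days
```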

Establishment of Target Ranges (2 of 2)
Example: average number of major/critical audit findings per site audit, with a historical control of 2 findings. The metric, when compared to the target ranges, informs the dashboard ranking:
- </= 1 finding would be better (e.g., could be the top 10%, or -2 standard deviations)
- 1-2 findings would be about the same (e.g., could be the middle 80%, or +/- 1 standard deviation)
- >/= 2 findings would be worse (e.g., could be the bottom 10%, or +2 standard deviations)
Note that the range is tighter for this particular metric. If no audits have been performed, the metric would be 0 and, per this target range, the dashboard ranking would be better.
[Diagram: a range anchored at the historical control of 2 findings, with "Better" below 1 finding, "About the Same" between 1 and 2 findings, and "Worse" above 2 findings]

These Metric Narratives illustrate the expected impact of RBM as well as potential alternative observations (1 of 2)
- Average number of major/critical audit findings per site audit. Expected: given RBM, the average number of major/critical findings per site audit should decrease. Potential alternative: early in implementation, findings may rise because the procedure is new, not necessarily because of the focus on critical data and processes, with the expectation that they would decrease over time.
- Percentage of unreported, confirmed SAEs as compared to total SAEs as discovered through any method. Expected: given RBM, the number of unreported, confirmed SAEs should decrease. Potential alternative: early in implementation, findings may rise due to the shift in focus from SDV to SDR, with the expectation that they would decrease over time.
- Significant Protocol Deviation rate per treated subject (total # of deviations / total # of subjects for the protocol). Expected: given RBM, the number of Significant Protocol Deviations should decrease. Potential alternative: implementation of a new process for protocol deviation review may produce unexpected increases.

These Metric Narratives illustrate the expected impact of RBM as well as potential alternative observations (2 of 2)
- Average monitoring (all types) cost per site. Expected: given RBM, average monitoring costs should decrease. Potential alternative: costs may remain flat until the second quarter of analysis or later.
- Average interval between on-site monitoring visits per site. Expected: given RBM, the interval between on-site monitoring visits should increase. Potential alternative: the average interval may remain flat until the second quarter of analysis or later.
- Median number of days from patient visit to eCRF data entry. Expected: given RBM, there is no expectation for this metric, although a decrease would be beneficial. Note: this metric measures an unintended consequence of RBM, namely a site's delay in performing a crucial function that empowers central monitoring, due to the potential decrease in on-site visits.
- Median number of days from query open to close. Expected: given RBM, there is no expectation for this metric, although a decrease would be beneficial. Note: this metric likewise measures an unintended consequence of RBM, namely a site's delay in performing a crucial function due to the potential decrease in on-site visits.
- Median number of days from significant/major issue open to close. Expected: given RBM, the median number of days from issue open to close should decrease. Potential alternative: early in implementation, findings may rise if the issues-management process is new to the organization, with the expectation that they would decrease over time.

Final Considerations
Once the historical controls for your metrics are determined, they should remain static for the duration of the analysis. As RBM becomes more pervasive in a company ("business as usual"), controls will switch from "non-RBM trials vs. RBM trials" to your normal baseline for control (e.g., the previous calendar year).

Progression of RBM Uptake
[Charts: "Progression of RBM Uptake at Member Companies" and "Blinded Inventory of RBM Trials (planned and ongoing)"]
Voluntary adoption information is reported to a blinded third party for aggregation.

RBM Metrics: Context for Collection and Analysis

Collection: the process and its limitations span 81 trials eligible to report data across 7 companies for the periods ending Q4 2015 and Q1 2016 (out of an inventory of 162 trials planned or in progress).

Analysis: trial data with a similar level of maturity ("RBM + x months") are aggregated into stacked charts. In the example below, the first quarter of metric data for each trial was grouped into one stacked chart (red circles in the original slide), the second quarter of metric data was grouped for analysis (green circles), and so on, regardless of actual calendar quarter. Note that not all metrics are reported for all trials across all time periods.

[Example layout for Metric #1:
Trial 1: RBM + 3, 6, 9, 12, and 15 months
Trial 2: RBM + 3, 6, 9, and 12 months
Trial 3: RBM + 3 and 6 months
Trial 4: RBM + 3 months]
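A minimal sketch of that maturity-based grouping, assuming each report row carries the trial, its months on RBM, and its dashboard rating (hypothetical field names):

```python
from collections import defaultdict

def group_by_maturity(reports: list[dict]) -> dict:
    """Aggregate ratings by trial maturity ("RBM + x months"),
    regardless of calendar quarter."""
    buckets: dict = defaultdict(list)
    for r in reports:
        buckets[r["rbm_months"]].append(r["rating"])
    return dict(buckets)

reports = [
    {"trial": 1, "rbm_months": 3, "rating": "Better"},
    {"trial": 2, "rbm_months": 3, "rating": "About the Same"},
    {"trial": 1, "rbm_months": 6, "rating": "Better"},
]
print(group_by_maturity(reports))
# {3: ['Better', 'About the Same'], 6: ['Better']}
```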

How to read the metrics (stacked charts)

What do the X and Y axes represent? The Y-axis (the stacks, and the numbers inside them) represents the total number of trials reported by member companies (via voluntary reporting) that used RBM methods during the report period. The X-axis shows how long the trials have been running using RBM methods (e.g., "RBM + 3 months" = a trial using the RBM methodology for 3 months).

What do the colors mean?
- Dark blue indicates that the metric for this reporting period for these trials is Better compared to the control.
- Light blue indicates that the metric is About the Same compared to the control.
- Yellow indicates that the metric is Worse compared to the control.

Refer to slides 6-7 for the optional guidance defining "Better", "About the Same", and "Worse".
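As an illustration of how such a stacked chart is assembled, a sketch using matplotlib with invented trial counts:

```python
import matplotlib.pyplot as plt

# Invented counts of trials rated Better / About the Same / Worse
# at each maturity bucket.
maturities = ["RBM + 3 mo", "RBM + 6 mo", "RBM + 9 mo"]
better, same, worse = [10, 7, 4], [5, 4, 2], [2, 1, 1]

fig, ax = plt.subplots()
ax.bar(maturities, better, label="Better", color="darkblue")
ax.bar(maturities, same, bottom=better, label="About the Same", color="lightblue")
bottoms = [b + s for b, s in zip(better, same)]
ax.bar(maturities, worse, bottom=bottoms, label="Worse", color="gold")
ax.set_ylabel("Number of trials reported")
ax.legend()
plt.show()
```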

Actual metrics report from 1Q2016. Each chart below carries the caveat that not all metrics are reported for all trials across all time periods.

Quality: Audit findings per audited site
[Stacked chart: average number of major/critical audit findings per audited site]

Quality: SAE reporting
[Stacked chart: percentage of unreported, confirmed SAEs as compared to total SAEs]

Quality: Significant Protocol Deviations
[Stacked chart: Significant Protocol Deviation rate per treated subject]

Efficiency: Overall Monitoring Cost
[Stacked chart: average monitoring (all types) cost per site]

Efficiency: On-site visit interval
[Stacked chart: average interval between on-site monitoring visits per site]

Cycle Time: eCRF Entry
[Stacked chart: median number of days from patient visit to eCRF data entry]

Cycle Time: Query Open to Close
[Stacked chart: median number of days from query open to close]

Cycle Time: Issue Open to Close
[Stacked chart: median number of days from issue open to close]