FDA Hematology and Pathology Devices Panel Meeting October 22-23, 2009


Practical Issues on Clinical Validation of Digital Imaging Applications in Routine Surgical Pathology
Tan Nguyen, MD, PhD, RAC, FDA/CDRH/OIVD/DIHD-DCTD
FDA Hematology and Pathology Devices Panel Meeting, October 22-23, 2009

Digitalization Not a Barrier to Pathologic Diagnosis
- Image-based telepathology has been in place for a number of years
- Capable automated high-speed, high-resolution whole slide imaging (WSI) technology is now available
- At issue: How can we demonstrate that pathologists can safely and effectively sign out routine surgical cases via WSI of H&E glass slides?
  - Compare with diagnoses made by light microscopy

Presentation Outline
- Quality of images
  - Image acquisition, image display
- Clinical performance study
  - Possible study designs
  - Selection of study participants
  - Case (specimen) selection
  - Establishing “reference” diagnosis
  - Evaluating diagnosis agreement
- Other issues

Image Acquisition
- Optimal objective lens power for image scanning?
- Digital magnification or magnification by interchangeable objective lenses?
- Single focal plane or 3-D image enhancement?
  - Z-stacks needed for certain examinations (e.g., surgical margins, H. pylori, microcalcifications, nucleoli)
- Compression algorithm, user-selectable ratio?
  - Diagnosis made on an uncompressed image or on an image retrieved from a previously compressed image data file?

Image Display
- Viewing monitor
  - Standardized size, aspect ratio, display resolution (low, medium, high)?
- Viewing software
  - Image storage, retrieval, annotation
- Viewer functionality
  - “Thumbnail” view
  - Panning, zooming, side-by-side viewing of multiple images

Types of Possible Clinical Study
- Prospective study (“field study”)?
  - Replicating real-world surgical pathology practice
  - Minimizing case selection bias
  - Introducing multiple new sources of variation, e.g., non-uniform specimen selection/suitability, variable quality of glass slides
  - Impractical? Resource constraints at each study site; possibly longer overall study duration

Types of Possible Clinical Study
- Retrospective study?
  - Ability to select archival cases to challenge (“stress test”) the competing diagnostic modalities
  - Possible to incorporate more case variation
  - Inherent case selection bias
  - Often employed in MRMC ROC studies* to assess diagnostic accuracy of radiologic imaging interpretations
  - Large study to detect small differences in accuracy possible

* Multiple-reader, multiple-case receiver operating characteristic studies

MRMC ROC Paradigm
- Possible to adopt the MRMC ROC paradigm?
  - Frequently used tool in diagnostic radiology
  - More information per case, smaller sample sizes
  - Ability to compare the accuracy of diagnostic modalities that rely on a wide range of subjective interpretations by readers of varying skill levels
  - Generalizable to similar readers and similar cases
  - Potentially complicated by multiple observations (diagnoses) in the same specimen
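
As a rough illustration of how MRMC-style reading data could be summarized, the Python sketch below computes a per-reader ROC AUC for each modality (glass versus WSI) and the mean paired difference across readers. The reader and case counts, the confidence scores, and the simple averaging are all invented for illustration; a real submission would fit a full MRMC model (e.g., Dorfman-Berbaum-Metz or Obuchowski-Rockette) that properly handles correlation across readers and cases.

```python
# Hypothetical sketch (not from the presentation): per-reader AUC by modality.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n_readers, n_cases = 12, 200                     # within the typical 10-20 readers, 100-300 cases
truth = rng.integers(0, 2, size=n_cases)         # reference diagnosis: 1 = malignant (simulated)

# Confidence-of-malignancy scores per reader and modality, simulated for illustration.
scores_glass = rng.uniform(0, 100, size=(n_readers, n_cases)) + 30 * truth
scores_wsi = rng.uniform(0, 100, size=(n_readers, n_cases)) + 28 * truth

auc_glass = np.array([roc_auc_score(truth, s) for s in scores_glass])
auc_wsi = np.array([roc_auc_score(truth, s) for s in scores_wsi])
diff = auc_wsi - auc_glass                       # paired per-reader differences

print(f"mean AUC: glass {auc_glass.mean():.3f}, WSI {auc_wsi.mean():.3f}")
# The naive SE below ignores case correlation; an MRMC model accounts for it properly.
print(f"mean difference {diff.mean():.3f}, naive SE {diff.std(ddof=1) / np.sqrt(n_readers):.3f}")
```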

Selecting Study Participants
- Spectrum of pathologists, from those without formal specialty training to specialty experts, or a more homogeneous population?
- Prior exposure to digital pathology
- Study locations: community/academic practices, commercial laboratories
- Number of study participants?
  - Traditional MRMC ROC studies: 10-20 readers; 100-300 cases

Selecting a Balanced Set of Cases
- Adequate mix of specimens, from biopsies to radical excisions
- Broad spectrum of diagnostic complexity
  - Not based on ease of diagnosis or typicality of appearance
- Randomly or sequentially selected specimens
- Anonymized archival or prospectively collected cases
- Use of enriched samples for low-prevalence diseases?
- Including all or only representative diagnostic part(s)?
- How many cases? Statistical power weighed against reader burden
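
To make the power-versus-reader-burden trade-off concrete, here is a hypothetical Monte Carlo sketch that estimates how many cases a paired study might need to detect a given drop in agreement with the reference diagnosis. The assumed agreement rates (95% versus 91%), the independence between modalities, and the McNemar-style exact test are illustrative assumptions only, not a recommended design.

```python
# Hypothetical sketch (invented rates): Monte Carlo power for a paired agreement comparison.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
p_glass, p_wsi = 0.95, 0.91      # assumed agreement rates with the reference diagnosis
alpha, n_sims = 0.05, 2000

def power(n_cases):
    """Fraction of simulated studies that find a significant modality difference."""
    hits = 0
    for _ in range(n_sims):
        glass = rng.random(n_cases) < p_glass    # True = agrees with reference
        wsi = rng.random(n_cases) < p_wsi        # independence here is a simplification
        b = int(np.sum(glass & ~wsi))            # discordant: glass agrees, WSI does not
        c = int(np.sum(~glass & wsi))            # discordant: WSI agrees, glass does not
        if b + c > 0 and stats.binomtest(b, b + c, 0.5).pvalue < alpha:
            hits += 1
    return hits / n_sims

for n in (100, 200, 300):
    print(n, "cases -> estimated power", round(power(n), 2))
```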

Observer Variation
- Inherent subjectivity in interpretation thresholds, e.g., “atypia,” tumor grading, borderline or uncommon lesions
- Paucity of lesional area; intra-lesional variation
- Lack of clear diagnostic criteria
  - Non-quantitative nature of scoring (e.g., pleomorphism)
  - Subjective distinctions on a histologic continuum
- Broad spectrum of experience and confidence
  - Diagnostic “aggressiveness” or hedging under uncertainty

Reducing Observer Variation
- Strict adherence to diagnostic criteria and guidelines
- Use of a pro forma histopathology reporting form
  - Use of checklists of standardized diagnostic lines
- Free-text diagnosis for diagnostic uncertainty?
  - Accommodates personal reporting style and judgment
  - Statistically problematic to evaluate
- Collapsed 2-tiered versus 3-tiered grading system?
- Circulating an annotated training set prior to the study?

Establishing the Light Microscopy “Reference” Diagnosis
- Diagnosis by an expert or by a consensus panel? Number of experts?
- Consensus diagnosis by the study participants themselves?
- Unanimous agreement or majority agreement?
- Allowing an “acceptable” diagnosis? Disagreement in opinion, but not an “error” (i.e., no amendment necessary)?
- Should the “reference” diagnosis be an abstraction of the primary diagnosis or include all diagnostic lines?
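
As a minimal sketch of the unanimous-versus-majority question, the snippet below derives a panel reference diagnosis by majority vote and flags cases with no majority for adjudication or exclusion. The case identifiers and diagnoses are invented; how ties and “acceptable” alternatives are handled remains a study-design decision.

```python
# Hypothetical sketch (invented cases): majority-vote reference diagnosis from a 3-member panel.
from collections import Counter

panel_reads = {
    "case-001": ["DCIS", "DCIS", "ADH"],
    "case-002": ["benign", "ADH", "DCIS"],        # no majority: needs adjudication
    "case-003": ["invasive carcinoma"] * 3,       # unanimous
}

for case_id, reads in panel_reads.items():
    diagnosis, votes = Counter(reads).most_common(1)[0]
    if votes > len(reads) / 2:
        status = "unanimous" if votes == len(reads) else "majority"
        print(f"{case_id}: reference = {diagnosis} ({status})")
    else:
        print(f"{case_id}: no majority; adjudicate or exclude")
```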

Evaluating Diagnosis Agreement
- Primary diagnosis agreement only?
  - Secondary diagnoses often pose no clinical impact
  - But unacceptable for a pathologist simply to make an accurate diagnosis of malignancy!
- Line-by-line agreement (primary and secondary diagnoses)?
  - Ideal for collecting performance testing data
  - Unrealistic to expect high agreement without clearly defined diagnostic criteria for all lesions under inquiry
  - Incomplete agreement on secondary diagnoses?

Evaluating Diagnosis Agreement
- “Major” versus “minor” discrepancy
  - Determined by clinical impact or by flat-out histopathologic error?
  - Compound nevus versus junctional nevus; CIN II versus CIN III → a flat-out error, but no difference in treatment
  - Tumor on the inked margin versus within 1 mm of the inked margin in a breast biopsy → often a subjective call if the specimen is not adequately inked, but one that greatly affects the treatment decision
- False-positive versus false-negative diagnoses: treated differently or equally in the statistical evaluation?
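
Where diagnoses can be coded on an ordinal severity scale, a weighted kappa is one conventional way to let minor (adjacent-category) discrepancies count less than major ones. The sketch below illustrates this with an invented four-level coding and invented paired readings; it is not a recommended scale or threshold.

```python
# Hypothetical sketch (invented readings): agreement and kappa on an ordinal severity scale.
import numpy as np
from sklearn.metrics import cohen_kappa_score

# 0 = benign, 1 = atypia, 2 = in situ, 3 = invasive (illustrative coding only)
glass = np.array([0, 1, 1, 2, 2, 3, 3, 0, 1, 2, 3, 0])
wsi = np.array([0, 1, 2, 2, 2, 3, 2, 0, 0, 2, 3, 0])

print("raw agreement:", round(float(np.mean(glass == wsi)), 2))
print("unweighted kappa:", round(float(cohen_kappa_score(glass, wsi)), 2))
# Linear weights penalize distant ("major") disagreements more than adjacent ("minor") ones.
print("linearly weighted kappa:", round(float(cohen_kappa_score(glass, wsi, weights="linear")), 2))
```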

Evaluating Diagnosis Agreement
[Diagram: pairwise agreement comparisons, labeled R1, R2, and R3, among the panel’s “reference” diagnoses by light microscopy, the participants’ diagnoses by light microscopy, and the participants’ diagnoses by digital pathology]

“Wash-out” Period
- E.g., a study in which the same pathologist reads:
  - ½ of the cases: digital imaging followed by light microscopy
  - ½ of the cases: light microscopy followed by digital imaging
- “Wash-out” period between the digital imaging reading and the light microscopy reading? Easier said than done!
- Perhaps not necessary, if it is desirable to know whether one modality, when seen first, improves the agreement rate of the subsequent one?
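
One way to operationalize the half-and-half crossover on this slide is to randomize the reading order per case, as in the hypothetical sketch below; the case identifiers and the two-week wash-out value are placeholders, and whether any wash-out interval is adequate is exactly the open question raised here.

```python
# Hypothetical sketch (invented case IDs and interval): randomizing reading order per case.
import random

random.seed(42)
case_ids = [f"case-{i:03d}" for i in range(1, 201)]
random.shuffle(case_ids)

half = len(case_ids) // 2
reading_order = {cid: ("WSI then glass" if i < half else "glass then WSI")
                 for i, cid in enumerate(case_ids)}

WASHOUT_WEEKS = 2   # placeholder; the adequate interval is itself a study-design question
print(reading_order["case-001"], "| wash-out:", WASHOUT_WEEKS, "weeks")
```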

Evaluating Diagnosis Agreement
- If there is significant disagreement among R1, R2, and R3, possible sources include:
  - Case-sample variation
  - Intra- and interobserver variation
  - Variation intrinsic to each diagnostic modality
- Is it possible, or necessary, to tease out all of these variations?
- Or account for the effects of case and reader variation on the accuracy of the competing diagnostic modalities (e.g., with MRMC statistical models) and then compare overall accuracy?
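
As a crude stand-in for a formal MRMC analysis, the sketch below bootstraps over both readers and cases to put an interval around the difference in agreement between modalities; the agreement matrices are simulated, and a regulatory analysis would more likely rely on an established MRMC method than on this shortcut.

```python
# Hypothetical sketch (simulated data): two-way bootstrap over readers and cases.
import numpy as np

rng = np.random.default_rng(7)
n_readers, n_cases = 12, 200

# agree_*[r, c] = True if reader r's diagnosis of case c matches the reference diagnosis
agree_glass = rng.random((n_readers, n_cases)) < 0.95
agree_wsi = rng.random((n_readers, n_cases)) < 0.93

diffs = []
for _ in range(2000):
    r_idx = rng.integers(0, n_readers, n_readers)    # resample readers with replacement
    c_idx = rng.integers(0, n_cases, n_cases)        # resample cases with replacement
    g = agree_glass[np.ix_(r_idx, c_idx)].mean()
    w = agree_wsi[np.ix_(r_idx, c_idx)].mean()
    diffs.append(w - g)

lo, hi = np.percentile(diffs, [2.5, 97.5])
print(f"WSI minus glass agreement: 95% bootstrap CI [{lo:.3f}, {hi:.3f}]")
```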

Other Issues
- Assuming valid performance data exist for one tissue type (e.g., breast pathology):
  - Can the test system be generalized and labeled for all other surgical pathology tissue types without the need for further validations?
  - Can it be generalized and labeled for intraoperative (frozen section) diagnosis and telepathology?
  - If not, how should the label explicitly state the test system’s limitations?

Other Issues
- Generalizing performance of WSI of H&E glass slides to non-H&E-stained glass slides?
- Required training of pathologists prior to using WSI? What type of training?
- Need for a post-marketing study for additional safety and effectiveness data? How to conduct such a study? What data to collect?