
1 MITA Observations On Draft CADe Guidances Released by FDA, October 21, 2009

2 Protocol for Use¹
The most common CADe clinical protocol for use is:
1. Physician reads images and forms a primary opinion
2. Physician triggers the CADe display and reviews regions indicated by CADe
3. Physician forms a final opinion
CADe is an adjunctive decision-making tool that does not result directly in biopsy
– A highly trained radiologist makes the decisions
¹ For most second-reading CADe devices

3 Always/Never Rule¹
A physician should always read the case completely prior to displaying the CADe marks
A physician should never fail to work up a finding of concern, even if CADe did not mark it
¹ For most second-reading CADe devices

4 Requirement for Multiple Scans of the Same Patient
Drs. Pisano and Zuley have clearly expressed¹:
– The medical community's disapproval of double exposure unless medically indicated
– The limits that double exposure would impose on patient enrollment
FDA does not require multiple scans of the same patient in digital X-ray or CT submissions
Alternatives exist through the use of phantoms and simulation
Multiple scans should not be required under any circumstances
¹ Radiological Devices Advisory Panel, November 17, 2009

5 FDA CADe Definition
FDA Guidance defines CADe as:
– "… intended to identify, mark, highlight, or in any other manner direct attention to portions of an image, or aspects of radiology device data, that may reveal abnormalities during interpretation of patient radiology images or patient radiology device data by the intended user."
This definition is too vague; a more precise and consistent distinction between CADe and non-CADe products is needed.

6 Comparison with Predicate
FDA suggests¹ that a 510(k) submission include a performance comparison with the predicate device
Industry cannot generally comply:
– The predicate device may be a competitor's and hence unavailable
– The predicate dataset is unlikely to be available
– Information about the composition of the predicate dataset is unlikely to be available
¹ CADe 510(k) Draft Guidance, page 13

7 Restructuring the Clinical Guidance Document
The clinical guidance document is confusing because it interweaves 510(k) and PMA content
FDA could make the submission process less burdensome by separating and clarifying the difference in clinical requirements between 510(k) and PMA

8 Determining Intended Use
FDA's published Intended Use guidance¹ instructs that manufacturers may:
– Define the intended use
– Contraindicate other uses for which they have not validated device performance
The intended use statement and the data provided in the labeling will reflect how the CADe device should be used.
¹ Determination of Intended Use for 510(k) Devices; Guidance for CDRH Staff (Update to K98-1)

9 Off-Label Use
The draft CADe clinical guidance, as written, contradicts the Intended Use guidance:
– FDA "encourages" reading scenarios¹ (such as concurrent reading) outside of manufacturers' stated intended use
FDA has not justified the expectation that manufacturers will test CADe systems for contraindicated, off-label use
MITA maintains that testing for contraindicated, off-label use is not least burdensome
¹ CADe Clinical Performance Assessment Draft Guidance, page 13

10 Balanced Risk-Based Evaluation
MITA proposes different evaluation paths and control arms depending on device risk as an alternative to the draft guidance

11 Balanced Risk-Based Evaluation
Standalone testing is sufficient for a modified CADe product if the modifications are minor and direct comparison with the predicate device demonstrates equivalent or better standalone performance

12 Balanced Risk-Based Evaluation
Standalone testing is sufficient for a new CADe product if direct comparison with the predicate device demonstrates equivalent or better standalone performance

13 Balanced Risk-Based Evaluation
Standalone testing for a new CADe product with new technology may need to be supplemented with appropriately scaled clinical data

14 Standalone Testing
FDA presents a hypothetical situation in which "The new CADe identifies additional abnormalities that are not detected by the predicate device, but misses some of the abnormalities that were detected by the predicate device."
– Unless there is a large disparity in true positive CADe findings, reader variability will have a bigger effect than algorithm variability, so standalone performance evaluation is sufficient.
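
Purely as an illustration of what a standalone comparison on a common test set might tabulate, a minimal Python sketch follows; the per-case fields, scoring rule, and numbers are assumptions for illustration, not the methodology specified by the guidance or by MITA.

```python
# Minimal sketch: summarizing standalone CADe performance from per-case results.
# Illustrative only; field names and scoring rules are assumptions.

def standalone_summary(cases):
    """cases: list of dicts with 'n_lesions', 'true_positive_marks', 'false_positive_marks'."""
    total_lesions = sum(c["n_lesions"] for c in cases)
    detected = sum(c["true_positive_marks"] for c in cases)
    false_positives = sum(c["false_positive_marks"] for c in cases)
    sensitivity = detected / total_lesions if total_lesions else float("nan")
    fp_per_case = false_positives / len(cases)
    return {"sensitivity": sensitivity, "fp_per_case": fp_per_case}

# Example: the same (made-up) test set scored with the new algorithm and the predicate.
new_algo = [{"n_lesions": 1, "true_positive_marks": 1, "false_positive_marks": 2},
            {"n_lesions": 0, "true_positive_marks": 0, "false_positive_marks": 1}]
predicate = [{"n_lesions": 1, "true_positive_marks": 1, "false_positive_marks": 3},
             {"n_lesions": 0, "true_positive_marks": 0, "false_positive_marks": 1}]

print(standalone_summary(new_algo), standalone_summary(predicate))
```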

15 Definition of Minor Modification
Considered minor:
– Software environment modifications that do not change the underlying CAD algorithm (e.g., operating system or compiler)
– Design changes to the surrounding application (e.g., DICOM I/O or user interface)
– Computer hardware changes
– Retraining the algorithm on an expanded database

16 Minor Modification
Industry asserts that any minor modification will be followed by:
– Software testing to establish similarity with the prior device
– Standalone algorithm testing to assure similarity with the prior device
Minor modifications do not result in changes to sensitivity and specificity claims

17 Single Use of Test Datasets
FDA has suggested allowing only a single use of test datasets
This approach is burdensome due to:
– The low incidence of disease
– The request to collect cases that "contain a sufficient number of cases from important cohorts (e.g., subsets defined by clinically relevant confounders, effect modifiers, and concomitant diseases)"¹
– The very time-consuming process of identifying and documenting ground truth
– Sites being less willing to participate if more patient demographics are required
¹ CADe Clinical Performance Assessment Draft Guidance, page 16

18 Single Use of Test Datasets
If required for each algorithm improvement, single use of test databases will drastically reduce the number of new algorithms available to clinicians
Manufacturers should be allowed to re-use test databases when appropriate
– MITA provided FDA with lists of:
   Conditions under which reuse would be appropriate
   Alternatives to allow some re-use of images

19 Powering for Sub-Groups
FDA asks: "Is there a minimum number of cancers that should be included in their clinical study to ensure that the entire spectrum of cancer is represented?"
– No. While CADe manufacturers strive to collect large, broad datasets, it is not possible to ensure that a database contains "the entire spectrum of cancer".
– In addition, particularly where "stress test" datasets are required, the full spectrum of cancer may not be represented.

20 Powering for Sub-Groups
FDA asks: "Should [a CADe device's] clinical performance assessment be powered so that statistically significant results can be obtained for the clinically relevant subgroups?"
– It should not be necessary to power the study to objectively measure performance on sub-groups unless the manufacturer proposes to make such claims.
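
As an illustrative sketch of why powering each subgroup separately multiplies the required number of cancer cases, the following uses a standard two-proportion sample-size approximation; the sensitivities, effect size, and subgroup count are assumed for illustration only and are not taken from the guidance.

```python
# Minimal sketch: two-proportion sample-size estimate (normal approximation),
# illustrating how powering each subgroup separately multiplies the required
# number of cancer cases. All proportions and subgroup counts are made up.
from math import sqrt, ceil

def n_per_arm(p1, p2, alpha=0.05, power=0.80):
    z_a, z_b = 1.96, 0.84          # two-sided 5% alpha, 80% power
    p_bar = (p1 + p2) / 2
    num = (z_a * sqrt(2 * p_bar * (1 - p_bar))
           + z_b * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return ceil(num / (p1 - p2) ** 2)

overall = n_per_arm(0.80, 0.88)          # overall study: detect an 8-point gain
per_subgroup = n_per_arm(0.80, 0.88)     # same effect size inside one subgroup
print(overall, "cancers per arm overall;",
      5 * per_subgroup, "if 5 subgroups are each powered separately")
```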

21 New Acquisition Devices
Standalone performance testing is sufficient on a database that is scaled to show CADe non-inferiority:
– Collected using the new acquisition device
– Simulated lesions on normal images from the new acquisition device

22 Trade Secrets
Industry spends millions of dollars developing proprietary CADe algorithms
CADe devices provide a useful clinical function
Ensuring that testing is sufficient to establish safety and effectiveness or performance is the responsibility of the manufacturer, with the oversight of FDA
The testing data should be allowed to speak for itself
Exactly how CADe devices provide that clinical function is not relevant to their performance

23 Trade Secrets
The list of algorithm details requested by FDA spans three pages and includes sufficient disclosure to allow complete reverse engineering of the algorithm
– For example, how an algorithm determines "selection of seed points for region segmentation"¹ is not important for understanding the safety and effectiveness of a device
Providing this level of detail, which is not necessary for device evaluation, is extraordinarily time consuming and would not be least burdensome
¹ CAD 510(k) Draft Guidance, page 6

24 Trade Secrets
Industry is concerned about FDA's inadvertent or inappropriate disclosure of trade secret information¹,²
¹ Visx & Summit Technology, 1996
² CareToLive v. FDA, Southern District of Ohio, October 29, 2007

25 Industry Recommendations
Decisions regarding the need for additional clinical evaluation should be made using the same logic applied to other Class II 510(k) devices.
CADe should be recognized and risk-managed as an adjunctive decision-making tool for competent physicians.
The agency should consider the use of scientific literature reviews with comparative analysis against predicate devices as a method of reducing the clinical burden imposed upon the manufacturer, even for minor changes.

26 Additional MITA Observations
– Guidances are not practical
– CAD mark scoring criteria
– Requirement for pre-submission consultation
– CADe appropriateness (not CADx)
– Interpretation times are not related to safety and effectiveness
– Enrichment requires specificity for CADe
– Display device information unknown
– Post-approval study requirements

