Machine Learning in Laparoscopy
Francis Kaping’a
25/07/2018
Title: Deep learning for action and event detection in endoscopic videos for robot-assisted laparoscopy
Project background
This work is part of the wider SARAS (Smart Autonomous Robotic Assistant Surgeon) project; my contribution is the action/event detection component.
Current situation: each operation is performed by two surgeons, a main surgeon and an assistant.
Problems:
1. Expensive
2. Prone to human error
Proposed solution: replace the assistant surgeon with software.
Project objectives
1. Draw up a list of relevant action/event classes
2. Annotate the data by drawing bounding boxes around actions of interest
3. Test the current action detection codebase on the newly annotated dataset
4. (Optional) Further improve the deep learning architecture used
Implementation
Data annotation: using MATLAB and Microsoft VoTT
Code development: snippets of code to manage data in Torch (a hedged sketch follows below)
Model training and testing:
1. Run the existing model
2. Fine-tune the model for optimal results
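As an illustration of the data-management snippets, here is a minimal sketch assuming the Torch codebase is PyTorch-based; the dataset class, the CSV annotation format, and all file names are hypothetical, not the project's actual code.

```python
# Minimal sketch of a data-handling snippet, assuming a PyTorch-based
# codebase. LaparoFrameDataset and the CSV layout are hypothetical.
import csv

import torch
from torch.utils.data import Dataset
from torchvision import transforms
from PIL import Image


class LaparoFrameDataset(Dataset):
    """Loads annotated laparoscopy frames with bounding boxes.

    Expects a CSV with rows: frame_path,x_min,y_min,x_max,y_max,class_id
    """

    def __init__(self, csv_file, image_size=300):
        self.records = []
        with open(csv_file) as f:
            for path, x1, y1, x2, y2, cls in csv.reader(f):
                box = [float(x1), float(y1), float(x2), float(y2)]
                self.records.append((path, box, int(cls)))
        # SSD-style models expect fixed-size inputs (e.g. 300x300).
        self.transform = transforms.Compose([
            transforms.Resize((image_size, image_size)),
            transforms.ToTensor(),
        ])

    def __len__(self):
        return len(self.records)

    def __getitem__(self, idx):
        path, box, cls = self.records[idx]
        image = self.transform(Image.open(path).convert("RGB"))
        return image, torch.tensor(box), torch.tensor(cls)
```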
Implementation: CNN
Convolutional Neural Networks (CNNs) are well suited to image classification, object detection, and action detection.
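For intuition only, a minimal CNN in PyTorch showing the standard building blocks (convolution, non-linearity, pooling); it is an illustrative toy, not the project's model, and the 47-way output merely echoes the class count identified so far.

```python
# Toy CNN for illustration only; not the project's architecture.
import torch.nn as nn


class TinyCNN(nn.Module):
    def __init__(self, num_classes=47):  # 47 = current (unrevised) class list
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1),  # learn local filters
            nn.ReLU(),
            nn.MaxPool2d(2),                             # downsample 2x
            nn.Conv2d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),
        )
        # Assumes 300x300 inputs: 300 -> 150 -> 75 after two poolings.
        self.classifier = nn.Linear(32 * 75 * 75, num_classes)

    def forward(self, x):                 # x: (batch, 3, 300, 300)
        f = self.features(x)              # -> (batch, 32, 75, 75)
        return self.classifier(f.flatten(1))
```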
Implementation: SSD
SSD (Single Shot MultiBox Detector) is a real-time, CNN-based object detection and localisation architecture that underlies our model.
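A hedged sketch of SSD's core idea: small convolutional heads predict class scores and box offsets from feature maps at several scales in a single pass. The channel sizes, number of default boxes per location, and class count here are illustrative assumptions, not the codebase's exact configuration.

```python
# Sketch of SSD-style multi-scale detection heads (illustrative only).
import torch.nn as nn


class SSDHead(nn.Module):
    def __init__(self, in_channels, num_defaults, num_classes):
        super().__init__()
        # One 3x3 conv predicts class scores, another predicts box offsets,
        # for num_defaults default boxes at every feature-map location.
        self.cls = nn.Conv2d(in_channels, num_defaults * num_classes,
                             kernel_size=3, padding=1)
        self.loc = nn.Conv2d(in_channels, num_defaults * 4,
                             kernel_size=3, padding=1)

    def forward(self, feat):
        return self.cls(feat), self.loc(feat)


# One head per feature-map scale; coarser maps detect larger objects.
# 48 = 47 hypothetical action classes + background.
heads = nn.ModuleList([
    SSDHead(512, num_defaults=4, num_classes=48),   # finer map, small boxes
    SSDHead(1024, num_defaults=6, num_classes=48),  # coarser map, large boxes
])
```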
Implementation: action detection model
The model takes advantage of the real-time SSD architecture. Each frame is passed to two SSDs: an appearance SSD (operating on RGB frames) and a flow SSD (operating on optical flow). The detections from both SSDs are fused and incrementally linked into the action tubes being generated.
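To make the fusion step concrete, here is a hedged sketch for one frame: appearance boxes are matched to flow boxes by IoU and their class scores averaged. The mean-fusion rule and the 0.5 threshold are illustrative assumptions, not necessarily the scheme the codebase uses.

```python
# Illustrative appearance/flow fusion for one frame (not the actual code).
import torch


def iou(a, b):
    """IoU between one box a (4,) and boxes b (N, 4), as (x1, y1, x2, y2)."""
    x1 = torch.maximum(a[0], b[:, 0]); y1 = torch.maximum(a[1], b[:, 1])
    x2 = torch.minimum(a[2], b[:, 2]); y2 = torch.minimum(a[3], b[:, 3])
    inter = (x2 - x1).clamp(min=0) * (y2 - y1).clamp(min=0)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[:, 2] - b[:, 0]) * (b[:, 3] - b[:, 1])
    return inter / (area_a + area_b - inter)


def fuse(app_boxes, app_scores, flow_boxes, flow_scores, thr=0.5):
    """Average class scores of appearance/flow boxes that overlap enough."""
    fused = []
    for box, score in zip(app_boxes, app_scores):
        if len(flow_boxes):
            overlaps = iou(box, flow_boxes)
            match = overlaps.argmax()
            if overlaps[match] >= thr:
                score = (score + flow_scores[match]) / 2
        fused.append((box, score))
    return fused  # fused detections feed the incremental tube builder
```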
Implementation: annotation
VoTT is used for annotation.
Frames are annotated at a rate of 4 frames per second.
Negative frames are skipped; all positive frames are annotated.
About 6,000 frames annotated so far.
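For illustration, a minimal sketch of extracting frames at roughly 4 fps for annotation, assuming OpenCV is available; the video file name and output paths are hypothetical.

```python
# Sample a video at ~4 fps for annotation (illustrative file names).
import os

import cv2

os.makedirs("frames", exist_ok=True)
video = cv2.VideoCapture("procedure_01.mp4")
native_fps = video.get(cv2.CAP_PROP_FPS)
step = max(1, round(native_fps / 4))  # keep every Nth frame -> ~4 fps

index = saved = 0
while True:
    ok, frame = video.read()
    if not ok:
        break
    if index % step == 0:
        cv2.imwrite(f"frames/frame_{saved:06d}.png", frame)
        saved += 1
    index += 1
video.release()
```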
Implementation: annotation
Annotation strategies explored:
1. Draw bounding boxes around tools and tissue.
2. Draw bounding boxes when actions are imminent.
3. Each box should include 30% to 70% of the instrument(s) and tissue.
These strategies aim to keep the model from becoming over-reliant on either the instruments or the tissue (a quality-check sketch follows below).
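A hedged quality-check sketch for the 30-70% guideline: the fraction of an instrument's bounding box that falls inside the action box. It assumes an instrument box is available for comparison; in practice annotators judge this by eye.

```python
# Illustrative check of the 30-70% coverage guideline.
def coverage(action_box, instrument_box):
    """Fraction of the instrument box area inside the action box."""
    ax1, ay1, ax2, ay2 = action_box
    ix1, iy1, ix2, iy2 = instrument_box
    w = max(0.0, min(ax2, ix2) - max(ax1, ix1))
    h = max(0.0, min(ay2, iy2) - max(ay1, iy1))
    inst_area = (ix2 - ix1) * (iy2 - iy1)
    return (w * h) / inst_area if inst_area else 0.0


def within_guideline(action_box, instrument_box):
    return 0.3 <= coverage(action_box, instrument_box) <= 0.7
```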
Implementation: annotation
Examples: cutting the seminal vesicle and pulling the seminal vesicle, respectively.
Progress
Class identification: an unrevised list of 47 classes has been identified. Examples:
1. Mesocolon dissection
2. Bleeding
3. Lymph node dissection
4. Prostate dissection
5. Anastomosis
Progress
Literature survey (ongoing)
Video data gathered (100%)
Discussions with experts to understand the data and the clinical perspective (90%)
Annotated 20% of the first video (about 6,000 frames)
[Video: example frames with a bounding box drawn around an action]
Challenges
What constitutes an action: the tool, the organ, or both?
Actions can be visually opaque.
Adverse events are mostly ones never seen before.
Over-reliance on tools creates room for error.
Lymph nodes are hard to tell apart from fat.
And a few more.
Some solutions
Choose which frames to learn from; skip non-informative frames.
Train the model on abundant data.
Fine-tune the model during training, e.g. filter sizes, number of layers, and strides (see the sketch below).
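One way to make such fine-tuning systematic is to expose the architecture choices as a configuration; a hedged sketch follows, with candidate values that are illustrative only, not results from this project.

```python
# Illustrative: build convolutional features from a tunable config.
import torch.nn as nn


def conv_block(in_ch, out_ch, kernel_size, stride):
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, kernel_size, stride=stride,
                  padding=kernel_size // 2),
        nn.ReLU(),
    )


def build_features(cfg, in_ch=3, base=16):
    """Stack cfg['layers'] conv blocks, doubling channels each layer."""
    blocks, ch = [], in_ch
    for i in range(cfg["layers"]):
        out = base * (2 ** i)
        blocks.append(conv_block(ch, out, cfg["kernel_size"], cfg["stride"]))
        ch = out
    return nn.Sequential(*blocks)


# Candidate settings to compare during fine-tuning (hypothetical values).
candidates = [
    {"kernel_size": 3, "stride": 1, "layers": 4},
    {"kernel_size": 5, "stride": 2, "layers": 3},
]
models = [build_features(cfg) for cfg in candidates]
```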
Risks
Risk: identifying the wrong classes
Mitigation: constant conversations with experts
Ethical and legal issues
Data is only to be disclosed to authorised individuals.
Legal: the GDPR and the Data Protection Acts (1998 and 2018) mandate privacy.
What is next?
Continue annotation.
Test the existing model on the data.
Document the results.
Propose changes to the model (time allowing).
Questions?
Appendix: surgical instruments
[Images: forceps, electrical scissors, clip applier, needle driver, scissors, forceps, catheter]