Machine Learning in Laparoscopy


Machine Learning in Laparoscopy Francis Kaping’a 25/07/2018

Title: Deep learning for action and event detection in endoscopic videos for robotic-assisted laparoscopy
(Presenter note: this project is big; maybe talk a bit about the overall SARAS project and point out where I come in.)

Project background
Current situation: the operation is performed by two people, a main surgeon and an assistant.
Problems: 1. Expensive 2. Human error
Solution: replace the assistant with software.

Project objectives
1. Come up with a list of relevant action/event classes.
2. Annotate data by drawing bounding boxes around actions of interest.
3. Test the current action-detection codebase on the newly annotated dataset.
4. (Optional) Further improve the deep learning architecture used.

Implementation
Data annotation: using MATLAB and Microsoft VoTT.
Code development: snippets of code to manage data in Torch.
Model training and testing: 1. the existing model 2. fine-tuning the model for optimal results.
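
The data-management snippets can be sketched as follows: a minimal pure-Python example that flattens an annotation export into per-frame training records. The JSON schema here (field names, `frames` key) is a placeholder assumption for illustration, not VoTT's actual export format.

```python
import json

def load_annotations(vott_json):
    """Flatten an annotation export (hypothetical schema) into
    (frame_index, label, box) records ready for training code."""
    records = []
    for frame_idx, regions in json.loads(vott_json)["frames"].items():
        for region in regions:
            box = (region["x1"], region["y1"], region["x2"], region["y2"])
            records.append((int(frame_idx), region["tags"][0], box))
    return records

example = json.dumps({"frames": {"12": [
    {"x1": 40, "y1": 60, "x2": 200, "y2": 180,
     "tags": ["cutting_seminal_vesicle"]}
]}})
print(load_annotations(example))
# [(12, 'cutting_seminal_vesicle', (40, 60, 200, 180))]
```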

Implementation: CNN
Convolutional Neural Networks (CNNs) are well suited to image classification and to object and action detection.
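
As a rough illustration of the convolution operation at the heart of a CNN, here is a minimal valid-mode 2D cross-correlation in plain Python (deep learning frameworks implement the same idea, vectorised and with learned kernels):

```python
def conv2d(image, kernel):
    """Valid-mode 2D convolution (strictly, cross-correlation, as in
    most deep-learning frameworks) over nested Python lists."""
    kh, kw = len(kernel), len(kernel[0])
    out_h = len(image) - kh + 1
    out_w = len(image[0]) - kw + 1
    out = []
    for i in range(out_h):
        row = []
        for j in range(out_w):
            # Sum of elementwise products of the kernel and the patch.
            row.append(sum(image[i + di][j + dj] * kernel[di][dj]
                           for di in range(kh) for dj in range(kw)))
        out.append(row)
    return out

edge = [[1, -1]]          # a tiny horizontal edge-detector kernel
img = [[0, 0, 5, 5]]      # one image row with an intensity step
print(conv2d(img, edge))  # [[0, -5, 0]] -- responds at the edge
```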

Implementation: SSD
SSD (Single Shot MultiBox Detector) is a real-time, CNN-based object detection and localisation architecture that underlies our model.

Implementation: action detection model
The model takes advantage of the real-time SSD architecture. Each frame is passed to two SSDs: an appearance SSD and a flow SSD. The detections from both SSDs are fused and incrementally linked into the action tubes being generated.
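
The fusion step might be sketched as follows: a toy example that averages per-class scores from the two SSD streams, assuming detections have already been matched one-to-one across streams (the model's actual fusion rule may differ):

```python
def fuse_detections(appearance, flow, w=0.5):
    """Fuse per-class scores from matched appearance/flow detections.
    Each input is a list of (box, {class: score}) pairs, assumed to be
    paired up index-by-index; scores are combined by weighted average."""
    fused = []
    for (box, scores_a), (_, scores_f) in zip(appearance, flow):
        classes = set(scores_a) | set(scores_f)
        fused.append((box, {c: w * scores_a.get(c, 0.0)
                               + (1 - w) * scores_f.get(c, 0.0)
                            for c in classes}))
    return fused
```

With w=0.5 the two streams contribute equally; shifting w would let one stream (e.g. appearance) dominate.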

Implementation: annotation
VoTT is used for annotation.
Annotation rate: 4 frames per second.
Negative frames are skipped; all positive frames are annotated.
About 6,000 frames annotated so far.
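
Annotating 4 frames per second amounts to subsampling the video; a small sketch, where the 25 fps source frame rate is an assumed value:

```python
def frames_to_annotate(total_frames, video_fps=25, rate=4):
    """Indices of the frames to label when annotating `rate` frames per
    second of video recorded at `video_fps` (both values assumed here)."""
    step = video_fps / rate         # e.g. every 6.25 frames at 25 fps
    count = int(total_frames / step)  # labels that fit in the clip
    return [int(i * step) for i in range(count)]

print(frames_to_annotate(25))  # [0, 6, 12, 18] -- 4 labels per second
```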

Implementation: annotation
Annotation strategies explored:
Draw bounding boxes around tools and tissue.
Draw bounding boxes when actions are imminent.
Each box should include 30% to 70% of the instrument(s) and tissue.
These rules are meant to keep the model from becoming over-reliant on either instruments or tissue.
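
The 30%-to-70% coverage guideline could be checked automatically; a sketch, assuming axis-aligned (x1, y1, x2, y2) boxes in pixels (the helper and its thresholds are illustrative, not part of the project's tooling):

```python
def coverage_ok(action_box, instrument_box, lo=0.3, hi=0.7):
    """True if the annotated action box contains between 30% and 70%
    of the instrument's area, per the annotation guideline."""
    ix1 = max(action_box[0], instrument_box[0])
    iy1 = max(action_box[1], instrument_box[1])
    ix2 = min(action_box[2], instrument_box[2])
    iy2 = min(action_box[3], instrument_box[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)  # overlap area
    area = ((instrument_box[2] - instrument_box[0])
            * (instrument_box[3] - instrument_box[1]))
    return lo <= inter / area <= hi

# Covers exactly half of a 100x100 instrument -> within the guideline.
print(coverage_ok((0, 0, 50, 100), (0, 0, 100, 100)))  # True
```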

Implementation: annotation
Example: cutting the seminal vesicle and pulling the seminal vesicle, respectively.

Progress
Class identification: an unrevised list of 47 classes has been identified. Examples:
1. Mesocolon dissection 2. Bleeding 3. Lymph node dissection 4. Prostate dissection 5. Anastomosis

Progress
Literature survey (ongoing).
Video data gathered (100%).
Discussions with experts to understand the data and clinical perspectives (90%).
First video 20% annotated (about 6,000 frames).
(Presenter note: include a video with a bounding box on an action.)

Challenges
What makes an action: the tool or the organ?
Actions can be opaque.
Unfortunate incidents are mostly ones never seen before.
Over-reliance on tools creates room for error.
Lymph nodes are hard to tell apart from fat.
And a few more.

Some solutions
Choose frames to learn from; skip non-informative frames.
Train the model on plentiful data.
Fine-tune the model during training, e.g. filter sizes, number of layers, strides.
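
Fine-tuning over filter sizes, layer counts and strides is essentially a search over configurations; a minimal grid-search sketch, where the candidate values are placeholders rather than the project's actual search space:

```python
from itertools import product

def hyperparameter_grid(filter_sizes=(3, 5), num_layers=(4, 6), strides=(1, 2)):
    """Enumerate candidate CNN configurations to evaluate during
    fine-tuning (values here are illustrative placeholders)."""
    return [dict(filter_size=f, num_layers=n, stride=s)
            for f, n, s in product(filter_sizes, num_layers, strides)]

print(len(hyperparameter_grid()))  # 8 candidate configurations
```

Each configuration would then be trained and scored on a validation split, keeping the best-performing one.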

Risks
Risk: identifying the wrong classes. Mitigation: constant conversations with experts.

Ethical and legal issues
Data is only to be disclosed to authorised individuals.
Legal: the GDPR and the Data Protection Acts (1998 & 2018) insist on privacy.

What is next?
Continue annotation.
Test the existing model on the data.
Document results.
Propose changes to the model (time allowing).

Questions?

Appendix Forceps

Appendix Electrical scissors

Appendix Clip applier

Appendix Needle driver

Appendix Scissors

Appendix Forceps

Appendix Catheter