Slide 1: Automatic Recognition of Surgical Motions Using Statistical Modeling for Capturing Variability
Carol E. Reiley 1, Henry C. Lin 1, Balakrishnan Varadarajan 2, Balazs Vagvolgyi 1, Sanjeev Khudanpur 2, David D. Yuh 3, Gregory D. Hager 1
1 Engineering Research Center for Computer-Integrated Surgical Systems and Technology, The Johns Hopkins University
2 Center for Language and Speech Processing, The Johns Hopkins University
3 Division of Cardiac Surgery, The Johns Hopkins Medical Institutions
MMVR, January 31st, 2008
Slide 2: Introduction - Our Goal
- Automatically segment and recognize core surgical motion segments (surgemes)
- Capture the variability of a surgeon's movement techniques using statistical methods
Slide 3: Introduction
Given a surgical task, a single user tends to use similar movement patterns (Lin et al., MICCAI 2005).
Slide 4: Introduction
- Different users show more variability when completing the same surgical task.
- Our goal is to distinguish core surgical motions from error/unintentional motion.
Slide 5: Related Work
- Low-level surgical modeling (motion data): Imperial College ICSAD, MIST-VR
- High-level surgical modeling (applied force and motion): University of Washington Blue Dragon
- Prior work focuses on surgical metrics for skill evaluation; our work aims to automatically identify fundamental motions.
Slide 6: Our Approach
A surgeme is an elementary portion of surgical motion.
Examples: reaching for the needle, positioning the needle, pulling the suture with the left hand.
Slide 7: Motion Vocabulary
Label  Description
A      Reach for Needle (gripper open)
B      Position Needle (holding needle)
C      Insert Needle / Push Needle Through Tissue
D      Move to Middle With Needle (left hand)
E      Move to Middle With Needle (right hand)
F      Pull Suture With Left Hand
G      Pull Suture With Right Hand*
H      Orient Needle With Two Hands
I      Right Hand Assisting Left While Pulling Suture*
J      Loosen Up More Suture*
K      End of Trial, Idle Motion
* Added based on observed variability of technique
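For concreteness, this vocabulary could be captured as a simple mapping from label to description when annotating or decoding label sequences; a minimal sketch in Python (the dictionary name is an arbitrary choice, not from the slides):

```python
# Surgeme vocabulary from Slide 7; entries marked * were added based on
# observed variability of technique.
SURGEMES = {
    "A": "Reach for Needle (gripper open)",
    "B": "Position Needle (holding needle)",
    "C": "Insert Needle / Push Needle Through Tissue",
    "D": "Move to Middle With Needle (left hand)",
    "E": "Move to Middle With Needle (right hand)",
    "F": "Pull Suture With Left Hand",
    "G": "Pull Suture With Right Hand",                      # *
    "H": "Orient Needle With Two Hands",
    "I": "Right Hand Assisting Left While Pulling Suture",   # *
    "J": "Loosen Up More Suture",                            # *
    "K": "End of Trial, Idle Motion",
}
```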
Slide 8: Our Approach
Processing pipeline: Signal Processing -> Feature Processing -> Classification/Modeling -> Extraction of Structure
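A minimal sketch of how such a pipeline might be organized in code; the stage functions below are hypothetical placeholders under assumed inputs (per-frame kinematic arrays), not the authors' implementation:

```python
import numpy as np

def signal_processing(raw_kinematics: np.ndarray) -> np.ndarray:
    """Hypothetical stage: e.g. resample and smooth the raw 23 Hz streams."""
    return raw_kinematics  # placeholder

def feature_processing(signals: np.ndarray) -> np.ndarray:
    """Hypothetical stage: e.g. normalize and reduce dimensionality (LDA)."""
    return signals  # placeholder

def classify(features: np.ndarray) -> np.ndarray:
    """Hypothetical stage: assign a surgeme label to each frame."""
    return np.zeros(len(features), dtype=int)  # placeholder

def extract_structure(labels: np.ndarray) -> list:
    """Collapse per-frame labels into (label, start_frame, end_frame) segments."""
    segments, start = [], 0
    for i in range(1, len(labels) + 1):
        if i == len(labels) or labels[i] != labels[start]:
            segments.append((labels[start], start, i - 1))
            start = i
    return segments
```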
Slide 9: Data Collection
The da Vinci Surgical System (image courtesy of Intuitive Surgical).
With the increasing use of robotics in surgical procedures, a new wealth of data is available for analysis.
Parameters recorded at 23 Hz on both the patient side and the master side:
- Joint angles and velocities
- End-effector position, velocity, and orientation
- High-quality stereo vision
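As an illustration only, one way to represent a single 23 Hz sample in code; the field names are assumptions for this sketch, not the da Vinci API:

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class KinematicSample:
    """One 23 Hz frame of recorded kinematics (illustrative field names)."""
    timestamp: float              # seconds since the start of the trial
    joint_angles: np.ndarray      # per-joint angles, patient and master side
    joint_velocities: np.ndarray  # per-joint velocities
    ee_position: np.ndarray       # end-effector Cartesian position (x, y, z)
    ee_velocity: np.ndarray       # end-effector translational velocity
    ee_orientation: np.ndarray    # end-effector orientation (e.g. rotation matrix)
```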
Slide 10: Experimental Study
Subject  Medical Training  da Vinci Training  Hours
1        -                 -                  10-15
2        -                 -                  100+
3        X                 X                  100+
4        -                 X                  100+
5        -                 X                  <10
6
7        -                 -                  <1
- Users had varied levels of experience.
- Each user performed five trials.
- Each trial consisted of a four-throw suturing task.
Slide 11: Classification Methods
- Linear Discriminant Analysis (LDA) with a single Gaussian
- LDA + Gaussian Mixture Model (GMM)
- 3-state Hidden Markov Model (HMM)
- Maximum Likelihood Linear Regression (MLLR)
Both supervised and unsupervised settings were considered.
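A minimal sketch, using scikit-learn and hmmlearn, of how an LDA + GMM frame classifier and per-surgeme 3-state Gaussian HMMs might be set up; the data layout and all hyperparameters are illustrative assumptions, not the configuration reported in this work:

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.mixture import GaussianMixture
from hmmlearn.hmm import GaussianHMM  # assumes the hmmlearn package is installed

def train_lda_gmm(X, y, n_mix=3):
    """Project frame features with LDA, then fit one GMM per surgeme class."""
    lda = LinearDiscriminantAnalysis().fit(X, y)
    Z = lda.transform(X)
    gmms = {c: GaussianMixture(n_components=n_mix).fit(Z[y == c])
            for c in np.unique(y)}
    return lda, gmms

def classify_frames(lda, gmms, X):
    """Label each frame with the class whose GMM gives the highest log-likelihood."""
    Z = lda.transform(X)
    classes = sorted(gmms)
    loglik = np.column_stack([gmms[c].score_samples(Z) for c in classes])
    return np.array(classes)[np.argmax(loglik, axis=1)]

def train_surgeme_hmms(X_by_class, lengths_by_class, n_states=3):
    """Fit one 3-state Gaussian HMM per surgeme on that class's frame sequences."""
    return {c: GaussianHMM(n_components=n_states, covariance_type="diag")
                .fit(X_by_class[c], lengths_by_class[c])
            for c in X_by_class}
```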
Slide 12: Results
- Leave-one-trial-out cross-validation per user
- MLLR not applicable
- Average percent classifier accuracy (results table shown on slide)
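A minimal sketch of the leave-one-trial-out protocol using scikit-learn's LeaveOneGroupOut, with the trial index as the grouping variable; the data arrays and the train/predict helpers are hypothetical placeholders:

```python
import numpy as np
from sklearn.model_selection import LeaveOneGroupOut

def leave_one_group_out_accuracy(X, y, group_ids, train_fn, predict_fn):
    """Hold out one group (e.g. one trial) at a time, train on the rest,
    and report the mean per-frame accuracy over all folds."""
    accuracies = []
    for train_idx, test_idx in LeaveOneGroupOut().split(X, y, groups=group_ids):
        model = train_fn(X[train_idx], y[train_idx])
        y_hat = predict_fn(model, X[test_idx])
        accuracies.append(np.mean(y_hat == y[test_idx]))
    return float(np.mean(accuracies))
```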
Slide 13: Results
Example comparison of classifier output against a manual segmentation (figure shown on slide).
Slide 14: Results
We repeated the analysis, this time leaving one user out.
- Supervised: surgeme start/stop events manually defined
- Unsupervised: surgeme start/stop events automatically derived
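The leave-one-user-out setting can reuse the same splitting helper sketched under Slide 12, swapping the grouping variable from trial index to user index; user_ids here is a hypothetical per-frame array of user identities:

```python
# Reuses the hypothetical helper and data arrays from the Slide 12 sketch.
louo_accuracy = leave_one_group_out_accuracy(X, y, user_ids, train_fn, predict_fn)
print(f"leave-one-user-out accuracy: {louo_accuracy:.3f}")
```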
Slide 15: Conclusions
- Preliminary results show the potential for identifying core surgical motions.
- User variability has a significant effect on classification rates.
Future work:
- Use contextual cues from video data
- Filter class decisions (e.g., majority vote) to eliminate class jumping; see the sketch below
- Apply to data from live surgery (e.g., prostatectomy)
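As an illustration of the majority-vote filtering mentioned above, a minimal sliding-window smoother over per-frame surgeme labels; the window length is an arbitrary assumption:

```python
from collections import Counter
import numpy as np

def majority_vote_filter(labels: np.ndarray, window: int = 11) -> np.ndarray:
    """Replace each frame label by the most common label in a centered window,
    suppressing brief spurious class jumps between surgemes."""
    half = window // 2
    smoothed = labels.copy()
    for i in range(len(labels)):
        lo, hi = max(0, i - half), min(len(labels), i + half + 1)
        smoothed[i] = Counter(labels[lo:hi].tolist()).most_common(1)[0][0]
    return smoothed
```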
Slide 16: Acknowledgements
Intuitive Surgical, Dr. Chris Hasser
This work was supported in part by:
- NSF Grant No. 0534359
- An NSF Graduate Research Fellowship
Slide 17: References