Feasibility of using the Kinect for surface tracking in Radiation Oncology
Steven Marsh, James Eagle, Juergen Meyer and Adrian Clark

Overview
- Background motivation for the project
- Introduce the Kinect cameras
- Discuss tracking method options
- Present some initial results
- Conclusion

Project motivation
- There is a strong focus on highly conformal dose delivery techniques, for example IMRT, IMAT, SBRT and proton therapy.
- These techniques produce high dose gradients, so on treatment the patient positioning needs to replicate, as closely as possible, that from the planning CT.
- Cone-beam CT allows verification of the patient setup before treatment.
- However, any small shifts and deformations during treatment are more problematic when the treatment plan has high dose gradients.

Project motivation
- There is currently a lack of patient position monitoring during treatment.
- Patients are observed to sag or shift during long treatments.
- This observation was verified by the difference between the setup CBCT and the post-treatment CBCT: Δx = -0.1 mm, Δy = 9.9 mm, Δz = -2.1 mm.
- Results from an optical tracking system developed at the University of Wuerzburg showed patient shifts in the y and z directions, later verified by the difference detected between the pre- and post-treatment CBCTs.

Project motivation
- It would seem there is a current need for patient setup and position monitoring throughout treatment.
- Commercial systems do exist, e.g. Vision RT's AlignRT and C-Rad.
- However, these are expensive!

Project outline
Can a cost-effective solution be developed? Requirements:
- A non-ionising tracking method
- Low cost
- A simple design, so as not to increase the therapist's workload
The Kinect range of depth cameras looked viable, so the project set out to:
- Assess the feasibility of using the Kinect for patient monitoring
- Characterise the Kinect 1.0 and 2.0
- Develop software to test the Kinect cameras in the clinical environment

Camera specifications: Kinect 1.0
- RGB-D: 640 x 480 RGB, 320 x 240 depth
- Field of view: 43° vertical by 57° horizontal
- 30 FPS depth
- IR projector and sensor
- Structured-light pattern used for depth calculation

Kinect 1.0 depth sensor
- The Kinect projects a structured-light pattern onto the local environment.
- The pattern is deformed by variations in depth.
- Depth is calculated from the transform between the known pattern and the measured pattern.
- The Kinect uses a proprietary pattern.
- Effective range: 0.8-4 m
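The exact decoding of the Kinect's pattern is proprietary, but the underlying depth-from-disparity triangulation is standard. A minimal sketch, with illustrative (not official) focal length and baseline values:

    import numpy as np

    def depth_from_disparity(disparity_px, focal_px, baseline_m):
        # Structured-light/stereo triangulation: Z = f * b / d, where d is
        # the observed shift (pixels) of a projected dot relative to the
        # reference pattern recorded at a known distance.
        return focal_px * baseline_m / np.asarray(disparity_px, dtype=float)

    # ~580 px focal length and ~7.5 cm projector-sensor baseline are
    # commonly quoted community estimates for the Kinect 1.0, not specs.
    print(depth_from_disparity(20.0, 580.0, 0.075))  # about 2.2 m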

Camera specifications: Kinect 2.0
- Alpha development program
- 1920 x 1080 RGB camera
- 512 x 424 depth camera
- Field of view: 60° vertical by 70° horizontal
- 30 FPS depth camera and IR emitter
- Time of flight (ToF) used for depth calculation
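These FOV and resolution figures set the lateral-resolution limit mentioned in the conclusions: each depth pixel covers roughly 5 mm at couch distance. A quick sketch, assuming a typical ~2 m camera-to-patient distance:

    import numpy as np

    def pixel_footprint_mm(distance_m, fov_deg, pixels):
        # Approximate width of one depth pixel at a given distance.
        span_m = 2.0 * distance_m * np.tan(np.radians(fov_deg) / 2.0)
        return 1000.0 * span_m / pixels

    print(pixel_footprint_mm(2.0, 70.0, 512))  # horizontal: ~5.5 mm/pixel
    print(pixel_footprint_mm(2.0, 60.0, 424))  # vertical:   ~5.4 mm/pixel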

Kinect 2.0 depth sensor
- 830 nm IR laser
- Effective range: 0.5-4.5 m
- Measures the time difference between emitted and backscattered light to obtain a depth value.
- The internal configuration of the Kinect 2.0 is unknown, as Microsoft has not yet released this information.
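A sketch of the idea. Published teardowns suggest the Kinect 2.0 measures the phase shift of a modulated signal rather than timing individual pulses, so both variants are shown; the 80 MHz modulation frequency is an assumption:

    import numpy as np

    C = 299_792_458.0  # speed of light, m/s

    def tof_depth_pulsed(round_trip_s):
        # Direct time of flight: light travels out and back, so halve it.
        return C * round_trip_s / 2.0

    def tof_depth_cw(phase_rad, mod_freq_hz):
        # Continuous-wave ToF: the phase shift of a modulated IR signal
        # encodes the round-trip time instead of a direct pulse timing.
        return C * phase_rad / (4.0 * np.pi * mod_freq_hz)

    print(tof_depth_pulsed(13.3e-9))      # ~2.0 m
    print(tof_depth_cw(np.pi / 2, 80e6))  # ~0.47 m at an assumed 80 MHz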

Kinect depth measurement long-term stability
- 60 min warm-up period

Kinect depth variation
- The Kinect v1 has a mean position of -11.5 mm; the Kinect v2 has a mean position of 3.5 mm.
- The standard deviation of the Kinect v1 is 2.4 mm, while the standard deviation of the Kinect v2 is only 0.6 mm.
- This demonstrates that the Kinect 2.0 has less depth variation.

Software development – GUI design
- Simplistic design with minimal controls
- Colour coded for fast and efficient reading
- Easy-to-read graphs
- Large numbers displaying offsets
- User-defined tolerances
- Moving average (see the sketch below)
- Multiple camera functions
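The slides do not show the implementation; this is a minimal sketch of one plausible reading of the moving-average smoothing and user-defined tolerance check, with an assumed 30-frame window and 5 mm tolerance:

    import numpy as np

    def moving_average(offsets_mm, window=30):
        # One value per frame at 30 FPS, so window=30 averages ~1 second.
        kernel = np.ones(window) / window
        return np.convolve(offsets_mm, kernel, mode="valid")

    def beyond_tolerance(offsets_mm, tolerance_mm=5.0, window=30):
        # Flag frames whose smoothed offset exceeds the user's tolerance,
        # mirroring the colour-coded warning described on the slide.
        return np.abs(moving_average(offsets_mm, window)) > tolerance_mm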

Software development – GUI design

Tracking
Needs to be fast, accurate and lighting independent. Possible methods include:
- Mean shift (Camshift is a variant of this): mode seeking
- Speeded Up Robust Features (SURF): local feature detector
- Kinect Fusion (Microsoft): local feature tracking (full 3D tracking)
- Least squares: exhaustive search method

Tracking – mean shift
- Algorithm originally proposed by Y. Cheng, 1995
- A histogram is produced based on the variable to be tracked
- Back projection is computed from the histogram
- An iterative calculation finds the mean centre of mass of the back-projected data
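A minimal OpenCV sketch of histogram back-projection plus CamShift; the video source and initial window are placeholders:

    import cv2

    cap = cv2.VideoCapture("phantom.avi")  # placeholder source
    ok, frame = cap.read()

    x, y, w, h = 300, 200, 100, 80         # placeholder initial window
    track_window = (x, y, w, h)

    # Hue histogram of the region to track, normalised for back projection.
    hsv_roi = cv2.cvtColor(frame[y:y+h, x:x+w], cv2.COLOR_BGR2HSV)
    hist = cv2.calcHist([hsv_roi], [0], None, [180], [0, 180])
    cv2.normalize(hist, hist, 0, 255, cv2.NORM_MINMAX)

    term_crit = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 10, 1)
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
        back_proj = cv2.calcBackProject([hsv], [0], hist, [0, 180], 1)
        # CamShift iterates mean shift and adapts the window size/rotation.
        rot_rect, track_window = cv2.CamShift(back_proj, track_window, term_crit)
        print(rot_rect[0])                 # tracked centre (x, y)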

Tracking – Speeded Up Robust Features (SURF)
- Algorithm proposed by H. Bay, 2008
- Extracts key points
- Scale and rotation invariant
- Was unable to track smooth, flat surfaces
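A minimal detection-and-matching sketch. SURF lives in the opencv-contrib package (and is patent-encumbered, so some builds omit it); the image files and thresholds are placeholders:

    import cv2

    surf = cv2.xfeatures2d.SURF_create(hessianThreshold=400)

    img1 = cv2.imread("reference.png", cv2.IMREAD_GRAYSCALE)
    img2 = cv2.imread("current.png", cv2.IMREAD_GRAYSCALE)

    kp1, des1 = surf.detectAndCompute(img1, None)
    kp2, des2 = surf.detectAndCompute(img2, None)

    # Brute-force matching with Lowe's ratio test. On a smooth, flat
    # surface few keypoints survive, which is why SURF failed here.
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    matches = matcher.knnMatch(des1, des2, k=2)
    good = [m for m, n in matches if m.distance < 0.7 * n.distance]
    print(len(good), "reliable matches")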

Tracking – Kinect Fusion
- Developed by Microsoft, 2011
- Creates a 3D model
- Tracks the camera position based on feature tracking
- Primarily works with rigid models
- Can use ray-traced iterative closest point (ICP) matching
- No Kinect 2.0 integration until very recently
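Kinect Fusion itself is closed source, but its alignment step builds on ICP. A sketch of point-to-plane ICP using Open3D, with placeholder point-cloud files and an illustrative 2 cm correspondence threshold:

    import open3d as o3d

    source = o3d.io.read_point_cloud("frame_current.ply")
    target = o3d.io.read_point_cloud("frame_previous.ply")
    source.estimate_normals()
    target.estimate_normals()

    # Point-to-plane ICP estimates the rigid transform that aligns the
    # current frame to the previous one (i.e. the camera motion).
    result = o3d.pipelines.registration.registration_icp(
        source, target, 0.02,
        estimation_method=o3d.pipelines.registration
            .TransformationEstimationPointToPlane())
    print(result.transformation)  # 4x4 rigid-body transform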

Tracking – least squares fit
- Minimises the difference between the original surface and the new surface
- High computational time
- High accuracy
- Two dimensional
- Does not track rotations or scale changes
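A minimal sketch of the exhaustive 2D search described above: slide the new depth frame over the reference and keep the shift with the smallest squared error. Translation only, so rotations and scale changes are not tracked:

    import numpy as np

    def track_shift(reference, current, max_shift=10):
        # Exhaustive least-squares search over integer (dx, dy) shifts.
        best, best_err = (0, 0), np.inf
        h, w = reference.shape
        m = max_shift
        cur = current[m:h-m, m:w-m].astype(float)
        for dy in range(-m, m + 1):
            for dx in range(-m, m + 1):
                ref = reference[m+dy:h-m+dy, m+dx:w-m+dx].astype(float)
                err = np.sum((ref - cur) ** 2)
                if err < best_err:
                    best, best_err = (dx, dy), err
        return best

    # Toy usage: a synthetic frame shifted by dx=3, dy=2 pixels.
    rng = np.random.default_rng(0)
    ref = rng.normal(size=(100, 100))
    cur = np.roll(np.roll(ref, -2, axis=0), -3, axis=1)
    print(track_shift(ref, cur))  # -> (3, 2)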

Comparison of tracking algorithms
- Camshift: moderate speed, high accuracy; requires accurate depth correction.
- SURF: low accuracy; requires significant variability in the scene.
- Kinect Fusion: fast; requires lots of variation in the scene, and develops random rotations when tracking is not accurate.
- Least squares: very slow; tracking will fail if the object's speed is too large, and some variation in the scene is required.
All of the above methods were tested. The following slides show results for the least squares method.

Results – Kinect 2.0 lateral position tracking

Results – Kinect 2.0 vertical position tracking

Results – Kinect 2.0 depth position tracking

Results – Kinect 2.0 dynamic tracking
Tracking of a phantom, showing the motion platform (sine curve) and the tracked position (x marks).

Tracking summary
- Camshift: standard deviation 0.5 mm; fast; lighting dependent; very high accuracy; high ability to deal with deformations; 4 degrees of freedom.
- SURF: standard deviation 14.1 mm; medium speed; lighting independent; very low accuracy; moderate ability to deal with deformations.
- Kinect Fusion: standard deviation 3.0 mm; slow; low accuracy; 6 degrees of freedom.
- Least squares: standard deviation 1.5 mm; 3 degrees of freedom.

Tracking the motion of a volunteer in a treatment bunker
Baseline movements of a volunteer over 25 seconds.

Tracking the motion of a volunteer in a treatment bunker
Volunteer coughing between 15 and 20 seconds.

Tracking the motion of a volunteer in a treatment bunker
Comparison with baseline (normal breathing was detectable but not beyond tolerance, and very similar to baseline):
- Heavy breathing: significantly increased motion
- Coughing: large motion during coughing
- Looking around: increased motion
- Moving buttocks: massive disruption followed by a return to baseline
- Talking / moving arms: largely increased motion

Conclusions
- The Kinect 2.0 performs better than the Kinect 1.0.
- Small (sub-millimetre) motions can be observed in the depth direction.
- Tracking works accurately in real time.
- Isocentre mapping can accurately transform coordinate systems.
- The horizontal and vertical directions are limited by the large FOV, so only movements larger than 5 mm can be detected.
- Magnification, or an improvement of the horizontal and vertical resolution, could allow this system to be used in a clinical environment.

Acknowledgements
- St George's Hospital, for use of radiation therapy facilities
- Microsoft, for acceptance into the alpha-testing programme