RGBD Camera Integration into CamC
Computer Integrated Surgery II, Spring 2015
Han Xiao, under the auspices of Professor Nassir Navab, Bernhard Fuerst, and Javad Fotouhi

Introduction

The project builds on the original Camera Augmented Mobile C-arm (CamC), which provides guidance for trauma and orthopedic surgery [1]. Without proper tracking, finding targets in the X-ray view is difficult and requires multiple X-ray shots, which increases the radiation exposure for both patients and surgeons. CamC provides guidance and thus reduces radiation exposure. The current system is illustrated in Fig. 1. In this project, the system is improved by integrating a depth camera: using camera calibration and multi-view geometry, a depth map corresponding to the CCD camera is reconstructed, and an improved X-ray overlay is rendered.

Figure 1. Original CamC view. Problem: hands and tools are covered by the X-ray overlay.

The Problem

Lack of depth perception: The original CamC system has only a single CCD camera; therefore, the X-ray overlay is always rendered on top of the optical video, which results in an unrealistic view (Fig. 1).
Need for better visualization: A better visualization for the CamC system is needed to improve usability.

The Solution

- Fusion of optical and X-ray views: A registration with a 2D affine transformation matrix is performed between an acquired CCD camera image and an X-ray image.
- CCD camera and Kinect RGB camera calibration: Calibration is performed on the Kinect RGB camera and the CCD camera to obtain the intrinsic and extrinsic camera parameters [2]. The Kinect depth image is registered to its RGB image in OpenNI. Next, a point cloud is computed and transformed into the CCD camera coordinate frame.
- 3D to 2D projection: The 3D points are projected onto the CCD camera image plane with a projection matrix T.
- Rendering: A base depth map is recorded every time an X-ray image is acquired. By subtracting the current depth map from the base depth map, a mask is created that determines where the X-ray overlay is rendered.

Minimal, illustrative code sketches of these steps are given below, after the references.

Figure 3. System architecture.

Outcomes and Results

Improved X-ray overlay: As illustrated in Fig. 2, the improved overlay gives better depth perception, without hands and tools being blocked.
Software plugin for ImFusion [3]:
- Multi-video capturing
- Processing and registration of the X-ray image
- Depth map reconstruction
- X-ray overlay rendering

Figure 2. Illustration of the improved X-ray overlay: hands and tools (black foam) are rendered on top of the X-ray, while the "patient body" is rendered below the X-ray.

Future Work

- Usability study to evaluate the enhanced visualization.
- Better depth interpolation and blending.
- Parallel implementation on the GPU for speed-up.
- Further applications of the depth data.

References

[1] Navab, Nassir, S.-M. Heining, and Joerg Traub. "Camera Augmented Mobile C-arm (CAMC): Calibration, Accuracy Study, and Clinical Applications." IEEE Transactions on Medical Imaging 29.7 (2010): 1412-1423.
[2] Zhang, Z. "A Flexible New Technique for Camera Calibration." IEEE Transactions on Pattern Analysis and Machine Intelligence 22.11 (2000): 1330-1334.
[3] http://www.imfusion.de/

Support and Acknowledgements

Thank you to Bernhard Fuerst, Javad Fotouhi, and Singchun Lee for their help with the system setup, software tutoring, and algorithm development. Thank you to Dr. Nassir Navab for his support and the original CamC idea. Supported by the Engineering Research Center for Computer Integrated Surgical Systems and Technology.
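
Code Sketches (illustrative)

The following sketches are written in Python with OpenCV and NumPy for readability; the actual project was implemented as an ImFusion C++ plugin, so these illustrate the ideas rather than reproduce the project code. First, the fusion of the optical and X-ray views: a 2D affine transform can be estimated from corresponding point pairs (how the correspondences are obtained here is an assumption) and used to resample the X-ray image into the CCD camera frame.

```python
# Sketch: 2D affine registration between a CCD camera frame and an X-ray image.
# Assumes corresponding 2D point pairs (e.g. from markers visible in both views)
# are already available; the CamC plugin may obtain correspondences differently.
import cv2
import numpy as np

def register_xray_to_ccd(pts_xray, pts_ccd, xray_img, ccd_shape):
    """Estimate a 2D affine transform mapping X-ray pixels into the CCD image."""
    A, inliers = cv2.estimateAffine2D(np.asarray(pts_xray, np.float32),
                                      np.asarray(pts_ccd, np.float32),
                                      method=cv2.RANSAC)
    h, w = ccd_shape[:2]
    # resample the X-ray image into the CCD camera frame
    xray_in_ccd = cv2.warpAffine(xray_img, A, (w, h))
    return A, xray_in_ccd
```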
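Camera calibration step: a minimal sketch of Zhang's method [2] as implemented in OpenCV, applied to one camera (Kinect RGB or CCD) using checkerboard images. The board dimensions and square size below are placeholder values, not the ones used in the project.

```python
# Sketch: intrinsic calibration of one camera from checkerboard images,
# following Zhang's method [2] via OpenCV. Board size and square size are
# placeholders; replace them with the actual calibration target parameters.
import cv2
import numpy as np

def calibrate(images, board=(9, 6), square=0.025):
    # 3D coordinates of the checkerboard corners in the board's own frame
    objp = np.zeros((board[0] * board[1], 3), np.float32)
    objp[:, :2] = np.mgrid[0:board[0], 0:board[1]].T.reshape(-1, 2) * square
    obj_pts, img_pts = [], []
    for img in images:
        gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
        found, corners = cv2.findChessboardCorners(gray, board)
        if found:
            obj_pts.append(objp)
            img_pts.append(corners)
    rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
        obj_pts, img_pts, gray.shape[::-1], None, None)
    return K, dist   # intrinsic matrix and distortion coefficients
```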
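Depth map reconstruction for the CCD view: assuming the Kinect depth image is already registered to its RGB image (done via OpenNI in the project), the depth map can be back-projected to a point cloud, transformed into the CCD camera frame with the calibrated extrinsics, and projected with the CCD intrinsics. The variable names (K_rgb, K_ccd, T_rgb_to_ccd) are illustrative assumptions, not names from the project.

```python
# Sketch: back-project a registered Kinect depth map into a 3D point cloud,
# transform it into the CCD camera frame, and project it onto the CCD image
# plane to obtain a depth map aligned with the CCD view.
import numpy as np

def depth_in_ccd_view(depth_rgb, K_rgb, K_ccd, T_rgb_to_ccd, ccd_shape):
    h, w = depth_rgb.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth_rgb.reshape(-1).astype(np.float64)          # depth in metres
    valid = z > 0
    pix = np.stack([u.reshape(-1), v.reshape(-1), np.ones(h * w)])[:, valid]
    pts_rgb = np.linalg.inv(K_rgb) @ pix * z[valid]       # 3xN points, RGB frame
    pts_h = np.vstack([pts_rgb, np.ones(pts_rgb.shape[1])])
    pts_ccd = (T_rgb_to_ccd @ pts_h)[:3]                  # 3xN points, CCD frame
    proj = K_ccd @ pts_ccd                                # project with intrinsics
    uc = np.round(proj[0] / proj[2]).astype(int)
    vc = np.round(proj[1] / proj[2]).astype(int)
    H, W = ccd_shape[:2]
    depth_ccd = np.full((H, W), np.inf)
    inside = (uc >= 0) & (uc < W) & (vc >= 0) & (vc < H)
    # keep the nearest point per pixel (a very simple z-buffer)
    np.minimum.at(depth_ccd, (vc[inside], uc[inside]), pts_ccd[2, inside])
    return depth_ccd
```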
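Rendering step: the X-ray overlay is shown only where the current depth map does not indicate an object (a hand or tool) in front of the baseline surface recorded at X-ray acquisition time. The blending weight and depth threshold below are illustrative values, not taken from the project.

```python
# Sketch: composite the X-ray overlay onto the optical video using the mask
# derived from the base and current depth maps (both already in the CCD view).
# base_depth was recorded when the X-ray was acquired; 5 mm is a placeholder
# threshold and 0.5 a placeholder blending weight.
import numpy as np

def composite(ccd_img, xray_in_ccd, base_depth, cur_depth, alpha=0.5, thresh=0.005):
    # pixels where the current surface is closer than the baseline are occluded
    # (something has moved in front of the patient) and keep the optical video
    occluded = (base_depth - cur_depth) > thresh
    blend = (alpha * xray_in_ccd + (1 - alpha) * ccd_img).astype(ccd_img.dtype)
    out = ccd_img.copy()
    out[~occluded] = blend[~occluded]   # X-ray overlay only on unoccluded pixels
    return out
```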