RGBD Camera Integration into CamC
Computer Integrated Surgery II, Spring 2015
Han Xiao, under the auspices of Professor Nassir Navab, Bernhard Fuerst, and Javad Fotouhi

Introduction
The project is based on the original Camera Augmented Mobile C-arm (CamC), which provides guidance for trauma and orthopedic surgery [1]. Without proper tracking, finding targets in the X-ray view is difficult and requires multiple X-ray shots, which also increases the radiation exposure for both patients and surgeons. CamC provides guidance and thus reduces radiation exposure. The current system is illustrated in Fig. 1. The system is improved by integrating a depth camera: using camera calibration and multi-view geometry, a depth map corresponding to the CCD camera is reconstructed, and consequently an improved X-ray overlay is rendered.

Figure 1. Original CamC view. Problem: hands and tools are covered by the X-ray overlay.
Figure 2. Illustration of the improved X-ray overlay: hands and tools (black foam) are rendered on top of the X-ray, while the "patient body" is rendered below it.

The Problem
- Lack of depth perception: The original CamC system has only a single CCD camera; therefore, the X-ray overlay is always rendered on top of the optical video, which results in an unrealistic view (Fig. 1).
- Need for better visualization: A better visualization for the CamC system is needed to improve its usability.

The Solution
- Fusion of optical and X-ray views: A registration with a 2D affine transformation matrix is performed between an acquired CCD camera image and an X-ray image.
- CCD camera and Kinect RGB camera calibration: Calibration is performed on the Kinect RGB camera and the CCD camera to obtain the intrinsic and extrinsic camera parameters [2]. The Kinect depth image is registered with its RGB image in OpenNI. Next, a point cloud is computed and transformed into the CCD camera coordinate frame.
- 3D to 2D projection: The 3D points are projected onto the CCD camera image plane with a projection matrix T.
- Rendering: A base depth map is recorded every time an X-ray image is acquired. By subtracting the current depth map from the base depth map, a mask is created to render the X-ray overlay.
Illustrative code sketches of these steps follow the references below.

Figure 3. System architecture.

Outcomes and Results
- Improved X-ray overlay: As illustrated in Fig. 2, an improved perception is achieved without hands and tools being blocked.
- Software plugin for ImFusion [3]:
  - Multi-video capturing
  - Processing and registration of the X-ray image
  - Depth map reconstruction
  - X-ray overlay rendering

Future Work
- Usability study to evaluate the enhanced visualization.
- Better depth interpolation and blending.
- Parallel implementation on the GPU for speed-up.
- Further applications of the depth data.

References
[1] Navab, Nassir, S.-M. Heining, and Joerg Traub. "Camera Augmented Mobile C-arm (CAMC): Calibration, Accuracy Study, and Clinical Applications." IEEE Transactions on Medical Imaging 29.7 (2010).
[2] Zhang, Z. "A Flexible New Technique for Camera Calibration." IEEE Transactions on Pattern Analysis and Machine Intelligence 22.11 (2000): 1330-1334.
[3]

Support and Acknowledgements
Thank you to Bernhard Fuerst, Javad Fotouhi, and Singchun Lee for providing help in system setup, software tutoring, and algorithm development. Thank you to Dr. Nassir Navab for providing support and the original CamC idea. Engineering Research Center for Computer Integrated Surgical Systems and Technology.
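Code Sketches
The following sketches are illustrative only and are not the project's plugin code. This first one shows the optical/X-ray fusion step under the assumption that OpenCV is available and that four or more corresponding 2D points between the X-ray image and the CCD camera image are known (for example, from markers visible in both modalities); the function and variable names (fuse_xray_with_optical, ccd_pts, xray_pts) are hypothetical.

    import cv2
    import numpy as np

    def fuse_xray_with_optical(ccd_img, xray_img, ccd_pts, xray_pts, alpha=0.5):
        # Estimate the 2D affine transform that maps X-ray pixels to CCD pixels.
        A, _inliers = cv2.estimateAffine2D(xray_pts, ccd_pts)
        h, w = ccd_img.shape[:2]
        # Warp the X-ray image into the CCD camera view.
        xray_warped = cv2.warpAffine(xray_img, A, (w, h))
        if xray_warped.ndim == 2:  # grayscale X-ray -> 3 channels for blending
            xray_warped = cv2.cvtColor(xray_warped, cv2.COLOR_GRAY2BGR)
        # Plain alpha blend; the real system masks the overlay (see the last sketch).
        return cv2.addWeighted(ccd_img, 1.0 - alpha, xray_warped, alpha, 0.0)

estimateAffine2D fits the 2D affine model mentioned in The Solution in a robust least-squares sense; the actual CamC registration may use a different point selection or solver.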
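The next sketch covers the depth map reconstruction in the CCD camera view. It assumes the Kinect depth image has already been registered to the Kinect RGB camera (as done via OpenNI) and that the intrinsics K_rgb and K_ccd and the rigid extrinsics R, t (RGB camera to CCD camera) come from a Zhang-style calibration [2]; all names and the nearest-pixel splatting are illustrative simplifications.

    import numpy as np

    def depth_to_points(depth_m, K_rgb):
        # Back-project every depth pixel to a 3D point in RGB-camera coordinates.
        fx, fy = K_rgb[0, 0], K_rgb[1, 1]
        cx, cy = K_rgb[0, 2], K_rgb[1, 2]
        v, u = np.indices(depth_m.shape)
        z = depth_m
        x = (u - cx) * z / fx
        y = (v - cy) * z / fy
        pts = np.stack([x, y, z], axis=-1).reshape(-1, 3)
        return pts[pts[:, 2] > 0]  # drop invalid (zero-depth) pixels

    def project_to_ccd(pts_rgb, R, t, K_ccd, ccd_shape):
        # Rigidly transform into CCD-camera coordinates, then project with K_ccd.
        pts_ccd = pts_rgb @ R.T + t
        uvw = pts_ccd @ K_ccd.T
        uv = uvw[:, :2] / uvw[:, 2:3]
        # Build a sparse depth map on the CCD image grid
        # (last projected point wins; no z-buffering or interpolation).
        depth_ccd = np.zeros(ccd_shape, dtype=np.float32)
        u = np.round(uv[:, 0]).astype(int)
        v = np.round(uv[:, 1]).astype(int)
        ok = (u >= 0) & (u < ccd_shape[1]) & (v >= 0) & (v < ccd_shape[0])
        depth_ccd[v[ok], u[ok]] = pts_ccd[ok, 2]
        return depth_ccd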
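Finally, a sketch of the mask-based rendering step, assuming metric depth maps already aligned to the CCD view (for example, from the projection above). The 15 mm change threshold is an assumed value, not taken from the project.

    import numpy as np

    def overlay_mask(depth_base_m, depth_now_m, thresh_m=0.015):
        valid = (depth_base_m > 0) & (depth_now_m > 0)
        # True where the scene is unchanged since the X-ray was acquired,
        # i.e. no hand or tool has moved closer to the camera there.
        return valid & (depth_base_m - depth_now_m < thresh_m)

    def render_overlay(ccd_img, xray_warped, mask, alpha=0.5):
        out = ccd_img.astype(np.float32).copy()
        m = mask[..., None]  # broadcast the mask over the color channels
        blended = (1.0 - alpha) * out + alpha * xray_warped.astype(np.float32)
        # Show the X-ray overlay only where the mask is True, so hands and
        # tools that entered the scene stay on top of the overlay.
        return np.where(m, blended, out).astype(np.uint8)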