
Validation of Fringe-Projection Measurements Using Inverse Fringe Projection
By: Mohammad Qudeisat
Supervisor: Dr. Francis Lilley

Headlines
- Introduction
- Problem Statement
- Inverse Fringe Projection
  - Introduction to the idea
  - Camera-projector mapping
  - Generating and using the inverse fringe image
  - Calculating errors in the object phase-map
- Summary
- Future Work

Introduction
3D shape measurement is a common problem with many applications. One widely used approach is fringe projection: a straight fringe pattern is projected onto the object and then captured by a camera. The object's shape deforms the fringe pattern, and we analyze these deformations to calculate the depth map of the object.

3D Shape Measurement using Fringe Projection
Step 1: Generate a straight fringe pattern.
Step 2: Project the fringe pattern onto the object.
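As an illustration of Step 1, a straight sinusoidal fringe pattern might be generated as in the minimal sketch below; the resolution and fringe pitch are assumed values, not taken from the presentation.

```python
import numpy as np

def straight_fringe(width=1024, height=768, pitch=16):
    """Generate a straight sinusoidal fringe pattern.

    pitch is the fringe period in pixels (an assumed value).
    Intensity varies along the columns, so the fringes are
    straight stripes; returns an 8-bit image for the projector.
    """
    x = np.arange(width)
    # Intensity oscillates between 0 and 255 across the columns.
    row = 127.5 * (1.0 + np.cos(2.0 * np.pi * x / pitch))
    pattern = np.tile(row, (height, 1))
    return pattern.astype(np.uint8)

fringes = straight_fringe()  # this image is sent to the projector
```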

3D Shape Measurement using Fringe Projection (cont'd)
Step 3: Calculate the phase map.
Step 4: Use the phase map to obtain the depth map, through a process that relates phase changes to depth changes, called "system calibration".
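The presentation does not say which phase-extraction method is used in Step 3; as one common possibility, here is a four-step phase-shifting sketch that recovers the wrapped phase from four captured fringe images:

```python
import numpy as np

def wrapped_phase(images):
    """Four-step phase shifting: recover the wrapped phase from four
    captured fringe images whose phase offsets are 0, pi/2, pi, 3pi/2.

    With I_n = A + B*cos(phi + n*pi/2):
        I3 - I1 = 2*B*sin(phi),  I0 - I2 = 2*B*cos(phi),
    so atan2 recovers phi wrapped into (-pi, pi]. A separate phase
    unwrapping step is needed before calibration is applied.
    """
    i0, i1, i2, i3 = (im.astype(np.float64) for im in images)
    return np.arctan2(i3 - i1, i0 - i2)
```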

Problem Statement
Fringe-projection measurements can contain errors (noise, sharp edges, ripples, etc.), so we need a way to validate our measurements.
- Simply repeating the measurement does not help: it reproduces much the same results, errors included.
- Measuring the object shape with a different device can be a solution, but it produces a different perspective of the object shape, and brings problems of complexity, cost and completeness.
- We therefore want to validate our measurements using the same devices used in the measurement process.

Inverse-Fringe Projection – The Idea
To measure an object, we project a straight fringe pattern onto the object, capture the deformed fringe pattern, and use it to calculate the phase map.
The inverse-fringe projection method reverses this whole operation: from the phase map obtained in the measurement, we generate a deformed fringe pattern such that, when projected onto the object, it produces a straight fringe pattern at the camera.

Inverse-Fringe Projection – The Idea
[Two figures: from this (the measured phase map), we generate and project this (the inverse fringe pattern).]

Inverse-Fringe Projection – The Idea
[Two figures: the straight fringe image we want to capture, and the image we actually capture in practice.]

Measurement Validation Steps using Inverse-Fringe Projection
1. Camera-projector mapping
2. Defining the wanted camera image
3. Generating and projecting the inverse-fringe pattern
4. Capturing the fringe image using the camera
5. Calculating the phase-error map, that is, the phase difference between the wanted and the captured phase maps

Step 1: Camera-Projector Mapping
For each pixel in the camera, we need to find the corresponding pixel(s) in the projector with sub-pixel accuracy. This is how camera pixels "see" projector pixels.

Camera-Projector Mapping
How do we find the projector pixel (or location) pp(i,j) that corresponds to camera pixel pc(l,m)?
Idea: project horizontal and vertical fringe patterns, and calculate the phase maps for both the projected and the captured patterns. Camera and projector pixels that have equal horizontal and vertical phase values correspond to each other.

Camera-Projector Mapping (Procedure)
1. Project and grab a horizontal fringe pattern.
2. Project and grab a vertical fringe pattern.
3. Calculate the horizontal and vertical phase maps for both the camera and the projector.
4. For each pixel in the camera, find the corresponding pixel(s) in the projector by matching the horizontal and vertical phase values in the camera image with their counterparts in the projector image, using interpolation for sub-pixel accuracy (a sketch follows below).
5. We now have a map that relates camera pixels to projector pixels.
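A minimal sketch of step 4, assuming unwrapped phase maps that increase monotonically along the relevant axis, and following the row/column convention of the worked example on the next slide; all function and variable names here are illustrative, not from the presentation.

```python
import numpy as np

def match_phase(phase_profile, target):
    """Sub-pixel index at which a monotonically increasing 1-D phase
    profile reaches `target`, found by linear interpolation.
    Assumes the phase map has already been unwrapped.
    """
    return np.interp(target, phase_profile, np.arange(phase_profile.size))

def camera_to_projector(cam_h, cam_v, proj_h, proj_v, l, m):
    """Map camera pixel (l, m) to projector coordinates (i, j).

    cam_h/cam_v: unwrapped horizontal/vertical phase maps seen by the camera.
    proj_h/proj_v: the same phase maps in projector coordinates.
    For an ideal projected pattern, proj_v varies only along the rows and
    proj_h only along the columns, so a single profile of each suffices.
    """
    i = match_phase(proj_v[:, 0], cam_v[l, m])  # vertical phase -> row
    j = match_phase(proj_h[0, :], cam_h[l, m])  # horizontal phase -> column
    return i, j
```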

Camera-Projector Mapping – Horizontal Correspondence
[Two figures: the projected pattern and the grabbed (camera) image.]

Camera-Projector Mapping (Procedure) – Example
For camera pixel (100, 100):
- Horizontal phase value = 50.71
- Vertical phase value = 36.94
We search the projector phase maps:
- Horizontal phase map: pixels (*, 123) and (*, 124) have phase values 50.20 and 50.83
- Vertical phase map: pixels (270, *) and (271, *) have phase values 36.75 and 37.44
Using linear interpolation, we find that pixel (100, 100) in the camera corresponds to pixel (270.34, 123.871) in the projector.
We repeat this procedure for all camera pixels to obtain a complete correspondence between camera and projector pixels.
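This interpolation can be checked with np.interp. Note that with the rounded phase values printed above the result differs slightly from the slide's (270.34, 123.871), which was presumably computed from unrounded phases:

```python
import numpy as np

# Phase values for camera pixel (100, 100), taken from the slide.
row = np.interp(36.94, [36.75, 37.44], [270, 271])  # vertical phase -> row
col = np.interp(50.71, [50.20, 50.83], [123, 124])  # horizontal phase -> column

print(row, col)  # ~270.275, ~123.810 (slide: 270.34, 123.871,
                 # presumably from unrounded phase values)
```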

Step 2: Defining the Wanted-Fringe Image
This is the easiest step: normally, we simply want to capture a straight fringe pattern, something similar to this image.

Step 3: Generating the Inverse-Fringe Image
The inverse-fringe image is a function of both the camera-projector mapping and the wanted fringe image:
I_inv(i,j) = I_w(l(i,j), m(i,j))
1. For each pixel pp(i,j) in the projected image, find the (supposed-to-be) corresponding camera pixel pc(l,m) from the camera-projector mapping, with sub-pixel accuracy.
2. Fill the projector pixel pp(i,j) with the intensity value of the wanted camera image at pixel pc(l,m).
3. Repeat this operation for all projector pixels that are in the view of the camera (see the sketch below).
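A minimal sketch of this remapping, assuming the camera-projector mapping has been stored as two float arrays (l_map, m_map) giving the camera coordinates for each projector pixel; scipy's map_coordinates performs the sub-pixel sampling. All names are illustrative.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def inverse_fringe(wanted, l_map, m_map, valid):
    """Build the inverse-fringe image I_inv(i,j) = I_w(l(i,j), m(i,j)).

    wanted: the wanted camera image I_w (e.g. a straight fringe pattern).
    l_map, m_map: for each projector pixel (i, j), the corresponding
                  camera row/column, with sub-pixel precision.
    valid: boolean mask of projector pixels that are in the camera's view.
    """
    coords = np.stack([l_map.ravel(), m_map.ravel()])
    # order=1 gives bilinear sampling of the wanted image at the
    # (generally non-integer) camera coordinates.
    inv = map_coordinates(wanted.astype(np.float64), coords,
                          order=1, mode='constant', cval=0.0)
    inv = inv.reshape(l_map.shape)
    inv[~valid] = 0  # projector pixels the camera cannot see stay dark
    return inv.astype(np.uint8)
```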

Step 4: Using the Inverse-Fringe Image
1. Project the inverse-fringe image onto the object.
2. Capture the image using the camera.

Step 5: Calculating the Phase-Error Map
- Ideally, the projected inverse-fringe image would be captured as a completely straight fringe pattern.
- In practice, there are always various types of errors. These errors originate in the object phase map and propagate into the camera-projector mapping.
- Errors in the mapping result in an inverse-fringe image that does NOT produce a 100% straight fringe image at the camera.
- To calculate the phase-error map, we simply calculate the phase difference between the wanted fringe image and the captured inverse-fringe image (a sketch follows below).
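A sketch of this final comparison, assuming phase maps have already been extracted from both the wanted and the captured fringe images (the presentation does not specify the phase-extraction method); wrapping the difference removes 2π ambiguities.

```python
import numpy as np

def phase_error_map(phase_wanted, phase_captured):
    """Phase-error map: the difference between the wanted phase map and
    the phase map extracted from the captured inverse-fringe image,
    wrapped back into (-pi, pi] so that 2*pi offsets cancel out.

    Any consistent phase-extraction method (phase stepping, Fourier
    analysis, ...) can supply the two input phase maps.
    """
    diff = phase_captured - phase_wanted
    return np.arctan2(np.sin(diff), np.cos(diff))
```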

Calculating the Phase-Error Map
[Two figures: the two images whose phase difference we calculate.]

Calculating the Phase-Error Map
[Figure: the resulting phase-error map.]

Another Example, with a Major Error
[Two figures: the depth map and the captured inverse-fringe image.]

Another Example, with a Major Error (cont'd)
[Figure: the phase-error map.]

Summary
- A measurement-validation method using the inverse-fringe projection technique was proposed.
- This method is simple, accurate, and does not need any additional hardware.
- Using this method, phase-map errors can be detected and quantitatively measured.

Future Work
- Currently, the method can quantitatively measure errors in the phase map; we aim to achieve a quantitative measure of errors in the depth map.
- Currently, the method can only detect errors; we aim to gain the ability to correct them.
- I am also working on reducing the computational complexity of the algorithm, so that it can be used in our real-time fringe-projection measurement system.

Thank You
Thank you for listening.