2013-1 Capstone Design: Implementation of a Depth Sensor Based on Structured Infrared Patterns. June 11, 2013. School of Information and Communication Engineering, Inha University


Capstone Design: Implementation of a Depth Sensor Based on Structured Infrared Patterns
June 11, 2013
School of Information and Communication Engineering, Inha University
Song Myung Ho

Capstone Design 2: Contents
Calculation of depth using structured infrared patterns
- Introduction
- Operation principle
  - Capture two pictures at a standard (reference) distance
  - Match the images and derive a disparity map
  - Calculate depth from the disparity map
- Hardware information
  - Camera module
  - Board
- Test: from image capture to depth calculation
  - Test environment
  - Object height
  - Disparity results

Capstone Design 3: Introduction
Depth information
These days, interest in virtual reality and the need for human-machine interaction keep growing, and depth information is used in both, as well as in other areas: it is used to produce stereopsis, and in unmanned vehicles, u-health, and so on.
Application
I want to implement a depth-measuring system that informs a machine of an object's dimensions inexpensively and quickly. In particular, an assembly machine in factory automation needs the 3D size of the object being assembled.

Capstone Design 4: Principle
[Source: 'How the Kinect Works', Derek Hoiem, University of Illinois]
[Source: US patent (B2), 'Depth mapping using projected patterns']
Projector: emits infrared light in a random speckle pattern
Sensor: a camera that captures the pattern image

Capstone Design 5: Principle
IR projector pattern: random speckle (a unique pattern)
[Source: Kinect Hacking 103: Looking at Kinect IR Patterns]
Disparity: the number of pixels by which the pattern is shifted

Capstone Design 6: Calculate Depth
Depth: dZ = d * dx / (F * tan(alpha)), where
- dx: disparity distance (pixel size x number of shifted pixels)
- d: distance between a reference object and the camera
- F: focal length
- alpha: angle with tan(alpha) approximately L'/L, where L' is the distance between the camera and the projector and L is the distance between the camera and the object
[Source: US patent (B2), 'Depth mapping using projected patterns']
Matching: search for the matching dot pattern and record its disparity.
Block matching algorithm: SAD (Sum of Absolute Differences). Block matching is faster than other matching algorithms.
[Ref.: Iain E. G. Richardson (2003), H.264 and MPEG-4 Video Compression: Video Coding for Next-Generation Multimedia. Chichester: John Wiley & Sons.]

Capstone Design 7: Calculate Disparity Map
Ref. image / Cmp. image
1. Take a window from each image.
2. Sum the absolute pixel-level differences between the two windows.
3. Move the window of the compared image by 1 pixel.
4. Repeat step 2 (limit: a set search range within the same row).
5. The minimum value of the summation identifies the matching pattern.
6. Record the shift of the matching pattern as the disparity.
7. Move the window of the reference image by 1 pixel and repeat steps 1-6. When every pixel has a disparity value, the map is done.
[Image source: Effilux]
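The seven steps above amount to a 1-D SAD search along each row. A minimal pure-Python sketch (the window size and search range here are illustrative defaults, not values from the slides):

```python
def sad(ref_win, cmp_win):
    """Sum of absolute differences between two equal-size windows (step 2)."""
    return sum(abs(a - b) for a, b in zip(ref_win, cmp_win))

def row_disparity(ref_row, cmp_row, win=3, max_shift=8):
    """For each window position in the reference row, find the shift into
    the compared row with minimum SAD (steps 1-7 on this slide)."""
    disparities = []
    for x in range(len(ref_row) - win + 1):        # step 7: slide ref window
        ref_win = ref_row[x:x + win]
        best_shift, best_sad = 0, float("inf")
        for s in range(max_shift + 1):             # steps 3-4: slide cmp window
            if x + s + win > len(cmp_row):
                break
            cost = sad(ref_win, cmp_row[x + s:x + s + win])
            if cost < best_sad:                    # step 5: minimum SAD wins
                best_sad, best_shift = cost, s
        disparities.append(best_shift)             # step 6: record the shift
    return disparities

# A row whose pattern is shifted right by 2 pixels yields disparity 2 everywhere
print(row_disparity([1, 2, 3, 4, 5, 6, 7, 8],
                    [0, 0, 1, 2, 3, 4, 5, 6, 7, 8]))
```

A real implementation would run this over 2-D windows and all rows, but the cost function and the argmin search are exactly the steps listed above.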

Capstone Design 8: Hardware Information
Camera module: Aptina MT9M111 CMOS [Source: Aptina]
- Sensor: Aptina 1/3" CMOS sensor MT9M111
- Active pixels: 1280 x 1024
- Pixel size: 3.6 um x 3.6 um
- Focal length: 16 mm
- Data interface: 8-bit parallel
- Control interface: I2C
- Maximum input clock: 54 MHz
Board: RDB1768 (Code Red), MCU: LPC1768, ARM Cortex-M3 (32-bit)
- Operating clock: 100 MHz
- Internal RAM: 64 KB (no external memory)
- Flash: 512 KB
- Peripherals used: UART / I2C / GPIO
- GPIO operating clock: 25 MHz

Capstone Design 9: Test Environment
1st object (height: 52 mm), 2nd object (height: 17 mm), arranged on the ceiling

Capstone Design 10: Test Environment
Camera and IR projector installed (base distance: 110 mm, focal length: 16 mm)
Objects and sensors (distance to ceiling: 1944 mm)

Capstone Design 11: Result Images
Reference image / comparing image
Real disparity map / disparity map normalized for visibility

Capstone Design 12: Calculation: Depth
- Disparity image size: 128 x 80 pixel output with x4 zoom (vision range: 256 x 160 pixels) => effective pixel size: 7.2 um x 7.2 um
- Base distance: 110 mm
- Focal length: 16 mm
- Distance from sensor chip to ceiling: 1944 mm
- Distance from sensor chip to lens: 34 mm
- 1st object depth: 52 mm
- 2nd object depth: 17 mm
dx: camera pixel size x disparity pixels = disparity distance
d: distance between an object and the camera
F: focal length
alpha: tan(alpha) approximately L'/L, where L' is the distance between camera and projector and L is the distance between camera and object

Capstone Design 13: Calculation: Depth
Disparity (in pixels): (24-5)/16 = 1.1875, (5-5)/16 = 0, (63-5)/16 = 3.625
dx = 0.0072 mm x number of shifted pixels
d = 1944 mm
F = 16 mm
alpha = 110/1944 = 0.0566 (rad), tan(alpha) = 0.0566
dZ = d * dx / (F * tan(alpha)) = 15.46 mm x real disparity
dZ: 18 mm, 0 mm (ref.), 56 mm
Error: (18-17)/17 = 5.88%, (56-52)/52 = 7.69%
(The factor of about 15.46 mm per whole pixel of disparity is the depth resolution.)
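The numbers on this slide can be reproduced from the slide-12 parameters; a minimal sketch (the 1/16-subpixel scaling and the 5-count offset are read off the disparity expressions above):

```python
PIXEL_MM, D_MM, F_MM, BASE_MM = 0.0072, 1944.0, 16.0, 110.0
TAN_ALPHA = BASE_MM / D_MM          # tan(alpha) ~= L'/L ~= 0.0566

def depth_mm(raw_disparity, offset=5, subpixel=16):
    """dZ = d * dx / (F * tan(alpha)), with raw disparity reported in
    1/16-pixel counts plus a 5-count offset: e.g. (24-5)/16 pixels."""
    shifted_pixels = (raw_disparity - offset) / subpixel
    dx = PIXEL_MM * shifted_pixels  # disparity distance on the sensor (mm)
    return D_MM * dx / (F_MM * TAN_ALPHA)

# First object: raw 24 (true height 17 mm); second: raw 63 (true height 52 mm)
for raw, true_mm in [(24, 17), (63, 52)]:
    dz = round(depth_mm(raw))
    err_pct = round((dz - true_mm) / true_mm * 100, 2)
    print(dz, err_pct)   # prints 18 5.88, then 56 7.69
```

This reproduces the slide's dZ values of 18 mm and 56 mm and the 5.88% / 7.69% error figures.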

Capstone Design 14: Calculation: Depth
1. height: 8.3 mm
2. height: 16.6 mm
3. height: 24.9 mm
4. height: 33.2 mm

Capstone Design 15: Result: Depth Accuracy
If the disparity is computed as a floating-point value, the accuracy is 90.69%; but this is not the real depth, because the actual disparity output is an integer.

Capstone Design 16: Result: Depth Accuracy
This is the real (integer-disparity) depth. The error is far larger.

Capstone Design 17: Calculation: Depth
1. height: 16.6 mm
2. height: 32.45 mm
3. height: 45.5 mm
4. height: 66.4 mm

Capstone Design 18: Result: Depth Accuracy
The depth resolution is about 15.5 mm per integer disparity pixel. The heights of stairs 1, 2, and 3 lie near multiples of the depth resolution, but stair 4 does not; accordingly, the error rate of stairs 1-3 is small while that of stair 4 is large. So depth resolution is the most important factor in accuracy. (Finer resolution can be obtained with a longer focal length and a smaller pixel size.)
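The resolution figure follows from the slide-12 parameters; a short sketch (the ~15.5 mm value and the multiples are computed here, not printed on the original slide):

```python
PIXEL_MM, D_MM, F_MM, BASE_MM = 0.0072, 1944.0, 16.0, 110.0

# Depth change for one whole pixel of disparity: dZ = d * pixel / (F * tan(alpha))
resolution_mm = D_MM * PIXEL_MM / (F_MM * (BASE_MM / D_MM))
print(round(resolution_mm, 1))  # about 15.5 mm per integer disparity pixel

# Slide-17 stair heights as multiples of the resolution: stairs 1-3 sit near
# 1x, 2x, 3x, while stair 4 (66.4 mm) is well off the 4x multiple (~61.8 mm).
for height_mm in (16.6, 32.45, 45.5, 66.4):
    print(height_mm, round(height_mm / resolution_mm, 2))
```

Quantization to the nearest multiple of ~15.5 mm is what makes stair 4's integer-disparity error so much larger than the others'.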

Capstone Design 19: References
- US patent, 'Method and system for object reconstruction' (2010)
- US patent, 'Depth mapping using projected patterns' (2012)
- Derek Hoiem (2012), 'How the Kinect Works'
- Stefano Mattoccia (2012), 'Stereo Vision: Algorithms and Applications'
- Gary Rost Bradski (2009), Learning OpenCV
- Jihong Liu, Chengyuan Wang (2009), 'An Algorithm for Image Binarization Based on Adaptive Threshold'

Capstone Design 20: Contact Information
H.P :
Thanks for listening.