CCU Vision Laboratory: Object Speed Measurements Using Motion Blurred Images. 林惠勇 (Huei-Yung Lin), Department of Electrical Engineering, National Chung Cheng University

Presentation transcript:

Slide 1: Object Speed Measurements Using Motion Blurred Images. 林惠勇 (Huei-Yung Lin), Department of Electrical Engineering, National Chung Cheng University. CCU Vision Laboratory.

Slide 2: Images …

Slide 3: Blur Images, with examples of defocus blur and motion blur.

Slide 4: What Do They Tell Us?
- Motion of the object
- Region of interest

Slide 5: Information from Blur Images
Two types of image blur:
- Defocus blur, due to the limitations of the optical sensor; used for:
  - Image restoration
  - Identification of the region of interest
  - Depth measurement
- Motion blur, due to relative motion between the camera and the scene; used for:
  - Image restoration
  - Motion analysis
  - Increasing still-image resolution from video
  - Special effects
  - Speed measurements?
(Illustration: a frame from the movie "Chicken Run".)

Slide 6: Defocus Blur (the blur circle)

Slide 7: Motion Blur

Slide 8: Speed Measurements
Why measure speed? (motivation)
- Wind
- Experiments
- Sports (baseball, tennis balls), athletes
- Vehicle speed detection
How?
- RADAR (Radio Detection and Ranging)
- LIDAR (Light Detection and Ranging)
- GPS
- Video-based analysis

Slide 9: Image-Based Speed Measurement
Key idea: for a fixed camera exposure time, the relative motion between the object and the static camera appears as motion blur in the dynamic image region.

Slide 10: Geometric Formulation
Simple pinhole camera model. Key components:
- Focal length, exposure time, CCD pixel size
- Object distance, blur length (blur extent)
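The slide lists the ingredients without stating the relation between them. A hedged reconstruction of the parallel-motion case (my own sketch, not necessarily the exact formulation used in the talk): a blur extent of K pixels on a sensor with pixel size s_x, back-projected through a pinhole of focal length f to an object at distance Z, corresponds to an object displacement of K s_x Z / f during the exposure time T.

```latex
% Parallel-motion case (assumed): blur extent K [pixels], pixel size s_x,
% focal length f, exposure time T, object distance Z; alternatively a known
% object length L whose image spans l pixels supplies the metric scale.
\[
  v \;=\; \frac{K\, s_x\, Z}{f\, T}
  \qquad\text{or}\qquad
  v \;=\; \frac{K}{l}\cdot\frac{L}{T}.
\]
```

The second form appears to be the one the later numeric slides use: the vehicle length or ball diameter provides the scale, so Z and f are not needed explicitly.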

Slide 11: Image Degradation
- Image degradation modeled as a linear space-invariant system, characterized by its point spread function (PSF) h(x,y)
- Degradation under uniform linear motion (whole image)
- What about the space-variant case? (partial blur vs. total blur)
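As a hedged reminder of the textbook model the slide refers to (horizontal motion of blur extent K assumed, with * denoting 2-D convolution and η additive noise):

```latex
% Linear space-invariant degradation model and the uniform-motion PSF
% (horizontal motion over K pixels assumed; a standard textbook form).
\[
  g(x,y) \;=\; h(x,y) \ast f(x,y) \;+\; \eta(x,y),
  \qquad
  h(x,y) \;=\;
  \begin{cases}
    1/K & 0 \le x \le K,\ y = 0,\\
    0   & \text{otherwise.}
  \end{cases}
\]
```

In the space-variant (partially blurred) case this PSF applies only inside the moving image region, which is why segmentation appears later in the pipeline.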

Slide 12: Blur Parameter Estimation
Edge detection ABCs:
- Sharp edge → step response
- Blurred edge → ramp response
How can this fact be used to estimate the blur extent?
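One simple way to turn the step-versus-ramp observation into an estimate, sketched below: measure the width of the ramp along a scanline crossing the blurred edge. The function name, thresholds, and synthetic test are illustrative assumptions, not the talk's implementation.

```python
import numpy as np

def blur_extent_from_ramp(scanline, lo_frac=0.1, hi_frac=0.9):
    """Estimate the blur extent K (pixels) from a single ramp edge.

    A sharp edge appears as a step; a motion-blurred edge appears as a ramp
    whose width approximates K.  The 10%-90% rise width is measured and then
    rescaled to the full transition.  Thresholds are illustrative only.
    """
    s = np.asarray(scanline, dtype=float)
    lo, hi = s.min(), s.max()
    if hi - lo < 1e-6:                        # flat scanline: no edge at all
        return 0
    t = (s - lo) / (hi - lo)                  # normalize to [0, 1]
    idx = np.where((t > lo_frac) & (t < hi_frac))[0]
    if idx.size == 0:                         # ideal step: zero-width ramp
        return 0
    rise = idx[-1] - idx[0] + 1
    return int(round(rise / (hi_frac - lo_frac)))

# Synthetic check: a 22-sample linear ramp between two flat regions.
edge = np.concatenate([np.zeros(50), np.linspace(0, 1, 22), np.ones(50)])
print(blur_extent_from_ramp(edge))            # prints 20, close to the true 22
```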

Slide 13: Image Deblurring
If H is linear and space-invariant:
- Inverse filtering
- Wiener filtering
Bad news: our case is space-variant, so region segmentation is needed.
(Block diagram: f(x,y) passes through the degradation function H, noise η(x,y) is added to give g(x,y), and restoration filter(s) produce the estimate of f(x,y).)
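The two filters named above have standard frequency-domain forms; a sketch, with K_n an assumed constant approximating the noise-to-signal power ratio (not a quantity given in the slides):

```latex
% Inverse filtering and Wiener filtering in the frequency domain
% (textbook forms; K_n is an assumed noise-to-signal constant).
\[
  \hat{F}_{\mathrm{inv}}(u,v) = \frac{G(u,v)}{H(u,v)},
  \qquad
  \hat{F}_{\mathrm{Wiener}}(u,v) =
  \frac{H^{*}(u,v)}{\lvert H(u,v)\rvert^{2} + K_n}\, G(u,v).
\]
```

In the space-variant case these would be applied only to the segmented moving region, which is the point of the region segmentation step.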

Slide 14: More General Case (I)
What if the object is not moving parallel to the image scanlines?
- Motion direction estimation
- Image rectification

Slide 15: Motion Direction Estimation
Fourier spectrum analysis; it can also be implemented in the spatial domain.
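A minimal sketch of one way the Fourier-spectrum analysis could be done, assuming uniform linear blur: the sinc zeros of the blur transfer function form dark stripes perpendicular to the motion, so a profile through the spectrum centre fluctuates most strongly along the motion direction. The function name, sampling radius, and scoring are illustrative assumptions.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def motion_direction(image, angles=np.arange(0.0, 180.0, 1.0)):
    """Estimate the motion-blur direction (degrees, image coordinates).

    Samples the log magnitude spectrum along rays through its centre and
    returns the angle with the strongest fluctuation (the ray that crosses
    the sinc stripes).  A sketch only; real implementations add windowing
    and more robust scoring.
    """
    spec = np.log1p(np.abs(np.fft.fftshift(np.fft.fft2(image))))
    h, w = spec.shape
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    r = np.arange(-(min(h, w) // 4), min(h, w) // 4)   # radial sample offsets
    best_angle, best_score = 0.0, -np.inf
    for a in angles:
        t = np.deg2rad(a)
        rows = cy + r * np.sin(t)
        cols = cx + r * np.cos(t)
        profile = map_coordinates(spec, [rows, cols], order=1)
        score = profile.var()
        if score > best_score:
            best_angle, best_score = a, score
    return best_angle
```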

Slide 16: More General Case (II)
What if the object is not moving parallel to the image plane?

Slide 17: Extended Camera Model

Slide 18: Required Parameters
- Intrinsic camera parameters: focal length, CCD pixel size, exposure time
- Extrinsic camera parameters: distance to the object, camera orientation
- Softball speed measurement: size of the softball (physical measurement)
- Vehicle speed detection, "parallel case": length of the vehicle (from the manufacturer's data sheet)
- Vehicle speed detection, "non-parallel case": how to obtain the parameters (z, the orientation angle, etc.)?

Slide 19: Vehicle Speed Detection
Parameters:
- K = 22 pixels, s_x = 11 μm, f = 10 mm, T = 1/160 sec
- l = 560 pixels, L = 4750 mm
Detected speed: … km/hr; video-based speed: … km/hr; speed limit: 110 km/hr
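A hedged back-of-the-envelope check of these numbers, assuming the vehicle length is meant to be 4750 mm and using the known-object-length form of the speed relation; the detected speed itself is not preserved in the transcript.

```python
# Known-object-length relation v = (K / l) * L / T (parallel case, assumed).
K, l = 22, 560          # blur extent and vehicle length in the image, pixels
L = 4.750               # vehicle length in metres (assumed 4750 mm)
T = 1.0 / 160.0         # exposure time in seconds
v = (K / l) * L / T     # metres per second
print(v * 3.6)          # about 107 km/h, consistent with the 110 km/h limit
```

Note that the focal length and pixel size are not needed in this form; they matter for the distance-based variant of the formula.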

Slide 20: Camera Pose Estimation
Theorem: given a parallelogram in 3-D space with known image projections of its four vertices, their relative depths can be determined.
To obtain the unknown scale factor:
- An absolute metric between two 3-D points
- e.g., a license plate of standard size
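One standard way to see the stated theorem, sketched in normalized (calibrated) image coordinates p̃_i = (x_i, y_i, 1)^T with unknown depths Z_i: the vertices of a parallelogram taken in order satisfy P_1 + P_3 = P_2 + P_4, and substituting the pinhole relation P_i = Z_i p̃_i gives a linear system in the depths.

```latex
% Parallelogram constraint under pinhole projection (sketch):
\[
  Z_1\,\tilde{p}_1 \;-\; Z_2\,\tilde{p}_2 \;+\; Z_3\,\tilde{p}_3 \;-\; Z_4\,\tilde{p}_4 \;=\; 0 .
\]
```

These are three equations in four unknowns, so the depths are determined up to one common scale factor, which is exactly what the absolute metric (e.g. the standard license-plate size) then supplies.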

Slide 21: Vehicle Speed Detection
Parameters:
- K = 22 pixels, s_x = 6.8 μm, T = 1/400 sec, l = 560 pixels, L = 4750 mm
- W = 320 mm, camera angle = …, f = 26 mm
Detected speed: … km/hr; video-based speed: … km/hr

Slide 22: Fully Automated? How?
- Intrinsic camera parameters? From the JPEG EXIF header (see the sketch below)
- Target identification: motion blur analysis
- Region segmentation: region growing, or an additional image capture
- Robust blur extent estimation
- Image synthesis: deblurred target region + static background region
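A minimal sketch of the EXIF step, assuming a reasonably recent Pillow; the tag IDs 33434 (ExposureTime) and 37386 (FocalLength) are the standard EXIF ones, and the function and error handling are illustrative only.

```python
from PIL import Image  # assumed dependency; any EXIF reader would do

def camera_parameters(jpeg_path):
    """Read exposure time and focal length from a JPEG's EXIF header.

    The capture settings live in the Exif sub-IFD (tag 0x8769); pixel size
    still has to come from the sensor data sheet or a camera database.
    """
    exif = Image.open(jpeg_path).getexif()
    sub = exif.get_ifd(0x8769)                       # Exif sub-IFD
    return {
        "exposure_time_s": float(sub.get(33434)),    # e.g. 1/160
        "focal_length_mm": float(sub.get(37386)),    # e.g. 10.0
    }
```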

Slide 23: Initial Target Segmentation
- Horizontal ramp edge detection
- Run-length coding or projection
- Vertical continuity checking
- Multiple-direction analysis

Slide 24: Spherical Object in Motion
- Problems in parameter estimation: accuracy, robustness, precision (subpixel resolution, …)
- A spherical object appears circular from any viewpoint
- Initial blur extent identification + circle detection: circle fitting, Hough transform (see the sketch below)
- More problems: motion blur due to rotation, three-dimensional translation, shading, etc.
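A minimal circle-detection sketch using OpenCV's Hough transform, as referenced in the list above; the file name and every parameter value are illustrative assumptions.

```python
import cv2
import numpy as np

# The ball stays circular from any viewpoint, so a Hough circle gives its
# image diameter l; the blur extent K is then measured along the estimated
# motion direction from the smeared region around it.
gray = cv2.cvtColor(cv2.imread("ball.jpg"), cv2.COLOR_BGR2GRAY)  # hypothetical file
gray = cv2.medianBlur(gray, 5)                                   # suppress noise
circles = cv2.HoughCircles(gray, cv2.HOUGH_GRADIENT, dp=1, minDist=50,
                           param1=100, param2=30, minRadius=10, maxRadius=200)
if circles is not None:
    x, y, r = np.round(circles[0, 0]).astype(int)
    print(f"centre = ({x}, {y}), image diameter l = {2 * r} pixels")
```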

Slide 25: Speed Measurement Flowchart

Slide 26: Motion Direction Estimation
- Camera pose estimation for the non-parallel case
- Two or more captures with a fast shutter speed:
  - Vertical projection
  - Post-processing
  - Fixed object size
  - Could still be blurred

Slide 27: Softball Speed Measurement
Parameters: K = 26 pixels, T = 1/320 sec, l = 72 pixels, d = … mm
Detected speed: 40.5 km/hr; video-based speed: 40.9 km/hr
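A hedged check of the softball numbers, assuming a regulation softball diameter of roughly 97 mm for the missing d and the same known-object-length relation.

```python
# v = (K / l) * d / T, with d the ball diameter (assumed: about 97 mm).
K, l = 26, 72           # blur extent and ball diameter in the image, pixels
d = 0.097               # assumed ball diameter in metres
T = 1.0 / 320.0         # exposure time in seconds
v = (K / l) * d / T     # metres per second
print(v * 3.6)          # about 40.3 km/h, close to the reported 40.5 km/h
```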

Slide 28: Conclusion
Object speed measurement using a single motion-blurred image:
- Vehicle speed detection
- Softball speed measurement
Advantages:
- Low cost: an off-the-shelf digital camera suffices
- Passive device: cannot be picked up by anti-detection (radar/laser detector) devices
- Passive device: emits no radiation or light
- Large measurement range, thanks to the adjustable shutter speed
Limitations:
- Lighting conditions
- Accuracy?

Slide 29: Thank you for your attention! Any questions?