Motion Blur Detection Ben Simandoyev & Keren Damari.
Presentation transcript:

Motion Blur Detection Ben Simandoyev & Keren Damari

Blur in Images There are two main types of blur: motion blur and out-of-focus blur.

Motion Blur Motion blur is usually created when the exposure time is long relative to the velocity of the movement.

Motion Blur Typically, motion blur creates smoothness in the image along the direction of movement, and many edges in the direction perpendicular to it.

Previous Work Most known deblurring methods require prior knowledge and use a PSF (point spread function), a kernel based on the angle and length of the motion. Figure captions: Original Picture; Blurred, angle=30, length=50; Deblurred Picture.
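
As a side note, a linear-motion PSF like the one in the figure can be sketched in a few lines. The snippet below is only an illustration (NumPy/OpenCV assumed, file names are placeholders), not the code used in this project.

```python
import cv2
import numpy as np

def motion_psf(length, angle_deg):
    """Normalized linear-motion PSF: a one-pixel-thick line of the given
    length, rotated by angle_deg degrees."""
    kernel = np.zeros((length, length), dtype=np.float32)
    kernel[length // 2, :] = 1.0                         # horizontal line of ones
    center = ((length - 1) / 2.0, (length - 1) / 2.0)
    rot = cv2.getRotationMatrix2D(center, angle_deg, 1.0)
    kernel = cv2.warpAffine(kernel, rot, (length, length))
    return kernel / kernel.sum()                          # keep overall brightness

# Illustrative usage with the parameters shown on the slide (angle=30, length=50);
# "original.jpg" and "blurred.jpg" are placeholder file names.
img = cv2.imread("original.jpg")
blurred = cv2.filter2D(img, -1, motion_psf(50, 30))
cv2.imwrite("blurred.jpg", blurred)
```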

Approach and Method 1. Edge Detection We used edge detection with high sensitivity. In a motion-blurred image, we expected to find a large number of parallel lines in the direction of movement and very few lines in other directions.
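
A minimal sketch of the high-sensitivity edge detection step, assuming OpenCV's Canny detector with deliberately low thresholds; the project's actual detector and threshold values are not specified here.

```python
import cv2

def sensitive_edges(gray):
    """Edge map with deliberately low Canny thresholds, so the faint,
    smeared edges produced by motion blur are still detected."""
    gray = cv2.GaussianBlur(gray, (3, 3), 0)    # mild denoising first
    return cv2.Canny(gray, 20, 60)              # low thresholds = high sensitivity

gray = cv2.imread("blurred.jpg", cv2.IMREAD_GRAYSCALE)
edges = sensitive_edges(gray)
```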

Approach and Method 2. Divide into Grid Cells Since the blur could be local, we can expect part of the scene to be sharp. We divided the image matrix into grid cells and looked for motion in each one of them.
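
The grid subdivision could look like the following sketch; the 6x6 (36-cell) default is just an illustrative assumption matching one of the grid sizes shown in the results.

```python
def grid_cells(image, rows=6, cols=6):
    """Yield (row, col, cell) tuples covering the image with a rows x cols grid."""
    h, w = image.shape[:2]
    ch, cw = h // rows, w // cols
    for r in range(rows):
        for c in range(cols):
            yield r, c, image[r * ch:(r + 1) * ch, c * cw:(c + 1) * cw]
```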

Approach and Method 3. Hough Transform The next step was to find parallel lines in the edge map. We used the Hough transform for lines and searched for the dominant direction.
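
The per-cell Hough step might be sketched as below: detect lines in the cell's edge map, histogram their angles, and take the most common one as the dominant direction, returning -1 when too few lines agree. The Hough parameters and the voting criterion here are assumptions, not the project's actual values.

```python
import cv2
import numpy as np

def dominant_direction(cell_edges, min_lines=10):
    """Return the dominant line angle (in degrees) found in one cell's edge
    map, or -1 if no direction clearly dominates."""
    lines = cv2.HoughLines(cell_edges, 1, np.pi / 180, threshold=30)
    if lines is None or len(lines) < min_lines:
        return -1
    # HoughLines returns the angle of each line's normal; histogram them.
    thetas = np.degrees(lines[:, 0, 1]).astype(int) % 180
    hist = np.bincount(thetas, minlength=180)
    peak = int(hist.argmax())
    if hist[peak] < min_lines:
        return -1
    # The line itself is perpendicular to its normal; how this maps to the
    # reported motion angle depends on the image-coordinate convention.
    return (peak + 90) % 180
```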

Approach and Method 3. Hough Transform Theta = 27; the computed angle is 153.

Results The result is a matrix that represents the direction of blur detected in each grid cell, or -1 if motion blur was not found. Here the results are shown as blue arrows.
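
Putting the pieces together, the per-cell result matrix described above could be assembled as in this sketch, reusing the hypothetical helpers sensitive_edges, grid_cells, and dominant_direction from the earlier snippets.

```python
import cv2
import numpy as np

def detect_motion_blur(path, rows=6, cols=6):
    """Return a rows x cols matrix of blur directions in degrees,
    with -1 in cells where no motion blur was detected."""
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    edges = sensitive_edges(gray)
    directions = np.full((rows, cols), -1, dtype=int)
    for r, c, cell in grid_cells(edges, rows, cols):
        directions[r, c] = dominant_direction(cell)
    return directions

print(detect_motion_blur("blurred.jpg"))   # placeholder file name
```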

Results [Example result images with per-image run times in seconds; the values were not captured in this transcript.]

Results Better results can be achieved with different grid sizes, depending on the picture size, the size of the moving object, and how quickly the motion direction changes. Figure captions: Division into 36 grid cells; Division into 64 grid cells.


Conclusions Recognition of motion-blurred images gives good results: on average, the motion direction is correctly recognized in about 80% of the grid cells. A few pictures require a lower threshold, i.e. a more sensitive edge detector. The average run time for a picture of size 1500x1000 is about 3.6 seconds. The majority of images with motion blur are recognized, but a considerable number of images without blur are also reported as having motion blur. Our suggestion is to run an algorithm that identifies blurred areas before our algorithm, and to try to detect motion only in those areas.
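
One common way to realize the suggested pre-filtering step (flagging blurred areas before running the direction detector) is a per-cell variance-of-Laplacian test; this is only an illustrative possibility, not something evaluated in the project, and the threshold below is an arbitrary assumption.

```python
import cv2

def is_blurred(cell_gray, threshold=100.0):
    """Heuristic blur test: a low variance of the Laplacian suggests the
    cell lacks sharp detail and may be blurred. The threshold would need
    tuning per image set."""
    return cv2.Laplacian(cell_gray, cv2.CV_64F).var() < threshold
```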

Conclusions When the algorithm successfully finds motion in a grid cell, the returned direction is a fairly good estimate of the real motion direction in that cell. Very dark or very bright areas of motion are more difficult to identify and require a lower edge-detection threshold. In some pictures the edge detector finds false edges near the image borders, so motion recognition is more likely to fail in those areas and give false results. Future work: once the direction of movement is known, finding the amount of movement in that direction could help with restoring images with motion blur.