Introduction to Artificial Intelligence Lecture 22: Computer Vision II


Why Edge Detection? How can an algorithm extract information from an image that enables it to recognize objects? The most important information for the interpretation of an image (for both technical and biological systems) is the contour of objects. Contours are indicated by abrupt changes in brightness. We can use edge detection filters to extract contour information from an image. November 29, 2018 Introduction to Artificial Intelligence Lecture 22: Computer Vision II

Types of Edges One-dimensional profiles of different edge types

Types of Edges One-dimensional profile of actual edges

Edge Detection First we need some definitions: An edge point is a point in an image with coordinates [i, j] at the location of a significant local intensity change. An edge fragment corresponds to the i and j coordinates of an edge together with the edge orientation θ, which may be the gradient angle. An edge detector is an algorithm that produces a set of edges (edge points or edge fragments) from an image.

Edge Detection A contour is a list of edges or the mathematical curve that models the list of edges. Edge linking is the process of forming an ordered list of edges from an unordered list. By convention, edges are ordered by traversal in a clockwise direction. Edge following is the process of searching the (filtered) image to determine contours.

Gradient In the one-dimensional case, a step edge corresponds to a local peak in the first derivative of the intensity function. In the two-dimensional case, we analyze the gradient instead of the first derivative. Just like the first derivative, the gradient measures the change in a function. For a two-dimensional function F(i, j) it is defined as the vector G[F(i, j)] = [Gi, Gj] = [∂F/∂i, ∂F/∂j].

Gradient In order to compute Gi and Gj in an image F at position [i, j], we need to consider the discrete case and get: Gi = F[i+1, j] – F[i, j] and Gj = F[i, j+1] – F[i, j]. This can be done with the convolution filters [-1 1] applied along the rows (for Gi) and along the columns (for Gj). To be precise in the assignment of gradients to pixels and to reduce noise, we usually apply 3×3 filters instead (next slide).
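The discrete gradient above can be sketched in a few lines of NumPy (a minimal illustration, not part of the original slides; the function name simple_gradient is our own):

```python
import numpy as np

def simple_gradient(F):
    """Discrete gradient of a grayscale image F (2-D array).

    Gi[i, j] = F[i+1, j] - F[i, j]   (change along rows)
    Gj[i, j] = F[i, j+1] - F[i, j]   (change along columns)
    The last row/column is zero-padded so the output keeps
    the shape of the input.
    """
    F = F.astype(float)
    Gi = np.zeros_like(F)
    Gj = np.zeros_like(F)
    Gi[:-1, :] = F[1:, :] - F[:-1, :]
    Gj[:, :-1] = F[:, 1:] - F[:, :-1]
    return Gi, Gj

# A vertical step edge: dark left half, bright right half.
F = np.array([[0, 0, 9, 9],
              [0, 0, 9, 9],
              [0, 0, 9, 9]])
Gi, Gj = simple_gradient(F)
# Gj is 9 exactly where the brightness jumps, 0 elsewhere; Gi is 0 everywhere.
```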

Sobel Filters Sobel filters are the most common variant of edge detection filters. Two small convolution filters are used successively:

Si:
-1 -2 -1
 0  0  0
 1  2  1

Sj:
-1  0  1
-2  0  2
-1  0  1

Sobel Filters Sobel filters yield two interesting pieces of information: The magnitude of the gradient (local change in brightness): M = √(Gi² + Gj²). The angle of the gradient (tells us about the orientation of an edge): θ = atan2(Gj, Gi).
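Putting the two slides together, a Sobel filter pass can be sketched as follows (an illustrative NumPy implementation, not the lecture's own code; correlate3x3 and sobel are names we introduce here):

```python
import numpy as np

# The standard 3x3 Sobel kernels from the previous slide.
SI = np.array([[-1, -2, -1],
               [ 0,  0,  0],
               [ 1,  2,  1]], dtype=float)   # gradient along i (rows)
SJ = SI.T                                     # gradient along j (columns)

def correlate3x3(F, K):
    """Apply a 3x3 kernel K to image F (valid region only)."""
    F = F.astype(float)
    out = np.zeros((F.shape[0] - 2, F.shape[1] - 2))
    for di in range(3):
        for dj in range(3):
            out += K[di, dj] * F[di:di + out.shape[0], dj:dj + out.shape[1]]
    return out

def sobel(F):
    Gi = correlate3x3(F, SI)
    Gj = correlate3x3(F, SJ)
    magnitude = np.hypot(Gi, Gj)      # sqrt(Gi^2 + Gj^2)
    angle = np.arctan2(Gj, Gi)        # gradient orientation
    return magnitude, angle

# Vertical step edge again: the magnitude peaks along the edge.
F = np.array([[0, 0, 9, 9],
              [0, 0, 9, 9],
              [0, 0, 9, 9],
              [0, 0, 9, 9]])
magnitude, angle = sobel(F)
```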

Gradient vs. Edge Orientation Note: Edge and gradient orientation are perpendicular to each other: Here, the gradient orientation is horizontal (pointing to the right) and the edge orientation is vertical.

Sobel Filters Calculating the magnitude of the brightness gradient with a Sobel filter. Left: original image; right: filtered image.

Sobel Filters and Thresholding

Hough Transform The Hough transform is a very general technique for feature detection. In the present context, we will use it for the detection of straight lines as contour descriptors in edge point arrays. We could use other variants of the Hough transform to detect circles and other shapes. We could even use it outside of computer vision, for example in data mining applications. So understanding the Hough transform may benefit you in many situations.

Hough Transform The Hough transform is a voting mechanism. In general, each point in the input space votes for several combinations of parameters in the output space. Those combinations of parameters that receive the most votes are declared the winners. We will use the Hough transform to fit a straight line to edge position data. To keep the description simple and consistent, let us assume that the input image is continuous and described by an x-y coordinate system.

Hough Transform A straight line can be described by the equation: y = mx + c The variables x and y are the parameters of our input space, and m and c are the parameters of the output space. For a given value (x, y) indicating the position of an edge in the input, we can determine the possible values of m and c by rewriting the above equation: c = -xm + y You see that this represents a straight line in m-c space, which is our output space.

Hough Transform Example: Each of the three points A, B, and C on a straight line in input space is transformed into a straight line in output space. (Figure: points A, B, C in the x-y input space; the corresponding three lines in the m-c output space intersect at the winner parameters.) The parameters of their crossing point (which would be the winners) are the parameters of the straight line in input space.

Hough Transform Hough Transform Algorithm: Quantize the input and output spaces appropriately. Assume that each cell in the parameter (output) space is an accumulator (counter); initialize all cells to zero. For each point (x, y) in the image (input) space, increment by one each of the accumulators that satisfy the equation c = -xm + y. Maxima in the accumulator array correspond to the parameters of model instances.
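The algorithm above can be sketched in Python for the m-c line model (a minimal illustration; the parameter ranges, bin counts, and the function name hough_lines_mc are our own assumptions, not from the slides):

```python
import numpy as np

def hough_lines_mc(points, m_range=(-5.0, 5.0), c_range=(-50.0, 50.0),
                   m_bins=200, c_bins=200):
    """Hough transform in (m, c) space for a list of points (x, y).

    Each point votes for every quantized slope m, with the intercept
    given by c = -x*m + y. Returns the accumulator and the winning (m, c).
    """
    ms = np.linspace(*m_range, m_bins)
    acc = np.zeros((m_bins, c_bins), dtype=int)
    c_lo, c_hi = c_range
    for x, y in points:
        cs = -x * ms + y                       # the point's line in output space
        idx = np.round((cs - c_lo) / (c_hi - c_lo) * (c_bins - 1)).astype(int)
        ok = (idx >= 0) & (idx < c_bins)       # drop votes outside the c range
        acc[np.arange(m_bins)[ok], idx[ok]] += 1
    mi, ci = np.unravel_index(acc.argmax(), acc.shape)
    m_win = ms[mi]
    c_win = c_lo + ci / (c_bins - 1) * (c_hi - c_lo)
    return acc, (m_win, c_win)

# Points on the line y = 2x + 3 should make (m, c) near (2, 3) win.
pts = [(x, 2 * x + 3) for x in range(10)]
acc, (m_win, c_win) = hough_lines_mc(pts)
```

Because the output space is quantized, the winner is only accurate to within one bin; finer bins trade accuracy against memory and vote spreading.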

Hough Transform The Hough transform does not require preprocessing of edge information such as ordering, noise removal, or filling of gaps. It simply provides an estimate of how to best fit a straight line (or other curve model) to the available edge data. If there are multiple straight lines in the image, the Hough transform will result in multiple peaks. You can search for these peaks to find the parameters for all the corresponding straight lines.

Improved Hough Transform Here is some practical advice for doing the Hough transform. The m-c space described on the previous slides is simple but not very practical. It cannot represent vertical lines, and the closer a line's orientation gets to vertical, the greater the change in m required to rotate the line noticeably. We are going to discuss an alternative output space that requires a bit more computation but avoids the problems of the m-c space.

Improved Hough Transform As we said before, it is problematic to use m (slope) and c (intercept) as an output space. Instead, it is a good idea to describe a straight line by the orientation θ and length d of its normal. The normal n of a straight line l is perpendicular to l and connects l with the origin of the coordinate system. The range of θ is from 0° to 360°, and the range of d is from 0 to the length of the image diagonal. Note that we can skip the θ interval from 180° to 270°, because it would require a negative d. Let us assume that the image is 450×450 units large.

Improved Hough Transform (Figure: a line in the 450×450 input space, indexed by row i and column j, together with its normal of orientation θ and length d; the same line appears as a single point in the θ-d output space, with θ ranging from 0° to 360° and d from 0 to 636, the length of the image diagonal.) The parameters θ and d form the output space for our Hough transform.

Improved Hough Transform For any edge point (i0, j0) indicated by our Sobel edge detector, we have to find all parameters θ and d for those straight lines that pass through (i0, j0). We will then increase the counters in our output space located at every (θ, d) by the edge strength, i.e., the magnitude provided by the Sobel detector. This way we will find out which parameters (θ, d) are most likely to indicate the clearest lines in the image. But first of all, we have to discuss how to find all the parameters (θ, d) for a given point (i0, j0).

Improved Hough Transform By varying θ from 0° to 360° we can find all lines crossing (i0, j0). (Figure: three example normals with orientations θ1, θ2, θ3 and lengths d1, d2, d3, all belonging to lines through the same point (i0, j0).) But how can we compute parameter d for each value of θ? Idea: Rotate (i0, j0) and the normal around the origin by -θ so that the normal lands on the i-axis. Then the i-coordinate of the rotated point is the value of d.

Improved Hough Transform And how do we rotate a point in two-dimensional space? The simplest way is to multiply the point vector with a rotation matrix. We compute the rotated point (iR, jR) obtained by rotating the point (i0, j0) around the point (0, 0) by the angle θ as follows: iR = i0·cos θ − j0·sin θ and jR = i0·sin θ + j0·cos θ.

Improved Hough Transform We are only interested in the i-coordinate: iR = i0·cos θ − j0·sin θ. In our case, we want to rotate by the angle -θ: iR = i0·cos(−θ) − j0·sin(−θ) = i0·cos θ + j0·sin θ. Now we can compute parameter d as a function of i0, j0, and θ: d(i0, j0; θ) = i0·cos θ + j0·sin θ. By varying θ we are now able to determine all parameters (θ, d) for a given point (i0, j0) and increase the counters in output space accordingly.
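The complete θ-d voting scheme, including the edge-strength weighting mentioned earlier, can be sketched as follows (an illustrative NumPy version with our own names and a 1° quantization; the slides do not prescribe these choices):

```python
import numpy as np

def hough_theta_d(edge_points, strengths, size=450, theta_step_deg=1.0):
    """Hough transform in (theta, d) space with d = i*cos(theta) + j*sin(theta).

    edge_points: list of (i, j) pixel coordinates from the edge detector.
    strengths:   gradient magnitudes; each point votes with its edge
                 strength, as described on the earlier slide.
    """
    diag = int(np.ceil(np.sqrt(2) * size))          # max possible d (~636 for 450x450)
    thetas = np.deg2rad(np.arange(0, 360, theta_step_deg))
    acc = np.zeros((len(thetas), diag + 1))
    for (i, j), w in zip(edge_points, strengths):
        d = i * np.cos(thetas) + j * np.sin(thetas)  # d for every theta
        ok = (d >= 0) & (d <= diag)                  # negative d has no cell
        acc[np.arange(len(thetas))[ok], np.round(d[ok]).astype(int)] += w
    ti, di = np.unravel_index(acc.argmax(), acc.shape)
    return acc, np.rad2deg(thetas[ti]), di

# Points on the vertical line j = 100: its normal lies along the
# j-axis, so the winner should be theta = 90 degrees, d = 100.
pts = [(i, 100) for i in range(0, 450, 10)]
acc, theta_win, d_win = hough_theta_d(pts, [1.0] * len(pts))
```

Note how the d ≥ 0 check implements the earlier remark that the θ interval from 180° to 270° can be skipped: votes there would need a negative d and are simply dropped.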