Content Based Image Retrieval


Content Based Image Retrieval Natalia Vassilieva HP Labs Russia

Tutorial outline
Lecture 1: Introduction. Applications.
Lecture 2: Performance measurement. Visual perception. Color features.
Lecture 3: Texture features. Shape features. Fusion methods.
Lecture 4: Segmentation. Local descriptors.
Lecture 5: Multidimensional indexing. Survey of existing systems.

Lecture 4 Segmentation Local descriptors

Lecture 4: Outline. Segmentation: detection of discontinuities, thresholding, region-based segmentation, watershed segmentation. Local descriptors: SIFT (scale-invariant feature transform).

Introduction to segmentation The main purpose is to find meaningful regions with respect to a particular application: to detect homogeneous regions, or to detect edges (boundaries, contours). Segmentation of non-trivial images is one of the most difficult tasks in image processing and is still an active research topic. Applications of image segmentation include objects in a scene (for object-based retrieval), objects in a moving scene (MPEG-4), and the spatial layout of objects (path planning for mobile robots).

Principal approaches Edge-based methods are based on discontinuity: e.g., partition an image based on abrupt changes in intensity. Region-based methods are based on similarity: partition an image into regions that are similar according to a set of predefined criteria. Solutions can be based on intensity, texture, color, motion, etc.

Lecture 4: Outline. Segmentation: detection of discontinuities, thresholding, region-based segmentation, watershed segmentation. Local descriptors: SIFT (scale-invariant feature transform).

Detection of discontinuities There are 3 basic types of gray-level discontinuities: points, lines, and edges. The common way to detect them is to run a mask through the image.

Point detection A point has been detected at the centre of the mask if |R| ≥ T, where R is the mask response and T is a nonnegative threshold.
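A minimal sketch of mask-based point detection, not from the original slides: the 3×3 mask below is the standard textbook choice, while the input file name and the 0.9 factor used for T are illustrative assumptions (OpenCV + NumPy).

```python
import cv2
import numpy as np

img = cv2.imread("input.png", cv2.IMREAD_GRAYSCALE).astype(np.float64)

# Classic 3x3 point-detection mask: strong response at isolated points.
mask = np.array([[-1, -1, -1],
                 [-1,  8, -1],
                 [-1, -1, -1]], dtype=np.float64)

R = cv2.filter2D(img, -1, mask)        # mask response R at every pixel
T = 0.9 * np.abs(R).max()              # nonnegative threshold (heuristic choice)
points = np.abs(R) >= T                # |R| >= T  ->  point detected
```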

Line detection If |Ri| > |Rj| for all j ≠ i, the point is within a line in the direction of mask i. Use one mask to detect lines of a given direction.

Edge detection First derivative: detects whether a point is on an edge. Second derivative: detects the midpoint of the edge (zero-crossing property).

Edge detection in noisy images Examples of a ramp edge corrupted by random Gaussian noise of mean 0 and σ = 0.0, 0.1, 1.0 and 10.0.

Edge detection: calculating derivatives First derivative: the magnitude of the gradient. To calculate it, apply gradient masks (Roberts, Prewitt or Sobel).
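As an illustration (not part of the slides), a hedged sketch of first-derivative edge detection with the Sobel masks; the file name and the 0.25 threshold factor are assumptions.

```python
import cv2
import numpy as np

img = cv2.imread("input.png", cv2.IMREAD_GRAYSCALE).astype(np.float64)

sobel_x = np.array([[-1, 0, 1],
                    [-2, 0, 2],
                    [-1, 0, 1]], dtype=np.float64)
sobel_y = sobel_x.T

gx = cv2.filter2D(img, -1, sobel_x)    # horizontal derivative
gy = cv2.filter2D(img, -1, sobel_y)    # vertical derivative
grad = np.hypot(gx, gy)                # gradient magnitude
edges = grad >= 0.25 * grad.max()      # simple global threshold on the magnitude
```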

Example

Edge detection: calculating derivatives Second derivative: the Laplacian. To calculate it, apply Laplacian masks.

Edge detection: Laplacian of Gaussian The Laplacian is combined with Gaussian smoothing as a precursor to finding edges via zero-crossings, where r² = x² + y² and σ is the standard deviation.
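The LoG expression itself did not survive the transcript. A standard reconstruction, assuming the unnormalized Gaussian kernel (sign and normalization conventions vary between textbooks):

```latex
% Laplacian of Gaussian (LoG), assuming the unnormalized Gaussian
% G(r) = e^{-r^{2}/2\sigma^{2}} with r^{2} = x^{2} + y^{2}:
\nabla^{2} G(r) = \frac{r^{2} - 2\sigma^{2}}{\sigma^{4}}\, e^{-r^{2}/2\sigma^{2}}
```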

Mexican hat The coefficients of the mask must sum to zero.

Example (a) Original image; (b) Sobel gradient; (c) spatial Gaussian smoothing function; (d) Laplacian mask; (e) LoG; (f) thresholded LoG; (g) zero crossings.

Edge linking Local processing. Global processing via the Hough transform: looking for lines passing through sets of edge points. Global processing via graph-based techniques: edge points are graph vertices, and we look for an optimal path in the graph.
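For the Hough-transform route, a minimal OpenCV sketch (an illustration, not the slides' own code): Canny supplies the edge points and the probabilistic Hough transform links collinear ones into segments; the file name and parameter values are assumptions.

```python
import cv2
import numpy as np

img = cv2.imread("input.png", cv2.IMREAD_GRAYSCALE)
edges = cv2.Canny(img, 50, 150)                     # edge pixels to be linked

# Probabilistic Hough transform: returns line segments linking collinear edge points.
lines = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180,
                        threshold=80, minLineLength=30, maxLineGap=10)

vis = cv2.cvtColor(img, cv2.COLOR_GRAY2BGR)
if lines is not None:
    for x1, y1, x2, y2 in lines[:, 0]:
        cv2.line(vis, (x1, y1), (x2, y2), (0, 0, 255), 1)
cv2.imwrite("linked_edges.png", vis)
```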

Lecture 4: Outline. Segmentation: detection of discontinuities, thresholding, region-based segmentation, watershed segmentation. Local descriptors: SIFT (scale-invariant feature transform).

Thresholding Global: T is the same for all points of the image (e.g. an image with a dark background and one light object, or a dark background and two light objects). Local or dynamic: T depends on the coordinates (x, y). Adaptive: T depends on the local image content I(x, y).

Global thresholding The threshold can be chosen by visual inspection of the histogram, or automatically:
1. Select an initial estimate T0.
2. Segment the image using T0 into regions G1 and G2, consisting of pixels with gray-level values > T0 and ≤ T0.
3. Compute the average gray-level values μ1 and μ2 for the pixels in regions G1 and G2.
4. Set T1 = 0.5 (μ1 + μ2).
5. Repeat until |Ti − Ti+1| < Tth.
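A direct sketch of this iterative threshold-selection procedure in NumPy/OpenCV; the input file name is an assumption, and T0 is initialized to the image mean (one common choice).

```python
import cv2
import numpy as np

img = cv2.imread("input.png", cv2.IMREAD_GRAYSCALE).astype(np.float64)

def global_threshold(img, t_th=0.5):
    """Iterate until successive threshold estimates differ by less than t_th."""
    t = img.mean()                      # initial estimate T0
    while True:
        g1 = img[img > t]               # pixels with gray level > T
        g2 = img[img <= t]              # pixels with gray level <= T
        mu1 = g1.mean() if g1.size else t
        mu2 = g2.mean() if g2.size else t
        t_new = 0.5 * (mu1 + mu2)       # T_{i+1} = (mu1 + mu2) / 2
        if abs(t_new - t) < t_th:
            return t_new
        t = t_new

T = global_threshold(img)
binary = (img > T).astype(np.uint8) * 255
```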

Global thresholding: example With Tth = 0, the procedure converges in 3 iterations with result T = 125.

Adaptive thresholding

Optimal thresholding

Multispectral thresholding

Lecture 4: Outline. Segmentation: detection of discontinuities, thresholding, region-based segmentation, watershed segmentation. Local descriptors: SIFT (scale-invariant feature transform).

Region-based segmentation A segmentation is a partition of an image R into sub-regions {Ri} such that ∪i Ri = R, each Ri is a connected region, and Ri ∩ Rj = ∅ for i ≠ j. A region can be defined by a predicate P such that P(Ri) = TRUE if all pixels within the region satisfy a specific property, and P(Ri ∪ Rj) = FALSE for adjacent regions with i ≠ j.

Region-based segmentation Region growing
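Region growing appends to a seed point the neighbouring pixels that satisfy a similarity predicate. A minimal sketch (not from the slides), using a 4-connected neighbourhood and a "within tol of the seed gray level" predicate as illustrative assumptions:

```python
import cv2
import numpy as np
from collections import deque

img = cv2.imread("input.png", cv2.IMREAD_GRAYSCALE)

def region_grow(img, seed, tol=10):
    """Grow a region from `seed`, adding 4-connected neighbours whose gray
    level is within `tol` of the seed's gray level."""
    h, w = img.shape
    region = np.zeros((h, w), dtype=bool)
    ref = float(img[seed])
    queue = deque([seed])
    region[seed] = True
    while queue:
        y, x = queue.popleft()
        for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
            if (0 <= ny < h and 0 <= nx < w and not region[ny, nx]
                    and abs(float(img[ny, nx]) - ref) <= tol):
                region[ny, nx] = True
                queue.append((ny, nx))
    return region

mask = region_grow(img, seed=(100, 100))   # seed coordinates are illustrative
```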

Region-based segmentation Region splitting and merging: split into 4 disjoint quadrants any region Ri for which P(Ri) = FALSE; merge any adjacent regions Rj and Rk for which P(Rj ∪ Rk) = TRUE; stop when no further merging or splitting is possible.

Example P(Ri) = TRUE if at least 80% of the pixels in Ri have the property |zj − mi| ≤ 2σi, where zj is the gray level of the jth pixel in Ri, mi is the mean gray level of that region, and σi is the standard deviation of the gray levels in Ri.

Lecture 4: Outline. Segmentation: detection of discontinuities, thresholding, region-based segmentation, watershed segmentation. Local descriptors: SIFT (scale-invariant feature transform).

Watershed segmentation

Watershed segmentation

Watershed segmentation A morphological region-growing approach. Seed points: local minima. Growing method: dilation. Predicates: similar gradient values. Sub-region boundaries: dam building. To avoid over-segmentation: use markers.
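A hedged OpenCV sketch of marker-based watershed segmentation (one common recipe, not necessarily the exact one shown on the slides): markers are derived from the distance transform to curb over-segmentation; the file name and the 0.5 factor are assumptions.

```python
import cv2
import numpy as np

img = cv2.imread("coins.png")
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
_, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

# Sure foreground from the distance transform, sure background from dilation.
dist = cv2.distanceTransform(binary, cv2.DIST_L2, 5)
_, sure_fg = cv2.threshold(dist, 0.5 * dist.max(), 255, cv2.THRESH_BINARY)
sure_fg = sure_fg.astype(np.uint8)
sure_bg = cv2.dilate(binary, np.ones((3, 3), np.uint8), iterations=3)
unknown = cv2.subtract(sure_bg, sure_fg)

# Each connected foreground component becomes one marker (seed).
_, markers = cv2.connectedComponents(sure_fg)
markers = markers + 1          # background becomes 1, components 2, 3, ...
markers[unknown == 255] = 0    # unknown region: left for the watershed to decide

markers = cv2.watershed(img, markers)   # -1 marks the watershed lines (dams)
img[markers == -1] = (0, 0, 255)
```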

Dam building

Watershed segmentation example

Over-segmentation and the use of markers

Lecture 4: Outline. Segmentation: detection of discontinuities, thresholding, region-based segmentation, watershed segmentation. Local descriptors: SIFT (scale-invariant feature transform).

Local descriptors Features computed for local regions of the image: regions obtained by segmentation, or regions of interest (RoI) around interest points (keypoints). Interest points: corners, edges and others. Keypoints: points in the image that are invariant to translation, scale and rotation, and are minimally affected by noise and small distortions. Scale-invariant feature transform (SIFT) by David Lowe.

SIFT: main steps
1. Scale-space peak selection: using Difference-of-Gaussians (DoG).
2. Keypoint localization: elimination of unstable keypoints.
3. Orientation assignment: based on the keypoint's local image patch.
4. Keypoint descriptor: based upon the image gradients in the keypoint's local neighbourhood.
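The whole pipeline is available in OpenCV; a minimal sketch (the image file name is an assumption):

```python
import cv2

img = cv2.imread("scene.png", cv2.IMREAD_GRAYSCALE)
sift = cv2.SIFT_create()

# keypoints carry location, scale and canonical orientation;
# descriptors is an N x 128 array of gradient-histogram descriptors.
keypoints, descriptors = sift.detectAndCompute(img, None)
print(len(keypoints), "keypoints")

vis = cv2.drawKeypoints(img, keypoints, None,
                        flags=cv2.DRAW_MATCHES_FLAGS_DRAW_RICH_KEYPOINTS)
cv2.imwrite("sift_keypoints.png", vis)
```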

Scale space Build an image pyramid with resampling between each level.

Difference-of-Gaussian The input image is convolved with a Gaussian function:
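The convolution and DoG formulas did not survive the transcript; the standard definitions from Lowe's paper are:

```latex
% Scale space and Difference-of-Gaussian (standard definitions from Lowe's paper):
L(x, y, \sigma) = G(x, y, \sigma) * I(x, y),
\qquad
G(x, y, \sigma) = \frac{1}{2\pi\sigma^{2}}\, e^{-(x^{2}+y^{2})/2\sigma^{2}}

D(x, y, \sigma) = \bigl(G(x, y, k\sigma) - G(x, y, \sigma)\bigr) * I(x, y)
                = L(x, y, k\sigma) - L(x, y, \sigma)
```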

Difference-of-Gaussian

SIFT keypoints Keypoints are the maxima and minima of the DoG applied in scale space: detect extrema within one scale, then check that they are stable across neighbouring scales.

Scale-space extrema detection

Keypoint orientation and scale Extract image gradients and orientations at each pixel. Each key location is assigned a canonical orientation, determined by the peak in a histogram of local image gradient orientations.
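A minimal sketch of canonical-orientation assignment (an illustration, not the slides' code): a 36-bin, gradient-magnitude-weighted orientation histogram over a patch, whose peak gives the orientation. Patch size, bin count and the keypoint location are assumptions, and Lowe's Gaussian weighting of the patch is omitted.

```python
import cv2
import numpy as np

img = cv2.imread("input.png", cv2.IMREAD_GRAYSCALE).astype(np.float64)

def canonical_orientation(img, y, x, radius=8, bins=36):
    """Peak of a magnitude-weighted histogram of gradient orientations
    in a (2*radius+1)^2 patch around (y, x)."""
    patch = img[y - radius:y + radius + 1, x - radius:x + radius + 1]
    gy, gx = np.gradient(patch)
    mag = np.hypot(gx, gy)
    ang = np.degrees(np.arctan2(gy, gx)) % 360.0
    hist, edges = np.histogram(ang, bins=bins, range=(0.0, 360.0), weights=mag)
    peak = int(np.argmax(hist))
    return 0.5 * (edges[peak] + edges[peak + 1])   # centre of the peak bin

theta = canonical_orientation(img, y=120, x=200)   # keypoint location is illustrative
```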

Example

Lecture 4: Summary Image segmentation is necessary for many image-processing tasks (shape features, object detection); the optimal method depends on the application. Local descriptors are necessary for image/object matching, sub-image retrieval and near-duplicate detection; SIFT is a very powerful method for detecting keypoints and building local descriptors.

Lecture 4: Bibliography
Gonzalez, R. and Woods, R. Digital Image Processing. Pearson Education, Inc., 2002.
Lowe, David. Distinctive Image Features from Scale-Invariant Keypoints. International Journal of Computer Vision, 2004.
Ke, Yan and Sukthankar, Rahul. PCA-SIFT: A More Distinctive Representation for Local Image Descriptors.
Mikolajczyk, Krystian and Schmid, Cordelia. A Performance Evaluation of Local Descriptors.