Laboratory of Image Processing
Pier Luigi Mazzeo


Find Image Rotation and Scale Using Automated Feature Matching and RANSAC

Step 1: Read Image

original = imread('cameraman.tif');
imshow(original);
text(size(original,2), size(original,1)+15, ...
    'Image courtesy of Massachusetts Institute of Technology', ...
    'FontSize', 7, 'HorizontalAlignment', 'right');

Step 2: Resize and Rotate the Image

scale = 0.7;
J = imresize(original, scale);   % Try varying the scale factor.
theta = 30;
distorted = imrotate(J, theta);  % Try varying the angle, theta.
figure, imshow(distorted)

Step 3: Find Matching Features Between Images

% Detect features in both images.
ptsOriginal = detectSURFFeatures(original);
ptsDistorted = detectSURFFeatures(distorted);

% Extract feature descriptors.
[featuresOriginal, validPtsOriginal] = extractFeatures(original, ptsOriginal);
[featuresDistorted, validPtsDistorted] = extractFeatures(distorted, ptsDistorted);

% Match features using their descriptors.
indexPairs = matchFeatures(featuresOriginal, featuresDistorted);

% Retrieve the locations of the corresponding points in each image.
matchedOriginal = validPtsOriginal(indexPairs(:,1));
matchedDistorted = validPtsDistorted(indexPairs(:,2));

% Show the putatively matched points.
figure;
showMatchedFeatures(original, distorted, matchedOriginal, matchedDistorted);
title('Putatively matched points (including outliers)');

Step 4: Estimate Transformation

Find a transformation corresponding to the matched point pairs using the statistically robust M-estimator SAmple Consensus (MSAC) algorithm, a variant of RANSAC. MSAC removes the outliers while computing the transformation matrix. You may see varying results from run to run because of the random sampling MSAC employs.

[tform, inlierDistorted, inlierOriginal] = estimateGeometricTransform(...
    matchedDistorted, matchedOriginal, 'similarity');

% Display the matching point pairs used in the computation of the transformation.
figure;
showMatchedFeatures(original, distorted, inlierOriginal, inlierDistorted);
title('Matching points (inliers only)');
legend('ptsOriginal', 'ptsDistorted');

Step 5: Solve for Scale and Angle

Use the geometric transform, tform, to recover the scale and angle. Since we computed the transformation from the distorted to the original image, we need to compute its inverse to recover the distortion.

Let sc = s*cos(theta) and ss = s*sin(theta). Then

Tinv = [sc -ss 0;
        ss  sc 0;
        tx  ty 1]

where tx and ty are the x and y translations, respectively.

% Compute the inverse transformation matrix and read off scale and angle.
Tinv = tform.invert.T;
ss = Tinv(2,1);
sc = Tinv(1,1);
scaleRecovered = sqrt(ss*ss + sc*sc)
thetaRecovered = atan2(ss,sc)*180/pi

The recovered values should match the scale and angle values you selected in Step 2: Resize and Rotate the Image.

Step 6: Recover the Original Image

% Recover the original image by transforming the distorted image.
outputView = imref2d(size(original));
recovered = imwarp(distorted, tform, 'OutputView', outputView);

% Compare recovered to original by viewing them side by side in a montage.
figure, imshowpair(original, recovered, 'montage')

The quality of the recovered (right) image does not match the original (left) image because of the distortion and recovery process. In particular, shrinking the image causes a loss of information. The artifacts around the edges are due to the limited accuracy of the transformation. If you were to detect more points in Step 3: Find Matching Features Between Images, the transformation would be more accurate.

Object Detection In A Cluttered Scene Using Point Feature Matching

This example presents an algorithm for detecting a specific object by finding point correspondences between a reference image and a target image. It can detect objects despite a scale change or in-plane rotation, and it is robust to small amounts of out-of-plane rotation and occlusion.

This method of object detection works best for objects that exhibit non-repeating texture patterns, which give rise to unique feature matches. The technique is not likely to work well for uniformly colored objects or for objects containing repeating patterns. Note that the algorithm is designed for detecting a specific object, for example the elephant in the reference image, rather than any elephant.

Step 1: Read Images

% Read the reference image containing the object of interest.
boxImage = imread('stapleRemover.jpg');
figure;
imshow(boxImage);
title('Image of a Box');

Step 1: Read Images (cont.)

% Read the target image containing the cluttered scene.
sceneImage = imread('clutteredDesk.jpg');
figure;
imshow(sceneImage);
title('Image of a Cluttered Scene');

Step 2: Detect Feature Points

% Detect feature points in both images.
boxPoints = detectSURFFeatures(boxImage);
scenePoints = detectSURFFeatures(sceneImage);

% Visualize the strongest feature points found in the reference image.
figure;
imshow(boxImage);
title('100 Strongest Feature Points from Box Image');
hold on;
plot(selectStrongest(boxPoints, 100));

Step 2: Detect Feature Points (cont.)

% Visualize the strongest feature points found in the target image.
figure;
imshow(sceneImage);
title('300 Strongest Feature Points from Scene Image');
hold on;
plot(selectStrongest(scenePoints, 300));

Step 3: Extract Feature Descriptors

% Extract feature descriptors at the interest points in both images.
[boxFeatures, boxPoints] = extractFeatures(boxImage, boxPoints);
[sceneFeatures, scenePoints] = extractFeatures(sceneImage, scenePoints);

Step 4: Find Putative Point Matches

% Match the features using their descriptors.
boxPairs = matchFeatures(boxFeatures, sceneFeatures);

% Retrieve the locations of the matched points in each image.
matchedBoxPoints = boxPoints(boxPairs(:, 1), :);
matchedScenePoints = scenePoints(boxPairs(:, 2), :);

% Display the putatively matched features.
figure;
showMatchedFeatures(boxImage, sceneImage, matchedBoxPoints, ...
    matchedScenePoints, 'montage');
title('Putatively Matched Points (Including Outliers)');

[Figure: Putatively Matched Points (Including Outliers)]

Step 5: Locate the Object in the Scene Using Putative Matches

estimateGeometricTransform calculates the transformation relating the matched points while eliminating outliers. This transformation allows us to localize the object in the scene.

[tform, inlierBoxPoints, inlierScenePoints] = ...
    estimateGeometricTransform(matchedBoxPoints, matchedScenePoints, 'affine');

% Display the matching point pairs with the outliers removed.
figure;
showMatchedFeatures(boxImage, sceneImage, inlierBoxPoints, ...
    inlierScenePoints, 'montage');
title('Matched Points (Inliers Only)');

Step 5: Locate the Object in the Scene Using Putative Matches (cont.)

% Get the bounding polygon of the reference image.
boxPolygon = [1, 1; ...                                % top-left
    size(boxImage, 2), 1; ...                          % top-right
    size(boxImage, 2), size(boxImage, 1); ...          % bottom-right
    1, size(boxImage, 1); ...                          % bottom-left
    1, 1];                                             % top-left again to close the polygon

% Transform the polygon into the coordinate system of the target image.
% The transformed polygon indicates the location of the object in the scene.
newBoxPolygon = transformPointsForward(tform, boxPolygon);

% Display the detected object.
figure;
imshow(sceneImage);
hold on;
line(newBoxPolygon(:, 1), newBoxPolygon(:, 2), 'Color', 'y');
title('Detected Box');

[Figure: The Detected Box]

Exercise

Detect a second object by using the same steps as before. Read an image containing the second object of interest:

elephantImage = imread('elephant.jpg');
figure;
imshow(elephantImage);
title('Image of an Elephant');

Segmentation: Thresholding

Thresholding
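A minimal sketch of global thresholding with Otsu's method; the test image coins.png ships with the Image Processing Toolbox, and the display calls are illustrative:

% Global thresholding: one threshold for the whole image (Otsu's method).
I = imread('coins.png');
level = graythresh(I);        % Otsu threshold, normalized to [0, 1]
BW = im2bw(I, level);         % binarize (use imbinarize in newer releases)
figure, imshowpair(I, BW, 'montage')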

Double Thresholding
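One common reading of double thresholding keeps only the pixels whose intensity falls between a low and a high bound; the image and the bounds below are illustrative assumptions:

% Double (band) thresholding: keep pixels between two bounds.
I = imread('coins.png');
Tlow = 80;  Thigh = 180;      % illustrative bounds for 8-bit intensities
BW = (I >= Tlow) & (I <= Thigh);
figure, imshow(BW)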

Adaptive Thresholding
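A minimal sketch of adaptive (local) thresholding that compares each pixel with the mean of its neighborhood; the 25x25 window and the offset are illustrative choices:

% Adaptive thresholding: the threshold varies with the local mean.
I = im2double(imread('coins.png'));
localMean = imfilter(I, fspecial('average', 25), 'replicate');
BW = I > localMean - 0.02;    % small offset suppresses noise in flat regions
figure, imshow(BW)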

Hough Transform
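A minimal sketch of line detection with the Hough transform, using the hough, houghpeaks, and houghlines functions from the Image Processing Toolbox; circuit.tif is a toolbox test image and the parameter values are illustrative:

% Detect straight lines with the Hough transform.
I = imread('circuit.tif');
BW = edge(I, 'canny');                    % the transform works on an edge map
[H, theta, rho] = hough(BW);
peaks = houghpeaks(H, 5);                 % pick the 5 strongest lines
lines = houghlines(BW, theta, rho, peaks);
figure, imshow(I), hold on
for k = 1:length(lines)
    xy = [lines(k).point1; lines(k).point2];
    plot(xy(:,1), xy(:,2), 'LineWidth', 2, 'Color', 'g');
end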

Segmentation by K-means

Step 1: Read Image
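These steps appear to mirror MathWorks' color-based K-means segmentation example; a sketch under that assumption, using the H&E-stained tissue image hestain.png that ships with the Image Processing Toolbox:

% Read the H&E-stained tissue image.
he = imread('hestain.png');
figure, imshow(he), title('H&E image');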

Step 2: Convert Image from RGB Color Space to L*a*b* Color Space
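A sketch using the makecform/applycform color-conversion interface available in 2014-era MATLAB:

% Convert from sRGB to L*a*b*, where color information lives in a* and b*.
cform = makecform('srgb2lab');
lab_he = applycform(he, cform);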

Step 3: Classify the Colors in 'a*b*' Space Using K-Means Clustering
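A sketch using kmeans from the Statistics Toolbox; three clusters and three replicates follow the published example these slides appear to track:

% Cluster the pixels on their a* and b* values only.
ab = double(lab_he(:,:,2:3));
nrows = size(ab, 1);  ncols = size(ab, 2);
ab = reshape(ab, nrows*ncols, 2);          % one row per pixel
nColors = 3;
[cluster_idx, cluster_center] = kmeans(ab, nColors, ...
    'distance', 'sqEuclidean', 'Replicates', 3);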

Step 4: Label Every Pixel in the Image Using the Results from kmeans
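Continuing the sketch, each pixel receives the index of the cluster it was assigned to:

% Reshape the cluster indices back into image form.
pixel_labels = reshape(cluster_idx, nrows, ncols);
figure, imshow(pixel_labels, []), title('Image labeled by cluster index');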

Step 5: Create Images That Segment the H&E Image by Color
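Continuing the sketch, masking the original image with each label produces one image per color cluster:

% Split the original image into one image per cluster.
segmented_images = cell(1, nColors);
rgb_label = repmat(pixel_labels, [1 1 3]);
for k = 1:nColors
    color = he;
    color(rgb_label ~= k) = 0;             % zero out pixels outside cluster k
    segmented_images{k} = color;
end
figure, imshow(segmented_images{1}), title('Objects in cluster 1');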

Step 5: Create Images That Segment the H&E Image by Color (cont.)
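The remaining clusters are displayed the same way:

figure, imshow(segmented_images{2}), title('Objects in cluster 2');
figure, imshow(segmented_images{3}), title('Objects in cluster 3');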