
Object Recognition
Computer Vision Lecture 17: Object Recognition I, April 12, 2018

Pattern and Object Recognition
Pattern recognition is used for region and object classification and is an important building block of complex machine vision processes. No recognition is possible without knowledge: both specific knowledge about the objects being processed and hierarchically higher, more general knowledge about object classes are required.

Statistical Pattern Recognition
Object recognition is based on assigning classes to objects; the device that makes these assignments is called the classifier. The number of classes is usually known beforehand and can typically be derived from the problem specification. The classifier does not decide about the class from the object itself; rather, sensed object properties called patterns are used.

Statistical Pattern Recognition
For statistical pattern recognition, quantitative descriptions of objects’ characteristics (features or patterns) are used. The set of all possible patterns forms the pattern space or feature space. The classes form clusters in the feature space, which can be separated by discrimination hyper-surfaces.

Statistical Pattern Recognition
Note that successful classification requires two components: computing discriminative feature vectors that are similar within classes and differ between them, and finding a discrimination function that accurately separates the feature clusters representing the individual classes. The better the features we define, the simpler the discrimination function can be.
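The idea of class clusters in feature space separated by a discrimination function can be sketched as follows. The 2-D feature vectors and cluster positions are made-up illustration data, and a minimum-distance-to-class-mean rule stands in for a general discrimination function (its decision boundary is the perpendicular bisector between the two class means, a simple discrimination hyper-surface):

```python
import numpy as np

# Hypothetical 2-D feature vectors for two classes (illustration data):
# class A clusters near (1, 1), class B clusters near (5, 5).
class_a = np.array([[1.0, 1.2], [0.8, 1.1], [1.3, 0.9]])
class_b = np.array([[5.0, 4.8], [5.2, 5.1], [4.9, 5.3]])

mean_a = class_a.mean(axis=0)
mean_b = class_b.mean(axis=0)

def classify(x):
    """Assign x to the class whose cluster mean is closer
    (minimum-distance classifier)."""
    if np.linalg.norm(x - mean_a) < np.linalg.norm(x - mean_b):
        return "A"
    return "B"

label = classify(np.array([1.1, 1.0]))   # falls into cluster A
```

Because the clusters here are compact and well separated, this very simple discrimination function suffices, illustrating the point above: good features make the discrimination function simple.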

How can we devise an algorithm that recognizes certain everyday objects? Problems: The same object looks different from different perspectives. Changes in illumination create different images of the same object. Objects can appear at different positions in the visual field (image). Objects can be partially occluded. Objects are usually embedded in a scene.

We are going to discuss an example of view-based object recognition. The presented algorithm (Blanz, Schölkopf, Bülthoff, Burges, Vapnik & Vetter, 1996) tackles some of the problems mentioned above: It learns what each object in its database looks like from different perspectives. It recognizes objects at any position in an image. To some extent, it can compensate for changes in illumination. However, it would perform very poorly for objects that are partially occluded or embedded in a complex scene.

The Set of Objects
The algorithm learns to recognize 25 different chairs; it is shown each chair from 25 different viewing angles.

The Algorithm
For learning each view of each chair, the algorithm performs the following steps: centering the object within the image, detecting edges in four different directions, downsampling (and thereby smoothing) the resulting five images, and low-pass filtering each of the five images in four different directions.

The Algorithm
For classifying a new image of a chair (determining which of the 25 known chairs is shown), the algorithm carries out the following steps: In the new image, centering the object, detecting edges, downsampling, and low-pass filtering as done for the database images; computing the difference (distance) of the representation of the new image to the representations of all 25×25 = 625 views stored in the database; and determining the chair with the smallest average distance of its 25 views to the new image (the “winner chair”).
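The classification step above can be sketched as a search over the stored views. The feature vectors here are random placeholder data standing in for the real edge/low-pass representations; only the distance-and-average logic is the point:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical database: 25 chairs x 25 views, each view represented
# by a 6400-dimensional feature vector (placeholder random data).
database = rng.normal(size=(25, 25, 6400))

def classify_view(v, database):
    """Return the index of the chair whose 25 stored views have the
    smallest average distance to the new view v."""
    # Distance of v to every stored view: shape (25 chairs, 25 views).
    dists = np.linalg.norm(database - v, axis=2)
    # Average over each chair's views, pick the minimum ("winner chair").
    return int(dists.mean(axis=1).argmin())

# A new image of chair 7: one of its stored views plus a little noise.
query = database[7, 3] + rng.normal(scale=0.01, size=6400)
winner = classify_view(query, database)   # winner chair: index 7
```

With real data, the views of one chair form a cluster in feature space, so averaging the distances over a chair's views rewards the chair whose cluster the new view falls into.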

The Algorithm
Centering the object within the image: Binarize the image, then compute the center of gravity of the object pixels (the mean of their x- and y-coordinates). Finally, shift the image content so that the center of gravity coincides with the center of the image.
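A minimal sketch of this centering step, assuming a simple fixed binarization threshold (the slides do not specify one):

```python
import numpy as np

img = np.zeros((8, 8))
img[5:8, 5:8] = 1.0                          # small bright object in a corner

binary = img > 0.5                           # binarize (threshold is assumed)
ys, xs = np.nonzero(binary)
cy, cx = ys.mean(), xs.mean()                # center of gravity of object pixels

h, w = img.shape
shift_y = int(round(h / 2 - cy))
shift_x = int(round(w / 2 - cx))

# Shift the image content so the center of gravity lands at the image
# center. np.roll wraps around at the borders; a real implementation
# would pad with background instead.
centered = np.roll(img, (shift_y, shift_x), axis=(0, 1))
```

After the shift, the object's center of gravity sits at the middle of the image, so subsequent processing is translation-invariant.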

Detecting edges in the image: Use a convolution filter for edge detection; for example, a Sobel or Canny filter would serve this purpose. Use the filter to detect edges in four different orientations and store the resulting four images r1, …, r4 separately.
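A sketch of directional edge detection with Sobel-style kernels for 0°, 45°, 90°, and 135° orientations. The exact kernels used by the algorithm are not given in the slides, so these are assumptions for illustration:

```python
import numpy as np

# Sobel-style kernels for four edge orientations (assumed, not the
# algorithm's actual filters).
kernels = {
    "0":   np.array([[-1, -2, -1], [0, 0, 0], [1, 2, 1]], float),
    "90":  np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], float),
    "45":  np.array([[0, 1, 2], [-1, 0, 1], [-2, -1, 0]], float),
    "135": np.array([[-2, -1, 0], [-1, 0, 1], [0, 1, 2]], float),
}

def convolve2d(img, k):
    """Naive valid-mode 2-D convolution (kernel flipped, no padding)."""
    kh, kw = k.shape
    h, w = img.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * k[::-1, ::-1])
    return out

# A vertical brightness step responds strongly to the 90-degree kernel
# and not at all to the 0-degree (horizontal-edge) kernel.
img = np.zeros((5, 5))
img[:, 3:] = 1.0
responses = {name: np.abs(convolve2d(img, k)).max()
             for name, k in kernels.items()}
```

Storing the four filtered images separately, as the slides describe, preserves orientation information that a single gradient-magnitude image would lose.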

Downsampling the image from 256×256 to 16×16 pixels: In order to keep as much of the original information as possible, use a Gaussian averaging filter that is slightly larger than 16×16. Place the Gaussian filter successively at 16×16 positions throughout the original image and use each resulting value as the brightness value of one pixel in the downsampled image.
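This Gaussian downsampling can be sketched as follows. The sizes are scaled down (32×32 to 4×4 instead of 256×256 to 16×16) to keep the example small, and the Gaussian width is an assumption:

```python
import numpy as np

def gaussian_kernel(size, sigma):
    """Normalized 2-D Gaussian averaging kernel."""
    ax = np.arange(size) - (size - 1) / 2
    xx, yy = np.meshgrid(ax, ax)
    k = np.exp(-(xx**2 + yy**2) / (2 * sigma**2))
    return k / k.sum()

def downsample(img, out_size):
    h, w = img.shape
    step = h // out_size                  # stride between window positions
    ksize = step + 2                      # window slightly larger than stride
    k = gaussian_kernel(ksize, sigma=step / 2)   # sigma is an assumption
    padded = np.pad(img, ksize // 2, mode="edge")
    out = np.zeros((out_size, out_size))
    # Place the Gaussian window at out_size x out_size positions and use
    # each weighted average as one pixel of the downsampled image.
    for i in range(out_size):
        for j in range(out_size):
            y, x = i * step, j * step
            out[i, j] = np.sum(padded[y:y + ksize, x:x + ksize] * k)
    return out

small = downsample(np.ones((32, 32)), 4)  # a constant image stays constant
```

Making the window slightly larger than the sampling stride means adjacent windows overlap, so no pixel of the original falls between samples unweighted.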

Low-pass filtering the image: Use the following four convolution filters: Apply each filter to each of the five images r0, …, r4 (the downsampled image and the four edge images). For example, when you apply k1 to r1 (vertical edges), the resulting image will contain its highest values in regions where the original image contains parallel vertical edges.

Computing the difference between two views: For each view, we have computed 25 images (r0, …, r4 and their convolutions with k1, …, k4). Each image contains 16×16 = 256 brightness values, so the two views to be compared, va and vb, can each be represented as a 25 · 256 = 6400-dimensional vector. The distance (difference) d between the two views is then the length of their difference vector: d = || va – vb ||
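The dimensionality bookkeeping and the distance computation, with random placeholder vectors standing in for the real view representations:

```python
import numpy as np

rng = np.random.default_rng(1)

# Each view: 25 images (r0..r4 and their four convolutions each) of
# 16 x 16 = 256 values, flattened into one 6400-dimensional vector.
# Random placeholder data stands in for the real representations.
va = rng.normal(size=25 * 16 * 16)
vb = rng.normal(size=25 * 16 * 16)

# Distance between two views = Euclidean length of the difference vector.
d = np.linalg.norm(va - vb)
```

Flattening the 25 images into one vector lets the whole comparison reduce to a single vector norm, which is what makes the nearest-view search in the classification step cheap.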

Results
Classification error: 4.7%. If no edge detection is performed, the error increases to 21%. We should keep in mind that this algorithm was only tested on computer models of chairs shown in front of a white background. For real-world images the algorithm would fail; it would additionally require components for image segmentation and for completion of occluded parts.