Computer Vision Lecture 18: Object Recognition II

Presentation transcript:

Key Points
Often we are unable to detect the complete contour of an object or characterize the details of its shape. In such cases, we can still represent shape information in an implicit way. The idea is to find key points in the image of the object that can be used to identify the object and that do not change dramatically when the orientation or lighting of the object changes. A good choice for this purpose is corners in the image.

Corner Detection with FAST
A very efficient algorithm for corner detection is called FAST (Features from Accelerated Segment Test). For any given point c in the image, we can test whether it is a corner as follows: Consider the 16 pixels on a circle of radius 3 pixels around c and find the longest run (i.e., uninterrupted sequence) of pixels whose intensity is either greater than that of c plus a threshold or less than that of c minus the same threshold. If the run is at least 12 pixels long, then c is a corner.
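
To make the segment test concrete, here is a minimal sketch in Python/NumPy (the language, the CIRCLE offset table, and the function name is_fast_corner are illustrative assumptions, not taken from the lecture). It labels each of the 16 circle pixels as brighter, darker, or neither, and then looks for a run of at least 12 identical non-zero labels, wrapping around the circle:

```python
import numpy as np

# Offsets (row, col) of the 16 pixels on a circle of radius 3 around the
# candidate point, listed in order around the circle (the Bresenham circle
# commonly used by FAST). Pixel 1 is directly above the center.
CIRCLE = [(-3, 0), (-3, 1), (-2, 2), (-1, 3), (0, 3), (1, 3), (2, 2), (3, 1),
          (3, 0), (3, -1), (2, -2), (1, -3), (0, -3), (-1, -3), (-2, -2), (-3, -1)]

def is_fast_corner(img, r, c, t=20, min_run=12):
    """Segment test: (r, c) is a corner if at least min_run consecutive circle
    pixels are all brighter than img[r, c] + t or all darker than img[r, c] - t.
    The caller must keep (r, c) at least 3 pixels away from the image border."""
    center = int(img[r, c])
    labels = []
    for dr, dc in CIRCLE:
        p = int(img[r + dr, c + dc])
        if p > center + t:
            labels.append(1)       # brighter than center by more than t
        elif p < center - t:
            labels.append(-1)      # darker than center by more than t
        else:
            labels.append(0)       # within the threshold band
    # Longest run of identical non-zero labels; scan the doubled list so that
    # runs wrapping around the circle are counted correctly.
    longest, run, prev = 0, 0, None
    for lab in labels + labels:
        if lab != 0 and lab == prev:
            run += 1
        elif lab != 0:
            run = 1
        else:
            run = 0
        longest = max(longest, run)
        prev = lab
    return min(longest, 16) >= min_run
```

To find all corners in an image, this test would be applied to every pixel that lies at least 3 pixels from the image border.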

Corner Detection with FAST
The FAST algorithm can be made even faster by first checking only pixels 1, 5, 9, and 13. If fewer than three of them fulfill the intensity condition, we can immediately rule out that the given point is a corner. In order to avoid detecting multiple corners near the same pixel, we can require a minimum distance between corners. If two corners are too close, we only keep the one with the higher corner score. Such a score can be computed as the sum of the absolute intensity differences between c and the pixels in the run.
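
Continuing the sketch above (and reusing its hypothetical CIRCLE table, in which pixels 1, 5, 9, and 13 are indices 0, 4, 8, and 12), the speed-up and the corner score might look as follows. Note that the score here sums over all circle pixels that exceed the threshold, a common simplification of the run-based score described in the slide:

```python
def quick_reject(img, r, c, t=20):
    """Pre-test on circle pixels 1, 5, 9, 13: a run of 12 brighter (or darker)
    pixels requires at least three of these four to be brighter (or darker),
    so any other outcome lets us reject (r, c) immediately."""
    center = int(img[r, c])
    brighter = darker = 0
    for idx in (0, 4, 8, 12):
        dr, dc = CIRCLE[idx]
        p = int(img[r + dr, c + dc])
        if p > center + t:
            brighter += 1
        elif p < center - t:
            darker += 1
    return brighter >= 3 or darker >= 3   # True: (r, c) may still be a corner

def corner_score(img, r, c, t=20):
    """Corner score used for non-maximum suppression; simplified here to the
    sum of absolute intensity differences of all circle pixels exceeding the
    threshold (the slide restricts the sum to the pixels in the run)."""
    center = int(img[r, c])
    return sum(abs(int(img[r + dr, c + dc]) - center)
               for dr, dc in CIRCLE
               if abs(int(img[r + dr, c + dc]) - center) > t)
```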

Key Point Description with BRIEF
Now that we have identified interesting points in the image, how can we describe them so that we can detect them in other images? A very efficient method is to use BRIEF (Binary Robust Independent Elementary Features) descriptors. Each descriptor requires only minimal memory (32 bytes per key point), and comparing two descriptors requires only 256 binary operations.

Key Point Description with BRIEF
First, smooth the input image with a 9×9 Gaussian filter. Then choose 256 pairs of points within a 35×35 pixel area, following a Gaussian distribution with σ = 7 pixels. Center the resulting mask on a corner c. For every pair of points, if the intensity at the first point is greater than at the second one, add a 0 to the bit string, otherwise add a 1. The resulting bit string of length 256 is the descriptor for point c.
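
A rough sketch of this construction, assuming a grayscale NumPy image and using SciPy's gaussian_filter for the smoothing step (parameterized by a σ value rather than the 9×9 kernel size mentioned above); the sampling pattern is generated once so that all key points use the same 256 point pairs:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(0)

def make_brief_pattern(n_pairs=256, patch=35, sigma=7.0):
    """Sample n_pairs pairs of (row, col) offsets from an isotropic Gaussian
    with the given sigma, clipped to the patch; generated once and shared by
    all key points so that their descriptors are comparable."""
    half = patch // 2
    offsets = rng.normal(0.0, sigma, size=(n_pairs, 2, 2))
    return np.clip(np.round(offsets), -half, half).astype(int)

def brief_descriptor(img, r, c, pattern, smooth_sigma=2.0):
    """Compute a 256-bit BRIEF descriptor for key point (r, c) as a boolean
    array. The key point must be at least patch//2 pixels from the border;
    in practice the image would be smoothed once for all key points."""
    smoothed = gaussian_filter(img.astype(float), sigma=smooth_sigma)
    bits = np.zeros(len(pattern), dtype=bool)
    for i, ((dr1, dc1), (dr2, dc2)) in enumerate(pattern):
        p1 = smoothed[r + dr1, c + dc1]
        p2 = smoothed[r + dr2, c + dc2]
        # Slide convention: 0 if the first point is brighter, else 1.
        bits[i] = not (p1 > p2)
    return bits
```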

Key Point Matching with BRIEF
In order to compute the matching distance between the descriptors of two different points, we can simply count the number of mismatching bits in their descriptors (Hamming distance). For example, the bit strings 100110 and 110100 have a Hamming distance of 2, because they differ in their second and fifth bits. In order to find the match for point c in another image, we can find the pixel in that image whose descriptor has the smallest Hamming distance to the one for c.
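
A minimal sketch of this matching step, assuming the boolean-array descriptors produced by the BRIEF sketch above:

```python
import numpy as np

def hamming_distance(d1, d2):
    """Number of mismatching bits between two boolean descriptor arrays."""
    return int(np.count_nonzero(d1 != d2))

def best_match(descriptor, candidates):
    """Index of the candidate descriptor with the smallest Hamming distance
    to the given descriptor (brute-force nearest-neighbor matching)."""
    distances = [hamming_distance(descriptor, cand) for cand in candidates]
    return int(np.argmin(distances))

# Example from the slide: 100110 and 110100 differ in the 2nd and 5th bits.
a = np.array([1, 0, 0, 1, 1, 0], dtype=bool)
b = np.array([1, 1, 0, 1, 0, 0], dtype=bool)
assert hamming_distance(a, b) == 2
```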

How do Neural Networks (NNs) work?
NNs are able to learn by adapting their connectivity patterns so that the organism improves its behavior in terms of reaching certain (evolutionary) goals. The strength of a connection, or whether it is excitatory or inhibitory, depends on the state of a receiving neuron's synapses. The NN achieves learning by appropriately adapting the states of its synapses.

Supervised Function Approximation
In supervised learning, we train an artificial NN (ANN) with a set of vector pairs, so-called exemplars. Each pair (x, y) consists of an input vector x and a corresponding output vector y. Whenever the network receives input x, we would like it to provide output y. The exemplars thus describe the function that we want to "teach" our network. Besides learning the exemplars, we would like our network to generalize, that is, give plausible output for inputs that the network had not been trained with.

An Artificial Neuron
[Diagram: neuron i receives input signals x1, x2, …, xn through synapses with weights Wi,1, Wi,2, …, Wi,n; the weighted inputs are combined into the neuron's net input signal, which determines its output signal.]
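
A minimal sketch of such a unit, assuming a weighted-sum net input followed by a simple threshold output function (the particular output function is not specified by the diagram):

```python
import numpy as np

def neuron_output(x, w, theta):
    """Threshold unit: fires (returns 1) if the weighted sum of its inputs
    reaches the threshold theta, otherwise returns 0."""
    net = float(np.dot(w, x))   # net input signal: sum_j w_j * x_j
    return 1 if net >= theta else 0
```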

Linear Separability
Input space in the two-dimensional case (n = 2):
[Three plots of the (x1, x2) input plane, with both axes ranging from -3 to 3, each showing the separating line of a threshold neuron for one parameter setting: w1 = 1, w2 = 2, θ = 2; w1 = -2, w2 = 1, θ = 2; and w1 = -2, w2 = 1, θ = 1.]
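
As a small illustration, the hypothetical neuron_output function from the previous sketch can be used to evaluate the first parameter setting; inputs on opposite sides of the line x1 + 2·x2 = 2 yield different outputs:

```python
import numpy as np

w = np.array([1.0, 2.0])   # w1 = 1, w2 = 2
theta = 2.0                # threshold

# Points on either side of the separating line x1 + 2*x2 = 2:
print(neuron_output(np.array([2.0, 1.0]), w, theta))   # 2 + 2 = 4 >= 2 -> 1
print(neuron_output(np.array([-1.0, 0.0]), w, theta))  # -1 + 0 = -1 < 2 -> 0
print(neuron_output(np.array([0.0, 1.0]), w, theta))   # 0 + 2 = 2 >= 2 -> 1
```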