Chapter 12 Object Recognition
Intelligent Visual Understanding Laboratory
12.1 Patterns and Pattern Classes
Three types of iris flowers described by two measurements
A pattern is an arrangement of descriptors. The name feature is often used in the pattern recognition literature to denote a descriptor. A pattern class is a family of patterns that share some common properties.
Noisy object and its signature: $x_1 = r(\theta_1),\ x_2 = r(\theta_2),\ \ldots,\ x_n = r(\theta_n)$
Staircase structure with string description …abababa…
Satellite image of a heavily built downtown area and surrounding residential areas. Tree Description
12.2 Recognition Based on Decision-Theoretic Methods
Let $\mathbf{x} = (x_1, x_2, \ldots, x_n)^T$ and let $\omega_1, \omega_2, \ldots, \omega_W$ denote $W$ pattern classes. The decision rule assigns $\mathbf{x}$ to class $\omega_i$ if

$d_i(\mathbf{x}) > d_j(\mathbf{x}) \quad j = 1, 2, \ldots, W;\ j \neq i$

In other words, an unknown pattern $\mathbf{x}$ is said to belong to the $i$th pattern class if, upon substitution of $\mathbf{x}$ into all decision functions, $d_i(\mathbf{x})$ yields the largest numerical value.
Minimum distance classifier: define the prototype of each pattern class as the mean vector

$\mathbf{m}_j = \frac{1}{N_j} \sum_{\mathbf{x} \in \omega_j} \mathbf{x}$

and assign $\mathbf{x}$ to class $\omega_j$ if $D_j(\mathbf{x}) = \lVert \mathbf{x} - \mathbf{m}_j \rVert$ is the smallest distance.
Selecting the smallest distance is equivalent to evaluating the functions

$d_j(\mathbf{x}) = \mathbf{x}^T \mathbf{m}_j - \frac{1}{2} \mathbf{m}_j^T \mathbf{m}_j$

and assigning $\mathbf{x}$ to class $\omega_j$ if $d_j(\mathbf{x})$ yields the largest numerical value.
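Both formulations can be sketched in a few lines of NumPy; the toy 2-D patterns and class labels below are hypothetical, chosen only to illustrate the computation.

```python
import numpy as np

def fit_prototypes(X, y):
    """Compute the mean (prototype) vector m_j of each class."""
    classes = np.unique(y)
    return classes, np.array([X[y == c].mean(axis=0) for c in classes])

def predict(X, classes, protos):
    """Assign each pattern to the class with the nearest prototype, using
    the equivalent linear functions d_j(x) = x^T m_j - 0.5 m_j^T m_j
    (largest value wins)."""
    d = X @ protos.T - 0.5 * np.sum(protos**2, axis=1)
    return classes[np.argmax(d, axis=1)]

# Hypothetical 2-D patterns from two classes
X = np.array([[0.0, 0.0], [0.2, 0.1], [1.0, 1.0], [0.9, 1.1]])
y = np.array([0, 0, 1, 1])
classes, protos = fit_prototypes(X, y)
print(predict(X, classes, protos))  # -> [0 0 1 1]
```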
Decision boundary of the minimum distance classifier (the dark dot and square are the class means).
American Bankers Association E-13B font character set and corresponding waveforms.
Matching by Correlation
Correlation: $c(x, y) = \sum_s \sum_t f(s, t)\, w(x + s,\ y + t)$

Correlation coefficient:

$\gamma(x, y) = \dfrac{\sum_s \sum_t \left[ f(s, t) - \bar{f}(s, t) \right] \left[ w(x + s,\ y + t) - \bar{w} \right]}{\left\{ \sum_s \sum_t \left[ f(s, t) - \bar{f}(s, t) \right]^2 \, \sum_s \sum_t \left[ w(x + s,\ y + t) - \bar{w} \right]^2 \right\}^{1/2}}$
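A direct (unoptimized) sketch of the correlation coefficient evaluated at every valid template position; the small synthetic image and template below are hypothetical, and the peak of the resulting map marks the best match.

```python
import numpy as np

def corr_coeff_map(f, w):
    """Correlation coefficient gamma(x, y) of template w slid over
    image f, computed at valid positions only."""
    wh, ww = w.shape
    wbar = w - w.mean()
    wnorm = np.sqrt((wbar**2).sum())
    out = np.zeros((f.shape[0] - wh + 1, f.shape[1] - ww + 1))
    for x in range(out.shape[0]):
        for y in range(out.shape[1]):
            patch = f[x:x+wh, y:y+ww]
            pbar = patch - patch.mean()          # subtract local mean of f
            denom = np.sqrt((pbar**2).sum()) * wnorm
            out[x, y] = (pbar * wbar).sum() / denom if denom > 0 else 0.0
    return out

# Synthetic image with the template embedded at row 3, column 4
f = np.zeros((8, 8))
f[3:5, 4:6] = np.array([[1, 2], [3, 4]])
w = np.array([[1.0, 2.0], [3.0, 4.0]])
g = corr_coeff_map(f, w)
print(tuple(int(i) for i in np.unravel_index(np.argmax(g), g.shape)))  # -> (3, 4)
```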
The mechanics of template matching.
Satellite image, template, location of the best match, and correlation coefficient map.
12.2.2 Optimum Statistical Classifiers
Conditional average risk:

$r_j(\mathbf{x}) = \sum_{k=1}^{W} L_{kj}\, p(\omega_k \mid \mathbf{x})$

The classifier that minimizes the total average loss is called the Bayes classifier. Assign pattern $\mathbf{x}$ to class $\omega_i$ if

$\sum_{k=1}^{W} L_{ki}\, p(\mathbf{x} \mid \omega_k) P(\omega_k) < \sum_{q=1}^{W} L_{qj}\, p(\mathbf{x} \mid \omega_q) P(\omega_q) \quad j = 1, 2, \ldots, W;\ j \neq i$
Bayes classifier for Gaussian pattern classes. The decision function is

$d_j(\mathbf{x}) = p(\mathbf{x} \mid \omega_j) P(\omega_j)$

In the 1-D case,

$p(x \mid \omega_j) = \frac{1}{\sqrt{2\pi}\, \sigma_j} e^{-(x - m_j)^2 / 2\sigma_j^2}$

In the $n$-dimensional case,

$p(\mathbf{x} \mid \omega_j) = \frac{1}{(2\pi)^{n/2} |\mathbf{C}_j|^{1/2}} e^{-\frac{1}{2} (\mathbf{x} - \mathbf{m}_j)^T \mathbf{C}_j^{-1} (\mathbf{x} - \mathbf{m}_j)}$

with $\mathbf{C}_j = E_j\{ (\mathbf{x} - \mathbf{m}_j)(\mathbf{x} - \mathbf{m}_j)^T \}$ and $\mathbf{m}_j = E_j\{ \mathbf{x} \}$. When all covariance matrices are equal, $\mathbf{C}_j = \mathbf{C}$, the Bayes decision function for class $\omega_j$ reduces to

$d_j(\mathbf{x}) = \ln P(\omega_j) + \mathbf{x}^T \mathbf{C}^{-1} \mathbf{m}_j - \frac{1}{2} \mathbf{m}_j^T \mathbf{C}^{-1} \mathbf{m}_j$
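A minimal sketch of the equal-covariance Gaussian Bayes decision function, with priors, class means, and a pooled covariance estimated from hypothetical toy data.

```python
import numpy as np

def bayes_gaussian_fit(X, y):
    """Estimate class means, priors, and a shared covariance matrix C
    (pooled over all classes, per the equal-covariance assumption)."""
    classes = np.unique(y)
    means = np.array([X[y == c].mean(axis=0) for c in classes])
    priors = np.array([(y == c).mean() for c in classes])
    centered = np.vstack([X[y == c] - means[i] for i, c in enumerate(classes)])
    C = centered.T @ centered / len(X)
    return classes, means, priors, C

def bayes_gaussian_predict(X, classes, means, priors, C):
    """d_j(x) = ln P(w_j) + x^T C^-1 m_j - 0.5 m_j^T C^-1 m_j; largest wins."""
    Cinv = np.linalg.inv(C)
    d = np.log(priors) + X @ Cinv @ means.T - 0.5 * np.sum(means @ Cinv * means, axis=1)
    return classes[np.argmax(d, axis=1)]

# Hypothetical 2-D Gaussian-like classes centered at (0,0) and (2,2)
rng = np.random.default_rng(0)
X = np.vstack([rng.normal([0, 0], 0.3, (20, 2)), rng.normal([2, 2], 0.3, (20, 2))])
y = np.array([0] * 20 + [1] * 20)
cls, m, p, C = bayes_gaussian_fit(X, y)
print(bayes_gaussian_predict(np.array([[0.1, 0.1], [1.9, 2.1]]), cls, m, p, C))
```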
Probability density functions for two 1-D pattern classes. The point x0 shown is the decision boundary if the two classes are equally likely to occur.
Two simple pattern classes and their Bayes decision boundary.
Formation of a pattern vector from registered pixels of four digital images generated by a multispectral scanner.
Visible blue, visible green, visible red, and near-infrared wavelength images; mask; results of classification (pixels classified as water, as urban development, and as vegetation).
12.2.3 Neural Networks

The perceptron computes the decision function

$d(\mathbf{x}) = \sum_{i=1}^{n} w_i x_i + w_{n+1}$

Two equivalent representations of the perceptron model for two pattern classes.
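The perceptron decision function above can be trained with the classic fixed-increment (error-correction) rule; a minimal sketch on hypothetical linearly separable patterns:

```python
import numpy as np

def train_perceptron(X, y, lr=1.0, epochs=100):
    """Train d(x) = sum(w_i x_i) + w_{n+1} with the fixed-increment rule.
    y must be +1 for class 1 and -1 for class 2."""
    Xa = np.hstack([X, np.ones((len(X), 1))])  # append 1 for the bias w_{n+1}
    w = np.zeros(Xa.shape[1])
    for _ in range(epochs):
        errors = 0
        for xi, yi in zip(Xa, y):
            if yi * (w @ xi) <= 0:   # misclassified: move w toward the pattern
                w += lr * yi * xi
                errors += 1
        if errors == 0:              # converged (linearly separable case)
            break
    return w

# Hypothetical linearly separable 2-D patterns
X = np.array([[0.0, 0.0], [0.0, 1.0], [2.0, 2.0], [2.0, 3.0]])
y = np.array([-1, -1, 1, 1])
w = train_perceptron(X, y)
print(np.sign(np.hstack([X, np.ones((4, 1))]) @ w))  # -> [-1. -1.  1.  1.]
```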
Patterns belonging to two classes and their decision boundary.
Multilayer feedforward neural network.
Sigmoidal activation function:

$h_j(I_j) = \dfrac{1}{1 + e^{-(I_j + \theta_j)/\theta_0}}$
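A direct transcription of this activation function, with $\theta_j$ the offset and $\theta_0$ controlling the steepness of the curve:

```python
import math

def sigmoid_activation(I_j, theta_j=0.0, theta_0=1.0):
    """h_j(I_j) = 1 / (1 + exp(-(I_j + theta_j) / theta_0)).
    theta_j shifts the curve along the input axis; theta_0 sets its steepness."""
    return 1.0 / (1.0 + math.exp(-(I_j + theta_j) / theta_0))

print(sigmoid_activation(0.0))  # -> 0.5 (the curve's midpoint when theta_j = 0)
print(sigmoid_activation(4.0))  # close to 1 for large positive input
```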
Reference shapes and typical noisy shapes used in training.
Three-layer neural network.
Performance of the neural network as a function of noise level.
Improvement in performance for $R_t = 0.4$ by increasing the number of training patterns (the curve for $R_t = 0.3$ is shown for reference).
Two-input, two-layer network: examples of decision boundaries.
Types of decision regions that can be formed by single- and multilayer feedforward neural networks.
Matching by shape number: shapes, a hypothetical similarity tree, and the similarity matrix.
String matching: sample boundaries of two different object classes, their polygonal approximations, and tabulations of the similarity measure R.
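Assuming the usual definition of the string similarity measure — $R = \alpha/\beta$, where $\alpha$ is the number of position-wise symbol matches between the two strings and $\beta = \max(|a|, |b|) - \alpha$ — a minimal sketch:

```python
def string_similarity(a, b):
    """R = alpha / beta for two symbol strings (assumed definition:
    alpha = number of positionwise matches, beta = max(len(a), len(b)) - alpha).
    R grows with similarity and is infinite for identical strings."""
    alpha = sum(1 for s, t in zip(a, b) if s == t)
    beta = max(len(a), len(b)) - alpha
    return float("inf") if beta == 0 else alpha / beta

print(string_similarity("ababab", "ababab"))  # identical strings -> inf
print(string_similarity("ababab", "abaaab"))  # -> 5.0 (5 matches, beta = 1)
print(string_similarity("ab", "ba"))          # -> 0.0 (no matches)
```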
12.3.3 Syntactic Recognition of Strings
Finite Automaton
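A finite automaton over a string of primitives can be sketched as a transition table; the automaton below is a hypothetical example that accepts the alternating staircase strings ab, abab, … (and the empty string).

```python
def accepts(transitions, start, finals, string):
    """Run a deterministic finite automaton over a string of primitives.
    transitions maps (state, symbol) -> next state; reject on a missing edge."""
    state = start
    for sym in string:
        if (state, sym) not in transitions:
            return False
        state = transitions[(state, sym)]
    return state in finals

# Hypothetical automaton for the staircase pattern: alternating a, b
trans = {("q0", "a"): "q1", ("q1", "b"): "q0"}
print(accepts(trans, "q0", {"q0"}, "abab"))  # -> True
print(accepts(trans, "q0", {"q0"}, "abba"))  # -> False
```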
12.3.4 Syntactic Recognition of Trees
Primitives
Processing stages of a frontier-to-root tree.
A bubble chamber photograph.
Tree representation and coded event.
State Diagram
State diagram for Af (R+, 1).