Supervised Training and Classification
Selection of Training Areas
DNs of training fields plotted on a "scatter" diagram in two-dimensional feature space (Band 1 vs. Band 2 axes; figure from Lillesand & Kiefer).
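A minimal sketch of such a scatter diagram, using hypothetical, randomly generated training DNs for three illustrative classes (none of these values come from the slides):

```python
# Sketch: plot training-field DNs in two-band feature space.
# The class names and DN values below are made-up placeholders.
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
training_dns = {
    "water":  rng.normal(loc=(20, 15), scale=3, size=(50, 2)),
    "forest": rng.normal(loc=(45, 60), scale=5, size=(50, 2)),
    "urban":  rng.normal(loc=(80, 70), scale=6, size=(50, 2)),
}

for name, dns in training_dns.items():
    plt.scatter(dns[:, 0], dns[:, 1], label=name, s=10)

plt.xlabel("Band 1 DN")
plt.ylabel("Band 2 DN")
plt.legend()
plt.title("Training-field DNs in two-dimensional feature space")
plt.show()
```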
Classification Algorithms/Decision Rules
Non-parametric decision rule
- independent of the properties/statistics of the data
- does not assume a normal distribution
- Example: parallelepiped

Parametric decision rule
- assumes a normal distribution
- defined by the signature mean vector and covariance matrix
- Examples: minimum distance, Mahalanobis distance, maximum likelihood
Parallelepiped
"In the parallelepiped decision rule, the data file values of the candidate pixel are compared to upper and lower limits." These limits can be either:
- the minimum and maximum data file values of each band in the signature, or
- the mean of each band plus and minus a set number of standard deviations.
Parallelepiped
The fastest of the classifiers discussed, but it suffers from the problem of "overlapping bounds" (a pixel can fall inside the boxes of two or more classes) and the problem of "corner pixels" (the rectangular box extends beyond the training data, so pixels far from the class mean are still included). A sketch of the rule follows below.
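A minimal sketch of the parallelepiped rule in Python, using the mean plus/minus k standard deviations form of the limits. The array shapes, the function name, and the first-match handling of overlapping boxes are assumptions for illustration, not part of the slides:

```python
# Sketch of a parallelepiped decision rule.
# pixels: (n_pixels, n_bands); training: dict of class name -> (n_samples, n_bands)
import numpy as np

def parallelepiped_classify(pixels, training, k=2.0):
    labels = np.full(pixels.shape[0], "unclassified", dtype=object)
    for name, samples in training.items():
        # Limits: mean of each band plus/minus k standard deviations.
        lower = samples.mean(axis=0) - k * samples.std(axis=0)
        upper = samples.mean(axis=0) + k * samples.std(axis=0)
        inside = np.all((pixels >= lower) & (pixels <= upper), axis=1)
        # First match wins; pixels falling in several boxes illustrate
        # the "overlapping bounds" problem.
        labels[inside & (labels == "unclassified")] = name
    return labels
```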
Minimum Distance
"Calculates the spectral distance between the measurement vector for the candidate pixel and the mean vector for each signature."
Advantages: no unclassified pixels.
Disadvantages: does not incorporate class variation, so pixels that should remain unclassified are forced into a class.
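A minimal sketch of the minimum-distance rule, assigning each pixel to the class whose training mean vector is closest in Euclidean spectral distance. The class dictionary layout and function name are assumptions for illustration:

```python
# Sketch of a minimum-distance-to-means classifier.
# pixels: (n_pixels, n_bands); training: dict of class name -> (n_samples, n_bands)
import numpy as np

def minimum_distance_classify(pixels, training):
    names = list(training)
    means = np.stack([training[c].mean(axis=0) for c in names])  # (n_classes, n_bands)
    # Spectral distance from every pixel to every class mean vector.
    dists = np.linalg.norm(pixels[:, None, :] - means[None, :, :], axis=2)
    # Every pixel receives a label, which is why no pixels stay unclassified.
    return np.array(names, dtype=object)[dists.argmin(axis=1)]
```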
Maximum Likelihood
For each pixel to be classified:
- the probability of membership is calculated for each class, and
- the pixel is assigned to the class with the largest probability.
The slowest of the classifiers discussed, but theoretically the best classification.
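A minimal sketch of maximum-likelihood classification, assuming each class follows a multivariate normal distribution defined by its training mean vector and covariance matrix, with equal prior probabilities. The use of SciPy and the function name are illustrative choices, not from the slides:

```python
# Sketch of a maximum-likelihood classifier with Gaussian class models.
# pixels: (n_pixels, n_bands); training: dict of class name -> (n_samples, n_bands)
import numpy as np
from scipy.stats import multivariate_normal

def maximum_likelihood_classify(pixels, training):
    names = list(training)
    # Log probability density of every pixel under each class's Gaussian model.
    log_probs = np.column_stack([
        multivariate_normal(
            mean=training[c].mean(axis=0),
            cov=np.cov(training[c], rowvar=False),
        ).logpdf(pixels)
        for c in names
    ])
    # Each pixel takes the class with the largest probability.
    return np.array(names, dtype=object)[log_probs.argmax(axis=1)]
```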
Maximum Likelihood Degrees of Probability
Maximum Likelihood
Maximum Likelihood
Figure: probability curves for the Water, Forest, and Urban classes plotted against Band 4 DN (axis from 0 to 90).