1
Chapter 7 Maximum likelihood classification of remotely sensed imagery (遥感影像分类的最大似然法)
ZHANG Jingxiong (张景雄)
School of Remote Sensing Information Engineering, Wuhan University
2
Introduction
- Thematic classification: an important technique in remote sensing for the provision of spatial information.
- For example, land cover information is a key input to biophysical parameterization, environmental modeling, resources management, and other applications.
3
- Historically, visual interpretation (目视判读) was applied to identify homogeneous areal classes on aerial photographs.
- With digital images, this process can be substantially automated using two methods: unsupervised and supervised classification.
4
Unsupervised (非监督)
5
Supervised (监督)
6
The maximum likelihood classification (极大似然分类) - MLC
- The most common method for supervised classification
- Classes (类别): {ω_k, k = 1, …, c}
- Measurement vector (观测向量/数据): z(x)
- Posterior probability (验后概率) of a class given the data: p(ω_k | z(x))
- Classification rule (分类规则): assign x to class ω_i if p(ω_i | z(x)) > p(ω_j | z(x)) for all j ≠ i
7
Bayes’ theorem (贝叶斯定理)
8
Bayes' theorem at work …
- Candidate cover types {ω_k, k = 1, …, c}, with c the total number of classes considered, e.g., {urban, forest, agriculture}
- Bayes' theorem then allows evaluation of the posterior probability (验后概率) p(ω_k | z(x)), the probability of class ω_k conditional on the data z(x)
9
- This is based on:
  1) the prior probability (先验概率) p(ω_k), also written p_k: the expected occurrence of candidate class ω_k (before measurement)
  2) the class-conditional probability density p(z(x) | ω_k), also written p_k(z(x)): the occurrence of measurement vector z(x) conditional on class ω_k (derived from training data)
10
- Bayes' theorem then gives the posterior probability:
  p(ω_k | z(x)) = p(ω_k, z(x)) / p(z(x)) = p_k p_k(z(x)) / p(z(x))
  where p(z(x)) = Σ_k p(ω_k, z(x)) = Σ_k p_k p_k(z(x))
11
Assign observation z(x) to the class ω_k that has the highest posterior probability p(ω_k | z(x)), or equivalently the highest product p_k p_k(z(x)), since p(z(x)) does not change the relative magnitude of p(ω_k | z(x)) across k.
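As an illustration (not from the original slides; the class names and numbers are made up), a minimal Python sketch of this rule: the class with the largest product p_k p_k(z(x)) is also the class with the largest posterior, so the evidence p(z(x)) only normalizes.

```python
# Bayes/MLC decision rule: choose the class with the largest
# prior * class-conditional density. All values are illustrative.
priors = {"urban": 0.2, "forest": 0.5, "agriculture": 0.3}           # p_k
densities = {"urban": 0.010, "forest": 0.004, "agriculture": 0.008}  # p_k(z(x))

scores = {k: priors[k] * densities[k] for k in priors}   # p_k * p_k(z(x))
evidence = sum(scores.values())                          # p(z(x))
posteriors = {k: s / evidence for k, s in scores.items()}

best = max(posteriors, key=posteriors.get)  # same argmax with or without evidence
print(posteriors, "->", best)
```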
12
Computing p(z(x) | ω_k) with normally distributed data
- The occurrence of measurement vector z(x) conditional on class ω_k, assuming a multivariate normal distribution:
  p_k(z(x)) = (2π)^(−b/2) det(Cov_{Z|k})^(−1/2) exp(−dis²/2)
13
- dis²: the squared Mahalanobis distance between an observation z(x) and the class-specific mean m_{Z|k}:
  dis² = (z(x) − m_{Z|k})^T Cov_{Z|k}^(−1) (z(x) − m_{Z|k})
- Cov_{Z|k}: the variance–covariance matrix of variable Z conditional on class ω_k
- m_{Z|k}: the mean vector of Z conditional on class ω_k
- b: the number of features (e.g., spectral bands)
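A sketch of this density computation in NumPy (the function name and layout are illustrative, not part of the chapter):

```python
import numpy as np

def class_conditional_density(z, mean, cov):
    """p_k(z) under a multivariate normal model for class k."""
    z, mean = np.asarray(z, dtype=float), np.asarray(mean, dtype=float)
    b = z.size                                   # number of features/bands
    diff = z - mean
    dis2 = diff @ np.linalg.inv(cov) @ diff      # squared Mahalanobis distance
    norm = (2 * np.pi) ** (-b / 2) * np.linalg.det(cov) ** (-0.5)
    return norm * np.exp(-dis2 / 2)
```

For example, class_conditional_density([4, 3], [4, 2], [[3, 4], [4, 6]]) evaluates to about 0.0532, matching the worked example later in this chapter.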
14
Mathematical detail
- The objective/criterion (准则): to minimize the conditional average loss
  L(ω_i | z(x)) = Σ_j mis(i, j) p(ω_j | z(x))
- mis(i, j): the cost resulting from labeling a pixel that actually belongs to class ω_j as class ω_i
15
- "0-1 loss function":
  0 – no cost for correct classification
  1 – unit cost for misclassification
- The expected loss incurred if pixel x with observation z(x) is classified as ω_i then becomes:
  L(ω_i | z(x)) = Σ_{j ≠ i} p(ω_j | z(x)) = 1 − p(ω_i | z(x))
16
- A decision rule that leads to minimized loss is therefore: decide z(x) → ω_i if and only if
  p(ω_i | z(x)) ≥ p(ω_j | z(x)) for all j ≠ i
- The Bayesian classification rule thus works as a kind of maximum likelihood classification.
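A short sketch of the general minimum-expected-loss rule (hypothetical values; only the 0-1 special case appears in the slides), showing that with the 0-1 loss it coincides with picking the maximum posterior:

```python
import numpy as np

def min_loss_label(posteriors, cost):
    """Choose label i minimizing the expected loss sum_j cost[i, j] * p(w_j | z)."""
    return int(np.argmin(cost @ posteriors))

posteriors = np.array([0.5, 0.3, 0.2])  # p(w_j | z(x)), illustrative
zero_one = 1 - np.eye(3)                # mis(i, j) = 1 if i != j, else 0

# Under 0-1 loss, expected loss is 1 - p(w_i | z), so argmin = argmax posterior.
assert min_loss_label(posteriors, zero_one) == int(np.argmax(posteriors))
```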
17
Examples
- Two classes ω_1 and ω_2
- Means and dispersion (covariance) matrices:
  m_1 = [4 2], m_2 = [3 3]
  Cov_1 = | 3 4 |   Cov_2 = | 4 5 |
          | 4 6 |           | 5 7 |
  Cov_1^(−1) = |  3  −2  |   Cov_2^(−1) = |  7/3  −5/3 |
               | −2  1.5 |                | −5/3   4/3 |
18
- To decide to which class the measurement vector z = (4, 3) belongs.
- Squared Mahalanobis distances between this measurement vector and the two class means:
  dis_1² = 3/2, dis_2² = 7/3
- Class-conditional probability densities:
  p_1(z) = 1/(2π × 1.414) × exp(−3/4) = 0.0532
  p_2(z) = 1/(2π × 1.732) × exp(−7/6) = 0.0286
19
- Assume equal prior class probabilities: p_1 = 1/2 and p_2 = 1/2
- The posterior probabilities are then:
  p(ω_1 | z) = 0.0532 × (1/2) / (0.0266 + 0.0143) = 0.65
  p(ω_2 | z) = 0.0286 × (1/2) / (0.0266 + 0.0143) = 0.35
- So the measurement z is classified into class ω_1.
20
- When p_1 = 1/3 and p_2 = 2/3,
- the posterior probabilities become:
  p(ω_1 | z) = 0.0532 × (1/3) / (0.0177 + 0.0191) = 0.48
  p(ω_2 | z) = 0.0286 × (2/3) / (0.0177 + 0.0191) = 0.52
- Now class ω_2 is favored for z over class ω_1: the priors can tip the decision.
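The worked example can be checked end to end with a few lines of NumPy (a verification sketch, not code from the chapter):

```python
import numpy as np

z = np.array([4.0, 3.0])
means = [np.array([4.0, 2.0]), np.array([3.0, 3.0])]
covs = [np.array([[3.0, 4.0], [4.0, 6.0]]),
        np.array([[4.0, 5.0], [5.0, 7.0]])]

def density(z, m, cov):
    diff = z - m
    dis2 = diff @ np.linalg.inv(cov) @ diff          # squared Mahalanobis distance
    return (2 * np.pi) ** -1 * np.linalg.det(cov) ** -0.5 * np.exp(-dis2 / 2)

dens = np.array([density(z, m, c) for m, c in zip(means, covs)])
print(dens.round(4))                                  # [0.0532 0.0286]

for priors in ([1 / 2, 1 / 2], [1 / 3, 2 / 3]):
    post = dens * priors
    post /= post.sum()                                # divide by p(z)
    print(priors, post.round(2), "-> class", post.argmax() + 1)
# equal priors favor class 1; priors (1/3, 2/3) tip the decision to class 2
```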
21
Looking back …
- Pixel-based methods
- Parcel-based methods: adaptations to the MLC? object-oriented approaches?
22
Extraction of Impervious Surfaces Using Object-Oriented Image Segmentation
- USGS NAPP 1 × 1 m DOQQ of an area in North Carolina
- Impervious surfaces
23
Methods for thematic mapping
- Parametric: MLC
- Non-parametric: artificial neural networks
- Non-metric: expert systems, decision-tree classifiers, machine learning
24
Questions
1. Discuss the importance of statistics in thematic classification of remotely sensed imagery.