1
Automated Detection and Classification Models
SAR Automatic Target Recognition Proposal
J. Bell, Y. Petillot
2
Contents
– Background
– ATR on SAR
– ATR on Sonar
– Supporting Technologies
– Initial results on SAR
– Way forward
3
ATR Approaches
Image-based techniques
– Based on large training sets
– Assumes some form of linearity in the imaging process
– Image-based only (difficult to fuse with other external data)
– NN / pattern matching for classification
Model-based techniques
– Model can be learned (trained) or imposed (CAD)
– Can use physics
– Can use simulation in the loop (test simulation vs. data)
– Can take into account non-linear image formation
4
Classical Approach
[Block diagram of a typical recognition scenario: imaging platform, target, classifier, orientation estimator]
5
Model-Based Recognition
[Block diagram: target, classifier, orientation estimator]
6
Model-Based Recognition
[Diagram: functional estimation combining training data, scene and sensor physics, image processing and inference; one component is flagged as the "difficult (and weak?) part"]
7
Our proposal
[Flowchart: detect ROIs (MRF-based model, saliency, context detection) → fuse other views → import DTM models → extract highlight/shadow (CSS model) → classify object (Dempster-Shafer); detections judged to be false alarms are removed, positive classifications are reported as targets]
8
Our proposal
[Flowchart: object detection and context detection via segmentation / Markov trees (section 3.1) → highlight/shadow parameter extraction (section 3.2) → generate a set of parameters → simulate an image based on the parameters and models (sections 3.3.2 and 3.4) → compare the real and simulated images (section 3.3.1)]
9
Sonar ATR
– A Markov Random Field (MRF) model framework is used.
– MRF models perform well on noisy images.
– A priori information can easily be incorporated as priors.
– The model is used to retrieve the underlying label field (e.g. shadow/non-shadow).
10
Basic MRF Theory
A pixel's class is determined by two terms (see the sketch below):
– The probability that the pixel intensity was drawn from each class's distribution.
– The classes of its neighbouring pixels.
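A minimal sketch of this trade-off, assuming Gaussian class models, a 4-connected neighbourhood and a synchronous ICM-style update (none of which are specified on the slide):

```python
# Minimal sketch of one relaxation pass over an MRF label field: each pixel is
# assigned the label minimising a data term (negative Gaussian log-likelihood)
# plus a smoothness term counting disagreements with its 4-connected neighbours.
# The class models, beta and the neighbourhood are illustrative assumptions.
import numpy as np

def mrf_pass(image, labels, means, variances, beta=1.5):
    h, w = image.shape
    new_labels = labels.copy()
    for y in range(h):
        for x in range(w):
            best_label, best_energy = 0, np.inf
            for k, (mu, var) in enumerate(zip(means, variances)):
                # Term 1: how likely the pixel intensity is under class k's distribution.
                data = 0.5 * np.log(2 * np.pi * var) + (image[y, x] - mu) ** 2 / (2 * var)
                # Term 2: penalty for disagreeing with neighbouring pixel labels.
                neighbours = []
                if y > 0: neighbours.append(labels[y - 1, x])
                if y < h - 1: neighbours.append(labels[y + 1, x])
                if x > 0: neighbours.append(labels[y, x - 1])
                if x < w - 1: neighbours.append(labels[y, x + 1])
                smooth = beta * sum(1 for n in neighbours if n != k)
                if data + smooth < best_energy:
                    best_label, best_energy = k, data + smooth
            new_labels[y, x] = best_label
    return new_labels

# Example: two-class (shadow / background) labelling of a synthetic image.
rng = np.random.default_rng(0)
img = rng.normal(0.8, 0.1, size=(32, 32))
img[:, 10:16] = rng.normal(0.2, 0.1, size=(32, 6))
init = (img > 0.5).astype(int)   # 0 = shadow, 1 = background
labels = mrf_pass(img, init, means=[0.2, 0.8], variances=[0.01, 0.01])
```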
11
Incorporating A Priori Info
– Object-highlight regions appear as small, dense clusters.
– Most highlight regions have an accompanying shadow region.
– Segment by minimising an MRF energy function (a generic form is sketched below).
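The minimised energy itself does not survive in this transcript; a standard form for this kind of two-term MRF criterion, written here as an assumption rather than the authors' exact formulation, is:

```latex
% Assumed generic MRF segmentation energy: a per-pixel data term (negative
% log-likelihood of intensity y_s under the class model of label x_s) plus an
% Ising-style smoothness prior over neighbouring label pairs.
E(\mathbf{x}) = \sum_{s} -\log p\left(y_s \mid x_s\right)
              + \beta \sum_{(s,t) \in \mathcal{N}} \left[ 1 - \delta(x_s, x_t) \right]
```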
12
Initial Detection Results
– Results are good (85-90% detection rate).
– The model sometimes produces false alarms due to clutter such as the surface return; this requires more analysis.
[Image: detection result with the detected object marked]
13
Object Feature Extraction
– The object's shadow is often extracted for classification.
– The shadow region is generally more reliable for classification than the object's highlight region.
– Most shadow extraction models operate well on flat seafloors but give poor results on complex seafloors.
14
The CSS Model
– Two statistical snakes segment the mugshot image into three regions: object-highlight, object-shadow and background.
– A priori information is modelled:
  – The highlight is brighter than the shadow.
  – An object's shadow region can only be as wide as its highlight region.
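The CSS criterion is not reproduced in the text; a typical energy for two region-based statistical snakes C_h (highlight) and C_s (shadow), given only as an assumed general form, sums the negative log-likelihood of the pixels in each of the three regions under that region's intensity model:

```latex
% Assumed region-competition criterion: Omega_h, Omega_s and Omega_b are the
% highlight, shadow and background regions defined by the two snakes, each with
% its own intensity model p_h, p_s, p_b (e.g. Rayleigh or Gaussian).
E(C_h, C_s) = -\sum_{s \in \Omega_h} \log p_h(y_s)
              -\sum_{s \in \Omega_s} \log p_s(y_s)
              -\sum_{s \in \Omega_b} \log p_b(y_s)
```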
15
CSS Results
[Images: segmentation results from the CSS model compared with a standard model]
16
The Combined Model
– Objects detected by the MRF model are passed through the CSS model.
– The CSS snakes are initialised using the label field from the detection result, which ensures a confident initialisation each time.
– The CSS can reject many of the false alarms: false alarms without three distinct regions cause the snakes to expand rapidly, identifying the detection as a false alarm.
– Navigation information is also used to produce height information, which can also remove false alarms.
17
Results
18
Results 2
19
Results 3
20
Results 4
21
Object Classification
– The extracted object's shadow can be used for classification.
– We extend the classic mine/not-mine classification to provide shape and dimension information.
– The non-linear nature of the shadow-forming process makes finding relevant invariant features difficult.
[Images: shadows cast by the same object]
22
Modelling the Sonar Process
– Mines can be approximated as simple shapes: cylinders, spheres and truncated cones.
– Using navigation data to slant-range correct, we can generate synthetic shadows under the same sonar conditions in which the object was detected.
– A simple line-of-sight sonar simulator is used; it is very fast (a minimal sketch follows).
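A minimal sketch of a flat-seafloor line-of-sight shadow computation; the geometry, grid and target profile are illustrative assumptions, not the authors' simulator:

```python
# Minimal sketch of line-of-sight shadowing for a side-looking sonar over a
# flat seafloor: a seafloor cell is shadowed if any closer part of the target
# profile rises above the ray from the sonar to that cell.
import numpy as np

def shadow_mask(height_profile, ground_range, sonar_altitude):
    """Return True where the seafloor is shadowed by the height profile.

    height_profile : target height above the seafloor at each ground-range sample
    ground_range   : ground range of each sample, measured from the sonar nadir
    sonar_altitude : sonar height above the seafloor
    """
    shadowed = np.zeros_like(ground_range, dtype=bool)
    for i, r in enumerate(ground_range):
        if r <= 0:
            continue
        closer = ground_range < r
        # Height of the ray from the sonar (range 0, altitude H) to the seafloor
        # point at range r, evaluated at every closer range sample.
        ray_height = sonar_altitude * (1.0 - ground_range[closer] / r)
        shadowed[i] = np.any(height_profile[closer] > ray_height)
    return shadowed

# Example: a 1 m high, 2 m long box-like target 30 m from nadir, sonar at 10 m altitude.
grange = np.linspace(0.0, 60.0, 601)
profile = np.where((grange >= 30.0) & (grange < 32.0), 1.0, 0.0)
mask = shadow_mask(profile, grange, sonar_altitude=10.0)
print("shadow extends to ~%.1f m ground range" % grange[mask].max())
```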
23
Comparing the Shadows
– An iterative technique is required to find the best fit.
– The parameter space is limited by considering the highlight and shadow lengths.
– The synthetic and real shadows are compared using the Hausdorff distance, which measures the mismatch between the two shapes.
[Diagram: Hausdorff distance between the two shadow outlines]
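A minimal sketch of the shape comparison using SciPy's directed Hausdorff distance; the contour point sets are illustrative:

```python
# Minimal sketch of comparing a real and a synthetic shadow contour with the
# symmetric Hausdorff distance (the maximum of the two directed distances).
import numpy as np
from scipy.spatial.distance import directed_hausdorff

def hausdorff(contour_a, contour_b):
    """Symmetric Hausdorff distance between two (N, 2) arrays of contour points."""
    d_ab = directed_hausdorff(contour_a, contour_b)[0]
    d_ba = directed_hausdorff(contour_b, contour_a)[0]
    return max(d_ab, d_ba)

# Example: a real shadow outline and a slightly shifted synthetic one.
real_shadow = np.array([[0, 0], [4, 0], [4, 2], [0, 2]], dtype=float)
synthetic_shadow = real_shadow + np.array([0.3, 0.1])
print("Hausdorff mismatch:", hausdorff(real_shadow, synthetic_shadow))
```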
24
Mono-view Results
– Dempster-Shafer allocates a belief to each class.
– Unlike Bayesian or fuzzy methods, D-S theory can also consider unions of classes.
Example single-view beliefs for three detections:
– Bel(cyl) = 0.83, Bel(sph) = 0.0, Bel(cone) = 0.0, Bel(clutter) = 0.08
– Bel(cyl) = 0.0, Bel(sph) = 0.303, Bel(cone) = 0.45, Bel(clutter) = 0.045
– Bel(cyl) = 0.42, Bel(sph) = 0.0, Bel(cone) = 0.0, Bel(clutter) = 0.46
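A minimal sketch of belief computed from a Dempster-Shafer mass assignment, illustrating how mass can sit on unions of classes; the frame and mass values are illustrative, not taken from these results:

```python
# Minimal sketch of Dempster-Shafer belief: Bel(A) sums the mass of every focal
# element contained in hypothesis A.  Mass can be placed on unions of classes,
# which Bayesian and fuzzy approaches cannot express directly.
FRAME = frozenset({"cyl", "sph", "cone", "clutter"})

def belief(masses, hypothesis):
    """Bel(A) = sum of mass over all focal elements that are subsets of A."""
    return sum(m for focal, m in masses.items() if focal <= hypothesis)

masses = {
    frozenset({"cyl"}): 0.6,
    frozenset({"cyl", "cone"}): 0.2,   # evidence that cannot separate the two classes
    FRAME: 0.2,                        # remaining ignorance
}
print(belief(masses, frozenset({"cyl"})))          # 0.6
print(belief(masses, frozenset({"cyl", "cone"})))  # 0.8
```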
25
Multi-view Analysis
– Dempster-Shafer allows results from multiple views to be fused.

Mono-image belief                         Fused belief
Obj   Cyl    Sph    Cone   Clutt          Objs fused   Cyl    Sph    Cone   Clutt
1     0.70   0.00   0.00   0.21           1            0.70   0.00   0.00   0.21
2     0.83   0.00   0.00   0.08           1,2          0.93   0.00   0.00   0.05
3     0.83   0.00   0.00   0.08           1,2,3        0.98   0.00   0.00   0.01
4     0.17   0.00   0.00   0.67           1,2,3,4      0.96   0.00   0.00   0.03
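A minimal sketch of Dempster's rule of combination for fusing two views; the mass assignments are illustrative choices loosely based on the mono-view beliefs of objects 1 and 2 above, not the authors' exact numbers:

```python
# Minimal sketch of Dempster's rule of combination: multiply masses of every
# pair of focal elements, accumulate products on their intersection, and
# renormalise away the conflicting (empty-intersection) mass.
def combine(m1, m2):
    fused, conflict = {}, 0.0
    for a, wa in m1.items():
        for b, wb in m2.items():
            inter = a & b
            if inter:
                fused[inter] = fused.get(inter, 0.0) + wa * wb
            else:
                conflict += wa * wb
    return {k: v / (1.0 - conflict) for k, v in fused.items()}

FRAME = frozenset({"cyl", "sph", "cone", "clutter"})
view1 = {frozenset({"cyl"}): 0.70, frozenset({"clutter"}): 0.21, FRAME: 0.09}
view2 = {frozenset({"cyl"}): 0.83, frozenset({"clutter"}): 0.08, FRAME: 0.09}
fused = combine(view1, view2)
print({tuple(sorted(k)): round(v, 3) for k, v in fused.items()})
```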
26
Multi-Image Analysis

Mono-image belief                         Fused belief
Obj   Cyl    Sph    Cone   Clutt          Objs fused   Cyl    Sph    Cone   Clutt
5     0.00   0.17   0.23   0.45           5            0.00   0.17   0.23   0.45
6     0.00   0.00   0.37   0.44           5,6          0.00   0.00   0.30   0.60
7     0.00   0.303  0.45   0.045          5,6,7        0.00   0.02   0.67   0.17
8     0.00   0.32   0.23   0.31           5,6,7,8      0.00   0.01   0.62   0.20
27
Context Detection
– The current detection model considers objects as a highlight/shadow pair.
– An object can also be considered as a discrepancy in the surrounding texture field.
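A minimal sketch of that alternative view, assuming a simple local-statistics test (not the proposed method): flag pixels whose local mean deviates strongly from the surrounding texture.

```python
# Minimal sketch of detecting an object as a discrepancy in the local texture
# field: compare each pixel's local mean to the statistics of a larger
# surrounding window.  Window sizes and the threshold are illustrative.
import numpy as np
from scipy.ndimage import uniform_filter

def texture_discrepancy(image, inner=5, outer=31, k=2.0):
    """Flag pixels whose local mean deviates from the surrounding background
    by more than k background standard deviations."""
    local_mean = uniform_filter(image, size=inner)
    bg_mean = uniform_filter(image, size=outer)
    bg_sq = uniform_filter(image ** 2, size=outer)
    bg_std = np.sqrt(np.maximum(bg_sq - bg_mean ** 2, 1e-12))
    return np.abs(local_mean - bg_mean) > k * bg_std

# Example on synthetic speckle with a bright patch.
rng = np.random.default_rng(1)
img = rng.rayleigh(scale=0.2, size=(128, 128))
img[60:70, 60:70] += 1.0
print("flagged pixels:", int(texture_discrepancy(img).sum()))
```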
28
The Way Forward
Context detection using segmentation based on:
– Markov Random Fields
– Variational techniques
– Saliency
Shape extraction for highlight/shadow:
– Use of the image formation process to force a combined, meaningful extraction.
– Active contours to perform robust extraction (statistical snakes, Mumford-Shah).
29
The Way Forward
Model-based classification:
– Initial model parameters extracted from segmentation.
– Model refined using:
  – A simulator of the image formation plus a search in parameter space
  – Direct inference using training and large databases
  – Active appearance models trained on large sets
Robustify classification via:
– Multi-view combination
– Inclusion of DTM models (via the simulator, or via statistical priors?)
30
Initial tests
[Result panels: input image; Rayleigh segmentation; hierarchical MRF; two-class segmentation (variational approach)]
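For reference, a minimal sketch (an assumption, not the authors' code) of the per-pixel two-class labelling under Rayleigh intensity models that such segmentations start from:

```python
# Minimal sketch of two-class maximum-likelihood labelling with Rayleigh
# intensity models (shadow vs. background), as a starting point for the MRF or
# variational refinements named above.  The scale parameters are illustrative.
import numpy as np

def rayleigh_loglik(x, sigma):
    """Log-density of a Rayleigh distribution with scale sigma, for x >= 0."""
    x = np.maximum(x, 1e-12)  # avoid log(0) on zero-valued pixels
    return np.log(x) - 2.0 * np.log(sigma) - x ** 2 / (2.0 * sigma ** 2)

def rayleigh_two_class(image, sigma_shadow=0.05, sigma_background=0.25):
    """Label each pixel 0 (shadow) or 1 (background) by the higher likelihood."""
    ll_shadow = rayleigh_loglik(image, sigma_shadow)
    ll_background = rayleigh_loglik(image, sigma_background)
    return (ll_background > ll_shadow).astype(np.uint8)

# Example on a synthetic image: dark (shadow-like) strip in brighter speckle.
rng = np.random.default_rng(0)
img = rng.rayleigh(scale=0.25, size=(64, 64))
img[:, 20:30] = rng.rayleigh(scale=0.05, size=(64, 10))
labels = rayleigh_two_class(img)
print("shadow pixels:", int((labels == 0).sum()))
```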
31
Simulator