Evolutionary Feature Extraction for SAR Air to Ground Moving Target Recognition – a Statistical Approach Evolving Hardware Dr. Janusz Starzyk Ohio University.



3 Neural Network Data Classification
- Concept of a "Logic Brain"
- Random learning data generation
- Multiple-space classification of data
- Feature function extraction
- Dynamic selectivity strategy
- Training procedure for data identification
- FPGA implementation for a fast training process

4 Neural Network Data Classification
Concept of a "Logic Brain"
- A threshold setup converts the analog world to the digital world
- A "logic brain" is possible, based on an artificial neural network
Random learning data generation
- Gaussian-distributed random multi-dimensional data generation
- Half of the data sets are prepared for the learning procedure
- The other half is used later for the training procedure
Abdulqadir Alaqeeli and Jing Pang

5 Neural Network Data Classification
Multiple-space classification of data
- Each space can be represented by a set of minimum base vectors
Feature function extraction and dynamic selection strategy
- Conditional entropy extracts the information in each subspace
- Different combinations of base vectors compose redundant sets of new subspaces → expansion strategy
- Minimum function selection → shrinking strategy

6 Neural Network Data Classification
FPGA implementation for a fast training process
- Learning results are saved on board
- Testing data sets are generated on board and sent through the on-board artificial neural network to test the successful classification rate
- The results are displayed on board
Promising applications
- Especially useful for feature extraction from large data sets
- Catastrophic circuit fault detection

7 Information Index: Background
- A priori class probabilities are known
- Entropy measure based on conditional probabilities
[Scatter plot: Class A samples (x) and Class B samples (o) in feature space]

8 Information Index: Background
- P1 and P2 are the a priori class probabilities
- P1w and P2w are the conditional probabilities of correct classification for each class
- P12w and P21w are the conditional probabilities of misclassification given a test signal
- P1w, P2w, P12w, and P21w are calculated using Bayesian estimates of their probability density functions
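These conditional probabilities are easy to illustrate. A minimal sketch, assuming two hypothetical one-dimensional Gaussian classes with equal priors and a midpoint decision threshold (an illustration only, not the setup used in the talk):

```python
import math

def gauss_cdf(x, mu, sigma):
    """Gaussian CDF of N(mu, sigma^2) evaluated at x."""
    return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))

# Two hypothetical 1-D Gaussian classes with equal priors P1 = P2 = 0.5.
mu1, mu2, sigma = 0.0, 2.0, 1.0
threshold = (mu1 + mu2) / 2.0                  # midpoint decision boundary

P1w  = gauss_cdf(threshold, mu1, sigma)        # class 1 classified correctly
P12w = 1.0 - P1w                               # class 1 misclassified as class 2
P2w  = 1.0 - gauss_cdf(threshold, mu2, sigma)  # class 2 classified correctly
P21w = 1.0 - P2w                               # class 2 misclassified as class 1

P_err = 0.5 * P12w + 0.5 * P21w                # overall misclassification probability
```

With means two standard deviations apart, each class is classified correctly about 84% of the time; in higher dimensions the same quantities require the numerical integration discussed on the following slides.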

9 Information Index: Background
[Figure: probability density functions of P1w, P2w, P12w, and P21w]

10 Direct Integration
For N dimensions, m^N grid points are needed to estimate the integral.
[Figure: a uniform grid (ΔSi = ΔSk) versus a nonuniform grid (ΔSi < ΔSk)]
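A quick sketch of why direct integration breaks down: with a hypothetical resolution of m = 10 grid points per axis, the m^N evaluation count explodes with dimension, while a Monte Carlo sample budget stays fixed:

```python
m = 10               # hypothetical grid points per axis
mc_budget = 100_000  # a fixed Monte Carlo sample count, for comparison

for N in (2, 4, 8, 16):
    grid_points = m ** N  # direct integration needs m**N evaluations
    print(f"N={N:2d}: grid needs {grid_points:.1e} points, MC uses {mc_budget:.1e}")
```

At N = 16 the grid already requires 10^16 evaluations, which motivates the Monte Carlo approach on the next slide.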

11 Monte Carlo Integration
[Figure: densities pdf1 and pdf2 with weights W(Xi) at sample points xi; the Xi are generated with pdf1]

12 Information Index: Probability Density Functions
[Figure: probability density function of P2w]

13 Information Index: Weighted pdfs
[Figure: weighted probability density function of P2w]

14 Information Index: Monte Carlo Integration
To integrate the probability density function:
- generate random points xi with pdf1
- weight the generated points according to W(xi) [formula shown on the slide]
- estimate the conditional probability P1w [formula shown on the slide]
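A sketch of the three steps, assuming Gaussian pdf1 and pdf2 and the standard importance-sampling weight W(x) = pdf2(x)/pdf1(x); the slide's weight and estimator formulas did not survive extraction, so that form is an assumption:

```python
import math
import random

def pdf(x, mu, sigma):
    """Gaussian density N(mu, sigma^2) at x."""
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

random.seed(0)
mu1, mu2, sigma = 0.0, 2.0, 1.0   # hypothetical class densities pdf1 and pdf2
threshold = 1.0                   # hypothetical class-1 decision region: x < threshold
n = 100_000

# Step 1: generate random points xi with pdf1.
xs = [random.gauss(mu1, sigma) for _ in range(n)]

# Step 3 (unweighted case): P1w is the fraction of pdf1 samples in the class-1 region.
P1w = sum(1 for x in xs if x < threshold) / n

# Steps 2-3 (weighted case): re-weight the same samples by W(x) = pdf2(x)/pdf1(x)
# to estimate the integral of pdf2 over the class-1 region without drawing from pdf2.
P21w = sum(pdf(x, mu2, sigma) / pdf(x, mu1, sigma)
           for x in xs if x < threshold) / n
```

For these densities the exact values are P1w ≈ 0.841 and P21w ≈ 0.159, which the estimates approach as n grows; the re-weighting is what lets one set of random points serve several integrals.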

15 Information Index and Probability of Misclassification

16 Standard Deviation of Information in MC Simulation

17 Normalized Standard Deviation of Information

18 Information Index: Status
- MIIFS was generalized to continuous distributions
- An N-dimensional information index was developed
- Efficient N-dimensional integration was used
- Information error analysis was performed
- The information index can be used with non-Gaussian distributions
- For small training sets and a low information index, the information error is larger than the information itself

19 Optimum Transformation: Background
- Principal Component Analysis (PCA) based on the Mahalanobis distance suffers from scaling
- PCA assumes Gaussian distributions and estimates covariance matrices and mean values
- PCA is sensitive to outliers
- Wavelets provide a compact data representation and improve recognition
- The improvement shows no statistically significant difference in recognition across different wavelets
- Hence the need for a specialized transformation

20 Optimum Transformation: Haar Wavelet Example

21 Optimum Transformation: Haar Wavelet
Repeat the average-and-difference step log2(n) times.
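The recursion can be sketched directly, using the (a+b)/2 average and (a-b) difference pairing that the later block diagrams label (0+1)/2 and (0-1); normalization conventions vary, so this is one plausible reading:

```python
def haar_step(a):
    """One average-and-difference pass over adjacent pairs."""
    avg = [(a[i] + a[i + 1]) / 2.0 for i in range(0, len(a), 2)]
    dif = [a[i] - a[i + 1] for i in range(0, len(a), 2)]
    return avg, dif

def haar(a):
    """Repeat the pass log2(n) times, recursing on the averages."""
    detail = []
    while len(a) > 1:
        a, d = haar_step(a)
        detail = d + detail  # coarser-level differences go in front
    return a + detail        # [overall average, differences coarse to fine]
```

For an 8-sample input the pass runs log2(8) = 3 times, producing one overall average and seven difference coefficients, matching the n coefficients per level that the hardware slides operate on.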

22 Optimum Transformation: Haar Wavelet
Waveform interpretation

23 Optimum Transformation: Haar Wavelet
Matrix interpretation: b = W*a [the matrix W is shown on the slide]

24 Optimum Transformation: Haar Wavelet
Matrix interpretation for a class of signals: B = W*A, where A is the (n x m) input signal matrix.
Selection of the n best coefficients is performed using the information index:
Bs1 = S1*W*A, where S1 is an (n x n*log2(n)) selection matrix.

25 Optimum Transformation: Evolutionary Iterations
Iterating on the selected result:
Bs2 = S2*W*Bs1, where S2 is a selection matrix, or equivalently Bs2 = S2*W*S1*W*A.
After k iterations: Bsk = Sk*W* ... *S2*W*S1*W*A.
So the optimized transformation matrix T = Sk*W* ... *S2*W*S1*W can be obtained by evolving the Haar wavelet.
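Composing the iterations into a single operator T = Sk*W*...*S1*W can be sketched as below. The selections here are arbitrary stand-ins for the information-index ranking, and they keep all n rows so that T stays square; in the talk, selection shrinks the row count:

```python
def haar_w(n):
    """One-level Haar operator W: average rows on top, difference rows below."""
    W = [[0.0] * n for _ in range(n)]
    for k in range(n // 2):
        W[k][2 * k] = W[k][2 * k + 1] = 0.5
        W[n // 2 + k][2 * k], W[n // 2 + k][2 * k + 1] = 1.0, -1.0
    return W

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

def selection(rows, n):
    """S as a 0/1 row-selection matrix (information-index ranking omitted)."""
    return [[1.0 if c == r else 0.0 for c in range(n)] for r in rows]

n = 4
T = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]  # identity
# Hypothetical selections S1, S2 standing in for the evolved ones.
for rows in ([0, 1, 2, 3], [0, 2, 1, 3]):
    T = matmul(selection(rows, n), matmul(haar_w(n), T))  # T <- S * W * T
```

Once accumulated, T applies the whole evolved feature extraction to any new signal in a single matrix-vector product, which is what makes the hardware mapping on the later slides possible.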

26 Optimum Transformation: Evolutionary Iterations
Learning with the evolved features

27 Optimum Transformation: Evolutionary Iterations
Waveform interpretation of the rows of T

28 Optimum Transformation: Evolutionary Iterations
[Figure: "Original signals and the evolved transformation", signal value (-1.5 to 1.5) versus bin index (0 to 140), showing the mean class signals and the evolved transformation]

29 Two-Class Training
Training on HRR signals: 17° depression-angle profiles of BMP2 and BTR60.

30 Wavelet-Based Reconfigurable FPGA for Classification
[Block diagram: a window of m 8-bit samples (sample #1 through sample #m) feeds the Haar wavelet transform; k coefficients (note: k <= m) feed the neural network, whose output indicates whether the input signal is recognized]

31 Block Diagram of the Parallel Architecture
[Block diagram: eight inputs (0 through 7) processed pairwise in parallel, e.g. (0+1)/2]

32 Simplified Block Diagram of the Serial Architecture
[Block diagram: registered average (A) and difference (D) units with registers (R, built from CLBs internally and from IOBs at the boundary) produce (0+1)/2, (0-1), (2+3)/2, (2-3) in a first pass, then the (0+1)/2 and (0-1) of those results in a second pass: first the blue stage, then the green]
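The A and D units are simple enough to model in software. A minimal sketch, assuming integer samples, a truncating (a+b)/2 average, and an (a-b) difference; the bit widths and register timing are abstracted away:

```python
def a_unit(a, b):
    """Registered average: (a + b) / 2 with truncation, as a hardware shifter would do."""
    return (a + b) >> 1

def d_unit(a, b):
    """Registered difference."""
    return a - b

# One serial pass over four samples, mirroring the (0+1)/2, (0-1), (2+3)/2, (2-3) labels.
samples = [0, 1, 2, 3]  # hypothetical sample values standing in for 8-bit data
averages = [a_unit(samples[i], samples[i + 1]) for i in (0, 2)]
diffs    = [d_unit(samples[i], samples[i + 1]) for i in (0, 2)]
```

Feeding the averages back through the same A/D pair gives the second (green) pass, which is why the serial architecture needs so few CLBs compared with the parallel one.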

33 RAM-Based Wavelet
[Block diagram: five 16x8 RAM blocks, a processing element (PE), write and read address generators (WA, RA), and a control unit with Start and Done signals and a Data In port]

34 The Processing Element
[Datapath diagram: 8-bit operands pass through registered add/subtract and multiplexer stages, with intermediate word widths growing to 9, 10, and 11 bits]

35 Results: One Iteration of the Haar Wavelet
For 8 samples:
- Parallel arch.: 120 CLBs, 128 IOBs, 58 ns
- Serial arch.: 98 CLBs*, 72 IOBs, 148 ns*
The parallel architecture wins for larger numbers of samples.
For 16 samples:
- Parallel arch.: 320 CLBs, 256 IOBs, 233 ns
- RAM-based arch.: 136 CLBs, 16 IOBs, ~1 µs
The RAM-based architecture wins, since 1 µs is not prohibitively slow.
* These values grow very quickly as the number of samples increases, and the delay becomes much higher.

36 Reconfigurable Haar-Wavelet-Based Architecture
[Block diagram: processing element (PE) and data path]

37 [Figure-only slide]

38 Test Results
Testing on HRR signals: 15° depression-angle profiles of BMP2 and BTR60.
With 15 features selected, correct classification is 69.3% for BMP2 and 82.6% for BTR60.
Comparable results in the SHARP confusion matrix are 56.7% for BMP2 and 67% for BTR60.

39 Problem Issues
BTR60 signals at 17° and 15° depression angles do not have compatible statistical distributions.

40 Problem Issues
The BMP2 and BTR60 signal distributions are not Gaussian.

41 Work Completed
- Information index and its properties
- Multidimensional MC integration
- Information as a measure of learning quality
- Information error
- Wavelets and their effect on pattern recognition
- The Haar wavelet as a linear matrix operator
- Evolution of the Haar wavelet
- Statistical support for classification

42 Recommendations and Future Work
- Training data must represent a statistical sample of all signals, not a hand-picked subset
- Probability density functions will be approximated using a parametric or NN approach
- The information measure will be extended to k-class problems
- Training and testing will be performed on 12-class data
- Dynamic clustering will prepare the decision-tree structure
- A hybrid, evolutionary classifier will be developed

