
1 Student Mini-Camp Project Report: Pattern Recognition

Participant Students and Affiliations:
Patrick Choi - Claremont Graduate University
Joseph McGrath - University of Massachusetts, Lowell
Peizhe Shi - University of Washington
Hem Wadhar - UC Los Angeles
Qin Wu - West Virginia University
Flora Xu - Claremont Graduate University

Advisor: Jen-Mei Chang - CSU Long Beach

2 Problem Statement: Pattern Recognition
Pattern recognition is broadly known as a sub-field of machine learning, the scientific discipline concerned with designing algorithms that allow a machine to learn from the information it is given. We worked on a given data set containing distinct images of cats and dogs. The first 160 images are labeled as dogs or cats, and the remaining 38 images are unlabeled. Our objective is to build a pattern recognition architecture on the known data (the labeled dogs and cats), and then use that routine to correctly classify the unknown images (the unlabeled dogs and cats) as either dogs or cats.

3 Pattern Recognition
Can we produce an algorithm/technique/method that can be trained to distinguish between cats and dogs?

4 Image Pre-Processing
Raw / Canny Filtered / 2-D Wavelet Transform -> PCA / LDA -> Identification Model

5 Image Pre-Processing
Raw: The raw data of each 64x64 pixel image was not manipulated; it went directly to either the PCA or the LDA method in the next step of the program. Using the "imread" command in MATLAB, the original TIF images of the 80 cats and 79 dogs are read in and written to an 80 by 4096 and a 79 by 4096 matrix, respectively.
Canny Filter Edge Detection Method: Using the matrices created during the raw image pre-processing step, the images were then analyzed with a Canny filter edge detection method. MATLAB automatically calculates the high and low thresholds, and the Gaussian filter uses a sigma value of 1.
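A minimal sketch of the raw pre-processing step described above, assuming the cat images are stored as cat1.tif through cat80.tif in the working directory (the file names and loop are illustrative, not taken from the report):

% Read each 64x64 TIF image and store it as one row of an 80x4096 matrix.
% File names cat1.tif ... cat80.tif are assumed for illustration.
num_cats = 80;
cat = zeros(num_cats, 64*64);
for j = 1:num_cats
    img = imread(strcat('cat', num2str(j), '.tif'));   % 64x64 grayscale image
    cat(j,:) = reshape(double(img), 1, 64*64);         % flatten into a row vector
end
save('cat.mat', 'cat');   % this matrix is loaded by the Canny edge script on the next slide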

6 Image Pre-Processing: Canny Edge Detecting

cat = importdata('cat.mat');   % opening up the matrix "cat", which contains all of the
                               % raw cat data in one 80x4096 matrix
cat = cat';                    % transpose so each column is one 4096-pixel image
[m, n] = size(cat);            % m = 4096 pixels, n = 80 images
all_cats_edge = zeros(n, m);
for j = 1:n
    cat_j = reshape(cat(:,j), 64, 64);   % restore the 64x64 image
    cat_edge = edge(cat_j, 'canny');     % Canny edge detection
    file_name = strcat('cat', num2str(j), '.mat');
    save(file_name, 'cat_edge');         % save each edge image to its own file
    all_cats_edge(j,:) = reshape(cat_edge, 1, m);
end
save('all_cats_edge.mat', 'all_cats_edge');

7 Image Pre-Processing: example images, Raw vs. Canny filtered.

8 The Wavelet Decomposition of a 2-D Image
The wavelet decomposition of a 2-D image can be obtained by performing the filtering consecutively along the horizontal and vertical directions (separable filter bank). This is depicted schematically in the accompanying figure.

9 The Wavelet Decomposition of a 2-D Image
Wavelet decomposition subbands:
LL: low-frequency components
LH: high-frequency components in the vertical direction
HL: high-frequency components in the horizontal direction
HH: high-frequency components in the diagonal direction
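A minimal sketch of a single-level 2-D wavelet decomposition in MATLAB (requires the Wavelet Toolbox; the 'haar' wavelet is an assumption, since the report does not state which wavelet was used):

% Single-level 2-D wavelet decomposition of one 64x64 image.
cat = importdata('cat.mat');              % 80x4096 matrix of raw cat images (as above)
img = reshape(cat(1,:), 64, 64);          % restore the first image to 64x64
[cA, cH, cV, cD] = dwt2(img, 'haar');     % approximation plus horizontal, vertical,
                                          % and diagonal detail coefficients
figure;
subplot(2,2,1); imagesc(cA); title('approximation (LL)');
subplot(2,2,2); imagesc(cH); title('horizontal detail');
subplot(2,2,3); imagesc(cV); title('vertical detail');
subplot(2,2,4); imagesc(cD); title('diagonal detail');
colormap gray;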

10 Edge Detection by Wavelet Method: the HL, LH, and HH detail subband images.


14 PCA Method
PCA transforms many potentially correlated variables into a few uncorrelated ones.
– This reduces the dimension of the problem so that we may more easily compare input images to our training sets.
– The lower-dimensional representation uses the 'highest energy' singular vectors as a basis for representation. A sketch of this step follows.
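A minimal sketch of PCA via the singular value decomposition, assuming the training images are stored row-wise in an 80x4096 matrix as in the pre-processing step, and that k = 9 components are kept (k is an assumption chosen to match the nine singular vectors shown on the next slides):

% PCA of the cat training images via the SVD.
cat = importdata('cat.mat');                    % 80x4096 matrix, one image per row
X = cat';                                       % 4096x80: one image per column
mu = mean(X, 2);                                % mean image
Xc = X - repmat(mu, 1, size(X, 2));             % center the data
[U, S, V] = svd(Xc, 'econ');                    % economy-size SVD
k = 9;                                          % number of 'highest energy' singular vectors
coords = U(:, 1:k)' * Xc;                       % k-dimensional coordinates of the training images
x_new = X(:, 1);                                % stand-in for an unknown image (4096x1)
new_coords = U(:, 1:k)' * (x_new - mu);         % project it into the same k-dimensional space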

15 PCA Method The first nine singular vectors for raw image data for dogs & cats.

16 PCA Method The first nine singular vectors for the Canny filter edge data for dogs.

17 PCA Method The first nine singular vectors for the Vertical + Horizontal wavelet data for dogs.
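A sketch of how the first nine singular vectors shown on the three preceding slides can be displayed as 64x64 images, reusing the SVD from the sketch above:

% Display the first nine singular vectors as 64x64 images (cf. the figures above).
cat = importdata('cat.mat');                    % 80x4096 raw image matrix (as above)
X = cat';
Xc = X - repmat(mean(X, 2), 1, size(X, 2));     % center, images as columns
[U, ~, ~] = svd(Xc, 'econ');
figure;
for i = 1:9
    subplot(3, 3, i);
    imagesc(reshape(U(:, i), 64, 64));          % each column of U is a 4096-pixel 'eigen-image'
    axis image; axis off;
    title(sprintf('singular vector %d', i));
end
colormap gray;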

18 PCA Method Results using the raw data as training input to the PCA methodology

19 PCA Method Results using the Canny edge filter data as training input to the PCA methodology

20 PCA Method Results using the Wavelet coefficient horizontal + vertical data as training input to the PCA methodology

21 PCA Method Results using the Wavelet coefficient horizontal + vertical + diagonal data as training input to the PCA methodology

22 PCA Method
Results:
– 17 out of 38 unknown test images were identified correctly; the test images were converted to V + H wavelets.
Take-away:
– Potential coding or algorithm flaws.

23 Linear Discriminant Analysis (LDA)
Idea: project the high-dimensional image data linearly into a one-dimensional space, where the data is classified using an optimal threshold.
Main procedures:
– Feature extraction: preprocess the data from the training set.
– Select the optimal direction of projection w.
– Determine the optimal threshold c.
– Identify the unknown data.

24 LDA – Feature Extraction
Advantages:
– Lower dimension, faster computation.
– Discarding redundant information gives a more efficient classification.
Singular Value Decomposition (SVD):
– X: preprocessed images for training, X = U S V^T.
Feature selection:
– Features: the first n_f columns of U serve as the principal components.
– New data: the first n_f rows of S V^T serve as the extracted information of the images.
– The dimension of the data space decreases to n_f. A sketch of this step follows.
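A minimal sketch of the feature-extraction step, assuming the preprocessed cat and dog images are stacked row-wise into one matrix and that n_f = 30 features are kept (the dog file name and the stacking order are assumptions; 30 features matches the testing slide below):

% SVD-based feature extraction for LDA.
cats = importdata('all_cats_edge.mat');    % preprocessed cat images, one per row
dogs = importdata('all_dogs_edge.mat');    % preprocessed dog images (file name assumed)
X = [cats; dogs]';                         % columns are images: X = U*S*V'
[U, S, V] = svd(X, 'econ');
nf = 30;                                   % number of features kept (30 used in testing below)
Uf = U(:, 1:nf);                           % principal components
features = S(1:nf, :) * V';                % first nf rows of S*V': one nf-vector per image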

25 LDA – Optimal Direction of Projection
Goal:
– maximize the inter-class distance in the projected space
– minimize the intra-class distance in the projected space

26 LDA – Optimal direction of projection
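The equations on this slide appear only in the original figure. One standard way to meet the goal on the previous slide is the Fisher criterion; the sketch below assumes the feature matrix from the previous sketch, with the cat images in the first 80 columns and the dog images in the rest (the split is an assumption), and it may differ in detail from the report's own derivation.

% Fisher-style choice of the projection direction w.
cat_f = features(:, 1:80);                 % feature vectors of the cat images (assumed split)
dog_f = features(:, 81:end);               % feature vectors of the dog images
m1 = mean(cat_f, 2);                       % class means in feature space
m2 = mean(dog_f, 2);
Sw = cov(cat_f') + cov(dog_f');            % within-class scatter
w = Sw \ (m1 - m2);                        % direction separating the two class means
w = w / norm(w);                           % normalize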

27 LDA – Optimal Threshold
After projecting all training data onto the optimal direction w, pick a threshold c such that:
– the total number of errors is minimized, and
– the numbers of cat errors and dog errors are equal.
Identification:
– Feature extraction: project the image onto the principal components, x_e = U_f^T x.
– Compute the projection of the extracted data onto the optimal direction w: v = w^T x_e.
– Compare the projection v with the threshold c to identify the class of the unknown image.
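A sketch of the identification step just described, assuming Uf and w from the earlier sketches and the threshold reported on the next slide; the file name and which class lies above the threshold are assumptions.

% Classify one unknown 64x64 image with the LDA pipeline sketched above.
img = imread('unknown1.tif');                       % file name assumed for illustration
x = reshape(double(edge(img, 'canny')), [], 1);     % same Canny preprocessing, as a column vector
x_e = Uf' * x;                                      % project onto the nf principal components
v = w' * x_e;                                       % project onto the optimal direction
c = 43.6;                                           % threshold reported on the testing slide
if v > c
    label = 'cat';                                  % which side is 'cat' is an assumption
else
    label = 'dog';
end
disp(label);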

28 LDA – Testing
– Various sizes of training set, using the rest for testing.
– 30 features, 10 trials, shuffled images.
– Classification rate around 90%.
Training on all 80 dogs and 80 cats:
– 40 features.
– Threshold: 43.6.
– Errors: dogs 2, cats 2.
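A sketch of one shuffled training/testing trial as described above (repeated 10 times in the report); it assumes the feature matrix from the earlier sketch with an 80/80 cat/dog ordering, a training-set size of 120, and a simple midpoint threshold, all of which are assumptions.

% One shuffled training/testing trial of the LDA classifier.
labels = [ones(1, 80), zeros(1, 80)];               % 1 = cat, 0 = dog
idx = randperm(160);                                % shuffle the images
train_idx = idx(1:120);  test_idx = idx(121:end);   % training-set size is an assumption
Xtr = features(:, train_idx);  ytr = labels(train_idx);
Xte = features(:, test_idx);   yte = labels(test_idx);
m1 = mean(Xtr(:, ytr == 1), 2);  m2 = mean(Xtr(:, ytr == 0), 2);
Sw = cov(Xtr(:, ytr == 1)') + cov(Xtr(:, ytr == 0)');
w = Sw \ (m1 - m2);                                 % projection direction for this trial
vtr = w' * Xtr;                                     % projected training data
c = (mean(vtr(ytr == 1)) + mean(vtr(ytr == 0))) / 2;   % midpoint threshold (assumed)
pred = (w' * Xte) > c;                              % classify the test images
rate = mean(pred == (yte == 1));                    % classification rate for this trial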

29 LDA - Comparison

30 LDA – On the Secret Data
Missed 3 out of 38 (2 dogs, 1 cat). Rate of success: 92%.

