
1 Data Classification with the Radial Basis Function Network Based on a Novel Kernel Density Estimation Algorithm Yen-Jen Oyang Department of Computer Science and Information Engineering National Taiwan University

2 An Example of Data Classification
Data (x, y) and class:
(15,33) O   (18,28) ×   (16,31) O
(9,23) ×    (15,35) O   (9,32) ×
(8,15) ×    (17,34) O   (11,38) ×
(11,31) O   (18,39) ×   (13,34) O
(13,37) ×   (14,32) O   (19,36) ×
(18,32) O   (25,18) ×   (10,34) ×
(16,38) ×   (23,33) ×   (15,30) O
(12,33) O   (21,28) ×   (13,22) ×

3 Distribution of the Data Set [Scatter plot of the samples, showing where the "O" and "×" objects fall in the plane.]

4 Rule Based on Observation

5 Rule Generated by the Proposed RBF (Radial Basis Function) Network Based Learning Algorithm Let $f_O(v)$ and $f_\times(v)$ be the likelihood functions of the two classes, each a linear combination of Gaussian functions centered at the training samples. If $f_O(v) \ge f_\times(v)$, then prediction = "O". Otherwise prediction = "X".
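To make the rule concrete, here is a minimal sketch of this kind of decision rule in Python. The equal-weight kernels and the bandwidth value are illustrative assumptions; the weights and bandwidths actually produced by the learning algorithm are derived later in the presentation.

```python
import numpy as np

def mixture_likelihood(v, centers, sigma):
    """Sum of spherical Gaussian kernels centered at the training samples."""
    d2 = np.sum((np.asarray(centers) - np.asarray(v)) ** 2, axis=1)
    return np.sum(np.exp(-d2 / (2.0 * sigma ** 2)))

def predict(v, centers_o, centers_x, sigma=2.0):
    """Predict "O" when the O-class likelihood dominates, else "X"."""
    lo = mixture_likelihood(v, centers_o, sigma)
    lx = mixture_likelihood(v, centers_x, sigma)
    return "O" if lo >= lx else "X"
```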

6 Class "O" samples: (15,33), (11,31), (18,32), (12,33), (15,35), (17,34), (14,32), (16,31), (13,34), (15,30); values: 1.723, 2.745, 2.327, 1.794, 1.973, 2.045, 1.794, 2.027.
Class "×" samples: (9,23), (8,15), (13,37), (16,38), (18,28), (18,39), (25,18), (23,33), (21,28), (9,32), (11,38), (19,36), (10,34), (13,22); values: 6.458, 10.08, 2.939, 2.745, 5.451, 3.287, 10.86, 5.322, 5.070, 4.562, 3.463, 3.587, 3.232, 6.260.

7 Identifying Boundary of Different Classes of Objects

8 Boundary Identified

9 The Vector Space Model In the vector space model, each object is described by a number of numerical attributes. For example, the physical appearance of a person can be described by height, weight, and age.

10 Transformation of Categorical Attributes into Numerical Attributes Represent the attribute values of the object in a binary table form as exemplified in the following:

11 Assign appropriate weight to each column. Treat the weighted vector of each row as the feature vector of the corresponding object.
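A minimal sketch of this transformation, assuming a hypothetical three-value attribute domain and illustrative column weights (neither comes from the slides):

```python
import numpy as np

CATEGORIES = ["red", "green", "blue"]   # hypothetical attribute domain
WEIGHTS = np.array([1.0, 0.5, 0.5])     # illustrative per-column weights

def feature_vector(value):
    """One row of the binary table, scaled column by column."""
    row = np.array([1.0 if value == c else 0.0 for c in CATEGORIES])
    return row * WEIGHTS

print(feature_vector("green"))          # [0.  0.5 0. ]
```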

12 Application of Data Classification in Bioinformatics Data classification has been applied to predict the function and tertiary structure of a protein sequence.

13 Basics of Protein Structures A typical protein consists of hundreds to thousands of amino acids. There are 20 basic amino acids, each of which is denoted by one English character.

14 Three-dimensional Structure of Myoglobin Source: Lectures of BioInfo by yukijuan

15 Prediction of Protein Functions and Tertiary Structures Given a protein sequence, biochemists are interested in its functions and its tertiary structure.

16 The PDB database, which collects proteins with verified tertiary structures, contains ~19,000 proteins. The SWISSPROT database, which collects proteins with verified functions, contains ~110,000 proteins. The PIR-PSD database, which collects proteins with verified functions, contains ~280,000 proteins. The PIR-NREF database, which collects all protein sequences, contains ~1,060,000 proteins.

17 Problem Definition of Kernel Smoothing Given the values of a function $f$ at a set of samples $S = \{s_1, s_2, \dots, s_n\}$, we want to find a set of symmetric kernel functions $K_i$ and the corresponding weights $w_i$ such that $f(v) \approx \sum_{i=1}^{n} w_i K_i(v)$.

18 Kernel Smoothing with the Spherical Gaussian Functions Hartman et al. showed that a linear combination of spherical Gaussian functions can approximate any function with arbitrarily small error. “Layered neural networks with Gaussian hidden units as universal approximations”, Neural Computation, Vol. 2, No. 2, 1990.

19 With the Gaussian kernel functions, we want to find weights $w_i$ and bandwidths $\sigma_i$ such that $f(v) \approx \sum_{i=1}^{n} w_i \exp\!\left(-\frac{\|v - s_i\|^2}{2\sigma_i^2}\right)$.

20 Problem Definition of Kernel Density Estimation Assume that we are given a set of samples taken from a probability distribution in a d-dimensional vector space. The problem is to find a linear combination of kernel functions that approximates the probability density function of the distribution.

21 The value of the probability density function at a vector $v$ can be estimated as follows: $\hat{f}(v) \approx \frac{k}{n \cdot V_k(v)}$, where $n$ is the total number of samples, $R_k(v)$ is the distance between vector $v$ and its $k$-th nearest sample, and $V_k(v)$ is the volume of a sphere with radius $R_k(v)$ in a d-dimensional vector space.
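A sketch of this estimator in Python, using SciPy's k-d tree for the neighbor search; the sphere-volume constant is the standard one for a d-dimensional Euclidean ball:

```python
import numpy as np
from scipy.spatial import cKDTree
from scipy.special import gamma

def knn_density(v, samples, k):
    """k-NN density estimate: k / (n * volume of the sphere reaching the k-th neighbor)."""
    n, d = samples.shape
    dists, _ = cKDTree(samples).query(v, k=k)
    r_k = np.atleast_1d(dists)[-1]                        # distance to the k-th nearest sample
    volume = (np.pi ** (d / 2) / gamma(d / 2 + 1)) * r_k ** d
    return k / (n * volume)
```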

22 A 1-D Example of Kernel Smoothing with the Spherical Gaussian Functions

23 The Existing Approaches for Kernel Smoothing with Spherical Gaussian Functions One conventional approach is to place one Gaussian function at each sample. As a result, the problem becomes how to find a weight $w_i$ and a bandwidth $\sigma_i$ for each sample $s_i$ such that $f(v) \approx \sum_{i=1}^{n} w_i \exp\!\left(-\frac{\|v - s_i\|^2}{2\sigma_i^2}\right)$.

24 The most widely used objective is to minimize $\sum_{v_j \notin S} \left( \hat{f}(v_j) - f(v_j) \right)^2$, where the $v_j$ are test samples and $S$ is the set of training samples. The conventional approach suffers from high time complexity, approaching $O(n^3)$, due to the need to compute the inverse of an $n \times n$ matrix.
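The cost is easy to see in a direct implementation sketch: fitting one weight per sample amounts to building and solving a dense n-by-n linear system, which is cubic in n (a fixed bandwidth is assumed here for simplicity):

```python
import numpy as np

def fit_weights(samples, values, sigma):
    """Conventional RBF fit: one Gaussian per sample, weights from a dense solve.

    Building and solving the n-by-n system G w = f is what drives the O(n^3) cost.
    """
    diff = samples[:, None, :] - samples[None, :, :]
    G = np.exp(-np.sum(diff ** 2, axis=2) / (2.0 * sigma ** 2))  # n x n design matrix
    return np.linalg.solve(G, values)
```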

25 M. Orr proposed a number of approaches to reduce the number of units in the hidden layer of the RBF network. Beatson et al. proposed $O(n \log n)$ learning algorithms using polyharmonic spline functions.

26 An O(n) Algorithm for Kernel Smoothing In the proposed learning algorithm, we assume uniform sampling. That is, samples are located at the crosses of an evenly-spaced grid in the d-dimensional vector space. Let $\delta$ denote the distance between two adjacent samples. If the assumption of uniform sampling does not hold, then some sort of interpolation can be conducted to obtain the approximate function values at the crosses of the grid.

27 A 2-D Example of Uniform Sampling

28 The Basic Idea of the O(n) Kernel Smoothing Algorithm Under the assumption that the sampling density is sufficiently high, the function values at a sample $s_i$ and its $k$ nearest samples $s_{i_1}, \dots, s_{i_k}$ are virtually equal; that is, $f(s_i) \approx f(s_{i_1}) \approx \dots \approx f(s_{i_k})$. In other words, $f$ is virtually a constant function in the proximity of $s_i$.

29 Accordingly, we can expect that a kernel smoothing function built from the local samples satisfies $\hat{f}(v) \approx f(s_i)$ for every vector $v$ in the proximity of $s_i$.

30 A 1-D Example

31 In the 1-D example, samples are located at $x_i = i\delta$, where $i$ is an integer. Under the assumption that the sampling density is sufficiently high, we have $f(x) \approx f(x_i)$ for $x$ in the proximity of $x_i$. The issue now is to find appropriate $w_i$ and $\sigma$ such that $\sum_i w_i \exp\!\left(-\frac{(x - x_i)^2}{2\sigma^2}\right) \approx f(x)$.

32 If we set $\sigma = \beta\delta$, then we have $\sum_{i=-\infty}^{\infty} \exp\!\left(-\frac{(x - i\delta)^2}{2\sigma^2}\right) \approx \frac{\sqrt{2\pi}\,\sigma}{\delta}$.

33 Therefore, with $\sigma = \beta\delta$, we can set $w_i = \frac{\delta}{\sqrt{2\pi}\,\sigma} f(x_i)$ and obtain $\hat{f}(x) \approx f(x_i)$ for $x$ in the proximity of $x_i$.

34 In fact, it can be shown that with $\sigma = \beta\delta$, the approximation error is bounded. Therefore, we have the following function approximator: $\hat{f}(x) = \sum_i \frac{\delta}{\sqrt{2\pi}\,\sigma} f(x_i) \exp\!\left(-\frac{(x - x_i)^2}{2\sigma^2}\right)$.
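A minimal sketch of the resulting 1-D approximator; it assumes the function values are given on an evenly spaced grid, as the derivation requires:

```python
import numpy as np

def smooth_1d(x, grid, f_grid, delta, beta=0.7):
    """1-D kernel smoothing on an evenly spaced grid with sigma = beta * delta:
    f_hat(x) = sum_i (delta / (sqrt(2*pi)*sigma)) * f(x_i) * exp(-(x-x_i)^2 / (2*sigma^2)).
    """
    sigma = beta * delta
    w = (delta / (np.sqrt(2.0 * np.pi) * sigma)) * f_grid   # one weight per grid sample
    return np.sum(w * np.exp(-((x - grid) ** 2) / (2.0 * sigma ** 2)))
```

Since every weight is computed directly from the sampled value at that grid point, constructing the approximator takes a single pass over the samples, which is where the O(n) claim comes from.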

35 Generalization of the 1-D Kernel Smoothing Function We can generalize the result by setting $\sigma = \beta\delta$, where $\beta$ is a real number. The table on the next page shows the bounds of the approximation error with various $\beta$ values.

36 [Table: bounds of the approximation error for $\beta$ = 0.5, 1.0, and 1.5; the bound values were not preserved in the transcript.]

37 An Example of the Effect of Different Settings of β

38 The Smoothing Effect The kernel smoothing function is actually a weighted average of the sampled function values. Therefore, selecting a larger $\beta$ value implies that the smoothing effect will be more significant. Our suggestion is to set $\beta = 0.7$.

39 An Example of the Smoothing Effect [Figures: the smoothing effect, and the elimination of the smoothing effect with a compensation procedure.]

40 The General Form of a Kernel Smoothing Function in the Multi-Dimensional Vector Space Under the assumption that the sampling density is sufficiently high, the function values at a sample $s_i$ and its $k$ nearest samples $s_{i_1}, \dots, s_{i_k}$ are virtually equal; that is, $f(s_i) \approx f(s_{i_1}) \approx \dots \approx f(s_{i_k})$.

41 As a result, we can expect that $f(v) \approx \sum_j w_j \exp\!\left(-\frac{\|v - s_j\|^2}{2\sigma_j^2}\right)$, where $w_j$ and $\sigma_j$ are the weights and bandwidths of the Gaussian functions located at the samples $s_j$, respectively.

42 Since the influence of a Gaussian function decreases exponentially as the distance increases, we can set $k$ to a value such that, for a vector $v$ in the proximity of sample $s_i$, only the Gaussian functions located at $s_i$ and its $k$ nearest samples contribute significantly to $\hat{f}(v)$.

43 Since $f$ is virtually constant in the proximity of $s_i$, our objective is to find $w_j$ and $\sigma_j$ such that $\sum_j w_j \exp\!\left(-\frac{\|v - s_j\|^2}{2\sigma_j^2}\right) \approx f(s_i)$ for every vector $v$ in the proximity of $s_i$.

44 Let $\sigma_j = \sigma = \beta\delta$ for all $j$. Then, we have a sum of identical Gaussian functions placed at the crosses of the grid, just as in the 1-D case.

45 [Derivation continued; the equations on this slide were not preserved in the transcript.]

46 Therefore, with $\sigma = \beta\delta$, $\sum_j \exp\!\left(-\frac{\|v - s_j\|^2}{2\sigma^2}\right)$ is virtually a constant function, and accordingly we want to set $w_j = \left(\frac{\delta}{\sqrt{2\pi}\,\sigma}\right)^{d} f(s_j)$.

47 Finally, by setting $\sigma$ uniformly to $\beta\delta$, we obtain the following kernel smoothing function that approximates $f(v)$: $\hat{f}(v) = \sum_j \left(\frac{\delta}{\sqrt{2\pi}\,\sigma}\right)^{d} f(s_j) \exp\!\left(-\frac{\|v - s_j\|^2}{2\sigma^2}\right)$.

48 Generally speaking, if we set $\sigma$ uniformly to $\beta\delta$ for an arbitrary real $\beta$, we will obtain $\hat{f}(v) = \left(\frac{1}{\sqrt{2\pi}\,\beta}\right)^{d} \sum_j f(s_j) \exp\!\left(-\frac{\|v - s_j\|^2}{2\sigma^2}\right)$.
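The multi-dimensional form translates directly into code; a sketch, again assuming function values on a uniform grid with spacing delta:

```python
import numpy as np

def smooth_nd(v, grid_points, f_vals, delta, beta=0.7):
    """d-dimensional kernel smoothing: each grid sample s_i contributes
    (1 / (sqrt(2*pi) * beta))^d * f(s_i) * exp(-||v - s_i||^2 / (2 * sigma^2))."""
    d = grid_points.shape[1]
    sigma = beta * delta
    coeff = (1.0 / (np.sqrt(2.0 * np.pi) * beta)) ** d
    d2 = np.sum((grid_points - np.asarray(v)) ** 2, axis=1)
    return coeff * np.sum(f_vals * np.exp(-d2 / (2.0 * sigma ** 2)))
```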

49 Application in Data Classification One of the applications of the RBF network is data classification. However, recent development in data classification has focused on support vector machines (SVMs), due to accuracy concerns. In this paper, we propose an RBF network based data classifier that delivers the same level of accuracy as the SVM and enjoys some additional advantages.

50 The Proposed RBF Network Based Classifier The proposed algorithm constructs one RBF network to approximate the probability density function of each class of objects, based on the kernel smoothing algorithm just presented.

51 The Proposed Kernel Density Estimation Algorithm for Data Classification Classification of a new object $v$ is conducted based on the likelihood function: the object is assigned to the class $m$ that maximizes $L_m(v) = \frac{|S_m|}{|S|}\,\hat{f}_m(v)$, where $S_m$ is the set of class-$m$ training samples and $\hat{f}_m$ is the approximate class-$m$ density function.

52 Let us adopt the following estimation of the value of the probability density function at each training sample $s_i$: $\hat{f}(s_i) \approx \frac{k}{|S_m| \cdot V_k(s_i)}$, where $V_k(s_i)$ is the volume of the sphere centered at $s_i$ that reaches its $k$-th nearest same-class sample.

53 In the kernel smoothing problem, we set the bandwidth of each Gaussian function uniformly to $\beta\delta$, where $\delta$ is the distance between two adjacent training samples. In the kernel density estimation problem, for each training sample $s_i$, we need to determine $\delta_i$, the average distance between two adjacent training samples of the same class in the local region.

54 In the d-dimensional vector space, if the average distance between samples is $\delta$, then the number of samples in a subspace of volume $V$ is approximately equal to $V / \delta^{d}$. Accordingly, we can estimate $\delta_i$ by $\delta_i \approx \left(\frac{V_k(s_i)}{k}\right)^{1/d}$.
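In code, the local spacing estimate reads as follows; here `r_k` stands for the distance from the sample to its k-th nearest same-class neighbor:

```python
import numpy as np
from scipy.special import gamma

def local_spacing(r_k, k, d):
    """Estimate delta_i from the sphere that holds k samples:
    k ~ V / delta^d  =>  delta_i ~ (V / k)^(1/d)."""
    volume = (np.pi ** (d / 2) / gamma(d / 2 + 1)) * r_k ** d
    return (volume / k) ** (1.0 / d)
```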

55 Accordingly, with the kernel smoothing function that we obtained earlier, we have the following approximate probability density function for class-m objects: $\hat{f}_m(v) = \frac{1}{|S_m|} \sum_{s_i \in S_m} \left(\frac{1}{\sqrt{2\pi}\,\beta\delta_i}\right)^{d} \exp\!\left(-\frac{\|v - s_i\|^2}{2(\beta\delta_i)^2}\right)$.

56 An interesting observation is that, regardless of the value of $\beta$, we have $\int \hat{f}_m(v)\, dv \approx 1$. If the observation holds generally, then $\hat{f}_m$ behaves like a legitimate probability density function.

57 In the discussion above, $R(s_i)$ is defined to be the distance between sample $s_i$ and its nearest training sample. However, this definition depends on only one single sample and tends to be unreliable if the data set is noisy. We can replace $R(s_i)$ with the average distance between $s_i$ and its $k''$ nearest training samples.
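Putting the pieces together, here is a compact sketch of the whole classifier under the stated assumptions: per-sample bandwidth β·δ_i, δ_i taken from the average distance to the k'' nearest same-class samples, and a prior-weighted likelihood comparison. It deliberately omits the d' and k' tuning discussed next, so it is an illustration rather than the authors' exact algorithm.

```python
import numpy as np
from scipy.spatial import cKDTree

class RBFDensityClassifier:
    """Sketch: one Gaussian per training sample, class chosen by largest likelihood."""

    def __init__(self, beta=0.7, k2=10):
        self.beta, self.k2 = beta, k2

    def fit(self, X, y):
        self.n_total = len(X)
        self.classes = {}
        for label in np.unique(y):
            S = X[y == label]
            # average distance to the k'' nearest same-class samples (robust local spacing)
            dists = cKDTree(S).query(S, k=self.k2 + 1)[0][:, 1:]
            self.classes[label] = (S, self.beta * dists.mean(axis=1))
        return self

    def predict(self, v):
        def likelihood(S, sigma):
            d = S.shape[1]
            d2 = np.sum((S - v) ** 2, axis=1)
            dens = np.mean((np.sqrt(2 * np.pi) * sigma) ** (-d)
                           * np.exp(-d2 / (2 * sigma ** 2)))
            return (len(S) / self.n_total) * dens   # prior-weighted class density
        return max(self.classes, key=lambda c: likelihood(*self.classes[c]))
```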

58 Parameter Tuning The discussions so far are based on the assumption that the sampling density is sufficiently high, which may not hold for some real data sets.

59 Three parameters, namely d', k', and k'', are incorporated in the learning algorithm:

60 One may wonder how $\beta$ should be set. According to our experimental results, the value of $\beta$ has essentially no effect, as long as it is set to a value within a reasonable range.

61 Time Complexity The average time complexity to construct an RBF network is $O(n \log n)$ if the k-d tree structure is employed, where $n$ is the number of training samples. The time complexity to classify $c$ new objects with unknown class is $O(c \log n)$ on average.
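The k-d tree is what keeps the neighbor searches cheap; a quick illustration with SciPy (the sizes here are illustrative, not from the experiments):

```python
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(0)
samples = rng.random((10000, 3))            # n training samples
tree = cKDTree(samples)                     # built once, O(n log n) on average
dists, idx = tree.query(samples[:5], k=4)   # each k-NN query is O(log n) on average
```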

62 Comparison of Classification Accuracy of the 6 Smaller Data Sets

Data set (size)   | Proposed algorithm                              | SVM   | 1NN   | 3NN
1. iris (150)     | 97.33 (k' = 24, k'' = 14, d' = 5, β = 0.7)      | 97.33 | 94.0  | 94.67
2. wine (178)     | 99.44 (k' = 3, k'' = 16, d' = 1, β = 0.7)       | 99.44 | 96.08 | 94.97
3. vowel (528)    | 99.62 (k' = 15, k'' = 1, d' = 1, β = 0.7)       | 99.05 | 99.43 | 97.16
4. segment (2310) | 97.27 (k' = 25, k'' = 1, d' = 1, β = 0.7)       | 97.40 | 96.84 | 95.98
Avg. 1-4          | 98.42                                           | 98.31 | 96.59 | 95.70
5. glass (214)    | 75.74 (k' = 9, k'' = 3, d' = 2, β = 0.7)        | 71.50 | 69.65 | 72.45
6. vehicle (846)  | 73.53 (k' = 13, k'' = 8, d' = 2, β = 0.7)       | 86.64 | 70.45 | 71.98
Avg. 1-6          | 90.49                                           | 91.89 | 87.74 | 87.87

63 Comparison of Classification Accuracy of the 3 Larger Data Sets

Data set (train, test)     | Proposed algorithm                          | SVM   | 1NN   | 3NN
7. satimage (4435, 2000)   | 92.30 (k' = 6, k'' = 26, d' = 1, β = 0.7)   | 91.30 | 89.35 | 90.6
8. letter (15000, 5000)    | 97.12 (k' = 28, k'' = 28, d' = 2, β = 0.7)  | 97.98 | 95.26 | 95.46
9. shuttle (43500, 14500)  | 99.94 (k' = 18, k'' = 1, d' = 3, β = 0.7)   | 99.92 | 99.91 | 99.92
Avg. 7-9                   | 96.45                                       | 96.40 | 94.84 | 95.33

64 Data Reduction As the proposed learning algorithm is instance- based, removal of redundant training samples will lower the complexity of the RBF network. The effect of a naïve data reduction mechanism was studied. The naïve mechanism removes a training sample, if all of its 10 nearest samples belong to the same class as this particular sample.
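A sketch of the naive mechanism, assuming the 10-nearest-neighbor rule described above:

```python
import numpy as np
from scipy.spatial import cKDTree

def reduce_training_set(X, y, k=10):
    """Drop a sample when all of its k nearest neighbors share its class label,
    i.e. when it lies deep inside its own class region."""
    idx = cKDTree(X).query(X, k=k + 1)[1][:, 1:]   # neighbor indices, excluding self
    keep = np.array([not np.all(y[nb] == y[i]) for i, nb in enumerate(idx)])
    return X[keep], y[keep]
```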

65 Effect of Data Reduction

                                                    | satimage | letter | shuttle
# of training samples in the original data set      | 4435     | 15000  | 43500
# of training samples after data reduction          | 1815     | 7794   | 627
% of training samples remaining                     | 40.92%   | 51.96% | 1.44%
Classification accuracy after data reduction        | 92.15%   | 96.18% | 99.32%
Degradation of accuracy due to data reduction       | -0.15%   | -0.94% | -0.62%

66
          | # of training samples after data reduction | # of support vectors identified by LIBSVM
satimage  | 1815                                       | 1689
letter    | 7794                                       | 8931
shuttle   | 627                                        | 287

67 Execution Times (in seconds)

Task             | Data set | Proposed, no data reduction | Proposed, with data reduction | SVM
Cross validation | satimage | 6702                        | 656                           | 4622
Cross validation | letter   | 28251                       | 7243                          | 86814
Cross validation | shuttle  | 96795                       | 59.94                         | 67825
Make classifier  | satimage | 5.91                        | 0.85                          | 21.66
Make classifier  | letter   | 17.05                       | 6.48                          | 282.05
Make classifier  | shuttle  | 1745                        | 0.69                          | 129.84
Test             | satimage | 21.3                        | 7.41                          | 1.53
Test             | letter   | 128.6                       | 51.74                         | 94.91
Test             | shuttle  | 996.1                       | 5.85                          | 2.13

68 Conclusions A novel learning algorithm for data classification with the RBF network is proposed. The proposed RBF network based data classification algorithm delivers the same level of accuracy as the SVM.

69 The time complexity for constructing an RBF network based on the proposed algorithm is $O(n \log n)$, which is much lower than that required by the SVM. The proposed RBF network based classifier can handle data sets with multiple classes directly. It is of interest to develop more sophisticated data reduction mechanisms.

70 The PowerPoint file of this presentation can be downloaded from syslab.csie.ntu.edu.tw. An extended version of the presented paper can be downloaded from the same address.

