1
Applications of Data Mining in Microarray Data Analysis Yen-Jen Oyang Dept. of Computer Science and Information Engineering
2
Observations and Challenges in the Information Age A huge volume of information has been and is being digitized and stored in computers. Because of the sheer volume of digitized information, effective exploitation of it is beyond the capability of human beings without the aid of intelligent computer software.
3
An Example of Data Mining Given the data set shown on the next slide, can we figure out a set of rules that predicts the classes of objects?
4
Data Set
(15,33) O    (18,28) ×    (16,31) O
(9,23)  ×    (15,35) O    (9,32)  ×
(8,15)  ×    (17,34) O    (11,38) ×
(11,31) O    (18,39) ×    (13,34) O
(13,37) ×    (14,32) O    (19,36) ×
(18,32) O    (25,18) ×    (10,34) ×
(16,38) ×    (23,33) ×    (15,30) O
(12,33) O    (21,28) ×    (13,22) ×
5
Distribution of the Data Set (scatter plot of the "O" and "×" samples in the two-dimensional attribute space)
6
Rule Based on Observation
7
Rule Generated by an RBF (Radial Basis Function) Network-Based Learning Algorithm Let f(v) be the function computed by the RBF network and let t be a decision threshold. If f(v) ≤ t, then prediction = "O"; otherwise prediction = "×".
8
Values of f for the training samples:
O samples (15,33), (11,31), (18,32), (12,33), (15,35), (17,34), (14,32), (16,31), (13,34), (15,30): 1.723, 2.745, 2.327, 1.794, 1.973, 2.045, 1.794, 2.027
× samples (9,23), (8,15), (13,37), (16,38), (18,28), (18,39), (25,18), (23,33), (21,28), (9,32), (11,38), (19,36), (10,34), (13,22): 6.458, 10.08, 2.939, 2.745, 5.451, 3.287, 10.86, 5.322, 5.070, 4.562, 3.463, 3.587, 3.232, 6.260
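The equations that define the rule are not reproduced above, so as a hedged illustration only, the sketch below shows one common way an RBF-style rule of this kind can be realized: place a Gaussian kernel at each training sample, sum the kernel responses per class, and predict the class with the larger score. The bandwidth value and the function names are assumptions for illustration, not the authors' actual learning algorithm.

```python
import numpy as np

def rbf_score(v, centers, bandwidth=2.0):
    """Sum of Gaussian kernels centered at the given training samples."""
    d2 = np.sum((centers - v) ** 2, axis=1)   # squared distance to each center
    return np.sum(np.exp(-d2 / (2.0 * bandwidth ** 2)))

# The two classes of training samples from the example data set.
O_samples = np.array([(15, 33), (16, 31), (15, 35), (17, 34), (11, 31),
                      (13, 34), (14, 32), (18, 32), (15, 30), (12, 33)], float)
X_samples = np.array([(18, 28), (9, 23), (9, 32), (8, 15), (11, 38),
                      (18, 39), (13, 37), (19, 36), (25, 18), (10, 34),
                      (16, 38), (23, 33), (21, 28), (13, 22)], float)

def predict(v):
    """Predict the class whose summed kernel response at v is larger."""
    return "O" if rbf_score(v, O_samples) > rbf_score(v, X_samples) else "×"

print(predict(np.array([14.0, 33.0])))  # -> "O" for a point inside the O cluster
```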
9
Identifying Boundary of Different Classes of Objects
10
Boundary Identified
11
Data Mining / Knowledge Discovery The main theme of data mining is to discover unknown and implicit knowledge in a large data set. There are three main categories of data mining algorithms: classification; clustering; and mining association rules / correlation analysis.
12
Data Classification In a data classification problem, each object is described by a set of attribute values and each object belongs to one of the predefined classes. The goal is to derive a set of rules that predicts which class a new object should belong to, based on a given set of training samples. Data classification is also called supervised learning.
13
Instance-Based Learning In instance-based learning, we take the k nearest training samples of a new instance (v_1, v_2, …, v_m) and assign the new instance to the class that has the most instances among those k nearest training samples. Classifiers that adopt instance-based learning are commonly called KNN (k-nearest-neighbor) classifiers.
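A minimal sketch of such a classifier, assuming Euclidean distance and simple majority voting (the function and variable names here are illustrative):

```python
import numpy as np
from collections import Counter

def knn_predict(train_X, train_y, v, k=3):
    """Assign v to the majority class among its k nearest training samples."""
    dists = np.linalg.norm(train_X - v, axis=1)   # distance to every sample
    nearest = np.argsort(dists)[:k]               # indices of the k closest
    votes = Counter(train_y[i] for i in nearest)  # class frequencies
    return votes.most_common(1)[0][0]             # most frequent class wins
```

Note that the choice of k can change the prediction: as the next slide illustrates, a 1NN and a 3NN classifier may disagree on the same test instance.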
14
Example of KNN If a 1NN classifier is employed, the test instance is predicted as "×". If a 3NN classifier is employed, the test instance is predicted as "O".
15
Applications of Data Classification in Bioinformatics In microarray data analysis, data classification is employed to predict the class of a new sample based on the existing samples with known class.
16
For example, in the Leukemia data set, there are 72 samples and 7129 genes: 25 Acute Myeloid Leukemia (AML) samples; 38 B-cell Acute Lymphoblastic Leukemia samples; 9 T-cell Acute Lymphoblastic Leukemia samples.
17
Model of Microarray Data Sets A microarray data set is modeled as a matrix whose columns correspond to genes (Gene 1, Gene 2, …, Gene n) and whose rows correspond to samples (Sample 1, Sample 2, …, Sample m).
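A hedged sketch of this model in code: an expression matrix with samples as rows and genes as columns, sized like the Leukemia data set above (the random values are placeholders, not real expression data):

```python
import numpy as np
import pandas as pd

m, n = 72, 7129                     # samples x genes, as in the Leukemia data set
rng = np.random.default_rng(0)

expression = pd.DataFrame(
    rng.random((m, n)),                           # placeholder expression values
    index=[f"Sample {i + 1}" for i in range(m)],  # rows: Sample 1 ... Sample m
    columns=[f"Gene {j + 1}" for j in range(n)],  # columns: Gene 1 ... Gene n
)
print(expression.shape)                           # (72, 7129)
```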
18
Alternative Data Classification Algorithms Decision tree (C4.5 and C5.0); instance-based learning (KNN); naïve Bayesian classifier; support vector machine (SVM); novel approaches, including the RBF network based classifier that we have recently proposed.
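As a hedged sketch of how such alternatives can be compared in practice, the snippet below cross-validates several of them with scikit-learn on a stand-in data set (the RBF-network classifier proposed in this deck is not part of that library and is omitted):

```python
from sklearn.datasets import load_digits
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.svm import SVC

X, y = load_digits(return_X_y=True)   # stand-in data set for illustration
classifiers = {
    "decision tree": DecisionTreeClassifier(),
    "1NN": KNeighborsClassifier(n_neighbors=1),
    "3NN": KNeighborsClassifier(n_neighbors=3),
    "naive Bayes": GaussianNB(),
    "SVM": SVC(),
}
for name, clf in classifiers.items():
    scores = cross_val_score(clf, X, y, cv=5)   # 5-fold cross validation
    print(f"{name}: {scores.mean():.3f}")
```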
19
Accuracy of Different Classification Algorithms

Data set (training, test)    RBF     SVM     1NN     3NN
Satimage (4435, 2000)        92.30   91.30   89.35   90.6
Letter (15000, 5000)         97.12   97.98   95.26   95.46
Shuttle (43500, 14500)       99.94   99.92   99.91   99.92
Average                      96.45   96.40   94.84   95.33
20
Comparison of Execution Time (in seconds)

                    RBF without data reduction   RBF with data reduction   SVM
Cross validation
  Satimage          6702                         656                       4622
  Letter            28251                        7243                      86814
  Shuttle           96795                        59.94                     67825
Make classifier
  Satimage          5.91                         0.85                      21.66
  Letter            17.05                        6.48                      282.05
  Shuttle           174                          50.69                     129.84
Test
  Satimage          21.3                         7.4                       11.53
  Letter            128.6                        51.74                     94.91
  Shuttle           996.1                        5.85                      2.13
21
More Insights

                                                 Satimage   Letter   Shuttle
# of training samples in the original data set   4435       15000    43500
# of training samples after data reduction       1815       7794     627
% of training samples remaining                  40.92%     51.96%   1.44%
Classification accuracy after data reduction     92.15      96.18    99.32
# of support vectors identified by LIBSVM        1689       8931     287
22
Data Clustering Data clustering concerns how to group a set of objects based on the similarity of their attributes and/or their proximity in the vector space. Data clustering is also called unsupervised learning.
23
The Agglomerative Hierarchical Clustering Algorithms The agglomerative hierarchical clustering algorithms operate by maintaining a sorted list of inter-cluster distances. Initially, each data instance forms a cluster of its own. The clustering algorithm then repeatedly merges the two clusters with the minimum inter-cluster distance.
24
Upon merging two clusters, the clustering algorithm computes the distances between the newly formed cluster and the remaining clusters and updates the sorted list of inter-cluster distances accordingly. There are several common ways to define the inter-cluster distance: minimum distance (single-link); maximum distance (complete-link); average distance; mean distance. A sketch of the single-link and complete-link variants follows.
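A short sketch of these two variants, assuming SciPy's agglomerative implementation (the six 2-D points are illustrative):

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

points = np.array([[0.0, 0.0], [0.0, 1.0], [4.0, 0.0],
                   [4.0, 1.0], [8.0, 0.5], [2.0, 0.5]])

for method in ("single", "complete"):        # linkage = inter-cluster distance
    Z = linkage(points, method=method)       # full merge tree (dendrogram)
    labels = fcluster(Z, t=2, criterion="maxclust")  # cut into 2 clusters
    print(method, labels)                    # the two methods can disagree
```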
25
An Example of the Agglomerative Hierarchical Clustering Algorithm For the following data set of six points, we get different clustering results with the single-link and complete-link algorithms.
26
Result of the Single-Link Algorithm vs. Result of the Complete-Link Algorithm (the two dendrograms merge the six points in different orders).
27
Remarks The single-link and complete-link definitions are the two most commonly used alternatives. The single-link algorithm suffers from the so-called chaining effect. On the other hand, the complete-link algorithm also fails in some cases.
28
Example of the Chaining Effect: single-link (10 clusters) vs. complete-link (2 clusters).
29
Effect of Bias towards Spherical Clusters: single-link (2 clusters) vs. complete-link (2 clusters).
30
K-Means: A Partitional Data Clustering Algorithm The k-means algorithm is probably the most commonly used partitional clustering algorithm. It begins by selecting k data instances as the means, or centers, of k clusters.
31
The k-means algorithm then executes the following loop until the convergence criterion is met:

repeat {
    assign every data instance to the closest cluster, based on the distance between the data instance and the center of each cluster;
    compute the new centers of the k clusters;
} until (the convergence criterion is met);
32
A commonly used convergence criterion is that the decrease of the squared-error function $E = \sum_{i=1}^{k} \sum_{x \in C_i} \lVert x - \mu_i \rVert^2$ between successive iterations falls below a given threshold, where $C_i$ is the i-th cluster and $\mu_i$ is its center.
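A compact sketch of this loop in code, using the squared-error criterion above (the tolerance and the random initial selection are assumptions; empty clusters are not handled):

```python
import numpy as np

def k_means(X, k, tol=1e-6, seed=0):
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)]  # k instances as seeds
    prev_sse = np.inf
    while True:
        # Assign every instance to the closest cluster center.
        dists = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # Recompute each center as the mean of its members.
        centers = np.array([X[labels == i].mean(axis=0) for i in range(k)])
        sse = ((X - centers[labels]) ** 2).sum()   # squared-error function E
        if prev_sse - sse < tol:                   # convergence criterion
            return centers, labels
        prev_sse = sse
```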
33
Illustration of the K-Means Algorithm (I): initial centers.
34
Illustration of the K-Means Algorithm (II): new centers after the 1st iteration.
35
Illustration of the K-Means Algorithm (III): new centers after the 2nd iteration.
36
A Case in which the K-Means Algorithm Fails The k-means algorithm may converge to a locally optimal state, as the following example demonstrates (the figure shows the initial selection of centers).
37
Remarks As the examples demonstrate, no clustering algorithm is definitively superior to all other clustering algorithms with respect to clustering quality.
38
Applications of Data Clustering in Microarray Data Analysis Data clustering has been employed in microarray data analysis for: identifying genes with similar expression patterns; identifying subtypes of samples.
39
Feature Selection in Microarray Data Analysis In microarray data analysis, it is highly desirable to identify the genes that are correlated with the classes of samples. For example, the Leukemia data set contains 7129 genes, and we want to identify those genes that are associated with the different disease types.
40
Furthermore, inclusion of features that are not correlated with the classification decision may lower classification accuracy or degrade clustering quality. For example, in the data set shown on the next slide, including the feature corresponding to the y-axis causes a 3NN classifier to mispredict the marked test instance.
41
It is apparent that the "o"s and "x"s are separated by the line x = 10. If only the attribute corresponding to the x-axis were selected, the 3NN classifier would predict the class of the test instance correctly.
42
Univariate Analysis in Feature Selection In univariate analysis, the importance of each feature is determined by how objects of different classes are distributed along that feature's axis. Let $x_1, \ldots, x_m$ and $y_1, \ldots, y_n$ denote the feature values of the class-1 and class-2 objects, respectively. Assume that the feature values of both classes of objects follow normal distributions.
43
Then
$$ t = \frac{\bar{x} - \bar{y}}{s \sqrt{\tfrac{1}{m} + \tfrac{1}{n}}} $$
follows a t-distribution with (m + n - 2) degrees of freedom, where
$$ s^2 = \frac{(m-1)\,s_x^2 + (n-1)\,s_y^2}{m + n - 2} $$
and $s_x^2$ and $s_y^2$ are the sample variances of the two classes. If the absolute value of the t statistic of a feature is below a threshold, the feature is deleted.
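A brief sketch of this filter, assuming SciPy's pooled-variance two-sample t-test (the threshold value is illustrative):

```python
import numpy as np
from scipy.stats import ttest_ind

def select_features(X1, X2, threshold=2.0):
    """Keep the features whose |t| statistic between the two classes is large.

    X1: class-1 objects, shape (m, d); X2: class-2 objects, shape (n, d).
    """
    t, _ = ttest_ind(X1, X2, axis=0, equal_var=True)  # pooled two-sample t-test
    keep = np.abs(t) >= threshold                     # low-|t| features are deleted
    return np.where(keep)[0]                          # indices of retained features
```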
44
Multivariate Analysis Univariate analysis is unable to identify crucial features in examples such as the following, where neither feature separates the classes by itself but the two features jointly do.
45
Therefore, multivariate analysis has been developed. However, most of the multivariate analysis algorithms proposed so far suffer from high time complexity and may not be applicable to real-world problems.
46
Summary Data clustering and data classification have been widely used in microarray data analysis. Feature selection remains the most challenging issue to date.