1
Gene Set Enrichment Analysis and Microarray Classification
STAT115, Jun S. Liu and Xiaole Shirley Liu
2
Outline
Gene ontology
–Check differential expression and clustering results
–Gene set enrichment analysis
Unsupervised learning for classification
–Clustering and KNN
–PCA (dimension reduction)
Supervised learning for classification
–CART, SVM
Expression and genome resources
3
GO Relationships:
–Subclass: is_a
–Membership: part_of
–Topological: adjacent_to; Derivation: derives_from
–E.g. 5_prime_UTR is part_of a transcript, and mRNA is_a kind of transcript
The same term can be annotated on multiple branches
GO is a directed acyclic graph
4
Evaluate Differentially Expressed Genes
NetAffx mapped GO terms for all probesets

            Whole genome   Up genes
GO term X   100            80
Total       20K            200

Statistical significance?
Binomial proportion test
–p = 100 / 20K = 0.005
–Check z table
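A minimal sketch of this test on the slide's example counts (100 of 20,000 genome genes carry GO term X; 80 of 200 up-regulated genes do). The use of scipy here is an assumption, not something the slides specify.

```python
from scipy.stats import norm

def proportion_ztest(k, n, p0):
    """One-sample z-test: is the observed proportion k/n larger than p0?"""
    p_hat = k / n
    z = (p_hat - p0) / (p0 * (1 - p0) / n) ** 0.5
    p_value = norm.sf(z)          # one-sided: test for enrichment only
    return z, p_value

p0 = 100 / 20000                  # genome-wide rate of GO term X (0.005)
z, p = proportion_ztest(80, 200, p0)
print(f"z = {z:.1f}, one-sided p = {p:.2e}")
```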
5
Evaluate Differentially Expressed Genes

            Whole genome   Up genes
GO term X   100            80
Total       20K            200

Chi-square test, observed (expected) counts:

         Up           !Up                 Total
GO:      80 (1)       20 (99)             100
!GO:     120 (199)    20K-120 (19701)     20K-100
Total:   200          20K-200             20K

–Check Chi-square table
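A minimal sketch of the same chi-square test on the 2x2 table above; using scipy's chi2_contingency is my assumption, not the slides' tool of choice.

```python
import numpy as np
from scipy.stats import chi2_contingency

observed = np.array([[80, 20],            # GO term X: up, not up
                     [120, 19880]])       # not GO term X: up, not up
chi2, p_value, dof, expected = chi2_contingency(observed, correction=False)
print(f"chi2 = {chi2:.1f}, df = {dof}, p = {p_value:.2e}")
print("expected counts:\n", expected)     # matches the 1, 99, 199, 19701 above
```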
6
GO Tools for Microarray Analysis 40 tools
7
GO on Clustering
Evaluate and refine clustering
–Check GO terms for members of the cluster
–Are GO terms significantly enriched?
–Can we summarize what the genes in this cluster do?
–Are there conflicting members in the cluster?
Annotate unknown genes
–After clustering, check GO terms
–Can we infer an unknown gene's function from the GO terms of its cluster members?
8
Gene Set Enrichment Analysis
In some microarray experiments comparing two conditions, no single gene may be significantly differentially expressed, but a group of genes may be slightly differentially expressed in a coordinated way
Check a set of genes with similar annotation (e.g. GO) and examine their expression values
–Kolmogorov-Smirnov test
–One-sample z-test
GSEA at Broad Institute
9
Gene Set Enrichment Analysis
Kolmogorov-Smirnov test
–Determines whether two distributions differ significantly
–Based on the cumulative fraction function: what fraction of genes fall below this fold change?
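A minimal sketch of a KS comparison between a gene set's fold changes and those of all other genes. The simulated data and the use of scipy are assumptions for illustration only.

```python
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)
all_genes = rng.normal(loc=0.0, scale=1.0, size=20000)   # log fold changes, all genes
gene_set = rng.normal(loc=-0.3, scale=1.0, size=100)     # slightly down-regulated set

stat, p_value = ks_2samp(gene_set, all_genes)            # compare the two distributions
print(f"KS statistic = {stat:.3f}, p = {p_value:.2e}")
```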
10
Gene Set Enrichment Analysis
A set of genes with a specific annotation may show coordinated down-regulation
Need to define the set before looking at the data
The significance is only visible when looking at the whole set
11
Gene Set Enrichment Analysis
Alternative to KS: one-sample z-test
–Assume the population of all genes follows a normal distribution ~ N(mu, sigma^2)
–For the m genes with a specific annotation, with average X_bar:
  z = (X_bar - mu) / (sigma / sqrt(m))
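A minimal sketch of this one-sample z-test, with mu and sigma estimated from all genes; the simulated fold changes and scipy usage are assumptions, not from the slides.

```python
import numpy as np
from scipy.stats import norm

def gene_set_ztest(set_values, all_values):
    """Is the mean of the annotated gene set shifted relative to N(mu, sigma^2)?"""
    mu, sigma = all_values.mean(), all_values.std(ddof=1)
    m = len(set_values)
    z = (set_values.mean() - mu) / (sigma / np.sqrt(m))
    p_two_sided = 2 * norm.sf(abs(z))
    return z, p_two_sided

rng = np.random.default_rng(1)
all_fc = rng.normal(0, 1, 20000)        # fold changes of all genes
set_fc = rng.normal(-0.3, 1, 100)       # fold changes of the annotated set
print(gene_set_ztest(set_fc, all_fc))
```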
12
Dimension Reduction
High-dimensional data points are difficult to visualize
Always good to plot data in 2D
–Easier to detect or confirm relationships among data points
–Catch obvious mistakes (e.g. in clustering)
Two ways to reduce:
–By genes: some experiments are similar or carry little information
–By experiments: some genes are similar or carry little information
13
Principal Component Analysis
An optimal linear transformation that chooses a new coordinate system for the data set: the data are projected onto new axes (the principal components), ordered so that each axis captures the maximum remaining variance
Components are orthogonal (mutually uncorrelated)
A few PCs may capture most of the variation in the original data
E.g. reduce 2D data to 1D
14
Principal Component Analysis
Achieved by singular value decomposition (SVD): X = U D V^T
X is the original N x p data
–E.g. N genes, p experiments
V is the p x p matrix of projection directions
–Orthogonal matrix: V^T V = I_p
–v_1 is the direction of the first projection
–Each direction is a linear combination (relative importance) of the experiments (or of the genes, if PCA is done on samples)
15
PCA
U is N x p, the relative projection of the points
D is a p x p scaling factor
–Diagonal matrix, d_1 >= d_2 >= d_3 >= ... >= d_p >= 0
u_i1 d_1 is the distance along v_1 from the origin (the first principal component)
–Expression value projected onto v_1
–v_2 is the 2nd projection direction, u_i2 d_2 is the 2nd principal component, and so on
Variance captured by the first m principal components: (d_1^2 + ... + d_m^2) / (d_1^2 + ... + d_p^2)
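A minimal sketch of PCA via SVD on an expression matrix, using numpy; the random example matrix (N genes by p experiments) is an assumption for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)
X = rng.normal(size=(500, 13))            # e.g. 500 genes x 13 time points
Xc = X - X.mean(axis=0)                   # center each experiment (column)

U, d, Vt = np.linalg.svd(Xc, full_matrices=False)   # X = U D V^T
pcs = U * d                               # principal components: U D (N x p)
var_explained = d**2 / np.sum(d**2)       # fraction of variance per component

print("first 2 PCs of gene 0:", pcs[0, :2])
print("variance explained by first 2 PCs:", var_explained[:2].sum())
```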
16
PCA
(N x p original data) x (p x p projection directions) = (N x p projected values, with scale): X V = U D
X_11 V_11 + X_12 V_21 + X_13 V_31 + ... = X'_11 = U_11 D_11   (1st principal component)
X_21 V_11 + X_22 V_21 + X_23 V_31 + ... = X'_21 = U_21 D_11
X_11 V_12 + X_12 V_22 + X_13 V_32 + ... = X'_12 = U_12 D_22   (2nd principal component)
X_21 V_12 + X_22 V_22 + X_23 V_32 + ... = X'_22 = U_22 D_22
17
PCA (figure: data plotted against the principal directions v_1 and v_2)
18
PCA on Genes Example Cell cycle genes, 13 time points, reduced to 2D Genes: 1: G1; 4: S; 2: G2; 3: M
19
PCA Example
Variance in the data explained by the first n principal components
20
PCA Example
The weights of the first 8 principal directions
This is an example of PCA to reduce samples
Can do PCA to reduce genes as well
–Using the first 2-3 PCs to plot samples, with more weight given to the more differentially expressed genes, can often reveal the sample classification
21
Microarray Classification ?
22
Classification
Equivalent to machine learning methods
Task: assign an object to a class based on measurements on the object
–E.g. is a sample normal or cancer, based on its expression profile?
Unsupervised learning
–Ignores known class labels, e.g. cluster analysis or KNN
–Sometimes cannot separate even the known classes
Supervised learning
–Extract useful features based on known class labels to best separate the classes
–Can overfit the data, so need to separate training and test sets (e.g. cross-validation)
23
Clustering for Classification
Which known samples does the unknown sample cluster with?
No guarantee that the known samples will cluster together
Try different clustering methods (semi-supervised)
–E.g. change the linkage, use a subset of genes
24
K Nearest Neighbor
Also used in missing value estimation
For an observation X with unknown label, find the K observations in the training data closest to X (e.g. by correlation)
Predict the label of X by majority vote among the K nearest neighbors
K can be chosen by the predictability of known samples, semi-supervised again!
Offers little insight into mechanism
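A minimal sketch of KNN classification of samples from expression profiles, with 1 - Pearson correlation as the distance; the simulated data and scikit-learn usage are assumptions, not from the slides.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(3)
X_train = rng.normal(size=(30, 1000))          # 30 known samples x 1000 genes
y_train = np.array([0] * 15 + [1] * 15)        # 0 = normal, 1 = cancer
X_new = rng.normal(size=(2, 1000))             # samples with unknown labels

def corr_distance(a, b):
    """Distance between two expression profiles: 1 - Pearson correlation."""
    return 1.0 - np.corrcoef(a, b)[0, 1]

knn = KNeighborsClassifier(n_neighbors=5, metric=corr_distance)
knn.fit(X_train, y_train)
print(knn.predict(X_new))                      # majority vote of the 5 nearest neighbors
```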
25
Supervised Learning Performance Assessment
If the error rate is estimated from the whole learning set, it will be over-optimistic (do well now, but poorly on future observations)
Divide observations into L1 and L2
–Build the classifier using L1
–Compute the classifier error rate on L2
–Requirement: L1 and L2 are iid (independent and identically distributed)
N-fold cross-validation
–Divide the data into N subsets of equal size, build the classifier on (N-1) subsets, compute the error rate on the left-out subset
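A minimal sketch of N-fold cross-validation for estimating classifier error; the choice of a KNN classifier, the simulated data, and scikit-learn are assumptions for illustration.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(4)
X = rng.normal(size=(60, 500))                 # 60 samples x 500 genes
y = np.array([0] * 30 + [1] * 30)              # known class labels

clf = KNeighborsClassifier(n_neighbors=5)
scores = cross_val_score(clf, X, y, cv=5)      # 5-fold cross-validation
print("accuracy per fold:", scores)
print("estimated error rate:", 1 - scores.mean())
```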
26
Classification And Regression Tree
Split the data using a set of binary (or multi-valued) decisions
The root node (all data) has some impurity; split the data to reduce the impurity
27
CART
Measures of impurity
–Entropy
–Gini index impurity
Example with Gini: multiply the impurity by the number of samples in the node
–Root node (e.g. 8 normal & 14 cancer)
–Try a split by gene x_i (x_i >= 0: 13 cancer; x_i < 0: 1 cancer & 8 normal)
–Split at the gene with the biggest reduction in impurity
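A minimal sketch of the Gini calculation for the example split above (the exact bookkeeping is my reading of the slide, not spelled out in it):

```python
def weighted_gini(*nodes):
    """Sum over nodes of n_node * (1 - sum_k p_k^2), where p_k are class fractions."""
    total = 0.0
    for counts in nodes:
        n = sum(counts)
        gini = 1.0 - sum((c / n) ** 2 for c in counts)
        total += n * gini
    return total

root = weighted_gini([8, 14])                  # 8 normal & 14 cancer, ~10.18
split = weighted_gini([0, 13], [8, 1])         # x_i >= 0 and x_i < 0 children, ~1.78
print(f"impurity before: {root:.2f}, after: {split:.2f}, reduction: {root - split:.2f}")
```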
28
CART
Assuming independence of partitions, the same level may split on different genes
Stop splitting
–When the impurity is small enough
–When the number of samples in the node is small
Pruning to reduce overfitting
–Training set to split, test set for pruning
–Each split has a cost, compared to the gain at that split
29
Support Vector Machine SVM –Which hyperplane is the best?
30
Support Vector Machine
SVM finds the hyperplane that maximizes the margin
The margin is determined by the support vectors (samples lying on the class boundary); all other samples are irrelevant
31
Support Vector Machine
SVM finds the hyperplane that maximizes the margin
The margin is determined by the support vectors; other samples are irrelevant
Extensions:
–Soft margin: support vectors can have different weights
–Non-separable case: slack variables > 0; maximize (margin - # misclassified points)
32
Nonlinear SVM
Project the data into a higher-dimensional space with a kernel function, so the classes can be separated by a hyperplane
A few kernel functions are implemented in Matlab & BioConductor; the choice is usually trial and error and personal experience
E.g. K(x, y) = (x . y)^2
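A minimal sketch contrasting a linear and a degree-2 polynomial kernel on data that is not linearly separable; the concentric-rings dataset and scikit-learn are assumptions for illustration.

```python
from sklearn.svm import SVC
from sklearn.datasets import make_circles

X, y = make_circles(n_samples=200, factor=0.3, noise=0.05, random_state=0)

linear = SVC(kernel="linear").fit(X, y)
poly = SVC(kernel="poly", degree=2).fit(X, y)    # kernel (gamma x.y + c)^2

print("linear kernel accuracy:", linear.score(X, y))
print("degree-2 polynomial kernel accuracy:", poly.score(X, y))
```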
33
Most Widely Used Sequence IDs
GenBank: all submitted sequences
EST: Expressed Sequence Tags (mRNA); some redundancy, may contain contamination
UniGene: computationally derived gene-based clusters of transcribed sequences
Entrez Gene: comprehensive catalog of genes and associated information, ~ the traditional concept of a "gene"
RefSeq: reference sequences for mRNAs and proteins, one per transcript (splice variant)
34
UCSC Genome Browser Can display custom tracks
35
Entrez: Main NCBI Search Engine
36
Public Microarray Databases
SMD: Stanford Microarray Database, most Stanford and collaborators' cDNA arrays
GEO: Gene Expression Omnibus, an NCBI repository for gene expression and hybridization data, growing quickly
Oncomine: Cancer Microarray Database
–Published cancer-related microarrays
–Raw data all processed, nice interface
37
Outline
Gene ontology
–Check differential expression and clustering, GSEA
Microarray classification:
–Unsupervised: clustering, KNN, PCA
–Supervised learning for classification: CART, SVM
Expression and genome resources
38
Acknowledgment Kevin Coombes & Keith Baggerly Darlene Goldstein Mark Craven George Gerber Gabriel Eichler Ying Xie Terry Speed & Group Larry Hunter Wing Wong & Cheng Li Ping Ma, Xin Lu, Pengyu Hong Mark Reimers Marco Ramoni Jenia Semyonov