L15:Microarray analysis (Classification)

The Biological Problem Two conditions need to be differentiated because they require different treatments. EX: ALL (Acute Lymphocytic Leukemia) vs. AML (Acute Myelogenous Leukemia). Possibly, the set of genes over-expressed differs between the two conditions.

Geometric formulation Each sample is a vector with dimension equal to the number of genes. We have two classes of vectors (AML, ALL), and would like to separate them, if possible, with a hyperplane.

Hyperplane properties Given an arbitrary point x, what is the distance from x to the plane L? –Ans: D(x, L) = βᵀx − β₀ (assuming β is scaled to unit norm). When are points x₁ and x₂ on different sides of the hyperplane? –Ans: if D(x₁, L) · D(x₂, L) < 0.
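These two checks can be sketched in a few lines of Python. This is a minimal illustration, not the course's code; numpy and the unit-norm scaling of β (which makes D a true Euclidean distance) are assumptions:

```python
import numpy as np

def signed_distance(x, beta, beta0):
    """Signed distance D(x, L) = (beta^T x - beta0) / ||beta||.
    The magnitude is the Euclidean distance to the plane;
    the sign says which side of the hyperplane x lies on."""
    return (beta @ x - beta0) / np.linalg.norm(beta)

def opposite_sides(x1, x2, beta, beta0):
    """x1 and x2 are on different sides iff the product of their
    signed distances is negative."""
    return signed_distance(x1, beta, beta0) * signed_distance(x2, beta, beta0) < 0

# The line x1 + x2 = 1 in 2D: beta = (1, 1), beta0 = 1
beta, beta0 = np.array([1.0, 1.0]), 1.0
print(opposite_sides(np.array([0.0, 0.0]), np.array([1.0, 1.0]), beta, beta0))  # True
```

The origin gives a negative signed distance and (1, 1) a positive one, so the product test correctly reports that they straddle the line.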

Separating by a hyperplane Input: a training set of +ve & -ve examples. Recall that a hyperplane is represented by –{x : −β₀ + β₁x₁ + β₂x₂ = 0} or –(in higher dimensions) {x : βᵀx − β₀ = 0} Goal: Find a hyperplane that ‘separates’ the two classes. Classification: A new point x is +ve if it lies on the +ve side of the hyperplane (D(x, L) > 0), -ve otherwise.

Hyperplane separation What happens if we have many choices of hyperplane? –We try to maximize the distance of the points from the hyperplane. What happens if the classes are not separable by a hyperplane? –We define a function based on the amount of misclassification, and try to minimize it.

Error in classification Sample function: sum of distances of all misclassified points. –Let yᵢ = −1 for +ve example i, yᵢ = +1 otherwise, so that D(β, β₀) = Σᵢ misclassified yᵢ(βᵀxᵢ − β₀) is a sum of positive terms. The best hyperplane is one that minimizes D(β, β₀). Other definitions are also possible.
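This error function can be sketched directly; a hedged illustration using the slide's sign convention (yᵢ = −1 for positive examples), with `error` and the toy data being names invented here:

```python
import numpy as np

# Slide's convention: y_i = -1 for +ve examples, +1 otherwise, so every
# misclassified point contributes a positive amount to the error.
def error(beta, beta0, X, y):
    """D(beta, beta0) = sum over misclassified i of y_i (beta^T x_i - beta0)."""
    d = X @ beta - beta0        # signed distances (up to ||beta||)
    mis = (y * d) > 0           # with this convention, correct points have y_i d_i < 0
    return np.sum(y[mis] * d[mis])

# Three 2-D points: two +ve examples (y = -1), one -ve example (y = +1)
X = np.array([[1.0, 0.0], [-1.0, 0.0], [2.0, 0.0]])
y = np.array([-1.0, -1.0, 1.0])
# Hyperplane x1 = 0: misclassifies the 2nd and 3rd points
print(error(np.array([1.0, 0.0]), 0.0, X, y))  # 3.0
```

The misclassified points contribute |−1| and |2| units of distance, so the error is 3.0; moving the hyperplane to separate the classes would drive it to zero.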

Restating Classification The (supervised) classification problem can now be reformulated as an optimization problem. Goal: Find the hyperplane (β, β₀) that optimizes the objective D(β, β₀). No efficient algorithm is known for this problem, but a simple generic optimization can be applied: start with a randomly chosen (β, β₀); move to a neighboring (β′, β′₀) if D(β′, β′₀) < D(β, β₀).

Gradient Descent The function D(β) defines the error. We follow an iterative refinement: in each step, refine β so the error is reduced. Gradient descent is an approach to such iterative refinement, stepping against the slope D′(β): β ← β − η·D′(β).
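The update rule is easy to demonstrate on a simple stand-in error function. This is a sketch only; the quadratic D and the names `gradient_descent`, `eta`, `steps` are hypothetical, not from the lecture:

```python
def gradient_descent(dD, beta, eta=0.1, steps=100):
    """Iterative refinement: repeatedly step against the slope D'(beta)."""
    for _ in range(steps):
        beta = beta - eta * dD(beta)
    return beta

# Stand-in error D(beta) = (beta - 3)^2, so D'(beta) = 2 * (beta - 3);
# descent should settle near the minimizer beta = 3.
b = gradient_descent(lambda b: 2 * (b - 3), beta=0.0)
print(round(b, 4))  # 3.0
```

Each step multiplies the remaining distance to the minimizer by (1 − 2η) = 0.8, so 100 steps bring β within ~10⁻¹⁰ of 3.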

Rosenblatt’s perceptron learning algorithm

Classification based on perceptron learning Use Rosenblatt’s algorithm to compute the hyperplane L = (β, β₀). Assign x to class 1 if f(x) = βᵀx − β₀ ≥ 0, and to class 2 otherwise.
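Rosenblatt's algorithm itself appears only as a slide title above, so here is a hedged sketch of the standard version: cycle through the data and nudge the hyperplane toward each misclassified point. The ±1 labels here (unlike the flipped convention on the error slide) and the names `perceptron`, `eta`, `max_epochs` are choices made for this example:

```python
import numpy as np

def perceptron(X, labels, eta=1.0, max_epochs=100):
    """Rosenblatt's rule with f(x) = beta^T x - beta0 and labels in {+1, -1}:
    on each misclassified point, move the hyperplane toward it."""
    beta, beta0 = np.zeros(X.shape[1]), 0.0
    for _ in range(max_epochs):
        mistakes = 0
        for x, t in zip(X, labels):
            if t * (beta @ x - beta0) <= 0:   # misclassified (or on the plane)
                beta = beta + eta * t * x
                beta0 = beta0 - eta * t
                mistakes += 1
        if mistakes == 0:                     # a full clean pass: converged
            break
    return beta, beta0

X = np.array([[2.0, 2.0], [3.0, 3.0], [-1.0, -1.0], [-2.0, -1.0]])
labels = np.array([1, 1, -1, -1])
beta, beta0 = perceptron(X, labels)
print(all(t * (beta @ x - beta0) > 0 for x, t in zip(X, labels)))  # True
```

On this linearly separable toy set the algorithm converges after one corrective update, consistent with the next slide's caveat that convergence is only guaranteed when the data are separable.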

Perceptron learning If many solutions are possible, it does not choose between them. If the data is not linearly separable, it does not terminate, and this is hard to detect. The time of convergence is not well understood.

Linear Discriminant analysis Provides an alternative approach to classification with a linear function. Project all points, including the means, onto a vector β. We want to choose β such that –Difference of projected means is large. –Variance within each group is small.

LDA cont’d What is the projection of a point x onto β? –Ans: βᵀx What is the distance between projected means? –Ans: βᵀ(m₁ − m₂), where m₁ and m₂ are the two class means.

LDA Cont’d Fisher Criterion: choose β to maximize the ratio of between-class to within-class scatter, J(β) = (βᵀ S_B β) / (βᵀ S_W β).

LDA Therefore, a simple computation (a matrix inverse) is sufficient to compute the ‘best’ separating hyperplane: β = S_W⁻¹(m₁ − m₂).
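That matrix-inverse computation fits in a few lines. A minimal sketch, assuming two small hypothetical classes of 2-D points and equal treatment of the two scatter matrices:

```python
import numpy as np

def lda_direction(X1, X2):
    """Fisher's direction beta = S_W^{-1} (m1 - m2): the 'simple
    matrix-inverse computation' giving the best separating direction."""
    m1, m2 = X1.mean(axis=0), X2.mean(axis=0)
    S1 = (X1 - m1).T @ (X1 - m1)   # scatter of class 1
    S2 = (X2 - m2).T @ (X2 - m2)   # scatter of class 2
    return np.linalg.inv(S1 + S2) @ (m1 - m2)

# Two hypothetical classes of 2-D points
X1 = np.array([[4.0, 1.0], [5.0, -1.0], [3.0, 0.0]])
X2 = np.array([[0.0, 1.0], [1.0, -1.0], [-1.0, 0.0]])
beta = lda_direction(X1, X2)
# Projected class means are well separated along beta
print((X1 @ beta).mean() > (X2 @ beta).mean())  # True
```

For these points S_W = [[4, −2], [−2, 4]] and m₁ − m₂ = (4, 0), giving β = (4/3, 2/3): the direction tilts away from the raw mean difference to account for the within-class covariance.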

Maximum Likelihood discrimination Suppose we knew the distribution of points in each class. –We can compute Pr(x | class i) for all classes i, and take the maximum.

ML discrimination recipe We know the form of the distribution for each class, but not its parameters. Estimate the mean and variance for each class. For a new point x, compute the discrimination function gᵢ(x) for each class i. Choose argmaxᵢ gᵢ(x) as the class for x.
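The recipe can be sketched for the 1-D normal case discussed on the next slide. A hedged illustration; the toy data and the choice of log-likelihood as gᵢ are assumptions of this example:

```python
import numpy as np

def fit_gaussian(X):
    """Estimate mean and variance for one class (1-D, normal assumption)."""
    return X.mean(), X.var()

def g(x, mu, var):
    """Discriminant g_i(x) = log Pr(x | class i) under N(mu, var)."""
    return -0.5 * np.log(2 * np.pi * var) - (x - mu) ** 2 / (2 * var)

# Two classes of 1-D points, tightly clustered around 1 and 5
c1 = np.array([0.9, 1.0, 1.1])
c2 = np.array([4.9, 5.0, 5.1])
params = [fit_gaussian(c1), fit_gaussian(c2)]

x = 1.2
print(int(np.argmax([g(x, mu, var) for mu, var in params])))  # 0
```

The new point x = 1.2 is far more likely under the first class's fitted normal, so argmax picks class 0.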

ML discrimination Suppose all the points were in 1 dimension, and all classes were normally distributed (figure: two normal densities centered at μ₁ and μ₂ along the x-axis).

ML discrimination (multi-dimensional case) Not part of the syllabus.

Dimensionality reduction Many genes have highly correlated expression profiles. By discarding some of the genes, we can greatly reduce the dimensionality of the problem. There are other, more principled ways to do such dimensionality reduction.

Principal Components Analysis Consider the expression values of 2 genes over 6 samples. Clearly, the expression of the two genes is highly correlated. Projecting all the points onto a single line could explain most of the data. This is a generalization of “discarding the gene”.

PCA Suppose all of the data were to be reduced by projecting onto a single line through the mean m. How do we select the direction v of the line?

PCA cont’d Let each point xₖ map to x′ₖ. We want to minimize the error. Observation 1: each point xₖ maps to x′ₖ = m + (vᵀ(xₖ − m)) v, for a unit direction vector v.
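The projection in Observation 1, together with the standard fact that the error-minimizing direction is the top eigenvector of the covariance matrix (a result this deck stops just short of), can be sketched as follows; the "two perfectly correlated genes" data mirrors the example two slides back:

```python
import numpy as np

def project_to_line(X, v):
    """Observation 1: x'_k = m + (v^T (x_k - m)) v, with m the data mean
    and v a unit direction vector."""
    m = X.mean(axis=0)
    coeffs = (X - m) @ v               # scalar projection of each centered point
    return m + np.outer(coeffs, v)

def first_pc(X):
    """Error-minimizing direction: top eigenvector of the scatter matrix
    (standard PCA; not derived on these slides)."""
    m = X.mean(axis=0)
    scatter = (X - m).T @ (X - m)
    vals, vecs = np.linalg.eigh(scatter)
    return vecs[:, -1]                 # eigh sorts eigenvalues ascending

# Two perfectly correlated 'genes' over 6 samples: one line explains everything
X = np.column_stack([np.arange(6.0), 2 * np.arange(6.0)])
v = first_pc(X)
Xp = project_to_line(X, v)
print(np.allclose(X, Xp))  # True: zero reconstruction error
```

Because gene 2 is exactly twice gene 1, all six samples already lie on one line through the mean, so projecting onto the first principal component reconstructs them exactly.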