Defining Scoring Functions, Multiple Sequence Alignment Lecture #4


Defining Scoring Functions, Multiple Sequence Alignment Lecture #4 Background Readings: Chapter 6 in Biological Sequence Analysis, Durbin et al., 2001; Chapters 3.4, 3.5 in Introduction to Computational Molecular Biology, Setubal and Meidanis, 1997. This class has been edited from Nir Friedman's lecture. Changes made by Dan Geiger and Shlomo Moran.

Reminder: A Probabilistic Model for Defining the Scoring Function Assume an alignment without indels, and assume each position (nucleotide/amino acid) is independent of the other positions. Consider two options: M: the sequences are Matched (related); R: the sequences are Random (unrelated).

R: Unrelated Sequences Our random model of unrelated sequences is simple: each position is sampled independently from a distribution over the alphabet Σ. We assume there is a distribution q(·) that describes the probability of letters in such positions. Then: Pr(s, t | R) = ∏i q(s[i]) · ∏i q(t[i]).

M: Related Sequences We assume that each pair of aligned positions (s[i], t[i]) evolved from a common ancestor. Let p(a,b) be a distribution over pairs of letters: p(a,b) is the probability that some ancestral letter evolved into this particular pair of letters. Then: Pr(s, t | M) = ∏i p(s[i], t[i]).

Odds-Ratio Test for Alignment Define the odds ratio Q = Pr(s, t | M) / Pr(s, t | R). If Q > 1, then the two strings s and t are more likely to be related (M) than unrelated (R). If Q < 1, then the two strings s and t are more likely to be unrelated (R) than related (M).

Log Odds-Ratio Test for Alignment Taking the logarithm of Q yields log Q = ∑i log [ p(s[i], t[i]) / (q(s[i]) q(t[i])) ] = ∑i Score(s[i], t[i]), where Score(a, b) = log [ p(a,b) / (q(a) q(b)) ]. If log Q > 0, then s and t are more likely to be related; if log Q < 0, they are more likely to be unrelated. Next we estimate Pr((a,b) | M) = p(a,b) for amino acids.
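As a concrete illustration (not from the slides), here is a minimal Python sketch of this log-odds computation for a gap-free alignment; the two-letter alphabet and the values of p(a,b) and q(a) are invented for the example.

```python
import math

# Toy two-letter alphabet with made-up probabilities (illustration only).
q = {"A": 0.6, "B": 0.4}                      # background letter frequencies q(a)
p = {("A", "A"): 0.40, ("A", "B"): 0.10,      # joint probabilities p(a,b) under the
     ("B", "A"): 0.10, ("B", "B"): 0.40}      # "related" model M (symmetric)

def score(a, b):
    """Per-position log odds: log( p(a,b) / (q(a) * q(b)) )."""
    return math.log(p[(a, b)] / (q[a] * q[b]))

def log_odds(s, t):
    """log Q for a gap-free alignment of equal-length strings s and t."""
    assert len(s) == len(t)
    return sum(score(a, b) for a, b in zip(s, t))

# A positive total means the M (related) model is more likely than R (unrelated).
print(log_odds("AABB", "AABA"))
```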

Estimating p(·,·) for proteins (Setubal and Meidanis, pp. 81-84) We need to define, for each pair (a,b) of amino acids (possibly a=b), the probability p(a,b) that a is aligned with b given that the sequences are related. p(a,b) should be determined by the data we have. The next slides present one way to define p(a,b).

Estimating p(·,·) for proteins First, we define for each letter a a probability q_a, the relative frequency of the amino acid a in a collection of proteins considered reliable for this purpose. Specifically, q_a = n_a / n, where n_a is the number of occurrences of the letter a and n is the total number of letters in the collection, so n = ∑a n_a.

Estimating p(·,·) for proteins For a ≠ b, we use as data a collection of accepted mutations: formally, a multiset of pairs {a,b} where a and b are amino acids and a ≠ b. We consider the mutations as unordered pairs. The collection of accepted mutations contains mutations observed in alignments of closely related protein sequences, for example the hemoglobin alpha chain in humans and other organisms (homologous proteins).

Estimating p(·,·) for proteins We use the collection of accepted mutations, which we consider as unordered pairs of amino acids, denoted {a,b}. Some parameters: the mutation count f_ab = f_ba is the number of mutations {a,b} with a ≠ b in the collection, and f is the total number of mutations (unordered pairs) in the collection. Then f_ab / f is the conditional probability Pr({a,b} | a mutation occurred).

Estimating p(·,·) for proteins We still have to define the relative mutability, Pr(x ≠ y | M): the probability that, under the "match" model M, a random aligned pair (x, y) is a mutation. The relative mutability is a user-defined parameter, denoted r.

PAM-1 matrices [1-Percent Accepted Mutation] In the PAM-1 matrix it is assumed that the relative mutability r is 1%; that is, 1/100 of the amino acids are mutated. This is modeled by letting n, the total number of pairs of amino acids, be 100f. Using this, we can now compute f_aa, the number of {a,a} pairs. In this model, p(a,b) is given as the relative frequency of the (unordered) pair {a,b}: p(a,b) = f_ab / n.

The log odds ratio for PAM-1: The entries of the PAM-1 matrix are the logarithms of the odds ratio, PAM-1(a,b) = log [ p(a,b) / (q_a q_b) ]. The actual values are 10 times the logarithm, rounded to the nearest integer.
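A minimal sketch (my own illustration, not the original PAM data) of turning estimated pair frequencies p(a,b) and background frequencies q_a into integer matrix entries. The two-letter alphabet and numbers are made up, and the base-10 logarithm is an assumption consistent with the usual PAM convention.

```python
import math

# Invented numbers for a toy two-letter "alphabet"; real PAM uses 20 amino acids.
q = {"A": 0.6, "B": 0.4}                      # background frequencies q_a
p = {("A", "A"): 0.55, ("A", "B"): 0.005,     # pair frequencies p(a,b); the off-diagonal
     ("B", "A"): 0.005, ("B", "B"): 0.44}     # mass is 1%, matching r = 1%

def pam_entry(a, b):
    """Matrix entry: 10 * log10( p(a,b) / (q_a * q_b) ), rounded to an integer."""
    return round(10 * math.log10(p[(a, b)] / (q[a] * q[b])))

for a in q:
    print([pam_entry(a, b) for b in q])
```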

Evolutionary distance The choice that r = 1% of amino acids change is quite arbitrary. It could fit a specific set of proteins whose evolutionary distance is such that indeed 1% of the letters have mutated. This is a unit of evolutionary change, not of time, because evolution acts differently on distinct sequence types. What is the substitution matrix for k units of evolutionary change?

Model of Evolution We make some assumptions: each position changes independently of the rest; the probability of mutation is the same in each position; evolution does not "remember". (Figure: a single position mutating over time, e.g. T → C → G → A → C → G, at times t, t+Δ, t+2Δ, t+3Δ, t+4Δ.)

Model of Evolution How do we model such a process? This process is called a Markov chain. A chain is defined by the transition probability P(X_{t+Δ} = b | X_t = a), the probability that the next state is b given that the current state is a. We often describe these probabilities by a matrix: M[Δ]_ab = P(X_{t+Δ} = b | X_t = a).

Multi-Step Changes Based on P_ab, we can compute the probabilities of changes over two time periods, using conditional independence (no memory): P(X_{t+2Δ} = b | X_t = a) = ∑_c P(X_{t+Δ} = c | X_t = a) · P(X_{t+2Δ} = b | X_{t+Δ} = c). Thus P[2Δ] = P[Δ] P[Δ]. By induction (homework exercise): P[nΔ] = P[Δ]^n.
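A small numerical sketch of the multi-step rule, with an invented one-step transition matrix over the four nucleotides (illustration only, not a fitted substitution model):

```python
import numpy as np

# Made-up one-step (time Delta) transition matrix over {A, C, G, T}; rows sum to 1.
P1 = np.array([
    [0.97, 0.01, 0.01, 0.01],
    [0.01, 0.97, 0.01, 0.01],
    [0.01, 0.01, 0.97, 0.01],
    [0.01, 0.01, 0.01, 0.97],
])

# Two-step probabilities by conditioning on the intermediate state:
# P[2Δ](a,b) = sum_c P[Δ](a,c) * P[Δ](c,b), i.e. a matrix product.
P2 = P1 @ P1

# By induction, n steps is the n-th matrix power: P[nΔ] = P[Δ]^n.
P4 = np.linalg.matrix_power(P1, 4)

print(np.allclose(P2, np.linalg.matrix_power(P1, 2)))  # True
print(P4.sum(axis=1))                                  # each row still sums to 1
```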

More on Markov Chains – next week

Comments regarding PAM Historically, researchers have used PAM-250 (the only matrix published in the original paper). The original PAM matrices were based on a small number of proteins (circa 1978); later versions use many more examples. PAM used to be the most popular scoring rule, but there are some problems with PAM matrices.

Degrees of freedom in the PAM definition With r = 1/100, the 1-PAM matrix is obtained from one basic matrix; with r = 1/50, the basic matrix is different. Thus we have two different ways to estimate the matrix M[4]: use the PAM-1 matrix to the fourth power, M[4] = M[1]^4, or use the r = 1/50 matrix to the second power, M[4] = M'[2]^2.

Problems in building distance matrices How do we find pairs of aligned sequences? How far back is the ancestor? Earlier divergence → low sequence similarity; later divergence → high sequence similarity. For example, M[250] is known not to reflect long-period changes well (see p. 43 in Durbin et al.). Does one letter mutate into the other, or are they both mutations of a third letter?

BLOSUM Outline (Durbin et al., pp. 43-44) PAM matrices do not capture the difference between short- and long-time mutations. BLOck SUbstitution Matrices (1992) attempt to model long-time mutations: similar ideas, but better statistics and modeling. Idea: use aligned ungapped regions of protein families; these are assumed to have a common ancestor. Procedure: cluster together sequences in a family whenever more than L% identical residues are shared; count the number of substitutions across different clusters (in the same family); estimate frequencies using the counts. Practice: BLOSUM50 (i.e., L = 50) and BLOSUM62 are widely used (see pages 43-44 in Durbin et al.).

Multiple Sequence Alignment
S1 = AGGTC, S2 = GTTCG, S3 = TGAAC
Possible alignment (rows are the gapped sequences):
S1: A G G T - C -
S2: - G - T T C G
S3: T G - A A C -

Multiple Sequence Alignment Aligning more than two sequences. Definition: Given strings S1, S2, …, Sk, a multiple (global) alignment maps them to strings S'1, S'2, …, S'k that may contain gaps, where |S'1| = |S'2| = … = |S'k| and the removal of gaps from S'i leaves Si.
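A tiny sketch that checks this definition directly on the example sequences from the previous slide (the particular gapped strings are one possible alignment, chosen for illustration):

```python
def is_multiple_alignment(originals, gapped, gap="-"):
    """Check the definition: equal lengths, and stripping gaps recovers each S_i."""
    same_length = len({len(g) for g in gapped}) == 1
    recovers = all(g.replace(gap, "") == s for s, g in zip(originals, gapped))
    return same_length and recovers

S = ["AGGTC", "GTTCG", "TGAAC"]           # S1, S2, S3 from the previous slide
A = ["AGGT-C-", "-G-TTCG", "TG-AAC-"]     # one possible alignment (rows)
print(is_multiple_alignment(S, A))        # True
```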

Multiple alignments We use a matrix to represent an alignment of k sequences, K = (x1,...,xk), one sequence per row; we assume no column consists solely of gaps. The common scoring functions give a score to each column and set score(K) = ∑i score(column(i)). The scoring function is symmetric, so the order of arguments does not matter: score(I,_,I,V) = score(_,I,I,V). Still, for k = 10, a scoring function has more than 2^k − 1 > 1000 entries to specify.

SUM OF PAIRS A common scoring function is SP, the sum of the scores of the projected pairwise alignments: SP-score(K) = ∑i<j score(xi, xj). Note that we need to specify score(-,-), because a column may contain several blanks (as long as not all of its entries are blanks). In order for this score to be written as ∑i score(column(i)), we set score(-,-) = 0. Why? Because these gap-gap entries appear in the sum over columns but not in the sum of the projected pairwise alignments (pairs of rows).
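A short sketch (with illustrative match/mismatch/gap values of my choosing) showing that with score(-,-) = 0 the column-wise sum and the sum of projected pairwise alignment scores coincide:

```python
from itertools import combinations

GAP = "-"

def pair_score(a, b):
    """Illustrative pairwise scores; score(-,-) is set to 0 on purpose."""
    if a == GAP and b == GAP:
        return 0
    if a == GAP or b == GAP:
        return -2
    return 1 if a == b else -1

def sp_by_columns(rows):
    """Sum over columns of the sum over all pairs within the column."""
    return sum(pair_score(c[i], c[j])
               for c in zip(*rows)
               for i, j in combinations(range(len(rows)), 2))

def sp_by_projection(rows):
    """Sum over pairs of rows, dropping columns where both rows have a gap."""
    total = 0
    for r1, r2 in combinations(rows, 2):
        total += sum(pair_score(a, b) for a, b in zip(r1, r2)
                     if not (a == GAP and b == GAP))
    return total

K = ["AGGT-C-", "-G-TTCG", "TG-AAC-"]
print(sp_by_columns(K), sp_by_projection(K))   # equal because score(-,-) == 0
```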

SUM OF PAIRS Definition: The sum-of-pairs (SP) value for a multiple global alignment A of k strings is the sum of the values of all projected pairwise alignments induced by A, where the pairwise alignment scoring function score(xi, xj) is additive.

Example Consider the following alignment of three strings:
a c - c d b -
- c - a d b d
a - b c d a d
Using the edit distance, with cost 0 for a match or a gap-gap pair and cost 1 for a mismatch or an indel, the three projected pairwise alignments have costs 3, 4 and 5, so this alignment has an SP value of 3 + 4 + 5 = 12.
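A quick check of this arithmetic under the stated unit edit costs (the cost values in the code encode the assumptions above):

```python
from itertools import combinations

def edit_cost(a, b):
    """Unit edit costs: 0 for a match or a gap-gap pair, 1 otherwise."""
    return 0 if a == b else 1

rows = ["ac-cdb-", "-c-adbd", "a-bcdad"]

pair_costs = [sum(edit_cost(a, b) for a, b in zip(r1, r2))
              for r1, r2 in combinations(rows, 2)]
print(pair_costs, sum(pair_costs))   # [3, 4, 5] 12
```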

Multiple Sequence Alignment Given k strings of length n, there is a natural generalization of the dynamic programming algorithm that finds an alignment maximizing SP-score(K) = ∑i<j score(xi, xj). Instead of a 2-dimensional table, we now have a k-dimensional table to fill. For each cell i = (i1,..,ik), compute an optimal multiple alignment for the k prefix sequences x1(1,..,i1),...,xk(1,..,ik). The adjacent cells are those whose indices differ from i by one or zero in each coordinate. Each cell depends on 2^k − 1 adjacent cells.

The idea via k=2 Recall the recurrence for the pairwise alignment value V: V[i+1, j+1] = max{ V[i, j] + score(x1[i+1], x2[j+1]), V[i+1, j] + score(-, x2[j+1]), V[i, j+1] + score(x1[i+1], -) }. Note that the new cell index (i+1, j+1) differs from the previous indices by one of the 2^2 − 1 = 3 non-zero binary vectors (1,1), (1,0), (0,1).

The idea for arbitrary k Order the cells i = (i1,..,ik) by increasing order of the sum ∑j ij. Set s(0,..,0) = 0, and for i > (0,...,0): s(i) = max_b { s(i − b) + score(column(i, b)) }, where b ranges over all non-zero binary k-tuples and i − b is the (non-negative) coordinate-wise difference of i and b. column(i, b) is the k-tuple (c1,…,ck) where cj = xj(ij) if bj = 1 and cj = '-' otherwise (reflecting that bj = 1 iff the sequence xj was extended by a letter in the corresponding case).
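A sketch of the inner loop of this recurrence: enumerating the non-zero binary k-tuples b for a cell i and building column(i, b). The function name and the 0-indexing of Python strings are mine; the slides use 1-indexed sequences.

```python
from itertools import product

def neighbors_and_columns(i, seqs):
    """For cell i = (i1,...,ik), yield (i - b, column(i, b)) for every non-zero
    binary k-tuple b.  Column entry j is seqs[j][i[j]-1] if b[j] == 1, else '-'."""
    k = len(i)
    for b in product((0, 1), repeat=k):
        if not any(b):                                   # skip the all-zero vector
            continue
        if any(bj == 1 and ij == 0 for bj, ij in zip(b, i)):
            continue                                     # cannot step back past the start
        prev = tuple(ij - bj for ij, bj in zip(i, b))
        col = tuple(seqs[j][i[j] - 1] if b[j] else "-" for j in range(k))
        yield prev, col

seqs = ["AGGTC", "GTTCG", "TGAAC"]
for prev, col in neighbors_and_columns((1, 1, 1), seqs):
    print(prev, col)                                     # up to 2**3 - 1 = 7 predecessors
```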

Complexity of the DP approach Number of cells: n^k. Number of adjacent cells: O(2^k). Computing the SP score of each column(i, b) takes O(k^2). The total run time is O(k^2 2^k n^k), which is utterly unacceptable! There is not much hope for a polynomial algorithm, because the problem has been shown to be NP-complete. We need heuristics to reduce the running time.

Time saving heuristics: Relevance tests Heuristic: avoid computing score(i) for irrelevant cells. Let L be a lower bound on the optimal SP score of a multiple alignment of the k sequences (L can be obtained from an arbitrary multiple alignment, computed in any way).

Time saving heuristics: Relevance tests Heuristic: avoid computing score(i) for irrelevant cells. Relevance test for cell i = (.., iu, .., iv, ..): for each two indices 1 ≤ u < v ≤ k, compute the optimal score of an alignment of s = xu and t = xv that goes via cell i. If the sum of these scores over all pairs (u, v) is smaller than L, then i is irrelevant. An alignment of s = xu and t = xv that goes via cell i is an alignment of s and t that has a prefix aligning (s[1],…,s[iu]) with (t[1],..,t[iv]). How do we find such an optimal alignment?

Recall the Linear Space algorithm F[i,j] = d(s[1..i], t[1..j]) is the score of the best alignment of the prefixes, B[i,j] = d(s[i+1..n], t[j+1..m]) is the score of the best alignment of the suffixes, and F[i,j] + B[i,j] is the score of the best alignment through (i,j). These computations can be done in linear space. Build such a table for every two sequences s = xu and t = xv, 1 ≤ u, v ≤ k; the entry (iu, iv) encodes the optimum through cell i.
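A sketch of the forward/backward idea behind this table, using full quadratic-space tables for clarity (the slides note the computation can be done in linear space) and made-up match/mismatch/gap scores:

```python
def best_through_each_cell(s, t, match=1, mismatch=-1, gap=-2):
    n, m = len(s), len(t)

    def sub(a, b):
        return match if a == b else mismatch

    # F[i][j]: best score of globally aligning the prefixes s[1..i] and t[1..j].
    F = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(n + 1):
        for j in range(m + 1):
            if i == 0 and j == 0:
                continue
            cands = []
            if i > 0 and j > 0:
                cands.append(F[i - 1][j - 1] + sub(s[i - 1], t[j - 1]))
            if i > 0:
                cands.append(F[i - 1][j] + gap)
            if j > 0:
                cands.append(F[i][j - 1] + gap)
            F[i][j] = max(cands)

    # B[i][j]: best score of globally aligning the suffixes s[i+1..n] and t[j+1..m].
    B = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(n, -1, -1):
        for j in range(m, -1, -1):
            if i == n and j == m:
                continue
            cands = []
            if i < n and j < m:
                cands.append(B[i + 1][j + 1] + sub(s[i], t[j]))
            if i < n:
                cands.append(B[i + 1][j] + gap)
            if j < m:
                cands.append(B[i][j + 1] + gap)
            B[i][j] = max(cands)

    # F[i][j] + B[i][j] = best global alignment score that passes through (i, j).
    through = [[F[i][j] + B[i][j] for j in range(m + 1)] for i in range(n + 1)]
    return through, F[n][m]

through, opt = best_through_each_cell("AGGTC", "AGTC")
print(opt)                                        # optimal global score
print(max(max(row) for row in through) == opt)    # the optimum passes through some cell
```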

Time saving heuristics: Relevance test But can we go over all the cells and determine whether each is relevant or not? No. Instead, start with (0,…,0) and keep adding relevant entries to the list until reaching (n1,…,nk).

Multiple Sequence Alignment – Approximation Algorithm In the tutorial you saw an O(k^2 n^2) multiple alignment algorithm for the SP-score that errs by a factor of at most 2(1 − 1/k) < 2.

Star Alignments Rather than summing over all pairwise alignments, select a fixed sequence x0 as a center and set Star-score(K) = ∑j>0 score(x0, xj). The algorithm to find the alignment: at each step, add another sequence aligned with x0, keeping old gaps and possibly adding new ones.
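The slide does not say how x0 is chosen; one common choice is the sequence maximizing the total pairwise score to all the others. A sketch of that center-selection step (scoring values invented; the gap-merging step that follows is omitted):

```python
def global_score(s, t, match=1, mismatch=-1, gap=-2):
    """Needleman-Wunsch score only (no traceback), row by row."""
    prev = [j * gap for j in range(len(t) + 1)]
    for i, a in enumerate(s, 1):
        cur = [i * gap]
        for j, b in enumerate(t, 1):
            cur.append(max(prev[j - 1] + (match if a == b else mismatch),
                           prev[j] + gap,
                           cur[j - 1] + gap))
        prev = cur
    return prev[-1]

def choose_center(seqs):
    """Center = the sequence with the largest total pairwise score to the rest."""
    totals = []
    for i, s in enumerate(seqs):
        totals.append(sum(global_score(s, t) for j, t in enumerate(seqs) if j != i))
    return max(range(len(seqs)), key=lambda i: totals[i])

seqs = ["AGGTC", "GTTCG", "TGAAC"]
print(choose_center(seqs))     # index of the chosen center x0
```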

Tree Alignments Assume that there is a tree T = (V, E) whose leaves are the sequences, and associate a sequence with each internal node. Tree-score(K) = ∑(i,j)∈E score(xi, xj). Finding the optimal assignment of sequences to the internal nodes is NP-hard. We will meet this problem again when we study phylogenetic trees.