Sequence Alignment II, Lecture #3. This class has been edited from Nir Friedman's lecture. Changes made by Dan Geiger, then by Shlomo Moran. Background Readings: Chapters 2.5 and 2.7 of the text book Biological Sequence Analysis (Durbin et al.), and chapters of Introduction to Computational Molecular Biology (Setubal and Meidanis, 1997).

2 Reminder
Last class we discussed dynamic programming algorithms for:
- global alignment
- local alignment
All of these assumed a scoring rule that determines the quality of perfect matches, substitutions, insertions, and deletions.

3 Alignment in Real Life
- One of the major uses of alignments is to find sequences in a "database."
- The current protein database contains about 10^8 residues, so searching with a target sequence of length 10^3 requires evaluating about 10^11 matrix cells, which takes about three hours at a rate of 10 million evaluations per second.
- Quite annoying when, say, one thousand target sequences need to be searched, since that would take about four months.

4 Heuristic Search
- Instead, most searches rely on heuristic procedures.
- These are not guaranteed to find the best match.
- Sometimes they will completely miss a high-scoring match.
We now describe the main ideas used by the best known of these heuristic procedures.

5 Basic Intuition
- Almost all heuristic search procedures are based on the observation that real-life matches often contain long gap-less stretches.
- These heuristics try to find significant gap-less matches and then extend them.

6 Banded DP
- Suppose that we have two strings s[1..n] and t[1..m] such that n ≈ m.
- If the optimal alignment of s and t has few gaps, then the path of the alignment will be close to the diagonal.
(Figure: the s-by-t DP matrix, with the alignment path staying near the main diagonal.)

7 Banded DP
- To find such a path, it suffices to search in a diagonal band of the matrix.
- If the diagonal band consists of k diagonals (width k), then dynamic programming takes O(kn) time.
- This is much faster than the O(n^2) of standard DP.
(Figure: the band of k diagonals around the main diagonal; cells beyond the band edge, next to V[i, i+k/2], are out of range.)
Note that along each diagonal, i-j is constant.
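To make the band restriction concrete, here is a minimal Python sketch of banded global alignment. The helper name banded_align and the match/mismatch/gap values are illustrative assumptions (the slides do not fix a scoring scheme), and it assumes the two lengths differ by at most the band half-width w; it is meant only to show that just O(wn) cells are filled.

```python
# A minimal sketch of banded global alignment: standard Needleman-Wunsch
# restricted to a band of half-width w around the main diagonal, so only
# O(w*n) cells are filled. The function name and the match/mismatch/gap
# values are illustrative assumptions; it assumes |len(s) - len(t)| <= w.
def banded_align(s, t, w, match=1, mismatch=-1, gap=-2):
    n, m = len(s), len(t)
    V = {(0, 0): 0}                                   # cells stored only inside the band
    for i in range(1, min(n, w) + 1):
        V[(i, 0)] = V[(i - 1, 0)] + gap
    for j in range(1, min(m, w) + 1):
        V[(0, j)] = V[(0, j - 1)] + gap
    for i in range(1, n + 1):
        for j in range(max(1, i - w), min(m, i + w) + 1):
            sub = match if s[i - 1] == t[j - 1] else mismatch
            best = V[(i - 1, j - 1)] + sub            # match / substitution
            if (i - 1, j) in V:                       # gap in t, if still inside the band
                best = max(best, V[(i - 1, j)] + gap)
            if (i, j - 1) in V:                       # gap in s, if still inside the band
                best = max(best, V[(i, j - 1)] + gap)
            V[(i, j)] = best
    return V[(n, m)]

print(banded_align("AGCGCCATGGATTGAGCGA", "AGCGTCATGGATTGAGCGA", w=3))
```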

8 Banded DP for local alignment
Problem: where is the banded diagonal? It need not be the main diagonal when looking for a good local alignment (or when the lengths of s and t are different). How do we select which subsequences to align using banded DP?
We heuristically find potential diagonals and evaluate them using banded DP. This is the main idea of FASTA.

9 Overview of FASTA
Input: strings s and t, and a parameter ktup
Output: a highly scored local alignment
1. Find pairs of matching substrings s[i...i+ktup] = t[j...j+ktup].
2. Extend them to ungapped diagonals.
3. Extend to gapped matches using banded DP.

10 Finding Potential Diagonals
Suppose there exists a relatively long gap-less match:
S = ****AGCGCCATGGATTGAGCGA*
T = **TGCGACATTGATCGACCTA**
- Each such match defines a potential diagonal as follows: if the match starts at location i of the first sequence (e.g., 5 above) and at location j of the second (e.g., 3 above), then the potential diagonal starts at cell (i,j).
- Can we identify potential diagonals quickly?
- Such diagonals can then be evaluated using banded DP.

11 Identifying Potential Diagonals
Assumption: high-scoring gap-less alignments contain several "seeds" of perfect matches.
S = ****AGCGCCATGGATTGAGCGA*
T = **TGCGACATTGATCGACCTA**
Since this is a gap-less alignment, all perfect-match regions lie on the same diagonal (defined by i-j). How do we find seeds efficiently?

12 Formalizing the task
Let ktup be a parameter denoting the seed length of interest.
Task at hand (identifying seeds): find all pairs (i,j) such that s[i...i+ktup] = t[j...j+ktup].

13 Finding Seeds Efficiently
S = ****AGCGCCATGGATTGAGCGA*
T = **TGCGACATTGATCGACCTA**
- Prepare an index table of the database sequence S such that, for any string of length ktup, one gets the list of its positions in S.
Index table (ktup = 2): AA: -, AC: -, AG: 5,19, AT: 11,15, CA: 10, CC: 9,21, CG: 7, ..., TT: 16
- March along the query sequence T, using the index table to list all matches with the database sequence S. For example, the 2-mer starting at position 7 of T has no entry, giving no match (-,7); the 2-mer at position 8 occurs at position 10 of S, giving one match (10,8); the 2-mer at position 9 occurs at positions 11 and 15 of S, giving two matches (11,9) and (15,9).
In practice these steps take linear time, O(|s| + |t|).
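A minimal Python sketch of this indexing step is given below. The helper name find_seeds is hypothetical, and positions are 0-based (unlike the 1-based positions on the slide); it builds the word-to-positions table for S and then scans T against it.

```python
from collections import defaultdict

# A minimal sketch of the seed-finding step: index the database sequence S by
# its ktup-words, then scan the query T and list all matching (i, j) pairs.
# Positions here are 0-based, unlike the 1-based positions on the slide, and
# find_seeds is a hypothetical helper name.
def find_seeds(S, T, ktup=2):
    index = defaultdict(list)
    for i in range(len(S) - ktup + 1):
        index[S[i:i + ktup]].append(i)            # word -> its positions in S
    seeds = []
    for j in range(len(T) - ktup + 1):
        for i in index.get(T[j:j + ktup], []):    # all positions of this word in S
            seeds.append((i, j))
    return seeds

S = "AGCGCCATGGATTGAGCGA"
T = "TGCGACATTGATCGACCTA"
print(find_seeds(S, T, ktup=2))
```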

14 Comments
- The maximal size of the index table is |Σ|^ktup, where Σ is the alphabet (of size 4 for DNA or 20 for proteins). For small ktup, the entire table is stored. For large ktup values, one should keep only entries for tuples actually found in the database, so the index table size remains linear; in this case, hashing is needed.
- Typical values of ktup are 1-2 for proteins and 4-6 for DNA. The tradeoffs of these values are discussed later.
- The index table is prepared for each database sequence ahead of users' matching requests, at preprocessing time, so matching time is O(|t|).

15 Identifying Potential Diagonals
S = ***AGCGCCATGGATTGAGCGA*
T = **TGCGACATTGATCGACCTA**
- Input: a set of seed pairs, e.g., (6,4), (10,8), (14,12), (15,10), (20,4), ...
- Task: locate sets of pairs that lie on the same diagonal.
- Method: sort the pairs according to the difference i-j. For example, (6,4) and (10,8) both have i-j = 2, so they lie on the same diagonal, while (20,4) has i-j = 20-4 = 16.
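The sorting/grouping by i-j can be sketched in a few lines of Python; grouping with a dictionary keyed by the offset i-j is equivalent to sorting by that difference. The helper name group_by_diagonal is hypothetical.

```python
from collections import defaultdict

# A minimal sketch of the diagonal-grouping step: all seed pairs sharing the
# same offset i - j lie on the same diagonal of the DP matrix.
def group_by_diagonal(seeds):
    diagonals = defaultdict(list)
    for i, j in seeds:
        diagonals[i - j].append((i, j))           # key = diagonal offset i - j
    return diagonals

seeds = [(6, 4), (10, 8), (14, 12), (15, 10), (20, 4)]
for offset, pairs in sorted(group_by_diagonal(seeds).items()):
    print(f"diagonal i-j = {offset}: {pairs}")
```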

16 Processing Potential Diagonals
For diagonals whose offset i-j occurs with high frequency, namely diagonals with many seed pieces, combine the pieces into regions by extending them greedily along the diagonal as long as the score improves (and never drops below some cutoff value).
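A minimal sketch of this greedy extension is shown below, assuming simple match/mismatch scores and a fixed drop-off cutoff (both are illustrative choices, not values given on the slide); only rightward extension is shown, the leftward case being symmetric.

```python
# A minimal sketch of greedy extension along one diagonal: starting from a
# seed of the given length at (i, j), grow it to the right as long as the
# running score stays within max_drop of the best score seen (extension to
# the left is symmetric and omitted). The scoring values and the drop-off
# rule are illustrative assumptions.
def extend_on_diagonal(s, t, i, j, length, match=2, mismatch=-1, max_drop=3):
    score = best = sum(match if s[i + x] == t[j + x] else mismatch
                       for x in range(length))
    end = x = length
    while i + x < len(s) and j + x < len(t):
        score += match if s[i + x] == t[j + x] else mismatch
        x += 1
        if score > best:
            best, end = score, x
        elif best - score > max_drop:             # give up once the score falls too far
            break
    return best, (i, j, end)                      # best score and the extended region

s = "AGCGCCATGGATTGAGCGA"
t = "TGCGTCATGGATTCAGCTA"
print(extend_on_diagonal(s, t, 5, 5, length=4))
```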

17 FASTA's Final Steps: Using Banded DP
- List the highest-scoring diagonal matches.
- Run banded DP on regions containing a high-scoring diagonal (say, with width 12).
Hence the algorithm may combine some diagonals into gapped matches (in the slide's figure, it could combine diagonals 2 and 3).

18 FASTA: Practical Choices
Most applications of FASTA use a very small ktup (1-2 for proteins, 4-6 for DNA). Higher values yield fewer potential diagonals, so searching around the potential diagonals with DP is faster, but the chance of missing an optimal local alignment increases.
Some implementation choices/tricks have not been covered here.

19 BLAST (Basic Local Alignment Search Tool)
- Based on ideas similar to those described earlier (high-scoring pairs rather than exact k-tuples as seeds).
- Uses an established statistical framework to determine thresholds.
- The newer PSI-BLAST (Position-Specific Iterated BLAST) is the state-of-the-art sequence comparison software. It is an iterative procedure:
  - Perform BLAST on a database.
  - Use the significant alignments to construct a "position-specific" score matrix.
  - Use this matrix in the next round of database searching, until no new significant alignments are found.
- It can sometimes detect remote homologs.

20 BLAST Overview
Input: strings s and t, and a parameter T = threshold value
Output: a highly scored local alignment
Definition: two strings u and v of length k are a high-scoring pair (HSP) if d(u,v) > T (usually only un-gapped alignments are considered).
1. Find high-scoring pairs of substrings such that d(u,v) > T. These words serve as seeds for finding longer matches.
2. Extend to ungapped diagonals (as in FASTA).
3. Extend to gapped matches.

21 BLAST Overview (cont.)
Step 1: find high-scoring pairs of substrings such that d(u,v) > T (the seeds):
- Find all strings of length k which score at least T against substrings of s in a gapless alignment (k = 4 for proteins, 11 for DNA). (Note: possibly not all k-words must be tested, e.g., when such a word scores less than T against itself.)
- Find in t all exact matches with each of the above strings.
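For illustration only, here is a brute-force sketch of the seed definition d(u,v) > T, using a toy match/mismatch score in place of a real substitution matrix. It enumerates all k-word pairs, which is not how BLAST achieves efficiency; BLAST precomputes the high-scoring "neighborhood" words of s and looks them up in t.

```python
# A brute-force sketch of the seed definition only: enumerate all k-word pairs
# (u from s, v from t) and keep those whose gapless score d(u, v) exceeds the
# threshold T. Real BLAST instead precomputes, for each k-word of s, the
# "neighborhood" of words scoring at least T and looks those up in t. The toy
# scoring function (match +2, mismatch -1) is an assumption, not a real
# substitution matrix.
def word_score(u, v, match=2, mismatch=-1):
    return sum(match if a == b else mismatch for a, b in zip(u, v))

def find_hsp_seeds(s, t, k, T):
    seeds = []
    for i in range(len(s) - k + 1):
        for j in range(len(t) - k + 1):
            score = word_score(s[i:i + k], t[j:j + k])
            if score > T:
                seeds.append((i, j, score))
    return seeds

print(find_hsp_seeds("AGCGCCATGGATTGA", "TGCGACATTGATCGA", k=4, T=4))
```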

22 Extending Potential Matches
Once a seed is found, BLAST attempts to find a local alignment that extends the seed. Seeds on the same diagonal are combined (as in FASTA), then extended as far as possible in a greedy manner. During the extension phase, the search stops when the score drops below a lower bound computed by BLAST (to save time).

23 Where do scoring rules come from?
We have defined an additive scoring function by specifying a function σ(·,·) such that:
- σ(x,y) is the score of replacing x by y
- σ(x,-) is the score of deleting x
- σ(-,x) is the score of inserting x
But how do we come up with the "correct" scores? Answer: by encoding experience of what counts as similar sequences for the task at hand. Similarity depends on time, evolutionary trends, and sequence type.

24 Why use probability to define and/or interpret a scoring function?
Similarity is probabilistic in nature because biological changes such as mutation, recombination, and selection are not deterministic. Using probability, we can answer questions such as:
- How probable is it that two sequences are similar?
- Is the similarity found significant or random?
- How should we change a similarity score when, say, the mutation rate of a specific area of the chromosome becomes known?

25 A Probabilistic Model
- For now, we focus on alignment without indels.
- For now, we assume each position (nucleotide/amino acid) is independent of the other positions.
- We consider two options:
  M: the sequences are Matched (related)
  R: the sequences are Random (unrelated)

26 Unrelated Sequences
- Our random model of unrelated sequences is simple: each position is sampled independently from a distribution over the alphabet Σ.
- We assume there is a distribution q(·) that describes the probability of letters in such positions.
- Then: P(s,t | R) = ∏_i q(s[i]) · ∏_i q(t[i])

27 Related Sequences
- We assume that each pair of aligned positions (s[i], t[i]) evolved from a common ancestor.
- Let p(a,b) be a distribution over pairs of letters: p(a,b) is the probability that some ancestral letter evolved into this particular pair of letters.
- Then: P(s,t | M) = ∏_i p(s[i], t[i])

28 Odds-Ratio Test for Alignment
Q = P(s,t | M) / P(s,t | R) = ∏_i p(s[i], t[i]) / ( ∏_i q(s[i]) · ∏_i q(t[i]) )
If Q > 1, then the two strings s and t are more likely to be related (M) than unrelated (R). If Q < 1, then they are more likely to be unrelated (R) than related (M).

29 Log Odds-Ratio Test for Alignment
Taking the logarithm of Q yields
log Q = Σ_i log [ p(s[i], t[i]) / ( q(s[i]) · q(t[i]) ) ],
a sum whose i-th term can be viewed as Score(s[i], t[i]).
If log Q > 0, then s and t are more likely to be related; if log Q < 0, they are more likely to be unrelated. How can we relate this quantity to a score function?

30 Probabilistic Interpretation of Scores
- We define the scoring function via σ(a,b) = log [ p(a,b) / ( q(a) · q(b) ) ].
- Then the score of an alignment is the log-ratio between the two models:
  Score = Σ_i σ(s[i], t[i]) = log Q
  Score > 0 ⇒ the Match model is more likely
  Score < 0 ⇒ the Random model is more likely
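A small sketch of this scoring rule in Python is given below. The distributions p and q are made-up toy values (not estimated from data), and the score is computed only for an ungapped alignment, as in the model above.

```python
import math

# A minimal sketch of the log-odds scoring rule and of scoring an ungapped
# alignment as a sum of per-position scores. The values in p and q are
# made-up toy numbers, not probabilities estimated from real data.
alphabet = "ACGT"
q = {a: 0.25 for a in alphabet}                        # background letter frequencies
p = {(a, b): (0.2 if a == b else 0.05 / 3)             # toy "match model" pair frequencies
     for a in alphabet for b in alphabet}              # (each row sums to 0.25)

sigma = {(a, b): math.log(p[(a, b)] / (q[a] * q[b]))   # sigma(a,b) = log(p(a,b)/(q(a)q(b)))
         for a in alphabet for b in alphabet}

def alignment_score(s, t):
    return sum(sigma[(a, b)] for a, b in zip(s, t))    # ungapped alignment only

print(round(alignment_score("ACGTAC", "ACGTTC"), 3))   # > 0: the match model is favored
```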

31 Modeling Assumptions
- It is important to note that this interpretation depends on our modeling assumptions!
- For example, if we assume that the letter in each position depends on the letter in the preceding position, then the likelihood ratio will have a different form.

32 Estimating Probabilities
- Suppose we are given a long string s[1..n] of letters from an alphabet Σ.
- We want to estimate the distribution q(·) that generated the sequence.
- How should we go about this? We build on the theory of parameter estimation in statistics, e.g., by using maximum likelihood.

33 Estimating q(·)
- Suppose we are given a long string s[1..n] of letters from Σ; s can be the concatenation of all sequences in our database.
- We want to estimate the distribution q(·); that is, q is defined per letter.
- Likelihood function: L(q) = P(s | q) = ∏_{i=1..n} q(s[i]) = ∏_{a∈Σ} q(a)^{N_a}, where N_a is the number of occurrences of letter a in s.

34 Estimating q(·) (cont.)
How do we define q from the likelihood function L(q) = ∏_{a∈Σ} q(a)^{N_a}?
- ML parameters (Maximum Likelihood): q(a) = N_a / n, the relative frequency of a in s.
- MAP parameters (Maximum A posteriori Probability): add prior (pseudo-) counts c_a, giving q(a) = (N_a + c_a) / (n + Σ_b c_b).
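A minimal sketch of these estimates, assuming a DNA alphabet and a hypothetical helper name estimate_q; the pseudo-count variant is only a simple illustration of the MAP idea.

```python
from collections import Counter

# A minimal sketch of estimating q(.) from a long string: the ML estimate is
# just the relative frequency of each letter; adding pseudo-counts gives a
# simple smoothed (MAP-style) estimate. The pseudo-count value is an
# illustrative assumption.
def estimate_q(s, alphabet="ACGT", pseudo=0.0):
    counts = Counter(s)
    total = len(s) + pseudo * len(alphabet)
    return {a: (counts[a] + pseudo) / total for a in alphabet}

s = "AGCGCCATGGATTGAGCGA"
print(estimate_q(s))                  # ML estimate: N_a / n
print(estimate_q(s, pseudo=1.0))      # smoothed estimate with one pseudo-count per letter
```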

35 Estimating p(·,·)
Intuition:
- Find a pair of aligned sequences s[1..n], t[1..n].
- Estimate the probability of each pair: p(a,b) = N_{a,b} / n, where N_{a,b} is the number of times a is aligned with b in (s,t).
- Again, s and t can be the concatenation of many aligned pairs from the database.
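A minimal sketch of this counting step, for a single gapless aligned pair; treating (a,b) and (b,a) as the same event is a simplifying choice made here, and the helper name estimate_p is hypothetical.

```python
from collections import Counter

# A minimal sketch of estimating p(.,.) from a gapless aligned pair of
# sequences: count how often each pair of letters appears in aligned columns.
# Treating (a,b) and (b,a) as the same event is a simplifying choice here.
def estimate_p(s, t):
    assert len(s) == len(t)
    counts = Counter(tuple(sorted(pair)) for pair in zip(s, t))
    n = len(s)
    return {pair: c / n for pair, c in counts.items()}

s = "AGCGCCATGGATTGAGCGA"
t = "AGCGTCATGGATTCAGCTA"
print(estimate_p(s, t))
```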

36 Problems in Estimating p(·,·)
- How do we find pairs of aligned sequences?
- How far back is the ancestor?
  earlier divergence → low sequence similarity
  later divergence → high sequence similarity
- Did one letter mutate into the other, or are both mutations of a common ancestor having yet another residue/nucleotide?

37 Estimating p(·,·) for proteins
- Generate a large, diverse collection of accepted mutations. An accepted mutation is a mutation deduced from an alignment of closely related protein sequences, for example the Hemoglobin alpha chain in humans and other organisms (homologous proteins).
- Recall that p(·,·) is estimated from counts over aligned pairs of positions.
- Define:
  f(a,b): the number of mutations a↔b
  f(a) = Σ_{b≠a} f(a,b): the total number of mutations involving a
  f = Σ_a f(a): the total number of amino acids involved in a mutation
  Note that f is twice the number of mutations.

38 PAM-1 matrices
For PAM-1 it is assumed that 1% of all amino acids are mutated. Taking #(a-mutations) = f(a) and #(a-occurrences) ≈ f·q(a), the relative mutability of amino acid a, which should reflect the probability that a is mutated to any other amino acid, is
m_a = f(a) / ( 100 · f · q(a) ),
normalized so that Σ_a q(a)·m_a = 1/100.

39 PAM-1 matrices
Define M_ab to be the probability matrix for switching from a to b via a mutation:
M_ab = m_a · f(a,b) / f(a)  for b ≠ a,
M_aa = 1 - m_a.
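The construction of M from the counts can be sketched on a toy example. The 3-letter alphabet, the counts f(a,b), and the background frequencies q below are all made up for illustration; real PAM-1 matrices are built from accepted-mutation counts over the 20 amino acids.

```python
import numpy as np

# A toy sketch of the PAM-1 construction above, on a 3-letter "alphabet" with
# made-up mutation counts f(a,b) and background frequencies q; real PAM
# matrices are built from counts over the 20 amino acids.
q = np.array([0.5, 0.3, 0.2])                     # background frequencies (toy values)
f_ab = np.array([[0., 30., 10.],                  # symmetric mutation counts f(a,b)
                 [30., 0., 20.],
                 [10., 20., 0.]])

f_a = f_ab.sum(axis=1)                            # f(a): mutations involving each letter
f = f_a.sum()                                     # f: total residues involved in mutations
m = f_a / (100.0 * f * q)                         # relative mutability, 1% normalization

M = np.zeros((3, 3))                              # PAM-1 transition matrix M_ab
for a in range(3):
    for b in range(3):
        M[a, b] = (1 - m[a]) if a == b else m[a] * f_ab[a, b] / f_a[a]

print(M.round(4))
print("rows sum to 1:", np.allclose(M.sum(axis=1), 1.0))
print("expected fraction mutated:", (q * m).sum())    # 0.01, i.e., 1%
```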

40 Properties of PAM-1 matrices
- Note that Σ_b M_ab = 1; namely, for each amino acid a, the probabilities of not changing and of changing sum to 1.
- Also note that Σ_a q(a)·M_aa = 0.99; namely, only 1% of amino acids change according to this matrix. Hence the name, 1-Percent Accepted Mutation (PAM).
- This is a unit of evolutionary change, not of time, because evolution acts differently on distinct sequence types.
- What is the substitution matrix for k units of evolutionary change?

41 Model of Evolution
Again, we need to make some assumptions:
- Each position changes independently of the rest.
- The probability of mutation is the same in each position.
- Evolution does not "remember."
(Figure: letters at aligned positions changing over times t, t+Δ, t+2Δ, t+3Δ, t+4Δ.)

42 Model of Evolution
- How do we model such a process?
- This process is called a Markov Chain.
- A chain is defined by its transition probabilities P(X_{t+Δ} = b | X_t = a), the probability that the next state is b given that the current state is a.
- We often describe these probabilities by a matrix: M[Δ]_ab = P(X_{t+Δ} = b | X_t = a).

43 Multi-Step Changes
- Based on M_ab, we can compute the probabilities of change over two time periods:
  M[2Δ]_ab = Σ_c M[Δ]_ac · M[Δ]_cb, i.e., M[2Δ] = M[Δ]·M[Δ].
- By induction (homework exercise): M[kΔ] = M[Δ]^k.

44 Longer Term Changes
- Estimate M[Δ] (the PAM-1 matrix).
- Use M[kΔ] = M[Δ]^k (the PAM-k matrices).
- Define the PAM-k score of an aligned pair (a,b) as the log-odds
  score_k(a,b) = log [ q(a)·M[kΔ]_ab / ( q(a)·q(b) ) ] = log [ M[kΔ]_ab / q(b) ].
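A toy sketch of this step, continuing the 3-letter example above: raise the PAM-1 matrix to the k-th power and convert to log-odds scores. Real PAM matrices additionally scale and round the log-odds values; the numbers here are illustrative only.

```python
import numpy as np

# A toy sketch of deriving a PAM-k scoring matrix: raise a (toy) PAM-1
# transition matrix to the k-th power and take log-odds against the
# background frequencies q. The matrix below continues the 3-letter toy
# example; real PAM matrices also scale and round the log-odds values.
q = np.array([0.5, 0.3, 0.2])
M1 = np.array([[0.99333, 0.00500, 0.00167],       # toy PAM-1 matrix (rows sum to ~1)
               [0.00833, 0.98611, 0.00556],
               [0.00417, 0.00833, 0.98750]])

k = 250
Mk = np.linalg.matrix_power(M1, k)                # M[k*delta] = M[delta]^k
scores = np.log(Mk / q[np.newaxis, :])            # score_k(a,b) = log(M[k]_ab / q(b))

print(Mk.round(3))
print(scores.round(2))
```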

45 Using PAM
- Historically, researchers used PAM-250 (the only matrix published in the original paper).
- The original PAM matrices were based on a small number of proteins (circa 1978). Later versions use many more examples.
- PAM used to be the most popular scoring rule, but there are some problems with PAM matrices.

46 Problems with PAM
- The normalization step is quite arbitrary. If, for example, we define the relative mutability using the constant 50 rather than 100, we get a matrix in which 2% of the amino acids change, i.e., a direct estimate of M[2Δ] from the data. We then have two different ways to estimate the matrix M[4Δ]: M[4Δ] = M[Δ]^4, as we did before, or M[4Δ] = M[2Δ]^2, and these need not coincide.
- M[250Δ], for example, does not reflect long-period changes well.

47 BLOSUM
- Idea: use aligned ungapped regions of protein families; these are assumed to have a common ancestor. Similar ideas to PAM, but with better statistics and modeling.
- Procedure:
  - Cluster together sequences in a family whenever more than L% identical residues are shared.
  - Count the number of substitutions across different clusters in the same family.
  - Estimate frequencies as before.
- Practice: BLOSUM50 and BLOSUM62 are widely used (see the text book). They are considered state of the art nowadays.
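A rough sketch of the clustering step only, under the simplifying assumptions noted in the comments (greedy merging by pairwise percent identity, no sequence weighting, toy data); the subsequent counting of substitutions across clusters is not shown.

```python
from itertools import combinations

# A rough sketch of the BLOSUM clustering step only: greedily merge sequences
# from one gapless alignment block into clusters whenever any pair across two
# clusters exceeds L% identity. The substitution counting across clusters and
# the sequence weighting are omitted; L and the toy block are assumptions.
def percent_identity(a, b):
    return 100.0 * sum(x == y for x, y in zip(a, b)) / len(a)

def cluster(block, L=62):
    clusters = [{i} for i in range(len(block))]
    merged = True
    while merged:
        merged = False
        for c1, c2 in combinations(clusters, 2):
            if any(percent_identity(block[i], block[j]) > L for i in c1 for j in c2):
                clusters.remove(c1)
                clusters.remove(c2)
                clusters.append(c1 | c2)
                merged = True
                break
    return clusters

block = ["AKCGHT", "AKCGHS", "TRMDWA", "TRMDWS"]   # toy gapless alignment block
print(cluster(block, L=62))                        # two clusters: {0,1} and {2,3}
```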