Sequence Alignment

Sequence Comparison
Much of bioinformatics involves sequences:
- DNA sequences
- RNA sequences
- Protein sequences
We can think of these sequences as strings of letters:
- DNA & RNA: alphabet Σ of 4 letters
- Protein: alphabet Σ of 20 letters

Sequence Comparison
- Finding similarity between sequences is important for many biological questions
- Sequences evolve by mutation, deletion, duplication, addition, and movement of subsequences
- Homologous sequences (those that share a common ancestor) are relatively similar
- Algorithms try to detect similar sequences that possibly share a common function

Sequence Comparison (cont.)
For example:
- Find similar proteins: allows us to predict function & structure
- Locate similar subsequences in DNA: allows us to identify (e.g.) regulatory elements
- Locate DNA sequences that might overlap: helps in sequence assembly

Complete DNA Sequences More than 1000 complete genomes have been sequenced

Evolution

Evolution at the DNA level
Sequence edits (mutation, deletion):
…ACGGTGCAGTTACCA…
…AC----CAGTCCACCA…
Rearrangements: inversion, translocation, duplication

Evolutionary Rates
(Figure: mutations passed to the next generation; some are tolerated ("OK"), others are not ("Still OK?").)

Sequence conservation implies function
Alignment is the key to:
- Finding important regions
- Determining function
- Uncovering evolutionary events

Sequence Alignment
-AGGCTATCACCTGACCTCCAGGCCGA--TGCCC---
TAG-CTATCAC--GACCGC--GGTCGATTTGCCCGAC
Definition: Given two strings x = x1 x2 ... xM and y = y1 y2 ... yN, an alignment is an assignment of gaps to positions 0,…,M in x and 0,…,N in y, so as to line up each letter in one sequence with either a letter or a gap in the other sequence.
AGGCTATCACCTGACCTCCAGGCCGATGCCC
TAGCTATCACGACCGCGGTCGATTTGCCCGAC

What is a good alignment?
AGGCTAGTT, AGCGAAGTTT
AGGCTAGTT-
AGCGAAGTTT      6 matches, 3 mismatches, 1 gap
AGGCTA-GTT-
AG-CGAAGTTT     7 matches, 1 mismatch, 3 gaps
AGGC-TA-GTT-
AG-CG-AAGTTT    7 matches, 0 mismatches, 5 gaps

Scoring Function
Sequence edits: AGGCCTC
- Mutations: AGGACTC
- Insertions: AGGGCCTC
- Deletions: AGG.CTC
Scoring Function:
- Match: +m
- Mismatch: -s
- Gap: -d
Score F = (# matches)·m - (# mismatches)·s - (# gaps)·d
Alternative definition (minimal edit distance): "Given two strings x, y, find the minimum number of edits (insertions, deletions, mutations) needed to transform one string into the other."

Simple Scoring Rule
Score each position independently:
- Match m: +1
- Mismatch s: -1
- Indel d: -2
The score of an alignment is the sum of position scores.
General form of the scoring function:
- Match: m ≥ 0
- Mismatch: s ≤ 0
- Gap: d ≤ 0
Score F = (# matches)·m + (# mismatches)·s + (# gaps)·d

Alignments
-GCGC-ATGGATTGAGCGA
TGCGCCATTGAT-GACC-A
Three elements:
- Matches
- Mismatches
- Insertions & deletions (indels)

Example
-GCGC-ATGGATTGAGCGA
TGCGCCATTGAT-GACC-A
Score: (+1 · 13) + (-1 · 2) + (-2 · 4) = 3
GCGCATGGATTGAGCGA
TGCGCC----ATTGATGACCA--
Score: (+1 · 5) + (-1 · 6) + (-2 · 11) = -23
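To make the arithmetic above concrete, here is a minimal Python sketch (not from the slides; the function name and defaults are illustrative) that scores an explicitly given gapped alignment under the simple rule match +1, mismatch -1, indel -2:

```python
# Minimal sketch: score a given gapped alignment under the simple
# per-position rule. '-' marks a gap; names are illustrative.

def alignment_score(a, b, match=1, mismatch=-1, indel=-2):
    """Score two equal-length aligned strings."""
    assert len(a) == len(b)
    score = 0
    for x, y in zip(a, b):
        if x == '-' or y == '-':
            score += indel
        elif x == y:
            score += match
        else:
            score += mismatch
    return score

print(alignment_score("-GCGC-ATGGATTGAGCGA",
                      "TGCGCCATTGAT-GACC-A"))   # 13*1 + 2*(-1) + 4*(-2) = 3
```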

More General Scores
- The choice of +1, -1, and -2 scores is quite arbitrary
- Depending on the context, some changes are more plausible than others:
  - Exchange of an amino acid for one with similar properties (size, charge, etc.)
  - Exchange of an amino acid for one with opposite properties
- Probabilistic interpretation (e.g.): how likely is one alignment versus another?

Additive Scoring Rules
- We define a scoring function by specifying a function σ(·,·):
  - σ(x,y) is the score of replacing x by y
  - σ(x,-) is the score of deleting x
  - σ(-,x) is the score of inserting x
- The score of an alignment is the sum of position scores

How do we compute the best alignment?
AGTGCCCTGGAACCCTGACGGTGGGTCACAAAACTTCTGGA
AGTGACCTGGGAAGACCCTGACCCTGGGTCACAAAACTC
Too many possible alignments: >> 2^N (exercise)

The Optimal Score
- The optimal alignment score between two sequences is the maximal score over all alignments of these sequences.
- Computing the maximal score and actually finding an alignment that yields it are closely related tasks with similar algorithms.
- We now address these two problems.

Alignment is additive
Observation: The score of aligning
  x1……xM
  y1……yN
is additive. Say that x1…xi xi+1…xM aligns to y1…yj yj+1…yN. The two scores add up:
F(x[1:M], y[1:N]) = F(x[1:i], y[1:j]) + F(x[i+1:M], y[j+1:N])

Dynamic Programming
- There are only a polynomial number of subproblems: align x1…xi to y1…yj
- The original problem is one of the subproblems: align x1…xM to y1…yN
- Each subproblem is easily solved from smaller subproblems (we will show next)
Then, we can apply Dynamic Programming!
Let F(i, j) = optimal score of aligning x1…xi to y1…yj.
F is the DP "Matrix" or "Table" ("memoization").

Dynamic Programming (cont'd)
Notice three possible cases:
1. xi aligns to yj:
   x1……xi-1 xi
   y1……yj-1 yj
   F(i, j) = F(i-1, j-1) + m, if xi = yj; F(i-1, j-1) - s, if not
2. xi aligns to a gap:
   x1……xi-1 xi
   y1……yj    -
   F(i, j) = F(i-1, j) - d
3. yj aligns to a gap:
   x1……xi    -
   y1……yj-1 yj
   F(i, j) = F(i, j-1) - d

Dynamic Programming (cont'd)
How do we know which case is correct?
Inductive assumption: F(i, j-1), F(i-1, j), F(i-1, j-1) are optimal.
Then:
F(i, j) = max{ F(i-1, j-1) + s(xi, yj),
               F(i-1, j) - d,
               F(i, j-1) - d }
where s(xi, yj) = m if xi = yj, and -s if not.

Example
x = AGTA, y = ATA; match +1, mismatch -1, gap -1 per position
F(i, j):
        -    A    G    T    A
   -    0   -1   -2   -3   -4
   A   -1    1    0   -1   -2
   T   -2    0    0    1    0
   A   -3   -1   -1    0    2
F(1, 1) = max{F(0, 0) + s(A, A), F(0, 1) - d, F(1, 0) - d} = max{0 + 1, -1 - 1, -1 - 1} = 1
Procedure to output the alignment: follow the backpointers from (M, N).
- When diagonal, OUTPUT xi, yj
- When up, OUTPUT yj
- When left, OUTPUT xi

The Needleman-Wunsch Matrix
(One axis is indexed by x1………xM, the other by y1………yN.)
Every nondecreasing path from (0,0) to (M, N) corresponds to an alignment of the two sequences.
An optimal alignment is composed of optimal subalignments.

The Needleman-Wunsch Algorithm
1. Initialization
   a. F(0, 0) = 0
   b. F(0, j) = -j·d
   c. F(i, 0) = -i·d
2. Main iteration: filling in partial alignments
   For each i = 1……M, for each j = 1……N:
     F(i, j) = max{ F(i-1, j-1) + s(xi, yj)   [case 1]
                    F(i-1, j) - d             [case 2]
                    F(i, j-1) - d }           [case 3]
     Ptr(i, j) = DIAG if [case 1], LEFT if [case 2], UP if [case 3]
3. Termination: F(M, N) is the optimal score, and from Ptr(M, N) we can trace back the optimal alignment.
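As an illustration, here is a compact Python sketch of the recurrence and traceback above. It follows the convention of a positive per-gap penalty d that is subtracted; the function and pointer names are my own, not from the slides:

```python
# Sketch of the Needleman-Wunsch recurrence with traceback.
# m = match score, s = mismatch penalty, d = gap penalty (all positive here).

def needleman_wunsch(x, y, m=1, s=1, d=1):
    M, N = len(x), len(y)
    F = [[0] * (N + 1) for _ in range(M + 1)]
    ptr = [[None] * (N + 1) for _ in range(M + 1)]
    for i in range(1, M + 1):                       # F(i,0) = -i*d
        F[i][0], ptr[i][0] = -i * d, 'UP'
    for j in range(1, N + 1):                       # F(0,j) = -j*d
        F[0][j], ptr[0][j] = -j * d, 'LEFT'
    for i in range(1, M + 1):
        for j in range(1, N + 1):
            diag = F[i-1][j-1] + (m if x[i-1] == y[j-1] else -s)
            up = F[i-1][j] - d                      # x_i aligned to a gap
            left = F[i][j-1] - d                    # y_j aligned to a gap
            F[i][j] = max(diag, up, left)
            ptr[i][j] = 'DIAG' if F[i][j] == diag else ('UP' if F[i][j] == up else 'LEFT')
    # Trace back the pointers from (M, N) to (0, 0)
    ax, ay, i, j = [], [], M, N
    while i > 0 or j > 0:
        p = ptr[i][j]
        if p == 'DIAG':
            ax.append(x[i-1]); ay.append(y[j-1]); i, j = i-1, j-1
        elif p == 'UP':
            ax.append(x[i-1]); ay.append('-'); i -= 1
        else:
            ax.append('-'); ay.append(y[j-1]); j -= 1
    return F[M][N], ''.join(reversed(ax)), ''.join(reversed(ay))

print(needleman_wunsch("AGTA", "ATA"))   # the small example above
```

Running it on x = AGTA, y = ATA reproduces the score 2 from the filled matrix shown earlier.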

Performance Time: O(NM) Space: O(NM) Later we will cover more efficient methods

Recursive Argument
- We also need to handle the base cases of the recursion: a prefix aligned against the empty string.
- We fill the matrix for S = AAAC versus T = AGC using the recurrence rule, starting from the corner cells V[0,0], V[0,1], V[1,0], V[1,1] (e.g. aligning A against a gap scores -2).

Dynamic Programming Algorithm
- We continue to fill the matrix, cell by cell, using the recurrence rule.
- Conclusion: d(AAAC, AGC) = -1

Reconstructing the Best Alignment
- To reconstruct the best alignment, we record which case(s) in the recursive rule maximized the score.

Reconstructing the Best Alignment
- We now trace back a path that corresponds to the best alignment:
AAAC
AG-C

Reconstructing the Best Alignment
- Sometimes, more than one alignment has the best score:
AAAC    AAAC    AAAC
A-GC    -AGC    AG-C


The Needleman-Wunsch Algorithm (Global Alignment Algorithm)
Here written with per-position scores, so the gap score d ≤ 0 is added rather than a penalty subtracted:
1. Initialization
   a. F(0, 0) = 0
   b. F(0, j) = j·d
   c. F(i, 0) = i·d
2. Main iteration: filling in partial alignments
   For each i = 1……M, for each j = 1……N:
     F(i, j) = max{ F(i-1, j-1) + s(xi, yj)   [case 1]
                    F(i-1, j) + d             [case 2]
                    F(i, j-1) + d }           [case 3]
     Ptr(i, j) = DIAG if [case 1], LEFT if [case 2], UP if [case 3]
3. Termination: F(M, N) is the optimal score, and from Ptr(M, N) we can trace back the optimal alignment.

Time Complexity
- Space: O(mn)
- Time: O(mn)
  - Filling the matrix: O(mn)
  - Backtrace: O(m+n)

Space Complexity
- In real-life applications, n and m can be very large
- The space requirement of O(mn) can be too demanding
  - If m = n = 1,000, we need 1 MB of space
  - If m = n = 10,000, we need 100 MB of space
- We can afford to perform extra computation to save space
  - Looping over millions of operations takes only seconds on modern workstations
- Can we trade space for time?

Why Do We Need So Much Space?
To compute just V[n,m] = d(s[1..n], t[1..m]), we need only O(min(n,m)) space:
- Compute V(i,j) column by column, storing only two columns in memory (or row by row if rows are shorter)
Note however that this "trick" fails when we need to reconstruct the optimal alignment: traceback information requires O(mn) memory.

Bounded Dynamic Programming
Assume we know that x and y are very similar.
Assumption: # gaps(x, y) < k(N)
Then, if xi is aligned to yj, it follows that | i - j | < k(N).
We can align x and y more efficiently:
Time, Space: O(N · k(N)) << O(N^2)

Bounded Dynamic Programming
Initialization: F(i, 0), F(0, j) undefined for i, j > k
Iteration:
  For i = 1…M
    For j = max(1, i - k)…min(N, i + k)
      F(i, j) = max{ F(i-1, j-1) + s(xi, yj)
                     F(i, j-1) - d, if j > i - k(N)
                     F(i-1, j) - d, if j < i + k(N) }
Termination: same
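A sketch of this banded iteration in Python, assuming the two sequences have similar lengths and at most k gaps; cells outside the band are treated as unreachable, and the names and example strings are illustrative:

```python
# Banded ("bounded") global-alignment score: only cells with |i - j| <= k
# are filled; everything else stays at -infinity (unreachable).

NEG_INF = float('-inf')

def banded_global_score(x, y, k, m=1, s=1, d=1):
    M, N = len(x), len(y)
    F = [[NEG_INF] * (N + 1) for _ in range(M + 1)]
    F[0][0] = 0
    for j in range(1, min(N, k) + 1):
        F[0][j] = -j * d
    for i in range(1, min(M, k) + 1):
        F[i][0] = -i * d
    for i in range(1, M + 1):
        for j in range(max(1, i - k), min(N, i + k) + 1):
            best = F[i-1][j-1] + (m if x[i-1] == y[j-1] else -s)
            if F[i-1][j] != NEG_INF:
                best = max(best, F[i-1][j] - d)
            if F[i][j-1] != NEG_INF:
                best = max(best, F[i][j-1] - d)
            F[i][j] = best
    return F[M][N]

print(banded_global_score("AGTGCCCTGG", "AGTGACCTGG", k=3))
```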

A variant of the basic algorithm
Maybe it is OK to have an unlimited number of gaps at the beginning and end:
CTATCACCTGACCTCCAGGCCGATGCCCCTTCCGGC
GCGAGTTCATCTATCAC--GACCGC--GGTCG
Then, we don't want to penalize gaps at the ends.

Different types of overlaps
- Example: two overlapping "reads" from a sequencing project
- Example: search for a mouse gene within a human chromosome

The Overlap Detection variant
Changes:
1. Initialization: For all i, j: F(i, 0) = 0, F(0, j) = 0
2. Termination: F_OPT = max{ max_i F(i, N), max_j F(M, j) }


Overlap Alignment Example
s = PAWHEAE
t = HEAGAWGHEE
Scoring system:
- Match: +4
- Mismatch: -1
- Indel: -5

Overlap Alignment
- Initialization: V[i,0] = 0, V[0,j] = 0
- Recurrence: as in global alignment
- Score: maximum value in the bottom row or rightmost column of the matrix
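The change is small enough to show directly. Below is a hedged Python sketch of the overlap variant using the example's scoring (+4 / -1 / -5); it treats the indel score as a negative value added rather than a penalty subtracted, and the names are illustrative:

```python
# Overlap-detection variant: free gaps at both ends come from the zero
# initialization and from taking the best score on the last row/column.

def overlap_score(s, t, match=4, mismatch=-1, indel=-5):
    M, N = len(s), len(t)
    V = [[0] * (N + 1) for _ in range(M + 1)]   # V(i,0) = V(0,j) = 0
    for i in range(1, M + 1):
        for j in range(1, N + 1):
            V[i][j] = max(V[i-1][j-1] + (match if s[i-1] == t[j-1] else mismatch),
                          V[i-1][j] + indel,
                          V[i][j-1] + indel)
    # Best score over the bottom row and the rightmost column
    return max(max(V[M][j] for j in range(N + 1)),
               max(V[i][N] for i in range(M + 1)))

print(overlap_score("PAWHEAE", "HEAGAWGHEE"))
```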


Overlap Alignment Example
The best overlap is:
PAWHEAE
HEAGAWGHEE
Pay attention! A different scoring system could yield a different result, such as:
---PAW-HEAE
HEAGAWGHEE-

The local alignment problem
Given two strings x = x1……xM, y = y1……yN, find substrings x', y' whose similarity (optimal global alignment value) is maximum.
x = aaaacccccggggtta
y = ttcccgggaaccaacc

Why local alignment? Examples:
- Genes are shuffled between genomes
- Portions of proteins (domains) are often conserved

Cross-species genome similarity
- 98% of genes are conserved between any two mammals
- >70% average similarity in protein sequence
(Figure: the "atoh" enhancer aligned in human, mouse, rat, and fugu fish.)

The Smith-Waterman algorithm
Idea: ignore badly aligning regions.
Modifications to Needleman-Wunsch:
Initialization: F(0, j) = F(i, 0) = 0
Iteration:
  F(i, j) = max{ 0
                 F(i-1, j) - d
                 F(i, j-1) - d
                 F(i-1, j-1) + s(xi, yj) }

The Smith-Waterman algorithm
Termination:
1. If we want the best local alignment:
   F_OPT = max_{i,j} F(i, j); find F_OPT and trace back
2. If we want all local alignments scoring > t:
   For all i, j, find F(i, j) > t and trace back? This is complicated by overlapping local alignments.
   Waterman-Eggert '87: find all non-overlapping local alignments with minimal recalculation of the DP matrix.
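For the single best local alignment (case 1), a minimal Python sketch of Smith-Waterman with traceback might look as follows; the names are illustrative and the scores default to match +1, mismatch -1, gap -1:

```python
# Smith-Waterman sketch: clamp scores at 0, keep the global maximum cell,
# and trace back from it until a 0 cell is reached.

def smith_waterman(x, y, m=1, s=1, d=1):
    M, N = len(x), len(y)
    F = [[0] * (N + 1) for _ in range(M + 1)]
    best, best_ij = 0, (0, 0)
    for i in range(1, M + 1):
        for j in range(1, N + 1):
            F[i][j] = max(0,
                          F[i-1][j-1] + (m if x[i-1] == y[j-1] else -s),
                          F[i-1][j] - d,
                          F[i][j-1] - d)
            if F[i][j] > best:
                best, best_ij = F[i][j], (i, j)
    # Trace back from the best cell until the score drops to 0
    ax, ay = [], []
    i, j = best_ij
    while i > 0 and j > 0 and F[i][j] > 0:
        diag = F[i-1][j-1] + (m if x[i-1] == y[j-1] else -s)
        if F[i][j] == diag:
            ax.append(x[i-1]); ay.append(y[j-1]); i, j = i-1, j-1
        elif F[i][j] == F[i-1][j] - d:
            ax.append(x[i-1]); ay.append('-'); i -= 1
        else:
            ax.append('-'); ay.append(y[j-1]); j -= 1
    return best, ''.join(reversed(ax)), ''.join(reversed(ay))

print(smith_waterman("aaaacccccggggtta", "ttcccgggaaccaacc"))
```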

Local Alignment
New option:
- We can start a new match instead of extending a previous alignment (the 0 case corresponds to an alignment of empty suffixes)

Local Alignment Example
s = TAATA
t = TACTAA
(The matrix is filled with the local-alignment recurrence and the best-scoring cell is traced back.)


Alignment with gaps
Observation: insertions and deletions often occur in blocks longer than a single nucleotide.
Consequence: the standard scoring of alignments studied so far, which gives a constant penalty d per gap unit, does not model this phenomenon well; hence, a better gap score model is needed.
Question: can you think of an appropriate change to the scoring system for gaps?

Scoring the gaps more accurately
Current model: a gap of length n incurs penalty n·d
However, gaps usually occur in bunches.
Convex gap penalty function γ(n): for all n, γ(n + 1) - γ(n) ≤ γ(n) - γ(n - 1)

Convex gap dynamic programming
Initialization: same
Iteration:
  F(i, j) = max{ F(i-1, j-1) + s(xi, yj)
                 max_{k=0…i-1} F(k, j) - γ(i - k)
                 max_{k=0…j-1} F(i, k) - γ(j - k) }
Termination: same
Running time: O(N^2·M) (assume N > M)
Space: O(NM)

Compromise: affine gaps
γ(n) = d + (n - 1)·e, where d = gap open penalty, e = gap extend penalty
To compute the optimal alignment, at position (i, j) we need to "remember" the best score if the gap is open and the best score if it is not:
- F(i, j): score of aligning x1…xi to y1…yj if xi aligns to yj
- G(i, j): score if xi aligns to a gap after yj
- H(i, j): score if yj aligns to a gap after xi
- V(i, j): best score of aligning x1…xi to y1…yj

Needleman-Wunsch with affine gaps
Why do we need two matrices?
1. xi aligns to yj, and xi+1 then opens a gap:
   x1……xi-1 xi xi+1
   y1……yj-1 yj  -        add -d
2. xi aligns to a gap, and xi+1 extends it:
   x1……xi-1 xi xi+1
   y1……yj    -   -        add -e

Needleman-Wunsch with affine gaps
Why do we need matrices F, G, H?
- xi aligns to yj, then xi+1 opens a gap:   G(i+1, j) = F(i, j) - d
- xi aligns to a gap, then xi+1 extends it: G(i+1, j) = G(i, j) - e
Because, perhaps, G(i, j) < V(i, j) (it is best to align xi to yj if we were aligning only x1…xi to y1…yj and not the rest of x, y), but on the contrary G(i, j) - e > V(i, j) - d (i.e., had we "fixed" our decision that xi aligns to yj, we would regret it at the next step when aligning x1…xi+1 to y1…yj).

Needleman-Wunsch with affine gaps
Initialization: V(i, 0) = -d - (i - 1)·e, V(0, j) = -d - (j - 1)·e
Iteration:
  V(i, j) = max{ F(i, j), G(i, j), H(i, j) }
  F(i, j) = V(i-1, j-1) + s(xi, yj)
  G(i, j) = max{ V(i-1, j) - d, G(i-1, j) - e }
  H(i, j) = max{ V(i, j-1) - d, H(i, j-1) - e }
Termination: V(M, N) has the best alignment
Time? Space?
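A three-matrix (Gotoh-style) sketch of this affine-gap recurrence, with gap-open penalty d and gap-extend penalty e; the parameter values, names, and example strings are illustrative, not from the slides:

```python
# Affine-gap global alignment score with three state matrices.
# NEG_INF marks impossible states.

NEG_INF = float('-inf')

def affine_gap_score(x, y, m=1, s=1, d=3, e=1):
    M, N = len(x), len(y)
    V = [[NEG_INF] * (N + 1) for _ in range(M + 1)]
    G = [[NEG_INF] * (N + 1) for _ in range(M + 1)]  # x_i aligned to a gap
    H = [[NEG_INF] * (N + 1) for _ in range(M + 1)]  # y_j aligned to a gap
    V[0][0] = 0
    for i in range(1, M + 1):                        # gamma(i) = d + (i-1)*e
        G[i][0] = V[i][0] = -(d + (i - 1) * e)
    for j in range(1, N + 1):
        H[0][j] = V[0][j] = -(d + (j - 1) * e)
    for i in range(1, M + 1):
        for j in range(1, N + 1):
            match = V[i-1][j-1] + (m if x[i-1] == y[j-1] else -s)
            G[i][j] = max(V[i-1][j] - d, G[i-1][j] - e)   # open or extend a gap in y
            H[i][j] = max(V[i][j-1] - d, H[i][j-1] - e)   # open or extend a gap in x
            V[i][j] = max(match, G[i][j], H[i][j])
    return V[M][N]

print(affine_gap_score("ACCGGTGCCCAGG", "ACCAGGTGGG"))
```

This runs in O(MN) time and space, answering the slide's closing question.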

To generalize a bit…
…think of how you would compute the optimal alignment with this gap function γ(n) in time O(MN).

Remark: Edit Distance
Instead of speaking about the score of an alignment, one often talks about an edit distance between two sequences, defined as the cost of the cheapest set of edit operations needed to transform one sequence into the other.
- The cheapest operation is "no change"
- The next cheapest operation is "replace"
- The most expensive operation is "add space"
The goal is then to minimize the cost of operations, which is equivalent to what we did above (maximizing the score).
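For comparison, here is a minimal sketch of the unit-cost edit-distance recurrence (minimization instead of maximization); the names are illustrative:

```python
# Unit-cost edit distance: minimum number of insertions, deletions,
# and replacements needed to turn s into t.

def edit_distance(s, t):
    M, N = len(s), len(t)
    D = [[0] * (N + 1) for _ in range(M + 1)]
    for i in range(M + 1):
        D[i][0] = i                 # delete all of s[0:i]
    for j in range(N + 1):
        D[0][j] = j                 # insert all of t[0:j]
    for i in range(1, M + 1):
        for j in range(1, N + 1):
            D[i][j] = min(D[i-1][j-1] + (0 if s[i-1] == t[j-1] else 1),  # no change / replace
                          D[i-1][j] + 1,                                  # delete s[i]
                          D[i][j-1] + 1)                                  # insert t[j]
    return D[M][N]

print(edit_distance("AGGCCTC", "AGGACTC"))   # 1 replacement
```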

Where do scoring rules come from?
We have defined an additive scoring function by specifying a function σ(·,·) such that:
- σ(x,y) is the score of replacing x by y
- σ(x,-) is the score of deleting x
- σ(-,x) is the score of inserting x
But how do we come up with the "correct" scores?
Answer: by encoding experience of what constitutes similar sequences for the task at hand.

Probabilistic Interpretation of Scores
- We define the scoring function via the log-odds ratio σ(a,b) = log( p(a,b) / (q(a)·q(b)) ), where p(a,b) is the frequency of the pair under an alignment (homology) model and q(·) is the background frequency under a random model.
- Then, the score of an alignment is the log-ratio between the two models:
  - Score > 0: the alignment model is more likely
  - Score < 0: the random model is more likely

Modeling Assumptions
- It is important to note that this interpretation depends on our modeling assumptions!
- For example, if we assume that the letter in each position depends on the letter in the preceding position, then the likelihood ratio will have a different form.

Constructing Scoring Rules
The formula suggests how to construct a scoring rule:
- Estimate p(·,·) and q(·) from the data
- Compute σ(a,b) based on the estimated p(·,·) and q(·)
How to estimate these parameters is the subject matter of parameter estimation in statistics.
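A toy sketch of this construction: the frequencies below are invented purely for illustration, and the assumed formula is the standard log-odds score σ(a,b) = log( p(a,b) / (q(a)·q(b)) ):

```python
# Turn estimated frequencies into log-odds scores.
from math import log

q = {'A': 0.25, 'C': 0.25, 'G': 0.25, 'T': 0.25}   # background frequencies
p = {('A', 'A'): 0.15, ('A', 'G'): 0.03}           # joint frequencies in true alignments (toy values)

def sigma(a, b):
    return log(p[(a, b)] / (q[a] * q[b]))

print(sigma('A', 'A'))   # > 0: more likely under the alignment model
print(sigma('A', 'G'))   # < 0: more likely under the random model
```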

Substitution matrices
- Several matrices are based on this scoring scheme, differing in how the statistics are computed
- The two major families are PAM and BLOSUM
- PAM 1 corresponds to statistics computed from global alignments of proteins with at most 1% mutations
- Other PAM matrices (up to PAM 250) are extrapolated by matrix products
- BLOSUM 62 corresponds to statistics from local alignments with 62% similarity
- Other BLOSUM matrices are built from other alignments
Rough correspondence:
PAM100 ==> BLOSUM90
PAM120 ==> BLOSUM80
PAM160 ==> BLOSUM60
PAM200 ==> BLOSUM52
PAM250 ==> BLOSUM45

Linear-Space Alignment

Subsequences and Substrings
Definition: A string x' is a substring of a string x if x = u x' v for some prefix string u and suffix string v (equivalently, x' = xi…xj for some 1 ≤ i ≤ j ≤ |x|).
A string x' is a subsequence of a string x if x' can be obtained from x by deleting 0 or more letters (x' = xi1…xik for some 1 ≤ i1 < … < ik ≤ |x|).
Note: a substring is always a subsequence.
Example: x = abracadabra
y = cadabr: substring
z = brcdbr: subsequence, not substring

Hirschberg's algorithm
Given a set of strings x, y, …, a common subsequence is a string u that is a subsequence of all of them.
Longest common subsequence:
- Given strings x = x1 x2 … xM, y = y1 y2 … yN
- Find the longest common subsequence u = u1 … uk
Algorithm:
  F(i, j) = max{ F(i-1, j)
                 F(i, j-1)
                 F(i-1, j-1) + [1 if xi = yj; 0 otherwise] }
  Ptr(i, j) = (same as in Needleman-Wunsch)
Termination: trace back from Ptr(M, N), and prepend a letter to u whenever Ptr(i, j) = DIAG and F(i-1, j-1) < F(i, j).
Hirschberg's original algorithm solves this in linear space.

Introduction: compute the optimal score
It is easy to compute F(M, N) in linear space:
  Allocate(column[1])
  Allocate(column[2])
  For i = 1…M
    If i > 1, then
      Free(column[i - 2])
      Allocate(column[i])
    For j = 1…N
      F(i, j) = …
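In code, the same idea (here keeping two rows rather than two columns) might look like this minimal sketch; the names are illustrative:

```python
# Compute F(M, N) keeping only the previous and current row in memory:
# linear space, but no traceback information.

def global_score_linear_space(x, y, m=1, s=1, d=1):
    M, N = len(x), len(y)
    prev = [-j * d for j in range(N + 1)]          # row i-1
    for i in range(1, M + 1):
        cur = [-i * d] + [0] * N
        for j in range(1, N + 1):
            cur[j] = max(prev[j-1] + (m if x[i-1] == y[j-1] else -s),
                         prev[j] - d,
                         cur[j-1] - d)
        prev = cur                                  # drop the old row
    return prev[N]

print(global_score_linear_space("AGTA", "ATA"))    # same score as the full matrix
```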

Linear-space alignment
To compute both the optimal score and the optimal alignment: divide & conquer approach.
Notation: x^r, y^r denote the reverse of x, y. E.g. x = accgg; x^r = ggcca.
F^r(i, j): optimal score of aligning x^r[1..i] & y^r[1..j]; this is the same as aligning x[M-i+1..M] & y[N-j+1..N].

Linear-space alignment
Lemma (assume M is even): F(M, N) = max_{k=0…N} ( F(M/2, k) + F^r(M/2, N - k) )
Example:
ACC-GGTGCCCAGGACTG--CAT
ACCAGGTG----GGACTGGGCAG
k* = 8

Linear-space alignment
Now, using 2 columns of space, we can compute, for k = 0…N, F(M/2, k) and F^r(M/2, N - k), PLUS the backpointers out of the middle column.

Linear-space alignment
Now, we can find k* maximizing F(M/2, k) + F^r(M/2, N - k).
Also, we can trace the path exiting column M/2 from k*.

Linear-space alignment
Iterate this procedure to the left and to the right of (M/2, k*)!

Linear-space alignment
Hirschberg's linear-space algorithm:
MEMALIGN(l, l', r, r'):   (aligns x_l…x_l' with y_r…y_r')
1. Let h = ⌊(l' - l)/2⌋
2. Find (in time O((l' - l)·(r' - r)), space O(r' - r)) the optimal path L_h entering column h - 1 and exiting column h.
   Let k1 = position at column h - 2 where L_h enters
       k2 = position at column h + 1 where L_h exits
3. MEMALIGN(l, h - 2, r, k1)
4. Output L_h
5. MEMALIGN(h + 1, l', k2, r')
Top-level call: MEMALIGN(1, M, 1, N)

Linear-space alignment
Time and space analysis of Hirschberg's algorithm:
To compute the optimal path at the middle column, for a box of size M × N:
  Space: 2N
  Time: cMN, for some constant c
Then, the left and right calls cost c( M/2 × k* + M/2 × (N - k*) ) = cMN/2.
All recursive calls together cost:
  Total time: cMN + cMN/2 + cMN/4 + … = 2cMN = O(MN)
  Total space: O(N) for the computation, O(N + M) to store the optimal alignment

Heuristic Local Aligners
1. The basic indexing & extension technique
2. Indexing: techniques to improve sensitivity (pairs of words, patterns)
3. Systems for local alignment

Indexing-based local alignment
- Dictionary: all words of length k (~10)
- Alignment initiated between words with alignment score ≥ T (typically T = k)
- Alignment: ungapped extensions until the score drops below a statistical threshold
- Output: all local alignments with score > statistical threshold
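A bare-bones sketch of the indexing step, using exact k-mer matches as seeds (i.e. T = k); a real aligner such as BLAST scores words against a substitution matrix and then extends each seed, which this sketch does not attempt. The names and example strings are illustrative:

```python
# Build a dictionary of all k-mers in the database sequence, then scan the
# query for exact word hits that would seed (ungapped) extensions.
from collections import defaultdict

def kmer_index(db, k):
    index = defaultdict(list)
    for pos in range(len(db) - k + 1):
        index[db[pos:pos + k]].append(pos)
    return index

def find_seeds(query, db, k=10):
    index = kmer_index(db, k)
    # Each (query position, db position) pair is a candidate for extension
    return [(qpos, dbpos)
            for qpos in range(len(query) - k + 1)
            for dbpos in index.get(query[qpos:qpos + k], [])]

db    = "ATAACGGACGACTGATTACACTGATTCTTAC"
query = "GACTGATTACAC"
print(find_seeds(query, db, k=8))
```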

Indexing-based local alignment: extensions
- Gapped extensions around each seed: extension with gaps continues until the score falls more than C below the best score seen so far
- Output:
  GTAAGGTCCAGT
  GTTAGGTC-AGT

Sensitivity-Speed Tradeoff
Long words (k = 15) are faster but less sensitive; short words (k = 7) are more sensitive but slower. (Figure from Kent WJ, Genome Research 2002.)

Sensitivity-Speed Tradeoff
Methods to improve sensitivity/speed:
1. Using pairs of words
2. Using inexact words
3. Patterns: non-consecutive positions
   ……ATAACGGACGACTGATTACACTGATTCTTAC……
   ……GGCACGGACCAGTGACTACTCTGATTCCCAG……
   ……ATAACGGACGACTGATTACACTGATTCTTAC……
   ……GGCGCCGACGAGTGATTACACAGATTGCCAG……

Measured improvement Kent WJ, Genome Research 2002

Non-consecutive words: Patterns
Patterns increase the likelihood of at least one match within a long conserved region.
On a 100-long, 70% conserved region, the slide compares consecutive versus non-consecutive (spaced) positions in terms of the expected number of hits and Prob[at least one hit]: the spaced pattern achieves a higher probability of at least one hit.
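The comparison can be estimated by simulation. The sketch below is my own illustration: it assumes each position of a 100-long region is conserved independently with probability 0.7, and compares a consecutive weight-11 word with a PatternHunter-style spaced seed of the same weight:

```python
# Estimate Prob[at least one hit] for a consecutive word vs a spaced seed
# on a 100-long, 70%-conserved region (Monte Carlo simulation).
import random

CONSECUTIVE = "1" * 11              # 11 consecutive match positions
SPACED      = "111010010100110111"  # spaced seed of weight 11, span 18 (assumed example)

def hit_prob(pattern, length=100, p_match=0.7, trials=5000):
    need = [i for i, c in enumerate(pattern) if c == "1"]
    span = len(pattern)
    hits = 0
    for _ in range(trials):
        conserved = [random.random() < p_match for _ in range(length)]
        if any(all(conserved[start + i] for i in need)
               for start in range(length - span + 1)):
            hits += 1
    return hits / trials

print("consecutive:", hit_prob(CONSECUTIVE))
print("spaced:     ", hit_prob(SPACED))
```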

Advantage of Patterns
(Figure: a spaced pattern spanning 11 positions versus a consecutive word of 10 positions.)

Multiple patterns
K patterns:
- Takes K times longer to scan
- Patterns can complement one another
Computational problem:
- Given: a model (probability distribution) for homology between two regions
- Find: the best set of K patterns that maximizes Prob(at least one match)
(Buhler et al. RECOMB 2003; Sun & Buhler RECOMB 2004)
How long does it take to search the query?

Variants of BLAST
- NCBI BLAST: search the universe
- MEGABLAST: optimized to align very similar sequences; works best when k = 4i ≥ 16; linear gap penalty
- WU-BLAST (Wash U BLAST): very good optimizations; good set of features & command-line arguments
- BLAT: faster, less sensitive than BLAST; good for aligning huge numbers of queries
- CHAOS: uses inexact k-mers, sensitive
- PatternHunter: uses patterns instead of k-mers
- BlastZ: uses patterns, good for finding genes
- Typhon: uses multiple alignments to improve the sensitivity/speed tradeoff