1. Sequence Alignment

-AGGCTATCACCTGACCTCCAGGCCGA--TGCCC---
TAG-CTATCAC--GACCGC--GGTCGATTTGCCCGAC

Definition: Given two strings x = x_1 x_2 ... x_M and y = y_1 y_2 ... y_N, an alignment is an assignment of gaps to positions 0, ..., M in x and 0, ..., N in y, so as to line up each letter in one sequence with either a letter or a gap in the other sequence.

AGGCTATCACCTGACCTCCAGGCCGATGCCC
TAGCTATCACGACCGCGGTCGATTTGCCCGAC
2. Sequence Comparison

Much of bioinformatics involves sequences:
- DNA sequences
- RNA sequences
- Protein sequences
We can think of these sequences as strings of letters:
- DNA and RNA: alphabet Σ of 4 letters
- Protein: alphabet Σ of 20 letters
3. Sequence Comparison

- Finding similarity between sequences is important for many biological questions.
- During evolution, biological sequences change (mutation, deletion, duplication, insertion, movement of subsequences, ...).
- Homologous sequences (those that share a common ancestor) are relatively similar.
- Algorithms try to detect similar sequences that possibly share a common function.
4. Sequence Comparison (cont.)

For example:
- Find similar proteins: allows us to predict function and structure.
- Locate similar subsequences in DNA: allows us to identify, e.g., regulatory elements.
- Locate DNA sequences that might overlap: helps in sequence assembly.
5. Sequence Alignment

Input: two sequences over the same alphabet.
Output: an alignment of the two sequences.
Example:
GCGCATGGATTGAGCGA
TGCGCCATTGATGACCA
A possible alignment:
-GCGC-ATGGATTGAGCGA
TGCGCCATTGAT-GACC-A
6. Alignments

-GCGC-ATGGATTGAGCGA
TGCGCCATTGAT-GACC-A
Three elements:
- Matches
- Mismatches
- Insertions and deletions (indels)
7. Choosing Alignments

There are many possible alignments. For example, compare:
-GCGC-ATGGATTGAGCGA
TGCGCCATTGAT-GACC-A
to
------GCGCATGGATTGAGCGA
TGCGCC----ATTGATGACCA--
Which one is better?
8. Scoring Alignments

Intuition:
- Similar sequences evolved from a common ancestor.
- Evolution changed the sequences from this ancestral sequence by mutations:
  - Substitution: one letter replaced by another
  - Deletion: deletion of a letter
  - Insertion: insertion of a letter
- Scoring of sequence similarity should examine how many, and which, operations took place.
9. Simple Scoring Rule

Score each position independently:
- Match: m = +1
- Mismatch: s = -1
- Indel: d = -2
The score of an alignment is the sum of the position scores.
General scoring function: match m >= 0, mismatch s <= 0, gap d <= 0.
Score F = (#matches) * m + (#mismatches) * s + (#gaps) * d
10. Example

-GCGC-ATGGATTGAGCGA
TGCGCCATTGAT-GACC-A
Score: (+1 x 13) + (-1 x 2) + (-2 x 4) = 3

------GCGCATGGATTGAGCGA
TGCGCC----ATTGATGACCA--
Score: (+1 x 5) + (-1 x 6) + (-2 x 12) = -25
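The position-by-position rule translates directly into code. A minimal sketch (the function name `score_alignment` is ours, not from the slides); counting the columns mechanically confirms the arithmetic, e.g. the second alignment has 12 gap columns:

```python
def score_alignment(a, b, match=1, mismatch=-1, gap=-2):
    """Sum position scores of an alignment given as two equal-length gapped strings."""
    assert len(a) == len(b), "aligned strings must have equal length"
    total = 0
    for x, y in zip(a, b):
        if x == '-' or y == '-':
            total += gap        # indel
        elif x == y:
            total += match      # match
        else:
            total += mismatch   # mismatch
    return total

print(score_alignment("-GCGC-ATGGATTGAGCGA",
                      "TGCGCCATTGAT-GACC-A"))  # 13 matches, 2 mismatches, 4 gaps -> 3
```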
11. More General Scores

- The choice of +1, -1, and -2 is quite arbitrary.
- Depending on the context, some changes are more plausible than others:
  - exchange of an amino acid for one with similar properties (size, charge, etc.)
  - exchange of an amino acid for one with opposite properties
- Probabilistic interpretation: e.g., how likely is one alignment versus another?
12. Additive Scoring Rules

We define a scoring function by specifying a function σ(·, ·):
- σ(x, y) is the score of replacing x by y
- σ(x, -) is the score of deleting x
- σ(-, x) is the score of inserting x
The score of an alignment is the sum of the position scores.
13. How Do We Compute the Best Alignment?

AGTGCCCTGGAACCCTGACGGTGGGTCACAAAACTTCTGGA
AGTGACCTGGGAAGACCCTGACCCTGGGTCACAAAACTC
An alignment is a path from cell (0, 0) to cell (M, N) in the dynamic programming matrix. There are too many possible alignments to enumerate: O(2^(M+N)).
14. The Optimal Score

The optimal alignment score of two sequences is the maximal score over all alignments of these sequences.
- Computing the maximal score and actually finding an alignment that achieves it are closely related tasks with similar algorithms.
- We now address these two problems.
15. Alignment Is Additive

Observation: the score of aligning x_1 ... x_M with y_1 ... y_N is additive. If the alignment splits so that x_1 ... x_i aligns with y_1 ... y_j and x_{i+1} ... x_M aligns with y_{j+1} ... y_N, the two scores add up:
V(x[1..M], y[1..N]) = V(x[1..i], y[1..j]) + V(x[i+1..M], y[j+1..N])
16. Dynamic Programming

We will now describe a dynamic programming algorithm. Suppose we wish to align
x_1 ... x_M
y_1 ... y_N
Let V(i, j) = the optimal score of aligning x_1 ... x_i with y_1 ... y_j.
17. Dynamic Programming

Notice three possible cases for the last aligned pair:
1. x_i aligns to y_j:
   V(i, j) = V(i-1, j-1) + m if x_i = y_j, or V(i-1, j-1) + s if not
2. x_i aligns to a gap:
   V(i, j) = V(i-1, j) + d
3. y_j aligns to a gap:
   V(i, j) = V(i, j-1) + d
18. Dynamic Programming

How do we know which case is correct? Inductive assumption: V(i, j-1), V(i-1, j), and V(i-1, j-1) are optimal. Then:
V(i, j) = max { V(i-1, j-1) + σ(x_i, y_j), V(i-1, j) + d, V(i, j-1) + d }
where σ(x_i, y_j) = m if x_i = y_j, and s otherwise.
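The recurrence can be sketched as a single cell update; this is a minimal illustration (names ours), assuming the slide's default scores m = +1, s = -1, d = -2:

```python
def cell_update(V, i, j, xi, yj, match=1, mismatch=-1, d=-2):
    """Compute V[i][j] from its three already-filled neighbors,
    as in the recurrence V(i,j) = max(diag + sigma, up + d, left + d)."""
    sigma = match if xi == yj else mismatch
    return max(V[i-1][j-1] + sigma,  # case 1: x_i aligned to y_j
               V[i-1][j] + d,        # case 2: x_i aligned to a gap
               V[i][j-1] + d)        # case 3: y_j aligned to a gap

# tiny hand-filled corner of a matrix: V[0][*] and V[*][0] are gap rows
V = [[0, -2], [-2, None]]
V[1][1] = cell_update(V, 1, 1, 'A', 'A')
print(V[1][1])  # match: 0 + 1 = 1
```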
19. Recursive Argument

Using our recursive argument, we get the above recurrence for V: each cell V[i+1, j+1] is computed from its three neighbors V[i, j], V[i, j+1], and V[i+1, j].
20. Recursive Argument

We also need to handle the base cases in the recursion: V(0, 0) = 0, V(i, 0) = i * d (a prefix of S aligned entirely against gaps), and V(0, j) = j * d. We then fill the matrix using the recurrence rule.
[DP matrix figure for S = AAAC versus T = AGC]
21. Dynamic Programming Algorithm

We continue to fill the matrix using the recurrence rule.
[DP matrix for S = AAAC versus T = AGC, partially filled]
22. Dynamic Programming Algorithm

The first cells illustrate the rule: V[1, 0] = -2 (A aligned against a gap), V[0, 1] = -2 (a gap aligned against A), and V[1, 1] = +1 (A aligned to A, a match).
23. Dynamic Programming Algorithm

[DP matrix for S = AAAC versus T = AGC, filled further]
24. Dynamic Programming Algorithm

Conclusion: the optimal alignment score is d(AAAC, AGC) = -1.
[completed DP matrix for S = AAAC versus T = AGC]
25. Reconstructing the Best Alignment

To reconstruct the best alignment, we record which case(s) in the recursive rule maximized the score.
26. Reconstructing the Best Alignment

We then trace back a path that corresponds to the best alignment:
AAAC
AG-C
27. Reconstructing the Best Alignment

Sometimes more than one alignment achieves the best score:
AAAC    AAAC    AAAC
A-GC    -AGC    AG-C
28. The Needleman-Wunsch Matrix

x_1 ... x_M versus y_1 ... y_N
Every nondecreasing path from (0, 0) to (M, N) corresponds to an alignment of the two sequences. An optimal alignment is composed of optimal subalignments.
29. The Needleman-Wunsch Algorithm (Global Alignment)

1. Initialization:
   F(0, 0) = 0
   F(0, j) = j * d
   F(i, 0) = i * d
2. Main iteration (filling in partial alignments):
   For each i = 1..M, for each j = 1..N:
     F(i, j) = max of
       F(i-1, j-1) + s(x_i, y_j)   [case 1]
       F(i-1, j) + d               [case 2]
       F(i, j-1) + d               [case 3]
     Ptr(i, j) = DIAG in case 1, UP in case 2, LEFT in case 3
3. Termination: F(M, N) is the optimal score, and tracing back from Ptr(M, N) recovers an optimal alignment.
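The three steps above translate almost line for line into code. A sketch (function name ours), using the slide's scores m = +1, s = -1, d = -2 as defaults:

```python
def needleman_wunsch(x, y, match=1, mismatch=-1, d=-2):
    """Global alignment with linear gap penalty d; returns (score, aligned_x, aligned_y)."""
    M, N = len(x), len(y)
    F = [[0] * (N + 1) for _ in range(M + 1)]    # F[i][j]: best score for x[:i] vs y[:j]
    ptr = [[None] * (N + 1) for _ in range(M + 1)]
    for i in range(1, M + 1):
        F[i][0], ptr[i][0] = i * d, 'UP'         # leading gaps in y
    for j in range(1, N + 1):
        F[0][j], ptr[0][j] = j * d, 'LEFT'       # leading gaps in x
    for i in range(1, M + 1):
        for j in range(1, N + 1):
            s = match if x[i-1] == y[j-1] else mismatch
            F[i][j], ptr[i][j] = max((F[i-1][j-1] + s, 'DIAG'),
                                     (F[i-1][j] + d, 'UP'),
                                     (F[i][j-1] + d, 'LEFT'))
    # trace back from (M, N) to (0, 0)
    ax, ay, i, j = [], [], M, N
    while i > 0 or j > 0:
        p = ptr[i][j]
        if p == 'DIAG':
            ax.append(x[i-1]); ay.append(y[j-1]); i, j = i - 1, j - 1
        elif p == 'UP':
            ax.append(x[i-1]); ay.append('-'); i -= 1
        else:
            ax.append('-'); ay.append(y[j-1]); j -= 1
    return F[M][N], ''.join(reversed(ax)), ''.join(reversed(ay))

print(needleman_wunsch("AAAC", "AGC"))  # optimal score is -1, matching slide 24
```

Ties in the `max` are broken arbitrarily, so any one of the equally optimal alignments of slide 27 may be returned.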
30. Time Complexity

Space: O(mn). Time: O(mn); filling the matrix takes O(mn), and the traceback takes O(m + n).
31. Space Complexity

In real-life applications, n and m can be very large, and the O(mn) space requirement can be too demanding:
- If m = n = 1,000, we need about 1 MB of space.
- If m = n = 10,000, we need about 100 MB of space.
We can afford to perform extra computation to save space: looping over a million operations takes well under a second on modern workstations. Can we trade space for time?
32. Why Do We Need So Much Space?

To compute just the score V[n, m] = d(s[1..n], t[1..m]), we need only O(min(n, m)) space: compute V(i, j) column by column, storing only two columns in memory (or row by row, if rows are shorter).
[two-column evaluation of the matrix for AAAC versus AGC]
Note, however, that this trick fails when we need to reconstruct the optimizing alignment: the traceback information requires O(mn) memory.
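The two-column trick (here phrased as two rows) can be sketched as follows; names are ours, and only the score is returned, not the alignment:

```python
def nw_score_linear_space(x, y, match=1, mismatch=-1, d=-2):
    """Optimal global alignment score in O(min(|x|, |y|)) space."""
    if len(y) > len(x):
        x, y = y, x                      # make y the shorter sequence; rows have |y|+1 cells
    prev = [j * d for j in range(len(y) + 1)]
    for i in range(1, len(x) + 1):
        cur = [i * d] + [0] * len(y)     # row i, starting with the gap-only prefix
        for j in range(1, len(y) + 1):
            s = match if x[i-1] == y[j-1] else mismatch
            cur[j] = max(prev[j-1] + s,  # diagonal
                         prev[j] + d,    # up
                         cur[j-1] + d)   # left
        prev = cur                       # discard everything older than the previous row
    return prev[-1]

print(nw_score_linear_space("AAAC", "AGC"))  # -1, same score as the full matrix
```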
33. Space-Efficient Version: Outline

Input: sequences s[1..n] and t[1..m] to be aligned.
Idea: divide and conquer.
- If n = 1, align s[1..1] and t[1..m] directly.
- Else, find a position (n/2, j) at which some best alignment crosses the midpoint, construct alignments A = s[1..n/2] vs t[1..j] and B = s[n/2+1..n] vs t[j+1..m], and return AB.
34. Finding the Midpoint

The score of the best alignment that passes through (n/2, j) equals
d(s[1..n/2], t[1..j]) + d(s[n/2+1..n], t[j+1..m]).
Thus we need to compute these two quantities for all values of j.
35. Finding the Midpoint (Algorithm)

Define:
- F[i, j] = d(s[1..i], t[1..j])
- B[i, j] = d(s[i+1..n], t[j+1..m])
Then F[i, j] + B[i, j] is the score of the best alignment through (i, j). We compute F[i, j] as we did before, and B[i, j] in exactly the same manner, going "backward" from B[n, m]. This step requires only linear space, because no traceback information needs to be kept.
36. Time Complexity Analysis

Time to find a midpoint: cnm (c a constant). The recursive subproblems have sizes (n/2, j) and (n/2, m-j-1), hence
T(n, m) = cnm + T(n/2, j) + T(n/2, m-j-1).
Lemma: T(n, m) <= 2cnm.
Proof (by induction): T(n, m) <= cnm + 2c(n/2)j + 2c(n/2)(m-j-1) <= 2cnm.
Thus the time complexity is linear in the size of the problem: at worst twice the cost of the regular solution.
37. A Variant of the NW Algorithm

Maybe it is OK to have an unlimited number of gaps at the beginning and end:
----------CTATCACCTGACCTCCAGGCCGATGCCCCTTCCGGC
GCGAGTTCATCTATCAC--GACCGC--GGTCG--------------
In that case, we do not want to penalize gaps at the ends.
38. Different Types of Overlaps

[figure: possible overlap configurations of two sequences]
39. The Overlap Detection Variant

For x_1 ... x_M versus y_1 ... y_N, the changes are:
1. Initialization: for all i, j: V(i, 0) = 0 and V(0, j) = 0
2. Termination: V_OPT = max( max_i V(i, N), max_j V(M, j) )
40. Overlap Alignment Example

s = PAWHEAE
t = HEAGAWGHEE
Scoring system:
- Match: +4
- Mismatch: -1
- Indel: -5
41. Overlap Alignment

- Initialization: V[i, 0] = 0, V[0, j] = 0
- Recurrence: as in global alignment
- Score: the maximum value in the bottom row or rightmost column of the matrix
44. Overlap Alignment Example

The best overlap is:
PAWHEAE------
---HEAGAWGHEE
Pay attention: a different scoring system could yield a different result, such as:
---PAW-HEAE
HEAGAWGHEE-
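Under this example's scoring (match +4, mismatch -1, indel -5), the overlap shown aligns HEAE against HEAG, scoring 4 + 4 + 4 - 1 = 11; the end gaps are free. A sketch of the variant (function name ours):

```python
def overlap_score(s, t, match=4, mismatch=-1, d=-5):
    """Overlap alignment score: leading end gaps are free (V(i,0) = V(0,j) = 0),
    and trailing end gaps are free (maximum taken over the last row and column)."""
    n, m = len(s), len(t)
    V = [[0] * (m + 1) for _ in range(n + 1)]   # zero first row and column
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            sc = match if s[i-1] == t[j-1] else mismatch
            V[i][j] = max(V[i-1][j-1] + sc, V[i-1][j] + d, V[i][j-1] + d)
    # best score ending on the bottom row or rightmost column
    return max(max(row[m] for row in V), max(V[n]))

print(overlap_score("PAWHEAE", "HEAGAWGHEE"))  # 11, for the overlap HEAE / HEAG
```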
45. The Local Alignment Problem

Given two strings x = x_1 ... x_M and y = y_1 ... y_N, find substrings x', y' whose similarity (optimal global alignment value) is maximal.
Example: x = aaaacccccgggg, y = cccgggaaccaacc
46. Why Local Alignment?

- Genes are shuffled between genomes.
- Portions of proteins (domains) are often conserved.
47. Cross-Species Genome Similarity

- 98% of genes are conserved between any two mammals.
- There is >70% average similarity in protein sequence.

hum_a : GTTGACAATAGAGGGTCTGGCAGAGGCTC--------------------- @ 57331/400001
mus_a : GCTGACAATAGAGGGGCTGGCAGAGGCTC--------------------- @ 78560/400001
rat_a : GCTGACAATAGAGGGGCTGGCAGAGACTC--------------------- @ 112658/369938
fug_a : TTTGTTGATGGGGAGCGTGCATTAATTTCAGGCTATTGTTAACAGGCTCG @ 36008/68174

hum_a : CTGGCCGCGGTGCGGAGCGTCTGGAGCGGAGCACGCGCTGTCAGCTGGTG @ 57381/400001
mus_a : CTGGCCCCGGTGCGGAGCGTCTGGAGCGGAGCACGCGCTGTCAGCTGGTG @ 78610/400001
rat_a : CTGGCCCCGGTGCGGAGCGTCTGGAGCGGAGCACGCGCTGTCAGCTGGTG @ 112708/369938
fug_a : TGGGCCGAGGTGTTGGATGGCCTGAGTGAAGCACGCGCTGTCAGCTGGCG @ 36058/68174

hum_a : AGCGCACTCTCCTTTCAGGCAGCTCCCCGGGGAGCTGTGCGGCCACATTT @ 57431/400001
mus_a : AGCGCACTCG-CTTTCAGGCCGCTCCCCGGGGAGCTGAGCGGCCACATTT @ 78659/400001
rat_a : AGCGCACTCG-CTTTCAGGCCGCTCCCCGGGGAGCTGCGCGGCCACATTT @ 112757/369938
fug_a : AGCGCTCGCG------------------------AGTCCCTGCCGTGTCC @ 36084/68174

hum_a : AACACCATCATCACCCCTCCCCGGCCTCCTCAACCTCGGCCTCCTCCTCG @ 57481/400001
mus_a : AACACCGTCGTCA-CCCTCCCCGGCCTCCTCAACCTCGGCCTCCTCCTCG @ 78708/400001
rat_a : AACACCGTCGTCA-CCCTCCCCGGCCTCCTCAACCTCGGCCTCCTCCTCG @ 112806/369938
fug_a : CCGAGGACCCTGA------------------------------------- @ 36097/68174

The "atoh" enhancer in human, mouse, rat, and fugu fish.
48. Local Alignment

As before, we use dynamic programming. We now want V[i, j] to record the best alignment of a suffix of s[1..i] with a suffix of t[1..j]. How should the recurrence rule change? It is the same as before, but with an option to start afresh. The result is called the Smith-Waterman algorithm.
49. The Smith-Waterman Algorithm

Idea: ignore badly aligning regions.
Modifications to Needleman-Wunsch:
Initialization: V(0, j) = V(i, 0) = 0
Iteration: V(i, j) = max { 0, V(i-1, j) + d, V(i, j-1) + d, V(i-1, j-1) + σ(x_i, y_j) }
50. The Smith-Waterman Algorithm

Termination:
1. If we want the best local alignment: V_OPT = max_{i,j} V(i, j)
2. If we want all local alignments scoring > t: for all i, j, find V(i, j) > t and trace back. This is complicated by overlapping local alignments.
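The modified recurrence is a one-line change to the global algorithm: clamp each cell at 0 and track the global maximum. A sketch (name ours), returning only the best local score:

```python
def smith_waterman(x, y, match=1, mismatch=-1, d=-2):
    """Best local alignment score: cells are clamped at 0 (start afresh),
    and the optimum is the maximum over all cells, not the corner."""
    M, N = len(x), len(y)
    V = [[0] * (N + 1) for _ in range(M + 1)]
    best = 0
    for i in range(1, M + 1):
        for j in range(1, N + 1):
            s = match if x[i-1] == y[j-1] else mismatch
            V[i][j] = max(0,                    # start a new local alignment here
                          V[i-1][j-1] + s,
                          V[i-1][j] + d,
                          V[i][j-1] + d)
            best = max(best, V[i][j])
    return best

print(smith_waterman("TAATA", "TACTAA"))  # 3: the shared substring TAA
```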
51. Local Alignment

The new option: we can start a new match instead of extending a previous alignment (an alignment of empty suffixes scores 0).
52. Local Alignment Example

s = TAATA
t = TACTAA
[DP matrix for s versus t, filled step by step]
57. Alignment with Gaps

Observation: insertions and deletions often occur in blocks longer than a single nucleotide.
Consequence: the standard alignment scoring studied so far, which charges a constant penalty d per gap unit, does not model this phenomenon well; a better gap score model is needed.
Question: can you think of an appropriate change to the scoring system for gaps?
58. Scoring the Gaps More Accurately

Current model: a gap of length n incurs penalty n * d. However, gaps usually occur in bunches.
Convex gap penalty function γ(n): for all n, γ(n + 1) - γ(n) <= γ(n) - γ(n - 1).
59. Convex Gap Alignment

Initialization: same as before.
Iteration:
V(i, j) = max { V(i-1, j-1) + s(x_i, y_j),
                max_{k=0..i-1} V(k, j) - γ(i - k),
                max_{k=0..j-1} V(i, k) - γ(j - k) }
Termination: same.
Running time: O(N^2 M) (assuming N > M). Space: O(NM).
60. Compromise: Affine Gaps

γ(n) = d + (n - 1) e, where d is the gap-open penalty and e is the gap-extend penalty.
To compute the optimal alignment, at position (i, j) we need to "remember" both the best score if a gap is open and the best score if it is not:
- F(i, j): score of aligning x_1 ... x_i to y_1 ... y_j if x_i aligns to y_j
- G(i, j): score if x_i aligns to a gap after y_j
- H(i, j): score if y_j aligns to a gap after x_i
- V(i, j): best score of aligning x_1 ... x_i to y_1 ... y_j
61. Needleman-Wunsch with Affine Gaps

Why do we need the extra matrices? Consider extending an alignment by one column:
1. x_i aligns to y_j:
   x_1 ... x_{i-1} x_i
   y_1 ... y_{j-1} y_j
2. x_i aligns to a gap:
   x_1 ... x_{i-1} x_i
   y_1 ... y_j     -
When a gap is newly opened we add -d; when it extends an existing gap we add only -e.
62. Needleman-Wunsch with Affine Gaps

Initialization: V(i, 0) = -d - (i - 1) e; V(0, j) = -d - (j - 1) e
Iteration:
V(i, j) = max { F(i, j), G(i, j), H(i, j) }
F(i, j) = V(i-1, j-1) + s(x_i, y_j)
G(i, j) = max { V(i-1, j) - d, G(i-1, j) - e }   (x_i aligned to a gap)
H(i, j) = max { V(i, j-1) - d, H(i, j-1) - e }   (y_j aligned to a gap)
Termination: as in the basic algorithm.
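The three-matrix recurrence (often attributed to Gotoh) can be sketched as follows; names and the default penalties d = 2, e = 1 are ours, with d and e given as positive costs that are subtracted, matching the slide's convention:

```python
NEG = float('-inf')

def gotoh(x, y, match=1, mismatch=-1, d=2, e=1):
    """Global alignment score with affine gap penalty gamma(n) = d + (n-1)*e."""
    M, N = len(x), len(y)
    V = [[NEG] * (N + 1) for _ in range(M + 1)]  # best overall
    G = [[NEG] * (N + 1) for _ in range(M + 1)]  # x_i aligned to a gap
    H = [[NEG] * (N + 1) for _ in range(M + 1)]  # y_j aligned to a gap
    V[0][0] = 0
    for i in range(1, M + 1):
        G[i][0] = V[i][0] = -(d + (i - 1) * e)   # leading gap in y
    for j in range(1, N + 1):
        H[0][j] = V[0][j] = -(d + (j - 1) * e)   # leading gap in x
    for i in range(1, M + 1):
        for j in range(1, N + 1):
            G[i][j] = max(V[i-1][j] - d, G[i-1][j] - e)   # open vs extend
            H[i][j] = max(V[i][j-1] - d, H[i][j-1] - e)   # open vs extend
            s = match if x[i-1] == y[j-1] else mismatch
            V[i][j] = max(V[i-1][j-1] + s, G[i][j], H[i][j])
    return V[M][N]

print(gotoh("ACGT", "AT"))  # -1: the gap of length 2 costs d + e = 3, not 2d = 4
```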
63. To Generalize a Little...

...think of how you would compute an optimal alignment with this gap function in time O(MN).
[figure: a piecewise-linear gap penalty function γ(n)]
64. Remark: Edit Distance

Instead of speaking about the score of an alignment, one often speaks of an edit distance between two sequences, defined as the "cost" of the "cheapest" set of edit operations needed to transform one sequence into the other.
- The cheapest operation is "no change".
- The next cheapest operation is "replace".
- The most expensive operation is "add space" (insert or delete).
The goal is now to minimize the total cost of operations, which is essentially what we have been doing, with maximization replaced by minimization.
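With unit costs this minimization is the classic Levenshtein distance; the DP is the mirror image of the alignment recurrence (max becomes min, scores become costs). A minimal sketch (name ours):

```python
def edit_distance(s, t):
    """Levenshtein distance: minimum number of insertions, deletions,
    and substitutions transforming s into t."""
    m, n = len(s), len(t)
    D = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        D[i][0] = i                      # delete all of s[:i]
    for j in range(n + 1):
        D[0][j] = j                      # insert all of t[:j]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if s[i-1] == t[j-1] else 1   # "no change" vs "replace"
            D[i][j] = min(D[i-1][j-1] + cost,     # match/substitute
                          D[i-1][j] + 1,          # delete s[i]
                          D[i][j-1] + 1)          # insert t[j]
    return D[m][n]

print(edit_distance("AAAC", "AGC"))  # 2: one substitution and one deletion
```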
65. Where Do Scoring Rules Come From?

We defined an additive scoring function by specifying a function σ(·, ·) such that:
- σ(x, y) is the score of replacing x by y
- σ(x, -) is the score of deleting x
- σ(-, x) is the score of inserting x
But how do we come up with the "correct" scores? Answer: by encoding experience of which sequences are similar for the task at hand.
66. Probabilistic Interpretation of Scores

We define the scoring function via the log-odds ratio
σ(a, b) = log [ p(a, b) / ( q(a) q(b) ) ],
where p(a, b) is the probability of seeing a aligned to b under a model of related sequences and q(·) is the background letter frequency under a random model. The score of an alignment is then the log-ratio between the two models:
- Score > 0: the "related" model is more likely
- Score < 0: the random model is more likely
67. Modeling Assumptions

It is important to note that this interpretation depends on our modeling assumptions. For example, if we assume that the letter at each position depends on the letter at the preceding position, then the likelihood ratio takes a different form.
68. Constructing Scoring Rules

The formula suggests how to construct a scoring rule:
- Estimate p(·, ·) and q(·) from the data.
- Compute σ(a, b) based on the estimated p(·, ·) and q(·).
How to estimate these parameters is the subject matter of parameter estimation in statistics.
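The construction is a one-liner once the frequencies are estimated. A sketch, where the numbers are hypothetical estimates, not real amino-acid statistics:

```python
import math

def log_odds_score(p_ab, q_a, q_b):
    """Log-odds substitution score sigma(a, b) = log2( p(a,b) / (q(a) q(b)) ).
    p_ab: pair frequency in trusted alignments; q_a, q_b: background frequencies."""
    return math.log2(p_ab / (q_a * q_b))

# hypothetical estimates: the pair (A, A) is observed with frequency 0.20 in
# trusted alignments, while A has background frequency 0.30
print(round(log_odds_score(0.20, 0.30, 0.30), 2))  # positive: A-A favored over chance
```

Pairs seen more often than chance get positive scores; pairs seen less often get negative scores, exactly the sign convention of slide 66.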
69. Substitution Matrices

- Several matrices are based on this scoring scheme, differing in how the statistics are computed.
- The two major families are PAM and BLOSUM.
- PAM 1 corresponds to statistics computed from global alignments of proteins with at most 1% mutations.
- Higher PAM matrices (up to PAM 250) are extrapolated by matrix products.
- BLOSUM 62 corresponds to statistics from local alignments at 62% similarity.
- Other BLOSUM matrices are built from other alignment sets. Rough correspondences:
  PAM100 ≈ BLOSUM90, PAM120 ≈ BLOSUM80, PAM160 ≈ BLOSUM60, PAM200 ≈ BLOSUM52, PAM250 ≈ BLOSUM45.