CS262 Lecture 15 (Win06, Batzoglou)
Rapid Global Alignments: How to align genomic sequences in (more or less) linear time
Main Idea
Genomic regions of interest contain islands of similarity, such as genes.
1. Find local alignments
2. Chain an optimal subset of them
3. Refine/complete the alignment
Systems that use this idea to various degrees: MUMmer, GLASS, DIALIGN, CHAOS, AVID, LAGAN, TBA, and others.
Saving cells in DP
1. Find local alignments
2. Chain them: O(N log N), via longest increasing subsequence (L.I.S.)
3. Restricted DP
Methods to Chain Local Alignments
Sparse Dynamic Programming: O(N log N)
The Problem: Find a Chain of Local Alignments
Chaining a local alignment at (x, y) to one at (x', y') requires x < x' and y < y'.
Each local alignment has a weight.
Find the chain with the highest total weight.
Sparse Dynamic Programming – L.I.S.
Let the input be w: w_1, …, w_n

INITIALIZATION:
L: 1-indexed array, L[1] ← w_1
B: 0-indexed array of backpointers; B[0] = 0
P: array used for traceback
// L[j]: smallest last element w_i of a j-long increasing subsequence seen so far

ALGORITHM:
for i = 2 to n {
    Find j such that L[j] < w[i] ≤ L[j+1]    // binary search in L
    L[j+1] ← w[i]
    B[j+1] ← i
    P[i] ← B[j]
}

That's it! Running time? O(n log n): each of the n steps is a binary search plus O(1) updates.
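A minimal runnable sketch of this algorithm in Python, with bisect standing in for the binary search; the array names follow the slide, but the function name and 0-indexing are mine:

```python
import bisect

def longest_increasing_subsequence(w):
    """O(n log n) LIS with traceback, following the slide's L/B/P arrays
    (0-indexed here: L[j] is the smallest tail of a (j+1)-long increasing
    subsequence seen so far)."""
    L = []              # tails of increasing subsequences
    B = []              # B[j]: index into w of the element stored in L[j]
    P = [-1] * len(w)   # P[i]: predecessor of w[i] in the best subsequence
    for i, x in enumerate(w):
        j = bisect.bisect_left(L, x)        # smallest j with x <= L[j]
        if j == len(L):
            L.append(x); B.append(i)        # extend the longest subsequence
        else:
            L[j] = x; B[j] = i              # improve the tail of length j+1
        P[i] = B[j - 1] if j > 0 else -1
    out, i = [], B[-1]                      # trace back from the longest tail
    while i != -1:
        out.append(w[i]); i = P[i]
    return out[::-1]
```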
Sparse LCS expressed as LIS
Create a sequence w.
Every matching point (i, j) is inserted into w as follows: for each column j = 1…m, insert into w the points (i, j) in decreasing row i order.
The 11 example points are inserted in the order given.
a = (y, x), b = (y', x') can be chained iff a is before b in w, and y < y'.
[Figure: dot plot of the matching points between sequences x and y]
Sparse LCS expressed as LIS
Create a sequence w:
w = (4,2) (3,3) (10,5) (2,5) (8,6) (1,6) (3,7) (4,8) (7,9) (5,9) (9,10)
Consider now w's elements as ordered lexicographically, where (y, x) < (y', x') if y < y'.
Claim: An increasing subsequence of w is a common subsequence of x and y.
[Figure: the same dot plot]
Sparse Dynamic Programming for LIS
Example: w = (4,2) (3,3) (10,5) (2,5) (8,6) (1,6) (3,7) (4,8) (7,9) (5,9) (9,10)
L after each insertion:
1. (4,2)
2. (3,3)
3. (3,3) (10,5)
4. (2,5) (10,5)
5. (2,5) (8,6)
6. (1,6) (8,6)
7. (1,6) (3,7)
8. (1,6) (3,7) (4,8)
9. (1,6) (3,7) (4,8) (7,9)
10. (1,6) (3,7) (4,8) (5,9)
11. (1,6) (3,7) (4,8) (5,9) (9,10)
Longest common subsequence: s = 4, 24, 3, 11, 18
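To connect the two slides: the LIS routine sketched earlier can be run on the row (y) coordinates of w, since the order of w already enforces the column constraint, so an increasing run of y values is a chainable set of matches. This reproduces the length-5 result above (the five rows correspond to the five matched characters 4, 24, 3, 11, 18):

```python
# Reuses longest_increasing_subsequence() from the earlier sketch.
w = [(4, 2), (3, 3), (10, 5), (2, 5), (8, 6), (1, 6),
     (3, 7), (4, 8), (7, 9), (5, 9), (9, 10)]
ys = [y for (y, x) in w]
print(longest_increasing_subsequence(ys))   # -> [1, 3, 4, 5, 9]
```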
Sparse DP for rectangle chaining
1, …, N: rectangles
(h_j, l_j): y-coordinates of rectangle j
w(j): weight of rectangle j
V(j): optimal score of a chain ending in j
L: list of triplets (l_j, V(j), j)
- L is sorted by l_j, smallest (top) to largest (bottom) value
- L is implemented as a balanced binary tree
[Figure: a rectangle spanning y-coordinates h (top) to l (bottom)]
Sparse DP for rectangle chaining
Main idea: sweep through the x-coordinates.
- To the right of b, anything chainable to a is chainable to b.
- Therefore, if V(b) > V(a), rectangle a is "useless"; remove it.
- As a result, the rectangles j kept in L are sorted with increasing l_j coordinates and increasing V(j) scores.
Sparse DP for rectangle chaining
Go through the rectangle x-coordinates, from lowest to highest:
1. When on the leftmost end of rectangle i:
   a. j: rectangle in L with largest l_j < h_i
   b. V(i) = w(i) + V(j)
2. When on the rightmost end of i:
   a. k: rectangle in L with largest l_k ≤ l_i
   b. If V(i) ≥ V(k):
      i. INSERT (l_i, V(i), i) in L
      ii. REMOVE all (l_j, V(j), j) with V(j) ≤ V(i) and l_j ≥ l_i
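A runnable sketch of this sweep, assuming y grows downward as in the figure (so h_i is a rectangle's top and l_i its bottom). A plain sorted list with bisect stands in for the balanced binary tree, so inserts and deletes here cost O(N) rather than O(log N); the function name and tuple layout are mine:

```python
import bisect

def chain_rectangles(rects):
    """rects: list of (x_left, x_right, h, l, weight) with h < l.
    Returns V, where V[i] is the best chain score ending in rectangle i."""
    events = []
    for i, (x1, x2, h, l, w) in enumerate(rects):
        events.append((x1, 0, i))   # 0: leftmost end
        events.append((x2, 1, i))   # 1: rightmost end
    events.sort()

    V = [0.0] * len(rects)
    keys, vals = [], []             # parallel lists sorted by l_j: the "tree"
    for x, is_right, i in events:
        _, _, h, l, w = rects[i]
        if not is_right:
            # Best predecessor j: largest l_j < h_i.
            p = bisect.bisect_left(keys, h) - 1
            V[i] = w + (vals[p][0] if p >= 0 else 0.0)
        else:
            # Largest l_k <= l_i; insert i only if it is not dominated.
            p = bisect.bisect_right(keys, l) - 1
            if p < 0 or V[i] >= vals[p][0]:
                q = bisect.bisect_left(keys, l)
                keys.insert(q, l)
                vals.insert(q, (V[i], i))
                # Remove now-useless entries: V(j) <= V(i) and l_j >= l_i.
                while q + 1 < len(keys) and vals[q + 1][0] <= V[i]:
                    del keys[q + 1]; del vals[q + 1]
    return V
```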
Example
[Figure: five rectangles with weights 1:5, 2:6, 3:3, 4:4, 5:2, swept left to right along x-coordinates 2, 5, 6, 9, 10, 11, 12, 14, 15, 16]
Time Analysis
1. Sorting the x-coordinates takes O(N log N).
2. Going through the x-coordinates: N steps.
3. Each of the N steps requires O(log N) time:
   - Searching L takes log N.
   - Inserting into L takes log N.
   - All deletions are consecutive, so log N per deletion; each element is deleted at most once, so N log N for all deletions.
Recall that INSERT, DELETE, and SUCCESSOR take O(log N) time in a balanced binary search tree.
Examples
[Figure: Human Genome Browser screenshots A, B, C]
Whole-genome alignment: Rat, Mouse, Human
In the next 2 years, 20+ mammals, and many other animals, will be sequenced and aligned.
Protein Classification
PDB Growth
[Figure: number of new PDB structures over time]
Protein classification
- The number of protein sequences grows exponentially.
- The number of solved structures grows exponentially.
- The number of new folds identified is very small (and close to constant).
Protein classification can:
- Generate an overview of structure types
- Detect similarities (evolutionary relationships) between protein sequences

SCOP release 1.67:
Class                              # folds   # superfamilies   # families
All alpha proteins                   202          342             550
All beta proteins                    141          280             529
Alpha and beta proteins (a/b)        130          213             593
Alpha and beta proteins (a+b)        260          386             650
Multi-domain proteins                 40           40              55
Membrane & cell surface               42           82              91
Small proteins                        72          104             162
Total                                887         1447            2630

(Morten Nielsen, CBS, BioCentrum, DTU)
Protein world
[Figure: protein structure classification hierarchy: fold, superfamily, family]
(Morten Nielsen, CBS, BioCentrum, DTU)
Structure Classification Databases
- SCOP – manual classification (A. Murzin): scop.berkeley.edu
- CATH – semi-manual classification (C. Orengo): www.biochem.ucl.ac.uk/bsm/cath
- FSSP – automatic classification (L. Holm): www.ebi.ac.uk/dali/fssp/fssp.html
(Morten Nielsen, CBS, BioCentrum, DTU)
Protein Classification
Given a new protein, can we place it in its "correct" position within an existing protein hierarchy (proteins, family, superfamily, fold)?
Methods:
- BLAST / PSI-BLAST
- Profile HMMs
- Supervised machine learning methods
[Figure: hierarchy with a new protein to be placed]
PSI-BLAST
Given a sequence query x and a database D:
1. Find all pairwise alignments of x to sequences in D.
2. Collect all matches of x to y with some minimum significance.
3. Construct a position-specific matrix M (the profile). Each sequence y is given a weight so that many similar sequences cannot have much influence on a position (Henikoff & Henikoff 1994).
4. Using the matrix M, search D for more matches.
5. Iterate 1–4 until convergence.
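A minimal sketch of step 3 under stated assumptions: the sequence weights are the position-based weights of Henikoff & Henikoff 1994 (at each column, a residue occurring s times among r distinct residues contributes 1/(rs) to its sequence's weight), and the background-proportional pseudocount here is a simple stand-in for PSI-BLAST's actual pseudocount scheme:

```python
import math
from collections import Counter

def henikoff_weights(alignment):
    """Position-based sequence weights (Henikoff & Henikoff 1994)."""
    w = [0.0] * len(alignment)
    for col in range(len(alignment[0])):
        counts = Counter(seq[col] for seq in alignment)
        r = len(counts)                      # distinct residues in this column
        for i, seq in enumerate(alignment):
            w[i] += 1.0 / (r * counts[seq[col]])
    total = sum(w)
    return [wi / total for wi in w]

def position_specific_matrix(alignment, background, pseudo=0.1):
    """Log-odds scores M[pos][a] from weighted residue frequencies."""
    weights = henikoff_weights(alignment)
    M = []
    for col in range(len(alignment[0])):
        freq = {a: pseudo * q for a, q in background.items()}
        for wi, seq in zip(weights, alignment):
            freq[seq[col]] = freq.get(seq[col], 0.0) + wi
        z = sum(freq.values())
        M.append({a: math.log((freq[a] / z) / background[a]) for a in background})
    return M
```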
Profiles & Profile HMMs
PSI-BLAST builds a profile.
Profile HMMs are more elaborate versions of a profile; intuitively, a profile that models gaps.
Profile HMMs
- Each M state has a position-specific pre-computed substitution table.
- Each I state has position-specific gap penalties (and in principle can have its own emission distributions).
- Each D state also has position-specific gap penalties.
- In principle, D-D transitions can also be customized per position.
[Figure: profile HMM for protein family F: match states M_1 … M_m, insert states I_0 … I_m, delete states D_1 … D_m, plus BEGIN and END]
Profile HMMs
- Transition between match states: α_{M(i)M(i+1)}
- Transitions between match and insert states: α_{M(i)I(i)}, α_{I(i)M(i+1)}
- Transition within an insert state: α_{I(i)I(i)}
- Transitions between match and delete states: α_{M(i)D(i+1)}, α_{D(i)M(i+1)}
- Transition within the delete states: α_{D(i)D(i+1)}
- Emission of amino acid b at a state S: ε_S(b)
[Figure: the same profile HMM diagram]
Profile HMMs
- Transition probabilities ~ frequency of the transition in the alignment
- Emission probabilities ~ frequency of the emission in the alignment
- Pseudocounts are usually introduced
[Figure: the same profile HMM diagram]
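A small illustration of the frequency-plus-pseudocounts estimate; the Laplace-style add-one rule is just one simple choice of pseudocount, and the counts below are made up:

```python
def estimate_probs(counts, pseudocount=1.0):
    """Turn raw transition (or emission) counts from a training alignment
    into probabilities, adding a pseudocount to every outcome."""
    total = sum(counts.values()) + pseudocount * len(counts)
    return {k: (c + pseudocount) / total for k, c in counts.items()}

# Hypothetical counts of transitions out of match state M3:
print(estimate_probs({"M4": 40, "I3": 3, "D4": 2}))
# {'M4': 0.8541..., 'I3': 0.0833..., 'D4': 0.0625}
```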
Alignment of a protein to a profile HMM
To align sequence x_1 … x_n to a profile HMM, we find the most likely alignment with the Viterbi DP algorithm.
Define:
- V_j^M(i): score of best alignment of x_1 … x_i to the HMM, ending in x_i being emitted from M_j
- V_j^I(i): score of best alignment of x_1 … x_i to the HMM, ending in x_i being emitted from I_j
- V_j^D(i): score of best alignment of x_1 … x_i to the HMM, ending in D_j (x_i is the last character emitted before D_j)
Denote by q_a the frequency of amino acid a in a 'random' protein.
You can fill in the details! (Or read Durbin et al., Chapter 5.)
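For reference, one standard form of the recurrences, following Durbin et al., Chapter 5, in log-odds units relative to the random model q (insert-state emissions are often set equal to q, which makes the first term of V_j^I vanish):

```latex
V^M_j(i) = \log\frac{e_{M_j}(x_i)}{q_{x_i}} + \max\begin{cases}
V^M_{j-1}(i-1) + \log \alpha_{M_{j-1} M_j} \\
V^I_{j-1}(i-1) + \log \alpha_{I_{j-1} M_j} \\
V^D_{j-1}(i-1) + \log \alpha_{D_{j-1} M_j}
\end{cases}

V^I_j(i) = \log\frac{e_{I_j}(x_i)}{q_{x_i}} + \max\begin{cases}
V^M_j(i-1) + \log \alpha_{M_j I_j} \\
V^I_j(i-1) + \log \alpha_{I_j I_j} \\
V^D_j(i-1) + \log \alpha_{D_j I_j}
\end{cases}

V^D_j(i) = \max\begin{cases}
V^M_{j-1}(i) + \log \alpha_{M_{j-1} D_j} \\
V^I_{j-1}(i) + \log \alpha_{I_{j-1} D_j} \\
V^D_{j-1}(i) + \log \alpha_{D_{j-1} D_j}
\end{cases}
```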
How to build a profile HMM
Resources on the web
- HMMer – a free profile HMM software: http://hmmer.wustl.edu/
- SAM – another free profile HMM software: http://www.cse.ucsc.edu/research/compbio/sam.html
- PFAM – database of alignments and HMMs for protein families and domains: http://www.sanger.ac.uk/Software/Pfam/
- SCOP – a structural classification of proteins: http://scop.berkeley.edu/data/scop.b.html
Classification with Profile HMMs
[Figure: hierarchy (fold, superfamily, family) with a new protein to classify]
Classification with Profile HMMs
How generative models work:
- Training examples are positive: sequences known to be members of the family.
- Parameters are tuned with a priori knowledge.
- The model assigns a probability to any given protein sequence.
- Idea: sequences from the family (hopefully) yield a higher probability than sequences outside the family.
Log-likelihood ratio as score:
L(X) = log [P(X | H1) P(H1)] / [P(X | H0) P(H0)] = log [P(H1 | X) P(X)] / [P(H0 | X) P(X)] = log P(H1 | X) / P(H0 | X)
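A sketch of scoring with this ratio, assuming the usual choices the slide leaves open: H1 is the family model (e.g. a profile HMM's log-likelihood, passed in as a function) and H0 is an i.i.d. background model, so log P(X | H0) = Σ_i log q_{x_i}. The constant prior term log P(H1)/P(H0) is omitted since it does not affect ranking:

```python
import math

def llr_score(x, family_log_prob, background):
    """L(X) up to the constant prior term: the family model's
    log-likelihood minus the i.i.d. background log-likelihood."""
    log_p0 = sum(math.log(background[a]) for a in x)
    return family_log_prob(x) - log_p0
```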
Generative Models
[Figure-only slides]
Discriminative Models – SVM
- Decision rule: v^T x > 0
- If x_1 … x_n are the training examples, sign(Σ_i α_i x_i^T x) "decides" where x falls
- Train the α_i to achieve the best margin
- Two equivalent views: a large margin for |v| < 1, or a margin of 1 for small |v|
[Figure: separating hyperplane v with margin; red side is v^T x > 0]
k-mer based SVMs for protein classification
For a given word size k and mismatch tolerance l, define
K(X, Y) = # distinct k-long word occurrences with ≤ l mismatches
Define the normalized kernel K'(X, Y) = K(X, Y) / sqrt(K(X, X) K(Y, Y))
An SVM can be learned by supplying this kernel function.
Example: let k = 3, l = 1, X = ABACARDI, Y = ABRADABI. Then K(X, Y) = 4 and K'(X, Y) = 4/sqrt(7·7) = 4/7.
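A sketch of this kernel under one reading of the definition that reproduces the slide's numbers: a cross-sequence pair of k-mer occurrences is counted once, and within a single sequence pairs are counted unordered (so K(X, X) = 7 here: six exact self-matches plus the one-mismatch pair ABA/ACA):

```python
import math

def kmers(s, k):
    return [s[i:i + k] for i in range(len(s) - k + 1)]

def close(u, v, l):
    """True if k-mers u and v differ in at most l positions."""
    return sum(a != b for a, b in zip(u, v)) <= l

def K(X, Y, k=3, l=1):
    xs, ys = kmers(X, k), kmers(Y, k)
    if X == Y:   # unordered pairs, so each within-sequence match counts once
        return sum(close(xs[i], xs[j], l)
                   for i in range(len(xs)) for j in range(i, len(xs)))
    return sum(close(u, v, l) for u in xs for v in ys)

def K_norm(X, Y, k=3, l=1):
    return K(X, Y, k, l) / math.sqrt(K(X, X, k, l) * K(Y, Y, k, l))

print(K("ABACARDI", "ABRADABI"))       # 4
print(K_norm("ABACARDI", "ABRADABI"))  # 0.5714... = 4/7
```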
SVMs will find a few support vectors
After training, the SVM has determined a small set of sequences, the support vectors, that need to be compared with the query sequence X.
[Figure: separating hyperplane v with support vectors on the margin]
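Concretely, classification then costs one kernel evaluation per support vector. A sketch using K_norm from the previous block, with the support vectors, coefficients alpha, labels, and bias b assumed to come from training:

```python
def svm_decision(X, support_seqs, alphas, labels, b=0.0):
    """Kernel SVM decision value for query sequence X: only the support
    vectors (training sequences with alpha_i > 0) enter the sum."""
    s = sum(a * y * K_norm(sv, X)
            for a, y, sv in zip(alphas, labels, support_seqs))
    return s + b   # classify by the sign of the returned value
```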
Benchmarks