Slide 1: Sequence Homology
M.M. Dalkilic, PhD, SoI
Indiana University, Bloomington, IN
Class IV, Monday, September 08, 2008
Slide 2: Outline
New programming and written homework Friday
New reading posted on the website
Readings: [R], Chapter 5
The most important aspect of bioinformatics: homology search through sequence similarity (cont'd)
Sequence alignment
Slide 3: Introduction to Entropy
Shannon's theory of quantifying communication
Can be derived axiomatically
Simple model
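For reference (the formulas themselves are not reproduced in this transcript), the standard Shannon definitions for a message set M with probability function p are the surprise (self-information) of a single message, I(m) = -\log_2 p(m), and the entropy of the set, H(M) = -\sum_{m \in M} p(m) \log_2 p(m), i.e. the expected surprise measured in bits.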
Slides 4-6: Introduction to Entropy
An increase in surprise means an increase in information
A decrease in surprise means a decrease in information
With each message set M we associate a probability function
Encoding of M
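The surprise/information relationship can be made concrete with a small Python sketch (illustrative only; the message set and probabilities below are hypothetical, not from the slides):

```python
import math

# Hypothetical message set M with an associated probability function p.
p = {"a": 0.5, "b": 0.25, "c": 0.125, "d": 0.125}

# Surprise (self-information) of one message, in bits: I(m) = -log2 p(m).
def surprise(prob):
    return -math.log2(prob)

# Entropy of the message set: the expected surprise, H(M) = sum p(m) * I(m).
entropy = sum(prob * surprise(prob) for prob in p.values())

for m, prob in p.items():
    print(f"I({m}) = {surprise(prob):.3f} bits")
print(f"H(M) = {entropy:.3f} bits")  # 1.75 bits for this distribution
```

Rarer messages carry more surprise, and the entropy is the probability-weighted average of those surprises (1.75 bits for this particular distribution).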
Slide 7: Introduction to Entropy
These properties can be proved formally later; it is not complicated.
We will look at multivariate entropy, conditional entropy, and mutual information later, as we examine the internals of BLAST.
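For reference, the quantities named here have the standard definitions H(X,Y) = -\sum_{x,y} p(x,y) \log_2 p(x,y) (joint entropy), H(X \mid Y) = H(X,Y) - H(Y) (conditional entropy), and I(X;Y) = H(X) - H(X \mid Y) (mutual information).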
Slide 8: NW (Needleman-Wunsch) and SW (Smith-Waterman) Alignment Algorithms
Initialization phase (the initial values of the recurrences)
Fill-in (bottom-up recursion)
Traceback
This reduces the complexity to O(nm)
Cost? We cannot guarantee the biologically best alignment, only a decent one (at best); this is why it is mandatory to manually inspect alignments
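The complexity claim can be made explicit: for sequences of lengths n and m, the dynamic-programming table has (n+1)(m+1) cells, and each cell is filled in constant time from its three neighbors, so the fill-in takes O(nm) time and space, versus the exponentially many candidate alignments a brute-force search would have to score. A code sketch of the first two phases appears after the next slide.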
Slide 9: NW Initialization and Fill-in
Initialize the top row and left column by placing the negative gap distance from the start of each sequence
Fill-in
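A minimal Python sketch of these two phases, assuming a simple match/mismatch score and a linear gap penalty (the function name and parameter values are illustrative, not from the lecture):

```python
def nw_fill(seq1, seq2, match=1, mismatch=-1, gap=-2):
    """Initialization and fill-in phases of Needleman-Wunsch (global alignment)."""
    n, m = len(seq1), len(seq2)
    F = [[0] * (m + 1) for _ in range(n + 1)]
    # Initialization: top row and left column hold the cumulative gap cost,
    # i.e. the negative distance from the start of each sequence.
    for i in range(1, n + 1):
        F[i][0] = i * gap
    for j in range(1, m + 1):
        F[0][j] = j * gap
    # Fill-in: each cell takes the best of diagonal (match/mismatch),
    # up (gap in seq2), or left (gap in seq1).
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            s = match if seq1[i - 1] == seq2[j - 1] else mismatch
            F[i][j] = max(F[i - 1][j - 1] + s,
                          F[i - 1][j] + gap,
                          F[i][j - 1] + gap)
    return F
```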
Slide 12: NW and SW Alignment Traceback
Start at the bottom-right cell and follow the arrows back to the top-left to finish
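A matching sketch of the traceback, starting at the bottom-right cell of the matrix produced by nw_fill above and following the implied arrows back to the top-left (again illustrative, not the lecture's code):

```python
def nw_traceback(F, seq1, seq2, match=1, mismatch=-1, gap=-2):
    """Traceback: recover one optimal global alignment from the filled matrix F."""
    a1, a2 = [], []
    i, j = len(seq1), len(seq2)            # start at the bottom-right cell
    while i > 0 or j > 0:
        if i > 0 and j > 0:
            s = match if seq1[i - 1] == seq2[j - 1] else mismatch
            if F[i][j] == F[i - 1][j - 1] + s:   # came from the diagonal
                a1.append(seq1[i - 1]); a2.append(seq2[j - 1])
                i, j = i - 1, j - 1
                continue
        if i > 0 and F[i][j] == F[i - 1][j] + gap:   # came from above: gap in seq2
            a1.append(seq1[i - 1]); a2.append('-')
            i -= 1
        else:                                        # came from the left: gap in seq1
            a1.append('-'); a2.append(seq2[j - 1])
            j -= 1
    return ''.join(reversed(a1)), ''.join(reversed(a2))
```

Calling nw_traceback(nw_fill(s1, s2), s1, s2) returns one optimal global alignment of s1 and s2 as a pair of gapped strings.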
Slide 13: Recurrence
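The equation itself did not survive this transcript; the standard Needleman-Wunsch recurrence it presumably shows, with substitution score s(x_i, y_j) and linear gap penalty d, is F(i,0) = -i d, F(0,j) = -j d, and F(i,j) = \max\{\, F(i-1,j-1) + s(x_i,y_j),\; F(i-1,j) - d,\; F(i,j-1) - d \,\}.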
Slide 14: SW is Local Alignment
Initialize the top row and left column to zeros
Cell values can only be non-negative
Traceback starts at the maximum value and ends at a zero
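In recurrence form, the local-alignment change is the extra zero option: F(i,0) = F(0,j) = 0 and F(i,j) = \max\{\, 0,\; F(i-1,j-1) + s(x_i,y_j),\; F(i-1,j) - d,\; F(i,j-1) - d \,\}, so no prefix is ever forced to carry a negative score. The traceback then starts at the largest cell anywhere in the matrix and stops at the first cell equal to zero.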
Slide 15: Affine Gap Scores
The initial (gap-opening) cost is high
Continuing a gap costs a constant, lower amount per position
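Written out (standard affine gap model; the symbols d and e are not from the slide), a gap of length g costs \gamma(g) = d + (g-1)e with e < d, so opening a gap is expensive while each additional position in the same gap adds only the smaller extension penalty e.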