CS5263 Bioinformatics Lecture 11 Motif finding
HW2 2(C): how to find K and lambda
What is a (biological) motif? A motif is a recurring fragment, theme, or pattern Sequence motif: a sequence pattern of nucleotides in a DNA sequence or amino acids in a protein Structural motif: a pattern in a protein structure formed by the spatial arrangement of amino acids Network motif: a pattern that occurs in different parts of a network at a frequency much higher than in randomized networks Commonality: –Higher frequency than would be expected by chance –Has, or is conjectured to have, a biological significance
Sequence motif finding Given: a set of sequences Goal: find sequence motifs that appear in all or the majority of the sequences, and are likely associated with some functions –In DNA: regulatory sequences –In protein: functional/structural domains
Roadmap Biological background Representation of motifs Algorithms for finding motifs Other issues –Search for instances of given motifs –Distinguish functional vs non-functional motifs
Biological background for motif finding
Genome is fixed – Cells are dynamic A genome is static –(almost) Every cell in our body has a copy of the same genome A cell is dynamic –Responds to internal/external conditions –Most cells follow a cell cycle of division –Cells differentiate during development
Gene regulation … is responsible for the dynamic cell Gene expression (production of protein) varies according to: –Cell type –Cell cycle –External conditions –Location –Etc.
Where gene regulation takes place Opening of chromatin Transcription Translation Protein stability Protein modifications
Transcriptional Regulation of genes [Figure series: RNA polymerase (a protein) and a Transcription Factor (TF, also a protein) act on DNA near a gene; the TF binds a TF binding site (cis-regulatory element) in the promoter; with the TF bound, RNA polymerase transcribes the gene and a new protein is made]
The Cell as a Regulatory Network [Figure: genes A, B, C, D regulate one another, e.g. "If C then D", "If B then NOT D", "If A and B then D", "If D then B"]
Transcription Factors Binding to DNA Transcriptional regulation: Transcription factors bind to DNA Binding recognizes specific DNA substrings: Regulatory motifs
Experimental methods DNase footprinting –Tedious –Time-consuming High-throughput techniques: ChIP-chip, ChIP-seq –Expensive –Other limitations
Computational methods for finding regulatory motifs Given a collection of genes that are believed to be regulated by the same/similar protein –Co-expressed genes –Evolutionarily conserved genes Find the common TF-binding motif in their promoters
Essentially a Multiple Local Alignment Find the "best" multiple local alignment Multidimensional Dynamic Programming? –Heuristics must be used [Figure: motif instances aligned across the input sequences]
Characteristics of Regulatory Motifs Tiny (6-12bp) Intergenic regions are very long Highly Variable ~Constant Size –Because a constant-size transcription factor binds Often repeated Often conserved
Motif representation Collection of exact words –{ACGTTAC, ACGCTAC, AGGTGAC, …} Consensus sequence (with wild cards) –{AcGTgTtAC} –{ASGTKTKAC} S=C/G, K=G/T (IUPAC code) Position-specific weight matrices (PWM)
Position-Specific Weight Matrix [Figure: a 4 × 9 matrix with rows A, C, G, T giving the frequency of each base at each position of the motif; consensus ASGTKTKAC]
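As an illustration (not from the lecture), a minimal Python sketch of building such a frequency matrix from a set of aligned, hypothetical binding-site instances consistent with the consensus ASGTKTKAC:

```python
# A minimal sketch of building a position-specific weight matrix
# from a set of aligned binding-site instances (hypothetical data).
from collections import Counter

def build_pwm(sites):
    """Return a dict base -> list of per-position frequencies."""
    k = len(sites[0])
    assert all(len(s) == k for s in sites), "sites must be aligned (equal length)"
    pwm = {b: [0.0] * k for b in "ACGT"}
    for j in range(k):
        counts = Counter(s[j] for s in sites)
        for b in "ACGT":
            pwm[b][j] = counts[b] / len(sites)
    return pwm

# Hypothetical aligned instances matching the consensus ASGTKTKAC
sites = ["ACGTGTGAC", "AGGTTTGAC", "ACGTGTTAC", "AGGTTTTAC"]
pwm = build_pwm(sites)
print(pwm["A"])  # frequency of A at each of the 9 positions
```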
Sequence Logo [Figures: sequence logos of the example motif; the height of each letter at a position is proportional to its frequency there]
Entropy and information content Entropy: a measure of uncertainty The entropy of a random variable X that can assume the n different values x1, x2, ..., xn with the respective probabilities p1, p2, ..., pn is defined as H(X) = -Σi pi log2 pi
Entropy and information content Example: A, C, G, T with equal probability: H = 4 * (-0.25 log2 0.25) = log2 4 = 2 bits Need 2 bits to encode (e.g. 00 = A, 01 = C, 10 = G, 11 = T) Maximum uncertainty 50% A and 50% C: H = 2 * (-0.5 log2 0.5) = log2 2 = 1 bit 100% A: H = 1 * (-1 log2 1) = 0 bits Minimum uncertainty Information: the opposite of uncertainty I = maximum uncertainty - entropy The above examples provide 0, 1, and 2 bits of information, respectively
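A small Python sketch of these calculations, applied to one column of a DNA profile (probabilities over A, C, G, T); the three example distributions are the ones from this slide:

```python
# Entropy and information content of one PWM column, as on the slide.
import math

def entropy(probs):
    """Shannon entropy in bits: H = -sum(p * log2 p), ignoring zero entries."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

def information(probs, max_bits=2.0):
    """Information content I = maximum uncertainty (2 bits for DNA) - H."""
    return max_bits - entropy(probs)

print(entropy([0.25, 0.25, 0.25, 0.25]))  # 2.0 bits -> 0 bits of information
print(entropy([0.5, 0.5, 0.0, 0.0]))      # 1.0 bit  -> 1 bit of information
print(entropy([1.0, 0.0, 0.0, 0.0]))      # 0.0 bits -> 2 bits of information
```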
Entropy and information content [Table: the example PWM with per-position entropy H and information content I; mean I = 1.15 bits, total I = 10.4 bits] Expected occurrence of the motif in random DNA: 1 / 2^10.4 ≈ 1 / 1340 Expected occurrence of an exact 5-mer: 1 / 2^10 = 1 / 1024
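A quick check of the "expected occurrence" reasoning, using the slide's total information content of 10.4 bits (a motif carrying I bits is expected roughly once per 2^I bases of random DNA):

```python
# Expected spacing between occurrences in random DNA.
I_total = 10.4          # total information content of the example motif
print(2 ** I_total)     # ~1.3e3, i.e. roughly once per ~1340 bp
print(2 ** 10)          # 1024: an exact 5-mer (2 bits x 5 positions), once per ~1 kb
```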
Sequence Logo [Figure: sequence logo where the total height of each column equals the information content I at that position, and letters are scaled by their frequencies]
Real example E. coli promoter "TATA-box", ~10 bp upstream of the transcription start: TACGAT TAAAAT TATACT GATAAT TATGAT TATGTT Consensus: TATAAT Note: none of the instances matches the consensus perfectly
Finding Motifs
Motif finding schemes Input choices (e.g. genomes 1, 2, 3 with orthologous genes 1A, 1B, 1C and gene sets 1, 2, 3): –Conservation, whole genomes: Genome 1 & 2 & 3 –Conservation, gene level: Gene 1A & 1B & 1C, or Gene Set 1 & 2 & 3 (phylogenetic footprinting) –No conservation, whole genome: Genome 1 (dictionary building) –No conservation, single gene set: Gene Set 1 ("motif finding") Ideally, all information should be used at some stage, i.e. inside the algorithm vs. pre- or post-processing.
Classification of approaches Combinatorial algorithms –Based on enumeration of words and computing word similarities Probabilistic algorithms –Construct probabilistic models to distinguish motifs vs non-motifs –Will discuss in later lectures
Combinatorial motif finding Idea 1: find all k-mers that appear at least m times Idea 2: find all k-mers that are statistically significant Problem: most motifs allow divergence; each variant may appear only once Idea 3: find all k-mers, allowing IUPAC codes –e.g. ASGTKTKAC, S = C/G, K = G/T –Still inflexible Idea 4: find k-mers that appear approximately at least m times –i.e. allow some mismatches
Combinatorial motif finding Given a set of sequences S = {x1, …, xn} A motif W is a consensus string w1…wK Find the motif W* with the "best" match to x1, …, xn Definition of "best": d(W, xi) = minimum Hamming distance between W and any length-K word in xi d(W, S) = Σi d(W, xi) W* = argmin d(W, S)
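A direct Python transcription of these definitions (a sketch; the names hamming, d_seq and d_set are introduced here):

```python
# d_seq(W, x): minimum Hamming distance between consensus W and any
# length-|W| word in sequence x; d_set(W, S) sums this over all sequences.
def hamming(a, b):
    return sum(c1 != c2 for c1, c2 in zip(a, b))

def d_seq(W, x):
    k = len(W)
    return min(hamming(W, x[i:i + k]) for i in range(len(x) - k + 1))

def d_set(W, S):
    return sum(d_seq(W, x) for x in S)
```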
Exhaustive searches 1. Pattern-driven algorithm: For W = AA…A to TT…T (4^K possibilities) Find d(W, S) Report W* = argmin d(W, S) Running time: O(K N 4^K), where N = Σi |xi| Guaranteed to find the optimal solution.
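A sketch of the pattern-driven search, reusing d_set from the previous sketch; it is only feasible for small K:

```python
# Pattern-driven exhaustive search: enumerate all 4^K candidate patterns
# and keep the one minimizing d(W, S).
from itertools import product

def pattern_driven(S, K):
    best_W, best_d = None, float("inf")
    for letters in product("ACGT", repeat=K):   # 4^K candidate patterns
        W = "".join(letters)
        d = d_set(W, S)                         # roughly O(K * N) per pattern
        if d < best_d:
            best_W, best_d = W, d
    return best_W, best_d
```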
Exhaustive searches 2. Sample-driven algorithm: For W = each K-char word occurring in some xi Find d(W, S) Report W* = argmin d(W, S), OR report a local improvement of W* Running time: O(K N^2)
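And the corresponding sample-driven sketch, which only tests words that actually occur in the data (again reusing d_set):

```python
# Sample-driven search: candidate patterns come from the sequences
# themselves, giving O(K N^2) time instead of O(K N 4^K).
def sample_driven(S, K):
    best_W, best_d = None, float("inf")
    for x in S:
        for i in range(len(x) - K + 1):
            W = x[i:i + K]                      # candidate comes from the data
            d = d_set(W, S)
            if d < best_d:
                best_W, best_d = W, d
    return best_W, best_d
```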
Exhaustive searches Problem with sample-driven approach: If: –True motif does not occur in data, and –True motif is “weak” Then, –random strings may score better than any instance of true motif
Example E. coli promoter "TATA-box", ~10 bp upstream of the transcription start: TACGAT TAAAAT TATACT GATAAT TATGAT TATGTT Consensus: TATAAT Each instance differs by at most 2 bases from the consensus None of the instances matches the consensus perfectly
Heuristic methods Cannot afford an exhaustive search over all patterns Sample-driven approaches may miss real patterns However, a real pattern should not differ too much from its instances in S Start from the space of all words in S and extend it toward the space containing the real patterns
Some of the popular tools Consensus (Hertz & Stormo, 1999) WINNOWER (Pevzner & Sze, 2000) MULTIPROFILER (Keich & Pevzner, 2002) PROJECTION (Buhler & Tompa, 2001) WEEDER (Pavesi et al., 2001) And dozens of others
Consensus Algorithm: Cycle 1: For each word W in S For each word W' in S Create a gap-free alignment of W and W' Keep the C1 best alignments, A1, …, AC1 (e.g. pairwise alignments such as ACGGTTG/ACGCCTG, CGAACTT/AGAACTA, GGGCTCT/GGGGTGT, …)
Algorithm (cont'd): Cycle i: For each word W in S For each alignment Aj from cycle i-1 Create a gap-free alignment of W and Aj Keep the Ci best alignments A1, …, ACi
C1, …, Cn are user-defined heuristic constants Running time: O(kN^2) + O(kN C1) + O(kN C2) + … + O(kN Cn) = O(kN^2 + kN Ctotal), where Ctotal = Σi Ci, typically O(nC) for some big constant C
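A rough, simplified sketch of this greedy strategy. The real Consensus tool adds one word per input sequence per cycle and scores alignments by information content; here an alignment is just a list of equal-length words scored by column-wise majority counts:

```python
# Simplified Consensus-style greedy search (illustration only).
from collections import Counter
from itertools import combinations

def words(S, K):
    """All K-long words occurring in the sequence set S."""
    return [x[i:i + K] for x in S for i in range(len(x) - K + 1)]

def score(alignment):
    """Column-wise majority count, a crude stand-in for information content."""
    K = len(alignment[0])
    return sum(Counter(w[j] for w in alignment).most_common(1)[0][1]
               for j in range(K))

def consensus_search(S, K, C=50, cycles=3):
    W = words(S, K)
    # Cycle 1: all gap-free pairwise alignments, keep the C best
    alignments = sorted(([a, b] for a, b in combinations(W, 2)),
                        key=score, reverse=True)[:C]
    # Cycles 2..n: extend each kept alignment by one more word, keep the C best
    for _ in range(cycles - 1):
        alignments = sorted((A + [w] for A in alignments for w in W),
                            key=score, reverse=True)[:C]
    return alignments[0]
```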
Extended sample-driven (ESD) approaches Hybrid between pattern-driven and sample-driven Assume each instance differs by no more than α bases from the motif (α usually depends on K) The real motif will reside in the α-neighborhood of some word in S Instead of searching all 4^K patterns, we can search the α-neighborhood of every word in S [Figure: a motif, one of its instances, and the instance's α-neighborhood]
Extended sample-driven (ESD) approaches Naïve: N K^α 3^α (# of patterns to test) × N K (# of words in the sequences)
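A sketch of enumerating the α-neighborhood of a single word (all patterns within α mismatches of it); its size, roughly C(K, α) · 3^α, is the factor that appears in the running times above:

```python
# All patterns within `alpha` mismatches of a word W (including W itself).
from itertools import combinations, product

def neighborhood(W, alpha):
    K = len(W)
    out = {W}
    for m in range(1, alpha + 1):
        for positions in combinations(range(K), m):
            for subs in product("ACGT", repeat=m):
                if any(W[p] == s for p, s in zip(positions, subs)):
                    continue  # require a true substitution at every chosen position
                w = list(W)
                for p, s in zip(positions, subs):
                    w[p] = s
                out.add("".join(w))
    return out

print(len(neighborhood("ACGT", 1)))  # 1 + 4*3 = 13
```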
Better idea Using a joint suffix tree, find all patterns that: –Have length K –Appear in at least m sequences with at most α mismatches Then post-process the results
WEEDER: algorithm sketch Keep a list of all eligible nodes: those with at most α mismatches to the current pattern P For each node, remember the number of mismatches accumulated (e) and a bit vector (B) of sequence occurrences (one bit per input sequence) Bit-OR all the B's to get the sequence occurrences of P If #occ >= m, the pattern is still valid: now add a letter [Figure: suffix-tree nodes matching the current pattern P, |P| < K, each annotated with (e, B)]
WEEDER: algorithm sketch Simple extension (no branches): –No change to B –e may increase by 1 or stay the same –Drop the node if e > α Branches: replace a node with its child nodes –Drop a child if e > α –B may change: re-do the bit-OR using all the B's Try a different character if #occ < m Report P when |P| = K [Figure: extending the current pattern P by one letter; each node carries (e, B)]
WEEDER: complexity Can get all patterns in time O(N n (K choose α) 3^α) ~ O(N n K^α 3^α) –n: # of sequences; the factor of n comes from the bit-OR Better than O(K N 4^K) and O(N K^α 3^α N K), since usually α << K K^α 3^α may still be expensive for large K –E.g. K = 20, α = 6
WEEDER: More tricks Instead of eligible nodes with at most α mismatches to P, use eligible nodes with at most min(εL, α) mismatches to P –L: current pattern length –ε: error ratio –This requires the mismatches to be somewhat evenly distributed among positions Prune the tree at depth K [Figure: the suffix tree explored for the current pattern P]
MULTIPROFILER W differs from W* at α positions The consensus sequence of the words in the α-neighborhood of W is similar to W If we ignore all the chars that agree with W, the rest may suggest the differences between W and W* Example: W* = ACGTACG, W = ATGTAAG
MULTIPROFILER: algorithm sketch For each word W in S –Find its α-neighborhood in S –Call this list of words C For each position j from 1..K of the words in C –Find the most popular char that differs from W[j] Replace α positions of W with the chars found above –Call the new word W' Report W* = argmin d(W', S) Example: W* = ACGTACG, W = ATGTAAG
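A rough Python sketch of this procedure, reusing words(), hamming() and d_set() from the earlier sketches; the way the α strongest corrections are chosen here is an assumption for illustration, not necessarily what the paper does:

```python
# MULTIPROFILER-style search (simplified illustration).
from collections import Counter

def multiprofiler(S, K, alpha):
    best_W, best_d = None, float("inf")
    all_words = words(S, K)
    for W in all_words:
        # Words of S in the alpha-neighborhood of W (excluding W itself)
        C = [w for w in all_words if 0 < hamming(W, w) <= alpha]
        if not C:
            continue
        # At each position, the most popular char among neighbors that differs from W
        corrections = []
        for j in range(K):
            counts = Counter(w[j] for w in C if w[j] != W[j])
            if counts:
                char, cnt = counts.most_common(1)[0]
                corrections.append((cnt, j, char))
        # Apply the alpha strongest corrections to obtain W'
        Wp = list(W)
        for cnt, j, char in sorted(corrections, reverse=True)[:alpha]:
            Wp[j] = char
        Wp = "".join(Wp)
        d = d_set(Wp, S)
        if d < best_d:
            best_W, best_d = Wp, d
    return best_W, best_d
```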
MULTIPROFILER No complexity analysis provided in the paper More efficient than WEEDER for longer patterns: when N < K^α 3^α How to choose α is an issue: –Large α: too much noise in the neighborhood –Small α: few true instances in the neighborhood
Challenging problem The (k, d)-motif challenge problem: a length-k motif, mutated at d positions, is planted into each of n random sequences of length L Many algorithms fail at the (15, 4)-motif for n = 20 and L = 1000 Combinatorial algorithms usually work better on the challenge problem –However, they are usually designed to find (k, d)-motifs –Performance on real data varies
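A sketch of how such a challenge instance can be generated: n random sequences of length L, each containing one copy of a length-k motif mutated at exactly d random positions (function name and defaults chosen here for illustration):

```python
# Generate a planted (k, d)-motif challenge instance.
import random

def planted_instance(k=15, d=4, n=20, L=1000):
    motif = "".join(random.choice("ACGT") for _ in range(k))
    seqs = []
    for _ in range(n):
        instance = list(motif)
        for p in random.sample(range(k), d):              # mutate exactly d positions
            instance[p] = random.choice([b for b in "ACGT" if b != instance[p]])
        seq = [random.choice("ACGT") for _ in range(L - k)]
        pos = random.randrange(L - k + 1)
        seq[pos:pos] = instance                           # insert the planted instance
        seqs.append("".join(seq))
    return motif, seqs
```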
Probabilistic modeling approaches for motif finding Will be covered later