Gene Regulation and Microarrays

Finding Regulatory Motifs
Given a collection of genes with common expression, find the TF-binding motif they have in common.

Characteristics of Regulatory Motifs
- Tiny
- Highly variable
- ~Constant size (because a constant-size transcription factor binds)
- Often repeated
- Low-complexity-ish

Sequence Logos
Information at position i: H(i) = – Σ_{letter x} freq(x, i) log2 freq(x, i)
Height of letter x at position i: L(x, i) = freq(x, i) (2 – H(i))
Examples:
- freq(A, i) = 1: H(i) = 0, L(A, i) = 2
- A: ½, C: ¼, G: ¼: H(i) = 1.5, L(A, i) = ¼, L(not A, i) = ¼
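A minimal sketch of these two formulas in Python (the function name and the dict representation of one logo column are illustrative, not from the slides):

import math

def column_heights(freq):
    # freq: dict mapping letter -> frequency at one position (frequencies sum to 1)
    # Information content H(i) = - sum_x freq(x, i) * log2 freq(x, i)
    H = -sum(f * math.log2(f) for f in freq.values() if f > 0)
    # Letter height L(x, i) = freq(x, i) * (2 - H(i))
    return {x: f * (2 - H) for x, f in freq.items()}

print(column_heights({'A': 0.5, 'C': 0.25, 'G': 0.25, 'T': 0.0}))
# A gets height 0.25 (and C, G get 0.125 each), matching the example above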

Problem Definition
Given a collection of promoter sequences s_1, …, s_N of genes with common expression:
Probabilistic motif: M_ij, 1 ≤ i ≤ W, 1 ≤ j ≤ 4, where M_ij = Prob[ letter j, position i ]
- Find the best M, and positions p_1, …, p_N in the sequences
Combinatorial motif M: m_1…m_W, some of the m_i's blank
- Find M that occurs in all s_i with ≤ k differences
- Or, find M with the smallest total Hamming distance

Essentially a Multiple Local Alignment
Find the "best" multiple local alignment; the alignment score is defined differently in the probabilistic and combinatorial cases.

Algorithms
Combinatorial: CONSENSUS, TEIRESIAS, SP-STAR, others
Probabilistic:
1. Expectation Maximization: MEME
2. Gibbs Sampling: AlignACE, BioProspector

Combinatorial Approaches to Motif Finding

Discrete Formulations
Given sequences S = {x_1, …, x_n}
A motif W is a consensus string w_1…w_K
Find the motif W* with the "best" match to x_1, …, x_n
Definition of "best":
- d(W, x_i) = minimum Hamming distance between W and any word in x_i
- d(W, S) = Σ_i d(W, x_i)
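A small sketch of this score in Python (helper names are illustrative); the later sketches in this section reuse these functions:

def hamming(a, b):
    # Number of mismatching positions between two equal-length strings
    return sum(c1 != c2 for c1, c2 in zip(a, b))

def d_seq(W, x):
    # d(W, x): minimum Hamming distance between W and any |W|-long word in x
    K = len(W)
    return min(hamming(W, x[i:i+K]) for i in range(len(x) - K + 1))

def d_total(W, S):
    # d(W, S) = sum over sequences x_i of d(W, x_i)
    return sum(d_seq(W, x) for x in S)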

Approaches
- Exhaustive searches
- CONSENSUS
- MULTIPROFILER
- TEIRESIAS, SP-STAR, WINNOWER

Exhaustive Searches
1. Pattern-driven algorithm:
   For W = AA…A to TT…T (4^K possibilities)
      Find d(W, S)
   Report W* = argmin d(W, S)
Running time: O(K N 4^K), where N = Σ_i |x_i|
Advantage: finds the provably "best" motif W
Disadvantage: time
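A direct sketch of the pattern-driven search (reusing d_total from the sketch above; this is the O(K N 4^K) enumeration, so it is only practical for small K):

from itertools import product

def pattern_driven(S, K):
    best_W, best_d = None, float('inf')
    for letters in product("ACGT", repeat=K):   # all 4^K candidate motifs
        W = "".join(letters)
        d = d_total(W, S)
        if d < best_d:
            best_W, best_d = W, d
    return best_W, best_d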

Exhaustive Searches
2. Sample-driven algorithm:
   For W = any K-long word occurring in some x_i
      Find d(W, S)
   Report W* = argmin d(W, S), or report a local improvement of W*
Running time: O(K N^2)
Advantage: time
Disadvantage: if the true motif is weak and does not occur in the data, a random motif may score better than any instance of the true motif
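The corresponding sample-driven sketch only tries K-long words that actually occur in the sequences (again reusing d_total from above):

def sample_driven(S, K):
    candidates = {x[i:i+K] for x in S for i in range(len(x) - K + 1)}
    best_W = min(candidates, key=lambda W: d_total(W, S))
    return best_W, d_total(best_W, S)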

CONSENSUS
Algorithm, cycle 1:
   For each word W in S (of fixed length!)
      For each word W' in S
         Create a gap-free alignment of W, W'
   Keep the C_1 best alignments A_1, …, A_C1
Example alignments: (ACGGTTG, CGAACTT, GGGCTCT, …), (ACGCCTG, AGAACTA, GGGGTGT, …)

CONSENSUS
Algorithm, cycle t:
   For each word W in S
      For each alignment A_j from cycle t-1
         Create a gap-free alignment of W, A_j
   Keep the C_t best alignments A_1, …, A_Ct
Example alignments: (ACGGTTG, CGAACTT, GGGCTCT, …), (ACGCCTG, AGAACTA, GGGGTGT, …), …, (ACGGCTC, AGATCTT, GGCGTCT, …)

CONSENSUS
C_1, …, C_n are user-defined heuristic constants
- N is the sum of the sequence lengths
- n is the number of sequences
Running time:
O(N^2) + O(N C_1) + O(N C_2) + … + O(N C_n) = O(N^2 + N C_total)
where C_total = Σ_i C_i, typically O(nC), where C is a big constant
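A much-simplified sketch of this greedy procedure (the real CONSENSUS scores alignments by information content and considers words from every sequence in cycle 1; here a simple majority-letter score and a fixed sequence order stand in for that):

from collections import Counter

def words(x, K):
    return [x[i:i+K] for i in range(len(x) - K + 1)]

def align_score(alignment):
    # For each column, count how many words agree with the most common letter
    return sum(Counter(col).most_common(1)[0][1] for col in zip(*alignment))

def consensus_greedy(S, K, C=50):
    # Cycle 1: all gap-free pairings of words from the first two sequences
    alignments = [[w1, w2] for w1 in words(S[0], K) for w2 in words(S[1], K)]
    alignments = sorted(alignments, key=align_score, reverse=True)[:C]
    # Cycle t: extend each kept alignment with every word of the next sequence
    for x in S[2:]:
        extended = [a + [w] for a in alignments for w in words(x, K)]
        alignments = sorted(extended, key=align_score, reverse=True)[:C]
    return alignments[0]   # the best alignment found; its columns give the motif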

MULTIPROFILER
Extended sample-driven approach
Given a K-long word W, define:
   N_α(W) = words W' in S s.t. d(W, W') ≤ α
Idea: assume W is an occurrence of the true motif W*
   Will use N_α(W) to correct "errors" in W
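A one-function sketch of the α-neighborhood (reusing hamming from the earlier sketch):

def neighborhood(W, S, alpha):
    # N_alpha(W): all K-long words in S within Hamming distance alpha of W
    K = len(W)
    return [x[i:i+K] for x in S for i in range(len(x) - K + 1)
            if hamming(W, x[i:i+K]) <= alpha]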

MULTIPROFILER
Assume W differs from the true motif W* in at most L positions
Define: a wordlet G of W is an L-long pattern with blanks, differing from W
   L is smaller than the word length K
Example: K = 7, L = 3
   W = ACGTTGA
   G = --A--CG
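A tiny sketch of "modifying W by a wordlet G": non-blank positions of G overwrite the corresponding positions of W (the function name is illustrative):

def apply_wordlet(W, G):
    return "".join(g if g != '-' else w for w, g in zip(W, G))

print(apply_wordlet("ACGTTGA", "--A--CG"))   # prints ACATTCG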

MULTIPROFILER
Algorithm:
For each W in S:
   For L = 1 to L_max:
      1. Find the α-neighbors of W in S → N_α(W)
      2. Find all "strong" L-long wordlets G in N_α(W)
      3. For each wordlet G:
         a. Modify W by the wordlet G → W'
         b. Compute d(W', S)
Report W* = argmin d(W', S)
Step 2 above is a smaller motif-finding problem; use exhaustive search

Expectation Maximization in Motif Finding

Expectation Maximization
The MM algorithm, part of the MEME package, uses Expectation Maximization.
Algorithm (sketch):
1. Given genomic sequences, find all K-long words
2. Assume each word is either motif or background
3. Find the likeliest
   - motif model
   - background model
   - classification of words into either motif or background

Expectation Maximization
Given sequences x_1, …, x_N, find all K-long words X_1, …, X_n
Define the motif model:
   M = (M_1, …, M_K), M_i = (M_i1, …, M_i4)   (assume the alphabet {A, C, G, T})
   where M_ij = Prob[ letter j occurs in motif position i ]
Define the background model:
   B = (B_1, …, B_4)
   B_j = Prob[ letter j in background sequence ]

Expectation Maximization
Define
   Z_i1 = 1 if X_i is motif, 0 otherwise
   Z_i2 = 0 if X_i is motif, 1 otherwise
Given a word X_i = x[1]…x[K],
   P[ X_i, Z_i1 = 1 ] = λ M_1x[1] … M_Kx[K]
   P[ X_i, Z_i2 = 1 ] = (1 – λ) B_x[1] … B_x[K]
Let λ_1 = λ and λ_2 = 1 – λ
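A sketch of these two joint probabilities for a single K-long word (M represented as a list of per-position letter→probability dicts, B as a letter→probability dict; these representations are illustrative):

def p_motif(X, M, lam):
    # P[X, Z_1 = 1] = lambda * M_1,x[1] * ... * M_K,x[K]
    p = lam
    for i, ch in enumerate(X):
        p *= M[i][ch]
    return p

def p_background(X, B, lam):
    # P[X, Z_2 = 1] = (1 - lambda) * B_x[1] * ... * B_x[K]
    p = 1 - lam
    for ch in X:
        p *= B[ch]
    return p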

Expectation Maximization
Define the parameter space θ = (M, B)
   θ_1: motif model; θ_2: background model
Objective: maximize the log likelihood of the model,
   log P(X | θ, λ) = Σ_i log ( λ_1 P(X_i | θ_1) + λ_2 P(X_i | θ_2) )

Expectation Maximization
Maximize the expected likelihood by iterating two steps:
   Expectation: find the expected value of the log likelihood
   Maximization: maximize that expected value over θ and λ

Expectation Maximization: E-step
Expectation: find the expected value of the log likelihood,
   E[ log P(X, Z | θ, λ) ] = Σ_i Σ_j E[Z_ij] ( log λ_j + log P(X_i | θ_j) )
where the expected values of Z can be computed as follows:
   E[Z_i1] = λ_1 P(X_i | θ_1) / ( λ_1 P(X_i | θ_1) + λ_2 P(X_i | θ_2) ),   E[Z_i2] = 1 – E[Z_i1]
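A sketch of this E-step, reusing p_motif and p_background from the sketch above (each word's expected memberships are just its posterior probabilities under the two components):

def e_step(words_list, M, B, lam):
    Z = []
    for X in words_list:
        pm = p_motif(X, M, lam)        # lambda_1 * P(X | theta_1)
        pb = p_background(X, B, lam)   # lambda_2 * P(X | theta_2)
        Z.append((pm / (pm + pb), pb / (pm + pb)))
    return Z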

Expectation Maximization: M-step
Maximization: maximize the expected value over θ and λ independently.
For λ, this is easy:
   λ = (1/n) Σ_i E[Z_i1]

Expectation Maximization: M-step
For θ = (M, B), define
   c_jk = E[ # times letter k appears in motif position j ]
   c_0k = E[ # times letter k appears in the background ]
The c_jk values are calculated easily from the E[Z] values.
It easily follows:
   M_jk = c_jk / Σ_k' c_jk'
   B_k = c_0k / Σ_k' c_0k'
To not allow any 0's, add pseudocounts.
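A sketch of this M-step: accumulate expected letter counts (with pseudocounts), normalize them to re-estimate M and B, and average the motif memberships to re-estimate λ:

def m_step(words_list, Z, K, pseudo=1.0):
    letters = "ACGT"
    c_motif = [{a: pseudo for a in letters} for _ in range(K)]   # c_jk
    c_back = {a: pseudo for a in letters}                        # c_0k
    for X, (z1, z2) in zip(words_list, Z):
        for j, ch in enumerate(X):
            c_motif[j][ch] += z1
            c_back[ch] += z2
    M = [{a: c[a] / sum(c.values()) for a in letters} for c in c_motif]
    B = {a: c_back[a] / sum(c_back.values()) for a in letters}
    lam = sum(z1 for z1, _ in Z) / len(Z)   # the lambda update from the previous slide
    return M, B, lam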

Initial Parameters Matter!
Consider the following "artificial" example, where x_1, …, x_N contain:
- 2^12 patterns on {A, T}: A…A, A…AT, …, T…T
- 2^12 patterns on {C, G}: C…C, C…CG, …, G…G
- D << 2^12 occurrences of the 12-mer ACTGACTGACTG
Some local maxima:
- λ = ½; B = ½C, ½G; M_i = ½A, ½T, i = 1, …, 12
- λ = D/2^(k+1); B = ¼A, ¼C, ¼G, ¼T; M_1 = 100% A, M_2 = 100% C, M_3 = 100% T, etc.

Overview of EM Algorithm
1. Initialize parameters θ = (M, B) and λ:
   Try different values of λ, from N^(-1/2) up to 1/(2K)
2. Repeat:
   a. Expectation
   b. Maximization
3. Until the change in θ = (M, B), λ falls below ε
4. Report results for several "good" λ

Overview of EM Algorithm
One iteration running time: O(NK)
- Usually need < N iterations for convergence, and < N starting points
- Overall complexity: unclear – typically O(N^2 K) to O(N^3 K)
EM is a local optimization method
Initial parameters matter
MEME: Bailey and Elkan, ISMB 1994.
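Putting the pieces together, a minimal driver loop over the sketches above (an illustration of the scheme on these slides, not the actual MEME implementation; a fixed iteration count stands in for the ε-based convergence test, and the slides suggest trying several starting values of λ):

def em_motif(sequences, K, lam=0.01, n_iter=50):
    # All K-long words from all sequences
    words_list = [x[i:i+K] for x in sequences for i in range(len(x) - K + 1)]
    letters = "ACGT"
    M = [{a: 0.25 for a in letters} for _ in range(K)]   # uniform starting motif model
    B = {a: 0.25 for a in letters}                       # uniform starting background
    for _ in range(n_iter):
        Z = e_step(words_list, M, B, lam)
        M, B, lam = m_step(words_list, Z, K)
    return M, B, lam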