1
Parsing A Bacterial Genome
Mark Craven
Department of Biostatistics & Medical Informatics, University of Wisconsin, U.S.A.
craven@biostat.wisc.edu
www.biostat.wisc.edu/~craven
2
The Task Given: a bacterial genome Do: use computational methods to predict a “parts list” of regulatory elements
3
Outline
1. background on bacterial gene regulation
2. background on probabilistic language models
3. predicting transcription units using probabilistic language models
4. augmenting training with “weakly” labeled examples
5. refining the structure of a stochastic context free grammar
4
The Central Dogma of Molecular Biology
5
Transcription in Bacteria
6
Operons in Bacteria
operon: sequence of one or more genes transcribed as a unit under some conditions
promoter: “signal” in DNA indicating where to start transcription
terminator: “signal” indicating where to stop transcription
[diagram: an operon with a promoter, genes, and a terminator, and the mRNA transcribed from it]
7
The Task Revisited
Given:
– DNA sequence of E. coli genome
– coordinates of known/predicted genes
– known instances of operons, promoters, terminators
Do:
– learn models from known instances
– predict complete catalog of operons, promoters, terminators for the genome
8
Our Approach: Probabilistic Language Models
1. write down a “grammar” for elements of interest (operons, promoters, terminators, etc.) and relations among them
2. learn probability parameters from known instances of these elements
3. predict new elements by “parsing” uncharacterized DNA sequence
9
Transformational Grammars
a transformational grammar characterizes a set of legal strings
the grammar consists of
– a set of abstract nonterminal symbols
– a set of terminal symbols (those that actually appear in strings)
– a set of productions
10
A Grammar for Stop Codons
this grammar can generate the 3 stop codons: taa, tag, tga
with a grammar we can ask questions like
– what strings are derivable from the grammar?
– can a particular string be derived from the grammar?
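To make this concrete, here is a minimal sketch in Python of a grammar that generates exactly these three stop codons; the nonterminal names S, A, and B are invented for the illustration and need not match the grammar shown on the slide.

```python
# A minimal sketch of a grammar that generates exactly the three stop codons
# taa, tag, tga.  Nonterminal names S, A, B are invented for this example.
GRAMMAR = {
    "S": [["t", "A"]],
    "A": [["a", "B"], ["g", "a"]],
    "B": [["a"], ["g"]],
}
TERMINALS = set("acgt")

def derive(symbols):
    """Yield every terminal string derivable from a sequence of grammar symbols."""
    if all(s in TERMINALS for s in symbols):
        yield "".join(symbols)
        return
    # expand the leftmost nonterminal with each of its productions
    i = next(i for i, s in enumerate(symbols) if s not in TERMINALS)
    for rhs in GRAMMAR[symbols[i]]:
        yield from derive(symbols[:i] + rhs + symbols[i + 1:])

print(sorted(derive(["S"])))         # ['taa', 'tag', 'tga']
print("tga" in set(derive(["S"])))   # True: this string can be derived
```

Enumerating the derivable strings answers the first question; membership in that set answers the second (for larger grammars a parser is used instead of exhaustive enumeration).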
11
The Parse Tree for tag
12
A Probabilistic Version of the Grammar
each production has an associated probability
the probabilities for productions with the same left-hand side sum to 1
this grammar has a corresponding Markov chain model
[figure: the stop-codon grammar with production probabilities 1.0, 0.7, 0.3, 1.0, 0.2, 0.8]
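A hedged sketch of what a probabilistic version of such a grammar looks like in code; which production carries which of the slide's probability values is an assumption here.

```python
import random

# A sketch of a probabilistic stop-codon grammar.  The assignment of the
# slide's values (1.0, 0.7, 0.3, 0.2, 0.8) to particular productions is an
# assumption for illustration; note each left-hand side's probabilities sum to 1.
PCFG = {
    "S": [(1.0, ["t", "A"])],
    "A": [(0.7, ["a", "B"]), (0.3, ["g", "a"])],
    "B": [(0.2, ["a"]), (0.8, ["g"])],
}
TERMINALS = set("acgt")

def sample(symbol="S"):
    """Sample one string by expanding nonterminals according to production probabilities."""
    if symbol in TERMINALS:
        return symbol
    probs, rhss = zip(*PCFG[symbol])
    rhs = random.choices(rhss, weights=probs, k=1)[0]
    return "".join(sample(s) for s in rhs)

# e.g. P(tag) = P(S -> tA) * P(A -> aB) * P(B -> g) = 1.0 * 0.7 * 0.8 = 0.56
print(sample())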
13
A Probabilistic Context Free Grammar for Terminators
[figure: SCFG productions over the nonterminals START, PREFIX, STEM_BOT1, STEM_BOT2, STEM_MID, STEM_TOP2, STEM_TOP1, LOOP, LOOP_MID, SUFFIX, and B, where B expands to a | c | g | u, t = {a,c,g,u}, and t* additionally allows the empty symbol; shown alongside an example terminator folded into its prefix, stem, loop, and suffix regions]
14
Inference with Probabilistic Grammars
for a given string there may be many parses, but some are more probable than others
we can do prediction by finding relatively high probability parses
there are dynamic programming algorithms for finding the most probable parse efficiently
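As an illustration of the dynamic programming idea, here is a small CKY-style sketch that computes the probability of the most probable parse under a Chomsky-normal-form version of the stop-codon PCFG sketched above; the CNF conversion and the nonterminal names T, Xa, and Xg are my own, not the talk's.

```python
from collections import defaultdict

# A hedged sketch of most-probable-parse search with CKY-style dynamic
# programming over a PCFG in Chomsky normal form.
LEX = {            # unary rules  N -> terminal : probability
    "T": {"t": 1.0}, "Xa": {"a": 1.0}, "Xg": {"g": 1.0},
    "B": {"a": 0.2, "g": 0.8},
}
BIN = {            # binary rules N -> L R : probability
    "S": [("T", "A", 1.0)],
    "A": [("Xa", "B", 0.7), ("Xg", "Xa", 0.3)],
}

def most_probable_parse(seq, start="S"):
    n = len(seq)
    best = defaultdict(float)   # (i, j, N) -> best probability that N derives seq[i:j]
    back = {}                   # backpointers for reconstructing the parse tree
    for i, ch in enumerate(seq):
        for N, emits in LEX.items():
            if ch in emits:
                best[(i, i + 1, N)] = emits[ch]
                back[(i, i + 1, N)] = ch
    for span in range(2, n + 1):
        for i in range(n - span + 1):
            j = i + span
            for N, rules in BIN.items():
                for L, R, p in rules:
                    for k in range(i + 1, j):
                        score = p * best[(i, k, L)] * best[(k, j, R)]
                        if score > best[(i, j, N)]:
                            best[(i, j, N)] = score
                            back[(i, j, N)] = (k, L, R)
    return best[(0, n, start)], back

prob, _ = most_probable_parse("tag")
print(prob)   # 0.56 = 1.0 * 0.7 * 0.8
```

The same table-filling scheme, with sums in place of maxima, gives the total probability of a string, which is what the prediction step thresholds on.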
15
Learning with Probabilistic Grammars
in this work, we write down the productions by hand, but learn the probability parameters
to learn the probability parameters, we align sequences of a given class (e.g. terminators) with the relevant part of the grammar
when there is hidden state (i.e. the correct parse is not known), we use Expectation Maximization (EM) algorithms
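A minimal sketch of the fully observed case: when the correct parse of each training sequence is known, maximum-likelihood estimation simply normalizes production counts per left-hand side. With hidden parses, EM (e.g. Inside-Outside for SCFGs) replaces these hard counts with expected counts. The counts below are invented.

```python
from collections import Counter, defaultdict

# Fully observed case: production probabilities are counts normalized over
# all productions sharing the same left-hand side.  Counts are invented.
production_counts = Counter({
    ("A", ("a", "B")): 70,
    ("A", ("g", "a")): 30,
    ("B", ("a",)): 20,
    ("B", ("g",)): 80,
})

def estimate_probabilities(counts):
    totals = defaultdict(int)
    for (lhs, _), c in counts.items():
        totals[lhs] += c
    return {rule: c / totals[rule[0]] for rule, c in counts.items()}

for rule, p in estimate_probabilities(production_counts).items():
    print(rule, round(p, 2))   # e.g. ('A', ('a', 'B')) 0.7
```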
16
Outline
1. background on bacterial gene regulation
2. background on probabilistic language models
3. predicting transcription units using probabilistic language models [Bockhorst et al., ISMB/Bioinformatics ‘03]
4. augmenting training with “weakly” labeled examples
5. refining the structure of a stochastic context free grammar
17
A Model for Transcription Units
[figure: the transcription-unit model; an SCFG, position-specific Markov models, and semi-Markov models cover states including the -35 and -10 promoter boxes, TSS, ORFs, spacers, UTR, and the prefix, stem, loop, and suffix regions of RIT and RDT terminators, spanning untranscribed and transcribed regions]
18
The Components of the Model
stochastic context free grammars (SCFGs) represent variable-length sequences with long-range dependencies
semi-Markov models represent variable-length sequences
position-specific Markov models represent fixed-length sequence motifs
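One hedged reading of “position-specific Markov model” is sketched below: a fixed-length motif model whose base distribution at each position is conditioned on the previous base, with a separate table per position. The motif length and all probability tables are invented placeholders, not trained parameters from the talk.

```python
import math

# A sketch of a position-specific (first-order) Markov model for a
# fixed-length motif of length 3.  Parameter values are invented.
START = {"a": 0.1, "c": 0.2, "g": 0.2, "t": 0.5}          # distribution at position 0
COND = [  # COND[i][prev][base] = P(base at position i+1 | base at position i)
    {b: {"a": 0.4, "c": 0.1, "g": 0.1, "t": 0.4} for b in "acgt"},
    {b: {"a": 0.3, "c": 0.2, "g": 0.2, "t": 0.3} for b in "acgt"},
]

def log_prob(motif):
    """Log-probability of a fixed-length motif under the position-specific model."""
    lp = math.log(START[motif[0]])
    for i, base in enumerate(motif[1:]):
        lp += math.log(COND[i][motif[i]][base])
    return lp

print(log_prob("tat"))
```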
19
Gene Expression Data
in addition to DNA sequence data, we also use expression data to make our parses
microarrays enable the simultaneous measurement of the transcription levels of thousands of genes
[figure: a microarray data matrix with genes/sequence positions as rows and experimental conditions as columns]
20
Incorporating Expression Data
our models parse two sequences simultaneously
– the DNA sequence of the genome
– a sequence of expression measurements associated with particular sequence positions
the expression data is useful because it provides information about which subsequences look like they are transcribed together
[figure: a genomic DNA sequence aligned with its per-position expression measurements]
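A speculative sketch of how one state of such a model could jointly score a DNA base and an expression measurement at the same position, assuming the two are independent given the state; the Gaussian expression model and all parameters here are invented and may differ from the model actually used in the talk.

```python
import math

# Joint per-position emission score = log P(base | state) + log P(expression | state),
# with a Gaussian expression model.  All parameters are invented placeholders.
def emission_log_prob(base, expr, base_probs, expr_mean, expr_std):
    base_lp = math.log(base_probs[base])
    expr_lp = (-0.5 * ((expr - expr_mean) / expr_std) ** 2
               - math.log(expr_std * math.sqrt(2 * math.pi)))
    return base_lp + expr_lp

# a "transcribed" state expects higher expression than an "untranscribed" one
transcribed   = dict(base_probs={"a": .25, "c": .25, "g": .25, "t": .25},
                     expr_mean=2.0, expr_std=1.0)
untranscribed = dict(base_probs={"a": .25, "c": .25, "g": .25, "t": .25},
                     expr_mean=0.0, expr_std=1.0)
print(emission_log_prob("g", 1.8, **transcribed) >
      emission_log_prob("g", 1.8, **untranscribed))   # True
```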
21
Predictive Accuracy for Operons
22
Predictive Accuracy for Promoters
23
Predictive Accuracy for Terminators
24
Accuracy of Promoter & Terminator Localization
25
Terminator Predictive Accuracy
26
Outline
1. background on bacterial gene regulation
2. background on probabilistic language models
3. predicting transcription units using probabilistic language models
4. augmenting training data with “weakly” labeled examples [Bockhorst & Craven, ICML ’02]
5. refining the structure of a stochastic context free grammar
27
Key Idea: Weakly Labeled Examples
regulatory elements are inter-related
– promoters precede operons
– terminators follow operons
– etc.
relationships such as these can be exploited to augment training sets with “weakly labeled” examples
28
Inferring “Weakly” Labeled Examples
if we know that an operon ends at g4, then there must be a terminator shortly downstream
if we know that an operon begins at g2, then there must be a promoter shortly upstream
we can exploit relations such as this to augment our training sets
[figure: a double-stranded genomic region with genes g1 through g5]
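A small sketch of this idea: given known operon coordinates, take a window just downstream of each operon as a weakly labeled terminator example and a window just upstream as a weakly labeled promoter example. The window size and data layout are assumptions for illustration.

```python
# Turning known operon boundaries into weakly labeled examples: a window just
# downstream of the operon should contain a terminator, and a window just
# upstream should contain a promoter.  The 60-base window is an invented choice.
WINDOW = 60

def weak_examples(genome, operons):
    """operons: list of (start, end) coordinates of known operons (0-based)."""
    weak_terminators, weak_promoters = [], []
    for start, end in operons:
        weak_terminators.append(genome[end:end + WINDOW])
        weak_promoters.append(genome[max(0, start - WINDOW):start])
    return weak_terminators, weak_promoters

terms, proms = weak_examples("acgt" * 100, [(40, 200)])
print(len(terms[0]), len(proms[0]))   # 60 40
```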
29
Strongly vs. Weakly Labeled Terminator Examples
[figure: an example sequence annotated with the extent of a strongly labeled terminator (sub-class: rho-independent, end of stem-loop marked) and the broader region covered by a weakly labeled terminator]
30
Training the Terminator Models: Strongly Labeled Examples
[diagram: rho-independent examples, rho-dependent examples, and negative examples train the rho-independent terminator model, the rho-dependent terminator model, and the negative model, respectively]
31
Training the Terminator Models: Weakly Labeled Examples
[diagram: training with weakly labeled examples and negative examples, involving the rho-independent terminator model, the rho-dependent terminator model, the negative model, and a combined terminator model]
32
Do Weakly Labeled Terminator Examples Help?
task: classification of terminators (both sub-classes) in E. coli K-12
train SCFG terminator model using:
– S strongly labeled examples and
– W weakly labeled examples
evaluate using area under ROC curves
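For reference, a minimal sketch of the evaluation metric using scikit-learn's roc_auc_score; the labels and scores below are invented placeholders.

```python
from sklearn.metrics import roc_auc_score

# Area under the ROC curve over the model's scores for candidate terminators.
y_true  = [1, 1, 0, 1, 0, 0, 1, 0]                      # 1 = real terminator
y_score = [0.9, 0.7, 0.4, 0.65, 0.2, 0.5, 0.8, 0.3]     # model score per candidate
print(roc_auc_score(y_true, y_score))                   # 1.0 for these separable scores
```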
33
Learning Curves using Weakly Labeled Terminators
[plot: area under ROC curve vs. number of strong positive examples, for 0, 25, and 250 weak examples]
34
Are Weakly Labeled Examples Better than Unlabeled Examples?
train SCFG terminator model using:
– S strongly labeled examples and
– U unlabeled examples
vary S and U to obtain learning curves
35
Training the Terminator Models: Unlabeled Examples
[diagram: unlabeled examples are used to train a combined model that includes the rho-independent terminator model, the rho-dependent terminator model, and the negative model]
36
Learning Curves: Weak vs. Unlabeled
[plots: area under ROC curve vs. number of strong positive examples, comparing training with 0, 25, or 250 weakly labeled examples against training with 0, 25, or 250 unlabeled examples]
37
Are Weakly Labeled Terminators from Predicted Operons Useful?
train operon model with S labeled operons
predict operons
generate W weakly labeled terminators from the W most confident predictions
vary S and W
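A sketch of the weak-labeling step under stated assumptions: keep the W most confident predicted operons and take a window just downstream of each as a weakly labeled terminator example. The prediction format and window size are invented for illustration.

```python
# Generate weakly labeled terminators from the W most confident operon
# predictions.  Prediction tuples and the 60-base window are assumptions.
def weak_terminators_from_predictions(genome, predictions, w, window=60):
    """predictions: list of (confidence, start, end) for predicted operons."""
    top = sorted(predictions, reverse=True)[:w]
    return [genome[end:end + window] for _, _, end in top]

preds = [(0.95, 10, 120), (0.40, 300, 380), (0.80, 150, 260)]
print(len(weak_terminators_from_predictions("acgt" * 200, preds, w=2)))   # 2
```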
38
Learning Curves using Weakly Labeled Terminators
[plot: area under ROC curve vs. number of strong positive examples, for 0, 25, 100, and 200 weak examples]
39
Outline
1. background on bacterial gene regulation
2. background on probabilistic language models
3. predicting transcription units using probabilistic language models
4. augmenting training with “weakly” labeled examples
5. refining the structure of a stochastic context free grammar [Bockhorst & Craven, IJCAI ’01]
40
Learning SCFGs
given the productions of a grammar, can learn the probabilities using the Inside-Outside algorithm
we have developed an algorithm that can add new nonterminals & productions to a grammar during learning
basic idea:
– identify nonterminals that seem to be “overloaded”
– split these nonterminals into two; allow each to specialize
41
Refining the Grammar
in a SCFG there are various “contexts” in which each grammar nonterminal may be used
consider two contexts for a given nonterminal: if its production probabilities look very different depending on the context, we add a new nonterminal and specialize
[figure: the same nonterminal used in two contexts, with differing production probabilities such as 0.4 vs. 0.1]
42
Refining the Grammar
we can compare two probability distributions P and Q using Kullback-Leibler divergence
[figure: production probability distributions P and Q for a nonterminal in two contexts]
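A small sketch of this comparison: the Kullback-Leibler divergence D(P || Q) = sum_i P(i) log(P(i)/Q(i)) between a nonterminal's production distributions in two contexts, with a split threshold that is invented here.

```python
import math

# Compare a nonterminal's production distributions in two contexts with
# KL divergence; the distributions and the 0.1 threshold are invented examples.
def kl_divergence(p, q):
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

context_1 = [0.4, 0.1, 0.5]   # production probabilities in context 1
context_2 = [0.1, 0.4, 0.5]   # production probabilities in context 2
if kl_divergence(context_1, context_2) > 0.1:
    print("split this nonterminal into two specialized copies")
```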
43
Learning Terminator SCFGs
extracted grammar from the literature (~120 productions)
data set consists of 142 known E. coli terminators and 125 sequences that do not contain terminators
learn parameters using Inside-Outside algorithm (an EM algorithm)
consider adding nonterminals guided by three heuristics
– KL divergence
– chi-squared
– random
44
SCFG Accuracy After Adding 25 New Nonterminals
45
SCFG Accuracy vs. Nonterminals Added
46
Conclusions
summary
– we have developed an approach to predicting transcription units in bacterial genomes
– we have predicted a complete set of transcription units for the E. coli genome
advantages of the probabilistic grammar approach
– can readily incorporate background knowledge
– can simultaneously get a coherent set of predictions for a set of related elements
– can be easily extended to incorporate other genomic elements
current directions
– expanding the vocabulary of elements modeled (genes, transcription factor binding sites, etc.)
– handling overlapping elements
– making predictions for multiple related genomes
47
Acknowledgements
Craven Lab: Joe Bockhorst, Keith Noto
David Page, Jude Shavlik
Blattner Lab: Fred Blattner, Jeremy Glasner, Mingzhu Liu, Yu Qiu
funding from National Science Foundation, National Institutes of Health