1
Eukaryotic Gene Finding with GlimmerHMM
Mihaela Pertea, Assistant Research Scientist, CBCB
2
Outline
– Brief overview of the eukaryotic gene finding problem
– GlimmerHMM architecture: signal sensors, coding statistics, GHMMs
– Training GlimmerHMM
– GlimmerHMM results
3
Eukaryotic Gene Finding Goals
Given an uncharacterized DNA sequence, find out:
– Which regions code for proteins?
– Which DNA strand is used to encode each gene?
– Where does each gene start and end?
– Where are the exon-intron boundaries in eukaryotes?
Overall accuracy is usually below 50%.
4
The Problem
Given a string S over the alphabet {A,C,G,T}, find the "optimal" parse of S (with respect to some coding score function): S = s_1 s_2 … s_n. Here, each s_i represents a coding or a non-coding subsequence of S.
5
Gene Finding: Different Approaches
– Similarity-based methods. These use similarity to annotated sequences such as proteins, cDNAs, or ESTs (e.g. Procrustes, GeneWise).
– Ab initio gene finders. These don't use external evidence to predict sequence structure (e.g. GlimmerHMM, GeneZilla, Genscan, SNAP).
– Comparative (homology-based) gene finders. These align genomic sequences from different species and use the alignments to guide the gene predictions (e.g. TWAIN, SLAM, TWINSCAN, SGP-2).
– Integrated approaches. These combine multiple forms of evidence, such as the predictions of other gene finders (e.g. Jigsaw, EuGène, Gaze).
6
Why ab initio gene prediction? Ab initio gene finders can predict novel genes that are not clearly homologous to any previously known gene.
7
Eukaryotic Gene Finding with Parse Graphs
1. Build a parse graph. A parse graph represents all (or all high-scoring) open reading frames. Each vertex is a signal and each edge is a feature such as an exon or intron. Coding statistics and signal sensors are integrated into a mathematical gene model using machine learning techniques: HMMs/GHMMs, decision trees, neural networks, etc.
2. Find the highest-scoring path through the parse graph, usually with dynamic programming, which efficiently enumerates all possible parses, scores them, and chooses the maximal-scoring one (see the sketch below).
Whereas most gene finders report only the highest-scoring gene model, GlimmerHMM's parse graph can be used to explore sub-optimal gene models. When GlimmerHMM's prediction is not exactly correct, the true gene model is often one of the top few sub-optimal parses.
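A minimal sketch of step 2, assuming the parse graph is already built as a DAG of scored feature edges between signal vertices in left-to-right sequence order; the edge format and function name are illustrative, not GlimmerHMM's actual interface:

```python
# Highest-scoring path through a parse graph via dynamic programming.
# Vertices are signals (ATG, GT, AG, stop, ...) numbered in sequence order;
# edges are candidate features (exons/introns) with precomputed scores.

def best_parse(num_vertices, edges):
    """edges: list of (u, v, score, label) with u < v (a DAG in sequence order).
    Returns (best score, labels of the features on the best path)."""
    best = [float("-inf")] * num_vertices
    back = [None] * num_vertices
    best[0] = 0.0                                # vertex 0: start of sequence
    for u, v, score, label in sorted(edges):     # sorted by u => topological order
        if best[u] + score > best[v]:
            best[v] = best[u] + score
            back[v] = (u, label)
    path, v = [], num_vertices - 1               # last vertex: end of sequence
    while back[v] is not None:
        u, label = back[v]
        path.append(label)
        v = u
    return best[num_vertices - 1], path[::-1]

# Toy parse graph: exon-intron-exon (0->1->2->3) versus a single exon (0->3).
edges = [(0, 1, 2.5, "exon"), (1, 2, 1.0, "intron"),
         (2, 3, 2.0, "exon"), (0, 3, 4.0, "single-exon")]
print(best_parse(4, edges))                      # -> (5.5, ['exon', 'intron', 'exon'])
```

Keeping the full `best`/`back` arrays is also what makes sub-optimal parses recoverable: the runner-up scores at the final vertex correspond to alternative gene models.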
8
Signal Sensors
Signals – short sequence patterns in the genomic DNA that are recognized by the cellular machinery.
9
Efficient Decoding via Signal Sensors
During a left-to-right pass over the sequence (e.g. GCTATCGATTCTCTAATCGTCTATCGATCGTGGTATCGTACGTTCATTACTGACT...), each sensor (sensor 1, sensor 2, …, sensor n) detects putative signals, ATG's, GT's, AG's, etc., and inserts them into type-specific signal queues. As the pass proceeds over ...ATG.........ATG......ATG..................GT, each newly detected signal is appended to the elements of the "ATG" queue (or GT queue, etc.), and trellis links connect the queued signals into candidate features for decoding.
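A toy version of this pass, assuming exact-match detection of the signal consensus (real sensors score candidates probabilistically, as the following slides describe); the queue layout is illustrative:

```python
# Left-to-right scan that fills type-specific signal queues.
from collections import defaultdict

def scan_signals(seq):
    queues = defaultdict(list)                 # signal type -> positions
    for i in range(len(seq) - 1):
        if i + 3 <= len(seq) and seq[i:i+3] == "ATG":
            queues["ATG"].append(i)            # putative start codons
        if i + 3 <= len(seq) and seq[i:i+3] in ("TAA", "TAG", "TGA"):
            queues["STOP"].append(i)           # putative stop codons
        if seq[i:i+2] == "GT":
            queues["GT"].append(i)             # putative donor sites
        elif seq[i:i+2] == "AG":
            queues["AG"].append(i)             # putative acceptor sites
    return queues

print(dict(scan_signals("GCTATCGATTCTCTAATCGTCTATCGATCGTGGT")))
```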
10
The Notion of "Eclipsing"
[Figure: the sequence ATGGATGCTACTTGACGTACTTAACTTACCGATCTCT with the reading-frame phase (0 1 2 0 1 2 …) marked under each base, showing a candidate ORF terminated by an in-frame stop codon.]
11
Identifying Signals in DNA with a Signal Sensor
We slide a fixed-length model or "window" along the DNA and evaluate score(signal) at each point:
…ACTGATGCGCGATTAGAGTCATGGCGATGCATCTAGCTAGCTATATCGCGTAGCTAGCTAGCTGATCTACTATCGTAGC…
When the score is greater than some threshold (determined empirically to result in a desired sensitivity), we remember this position as being the potential site of a signal.
The most common signal sensor is the Weight Matrix, which stores one nucleotide distribution per window position. In the slide's example the consensus positions are fixed (A = 100%, T = 100%, G = 100% for the ATG), while the flanking positions hold estimated frequencies such as A = 31%, T = 28%, C = 21%, G = 20%.
12
Signal Sensors in GlimmerHMM
Given a signal X of fixed length λ, estimate the distributions:
– p+(X) = the probability that X is a signal
– p−(X) = the probability that X is not a signal
Compute the score of the signal: score(X) = log [ p+(X) / p−(X) ].
Training examples are aligned on the signal (here, ATG):
…GGCTAGTCATGCCAAACGCGG…
…AAACCTAGTATGCCCACGTTGT…
…ACCCAGTCCCATGACCACACACAACC…
…ACCCTGTGATGGGGTTTTAGAAGGACTC…
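A minimal sketch of such a sensor, assuming p+ and p− factor into independent per-position frequencies (a plain weight matrix; GlimmerHMM's sensors can be higher-order). The training windows and function names are made up for illustration:

```python
# Weight-matrix signal sensor scoring score(X) = log( p+(X) / p-(X) ).
import math

def train_wmm(windows):
    """Estimate per-position nucleotide frequencies from aligned windows."""
    length = len(windows[0])
    counts = [{b: 1.0 for b in "ACGT"} for _ in range(length)]  # pseudocounts
    for w in windows:
        for i, b in enumerate(w):
            counts[i][b] += 1
    return [{b: c[b] / sum(c.values()) for b in c} for c in counts]

def score(window, pos_wmm, neg_wmm):
    """Sum of per-position log-likelihood ratios."""
    return sum(math.log(pos_wmm[i][b] / neg_wmm[i][b])
               for i, b in enumerate(window))

pos = train_wmm(["CATGC", "AATGG", "GATGA"])   # windows aligned on a signal
neg = train_wmm(["CCGTA", "TTACG", "GGCAT"])   # background windows
print(score("TATGC", pos, neg))                 # higher scores favor the signal model
```

In practice the threshold applied to this score is chosen empirically to reach a desired sensitivity, as described on the previous slide.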
13
Start and Stop Codon Scoring
Score all potential start/stop codons within a window of length 19. The probability of generating the window X = x_1 … x_19 is given by a WAM model (inhomogeneous first-order Markov model): P(X) = p_1(x_1) · Π_{i=2..19} p_i(x_i | x_{i−1}).
Example contexts such as CATCCACCATGG illustrate the Kozak consensus around the start codon.
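A sketch of that WAM probability, assuming the factorization above; the uniform tables below are placeholders, not trained values:

```python
# WAM (inhomogeneous first-order Markov model) window probability:
# P(X) = p_1(x_1) * prod_{i>=2} p_i(x_i | x_{i-1}), one table per position.

def wam_prob(window, first, cond):
    """first: dict base -> prob at position 0.
    cond[i]: dict (prev, cur) -> prob at position i (i >= 1)."""
    p = first[window[0]]
    for i in range(1, len(window)):
        p *= cond[i][(window[i-1], window[i])]
    return p

uniform_first = {b: 0.25 for b in "ACGT"}
uniform_cond = [None] + [{(a, b): 0.25 for a in "ACGT" for b in "ACGT"}
                         for _ in range(18)]
print(wam_prob("CATCCACCATGGAGAACCA", uniform_first, uniform_cond))  # = 0.25**19
```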
14
Splice Site Prediction
The splice site score is a combination of:
– first- or second-order inhomogeneous Markov models on windows around the acceptor and donor sites (16 bp for donors, 24 bp for acceptors)
– MDD decision trees
– longer Markov models that capture the difference between coding and non-coding on opposite sides of the site (optional)
– maximal splice site score within 60 bp (optional)
15
Coding-Noncoding Boundaries
A key observation regarding splice sites and start and stop codons is that all of these signals delimit the boundaries between coding and noncoding regions within genes (although the situation becomes more complex in the case of alternative splicing). One might therefore consider weighting a signal score by some function of the scores produced by the coding and noncoding content sensors applied to the regions immediately 5′ and 3′ of the putative signal, as in the splice site scoring formulas below.
16
Local Optimality Criterion
When identifying putative signals in DNA, we may choose to completely ignore low-scoring candidates in the vicinity of higher-scoring candidates. The local optimality criterion applies such a weighting when two putative signals are very close together, with the chosen weight being 0 for the lower-scoring signal and 1 for the higher-scoring one.
17
Maximal Dependence Decomposition (MDD)
Rather than using one weight array matrix for all splice sites, MDD differentiates between splice sites in the training set based on the bases around the AG/GT consensus. Each leaf has a different WAM trained from a different subset of splice sites. The tree is induced empirically for each genome (e.g. the Arabidopsis thaliana MDD trees).
18
MDD Splitting Criterion
MDD uses the χ² measure between the variable K_i, representing the consensus at position i in the sequence, and the variable N_j, which indicates the nucleotide at position j:

χ²(i,j) = Σ_{x,y} (O_{x,y} − E_{x,y})² / E_{x,y}

where O_{x,y} is the observed count of the event that K_i = x and N_j = y, and E_{x,y} is the value of this count expected under the null hypothesis that K_i and N_j are independent. Split if χ² exceeds the critical value for the cutoff P = 0.001 with 3 degrees of freedom.
Example: for the eight aligned sites GAATGGA, GAATGAA, TATTGGA, GAGTGGC, GCATGCT, AGATGGG, CACTGGA, GAATGTA, the 2×4 contingency table between K at position −2 (consensus A) and N at position +5 gives χ²(−2,+5) = 2.9, below the splitting threshold.
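A small sketch of this statistic over a set of aligned sites, reusing the eight example sequences above; the 0-based column indices in the call are illustrative, not the slide's exact −2/+5 labeling:

```python
# Chi-square between the consensus indicator K_i and the nucleotide N_j,
# computed from a 2 x 4 contingency table of observed counts.

def chi_square(sites, i, consensus_base, j):
    obs = {(k, n): 0 for k in (True, False) for n in "ACGT"}
    for s in sites:
        obs[(s[i] == consensus_base, s[j])] += 1   # O_{x,y}
    total = len(sites)
    chi2 = 0.0
    for k in (True, False):
        row = sum(obs[(k, n)] for n in "ACGT")
        for n in "ACGT":
            col = sum(obs[(kk, n)] for kk in (True, False))
            expected = row * col / total           # E_{x,y} under independence
            if expected > 0:
                chi2 += (obs[(k, n)] - expected) ** 2 / expected
    return chi2

sites = ["GAATGGA", "GAATGAA", "TATTGGA", "GAGTGGC",
         "GCATGCT", "AGATGGG", "CACTGGA", "GAATGTA"]
print(chi_square(sites, 1, "A", 6))   # toy indices; 3 df = (2-1)*(4-1)
```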
19
Splice Site Scoring
Donor/acceptor sites at location k:

DS(k) = S_comb(k,16) + (S_cod(k−80) − S_nc(k−80)) + (S_nc(k+2) − S_cod(k+2))
AS(k) = S_comb(k,24) + (S_nc(k−80) − S_cod(k−80)) + (S_cod(k+2) − S_nc(k+2))

S_comb(k,i) = score computed by the Markov model/MDD method using a window of i bases
S_cod/nc(j) = score of the coding/noncoding Markov model for an 80 bp window starting at j
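The two formulas transcribed directly into code, with stub scoring functions standing in for the trained Markov/MDD models (the stubs and toy values are assumptions for illustration):

```python
# DS(k) and AS(k) as defined above; s_comb, s_cod, s_nc are model callbacks.

def donor_score(k, s_comb, s_cod, s_nc):
    """DS(k) = S_comb(k,16) + (S_cod(k-80) - S_nc(k-80)) + (S_nc(k+2) - S_cod(k+2))"""
    return (s_comb(k, 16)
            + (s_cod(k - 80) - s_nc(k - 80))    # coding should precede a donor
            + (s_nc(k + 2) - s_cod(k + 2)))     # noncoding should follow it

def acceptor_score(k, s_comb, s_cod, s_nc):
    """AS(k) = S_comb(k,24) + (S_nc(k-80) - S_cod(k-80)) + (S_cod(k+2) - S_nc(k+2))"""
    return (s_comb(k, 24)
            + (s_nc(k - 80) - s_cod(k - 80))    # noncoding should precede an acceptor
            + (s_cod(k + 2) - s_nc(k + 2)))     # coding should follow it

# Toy stand-ins: pretend coding regions score higher before position 100.
s_comb = lambda k, w: 1.0
s_cod = lambda j: 2.0 if j < 100 else 0.5
s_nc  = lambda j: 0.5 if j < 100 else 2.0
print(donor_score(100, s_comb, s_cod, s_nc))    # 1.0 + 1.5 + 1.5 = 4.0
```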
20
Trade-off between False-Positive Rates and False-Negative Rates
[Plot: false positives (%) versus false negatives (%), for train and test data.]
Arabidopsis thaliana data:

| File | Threshold | FN | FP |
|---|---|---|---|
| Acceptor train file | 2.9276 | 28414 (7.00%) | 8921 (2.16%) |
| Donor train file | 2.2785 | 64411 (7.01%) | 7163 (2.05%) |
| Acceptor test file | 2.1553 | 7152 (10.06%) | 1060 (2.67%) |
| Donor test file | 2.8175 | 4052 (10.16%) | 497 (1.47%) |
21
Coding Statistics
– Unequal usage of codons in coding regions is a universal feature of genomes.
– We can use this feature to differentiate between coding and non-coding regions of the genome.
– A coding statistic is a function that, for a given DNA sequence, computes a likelihood that the sequence is coding for a protein.
– Many different statistics exist: codon usage, hexamer usage, GC content, Markov chains, IMMs, ICMs.
22
3-Periodic ICMs
A three-periodic ICM uses three ICMs in succession to evaluate the different codon positions, which have different statistics. The three ICMs correspond to the three phases.
[Figure: the sequence ATC GAT CGA TCA GCT TAT CGC ATC scored by ICM 0, ICM 1, ICM 2 in rotation, e.g. P[C|M_0], P[G|M_1], P[A|M_2].]
Every base is evaluated in every phase, and the score for a given stretch of (putative) coding DNA is obtained by multiplying the phase-specific probabilities in a mod 3 fashion. GlimmerHMM uses 3-periodic ICMs for coding and homogeneous (non-periodic) ICMs for noncoding DNA.
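A sketch of 3-periodic scoring, with simple phase-specific first-order models standing in for real ICMs (which interpolate over longer contexts):

```python
# Score a putative coding stretch by rotating through three phase models.
import math

def three_periodic_score(seq, phase_models, frame=0):
    """phase_models[p](prev, cur) -> P(cur | prev) for codon phase p."""
    logp = 0.0
    for i in range(1, len(seq)):
        phase = (frame + i) % 3                 # phase rotates every base
        logp += math.log(phase_models[phase](seq[i-1], seq[i]))
    return logp

uniform = lambda prev, cur: 0.25                # placeholder model for a phase
print(three_periodic_score("ATGGATCGATCA", [uniform, uniform, uniform]))
```

With trained tables, scoring the same stretch in all three frames (`frame=0,1,2`) is what lets the decoder pick the correct reading frame.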
23
The Advantages of Periodicity and Interpolation
24
HMMs and Gene Structure
– Nucleotides {A,C,G,T} are the observables.
– Different states generate nucleotides at different frequencies.
– A simple HMM for unspliced genes might generate: AAAGC ATG CAT TTA ACG AGA GCA CAA GGG CTC TAA TGCCG
– The sequence of states is an annotation of the generated string: each nucleotide is generated in an intergenic, start/stop (ATG/TAA), or coding state.
25
Recall: "Pure" HMMs
An HMM is a stochastic machine M = (Q, Σ, P_t, P_e) consisting of the following:
– a finite set of states, Q = {q_0, q_1, …, q_m}
– a finite alphabet Σ = {s_0, s_1, …, s_n}
– a transition distribution P_t : Q × Q → [0,1], i.e. P_t(q_j | q_i)
– an emission distribution P_e : Q × Σ → [0,1], i.e. P_e(s_j | q_i)
An example: M_1 = ({q_0, q_1, q_2}, {Y, R}, P_t, P_e) with
P_t = {(q_0,q_1,1), (q_1,q_1,0.8), (q_1,q_2,0.15), (q_1,q_0,0.05), (q_2,q_2,0.7), (q_2,q_1,0.3)}
P_e = {(q_1,Y,1), (q_1,R,0), (q_2,Y,0), (q_2,R,1)}
so q_1 emits only Y (R = 0%) and q_2 emits only R (Y = 0%).
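M_1 written out and sampled in code; treating the transition back to q_0 as termination is an assumption about the diagram, and the sampler itself is just illustrative:

```python
# The example machine M_1, sampled to produce runs of Y's and R's.
import random

P_t = {"q0": {"q1": 1.0},
       "q1": {"q1": 0.8, "q2": 0.15, "q0": 0.05},   # q0 here treated as halt
       "q2": {"q2": 0.7, "q1": 0.3}}
P_e = {"q1": {"Y": 1.0, "R": 0.0},
       "q2": {"Y": 0.0, "R": 1.0}}

def sample(max_len=20):
    out, q = [], "q0"
    while len(out) < max_len:
        nxt = random.choices(list(P_t[q]), weights=P_t[q].values())[0]
        if nxt == "q0":
            break                                   # returned to start = stop
        q = nxt
        out.append(random.choices(list(P_e[q]), weights=P_e[q].values())[0])
    return "".join(out)

print(sample())   # e.g. "YYYYYRRRYY": runs of Y from q1, runs of R from q2
```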
26
HMMs & Geometric Feature Lengths
[Figure: observed exon length distribution versus the geometric distribution implied by an HMM self-transition.] A state that returns to itself with probability p generates features whose lengths follow a geometric distribution, P(length = d) = (1 − p) · p^(d−1), which real exon lengths do not follow.
27
Length Distributions in Human
Feature lengths were computed for human chromosome 22 with RefSeq annotation (as of July 2005).
28
Generalized Hidden Markov Models
Advantages:
– submodel abstraction
– architectural simplicity
– state duration modeling
Disadvantages:
– decoding complexity
29
Generalized HMMs
A GHMM is a stochastic machine M = (Q, Σ, P_t, P_e, P_d) consisting of the following:
– a finite set of states, Q = {q_0, q_1, …, q_m}
– a finite alphabet Σ = {s_0, s_1, …, s_n}
– a transition distribution P_t : Q × Q → [0,1], i.e. P_t(q_j | q_i)
– an emission distribution P_e : Q × Σ* × ℕ → [0,1], i.e. P_e(S_j | q_i, d_j)
– a duration distribution P_d : Q × ℕ → [0,1], i.e. P_d(d_j | q_i)
Key differences:
– each state now emits an entire subsequence rather than just one symbol
– feature lengths are now explicitly modeled, rather than implicitly geometric
– emission probabilities can now be modeled by any arbitrary probabilistic model
– there tend to be far fewer states => simplicity & ease of modification
Ref: Kulp D, Haussler D, Reese M, Eeckman F (1996) A generalized hidden Markov model for the recognition of human genes in DNA. ISMB '96.
30
Recall: Decoding with an HMM

φ* = argmax_φ P(φ | S) = argmax_φ Π_{i=1..L} P_e(s_i | q_i) · P_t(q_i | q_{i−1})

(emission prob. × transition prob.)
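A textbook Viterbi implementation of this argmax (not GlimmerHMM's code), run on the example machine M_1 from earlier:

```python
# Viterbi decoding: the highest-probability state path for a sequence.
import math

def viterbi(seq, states, log_pt, log_pe, start):
    """log_pt[a][b], log_pe[a][sym]: log transition/emission probabilities."""
    V = [{s: log_pt[start][s] + log_pe[s][seq[0]] for s in states}]
    back = [{}]
    for t in range(1, len(seq)):
        V.append({})
        back.append({})
        for s in states:
            prev = max(states, key=lambda r: V[t-1][r] + log_pt[r][s])
            V[t][s] = V[t-1][prev] + log_pt[prev][s] + log_pe[s][seq[t]]
            back[t][s] = prev
    last = max(states, key=lambda s: V[-1][s])
    path = [last]
    for t in range(len(seq) - 1, 0, -1):        # trace back the best path
        path.append(back[t][path[-1]])
    return path[::-1]

LOG = lambda p: math.log(p) if p > 0 else float("-inf")
log_pt = {"q0": {"q1": LOG(1.0), "q2": LOG(0.0)},
          "q1": {"q1": LOG(0.8), "q2": LOG(0.15)},
          "q2": {"q1": LOG(0.3), "q2": LOG(0.7)}}
log_pe = {"q1": {"Y": LOG(1.0), "R": LOG(0.0)},
          "q2": {"Y": LOG(0.0), "R": LOG(1.0)}}
print(viterbi("YYRRY", ["q1", "q2"], log_pt, log_pe, "q0"))
# -> ['q1', 'q1', 'q2', 'q2', 'q1']
```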
31
Decoding with a GHMM

φ* = argmax_φ Π_j P_e(S_j | q_j, d_j) · P_t(q_j | q_{j−1}) · P_d(d_j | q_j)

(emission prob. × transition prob. × duration prob.)
32
Gene Prediction with a GHMM
Given a sequence S, we would like to determine the parse of that sequence which segments the DNA into the most likely exon/intron structure. The parse consists of the coordinates of the predicted exons, and corresponds to the precise sequence of states during the operation of the GHMM (and their durations, which equal the number of symbols each state emits). This is the same as in an HMM except that in the HMM each state emits bases with fixed probability, whereas in the GHMM each state emits an entire feature such as an exon or intron.
[Figure: a sequence S (AGCTAGCAGTCGATCATGG…) with a predicted parse of three exons marked along it.]
33
GHMMs Summary
– GHMMs generalize HMMs by allowing each state to emit a subsequence rather than just a single symbol.
– Whereas HMMs model all feature lengths using a geometric distribution, coding features can be modeled using an arbitrary length distribution in a GHMM.
– Emission models within a GHMM can be any arbitrary probabilistic model ("submodel abstraction"), such as a neural network or decision tree.
– GHMMs tend to have many fewer states => simplicity & modularity.
34
GlimmerHMM Architecture
[State diagram: phase-specific introns I0, I1, I2 and four exon types (initial, internal Exon0/1/2, terminal, single) on the forward (+) strand, mirrored on the backward (−) strand, all connected through the intergenic state.]
– Uses a GHMM to model gene structure (explicit length modeling)
– WAM and MDD for splice sites
– ICMs for exons, introns, and intergenic regions
– Different model parameters for regions with different GC content
– Can emit a graph of high-scoring ORFs
35
Training the Gene Finder
Estimate the model parameters θ = (P_t, P_e, P_d).
36
Training for GHMMs
– P_t and P_e: estimate via labeled training data (maximum-likelihood counts of observed transitions and emissions).
– P_d: construct a histogram of observed feature lengths.
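A schematic of these estimates, assuming toy state labels aligned one-to-one with the emitted symbols; the two-state 'E'/'I' data format is made up for illustration:

```python
# MLE of transitions and emissions, plus a length histogram for durations.
from collections import Counter

def train(labeled):
    """labeled: list of (state_path, symbol_string) pairs, aligned 1:1."""
    t_counts, e_counts, d_counts = Counter(), Counter(), Counter()
    for path, symbols in labeled:
        for a, b in zip(path, path[1:]):
            t_counts[(a, b)] += 1                  # observed transitions
        for q, s in zip(path, symbols):
            e_counts[(q, s)] += 1                  # observed emissions
        run, length = path[0], 1                   # feature lengths = run lengths
        for q in path[1:]:
            if q == run:
                length += 1
            else:
                d_counts[(run, length)] += 1
                run, length = q, 1
        d_counts[(run, length)] += 1
    def normalize(counts):
        totals = Counter()
        for (a, _), c in counts.items():
            totals[a] += c
        return {k: c / totals[k[0]] for k, c in counts.items()}
    return normalize(t_counts), normalize(e_counts), normalize(d_counts)

data = [("EEEIIE", "ATGGTA"), ("EEII", "ATGT")]
P_t, P_e, P_d = train(data)
print(P_t[("E", "I")], P_d[("I", 2)])   # 0.4, 1.0 on this toy data
```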
37
The Need for Training Organism-Specific Gene Finders
38
– parameter mismatching: train on a close relative
– use a comparative gene finder trained on a close relative
– use BLAST to find conserved genes, curate them, and use them as a training set
– augment the training set with genes from related organisms, using weighting
– manufacture artificial training data (e.g. long ORFs)
– be sensitive to sample sizes during training by reducing the number of parameters (to reduce overtraining):
  – fewer states (1 vs. 4 exon states, intron = intergenic)
  – lower-order models
– pseudocounts (see the sketch below)
– smoothing (especially for length distributions)
Gene Finding in the Dark: Dealing with Small Sample Sizes
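The pseudocount remedy in its simplest form: additive smoothing, so unseen events keep nonzero probability when the training sample is tiny:

```python
# Additive (pseudocount) smoothing of nucleotide frequencies.
from collections import Counter

def smoothed_freqs(observed, alphabet="ACGT", pseudocount=1.0):
    counts = Counter(observed)
    total = len(observed) + pseudocount * len(alphabet)
    return {b: (counts[b] + pseudocount) / total for b in alphabet}

print(smoothed_freqs("AAAT"))   # G and C get small but nonzero probability
# -> {'A': 0.5, 'C': 0.125, 'G': 0.125, 'T': 0.25}
```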
39
SLOP = Separate Local Optimization of Parameters
[Diagram: a gene set G (1000 genes) is split into train (800) and test (200); donors, acceptors, starts, stops, exons, introns, and intergenic regions are extracted from the training set and passed to train-model, producing the model files; evaluation on the test set yields the reported accuracy.]
40
GRAPE = GRadient Ascent Parameter Estimation
[Diagram: a gene set T (1000 genes) is split into train (800) and test (200); MLE on the training set produces initial model files, then gradient ascent tunes the control parameters against evaluation accuracy on the test set ("peeking"), yielding the final model files; final evaluation on a separate unseen set (1000 genes) gives the reported accuracy.]
41
Evaluation of Gene Finding Programs
Nucleotide-level accuracy:
[Figure: reality vs. prediction tracks, with each nucleotide classified as TP, TN, FP, or FN.]
Sensitivity: Sn = TP / (TP + FN)
Specificity: Sp = TP / (TP + FP)
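The two definitions in code, computed from boolean coding masks (a toy check; note that gene-finding "specificity" is what other fields call precision):

```python
# Nucleotide-level sensitivity and specificity from coding/noncoding masks.

def nucleotide_accuracy(reality, prediction):
    tp = sum(r and p for r, p in zip(reality, prediction))
    fn = sum(r and not p for r, p in zip(reality, prediction))
    fp = sum(p and not r for r, p in zip(reality, prediction))
    sn = tp / (tp + fn) if tp + fn else 0.0   # fraction of real coding found
    sp = tp / (tp + fp) if tp + fp else 0.0   # fraction of predictions correct
    return sn, sp

reality    = [0, 0, 1, 1, 1, 1, 0, 0, 1, 1]
prediction = [0, 1, 1, 1, 1, 0, 0, 0, 1, 1]
print(nucleotide_accuracy(reality, prediction))   # (0.833..., 0.833...)
```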
42
More Measures of Prediction Accuracy
Exon-level accuracy:
[Figure: reality vs. prediction exon tracks, illustrating correct exons (both boundaries match exactly), wrong exons, and missing exons.]
43
GlimmerHMM on Human Data

| Program | Nuc Sens | Nuc Spec | Nuc Acc | Exon Sens | Exon Spec | Exon Acc | Exact Genes |
|---|---|---|---|---|---|---|---|
| GlimmerHMM | 86% | 72% | 79% | 72% | 62% | 67% | 17% |
| Genscan | 86% | 68% | 77% | 69% | 60% | 65% | 13% |

GlimmerHMM's performance compared to Genscan on 963 human RefSeq genes selected randomly from all 24 chromosomes, non-overlapping with the training set. The test set contains 1000 bp of untranslated sequence on either side (5′ or 3′) of the coding portion of each gene.
44
GlimmerHMM on Other Species

| Species | Nucleotide Sn | Nucleotide Sp | Exon Sn | Exon Sp | Correctly Predicted Genes | Size of test set |
|---|---|---|---|---|---|---|
| Arabidopsis thaliana | 97% | 99% | 84% | 89% | 60% | 809 genes |
| Cryptococcus neoformans | 96% | 99% | 86% | 88% | 53% | 350 genes |
| Coccidioides posadasii | 99% | 99% | 84% | 86% | 60% | 503 genes |
| Oryza sativa | 95% | 98% | 77% | 80% | 37% | 1323 genes |

GlimmerHMM has also been trained on Aspergillus fumigatus, Entamoeba histolytica, Toxoplasma gondii, Brugia malayi, Trichomonas vaginalis, and many others.
45
Arabidopsis thaliana Test Results
GlimmerHMM is a high-performance ab initio gene finder.

| Program | Nucleotide Sn | Sp | Acc | Exon Sn | Sp | Acc | Gene Sn | Sp | Acc |
|---|---|---|---|---|---|---|---|---|---|
| GlimmerHMM | 97 | 99 | 98 | 84 | 89 | 86.5 | 60 | 61 | 60.5 |
| SNAP | 96 | 99 | 97.5 | 83 | 85 | 84 | 60 | 57 | 58.5 |
| Genscan+ | 93 | 99 | 96 | 74 | 81 | 77.5 | 35 | | |

All three programs were tested on a data set of 809 genes, which did not overlap with the training data set of GlimmerHMM. All genes were confirmed by full-length Arabidopsis cDNAs and carefully inspected to remove homologues.