Slide 1: CS262 Lecture 9, Win07, Batzoglou. Conditional Random Fields: a brief description.

Slide 2: HMMs and CRFs. Features used in the objective function P(x, π) for an HMM: a_kl, e_k(b), where b ∈ Σ. Viterbi recurrence: V_l(i) = V_k(i-1) + (a(k, l) + e(l, i)) = V_k(i-1) + g(k, l, x_i). Let's generalize g: V_k(i-1) + g(k, l, x, i). [Figure: HMM trellis with states 1, 2, …, K at each of the positions x_1, x_2, …, x_N.]

Slide 3: CRFs: Features. 1. Define a set of features that may be important; number the features 1…n: f_1(k, l, x, i), …, f_n(k, l, x, i). Features are indicator (true/false) variables. 2. Train a weight w_j for each feature f_j. Then g(k, l, x, i) = Σ_{j=1…n} f_j(k, l, x, i) · w_j. [Figure: a feature at position i may look at many positions x_1, …, x_10 of the sequence as well as the states π_i and π_{i-1}.]
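
A minimal sketch of this weighted-feature score in Python; the two indicator features, their weights, and the state names "A"/"B" are made-up illustrations, not features from the lecture:

# Illustrative CRF edge score g(k, l, x, i): a weighted sum of indicator features.
# Each feature looks at the previous state k, the current state l, the whole
# sequence x, and the current position i, and returns 0 or 1.

def f_transition_AB(k, l, x, i):
    return 1 if (k == "A" and l == "B") else 0

def f_emission_GC_rich(k, l, x, i):
    # example of a feature that looks at more than one position of x
    window = x[max(0, i - 2): i + 3]
    return 1 if l == "B" and window.count("G") + window.count("C") >= 3 else 0

features = [f_transition_AB, f_emission_GC_rich]
weights  = [1.5, 0.7]   # w_j, to be learned by conditional training

def g(k, l, x, i):
    return sum(w * f(k, l, x, i) for f, w in zip(features, weights))

print(g("A", "B", "ACGCGTT", 3))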

Slide 4: "Features" that depend on many positions in x. The score of a parse depends on all of x at each position. We can still do Viterbi, because state π_i only "looks" at the previous state π_{i-1} and the constant sequence x. [Figure: HMM vs. CRF graphical models over states π_1, …, π_6 and symbols x_1, …, x_6.]
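
A hedged sketch of Viterbi decoding with the generalized score g(k, l, x, i); the state set, the score function g, and the initial scores are placeholders to be supplied by the caller:

def viterbi_crf(x, states, g, init_score):
    """Best state path under V_l(i) = max_k [ V_k(i-1) + g(k, l, x, i) ]."""
    n = len(x)
    V = [{l: init_score(l, x) for l in states}]   # scores at position 0
    back = [{}]
    for i in range(1, n):
        V.append({})
        back.append({})
        for l in states:
            best_k = max(states, key=lambda k: V[i - 1][k] + g(k, l, x, i))
            V[i][l] = V[i - 1][best_k] + g(best_k, l, x, i)
            back[i][l] = best_k
    # trace back the highest-scoring path
    last = max(states, key=lambda l: V[n - 1][l])
    path = [last]
    for i in range(n - 1, 0, -1):
        path.append(back[i][path[-1]])
    return list(reversed(path))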

Slide 5: Conditional Training. Hidden Markov Model training: given a training sequence x and its "true" parse π, maximize P(x, π). Disadvantage: P(x, π) = P(π | x) P(x), where P(π | x) is the quantity we care about so as to get a good parse, and P(x) is the quantity we don't care so much about, because x is always given.

Slide 6: Conditional Training. P(x, π) = P(π | x) P(x), so P(π | x) = P(x, π) / P(x). Recall F_j(x, π) = number of times feature f_j occurs in (x, π) = Σ_{i=1…N} f_j(π_{i-1}, π_i, x, i), i.e. the count of f_j over (x, π). In HMMs, let w_j denote the weight of the j-th feature: w_j = log(a_kl) or log(e_k(b)). Then, HMM: P(x, π) = exp[ Σ_{j=1…n} w_j · F_j(x, π) ]. CRF: Score(x, π) = exp[ Σ_{j=1…n} w_j · F_j(x, π) ].
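
A small sketch of the global feature counts F_j(x, π) and of the resulting exponentiated score, reusing the kind of feature list and weights assumed in the earlier sketch:

import math

def global_counts(x, path, features):
    """F_j(x, pi) = sum over positions of f_j(pi_{i-1}, pi_i, x, i)."""
    return [sum(f(path[i - 1], path[i], x, i) for i in range(1, len(path)))
            for f in features]

def score(x, path, features, weights):
    """exp( sum_j w_j * F_j(x, pi) ); for an HMM this equals P(x, pi)
    when the w_j are the log transition/emission parameters."""
    F = global_counts(x, path, features)
    return math.exp(sum(w * Fj for w, Fj in zip(weights, F)))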

Slide 7: Conditional Training. In HMMs, P(π | x) = P(x, π) / P(x), with P(x, π) = exp[ w^T F(x, π) ] and P(x) = Σ_π exp[ w^T F(x, π) ] =: Z. In a CRF we can do the same thing to normalize Score(x, π) into a probability: P_CRF(π | x) = exp[ Σ_{j=1…n} w_j · F_j(x, π) ] / Z. QUESTION: Why is this a probability?
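
Z can be computed without enumerating paths, by a forward-style dynamic program; a rough sketch (no log-space numerical care is taken, and g/init_score are the placeholder functions from the Viterbi sketch):

import math

def partition_function(x, states, g, init_score):
    """Z = sum over all state paths of exp(total score), via a forward-style
    recursion: alpha_l(i) = sum_k alpha_k(i-1) * exp(g(k, l, x, i))."""
    alpha = {l: math.exp(init_score(l, x)) for l in states}   # position 0
    for i in range(1, len(x)):
        alpha = {l: sum(alpha[k] * math.exp(g(k, l, x, i)) for k in states)
                 for l in states}
    return sum(alpha.values())

# P_CRF(pi | x) = exp(total score of pi) / Z; it is a probability because Z
# sums exactly the same exponentiated score over every possible path.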

Slide 8: Conditional Training. 1. We need to be given a set of sequences x and their "true" parses π. 2. Calculate Z by a sum-of-paths algorithm similar to the HMM forward algorithm; we can then easily calculate P(π | x). 3. Calculate the partial derivative of P(π | x) with respect to each parameter w_j (not covered; akin to forward/backward), and update each parameter with gradient descent on the negative log of P(π | x). 4. Continue until convergence to the optimal set of weights: the negative log of P(π | x) = exp[ Σ_{j=1…n} w_j · F_j(x, π) ] / Z is convex in w, so gradient descent reaches the global optimum.
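
The derivative is not derived in the lecture; the standard CRF result is that ∂/∂w_j of log P(π | x) equals the observed feature count minus its expectation under the model. A sketch of the training loop under that assumption, with the expectation computation left as a stub since it needs the forward/backward machinery:

def train_crf(examples, n_features, observed_counts, expected_counts,
              lr=0.1, epochs=50):
    """Gradient ascent on the conditional log-likelihood sum_x log P(pi | x).
    observed_counts(x, pi)      -> [F_1(x, pi), ..., F_n(x, pi)]
    expected_counts(x, weights) -> the same counts averaged over P(. | x),
                                   computed by a forward/backward-style pass.
    Gradient (standard CRF result, assumed here): observed minus expected."""
    weights = [0.0] * n_features
    for _ in range(epochs):
        grad = [0.0] * n_features
        for x, pi in examples:
            obs = observed_counts(x, pi)
            exp = expected_counts(x, weights)
            for j in range(n_features):
                grad[j] += obs[j] - exp[j]
        weights = [w + lr * gj for w, gj in zip(weights, grad)]
    return weights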

Slide 9: Conditional Random Fields: Summary. 1. Ability to incorporate complicated, non-local feature sets: we do away with some independence assumptions of HMMs, yet parsing is still equally efficient. 2. Conditional training: we train parameters that are best for parsing, not for modeling. This needs labeled examples, i.e. sequences x with "true" parses π. (It is possible to train on unlabeled sequences, but it is unreasonable to train too many parameters this way.) Training is significantly slower: many iterations of forward/backward.

Slide 10: DNA Sequencing

Slide 11: Some Terminology
insert: a fragment that was incorporated in a circular genome, and can be copied (cloned)
vector: the circular genome (host) that incorporated the fragment
BAC: Bacterial Artificial Chromosome, a type of insert-vector combination, typically 100-200 kb long
read: a 500-900 bp long word that comes out of a sequencing machine
coverage: the average number of reads (or inserts) that cover a position in the target DNA piece
shotgun sequencing: the process of obtaining many reads from random locations in DNA, to detect overlaps and assemble

Slide 12: Sequencing and Fragment Assembly. [Figure: short reads such as AGTAGCACAGA, CTACGACGAGA, CGATCGTGCGA, … sampled from a genome of 3×10^9 nucleotides.] 50% of human DNA is composed of repeats. Error! Two distant regions get glued together.

Slide 13: What can we do about repeats? Two main approaches: (1) cluster the reads, or (2) link the reads.

Slide 16: Sequencing and Fragment Assembly. [Figure: reads from a 3×10^9-nucleotide genome containing a repeat R flanked by unique regions A, B, C, D; from the reads alone it is ambiguous whether the true layout is A-R-B and C-R-D, or A-R-D and C-R-B.]

Slide 17: Sequencing and Fragment Assembly. [Figure: reads such as AGTAGCACAGA, CTACGACGAGA, … sampled from a genome of 3×10^9 nucleotides.]

Slide 18: Strategies for whole-genome sequencing
1. Hierarchical (clone-by-clone): (i) break the genome into many long pieces; (ii) map each long piece onto the genome; (iii) sequence each piece with shotgun. Examples: yeast, worm, human, rat.
2. Online version of (1) (walking): (i) break the genome into many long pieces; (ii) start sequencing each piece with shotgun; (iii) construct the map as you go. Example: rice genome.
3. Whole-genome shotgun: one large shotgun pass on the whole genome. Examples: Drosophila, human (Celera), Neurospora, mouse, rat, dog.

Slide 19: Whole Genome Shotgun Sequencing. [Figure: the genome is cut many times at random; forward-reverse paired reads (~500 bp each, separated by a known distance) come from the two ends of plasmids (2-10 kbp) and cosmids (40 kbp).]

Slide 20: Fragment Assembly (in whole-genome shotgun sequencing)

Slide 21: Fragment Assembly. Given N reads, where N ~ 30 million, we need to use a linear-time algorithm.

Slide 22: Steps to Assemble a Genome
1. Find overlapping reads
2. Merge some "good" pairs of reads into longer contigs
3. Link contigs to form supercontigs
4. Derive consensus sequence (..ACGATTACAATAGGTT..)
Some terminology:
read: a 500-900 bp long word that comes out of the sequencer
mate pair: a pair of reads from the two ends of the same insert fragment
contig: a contiguous sequence formed by several overlapping reads with no gaps
supercontig: an ordered and oriented set (scaffold) of contigs, usually linked by mate pairs
consensus: the sequence derived from the multiple alignment of reads in a contig

Slide 23: 1. Find Overlapping Reads. From each read (e.g. gggcccaaactgcagtac, aaactgcagtacggatct, gtacggatctactacaca), list every k-mer it contains as a tuple (read, position, word, orientation); then re-sort these tuples by word, as (word, read, orientation, position), so that reads sharing a k-mer end up next to each other.
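
A small sketch of this word-indexing step, assuming reads are plain strings and ignoring reverse-complement orientation for brevity; k and the read set are just the slide's toy example:

from collections import defaultdict

def kmer_table(reads, k):
    """Map each k-mer word to the list of (read index, position) where it occurs,
    i.e. the (word, read, pos) view of the (read, pos, word) tuples."""
    table = defaultdict(list)
    for r, read in enumerate(reads):
        for pos in range(len(read) - k + 1):
            table[read[pos:pos + k]].append((r, pos))
    return table

def candidate_overlaps(reads, k):
    """Pairs of reads that share at least one k-mer."""
    pairs = set()
    for hits in kmer_table(reads, k).values():
        for i in range(len(hits)):
            for j in range(i + 1, len(hits)):
                pairs.add((hits[i][0], hits[j][0]))
    return pairs

reads = ["gggcccaaactgcagtac", "aaactgcagtacggatct", "gtacggatctactacaca"]
print(candidate_overlaps(reads, 9))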

Slide 24: 1. Find Overlapping Reads. Find pairs of reads sharing a k-mer, k ~ 24; extend to a full alignment, and throw the pair away if it is not >98% similar. [Figure: a shared k-mer seed, e.g. in TAGATTACACAGATTAC, extended into a full read-vs-read alignment.] Caveat: repeats. A k-mer that occurs N times causes O(N^2) read-vs-read comparisons; ALU k-mers could cause up to 1,000,000^2 comparisons. Solution: discard all k-mers that occur "too often", setting the cutoff to balance the sensitivity/speed tradeoff according to the genome at hand and the computing resources available.
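
A rough sketch of the repeat-k-mer filter and of the >98% identity check, continuing the hypothetical kmer_table helper above; the cutoff of 50 occurrences is an arbitrary illustration:

def filter_frequent_kmers(table, cutoff=50):
    """Discard k-mers that occur 'too often' (likely repeats such as ALUs),
    so they never trigger O(N^2) read-vs-read comparisons."""
    return {word: hits for word, hits in table.items() if len(hits) <= cutoff}

def percent_identity(a, b):
    """Identity of two aligned strings (gap-free for simplicity)."""
    matches = sum(1 for ca, cb in zip(a, b) if ca == cb)
    return 100.0 * matches / max(len(a), 1)

def accept_overlap(read_a, read_b, offset, min_identity=98.0):
    """Extend a shared k-mer to the full ungapped overlap implied by 'offset'
    (start of read_b within read_a) and keep it only if similar enough."""
    overlap_a = read_a[offset:]
    overlap_b = read_b[:len(overlap_a)]
    return percent_identity(overlap_a, overlap_b) >= min_identity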

Slide 25: 1. Find Overlapping Reads. Create local multiple alignments from the overlapping reads. [Figure: overlapping copies of reads such as TAGATTACACAGATTACTGA and TAG-TTACACAGATTATTGA stacked into a local multiple alignment.]

Slide 26: 1. Find Overlapping Reads. Correct errors using the multiple alignment: in the example, one column calls for inserting an A and another for replacing a T with a C. Correlated errors are probably caused by repeats, so the corresponding overlaps are disentangled instead of "corrected". In practice, error correction removes up to 98% of the errors.
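
A toy sketch of correcting one read against the column majority of a local multiple alignment; real assemblers also use base qualities and treat correlated disagreements as repeat signals rather than errors:

from collections import Counter

def correct_read(aligned_reads, target_index):
    """Replace each letter of one aligned read by the column majority when the
    majority clearly disagrees with it; weak or gap majorities are ignored."""
    corrected = []
    for col, letter in enumerate(aligned_reads[target_index]):
        column = [r[col] for r in aligned_reads if col < len(r)]
        best, votes = Counter(column).most_common(1)[0]
        # only overwrite when the majority is strong and is a real base
        if best != letter and best != "-" and votes >= 0.7 * len(column):
            corrected.append(best)
        else:
            corrected.append(letter)
    return "".join(corrected)

alignment = ["TAGATTACACAGATTACTGA",
             "TAG-TTACACAGATTATTGA",
             "TAGATTACACAGATTACTGA",
             "TAGATTACACAGATTACTGA"]
print(correct_read(alignment, 1))   # the gap becomes A, the stray T becomes C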

Slide 27: 2. Merge Reads into Contigs. Overlap graph: nodes are the reads r_1, …, r_n; edges are overlaps (r_i, r_j, shift, orientation, score). [Figure: reads coming from two regions of the genome (blue and red) that contain the same repeat; of course, we don't know the "color" of these nodes.]
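
A bare-bones version of such an overlap graph, assuming the overlaps have already been detected; the field names are illustrative:

from dataclasses import dataclass

@dataclass(frozen=True)
class Overlap:
    a: int            # index of read r_a
    b: int            # index of read r_b
    shift: int        # offset of r_b's start relative to r_a's start
    orientation: str  # '+' same strand, '-' reverse complement
    score: float

class OverlapGraph:
    def __init__(self, n_reads):
        self.adj = {r: [] for r in range(n_reads)}   # one node per read

    def add_overlap(self, ov: Overlap):
        self.adj[ov.a].append(ov)
        self.adj[ov.b].append(ov)

    def overlaps_of(self, read):
        return self.adj[read]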

Slide 28: 2. Merge Reads into Contigs. We want to merge reads only up to potential repeat boundaries. [Figure: a repeat region; merging within unique sequence gives a unique contig, while merging across the repeat gives an overcollapsed contig.]

Slide 29: 2. Merge Reads into Contigs. Ignore non-maximal reads (reads entirely contained in other reads) and merge only maximal reads into contigs. [Figure: reads around a repeat region.]

Slide 30: 2. Merge Reads into Contigs. Remove transitively inferable overlaps: if read r overlaps reads r_1 and r_2 to the right, and r_1 overlaps r_2, then the overlap (r, r_2) can be inferred from (r, r_1) and (r_1, r_2). [Figure: reads r, r_1, r_2, r_3 overlapping left to right.]
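
A small sketch of removing transitively inferable overlaps from a right-overlap adjacency map; shifts, orientations, and scores are omitted:

def remove_transitive(right_overlaps):
    """right_overlaps: dict read -> set of reads it overlaps to the right.
    Drop (r, r2) whenever some r1 satisfies r -> r1 and r1 -> r2."""
    reduced = {}
    for r, succs in right_overlaps.items():
        keep = set(succs)
        for r1 in succs:
            for r2 in right_overlaps.get(r1, ()):
                if r2 in keep:
                    keep.discard(r2)   # (r, r2) is inferable from (r, r1), (r1, r2)
        reduced[r] = keep
    return reduced

graph = {"r": {"r1", "r2"}, "r1": {"r2", "r3"}, "r2": {"r3"}}
print(remove_transitive(graph))   # (r, r2) and (r1, r3) get removed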

Slide 31: 2. Merge Reads into Contigs. [Figure only.]

Slide 32: 2. Merge Reads into Contigs. Ignore "hanging" reads when detecting repeat boundaries. [Figure: a read whose end disagrees with the others may reflect a sequencing error rather than a true repeat boundary.]

Slide 33: Overlap graph after forming contigs. Unitigs: Gene Myers, '95.

Slide 34: Repeats, errors, and contig lengths. Repeats shorter than the read length are easily resolved: a read that spans a repeat disambiguates the order of the flanking regions. Repeats with more base-pair differences than the sequencing error rate are also OK: we throw away overlaps between two reads that lie in different copies of the repeat. To make the genome appear less repetitive, try to increase the read length and decrease the sequencing error rate. Role of error correction: it discards up to 98% of single-letter sequencing errors; a lower error rate decreases the effective repeat content, which increases contig length.

Slide 35: 2. Merge Reads into Contigs. Insert non-maximal reads whenever it is unambiguous to do so.

Slide 36: 3. Link Contigs into Supercontigs. [Figure: mate-pair link patterns diagnose contigs: too dense → overcollapsed; inconsistent links → possibly overcollapsed; normal density → OK.]

Slide 37: 3. Link Contigs into Supercontigs. Find all links between unique contigs, and connect contigs incrementally if they are joined by ≥ 2 links, forming a supercontig (aka scaffold).
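
A rough sketch of the "≥ 2 links" rule, assuming each mate pair has already been placed on a pair of contigs, and ignoring contig order and orientation:

from collections import Counter, defaultdict

def build_supercontigs(mate_pair_links, min_links=2):
    """mate_pair_links: list of (contig_a, contig_b) pairs, one per mate pair.
    Connect two contigs only if they are joined by at least min_links pairs,
    then return the connected components as supercontigs (scaffolds)."""
    counts = Counter(tuple(sorted(link)) for link in mate_pair_links)
    adj = defaultdict(set)
    for (a, b), n in counts.items():
        if n >= min_links:
            adj[a].add(b)
            adj[b].add(a)
    seen, scaffolds = set(), []
    for start in adj:
        if start in seen:
            continue
        stack, comp = [start], set()
        while stack:
            c = stack.pop()
            if c in comp:
                continue
            comp.add(c)
            stack.extend(adj[c] - comp)
        seen |= comp
        scaffolds.append(comp)
    return scaffolds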

Slide 38: 3. Link Contigs into Supercontigs. Fill gaps in supercontigs with paths of repeat contigs.

Slide 39: 4. Derive Consensus Sequence. Derive the multiple alignment from the pairwise read alignments. [Figure: several reads, e.g. TAGATTACACAGATTACTGACTTGATGGCGTAA…CTA, stacked into a contig's multiple alignment.] Derive each consensus base by weighted voting. (Alternative: take the maximum-quality letter.)
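
A toy sketch of weighted voting per column, where each read's letter votes with a per-base weight such as its quality score; the alignment and the quality values below are made up:

from collections import defaultdict

def consensus(aligned_reads, qualities):
    """aligned_reads: equal-length rows of a contig's multiple alignment ('-' for gaps).
    qualities: matching rows of per-base weights. Each column's consensus is the
    letter with the highest total weight; a winning gap contributes nothing."""
    cols = len(aligned_reads[0])
    out = []
    for col in range(cols):
        votes = defaultdict(float)
        for row, quals in zip(aligned_reads, qualities):
            votes[row[col]] += quals[col]
        best = max(votes, key=votes.get)
        if best != "-":
            out.append(best)
    return "".join(out)

reads = ["TAGATTACA", "TAGCTTACA", "TAG-TTACA"]
quals = [[30] * 9, [10] * 9, [20] * 9]
print(consensus(reads, quals))   # TAGATTACA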

Slide 40: Some Assemblers
PHRAP: early assembler, widely used, good model of read errors; overlap O(n^2) → layout (no mate pairs) → consensus.
Celera: first assembler to handle large genomes (fly, human, mouse); overlap → layout → consensus.
Arachne: public assembler (mouse, several fungi); overlap → layout → consensus.
Phusion: overlap → clustering → PHRAP → assemblage → consensus.
Euler: indexing → Euler graph → layout by picking paths → consensus.

Slide 41: Quality of assemblies: mouse (7.7X sequence coverage). [Figure: N50 contig length.] Terminology: N50 contig length: if we sort contigs from largest to smallest and start covering the genome in that order, N50 is the length of the contig that just covers the 50th percentile.
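
A short sketch of that N50 definition; here the 50th percentile is taken over the total assembled length, which is the common convention:

def n50(contig_lengths):
    """Sort contigs from largest to smallest and walk down the list;
    N50 is the length of the contig that first covers half the total length."""
    total = sum(contig_lengths)
    covered = 0
    for length in sorted(contig_lengths, reverse=True):
        covered += length
        if covered >= total / 2:
            return length
    return 0

print(n50([100, 80, 60, 40, 20]))   # 80, since 100 + 80 = 180 >= 150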

Slide 42: Quality of assemblies: dog (7.5X sequence coverage).

Slide 43: History of WGA (whole-genome assembly). 1982: λ-virus, 48,502 bp. 1995: H. influenzae, 1 Mbp. 2000: fly, 100 Mbp. 2001 – present: human (3 Gbp), mouse (2.5 Gbp), rat*, chicken, dog, chimpanzee, several fungal genomes. 1997 anecdote: Gene Myers: "Let's sequence the human genome with the shotgun strategy." Phil Green: "That is impossible, and a bad idea anyway."

Slide 44: Genomes Sequenced. http://www.genome.gov/10002154

