An efficient model-based base-calling algorithm for high-throughput sequencing Wei-Chun Kao and Yun S. Song, UC Berkeley Presented by: Frank Chun Yat Li.

Presentation transcript:

An efficient model-based base-calling algorithm for high-throughput sequencing Wei-Chun Kao and Yun S. Song, UC Berkeley Presented by: Frank Chun Yat Li

Introduction

Extracting the DNA Sequence
- Image analysis: translate image data into fluorescence intensity data
- Base-calling: from the intensities, infer the DNA sequence

Base-calling Algorithms
- Bustard: ships with Illumina's Genome Analyzer; very efficient and based on matrix inversion, but its error rate is very high in the later cycles
- Alta-Cyclic: more accurate, but needs large amounts of labelled training data
- BayesCall: most accurate and needs little training data, but its computation time is very high, so it is not practical

naiveBayesCall
- Builds upon BayesCall
- An order of magnitude faster
- Comparable error rate

Review of BayesCall's Model
- L: total number of cycles
- e_i: elementary 4x1 column vector with a 1 in the entry for base i and 0 elsewhere (e.g. e_A = (1, 0, 0, 0)^T)
- Goal: produce the sequence of the k-th cluster, S_k = (S_{1,k}, ..., S_{L,k}), with S_{t,k} ∈ {e_A, e_C, e_G, e_T}
- S_k is initialized with a uniform distribution over the e_i; if the base composition of the genome is known, S_k can be initialized to that distribution to improve accuracy
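To make the notation concrete, here is a minimal Python sketch of the one-hot base encoding and the uniform prior over S_k described above. The names (BASES, E, prior), the base ordering, and the read length L = 36 are illustrative choices, not taken from the paper.

```python
import numpy as np

# Elementary 4x1 column vectors, one per base (ordering A, C, G, T assumed).
BASES = ["A", "C", "G", "T"]
E = {b: np.eye(4)[:, [i]] for i, b in enumerate(BASES)}  # e.g. E["A"] = [[1],[0],[0],[0]]

L = 36  # total number of cycles (read length); illustrative value
# Uniform prior over the four bases at every position of the k-th cluster's
# sequence S_k; if the genome's base composition is known, substitute it here.
prior = np.full((L, 4), 0.25)
```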

Terminology
- Active template density A_{t,k}: models the density of the k-th cluster that is still active at cycle t
- I_{t,k}: 4x1 vector of the four observed channel intensities at cycle t

Terminology
- Phasing of each DNA strand is modelled by an L x L matrix P, where
  p = probability of phasing (no new base is synthesized)
  q = probability of prephasing (2 new bases are synthesized instead of 1)
- Q_{j,t}: probability that the template has terminated at position j after t cycles, Q_{j,t} = [P^t]_{0,j}
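A rough sketch of how the phasing matrix and Q_{j,t} could be computed. The boundary handling (clamping at position L) and the parameter values are my assumptions; the paper's exact construction of P may differ.

```python
import numpy as np

def phasing_matrix(L, p, q):
    """(L+1) x (L+1) transition matrix over template positions 0..L: with
    probability p no new base is synthesized (phasing), with probability q two
    bases are synthesized (prephasing), otherwise exactly one base is added."""
    P = np.zeros((L + 1, L + 1))
    for j in range(L + 1):
        P[j, j] += p                        # phasing: stay at position j
        P[j, min(j + 1, L)] += 1 - p - q    # normal extension by one base
        P[j, min(j + 2, L)] += q            # prephasing: extend by two bases
    return P

L, p, q = 36, 0.01, 0.01                    # illustrative parameter values
P = phasing_matrix(L, p, q)
# Q[j, t-1] = probability the template sits at position j after t cycles,
# i.e. Q_{j,t} = [P^t]_{0,j}.
Q = np.stack([np.linalg.matrix_power(P, t)[0, :] for t in range(1, L + 1)], axis=1)
```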

Punch Line
- From A_{t,k} and Q_{j,t} we can estimate the concentration of active DNA strands in the cluster at cycle t; see formula (2) in the paper
- On top of this, other residual effects that propagate from one cycle to the next are modelled; see formula (3) in the paper
- Punch line: the observed fluorescence intensities I_{t,k} follow a 4-dimensional normal distribution
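The observation model can be pictured roughly as follows: the mean of the 4D Gaussian is the active density times a crosstalk-distorted, phasing-weighted mixture of the template bases. This sketch omits the residual term of formula (3) and invents placeholder names (X for the crosstalk matrix, Sigma for the covariance), so it only approximates the formulas in the paper.

```python
import numpy as np
from scipy.stats import multivariate_normal

X = np.eye(4)            # placeholder 4x4 crosstalk matrix
Sigma = 0.1 * np.eye(4)  # placeholder 4x4 covariance of the intensity noise

def expected_intensity(S, Q, A_t, t):
    """Mean of the 4D Gaussian at cycle t: active density A_t times the
    crosstalk-distorted mixture of bases weighted by the phasing distribution."""
    mix = sum(Q[j, t - 1] * S[j] for j in range(len(S)))  # 4x1 phased base mixture
    return A_t * (X @ mix)

def intensity_log_likelihood(I_t, S, Q, A_t, t):
    """Log-density of an observed 4x1 intensity vector I_t under this simplified model."""
    mu = expected_intensity(S, Q, A_t, t).ravel()
    return multivariate_normal.logpdf(I_t.ravel(), mean=mu, cov=Sigma)
```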

naiveBayesCall
- Idea: apply the Viterbi algorithm to this model
- Problem 1: the phasing and prephasing effects make it a high-order Markov model, so Viterbi is very computationally expensive
- Problem 2: the densities A_k = (A_{1,k}, ..., A_{L,k}) are a sequence of continuous random variables, and the Viterbi algorithm does not handle continuous variables

Problem 1 Solution
- If we provide a good initialization of S_k, the algorithm can home in on the solution much faster
- The authors devised a hybrid algorithm that combines the BayesCall model with Bustard's approach to compute this initialization quickly; see Section 3.1 in the paper
Problem 2 Solution
- A_{t,k} can be estimated using maximum a posteriori (MAP) estimation; see formulas (10), (11), (12)
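For Problem 2, the idea of plugging in a point estimate of the continuous density could look like the sketch below. The log-normal prior and the optimization bounds are placeholders of my own; the actual posterior being maximized is the one given in formulas (10)-(12) of the paper.

```python
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.stats import multivariate_normal, lognorm

def map_density(I_t, mean_fn, Sigma, prior_sigma=0.5):
    """MAP estimate of the scalar density A at one cycle: maximize
    prior(A) * N(I_t; mean_fn(A), Sigma) by 1-D numerical optimization."""
    def neg_log_post(A):
        return -(multivariate_normal.logpdf(I_t, mean=mean_fn(A), cov=Sigma)
                 + lognorm.logpdf(A, s=prior_sigma))  # placeholder prior on A
    res = minimize_scalar(neg_log_post, bounds=(1e-6, 10.0), method="bounded")
    return res.x
```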

naiveBayesCall Algorithm
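The algorithm listing on this slide does not survive in the transcript, so here is a rough Python sketch of the overall flow as described above: start from an initialization of S_k, then call one base per cycle by trying each of the four bases, plugging in a MAP estimate of the density, and keeping the base with the highest likelihood. The helpers (Q, X, Sigma, estimate_A) are the placeholders from the earlier sketches, and the paper's actual Algorithm uses different scoring and update details.

```python
import numpy as np
from scipy.stats import multivariate_normal

def naive_base_call(I, S_init, Q, X, Sigma, estimate_A):
    """Greedy cycle-by-cycle base-calling loop (simplified).
    I: L x 4 array of observed intensities; S_init: list of L one-hot 4x1 vectors."""
    L = I.shape[0]
    S = [s.copy() for s in S_init]
    calls = []
    for t in range(1, L + 1):
        best_ll, best_i = -np.inf, 0
        for i in range(4):                               # try A, C, G, T at position t
            S[t - 1] = np.eye(4)[:, [i]]
            mix = sum(Q[j, t - 1] * S[j] for j in range(L))
            mean_fn = lambda A, m=mix: (A * (X @ m)).ravel()
            A_hat = estimate_A(I[t - 1], mean_fn, Sigma)  # plug-in MAP density estimate
            ll = multivariate_normal.logpdf(I[t - 1], mean=mean_fn(A_hat), cov=Sigma)
            if ll > best_ll:
                best_ll, best_i = ll, i
        S[t - 1] = np.eye(4)[:, [best_i]]                 # commit the best base
        calls.append("ACGT"[best_i])
    return "".join(calls)
```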

Results
- Data came from sequencing the phiX174 virus
- All four algorithms were run on the data: Bustard, Alta-Cyclic, BayesCall, and naiveBayesCall
- naiveBayesCall is more accurate from position 21 bp onward

Improvements
- naiveBayesCall's accuracy is higher than Alta-Cyclic's across all cycles
- Its error rate is also lower than Bustard's at later cycles

What is the Point?
- Each run of the sequencer costs money
- With a lower error rate, we can run the sequencer longer and still obtain accurate data, so fewer runs are needed!