Applications and Summary. Presented by Dan Geiger, Journal Club of the Pharmacogenetics Group Meeting, Technion.



Rare Recessive Diseases. Given such a pedigree, our program Superlink produces a LOD score determining whether this is a coincidence or suggestive of a disease gene location. How probable is it to be IBD (denoted f)? (Pedigree 1C)

Modeling The IBD Process. The IBD indicators X_1, X_2, …, X_{L-1}, X_L along the L marker loci form a Markov chain. Assumptions: no interference, no errors in genetic maps, no change of coancestry. θ = {a, f} are parameters that can be estimated (e.g. by ML) if IBD data is available.
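As a rough sketch, the two-state IBD chain can be simulated as follows. The slide does not spell out how a and f enter the transition probabilities, so the parameterization below (f as the stationary IBD probability, a as a switching-rate factor) is an illustrative assumption, not Superlink's actual model.

```python
import numpy as np

def ibd_transition_matrix(a, f):
    # Hypothetical parameterization: state 1 = "IBD at this marker".
    # f is the stationary probability of IBD; a (0 < a < 1) scales how
    # quickly the chain switches state between adjacent markers.
    # Rows index the current state, columns the next state.
    return np.array([
        [1 - a * f,         a * f],
        [a * (1 - f), 1 - a * (1 - f)],
    ])

def simulate_ibd(L, a, f, rng=None):
    # Draw one IBD path X_1..X_L from the chain, starting at stationarity.
    rng = rng or np.random.default_rng(0)
    T = ibd_transition_matrix(a, f)
    x = np.empty(L, dtype=int)
    x[0] = rng.random() < f
    for i in range(1, L):
        x[i] = rng.random() < T[x[i - 1], 1]
    return x
```

By construction (1 - f, f) is the stationary distribution of this matrix, so f retains its meaning as the long-run fraction of IBD loci whatever a is.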

Adding genomic data. Each IBD indicator X_k is paired with observed genomic data Y_k, giving the hidden chain X_1, X_2, …, X_{L-1}, X_L with observations Y_1, Y_2, …, Y_{L-1}, Y_L.

Computing IBD from genomic data. The chain X_1, …, X_L with observations Y_1, …, Y_L defines the joint distribution P_θ(y_1, …, y_L, x_1, …, x_L).

Forward-backward formula: P_θ(y_1, …, y_L, x_i) = P_θ(y_1, …, y_i, x_i) · P_θ(y_{i+1}, …, y_L | x_i) = f(x_i) b(x_i)

Likelihood of evidence: P_θ(y_1, …, y_L) = Σ_{x_i} P_θ(y_1, …, y_L, x_i)

Posterior IBD probabilities: P_θ(x_i | y_1, …, y_L) = P_θ(y_1, …, y_L, x_i) / Σ_{x_i} P_θ(y_1, …, y_L, x_i)
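The three formulas above can be implemented directly with the standard forward-backward recursions. This is a generic sketch, not Superlink's code; it assumes the transition matrix T, the initial distribution pi, and a table of per-marker emission probabilities P(y_i | x_i) are given.

```python
import numpy as np

def forward_backward(pi, T, emit):
    # pi[x]      : P(X_1 = x)
    # T[x, x']   : P(X_{i+1} = x' | X_i = x)
    # emit[i, x] : P(y_i | X_i = x) for the observed y_i
    L, S = emit.shape
    fwd = np.zeros((L, S))               # fwd[i, x] = P(y_1..y_i, X_i = x)
    fwd[0] = pi * emit[0]
    for i in range(1, L):
        fwd[i] = (fwd[i - 1] @ T) * emit[i]
    bwd = np.zeros((L, S))               # bwd[i, x] = P(y_{i+1}..y_L | X_i = x)
    bwd[-1] = 1.0
    for i in range(L - 2, -1, -1):
        bwd[i] = T @ (emit[i + 1] * bwd[i + 1])
    likelihood = fwd[-1].sum()           # P(y_1..y_L), summing out x_L
    posterior = fwd * bwd / likelihood   # P(X_i = x | y_1..y_L)
    return posterior, likelihood
```

Each row of `posterior` matches the slide's f(x_i) b(x_i) product divided by the likelihood of the evidence.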

Simulation Results For First Degree Cousins (1C)

Gene mapping: the FLOD score.

P(homozygosity for an allele of frequency q by random) = q·f + q^2·(1 − f)
P(homozygosity for an allele of frequency q at location X_i) = q·P(X_i = 1 | Y) + q^2·P(X_i = 0 | Y)

The total FLOD score is the sum of the FLODs over all individuals.
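A minimal sketch of the two homozygosity probabilities in code. The slide does not state how the per-individual FLOD combines them, so the log10 ratio used below is a hypothetical reading (likelihood-ratio style, by analogy with the LOD score), flagged as such in the comments.

```python
import math

def homozygosity_random(q, f):
    # Slide formula: with inbreeding coefficient f, the allele is IBD with
    # probability f (one draw), otherwise the two copies are independent.
    return q * f + q * q * (1 - f)

def homozygosity_at_marker(q, p_ibd):
    # Slide formula with the posterior IBD probability p_ibd = P(X_i = 1 | Y)
    # substituted for the prior f.
    return q * p_ibd + q * q * (1 - p_ibd)

def flod(q, f, p_ibd):
    # ASSUMPTION: per-marker FLOD as a log10 likelihood ratio of the two
    # probabilities above; the slide only gives the probabilities and says
    # individual FLODs are summed.
    return math.log10(homozygosity_at_marker(q, p_ibd) / homozygosity_random(q, f))
```

Under this reading, a posterior IBD probability above the prior f pushes the score positive for a rare allele, and a posterior equal to f gives exactly zero.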

The Taybi-Linder Syndrome

Data and Inbreeding Coefficients

Genome-wide LOD and FLOD results

LOD and FLOD results for Chromosome 2 (plot: FLOD, FLODe4, and LOD curves)

LOD and FLOD results for Chromosome 7 (plot: FLOD, LOD, and FLODe4 curves)

Haplotype Analysis

Road Map For Graphical Models

Foundations
- Probability theory: subjective versus objective
- Other formalisms for uncertainty (fuzzy, possibilistic, belief functions)
- Types of graphical models: directed, undirected, chain graphs, dynamic networks, factored HMMs, etc.
- Discrete versus continuous distributions
- Causality versus correlation

Inference
- Exact inference: variable elimination, clique trees, message passing; exploiting internal structure such as determinism or zeroes; queries: MLE, MAP, belief update, sensitivity
- Approximate inference: sampling methods; loopy propagation (minimizing some energy function); variational methods
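As a tiny illustration of the first exact-inference technique listed, variable elimination reduces to repeated sum-products over factors. Here is the marginal of the last variable in a three-variable chain A → B → C; the probability tables are made-up numbers for illustration.

```python
import numpy as np

pA = np.array([0.6, 0.4])                   # P(A)
pB_A = np.array([[0.7, 0.3], [0.2, 0.8]])   # P(B | A), rows indexed by A
pC_B = np.array([[0.9, 0.1], [0.5, 0.5]])   # P(C | B), rows indexed by B

# Eliminate A: tau(B) = sum_a P(a) P(B | a)
tauB = pA @ pB_A
# Eliminate B: P(C) = sum_b tau(b) P(C | b)
pC = tauB @ pC_B                            # → array([0.7, 0.3])
```

Clique trees and message passing organize exactly these sum-product steps so that intermediate factors like tau(B) are shared across multiple queries.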

Road Map For Graphical Models

Learning
- Complete data versus incomplete data
- Observed variables versus hidden variables
- Learning parameters versus learning structure
- Scoring methods versus conditional-independence tests
- Exact scores versus asymptotic scores
- Search strategies versus optimal learning of trees/polytrees/TANs

Applications
- Diagnostic tools: from printer problems to airplane failures
- Medical diagnosis
- Error-correcting codes: turbo codes
- Image processing
- Bioinformatics: gene mapping; learning regulatory, metabolic, and other networks