Introduction to Graphical Models


Introduction to Graphical Models
Jin Mao, Postdoc, School of Information, University of Arizona
Jan 20, 2015

Outline
- Probabilistic Graphical Models
- Bayesian Networks
- Markov Networks
- Factors & Factor Graphs

Probabilistic Graphical Models: Definition
Probabilistic graphical models use a graph-based representation as the basis for compactly encoding a complex distribution over a high-dimensional space.
- Nodes: the variables
- Edges: direct probabilistic interactions between the variables

Probabilistic Graphical Models: Features
- Representation: transparent, in that a human expert can understand and evaluate its semantics and properties.
- Inference: compute posterior probabilities based on the structure.
- Learning: construct the model from observed data.

Probabilistic Graphical Models: Types
- Bayesian networks: directed
- Markov networks: undirected

Bayesian Networks: Definition
Bayesian networks (BNs) are graphical models for reasoning under uncertainty. The nodes in a Bayesian network represent a set of random variables, X = {X1, ..., Xi, ..., Xn}, from the domain.
Types of nodes:
- Boolean: true/false
- Ordered: low/medium/high
- Integral: 1-20
- ...

Bayesian Networks: Definition (cont.)
A set of directed arcs (or links) connects pairs of nodes, Xi -> Xj, representing the direct dependencies between variables. Assuming discrete variables, the strength of the relationship between variables is quantified by the conditional probability distribution associated with each node: P(Xj | Xi). The resulting graphs are directed acyclic graphs, or simply DAGs.
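
To make these semantics concrete, here is a minimal sketch (not from the slides; the structure and numbers are made up) of a two-node network Pollution -> Cancer, whose joint distribution is the product of a root prior and a conditional distribution, P(P, C) = P(P) * P(C | P):

```python
# Prior for the root node Pollution (H = high, L = low) -- illustrative numbers.
p_pollution = {"H": 0.1, "L": 0.9}

# Conditional distribution: P(Cancer = True | Pollution)
p_cancer_given_pollution = {"H": 0.05, "L": 0.01}

def joint(pollution, cancer):
    """P(Pollution = pollution, Cancer = cancer) via the chain rule."""
    p_c_true = p_cancer_given_pollution[pollution]
    p_c = p_c_true if cancer else 1.0 - p_c_true
    return p_pollution[pollution] * p_c

# Sanity check: the joint must sum to 1 over all instantiations.
total = sum(joint(p, c) for p in ("H", "L") for c in (True, False))
```

Every Bayesian network generalizes this pattern: the joint is the product, over all nodes, of each node's distribution conditioned on its parents.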

Bayesian Networks: Modeling by a Knowledge Engineer
1. Identify the variables of interest.
2. Define the states of each variable: they must be both mutually exclusive and exhaustive. Choose values that represent the domain efficiently, but with enough detail to perform the reasoning required.
3. Design the structure/topology.
4. Determine the conditional probabilities.
5. Use the model for prediction.

Bayesian Networks: An Example
(figure from the slides)

Bayesian Networks: Structure Terminology
- A parent of a child; an ancestor of a descendant.
- Root node, leaf node, intermediate node.
- Markov blanket: the node's parents, its children, and its children's parents.
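
The Markov blanket can be read off the graph mechanically. A sketch in Python, using the classic lung-cancer structure as an assumed example (the graph is a dict mapping each node to its list of parents; node names are illustrative):

```python
# DAG as parent lists: Pollution -> Cancer <- Smoking; Cancer -> XRay, Dyspnoea.
parents = {
    "Pollution": [],
    "Smoking": [],
    "Cancer": ["Pollution", "Smoking"],
    "XRay": ["Cancer"],
    "Dyspnoea": ["Cancer"],
}

def markov_blanket(node):
    """Parents, children, and the children's other parents of `node`."""
    children = [v for v, ps in parents.items() if node in ps]
    spouses = {p for c in children for p in parents[c] if p != node}
    return set(parents[node]) | set(children) | spouses

mb = markov_blanket("Cancer")
```

Given its Markov blanket, a node is conditionally independent of every other variable in the network.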

Bayesian Networks: Conditional Probabilities
For discrete variables, each node stores a conditional probability table (CPT).
- First, list all possible combinations of values of the parent nodes (each combination is an instantiation).
- For each distinct instantiation of the parent values, specify the probability that the child takes each of its values.
For example, with parents Pollution and Smoking, the possible joint parent values are <H,T>, <H,F>, <L,T>, <L,F>, with corresponding probabilities <0.05, 0.02, 0.03, 0.001>.
Root nodes store prior probabilities.
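
The CPT above can be sketched as a lookup table keyed by parent instantiations (the numbers are the slide's example values, read here as P(Cancer = T | Pollution, Smoking)):

```python
# CPT: P(Cancer = True | Pollution in {H, L}, Smoking in {T, F})
cancer_cpt = {
    ("H", "T"): 0.05,
    ("H", "F"): 0.02,
    ("L", "T"): 0.03,
    ("L", "F"): 0.001,
}

def p_cancer(cancer, pollution, smoking):
    """Look up P(Cancer = cancer | parents); rows sum to 1 over the child."""
    p_true = cancer_cpt[(pollution, smoking)]
    return p_true if cancer else 1.0 - p_true
```

Note the table has one row per parent instantiation, so its size grows exponentially with the number of parents.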

Bayesian Networks: The Markov Property and I-maps
- The Markov property: there are no direct dependencies in the system being modeled that are not already explicitly shown via arcs.
- Independence-maps (or I-maps for short): it is not generally required that the arcs in a BN correspond to real dependencies in the system, so a BN may contain redundant edges. An I-map with no redundant edges is a minimal I-map.
- If every arc corresponds to a real direct dependence, the network is a dependence-map (D-map).
- A network that is both a D-map and an I-map is a perfect map: there is no hidden "backdoor".

Bayesian Networks: Types of Reasoning
- Diagnostic reasoning: reasoning from symptoms to cause, in the opposite direction to the network arcs.
- Predictive reasoning: reasoning from new information about causes to new beliefs about effects, following the direction of the network arcs.
- Intercausal reasoning: reasoning between the mutual causes of a common effect.

Bayesian Networks: Types of Evidence
- Specific evidence: a specific value is observed.
- Negative evidence: a variable is observed not to take some value.
- Virtual (likelihood) evidence: the observation itself is uncertain. Suppose, for example, that the radiologist who has taken and analyzed the X-ray in our cancer example is uncertain: he thinks that the X-ray looks positive, but is only 80% sure.
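
Diagnostic reasoning with specific evidence reduces to Bayes' rule. A minimal sketch with made-up numbers for a two-node model Cancer -> XRay, inferring the cause from the symptom, against the direction of the arc:

```python
# Illustrative numbers, not from the slides.
prior_cancer = {True: 0.01, False: 0.99}   # P(Cancer)
p_xray_pos = {True: 0.9, False: 0.2}       # P(XRay = pos | Cancer)

def posterior_cancer_given_pos_xray():
    """P(Cancer | XRay = pos): multiply prior by likelihood, then normalize."""
    unnorm = {c: prior_cancer[c] * p_xray_pos[c] for c in (True, False)}
    z = sum(unnorm.values())
    return {c: v / z for c, v in unnorm.items()}

post = posterior_cancer_given_pos_xray()
```

Virtual evidence fits the same computation: instead of a hard observation, the likelihood term is scaled by the observer's certainty (e.g. 0.8 vs 0.2 for the unsure radiologist).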

Bayesian Networks: Learning
(figure from the slides)

Markov Networks: Definition
Undirected graphs can also be used to represent dependency relationships. They are useful in modeling domains where the interactions between the variables seem symmetrical and one cannot naturally ascribe a directionality to the interaction between variables. Also known as Markov random fields (MRFs).

Markov Networks: From BN to MN
Moralization: start with the DAG G, add an edge between every variable and each of its spouses (the other parents of its children), and finally drop the directionality of all edges in G. The result is called the moral graph of G.
The moralization process ensures that every variable in DAG G is connected to each variable in its Markov blanket.
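
The procedure can be sketched directly from the description above (the DAG is a dict of parent lists; node names are illustrative):

```python
from itertools import combinations

def moralize(parents):
    """Marry every pair of parents of each child, then drop arc directions.

    Returns the moral graph as a set of undirected edges (frozensets).
    """
    edges = set()
    for child, ps in parents.items():
        for p in ps:                        # keep the original arcs, undirected
            edges.add(frozenset((p, child)))
        for a, b in combinations(ps, 2):    # marry the spouses
            edges.add(frozenset((a, b)))
    return edges

# A v-structure A -> C <- B gains the "marriage" edge A - B.
dag = {"A": [], "B": [], "C": ["A", "B"]}
moral = moralize(dag)
```

After moralization, each node's original Markov blanket is exactly its undirected neighborhood.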

Factors & Factor Graphs: Factors
A factor (also called a potential) over a set of variables X is a function that maps each instantiation x of the variables X to a non-negative real number, denoted φ(x). The variables in a factor often have dependency relationships. The result of multiplying factors is another factor.
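
Factor multiplication can be sketched as follows (assuming Boolean variables for simplicity; the representation is illustrative): the product's scope is the union of the operands' scopes, and each entry is the product of the matching entries.

```python
from itertools import product

def multiply(scope1, f1, scope2, f2):
    """Multiply two factors given as (scope list, dict of value-tuples -> reals)."""
    scope = list(dict.fromkeys(scope1 + scope2))  # ordered union of scopes
    values = [True, False]                        # assume Boolean variables
    result = {}
    for assign in product(values, repeat=len(scope)):
        a = dict(zip(scope, assign))
        k1 = tuple(a[v] for v in scope1)          # project onto each operand
        k2 = tuple(a[v] for v in scope2)
        result[assign] = f1[k1] * f2[k2]
    return scope, result

# phi1(A) * phi2(A, B) = phi3(A, B)
phi1 = {(True,): 0.3, (False,): 0.7}
phi2 = {(True, True): 0.5, (True, False): 0.5,
        (False, True): 0.1, (False, False): 0.9}
scope, phi3 = multiply(["A"], phi1, ["A", "B"], phi2)
```

The closure of factors under multiplication is what lets inference algorithms combine and marginalize pieces of the joint distribution incrementally.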

Factors & Factor Graphs: Factorization
(figure from the slides)
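
The factorization slide is an image in the original; for reference, the standard form it illustrates (an addition, not from the slide text) writes the joint distribution as a normalized product of factors:

```latex
p(x_1, \dots, x_n) = \frac{1}{Z} \prod_{j} \phi_j(x_{S_j}),
\qquad
Z = \sum_{x} \prod_{j} \phi_j(x_{S_j})
```

where each factor φ_j has scope S_j and Z is the normalizing constant (partition function).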

Factors & Factor Graphs: Factor Graphs
A factor graph is an undirected graph containing two types of nodes: variable nodes and factor nodes. The graph contains edges only between variable nodes and factor nodes. Each factor node Vφ is associated with precisely one factor φ, whose scope is the set of neighbors of Vφ.
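
The bipartite structure can be sketched as a map from factor nodes to their scopes; a variable node's neighbors are then exactly the factors that mention it (all names are illustrative):

```python
# Factor nodes and their scopes: phi1(A, B) and phi2(B, C).
factor_scopes = {
    "phi1": {"A", "B"},
    "phi2": {"B", "C"},
}

def variable_neighbors(var):
    """Factor nodes adjacent to a variable node in the bipartite graph."""
    return {f for f, scope in factor_scopes.items() if var in scope}
```

Message-passing algorithms such as belief propagation run over exactly this bipartite adjacency.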

Factors & Factor Graphs: Why Factor Graphs?
They are used extensively for breaking a problem down into pieces. "We can simplify computations based on how variables are related to these factors. We'll break up the joint distribution into a bunch of factors on a graph." (Read: http://www.moserware.com/2010/03/computing-your-skill.html)

Thank you!