Markov Random Fields
Presented by: Vladan Radosavljevic

Outline
Intuition
Simple Example
Theory
Simple Example - Revisited
Application
Summary
References

Intuition: Simple Example
Observation: a noisy image whose pixel values are -1 or +1.
Objective: recover the noise-free image.

Intuition: An Idea
Represent the pixels as random variables: y is the observed variable and x is the hidden variable; both are binary (-1 or +1).
Question: is there any relation among these variables?

Intuition: Building a Model
The values of an observed pixel and the corresponding original pixel should be correlated (the noise level is small) - make connections!
The values of neighboring pixels should be correlated (images contain large homogeneous areas and objects) - make connections!
Final model: each hidden pixel x_i is connected to its observation y_i and to its neighboring hidden pixels.

Intuition: Why Do We Need a Model?
y is given; x has to be found. The objective is to find the image x that maximizes p(x|y).
Use the model to penalize connected pairs that have opposite signs, since connected variables should be correlated.
Assume the distribution p(x, y) ∝ exp(-E), where the energy sums over all pairs of connected nodes:
E(x, y) = -β Σ_{(i,j)} x_i x_j - η Σ_i x_i y_i, with β, η > 0
(the first sum runs over connected hidden pairs, the second over each hidden-observed pair).
If x_i and x_j have the same sign, the energy is lower and the probability higher; the same holds for each pair x_i, y_i.
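A minimal sketch of this energy in Python, assuming the Ising-style form written above on a 4-connected pixel grid; the coupling weights beta and eta are illustrative choices, not taken from the slides.

```python
import numpy as np

def energy(x, y, beta=1.0, eta=2.0):
    """Ising-style energy for binary (+1/-1) images.

    x : hidden (noise-free) image, y : observed noisy image.
    Lower energy corresponds to higher probability, p(x, y) ~ exp(-E).
    """
    # Neighbouring hidden pixels that agree lower the energy ...
    pairwise = np.sum(x[:, :-1] * x[:, 1:]) + np.sum(x[:-1, :] * x[1:, :])
    # ... and so do hidden pixels that agree with their observation.
    data_term = np.sum(x * y)
    return -beta * pairwise - eta * data_term
```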

Intuition: How to Find an Image x that Maximizes p(x|y)?
Initialize with x = y.
Take one node x_i at a time and evaluate E for x_i = +1 and x_i = -1.
Set x_i to the value with the lower E (higher probability).
Iterate through all nodes until convergence.
This method (iterated conditional modes) finds a local optimum.
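A sketch of this coordinate-wise search (iterated conditional modes), consistent with the energy above; the sweep order, stopping rule, and default weights are assumptions made for illustration.

```python
def denoise_icm(y, beta=1.0, eta=2.0, max_sweeps=10):
    """Greedy local search (ICM): set each pixel to the sign that lowers E.

    y is a 2-D numpy array with entries in {-1, +1}.
    """
    x = y.copy()                          # initialise with x = y
    H, W = x.shape
    for _ in range(max_sweeps):
        changed = False
        for i in range(H):
            for j in range(W):
                # Sum of the 4-connected neighbours of pixel (i, j).
                nb = sum(x[a, b] for a, b in ((i-1, j), (i+1, j), (i, j-1), (i, j+1))
                         if 0 <= a < H and 0 <= b < W)
                # Local field: the energy is lower when x[i, j] shares this sign.
                field = beta * nb + eta * y[i, j]
                new_val = 1 if field >= 0 else -1
                if new_val != x[i, j]:
                    x[i, j] = new_val
                    changed = True
        if not changed:                   # no pixel moved: local optimum reached
            break
    return x
```

For example, x_hat = denoise_icm(noisy) for a noisy ±1 image array. Because each update only ever lowers the energy, the procedure converges, but only to a local optimum.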

Intuition: Result

Theory
Graphical models are a general framework for representing and manipulating joint distributions defined over sets of random variables.
Each variable is associated with a node in a graph; edges in the graph represent dependencies between random variables.
Directed graphs represent causal relationships (Bayesian Networks); undirected graphs represent correlations (Markov Random Fields).
Representational aspect: efficiently represent complex independence relations.
Computational aspect: efficiently infer information about data using independence relations.

Theory: Recall
If there are M variables in the model, each having K possible states, a straightforward inference algorithm is exponential in the size of the model (K^M).
However, inference algorithms (whether computing distributions, expectations, etc.) can exploit the structure of the graph for efficient computation.

Theory: How to Use Information from the Structure?
Markov property: if all paths that connect nodes in set A to nodes in set B pass through nodes in set C, then A and B are conditionally independent given C:
p(A, B | C) = p(A | C) p(B | C).
The main idea is to factorize the joint probability, then use the sum and product rules for efficient computation.
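As a concrete illustration (not part of the slides), the property can be checked numerically on a tiny chain A - C - B with made-up pairwise potentials; conditioning on the middle node C should make A and B independent.

```python
import itertools
import numpy as np

# Hypothetical pairwise potentials on the chain A - C - B (binary variables).
psi_AC = np.array([[2.0, 1.0], [1.0, 3.0]])   # psi(A, C)
psi_CB = np.array([[1.0, 4.0], [2.0, 1.0]])   # psi(C, B)

# Joint distribution p(A, C, B) proportional to psi(A, C) * psi(C, B).
p = np.zeros((2, 2, 2))
for a, c, b in itertools.product(range(2), repeat=3):
    p[a, c, b] = psi_AC[a, c] * psi_CB[c, b]
p /= p.sum()

# Check p(A, B | C) = p(A | C) p(B | C) for each value of C.
for c in range(2):
    p_ab_given_c = p[:, c, :] / p[:, c, :].sum()
    p_a_given_c = p_ab_given_c.sum(axis=1)
    p_b_given_c = p_ab_given_c.sum(axis=0)
    assert np.allclose(p_ab_given_c, np.outer(p_a_given_c, p_b_given_c))
```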

Theory: General Factorization
If two nodes x_i and x_k are not connected by a link, they must be conditionally independent given all the other nodes: there is no direct link between them, and every other path passes through nodes that are observed. This can be expressed as
p(x_i, x_k | x_rest) = p(x_i | x_rest) p(x_k | x_rest),
where x_rest denotes all variables other than x_i and x_k.
Therefore the joint distribution must factorize so that unconnected nodes never appear in the same factor.
This leads to the concept of a clique: a subset of nodes such that all pairs of nodes in the subset are connected.
The factors are defined as functions over the cliques of the graph.

Theory: Example
Factorization:
p(x) = (1/Z) Π_C ψ_C(x_C),
where ψ_C is the potential function on the clique C and Z = Σ_x Π_C ψ_C(x_C) is the partition function (a normalization constant).
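A small brute-force sketch of this factorization for an illustrative graph with random potentials (the graph, clique structure, and numbers are made up for the example); it enumerates every configuration to build the normalized joint distribution and the partition function Z.

```python
import itertools
import numpy as np

def joint_prob(cliques, n_vars, n_states=2):
    """Normalised product of clique potentials plus the partition function Z.

    `cliques` is a list of (variable_indices, potential_table) pairs.
    """
    states = list(itertools.product(range(n_states), repeat=n_vars))
    unnorm = np.array([
        np.prod([table[tuple(x[i] for i in idx)] for idx, table in cliques])
        for x in states
    ])
    Z = unnorm.sum()                          # partition function
    return dict(zip(states, unnorm / Z)), Z

# Example: a chain x0 - x1 - x2 plus a three-node clique {x1, x2, x3}.
rng = np.random.default_rng(0)
cliques = [((0, 1), rng.random((2, 2))),
           ((1, 2), rng.random((2, 2))),
           ((1, 2, 3), rng.random((2, 2, 2)))]
p, Z = joint_prob(cliques, n_vars=4)
print(Z, sum(p.values()))                     # the probabilities sum to 1
```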

Theory
Since the potential functions have to be strictly positive, we can define them as exponentials of an energy function E:
ψ_C(x_C) = exp{-E(x_C)}, so that p(x) = (1/Z) exp{-Σ_C E(x_C)}.
(There is also a theorem, the Hammersley-Clifford theorem, that proves the correspondence between this distribution and Markov Random Fields.)
Recall the energy from the image example: E(x, y) = -β Σ_{(i,j)} x_i x_j - η Σ_i x_i y_i.
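A tiny self-contained check of this equivalence, using a made-up potential table on one clique: defining E_C = -log ψ_C and exponentiating the negative energy reproduces the same normalized distribution as multiplying the potentials directly.

```python
import numpy as np

psi = np.array([[2.0, 1.0],
                [0.5, 4.0]])          # made-up potential psi_C(x1, x2) > 0
E = -np.log(psi)                      # corresponding energy, so psi = exp(-E)

p_from_psi = psi / psi.sum()          # p(x) = psi / Z
p_from_E = np.exp(-E) / np.exp(-E).sum()

assert np.allclose(p_from_psi, p_from_E)
```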

Theory: Why Is This Useful?
Inference algorithms can take advantage of such a representation to significantly increase computational efficiency.
Example: inference on a chain of N nodes, each with K states.
Finding a marginal distribution p(x_n) by summing the joint distribution over all the other variables directly costs on the order of K^N operations.

Theory
If we rearrange the order of the summations and multiplications, pushing each summation inside the products so that it involves only one pair of neighboring variables at a time, the cost drops to the order of N K^2 operations.
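A sketch of both computations for a chain with hypothetical random pairwise potentials: marginal_naive sums over all K^N joint configurations, while marginal_chain performs the rearranged forward/backward summation in O(N K^2).

```python
import itertools
import numpy as np

K, N = 3, 6                               # K states per node, N nodes in the chain
rng = np.random.default_rng(1)
psi = rng.random((N - 1, K, K))           # psi[n] couples x_n and x_{n+1}

def marginal_naive(n):
    """Brute-force marginal p(x_n): sums over all K**N configurations."""
    m = np.zeros(K)
    for x in itertools.product(range(K), repeat=N):
        m[x[n]] += np.prod([psi[i, x[i], x[i + 1]] for i in range(N - 1)])
    return m / m.sum()

def marginal_chain(n):
    """Message passing: forward/backward sums, O(N * K^2) instead of O(K^N)."""
    fwd = np.ones(K)
    for i in range(n):                    # sum out x_0, ..., x_{n-1} from the left
        fwd = psi[i].T @ fwd
    bwd = np.ones(K)
    for i in range(N - 2, n - 1, -1):     # sum out x_{N-1}, ..., x_{n+1} from the right
        bwd = psi[i] @ bwd
    m = fwd * bwd
    return m / m.sum()

assert np.allclose(marginal_naive(2), marginal_chain(2))
```

The assertion confirms that the two computations agree; for larger N only the message-passing version remains feasible.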

Application
(Application example: graphical models for statistical inference and data assimilation; see reference [1].)

Summary
Advantages: graphical representation; computational efficiency.
Disadvantages: parameter estimation; how to define a model; computing probabilities is sometimes difficult.

References
[1] Alexander T. Ihler, Sergey Kirshner, Michael Ghil, Andrew W. Robertson, and Padhraic Smyth, "Graphical models for statistical inference and data assimilation", Physica D: Nonlinear Phenomena, vol. 230, no. 1-2, pp. 72-87, June 2007.