Exam Preparation Class

QUESTION 2 on basic inference directly from definitions. Given is a Bayesian network A → B, where A and B are binary variables. Variable A stands for the type of shirt, with values (a0 = T-shirt; a1 = not T-shirt). Variable B stands for the color of the shirt, with values (b0 = red; b1 = purple). Denote z = p(a0), and let p(b0|a0) = 0.5 and p(b0|a1) = 0.25. An observer examines the shirt's color; denote by e the outcome of this examination (which is influenced by internal conceptions and laboratory light conditions). The observer declares: "the likelihood of red is twice the likelihood of purple." Provide a formula for the posterior probability of A after this evidence is obtained, namely a formula for p(a0|e) as a function of z.

Answer: Extend the network to A → B → E and set E = e, with the likelihood ratio p(e|b0) / p(e|b1) = 2 (the observer's declaration). Write formulae directly from the definition of a Bayesian network:

p(a0,e) = Σ_b p(a0,b,e) = z p(b0|a0) p(e|b0) + z p(b1|a0) p(e|b1)

p(a1,e) = Σ_b p(a1,b,e) = (1−z) p(b0|a1) p(e|b0) + (1−z) p(b1|a1) p(e|b1)

Divide the two formulae to obtain the ratio p(a0|e) / p(a1|e), which is a function of z, and use the relationship p(a1|e) = 1 − p(a0|e) to obtain p(a0|e).
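A minimal sketch in Python (not part of the original slides) that evaluates this construction numerically. The CPT entries and the likelihood ratio p(e|b0)/p(e|b1) = 2 come from the question; the function name and default arguments are illustrative assumptions. Only the ratio of the two evidence likelihoods matters, so p(e|b1) is fixed to 1.

```python
def posterior_a0(z, p_b0_given_a0=0.5, p_b0_given_a1=0.25, likelihood_ratio=2.0):
    """Posterior p(a0 | e) in the network A -> B -> E.

    Only the ratio p(e|b0)/p(e|b1) matters, so we set p(e|b1) = 1
    and p(e|b0) = likelihood_ratio (= 2 in the question).
    """
    p_e_b0, p_e_b1 = likelihood_ratio, 1.0
    # p(a0, e) = sum_b p(a0) p(b|a0) p(e|b)
    joint_a0 = z * (p_b0_given_a0 * p_e_b0 + (1 - p_b0_given_a0) * p_e_b1)
    # p(a1, e) = sum_b p(a1) p(b|a1) p(e|b)
    joint_a1 = (1 - z) * (p_b0_given_a1 * p_e_b0 + (1 - p_b0_given_a1) * p_e_b1)
    # Normalize: p(a0|e) = p(a0,e) / (p(a0,e) + p(a1,e))
    return joint_a0 / (joint_a0 + joint_a1)


# With the numbers above this simplifies to 6z / (5 + z);
# e.g. z = 0.5 gives p(a0|e) = 3 / 5.5 ≈ 0.545.
print(posterior_a0(0.5))
```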

QUESTION 3 on d-separation. Describe a linear-time algorithm for the following task. Input: a Bayesian network D = (V, E), a set of nodes J, and a set of nodes Z. Output: the set of all nodes X that are d-separated from J by Z. (P.S. Due to the soundness and completeness theorems, X is the largest set of variables that can be shown to be conditionally independent of J, given Z, based on the graph structure alone.)

Answer (d-separation: from theorems to algorithms): Use BFS with minor changes and linear preprocessing. Consider the set of legal pairs of edges u – v – w according to d-separation: a pair is legal if its edges meet head-to-head at v and v is in Z or has a descendant in Z, or if the edges do not meet head-to-head and v is not in Z. Preprocessing: construct a boolean table over the nodes marking every node that is in Z or has a descendant in Z. Set false in all entries, set true for all v in Z, and iteratively propagate the true marks from each marked node to its parents. Then run BFS from J, extending the search only along legal pairs of edges; the nodes outside J and Z that are never reached by this search are exactly those d-separated from J by Z.
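A sketch in Python of this two-phase procedure (not part of the original slides). The graph representation (dictionaries of parent and child lists), the function name, and the convention of excluding J and Z from the returned set are assumptions made for illustration; the BFS tracks the direction along which each node was entered so that legality of the next pair of edges can be checked locally.

```python
from collections import deque

def d_separated_from(parents, children, J, Z):
    """Nodes d-separated from J given Z, via linear-time reachability.

    `parents` and `children` map each node to the lists of its parents and
    children in the DAG; J and Z are sets of nodes.  Phase 1 marks every
    node that is in Z or has a descendant in Z; phase 2 is a BFS from J
    that follows only legal pairs of edges.
    """
    nodes = set(parents) | set(children)

    # Phase 1 (preprocessing): start from Z and iterate to parents.
    in_or_above_Z = set(Z)
    stack = list(Z)
    while stack:
        v = stack.pop()
        for p in parents.get(v, []):
            if p not in in_or_above_Z:
                in_or_above_Z.add(p)
                stack.append(p)

    # Phase 2: BFS over (node, direction) states; 'down' means the node was
    # entered along an incoming edge, 'up' along an outgoing edge.
    reachable, visited = set(), set()
    queue = deque((j, 'up') for j in J)
    while queue:
        v, direction = queue.popleft()
        if (v, direction) in visited:
            continue
        visited.add((v, direction))
        if v not in Z:
            reachable.add(v)
        if direction == 'up' and v not in Z:
            # Non-collider continuation through v in either direction.
            queue.extend((p, 'up') for p in parents.get(v, []))
            queue.extend((c, 'down') for c in children.get(v, []))
        elif direction == 'down':
            if v not in Z:                 # non-collider continuation
                queue.extend((c, 'down') for c in children.get(v, []))
            if v in in_or_above_Z:         # head-to-head at v is legal
                queue.extend((p, 'up') for p in parents.get(v, []))

    return nodes - reachable - set(J) - set(Z)


# Example: in the chain A -> B -> C, node C is d-separated from {A} by {B}.
parents  = {'A': [], 'B': ['A'], 'C': ['B']}
children = {'A': ['B'], 'B': ['C'], 'C': []}
print(d_separated_from(parents, children, J={'A'}, Z={'B'}))   # {'C'}
```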

QUESTION 1: conditional independence properties (I(X,Z,Y)).