CSE 571 Advanced Artificial Intelligence Nov 24, 2003 Class Notes Transcribed By: Jon Lammers.


10/22/2003 CSE Advanced Artificial Intelligence, Slide 2 – Learning Structured Models

Definition – Observational Equivalence: two structures are observationally equivalent if, for any dataset, parameters can be found for each graph such that both fit the data; no observational data can distinguish between them. Also called operationally equivalent.
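The standard characterization of observational equivalence (due to Verma and Pearl, not stated explicitly on the slide) is that two DAGs are equivalent if and only if they share the same skeleton and the same v-structures. A minimal sketch of that test (variable and function names are our own illustration):

```python
from itertools import combinations

def skeleton(edges):
    """Undirected version of a DAG's edge set."""
    return {frozenset(e) for e in edges}

def v_structures(edges):
    """Return colliders a -> c <- b where a and b are non-adjacent."""
    parents = {}
    for a, b in edges:
        parents.setdefault(b, set()).add(a)
    skel = skeleton(edges)
    vs = set()
    for c, ps in parents.items():
        for a, b in combinations(sorted(ps), 2):
            if frozenset((a, b)) not in skel:
                vs.add((frozenset((a, b)), c))
    return vs

def observationally_equivalent(g1, g2):
    return skeleton(g1) == skeleton(g2) and v_structures(g1) == v_structures(g2)

# The chain a -> c -> b and the fork a <- c -> b are equivalent;
# the collider a -> c <- b is not equivalent to either.
chain    = [("a", "c"), ("c", "b")]
fork     = [("c", "a"), ("c", "b")]
collider = [("a", "c"), ("b", "c")]
print(observationally_equivalent(chain, fork))      # True
print(observationally_equivalent(chain, collider))  # False
```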

Slide 3 – Example: Consistency

- a is independent of b: P(a|b) = P(a)
- d is independent of {a,b} given c: P(d|a,b,c) = P(d|c)

[Figure: three candidate structures over the nodes a, b, c, d, labeled Consistent/Minimal, Not Consistent, and Consistent/Minimal.]
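These two independencies can be checked by brute force for the structure a → c ← b, c → d: enumerate the joint from the factorization P(a)P(b)P(c|a,b)P(d|c) and compare the conditionals. A sketch assuming binary variables, with arbitrary made-up CPT entries:

```python
# Hypothetical binary network a -> c <- b, c -> d; CPT values are arbitrary.
pa = 0.3
pb = 0.7
pc = {(0, 0): 0.1, (0, 1): 0.8, (1, 0): 0.4, (1, 1): 0.9}  # P(c=1 | a, b)
pd = {0: 0.2, 1: 0.6}                                      # P(d=1 | c)

def joint(a, b, c, d):
    p = (pa if a else 1 - pa) * (pb if b else 1 - pb)
    p *= pc[(a, b)] if c else 1 - pc[(a, b)]
    p *= pd[c] if d else 1 - pd[c]
    return p

def marg(**fixed):
    """Sum the joint over all variables not held fixed."""
    total = 0.0
    for a in (0, 1):
        for b in (0, 1):
            for c in (0, 1):
                for d in (0, 1):
                    v = dict(a=a, b=b, c=c, d=d)
                    if all(v[k] == x for k, x in fixed.items()):
                        total += joint(a, b, c, d)
    return total

# P(a=1 | b=1) == P(a=1): a is independent of b.
print(abs(marg(a=1, b=1) / marg(b=1) - pa) < 1e-9)  # True

# P(d=1 | a=0, b=1, c=1) == P(d=1 | c=1): d is independent of {a,b} given c.
lhs = marg(d=1, a=0, b=1, c=1) / marg(a=0, b=1, c=1)
rhs = marg(d=1, c=1) / marg(c=1)
print(abs(lhs - rhs) < 1e-9)  # True
```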

Slide 4 – Example: Coins

With fair coins, multiple structures can be derived from the data, including structures that do not correctly capture the causal relationship. Making the coins unfair leaves you with only the correct structure.

- Fair: P(A) = 0.50, P(B) = 0.50
- Unfair: P(A) = 0.60, P(B) = 0.30

[Table of P(fair) and P(¬fair) over outcomes A, B, C; figure showing three candidate structures over a, b, c, one labeled "Correct Structure" and the others "Only consistent with fair coins".]
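The slide does not state the mechanism for C; in Pearl's version of this example a bell C rings exactly when the two coins agree, which is what the sketch below assumes. With fair coins, C comes out marginally independent of each coin, so structures without the collider at C also fit the data; biasing the coins destroys that accidental independence:

```python
def pC_given_A(pB, a):
    # Assumed mechanism (Pearl's coins-and-bell example): C=1 iff A == B,
    # so P(C=1 | A=a) = P(B = a).
    return pB if a == 1 else 1 - pB

def pC(pA, pB):
    # P(C=1) = P(A=1)P(B=1) + P(A=0)P(B=0)
    return pA * pB + (1 - pA) * (1 - pB)

# Fair coins: P(C|A) == P(C), so C looks independent of A.
print(abs(pC_given_A(0.5, 1) - pC(0.5, 0.5)) < 1e-12)  # True

# Unfair coins (P(A)=0.60, P(B)=0.30): the dependence on A is visible.
print(abs(pC_given_A(0.3, 1) - pC(0.6, 0.3)) < 1e-12)  # False
```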

Slide 5 – Definition: Stability

Let I(P) denote the set of all conditional independence relationships embodied in P. A causal model M = (D, Θ_D) generates a stable distribution if and only if P((D, Θ_D)) contains no extraneous independencies; that is, if and only if I(P((D, Θ_D))) ⊆ I(P((D, Θ'_D))) for any set of parameters Θ'_D.
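A concrete (hypothetical, not from the lecture) instance of an unstable distribution: in a linear-Gaussian model with a → b → c and a direct edge a → c, fine-tuning the direct coefficient to cancel the indirect path creates an independence a ⟂ c that the graph does not imply. A Monte Carlo sketch:

```python
import random
random.seed(1)

def cov_ac(alpha, beta, gamma, n=200_000):
    """Estimate Cov(a, c) in the model (names and coefficients are ours):
       b = alpha*a + e1,  c = beta*b + gamma*a + e2,  a, e1, e2 ~ N(0, 1).
       Analytically, Cov(a, c) = alpha*beta + gamma."""
    total = 0.0
    for _ in range(n):
        a = random.gauss(0, 1)
        b = alpha * a + random.gauss(0, 1)
        c = beta * b + gamma * a + random.gauss(0, 1)
        total += a * c
    return total / n

# Generic parameters: a and c are dependent, as the graph implies.
print(abs(cov_ac(1.0, 1.0, 0.5) - 1.5) < 0.1)  # True

# Fine-tuned gamma = -alpha*beta cancels the two paths: a ⟂ c holds, an
# extraneous independence, so this parameterization is not stable.
print(abs(cov_ac(1.0, 1.0, -1.0)) < 0.1)       # True
```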

Slide 6 – Algorithm

If you have a large number of nodes, you can have a huge number of possible graphs. The algorithm generates an undirected graph that is a summary of all the minimal stable structures consistent with the data. You can then orient edges arbitrarily, as long as you do not introduce any new "v-structures" (a → c ← b with a, b non-adjacent).

[Figure: one example graph labeled "V-Structure" and one labeled "Not V-Structure".]

Slide 7 – Inductive Causation Algorithm

1. For each pair of variables a, b, search for a set S_ab such that a is independent of b given S_ab. Construct an undirected graph with an edge between a and b if and only if no such S_ab can be found (not even S_ab = Ø).
2. For each pair of non-adjacent variables a, b with a common neighbor c: if c is not an element of S_ab, add arrowheads pointing to c, giving a → c ← b.
3. Orient as many of the remaining undirected edges as possible while:
   - not creating new v-structures, and
   - not creating directed cycles.
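Steps 1 and 2 can be sketched as follows, using a hand-coded conditional-independence oracle that answers as the true graph a → c ← b, c → d would (the oracle and variable names are our own illustration, not the lecture's code):

```python
from itertools import combinations

VARS = ["a", "b", "c", "d"]

def indep(x, y, S):
    """Hypothetical CI oracle for the true graph a -> c <- b, c -> d."""
    truths = {
        frozenset("ab"): [frozenset()],                       # a ⟂ b
        frozenset("ad"): [frozenset("c"), frozenset("bc")],   # a ⟂ d | c
        frozenset("bd"): [frozenset("c"), frozenset("ac")],   # b ⟂ d | c
    }
    return frozenset(S) in truths.get(frozenset((x, y)), [])

# Step 1: connect x - y iff no separating set S_xy exists.
sep, edges = {}, set()
for x, y in combinations(VARS, 2):
    rest = [v for v in VARS if v not in (x, y)]
    found = None
    for r in range(len(rest) + 1):
        for S in combinations(rest, r):
            if indep(x, y, S):
                found = set(S)
                break
        if found is not None:
            break
    if found is None:
        edges.add(frozenset((x, y)))
    else:
        sep[frozenset((x, y))] = found

# Step 2: for non-adjacent x, y with a common neighbour c not in S_xy,
# orient x -> c <- y.
arrows = set()
for x, y in combinations(VARS, 2):
    if frozenset((x, y)) in edges:
        continue
    for c in VARS:
        if (frozenset((x, c)) in edges and frozenset((y, c)) in edges
                and c not in sep.get(frozenset((x, y)), set())):
            arrows.add((x, c))
            arrows.add((y, c))

print(sorted(tuple(sorted(e)) for e in edges))  # [('a','c'), ('b','c'), ('c','d')]
print(sorted(arrows))                           # [('a','c'), ('b','c')]
```

The skeleton and the collider at c are recovered; orienting the remaining edge c – d is left to step 3.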

Slide 8 – Rules for Orienting Arrows (Step 3)

1. Orient b – c as b -> c whenever a -> b and a, c are non-adjacent.
2. Orient a – b as a -> b whenever there is a chain a -> c -> b.
3. Orient a – b as a -> b whenever there are chains a – c -> b and a – d -> b and c, d are non-adjacent.
4. Orient a – b as a -> b whenever there are chains a – c -> d and c -> d -> b and c, b are non-adjacent.
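Rule 1 can be sketched as a fixed-point pass over the undirected edges. Continuing the four-node example with colliders a → c ← b already oriented, it orients c – d as c → d, because a → c exists and a, d are non-adjacent (a minimal sketch, not the lecture's code):

```python
def rule1(undirected, directed, adjacent):
    """Rule 1: orient b - c as b -> c whenever some a -> b exists
       and a, c are non-adjacent. Repeat until nothing changes."""
    changed = True
    while changed:
        changed = False
        for b, c in list(undirected):
            for a, b2 in list(directed):
                if b2 == b and a != c and not adjacent(a, c):
                    undirected.discard((b, c))
                    undirected.discard((c, b))
                    directed.add((b, c))
                    changed = True

# Skeleton a-c, b-c, c-d; undirected edges are stored in both orientations.
directed = {("a", "c"), ("b", "c")}
undirected = {("c", "d"), ("d", "c")}
skel = {frozenset(e) for e in [("a", "c"), ("b", "c"), ("c", "d")]}
adjacent = lambda x, y: frozenset((x, y)) in skel

rule1(undirected, directed, adjacent)
print(sorted(directed))  # [('a', 'c'), ('b', 'c'), ('c', 'd')]
```

Orienting d → c instead would have created a new v-structure a → c ← d, which is why rule 1 forces c → d.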