Context-Specific CPDs

Presentation transcript:

Context-Specific CPDs (Probabilistic Graphical Models: Representation, Local Structure)

Tree CPD [Figure: Apply (A), SAT (S), and Letter (L) are parents of Job (J); the CPD for P(J | A, S, L) is a tree that branches on A, then S, then L, with leaf distributions (0.8,0.2), (0.9,0.1), (0.4,0.6), and (0.1,0.9) over (j0, j1).]
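To make the tree lookup concrete, here is a minimal Python sketch of this CPD. The branch order and the placement of the four leaf distributions follow one plausible reading of the figure, so treat the per-branch numbers as an assumption; the point is that a lookup walks the tree and ignores any parent the reached leaf never tests.

```python
# Sketch of a tree CPD for P(J | A, S, L). Leaf placement on branches is
# an assumption for illustration; each leaf is a pair (P(j0), P(j1)).
TREE_CPD = {
    "a0": (0.8, 0.2),                 # didn't apply: S and L are irrelevant
    "a1": {
        "s1": (0.1, 0.9),             # strong SAT: letter is never consulted
        "s0": {
            "l1": (0.4, 0.6),         # weak SAT, strong letter
            "l0": (0.9, 0.1),         # weak SAT, weak letter
        },
    },
}

def p_job(a: str, s: str, l: str) -> tuple:
    """Walk the tree under the parent assignment; return (P(j0), P(j1))."""
    node = TREE_CPD[a]
    if isinstance(node, dict):        # branch on S only if the tree asks
        node = node[s]
    if isinstance(node, dict):        # branch on L only if the tree asks
        node = node[l]
    return node

print(p_job("a1", "s0", "l1"))        # (0.4, 0.6)
print(p_job("a0", "s1", "l1"))        # (0.8, 0.2): S and L were ignored
```

Note that under a0 the values of S and L are never consulted; that is exactly the source of the context-specific independencies the next slide asks about.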

Which context-specific independencies are implied by the structure of this CPD? (Mark all that apply.) [Figure: the same tree CPD for P(J | A, S, L) as above.]

Tree CPD [Figure: Choice (C), Letter1 (L1), and Letter2 (L2) are parents of Job (J); the CPD for P(J | C, L1, L2) is a tree that branches on C and then on one of the letters, with leaf distributions (0.8,0.2), (0.1,0.9), (0.9,0.1), and (0.3,0.7) over (j0, j1).]
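A small sketch makes the selection behavior testable. Assuming c1 selects Letter1 and c2 selects Letter2, which is one plausible reading of the figure, the distribution over J never changes when the unselected letter varies:

```python
# Sketch of the tree CPD P(J | C, L1, L2); leaf placement is an assumption.
TREE_CPD = {
    "c1": {"l0": (0.8, 0.2), "l1": (0.1, 0.9)},   # reads Letter1 only
    "c2": {"l0": (0.9, 0.1), "l1": (0.3, 0.7)},   # reads Letter2 only
}

def p_job(c: str, l1: str, l2: str) -> tuple:
    """Return (P(j0), P(j1)); only the letter selected by C is consulted."""
    return TREE_CPD[c][l1 if c == "c1" else l2]

# Context-specific independence (J independent of L2 given c1):
# varying L2 in context c1 leaves the distribution unchanged.
assert all(p_job("c1", l1, "l0") == p_job("c1", l1, "l1")
           for l1 in ("l0", "l1"))
```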

Which non-context-specific independency is implied by the structure of this CPD? [Figure: the same tree CPD for P(J | C, L1, L2) as above.]


Multiplexer CPD [Figure: Y has parents Z1, Z2, ..., Zk and a selector A with values in {1, ..., k}; the CPD is deterministic: P(Y | A, Z1, ..., Zk) = 1 when Y = Z_A and 0 otherwise.]
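In code, a multiplexer CPD is just an equality test between Y and the parent that A selects; the sketch below shows that deterministic CPD (the function and variable names are illustrative).

```python
# Deterministic multiplexer CPD: the selector A picks which of Z1..Zk the
# child Y copies, so P(y | a, z) = 1 exactly when y equals z_a.
def multiplexer_cpd(y, a: int, z: list) -> float:
    """P(Y = y | A = a, Z1..Zk = z), with a as a 1-based index into z."""
    return 1.0 if y == z[a - 1] else 0.0

print(multiplexer_cpd("l1", 2, ["l0", "l1", "l0"]))  # 1.0: Y matches Z_2
print(multiplexer_cpd("l0", 2, ["l0", "l1", "l0"]))  # 0.0: Y differs from Z_2
```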

Microsoft Troubleshooters: number of parameters reduced from 145 to 55.
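The same effect can be counted on the Job example above: a full CPT for binary J with binary parents A, S, and L needs a distribution for each of the 2 × 2 × 2 = 8 parent assignments, while the tree stores only its 4 leaf distributions. The 145-to-55 figure is that saving at troubleshooter scale.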

