Learning Coordination Classifiers

Learning Coordination Classifiers. Guo, Greiner, and Schuurmans, University of Alberta. Presented by Nick Rizzolo, AIML Seminar, 9/13/05.

Outline
- Standard assumptions about classification
- What is a "coordination classifier"?
- How is it trained?
- How is it evaluated?
- How is this approach justified?
- Experiments

Standard Classification Assumptions
- Input: a feature vector x from an input space X
- Output: a label y from a finite label set Y
- Training and testing data are independent and identically distributed (i.i.d.)
- The learned classifier f : X -> Y labels each test example on its own
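For reference, the setup this slide assumes is the textbook supervised classification formulation (nothing here is specific to the paper):

\[
  \mathcal{D} = \{(x_1, y_1), \dots, (x_n, y_n)\} \overset{\text{i.i.d.}}{\sim} P(X, Y),
  \qquad f : \mathcal{X} \to \mathcal{Y}, \qquad \hat{y} = f(x).
\]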

Coordination Classification
- Pair examples: every pair (x_i, x_j) becomes one new example (a sketch of this construction follows below)
- Multi-class: the paired classifier f predicts the joint label (y_i, y_j), turning a k-class problem into a k^2-class problem
- Coordination: because each test example appears in many pairs, its label is predicted jointly with, and therefore coordinated with, the labels of the other examples
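A minimal sketch of the pairing step, assuming numpy arrays X (n x d features) and y (n labels in {0, ..., k-1}); the function name make_pairs and the joint-label encoding y_i * k + y_j are illustrative choices, not taken from the paper.

import numpy as np

def make_pairs(X, y, num_classes):
    """Turn n single examples into n*n paired examples (ordered pairs,
    self-pairs included). Each pair (x_i, x_j) gets the joint label
    y_i * num_classes + y_j, so a k-class problem becomes k^2-class."""
    n = X.shape[0]
    idx_i, idx_j = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
    idx_i, idx_j = idx_i.ravel(), idx_j.ravel()
    X_pairs = np.hstack([X[idx_i], X[idx_j]])     # concatenate the two feature vectors
    y_pairs = y[idx_i] * num_classes + y[idx_j]   # encode the joint label
    return X_pairs, y_pairs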

Training
- The number of training examples is squared: every pair of original examples yields one paired example
- Any standard learner applies: maximum likelihood, logistic regression, naïve Bayes, Bayes networks, neural networks, etc. (see the sketch below)
- The trained classifier makes dependent associations: its prediction for one example's label is tied to the label of that example's partner in the pair
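Continuing the sketch above (X_train, y_train, and k are assumed placeholders), any off-the-shelf probabilistic learner can be fit to the paired data; here scikit-learn's LogisticRegression stands in for the logistic-regression option listed on the slide.

from sklearn.linear_model import LogisticRegression

# n^2 paired examples built from the original training set
X_pairs, y_pairs = make_pairs(X_train, y_train, num_classes=k)

# Any standard probabilistic learner works; this one exposes
# P(y_i, y_j | x_i, x_j) through predict_proba
pair_model = LogisticRegression(max_iter=1000)
pair_model.fit(X_pairs, y_pairs)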

Evaluation
- Several decoders could combine the pairwise predictions: HMM, CRF, voting, ...
- What the authors did: build a Markov random field over the example labels, with edges between training examples and testing examples and among testing examples, and with pairwise potentials supplied by the paired classifier (see the sketch below)
- [Figure of the training-example / testing-example graph not reproduced in the transcript]
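One way to realize those pairwise potentials, sketched under the same assumptions as above: every edge (i, j) of the field gets a k x k table of joint-label probabilities from the paired model.

import numpy as np

def edge_potential(pair_model, x_i, x_j, num_classes):
    """k x k potential psi[y_i, y_j] for one edge of the random field.
    Assumes every joint label occurred in training; otherwise the missing
    columns of predict_proba would need to be filled in via pair_model.classes_."""
    x_pair = np.hstack([x_i, x_j]).reshape(1, -1)
    joint = pair_model.predict_proba(x_pair)[0]       # length num_classes**2
    return joint.reshape(num_classes, num_classes)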

Evaluation (cont'd)
- The full network is impractical, so restrict which edges are kept:
  - Just train-test edges: analogous to kernel-based algorithms; the most likely labeling is easy to compute
  - Just test-test edges: analogous to ensemble (voting) methods; probabilistic inference is expensive
- Random edge subsampling keeps the graph sparse (a sketch of voting with subsampled edges follows below)
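A rough sketch of the test-test "voting" decoder with random edge subsampling (it reuses edge_potential from the previous sketch; the default of 18 edges per example mirrors the experiments below, everything else is an illustrative choice, not the authors' code):

import numpy as np

def vote_decode(pair_model, X_test, num_classes, edges_per_example=18, seed=None):
    """Each sampled test-test edge votes for its endpoints' labels using the
    marginals of its k x k potential table; the label with the most votes wins."""
    rng = np.random.default_rng(seed)
    n = X_test.shape[0]
    votes = np.zeros((n, num_classes))
    for i in range(n):
        partners = rng.choice(n, size=min(edges_per_example, n), replace=False)
        for j in partners:
            if i == j:
                continue
            psi = edge_potential(pair_model, X_test[i], X_test[j], num_classes)
            votes[i] += psi.sum(axis=1)   # marginal over the partner's label
            votes[j] += psi.sum(axis=0)   # symmetric vote for the partner
    return votes.argmax(axis=1)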

Rationale
- [Figure: two test examples x1 and x2 with labels y1 and y2, contrasting the true conditional distribution with the learned pairwise model; not reproduced in the transcript]

Experiment 1
- Logistic regression, belief propagation, only test-test edges, 18 edges per example

Experiment 2
- Naïve Bayes, belief propagation, only test-test edges, 18 edges per example

Experiment 3
- Logistic regression, belief propagation, 18 edges per example
- Two edge configurations compared: only train-test edges; test-test and train-test edges

Experiment 4
- Neural network, voting, only test-test edges, 18 edges per example

Experiment 5
- Logistic regression, belief propagation, only test-test edges