Machine Learning in Practice Lecture 27

Machine Learning in Practice Lecture 27 Carolyn Penstein Rosé, Language Technologies Institute / Human-Computer Interaction Institute

Plan for the Day
- Announcements
- Questions?
- Brief Mid-term Feedback
- Yingbo et al., 2007 paper

Midterm Feedback

Challenge Question
Let's say you are using 10-fold cross-validation to tune an algorithm with two parameters you want to tune. On each iteration, you divide the training data into train and validation sets rather than using cross-validation. For one parameter you are considering 5 different possible values; for the other, 3. How many different models altogether will you train for:
- Stage 1: estimating tuned performance
- Stage 2: finding the best settings
- Stage 3: building the model

Challenge Question (answers)
- Stage 1: estimating tuned performance: 160 models (10 folds × [5 × 3 = 15 parameter settings + 1 retrain with the selected settings])
- Stage 2: finding the best settings: 150 models (15 parameter settings × 10 folds)
- Stage 3: building the model: 1 model
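The counting in the three stages can be sketched as nested loops. This is an illustrative sketch of the procedure described in the question, not code from the lecture; the constants come from the slide (10 folds, a 5 × 3 parameter grid).

```python
# Illustrative sketch: count the models trained in each tuning stage.
N_FOLDS = 10
PARAM_GRID = [(a, b) for a in range(5) for b in range(3)]  # 5 x 3 = 15 settings

models_trained = {"stage1": 0, "stage2": 0, "stage3": 0}

# Stage 1: estimate tuned performance with an outer 10-fold cross-validation.
# Inside each fold, try every setting on a train/validation split, then
# retrain once with the chosen settings before testing on the held-out fold.
for fold in range(N_FOLDS):
    for setting in PARAM_GRID:
        models_trained["stage1"] += 1   # one model per setting on the inner split
    models_trained["stage1"] += 1       # retrain the best setting on the full training fold

# Stage 2: find the best settings, evaluating each setting with 10-fold CV.
for setting in PARAM_GRID:
    for fold in range(N_FOLDS):
        models_trained["stage2"] += 1

# Stage 3: build the final model once with the winning settings.
models_trained["stage3"] += 1

print(models_trained)  # {'stage1': 160, 'stage2': 150, 'stage3': 1}
```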

Mid-term Discussion
- How might you estimate how long it will take to tune your algorithm, given how you have it set up?
- What was your strategy?
- How did you determine what were reasonable ranges for parameter values?
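One rough way to answer the first question: multiply the number of models your setup will train by the average time to train and evaluate one of them. The function and timing numbers below are hypothetical, for illustration only.

```python
# Back-of-the-envelope estimate of total tuning time: number of models
# times average time per model. The timing numbers are made up.

def estimate_tuning_hours(n_models: int, seconds_per_model: float) -> float:
    """Rough wall-clock estimate. Assumes training time is roughly constant
    across parameter settings, which is often false (e.g. SVM cost values)."""
    return n_models * seconds_per_model / 3600.0

# Using the 160 + 150 + 1 models from the challenge question, at ~2 minutes each:
total_models = 160 + 150 + 1
print(round(estimate_tuning_hours(total_models, 120), 1))  # ~10.4 hours
```

Timing a single representative run before launching the full grid is usually the cheapest way to get `seconds_per_model`.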

Yingbo et al., 2007 paper

What Makes a Good Paper
- Argue why the problem is important, what your big result is, and what you learned about your application area
- Explain prior work, where previous approaches “went wrong”, and where your approach fits
- Summarize your approach
- Justify your evaluation metrics, corpus, gold standard, and baselines
- Present your results
- Discuss your error analysis

What Makes a Good Paper
- Argue why the problem is important, what your big result is, and what you learned about your application area
- Explain prior work, where previous approaches “went wrong”, and where your approach fits
- Summarize your approach
- Justify your evaluation metrics, corpus, gold standard, and baselines
- Present your results
- Discuss your error analysis

Application: Staff Assignment
- Staff assignment is normally performed manually
- Poor assignment of human resources causes unnecessary expenses

What Makes a Good Paper
- Argue why the problem is important, what your big result is, and what you learned about your application area
- Explain prior work, where previous approaches “went wrong”, and where your approach fits
- Summarize your approach
- Justify your evaluation metrics, corpus, gold standard, and baselines
- Present your results
- Discuss your error analysis

Background on Task
- Workflow model: a plan that can be executed, composed of activities and tasks
- Each activity is performed by one or more actors
- An event entry records that an activity was accomplished; event entries are recorded in an event log
- One path through the plan is called a case or workflow instance

Setting Up the Problem
- Raw data: an event log that shows, over time, who executes which activities
- Each event is an instance
- Class attribute: nominal; the names of people who are potential assignees
- Features: “event entries of those completed activities in the same workflow instance”
- The representation is like a word vector, where each activity is like a word
- Suitable activities are ones that are executed many times and have many potential actors
- 22 activities from enterprise A and 35 from enterprise B
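The word-vector-like representation described above can be sketched as follows. Each event becomes one instance whose features count the activities already completed in the same workflow instance, labeled with the actor who performed the event. The event log, field names, and actors below are entirely made up for illustration; this is not the paper's code.

```python
from collections import Counter

# Hypothetical event log: (case_id, activity, actor), in execution order.
event_log = [
    ("case-1", "review_order", "alice"),
    ("case-1", "approve_order", "bob"),
    ("case-2", "review_order", "alice"),
    ("case-2", "escalate", "carol"),
]

# The feature space: one dimension per activity, like a word vocabulary.
ACTIVITIES = sorted({activity for _, activity, _ in event_log})

def to_instances(log):
    """Emit one (feature_vector, class_label) pair per event.
    Features count prior completed activities in the same case;
    like a word vector, the counts discard ordering information."""
    completed = {}  # case_id -> Counter of activities completed so far
    instances = []
    for case_id, activity, actor in log:
        counts = completed.setdefault(case_id, Counter())
        features = [counts[a] for a in ACTIVITIES]  # snapshot before this event
        instances.append((features, actor))
        counts[activity] += 1
    return instances

for features, label in to_instances(event_log):
    print(features, label)
```

Note how the second event in `case-1` sees `review_order` already counted, which is exactly the kind of within-case context the classifier learns from.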

More on Feature Space Design
- Information in the event log: “actor’s identity, data contained in the workflow and structure information”
- Too vague to see what they are doing here
- Refers to documents that people access while executing the tasks
- Later they mention social network information and expertise

More on Feature Space Design
- They did separate experiments for each workflow model
- Each instance is a vector representing the sequence of activities executed by an actor to achieve a goal as part of a case
- Ordering information must be lost
- Basically, you are identifying the different characteristic styles of executing a plan that are associated with different actors

What Makes a Good Paper
- Argue why the problem is important, what your big result is, and what you learned about your application area
- Explain prior work, where previous approaches “went wrong”, and where your approach fits
- Summarize your approach
- Justify your evaluation metrics, corpus, gold standard, and baselines
- Present your results
- Discuss your error analysis

Why Did They Pick SVM?
- No significant difference between algorithms on average
- Decided to go with the most consistent algorithm

Application: Staff Assignment
- Results for car manufacturing are 85.5% for one company and 80.1% for the other
- But what is the kappa? How hard is the problem?
- What kinds of mistakes does it make?
- What does accuracy mean here?
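To see why "what is the kappa?" matters, here is a sketch with entirely made-up numbers (the confusion matrix below is hypothetical, not from the paper): when one actor handles most events, high accuracy can come mostly from chance agreement, and Cohen's kappa corrects for that.

```python
# Cohen's kappa from a confusion matrix, to contrast with raw accuracy.

def cohens_kappa(confusion):
    """confusion[i][j] = count of instances with true class i, predicted class j."""
    n = sum(sum(row) for row in confusion)
    observed = sum(confusion[i][i] for i in range(len(confusion))) / n
    # Chance agreement: product of row and column marginals per class.
    expected = sum(
        (sum(confusion[i]) / n) * (sum(row[i] for row in confusion) / n)
        for i in range(len(confusion))
    )
    return (observed - expected) / (1 - expected)

# Hypothetical: 2 actors, actor A performs 85% of 1000 events, and the
# model predicts A almost always.
confusion = [
    [840, 10],   # true A: 840 predicted A, 10 predicted B
    [135, 15],   # true B: 135 predicted A, 15 predicted B
]
accuracy = (840 + 15) / 1000     # 0.855 -- looks strong
kappa = cohens_kappa(confusion)  # ~0.13 -- barely better than chance
print(accuracy, round(kappa, 3))
```

An accuracy of 85.5% with a kappa near 0.13 would tell a very different story than the headline number alone, which is exactly the point of the questions above.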

What Makes a Good Paper
- Argue why the problem is important, what your big result is, and what you learned about your application area
- Explain prior work, where previous approaches “went wrong”, and where your approach fits
- Summarize your approach
- Justify your evaluation metrics, corpus, gold standard, and baselines
- Present your results
- Discuss your error analysis

Take Home Message
Positives:
- Real-world application of machine learning
- Evaluation on a realistic dataset
- The related-work discussion points out the unique contribution of this work, though the latter portion of the discussion sounds like a “laundry list”
- Lots of consideration of feature space design, though it is not clear the feature space design makes sense for the task: is it learning the right thing?
Negatives:
- The evaluation seems weak, and there is no error analysis
- The writing is not clear; you should try to explain things more clearly than this