Machine learning
Image source: https://www.coursera.org/course/ml

Machine learning
Definition:
- Getting a computer to do well on a task without explicitly programming it
- Improving performance on a task based on experience

Learning for episodic tasks
We have just looked at learning in sequential environments. Now let's consider the "easier" problem of episodic environments: the agent gets a series of unrelated problem instances and has to make some decision or inference about each of them.

Example: Image classification
[Figure: input images mapped to desired output labels: apple, pear, tomato, cow, dog, horse]

Learning for episodic tasks
In this case, "experience" comes in the form of training data.

Training data
[Figure: labeled training images: apple, pear, tomato, cow, dog, horse]
Key challenge of learning: generalization to unseen examples

Example 2: Seismic data classification
[Figure: scatter plot of surface wave magnitude vs. body wave magnitude, separating earthquakes from nuclear explosions]

Example 3: Spam filter

Example 4: Sentiment analysis
http://gigaom.com/2013/10/03/stanford-researchers-to-open-source-model-they-say-has-nailed-sentiment-analysis/
http://nlp.stanford.edu:8080/sentiment/rntnDemo.html

The basic machine learning framework
y = f(x), where x is the input, f the classification function, and y the output
Learning: given a training set of labeled examples {(x1, y1), …, (xN, yN)}, estimate the parameters of the prediction function f
Inference: apply f to a never-before-seen test example x and output the predicted value y = f(x)
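To make the learning/inference split concrete, here is a minimal Python sketch; the toy nearest-class-mean model and the fit/predict names are illustrative choices, not something specified in the slides:

```python
import numpy as np

class NearestMeanClassifier:
    """Toy classifier illustrating the framework: learning estimates the
    parameters of f from labeled examples, inference applies f to new x."""

    def fit(self, X, y):
        # Learning: estimate one mean vector per class from {(x_i, y_i)}
        self.classes_ = np.unique(y)
        self.means_ = np.array([X[y == c].mean(axis=0) for c in self.classes_])
        return self

    def predict(self, X):
        # Inference: y = f(x) = label of the closest class mean
        dists = np.linalg.norm(X[:, None, :] - self.means_[None, :, :], axis=2)
        return self.classes_[dists.argmin(axis=1)]
```

With numpy arrays X_train, y_train, X_test, a call like NearestMeanClassifier().fit(X_train, y_train).predict(X_test) mirrors the framework: fit is the learning step, predict is the inference step.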

Naïve Bayes classifier
ŷ = argmax_y P(y) ∏_d P(x_d | y), where x_d is a single dimension or attribute of x
The attributes are assumed to be conditionally independent given the class.
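A rough sketch of the corresponding decision rule, assuming discrete attributes and class priors/likelihoods already estimated from training counts (the priors/likelihoods data layout here is a hypothetical choice):

```python
import numpy as np

def naive_bayes_predict(x, priors, likelihoods):
    """Score each class as P(y) * prod_d P(x_d | y); the attributes x_d
    are treated as conditionally independent given the class.
    priors[c] is P(c); likelihoods[c][d][v] is P(x_d = v | c)."""
    best_class, best_score = None, -np.inf
    for c in priors:
        # Work in log space to avoid underflow when there are many attributes
        score = np.log(priors[c])
        for d, v in enumerate(x):
            score += np.log(likelihoods[c][d][v])
        if score > best_score:
            best_class, best_score = c, score
    return best_class
```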

Decision tree classifier
Example problem: decide whether to wait for a table at a restaurant, based on the following attributes:
- Alternate: is there an alternative restaurant nearby?
- Bar: is there a comfortable bar area to wait in?
- Fri/Sat: is today Friday or Saturday?
- Hungry: are we hungry?
- Patrons: number of people in the restaurant (None, Some, Full)
- Price: price range ($, $$, $$$)
- Raining: is it raining outside?
- Reservation: have we made a reservation?
- Type: kind of restaurant (French, Italian, Thai, Burger)
- WaitEstimate: estimated waiting time (0-10, 10-30, 30-60, >60)

Decision tree classifier
[Figure: decision tree for the restaurant waiting problem]

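Decision tree induction greedily chooses, at each node, the attribute whose split most reduces uncertainty about the label. A sketch of that selection step, assuming the standard information-gain criterion (the dict-based encoding of the restaurant attributes is hypothetical):

```python
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy of a list of class labels."""
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def information_gain(examples, labels, attribute):
    """Reduction in label entropy from splitting on one attribute.
    examples is a list of dicts like {"Patrons": "Full", "Hungry": "Yes", ...}."""
    n = len(examples)
    remainder = 0.0
    for value in {e[attribute] for e in examples}:
        subset = [l for e, l in zip(examples, labels) if e[attribute] == value]
        remainder += len(subset) / n * entropy(subset)
    return entropy(labels) - remainder
```

The tree is grown by picking the attribute with the highest information gain, splitting the examples on its values, and recursing on each branch.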

Nearest neighbor classifier
[Figure: training examples from class 1 and class 2, plus a test example]
f(x) = label of the training example nearest to x
- All we need is a distance function for our inputs
- No training required!

K-nearest neighbor classifier
For a new point, find the k closest points in the training data and assign the class label by a majority vote among those k points
[Figure: example with k = 5]
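A compact sketch of both classifiers (k = 1 reduces to the nearest neighbor rule above); the function name and the choice of Euclidean distance are illustrative:

```python
import numpy as np
from collections import Counter

def knn_predict(x, X_train, y_train, k=5):
    """Label x by a majority vote over the k nearest training points."""
    distances = [np.linalg.norm(x - xi) for xi in X_train]  # distance function
    nearest = np.argsort(distances)[:k]                     # k closest points
    votes = Counter(y_train[i] for i in nearest)
    return votes.most_common(1)[0][0]
```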

Linear classifier
Find a linear function to separate the classes:
f(x) = sgn(w1x1 + w2x2 + … + wDxD) = sgn(w · x)
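In code, the decision rule is one line; the weights w are assumed to have been learned already (e.g., by the perceptron algorithm described below):

```python
import numpy as np

def linear_classify(x, w):
    """f(x) = sgn(w . x): +1 on one side of the hyperplane, -1 on the other."""
    return 1 if np.dot(w, x) >= 0 else -1
```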

NN vs. linear classifiers
NN pros:
- Simple to implement
- Decision boundaries not necessarily linear
- Works for any number of classes
- Nonparametric method
NN cons:
- Need good distance function
- Slow at test time
Linear pros:
- Low-dimensional parametric representation
- Very fast at test time
Linear cons:
- Only works for two classes (in its basic form)
- How to train the linear function?
- What if data is not linearly separable?

Perceptron
[Figure: inputs x1, x2, x3, …, xD with weights w1, w2, w3, …, wD feeding a single unit with output sgn(w · x + b)]
Can incorporate the bias b as a component of the weight vector by always including a feature with value set to 1

Loose inspiration: Human neurons
From Wikipedia: "At the majority of synapses, signals are sent from the axon of one neuron to a dendrite of another... All neurons are electrically excitable, maintaining voltage gradients across their membranes… If the voltage changes by a large enough amount, an all-or-none electrochemical pulse called an action potential is generated, which travels rapidly along the cell's axon, and activates synaptic connections with other cells when it arrives."

Linear separability

Perceptron training algorithm
- Initialize weights
- Cycle through training examples in multiple passes (epochs)
- For each training example:
  - If classified correctly, do nothing
  - If classified incorrectly, update weights

Perceptron update rule
For each training instance x with label y:
- Classify with current weights: y′ = sgn(w · x)
- Update weights: w ← w + α(y − y′)x
- α is a learning rate that should decay as 1/t, e.g., 1000/(1000 + t)
What happens if the answer is correct? Then y = y′, so the weights are unchanged.
Otherwise, consider what happens to individual weights: w_i ← w_i + α(y − y′)x_i
- If y = 1 and y′ = −1, w_i is increased if x_i is positive and decreased if x_i is negative, so w · x gets bigger
- If y = −1 and y′ = 1, w_i is decreased if x_i is positive and increased if x_i is negative, so w · x gets smaller
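Putting the algorithm and the update rule together, a minimal sketch (labels are assumed to be ±1; the bias trick and the 1000/(1000 + t) decay follow the slides, while the epoch count is arbitrary):

```python
import numpy as np

def train_perceptron(X, y, epochs=100):
    """Cycle through the data, updating w only on misclassified examples."""
    X = np.hstack([X, np.ones((len(X), 1))])  # bias trick: constant feature of 1
    w = np.zeros(X.shape[1])
    t = 0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            t += 1
            alpha = 1000.0 / (1000.0 + t)     # decaying learning rate
            y_pred = 1 if np.dot(w, xi) >= 0 else -1
            if y_pred != yi:                  # correct answers leave w unchanged
                w += alpha * (yi - y_pred) * xi
    return w
```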

Convergence of perceptron update rule
- Linearly separable data: converges to a perfect solution
- Non-separable data: converges to a minimum-error solution, assuming the learning rate decays as O(1/t) and examples are presented in random sequence

Implementation details
- Bias (add feature dimension with value fixed to 1) vs. no bias
- Initialization of weights: all zeros vs. random
- Number of epochs (passes through the training data)
- Order of cycling through training examples

Multi-class perceptrons
Need to keep a weight vector w_c for each class c
Decision rule: ĉ = argmax_c w_c · x
Update rule: suppose an example from class c gets misclassified as c′
- Update for c: w_c ← w_c + αx
- Update for c′: w_c′ ← w_c′ − αx
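A sketch of a single multi-class update step; W is assumed to hold one weight row per class:

```python
import numpy as np

def multiclass_perceptron_step(W, x, c_true, alpha=1.0):
    """Decision rule: argmax_c w_c . x. On a mistake, move the true class's
    weights toward x and the predicted class's weights away from x."""
    c_pred = int(np.argmax(W @ x))
    if c_pred != c_true:
        W[c_true] += alpha * x
        W[c_pred] -= alpha * x
    return W
```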

Differentiable perceptron
[Figure: inputs x1, …, xd with weights w1, …, wd feeding a single unit with output σ(w · x + b)]
Sigmoid function: σ(t) = 1 / (1 + e^(−t))

Update rule for differentiable perceptron
Define the total classification error or loss on the training set: E(w) = Σ_j (y_j − σ(w · x_j))²
Update weights by gradient descent: w ← w − α ∂E/∂w
For a single training point, the update is: w ← w + α (y − σ(w · x)) σ′(w · x) x

Update rule for differentiable perceptron
For a single training point, the update is: w ← w + α (y − σ(w · x)) σ′(w · x) x
Compare with the update rule for the non-differentiable perceptron: w ← w + α (y − y′) x
The differentiable version replaces the ±1 error (y − y′) with a smooth error term scaled by the sigmoid's slope.
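A sketch of one single-example gradient step under the squared-error loss above (targets y in [0, 1]; the learning rate is arbitrary):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def sigmoid_unit_step(w, x, y, alpha=0.1):
    """w <- w + alpha * (y - sigma(w.x)) * sigma'(w.x) * x,
    using sigma'(z) = sigma(z) * (1 - sigma(z)) for the logistic sigmoid."""
    out = sigmoid(np.dot(w, x))
    return w + alpha * (y - out) * out * (1.0 - out) * x
```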

Multi-Layer Neural Network
Can learn nonlinear functions
Training: find network weights w to minimize the error between true and estimated labels of training examples, e.g. E(w) = Σ_i (y_i − f_w(x_i))²
Minimization can be done by gradient descent provided f is differentiable
This training method is called back-propagation
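For intuition, one back-propagation step for a toy two-layer sigmoid network trained on squared error; this is a minimal sketch, not the specific architecture from the slides:

```python
import numpy as np

def mlp_step(W1, W2, x, y, alpha=0.1):
    """One back-propagation step for a two-layer sigmoid network.
    W1: (hidden, inputs), W2: (outputs, hidden); x, y are vectors."""
    sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
    # Forward pass
    h = sigmoid(W1 @ x)                       # hidden activations
    out = sigmoid(W2 @ h)                     # network output f(x)
    # Backward pass: chain rule pushes the output error back through W2
    delta_out = (y - out) * out * (1 - out)
    delta_h = (W2.T @ delta_out) * h * (1 - h)
    # Gradient-descent updates
    W2 += alpha * np.outer(delta_out, h)
    W1 += alpha * np.outer(delta_h, x)
    return W1, W2
```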

Deep convolutional neural networks
M. Zeiler and R. Fergus. Visualizing and Understanding Convolutional Neural Networks. Tech report, 2013.
A. Krizhevsky, I. Sutskever, and G. E. Hinton. ImageNet classification with deep convolutional neural networks. NIPS, 2012.

Demo from Berkeley http://decaf.berkeleyvision.org/