Introduction to Machine Learning for Category Representation

Introduction to Machine Learning for Category Representation
Jakob Verbeek, October 1, 2010
Course website: http://lear.inrialpes.fr/~verbeek/MLCR.10.11.php
Many slides adapted from S. Lazebnik

Plan for the course
Session 1, October 1 2010
- Cordelia Schmid: Introduction
- Jakob Verbeek: Introduction to Machine Learning
Session 2, December 3 2010
- Jakob Verbeek: Clustering with k-means, mixture of Gaussians
- Cordelia Schmid: Local invariant features
- Student presentation 1: Scale and affine invariant interest point detectors, Mikolajczyk, Schmid, IJCV 2004.
Session 3, December 10 2010
- Cordelia Schmid: Instance-level recognition: efficient search
- Student presentation 2: Scalable Recognition with a Vocabulary Tree, Nister and Stewenius, CVPR 2006.

Plan for the course
Session 4, December 17 2010
- Jakob Verbeek: Mixture of Gaussians, EM algorithm, Fisher vector image representation
- Cordelia Schmid: Bag-of-features models for category-level classification
- Student presentation 3: Beyond bags of features: spatial pyramid matching for recognizing natural scene categories, Lazebnik, Schmid and Ponce, CVPR 2006.
Session 5, January 7 2011
- Jakob Verbeek: Classification 1: generative and non-parametric methods
- Student presentation 4: Large-Scale Image Retrieval with Compressed Fisher Vectors, Perronnin, Liu, Sanchez and Poirier, CVPR 2010.
- Cordelia Schmid: Category-level localization: sliding window and shape model
- Student presentation 5: Object Detection with Discriminatively Trained Part Based Models, Felzenszwalb, Girshick, McAllester and Ramanan, PAMI 2010.
Session 6, January 14 2011
- Jakob Verbeek: Classification 2: discriminative models
- Student presentation 6: TagProp: Discriminative metric learning in nearest neighbor models for image auto-annotation, Guillaumin, Mensink, Verbeek and Schmid, ICCV 2009.
- Student presentation 7: IM2GPS: estimating geographic information from a single image, Hays and Efros, CVPR 2008.

What is machine learning? According to Wikipedia:
"Learning is acquiring new knowledge, behaviors, skills, values, preferences or understanding, and may involve synthesizing different types of information. The ability to learn is possessed by humans, animals and some machines. Progress over time tends to follow learning curves."
"Machine learning is a scientific discipline that is concerned with the design and development of algorithms that allow computers to change behavior based on data, such as from sensor data or databases. A major focus of machine learning research is to automatically learn to recognize complex patterns and make intelligent decisions based on data. Hence, machine learning is closely related to fields such as statistics, probability theory, data mining, pattern recognition, artificial intelligence, adaptive control, and theoretical computer science."

Why machine learning?
- Extract knowledge/information from past experience/data, and use it to analyze new experiences/data.
- Designing rules to deal with new data by hand can be difficult: how would you design a rule to decide whether there is a cat in an image?
- Collecting data can be easier: find images with cats, and ones without them, then use machine learning to automatically find such rules.
Goal of this course: an introduction to the machine learning techniques used in current object recognition systems.

Steps in machine learning
- Problem formulation: what is it that we try to predict for new data?
- Data collection: gather "training data", optionally with "labels" provided by a "teacher".
- Representation: how the data are encoded into "features" when presented to the learning algorithm.
- Modeling: choose the class of models that the learning algorithm will choose from.
- Estimation: find the model that best explains the data: simple and fits well.
- Validation: evaluate the learned model and compare it to solutions found using other model classes.
- Deployment of the learned model.
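
These steps can be made concrete in a few lines of code. The following is a minimal sketch, assuming scikit-learn is available; the synthetic dataset and the choice of logistic regression are illustrative stand-ins, not part of the course material.

```python
# Sketch of the machine-learning steps as a scikit-learn pipeline.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = make_classification(n_samples=200, random_state=0)  # data collection
X_train, X_val, y_train, y_val = train_test_split(X, y, random_state=0)

model = make_pipeline(StandardScaler(),      # representation: feature encoding
                      LogisticRegression())  # modeling: chosen model class
model.fit(X_train, y_train)                  # estimation: fit model to the data
print(model.score(X_val, y_val))             # validation on held-out data
```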

Data Representation
- An important issue when using learning techniques.
- Different types of representations: vectorial, graphs, …
- Homogeneous or heterogeneous, e.g. images + text.
- The choice of representation may impact the choice of learning algorithm.
- Domain knowledge can help to design or select good features; the ultimate feature would solve the learning problem…
- Automatic methods for choosing among features are known as "feature selection" methods.

Probability & Statistics in Learning Many learning methods formulated as a probabilistic model of data Can deal with uncertainty in the data Missing values for some data can be handled Provides a unified framework to combine many different models for different types of data Statistics are used to analyze the behavior of learning algorithms Does the learning algorithm recover the underlying model given enough data: “consistency” How fast does is do so: rate of convergence Common important assumption Training data sampled from the true data distribution The test data is sampled from the same distribution

Different forms of learning
- Supervised: classification, regression.
- Unsupervised: clustering, dimension reduction, topic models, density estimation.
- Semi-supervised: combine labeled data with unlabeled data.
- Active learning: determine the most useful data to label next.
- Many other forms…

Supervised learning
- Training data is provided as pairs (x, y).
- The goal is to predict an "output" y from an "input" x.
- The output y for each input x is the "supervision" that is given to the learning algorithm; it is often obtained by manual "annotation" of the inputs x, which can be costly.
- Most common examples: classification and regression.

Classification
- Predict, for an input x, to which of a finite set of classes the input belongs. Training data consists of pairs (x, y).
- Example: the input x is an image; the output y is a category label, e.g. "cat" vs. "no cat", or "cat" vs. "dog" vs. "bird".
- Learn a "classifier" function f(x) from the input data that outputs the class label, or a probability over the class labels.
- Classification can be binary (two classes) or over a larger number of classes (multi-class). In binary classification we often refer to one class as "positive" and the other as "negative".
- Classifiers partition the input space into regions assigned to each class.
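
As a concrete illustration, here is a minimal classification sketch, assuming scikit-learn is available; the Gaussian toy data stands in for image feature vectors, and k-nearest-neighbors stands in for whichever classifier one prefers.

```python
# Minimal binary classification: learn f(x) and predict labels / probabilities.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)
# Two classes in 2D: "positive" features around (0,0), "negative" around (3,3).
X = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(3, 1, (50, 2))])
y = np.array([1] * 50 + [0] * 50)  # 1 = positive class, 0 = negative

clf = KNeighborsClassifier(n_neighbors=5).fit(X, y)
print(clf.predict([[0.5, 0.5], [3.2, 2.8]]))   # hard class labels
print(clf.predict_proba([[1.5, 1.5]]))         # probability over the labels
```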

Example of classification
Given: training images and their categories. What are the categories of these test images?

Regression
- Similar to classification, but the output y has the form of one or more real numbers. Training data consists of pairs (x, y); y can be a vector, and x might contain both continuous and discrete values.
- Learn a function f(x) that gives an output close to the true y.
- A "loss" function measures how good a certain function f is.
- In classification we want to minimize the number of errors, using a 0/1 loss: a correct classification costs 0, an incorrect classification costs 1.
- In regression the loss gets bigger as f(x) is further from the correct y, e.g. the squared loss (y - f(x))².
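
A minimal regression sketch, assuming only numpy; the synthetic one-dimensional data and the linear model are illustrative choices, and the least-squares fit is exactly the one that minimizes the summed squared loss above.

```python
# Fit f(x) = a*x + b by least squares and measure the mean squared loss.
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(0, 10, 100)
y = 2.0 * x + 1.0 + rng.normal(0, 0.5, 100)   # true relation plus noise

a, b = np.polyfit(x, y, deg=1)                # least-squares estimate of (a, b)
f = a * x + b
print("mean squared loss:", np.mean((y - f) ** 2))
```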

Regression: example
- Training set: x is a face image, processed by detection of characteristic points (represented by a vector of pairwise distances and the appearance around the points); y is the age of that person.
- Learn: a function f(x) that outputs an age estimate for a person.

Other forms of supervised learning
- Structured prediction tasks: predict several interdependent output variables.
- Example: recognizing a word image can be easier than recognizing its individual letters, because the context of other, easier letters disambiguates the interpretation of the more difficult ones.

Structured Prediction
- Example: estimation of body poses, where the part locations are interdependent.
- Data association problem: assigning image edges to the body parts of the model.
Source: D. Ramanan

Other supervised learning scenarios
- Metric learning: learn a distance metric to compare objects.
- Training data: pairs of images (x1, x2) with a label, +1 for the same class or -1 for different classes.
- Goal: decide whether a new pair of images belongs to the same class.
Source: X. Sui, K. Grauman
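
A toy metric-learning sketch: fit a logistic regression on the per-dimension squared differences of each pair, which amounts to learning a diagonal Mahalanobis metric plus a decision threshold. This simple scheme is only loosely inspired by full methods such as the one on the next slide; the synthetic pairs are an assumption for illustration.

```python
# Learn a diagonal metric from labeled pairs (same class vs. different class).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n, d = 500, 8
same = rng.integers(0, 2, n).astype(bool)      # pair labels
X1 = rng.normal(size=(n, d))
# Same-class pairs are close, different-class pairs are far, by construction.
X2 = X1 + rng.normal(scale=np.where(same[:, None], 0.3, 2.0), size=(n, d))

D = (X1 - X2) ** 2                       # per-dimension squared differences
clf = LogisticRegression().fit(D, same)  # weights = diagonal metric, bias = threshold

# Decide whether new pairs belong to the same class:
print(clf.predict((X1[:3] - X2[:3]) ** 2))
```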

Learning face similarities
- Training data: pairs of faces labeled as same/different.
- The similarity measure should ignore pose, expression, …
- Some examples of face pairs recognized as the same.
[Guillaumin, Verbeek, Schmid, ICCV 2009]

Unsupervised learning
- Input data x is given without desired output variables y; the goal is to learn something about the "structure" of the data.
- Examples include clustering, dimensionality reduction, and density estimation.
- It is not always clear how to measure the success of unsupervised learning:
- Probabilistic models can be evaluated by computing the likelihood assigned to other data sampled from the same distribution.
- Clustering can be evaluated by learning on labeled data and measuring how the clusters correspond to the classes, but the classes may not define the most apparent clusters.
- Dimensionality reduction can be evaluated by reconstruction errors.

Clustering
- Finding a group structure in the data: data in one cluster are similar to each other, data in different clusters are dissimilar.
- Map each data point to a discrete cluster index.
- "Flat" methods find k groups (with k known, or set automatically).
- "Hierarchical" methods define a tree structure over the data.
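
A minimal "flat" clustering sketch with k-means, assuming scikit-learn; the three well-separated Gaussian blobs and the fixed k = 3 are illustrative assumptions. (The course session on December 3 covers k-means in detail.)

```python
# Flat clustering: map each data point to one of k discrete cluster indices.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(c, 0.5, (50, 2)) for c in (0, 4, 8)])

km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)
print(km.labels_[:10])        # discrete cluster index for each data point
print(km.cluster_centers_)    # one center per group
```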

Clustering example
- Metric learning from training face pairs labeled as same/different.
- Clustering of other faces (of different people), produced using the learned similarity.
[Guillaumin, Verbeek, Schmid, ICCV 2009]

Dimension reduction
- Finding a lower-dimensional representation of the data.
- Useful for compression, visualization, and noise reduction.
- Unlike regression, the target values are not given.
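
A linear dimension-reduction sketch with PCA, assuming scikit-learn; PCA is one standard choice, not the only one, and the random 400-d inputs are a stand-in for the pixel-image examples on the next slides.

```python
# Project high-dimensional data to a few dimensions and measure reconstruction.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 400))            # 100 samples, 400 dimensions

pca = PCA(n_components=3).fit(X)
Z = pca.transform(X)                       # low-dimensional representation
X_rec = pca.inverse_transform(Z)           # back-projection to the input space
print("reconstruction error:", np.mean((X - X_rec) ** 2))
```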

Dimension reduction
- High-dimensional input: black images with a moving white square.
- Representation: 20x20 pixel values collected in a 400-d vector x.
- 3D visualization: a linear projection of the 400-d space; images with the white square in neighboring locations are connected for visualization.

Dimension reduction
- High-dimensional input: 20x28 pixel grey-valued images of a face.
- 2D visualization: found automatically; captures pose + expression.

Density estimation
- Fit a probability density to the training data; the data can be a combination of discrete and continuous variables.
- A good fit gives high likelihood to the training data; a smooth function generalizes to new data.
- Can be used to detect anomalies.
- Many forms of (un)supervised learning can be understood as doing density estimation: clustering, dimension reduction, classification.
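
A density-estimation sketch with a mixture of Gaussians, assuming scikit-learn; the two-component mixture and the synthetic data are illustrative choices. A low likelihood under the fitted density flags a point as a possible anomaly.

```python
# Fit a density to training data and score new points by their log-likelihood.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
X = rng.normal(0, 1, (500, 2))                     # training data

gmm = GaussianMixture(n_components=2, random_state=0).fit(X)
# The far-away point gets a much lower log-likelihood: an anomaly candidate.
print(gmm.score_samples([[0.0, 0.0], [8.0, 8.0]]))
```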

Different forms of learning
- Supervised: classification, regression.
- Unsupervised: clustering, dimension reduction, density estimation.
- Semi-supervised: combine labeled data with unlabeled data.
- Active learning: determine the most useful data to label next.
- Many other forms…

Semi-supervised learning
- Learn from both supervised and unsupervised data: labeled data is often expensive to obtain, while unlabeled data is often cheap.
- Why should this work? The unsupervised data is used to learn about the distribution of the inputs x; the supervised data is used to learn about the input x given the output y.

Example of semi-supervised learning
- Classification of newsgroup articles into 20 different classes: politics, sports, education, …
- Use EM to iteratively estimate the class labels of the unlabeled data and update the model.
- Helps when few labeled examples are available.
[Nigam et al., Machine Learning, Vol. 39, pp. 103-134, 2000]
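
A semi-supervised sketch via self-training, assuming scikit-learn's SelfTrainingClassifier as a simple stand-in for the EM procedure of Nigam et al.; the synthetic dataset and logistic regression base classifier are illustrative assumptions.

```python
# Self-training: iteratively label the unlabeled data and refit the model.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.semi_supervised import SelfTrainingClassifier

X, y = make_classification(n_samples=300, random_state=0)
y_partial = y.copy()
y_partial[30:] = -1              # only 30 labeled examples; -1 marks "unlabeled"

model = SelfTrainingClassifier(LogisticRegression()).fit(X, y_partial)
print("accuracy vs. true labels:", np.mean(model.predict(X) == y))
```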

Active learning
- The learning algorithm can choose its own training examples, or ask a "teacher" for an answer on selected inputs, e.g.:
- Labeling of the most uncertain images.
- Labeling of the images that maximally reduce the uncertainty in the model parameters.
S. Vijayanarasimhan and K. Grauman, "Cost-Sensitive Active Visual Category Learning," 2009
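
A sketch of the first strategy, uncertainty sampling, assuming any probabilistic classifier (logistic regression here) and a synthetic unlabeled pool; both are illustrative assumptions, not part of the cited method.

```python
# Query the unlabeled example whose predicted label is most uncertain.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X_labeled = rng.normal([[0, 0]] * 10 + [[3, 3]] * 10)   # small labeled set
y_labeled = np.array([0] * 10 + [1] * 10)
X_pool = rng.uniform(-1, 4, (200, 2))                   # unlabeled pool

clf = LogisticRegression().fit(X_labeled, y_labeled)
proba = clf.predict_proba(X_pool)
uncertainty = 1.0 - proba.max(axis=1)    # low max-probability = uncertain
query = np.argmax(uncertainty)           # ask the "teacher" to label this one
print("most uncertain input:", X_pool[query])
```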

Generalization
- The goal is to predict as well as possible on new data, not seen during training but sampled from the same underlying distribution; to learn models we only have access to the (labeled) training set.
- What makes generalization possible? An inductive bias: the set of assumptions a learner uses to predict the target value for previously unseen inputs. Use domain knowledge to choose good features and to design good models (and learn their parameters from the training data).
- Types of inductive bias:
- Occam's razor: simple models are to be preferred over complex ones, unless invalidated by the (training) data.
- Similarity/continuity bias: similar inputs should have similar outputs.
- …

Achieving good generalization
- Consideration 1: bias. How well does your model fit the observed data? It may be a good idea to accept some fitting error, because it may be due to noise or other "accidental" characteristics of one particular training set.
- Consideration 2: variance. How robust is the model to the selection of a particular training set? To put it differently, if we learn models on two different training sets, how consistent will the models be?

Bias/variance tradeoff
- Models with too many parameters may fit the training data well (low bias), but are sensitive to the choice of training set (high variance): the generalization error is due to overfitting.
- Models with too few parameters may not fit the data well (high bias), but are consistent across different training sets (low variance): the generalization error is due to underfitting.

Underfitting and overfitting
- How to recognize underfitting? High training error and high test error. How to deal with it? Find a more complex model.
- How to recognize overfitting? Low training error, but high test error. How to deal with it? Get more training data, decrease the number of parameters in your model, or use regularization: penalize certain parts of the parameter space or introduce additional constraints to deal with a potentially ill-posed problem.
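
A sketch of how these symptoms show up in practice, assuming only numpy; polynomial curve fitting of increasing degree is the classic illustrative example, and the sinusoidal data is an assumption.

```python
# Compare training and test error of models of increasing complexity.
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, 30)
y = np.sin(3 * x) + rng.normal(0, 0.1, 30)
x_test = rng.uniform(-1, 1, 100)
y_test = np.sin(3 * x_test) + rng.normal(0, 0.1, 100)

for degree in (1, 3, 9):
    coeffs = np.polyfit(x, y, degree)
    train_err = np.mean((y - np.polyval(coeffs, x)) ** 2)
    test_err = np.mean((y_test - np.polyval(coeffs, x_test)) ** 2)
    # degree 1: both errors high (underfitting);
    # high degree: train error keeps dropping while test error grows (overfitting)
    print(degree, round(train_err, 4), round(test_err, 4))
```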

Methodology
- The distinction between training and testing is crucial: correct performance on the training set is just memorization, and is not enough to guarantee good performance on new test data!
- Strictly speaking, the researcher should never look at the test data when designing the system; generalization performance should be evaluated on a held-out or validation set.
- This raises some troubling issues for learning "benchmarks".
Source: R. Parr
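
A minimal held-out evaluation sketch, assuming scikit-learn; the dataset and classifier are illustrative. The test split is never touched while designing the system and is used only once, to measure generalization.

```python
# Hold out a test set; report both train and test accuracy.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=500, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0)

clf = LogisticRegression().fit(X_train, y_train)
print("train accuracy:", clf.score(X_train, y_train))  # can be mere memorization
print("test accuracy:", clf.score(X_test, y_test))     # the number that counts
```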

Plan for the course
Session 1, October 1 2010
- Cordelia Schmid: Introduction
- Jakob Verbeek: Introduction to Machine Learning
Session 2, December 3 2010
- Jakob Verbeek: Clustering with k-means, mixture of Gaussians
- Cordelia Schmid: Local invariant features
- Student presentation 1: Scale and affine invariant interest point detectors, Mikolajczyk, Schmid, IJCV 2004.
Session 3, December 10 2010
- Cordelia Schmid: Instance-level recognition: efficient search
- Student presentation 2: Scalable Recognition with a Vocabulary Tree, Nister and Stewenius, CVPR 2006.
Course website: http://lear.inrialpes.fr/~verbeek/MLCR.10.11.php