Machine Learning
Mehdi Ghayoumi, MSB rm 132, Office hours: Thur. 11-12 a.m.


1. Vision: face recognition, facial expression recognition, object tracking.
2. Big Data: data mining, streaming data over the Internet, fraud detection.
Methods: Naïve Bayes, SVM, KNN, HMM, NN, ...

Machine Learning Example: Generate N 2-dimensional data points distributed according to the Gaussian distribution N(m, S), with mean m = [0, 0]^T, for several choices of the covariance matrix S. Plot each data set and comment on the shape of the clusters formed by the data points.
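A minimal Python sketch of this experiment, assuming a sample size and three illustrative covariance matrices (the slide's exact N and S cases are not reproduced here):

import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
N = 500                      # assumed sample size (the slide's N is not shown here)
m = np.zeros(2)              # mean m = [0, 0]^T
cases = {                    # illustrative covariance matrices (assumed)
    "spherical": np.eye(2),
    "diagonal, unequal variances": np.diag([1.0, 0.2]),
    "correlated": np.array([[1.0, 0.8], [0.8, 1.0]]),
}
fig, axes = plt.subplots(1, len(cases), figsize=(12, 4))
for ax, (name, S) in zip(axes, cases.items()):
    X = rng.multivariate_normal(m, S, size=N)   # N points from N(m, S)
    ax.scatter(X[:, 0], X[:, 1], s=5)
    ax.set_title(name)
    ax.set_aspect("equal")
plt.show()

A spherical S gives a circular cloud; unequal diagonal variances give an axis-aligned ellipse; nonzero off-diagonal terms rotate the ellipse.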

Machine Learning

MINIMUM DISTANCE CLASSIFIERS
- The Euclidean Distance Classifier
- The Mahalanobis Distance Classifier

Machine Learning The Euclidean Distance Classifier
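In its standard form, the Euclidean distance classifier assigns a point x to the class whose mean vector is nearest:

assign x to ω_i  if  ||x - m_i|| <= ||x - m_j||  for all j,

where m_i is the mean of class ω_i. This is the Bayes-optimal rule for equiprobable Gaussian classes with common covariance S = σ^2 I.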

Machine Learning

The Mahalanobis Distance Classifier
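The Mahalanobis distance classifier replaces the Euclidean norm with a covariance-weighted distance,

d_M(x, m_i) = sqrt((x - m_i)^T S^{-1} (x - m_i)),

and assigns x to the class that minimizes d_M. For S = σ^2 I it coincides with the Euclidean classifier.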

Machine Learning

Example: Consider a 2-class classification task in the 3-dimensional space, where the two classes, ω1 and ω2, are modeled by Gaussian distributions with means m1 = [0, 0, 0]^T and m2 = [0.5, 0.5, 0.5]^T, respectively. Assume the two classes to be equiprobable. The covariance matrix for both distributions is

Machine Learning Given the point x = [0.1, 0.5, 0.1]^T, classify x (1) according to the Euclidean distance classifier and (2) according to the Mahalanobis distance classifier.

Machine Learning Step 1. Use the function euclidean_classifier. The answer is z = 1; that is, the point is classified to the ω1 class.
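A minimal Python stand-in for the euclidean_classifier computation, using the means from the example:

import numpy as np

m1 = np.array([0.0, 0.0, 0.0])
m2 = np.array([0.5, 0.5, 0.5])
x  = np.array([0.1, 0.5, 0.1])

# Classify to the class with the nearest mean in Euclidean distance.
dists = [np.linalg.norm(x - m) for m in (m1, m2)]
z = int(np.argmin(dists)) + 1   # 1-based class label
print(z)                        # prints 1: x is assigned to class ω1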

Machine Learning Step 2. Use the function mahalanobis_classifier. This time the answer is z = 2, meaning the point is classified to the ω2 class.
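A corresponding sketch for the Mahalanobis step. The covariance matrix S below is an assumed placeholder, with a large variance along the first axis, chosen so that the rule prefers ω2 as in the example:

import numpy as np

# Assumed covariance (placeholder; the slide's actual matrix is not shown here).
S = np.array([[0.8, 0.01, 0.01],
              [0.01, 0.2, 0.01],
              [0.01, 0.01, 0.2]])
S_inv = np.linalg.inv(S)
m1 = np.zeros(3)
m2 = np.full(3, 0.5)
x  = np.array([0.1, 0.5, 0.1])

def d_mahalanobis(x, m, S_inv):
    v = x - m
    return np.sqrt(v @ S_inv @ v)

# Classify to the class with the smaller Mahalanobis distance.
z = 1 + int(d_mahalanobis(x, m2, S_inv) < d_mahalanobis(x, m1, S_inv))
print(z)   # prints 2 for this assumed S: x is assigned to class ω2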

Machine Learning For this case, because the two classes are equiprobable Gaussians sharing the same covariance matrix, the optimal Bayesian classifier reduces to the Mahalanobis distance classifier. The point is therefore assigned to class ω2, in spite of the fact that it lies closer to m1 according to the Euclidean norm.

Machine Learning The maximum likelihood (ML) technique is a popular method for parametric estimation of an unknown pdf. Here we focus on Gaussian pdfs, assuming we are given N points x_i ∈ R^l, i = 1, 2, ..., N, which are known to be normally distributed.
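For Gaussian data, the ML estimates of the mean and covariance have the familiar closed form

m_ML = (1/N) Σ_{i=1..N} x_i,    S_ML = (1/N) Σ_{i=1..N} (x_i - m_ML)(x_i - m_ML)^T.

Note the 1/N factor: the ML covariance estimate is biased, unlike the unbiased sample covariance, which divides by N - 1.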

Machine Learning Example: Generate 50 2-dimensional feature vectors from a Gaussian distribution N(m, S). Let X be the resulting matrix, having the feature vectors as columns. Compute the ML estimates of the mean value m and the covariance matrix S of N(m, S).
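A sketch of this experiment, with assumed values for m and S (the slide's parameters are not reproduced here):

import numpy as np

rng = np.random.default_rng()
# Assumed parameters for illustration only.
m = np.array([2.0, -2.0])
S = np.array([[0.9, 0.2],
              [0.2, 0.3]])

X = rng.multivariate_normal(m, S, size=50).T   # 2 x 50: feature vectors as columns
N = X.shape[1]

m_ml = X.mean(axis=1)                 # ML estimate of the mean
D = X - m_ml[:, None]
S_ml = (D @ D.T) / N                  # ML estimate divides by N, not N - 1
print(m_ml)
print(S_ml)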

Machine Learning Note that the returned values depend on the seed of the random number generator, so the estimates deviate slightly from run to run.

Thank you!