CMPUT 615 Applications of Machine Learning in Image Analysis

K-Means Clustering

K-means Overview
- A clustering algorithm
- An approximation to an NP-hard combinatorial optimization problem
- Unsupervised
- "K" stands for the number of clusters; it is a user input to the algorithm
- From a set of data points or observations (all numerical), K-means attempts to classify them into K clusters
- The algorithm is iterative in nature

K-means Details
- $x_1, \ldots, x_N$ are the data points (vectors, observations)
- Each observation is assigned to one and only one cluster
- $C(i)$ denotes the cluster number of the i-th observation
- Dissimilarity measure: the Euclidean distance metric
- K-means minimizes the within-cluster point scatter:

    $W(C) = \frac{1}{2}\sum_{k=1}^{K}\sum_{C(i)=k}\sum_{C(j)=k}\|x_i - x_j\|^2 = \sum_{k=1}^{K} N_k \sum_{C(i)=k}\|x_i - m_k\|^2$

  where $m_k$ is the mean vector of the k-th cluster and $N_k$ is the number of observations in the k-th cluster
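For concreteness (a sketch, not part of the original slides), W(C) can be evaluated directly in Matlab; X (an N-by-d data matrix), C (an N-by-1 label vector), and K are assumed inputs:

    % Within-cluster point scatter W(C) for data X and cluster labels C
    W = 0;
    for k = 1:K
        Xk = X(C == k, :);                    % observations in cluster k
        Nk = size(Xk, 1);                     % cluster size N_k
        mk = mean(Xk, 1);                     % cluster mean m_k
        W = W + Nk * sum(sum((Xk - mk).^2));  % N_k * scatter around m_k
    end                                       % (implicit expansion: R2016b+)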

K-means Algorithm
1. For a given assignment C, compute the cluster means:

    $m_k = \frac{\sum_{i:\,C(i)=k} x_i}{N_k}, \qquad k = 1, \ldots, K$

2. For the current set of cluster means, assign each observation to the cluster with the nearest mean:

    $C(i) = \arg\min_{1 \le k \le K} \|x_i - m_k\|^2, \qquad i = 1, \ldots, N$

3. Iterate the two steps above until convergence (the assignments no longer change)
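A minimal Matlab sketch of this loop (not from the slides; it assumes a data matrix X of size N-by-d and a cluster count K, uses pdist2 from the Statistics Toolbox, and does not handle clusters that become empty):

    % Minimal K-means: alternate the two steps until assignments stop changing.
    N = size(X, 1);
    C = randi(K, N, 1);                           % random initial assignment
    changed = true;
    while changed
        m = zeros(K, size(X, 2));
        for k = 1:K                               % step 1: cluster means m_k
            m(k, :) = mean(X(C == k, :), 1);
        end
        [~, Cnew] = min(pdist2(X, m).^2, [], 2);  % step 2: nearest-mean labels
        changed = any(Cnew ~= C);
        C = Cnew;
    end

In practice one would call Matlab's built-in kmeans (used on the next slide) rather than this loop; the sketch only makes the two alternating steps explicit.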

Image Segmentation Results

[Figure: an input image I and a three-cluster image J computed on the gray values of I]

Matlab code:

    I = double(imread('…'));
    J = reshape(kmeans(I(:), 3), size(I));

Note that the K-means result is "noisy".

Summary
- K-means converges, but only to a local minimum of the cost function
- Works only for numerical observations (for categorical or mixed observations, K-medoids is an alternative clustering method)
- Fine-tuning is required when it is applied to image segmentation, mostly because the K-means algorithm imposes no spatial coherency (a simple fix-up is sketched below)
- Often serves as a starting point for more sophisticated image segmentation algorithms
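One simple fix-up along these lines (a sketch, not from the original slides) is to median-filter the label image J from the earlier example, which removes isolated noisy labels at the cost of some boundary detail:

    % Hypothetical post-processing: smooth the K-means label image with a
    % 5-by-5 median filter (Image Processing Toolbox) for local coherency.
    J_smooth = medfilt2(J, [5 5]);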

Otsu’s Thresholding Method (1979)
- Based on the clustering idea: find the threshold that minimizes the weighted within-cluster point scatter. This turns out to be the same as maximizing the between-class scatter.
- Operates directly on the gray-level histogram [e.g. 256 numbers, P(i)], so it is fast (once the histogram is computed).

Otsu’s Method
- The histogram (and the image) is bimodal.
- No use of spatial coherence, nor any other notion of object structure.
- Assumes (implicitly) uniform illumination, so that the bimodal brightness behavior arises from object appearance differences only.

The weighted within-class variance is:

    $\sigma_w^2(t) = q_1(t)\,\sigma_1^2(t) + q_2(t)\,\sigma_2^2(t)$

where the class probabilities are estimated as:

    $q_1(t) = \sum_{i=1}^{t} P(i), \qquad q_2(t) = \sum_{i=t+1}^{I} P(i)$

and the class means are given by:

    $\mu_1(t) = \sum_{i=1}^{t} \frac{i\,P(i)}{q_1(t)}, \qquad \mu_2(t) = \sum_{i=t+1}^{I} \frac{i\,P(i)}{q_2(t)}$

(Here I denotes the number of gray levels, e.g. 256.)

Finally, the individual class variances are:

    $\sigma_1^2(t) = \sum_{i=1}^{t} [i - \mu_1(t)]^2\,\frac{P(i)}{q_1(t)}, \qquad \sigma_2^2(t) = \sum_{i=t+1}^{I} [i - \mu_2(t)]^2\,\frac{P(i)}{q_2(t)}$

Now, we could actually stop here: all we need to do is run through the full range of t values [1, 256] and pick the value that minimizes $\sigma_w^2(t)$. But the relationship between the within-class and between-class variances can be exploited to generate a recursion relation that permits a much faster calculation.
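A direct Matlab implementation of that exhaustive search could look like this (a sketch, assuming P is a normalized 256-bin gray-level histogram stored as a row vector, so sum(P) = 1):

    % Exhaustive Otsu search: pick the t that minimizes sigma_w^2(t).
    i = 1:256;
    best = Inf;
    for t = 1:255                               % split: bins 1..t vs t+1..256
        q1 = sum(P(1:t));   q2 = sum(P(t+1:end));
        if q1 == 0 || q2 == 0, continue; end    % skip empty classes
        mu1 = sum(i(1:t)     .* P(1:t))     / q1;
        mu2 = sum(i(t+1:end) .* P(t+1:end)) / q2;
        s1  = sum((i(1:t)     - mu1).^2 .* P(1:t))     / q1;
        s2  = sum((i(t+1:end) - mu2).^2 .* P(t+1:end)) / q2;
        sw  = q1 * s1 + q2 * s2;                % weighted within-class variance
        if sw < best, best = sw; tstar = t; end
    end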

Finally...

Initialization:

    $q_1(0) = 0, \qquad \mu_1(0) = 0$

Recursion (as the candidate threshold moves up one gray level at a time):

    $q_1(t+1) = q_1(t) + P(t+1)$

    $\mu_1(t+1) = \frac{q_1(t)\,\mu_1(t) + (t+1)\,P(t+1)}{q_1(t+1)}$

    $\mu_2(t+1) = \frac{\mu - q_1(t+1)\,\mu_1(t+1)}{1 - q_1(t+1)}$

where $\mu$ is the total mean of the histogram.

After some algebra, we can express the total variance as:

    $\sigma^2 = \underbrace{q_1(t)\,\sigma_1^2(t) + q_2(t)\,\sigma_2^2(t)}_{\text{within-class, from before}} + \underbrace{q_1(t)\,[1 - q_1(t)]\,[\mu_1(t) - \mu_2(t)]^2}_{\text{between-class, } \sigma_b^2(t)}$

Since the total is constant and independent of t, the effect of changing the threshold is merely to move the contributions of the two terms back and forth. So, minimizing the within-class variance is the same as maximizing the between-class variance. The nice thing about this is that we can compute the quantities in $\sigma_b^2(t)$ recursively as we run through the range of t values.
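Under the same assumptions on P as before (normalized 256-bin histogram, row vector), the equivalent fast pass updates $q_1(t)$ and $\mu_1(t)$ recursively and maximizes $\sigma_b^2(t)$; again a sketch, not the slides' own code:

    % Fast Otsu: maximize sigma_b^2(t) = q1*(1-q1)*(mu1-mu2)^2, updating
    % q1 and the running class-1 sums recursively instead of re-summing.
    mu = sum((1:256) .* P);                     % total mean, constant in t
    q1 = 0;  s = 0;  best = -Inf;
    for t = 1:255
        q1 = q1 + P(t);                         % q1(t) = q1(t-1) + P(t)
        s  = s + t * P(t);                      % running sum of i*P(i), i <= t
        if q1 == 0 || q1 >= 1, continue; end    % skip empty classes
        mu1 = s / q1;
        mu2 = (mu - q1 * mu1) / (1 - q1);       % class-2 mean from total mean
        sb  = q1 * (1 - q1) * (mu1 - mu2)^2;    % between-class variance
        if sb > best, best = sb; tstar = t; end
    end

Each iteration now costs O(1) instead of re-summing the histogram, so the whole scan is linear in the number of gray levels.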

Result of Otsu’s Algorithm

[Figure: an input image, its gray-level histogram, and the binary image produced by Otsu’s method]

Matlab code:

    I = double(imread('…'));
    I = (I - min(I(:))) / (max(I(:)) - min(I(:)));
    J = I > graythresh(I);