Chap. 8 Image Analysis, Sookmyung Women's University, Dept. of Computer Science, Young-Woo Choi, Fall 2005.

Image Analysis Abstract Extract usable global information from the image. The image analysis operators extract useful information such as the pixel value distribution, classified pixels, and connected components. This is part of middle-level image interpretation (signal-to-symbol transformation, i.e. feature extraction) and is used for high-level image interpretation (symbol-to-symbol transformation).

Intensity Histogram (I) Brief Description A graph showing the number of pixels in an image at each different intensity value. This shows the distribution of pixel intensities graphically. (Plot axes: intensity vs. count.)

Intensity Histogram (II) How It Works The image is scanned in a single pass and a running count of the number of pixels found at each intensity value is kept. Guideline for Use One of the common uses is to decide what value of threshold to use when converting a grayscale image to a binary one. (Figures: original image, its histogram, thresholded image.)
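The single-pass count and the threshold conversion described above can be sketched as follows. This is a minimal illustration with a made-up 8-level toy image; real grayscale images would use 256 levels.

```python
import numpy as np

def intensity_histogram(image, levels=256):
    """Single pass over the image, keeping a running count per intensity."""
    hist = np.zeros(levels, dtype=int)
    for v in image.ravel():
        hist[v] += 1
    return hist

def apply_threshold(image, t):
    """Convert a grayscale image to binary: 1 where intensity >= t, else 0."""
    return (image >= t).astype(np.uint8)

# A toy 8-level "image": a bright object on a dark background.
img = np.array([[1, 1, 6, 6],
                [0, 1, 7, 6],
                [1, 0, 6, 7]])
hist = intensity_histogram(img, levels=8)
binary = apply_threshold(img, t=4)  # t chosen in the valley between the two peaks
```

The histogram here is clearly bimodal (a low-intensity peak and a high-intensity peak), so any threshold in the empty valley between them separates object from background.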

Intensity Histogram (III) Example: Thresholding There is a significant illumination gradient across the image (a), and it blurs out the histogram (b). It is no longer possible to select a single global threshold that will neatly segment the object from its background; two failed thresholding segmentations are shown in (c) and (d). (a) Illuminated Image (b) Histogram (c) Threshold 80 (d) Threshold 120

Intensity Histogram (IV) Example: Contrast Stretching Contrast stretching takes an image in which the intensity values do not span the full intensity range and stretches its values linearly. The histogram of (a) shows that most of the pixels have rather high intensity values. Contrast stretching the image yields (b), which has clearly improved contrast. (a) Original Image (b) Result of Contrast Stretching
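The linear stretch can be sketched as follows, assuming the output range is the full 8-bit range [0, 255] and the input range is taken from the image's own minimum and maximum:

```python
import numpy as np

def contrast_stretch(image, out_min=0, out_max=255):
    """Linearly map the image's intensity range [lo, hi] onto [out_min, out_max]."""
    lo, hi = int(image.min()), int(image.max())
    if hi == lo:                       # constant image: nothing to stretch
        return np.full_like(image, out_min)
    scaled = (image.astype(float) - lo) * (out_max - out_min) / (hi - lo) + out_min
    return np.round(scaled).astype(np.uint8)

# Intensities crowded into the upper part of the range, as in the example.
img = np.array([[180, 200], [220, 240]], dtype=np.uint8)
stretched = contrast_stretch(img)
```

After the stretch the smallest intensity maps to 0 and the largest to 255, so the values span the whole range while their relative ordering and spacing are preserved.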

Intensity Histogram (V) Example: Histogram Equalization The idea is that the pixels should be distributed evenly over the whole intensity range, i.e. the aim is to transform the image so that the output image has a flat histogram. The values are much more evenly distributed than in the original histogram, and the contrast in the image is essentially increased. (a) Original Image (b) Result of Histogram Equalization
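The flat-histogram aim is usually realized by mapping each intensity through the normalized cumulative histogram (CDF). A sketch of that standard mapping (not necessarily the exact variant used for the slide's figures):

```python
import numpy as np

def equalize(image, levels=256):
    """Map intensities through the normalized cumulative histogram so the
    output histogram is approximately flat."""
    hist = np.bincount(image.ravel(), minlength=levels)
    cdf = hist.cumsum()
    cdf_min = cdf[cdf > 0][0]            # first non-zero CDF value
    n = image.size
    if n == cdf_min:                     # constant image: nothing to equalize
        return np.zeros_like(image)
    # Standard equalization lookup table onto the full [0, levels-1] range.
    lut = np.round(
        np.clip(cdf - cdf_min, 0, None) / (n - cdf_min) * (levels - 1)
    ).astype(np.uint8)
    return lut[image]

img = np.array([[52, 52, 60], [60, 60, 63]], dtype=np.uint8)
eq = equalize(img)
```

The mapping is monotone, so bright pixels stay brighter than dark ones; only the spacing between occupied intensity levels changes.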

Classification (I) Brief Description Analyze the numerical properties of various image features and organize the data into categories. Methods Supervised classification: the example classes are specified by an analyst. Unsupervised classification: the examples are clustered automatically; the analyst merely specifies the number of desired categories.

Classification (II) How It Works Two phases of processing. Training phase: isolate the characteristic properties of typical image features and create a unique description of each classification category. Testing phase: classify image features using these descriptions. Classification methods Supervised classification: statistical processes, distribution-free processes. Unsupervised classification: K-means clustering.

Classification (III) The motivating criteria for constructing training classes Independent: a change in one description should not change the value of another. Discriminatory: different image features should have significantly different descriptions. Reliable: all image features in a group should share the common description. Example: classification of bolts and sewing needles using head diameter and length.

Classification (IV) Minimum (mean) distance classifier Suppose that each training class w_j is represented by a prototype (or mean) vector: m_j = (1/N_j) Σ_{x ∈ w_j} x, for j = 1, 2, …, M, where N_j is the number of training patterns from class w_j and M is the number of classes. If Euclidean distance is used as the proximity measure, the distance to the prototype is D_j(x) = ||x − m_j||. Cluster centers in the example (feature axes x1 = head diameter, x2 = length): m_needle = [0.86 2.34]^T, m_bolt = [5.74 5.85]^T.

Classification (V) The decision function d_j(x) based on the Euclidean distance is d_j(x) = x^T m_j − (1/2) m_j^T m_j (choosing the largest d_j(x) is equivalent to choosing the smallest D_j(x)). Thus, the decision functions in this example are d_needle(x) = 0.86 x1 + 2.34 x2 − 3.11 and d_bolt(x) = 5.74 x1 + 5.85 x2 − 33.59.

Classification (VI) The decision boundary that separates classes w_i and w_j is d_ij(x) = d_i(x) − d_j(x) = 0. Thus, the decision boundary (or surface) in this example is d_needle(x) − d_bolt(x) = −4.88 x1 − 3.51 x2 + 30.48 = 0. In practice, the minimum distance classifier works well when the distance between means is large compared to the spread of each class.
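Putting the prototype vectors and decision functions together, a minimal sketch of the minimum distance classifier for the needle/bolt example (the two query points are made up for illustration):

```python
import numpy as np

# Prototype (mean) vectors from the example: head diameter and length.
m_needle = np.array([0.86, 2.34])
m_bolt   = np.array([5.74, 5.85])

def decision(x, m):
    """d_j(x) = x^T m_j - (1/2) m_j^T m_j; the largest value wins."""
    return x @ m - 0.5 * (m @ m)

def classify(x):
    """Assign x to the class whose prototype is nearest (largest d_j)."""
    return "needle" if decision(x, m_needle) > decision(x, m_bolt) else "bolt"

print(classify(np.array([1.0, 2.0])))   # a short, thin part near m_needle
print(classify(np.array([6.0, 6.0])))   # a large part near m_bolt
```

Maximizing d_j(x) gives the same answer as minimizing the Euclidean distance ||x − m_j||, since the ||x||² term is common to all classes and cancels.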

Classification (VII) Guidelines for Use Example: Remote sensing application Classify each image pixel into one of several classes (e.g. water, city, wheat field, pine forest, cloud, etc.) based on the spectral measurement of the pixel. (Figures: visual image of the globe; infrared-band image.)

Classification (VIII) Example: (continued) It is difficult to find a threshold or a decision boundary that segments the images into training classes (e.g. spectral classes that correspond to physical phenomena such as cloud, ground, water, etc.). Having a higher-dimensional representation of this information can provide segmentation of regions which might overlap when projected onto a single axis (i.e. using one 2-D histogram instead of two 1-D histograms).
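The idea of replacing two 1-D histograms with one 2-D histogram can be sketched as follows. The band values here are hypothetical, quantized to only 4 levels for brevity:

```python
import numpy as np

# Hypothetical per-pixel measurements in two bands (visual, infrared).
visual   = np.array([0, 0, 1, 3, 3, 2])
infrared = np.array([0, 1, 0, 3, 2, 3])

# One 2-D histogram instead of two 1-D histograms: bin (i, j) counts pixels
# with visual level i AND infrared level j, so classes that overlap on a
# single axis can still form separate peaks in the joint space.
hist2d = np.zeros((4, 4), dtype=int)
for v, ir in zip(visual, infrared):
    hist2d[v, ir] += 1
```

Summing the 2-D histogram along either axis recovers the corresponding 1-D histogram, which is exactly the projection that can merge distinct classes.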

Classification (IX) Example: (continued) Combine them into a single two-band image and find a decision surface which divides the data into distinct class regions. To this aim, use a K-means algorithm to find the training classes of the 2-D spectral images. (Plot axes: infrared intensity levels vs. visual intensity levels; the result is shown for K=2.)

Classification (X) Example: (continued) We can see the classified regions that correspond to the distinct physical phenomena. The following images show the color-assigned classification results using K=4 and K=6 training classes. c.f. Classification accuracy using the minimum (mean) distance classifier improves as the number of training classes is increased. K=4 K=6

Classification (XI) K-Means Classification Unsupervised classification. Assumption: the number of cluster centers is known a priori. Steps 1) Initialize: choose the number of clusters K and, for each cluster, an initial cluster center z_j(1), j = 1, 2, …, K; starting values can be arbitrary. (z_j(l) denotes the value of the jth cluster center at the lth iteration.)

Classification (XII) 2) Distribute samples: assign each sample vector x to the cluster with the nearest center, i.e. x ∈ S_j(l) if ||x − z_j(l)|| < ||x − z_i(l)|| for all i = 1, 2, …, K, i ≠ j, where S_j(l) represents the population of cluster j at iteration l. 3) Calculate the new cluster centers: z_j(l+1) = (1/N_j) Σ_{x ∈ S_j(l)} x, where N_j is the number of sample vectors attached to S_j. 4) Check for convergence: if z_j(l+1) = z_j(l) for every j, no cluster center has changed, so convergence has occurred; stop. Otherwise, iterate by going to step 2.
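Steps 1-4 above can be sketched directly; the sample points and initial centers below are illustrative, not those of the slides' worked example:

```python
import numpy as np

def k_means(samples, centers, max_iter=100):
    """Distribute samples to the nearest center, recompute centers,
    and stop when no center moves (steps 2-4)."""
    centers = np.asarray(centers, dtype=float)
    for _ in range(max_iter):
        # Step 2: assign each sample to its closest cluster center.
        dists = np.linalg.norm(samples[:, None, :] - centers[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # Step 3: recompute each center as the mean of its attached samples.
        new_centers = np.array([
            samples[labels == j].mean(axis=0) if np.any(labels == j) else centers[j]
            for j in range(len(centers))
        ])
        # Step 4: converged when no center has changed.
        if np.allclose(new_centers, centers):
            break
        centers = new_centers
    return centers, labels

# Two well-separated blobs; K = 2 with arbitrary starting centers.
pts = np.array([[0, 0], [1, 0], [0, 1], [1, 1],
                [7, 7], [8, 7], [7, 8], [9, 8]], dtype=float)
centers, labels = k_means(pts, centers=[[0, 0], [1, 0]])
```

As in the slides' example, even though both starting centers sit inside the first blob, the centers migrate apart within a few iterations and settle on the two blob means.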

Classification (XIII) Example Initial cluster centers: z1 = (0, 0), z2 = (1, 0). (Plot: the sample points in the feature plane.)

Classification (XIV) 1st iteration: z1 = (0, 1), z2 = (5.9, 5.3).

Classification (XV) 2nd iteration: z1 = (1, 1), z2 = (8, 7.5).

Classification (XVI) 3rd iteration: z1 = (1, 1), z2 = (8, 7.5). The cluster centers have not changed, so the algorithm has converged.

Connected Components Labeling (I) Brief Description Scans an image and groups its pixels into components based on pixel connectivity. Used in many automated image analysis applications.

Connected Components Labeling (II) Assume a binary image with 8-connectivity. When we arrive at a pixel p for which V = {1}, examine the four neighbors of p that have already been scanned: the left neighbor (i), the one above (ii), and the two upper diagonal neighbors (iii and iv). (Figure: p with its four already-scanned neighbors; the remaining pixels are still to be scanned.)

Connected Components Labeling (III) If all four neighbors are 0, assign a new label to p; else if only one neighbor has V = {1}, assign its label to p; else if two or more have V = {1}, assign one of the labels to p and make a note of the equivalence. Then a second scan is made through the image, replacing each label according to its equivalence class.
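The two scans and the equivalence bookkeeping can be sketched as follows; a minimal union-find structure stands in for the equivalence table, and the test image is hypothetical:

```python
import numpy as np

def label_components(binary):
    """Two-pass 8-connectivity labeling with an equivalence table."""
    parent = {}

    def find(a):                         # representative of a's equivalence class
        while parent[a] != a:
            a = parent[a]
        return a

    def union(a, b):                     # note the equivalence of two labels
        ra, rb = find(a), find(b)
        parent[max(ra, rb)] = min(ra, rb)

    h, w = binary.shape
    labels = np.zeros((h, w), dtype=int)
    next_label = 1
    # First pass: examine the four already-scanned neighbors of each pixel.
    for y in range(h):
        for x in range(w):
            if binary[y, x] == 0:
                continue
            neighbours = [labels[ny, nx]
                          for ny, nx in ((y, x-1), (y-1, x-1), (y-1, x), (y-1, x+1))
                          if 0 <= ny and 0 <= nx < w and labels[ny, nx] > 0]
            if not neighbours:
                labels[y, x] = next_label          # all four neighbors are 0
                parent[next_label] = next_label
                next_label += 1
            else:
                labels[y, x] = min(neighbours)     # reuse one of the labels
                for n in neighbours:
                    union(labels[y, x], n)
    # Second pass: replace each label by its equivalence-class representative.
    for y in range(h):
        for x in range(w):
            if labels[y, x]:
                labels[y, x] = find(labels[y, x])
    return labels

img = np.array([[1, 0, 1],
                [1, 0, 1],
                [0, 1, 0]])   # the two vertical arms meet diagonally at the bottom
labeled = label_components(img)
```

The two arms receive different provisional labels during the first pass; the bottom pixel touches both, records the equivalence, and the second pass merges them into a single component.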

Connected Components Labeling (IV) Example (Figure: a new label, A, is assigned to the first pixel of a component; a scanned neighbor with value 1 is assigned the same label.)

Connected Components Labeling (V) (Figure: a new label, B, is assigned to a pixel with no labeled neighbors; pixels connected to A are again assigned the same label.)

Connected Components Labeling (VI) Third case: A and B must be recorded as the same label, an equivalence, since a pixel is adjacent to neighbors carrying both labels. (Figure: the pixel marked ? touches both an A-labeled and a B-labeled neighbor.)

Connected Components Labeling (VII) Guidelines for Use Example 1 After scanning this image and labeling the distinct pixel classes with different gray-values, we obtain the labeled output image (b). If we assign a distinct color to each gray-level, we obtain (c). (a) Original Image (b) Labeling in Gray-level (c) Labeling in Color

Connected Components Labeling (VIII) Example 2 If we want to count the objects in a real-world scene like (a), we first have to threshold the image to produce a binary image (b). The connected components of the binary image are shown in (c). (a) Original Image (b) Thresholded Image (c) Labeled Image
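The count-the-objects pipeline (threshold, then label, then count) can be sketched as follows. For brevity this sketch uses 4-connectivity and an iterative flood fill instead of the two-pass algorithm, and the gray values are made up:

```python
import numpy as np

def count_objects(gray, threshold):
    """Threshold the image, then count connected components (4-connectivity)
    with an iterative flood fill."""
    binary = (gray >= threshold).astype(np.uint8)
    seen = np.zeros_like(binary, dtype=bool)
    h, w = binary.shape
    count = 0
    for y in range(h):
        for x in range(w):
            if binary[y, x] and not seen[y, x]:
                count += 1                      # a new, unvisited object
                stack = [(y, x)]
                while stack:                    # flood-fill the whole component
                    cy, cx = stack.pop()
                    if (0 <= cy < h and 0 <= cx < w
                            and binary[cy, cx] and not seen[cy, cx]):
                        seen[cy, cx] = True
                        stack += [(cy+1, cx), (cy-1, cx), (cy, cx+1), (cy, cx-1)]
    return count

gray = np.array([[200,  10,  10, 210],
                 [190,  20,  10, 220],
                 [ 10,  10,  10,  10],
                 [180, 200,  10,  10]])
n = count_objects(gray, threshold=128)
```

Three bright regions survive the threshold, so the component count is the object count; as the next example stresses, the hard part in practice is the thresholding, not the counting.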

Connected Components Labeling (IX) Example 2: (continued) In order to see the result better, we assign a color to each component. The problem is that we cannot find 163 colors that are each different enough from all the others to be distinguished by the human eye. Two possible ways: use only a few colors (e.g. 8) which are clearly different from each other and assign each gray-level of the connected-components image to one of these colors (d); or assign a different color to each gray-value, many of them being quite similar (e).

Connected Components Labeling (X) Example 3 Big problems arise when we count the number of turkeys in (a). Although we assigned one connected component to each turkey, the total number of components (196) does not correspond to the number of turkeys. The last two examples show that connected-components labeling is the easy part of the automated analysis process, whereas the major task is to obtain a good binary image which separates the objects (turkeys) from the background (other objects). (a) Original Image (b) Thresholded Image (c) Labeled in Grayscale (d) Labeled in Color