Copyright © 2012 Elsevier Inc. All rights reserved.

Presentation transcript:

Chapter 8 Texture

8.1 Introduction
Texture is a difficult property to define: indeed, in 1979, Haralick reported that no satisfactory definition of it had up to that time been produced. In this chapter we are mainly interested in the interpretation of images, and so we define texture as the characteristic variation in intensity of a region of an image, which should allow us to recognize and describe it and to outline its boundaries (Fig. 8.1).

FIGURE 8.1 The variety of textures obtained from real objects: (a) bark, (b) wood grain, (c) fir leaves, (d) chick peas, (e) carpet, (f) fabric, (g) stone chips, (h) water. These textures demonstrate the wide variety of familiar textures that are easily recognized from their characteristic intensity patterns.


In fact, there are many ways in which intensity might vary, but if the variation does not have sufficient uniformity, the texture may not be characterized closely enough to permit recognition or segmentation. Thus, the degrees of randomness and of regularity will have to be measured and compared when characterizing a texture. Often, textures are derived from tiny objects or components that are themselves similar, but that are placed together in ways ranging from purely random to purely regular, such as bricks in a wall or grains of sand.

Texture elements are called texels. The texels will have various sizes and degrees of uniformity. The texels will be oriented in various directions. The texels will be spaced at varying distances in different directions. The contrast will have various magnitudes and variations. Various amounts of background may be visible between texels. The variations composing the texture may each have varying degrees of regularity vis-à-vis randomness. Thus, texture analysis generally only aims at obtaining accurate statistical descriptions of textures, from which apparently identical textures can be reproduced.

8.2 Some Basic Approaches to Texture Analysis
Texture was defined as the characteristic variation in intensity of a region of an image that should allow us to recognize and describe it and to outline its boundaries. Bajcsy (1973) used a variety of ring and oriented strip filters in the Fourier domain to isolate texture features – an approach that was found to work successfully on natural textures such as grass, sand, and trees. Autocorrelation is another obvious approach to texture analysis, since it should show up both local intensity variations and also the repeatability of the texture (Fig. 8.2).

FIGURE 8.2
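As a rough illustration of the autocorrelation idea (a minimal sketch, not code from the book), the following computes a normalized autocorrelation surface for a grayscale patch; the patch array and the max_shift parameter are assumptions made for the example. Peaks away from the origin indicate the repeat distance of a regular texture, while a rapid fall-off indicates a fine or random texture.

```python
import numpy as np

def normalized_autocorrelation(patch, max_shift=16):
    """Normalized autocorrelation of a grayscale patch over a range of
    horizontal and vertical shifts; C(0, 0) is normalized to 1."""
    p = patch.astype(float)
    p -= p.mean()                      # remove the DC component
    denom = np.sum(p * p)              # normalization so C(0, 0) = 1
    h, w = p.shape
    corr = np.zeros((max_shift + 1, max_shift + 1))
    for dy in range(max_shift + 1):
        for dx in range(max_shift + 1):
            a = p[:h - dy, :w - dx]    # overlapping part of the patch
            b = p[dy:, dx:]            # patch shifted by (dy, dx)
            corr[dy, dx] = np.sum(a * b) / denom
    return corr
```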

8.3 Graylevel Co-occurrence Matrices
The graylevel co-occurrence matrix approach is based on studies of the statistics of pixel intensity distributions. The co-occurrence matrices express the relative frequencies (or probabilities) P(i, j | d, θ) with which two pixels having relative polar coordinates (d, θ) appear with intensities i, j. The co-occurrence matrices provide raw numerical data on the texture, although this data must be condensed to relatively few numbers before it can be used to classify the texture.
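A minimal sketch (not the book's code) of how such a matrix might be built for an 8-bit grayscale image; the `levels` quantization parameter and the conversion of (d, θ) to a pixel displacement are assumptions made for the example.

```python
import numpy as np

def cooccurrence_matrix(img, d=1, theta=0.0, levels=8):
    """Build a graylevel co-occurrence matrix P(i, j | d, theta).
    The 8-bit image is first quantized to a small number of gray
    levels so that the matrix stays well populated."""
    q = (img.astype(float) / 256.0 * levels).astype(int)
    q = np.clip(q, 0, levels - 1)
    # pixel displacement corresponding to polar coordinates (d, theta)
    dx = int(round(d * np.cos(theta)))
    dy = int(round(d * np.sin(theta)))
    h, w = q.shape
    P = np.zeros((levels, levels), dtype=float)
    for y in range(h):
        for x in range(w):
            y2, x2 = y + dy, x + dx
            if 0 <= y2 < h and 0 <= x2 < w:
                P[q[y, x], q[y2, x2]] += 1
    return P / P.sum()                 # convert counts to probabilities
```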

FIGURE 8.3

FIGURE 8.4

FIGURE 8.5

Haralick et al. (1973) gave 14 such measures. Conners and Harlow (1980) found that only five of these measures were normally used, viz. energy, entropy, correlation, local homogeneity, and inertia. The co-occurrence matrices merely provide a new representation: they do not themselves solve the recognition problem. These factors mean that the gray scale has to be compressed into a much smaller set of values, and careful choice of the specific d, θ values to sample must be made. In addition, various functions of the matrix data must be tested before the texture can be properly characterized and classified.
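For concreteness, a sketch of the five measures named above, computed from a normalized co-occurrence matrix P. The exact formulas (and the log base for entropy) vary slightly between authors; these are the commonly quoted forms, given as an illustration rather than as the book's definitions.

```python
import numpy as np

def cooccurrence_features(P):
    """Five commonly used measures from a normalized co-occurrence
    matrix P (entries sum to 1)."""
    n = P.shape[0]
    i, j = np.indices((n, n))
    eps = 1e-12
    energy      = np.sum(P ** 2)
    entropy     = -np.sum(P * np.log(P + eps))
    inertia     = np.sum((i - j) ** 2 * P)          # also called contrast
    homogeneity = np.sum(P / (1.0 + (i - j) ** 2))  # local homogeneity
    mu_i, mu_j = np.sum(i * P), np.sum(j * P)
    sd_i = np.sqrt(np.sum((i - mu_i) ** 2 * P))
    sd_j = np.sqrt(np.sum((j - mu_j) ** 2 * P))
    correlation = np.sum((i - mu_i) * (j - mu_j) * P) / (sd_i * sd_j + eps)
    return dict(energy=energy, entropy=entropy, correlation=correlation,
                homogeneity=homogeneity, inertia=inertia)
```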

8.4 Laws' Texture Energy Approach
In 1979 and 1980, Laws presented his novel texture energy approach to texture analysis. This involved the application of simple filters to digital images. The basic filters are common Gaussian, edge-detector, and Laplacian-type filters, and were designed to highlight points of high texture energy in the image.

Laws' masks are constructed by convolving together just three basic 1×3 masks: L (local averaging), E (edge detection), and S (spot detection).

There are also two basic 1×5 masks: R (ripple detection) and W (wave detection).

We can also use matrix multiplication to combine the 1×3 masks with a similar set of 3×1 masks to obtain nine 3×3 masks. Note: two of these masks are identical to the Sobel operator masks.
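The 1-D masks below are the values usually quoted for Laws' scheme (stated from the general literature, not transcribed from the book's Table 8.1); the sketch shows how the nine 3×3 masks arise as outer products of 3×1 and 1×3 masks.

```python
import numpy as np

# Standard 1-D Laws masks (usual literature values, assumed here).
L3 = np.array([ 1,  2,  1])   # local averaging
E3 = np.array([-1,  0,  1])   # edge detection
S3 = np.array([-1,  2, -1])   # spot detection
# The corresponding 1x5 masks include R5 = [1, -4, 6, -4, 1] (ripple)
# and W5 = [-1, 2, 0, -2, 1] (wave).

def laws_3x3_masks():
    """Combine each 3x1 mask with each 1x3 mask by outer product
    (matrix multiplication of a column vector and a row vector),
    giving the nine 3x3 Laws masks LL, LE, LS, EL, ..."""
    basis = {'L': L3, 'E': E3, 'S': S3}
    return {v_name + h_name: np.outer(v, h)
            for v_name, v in basis.items()
            for h_name, h in basis.items()}

masks = laws_3x3_masks()
print(masks['LE'])   # [[-1,0,1],[-2,0,2],[-1,0,1]] - one of the Sobel masks
```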

Table 8.1

Having produced images that indicate, e.g., local edginess, the next stage is to deduce the local magnitudes of these quantities. These magnitudes are then smoothed over a fair-sized region rather greater than the basic filter mask size (e.g., a 15×15 smoothing window after applying 3×3 masks). A further stage is required to combine the various energies in a number of different ways, providing several outputs that can be fed into a classifier to decide upon the particular type of texture at each pixel location.
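An illustrative sketch of this texture-energy pipeline (not the book's implementation): filter with each Laws mask, take the local magnitudes, then smooth over a window rather larger than the mask. SciPy is assumed to be available for the convolution and smoothing steps.

```python
import numpy as np
from scipy.ndimage import convolve, uniform_filter

def laws_energy_images(img, masks, window=15):
    """Apply each Laws mask, take absolute responses as local
    magnitudes, and smooth them over a window x window region
    (here 15x15 after 3x3 masks).  Returns one energy image per
    mask; these form the feature vectors fed to a classifier."""
    energies = {}
    for name, m in masks.items():
        filtered = convolve(img.astype(float), m.astype(float))
        energies[name] = uniform_filter(np.abs(filtered), size=window)
    return energies
```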

FIGURE 8.6

8.5 Ade's Eigenfilter Approach
Eigenfilter masks produce images that are principal component images for the given texture. Furthermore, each eigenvalue gives the part of the variance of the original image that can be extracted by the corresponding filter. The Ade approach is superior to the extent that it permits low-value energy components to be eliminated early on, thereby saving computation. (Literature Survey Topics)
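As a rough sketch of the underlying principle (not Ade's published procedure), eigenfilters can be obtained by principal component analysis of the local neighborhood vectors of a sample texture; the 3×3 patch size and the number of retained components below are arbitrary choices for illustration.

```python
import numpy as np

def eigenfilters(img, patch=3, keep=4):
    """Gather all patch x patch windows of a texture sample, compute
    the covariance of the pixel vectors, and take the leading
    eigenvectors as filter masks.  Each eigenvalue indicates how much
    image variance the corresponding mask captures, so low-energy
    components can be dropped early to save computation."""
    h, w = img.shape
    vecs = np.array([img[y:y + patch, x:x + patch].ravel()
                     for y in range(h - patch + 1)
                     for x in range(w - patch + 1)], dtype=float)
    vecs -= vecs.mean(axis=0)
    cov = np.cov(vecs, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(cov)         # ascending eigenvalues
    order = np.argsort(eigvals)[::-1][:keep]       # largest variance first
    masks = [eigvecs[:, k].reshape(patch, patch) for k in order]
    return masks, eigvals[order]
```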