
Course 9 Texture

Definition: Texture consists of repeating patterns of local variations in image intensity whose individual elements are too fine to be distinguished separately. Texture evokes the image of a large number of structural primitives of (statistically) identical shape and size, placed (statistically) uniformly. To perceive a homogeneous texture, we must ask: which features should be homogeneous, and which features are unimportant?

The answer may lie in what remains constant under changing scene geometry. The texture image includes two homogeneous regions.

1. Gray-level Co-occurrence Matrix. This characterization is designed to capture the spatial dependence of image gray-level values, which contributes to the perception of texture. The gray-level co-occurrence matrix P is defined by first specifying a displacement vector d = (dx, dy) and then counting all pairs of pixels separated by d that have gray-level values i and j. Specifically, for a given image of size M×N with L gray levels and a defined displacement vector d = (dx, dy):

a) Systematically scan the image from the top row down to row N − dy, and from the left-most column to column M − dx. For each gray-level pair (i, j), count the number of occurrences of pixel pairs ((x, y), (x + dx, y + dy)) whose gray levels are i and j, say n_ij. b) Set the matrix element P(i, j) = n_ij. c) Repeat operations a) and b) until all combinations of i and j are completed. d) Normalize the matrix P by the total number of pixel pairs counted, so that its entries sum to 1.
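The counting procedure above can be sketched in a few lines of Python/NumPy (a minimal illustration; the function name and the restriction to non-negative displacements are my own simplifications):

```python
import numpy as np

def glcm(image, d, levels):
    """Normalized gray-level co-occurrence matrix for a
    non-negative displacement vector d = (dx, dy)."""
    dx, dy = d
    H, W = image.shape
    P = np.zeros((levels, levels))
    # Steps a)-b): count pixel pairs separated by d with gray levels (i, j)
    for y in range(H - dy):
        for x in range(W - dx):
            P[image[y, x], image[y + dy, x + dx]] += 1
    # Step d): normalize by the total number of pairs counted
    return P / P.sum()

img = np.array([[0, 0, 1],
                [0, 0, 1],
                [0, 2, 2]])
P = glcm(img, d=(1, 0), levels=3)   # pairs one pixel to the right
```

Here P[0, 0] = 2/6, because two of the six horizontal pixel pairs in this image have gray levels (0, 0).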

For example: a given image together with a chosen displacement vector determines one specific matrix. Note: for the same image, a different displacement vector will yield a different gray-level co-occurrence matrix, characterizing the texture homogeneity of a different spatial distribution and orientation.

From the co-occurrence matrix, some useful measurements can be derived. Energy: Σᵢ Σⱼ P(i, j)². Contrast: Σᵢ Σⱼ (i − j)² P(i, j). Homogeneity: Σᵢ Σⱼ P(i, j) / (1 + |i − j|). Entropy: −Σᵢ Σⱼ P(i, j) log P(i, j).
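These four measures can be computed directly from a normalized co-occurrence matrix; a sketch (the base-2 logarithm in the entropy is an arbitrary choice of units):

```python
import numpy as np

def texture_features(P):
    """Energy, contrast, homogeneity, and entropy of a normalized GLCM P."""
    i, j = np.indices(P.shape)
    energy = np.sum(P ** 2)
    contrast = np.sum((i - j) ** 2 * P)
    homogeneity = np.sum(P / (1.0 + np.abs(i - j)))
    nz = P[P > 0]                        # skip zero entries to avoid log(0)
    entropy = -np.sum(nz * np.log2(nz))
    return energy, contrast, homogeneity, entropy

# A maximally "disordered" 2x2 matrix: low energy, maximal entropy
P = np.full((2, 2), 0.25)
energy, contrast, homogeneity, entropy = texture_features(P)
```

For this uniform matrix the entropy reaches its maximum, log2(4) = 2 bits, while the energy takes its minimum value of 0.25.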

2. Structural analysis of texture Assumptions: - Texture is ordered. - Texture primitives are large enough. - Texture primitives can be separated.

After primitive regions are identified, homogeneity properties can be measured: centroid distances along different directions, size, elongation, and orientation. Then, co-occurrence-based measurements of the primitives are used to characterize the texture.
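The size, orientation, and elongation of a single extracted primitive can be measured from its second-order moments; a sketch under the assumption that the primitive is given as a binary mask (defining elongation as the ratio of principal-axis standard deviations is one common choice, not necessarily the lecture's):

```python
import numpy as np

def primitive_properties(mask):
    """Area, centroid, orientation, and elongation of a binary primitive."""
    ys, xs = np.nonzero(mask)
    area = len(xs)
    cy, cx = ys.mean(), xs.mean()
    # Central second-order moments
    mu20 = ((xs - cx) ** 2).mean()
    mu02 = ((ys - cy) ** 2).mean()
    mu11 = ((xs - cx) * (ys - cy)).mean()
    # Principal-axis direction and variances (2D eigen-decomposition)
    orientation = 0.5 * np.arctan2(2 * mu11, mu20 - mu02)
    spread = np.sqrt((mu20 - mu02) ** 2 + 4 * mu11 ** 2)
    lam1 = (mu20 + mu02 + spread) / 2    # variance along the major axis
    lam2 = (mu20 + mu02 - spread) / 2    # variance along the minor axis
    elongation = np.sqrt(lam1 / lam2) if lam2 > 0 else np.inf
    return area, (cy, cx), orientation, elongation

mask = np.zeros((5, 9), dtype=int)
mask[1:4, 1:8] = 1                       # a 3x7 horizontal bar
area, centroid, orientation, elongation = primitive_properties(mask)
```

For the horizontal bar, the orientation comes out as 0 (aligned with the x-axis) and the elongation as sqrt(6).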

3. Model-Based Texture Analysis. Concept: establish an analytical model of the given textured image, then analyze the model. Difficulty: too many parameters in the model to be determined. Example: the Gauss-Markov random field model g(x, y) = Σ_(dx, dy) θ(dx, dy) g(x + dx, y + dy) + e(x, y), where θ(dx, dy) is a weight for each neighbor offset and e(x, y) is additive noise.
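A least-squares fit of the weights from an observed image can be sketched as follows (the neighbor set and function name are illustrative, not part of the original lecture):

```python
import numpy as np

def fit_gmrf(image, neighbors):
    """Least-squares estimate of GMRF weights theta_r.
    neighbors: list of (dy, dx) offsets defining the neighbor set."""
    H, W = image.shape
    m = max(max(abs(dy), abs(dx)) for dy, dx in neighbors)
    A, b = [], []
    for y in range(m, H - m):
        for x in range(m, W - m):
            # Each interior pixel gives one linear equation:
            # g(s) = sum_r theta_r * g(s + r) + e(s)
            A.append([image[y + dy, x + dx] for dy, dx in neighbors])
            b.append(image[y, x])
    theta, *_ = np.linalg.lstsq(np.array(A, float), np.array(b, float),
                                rcond=None)
    return theta

# Noise-free check: for the ramp image g(y, x) = y - x, each pixel equals
# 0.5 * (left neighbor) + 0.5 * (upper neighbor) exactly
img = np.add.outer(np.arange(8.0), -np.arange(8.0))
theta = fit_gmrf(img, neighbors=[(0, -1), (-1, 0)])
```

On this synthetic ramp the recovered weights are exactly (0.5, 0.5); on a real textured image the residual e(s) absorbs the modeling error.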

In this model, the gray level of any pixel is modeled as a linear combination of the gray levels of the neighbors of that pixel. The parameters are the weights θ, which can be estimated by least squares from the given textured image. 4. Shape from Texture. Goal: recover 3D information from 2D image clues. Image clues: variations of the size, shape, and density of texture primitives. Yield: surface shape and orientation.

A simple example for analysis. Assumptions: 1) The 3D surface is slanted with slant angle σ. 2) The tilt angle is zero, i.e., points along a horizontal line on the surface have the same depth from the camera. 3) The texture primitives are disks. 4) Perspective projection imaging model.

Observations from the image: 1) The 3D disks appear as ellipses in the image plane. 2) The size of the ellipses decreases as a function of y, the y-coordinate in the image plane, causing "density gradients". 3) The ratio of minor to major diameters of the ellipses does not remain constant along the y-axis. Def.: aspect ratio = minor diameter / major diameter.

At the image center: let the diameter of the 3D disk be d, its depth be z, and the focal length be f. The diameter parallel to the image plane projects to a major axis of length f d / z, while the diameter along the surface is foreshortened by the slant, projecting to a minor axis of length f d cos σ / z. Thus, the aspect ratio is cos σ.

At an image point (0, y): let the disk center lie at surface coordinates (0, Y, Z). The endpoints of the foreshortened diameter lie at (0, Y ± (d/2) cos σ, Z ± (d/2) sin σ). So the minor axis projects to a length of approximately f d (Z cos σ − Y sin σ) / Z², while the major axis is f d / Z. Thus, aspect ratio = cos σ − (Y / Z) sin σ.

(AC parallel to the image plane.) (Assume d << Z.) Since y = f Y / Z under perspective projection, we have Y / Z = y / f. Thus, aspect ratio = cos σ − (y / f) sin σ.

Note: since y and f are known from the image plane, the 3D surface orientation σ can be computed from the measured aspect ratio.
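Under the assumptions of this example, a first-order analysis gives aspect ratio r(y) = cos σ − (y/f) sin σ; this can be inverted for the slant σ using the harmonic-addition identity, writing the left side as R cos(σ + φ). A sketch (the function name and sign conventions are my own choices):

```python
import math

def slant_from_aspect_ratio(r, y, f):
    """Solve cos(sigma) - (y/f) * sin(sigma) = r for the slant sigma.
    Rewrites the left side as R * cos(sigma + phi) with
    R = sqrt(1 + (y/f)^2) and phi = atan2(y/f, 1)."""
    k = y / f
    R = math.hypot(1.0, k)
    phi = math.atan2(k, 1.0)
    return math.acos(r / R) - phi

# Round trip: an aspect ratio generated with slant 0.7 rad is inverted back
r = math.cos(0.7) - (20 / 100) * math.sin(0.7)
sigma = slant_from_aspect_ratio(r, y=20, f=100)
```

At the image center (y = 0) this reduces to sigma = acos(r), matching the aspect ratio cos σ derived there.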

5. Surface Orientation from Statistical Texture. Assumptions: 1) The 3D texture primitives are small line segments, called needles. 2) The needles are distributed uniformly on the 3D surface, and their directions are independent. 3) The surface is approximately planar. 4) Orthographic image projection. Given: an image of N needles with needle angles α_i (i = 1, …, N) measured from the x-axis. Find: the surface orientation (slant σ and tilt τ).

Method (omitting the detailed derivation): for a given image needle direction α_i, define an auxiliary vector v_i = (cos 2α_i, sin 2α_i). The average of these vectors is v̄ = (1/N) Σ_i v_i. From the orthographic projection model, the expected value of v̄ depends only on the slant σ and the tilt τ.

Thus, one can solve for the surface orientation: the direction of v̄ determines the tilt τ (doubling the angles removes the 180° ambiguity of needle directions; the projected directions cluster about the direction perpendicular to the tilt), while the magnitude |v̄| determines the slant σ, where |v̄| = 0 for a frontal surface (σ = 0) and |v̄| grows monotonically toward 1 as σ approaches 90°.
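A small simulation sketch of this method (the projection model, the parameter values, the 90° offset between the mean doubled angle and the tilt, and the slant relation |v̄| = (1 − cos σ)/(1 + cos σ) are my own reconstruction under the stated uniform-needle assumptions, not formulas taken from the lecture):

```python
import numpy as np

rng = np.random.default_rng(0)
sigma, tau = 0.8, 0.6              # true slant and tilt (radians)
N = 50000

# Needle directions uniform in the surface plane
beta = rng.uniform(0.0, np.pi, N)
# Orthographic projection: the component along the tilt direction is
# foreshortened by cos(sigma); then rotate into the image frame by tau
alpha = tau + np.arctan2(np.sin(beta), np.cos(sigma) * np.cos(beta))

# Auxiliary vectors v_i = (cos 2a_i, sin 2a_i) and their average
v = np.array([np.cos(2 * alpha).mean(), np.sin(2 * alpha).mean()])

# Projected needles cluster perpendicular to the tilt direction, so the
# mean doubled angle points opposite to 2*tau; undo the offset
tau_est = (0.5 * np.arctan2(v[1], v[0]) + np.pi / 2) % np.pi

# For uniform needle directions, |v| = (1 - cos sigma)/(1 + cos sigma)
m = np.hypot(v[0], v[1])
sigma_est = np.arccos((1 - m) / (1 + m))
```

With 50,000 simulated needles, both estimates land within a few hundredths of a radian of the true slant and tilt; fewer needles give a noisier v̄ and correspondingly larger errors.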