Texture, optics and shading


Texture, optics and shading By Zhi Liu

Outline Texture, optics, shading.

Texture What is texture? A measure of the variation of the intensity of a surface, quantifying properties such as smoothness, coarseness, and regularity. It is often used as a region descriptor in image analysis and computer vision. Source: The Free On-line Dictionary of Computing, © 1993-2004 Denis Howe

Texture Texture is characterized by the spatial distribution of gray levels in a neighborhood. The resolution at which an image is observed determines how its texture is perceived.

Example Observing a tiled floor from a large distance, we see only an aggregate pattern; observing the same scene from a closer distance, we see the individual tiles. The difference is one of scale, which motivates the following definition of texture.

Texture definition We can define texture as repeating patterns of local variations in image intensity which are too fine to be distinguished as separate objects at the observed resolution. A connected set of pixels satisfying a given gray-level property which occurs repeatedly in an image region constitutes a textured region.

Texture example Presentation schedule for EE6358 Computer Vision, Summer 2004: Binary image processing - Amin - June 14, 2004; Region segmentation - Nayan - June 14, 2004; Edge detection - Actul - June 21, 2004; Contouring - Ashish - June 21, 2004; Texture, optics & shading - Liu - June 28, 2004; Color and depth - Amin - June 28, 2004; Curves and surfaces - Jeff - July 5, 2004; Dynamic vision - Rob - July 5, 2004.

Texture analysis Three primary issues: texture classification, texture segmentation, and shape recovery from texture.

Texture classification What is it about? Identifying a given textured region as one of a given set of texture classes. For example, a particular region in an aerial image may belong to agricultural land, a forest region, or an urban area.

Texture classification We assume that the boundaries between regions have already been determined. Properties such as gray level, co-occurrence, contrast, entropy, and homogeneity are used.

Texture classification Microtextures: when the texture primitives are small, statistical methods are useful. Macrotextures: when the texture primitives are large, we must first determine the shape and properties of the basic primitives and then determine the rules that govern their placement.

Texture segmentation Concerned with automatically determining the boundaries between the various textured regions in an image. Most statistical methods for computing texture features do not provide accurate measures unless the computations are limited to a single texture region.

Shape recovery Image-plane variations such as the density, size, and orientation of texture elements are the cues exploited by shape-from-texture algorithms. For example, the texture gradient determines the orientation of the surface.

Statistical methods of texture analysis Texture is a spatial property, so a one-dimensional histogram cannot be used to characterize it.

Measures Two common measures are the gray-level co-occurrence matrix and the autocorrelation function. We focus on the first.

Gray-level co-occurrence matrix For a displacement vector d = (dx, dy), the gray-level co-occurrence matrix P[i, j] counts the number of pixel pairs separated by d whose gray levels are i and j.

Gray-level co-occurrence matrix The elements of P[i, j] are normalized by dividing each entry by the total number of pixel pairs. The normalized P[i, j] can then be treated as a probability mass function, since its entries add up to 1. A second example shows how the matrix captures the spatial distribution of gray levels.
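The construction and normalization described above can be sketched in a few lines of NumPy. This is an illustrative implementation, not code from the slides; the small test image is a hypothetical four-level example.

```python
import numpy as np

def cooccurrence(img, d):
    """Normalized gray-level co-occurrence matrix for displacement d = (dy, dx).

    P[i, j] is the probability that a pixel with gray level i has a pixel
    with gray level j at offset d.
    """
    img = np.asarray(img)
    levels = int(img.max()) + 1
    dy, dx = d
    P = np.zeros((levels, levels), dtype=float)
    rows, cols = img.shape
    for y in range(rows):
        for x in range(cols):
            y2, x2 = y + dy, x + dx
            if 0 <= y2 < rows and 0 <= x2 < cols:
                P[img[y, x], img[y2, x2]] += 1
    return P / P.sum()          # normalize so the entries sum to 1

# Tiny 4x4 four-level example, displacement one pixel to the right
img = np.array([[0, 0, 1, 1],
                [0, 0, 1, 1],
                [0, 2, 2, 2],
                [2, 2, 3, 3]])
P = cooccurrence(img, (0, 1))
```

Because P is normalized, its entries form the probability mass function the slide refers to.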

Gray-level co-occurrence matrix Entropy measures the randomness of the gray-level distribution; it is highest when all entries of P[i, j] are equal. Other parameters:

Parameters Energy: measures the number of repeated pairs; it is high when a few pairs dominate. Contrast: measures the local contrast of the image. Homogeneity: essentially tells us the opposite of contrast; it is high when frequent pairs have similar gray levels.
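The four descriptors can be computed directly from a normalized co-occurrence matrix. The formulas below are the standard definitions; treat them as assumptions, since the slide does not spell them out.

```python
import numpy as np

def glcm_features(P):
    """Entropy, energy, contrast, and homogeneity of a normalized
    co-occurrence matrix P (standard textbook definitions)."""
    i, j = np.indices(P.shape)
    nz = P[P > 0]
    entropy = -np.sum(nz * np.log2(nz))              # randomness of pairs
    energy = np.sum(P ** 2)                          # high when few pairs dominate
    contrast = np.sum((i - j) ** 2 * P)              # local intensity variation
    homogeneity = np.sum(P / (1.0 + np.abs(i - j)))  # mass near the diagonal
    return entropy, energy, contrast, homogeneity

# A perfectly uniform region concentrates all mass at one entry:
P_uniform = np.zeros((4, 4))
P_uniform[0, 0] = 1.0
features = glcm_features(P_uniform)
```

For the degenerate uniform case the entropy and contrast are zero while energy and homogeneity reach their maximum of 1, matching the intuition in the slide.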

Autocorrelation

Autocorrelation The autocorrelation function can be used to detect repetitive patterns and is a good measure of the fineness or coarseness of a texture. For coarse textures the function drops off slowly; for fine textures it drops off rapidly. For regular textures the function has peaks and valleys; for images containing repetitive texture patterns it exhibits periodic behavior, with a period equal to the spacing between adjacent texture primitives.
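The drop-off and periodicity described above are easy to see numerically. A minimal sketch, assuming an FFT-based computation (Wiener-Khinchin) rather than whatever method the slides used:

```python
import numpy as np

def autocorrelation(img):
    """Normalized circular 2-D autocorrelation of an image via the FFT."""
    img = np.asarray(img, dtype=float)
    img = img - img.mean()                 # remove the DC component
    F = np.fft.fft2(img)
    ac = np.fft.ifft2(F * np.conj(F)).real
    ac = ac / ac[0, 0]                     # zero-lag value becomes 1
    return np.fft.fftshift(ac)             # put zero lag at the array center

# Vertical stripes with period 2: the function peaks again every 2 pixels
stripes = np.tile([0.0, 1.0], (8, 4))      # 8x8 alternating columns
ac = autocorrelation(stripes)
```

For this periodic pattern the autocorrelation is 1 at every even horizontal lag and -1 at every odd one, exactly the periodic behavior the slide describes.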

Structural analysis of ordered texture Used when the primitives are large enough to be segmented and described individually. First segment the primitives (e.g., the discs) using a simple method such as connected-component labeling, then determine the regular structure. Morphological methods are used when the image is corrupted by noise or other non-repeating random patterns.

Example

Model-based methods for texture analysis A model of the texture is constructed; the model has a set of parameters which determine the properties of the texture. The challenge of this approach is estimating those parameters from a given image.

Markov random field Discrete Gauss-Markov random field model: the gray level of any pixel is modeled as a linear combination of the gray levels of its neighbors plus an additive noise term, f[m, n] = sum over [k, l] of h[k, l] f[m - k, n - l] + n[m, n]. The weights h[k, l] are the model parameters.

Markov random fields (contd.) The procedure for building the model: compute the parameters from a given image texture using the least-squares method. The estimated parameters are then compared with those of the known texture classes to determine the class of the particular texture being analyzed. Least-squares method: http://www.efunda.com/math/leastsquares/leastsquares.cfm
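The least-squares parameter estimation can be sketched by regressing each interior pixel on its neighbors. The two-neighbor horizontal neighborhood below is an illustrative assumption chosen so the result is easy to verify, not the neighborhood from the slides.

```python
import numpy as np

def fit_gmrf(img, offsets=((0, 1), (0, -1))):
    """Least-squares estimate of Gauss-Markov random field weights h[k, l].

    Builds one linear equation per interior pixel, regressing its gray level
    on the gray levels of its neighbors at the given (dy, dx) offsets.
    """
    img = np.asarray(img, dtype=float)
    rows, cols = img.shape
    A, b = [], []
    for y in range(1, rows - 1):
        for x in range(1, cols - 1):
            A.append([img[y + dy, x + dx] for dy, dx in offsets])
            b.append(img[y, x])
    h, *_ = np.linalg.lstsq(np.array(A), np.array(b), rcond=None)
    return dict(zip(offsets, h))

# On a horizontal ramp each pixel is the average of its left and right
# neighbors, so the estimated weights come out as 0.5 and 0.5.
ramp = np.tile(np.arange(6.0), (5, 1))
h = fit_gmrf(ramp)
```

In practice the fitted weight vector is what gets compared against the stored weights of known texture classes.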

Contd. When the patterns forming a texture have the property of self-similarity at different scales, fractal-based models may be used. What is self-similarity? A set is self-similar if it can be divided into N copies of itself, each scaled down by a factor r. Such a texture is characterized by its fractal dimension D, given by D = log N / log(1/r).
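The similarity-dimension formula above is a one-liner; the helper and the Sierpinski example are illustrative additions, not from the slides.

```python
import math

def similarity_dimension(N, r):
    """Fractal (similarity) dimension of a set made of N copies of itself,
    each scaled down by a factor r: D = log N / log(1/r)."""
    return math.log(N) / math.log(1.0 / r)

# Sierpinski triangle: 3 copies at half scale, so D = log 3 / log 2
D = similarity_dimension(3, 0.5)   # ~1.585
```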

Shape from texture We can use the variations in the size, shape, and density of texture primitives to estimate surface shape and orientation, recovering 3-D information from a 2-D image.

example

example

What the example tells us The surface is not parallel to the image plane. The size of the ellipses decreases with distance: a density gradient. The aspect ratio (the ratio of the minor to the major diameter of the ellipse) does not remain constant: an aspect-ratio gradient.

Ratio at the point (0, 0) Here z is the distance of the disc from the camera center, f is the focal length of the camera, α is the slant angle, and d is the diameter of the disc. At the image center the disc projects to an ellipse with major diameter f d / z and minor diameter f d cos α / z, so the aspect ratio there is cos α.

Ratio at the point (0, y') The major diameter at (0, y') is scaled by the factor (1 - tan θ tan α) relative to the image center, where θ is the angle the point subtends at the center of projection. The aspect ratio at (0, y') is cos α (1 - tan θ tan α).

Optics Machine vision relies on the pinhole camera model: it models the geometry of perspective projection but omits the depth of field, and its view volume is an infinite pyramid.

Lens equation If we define the distance from the optical origin to the image plane as z', the focal length as f, and the distance from the optical origin to the point in the scene as z, we have the thin-lens equation 1/z + 1/z' = 1/f.
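A quick numerical check of the thin-lens relation. The standard Gaussian sign convention (both distances positive) is assumed here; the slide's own convention may differ.

```python
def image_distance(f, z):
    """Distance z' from lens to the in-focus image plane, solved from the
    thin-lens equation 1/z + 1/z' = 1/f."""
    if z <= f:
        raise ValueError("object at or inside the focal length has no real image")
    return 1.0 / (1.0 / f - 1.0 / z)

# A 50 mm lens focused on a point 5 m away: the image plane sits just
# beyond the focal length.
zp = image_distance(0.05, 5.0)   # ~0.0505 m
```

As z grows toward infinity, z' shrinks toward f, which is why distant scenes are focused essentially at the focal plane.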

Image resolution For most machine vision applications, resolution is determined by the interplay between pixel spacing and depth of field. If the spacing between pixels is A, then the resolution limit is 2A, since that is the minimum separation that allows two features to be perceived as distinct.

Image resolution For film, a typical spacing between grains is 5 μm. The human visual system resolves about one minute of arc (roughly 0.3 mrad). Example: at arm's length (40 cm), the resolvable feature size is 0.3 × 10⁻³ × 40 cm = 120 μm.

Depth of field Depth of field is the range of distances that produce acceptably focused images. When a scene point is out of focus, it images as a blur circle of diameter b, which depends on the diameter d of the lens. The near plane and far plane are the minimum and maximum distances at which the blur circle stays acceptably small.

Depth of field So the depth of field D is the difference between the far-plane and near-plane distances: D = z_max - z_min.
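The slide's own equations for the near and far planes did not survive transcription, so the sketch below uses the standard photographic formulas (via the hyperfocal distance) as an assumed stand-in; the lens parameters in the example are hypothetical.

```python
def dof_limits(f, N, c, s):
    """Near and far limits of acceptable focus for focal length f,
    F-number N, acceptable blur-circle diameter c, and subject distance s.
    Standard photographic formulas, not the slide's derivation."""
    H = f * f / (N * c) + f          # hyperfocal distance
    near = H * s / (H + (s - f))
    far = H * s / (H - (s - f)) if s - f < H else float("inf")
    return near, far

# 50 mm lens at F/2, 20 um blur circle, subject at 5 m:
near, far = dof_limits(0.05, 2.0, 20e-6, 5.0)
D = far - near                       # depth of field, roughly 0.8 m
```

Note how D = z_max - z_min matches the definition on the slide, and how stopping down (larger N) enlarges H and therefore the depth of field.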

Exposure Exposure is the power per unit area multiplied by time. The parameter that controls exposure is called the F-stop, given by F = f/d, where d is the aperture diameter.

Shading Shading describes how light reflects from surfaces; it can be used to estimate the shape of a surface.

Shading Imaging is the process that maps intensities from points in a scene onto an image plane. Image irradiance is defined as the power per unit area of radiant energy falling on the image plane. Radiance refers to outgoing light; irradiance to incoming light.

Image irradiance Irradiance at an image point is determined by the energy radiated from the corresponding scene point. We trace the ray back to the surface patch from which it was emitted and analyze how the light is reflected by that patch.

Image irradiance Two factors determine the reflected radiance: the illumination falling on the patch of scene surface, and the fraction of the incident illumination that is reflected by the surface patch. Both are analyzed for an infinitesimal patch of surface in the scene.

Bidirectional reflectance distribution function (BRDF) The BRDF f(θi, φi, θe, φe) is the ratio of the radiance reflected in direction (θe, φe) to the irradiance arriving from direction (θi, φi), with the angles measured in spherical coordinates about the surface normal. For surfaces whose reflectance is rotationally symmetric about the normal, the BRDF depends on the azimuth angles only through their difference: f(θi, φi, θe, φe) = f(θi, θe, φi - φe).

Illumination The total irradiance of the surface patch is given by E = ∫₀^{2π} ∫₀^{π/2} E(θi, φi) sin θi cos θi dθi dφi. The radiance reflected by the surface is given by L(θe, φe) = ∫₀^{2π} ∫₀^{π/2} f(θi, φi, θe, φe) E(θi, φi) sin θi cos θi dθi dφi.

Illumination For a Lambertian surface illuminated by a distant point source of intensity I₀ at angle θs from the surface normal, the reflected radiance is L(θe, φe) = (I₀/π) cos θs.

Reflectance Lambertian reflectance, specular reflectance, and combinations of Lambertian and specular reflectance. A combined BRDF with Lambertian fraction η is f(θi, φi, θe, φe) = η/π + (1 - η) δ(θe - θi) δ(φe - φi - π) / (sin θi cos θi).

Surface orientation The normal to the surface patch is related to the surface gradient (p, q) = (∂z/∂x, ∂z/∂y) by n = (p, q, -1). This means that p and q are the amounts of change in depth z corresponding to unit changes in x and y, respectively.

Surface orientation So the coordinates of a point on a scene surface can be denoted by the image-plane coordinates x and y, and any function or property of the scene surface can be specified in terms of p = p(x, y) and q = q(x, y).

The reflectance map The combination of scene illumination, surface reflectance, and the representation of surface orientation in viewer-centered coordinates is called the reflectance map. It specifies the brightness of a patch of surface at a particular orientation for a given distribution of illumination and surface material.

The reflectance map E(x, y) = R(p, q): the irradiance (brightness) at point (x, y) in the image plane equals the reflectance-map value for the surface orientation (p, q) of the corresponding point on the scene surface.

Shape from shading For fixed illumination and imaging conditions, changes in surface orientation translate into corresponding changes in image intensity. Shape from shading is the inverse problem: recovering surface orientation from image intensity.

References Analysis of Textured Regions Based on Gray Scale Co-occurrence Matrices: www.cis.rit.edu/~rlc8222/vision/text.html Classification of Tissues Using Texture Information: www.irisa.fr/vista/Papers/2003_icip_rousseau.pdf