1 Formation et Analyse d’Images Session 4 Daniela Hall 10 October 2005.



2 Course Overview
Session 1 (19/09/05)
– Overview
– Human vision
– Homogeneous coordinates
– Camera models
Session 2 (26/09/05)
– Tensor notation
– Image transformations
– Homography computation
Session 3 (3/10/05)
– Camera calibration
– Reflection models
– Color spaces
Session 4 (10/10/05)
– Pixel-based image analysis
The 17/10/05 course is replaced by Modelisation surfacique.

3 Course Overview
Session (24/10/05), 9:45 – 12:45
– Kalman filter
– Tracking of regions, pixels, and lines
Session 7 (7/11/05)
– Gaussian filter operators
Session 8 (14/11/05)
– Scale space
Session 9 (21/11/05)
– Contrast description
– Hough transform
Session 10 (5/12/05)
– Stereo vision
Session 11 (12/12/05)
– Epipolar geometry
Session 12 (16/01/06): exercises and questions

4 Session Overview
1. Review of the reflectance model
2. Pixel-based image analysis
   1. Color histograms
   2. Example: face detection
3. Segmentation
4. Connectivity analysis
5. Morphological operators
6. Moments

5 Di-chromatic reflectance model
The reflected light R is the sum of the light reflected at the surface, R_s, and the light reflected from the material body, R_L.
R_s has the same spectrum as the light source.
The spectrum of R_L is "filtered" by the material (photons are absorbed, which changes the emitted light).
Luminance depends on surface orientation.
The spectrum of the chrominance is composed of the light source spectrum and the absorption of the surface material.

6 Pixel-based image analysis
The amplitude of R_L depends on the angle i.
The amplitude is captured by the luminance axis.
The body component of the object is captured by the chrominance axis.
[Figure: geometry of light source, surface normal N, and camera, with angles e, g, and i]

7 Color space (r,g)
Intensity-normalised color space (r,g). Properties:
– less sensitive to intensity changes
– less sensitive to variations of the angle i
– preserves chrominance (important for object identification)
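The transcript does not reproduce the normalisation formula; a minimal sketch, assuming the standard chromaticity definition (each channel divided by the total intensity), is:

```python
def rg_normalize(R, G, B):
    # Standard chromaticity normalisation (assumed; the slide does not show
    # the formula): divide each channel by the total intensity R + G + B.
    s = R + G + B
    if s == 0:
        return (0.0, 0.0)  # convention for black pixels
    return (R / s, G / s)

# Doubling all channels (a pure intensity change) leaves (r, g) unchanged:
# rg_normalize(100, 50, 50) == rg_normalize(200, 100, 100) == (0.5, 0.25)
```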

8 Object detection by color
Let p((r,g)|obj) be the probability density of (r,g) given the object, and p((r,g)) the global probability of occurrence of (r,g).
Then, for any color (r,g), we can compute the probability p(obj|(r,g)).
This gives rise to a "probability map" of the image.

9 Color histograms
A color histogram is a (multi-dimensional) table.
We define a linear function that computes, for any color, the index of the corresponding histogram cell.
Example: we have a greyscale image with 256 grey values and want to fill a histogram with N cells. The index c(val) of the histogram cell for a pixel with value val is:
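The formula itself is missing from the transcript; the usual linear mapping, assumed here, is:

```python
def cell_index(val, n_cells, n_levels=256):
    # Linear mapping of a grey value in [0, n_levels-1] to a cell index in
    # [0, n_cells-1]. The slide's original formula is not reproduced in the
    # transcript, so this standard choice is an assumption.
    return (val * n_cells) // n_levels

# cell_index(0, 32) == 0, cell_index(128, 32) == 16, cell_index(255, 32) == 31
```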

10 Color histograms
A histogram is filled by considering the pixels within a ROI (region of interest): for each pixel value val, we compute c(val) and increment that cell.
Histograms approximate probability densities.

11 Object detection by color histograms
The prior p(obj) can be estimated as the ratio of the object size N_obj to the image size N_tot.
Bayes' rule then reduces to a ratio of histograms:
p(obj|(r,g)) ≈ h_obj(r,g) / h_tot(r,g)

12 Object detection by color histograms
Constructing the histograms h_obj and h_tot is called learning. Important points:
– h_obj must contain only points of the object.
– h_tot must be sufficiently representative to estimate the color distribution of the world.
– You need a sufficient number of training examples. All cells of h_tot should be > 0 (otherwise division by 0). You have sufficient data when N >= k * (number of cells), with k ~ 5 to 10.
Example: for a 2D histogram of 32x32 cells you need about 10 000 pixels; for a 5D histogram with 10 cells per dimension you need 1 million pixels.
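The learning step above can be sketched as follows; `learn_histograms` and its arguments are illustrative names, not from the slides:

```python
import numpy as np

def learn_histograms(image_cells, obj_mask, n_cells):
    # image_cells: 2-D array of histogram cell indices, one per pixel.
    # obj_mask: boolean array marking the hand-segmented object pixels.
    # h_tot counts every pixel; h_obj counts only object pixels.
    h_tot = np.bincount(image_cells.ravel(), minlength=n_cells).astype(float)
    h_obj = np.bincount(image_cells[obj_mask].ravel(), minlength=n_cells).astype(float)
    return h_obj, h_tot

cells = np.array([[0, 1], [2, 2]])
mask = np.array([[False, True], [True, True]])
h_obj, h_tot = learn_histograms(cells, mask, 3)
# h_tot == [1, 1, 2], h_obj == [0, 1, 2]
```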

13 Example: face detection
Learning:
1. Select images.
2. Segment the pixels that have skin color (by hand).
3. Construct h_obj and h_tot.

14 Example: face detection
Detection:
1. Compute the probability map, where each pixel has probability p(obj|(r,g)) ~ h_obj(r,g)/h_tot(r,g).
2. The first and second moments of the high-probability pixels give the position and extent of the face.
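The probability map from the histogram ratio can be sketched as below (a minimal version, assuming the histograms were built as counts over the same cells):

```python
import numpy as np

def probability_map(image_cells, h_obj, h_tot):
    # p(obj | colour) as the ratio of histogram counts; h_tot must be > 0
    # in every cell that occurs in the image.
    ratio = h_obj / h_tot          # per-cell probability
    return ratio[image_cells]      # look up each pixel's cell

h_obj = np.array([0.0, 2.0, 8.0])
h_tot = np.array([10.0, 10.0, 10.0])
cells = np.array([[0, 1], [2, 2]])
pmap = probability_map(cells, h_obj, h_tot)
# pmap == [[0.0, 0.2], [0.8, 0.8]]
```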

15 Example: face tracking
1. Learning as before.
2. Do the detection once for initialisation.
3. Continuous tracking:
   1. Compute a position estimate for the next frame (using a Kalman filter, session 6).
   2. Compute the probability image.
   3. Multiply by a Gaussian mask centered on the most likely position. This removes outliers and makes the system more stable.
   4. The first and second moments give the position and size of the face.
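The Gaussian weighting in step 3.3 can be sketched as follows (`gaussian_mask` is an illustrative helper, not from the slides):

```python
import numpy as np

def gaussian_mask(shape, center, sigma):
    # 2-D Gaussian centred on the position predicted for the next frame;
    # multiplying the probability image by it suppresses high-probability
    # pixels far from the expected face position.
    ii, jj = np.mgrid[0:shape[0], 0:shape[1]]
    d2 = (ii - center[0]) ** 2 + (jj - center[1]) ** 2
    return np.exp(-d2 / (2.0 * sigma ** 2))

# One tracking step (the prediction itself would come from the Kalman filter):
# weighted = prob_image * gaussian_mask(prob_image.shape, predicted_pos, sigma)
```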

16 Example: face tracking
[Figure: input image; probability image weighted by the Gaussian; tracking result]

17 Segmentation
1. Segmentation by thresholding
2. Connected components
3. Improvement by morphological operators

18 Segmentation
Segmentation by thresholding:
1. Make a histogram of the probability image.
2. Find threshold values by searching for valleys.
3. Apply the thresholds.
[Figure: probability image and thresholded image, threshold at 0.375]
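Step 3 can be sketched in one line; the 0.375 value is the valley from the slide's example:

```python
import numpy as np

def apply_threshold(prob_img, t):
    # Pixels above the threshold (e.g. the 0.375 valley found in the
    # histogram) become foreground (1), the rest background (0).
    return (prob_img > t).astype(np.uint8)

binary = apply_threshold(np.array([[0.1, 0.5], [0.4, 0.2]]), 0.375)
# binary == [[0, 1], [1, 0]]
```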

19 Notational convention
Image operators require evaluating the neighboring pixels. The neighbor with coordinates (i-1,j-1) of pixel (i,j) is called the neighbor NW; it has the value I(i-1,j-1) = I(NW). Operators are based on two types of support: a 3x3 block (8-connected) or a 3x3 cross (4-connected).

8-connected (3x3 block):
I(NW) I(N) I(NE)
I(W)  I(C) I(E)
I(SW) I(S) I(SE)

4-connected (3x3 cross):
      I(N)
I(W)  I(C) I(E)
      I(S)

In coordinates:
I(i-1,j-1) I(i,j-1) I(i+1,j-1)
I(i-1,j)   I(i,j)   I(i+1,j)
I(i-1,j+1) I(i,j+1) I(i+1,j+1)

20 Connected components
An algorithm to segment multiple objects within an image. How it works:
1. Use a binary image.
2. Scan along a row until a point p with I(p)=1 is found.
3. Examine the 4 already-visited neighbors (N, W, NW, NE):
   1. If I(N)=I(W)=I(NW)=I(NE)=0, assign a new label to p.
   2. If only one neighbor has a label, assign this label to p.
   3. If more neighbors have labels, assign one of them to p and make a note of the label equivalence.
4. Scan all rows.
5. Determine which labels are equivalent.
6. Replace equivalent labels within the image.
[Figure: pixel p with its neighbors N, W, NW, NE]
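The steps above can be sketched as a two-pass labelling with a union-find table for the equivalences (a minimal version; `label_components` is an illustrative name):

```python
def label_components(img):
    # Two-pass connected-component labelling using the already-visited
    # neighbours N, W, NW, NE, as in the algorithm above.
    h, w = len(img), len(img[0])
    labels = [[0] * w for _ in range(h)]
    parent = {}  # union-find table of label equivalences

    def find(x):
        while parent[x] != x:
            x = parent[x]
        return x

    next_label = 1
    for i in range(h):
        for j in range(w):
            if img[i][j] == 0:
                continue
            # labels of the already-visited neighbours N, W, NW, NE
            neigh = []
            for di, dj in [(-1, 0), (0, -1), (-1, -1), (-1, 1)]:
                ni, nj = i + di, j + dj
                if 0 <= ni < h and 0 <= nj < w and labels[ni][nj] > 0:
                    neigh.append(labels[ni][nj])
            if not neigh:
                labels[i][j] = next_label        # new label
                parent[next_label] = next_label
                next_label += 1
            else:
                m = min(neigh)
                labels[i][j] = m
                for l in neigh:                  # note label equivalences
                    parent[find(l)] = find(m)
    # second pass: replace equivalent labels
    for i in range(h):
        for j in range(w):
            if labels[i][j]:
                labels[i][j] = find(labels[i][j])
    return labels
```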

21 Example connectivity analysis
1. Scan a row until p with I(p)=1 is found.
2. Examine the neighbors N, NW, NE, W.
[Figure: successive labelings of a binary image during the scan; two touching runs first receive labels a and b, the equivalence b=a is noted, a third component receives label c, and after replacing the equivalences two labels remain]

22 More examples
[Figure: binary image and its connected components, with labels coded as colors]

23 Example: count the number of objects
The connected-component algorithm gives 163 labels.
[Figure: original image; thresholded image; labels coded as grey values; labels coded as colors; labels coded as 8 different colors]

24 Morphological operators
This example shows the basic operators of mathematical morphology on binary images. The structuring element is a 3x3 block (8-connected) or a 3x3 cross (4-connected).
– Max (dilation for a binary image)
– Min (erosion for a binary image)
– Close: Min(Max(Image))
– Open: Max(Min(Image))
Convention: for the binary image, black = 0, white = 1.

25 Dilation and erosion
Dilation operator (4-connected): I'(C) = max(I(N), I(W), I(C), I(E), I(S))
Erosion operator (4-connected): I'(C) = min(I(N), I(W), I(C), I(E), I(S))
[Figure: original image, thresholded image, dilated image, eroded image]

26 Close and open operators
Close: Min(Max(Image)), i.e. dilation followed by erosion; fills small holes.
Open: Max(Min(Image)), i.e. erosion followed by dilation; removes small isolated foreground regions.

27 Moments
In order to describe (and recognize) an object we need a method that is invariant to certain image transformations. Method: moment computation.
– 1st moment: center of gravity
– 2nd moment: spatial extent
The 1st moment and the eigenvalues of the 2nd moment are invariant to the orientation of the object; the 2nd central moment is invariant to image translation.

28 Moments
Input: a binary image. Let S be the sum (count) of the white pixels.
1st moment (μ_i, μ_j): the center of gravity, i.e. the mean of the white-pixel coordinates.
2nd moments: the covariance of the white-pixel coordinates.
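The moment formulas are not reproduced in the transcript; a minimal sketch of the standard definitions is:

```python
import numpy as np

def moments(binary):
    # First moment: centre of gravity (mean coordinates of the white pixels).
    # Second central moments: covariance of the white-pixel coordinates,
    # normalised by S, the number of white pixels.
    ii, jj = np.nonzero(binary)
    S = len(ii)
    mu_i, mu_j = ii.mean(), jj.mean()
    di, dj = ii - mu_i, jj - mu_j
    cov = np.array([[(di * di).sum(), (di * dj).sum()],
                    [(di * dj).sum(), (dj * dj).sum()]]) / S
    return (mu_i, mu_j), cov

(mu_i, mu_j), C = moments(np.array([[0, 0, 0],
                                    [1, 1, 1],
                                    [0, 0, 0]]))
# centre (1.0, 1.0); the spread is entirely along j: C == [[0, 0], [0, 2/3]]
```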

29 Moments
The covariances define an ellipse. The direction and length of the major axis of the ellipse are computed by principal component analysis: find a rotation Φ such that C = Φ Λ Φ^T, where the columns of Φ are the eigenvectors and Λ is the diagonal matrix of eigenvalues.
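The decomposition can be sketched with numpy's symmetric eigensolver (the covariance values here are illustrative, not from the slides):

```python
import numpy as np

# Eigendecomposition C = Phi Lambda Phi^T of the covariance matrix from the
# moments: the eigenvalues give the extents along the ellipse axes, the
# eigenvectors the axis directions.
C = np.array([[2.0, 0.0],
              [0.0, 0.5]])
lam, phi = np.linalg.eigh(C)   # eigenvalues in ascending order
# lam == [0.5, 2.0]; the major axis is phi[:, 1], here (1, 0) up to sign
```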

30 Example
The height and width of an object depend on its orientation, whereas the eigenvalues λ1, λ2 of the covariance C_P are invariant.
[Figure: the same object at different orientations; the bounding box (w, h) changes while the ellipse axes λ1, λ2 do not]

31 Example: pattern recognition
You are given an example image that contains a particular segmented object. Task: decide, for a number of new images showing different orientations, whether the learned object is present in the image.
[Figure: training example and query images]

32 Example: pattern recognition
Representing the object by the width and height of its bounding box is not a solution. Representation by moments makes it possible to find the correct images.
[Figure: training example and query images]

33 Exercise
Your company asks you to build a cheap traffic-light monitoring system. You have a camera that observes a traffic light, and the system should emit an event when the light changes color. How would you proceed?