Shadow Removal Seminar

Shadow Removal Seminar: Color Invariance

What we covered so far The causes of shadows and what shadows mean to us (how the human brain interprets shadows). How to create shadows graphically. Some shadow detection techniques.

This lecture Intro Invariance and color invariance Shadow classification Shadow segmentation

Intro - Shadows Generation of shadows Shadow types: cast shadows and self shadows. As we already know, a shadow occurs when an object partially or totally occludes direct light from a source of illumination. Shadows can be divided into two cases: self and cast shadows.

Intro - Invariance Invariant: a feature (quantity, property or function) that remains unchanged when a particular transformation is applied to it. What it is used for Invariance in images Matlab demo

Intro – Shadow detection techniques Shadow detection techniques classification: Model based Property based Model-based techniques rely on models representing a priori knowledge of the geometry of the scene, the objects, and the illumination. Property-based techniques identify shadows by using features such as the geometry, brightness or color of shadows. (from “Cast shadow segmentation using invariant color features”)

Shadow identification and classification using invariant color models Elena Salvador, Andrea Cavallaro, Touradj Ebrahimi 2001

Overview Goal Constraints Color Invariants Algorithm steps Results Conclusions

Goal Extraction and classification of shadows in color images.

Constraints A simple environment is assumed, where shadows are cast on a flat or nearly flat, non-textured surface. Objects are uniformly colored. Only one light source illuminates the scene. Shadows and objects lie entirely within the image. The light source must be strong.

Color Invariants Photometric color invariants Definition Models of photometric color invariants Normalized rgb Hue (H) and saturation (S) (c1,c2,c3) and (l1,l2,l3) Photometric color invariants are functions which describe the color configuration of each image point while discounting shading, shadows and highlights.

Color Invariants - cont The c1c2c3 color invariant features are defined as the angles of the body reflection vector in RGB space, and are consequently invariant for matte, dull objects. In fact, any expression defining colors on the same linear color cluster spanned by the body reflection vector in RGB space is an invariant for the dichromatic reflection model with white illumination, and this color invariant model follows from that observation. (Color-Based Object Recognition, Theo Gevers and Arnold W. M. Smeulders, 1999)
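The slide's equation image did not survive the transcript; as a sketch, the c1c2c3 features of Gevers and Smeulders can be computed per pixel as follows (function name is mine):

```python
from math import atan2

def c1c2c3(r, g, b):
    # Angles of the RGB body reflection vector (Gevers & Smeulders 1999):
    # c1 = arctan(R / max(G, B)), and cyclically for c2 and c3.
    # Scaling all three channels by a common factor leaves the angles
    # unchanged, which is exactly the shading/shadow invariance above.
    return (atan2(r, max(g, b)),
            atan2(g, max(r, b)),
            atan2(b, max(r, g)))
```

Since a shadow under white illumination scales R, G and B by a common factor, the c1c2c3 values of a lit and a shadowed pixel of the same surface coincide.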

Algorithm steps Shadow candidates identification Edge detection Finding the outer points of the edge map Intensities used as reference Morphological processing used to close contours of the edge map. As we already know, shadows result from the obstruction of light from the light source, so the luminance values in a shadow region are smaller than those in the surrounding lit regions. In this section (shadow candidate identification) I'll describe the method used here to find the shadow candidates. 1) First, an edge map is obtained by applying the Sobel operator to the luminance. 2) Horizontal and vertical scanning is performed on the edge map in order to find its outer points. The intensities at the detected points are used as a reference to determine whether the pixels in the inner part of the edge map are darker, and therefore candidates to be shadow points. Note: since luminance is a color feature that is sensitive to shadows and shadings, the map contains both object and shadow edges. By using this edge map in the dark-region extraction process, we restrict the search for shadow candidate regions to the portion of the image occupied by the object and its cast shadow.
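A minimal sketch of the scanning step above, row-wise only (the paper also scans columns; the names and the exact reference rule are my assumptions):

```python
import numpy as np

def shadow_candidates(gray, edges):
    # For each row, take the outermost edge points of the edge map,
    # use their intensities as the reference, and mark inner pixels
    # that are darker than the reference as shadow candidates.
    cand = np.zeros(gray.shape, dtype=bool)
    for r in range(gray.shape[0]):
        cols = np.flatnonzero(edges[r])
        if cols.size < 2:
            continue
        ref = min(gray[r, cols[0]], gray[r, cols[-1]])
        inner = slice(cols[0] + 1, cols[-1])
        cand[r, inner] = gray[r, inner] < ref
    return cand
```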

Algorithm steps - cont Shadow classification Applying photometric color invariants Edge detection Classification Once the dark regions have been extracted from the image, color information can be used to classify shadow regions on the object (self shadows) and shadow regions on the background (cast shadows). By performing edge detection on the invariant color features, an edge map which does not contain the edges corresponding to shadow boundaries is obtained (c). The color edge map and the dark-regions map are then used as input for the classification level. The process for classifying the dark regions is similar to that used at the identification level: the input color edge map is scanned in the horizontal and vertical directions to find its outer points. The detected points indicate the outer edge points on the object. Points in the dark-region mask that lie within the detected edge points are classified as self shadow points; the outer points are classified as cast shadow points.
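The self/cast split described above can be sketched like this (row-wise scan only; a hypothetical helper, not the authors' code):

```python
import numpy as np

def classify_dark_regions(dark, object_edges):
    # Dark pixels lying between the outermost object-edge points of a
    # row are self shadow; dark pixels outside them are cast shadow.
    self_shadow = np.zeros(dark.shape, dtype=bool)
    cast_shadow = dark.copy()
    for r in range(dark.shape[0]):
        cols = np.flatnonzero(object_edges[r])
        if cols.size < 2:
            continue
        inner = slice(cols[0], cols[-1] + 1)
        self_shadow[r, inner] = dark[r, inner]
        cast_shadow[r, inner] = False
    return self_shadow, cast_shadow
```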

Algorithm steps - summary

Results

Conclusions This method succeeds in detecting and classifying shadows under environmental constraints that are less restrictive than those of other methods. A strategy still needs to be defined for describing the object color while discounting the effect of self shadows.

Cast shadow segmentation using invariant color features Elena Salvador, Andrea Cavallaro and Touradj Ebrahimi 2004

Overview Goal Constraints Spectral properties of shadows Dichromatic reflection model Photometric color invariants Algorithm steps Results

Goal Detection of cast shadows in video and in still images.

Constraints The ambient light is assumed to be proportional to the direct (occluded) light. Inter-object reflections among different surfaces are not taken into account. Video: the camera is not moving. The reason for the first bullet: ambient light can have different spectral characteristics with respect to direct light. Outdoor scenes, where the diffuse light from the sky differs in spectral composition from the direct light of the sun, provide an example. Since we aim in this work at avoiding calibration procedures and camera-dependent computations, so as to propose a segmentation algorithm that can be applied even when no control over the imaging conditions and the scene is possible, we assume the first bullet.

Dichromatic reflection model Radiance of light: L_r, reflected at a given point p on a surface in 3D, is the sum of L_a, the ambient reflection term, L_b, the body reflection term, and L_s, the surface reflection term; λ denotes the wavelength. When an object obstructs the direct light, only the ambient term remains, which represents the intensity of the reflected light at a point in a shadow region. Let S_ci(λ) denote the spectral sensitivities of the R, G and B sensors of a color camera. The appearance of a surface is the result of the interaction among the illumination, the surface reflectance properties, and the responses of the chromatic mechanism, which in a color camera is composed of three color filters. To model the physical interaction between the illumination and the object's surface we use the dichromatic reflection model.
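The slide's equation images are missing from the transcript; reconstructing the radiance equations in the spirit of the text (notation mine):

```latex
% Dichromatic reflection model with ambient term: radiance reflected
% at surface point p, wavelength \lambda
L_r(\lambda, p) = L_a(\lambda, p) + L_b(\lambda, p) + L_s(\lambda, p)

% When an object obstructs the direct light (point in shadow),
% only the ambient term survives:
L_{\text{shadow}}(\lambda, p) = L_a(\lambda, p)
```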

Dichromatic reflection model - cont The color components of the reflected intensity that reach the camera sensors at a point (x,y) in the 2D image plane are given by the sensor responses C_i, i in {R,G,B}, where E(λ,x,y) is the image irradiance at (x,y) and S_ci(λ) is one of {S_R(λ), S_G(λ), S_B(λ)}. The interval of integration is determined by S_ci(λ), which is non-zero over a bounded interval of wavelengths v. Since the image irradiance is proportional to the scene radiance, for a pixel position (x,y) representing a point p in direct light, the sensor measurements give a color vector C(x,y)_lit; α is the proportionality factor between radiance and irradiance. For a point in shadow the measurements give a color vector C(x,y)_shadow = (R_shadow, G_shadow, B_shadow). It follows that each of the three RGB components, if positive and non-zero, decreases when passing from a lit region to a shadowed one. Note: irradiance is the radiant flux incident upon a unit area of a surface; for sunlight it is the number of watts received per square metre of the Earth's surface.
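The sensor-measurement equations lost with the slide images can be reconstructed from the surrounding definitions (my notation, following the description above):

```latex
% Sensor response of channel i \in \{R, G, B\} at image point (x, y):
C_i(x, y) = \int_{v} E(\lambda, x, y)\, S_{c_i}(\lambda)\, d\lambda

% Point in direct light (\alpha relates irradiance to radiance):
C_i^{\text{lit}}(x, y) = \alpha \int_{v}
    \bigl[L_a(\lambda) + L_b(\lambda) + L_s(\lambda)\bigr]\,
    S_{c_i}(\lambda)\, d\lambda

% Point in shadow (only ambient light):
C_i^{\text{shadow}}(x, y) = \alpha \int_{v}
    L_a(\lambda)\, S_{c_i}(\lambda)\, d\lambda

% Hence, for positive responses, each channel decreases in shadow:
C_i^{\text{shadow}}(x, y) < C_i^{\text{lit}}(x, y)
```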

Dichromatic reflection model - cont The conclusions are:

Color invariance The color invariants are the same as in the previous article.

Algorithm steps Hypothesis generation (dichromatic model) Accumulation of evidence (color invariance test, geometric properties test) Decision

Hypothesis generation Still images Find edges with the Sobel operator. Use reference pixels to find suspected shadow areas. Video Analysis is performed only in areas identified by a motion detector; the reference image represents the background of the scene. To obtain more robustness, the analysis is performed on a window. Still images: the edge map is obtained by applying the Sobel operator separately to the color channels and then OR-ing the results. A contour point (x,y) becomes a candidate shadow contour point if the reference pixel value is bigger than the current pixel value, where the reference pixel is defined by first-level neighboring. Video: the reference pixel (x_r,y_r) belongs to a reference image which represents the background of the scene. The reference image can be either a frame of the sequence or a reconstructed one; the reference pixel is at the same location as (x,y) in the image under analysis. In the noise-free case, the condition I(x_r,y_r) - I(x,y) > 0 for each color channel tells us that the pixel is in shadow. To obtain more robustness, the analysis is performed on a window around each pixel position.
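The noise-robust windowed darkness test for video can be sketched as follows (window size and names are my choices):

```python
import numpy as np

def darker_than_reference(cur, ref, y, x, w=1):
    # A pixel passes the hypothesis test when every pixel of its
    # (2w+1) x (2w+1) window is darker in the current frame than in
    # the reference (background) image, in every color channel.
    a = cur[y - w:y + w + 1, x - w:x + w + 1]
    b = ref[y - w:y + w + 1, x - w:x + w + 1]
    return bool(np.all(b > a))
```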

Hypothesis generation - cont Result of the first level: The candidate shadow points belonging to the edge map:

Accumulation of evidence – overview The color invariance property is used to strengthen or cancel the hypothesized shadow areas. Checking for the existence of the shadow line and the hidden line. The result of the first level of analysis is the identification of a set of candidate shadow pixels. Photometric invariant color features and spatial constraints are exploited at this level of the shadow segmentation process. The invariant color features are compared, for every pixel, with the features of the reference pixel. If the value of the invariant color features has not changed with respect to the reference, the shadow hypothesis is strengthened.

Accumulation of evidence – Still Images Color edge detection is performed in the invariant space. Morphological dilation is applied to the edge map. Isolated pixels are removed.

Accumulation of evidence – in video Compute the invariant feature values. Geometric property test: the position of the shadow with respect to the object is tested. The identification of the pixels satisfying the first evidence is achieved by analyzing the difference in the invariant feature values. As in the still-image case, d(x,y) is not exactly 0 even for invariant features, so here too the operation is performed on a window and a threshold is set. The reference picture and the current picture are converted to the invariant space and then subtracted from each other. In the ideal case the shadowed area would give 0, but this does not occur in real images, so a threshold on d(x,y) is set: if d(x,y) < threshold, then it is a shadow. Once the set of pixels is obtained, the position of shadows with respect to objects is tested (geometric property). In case a hypothesized shadow is fully included in an object, the shadow line is not present, and the shadow hypothesis is then weakened.
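A sketch of the thresholded invariant-difference test described above (the threshold value is an arbitrary placeholder):

```python
import numpy as np

def invariant_shadow_evidence(cur_inv, ref_inv, thresh=0.1):
    # d(x, y): sum of absolute differences of the invariant features
    # between the current and reference images. Ideally d = 0 in a
    # shadow region; a threshold absorbs the residual noise.
    d = np.abs(cur_inv - ref_inv).sum(axis=-1)
    return d < thresh
```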

Information integration Results of integrating all stages. Once the additional evidence has been extracted, a decision-making step is performed. This final step fuses the different pieces of information: the initial hypothesis is rejected if the rules are not respected, and confirmed otherwise. If the analysis of the photometric invariant color features on the candidate shadow is not successful, the pixel is labeled as a material change. If the analysis is successful, the candidate shadow undergoes further analysis by means of the geometric constraints; this final verification is required to eliminate the last ambiguities. C – color edge map of the invariant features, containing material boundaries for which the shadow hypothesis is weakened; D – integration of shadow evidence from the spectral analyses of (B) and (C); E and F – refinement by means of geometric analysis, providing the shadow line and hidden shadow line (E), and complete shadow contours (F).

Results In video there is a problem with shadows that do not move; also, outdoor scenes are much harder.

References Shadow identification and classification using invariant color models. Elena Salvador, Andrea Cavallaro, Touradj Ebrahimi, 2001. Cast shadow segmentation using invariant color features. Elena Salvador, Andrea Cavallaro and Touradj Ebrahimi, 2004. http://www.mathworks.com/access/helpdesk/help/toolbox/images/morph3.html

The End…

Sobel operator Performs a 2-D spatial gradient measurement on an image, emphasizing regions of high spatial gradient that correspond to edges. Basic Sobel convolution masks: The Sobel operator is typically used to find the approximate absolute gradient magnitude at each point of an input grayscale image. The masks are designed to respond maximally to edges running vertically and horizontally relative to the pixel grid, one mask for each of the two perpendicular orientations. The masks can be applied separately to the input image to produce separate measurements of the gradient component in each orientation (call these Gx and Gy), which can then be combined to find the absolute magnitude and the orientation of the gradient at each point. The gradient magnitude is given by |G| = sqrt(Gx^2 + Gy^2). Note about convolution: convolution is one of the most powerful techniques in all of image processing. It is the modification of a pixel's value on the basis of the values of neighboring pixels. Images are convolved by multiplying each pixel and its neighbors by a numerical matrix, called a kernel. The matrix is moved over each pixel in the image, each pixel under the matrix is multiplied by the corresponding matrix value, the total is summed and normalized, and the central pixel is replaced by the result: C(x,y) = sum(sum(P(i,j)*M(i,j))) / sum(sum(M(i,j)))
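Since the mask image is missing from the transcript, here is a small sketch of the two-mask Sobel computation (plain loops for clarity, not an efficient implementation):

```python
import numpy as np

KX = np.array([[-1, 0, 1],
               [-2, 0, 2],
               [-1, 0, 1]])   # responds to vertical edges
KY = KX.T                     # responds to horizontal edges

def sobel_magnitude(img):
    # |G| = sqrt(Gx^2 + Gy^2) at every interior pixel.
    h, w = img.shape
    out = np.zeros((h, w))
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            patch = img[y - 1:y + 2, x - 1:x + 2]
            gx = float(np.sum(KX * patch))
            gy = float(np.sum(KY * patch))
            out[y, x] = (gx * gx + gy * gy) ** 0.5
    return out
```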

Pseudo-convolution kernels in general We can use a pseudo-convolution operator to perform these two steps in one pass. P1 P2 P3 / P4 P5 P6 / P7 P8 P9 Using this kernel the approximate magnitude is given by: |G| = |(P1+2*P2+P3)-(P7+2*P8+P9)| + |(P3+2*P6+P9)-(P1+2*P4+P7)| Often this absolute magnitude is the only output the user sees: the two components of the gradient are conveniently computed and added in a single pass over the input image using the pseudo-convolution operator.
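The approximate-magnitude formula, written out for a single 3x3 neighborhood (P1..P9 in row-major order as on the slide; function name is mine):

```python
def sobel_abs(p):
    # |G| = |(P1+2*P2+P3) - (P7+2*P8+P9)|
    #     + |(P3+2*P6+P9) - (P1+2*P4+P7)|
    (p1, p2, p3), (p4, p5, p6), (p7, p8, p9) = p
    return abs((p1 + 2*p2 + p3) - (p7 + 2*p8 + p9)) \
         + abs((p3 + 2*p6 + p9) - (p1 + 2*p4 + p7))
```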

Morphological dilation of images The state of any given pixel in the output image is determined by applying a rule to the corresponding pixel and its neighbors in the input image. The rule used to process the pixels defines the operation as a dilation or an erosion. Dilation: The value of the output pixel is the maximum value of all the pixels in the input pixel's neighborhood. In a binary image, if any of the pixels is set to the value 1, the output pixel is set to 1. (the neighborhood in this example is the structuring element).
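The dilation rule above, as a plain-Python sketch with a 3x3 all-ones structuring element:

```python
def dilate3x3(img):
    # Output pixel = maximum over the pixel's 3x3 neighborhood,
    # i.e. 1 if any neighbor (or the pixel itself) is set to 1.
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            for dy in (-1, 0, 1):
                for dx in (-1, 0, 1):
                    ny, nx = y + dy, x + dx
                    if 0 <= ny < h and 0 <= nx < w and img[ny][nx]:
                        out[y][x] = 1
    return out
```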

Examples of photometric color invariants (L1,L2,L3) ( Color Based Object Recognition Theo Gevers and Arnold W.M. Smeulders 1999 )