Recovering Intrinsic Images from a Single Image 28/12/05 Dagan Aviv Shadows Removal Seminar

Relies on:
Marshall F. Tappen, William T. Freeman, and Edward H. Adelson. "Recovering Intrinsic Images from a Single Image." IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 27, no. 9, September 2005.
Matt Bell and William T. Freeman. "Learning Local Evidence for Shading and Reflectance." Proc. Int'l Conf. on Computer Vision, 2001.

Motivation: Interpreting real-world images requires distinguishing the different characteristics of the scene. Shading and reflectance are two of the most important such characteristics.

Short Introduction: An image can be decomposed into a shading intrinsic image and a reflectance intrinsic image.

Our Goal: Decompose an input image into its intrinsic images. Simple approaches like band filtering won't help us; for example:

Our Approach: Recover the images using multiple cues. Implicit assumption – surfaces are Lambertian (a good starting point…). Classify image derivatives.

Separating Shadows and Reflectance: As shown in the preceding talk – recovering S and R using derivatives of the input image I.
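
As a quick reminder of that relationship (a sketch under the standard multiplicative model assumed here, with the log step taken from Weiss's method in the preceding talk, not verbatim from these slides):

```latex
% Multiplicative image-formation model and its log-domain, derivative form
I(x,y) = S(x,y)\,R(x,y)
\;\Longrightarrow\;
\log I = \log S + \log R
\;\Longrightarrow\;
f \ast \log I = f \ast \log S + f \ast \log R ,
\quad f = [-1 \;\; 1]
```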

Creating the Intrinsic Image: Building S and R is performed in the same manner as shown in the last talk (Weiss). Notation: ⊛ – convolution operator; imgX – S or R; F – estimated derivative; f – derivative filter ([-1 1] in our case); f(-x,-y) – reversed copy of f(x,y).
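
Below is a minimal sketch of that reconstruction step, assuming the Weiss-style pseudo-inverse (least-squares) solution and FFT-based circular convolution; the function name and boundary handling are my own choices, not the authors' code.

```python
import numpy as np

def reconstruct_from_derivatives(Fx, Fy, eps=1e-8):
    """Least-squares (pseudo-inverse) reconstruction of an image from its
    estimated x- and y-derivative images, in the spirit of the Weiss-style
    formula on the slide.  Done in the Fourier domain with circular boundary
    handling; this is an illustrative sketch, not the authors' original code."""
    h, w = Fx.shape

    # Derivative filters f = [-1 1] (horizontal and vertical), embedded in
    # full-size kernels so convolution can be done with FFTs.
    fx = np.zeros((h, w)); fx[0, 0] = -1.0; fx[0, 1] = 1.0
    fy = np.zeros((h, w)); fy[0, 0] = -1.0; fy[1, 0] = 1.0

    FX, FY = np.fft.fft2(fx), np.fft.fft2(fy)
    # Multiplying by the conjugate in frequency corresponds to convolving with
    # the reversed filter f(-x, -y) in the spatial domain.
    numer = np.conj(FX) * np.fft.fft2(Fx) + np.conj(FY) * np.fft.fft2(Fy)
    denom = np.abs(FX) ** 2 + np.abs(FY) ** 2
    denom[0, 0] = 1.0  # the mean (DC term) is not constrained by derivatives
    return np.real(np.fft.ifft2(numer / (denom + eps)))
```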

Binary Classification: Assumption – each derivative is caused either by shading or by reflectance. This reduces our problem to a binary classification problem.

Classifying Derivatives: 3 basic phases: 1. Compute image derivatives. 2. Classify each derivative as caused by shading or reflectance. 3. Invert the derivatives classified as shading to find the shading image; the reflectance image is found in the same way.

Classifying Derivatives: The classification stage uses two forms of local evidence: 1. color information; 2. statistical regularities of surfaces (gray-scale information).

Color Information: When speaking of diffuse surfaces and lights that have the same color, changes due to shading should affect R, G, and B proportionally.

Color Information: Let c1 and c2 be the RGB vectors of two adjacent pixels. A change due to shading can be represented as c2 = α·c1, where α is a scalar (intensity change).

Color Information: If the change is caused by a reflectance change, the two colors are not related by a simple scaling. After normalizing c1 and c2, their dot product will equal 1 if the change is due to shading (since c2 = α·c1). In practice a threshold is chosen manually: if the dot product exceeds the threshold – shading; else – reflectance.

Color Information: The threshold eliminates chromatic artifacts caused, for example, by JPEG compression. The chosen threshold: cos(0.01). When dealing with non-Lambertian surfaces the results are less satisfying.
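
A minimal sketch of this color test. The H x W x 3 array layout, the horizontal pixel pairing, and the function name are illustrative assumptions; the threshold value cos(0.01) follows the slide.

```python
import numpy as np

def classify_color_derivatives(img_rgb, thresh=np.cos(0.01), eps=1e-8):
    """Color evidence test from the slides: normalize the RGB vectors of
    adjacent pixels and threshold their dot product.  Returns True where the
    horizontal derivative looks like shading, False where it looks like a
    reflectance change."""
    c1 = img_rgb[:, :-1, :].astype(float)   # left pixel of each horizontal pair
    c2 = img_rgb[:, 1:, :].astype(float)    # its right neighbour
    c1 /= np.linalg.norm(c1, axis=2, keepdims=True) + eps
    c2 /= np.linalg.norm(c2, axis=2, keepdims=True) + eps
    dot = np.sum(c1 * c2, axis=2)           # equals 1 for a pure intensity change
    return dot > thresh                     # True -> shading, False -> reflectance
```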

Color Information – examples: input image, shading image, reflectance image.

Color Information – examples: Black on white may be interpreted as an intensity change, resulting in misclassification.

Color Information – examples: As before, the face is incorrectly placed in the shading image, and the hat specularity is added to the reflectance image.

Gray-scale Information: Shading patterns have a unique appearance. We examine ROIs surrounding each derivative in a gray-scale image to find shadow patterns.

Gray-scale Classifier: The basic feature F is a non-linear function of the response I ⊛ w taken at the center of the ROI, where I is the ROI (patch) surrounding a derivative and w is a linear filter.
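
A small sketch of computing this feature, assuming the non-linearity is the absolute value of the filter response (the slide does not spell the non-linearity out, so treat that as an assumption).

```python
import numpy as np
from scipy.signal import convolve2d

def grayscale_feature(patch, w):
    """Basic gray-scale feature: the response of the ROI (patch) I to a linear
    filter w, evaluated at the ROI center.  The absolute value used here as
    the non-linearity is an assumption."""
    resp = convolve2d(patch, w, mode="same", boundary="symm")
    cy, cx = patch.shape[0] // 2, patch.shape[1] // 2
    return abs(resp[cy, cx])
```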

Training the Classifier: Two tasks are involved: 1. choosing the filter set, which builds the features; 2. training the classifier on the features.

AdaBoost (in general): Both tasks are achieved by the chosen classifier – AdaBoost, first introduced in 1995 by Freund and Schapire. The main idea is to boost a "weak classifier" – a classifier with error only slightly less than 0.5.

AdaBoost: The classifier is trained on a training set of labeled pairs (x, y), where each label y comes from the binary domain Y = {-1, +1}. In our case X is a set of synthetic images of shading and reflectance; -1 stands for reflectance and +1 for shading.

AdaBoost: AdaBoost also gets the weak classifier as an input. The learning stage is iterative: at each round t, AdaBoost re-weights the training set and runs the weak classifier. The weak classifier's job is to find a hypothesis h whose weighted error is less than 1/2.

AdaBoost: Elements that were misclassified get a higher weight for the next iteration. AdaBoost also weights each classifier's vote. At the end, once the desired number of rounds has run, all the weighted votes are gathered to compute the final strong classification H.
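
A compact sketch of the AdaBoost loop just described, in its generic textbook form rather than the authors' implementation; `weak_learner` is a placeholder supplied by the caller.

```python
import numpy as np

def adaboost_train(X, y, weak_learner, rounds):
    """Generic discrete AdaBoost loop as summarized on the slides.
    X: training inputs, y: labels in {-1, +1}, weak_learner(X, y, D): returns
    a hypothesis h with h(X) in {-1, +1}, trained against the weights D."""
    n = len(y)
    D = np.full(n, 1.0 / n)                    # start with uniform weights
    hypotheses, alphas = [], []
    for t in range(rounds):
        h = weak_learner(X, y, D)              # weak hypothesis for round t
        pred = h(X)
        err = np.clip(np.sum(D[pred != y]), 1e-10, 1 - 1e-10)
        alpha = 0.5 * np.log((1 - err) / err)  # vote weight for this hypothesis
        D *= np.exp(-alpha * y * pred)         # misclassified examples get heavier
        D /= D.sum()
        hypotheses.append(h)
        alphas.append(alpha)

    def strong_classifier(X_new):
        votes = sum(a * h(X_new) for a, h in zip(alphas, hypotheses))
        return np.sign(votes)                  # final weighted majority vote H
    return strong_classifier
```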

AdaBoost – toy example Original Training Set

AdaBoost – toy example Round 1

AdaBoost – toy example Round 2

AdaBoost – toy example Round 3

AdaBoost – toy example Final result

AdaBoost – Matlab source: See the following archive for an AdaBoost Matlab implementation (and more).

Our AdaBoost: The weak classifier is a threshold test on the basic feature: it outputs one label if F(I, w) exceeds a threshold θ and the other label otherwise (recall the feature F defined above).

Our AdaBoost: So AdaBoost needs to choose the w's, the thresholds θ, and the vote weights α. w comes from a set of filters constructed from 1st and 2nd derivatives of Gaussian filters. The training set (from which the patches I are derived) is a set of synthetic images.
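
A sketch of how one boosting round could pick a filter and threshold by weighted error; the exhaustive search and the (filter, threshold, polarity) parameterization are assumptions for illustration, not the original search procedure.

```python
import numpy as np

def best_stump(features, y, D):
    """One boosting round's search for a weak classifier: pick the filter
    column, threshold, and polarity with the lowest weighted error.
    `features` is an (examples x filters) matrix of feature values, e.g. the
    gray-scale features above."""
    n, m = features.shape
    best = (None, None, None, np.inf)          # (filter index, theta, polarity, error)
    for j in range(m):
        for theta in np.unique(features[:, j]):
            for polarity in (+1, -1):
                pred = np.where(polarity * (features[:, j] - theta) > 0, 1, -1)
                err = np.sum(D[pred != y])
                if err < best[3]:
                    best = (j, theta, polarity, err)
    return best
```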

Our AdaBoost: The training set is evenly divided between shading examples and reflectance examples.

Our AdaBoost: The shading images were lit from the same direction. An assumption – when an input image is given, the light direction is known. Preprocessing – rotate the input image so the light matches the light direction in the training set.

Gray-scale Information – examples

The shading image is missing some edges; these edges didn't appear in the training set.

Gray-scale Information – examples

Misclassification of the cheeks – due to weak gradients

Combining Information: The final result is based on a conditional-probability calculation. Assumption: the two classifiers (color and gray-scale) are statistically independent. Bayes' rule then gives Pr(shading | color, gray) ∝ Pr(color | shading) · Pr(gray | shading) · Pr(shading). Each Pr is computed with some modifications to the classifiers.
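
A tiny sketch of this combination for a single derivative; the argument names and the uniform 0.5 prior are illustrative assumptions.

```python
def combine_evidence(p_color_given_s, p_gray_given_s,
                     p_color_given_r, p_gray_given_r, prior_s=0.5):
    """Combine color and gray-scale evidence for one derivative with Bayes'
    rule under the independence assumption from the slide.  Returns the
    posterior probability that the derivative is caused by shading."""
    s = p_color_given_s * p_gray_given_s * prior_s
    r = p_color_given_r * p_gray_given_r * (1.0 - prior_s)
    return s / (s + r)
```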

Combining Information – the pillow example

Handling Ambiguities: Ambiguities – in the previous slide, for example, the center of the mouth. (Input image, shading example, reflectance example.)

Handling Ambiguities: Derivatives that lie on the same contour should have the same classification. The mouth corners are correctly classified as reflectance.

Handling Ambiguities: Areas where the classification is clear should propagate their classification to disambiguate other areas. This is achieved with a Markov Random Field, which generalizes Markov chains.

Handling Ambiguities: First a potential function is applied to the image to find the "most interesting" gradients. Then the propagation starts from points having both strong derivatives and no ambiguity.
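
A deliberately simplified stand-in for this propagation step (the real system uses a Markov Random Field solved with belief propagation); the neighbour-averaging scheme and every threshold below are illustrative assumptions only.

```python
import numpy as np

def propagate_labels(prob_shading, grad_mag, conf=0.9, mag=0.1, iters=10):
    """Keep the labels of confident, strong-gradient derivatives fixed and
    repeatedly pull ambiguous ones toward the average of their 4-neighbours.
    A toy illustration of propagating unambiguous classifications, not the
    paper's MRF inference."""
    p = prob_shading.copy()
    anchored = ((p > conf) | (p < 1.0 - conf)) & (grad_mag > mag)
    for _ in range(iters):
        neigh = (np.roll(p, 1, 0) + np.roll(p, -1, 0) +
                 np.roll(p, 1, 1) + np.roll(p, -1, 1)) / 4.0
        p = np.where(anchored, p, 0.5 * p + 0.5 * neigh)
    return p
```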

Final Results

Thank you The End