A Convex Optimization Approach for Depth Estimation Under Illumination Variation — Wided Miled, Student Member, IEEE, Jean-Christophe Pesquet, Senior Member, IEEE, and Michel Parent


A Convex Optimization Approach for Depth Estimation Under Illumination Variation Wided Miled, Student Member, IEEE, Jean-Christophe Pesquet, Senior Member, IEEE, and Michel Parent

2 Abstract
The recovery of the depth information of a scene from stereo images is an active area of research in computer vision. The need for an accurate and dense depth map arises in many applications such as autonomous navigation, 3-D reconstruction and 3-D television. Illumination changes cause serious problems in many computer vision applications. A spatially varying multiplicative model is developed to account for brightness changes induced between the left and right views.

3 I. INTRODUCTION
Feature-based methods: extract salient features from both images, such as edges, segments, or curves. They are accurate, but an interpolation step is required if a dense map is desired.
Region-based methods: have the advantage of directly generating dense disparity estimates by correlation over local windows, but are less accurate.
Many global stereo algorithms have, therefore, been developed based on dynamic programming, graph cuts, or belief propagation. Variational approaches have also been very effective for solving the matching problem globally.

4 II. MODEL FOR ILLUMINATION VARIATIONS
The intensity of an image pixel:
I_i(s) = ρ(s) R_i(n(s)), for i ∈ {l, r}
Assuming that the stereo images have been rectified, so that the geometry of the cameras can be considered as horizontal epipolar, and using the image irradiance equation:
I_r(x − u(s), y) = v(s) I_l(s)
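The multiplicative model above can be sanity-checked numerically. The following minimal sketch (not from the paper's code; all sizes and names are illustrative assumptions) synthesizes a rectified right view that satisfies I_r(x − u(s), y) = v(s) I_l(s) for a constant integer disparity and a spatially varying illumination field, then recovers v by division:

```python
import numpy as np

rng = np.random.default_rng(0)
h, w, u = 8, 16, 3                            # image size and true disparity (assumed values)
I_l = rng.uniform(50, 200, (h, w))            # left image intensities
v = 1.0 + 0.1 * rng.standard_normal((h, w))   # multiplicative illumination field

# Build the right view column by column: I_r(x - u, y) = v(s) * I_l(s)
I_r = np.zeros_like(I_l)
for y in range(h):
    for x in range(u, w):
        I_r[y, x - u] = v[y, x] * I_l[y, x]

# Where the warp is defined, dividing the views recovers the illumination field
v_est = I_r[:, :w - u] / I_l[:, u:]
```

Dividing out I_l recovers v exactly here because the synthetic data obeys the model with no noise; on real images v would have to be estimated jointly with u, which is what the paper's criterion does.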

5 II. MODEL FOR ILLUMINATION VARIATIONS
The disparity u and illumination v can be computed by minimizing the following cost function based on the sum of squared differences (SSD) metric:
Ĵ(u, v) = Σ_{s∈D} [ v(s) I_l(s) − I_r(x − u(s), y) ]²,  D ⊂ N²
This expression is nonconvex with respect to the displacement field u. Thus, to avoid a nonconvex minimization, we assume that I_r is a differentiable function and we consider a first-order Taylor expansion of the nonlinear term I_r(x − u, y) around an initial estimate ū, as follows:
I_r(x − u, y) ≈ I_r(x − ū, y) − (u − ū) ∇I_r^x(x − ū, y)
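The quality of this linearization depends on u staying close to ū. As a sketch (using sin(x) as a smooth, differentiable stand-in for an image row; the displacement values are illustrative), the first-order expansion tracks the exactly warped signal to second-order accuracy in (u − ū):

```python
import numpy as np

x = np.linspace(0.0, 2.0 * np.pi, 1000)
u_bar, u = 0.30, 0.32          # initial estimate and true displacement (assumed)

# Exact warped term I_r(x - u) versus its Taylor expansion around u_bar:
#   I_r(x - u) ≈ I_r(x - u_bar) - (u - u_bar) * dI_r/dx (x - u_bar)
exact  = np.sin(x - u)
approx = np.sin(x - u_bar) - (u - u_bar) * np.cos(x - u_bar)

max_err = np.max(np.abs(exact - approx))   # bounded by (u - u_bar)**2 / 2 here
```

The error scales with (u − ū)², which is why the method needs the block-matching initialization described in Section III to provide a reasonable ū.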

6 II. MODEL FOR ILLUMINATION VARIATIONS
To simplify the notations:
Ĵ(u, v) ≈ Σ_{s∈D} [ L_1(s) u(s) + L_2(s) v(s) − r(s) ]²
where
L_1(s) = ∇I_r^x(x − ū, y),  L_2(s) = I_l(s),  r(s) = I_r(x − ū(s), y) + ū(s) L_1(s)
Our goal is to simultaneously recover u and v. Thus, setting w = (u, v)^T and L = [L_1, L_2], we end up with the following quadratic criterion to be minimized:
J_D(w) = Σ_{s∈D} [ L(s) w(s) − r(s) ]²
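A quick consistency check on these definitions: at w = (ū, v), the residual L(s)w(s) − r(s) should collapse to v(s)I_l(s) − I_r(x − ū, y), i.e. the linearized criterion agrees with the original SSD at the expansion point. The sketch below verifies this algebra on random stand-in data (all arrays are synthetic placeholders for the image quantities):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 64
I_l      = rng.uniform(0.0, 1.0, n)     # stands for I_l(s)
I_r_warp = rng.uniform(0.0, 1.0, n)     # stands for I_r(x - u_bar, y)
grad     = rng.uniform(-1.0, 1.0, n)    # stands for the gradient L_1(s)
u_bar    = rng.uniform(-2.0, 2.0, n)
v        = rng.uniform(0.8, 1.2, n)

L1, L2 = grad, I_l
r = I_r_warp + u_bar * L1               # r(s) as defined on the slide

# At w = (u_bar, v), the u_bar*L1 terms cancel in the residual
J        = np.sum((L1 * u_bar + L2 * v - r) ** 2)
J_direct = np.sum((v * I_l - I_r_warp) ** 2)
```

This confirms the linearized quadratic criterion is exact at the expansion point; it only approximates Ĵ for u ≠ ū.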

7 III. SET THEORETIC ESTIMATION
Find w ∈ S = ∩_{i=1}^{m} S_i such that J(w) = inf J(S), where
J : H → ]−∞, +∞] is a convex function,
(S_i)_{1≤i≤m} are closed convex sets of H.
Constraint sets can be modelled as level sets:
∀ i ∈ {1, …, m},  S_i = { w ∈ H | f_i(w) ≤ δ_i }
where
∀ i ∈ {1, …, m},  f_i : H → R is a continuous convex function,
(δ_i)_{1≤i≤m} are real-valued parameters.
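The classical way to find a point in an intersection of closed convex sets is to project onto each set cyclically (POCS). The sketch below illustrates the idea on two simple sets, a box and a Euclidean ball (these particular sets, and the numbers, are illustrative assumptions, not the paper's constraint sets):

```python
import numpy as np

def proj_box(w, lo, hi):
    """Projection onto S1 = {w : lo <= w <= hi} (component-wise clipping)."""
    return np.clip(w, lo, hi)

def proj_ball(w, center, radius):
    """Projection onto S2 = {w : ||w - center|| <= radius}."""
    d = np.linalg.norm(w - center)
    return w if d <= radius else center + radius * (w - center) / d

# Start outside both sets and alternate projections until convergence
w = np.array([5.0, -4.0])
for _ in range(200):
    w = proj_ball(proj_box(w, -1.0, 1.0), np.zeros(2), 1.5)

in_box  = bool(np.all(w >= -1.0 - 1e-9) and np.all(w <= 1.0 + 1e-9))
in_ball = bool(np.linalg.norm(w) <= 1.5 + 1e-9)
```

The paper's algorithm is more elaborate (it minimizes J over the intersection rather than merely finding a feasible point), but each constraint set enters precisely through such a projection operator.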

8 III. SET THEORETIC ESTIMATION
A. Global Objective Function (1/2)
The initial disparity estimate ū:
ū(x, y) = arg min_{u∈U} Σ_{(i,j)∈B} [ β_{x,y}(u) I_l(x+i, y+j) − I_r(x+i−u, y+j) ]²
where
U ⊂ N is the disparity search set,
B corresponds to the matching block centered at the pixel (x, y),
β_{x,y}(u) is the following least squares estimate of the illumination factor for block B:
β_{x,y}(u) = Σ_{(i,j)∈B} I_l(x+i, y+j) I_r(x+i−u, y+j) / Σ_{(i,j)∈B} I_l(x+i, y+j)²
The initial illumination field ϋ:
ϋ(x, y) = β_{x,y}(ū(x, y))
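This initialization is plain brute-force block matching, except that each candidate disparity u gets its own closed-form illumination factor β(u) before the SSD cost is evaluated. A 1-D sketch (synthetic data; the image size, block width, and true values are assumptions for illustration):

```python
import numpy as np

rng = np.random.default_rng(2)
n, true_u, true_v = 256, 5, 1.15
I_l = rng.uniform(10.0, 100.0, n)
I_r = np.zeros(n)
I_r[:n - true_u] = true_v * I_l[true_u:]     # right view obeys I_r(x-u) = v * I_l(x)

x0, half = 128, 8                             # block centre and half-width
block_l = I_l[x0 - half: x0 + half + 1]

best_u, best_beta, best_cost = None, None, np.inf
for u in range(16):                           # disparity search set U
    block_r = I_r[x0 - half - u: x0 + half + 1 - u]
    # Closed-form least-squares illumination factor for this candidate u
    beta = np.sum(block_l * block_r) / np.sum(block_l ** 2)
    cost = np.sum((beta * block_l - block_r) ** 2)
    if cost < best_cost:
        best_u, best_beta, best_cost = u, beta, cost
```

At the true disparity the residual vanishes and β recovers the true illumination factor, which is exactly why ϋ(x, y) = β_{x,y}(ū(x, y)) is a sensible initial illumination field.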

9 III. SET THEORETIC ESTIMATION
A. Global Objective Function (2/2)
J_{D\O}(w) = Σ_{s∈D\O} [ L(s) w(s) − r(s) ]²
J(w) = J_{D\O}(w) + α Σ_{s∈D} | w(s) − ŵ(s) |²
where
ŵ = (ū, ϋ) is the initial estimate described above,
| · | denotes the Euclidean norm in R²,
α is a positive constant.
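Because each pixel contributes one linear equation in the two unknowns (u(s), v(s)), the data term alone is underdetermined; the α-term anchors the solution to the initial estimate ŵ. At a single pixel the objective is a small ridge-regression problem with a closed-form minimizer. A sketch (the numbers are arbitrary assumptions; this is not the paper's solver, which works globally under the convex constraints):

```python
import numpy as np

L     = np.array([[2.0, 3.0]])    # L = [L1, L2] at one pixel
r     = np.array([4.0])
w_hat = np.array([1.0, 1.0])      # initial estimate (u_bar, v_init)
alpha = 0.5

# Minimizer of (L w - r)^2 + alpha * ||w - w_hat||^2:
#   w* = (L^T L + alpha I)^{-1} (L^T r + alpha w_hat)
A = L.T @ L + alpha * np.eye(2)
w_star = np.linalg.solve(A, (L.T @ r).ravel() + alpha * w_hat)

def J(w):
    return float((L @ w - r)[0] ** 2 + alpha * np.sum((w - w_hat) ** 2))

# Stationarity: the gradient of J vanishes at w*
grad = 2.0 * (L.T @ (L @ w_star - r)).ravel() + 2.0 * alpha * (w_star - w_hat)
```

Note that α > 0 makes A positive definite even though LᵀL alone is rank one, which is the algebraic face of the underdetermination remark above.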

10 III. SET THEORETIC ESTIMATION
B. Convex Constraints
1) Constraints on the Disparity Image:
Total Variation Based Regularization: for a differentiable analog image u defined on a spatial domain Ω,
TV(u) = ∫_Ω | ∇u(s) | ds
where ∇u denotes the gradient of u.
S^a_1 = { (u, v) ∈ H | TV(u) ≤ T_u }
where the superscript a stands for analog constraint sets.
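In the discrete setting the integral becomes a sum of per-pixel gradient magnitudes. The sketch below uses forward differences (one common discretization; the paper may use another) and shows why a TV bound favors piecewise-constant disparity maps: a flat map has zero TV, and a single step edge costs only its length.

```python
import numpy as np

def tv(u):
    """Discrete isotropic total variation with forward differences."""
    gx = np.diff(u, axis=1, append=u[:, -1:])   # horizontal differences
    gy = np.diff(u, axis=0, append=u[-1:, :])   # vertical differences
    return float(np.sum(np.sqrt(gx ** 2 + gy ** 2)))

flat = np.full((16, 16), 3.0)                            # constant disparity
step = np.hstack([np.zeros((16, 8)), np.ones((16, 8))])  # one vertical edge

tv_flat, tv_step = tv(flat), tv(step)   # 0 for flat; 16 (edge length) for step
```

This is why a TV level-set constraint smooths disparity inside objects while still allowing sharp depth discontinuities at their boundaries.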

11 III. SET THEORETIC ESTIMATION
B. Convex Constraints
1) Constraints on the Disparity Image:
Disparity Range Constraint:
S^a_2 = { (u, v) ∈ H | u_min ≤ u ≤ u_max }

12 III. SET THEORETIC ESTIMATION
B. Convex Constraints
1) Constraints on the Disparity Image:
Nagel–Enkelmann Based Regularization:
where
I denotes the 2×2 identity matrix,
r is chosen according to the range of gradient norm values: |∇I| ≪ r in uniform areas, |∇I| ≫ r at edges.
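The slide's formula did not survive transcription; as an assumption about the intended form, the standard Nagel–Enkelmann diffusion tensor is D(∇I) = (∇I⊥ ∇I⊥ᵀ + r² I) / (|∇I|² + 2r²), where ∇I⊥ is the gradient rotated by 90°. The sketch below checks the two regimes quoted on the slide: near-isotropic smoothing in uniform areas, and smoothing only along the edge direction where the gradient is strong.

```python
import numpy as np

def ne_tensor(g, r):
    """Nagel-Enkelmann-style diffusion tensor for image gradient g (assumed form)."""
    gp = np.array([-g[1], g[0]])                    # gradient rotated by 90 degrees
    return (np.outer(gp, gp) + r ** 2 * np.eye(2)) / (g @ g + 2.0 * r ** 2)

r = 1.0
D_flat = ne_tensor(np.array([1e-4, 0.0]), r)   # |grad I| << r: uniform area
D_edge = ne_tensor(np.array([100.0, 0.0]), r)  # |grad I| >> r: strong horizontal gradient
```

In the flat case D ≈ I/2 (isotropic); in the edge case D is nearly rank-one along the edge direction, so the regularizer stops smoothing across depth discontinuities.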

13 III. SET THEORETIC ESTIMATION
B. Convex Constraints
2) Constraints on the Illumination Field:
Tikhonov Based Regularization.
Illumination Range Constraint:
S^a_5 = { (u, v) ∈ H | v_min ≤ v ≤ v_max }
where v_min = 0.8 and v_max = 1.2.
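Both range constraints (S^a_2 for disparity and S^a_5 for illumination) are boxes, so their projection operators are simple component-wise clipping. A sketch using the illumination bounds quoted on the slide (the disparity bounds here are an assumed example):

```python
import numpy as np

u = np.array([-3.0, 5.0, 40.0])     # candidate disparities
v = np.array([0.5, 1.0, 1.6])       # candidate illumination factors

u_proj = np.clip(u, 0.0, 32.0)      # projection onto S^a_2 (assumed u_min, u_max)
v_proj = np.clip(v, 0.8, 1.2)       # projection onto S^a_5 (bounds from the slide)
```

Cheap projections like these are what make the set-theoretic formulation practical: each iteration of the algorithm can enforce every constraint exactly at negligible cost.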

14 IV. EXPERIMENTAL RESULTS
N_B is the total number of pixels in the matching block B.

15 IV. EXPERIMENTAL RESULTS
(x0, y0) is (128, 128); α is the standard deviation of the illumination change.

16 IV. EXPERIMENTAL RESULTS
δ_s is fixed to 1.

17–23 IV. EXPERIMENTAL RESULTS (result figures)

Thank you for listening! The more you learn, the more you know. The more you know, the more you forget. The more you forget, the less you know.