Boundary matting for view synthesis. Samuel W. Hasinoff, Sing Bing Kang, Richard Szeliski. Computer Vision and Image Understanding 103 (2006), 22–32.


Outline Introduction Method Results Conclusion

Introduction New view synthesis has emerged as an important application of 3D stereo reconstruction. Even if a perfect depth map were available, current methods for view interpolation share two major limitations: sampling blur and boundary artifacts. We propose a method called boundary matting, which represents each occlusion boundary as a 3D curve.

Introduction Boundary matting offers several advantages: it is better suited to view synthesis because it avoids the blurring associated with resampling mattes; it performs matting automatically from imperfect stereo data for large-scale opaque objects; it exploits information from matting to refine stereo disparities along occlusion boundaries; it estimates occlusion boundaries to sub-pixel accuracy, suitable for super-resolution or zooming; and its error metric is symmetric with respect to the input images.

Method To model the matting effects at occlusion boundaries, we use the well-known compositing equation C = αF + (1 − α)B, where F and B are the foreground and background colors and α is the fractional foreground coverage. The triangulation matting method operates by observing foreground objects in front of V known backgrounds, giving the linear system C_i = αF + (1 − α)B_i for i = 1, …, V, which can be solved for α and F. In general, the 3D sub-pixel boundary curves lead to different alpha values across viewpoints.
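As a concrete illustration, here is a minimal NumPy sketch of triangulation matting for a single pixel: the compositing equation against V known backgrounds is solved for α and F by linear least squares. The function name and the toy example are ours, not from the paper.

```python
# A hedged sketch (not the paper's code) of per-pixel triangulation matting:
# given a pixel's color C_i observed against V known background colors B_i,
# solve C_i = alpha*F + (1 - alpha)*B_i for the shared alpha and foreground F.
import numpy as np

def triangulation_matting_pixel(C, B):
    """C, B: (V, 3) arrays of observed and known background RGB colors in [0, 1].
    Unknowns are stacked as x = [alpha*F_r, alpha*F_g, alpha*F_b, alpha]."""
    V = C.shape[0]
    A = np.zeros((3 * V, 4))
    b = np.zeros(3 * V)
    for i in range(V):
        for c in range(3):
            row = 3 * i + c
            A[row, c] = 1.0          # coefficient of alpha*F_c
            A[row, 3] = -B[i, c]     # coefficient of alpha
            b[row] = C[i, c] - B[i, c]
    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    alpha = np.clip(x[3], 0.0, 1.0)
    F = x[:3] / max(alpha, 1e-6)     # recover F from alpha*F (unstable when alpha ~ 0)
    return alpha, np.clip(F, 0.0, 1.0)

# Toy example: a 50% transparent gray foreground over red and green backgrounds.
C = np.array([[0.75, 0.25, 0.25], [0.25, 0.75, 0.25]])
B = np.array([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
print(triangulation_matting_pixel(C, B))   # -> alpha = 0.5, F = (0.5, 0.5, 0.5)
```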

Method We assume the occluding contours of the foreground objects are sufficiently sharp relative to both the closeness of the views and the standoff distance of the cameras. We represent each 3D boundary curve as a spline parameterized by control points. In the ideal case, with a Dirac point spread function, the continuous alpha matte for the i-th view is binary: α_i(x) = 1 on the foreground side of the projected boundary curve and 0 elsewhere.
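The sketch below illustrates, under assumptions of ours, how such a curve could be projected into a view and rasterized into an ideal binary matte: it assumes a closed boundary curve, uses scipy's splprep to fit a spline through the given points (not exactly the same as treating them as B-spline control points), and uses OpenCV's fillPoly to mark the foreground side.

```python
# Hedged sketch: project a 3D boundary spline into one view and rasterize the
# ideal (Dirac PSF) binary matte. Assumes a closed curve enclosing the foreground.
import numpy as np
import cv2
from scipy.interpolate import splev, splprep

def ideal_matte(control_pts_3d, P, image_shape, n_samples=200):
    """control_pts_3d: (N, 3) points on a closed occlusion boundary.
    P: (3, 4) camera projection matrix for view i.
    image_shape: (H, W). Returns the binary alpha matte for that view."""
    # Evaluate a closed parametric spline through the points.
    tck, _ = splprep(control_pts_3d.T, s=0, per=True)
    t = np.linspace(0.0, 1.0, n_samples)
    X, Y, Z = splev(t, tck)
    pts_3d = np.stack([X, Y, Z, np.ones_like(X)], axis=0)   # homogeneous, (4, n)
    proj = P @ pts_3d
    xy = (proj[:2] / proj[2]).T                              # (n, 2) pixel coordinates
    # Ideal matte: 1 inside the projected curve (foreground side), 0 elsewhere.
    alpha = np.zeros(image_shape, dtype=np.uint8)
    cv2.fillPoly(alpha, [np.round(xy).astype(np.int32)], 1)
    return alpha.astype(np.float64)
```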

We simulate image blurring due to camera optics and motion by convolving α_i with an isotropic 2D Gaussian of standard deviation σ. The objective function penalizes the discrepancy between each observed image and the composite predicted from the current matte, foreground, and background, summed over all V views, which makes the error metric symmetric with respect to the input images.
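A minimal sketch of these two ingredients, assuming the objective is the squared compositing residual summed over all views (consistent with the symmetric error metric mentioned above, though the paper's exact formulation may include additional terms):

```python
# Hedged sketch: Gaussian PSF model for the matte, plus a squared
# compositing-residual objective summed over all V views.
import numpy as np
from scipy.ndimage import gaussian_filter

def blurred_matte(alpha_ideal, sigma):
    """Convolve the ideal binary matte with an isotropic 2D Gaussian."""
    return gaussian_filter(alpha_ideal, sigma=sigma)

def compositing_objective(images, alphas, foregrounds, backgrounds):
    """images, foregrounds, backgrounds: lists of (H, W, 3) arrays per view;
    alphas: list of (H, W) mattes. Returns the total squared residual."""
    total = 0.0
    for C, a, F, B in zip(images, alphas, foregrounds, backgrounds):
        composite = a[..., None] * F + (1.0 - a[..., None]) * B
        total += np.sum((C - composite) ** 2)
    return total
```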

Method The starting point for boundary matting is an initialization derived from stereo and the attendant camera calibration. The stereo method combines shiftable windows for matching with global minimization using graph cuts for visibility reasoning.
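The shiftable-window, graph-cut stereo used in the paper is not available off the shelf; the sketch below substitutes OpenCV's semi-global matching purely as an illustrative stand-in for obtaining an initial disparity map. Filenames and parameter values are hypothetical.

```python
import cv2

# Stand-in sketch only: the paper's stereo combines shiftable windows with
# graph-cut global minimization; StereoSGBM is used here just to obtain an
# initial disparity map for the reference view.
left = cv2.imread("reference_view.png", cv2.IMREAD_GRAYSCALE)    # hypothetical files
right = cv2.imread("adjacent_view.png", cv2.IMREAD_GRAYSCALE)

sgbm = cv2.StereoSGBM_create(minDisparity=0, numDisparities=64, blockSize=7)
disparity = sgbm.compute(left, right).astype("float32") / 16.0   # SGBM output is fixed-point x16
```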

To extract the initial curves h0 corresponding to occlusion boundaries, we first form a depth discontinuity map by applying a manually selected threshold to the gradient of the disparity map for the reference view.
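A minimal sketch of this thresholding step (the threshold value is hand-picked, as stated above):

```python
import numpy as np

def depth_discontinuity_map(disparity, threshold):
    """Mark pixels whose disparity gradient magnitude exceeds a manually chosen threshold."""
    gy, gx = np.gradient(disparity)
    grad_mag = np.sqrt(gx ** 2 + gy ** 2)
    return grad_mag > threshold

# Usage: boundaries = depth_discontinuity_map(disparity, threshold=1.0)  # per-dataset value
```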

Background (clean plate) estimation For a given boundary pixel, we find potentially corresponding background colors by forward-warping that pixel to all other views. We use a color inconsistency measure to select the corresponding background pixel most likely to consist of pure background color. For each candidate background pixel, we compute its "color inconsistency" as the maximum L2 distance in RGB space between its color and any of its eight-neighbors that are also labeled at background depth.
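The sketch below captures the color inconsistency measure and the final selection; the forward-warping itself (which requires the depth map and camera calibration) is assumed to have already produced the per-view candidates, and the helper names are ours.

```python
import numpy as np

def color_inconsistency(image, bg_mask, x, y):
    """Max L2 RGB distance between pixel (y, x) and its 8-neighbors labeled as background."""
    center = image[y, x].astype(np.float64)
    worst = 0.0
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            if dx == 0 and dy == 0:
                continue
            ny, nx = y + dy, x + dx
            if 0 <= ny < image.shape[0] and 0 <= nx < image.shape[1] and bg_mask[ny, nx]:
                worst = max(worst, np.linalg.norm(center - image[ny, nx]))
    return worst

def clean_plate_color(candidates):
    """candidates: list of (color, inconsistency) pairs gathered by forward-warping a
    boundary pixel into every other view; pick the most consistent (purest) background."""
    color, _ = min(candidates, key=lambda c: c[1])
    return color
```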

Foreground estimation For each pixel, we aggregate the foreground color estimates over all V views for robustness. To do this, we take a weighted average of the per-view estimates.
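A sketch of one plausible implementation: invert the compositing equation per view, then average the per-view estimates, weighting each view by its alpha value since more opaque observations give more reliable foreground colors. The alpha-based weighting is our assumption, not necessarily the paper's exact choice.

```python
import numpy as np

def foreground_estimate(colors, alphas, backgrounds, eps=1e-3):
    """colors, backgrounds: (V, 3) per-view pixel colors; alphas: (V,) per-view mattes.
    Invert C = alpha*F + (1 - alpha)*B per view, then take a weighted average.
    NOTE: weighting by alpha is an assumption for this sketch."""
    estimates, weights = [], []
    for C, a, B in zip(colors, alphas, backgrounds):
        if a > eps:                                   # skip nearly transparent observations
            estimates.append((C - (1.0 - a) * B) / a)
            weights.append(a)
    if not estimates:
        return None
    w = np.asarray(weights)[:, None]
    return np.sum(w * np.asarray(estimates), axis=0) / np.sum(w)
```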

Results For all datasets, we used five input views, with the middle view designated as the reference view for initialization. A typical run for a 300-pixel boundary in five views takes approximately five minutes to complete, converging within 20 iterations.

Results Dataset resolutions: 344×240, 640×486, and 434×380 pixels.


Conclusion For seamless view interpolation, mixed boundary pixels must be resolved into foreground and background components. Boundary matting appears to be a useful tool for addressing this problem in an automatic way. Using 3D curves to model occlusion boundaries is a natural representation that provides several benefits, including the ability to super-resolve the depth maps near occlusion boundaries.