Presentation transcript:

Interactive Matting Christoph Rhemann Supervised by: Margrit Gelautz and Carsten Rother

Matting and compositing

Outline: Introduction and previous approaches; Our matting model; Evaluation strategy.

Matting is ill-posed. It is the inverse process of compositing: given the composite image C, determine F, B, and α, where C_{r,g,b} = α·F_{r,g,b} + (1 − α)·B_{r,g,b}.

Matting is ill-posed. The problem is underconstrained: 7 unknowns (α and the three color channels of F and B) in only 3 equations per pixel: C_r = α·F_r + (1 − α)·B_r, C_g = α·F_g + (1 − α)·B_g, C_b = α·F_b + (1 − α)·B_b.
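As a concrete illustration of the forward direction, here is a minimal NumPy sketch of the compositing step (the array names F, B, and alpha are illustrative, not from the talk):

```python
import numpy as np

def composite(F, B, alpha):
    """Forward compositing: C = alpha * F + (1 - alpha) * B.

    F, B  : float arrays of shape (H, W, 3), foreground and background colors.
    alpha : float array of shape (H, W), values in [0, 1].
    """
    a = alpha[..., None]            # broadcast alpha over the color channels
    return a * F + (1.0 - a) * B

# Toy example: a red foreground composited over a blue background at 30% coverage.
F = np.full((4, 4, 3), [1.0, 0.0, 0.0])
B = np.full((4, 4, 3), [0.0, 0.0, 1.0])
alpha = np.full((4, 4), 0.3)
C = composite(F, B, alpha)          # matting must recover F, B, alpha from C alone
```

Matting inverts this map per pixel, which is why the three color equations cannot pin down the seven unknowns without further assumptions.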

User interaction: the user marks the image either with a trimap (foreground, background, and unknown regions) or with foreground/background scribbles.

Previous approaches. Recall the compositing equation: C = α·F + (1 − α)·B.

Previous approaches. Recall the compositing equation: C = α·F + (1 − α)·B. Closed Form Matting [Levin et al. 06] (illustrated with a plot in RGB color space).

Previous approaches. Recall the compositing equation: C = α·F + (1 − α)·B. Closed Form Matting [Levin et al. 06]. Assumption: the F and B colors within a local window lie on a line in RGB color space.

Previous approaches. Recall the compositing equation: C = α·F + (1 − α)·B. Closed Form Matting [Levin et al. 06]. Assumption: the F and B colors within a local window lie on a line in RGB color space. This lets F and B be eliminated analytically, so alpha can be solved for in closed form.
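The color-line assumption itself is easy to probe numerically. The following sketch (my illustration, not part of the closed-form solver) fits a line to the colors of one window via SVD and reports how much color variance that line explains:

```python
import numpy as np

def color_line_fit(window_colors):
    """Fit a 1-D color line to the RGB colors of one local window.

    window_colors : (N, 3) array of RGB values.
    Returns the line's mean point, its direction, and the fraction of the
    color variance it explains (close to 1 where the assumption holds).
    """
    mean = window_colors.mean(axis=0)
    centered = window_colors - mean
    _, s, vt = np.linalg.svd(centered, full_matrices=False)
    direction = vt[0]                                   # dominant color direction
    explained = s[0]**2 / max(np.sum(s**2), 1e-12)
    return mean, direction, explained
```

When F and B each satisfy this model in every small window, Levin et al. show that alpha becomes a linear function of the image colors inside the window, which is what makes the analytic elimination of F and B possible.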

Previous approaches. Result of Closed Form Matting [Levin et al. 06], shown next to the input image + trimap and the true solution: the result is imperfect, with hairs cut off. Problem: the cost function has a large solution space.

What are the reasons for pixels to be transparent? Segmentation-based matting; defocus blur.

Lens and defocus (slides by Anat Levin): diagram of the lens, its aperture, the focal plane, and the camera sensor, illustrating the point spread function (PSF).

Lens and defocus (slides by Anat Levin): the same diagram with an object added, showing the point spread function it produces on the camera sensor.

What are the reasons for pixels to be transparent? Segmentation-based matting; defocus blur; motion blur (with the PSF for motion blur shown).
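Both kinds of PSF mentioned on these slides can be sketched as small convolution kernels. The shapes below are standard textbook models, not the PSFs estimated in the paper:

```python
import numpy as np

def disk_psf(radius):
    """Defocus PSF: a normalized disk ('pillbox') of the given radius in pixels."""
    r = int(np.ceil(radius))
    y, x = np.mgrid[-r:r + 1, -r:r + 1]
    kernel = (x**2 + y**2 <= radius**2).astype(float)
    return kernel / kernel.sum()

def motion_psf(length):
    """Horizontal motion-blur PSF: a normalized 1-pixel-high line of the given length."""
    kernel = np.ones((1, int(length)))
    return kernel / kernel.sum()
```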

What are the reasons for pixels to be transparent? Segmentation-based matting; defocus blur; motion blur; discretization.

What are the reasons for pixels to be transparent? Segmentation-based matting; defocus blur; motion blur; discretization; translucency. Observation: apart from translucency, mixed pixels are caused by the camera's Point Spread Function (PSF).

Model for alpha. Basic idea: model alpha as the convolution of a binary segmentation with the PSF (observed alpha = binary segmentation convolved with PSF). Approach taken in [Rhemann et al. 08]: use this model as a prior within the framework of [Levin et al. 06] (illustrated on an input image + trimap).
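A minimal sketch of this alpha model, using SciPy's FFT convolution and a crude box kernel as a stand-in for the camera PSF (the paper uses the model as a prior inside an optimization, not as a stand-alone formula):

```python
import numpy as np
from scipy.signal import fftconvolve

def alpha_from_segmentation(binary_seg, psf):
    """Model alpha as a binary segmentation blurred by the camera PSF."""
    alpha = fftconvolve(binary_seg.astype(float), psf, mode='same')
    return np.clip(alpha, 0.0, 1.0)

# Toy example: a hard vertical edge turns into a smooth alpha ramp after blurring.
seg = np.zeros((32, 32))
seg[:, 16:] = 1.0
psf = np.ones((5, 5)) / 25.0        # crude stand-in for the true camera PSF
alpha = alpha_from_segmentation(seg, psf)
```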

Matting process: starting from the input image, compute an initial alpha using [Wang et al. '07] (the result is imperfect); initialize the PSF and deblur the alpha to obtain a deblurred (sparse) alpha; binarize the (sparse) alpha using a gradient-preserving MRF prior; iterate these steps a few times.
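Read as pseudocode, the loop on this slide looks roughly as follows. Every helper name is a hypothetical placeholder for one of the paper's components, not a real library call:

```python
def interactive_matting(image, trimap, n_iters=3):
    # All helpers below are hypothetical placeholders for the paper's stages.
    alpha = initial_alpha(image, trimap)              # e.g. [Wang et al. '07]; imperfect
    for _ in range(n_iters):
        psf = estimate_psf(image, alpha)              # initialize / refine the PSF
        sharp = deconvolve_alpha(alpha, psf)          # deblurred (sparse) alpha
        seg = binarize_with_mrf(sharp)                # gradient-preserving MRF prior
        alpha = solve_alpha(image, trimap, seg, psf)  # alpha under the convolution prior
    return alpha
```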

Matting process: the binarized (sparse) alpha obtained with the gradient-preserving MRF prior serves as a segmentation prior; the final alpha is shown next to the ground truth.

Comparison: result of [Levin et al. '06] on the input image and trimap.

Comparison: result of [Wang et al. '07] on the same input image and trimap.

Comparison: result of [Rhemann et al. '08] on the same input image and trimap.

Comparison (close-up): input image + trimap, [Levin et al. '06], [Levin et al. '07], [Wang et al. '07], [Rhemann et al. '08], and the ground truth alpha.

Evaluation of matting algorithms. How to compare the performance of algorithms? Either by showing some qualitative results, or by quantitative evaluation against reference solutions.

Evaluation of matting algorithms. Key factors for a good quantitative evaluation: a ground truth dataset, online evaluation, and perceptual error functions.

Ground truth dataset: 35 natural images, high resolution and high quality. Ground truth obtained with Triangulation Matting [Smith, Blinn 96]: photograph the object against 2 different backgrounds, which yields the true solution to the matting problem (shown: input image, ground truth, zoom-in).
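Triangulation matting has a simple per-pixel solution once the two backgrounds are known: from C_i = α·F + (1 − α)·B_i it follows that C1 − C2 = (1 − α)·(B1 − B2), so (1 − α) is a per-pixel least-squares fit across the color channels. A NumPy sketch (array names are illustrative):

```python
import numpy as np

def triangulation_matting(C1, C2, B1, B2, eps=1e-6):
    """Triangulation matting in the spirit of [Smith, Blinn 96].

    C1, C2 : photographs of the object over known backgrounds B1, B2.
    All inputs are float arrays of shape (H, W, 3).
    """
    dC = C1 - C2
    dB = B1 - B2
    # Least-squares estimate of (1 - alpha) over the three color channels.
    one_minus_alpha = np.sum(dC * dB, axis=2) / np.maximum(np.sum(dB * dB, axis=2), eps)
    alpha = np.clip(1.0 - one_minus_alpha, 0.0, 1.0)
    alpha_F = C1 - (1.0 - alpha)[..., None] * B1    # premultiplied foreground
    return alpha, alpha_F
```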

Online evaluation: the data and evaluation scripts are available online. Advantages: results can be investigated in detail, and novel results can be uploaded.

Perceptually motivated error functions. Motivation: simple metrics are not always correlated with visual quality; for example, two results on the same input have SAD errors of 1215 and 806, yet the SAD ranking does not reflect which one looks better.

Perceptually motivated error functions. We develop error measures for two properties: connectivity of the foreground object and the gradient of the alpha matte (example results with SAD 312 and SAD 83).
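For illustration, here are hedged versions of a plain SAD error and a gradient-based error (the benchmark's exact formulations may differ in details such as filter choices and normalization):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def sad_error(alpha, alpha_gt, unknown_mask):
    """Sum of absolute differences over the trimap's unknown region.

    alpha, alpha_gt : float arrays of shape (H, W); unknown_mask : boolean (H, W).
    """
    return np.abs(alpha - alpha_gt)[unknown_mask].sum()

def gradient_error(alpha, alpha_gt, unknown_mask, sigma=1.4):
    """Gradient-based error: compares smoothed gradient magnitudes, which
    penalizes over-smoothed or over-sharpened mattes more than plain SAD."""
    gx, gy = np.gradient(gaussian_filter(alpha, sigma))
    gx_t, gy_t = np.gradient(gaussian_filter(alpha_gt, sigma))
    diff = np.hypot(gx, gy) - np.hypot(gx_t, gy_t)
    return (diff ** 2)[unknown_mask].sum()
```

The connectivity measure additionally penalizes foreground pieces that are disconnected from the main object, which is harder to capture in a few lines and is omitted here.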

Perceptually motivated error functions. User study: the goal is to infer the visual quality of image compositions; the task is to rank them according to how realistic they appear (stimuli contain gradient artifacts and connectivity artifacts).

Perceptually motivated error functions: correlation of the error measures to the average user ranking.
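A rank correlation such as Spearman's rho is a natural way to quantify this agreement. The numbers below are invented purely to show the computation:

```python
import numpy as np
from scipy.stats import spearmanr

# Hypothetical data: scores of an error measure for five results, and the
# average rank users assigned to the same results in a study.
error_scores = np.array([0.12, 0.35, 0.08, 0.51, 0.27])
user_ranks   = np.array([2.1,  3.8,  1.5,  4.6,  3.0])

rho, p_value = spearmanr(error_scores, user_ranks)
print(f"Spearman rank correlation: {rho:.2f} (p = {p_value:.3f})")
```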

Conclusions: the model for alpha overcomes ambiguities; the model-based algorithm performs better than competitors; a perceptually motivated evaluation was introduced. Message to you: evaluation of your algorithm is important; use ground truth data to make quantitative comparisons, use a large dataset, and use a training / test split.

Previous approaches. Recall the compositing equation: C = α·F + (1 − α)·B. Data-driven approaches (e.g. [Wang et al. 07]): model the color distributions of F and B (from the user-defined trimap) and ask whether the observed color is more likely under the F model or the B model; this likelihood is then used within the framework of [Levin et al. 06].
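One way to realize such a color cue is with Gaussian mixture models; this is a sketch in the spirit of data-driven matting, not the exact sampling-based formulation of [Wang et al. 07]:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def color_likelihood_ratio(image, fg_mask, bg_mask, n_components=5):
    """Per-pixel log-likelihood ratio of foreground vs. background color models.

    image            : float array of shape (H, W, 3).
    fg_mask, bg_mask : boolean arrays (H, W) marking definite F / B trimap pixels.
    """
    pixels = image.reshape(-1, 3)
    gmm_f = GaussianMixture(n_components=n_components).fit(pixels[fg_mask.ravel()])
    gmm_b = GaussianMixture(n_components=n_components).fit(pixels[bg_mask.ravel()])
    log_ratio = gmm_f.score_samples(pixels) - gmm_b.score_samples(pixels)
    return log_ratio.reshape(image.shape[:2])   # > 0: color better explained by F
```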

Previous approaches. Result of the data-driven approach [Wang et al. 07], shown next to the input image + trimap, [Levin et al. 06], and the true solution: hair is better captured, but there are many artifacts in the background.