Depth Aware Inpainting for Novel View Synthesis Jayant Thatte

Presentation transcript:

Depth Aware Inpainting for Novel View Synthesis
Jayant Thatte, Jean-Baptiste Boin

Motivation: Depth-based novel view synthesis often leaves missing pixels (disocclusions) in the output image. Inpainting these regions without accounting for depth produces an incorrect final result. In this work, we explore depth-aware inpainting to fill these missing regions.
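The missing pixels come directly from the depth-based rendering step. The sketch below is a minimal illustration, not the authors' implementation, of forward-warping a texture-plus-depth image to a horizontally shifted viewpoint with a z-buffer; the pixels that receive no source pixel form the hole mask that inpainting must later fill. The function and parameter names (forward_warp, focal, baseline) are illustrative assumptions.

```python
# Minimal sketch (assumed, not the authors' code): forward-warp a
# texture-plus-depth image to a camera shifted by `baseline` along x.
# Disoccluded pixels that receive no source pixel stay marked as holes.
import numpy as np

def forward_warp(image, depth, focal, baseline):
    """image: (H, W, 3) color image; depth: (H, W) positive depth map;
    focal: focal length in pixels; baseline: shift in depth units.
    Returns the warped image and a boolean hole mask."""
    h, w = depth.shape
    warped = np.zeros_like(image)
    zbuf = np.full((h, w), np.inf)
    hole = np.ones((h, w), dtype=bool)

    # Rectified-stereo assumption: disparity is purely horizontal.
    disparity = focal * baseline / np.maximum(depth, 1e-6)
    ys, xs = np.mgrid[0:h, 0:w]
    xt = np.round(xs - disparity).astype(int)      # target column
    valid = (xt >= 0) & (xt < w)

    # Z-buffering: nearer surfaces overwrite farther ones at the same target pixel.
    for y, x_src, x_dst, z in zip(ys[valid], xs[valid], xt[valid], depth[valid]):
        if z < zbuf[y, x_dst]:
            zbuf[y, x_dst] = z
            warped[y, x_dst] = image[y, x_src]
            hole[y, x_dst] = False

    return warped, hole
```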

Goal and Methodology

Goal: Inpaint the missing regions in novel views synthesized using depth-based rendering and compare the results with ground truth.

Plan of Action:
- Use the texture-plus-depth images from the Middlebury Stereo Dataset to synthesize novel views
- Use two different algorithms, Markov random fields and convolutional neural networks, to inpaint the missing regions
- Evaluate against the ground-truth views (also available in the dataset)
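The slides do not state which metric is used for the ground-truth comparison. As an illustrative assumption, the sketch below computes PSNR restricted to the pixels that were missing after warping, so the score reflects only the inpainted regions rather than the pixels copied directly from the source view.

```python
# Evaluation sketch (assumed metric, not stated in the slides): PSNR of the
# inpainted novel view against the ground-truth view, over hole pixels only.
import numpy as np

def psnr_on_holes(inpainted, groundtruth, hole_mask, peak=255.0):
    """hole_mask: (H, W) boolean mask of pixels that were missing after warping."""
    diff = inpainted.astype(np.float64) - groundtruth.astype(np.float64)
    mse = np.mean(diff[hole_mask] ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)
```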

Dataset and Initial Results

Middlebury Stereo Dataset (Scharstein et al. 2014): 23 indoor scenes with rectified stereo images plus depth (2880 x 1988).

Initial observations:
- Depth-aware inpainting fills the missing regions with background texture
- Foreground objects bleed into the background when inpainting without depth
- Method 1 produces blurrier output than Method 2
- Ground-truth evaluations will be completed before the final report

[Figure: example results showing Input, Without Depth, Using Depth, Ground truth, and the outputs of Method 1 and Method 2]
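To illustrate why depth awareness keeps foreground texture out of disoccluded regions, here is a deliberately naive depth-aware fill. It is not the MRF or CNN method used in the project; it simply copies color only from valid neighbours whose depth is close to the deepest (background) depth in a local window, which is the intuition behind filling holes with background texture.

```python
# Naive depth-aware fill (illustrative assumption, not the project's method):
# for each hole pixel, average colors of nearby valid pixels whose depth is
# near the deepest depth in the window, so foreground pixels are excluded.
import numpy as np

def naive_depth_aware_fill(image, depth, hole, win=7, tol=0.05):
    """image: (H, W, 3); depth: (H, W), larger = farther; hole: (H, W) bool."""
    out = image.copy().astype(np.float64)
    h, w = depth.shape
    r = win // 2
    for y, x in zip(*np.nonzero(hole)):
        y0, y1 = max(0, y - r), min(h, y + r + 1)
        x0, x1 = max(0, x - r), min(w, x + r + 1)
        d = depth[y0:y1, x0:x1]
        valid = ~hole[y0:y1, x0:x1]
        if not valid.any():
            continue                                   # window is all hole
        d_bg = d[valid].max()                          # deepest (background) depth
        bg = valid & (d >= (1.0 - tol) * d_bg)         # keep background pixels only
        out[y, x] = out[y0:y1, x0:x1][bg].mean(axis=0) # average their colors
    return out.astype(image.dtype)
```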