Depth Aware Inpainting for Novel View Synthesis
Jayant Thatte, Jean-Baptiste Boin

Motivation: Depth-based novel view synthesis often leaves missing pixels (disocclusions) in the output image, as illustrated in the sketch below. Inpainting these regions without accounting for depth produces an incorrect final result. In this work, we explore using depth-aware inpainting to fill these missing regions.
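To show where the missing pixels come from, here is a minimal sketch of depth-based rendering, assuming a rectified stereo setup so that a novel view amounts to a horizontal disparity shift. The function name `forward_warp` and the `baseline_scale` parameter are our own illustrative choices, not part of the poster.

```python
import numpy as np

def forward_warp(image, disparity, baseline_scale=1.0):
    """Warp `image` to a novel view; return the warp and a hole mask.

    Assumes rectified stereo: each pixel shifts horizontally by its
    (scaled) disparity. Target pixels that no source pixel lands on
    are the missing regions that must be inpainted afterwards.
    """
    h, w = disparity.shape
    warped = np.zeros_like(image)
    disp_buf = np.full((h, w), -np.inf)  # z-buffer; larger disparity = closer
    hole_mask = np.ones((h, w), dtype=bool)
    ys, xs = np.mgrid[0:h, 0:w]
    xt = np.round(xs + baseline_scale * disparity).astype(int)
    valid = (xt >= 0) & (xt < w)
    for y, x, x_new in zip(ys[valid], xs[valid], xt[valid]):
        d = disparity[y, x]
        if d > disp_buf[y, x_new]:        # keep the nearest surface
            disp_buf[y, x_new] = d
            warped[y, x_new] = image[y, x]
            hole_mask[y, x_new] = False
    return warped, hole_mask
```

Disocclusion holes appear next to depth discontinuities: foreground pixels shift farther than the background behind them, uncovering regions the source view never saw.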
Goal and Methodology

Goal: Inpaint the missing regions in novel views synthesized using depth-based rendering and compare the results against ground truth.

Plan of action:
- Use the texture-plus-depth images from the Middlebury Stereo Dataset to synthesize novel views
- Use two different algorithms – Markov random fields (MRF) and convolutional neural networks (CNN) – to inpaint the missing regions (a simplified depth-aware baseline is sketched below)
- Evaluate against the ground-truth views (also available in the dataset)
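The toy sketch below illustrates the depth-aware idea in its simplest form; it is not the MRF or CNN method itself, but a baseline heuristic: each disocclusion hole is filled from the background side, as judged by the depths of the known pixels flanking it. All names are illustrative, and `depth` is assumed to store distance (larger = farther).

```python
import numpy as np

def fill_holes_background(image, depth, hole_mask):
    """Fill each horizontal run of hole pixels from its background side."""
    out = image.copy()
    h, w = hole_mask.shape
    for y in range(h):
        x = 0
        while x < w:
            if not hole_mask[y, x]:
                x += 1
                continue
            start = x
            while x < w and hole_mask[y, x]:   # scan to the end of the run
                x += 1
            left, right = start - 1, x          # flanking known pixels
            if left < 0 and right >= w:
                continue                        # whole row missing; skip
            d_left = depth[y, left] if left >= 0 else -np.inf
            d_right = depth[y, right] if right < w else -np.inf
            src = left if d_left >= d_right else right  # farther = background
            out[y, start:x] = image[y, src]     # propagate background texture
    return out
```

Copying only from the background side is what prevents foreground objects from bleeding into the disoccluded region; the MRF and CNN methods refine this idea with patch statistics and learned texture, respectively.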
Dataset and Initial Results

Middlebury Stereo Dataset (Scharstein 2014): 23 indoor scenes with rectified stereo images + depth (2880 x 1988).

Initial observations:
- Depth-aware inpainting fills missing regions with background texture
- Foreground objects bleed into the background when inpainting without depth
- Method 1 produces blurrier output than Method 2
- Ground-truth evaluations will be completed before the final report (a metric sketch follows below)

[Figure: comparison panels – Input, W/o Depth, Using Depth, Ground truth; Result: Method 1, Result: Method 2]
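A sketch of the planned ground-truth comparison, assuming the inpainted view and the ground-truth view are aligned uint8 RGB arrays of equal size. PSNR and SSIM are our assumed metrics here; the poster does not name specific ones.

```python
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def evaluate(inpainted, ground_truth):
    """Compare an inpainted novel view against the ground-truth view."""
    psnr = peak_signal_noise_ratio(ground_truth, inpainted)
    ssim = structural_similarity(ground_truth, inpainted, channel_axis=-1)
    return psnr, ssim
```

Restricting the metrics to the hole mask (rather than the full frame) would isolate inpainting quality from rendering quality, since pixels outside the holes are identical across methods.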