Discussion: Research and questions in neural methods for material acquisition
Valentin Deschaintre, 20 November 2019


Discussion: Research and questions in neural methods for material acquisition

Slide diagram: a 256 x 256 x 3 input photograph is fed to a U-Net, which outputs 256 x 256 x 9 maps: normal, diffuse, roughness and specular.

Good evening. I have been working on neural material acquisition during my PhD, on new methods for material acquisition with deep learning. This is a good time to take a step back and have a discussion about the limitations of these methods and what would be interesting to see next. I will quickly present the existing methods for neural acquisition, point out some limitations I see, and then open the discussion; I can also go into the individual methods in more detail if you have questions. But a first question could be: why deep learning?
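To make the input/output diagram concrete, here is a minimal numpy sketch of how a 9-channel network output could be split back into the four maps. The channel layout assumed here (2 normal + 3 diffuse + 1 roughness + 3 specular, with the normal z component reconstructed from x and y) is one plausible convention, not necessarily the exact one used by the networks in the talk.

```python
import numpy as np

def split_maps(net_output):
    """Split a 256 x 256 x 9 network output into SVBRDF maps.

    Assumed layout: channels 0-1 = normal xy (in [0, 1], remapped to
    [-1, 1]), 2-4 = diffuse RGB, 5 = roughness, 6-8 = specular RGB.
    """
    assert net_output.shape == (256, 256, 9)
    n_xy = net_output[:, :, 0:2] * 2.0 - 1.0
    # Reconstruct z so the normal has unit length.
    n_z = np.sqrt(np.clip(1.0 - np.sum(n_xy ** 2, axis=-1, keepdims=True),
                          0.0, 1.0))
    normal = np.concatenate([n_xy, n_z], axis=-1)
    diffuse = net_output[:, :, 2:5]     # diffuse albedo (RGB)
    roughness = net_output[:, :, 5:6]   # scalar roughness
    specular = net_output[:, :, 6:9]    # specular albedo (RGB)
    return normal, diffuse, roughness, specular

# Smoke test: a constant 0.5 output encodes a flat surface, normal (0, 0, 1).
out = np.full((256, 256, 9), 0.5)
normal, diffuse, roughness, specular = split_maps(out)
```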

Why deep learning?
"Pocket Reflectometry", Ren et al., 2011
"Reflectance Modeling by Neural Texture Synthesis", Aittala et al., 2016
"Polarization Imaging Reflectometry in the Wild", Riviere et al., 2017

Lightweight acquisition from only a few images is ill posed. Historically, methods made assumptions about the materials or the lighting. A few examples: a matrix of known reference materials (Ren et al.), self-repetitive materials (Aittala et al.), or the polarization of natural light (Riviere et al.).

Data generation for material acquisition
"Modeling Surface Appearance from a Single Photograph using Self-Augmented Convolutional Neural Networks", Li et al., Siggraph 2017

In 2017, Li et al. proposed using a CNN on a single environmentally lit image to infer an SVBRDF: a diffuse albedo, a normal map and an approximate description of specularity. The key contribution of this paper was self-augmentation, which addresses the absence of large SVBRDF databases.

Data generation for material acquisition
"Modeling Surface Appearance from a Single Photograph using Self-Augmented Convolutional Neural Networks", Li et al., Siggraph 2017

Self-augmentation works as follows: the network, trained on a small labeled set, is run on unlabeled photographs; its predicted SVBRDFs are then rendered to produce new, automatically labeled training pairs. By focusing on the lack of data in this way, the method retrieves, from a single environmentally lit picture, normals, a diffuse map and an approximate specular behaviour.
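The self-augmentation loop can be sketched with toy stand-ins. Everything here is an assumption for illustration: the "renderer" is a fixed linear map, the "network" is a least-squares linear regressor, and maps/images are 4-vectors; the actual paper uses a CNN and a physically based renderer. Only the structure of the loop (predict on unlabeled data, render the predictions, retrain on the union) reflects the method.

```python
import numpy as np

rng = np.random.default_rng(0)

RENDER = rng.normal(size=(4, 4))  # toy stand-in for a renderer

def render(maps):
    # "Render" maps into images with a fixed linear map (toy stand-in).
    return maps @ RENDER.T

def fit_network(images, maps):
    # "Training": least-squares fit so that images @ W ~= maps.
    W, *_ = np.linalg.lstsq(images, maps, rcond=None)
    return W

def predict(W, images):
    return images @ W

# 1) Train on a small labeled synthetic set.
labeled_maps = rng.normal(size=(32, 4))
labeled_imgs = render(labeled_maps)
W = fit_network(labeled_imgs, labeled_maps)

# 2) Self-augmentation: predict maps for unlabeled photos, then render
#    those predictions to create new (image, maps) training pairs.
unlabeled_imgs = rng.normal(size=(128, 4))
pseudo_maps = predict(W, unlabeled_imgs)
augmented_imgs = render(pseudo_maps)

# 3) Retrain on the union of real and self-augmented pairs.
W = fit_network(np.vstack([labeled_imgs, augmented_imgs]),
                np.vstack([labeled_maps, pseudo_maps]))
```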

Synthetic training, flash and distance metric
"Materials for Masses: SVBRDF Acquisition with a Single Mobile Phone Image", Li et al., ECCV 2018
"Single-Image SVBRDF Capture with a Rendering-Aware Deep Network", Deschaintre et al., Siggraph 2018

Slide diagram: input photograph, CNN, inferred maps, differentiable renderer, comparison to ground truth.

Last year, two concurrent methods were proposed that use a flash-lit input to get a better sense of the material behaviour, increasing quality. This time, the data problem is solved through procedurally generated training data. Finally, we use a different distance metric for the optimization: a rendering loss, where the predicted and ground-truth maps are rendered under the same lighting and compared in image space rather than map space. So far, all the methods I have talked about work on near-planar surfaces.
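The idea of a rendering loss can be sketched in a few lines of numpy. The shading model below (Lambert diffuse plus a Blinn-Phong-style specular lobe, fixed viewer, directional light) is a deliberate simplification I am assuming for illustration; the papers use a full microfacet BRDF and in-network rendering. The point is only that the maps are compared through renderings rather than directly.

```python
import numpy as np

def render_pointlight(normal, diffuse, roughness, specular, light_dir):
    # Simplified shading: Lambert diffuse + Blinn-Phong-style specular.
    l = light_dir / np.linalg.norm(light_dir)
    v = np.array([0.0, 0.0, 1.0])              # viewer straight above
    h = (l + v) / np.linalg.norm(l + v)        # half vector
    n_dot_l = np.clip(normal @ l, 0.0, None)[..., None]
    n_dot_h = np.clip(normal @ h, 0.0, None)[..., None]
    # Map roughness to a Phong-like exponent (illustrative choice).
    shininess = 2.0 / np.clip(roughness, 1e-3, 1.0) ** 2
    return diffuse * n_dot_l + specular * n_dot_h ** shininess

def rendering_loss(pred_maps, gt_maps, light_dirs):
    # L1 distance between renderings of predicted and ground-truth maps,
    # averaged over several light directions, instead of comparing maps.
    loss = 0.0
    for l in light_dirs:
        loss += np.mean(np.abs(render_pointlight(*pred_maps, l)
                               - render_pointlight(*gt_maps, l)))
    return loss / len(light_dirs)
```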

Going to 3D: simulation of multiple lighting bounces
"Learning to Reconstruct Shape and Spatially-Varying Reflectance from a Single Image", Li et al., 2018

At the end of last year, a new method was introduced to handle 3D objects to some extent. The main challenge here is differentiating the rendering process with respect to shape for the optimization. They solve it by using a CNN to approximate the rendering process itself, taking multiple bounces of light into account.

Multiple images and higher resolution
"Flexible SVBRDF Capture with a Multi-Image Deep Network", Deschaintre et al., EGSR 2019
"Deep Inverse Rendering for High-Resolution SVBRDF Estimation from an Arbitrary Number of Images", Gao et al., Siggraph 2019

All previous methods take a single image, and if you don't like the result, there is nothing you can do. These two methods use multiple images to improve the result and support higher resolutions; Gao et al. additionally run a classical optimization in a learned, disentangled latent space.
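To accept an arbitrary number of input images in any order, per-image features can be combined with an order-independent pooling operation. Below is a minimal numpy sketch of that aggregation idea; the `encode` function here is a hypothetical stand-in (a simple spatial mean) for a shared CNN encoder, which is my assumption for illustration.

```python
import numpy as np

def encode(image):
    # Hypothetical per-image "encoder": a spatial mean over the image,
    # standing in for a shared CNN encoder applied to each input.
    return image.mean(axis=(0, 1))

def aggregate_features(images):
    # Pool per-image features with an order-independent maximum, so any
    # number of images can be fed in, in any order, with one result.
    feats = np.stack([encode(im) for im in images])  # (N, C)
    return feats.max(axis=0)                         # (C,)
```

Because the maximum is commutative, shuffling or duplicating the input images does not change the pooled feature, which is what lets a single network handle one or many photographs.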

Summary and future work

What we have:
- Neural material acquisition
- Rendering loss
- 3D objects
- Multiple-image aggregation

What's next?
- Meaningful metrics (Lagunas et al. 2019, Gao et al. 2019)
- Differentiable rendering (Azinović et al. 2019)
- Scale (Mazlov et al. 2019)
- Real data (Merzbach et al. 2019)
- Uncalibrated results

So we have all these properties. What I think is more interesting to discuss are the limitations, and the next steps that would be valuable to the community; above are a few I find interesting, along with first steps toward solving them. So now the question is: what do you think? What limitation would you like to see fixed, or what could be adapted from your different fields to solve these challenges?