Directional Occlusion with Neural Network

Directional Occlusion with Neural Network. Team 2: Saehun Kim (20184230), Jaeyoon Kim (20193138)

Ambient Occlusion. After modeling ambient lighting with local illumination. As we saw in the midterm presentation, AO is one of the techniques for representing light a bit more realistically under local illumination. First, we create the ambient light in the usual way, and then...

Ambient Occlusion. Add shading: we add the shadows. You can see that the image with AO looks more realistic.

Why AO? (Diagram: 3D scene → ray tracing → perfect shadows, but too slow; 3D scene → simple render + ambient occlusion → approximate shadows.) Then, why do we use this technique? Global illumination, for example ray tracing, gives high-quality and realistic images, but it takes a very long time. On the other hand, ambient occlusion gives less realistic images, but it can be done very fast.

Directional Occlusion. Consider the direction of the light and indirect light when adding shadows; supports color bleeding; more realistic shadows. There is another technique, directional occlusion (DO), an advanced form of AO. DO considers the direction of the light and indirect lighting, and gives more realistic images than AO. (Image labels: dirty shadow vs. realistic shadow.)

AO? DO? Ambient occlusion is a scalar value; directional occlusion is a vector. AO: "40% occluded", so shade the point 40% darker. DO: "blue indirect light coming" and "we know what direction the light comes from". AO only calculates what percentage of the light is occluded; DO also calculates what color of light comes from which direction, and how much of that light is occluded.
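For reference, here is one common formulation of this difference (our notation, not taken from the slides): AO collapses visibility into a single scalar, while DO weights visibility by the direction and color of the incoming light.

```latex
% Ambient occlusion: one scalar per point p with normal n
AO(p) = \frac{1}{\pi} \int_{\Omega} V(p,\omega)\,(n \cdot \omega)\, d\omega

% Directional occlusion: visibility weighted by the directional incoming
% radiance L_{in}, so the result is colored and direction-dependent
L_{DO}(p) = \int_{\Omega} V(p,\omega)\, L_{in}(\omega)\,(n \cdot \omega)\, d\omega
```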

Previous Works (thanks to Benjamin Keinert, 2018). (Diagram: realistic scene by global illumination; simple render gives an awkward scene; ambient occlusion adds shadows, with machine learning for real time.) As Saehun introduced in his presentation, Benjamin Keinert et al. introduced a real-time AO technique using machine learning.

Directional Occlusion: what we want to do. (Diagram: realistic scene by global illumination; simple render gives an awkward scene; directional occlusion, with machine learning for real time.) Our goal is to modify the network and build a real-time DO technique using ML.

Previous Works. Keinert et al., Learning Real-Time Ambient Occlusion from Distance Representations (I3D 2018). Let's look more closely at Keinert's network. (Pipeline: distance samples → MLP → AO value → shader, trained as an additional shader program; image without AO → image with AO.)

Benchmark: our own network for DO. (Pipeline: distance samples, color, light info, … → MLP → DO value → shader; image without DO → image with DO.) We can add color input, light information, and other data.

Problems. We cannot use the existing datasets from previous works, and it is difficult to calculate distance samples in OpenGL.

Problem 1: we cannot use the existing datasets from previous works. We need additional data such as distance samples, which requires 3D modeling data, but the datasets of previous works contain no 3D modeling information, only 2D normal and position images. Therefore, we need to make new 3D models and scenes and construct our own dataset.

Directional Occlusion: what renderer to use? (Table comparing OpenGL, Mitsuba, 3ds Max, Blender, and OptiX against our three requirements: directional occlusion, G-buffer, and distance samples. Legend: O = immediately possible, △ = possible with little effort, X = impossible or possible only with a lot of effort. Notably, OpenGL cannot render DO directly, while Mitsuba can.) Unfortunately, no single renderer can produce all the data we want, so we chose two: the Mitsuba renderer for the DO ground-truth images, and OpenGL for the G-buffer and distance samples.

Data 1: Directional Occlusion Ground Truth. Ground-truth images with directional occlusion can be made by path tracing with light path length ≤ 3. We rendered 512×512 px images using the Mitsuba renderer at 1024 samples per pixel. Mitsuba requires xml scene files and obj 3D modeling files; both are produced by our own data generator.
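As an illustration, a minimal sketch of the kind of Mitsuba 0.x scene file our generator might emit with the settings above (maxDepth 3 for light path length ≤ 3, 1024 spp, 512×512); the file names are placeholders, not the ones we actually used:

```python
# Sketch: emit a minimal Mitsuba 0.x scene file matching the slide's settings.
SCENE_TEMPLATE = """<scene version="0.5.0">
    <integrator type="path">
        <integer name="maxDepth" value="3"/>
    </integrator>
    <sensor type="perspective">
        <sampler type="independent">
            <integer name="sampleCount" value="1024"/>
        </sampler>
        <film type="hdrfilm">
            <integer name="width" value="512"/>
            <integer name="height" value="512"/>
        </film>
    </sensor>
    <shape type="obj">
        <string name="filename" value="{obj_file}"/>
    </shape>
</scene>
"""

with open("scene_0001.xml", "w") as f:
    f.write(SCENE_TEMPLATE.format(obj_file="object_0001.obj"))
```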

Data 2: G-Buffer. A G-buffer is a screen-space representation of geometry and material information; we use the color, normal, and depth buffers. A G-buffer can easily be created with OpenGL. (Buffers shown: color, position, depth, normal.)
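A sketch of the kind of fragment shader that fills such a G-buffer via multiple render targets (shown as a GLSL string inside Python; the varying names and the depth normalization constant are our assumptions, not the project's exact shader):

```python
# Minimal GLSL fragment shader writing color, normal and depth to three
# render targets of a G-buffer framebuffer.
GBUFFER_FRAGMENT_SHADER = """
#version 330 core
in vec3 vColor;     // material color from the vertex stage
in vec3 vNormal;    // interpolated surface normal
in vec3 vViewPos;   // view-space position
layout(location = 0) out vec4 gColor;
layout(location = 1) out vec4 gNormal;
layout(location = 2) out vec4 gDepth;
void main() {
    gColor  = vec4(vColor, 1.0);
    gNormal = vec4(normalize(vNormal) * 0.5 + 0.5, 1.0); // pack [-1,1] into [0,1]
    gDepth  = vec4(vec3(-vViewPos.z / 100.0), 1.0);      // linear depth, far plane 100 assumed
}
"""
```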

Data 3: Distance Samples. Distance samples are a vector of distances between sample points and their nearest faces. With distance samples, we get an approximation of the geometry around the surface point Pi.
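A minimal sketch of how distance samples could be gathered for one surface point, assuming a hypothetical point-to-mesh distance query `mesh_distance` (e.g., backed by a BVH); the hemisphere sampling matches the description on the next slide:

```python
import numpy as np

def distance_samples(p, n, mesh_distance, num_samples=16, radius=1.0):
    """Sample 3D points inside the hemisphere above surface point p (normal n)
    and return each sample's distance to the nearest mesh face."""
    dirs = np.random.normal(size=(num_samples, 3))
    dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
    dirs[dirs @ n < 0] *= -1.0                 # flip directions into the hemisphere
    radii = radius * np.random.rand(num_samples) ** (1.0 / 3.0)  # uniform in volume
    samples = p + dirs * radii[:, None]
    return np.array([mesh_distance(s) for s in samples])
```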

Problem 2: it is difficult to calculate distance samples in OpenGL. For a surface point, sample 3D points inside a hemisphere (orange) and get the distance values of the sampled points (blue). But in OpenGL shader programs it is impossible to access the data of all faces, so we cannot calculate the distances between samples and faces. The purpose of distance samples is to capture the geometry around a point, and that can also be done with a depth map! So instead of distance samples, we used the depth map, as sketched below.
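One way the depth map can stand in for distance samples is to read the depth values in a small neighborhood of a pixel as a local geometry descriptor. In the end we simply fed the whole depth map to the network, but a per-pixel sketch of the idea (the patch size is our assumption) looks like this:

```python
import numpy as np

def depth_patch(depth_map, i, j, k=3):
    """Return the (2k+1)x(2k+1) depth neighborhood of pixel (i, j),
    padded at the image border; a cheap stand-in for distance samples."""
    padded = np.pad(depth_map, k, mode="edge")
    return padded[i:i + 2 * k + 1, j:j + 2 * k + 1]
```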

Automatic Data Generator. This is a diagram of our own "Automatic Data Generator". The generator is split into two parts. One part is a Python program, the "random scene generator": it randomly creates some objects and writes the corresponding obj files, then assigns a random color to each object and exports the xml scene file. The scene and obj files are used to render the DO ground-truth images with the Mitsuba renderer, and are also consumed by the other part of the generator. In that C++ program, the tinyxml2 and tinyobjloader libraries read the scene and modeling information, and OpenGL renderers produce the G-buffer data (color, depth, and normal) and the local-illumination images. Finally, the lodePNG library exports everything as png files. A sketch of the Python side follows.
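A minimal sketch of the "random scene generator" side (the object shapes, placement ranges, and file layout are our assumptions; the real generator also writes the Mitsuba XML shown earlier):

```python
import random

def random_box(idx):
    """Write a randomly scaled and translated box as a tiny OBJ file,
    and return a random RGB color for the corresponding XML scene entry."""
    sx, sy, sz = (random.uniform(0.2, 1.5) for _ in range(3))
    tx, tz = random.uniform(-3, 3), random.uniform(-3, 3)
    verts = [(x * sx + tx, y * sy, z * sz + tz)
             for x in (-1, 1) for y in (0, 2) for z in (-1, 1)]
    faces = [(1, 2, 4, 3), (5, 7, 8, 6), (1, 3, 7, 5),
             (2, 6, 8, 4), (3, 4, 8, 7), (1, 5, 6, 2)]
    with open(f"object_{idx:04d}.obj", "w") as f:
        for v in verts:
            f.write("v {} {} {}\n".format(*v))
        for fc in faces:
            f.write("f {} {} {} {}\n".format(*fc))
    return random.random(), random.random(), random.random()

for i in range(100):                 # one obj file per random scene
    color = random_box(i)
    # ...then write the Mitsuba XML scene referencing object_{i:04d}.obj
    # with `color` as its material, as in the earlier sketch.
```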

Neural Network Learning: what we want to do. Our own network for DO. (Pipeline: depth map, color, normal → MLP → DO value → shader; image without DO → image with DO.) We can add color input, light information, and other data.

Neural Network Learning: what we want to do. But how do we get a ground-truth DO value to train against for the shader stage? (Pipeline: depth map, color, normal → MLP → ??? → shader.) We failed to program a ground-truth DO value.

Neural Network Learning: what we want to do. Let's instead go directly to the DO image with an end-to-end neural network. (Pipeline: depth map, color, normal, ambient image → network → image with DO.)

Directional occlusion by neural network. We want a direct mapping from the ambient image to the DO image. There was no code to refer to, so we wrote it all by hand. (Images: ambient image → directionally occluded image.)

MLP (Multi-Layer Perceptron): pixel-wise computation. For each pixel (i, j), get the pixel values from the four inputs: ambient, color, depth, and normal.

MLP (Multi-Layer Perceptron). For each pixel (i, j), get the pixel values from the four inputs: ambient, color, depth, and normal. This failed to train: a single pixel carries no information about its neighborhood (unlike, e.g., distance samples).
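For completeness, a sketch of the pixel-wise MLP attempt in Keras (the hidden-layer widths are our assumption; the 10 per-pixel features are ambient RGB, color RGB, normal xyz, and depth):

```python
import tensorflow as tf

# Each pixel is an independent training example: 10 input features -> 3 output RGB.
pixel_mlp = tf.keras.Sequential([
    tf.keras.layers.Dense(64, activation="relu", input_shape=(10,)),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(3),
])
pixel_mlp.compile(optimizer="adam", loss="mse")
# features: (num_pixels, 10) and targets: (num_pixels, 3), flattened from the
# training images; this setup failed to train in our experiments.
```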

2D CNN. To provide information about the local environment, we tried a 2D CNN.

2D CNN. Then the network can learn where to shade: shade where there is an object boundary (→ depth map), what kind of shading (→ normal map), what color (→ color map), and how to preserve the content (→ ambient image).

Residual learning: a learning technique where the network learns the residual instead of the final result. Normal (direct) CNN learning: hard to learn.

Residual learning. Residual CNN learning is easier: the network learns what to add to make the DO image (where to shade, where to brighten) by combining the normal, depth, and color information.
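In equation form (our notation): direct learning asks a network F to reproduce the whole image, while residual learning only asks a network R for the shading delta on top of the ambient image.

```latex
\text{direct:}\quad I_{DO} \approx F_\theta(I_{amb}, C, N, D)
\qquad
\text{residual:}\quad I_{DO} \approx I_{amb} + R_\theta(I_{amb}, C, N, D)
```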

What the network has to learn: shade the walls with greenish light to make the image more photorealistic, add shadows behind objects, and brighten the box surfaces where the light hits. Learn!!!

Residual CNN architecture we found. Through much trial and error, we managed to find our best network: 20 convolutional layers with ReLU and residual learning; Adam optimizer and Xavier initializer; 10 input channels (3 RGB ambient + 3 RGB color + 3 xyz normal + 1 depth); loss = 0.7*SSIM + 0.3*MSE; output values clipped to the range 0~255. (Images: ambient image → DO image.)
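A minimal Keras sketch of this architecture, assuming 64 filters of size 3×3 in the hidden layers (filter counts and sizes were not on the slides) and reading "loss = 0.7*SSIM + 0.3*MSE" as 0.7·(1 − SSIM) + 0.3·MSE so that lower is better:

```python
import tensorflow as tf
from tensorflow.keras import layers

def build_residual_cnn(h=512, w=512):
    init = tf.keras.initializers.GlorotNormal()      # "Xavier" initializer
    inp = layers.Input(shape=(h, w, 10))             # ambient(3)+color(3)+normal(3)+depth(1)
    x = inp
    for _ in range(19):                              # 19 hidden conv layers...
        x = layers.Conv2D(64, 3, padding="same", activation="relu",
                          kernel_initializer=init)(x)
    res = layers.Conv2D(3, 3, padding="same",        # ...plus the output layer = 20 total
                        kernel_initializer=init)(x)
    ambient = layers.Lambda(lambda t: t[..., :3])(inp)
    out = layers.Add()([ambient, res])               # residual learning: ambient + delta
    out = layers.Lambda(lambda t: tf.clip_by_value(t, 0.0, 255.0))(out)  # clip to 0~255
    return tf.keras.Model(inp, out)

def do_loss(y_true, y_pred):
    ssim = tf.reduce_mean(tf.image.ssim(y_true, y_pred, max_val=255.0))
    mse = tf.reduce_mean(tf.square(y_true - y_pred))
    return 0.7 * (1.0 - ssim) + 0.3 * mse

model = build_residual_cnn()
model.compile(optimizer=tf.keras.optimizers.Adam(), loss=do_loss)
```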

Results: residual, 20 layers, SSIM:MSE = 7:3, no clipping. (Images: ambient image vs. CNN result.)

Results: residual, 20 layers, SSIM:MSE = 7:3, no clipping. Some output pixel values were over 255 or under 0.

Results: residual, 20 layers, SSIM:MSE = 7:3, clip by value (time: 0.754 sec). (Images: ambient image vs. CNN result.)

Results Ambient image CNN result

Results Small Green indirect light Ambient image CNN result DO image

Results CNN result DO image

Results Ambient image CNN result

Results CNN result DO image

Results Ambient image CNN result

Results CNN result Ambient image DO image

Results CNN result DO image

Ablation study: the power of residual learning. (Images: without residual (direct learning) vs. with residual.)

Ablation study: how many layers? (Images: 10 layers vs. 20 layers.)

Ablation study: how helpful is depth? (Images: without depth map vs. with depth map.)

Ablation study: how helpful is depth? (Another example: without depth map vs. with depth map.)

Ablation study: other network architectures that we tried, which encode and decode spatial features. (Images: ambient image → DO image for each architecture.) The best network remained the 20-layer residual CNN.

Ablation study Ambient image DO image Ambient image DO image

Ablation study Ambient image DO image Ambient image DO image

Ablation study. (Images: ambient image → DO image.) Shrinking spatial features leads to a loss of local relational information.

Evaluation Results. Compared against the ground truth: Deep Shading SSIM = 0.589; ours SSIM = 0.951. (Images: Deep Shading, ours, ground truth.)
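SSIM scores like these can be computed with, e.g., TensorFlow's built-in SSIM; the image file names below are placeholders:

```python
import tensorflow as tf

def ssim_score(pred_path, gt_path):
    """Load two PNG images and return their SSIM (max_val matches 8-bit data)."""
    pred = tf.cast(tf.image.decode_png(tf.io.read_file(pred_path)), tf.float32)
    gt = tf.cast(tf.image.decode_png(tf.io.read_file(gt_path)), tf.float32)
    # tf.image.ssim expects a leading batch dimension
    return float(tf.image.ssim(pred[tf.newaxis], gt[tf.newaxis], max_val=255.0)[0])

print(ssim_score("ours.png", "ground_truth.png"))          # 0.951 on the slide
print(ssim_score("deep_shading.png", "ground_truth.png"))  # 0.589 on the slide
```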

Roles. Jaeyoon: rendered the dataset; made every object and coded every scene; calculated depth, normal, and color; rendered the ground-truth DO. Saehun: used Jaeyoon's dataset for deep learning; coded all networks by hand; tuned hyperparameters; found the best network architecture; performed the ablation study.