Rendering Depth Perception through Distributed Ray Tracing

Ryan May, Ruth Ogunnaike, Craig Shih

Depth Perception and Image Focus
• Depth perception is the visual ability to perceive the world in three dimensions (3D) and to judge the distance of an object.
• It is difficult to recognize the distance of an object (and the distance between objects) in an image or scene unless the depth of the objects in view is conveyed.

Objects in image…

…may be larger than they appear

Basic Human Vision

Camera Model

Scene Scale and Focus Ability


Rendering Depth Perception Techniques
• Distributed ray tracing
  – Depth of field
• Post-processing
  – Gaussian filter techniques
  – Bilateral filter techniques

Gaussian Filter Techniques
• Gaussian filters weight pixels based on their distance from the center of the convolution kernel. (Convolution takes two inputs, an image and a kernel, and produces a third image as output.)
• The Gaussian filter is a local, linear filter that smooths the whole image irrespective of its edges or details.
• Noise is blurred away, but edges and fine features are blurred along with it.
• Can be used to tackle inaccuracies in depth maps.
• Aims at reducing edge effects by weighting the contribution of neighboring pixels by their closeness to the center (it computes a weighted sum of the pixels in a window centered at each pixel). With increasing window size, edges are lost.
• A Gaussian kernel gives less weight to pixels farther from the center of the window.

Gaussian Filters (Contd.)

The Gaussian function:

$$G_\sigma(x) = \frac{1}{2\pi\sigma^2}\exp\left(-\frac{x^2}{2\sigma^2}\right)$$

The Gaussian convolution of an image:

$$GC[I]_p = \sum_{q \in w_p} G_\sigma(\lVert p - q \rVert)\, I_q$$

where
• σ = size of the window
• GC[I]_p = Gaussian filter applied to image I at pixel p
• p, q = 2-dimensional pixel coordinates
• I_q = intensity value of pixel q
• w_p = window centered at pixel p
• G_σ(‖p − q‖) = normalized Gaussian function (space weight)

• Convolution of a Gaussian with itself is another Gaussian, so one can smooth with a small-width kernel repeatedly and get the same result as a single larger-width kernel would have given.
• Only the spatial distance of the pixels is considered when computing the weights in the filter kernel, so edges are not preserved.
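To make the formulas concrete, here is a minimal sketch of Gaussian filtering on a grayscale image stored as a float[,] array. It is illustrative only: the 3σ kernel radius, the border clamping, and the array representation are assumptions, not details from the slides.

```csharp
using System;

static class GaussianFilter
{
    // Smooths a grayscale image with a Gaussian of standard deviation sigma.
    public static float[,] Apply(float[,] image, double sigma)
    {
        int h = image.GetLength(0), w = image.GetLength(1);
        int radius = (int)Math.Ceiling(3 * sigma);  // window covers ~99.7% of the Gaussian
        var result = new float[h, w];

        for (int y = 0; y < h; y++)
        for (int x = 0; x < w; x++)
        {
            double sum = 0, weightSum = 0;
            // Weighted sum over the window w_p centered at pixel p = (x, y).
            for (int dy = -radius; dy <= radius; dy++)
            for (int dx = -radius; dx <= radius; dx++)
            {
                int qy = Math.Clamp(y + dy, 0, h - 1);  // clamp at image borders
                int qx = Math.Clamp(x + dx, 0, w - 1);
                // Spatial weight G_sigma(||p - q||): depends only on distance,
                // never on intensity, which is why edges are not preserved.
                double g = Math.Exp(-(dx * dx + dy * dy) / (2 * sigma * sigma));
                sum += g * image[qy, qx];
                weightSum += g;
            }
            result[y, x] = (float)(sum / weightSum);  // normalize the weights
        }
        return result;
    }
}
```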

Bilateral Filter Techniques
• The bilateral filter is an edge-preserving filter. For a given pixel, its denoised counterpart is obtained as a weighted average of its neighbors, where the weights are given by a function of both color similarity and image-plane distance.
• It considers the difference in intensity values in addition to the spatial distance of the pixels, so dissimilar pixels, e.g., those occurring across edges of the processed image, are maintained.
• Spatial closeness and intensity closeness are combined to ensure that only nearby, similar pixels are considered in the smoothing process.
• Bilateral filters can be slow, especially for large spatial kernels (a large number of pixels per window).

Bilateral Filter (Contd.)

$$BF[I]_p = \frac{1}{W_p} \sum_{q \in w_p} G_{\sigma_s}(\lVert p - q \rVert)\, G_{\sigma_r}(\lvert I_p - I_q \rvert)\, I_q,
\qquad
W_p = \sum_{q \in w_p} G_{\sigma_s}(\lVert p - q \rVert)\, G_{\sigma_r}(\lvert I_p - I_q \rvert)$$

where
• BF[I]_p = bilateral filter applied to image I at pixel p
• p, q = 2-dimensional pixel coordinates
• I_p, I_q = intensity values of pixels p and q
• w_p = window centered at pixel p
• W_p = normalization term for the pixel weights
• G_{σ_s}(‖p − q‖) = spatial weight (decreases the influence of distant pixels)
• G_{σ_r}(|I_p − I_q|) = range weight (decreases the influence of pixels with dissimilar color or intensity; note its argument is the intensity difference, not the spatial distance)
• σ_s, σ_r = standard deviations of the spatial filter G_{σ_s} and the range filter G_{σ_r}

Source: Matej Nezveda, 2014
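For comparison with the Gaussian sketch above, here is the same loop with the one change that defines the bilateral filter: an extra range weight G_{σ_r}(|I_p − I_q|) that suppresses smoothing across edges. The kernel radius and border handling remain illustrative assumptions.

```csharp
using System;

static class BilateralFilter
{
    // Edge-preserving smoothing: sigmaS is the spatial sigma, sigmaR the range sigma.
    public static float[,] Apply(float[,] image, double sigmaS, double sigmaR)
    {
        int h = image.GetLength(0), w = image.GetLength(1);
        int radius = (int)Math.Ceiling(3 * sigmaS);
        var result = new float[h, w];

        for (int y = 0; y < h; y++)
        for (int x = 0; x < w; x++)
        {
            double sum = 0, Wp = 0;  // Wp accumulates the normalization term
            float Ip = image[y, x];
            for (int dy = -radius; dy <= radius; dy++)
            for (int dx = -radius; dx <= radius; dx++)
            {
                int qy = Math.Clamp(y + dy, 0, h - 1);
                int qx = Math.Clamp(x + dx, 0, w - 1);
                float Iq = image[qy, qx];
                // Spatial weight: closeness in the image plane.
                double gs = Math.Exp(-(dx * dx + dy * dy) / (2 * sigmaS * sigmaS));
                // Range weight: similarity in intensity. Pixels across an edge
                // are dissimilar, get near-zero weight, and are left intact.
                double gr = Math.Exp(-(Ip - Iq) * (Ip - Iq) / (2 * sigmaR * sigmaR));
                sum += gs * gr * Iq;
                Wp += gs * gr;
            }
            result[y, x] = (float)(sum / Wp);
        }
        return result;
    }
}
```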

Solution and Implementation Strategy

Depth of Field through Distributed Ray Tracing
Normal ray tracing (what we've done so far in class):
• Similar to a pinhole camera, where all light is focused through one specific point
• Uses one single point as the origin of all rays traced through the world
• The entire image is sharp and focused
Distributed ray tracing:
• Removes the restriction of having one single point
• Ray origins are distributed over a specific interval
• Similar to a camera lens that includes an aperture (an opening, hole, or gap)

Normal Ray Tracing
[Diagram: a ray traced from the eye position through the (i, j) pixel position on the image plane into the scene]

Lens Convexity and Depth of Field
Speaker notes: explain how the convexity of the lens affects the rays traced, and how the focal plane is found in real life, i.e., how the lens's curvature affects where the focal plane lies.
[Diagram: lens, image plane, and focal plane]

Working Solutions for Depth of Field
Two ways of thinking about this:
• Image plane prior to the lens (IPprior)
  – Requires the image to be flipped at the end of the process
• Image plane after the lens (IPafter)
  – Doesn't require the image to be flipped at the end of the process
  – More directly related to how we've been performing ray tracing in class
[Diagram: primary ray from the lens center (eye position) out to the focal distance, secondary rays from points on the lens, and the IPprior and IPafter image-plane placements]

Working Solutions for Depth of Field (Contd.)
Modify RTCamera:
• mEye serves as the lens center
• The lens radius is read in from the XML file
• The focal distance is read in from the XML file
• A getRandomLensPosition() function will be implemented in RTCamera
  – Returns a random Vector3 within the lens radius, centered at mEye
  – Uses the side direction and up vector to determine the x, y position (relative to the viewing direction):

    angle = Rand.NextDouble() * Math.PI * 2
    randRadius = Math.Sqrt(Rand.NextDouble()) * radius
    x = Math.Cos(angle) * randRadius
    y = Math.Sin(angle) * randRadius
    randomPos = mEye + (x * SideDirection) + (y * UpVector)

  – The square root is needed when drawing the random radius: without it, the points concentrate toward the center, because the mean distance between two neighboring points is smaller near the center than at the outer edges, giving a higher point density there. A self-contained sketch of the function follows below.
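Putting the fragments above together, this is a minimal self-contained sketch of the lens-sampling function. System.Numerics.Vector3 stands in for the ray tracer's own vector type, and the parameters eye, sideDirection, upVector, and lensRadius play the roles of mEye, SideDirection, UpVector, and the XML-supplied radius.

```csharp
using System;
using System.Numerics;

static class LensSampling
{
    static readonly Random Rand = new Random();

    // Returns a uniformly random point on the lens disc centered at the eye.
    public static Vector3 GetRandomLensPosition(
        Vector3 eye, Vector3 sideDirection, Vector3 upVector, float lensRadius)
    {
        double angle = Rand.NextDouble() * Math.PI * 2;
        // Sqrt makes the samples uniform over the disc's *area*; without it
        // they would bunch up near the center, as noted on the slide.
        float randRadius = (float)Math.Sqrt(Rand.NextDouble()) * lensRadius;
        float x = (float)Math.Cos(angle) * randRadius;
        float y = (float)Math.Sin(angle) * randRadius;
        // Offset from the eye within the lens plane, which is spanned by
        // the camera's side and up vectors.
        return eye + x * sideDirection + y * upVector;
    }
}
```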

Working Solutions for Depth of Field (Contd.)
• A getFocalPoint(Ray r) function will be implemented in RTCamera
  – Returns a Vector3 focal point: focalPoint = mCamera.EyePosition + (primaryRay * focalDistance)
• Secondary rays can now be generated from the random lens position to the focal point
• Visibility will then be computed for each of the secondary rays
A sketch of both steps follows below.
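A sketch of how getFocalPoint() and the secondary rays could fit together, using the slide's formula directly. The Ray struct is a hypothetical stand-in, and the formula assumes the primary ray direction is normalized so that focalDistance is measured along the primary ray.

```csharp
using System.Numerics;

struct Ray
{
    public Vector3 Origin, Direction;
    public Ray(Vector3 origin, Vector3 direction)
    {
        Origin = origin;
        Direction = Vector3.Normalize(direction);
    }
}

static class DepthOfField
{
    // Point where all rays through this pixel should converge: the slide's
    // focalPoint = EyePosition + primaryRay * focalDistance.
    public static Vector3 GetFocalPoint(Vector3 eye, Vector3 primaryRayDir, float focalDistance)
        => eye + primaryRayDir * focalDistance;

    // Secondary ray: from a jittered lens position toward the focal point.
    // Geometry at the focal distance is hit consistently by all secondary
    // rays (sharp); nearer or farther geometry is hit at scattered points
    // (blurred), which is exactly the depth-of-field effect.
    public static Ray MakeSecondaryRay(Vector3 lensPosition, Vector3 focalPoint)
        => new Ray(lensPosition, focalPoint - lensPosition);
}
```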

Jitter Sampling
• Since distributed ray tracing averages the colors of multiple intersection points to produce each pixel color, a sampling method needs to be chosen.
• Jitter sampling: the pixel is broken up into sections and a point is randomly selected within each section; the overall box is one pixel.
• We will implement 8 samples per pixel. A sketch follows below.
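A minimal jitter-sampling sketch. The 2×4 cell layout is an assumption about how the 8 samples per pixel might be arranged; any grid whose cell count matches the sample budget works the same way.

```csharp
using System;
using System.Collections.Generic;

static class JitterSampling
{
    static readonly Random Rand = new Random();

    // Returns jittered sample positions inside pixel (i, j), in pixel units:
    // the pixel square is divided into cellsX * cellsY cells and one random
    // point is chosen inside each cell.
    public static List<(double x, double y)> SamplesForPixel(
        int i, int j, int cellsX = 2, int cellsY = 4)
    {
        var samples = new List<(double x, double y)>();
        for (int cy = 0; cy < cellsY; cy++)
        for (int cx = 0; cx < cellsX; cx++)
        {
            double x = i + (cx + Rand.NextDouble()) / cellsX;
            double y = j + (cy + Rand.NextDouble()) / cellsY;
            samples.Add((x, y));
        }
        return samples;  // trace one ray per sample and average the colors
    }
}
```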

Motion Blur
• Apply stochastic sampling to time as well as space
• Assign a time as well as an image position to each ray
• The result is still-frame motion blur and smooth animation

Motion Blur (Contd.)
• Add a time element to each ray
• Keep a list of positions for each object in the scene: if the list has size == 1, there is no motion; if it has more than one entry, the object is moving
• Based on the time of the ray, the intersection test checks the corresponding position of the object
• Having the rays intersect at different "times" results in a motion-blur effect if enough samples are taken (see the sketch below)
[Image taken from UNC]
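A sketch of the per-object position list described above. The normalized shutter interval [0, 1) and the linear interpolation between stored positions are assumptions; the slides only specify that a single-entry list means no motion.

```csharp
using System;
using System.Collections.Generic;
using System.Numerics;

class MovingObject
{
    // One entry => static object; several entries => keyframed motion.
    public List<Vector3> Positions = new List<Vector3>();

    // Object position at the ray's normalized time t in [0, 1).
    public Vector3 PositionAt(double t)
    {
        if (Positions.Count == 1) return Positions[0];  // no motion
        // Linearly interpolate between the two keyframes bracketing t.
        double scaled = t * (Positions.Count - 1);
        int k = Math.Min((int)scaled, Positions.Count - 2);
        return Vector3.Lerp(Positions[k], Positions[k + 1], (float)(scaled - k));
    }
}

// Each ray is stamped with a random time when it is generated; intersection
// tests then query obj.PositionAt(ray.Time), so averaging many samples per
// pixel smears moving objects into a blur.
```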

Potential Problems with Distributed Ray Tracing
• The number of samples required to produce an accurate, well-rendered picture is usually 50 samples or higher
• It is expensive to compute that many samples
• Rendering will take longer than post-processing

Benefits of Distributed Ray Tracing for Depth of Field
• Visibility is calculated from multiple points, whereas post-processing still calculates it from a single point, which results in a more accurate depiction of depth of field
• Allows other effects, such as motion blur and area lights, to be implemented

References
[1] R. Baltazar, "Adding Realistic Camera Effects to the Computer Graphics Camera Model," 2012.
[2] B. A. Barsky and T. J. Kosloff, "Algorithms for rendering depth of field effects in computer graphics," in Proceedings of the 12th WSEAS International Conference on Computers, 2008.
[3] B. A. Barsky, D. R. Horn, S. A. Klein, J. A. Pang, and M. Yu, "Camera Models and Optical Systems Used in Computer Graphics: Part I, Object-Based Techniques," in Computational Science and Its Applications — ICCSA 2003, V. Kumar, M. L. Gavrilova, C. J. K. Tan, and P. L'Ecuyer, Eds. Springer Berlin Heidelberg, 2003, pp. 246–255.
[4] J. D. Pfautz, Depth Perception in Computer Graphics. University of Cambridge, 2000.
[5] N. Tai and M. Inanici, "Depth perception in real and pictorial spaces: A computational framework to represent and simulate the built environment," in Proceedings of the 14th International Conference on Computer Aided Architectural Design Research in Asia, Yunlin (Taiwan), 2009, pp. 543–552.
[6] R. L. Cook, T. Porter, and L. Carpenter, "Distributed Ray Tracing," in Proceedings of the 11th Annual Conference on Computer Graphics and Interactive Techniques, New York, NY, USA, 1984, pp. 137–145.
[7] P. Lebreton, A. Raake, M. Barkowsky, and P. Le Callet, "Evaluating depth perception of 3D stereoscopic videos," IEEE Journal of Selected Topics in Signal Processing, vol. 6, no. 6, pp. 710–720, 2012.
[8] M. Nezveda, "Evaluation of Depth Map Post-processing Techniques for Novel View Generation," Vienna University of Technology, 2014.
[9] Y.-Y. Chuang, R. Raskar, F. Durand, S. Paris, and S. Bae, "Digital Visual Effects: Bilateral Filtering," DigiVFX, 2010.