Maths & Technologies for Games: Stereoscopic Rendering
(This presentation uses some red/cyan anaglyph images.)

Lecture Contents
1. Human Vision
   – Depth Perception
   – Binocular Vision
   – Convergence and Accommodation
2. Stereoscopy: Background
3. Stereoscopic Rendering in Practice
   – Parallax
   – Cameras: Parallel / Toe-In / Off-axis
   – Rendering / Combining Two Scenes
   – Anaglyph Display
   – Optimisations
4. Improving the Viewer Experience
   – Adjusting 3D Strength
   – Avoiding Causes of Discomfort

Depth Perception – 2D
There are a number of depth cues in a 2D image/video:
– Position and perspective: nearer objects are larger and tend to be lower down in our vision
– Known sizes of objects, or relative sizes of similar objects
– Visible detail / texture: distant detail is seen with less acuity
– Motion parallax: nearer objects appear to move faster
– Shadows and lighting
– Occlusion: nearer objects hide further ones
– Atmospheric blurring ("distance fog")
None of these require two eyes – they only require monocular vision.

2D Depth Cues

Depth Perception – Binocular Vision
We gain additional cues from having two eyes:
– Convergence: we turn our eyes inwards to see nearer objects; our eyes turn almost parallel to see distant objects
– Binocular Disparity: the image seen in each eye is different; our brain resolves the two images into one image with depth

Binocular Depth Cues
Although we can infer considerable depth information from a single image, binocular depth cues are powerful:
– Especially for short to medium distances
Our instincts and reactions have evolved to rely strongly on our binocular vision:
– Most predators have forward-facing binocular vision to focus on a target
– Prey, on the other hand, often have eyes facing in opposite directions, favouring field of view over depth perception

Binocular Depth Cues

Depth Perception – Accommodation
In the real world we get one additional cue:
– Accommodation: the eye muscles adjust the shape of the lens in our eye to focus light coming from a given distance
   – The lens is stretched flatter to focus on distant objects
   – It is squashed into a rounder shape to focus on nearer objects
Accommodation does not occur when viewing a flat screen:
– We accommodate to the screen distance regardless of any apparent depth
– This can be a cause of discomfort when viewing stereoscopic images: our eye muscles are converging, but not accommodating, which is unnatural

Stereoscopy – Background
Stereoscopy enhances the depth perception of an image or video by presenting a different image to each eye:
– The viewer gets the extra depth cues of binocular disparity and convergence
The simplest form is just to place two images side by side:
– View the effect by going cross-eyed until the cars merge (easier close up)

Stereoscopy – Background
For a single user, it is possible to provide two sources feeding a truly separate image into the left and right eyes:
– E.g. head-mounted dual displays or, historically, the "View-Master"
It is more common to combine the left and right images into a single image and require the viewer to wear special glasses:
– The result can be viewed by several people at once
– However, this presents the problem of how to combine and then separate the images, preferably without quality loss
– Also, people don't like to wear glasses…

Stereoscopy via Glasses
Three main types of glasses are used for stereoscopic viewing.
Anaglyph:
– The two images are overlaid using different colour components
– The glasses contain two distinct coloured filters to separate out the two images (red/cyan, red/green, or other variants)
Pros/cons:
– No special display hardware required
– Glasses are cheap
– Colour reproduction is poor
– Crosstalk: the filters are not perfect, so the left image can be seen faintly in the right eye and vice versa

Stereoscopy via Glasses
Polarised Light:
– The two images are polarised differently at source and combined for projection
– The glasses contain polarised filters to separate out the images again
Pros/cons:
– Glasses are cheap
– Colour reproduction is good
– No crosstalk
– Requires special projection equipment
– Polarisation will reduce the brightness of the projected image

Stereoscopy via Glasses
Shutter Glasses:
– Alternate left/right frames are displayed at a high frequency
– The glasses blank out each eye in synchrony with the TV / projector
Pros/cons:
– Colour reproduction is good
– Little crosstalk
– Glasses are expensive
– Requires a capable display

Stereoscopy via Glasses
One screen per eye:
– A screen close to each eye
– Or even on the eye – contact lenses
Pros/cons:
– High quality image
– No crosstalk or colour problems
– No out-of-screen problems
– Expensive compared to the other options (though not always)
– Can be uncomfortable
– Can only be used by one person

Stereoscopic Rendering in Practice
There are three key steps to stereoscopic rendering:
1. Set up two cameras, one for each eye
2. Render the scene with these cameras to two render targets (a stereo pair)
3. Combine the two rendered images into one image for display
The first step is straightforward if done correctly:
– However, there are some pitfalls to avoid
– Incorrectly rendered stereoscopic 3D may not look immediately wrong, but can give an uncomfortable, sub-standard result
The latter two steps are also straightforward where performance is not a concern:
– However, real-world applications need to consider optimisation
– Rendering the scene twice is not cheap

Stereoscopic Rendering – Parallax
The same object in the left and right rendered scenes will typically appear in two slightly different locations.
The relative positioning determines whether we see the object as near or far. This is called parallax.

Stereoscopic Rendering – Cameras
We normally consider the viewer as a single camera. When using one camera per eye, we might initially think to simply offset this single camera to the left and to the right:
– Creating two parallel-facing cameras
– The distance between the cameras is the distance between the eyes, called the interocular distance
This approach produces a comfortable result, but reduces the scope of the 3D effect:
– In particular, it is not possible to create negative parallax
– I.e. we cannot make objects look nearer than the physical screen
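As a rough illustration of the parallel-camera setup, the sketch below offsets a single central camera along its local x axis by half the interocular distance to obtain the left- and right-eye positions. The vector type and camera fields are hypothetical stand-ins, not the lecture's actual framework.

```cpp
#include <cstdio>

struct Vec3 { float x, y, z; };

// Hypothetical central camera: position plus its local right axis (taken from its world matrix)
struct Camera
{
    Vec3 position;
    Vec3 rightAxis; // unit-length local x axis
};

// Offset the central camera sideways to get one eye camera.
// 'side' is -1 for the left eye, +1 for the right eye.
Camera MakeEyeCamera(const Camera& centre, float interocularDistance, float side)
{
    const float halfSeparation = 0.5f * interocularDistance * side;
    Camera eye = centre;
    eye.position.x += centre.rightAxis.x * halfSeparation;
    eye.position.y += centre.rightAxis.y * halfSeparation;
    eye.position.z += centre.rightAxis.z * halfSeparation;
    return eye; // still faces the same direction as the centre camera (parallel axes)
}

int main()
{
    Camera centre{ {0.0f, 1.6f, -5.0f}, {1.0f, 0.0f, 0.0f} };
    Camera left  = MakeEyeCamera(centre, 0.065f, -1.0f); // ~6.5cm interocular distance
    Camera right = MakeEyeCamera(centre, 0.065f, +1.0f);
    std::printf("left x = %.3f, right x = %.3f\n", left.position.x, right.position.x);
}
```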

Stereoscopic Rendering – Cameras
We can instead rotate the cameras inwards, in the same way that the eyes turn inwards to focus on an object:
– We must decide on a central focal point
– This is called the "toe-in" method
– It allows negative parallax
But this approach has problems:
– It introduces vertical parallax: vertical differences between the left- and right-eye images
– We cannot move our eyes vertically independently, so this is not comfortable
– It also sharply limits the comfortable range of depths that can be used, or the viewer will have to cross or diverge their eyes too much

Stereoscopic Rendering – Cameras
A better approach combines the two ideas above: parallel camera axes, but with the rendered view shifted inwards.
We do this with "off-axis" cameras:
– Both cameras face the same direction
– But the rendered area is offset away from the camera axis, towards the centre
This produces a comfortable viewing experience and allows for negative parallax (out-of-screen effects).
It requires a special form of the camera projection matrix.

Standard Perspective Projection
The standard projection matrix uses the near and far clip distances and the camera FOV:
– This version is reworked to use the viewport aspect ratio (see the sketch below)
We need to update this matrix to shift the centre of the rendered area away from the camera axis.
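The matrix itself appeared as an image on the slide and is missing from this transcript. A common DirectX-style form using the vertical FOV, viewport aspect ratio and clip distances is sketched below as an assumed reconstruction (row-vector convention, so points transform as $\mathbf{v}' = \mathbf{v}P$); the lecture's exact layout may differ.

$$
P =
\begin{pmatrix}
\frac{1}{a \tan(f/2)} & 0 & 0 & 0 \\
0 & \frac{1}{\tan(f/2)} & 0 & 0 \\
0 & 0 & \frac{z_f}{z_f - z_n} & 1 \\
0 & 0 & \frac{-z_n z_f}{z_f - z_n} & 0
\end{pmatrix}
$$

where $f$ is the vertical field of view, $a$ the viewport aspect ratio (width/height), and $z_n$, $z_f$ the near and far clip distances.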

Off-Axis Perspective Projection
This is the horizontal off-axis variant (a reconstruction is sketched below):
– Note that this appears different from versions you might find online; however, it is equivalent and simpler to work with
– off_x, the horizontal offset, is just half the interocular distance
– The two cameras are offset in opposite directions
– Note that z_s (the distance to the screen plane) is not the same as the near and far clip distances
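The matrix image is also missing here. One equivalent way to write a horizontal off-axis projection is to add a skew term to the standard matrix above, so that each eye's image is shifted by a constant amount in normalised device coordinates. The form below is an assumed reconstruction, not necessarily the exact matrix from the slide:

$$
P_{\mathrm{off}} =
\begin{pmatrix}
w & 0 & 0 & 0 \\
0 & h & 0 & 0 \\
\pm\frac{\mathit{off}_x\, w}{z_s} & 0 & \frac{z_f}{z_f - z_n} & 1 \\
0 & 0 & \frac{-z_n z_f}{z_f - z_n} & 0
\end{pmatrix},
\qquad
w = \frac{1}{a \tan(f/2)}, \quad h = \frac{1}{\tan(f/2)}
$$

The sign of the skew term is opposite for the two eyes. After the perspective divide, every point gains a constant horizontal shift of $\pm \mathit{off}_x w / z_s$, so a point at the screen-plane distance $z_s$ projects to the same position in both images (zero parallax), nearer points get negative parallax and further points positive parallax.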

Rendering / Combining Two Scenes
So we set up two cameras, each with its own view and projection matrix:
– Each view matrix positions the camera at one eye; both cameras face down the same viewing axis
– Each projection matrix is of the off-axis form given above
Then render the scene normally through each camera, into two render targets.
For 3D display hardware, these two render targets are processed by the display API:
– E.g. displayed as alternate frames for shutter glasses
Alternatively, we can combine them in a post-processing pass into one image for display in anaglyph form (a rough sketch of this per-frame structure follows).
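A minimal sketch of the per-frame structure, assuming hypothetical helper functions (BuildEyeViewMatrix, BuildOffAxisProjection, RenderScene, CombineAnaglyph) in place of whatever engine or API code is actually used; they are stubbed out here so the sketch compiles:

```cpp
#include <cstdio>

// All types and functions below are hypothetical placeholders for an engine's
// real camera, render-target and post-processing code.
struct Matrix4x4 { float m[4][4]; };
struct RenderTarget { int id; };   // stands in for an off-screen texture
struct Scene {};

Matrix4x4 BuildEyeViewMatrix(const Scene&, float /*eyeOffsetX*/)      { return Matrix4x4{}; }
Matrix4x4 BuildOffAxisProjection(float /*fovY*/, float /*aspect*/, float /*zNear*/,
                                 float /*zFar*/, float /*offX*/, float /*zScreen*/)
                                                                       { return Matrix4x4{}; }
void RenderScene(const Scene&, const Matrix4x4&, const Matrix4x4&, RenderTarget& rt)
{ std::printf("render scene into target %d\n", rt.id); }
void CombineAnaglyph(const RenderTarget&, const RenderTarget&)
{ std::printf("combine stereo pair for display\n"); }

void RenderStereoFrame(const Scene& scene, RenderTarget& leftTarget, RenderTarget& rightTarget,
                       float interocular, float zScreen,
                       float fovY, float aspect, float zNear, float zFar)
{
    const float halfSep = 0.5f * interocular;

    // Left eye: camera shifted left, rendered area shifted towards the centre
    Matrix4x4 viewL = BuildEyeViewMatrix(scene, -halfSep);
    Matrix4x4 projL = BuildOffAxisProjection(fovY, aspect, zNear, zFar, +halfSep, zScreen);
    RenderScene(scene, viewL, projL, leftTarget);

    // Right eye: the mirror image of the left-eye setup
    Matrix4x4 viewR = BuildEyeViewMatrix(scene, +halfSep);
    Matrix4x4 projR = BuildOffAxisProjection(fovY, aspect, zNear, zFar, -halfSep, zScreen);
    RenderScene(scene, viewR, projR, rightTarget);

    // Combine the stereo pair into one anaglyph image
    // (or hand both targets to the 3D display API instead)
    CombineAnaglyph(leftTarget, rightTarget);
}

int main()
{
    Scene scene;
    RenderTarget left{1}, right{2};
    RenderStereoFrame(scene, left, right, 0.065f, 3.0f, 1.2f, 16.0f / 9.0f, 0.1f, 1000.0f);
}
```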

Splitting Colours for Anaglyph
The core idea of anaglyph is to store two images in the separate colour channels of a single image.
For example, the left-eye image goes into the red channel and the right-eye image into the blue and green channels:
– [This is red/cyan anaglyph; other colour variants operate in a similar way]
This can be written in matrix form (see the reconstruction below):
– Where (r, g, b) is the combined image, (r₁, g₁, b₁) the first (left) image and (r₂, g₂, b₂) the second (right) image
– The equation simply copies the red from the left-eye image into the red of the output, and the green/blue from the right-eye image into the green/blue of the output
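The matrix equation itself was an image and is missing from the transcript; the standard red/cyan form matching the description above is:

$$
\begin{pmatrix} r \\ g \\ b \end{pmatrix}
=
\begin{pmatrix} 1 & 0 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{pmatrix}
\begin{pmatrix} r_1 \\ g_1 \\ b_1 \end{pmatrix}
+
\begin{pmatrix} 0 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix}
\begin{pmatrix} r_2 \\ g_2 \\ b_2 \end{pmatrix}
$$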

Splitting Colours for Anaglyph
Using the colour-combining formula from the last slide, we get a result like this:
– Colour reproduction is poor; this is inevitable with anaglyph
– There is also retinal rivalry: the red in the flowers is much stronger than the green/blue, so the eyes get very different images, which is very uncomfortable

Splitting Colours for Anaglyph
Recognising that anaglyph is not suited to colour reproduction, we can create a greyscale anaglyph using a different formula (see below), giving a calmer result, but no colour.
The formula converts each image to greyscale before copying it into the channels of the output:
– The (0.299, 0.587, 0.114) coefficients are from the Rec. 601 broadcast standard for the luminance of the red, green and blue primaries; they account for the fact that, to our eyes, green is brighter than red, which is brighter than blue.
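The slide's formula is an image missing from the transcript; the greyscale form implied by the description (Rec. 601 luminance, left image into red, right image into green/blue) is:

$$
\begin{pmatrix} r \\ g \\ b \end{pmatrix}
=
\begin{pmatrix} 0.299 & 0.587 & 0.114 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{pmatrix}
\begin{pmatrix} r_1 \\ g_1 \\ b_1 \end{pmatrix}
+
\begin{pmatrix} 0 & 0 & 0 \\ 0.299 & 0.587 & 0.114 \\ 0.299 & 0.587 & 0.114 \end{pmatrix}
\begin{pmatrix} r_2 \\ g_2 \\ b_2 \end{pmatrix}
$$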

Splitting Colours for Anaglyph
Half-colour anaglyph calms the red channel by taking it from a greyscale version of the left image, but copies the right image's green/blue directly:
– Calmer, with some colour reproduction
Optimised anaglyph creates a fake red channel from the blue and green of the first image:
– An even calmer image, with brighter colour
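The slide shows these formulas as images. The coefficients below are the commonly quoted half-colour and optimised anaglyph matrices, given as an assumption; the lecture's exact values may differ.

Half-colour:
$$
\begin{pmatrix} r \\ g \\ b \end{pmatrix}
=
\begin{pmatrix} 0.299 & 0.587 & 0.114 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{pmatrix}
\begin{pmatrix} r_1 \\ g_1 \\ b_1 \end{pmatrix}
+
\begin{pmatrix} 0 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix}
\begin{pmatrix} r_2 \\ g_2 \\ b_2 \end{pmatrix}
$$

Optimised:
$$
\begin{pmatrix} r \\ g \\ b \end{pmatrix}
=
\begin{pmatrix} 0 & 0.7 & 0.3 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{pmatrix}
\begin{pmatrix} r_1 \\ g_1 \\ b_1 \end{pmatrix}
+
\begin{pmatrix} 0 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix}
\begin{pmatrix} r_2 \\ g_2 \\ b_2 \end{pmatrix}
$$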

Optimisations
Rendering two scenes will clearly be costly. There are a number of optimisations that can be employed, given that the two images will be very similar:
– Screen-space reprojection: generate the second image from the first by offsetting elements depending on their depth, using inpainting techniques to fill any gaps (this turns out to be similar to parallax mapping)
– Render both images at once to a single wide image, using the geometry shader to create the duplicate geometry (what problems might this cause?)
– Render with different optimisations in depth slices, e.g. the furthest scene elements are the same in both images, so render that slice only once
– What else?

Viewer Experience
There are a number of settings that can be changed to affect the viewer's experience of stereoscopic material:
– Interocular distance
– Distance to the screen plane (affecting how much of the game world comes out of the screen)
– FOV in the projection matrix
Each of these may need to be adjusted based on the physical situation of the viewer:
– The size of their display and how far away they are from it
– Their actual interocular distance!
Practically speaking, it is best to set sensible defaults for a given environment (e.g. sofa at home, computer desk):
– Only adjust the interocular distance, via a "3D strength" slider
– The distance to the screen plane should be game-dependent

Causes of Discomfort
Poorly thought-out 3D can be off-putting or even nauseating. Problems to avoid:
– Extreme negative parallax: things coming too far out of the screen
– Window violations with negative parallax: out-of-screen objects going over the screen edges
– Text & UI should normally be on the screen plane (but see below)
– Extreme parallax differences in a focus area, e.g. screen-plane text over a distant enemy unit, making the eyes struggle to adjust depth

Gallery