Immersive Rendering

General Idea
► Head pose determines eye position
  Why not track the eyes?
► Eye position determines the perspective point
► Eye properties determine what part of the real world can be seen
► The screen determines how the virtual world can be seen
  The screen is a window into the virtual world
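As an illustration of the first step, here is a minimal C++ sketch, not taken from the slides, that derives the two eye positions from a tracked head pose; the function names, the head "right" vector input, and the 0.065 m interpupillary distance are all illustrative assumptions.

```cpp
// Minimal sketch (illustrative, not from the slides): derive the left and
// right eye positions from a tracked head pose. Assumes the tracker reports
// the head (cyclopean) position and a unit vector pointing to the user's
// right; the 0.065 m interpupillary distance is a typical placeholder value.
struct Vec3 { float x, y, z; };

static Vec3 offsetAlong(const Vec3& p, const Vec3& dir, float s) {
    return { p.x + dir.x * s, p.y + dir.y * s, p.z + dir.z * s };
}

void eyePositions(const Vec3& headPos, const Vec3& headRight, float ipd,
                  Vec3& leftEye, Vec3& rightEye) {
    // Each eye sits half an interpupillary distance from the head centre.
    leftEye  = offsetAlong(headPos, headRight, -0.5f * ipd);
    rightEye = offsetAlong(headPos, headRight, +0.5f * ipd);
}
```

Each eye position then becomes the perspective point for that eye's view through the screen "window".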

Head-Eye-Screen-World Relationships (diagram relating head pose, eye position, eye properties, and display pose to the virtual world)

Depth effects
► Most displays are 2D (pixels)
► Yet we want to show a 3D world!
► How can we do this?
  We can include 'cues' in the image that give our brain 3D information about the scene
  These cues are visual depth cues

Visual Depth Cues
► Cues about the 3rd dimension – total of 10
► Monoscopic Depth Cues (single 2D image) [6]
► Stereoscopic Depth Cues (two 2D images) [1]
► Motion Depth Cues (series of 2D images) [1]
► Physiological Depth Cues (body cues) [2]

Monoscopic Depth Cues
► Interposition
  An occluding object is closer
► Shading
  Shape and shadows
► Size
  The larger object is closer
► Linear Perspective
  Parallel lines converge at a single point
  The higher the object is (vertically), the further away it is
► Surface Texture Gradient
  More detail for closer objects
► Atmospheric effects
  Objects further away are blurrier and dimmer

Physiological Depth Cues
► Accommodation – the focusing adjustment the eye makes by changing the shape of the lens (effective up to about 3 m)
► Convergence – the movement of the eyes that brings an object onto the same location on the retina of each eye

Stereoscopic Depth Cue
► Stereopsis
► Stereoscopic Display Technology
► Computing Stereoscopic Images
► Stereoscopic Displays and HTDs
► Works for objects < 5 m. Why?
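One way to see why stereo cues fade with distance (an illustrative calculation, not from the slides): the vergence/disparity angle for a point straight ahead is roughly 2·atan((IPD/2)/distance), and it drops below a degree by about 5 m. The IPD value and sample distances below are placeholders.

```cpp
// Illustrative sketch: the angle subtended at the eyes by a point straight
// ahead, theta = 2 * atan((IPD/2) / distance). The 0.065 m IPD and the
// sample distances are placeholder values.
#include <cmath>
#include <cstdio>

int main() {
    const double ipd = 0.065;                     // interpupillary distance (m)
    const double pi  = 3.14159265358979;
    for (double d : {0.5, 1.0, 5.0, 20.0}) {      // viewing distances (m)
        double thetaDeg = 2.0 * std::atan((ipd / 2.0) / d) * 180.0 / pi;
        std::printf("distance %5.1f m -> vergence angle %.3f deg\n", d, thetaDeg);
    }
    return 0;
}
```

At 0.5 m the angle is about 7.4 degrees; at 5 m it is only about 0.7 degrees, so the angular differences stereo vision relies on shrink quickly with distance.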

Stereopsis: the result of the two slightly different views of the world that our laterally-displaced eyes receive.

Screen Parallax
► P_left – the projected screen location of point P as seen by the left eye
► P_right – the projected screen location of point P as seen by the right eye
► Screen parallax – the distance between P_left and P_right
(Diagram: left and right eye positions, the display screen, and P_left / P_right for an object with positive parallax and an object with negative parallax)
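To make the definition concrete, here is a minimal C++ sketch, not from the slides, that computes the signed screen parallax of a point; it assumes the display screen is the plane z = 0 and the eyes sit at z > 0 in that frame, and all names are illustrative.

```cpp
// Minimal sketch of screen parallax, assuming the display screen is the
// plane z = 0 and the eyes sit at z > 0 in that frame (coordinate choice
// and names are illustrative).
struct Vec3 { float x, y, z; };

// Intersect the ray from the eye through point P with the screen plane z = 0.
Vec3 projectToScreen(const Vec3& eye, const Vec3& P) {
    float t = eye.z / (eye.z - P.z);          // parametric distance to z = 0
    return { eye.x + t * (P.x - eye.x),
             eye.y + t * (P.y - eye.y),
             0.0f };
}

// Signed horizontal screen parallax between the two projections.
float screenParallax(const Vec3& leftEye, const Vec3& rightEye, const Vec3& P) {
    return projectToScreen(rightEye, P).x - projectToScreen(leftEye, P).x;
}
```

With this sign convention, parallax > 0 (uncrossed) corresponds to the object appearing behind the screen and parallax < 0 (crossed) to it appearing in front, matching the two objects in the diagram.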

How to create correct left- and right-eye views
► What do you need to specify for most rendering engines?
  Eyepoint
  Look-at Point
  Field-of-View or location of the Projection Plane
  View-Up Direction
(Diagram repeated from the Screen Parallax slide)
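For reference, a single (mono) view is typically specified exactly as in the list above; the sketch below uses the legacy OpenGL/GLU helpers as stand-ins for "most rendering engines", with purely illustrative numbers. The next slides show why simply repeating such a symmetric setup for each eye does not produce correct stereo.

```cpp
// Minimal single-view camera specification (illustrative values), using the
// legacy OpenGL/GLU calls as a stand-in for a generic rendering engine.
#include <GL/gl.h>
#include <GL/glu.h>

void setMonoCamera() {
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    gluPerspective(60.0, 4.0 / 3.0, 0.1, 100.0);   // field of view, aspect, near, far

    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();
    gluLookAt(0.0, 1.6, 2.0,    // eyepoint
              0.0, 1.0, 0.0,    // look-at point
              0.0, 1.0, 0.0);   // view-up direction
}
```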

Basic Perspective Projection Set-Up from Viewing Parameters
► The projection plane is orthogonal to one of the major axes (usually Z)
► That axis lies along the vector defined by the eyepoint and the look-at point

What doesn't usually work
► Each view has a different projection plane
► Yet each view will (usually) be presented on the same plane

What Does Work

What if you don't have 2 displays? (Diagram contrasting two arrangements of eye locations and look-at points, labeled "No" and "Yes")

Asymmetric Camera Frustum Images from: us/stereographics/stereorender/
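A minimal sketch of how such an asymmetric (off-axis) frustum can be set up, assuming the screen is centred at the origin of its own plane, the eye position is expressed in that frame with eye.z > 0, and legacy OpenGL matrix calls are in use; the names and frame convention are assumptions, not from the slides.

```cpp
// Illustrative sketch: asymmetric (off-axis) frustum for a tracked eye
// position relative to a physical screen centred at the origin of the
// plane z = 0, with the eye at eye.z > 0.
#include <GL/gl.h>

struct Vec3 { float x, y, z; };

void setOffAxisFrustum(const Vec3& eye, float screenWidth, float screenHeight,
                       float nearPlane, float farPlane) {
    // Scale the screen edges, measured from the eye, back to the near plane.
    float s = nearPlane / eye.z;
    float left   = (-0.5f * screenWidth  - eye.x) * s;
    float right  = ( 0.5f * screenWidth  - eye.x) * s;
    float bottom = (-0.5f * screenHeight - eye.y) * s;
    float top    = ( 0.5f * screenHeight - eye.y) * s;

    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    glFrustum(left, right, bottom, top, nearPlane, farPlane);

    // The view transform then just translates the world by -eye, keeping the
    // projection plane parallel to (and coincident with) the screen.
    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();
    glTranslatef(-eye.x, -eye.y, -eye.z);
}
```

Calling this once per eye, with the left- and right-eye positions from the head tracker, keeps the projection plane on the physical screen for both views, which is the "what does work" case above.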

Cue Mismatch: Accommodation / Convergence (diagram of the display screen)

Position Dependence (without head-tracking)

Interocular Dependence (diagram: modeled point vs. perceived point on the projection plane, true eyes vs. modeled eyes)

Two View Points with Head-Tracking (diagram: modeled point vs. perceived points on the projection plane, modeled eyes vs. true eyes)

Stereoscopic Display ► Stereoscopic images are easy to do badly, hard to do well, and impossible to do correctly.

Stereoscopic Displays
► Stereoscopic display systems present each eye with a slightly different view of a scene.
  Time-parallel – both images at the same time
  Time-multiplexed – the two images one right after another

Time-Parallel Stereoscopic Display
Two Screens
► Each eye sees a different screen
► An optical system directs the correct view to each eye
► HMD stereo
Single Screen
► Two different images are projected
► The images are colored or polarized "differently"
► The user wears glasses that filter the images so the L eye sees only the L image and the R eye only the R image

Passive Polarized Projection
► Linear Polarization
  Ghosting increases when you tilt your head
  Reduces image brightness by about ½
  Potential problems with multiple screens
► Circular Polarization
  Reduces ghosting
  Reduces brightness
  Reduces crispness

Problem with Polarization Technology for Multiple Screens
► With linear polarization, the separation of the left- and right-eye images depends on the orientation of the glasses with respect to the projected image.
► The floor image cannot be aligned with both the side screens and the front screens at the same time.
► Solution?

Time-Multiplexed Display
► Left- and right-eye views of an image are computed
► They are alternately displayed on the screen
► A shuttering system occludes the right eye while the left-eye image is being displayed, and vice versa

Shutter Glasses

Ghosting
► Some of the L image is visible to the R eye and vice versa
► Not a problem for HMDs – why?
► Roughly an equal problem for polarized and shuttered glasses
► Pixel persistence
► Vertical screen position of the image

Other stereo limiting factors
► Right- and left-eye images do not match in color, size, or vertical alignment
► Distortion caused by the optical system
► Resolution
► HMD interocular settings
► Computational model does not match the viewing geometry

Summary
► Monoscopic – interposition is the strongest.
► Stereopsis is very strong.
► Relative motion is also very strong (or stronger).
► Physiological cues are the weakest (we don't even use them in VR!)

Stereo-Capable HMDs
► Each has some way to send two independent signals
  Usually dependent on both the graphics card and the API
► The lab has 4 different HMDs, each with different requirements to produce stereo
► Head tracking is a separate concept

Emagin Z800
► Must use the machine with the GeForce 7900 GPU (on the back wall) to get quality stereo
► NVIDIA API (old version) to set convergence and separation
► Must be 800x600 @ 60 Hz
► Controlled internally: frame-sequential LRLRLRLR…
► Best resolution, best color, best head-tracking, best comfort; worst hardware requirement

Vuzix VR920
► Use their API
► Must hack Ogre or use OpenGL or DirectX directly
► 640x480, frame-interleaved LRLRLRLR…
► Very uncomfortable, bad FOV, but has internal head tracking

Vuzix Wrap 920
► Uses an internal API (like the VR920)
► Uses the side-by-side stereo (two viewports) setting in the control box
► 640x480, as two 320x480 viewports
► Coolest looking, worst FOV
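A minimal sketch of the side-by-side (two-viewport) layout described above for a 640x480 frame; drawLeftView and drawRightView are hypothetical placeholders for the application's own per-eye rendering.

```cpp
// Illustrative sketch: render the two eye views into the left and right
// halves of a single 640x480 frame (two 320x480 viewports).
#include <GL/gl.h>

void drawLeftView();    // placeholder: render the scene from the left eye
void drawRightView();   // placeholder: render the scene from the right eye

void renderSideBySide() {
    glViewport(0, 0, 320, 480);      // left half of the frame
    drawLeftView();

    glViewport(320, 0, 320, 480);    // right half of the frame
    drawRightView();
}
```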

VR Research V6
► Dual input (L and R connections)
  Must have two outputs on the graphics card
► 1280x480 in horizontal-span window mode
  Two 640x480 viewports side by side
► Two 640x480 fullscreen windows
► Best FOV (60 degrees)

Other stereo options
► Two ViewSonic projectors and an Asus 3D monitor
  1024x768 @ 120 Hz, frame-interleaved
  Use the newer NVIDIA API (convergence and separation) or quad-buffered OpenGL (or hacked Ogre)
  Must use a newer NVIDIA (desktop) GPU or a hacked laptop driver
  Use NVIDIA 3D Vision shutter glasses
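A minimal sketch of the quad-buffered OpenGL path mentioned above, assuming the context was created with a stereo-capable pixel format; drawLeftView and drawRightView are hypothetical placeholders.

```cpp
// Illustrative sketch: quad-buffered ("OpenGL stereo") rendering. Each frame
// is drawn twice, once into the left back buffer and once into the right.
#include <GL/gl.h>

void drawLeftView();    // placeholder: render the scene from the left eye
void drawRightView();   // placeholder: render the scene from the right eye

void renderStereoFrame() {
    glDrawBuffer(GL_BACK_LEFT);
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    drawLeftView();

    glDrawBuffer(GL_BACK_RIGHT);
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    drawRightView();

    // A single buffer swap then presents both images; the driver and the
    // shutter glasses take care of showing each buffer to the correct eye.
}
```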

Anaglyph Stereo
► Red/cyan glasses
► Render the scene twice
  First pass – left eye – convert to grayscale, then to red (R channel)
  Second pass – right eye – convert to grayscale, then to cyan (G and B channels)
► Worst effect
► Easiest to deploy
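A minimal CPU-side sketch of the compositing step described above: the left view's grayscale value fills the red channel and the right view's fills green and blue (cyan). The image layout and type names are illustrative assumptions; in practice this would usually be done in a shader or post-process pass.

```cpp
// Illustrative sketch: combine two grayscale eye images (same size, row-major)
// into one red/cyan anaglyph image.
#include <cstddef>
#include <cstdint>
#include <vector>

struct RGB { std::uint8_t r, g, b; };

std::vector<RGB> composeAnaglyph(const std::vector<std::uint8_t>& leftGray,
                                 const std::vector<std::uint8_t>& rightGray) {
    std::vector<RGB> out(leftGray.size());
    for (std::size_t i = 0; i < leftGray.size(); ++i) {
        out[i] = { leftGray[i],    // left eye  -> red channel
                   rightGray[i],   // right eye -> green channel
                   rightGray[i] }; // right eye -> blue channel (green + blue = cyan)
    }
    return out;
}
```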