Wk 9: AR/VR  Immersion  Stereo  Head motion  Sensors  Sensor fusion  Hands-on VR demo

Immersion
 The sense of 'being there': how much of the user's overall experience is due to the game/app/artefact
 Physical: sensory immersion + control
 Emotional: story + relatedness
 Cognitive: believability + challenge
 Only one is necessary for immersion, but breaking one breaks all

Reminder: 3D reconstruction
 Viewer task: given a 2D image of the world, reconstruct the 3D scene that is out there: shape, size and relative positioning of objects
 Made easier by:
–Two eyes
–Consistent changes over time
–Other internal information: eyes' position, balance, etc.
–Previous knowledge: laws of physics, object sizes, etc.
 More 3D – more physical immersion

Computer Graphics
 Fool the eye: enable the viewer to reconstruct a 3D scene that is not there
 'Depth cues' are features of the input that allow this reconstruction

Depth cues: common
 Occlusion: Z-buffer
 Size: perspective projection
 Shading: diffuse lighting
 Perspective: perspective projection
 Coherent motion: camera and animation
 Fade to blue and fog: shaders
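A minimal sketch of the size cue under a pinhole perspective projection (unit focal length assumed, names illustrative): the projected width of the same object falls off as 1/z, which is what the visual system reads as depth.

```python
import numpy as np

def project(point, f=1.0):
    """Pinhole perspective projection: screen coords shrink with depth z."""
    x, y, z = point
    return np.array([f * x / z, f * y / z])

# The same 1-unit-wide object at increasing depths: its projected
# width (the 'size' depth cue) halves every time the depth doubles.
for z in (2.0, 4.0, 8.0):
    left = project((-0.5, 0.0, z))
    right = project((0.5, 0.0, z))
    print(f"depth {z}: projected width = {right[0] - left[0]:.3f}")
```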

Depth cues: rare and epic
 Stereo disparity: various stereo hardware
 Self-motion: motion tracking
 Convergence: eye tracking, light-field displays
 Accommodation: holodecks

Cue conflicts
 If cues hint at conflicting 3D interpretations of the scene, the conflict is:
 Either resolved
–Some depth cues are stronger than others
 Or not resolved, which is
–Annoying
–Immersion-breaking
–Worst-case scenario: cybersickness

Cybersickness
 Aka simulation sickness
–First researched during USAF pilot training
 Caused by a discrepancy between expected (from proprioception) and perceived scene motion
 The body assumes it has been poisoned and instinctively tries to rid itself of the poison
 To be avoided

Depth Cue I: Stereo
 To provide stereo depth cues, the renderer has to present a different image to each eye
 Relatively easy in software: just render the scene twice, from two slightly offset cameras
 Hardware: must present the two images to the eyes separately
(images mainly from mtbs3d.com)
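A minimal sketch of the 'render twice' idea: derive the two eye positions by offsetting the camera by half the interpupillary distance along its right vector. The render/view_from calls in the comment are hypothetical, standing in for whatever rendering pipeline is in use.

```python
import numpy as np

def eye_positions(cam_pos, cam_right, ipd=0.064):
    """Split one camera into a stereo pair, assuming ~64 mm between the eyes."""
    half = 0.5 * ipd * cam_right / np.linalg.norm(cam_right)
    return cam_pos - half, cam_pos + half   # left eye, right eye

cam_pos = np.array([0.0, 1.7, 0.0])
cam_right = np.array([1.0, 0.0, 0.0])
left, right = eye_positions(cam_pos, cam_right)
# Then render the scene twice, once from each eye:
# render(view_from(left)); render(view_from(right))   # hypothetical API
```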

Physical separation I

Physical separation II

Time separation
 Shutter glasses, synchronised with the display refresh rate
–Halves the effective refresh rate per eye (e.g. a 120 Hz display delivers 60 Hz to each eye)

Wavelength separation
 Anaglyph: simple and cheap, but loses colour
 Dolby 3D: two slightly different RGB wavelength sets
–No loss of colour, but requires complicated and expensive glasses
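A sketch of red/cyan anaglyph compositing, assuming left_rgb and right_rgb are pre-rendered stereo frames as HxWx3 arrays: the red filter passes only the left image's red channel, the cyan filter only the right image's green and blue.

```python
import numpy as np

def make_anaglyph(left_rgb, right_rgb):
    """Red/cyan anaglyph: red channel from the left image,
    green and blue from the right. Inputs are HxWx3 arrays."""
    out = right_rgb.copy()
    out[..., 0] = left_rgb[..., 0]   # replace red with the left eye's red
    return out
```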

Polarisation separation
 Electromagnetic waves oscillate in a plane perpendicular to the direction of travel
 That oscillation can be constrained, and the two eyes' images separated by it:
–Linear (horizontal/vertical): sensitive to head orientation
–Circular (clockwise/anti-clockwise): head orientation does not matter

Polarised (passive) stereo projection
 Two projectors
 Per-pixel polarisation
–Interlaced, checkerboard, etc.
 Per-frame polarisation
 Used by IMAX, RealD, etc.

Depth cue II: Head tracking
 Given the position of the head in the real world, adjust the position of the virtual camera
 Provides a strong depth cue: the scene changes in response to self-motion ('fishtank VR')

Implementation
 Recalculate the view and projection matrices each frame
 Static screens: the eye moves while the image plane stays in place
 Helmets: the eye and the image plane move together
 Most likely to cause cybersickness due to:
–Slow or inaccurate tracking
–Lag between tracker update and scene refresh
–Both especially pronounced during fast head movements
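A sketch of the per-frame update: rebuild a standard look-at view matrix from the tracked head position every frame. The tracker.read() call and scene_centre are hypothetical placeholders for the real tracking API and scene.

```python
import numpy as np

def look_at(eye, target, up=np.array([0.0, 1.0, 0.0])):
    """Build a right-handed view matrix for the given eye position."""
    f = target - eye
    f = f / np.linalg.norm(f)                      # forward
    s = np.cross(f, up); s = s / np.linalg.norm(s) # right
    u = np.cross(s, f)                             # true up
    view = np.eye(4)
    view[0, :3], view[1, :3], view[2, :3] = s, u, -f
    view[:3, 3] = -view[:3, :3] @ eye              # translate world to eye
    return view

# Each frame: read the tracker, rebuild the view matrix, re-render.
# head_pos = tracker.read()                 # hypothetical tracker API
# view = look_at(head_pos, scene_centre)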

Sensor fusion
 Given:
–Readings from various sensors
–A noise estimate for each sensor type
 Calculate cleaner data by:
–Smoothing
–Weighting each sensor by its reliability (1/noise)
–Kalman filtering
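A minimal sketch of the reliability-weighting idea: fuse several noisy estimates of the same quantity by weighting each with the inverse of its noise variance. The sensor names and numbers below are illustrative only.

```python
def fuse(readings, variances):
    """Combine noisy estimates of one quantity, weighting each
    sensor by its reliability (1 / noise variance)."""
    weights = [1.0 / v for v in variances]
    total = sum(weights)
    return sum(w * r for w, r in zip(weights, readings)) / total

# e.g. gyro-derived angle (drifty, var 4.0) + accelerometer angle (var 1.0):
angle = fuse([31.0, 29.5], [4.0, 1.0])   # 29.8, closer to the less noisy sensor
```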

Hardware: trackers
 Internal (accelerometers, compasses, gyros)
–Nintendo Wii, mobile phones, etc.
–Problem: drift
 External (active or passive)
–Optical, infra-red, magnetic, ultrasound, etc.
–Problems: occlusion, interference from other sources
 Integrated: internal + external
–Vicon (+ passive optical), Wii MotionPlus (+ active IR), InterSense (+ passive ultrasound)
–Self-correction from two sources: basic sensor fusion

Hardware: trackerless
 PS Eye:
–Video capture and analysis
 Kinect:
–Projects an IR grid onto the environment
 FaceAPI: face capture only

Aside: the Kalman filter
 A way to estimate the next state of a system based on:
–Measurements (noisy)
–The previous state (uncertain)
–The system dynamics (a limited model)
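A minimal one-dimensional sketch of a single Kalman predict/update cycle, assuming a static-state dynamics model; the process and measurement noise values q and r are illustrative.

```python
def kalman_step(x, p, z, q=0.01, r=0.5):
    """One predict/update cycle for a scalar state.
    x, p: previous state estimate and its variance (uncertain)
    z, r: new measurement and its noise variance (noisy)
    q:    process noise, standing in for the limited dynamics model
    """
    # Predict: static-state model, so only the uncertainty grows
    p = p + q
    # Update: blend prediction and measurement by the Kalman gain
    k = p / (p + r)          # gain: how much to trust the measurement
    x = x + k * (z - x)      # corrected state estimate
    p = (1.0 - k) * p        # reduced uncertainty
    return x, p
```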

'Aware' devices
 Position in the world
–Local, global, relative to something
 World around them
–Scene, objects, agents
 User/users
–Position, identity, preferences

Oculus Rift
 Fast: tracking data is sent directly to the GPU
–Orientation only in DK1; position + orientation in DK2
 Lightweight
–Single screen
–Minimal optics; the lens distortion is corrected by pre-distorting the rendered image in a shader
 DK2: low-persistence display
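A sketch of the shader-side correction idea, assuming a simple radial (barrel) distortion model applied to normalised screen coordinates; k1 and k2 are illustrative values, not the Rift's actual coefficients. The lens's pincushion distortion then cancels the pre-applied barrel distortion.

```python
def barrel_predistort(u, v, k1=0.22, k2=0.24):
    """Radially pre-distort normalised screen coords (lens centre at 0,0)
    so the headset's pincushion-distorting lens cancels it out.
    k1, k2 are illustrative only."""
    r2 = u * u + v * v
    scale = 1.0 + k1 * r2 + k2 * r2 * r2
    return u * scale, v * scale
```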

VR/AR
 As a spectrum
 As a dichotomy