Immersive Displays: The Other Senses…
Classic Human Sensory Systems
Sight (visual)
Hearing (aural)
Touch (tactile)
Smell (olfactory)
Taste (gustatory)
Relevance to VR
#1 – Sight
#2 – Hearing
#3 – Touch
#4 – Smell
#5 – Taste
Senses 1–3 are well studied but still have plenty of research left
Senses 4 and 5 are incredibly difficult, but some examples exist
Other Relevant Sensors
Temperature sensors
Proprioceptive sensors (gravity)
Stretch sensors found in muscles, skin, and joints
Vestibular (inner ear) sensors
Which of these can we control in VR?
◦ Cue conflicts cause nausea and vomiting
Audio (Sound Rendering)
The easiest way to improve a VR system
◦ Think of watching a movie without sound
Easy to use (sound APIs)
Cheap to produce great results (headphones under $100)
Audio Displays
An arrangement of speakers
◦ Spatially fixed – loudspeakers (many types)
◦ Head-mounted – headphones (many types)
Speaker quality affects the range of frequencies and loudness you can generate
◦ Amplifiers are very important for good results
Immersive Audio
Our hearing system can sense the 3D source of a sound
◦ A VR system should produce what the ears would hear from a 3D source
Binaural recordings in real life (like stereoscopic video)
3D sound rendering in the virtual world (like stereoscopic rendering)
◦ Works best with headphones
Head-Related Transfer Function (HRTF)
In the frequency domain, at frequency f:
◦ H(f) = Output(f) / Input(f) (sketched below)
The HRTF depends on the spatial position (X, Y, Z) of the source, or, in the far field, on its direction
The complex shape of the HRTF is caused by the pinnae of the ears
◦ Unique to each person
Each person learns their own HRTF from childhood, which is how we sense a 3D source
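
To make the filtering concrete, a minimal sketch of binaural rendering with a per-ear HRTF. The notation H_L, H_R and the direction (θ, φ) are illustrative, not from the slide:

    % A mono source signal X(f) arriving from direction (theta, phi)
    % is filtered once per ear before playback on headphones:
    \[
    Y_{L}(f) = H_{L}(f, \theta, \phi)\, X(f), \qquad
    Y_{R}(f) = H_{R}(f, \theta, \phi)\, X(f)
    \]
    % Interaural time (phase) and level (magnitude) differences emerge
    % because H_L and H_R differ, and both vary with direction.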
3D Sound Rendering
In the API (what you program):
◦ position, velocity, intensity of each source
◦ position, velocity, *orientation* of the listener
Dependent on your renderer's capabilities:
◦ The HRTF of the actual listener gives the best results
  Measured with molds or in-ear microphones
  The default HRTF is the identity (basically you only get left-right distinction; see the panning sketch below)
◦ Reverb (echoing) or other effects
◦ Speaker arrangement (usually defined in the OS)
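
A sketch of what the "identity HRTF" case amounts to: with no personalized HRTF, the renderer can do little more than amplitude panning based on the source direction relative to the listener's orientation. All names and the panning law here are illustrative, not from any particular API:

    #include <math.h>

    #ifndef M_PI
    #define M_PI 3.14159265358979323846
    #endif

    typedef struct { float x, y, z; } Vec3;

    static Vec3  sub(Vec3 a, Vec3 b) { Vec3 r = {a.x-b.x, a.y-b.y, a.z-b.z}; return r; }
    static float dot(Vec3 a, Vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }
    static Vec3  norm(Vec3 v) {
        float len = sqrtf(dot(v, v));
        Vec3 r = { v.x/len, v.y/len, v.z/len };
        return r;
    }

    /* 'right' is the listener's right-pointing unit vector, derived from
     * the listener orientation set in the API.  Outputs gains in [0,1]. */
    void pan_gains(Vec3 listener_pos, Vec3 right, Vec3 source_pos,
                   float *gain_left, float *gain_right)
    {
        Vec3  dir  = norm(sub(source_pos, listener_pos));
        float side = dot(dir, right);      /* -1 = full left, +1 = full right */
        float p    = 0.5f * (side + 1.0f); /* map to [0,1] */
        /* Constant-power pan keeps perceived loudness steady across angles. */
        *gain_left  = cosf(p * (float)M_PI / 2.0f);
        *gain_right = sinf(p * (float)M_PI / 2.0f);
    }

Note that this ignores elevation and front/back cues entirely, which is exactly why a measured, personal HRTF sounds so much better.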
Sound API
OpenAL and DirectSound are popular
◦ Sort of like OpenGL and Direct3D
An API for talking to a 3D sound renderer (usually hardware)
◦ Similar to the idea of OpenGL
Allows you to load sounds (utility toolkit), specify 3D sound properties, and specify listener properties
◦ Must use single-channel sound files! Multi-channel sound files do not make sense here; the renderer "generates" the multi-channel sound
Example below
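
A minimal OpenAL sketch of the pattern above. Error handling and actual PCM loading are omitted, and the numeric positions are placeholder values:

    #include <AL/al.h>
    #include <AL/alc.h>

    int main(void)
    {
        /* Open the default device and make a context current. */
        ALCdevice  *device  = alcOpenDevice(NULL);
        ALCcontext *context = alcCreateContext(device, NULL);
        alcMakeContextCurrent(context);

        /* A buffer must hold MONO data to be spatialized -- hence the
         * single-channel requirement above. */
        ALuint buffer, source;
        alGenBuffers(1, &buffer);
        /* alBufferData(buffer, AL_FORMAT_MONO16, pcm, pcm_bytes, 44100);
           ...load the PCM samples with your utility toolkit of choice... */

        /* 3D properties of the source. */
        alGenSources(1, &source);
        alSourcei(source, AL_BUFFER, buffer);
        alSource3f(source, AL_POSITION, 2.0f, 0.0f, -1.0f);
        alSource3f(source, AL_VELOCITY, 0.0f, 0.0f,  0.0f);
        alSourcef (source, AL_GAIN, 1.0f);              /* intensity */

        /* Listener: position, velocity, orientation ("at" then "up"). */
        ALfloat orientation[6] = { 0.0f, 0.0f, -1.0f,   0.0f, 1.0f, 0.0f };
        alListener3f(AL_POSITION, 0.0f, 0.0f, 0.0f);
        alListener3f(AL_VELOCITY, 0.0f, 0.0f, 0.0f);
        alListenerfv(AL_ORIENTATION, orientation);

        alSourcePlay(source);   /* the renderer generates the L/R channels */

        /* ...per frame: update source/listener state from head tracking... */

        alcMakeContextCurrent(NULL);
        alcDestroyContext(context);
        alcCloseDevice(device);
        return 0;
    }

In a VR application the listener position and orientation would be updated every frame from the head tracker, which is what makes the sound field appear stable in the world.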
Haptics (Touch Rendering)
Reproduction of forces exerted on the human body
◦ Striking a surface (e.g. hitting a ball)
◦ Holding an object
◦ Texture of a surface
Lack of touch rendering is the #1 problem in VR systems
◦ Enormous actuation area: the entire surface of the human body
◦ Existing solutions are encumbering and task-specific
Categories of Haptic Displays
Passive vs. active
◦ Passive – can stop motion but cannot create it
◦ Active – can generate motion
Fixed vs. sourceless
◦ Fixed – mounted to the environment (e.g. a wand)
◦ Sourceless – mounted to the user (e.g. a glove)
Forces, torques, vibrations
◦ The types of output a haptic device can be capable of
Haptic Rendering
Specify forces, torques, and rotations at actuation points (most commonly just one)
APIs are available:
◦ From the manufacturer
◦ OpenHL?
Very similar to physics rendering, except much more difficult
◦ Requires extremely high update rates (1000 Hz for imperceptibility); see the sketch below
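
A sketch of why the update rate matters: a penalty-based servo loop that turns penetration depth into a restoring force once per millisecond. The device functions and the stiffness value are hypothetical; real devices ship their own SDKs:

    #include <stdbool.h>

    typedef struct { float x, y, z; } Vec3;

    /* Hypothetical device interface. */
    Vec3 device_read_position(void);     /* stylus tip position, meters */
    void device_set_force(Vec3 f);       /* commanded force, newtons */
    void wait_for_next_tick_1khz(void);  /* 1 ms servo clock */

    int main(void)
    {
        /* Penalty method against a virtual floor at y = 0: when the tip
         * penetrates the surface, push back like a stiff spring (F = -k*d).
         * Run at ~1000 Hz so the wall feels rigid rather than spongy. */
        const float stiffness = 800.0f;  /* N/m, illustrative value */

        while (true) {
            Vec3 tip   = device_read_position();
            Vec3 force = { 0.0f, 0.0f, 0.0f };

            if (tip.y < 0.0f)                    /* below the floor plane */
                force.y = -stiffness * tip.y;    /* spring pushes back up */

            device_set_force(force);
            wait_for_next_tick_1khz();  /* miss this deadline and the
                                           surface feels soft or buzzy */
        }
    }

This is the same spring-force idea a physics engine uses for contact, but where graphics tolerates 60 Hz, a haptic loop that drops to even a few hundred hertz becomes unstable or mushy to the hand.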