Cosc 6326/Psych6750X Enabling Technology for Advanced Displays
Virtual reality and other advanced interactive displays
– simulate and maintain a model of the world to be created or augmented
– present or display the world to the user (displays and effectors)
– sense the actions of the user and the environmental state so the simulation can react (sensors)
A typical VR system has
– sensors to collect information about the actions of the user
– a processor to collect this information, model the virtual world, and generate the output for the display devices
– displays and other sensory stimulators to generate the sensory input provided to the user
The sensation-perception-action cycle of the user is an integral part of a VR system. Normally, when one acts in the world, feedback from the senses confirms the expected result. Current VR systems have serious limitations on the ability to create high-fidelity, realistic virtual worlds.
In a sense, VR closes the user's sensorimotor loop (a minimal sketch of the loop follows below):
– the user acts in the world
– the simulation detects the action using sensors
– feedback is provided by the simulation via the displays
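A minimal sketch of this closed sense-simulate-render loop in Python. The read_sensors, simulate, and render functions are hypothetical stubs standing in for real device drivers and a renderer, and the 60 Hz target is only an illustrative choice:

```python
import time

def read_sensors():
    """Poll trackers, buttons, etc. (hypothetical stub)."""
    return {"head_pose": (0.0, 1.7, 0.0), "button": False}

def simulate(state, inputs, dt):
    """Advance the virtual world in response to the user's actions."""
    state["t"] += dt
    state["head_pose"] = inputs["head_pose"]
    return state

def render(state):
    """Generate the sensory feedback (graphics, audio, haptics)."""
    pass  # a real system would draw the frame for the current head pose

state = {"t": 0.0}
TARGET_DT = 1.0 / 60.0                      # illustrative 60 Hz feedback target
for _ in range(600):                        # ten seconds of the closed loop
    frame_start = time.perf_counter()
    inputs = read_sensors()                 # 1. user acts; sensors detect it
    state = simulate(state, inputs, TARGET_DT)   # 2. simulation reacts
    render(state)                           # 3. displays close the loop
    elapsed = time.perf_counter() - frame_start
    time.sleep(max(0.0, TARGET_DT - elapsed))    # hold the frame rate
```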
1. Simulation and Image/Display Generation
Hardware
– need to provide real-time updates to the user
– processor speeds and technology have improved exponentially, although modelled VR worlds are still limited
– recent trend: move from 'big iron' to clusters of PCs
Software:
– input, simulation, rendering
– often done in parallel loops (more parallelisation possible)
– input loop handles interfacing with the sensors to get the current state
– simulation loop: for each time interval, simulate the behaviour of objects in the virtual environment
  – physical behaviour, reaction to user actions, higher-level behaviour (intelligent entities, avatars ...), collision detection ...
  – real time: feedback to the user must be timely (e.g. 60 Hz); see the fixed-timestep sketch below
  – distributed, multiprocessor pipelines ...
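A common way to keep the simulation loop real time is a fixed-timestep accumulator: simulated time advances in constant steps driven by wall-clock time, so object behaviour keeps up even when rendering stalls. A minimal sketch; the 120 Hz step, toy world, and 5 ms "render" delay are all assumptions:

```python
import time

SIM_DT = 1.0 / 120.0          # assumed fixed physics step

def step_objects(world, dt):
    """Integrate object behaviour: physics, collisions, avatars ..."""
    for obj in world["objects"]:
        obj["pos"] = [p + v * dt for p, v in zip(obj["pos"], obj["vel"])]

world = {"objects": [{"pos": [0.0, 1.0, 0.0], "vel": [0.1, 0.0, 0.0]}]}
accumulator = 0.0
previous = time.perf_counter()
for frame in range(120):                  # stand-in for the render loop
    time.sleep(0.005)                     # pretend rendering took ~5 ms
    now = time.perf_counter()
    accumulator += now - previous
    previous = now
    # run as many fixed steps as wall-clock time demands, so simulated
    # time keeps up with real time even when rendering is slow
    while accumulator >= SIM_DT:
        step_objects(world, SIM_DT)
        accumulator -= SIM_DT
print(world["objects"][0]["pos"])         # object advanced ~0.06 m in x
```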
– rendering loop: generate the displays to present: graphics, haptics, audio
  – modern raster graphics has a number of stages to convert the world model to a raster image (a worked example follows below):
    – transformation, projection
    – lighting, shading
    – texture mapping
    – rasterisation
    – anti-aliasing
    – visibility, clipping, culling ...
  – recently, substantial hardware support on fast, low-cost graphics cards: the 'graphics pipeline'
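A sketch of the transformation and projection stages only, pushing one vertex through an OpenGL-style pipeline by hand; the frustum, viewport size, and vertex are illustrative assumptions:

```python
import numpy as np

v_world = np.array([0.5, 0.5, -2.0, 1.0])   # homogeneous world-space vertex

# transformation: world -> eye space (camera assumed at origin, looking down -z)
view = np.eye(4)

# projection: symmetric perspective frustum, 90 deg FOV, near = 0.1, far = 100
near, far = 0.1, 100.0
f = 1.0                                      # 1 / tan(fov/2) with fov = 90 deg
proj = np.array([
    [f,   0.0, 0.0,                           0.0],
    [0.0, f,   0.0,                           0.0],
    [0.0, 0.0, (far + near) / (near - far),   2 * far * near / (near - far)],
    [0.0, 0.0, -1.0,                          0.0],
])

clip = proj @ view @ v_world                 # clip-space coordinates
ndc = clip[:3] / clip[3]                     # perspective divide -> [-1, 1]^3

# rasterisation setup: map normalised device coords to a 640x480 viewport
width, height = 640, 480
x_pix = (ndc[0] * 0.5 + 0.5) * width
y_pix = (1.0 - (ndc[1] * 0.5 + 0.5)) * height   # flip y for raster convention
print(f"vertex lands at pixel ({x_pix:.0f}, {y_pix:.0f})")   # -> (400, 180)
```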
2. Displays and Effectors
Low-end HMDs
– targeted for personal entertainment (games, DVD, ...)
– Sony Glasstron, Olympus Eye-Trek
– currently NTSC, PAL, VGA resolution. HDTV?
VR HMDs
– Sutherland's HMD was boom-supported; often need free head motion
– characterizing HMDs:
  – configuration: projection versus direct viewing
  – optics: simple magnifier vs. compound microscope
  – display image source: CRT, DLP, LCD ...
  – opaque or see-through
VR HMD, projection type
– head-mounted optics; external electronics and projection display
– CAE FOHMD:
  – images generated by high-resolution data projectors
  – coherent fibre-optic bundle and optics direct the image to the eyes
Direct viewing: many modern HMDs use head-mounted miniature displays
– CRT: e.g. N-Vision, Kaiser (KEO)
– LCD: e.g. Virtual Research, KEO
– laser retinal scanning
– DMDs
– FEDs ...
Some HMDs
HMD Optics
– simple magnifier:
  – single magnifying lens, short optical path
  – no exit pupil formed
  – simple, inexpensive
– compound optics:
  – several lenses: eyepiece, objective
  – exit pupil formed; must align with the eye's entrance pupil
  – more complicated, longer optical path; permits focusing
(a small magnifier calculation follows below)
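A rough illustration of the simple-magnifier case, with made-up numbers: a 25 mm wide microdisplay behind a 35 mm focal-length lens, display at the focal plane so the virtual image appears at optical infinity:

```python
import math

# assumed: 25 mm microdisplay at the focal plane of a 35 mm magnifier
display_width_mm = 25.0
focal_length_mm = 35.0

# angular size of the virtual image = the channel's horizontal FOV
half_angle = math.atan((display_width_mm / 2.0) / focal_length_mm)
fov_deg = math.degrees(2.0 * half_angle)

# conventional angular magnification relative to the 250 mm near point
magnification = 250.0 / focal_length_mm

print(f"FOV = {fov_deg:.1f} deg, magnification = {magnification:.1f}x")
# -> roughly 39.3 deg and 7.1x
```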
See-through HMD capability: non-see-through
– no need for an optical combiner
– eye sees only the virtual image
– pure virtual reality applications
See-through HMD capability: optical see-through
– images of the real and virtual worlds optically superimposed
– needs an optical combiner (transmission ratio?)
– useful for AR, wearables; similar technology to head-up displays
– distortions and time lags are a problem
– direct view of the real world
See-through HMD capability: video see-through
– non-see-through HMD plus 'scene' cameras
– the virtual world is superimposed on a video image of the real world
– electronic (not optical) combiner; a minimal compositing sketch follows below
– can match time delays and distortions
– system has access to the user's view
– low-resolution image of the real world
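A minimal sketch of the electronic combiner: the renderer's virtual pixels replace camera pixels wherever a coverage mask is set. Frame sizes, colours, and the binary mask are all made up for illustration:

```python
import numpy as np

camera = np.full((4, 4, 3), 100, dtype=np.uint8)   # video of the real world
virtual = np.zeros((4, 4, 3), dtype=np.uint8)
mask = np.zeros((4, 4), dtype=bool)                # renderer's coverage mask
virtual[1:3, 1:3] = (0, 255, 0)                    # a green virtual object
mask[1:3, 1:3] = True

# electronic combiner: per-pixel select; because both images are digital,
# delays and distortions can be matched before combining
composite = np.where(mask[..., None], virtual, camera)
print(composite[:, :, 1])                          # green channel of the output
```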
Figures of merit / design factors
– field of view
– resolution (tradeoff with FOV; see the sketch below)
– luminance, contrast
– colour
– monocular, biocular, binocular
– exit pupil size, eye relief, adjustments (interpupillary distance, focus)
– distortion
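The FOV-resolution tradeoff can be made concrete: spreading a fixed panel over a wider field lowers angular resolution. A sketch assuming a hypothetical 1280-pixel-wide panel behind optics of different power:

```python
def pixels_per_degree(h_pixels, h_fov_deg):
    """Average angular resolution across the field (uniform approximation)."""
    return h_pixels / h_fov_deg

# same hypothetical 1280-pixel panel, three different fields of view
for fov in (30.0, 60.0, 100.0):
    print(f"FOV {fov:5.1f} deg -> {pixels_per_degree(1280, fov):5.1f} px/deg")
# wider FOV with the same panel means coarser angular resolution;
# foveal acuity corresponds to roughly 60 px/deg
```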
Projection-based displays
– walls:
  – large-screen interactive displays
  – suggested for collaborative design
  – curved screen, flat, wrap-around, dome (e.g. Elumens Vision Dome)
– desks: ImmersaDesk (University of Illinois EVL), ...
CAVE / CAVE-like
– University of Illinois EVL, Fakespace, Trimension (ReaCToR), Mechdyne (SSVR)
(image: Fakespace)
CAVE
Reconfigurable CAVE: RAVE
Large-format immersive displays
– large-format film, domes, planetariums, ride simulators
– SEOS, Trimension, Spitz, Disney Quest, IMAX
– immersive but often not very interactive (large groups)
– used in simulators; very expensive for VR
– Mechdyne V-Dome has been used for VR
V-Dome
Projection technology issues
– projectors:
  – cathode ray tube (CRT)
  – digital light processing (DLP)
  – D-ILA
  – liquid crystal display (LCD)
  – laser
– screens:
  – material: glass, fabric, plastic, fog!
  – reflectivity, gain, polarisation
  – inter-reflection (black screens)
– structure:
  – single vs. multiple
  – tiling, blending (see the edge-blending sketch below)
  – colour and luminance matching/uniformity
– support for stereopsis
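When tiling multiple projectors, overlapping edges are usually blended with complementary attenuation ramps so the summed light stays uniform. A minimal sketch, assuming an 8-pixel overlap and a 2.2 projector gamma (both illustrative):

```python
# complementary attenuation ramps across a tiled-projector overlap; the
# 1/gamma exponent pre-compensates the projectors' nonlinear response so
# the *light* (not the pixel values) sums to a constant
overlap_px = 8          # assumed overlap width
gamma = 2.2             # assumed projector gamma

for x in range(overlap_px + 1):
    t = x / overlap_px                      # 0..1 across the overlap
    left = (1.0 - t) ** (1.0 / gamma)       # left projector's ramp
    right = t ** (1.0 / gamma)              # right projector's ramp
    total_light = left ** gamma + right ** gamma
    print(f"x={x}: left={left:.2f} right={right:.2f} light={total_light:.2f}")
```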
Audio displays:
– stereophonic, surround sound
– spatial sound displays (a simple ITD calculation follows below)
– sound modelling and synthesis
Haptics, tactile displays ... more on these later ...
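One of the main cues a spatial sound display must reproduce is the interaural time difference (ITD). A sketch using Woodworth's spherical-head approximation, with an assumed average head radius:

```python
import math

SPEED_OF_SOUND = 343.0    # m/s in air
HEAD_RADIUS = 0.0875      # m, assumed average head radius

def interaural_time_difference(azimuth_deg):
    """Woodworth's spherical-head approximation of the ITD, in seconds."""
    theta = math.radians(azimuth_deg)
    return (HEAD_RADIUS / SPEED_OF_SOUND) * (theta + math.sin(theta))

for az in (0, 30, 60, 90):
    itd_us = interaural_time_difference(az) * 1e6
    print(f"azimuth {az:2d} deg -> ITD = {itd_us:3.0f} microseconds")
# the ITD grows from 0 at straight ahead to roughly 650 us at 90 deg
```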
3. Sensors
Current sensor technology is particularly rudimentary. Normally the position of only a limited number of joints or limbs is sensed, such as the position of the head and hand. Buttons, joysticks, etc. can also provide input.
Sensors capture only a limited range of the possible motions and have limited resolution. Lag is a major problem with some sensors.
Tracking Technology
To generate the displays, the system needs to know the user's position and orientation. The user's head (hand, body ...) must be tracked in real time in order to respond to head (hand, body ...) motion in real time. Current tracking does not measure all the degrees of freedom possible in human motion. (A simple lag-compensation sketch follows the technology list below.)
– magnetic:
  – pulsed DC, AC
  – earth's magnetic field
– ultrasound
– optical
– GPS (outside)
– mechanical
– gyroscopes, accelerometers
(image: www.3rdtech.com)
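Because tracker lag directly delays the feedback loop, many systems predict the head pose a few tens of milliseconds ahead. A minimal dead-reckoning sketch; a real system would more likely use a Kalman filter, and the sample rate, lag, and positions here are made up:

```python
# dead-reckoning pose prediction to hide tracker + rendering lag
def predict_position(curr, prev, sample_dt, lag):
    velocity = [(c - p) / sample_dt for c, p in zip(curr, prev)]
    return [c + v * lag for c, v in zip(curr, velocity)]

prev_sample = [0.00, 1.70, 0.00]   # head position one sample ago (metres)
curr_sample = [0.01, 1.70, 0.00]   # latest tracker sample
predicted = predict_position(curr_sample, prev_sample,
                             sample_dt=1.0 / 120.0,   # assumed 120 Hz tracker
                             lag=0.050)               # assumed 50 ms total lag
print(predicted)                   # render for where the head *will* be
```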
3D input devices
– a number of 6-degree-of-freedom input devices have been proposed for 3D interaction
– Spaceball, 3D mice, hand/stylus tracking
– isometric versus isotonic: maps to rate versus position control (contrasted in the sketch below)
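A sketch contrasting the two mappings: an isotonic (free-moving) device naturally drives position control, while an isometric (force-sensing) device naturally drives rate control. Gains and inputs are illustrative:

```python
# isotonic (free-moving) device: displacement -> cursor *position*
def isotonic_position(displacement, gain=1.0):
    return gain * displacement

# isometric (force-sensing, Spaceball-style) device: force -> *rate*
def isometric_rate_step(cursor, force, dt, gain=0.5):
    return cursor + gain * force * dt      # integrate a velocity

cursor = 0.0
for _ in range(60):                        # one second of constant force at 60 Hz
    cursor = isometric_rate_step(cursor, force=2.0, dt=1.0 / 60.0)
print(f"rate control after 1 s: {cursor:.2f}")
print(f"position control of the same input: {isotonic_position(2.0):.2f}")
```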
Gloves / motion capture
– one of the early VR input devices was the DataGlove
– typically many degrees of freedom
– additional tracking for position
– animation / gesture recognition (a toy posture recogniser follows below)
– Gypsy, Immersion CyberGrasp
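As a toy illustration of posture recognition over glove data, assuming normalised finger-flex readings (0 = straight, 1 = fully bent) and made-up thresholds:

```python
# toy posture recogniser over normalised finger-flex values;
# thresholds and postures are invented for illustration
def recognise_posture(flex):
    thumb, index, middle, ring, pinky = flex
    if all(f > 0.8 for f in flex):
        return "fist"
    if index < 0.2 and all(f > 0.8 for f in (middle, ring, pinky)):
        return "point"
    if all(f < 0.2 for f in flex):
        return "open hand"
    return "unknown"

print(recognise_posture([0.9, 0.1, 0.9, 0.85, 0.9]))   # -> "point"
```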
Other input technology
– speech recognition
– eye gaze tracking
– gesture recognition
– biopotentials