Underlying Technologies, Part One: Hardware. Mark Green, School of Creative Media.
Introduction: Take a look at the hardware technologies that make it all work; an overview, a survey of what's available, and we may come back to details later. Two types of hardware: output and input.
Output Technologies: Numerous modes: visual, sound, tactile/force, smell. Examine both commercial and research technologies.
Visual: We tend to concentrate on visual, since this is our dominant sense. Aim: fill our visual field of view with computer-generated graphics. A standard monitor doesn't work, since it only covers a portion of the visual field, but it is still a useful tool.
Head Mounted Display: Started with the Head Mounted Display (HMD): two display screens, one for each eye, mounted on the head. Images change as the user moves; the display responds to the user's motions, so moving the head lets you view the complete 3D environment. Not perfect: heavy, low resolution, etc.
Projection Based Displays: A number of display technologies are based on projection. An active area of research: rapidly evolving projection technology and a rapid decrease in price. Easier to use and higher quality than an HMD, but requires more space.
Projectors: Three main projection technologies: CRT, LCD, and DLP. CRT is the oldest technology, based on three CRT tubes (red, green and blue); low brightness, around 240 lumens.
CRT Projectors: Main benefits: high resolution, very fast, very flexible. Problems: hard to maintain, heavy; a dead technology with no evolution; expensive.
LCD Projectors: An LCD panel with a bright projection lamp. The LCD panel is a grid of cells containing liquid crystal; when a voltage is applied to a cell the crystals change their optical properties, producing the image. Main problem: the crystals don't change fast enough. Brightness: 600 - 2400 lumens.
LCD Projectors: Benefits: cheap; small, light, easy to maintain; reasonably bright. Problems: not very fast (20 - 30 Hz); fixed resolution.
DLP Projectors: Digital Light Processing: micro-mirrors built on a chip, where a voltage applied to a mirror causes it to tilt. Shine light on the DLP chip and the mirror direction controls how much light is reflected. A developing technology. Brightness: 2,000 - 10,000 lumens.
DLP Projectors: Benefits: very bright; reasonably fast and easy to maintain; a developing technology that will get better and cheaper. Problems: currently quite expensive; fixed resolution.
Front vs. Back Projection: The projector can be in front of or behind the screen. Back projection: viewers don't shadow the screen, but it requires a lot of space. Front projection: saves a lot of space, but viewers can shadow the screen. A space vs. quality trade-off.
Throw Distance: The distance from the projector to the screen for the best quality image; it varies with the projector and its optics. Rough estimate: twice the largest dimension of the image. It can be reduced by using mirrors to fold the optical path, but this needs high quality mirrors and accurate placement.
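As a quick illustration of that rule of thumb (a rough estimate only, not a substitute for a projector's actual throw-ratio specification), a minimal sketch in Python:

```python
def throw_distance(image_width_m: float, image_height_m: float) -> float:
    """Rough throw distance: twice the largest dimension of the projected image."""
    return 2.0 * max(image_width_m, image_height_m)

# Example: a 3 m x 2.25 m image needs roughly 6 m between projector and screen,
# unless mirrors are used to fold the optical path.
print(throw_distance(3.0, 2.25))  # -> 6.0
```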
CAVE: A room with projected stereo graphics on the walls, with the user's position tracked. Why? Very high resolution, less to wear, multiple users, access to real world objects.
Walls: Project onto a single wall, either flat or curved; may use multiple projectors to give higher resolution, which needs edge blending. Can be quite large, over 25' long. Usually stereo, but not head tracked. Used extensively in the oil industry.
Desks: Back project onto a table top, usually stereo with head tracking; good for fine manipulations. Restricted to a few users: can't do good stereo with more than 2 users. A single projector, so cheaper than a wall or CAVE.
Sound: Before we can discuss sound hardware we need to understand how we hear. There are a number of cues that we use to locate sound sources. One is volume: sound intensity decreases with the square of the distance, and for many sounds we know how loud they normally are, so we can estimate distance.
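A minimal sketch of this loudness cue, assuming a free field where intensity falls off as 1/d^2; the reference intensity for a "known" sound is a made-up illustrative value:

```python
import math

def estimate_distance(measured_intensity: float,
                      reference_intensity: float,
                      reference_distance_m: float = 1.0) -> float:
    """Invert the inverse-square law I ~ 1/d^2 to estimate source distance."""
    return reference_distance_m * math.sqrt(reference_intensity / measured_intensity)

# A sound that normally measures 1.0 at 1 m, but is heard at 1/16 of that,
# is estimated to be about 4 m away.
print(estimate_distance(measured_intensity=0.0625, reference_intensity=1.0))  # -> 4.0
```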
Sound: Another cue is based on comparing what our two ears receive. Unless the sound is directly in front of or behind us, there will be a time difference between the ears, and also a slight difference in volume; this is the main idea behind stereo sound.
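One common way to quantify the time-difference cue is Woodworth's spherical-head approximation; the head radius and speed of sound below are typical textbook values, not measurements of any particular listener:

```python
import math

SPEED_OF_SOUND_M_S = 343.0   # speed of sound in air at room temperature
HEAD_RADIUS_M = 0.0875       # roughly half an average head width

def interaural_time_difference(azimuth_deg: float) -> float:
    """Approximate ITD in seconds for a source at the given azimuth (0 = straight ahead)."""
    theta = math.radians(azimuth_deg)
    return (HEAD_RADIUS_M / SPEED_OF_SOUND_M_S) * (math.sin(theta) + theta)

print(interaural_time_difference(0.0))   # 0.0 s: directly ahead, no difference
print(interaural_time_difference(90.0))  # ~0.00065 s: source fully to one side
```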
Sound: The most important cue comes from the shape of our outer ear. The outer ear filters sound, and the filtering depends on the sound's direction; we learn the filtering performed by our ear and can then judge where sounds are located. We only need one ear for this; two ears really aren't that important.
Sound: The outer-ear filtering is called the HRTF (head-related transfer function). The HRTF can be measured by placing a small microphone in the ear and measuring sound intensities for sound coming from different directions. We can then use the HRTF to filter sound and feed it to each ear separately using headphones.
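Once left- and right-ear impulse responses (HRIRs, the time-domain form of the HRTF) are available for a direction, spatialising a mono signal for headphones is just one convolution per ear. A sketch with NumPy; the impulse responses here are invented placeholders, not measured data:

```python
import numpy as np

def spatialise(mono: np.ndarray, hrir_left: np.ndarray, hrir_right: np.ndarray) -> np.ndarray:
    """Filter a mono signal with per-ear impulse responses; returns (samples, 2) stereo."""
    left = np.convolve(mono, hrir_left)
    right = np.convolve(mono, hrir_right)
    return np.stack([left, right], axis=1)

# Toy data: a noise burst and two made-up 64-tap impulse responses, with the
# far-ear response delayed a few samples to mimic the interaural time difference.
rng = np.random.default_rng(0)
mono = rng.standard_normal(1024)
hrir_left = rng.standard_normal(64) * np.exp(-np.arange(64) / 8.0)
hrir_right = np.roll(hrir_left, 5)
stereo = spatialise(mono, hrir_left, hrir_right)
print(stereo.shape)  # (1087, 2), ready to send to headphones
```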
Sound: If the listener and a sound source are moving relative to each other we get the Doppler effect: sounds moving towards us increase in frequency, sounds moving away decrease in frequency. This is noticeable at around 30 km/h, so it's something we could expect in our virtual environment.
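The shift itself is easy to compute. For a stationary listener and a source moving directly towards (or away from) them, the heard frequency is f * c / (c - v); a small sketch using the slide's 30 km/h figure:

```python
SPEED_OF_SOUND_M_S = 343.0

def doppler_shift(source_freq_hz: float, speed_toward_listener_m_s: float) -> float:
    """Heard frequency for a moving source and a stationary listener."""
    return source_freq_hz * SPEED_OF_SOUND_M_S / (SPEED_OF_SOUND_M_S - speed_toward_listener_m_s)

# 30 km/h is about 8.3 m/s: a 440 Hz source rises to ~451 Hz while approaching
# and falls to ~430 Hz while receding, a shift of a couple of percent.
print(doppler_shift(440.0, 8.3))
print(doppler_shift(440.0, -8.3))
```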
Sound: How do we implement this? Drop the HRTF and the rest is fairly easy for simple acoustic environments: given the listener's position in the VE and the positions of the sound sources, we only need to do some simple filtering, on either the CPU or the sound card; the output then goes to headphones.
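A minimal sketch of what that "simple filtering" might look like without an HRTF: per-source distance attenuation plus a constant-power pan between the two headphone channels. The 1/d gain and the pan law are common choices assumed here, not requirements of any particular system:

```python
import math

def mix_sources(listener_xy, listener_yaw_rad, sources):
    """sources: list of ((x, y) position, sample value) pairs; returns one (left, right) frame."""
    left = right = 0.0
    for (sx, sy), sample in sources:
        dx, dy = sx - listener_xy[0], sy - listener_xy[1]
        dist = max(math.hypot(dx, dy), 0.1)              # clamp to avoid divide-by-zero
        azimuth = math.atan2(dx, dy) - listener_yaw_rad  # source angle relative to facing
        pan = (math.sin(azimuth) + 1.0) / 2.0            # 0 = hard left, 1 = hard right
        gain = 1.0 / dist                                # simple distance attenuation
        left += sample * gain * math.cos(pan * math.pi / 2.0)
        right += sample * gain * math.sin(pan * math.pi / 2.0)
    return left, right

# One source two metres ahead and slightly to the right of the listener:
# the right channel comes out louder, as expected.
print(mix_sources((0.0, 0.0), 0.0, [((0.5, 2.0), 1.0)]))
```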
Sound: If using speakers it's more complex: we need to know the position of the listener in physical space, plus the positions of the speakers, and generate a separate audio feed for each speaker. This typically needs at least 4 speakers, but it depends on the size of the real environment.
Sound: For HRTF we need to use headphones, and we also need an HRTF for the listener; a generic HRTF can be used. This can be done on modern computers without much problem, assuming a simple acoustic environment: no walls, no sound reflection, etc.
Sound: Simulating acoustical environments is more difficult; it typically needs special purpose hardware for real time, usually multiple DSPs, and is restricted to a small number of sound sources and a small number of sound reflectors.
Tactile and Force: Related sensations; both involve the hands and touch. Force involves objects acting on the user's hands, basically a push against the hand. The user must be holding the device that provides the force output, so it is typically both an input and output device, e.g. a force feedback joystick.
Force: A force feedback joystick is like a normal joystick, except motors are attached to the stick, so the stick can be moved under computer control and can react to the user's motion: if the hand strikes a wall the stick stops moving, the stick shakes when a gun fires, etc.
Tactile: The sense of touch, mainly research. Try to stimulate the tactile sense organs in the fingers and hand, usually with some form of vibration: a film in front of a loud speaker, or an array of small pins where each pin can vibrate or move slightly.
Smell: Both easy and hard; not a standard technology! It is fairly easy to generate smells: we just need the right chemicals in small vials. To generate a smell, open the vial and use a small fan to push the smell towards the user, with the opening and closing of the vial under computer control.
Smell: The real problem is stopping the smell; we need some way of getting rid of a smell when it is no longer needed. We could use a large fan, but that probably interferes with the experience and may make too much noise. We also need some way of mixing smells predictably.
Input: To interact with a virtual environment there needs to be some way of sensing our actions. VEs are 3D, so we need to be able to enter 3D data: position and orientation. We also want to manipulate objects, so there are other things we might want.
3D Data: There are a number of devices for 3D data; an important sub-class is trackers, used to determine the position and orientation of body parts. Trackers are used by most output devices, so we will start with them: first look at some important properties, then look at the hardware.
Trackers: Latency is the most important property: the time from when the user moves until the new position is reported to the computer system and the motion is reflected in the display. It must be kept under 0.1 second, otherwise users become very sick.
Trackers: Noise: when the tracker is held still, is there variation in its output? Large amounts of noise decrease usability. Accuracy: does the tracker accurately report its position? Warped spaces are hard to work in, and hand-eye coordination is hard if devices aren't accurate.
Electromagnetic Trackers: A source emits an electromagnetic field and a sensor detects the field; position and orientation are computed from the field measurements. Reasonably accurate, but expensive; limited range, and problems with metal objects. Buttons are sometimes added for interaction.
Hybrid Trackers: Based on combining two existing tracking technologies. Inertial trackers are very fast, based on measuring accelerations; they perform very well over short time periods (several seconds), but drift over longer periods.
Hybrid Trackers: Ultrasound is slow and requires many sound sources for accuracy. Combine the two technologies: use inertial tracking over short time periods, and use ultrasound tracking to correct the inertial tracker periodically. This produces a fast and accurate tracking system.
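A hedged one-dimensional sketch of that combination: dead-reckon with the fast inertial data, then nudge the drifting estimate toward each (slower) ultrasound fix. A real system would more likely use a Kalman filter and track orientation as well; the blend factor here is an arbitrary illustrative value:

```python
class HybridTracker1D:
    """Toy complementary filter: fast inertial integration corrected by ultrasound fixes."""

    def __init__(self, blend: float = 0.2):
        self.position = 0.0
        self.velocity = 0.0
        self.blend = blend  # how strongly each ultrasound fix pulls the estimate back

    def inertial_update(self, acceleration: float, dt: float) -> None:
        """Fast path: integrate accelerometer data (accurate short-term, drifts long-term)."""
        self.velocity += acceleration * dt
        self.position += self.velocity * dt

    def ultrasound_update(self, measured_position: float) -> None:
        """Slow path: correct the drift with an absolute (but infrequent) measurement."""
        self.position += self.blend * (measured_position - self.position)

tracker = HybridTracker1D()
for _ in range(100):                                      # 100 inertial samples at 1 kHz
    tracker.inertial_update(acceleration=0.02, dt=0.001)  # small sensor bias causes drift
tracker.ultrasound_update(measured_position=0.0)          # periodic absolute correction
print(tracker.position)                                   # drift reduced by the fix
```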
Hybrid Trackers: Advantages: fast, accurate, low noise. Problems: expensive; an immature technology that still needs some work.
Video Tracking: Based on using one or more video cameras to determine the user's position. Full 3D requires multiple cameras, though some things can be done with a single camera. The general approach involves multiple cameras in different locations, capturing an image from all of the cameras.
Video Tracking: In each image, find the user and then the points on the user being tracked. Identifying the tracked positions is hard; they may not even be visible, so special optical markers are often used to help identify them. Extract position information from the image; this is usually 2D data.
Video Tracking: Combine the information from all of the cameras to get a 3D position. This is hard to do in real time, especially with high accuracy, so the problem is quite often simplified by using special backgrounds, special lighting, etc.
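The combination step is usually some form of triangulation. A sketch of linear (DLT-style) triangulation from two calibrated cameras using NumPy; the 3x4 projection matrices and image points below are toy values made up for the example:

```python
import numpy as np

def triangulate(P1, P2, xy1, xy2):
    """Recover the 3D point seen at pixel xy1 by camera P1 and at xy2 by camera P2."""
    A = np.stack([
        xy1[0] * P1[2] - P1[0],
        xy1[1] * P1[2] - P1[1],
        xy2[0] * P2[2] - P2[0],
        xy2[1] * P2[2] - P2[1],
    ])
    _, _, vt = np.linalg.svd(A)     # null space of A is the homogeneous 3D point
    X = vt[-1]
    return X[:3] / X[3]             # de-homogenise

# Two toy cameras: identical intrinsics, the second shifted 1 unit along x.
P1 = np.hstack([np.eye(3), np.array([[0.0], [0.0], [0.0]])])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])
point = np.array([0.5, 0.2, 4.0])                           # ground-truth 3D point
xy1 = point[:2] / point[2]                                  # its image in camera 1
xy2 = (point + np.array([-1.0, 0.0, 0.0]))[:2] / point[2]   # its image in camera 2
print(triangulate(P1, P2, xy1, xy2))                        # ~[0.5, 0.2, 4.0]
```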
Video Tracking: Single camera systems can be used in special situations. Face tracking: we know what we are looking at and the range of motion is limited, so with a standard webcam we can do a good job of determining the positions of facial features.
Optical Tracker: Problem: users must suit up for interaction (wear devices, deal with cables on the floor, etc), so casual use is difficult; we would like to just walk into the Cave and use it, with nothing to wear. Observations: it is really a 2D problem, since users don't change height, and high accuracy is not necessary for casual interaction.
Optical Tracker: A floor level laser beam and a camera with a filter tuned to the laser frequency; the camera mainly sees the laser light and doesn't see the graphics. The user's legs interrupt the laser beam: the position of the break gives one axis and the height of the break gives the other. Under $400 to build.
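A hedged sketch of the per-frame image processing such a tracker might do: because of the narrow-band filter, the frame is dark except where the beam hits the user's legs, so the centroid of the bright pixels gives the two coordinates. The threshold and the mapping from pixels to Cave floor coordinates are placeholders that a real setup would calibrate:

```python
import numpy as np

def locate_user(frame: np.ndarray, threshold: int = 200):
    """frame: 2D grayscale image; returns (column, row) of the beam break, or None."""
    bright = np.argwhere(frame > threshold)
    if bright.size == 0:
        return None                    # nobody is interrupting the beam
    row, col = bright.mean(axis=0)     # centroid of the bright blob
    return col, row                    # one floor-plane axis from each image axis

# Toy frame: a single bright patch where the laser hits a leg.
frame = np.zeros((480, 640), dtype=np.uint8)
frame[300:310, 200:210] = 255
print(locate_user(frame))              # ~(204.5, 304.5)
```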
Optical Tracker
3D Data: A number of devices can be used for position and orientation but aren't trackers. A good example is a 3D joystick, where twisting the control stick is used as the third dimension. Reasonably good, but all 3 dimensions aren't treated equally.
Pucks: Similar to joysticks, but easier to use in 3D: a round flat puck instead of a control stick, which can be moved in x, y and z as well as rotated. About the same price as a joystick, but it hasn't really caught on as a game device, so it is not as easy to find.
Mechanical Trackers: Looks like a robot arm; the user controls the end of the arm and can move it freely in 3D space. The end of the arm looks and feels like a pencil; moving it like a pencil lets you draw in 3D space and trace out small 3D objects.
Mechanical Trackers
Body Gestures: Trackers give us a few points on the body, but don't give complete gesture information. To grasp a virtual object we would like to move the hand to the object and grab it, so we need to know what the hand is doing: are the fingers grasping the object? There are several approaches; none are satisfactory.
Gloves: The "Hollywood" VR device; not extensively used now, more of a niche device. Hard to accurately recognize gestures; gloves tend to get dirty and can't be washed; take time to put on; very expensive.
Body Suits: Similar technology to gloves, but covering the entire body; they look like a diving suit. There have been several attempts, but they never quite worked: difficult to get on, hard to calibrate (determining what the body is doing), and they produce large amounts of data that are hard to deal with.
Video: Video techniques have been used for hand gestures. They work well if the hand doesn't move, but it is much more difficult if the user can move around. With a stationary user, one or two cameras can determine most hand gestures in real time.
Speech: Hands free interaction, an extra channel of communication. Good quality, low cost speech recognition is available, with accuracy > 95%. There have been some attempts to use it in VR, with limited success; speech recognition works best in a quiet environment.
Speech: Unfortunately, many VR systems are not quiet enough. The main problem is fan noise: it doesn't bother us much, but it causes problems for speech recognition. We would like to use a wireless microphone, but that also causes problems for speech recognition.