Interacting With Dynamic Real Objects in a Virtual Environment Benjamin Lok February 14th, 2003
Outline
Motivation – why we need dynamic real objects in VEs
Incorporation of Dynamic Real Objects – how we get dynamic real objects into VEs
Managing Collisions Between Virtual and Dynamic Real Objects
User Study – what good are dynamic real objects?
NASA Case Study – applying the system to a driving real-world problem
Conclusion
Assembly Verification Given a model, we would like to explore: –Can it be readily assembled? –Can repairers service it? Example: –Changing an oil filter –Attaching a cable to a payload
Current Immersive VE Approaches Most objects are purely virtual –User –Tools –Parts Most virtual objects are not registered with a corresponding real object. System has limited shape and motion information of real objects.
Ideally Would like: –Accurate virtual representations, or avatars, of real objects –Virtual objects responding to real objects –Haptic feedback –Correct affordances –Constrained motion Example: Unscrewing a virtual oil filter from a car engine model
Dynamic Real Objects Tracking and modeling dynamic objects would: –Improve interactivity –Enable visually faithful virtual representations Dynamic objects can: –Change shape –Change appearance
Thesis Statement Naturally interacting with real objects in immersive virtual environments improves task performance and presence in spatial cognitive manual tasks.
Previous Work: Incorporating Real Objects into VEs Non-Real Time –Virtualized Reality (Kanade, et al.) Real Time –Image Based Visual Hulls [Matusik00, 01] –3D Tele-Immersion [Daniilidis00] Augment specific objects for interaction –Doll's head [Hinkley94] –Plate [Hoffman98] How important is it to get real objects into a virtual environment?
Previous Work: Avatars Self-avatars in VEs –What makes avatars believable? [Thalmann98] –What avatar components are necessary? [Slater93, 94, Garau01] VEs currently have: –Choices from a library –Generic avatars –No avatars Generic avatars > no avatars [Slater93] Are visually faithful avatars better than generic avatars?
Visual Incorporation of Dynamic Real Objects in a VE
Motivation Handle dynamic objects (generate a virtual representation) Interactive rates Bypass an explicit 3D modeling stage Inputs: outside-looking-in camera images Generate an approximation of the real objects (visual hull)
Reconstruction Algorithm
1. Start with live camera images
2. Image subtraction
3. Use the images to calculate the volume intersection
4. Composite with the VE
Visual Hull Computation Visual hull - tightest volume given a set of object silhouettes Intersection of the projection of object pixels
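In symbols (a standard formalization; this notation is not from the slides): given object silhouettes S_1, ..., S_n and camera projection maps π_1, ..., π_n,

\[
\mathrm{VH} \;=\; \bigcap_{k=1}^{n} \left\{\, p \in \mathbb{R}^{3} \;:\; \pi_{k}(p) \in S_{k} \,\right\},
\]

i.e., the set of 3D points that project onto an object pixel in every view.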
Volume Querying A point inside the visual hull projects onto an object pixel from each camera
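A minimal CPU-side sketch of that membership test; the actual system performs this query on graphics hardware, and the 3x4 projection matrices and binary silhouette masks here are illustrative assumptions, not the real calibration data:

```python
import numpy as np

def in_visual_hull(point, cameras, silhouettes):
    """A 3D point is inside the visual hull iff it projects onto an
    object (silhouette) pixel in every camera view."""
    p = np.append(np.asarray(point, float), 1.0)   # homogeneous coordinates
    for P, mask in zip(cameras, silhouettes):      # P: 3x4 projection matrix
        u, v, w = P @ p
        if w <= 0:
            return False                           # behind this camera
        x, y = int(round(u / w)), int(round(v / w))
        if not (0 <= x < mask.shape[1] and 0 <= y < mask.shape[0]):
            return False                           # projects outside the image
        if not mask[y, x]:
            return False                           # lands on a background pixel
    return True                                    # object pixel in every view
```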
Implementation 1 HMD-mounted and 3 wall-mounted cameras SGI Reality Monster – handles up to 7 video feeds Computation –Image subtraction is the most work –~16,000 triangles/sec, 1.2 gigapixels –15–18 fps Estimated error: 1 cm Performance will increase as graphics hardware continues to improve
Results
Managing Collisions Between Virtual and Dynamic Real Objects
Approach We want virtual objects to respond to real-object avatars. This requires detecting when real and virtual objects intersect. If intersections exist, determine plausible responses.
Assumptions Only virtual objects move or deform in response to a collision. Both real and virtual objects are treated as stationary at the moment of collision. We catch collisions soon after a virtual object enters the visual hull, not as it exits the other side.
Detecting Collisions
Resolving Collisions Approach
1. Estimate the point of deepest virtual-object penetration
2. Define a plausible recovery vector
3. Estimate the point of collision on the visual hull
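A sketch of those three steps under simplifying assumptions: inside_hull stands in for a single-point volume query, the caller supplies the recovery vector (e.g., opposite the virtual object's motion), and the marching heuristic and step size are illustrative choices, not the thesis's exact method:

```python
import numpy as np

def resolve_collision(penetrating_pts, inside_hull, recovery_dir,
                      step=1e-3, max_dist=1.0):
    """penetrating_pts: virtual-object vertices found inside the hull.
    1. Deepest penetration: the vertex that must travel farthest along
       the recovery vector before leaving the visual hull.
    2. Recovery vector: supplied by the caller (normalized here).
    3. Collision point on the hull: where that deepest vertex exits."""
    d = np.asarray(recovery_dir, float)
    d = d / np.linalg.norm(d)
    deepest, hull_point = 0.0, None
    for p in np.asarray(penetrating_pts, float):
        q, travelled = p.copy(), 0.0
        while inside_hull(q) and travelled < max_dist:  # march to the surface
            q += step * d
            travelled += step
        if travelled > deepest:
            deepest, hull_point = travelled, q
    # Translating the virtual object by this amount separates it from the hull.
    return deepest * d, hull_point
```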
Results
Collision Detection / Response Performance Volume querying handles about 5000 triangles per second Error of the collision points is ~0.75 cm –Depends on average size of virtual object triangles –Tradeoff between accuracy and time –Plenty of room for optimizations
Spatial Cognitive Task Study
Study Motivation Effects of –Interacting with real objects –Visual fidelity of self-avatars On –Task Performance –Presence For spatial cognitive manual tasks
Spatial Cognitive Manual Tasks Spatial Ability –Visualizing a manipulation in 3-space Cognition –Psychological processes involved in the acquisition, organization, and use of knowledge
Hypotheses Task Performance: Participants will complete a spatial cognitive manual task faster when manipulating real objects, as opposed to virtual objects only. Sense of Presence: Participants will report a higher sense of presence when their self-avatars are visually faithful, as opposed to generic.
Task Manipulated identical painted blocks to match target patterns Each block had six distinct patterns. Target patterns: –2x2 blocks (small) –3x3 blocks (large)
Measures Task performance –Time to complete the patterns correctly Sense of presence –(After experience) Slater-Usoh-Steed (SUS) Sense of Presence Questionnaire Other factors –(Before experience) spatial ability –(Before and after experience) simulator sickness
Conditions All participants did the task in a real space environment (baseline). Each participant then did the task in one of three VEs: Purely Virtual, Hybrid, or Visually Faithful Hybrid.
Conditions

                     Avatar fidelity:
Interact with        Generic    Visually faithful
Real objects         HE         VFHE
Virtual objects      PVE        -

Comparisons: task performance across real vs. virtual interaction; sense of presence across generic vs. visually faithful avatars.
Real Space Environment Task was conducted within a draped enclosure Participant watched monitor while performing task RSE performance was a baseline to compare against VE performance
Purely Virtual Environment Participant manipulated virtual objects Participant was presented with a generic avatar
Hybrid Environment Participant manipulated real objects Participant was presented with a generic avatar
Visually-Faithful Hybrid Env. Participant manipulated real objects Participant was presented with a visually faithful avatar
Task Performance Results

                                  Small Pattern Time (s)    Large Pattern Time (s)
                                  Mean     S.D.             Mean     S.D.
Real Space (n=41)                 16.8     6.3              37.2     9.0
Purely Virtual (n=13)             47.2     10.4             117.0    32.3
Hybrid (n=13)                     31.7     5.7              86.8     26.8
Visually Faithful Hybrid (n=14)   28.9     7.6              72.3     16.4
Task Performance Results

                                          Small Pattern Time     Large Pattern Time
                                          t       p              t       p
Purely Virtual vs. Vis. Faithful Hybrid   3.32    0.0026 **      4.39    0.00016 ***
Purely Virtual vs. Hybrid                 2.81    0.0094 **      2.45    0.021 *
Hybrid vs. Vis. Faithful Hybrid           1.02    0.32           2.01    0.055

* significant at the α = 0.05 level; ** at α = 0.01; *** at α = 0.001
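For reference, any one of these pairwise comparisons can be computed with a standard two-sample t-test; the arrays below are placeholders standing in for per-participant completion times, not the study's data:

```python
import numpy as np
from scipy import stats

# Placeholder samples (seconds); the real per-participant times are not shown.
purely_virtual = np.array([44.1, 52.3, 38.9, 61.0, 47.5, 40.2, 55.8,
                           49.9, 35.7, 51.2, 46.4, 58.3, 42.6])
hybrid         = np.array([30.1, 27.5, 36.2, 33.8, 25.9, 31.0, 38.4,
                           29.7, 34.5, 28.8, 32.2, 35.1, 26.4])

t, p = stats.ttest_ind(purely_virtual, hybrid)   # Student's two-sample t-test
print(f"t = {t:.2f}, p = {p:.4f}")
```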
Sense of Presence Results

SUS Sense of Presence Score (0–6)       Mean    S.D.
Purely Virtual Environment              3.21    2.19
Hybrid Environment                      1.86    2.17
Visually Faithful Hybrid Environment    2.36    1.94
Sense of Presence Results

                                              t       p
Purely Virtual vs. Visually Faithful Hybrid   1.10    0.28
Purely Virtual vs. Hybrid                     1.64    0.11
Hybrid vs. Visually Faithful Hybrid           0.64    0.53
Debriefing Responses
–They felt almost completely immersed while performing the task.
–They felt the virtual objects in the virtual room (such as the painting, plant, and lamp) improved their sense of presence, even though they had no direct interaction with these objects.
–They felt that seeing an avatar added to their sense of presence.
–PVE and HE participants commented on the fidelity of motion, whereas VFHE participants commented on the fidelity of appearance.
–VFHE and HE participants felt the tactile feedback of working with real objects improved their sense of presence.
–VFHE participants reported getting used to manipulating and interacting in the VE significantly faster than PVE participants.
Study Conclusions Interacting with real objects provided a substantial task-performance improvement over interacting with virtual objects for spatial cognitive manual tasks. Debriefing responses show the visually faithful avatar was preferred, though reported sense of presence was not significantly different. Kinematic fidelity of the avatar is more important than visual fidelity for sense of presence. Handling real objects makes task performance and interaction in the VE more like the actual task.
Case Study: NASA Langley Research Center (LaRC) Payload Assembly Task
NASA Driving Problems Given payload models, designers and engineers want to evaluate: –Assembly feasibility –Assembly training –Repairability Current Approaches –Measurements –Design drawings –Step-by-step assembly instruction list –Low fidelity mock-ups
Task Wanted a plausible task given common assembly jobs. Abstracted a payload layout task –Screw in tube –Attach power cable
Task Goal Determine how much space should be allocated between the TOP of the PMT and the BOTTOM of Payload A
Videos of Task
Results

                                                              #1       #2                    #3         #4
(Pre-experience) How much space is necessary?                 14 cm    14.2 cm               15–16 cm   15 cm
(Pre-experience) How much space would you actually allocate?  21 cm    16 cm                 20 cm      15 cm
Actual space required in VE                                   15 cm    22.5 cm               22.3 cm    23 cm
(Post-experience) How much space would you actually allocate? 18 cm    16 cm (modify tool)   25 cm      23 cm

The tube was 14 cm long and 4 cm in diameter.
Results

Late discovery of similar problems is not uncommon.

                                      #1                      #2                               #3                      #4
Time cost of the spacing error        days to months          30 days                          days to months          months
Financial cost of the spacing error   $100,000s - $1,000,000+ largest cost is a huge schedule hit   $100,000s - $1,000,000+ $100,000s
Case Study Conclusions Benefits of object-reconstruction VEs: –Specialized tools and parts require no modeling –Short development time to try multiple designs –Allows early testing of subassembly integration from multiple suppliers Early identification of assembly, design, or integration issues can yield considerable savings in time and money.
Conclusions
Overall Innovations Presented algorithms for –Incorporating real objects into VEs –Handling interactions between real and virtual objects Conducted formal studies to evaluate –Interaction with real vs. virtual objects (significant effect) –Visually faithful vs. generic avatars (no significant effect) Applied the system to a real-world task
Future Work Improved model fidelity Improved collision detection and response Further studies to illuminate the relationship between avatar kinematic fidelity and visual fidelity Apply system to upcoming NASA payload projects.
Current Projects at UNC-Charlotte with Dr. Larry Hodges Digitizing Humanity –If a virtual human gave you a compliment, would it brighten your day? –Do people interact with virtual characters the same way they do with real people? (carry over from reality -> virtual)
Diana
Current Projects at UNC-Charlotte with Dr. Larry Hodges Digitizing Humanity –Basic research into virtual characters What is important? How does personality affect interaction? –Applications: Social situations Human Virtual-Human Interaction Virtual Reality –Basic Research: Incorporating Avatars Locomotion Effect on Cognitive Performance –Applications: Balance Disorders (w/ Univ. of Pittsburgh)
Current Projects at UNC-Charlotte with Dr. Larry Hodges Combining Computer Graphics with: –Computer Vision Dr. Min Shin –Human Computer Interaction Dr. Larry Hodges, Dr. Jee-In Kim –Virtual Reality Dr. Larry Hodges –Graduate and Undergraduate Research Future Computing Lab has 4 PhD, 3 MS, and 6 undergraduates
Collaboration on Research Digital Media –Applying VR/Computer Graphics to: –Digital Archaeology (Digital records of historic data) –Digital Media Program (Getting non-CS people involved in VR) –Mixed Reality Computer Vision/Image Processing –Using VR technology to aid in object tracking –Using computer vision to augment VR interaction Computer Graphics Lab –Photorealistic Rendering Novel Visualization –Evolutionary Computing Lab –Central Florida Remote Sensing Lab
Future Directions Long Term Goals –Enhance other CS projects with Graphics, Visualization, VR. –Computer Scientists are Toolsmiths –Help build the department into a leader in using graphics for visualization, simulation, and training. –Effective Virtual Environments (Graphics, Virtual Reality, and Psychology) –Digital Characters (Graphics & HCI) Additional benefit of having nearby companies (Disney) and military –Assistive Technology (Graphics, VR, and Computer Vision)
Thanks
Collaborators Dr. Frederick P. Brooks Jr. (PhD Advisor) Dr. Larry F. Hodges (Post-doc advisor) Prof. Mary Whitton Samir Naik Danette Allen (NASA LaRC) UNC-CH Effective Virtual Environments UNC-C Virtual Environments Group For more information: http://www.cs.uncc.edu/~bclok (VR2003, I3D2003) Funding Agencies The LINK Foundation NIH (Grant P41 RR02170) National Science Foundation Office of Naval Research
Object Pixels Identify new objects: perform image subtraction to separate the object pixels from the background pixels. current image - background image = object pixels
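A toy illustration of that subtraction step; the use of numpy, grayscale frames, and the threshold value are illustrative assumptions (the actual system operated on live color video):

```python
import numpy as np

def object_pixels(current, background, threshold=20):
    """Label pixels whose difference from the stored background image
    exceeds a threshold: current - background -> object-pixel mask."""
    diff = np.abs(current.astype(np.int16) - background.astype(np.int16))
    return diff > threshold

# Usage with synthetic 8-bit grayscale frames:
bg = np.full((480, 640), 100, dtype=np.uint8)
frame = bg.copy()
frame[200:280, 300:380] = 180        # a bright "object" enters the scene
mask = object_pixels(frame, bg)
print(mask.sum(), "object pixels")   # -> 6400
```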
Volume Querying Next we do volume querying on a plane
Volume Querying For an arbitrary view, we sweep a series of planes.
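Building on the in_visual_hull sketch above, a simplified sweep could sample each plane on a regular grid of points; the axis-aligned planes and resolutions here are assumptions standing in for the system's view-aligned, hardware-accelerated sweep:

```python
import numpy as np
# in_visual_hull: the point-membership test from the earlier sketch.

def sweep_planes(cameras, silhouettes, bounds, n_planes=32, res=64):
    """Sweep planes through the working volume, volume-querying a regular
    grid of points on each plane and keeping those inside the visual hull."""
    (x0, x1), (y0, y1), (z0, z1) = bounds
    hull_points = []
    for z in np.linspace(z0, z1, n_planes):        # one plane per depth value
        for x in np.linspace(x0, x1, res):
            for y in np.linspace(y0, y1, res):
                if in_visual_hull((x, y, z), cameras, silhouettes):
                    hull_points.append((x, y, z))  # survives every view
    return np.array(hull_points)
```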
Detecting Collisions Approach
For each virtual object i:
–Volume-query each triangle of object i
–Are there real-virtual collisions?
  N: done with object i
  Y: determine the points on the virtual object in collision, then calculate a plausible collision response
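The same flow as a sketch, where volume_query_triangle is a hypothetical stand-in for the hardware volume query and virtual objects are assumed to expose a triangle list:

```python
def detect_collisions(virtual_objects, volume_query_triangle):
    """Per-frame loop from the flowchart. volume_query_triangle(tri)
    returns the points of one triangle found inside the visual hull."""
    collisions = []
    for obj in virtual_objects:                    # for each virtual object i
        points = []
        for tri in obj.triangles:                  # volume-query each triangle
            points.extend(volume_query_triangle(tri))
        if points:                                 # Y: real-virtual collision
            collisions.append((obj, points))       # -> collision response step
        # N: done with object i
    return collisions
```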
Research Interests Computer Graphics – computer scientists are toolsmiths –Applying graphics hardware to: 3D reconstruction, simulation –Visualization –Interactive Graphics Virtual Reality –What makes a virtual environment effective? –Applying VR to assembly verification & clinical psychology Human Computer Interaction –3D Interaction –Virtual Humans Assistive Technology –Computer Vision and Mobile Technology to help the disabled