1
Incorporating Dynamic Real Objects into Immersive Virtual Environments
Benjamin Lok, University of North Carolina at Charlotte
Samir Naik, Disney VR Studios
Mary Whitton and Frederick P. Brooks Jr., University of North Carolina at Chapel Hill
April 28th, 2003
2
Outline
Motivation – why we need dynamic real objects in VEs
Managing Collisions Between Virtual and Dynamic Real Objects – how we get dynamic real objects into VEs
NASA Case Study – applying the system to a driving real-world problem
Conclusion
3
Assembly Verification
Given a model, we would like to explore:
–Can it be readily assembled?
–Can repairers service it?
Examples:
–Changing an oil filter
–Attaching a cable to a payload
4
Current Immersive VE Approaches
Most objects are purely virtual:
–User
–Tools
–Parts
Most virtual objects are not registered with a corresponding real object, so the system has only limited shape and motion information about real objects.
5
Ideally
We would like:
–Accurate virtual representations, or avatars, of real objects
–Virtual objects responding to real objects
–Haptic feedback
–Correct affordances
–Constrained motion
Example: unscrewing a virtual oil filter from a car engine model
6
Dynamic Real Objects
Tracking and modeling dynamic objects (objects that change shape and appearance) would:
–Improve interactivity
–Enable visually faithful virtual representations
7
Previous Work: Incorporating Real Objects into VEs
Non-real time:
–Virtualized Reality (Kanade, et al.)
Real time:
–Image-Based Visual Hulls [Matusik00, 01]
–3D Tele-Immersion [Daniilidis00]
How important is it to get real objects into a virtual environment?
8
Previous Work: Interaction and Collision Detection
Commercial interaction solutions:
–Tracked mice, gloves, joysticks
Augmenting specific objects for interaction:
–Doll's head [Hinkley1994]
–Plate [Hoffman1998]
Virtual object collision detection:
–Traditional packages [Ehmann2000]
–Hardware accelerated [Hoff2001]
Virtual object – real object:
–A priori modeling and tracking [Breen1996]
9
Real-time Object Reconstruction System
–Handles dynamic objects (generates a virtual representation)
–Runs at interactive rates
–Bypasses an explicit 3D modeling stage
–Inputs: outside-looking-in camera images
–Output: an approximation of the real objects (the visual hull)
10
Reconstruction Algorithm
1. Start with live camera images
2. Perform image subtraction
3. Use the images to calculate the volume intersection (visual hull)
4. Composite with the VE
11
Visual Hull Computation
Visual hull – the tightest volume consistent with a given set of object silhouettes: the intersection of the projections of the object pixels.
13
Volume Querying in Hardware
A point P inside the visual hull of the real objects (VH_real) projects onto an object pixel in every camera's image:

P ∈ VH_real iff ∀i ∃j : P = C_i⁻¹ O_{i,j}

where C_i is camera i's projection and O_{i,j} is the j-th object pixel in camera i's image.
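The condition above can be sketched in plain Python with toy 1-D "cameras". Each camera is modeled as a projection function plus the set of object-pixel indices from its silhouette; the camera models, pixel sets, and function names here are illustrative stand-ins, not the hardware-accelerated query the slides describe.

```python
def in_visual_hull(point, cameras):
    """The slide's condition: P is in the visual hull iff its projection
    from every camera lands on an object pixel of that camera's image."""
    return all(project(point) in object_pixels
               for project, object_pixels in cameras)

# Two orthographic cameras looking along the axes of a 2-D scene.
cameras = [
    (lambda p: p[0], {2, 3, 4}),   # camera 1: silhouette covers x = 2..4
    (lambda p: p[1], {5, 6}),      # camera 2: silhouette covers y = 5..6
]
```

A point is accepted only when every camera sees it inside a silhouette, which is exactly the intersection-of-projections definition of the visual hull.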
14
Implementation
–One HMD-mounted and three wall-mounted cameras
–SGI Reality Monster – handles up to 7 video feeds
–15–18 fps
–Estimated error: 1 cm
Performance will improve as graphics hardware continues to improve.
15
Managing Collisions Between Virtual and Dynamic Real Objects
16
Approach
We want virtual objects to respond to real-object avatars. This requires detecting when real and virtual objects intersect and, if intersections exist, determining plausible responses.
At the moment of collision, both real and virtual objects are treated as stationary, and only the virtual objects can move or deform in response.
17
Detecting Collisions
18
Visual Hull Computation
19
Detecting Collisions: Approach
For each virtual object i:
1. Volume-query each triangle of the object.
2. Are there real–virtual collisions? If no, we are done with object i.
3. If yes, determine the points on the virtual object that are in collision.
4. Calculate a plausible collision response.
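The detection loop can be sketched in plain Python: each triangle is "volume-queried" by testing sample points against an inside-hull predicate. Both `sample_triangle` and the predicate are toy stand-ins for the GPU-accelerated query in the actual system; the sampling density and names are illustrative.

```python
def sample_triangle(a, b, c, n=4):
    """Barycentric grid of sample points on triangle (a, b, c)."""
    pts = []
    for i in range(n + 1):
        for j in range(n + 1 - i):
            u, v = i / n, j / n
            pts.append(tuple(a[k] + u * (b[k] - a[k]) + v * (c[k] - a[k])
                             for k in range(3)))
    return pts

def collision_points(triangles, inside_hull):
    """Points on the virtual object found inside the real objects' hull.
    An empty list is the 'N' branch: done with this object."""
    return [p for tri in triangles
            for p in sample_triangle(*tri) if inside_hull(p)]
```

The returned points feed directly into the response step on the next slide.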
20
Resolving Collisions: Approach
1. Estimate the virtual object's point of deepest penetration, CP_obj.
2. Define a plausible recovery vector, V_rec = RP_obj − CP_obj.
3. Back out the virtual object along V_rec until CP_obj = CP_hull.
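The vector algebra of these steps can be sketched with plain tuples. Finding CP_obj (the deepest penetrating point), RP_obj (the slide's recovery reference point), and CP_hull (the corresponding point on the hull surface) is assumed already done; only the arithmetic is shown, and all values below are illustrative.

```python
def sub(a, b):
    """Component-wise vector difference."""
    return tuple(x - y for x, y in zip(a, b))

def recovery_vector(rp_obj, cp_obj):
    """Step 2: V_rec = RP_obj - CP_obj."""
    return sub(rp_obj, cp_obj)

def backout(cp_obj, cp_hull):
    """Step 3: the translation that moves the virtual object so that
    CP_obj lands on CP_hull."""
    return sub(cp_hull, cp_obj)
```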
25
Results
27
Collision Detection / Response Performance
–Volume-queries about 5,000 triangles per second
–Error of collision points is ~0.75 cm
–Depends on the average size of the virtual object's triangles
–Tradeoff between accuracy and time
–Plenty of room for optimizations
28
Case Study: NASA Langley Research Center (LaRC) Payload Assembly Task
29
NASA Driving Problems
Given payload models, designers and engineers want to evaluate:
–Assembly feasibility
–Assembly training
–Repairability
Current approaches:
–Measurements
–Design drawings
–Step-by-step assembly instruction lists
–Low-fidelity mock-ups
30
Task
We wanted a plausible task given common assembly jobs, so we abstracted a payload layout task:
–Screw in a tube
–Attach a power cable
31
Task Goal Determine how much space should be allocated between the TOP of the PMT and the BOTTOM of Payload A
32
Videos of Task
33
Results
Participant                                          #1       #2                    #3         #4
(Pre) How much space is necessary?                   14 cm    14.2 cm               15–16 cm   15 cm
(Pre) How much space would you actually allocate?    21 cm    16 cm                 20 cm      15 cm
Actual space required in VE                          15 cm    22.5 cm               22.3 cm    23 cm
(Post) How much space would you actually allocate?   18 cm    16 cm (modify tool)   25 cm      23 cm
The tube was 14 cm long and 4 cm in diameter.
34
Results
Late discovery of similar problems is not uncommon.
Participant                           #1                      #2                                 #3                      #4
Time cost of the spacing error        days to months          30 days                            days to months          months
Financial cost of the spacing error   $100,000s–$1,000,000+   largest cost is the schedule hit   $100,000s–$1,000,000+   $100,000s
35
Case Study Conclusions
Benefits of object-reconstruction VEs:
–Specialized tools and parts require no modeling
–Short development time to try multiple designs
–Early testing of subassembly integration from multiple suppliers
It is possible to identify assembly, design, and integration issues early, which can yield considerable savings in time and money.
36
Conclusions
37
Innovations
Presented algorithms for:
–Incorporating real objects into VEs
–Handling interactions between real and virtual objects
Applied the system to a real-world task.
38
Future Work
–Improved model fidelity
–Improved collision detection and response
–Apply the system to upcoming NASA payload projects
39
Thanks
Collaborators: Dr. Larry F. Hodges, Danette Allen (NASA LaRC), the UNC-CH Effective Virtual Environments group, and the UNC-Charlotte Virtual Environments group
For more information: http://www.cs.uncc.edu/~bclok (I3D2001, VR2003)
Email: bclok@uncc.edu
Funding agencies: The LINK Foundation, NIH (Grant P41 RR02170), National Science Foundation, Office of Naval Research
40
Object Pixels
To identify new objects, perform image subtraction, separating the object pixels from the background pixels:
current image − background image = object pixels
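A minimal sketch of this subtraction over grayscale images stored as nested lists: a pixel counts as an object pixel when it differs from the stored background by more than a threshold. The function name and threshold value are illustrative, not from the original system.

```python
def object_pixels(current, background, threshold=12):
    """Per-pixel background subtraction: True marks an object pixel."""
    return [[abs(c - b) > threshold for c, b in zip(cur_row, bg_row)]
            for cur_row, bg_row in zip(current, background)]
```

The resulting boolean mask is the silhouette that feeds the visual-hull intersection.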
41
Current Projects at UNC-Charlotte with Dr. Larry Hodges
Digitizing Humanity
–Basic research into virtual characters: What is important? How does personality affect interaction?
–Applications: social situations, human–virtual-human interaction
Virtual Reality
–Basic research: incorporating avatars; locomotion's effect on cognitive performance
–Applications: balance disorders (with the University of Pittsburgh)
42
Research Interests
Computer Graphics – computer scientists are toolsmiths
–Applying graphics hardware to 3D reconstruction and simulation
–Visualization
–Interactive graphics
Virtual Reality
–What makes a virtual environment effective?
–Applying VR to assembly verification and clinical psychology
Human–Computer Interaction
–3D interaction
–Virtual humans
Assistive Technology
–Computer vision and mobile technology to help people with disabilities
43
Future Directions
Long-term goals:
–Help build the department into a leader in using graphics for visualization, simulation, and training
–Effective virtual environments (graphics, virtual reality, and psychology)
–Digital characters (graphics and HCI); an additional benefit is having nearby companies (Disney) and the military
–Assistive technology (graphics, VR, and computer vision)
44
Occlusion