Interacting With Dynamic Real Objects in a Virtual Environment Benjamin Lok February 14th, 2003

Outline Motivation (why we need dynamic real objects in VEs) Incorporation of Dynamic Real Objects (how we get dynamic real objects into VEs) Managing Collisions Between Virtual and Dynamic Real Objects User Study (what good are dynamic real objects?) NASA Case Study (applying the system to a driving real-world problem) Conclusion

Assembly Verification Given a model, we would like to explore: –Can it be readily assembled? –Can repairers service it? Example: –Changing an oil filter –Attaching a cable to a payload

Current Immersive VE Approaches Most objects are purely virtual –User –Tools –Parts Most virtual objects are not registered with a corresponding real object. The system has limited shape and motion information about real objects.

Ideally Would like: –Accurate virtual representations, or avatars, of real objects –Virtual objects responding to real objects –Haptic feedback –Correct affordances –Constrained motion Example: Unscrewing a virtual oil filter from a car engine model

Dynamic Real Objects Tracking and modeling dynamic objects would: –Improve interactivity –Enable visually faithful virtual representations Dynamic objects can: –Change shape –Change appearance

Thesis Statement Naturally interacting with real objects in immersive virtual environments improves task performance and presence in spatial cognitive manual tasks.

Previous Work: Incorporating Real Objects into VEs Non-Real Time –Virtualized Reality (Kanade, et al.) Real Time –Image Based Visual Hulls [Matusik00, 01] –3D Tele-Immersion [Daniilidis00] Augment specific objects for interaction –Doll’s head [Hinkley94] –Plate [Hoffman98] How important is it to get real objects into a virtual environment?

Previous Work: Avatars Self-Avatars in VEs –What makes avatars believable? [Thalmann98] –What avatar components are necessary? [Slater93, 94, Garau01] VEs currently have: –Choices from a library –Generic avatars –No avatars Generic avatars > no avatars [Slater93] Are visually faithful avatars better than generic avatars?

Visual Incorporation of Dynamic Real Objects in a VE

Motivation Handle dynamic objects (generate a virtual representation) Interactive rates Bypass an explicit 3D modeling stage Inputs: outside-looking-in camera images Generate an approximation of the real objects (visual hull)

Reconstruction Algorithm 1. Start with live camera images 2. Image subtraction 3. Use the images to calculate the volume intersection 4. Composite with the VE
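A minimal sketch of one pass through this loop, assuming hypothetical helper callables (image_subtraction, volume_intersection, composite) and a hypothetical capture() method on each camera; in the actual system the volume intersection step runs on graphics hardware, as described in the following slides.

```python
def reconstruct_frame(cameras, backgrounds, image_subtraction,
                      volume_intersection, composite, virtual_env):
    """One reconstruction pass: live camera images -> object-pixel silhouettes
    -> visual hull approximation -> composite with the virtual environment."""
    frames = [cam.capture() for cam in cameras]           # 1. live camera images
    silhouettes = [image_subtraction(f, b)                # 2. image subtraction
                   for f, b in zip(frames, backgrounds)]
    hull = volume_intersection(cameras, silhouettes)      # 3. volume intersection
    return composite(virtual_env, hull)                   # 4. composite with the VE
```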

Visual Hull Computation Visual hull - tightest volume given a set of object silhouettes Intersection of the projection of object pixels

Volume Querying A point inside the visual hull projects onto an object pixel from each camera
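A small sketch of this test, assuming each camera exposes a hypothetical project(point) method returning pixel coordinates and that each silhouette is a boolean NumPy mask produced by image subtraction.

```python
def point_in_visual_hull(point, cameras, silhouettes):
    """A 3D point lies inside the visual hull only if it projects onto an
    object pixel in every camera's silhouette image."""
    for cam, silhouette in zip(cameras, silhouettes):
        u, v = cam.project(point)            # hypothetical 3D -> pixel projection
        rows, cols = silhouette.shape
        if not (0 <= int(v) < rows and 0 <= int(u) < cols):
            return False                     # falls outside this camera's image
        if not silhouette[int(v), int(u)]:
            return False                     # projects onto a background pixel
    return True                              # object pixel in every camera
```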

Implementation 1 HMD-mounted and 3 wall-mounted cameras SGI Reality Monster – handles up to 7 video feeds Computation –Image subtraction is the most work –~16,000 triangles/sec, ~1.2 gigapixels/sec Estimated error: 1 cm Performance will increase as graphics hardware continues to improve

Results

Managing Collisions Between Virtual and Dynamic Real Objects

Approach We want virtual objects to respond to real-object avatars. This requires detecting when real and virtual objects intersect. If intersections exist, determine plausible responses.

Assumptions Only virtual objects can move or deform at collision. Both real and virtual objects are assumed stationary at collision. We catch collisions soon after a virtual object enters the visual hull, and not as it exits the other side.

Detecting Collisions

Resolving Collisions Approach 1. Estimate point of deepest virtual object penetration 2. Define plausible recovery vector 3. Estimate point of collision on visual hull
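A hedged sketch of these three steps, assuming a point-in-hull test like the one sketched earlier and a hypothetical translate() method on the virtual object; the step size and search limit are illustrative values, not the system's.

```python
import numpy as np

def resolve_collision(virtual_obj, deepest_point, recovery_vector,
                      point_in_hull, step=0.005, max_distance=0.5):
    """Back the virtual object out along a plausible recovery vector until the
    deepest penetrating point (step 1) exits the visual hull; the exit point
    approximates the collision point on the hull (step 3)."""
    direction = np.asarray(recovery_vector, dtype=float)
    direction /= np.linalg.norm(direction)        # step 2: plausible recovery vector
    point = np.asarray(deepest_point, dtype=float)
    moved = 0.0
    while point_in_hull(point) and moved < max_distance:
        point = point + step * direction
        virtual_obj.translate(step * direction)   # hypothetical rigid translation
        moved += step
    return point                                  # estimated collision point on the hull
```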

Results

Collision Detection / Response Performance Volume queries about 5,000 triangles per second Error of collision points is ~0.75 cm –Depends on average size of virtual object triangles –Tradeoff between accuracy and time –Plenty of room for optimizations

Spatial Cognitive Task Study

Study Motivation Effects of –Interacting with real objects –Visual fidelity of self-avatars On –Task Performance –Presence For spatial cognitive manual tasks

Spatial Cognitive Manual Tasks Spatial Ability –Visualizing a manipulation in 3-space Cognition –Psychological processes involved in the acquisition, organization, and use of knowledge

Hypotheses Task Performance: Participants will complete a spatial cognitive manual task faster when manipulating real objects, as opposed to virtual objects only. Sense of Presence: Participants will report a higher sense of presence when their self-avatars are visually faithful, as opposed to generic.

Task Manipulated identical painted blocks to match target patterns Each block had six distinct patterns. Target patterns: –2x2 blocks (small) –3x3 blocks (large)

Measures Task performance –Time to complete the patterns correctly Sense of presence –(After experience) Steed-Usoh-Slater Sense of Presence Questionnaire (SUS) Other factors –(Before experience) spatial ability –(Before and after experience) simulator sickness

Conditions All participants did the task in a real space environment; each participant then did the task in one of three VEs: Purely Virtual, Hybrid, or Visually Faithful Hybrid.

Conditions The three VEs cross avatar fidelity with interaction type: real objects with a generic avatar form the Hybrid Environment (HE); real objects with a visually faithful avatar form the Visually Faithful Hybrid Environment (VFHE); virtual objects with a generic avatar form the Purely Virtual Environment (PVE). The real vs. virtual comparison addresses task performance; the generic vs. visually faithful comparison addresses sense of presence.

Real Space Environment The task was conducted within a draped enclosure. Participants watched a monitor while performing the task. RSE performance was a baseline to compare against VE performance.

Purely Virtual Environment Participant manipulated virtual objects Participant was presented with a generic avatar

Hybrid Environment Participant manipulated real objects Participant was presented with a generic avatar

Visually-Faithful Hybrid Env. Participant manipulated real objects Participant was presented with a visually faithful avatar

Task Performance Results (table): mean and standard deviation of small-pattern and large-pattern completion times (seconds) for Real Space (n=41), Purely Virtual (n=13), Hybrid (n=13), and Visually Faithful Hybrid (n=14).

Task Performance Results (t-tests): Purely Virtual vs. Visually Faithful Hybrid: small pattern significant at the α=0.01 level, large pattern at the α=0.001 level. Purely Virtual vs. Hybrid: small pattern significant at the α=0.01 level, large pattern at the α=0.05 level. Hybrid vs. Visually Faithful Hybrid: no significant difference.
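As an illustration only (not the authors' analysis script), pairwise comparisons of completion times like these can be run as independent-samples t-tests, e.g. with SciPy.

```python
from scipy import stats

def compare_completion_times(times_condition_a, times_condition_b):
    """Independent-samples t-test on two lists of task completion times
    (seconds); returns the t statistic and the two-tailed p-value."""
    t_stat, p_value = stats.ttest_ind(times_condition_a, times_condition_b)
    return t_stat, p_value
```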

Sense of Presence Results (table): mean and standard deviation of the SUS sense of presence score (0–6) for the Purely Virtual, Hybrid, and Visually Faithful Hybrid environments.

Sense of Presence Results (t-tests): Purely Virtual vs. Visually Faithful Hybrid, Purely Virtual vs. Hybrid, and Hybrid vs. Visually Faithful Hybrid; none of the pairwise differences in reported sense of presence were statistically significant.

Debriefing Responses Participants felt almost completely immersed while performing the task. They felt the virtual objects in the virtual room (such as the painting, plant, and lamp) improved their sense of presence, even though they had no direct interaction with these objects. They felt that seeing an avatar added to their sense of presence. PVE and HE participants commented on the fidelity of motion, whereas VFHE participants commented on the fidelity of appearance. VFHE and HE participants felt the tactile feedback of working with real objects improved their sense of presence. VFHE participants reported getting used to manipulating and interacting in the VE significantly faster than PVE participants.

Study Conclusions Interacting with real objects provided a substantial task performance improvement over interacting with virtual objects for spatial cognitive manual tasks. Debriefing responses show that the visually faithful avatar was preferred, though reported sense of presence was not significantly different. Kinematic fidelity of the avatar is more important than visual fidelity for sense of presence. Handling real objects makes task performance and interaction in the VE more like the actual task.

Case Study: NASA Langley Research Center (LaRC) Payload Assembly Task

NASA Driving Problems Given payload models, designers and engineers want to evaluate: –Assembly feasibility –Assembly training –Repairability Current Approaches –Measurements –Design drawings –Step-by-step assembly instruction list –Low fidelity mock-ups

Task Wanted a plausible task given common assembly jobs. Abstracted a payload layout task –Screw in tube –Attach power cable

Task Goal Determine how much space should be allocated between the TOP of the PMT and the BOTTOM of Payload A

Videos of Task

Results (participants #1 / #2 / #3 / #4):
(Pre-experience) How much space is necessary? 14 cm / 14.2 cm / 15 – 16 cm / 15 cm
(Pre-experience) How much space would you actually allocate? 21 cm / 16 cm / 20 cm / 15 cm
Actual space required in the VE: 15 cm / 22.5 cm / 22.3 cm / 23 cm
(Post-experience) How much space would you actually allocate? 18 cm / 16 cm (modify tool) / 25 cm / 23 cm
The tube was 14 cm long and 4 cm in diameter.

Results Late discovery of similar problems is not uncommon. Participants #1 / #2 / #3 / #4 estimated the cost of such a spacing error:
Time cost of the spacing error: days to months / 30 days / days to months / months
Financial cost of the spacing error: $100,000s – $1,000,000+ / largest cost is a huge hit to the schedule / $100,000s – $1,000,000+ / $100,000s

Case Study Conclusions Object reconstruction VEs offer several benefits: –Specialized tools and parts require no modeling –Short development time to try multiple designs –Allows early testing of subassembly integration from multiple suppliers Early identification of assembly, design, or integration issues can yield considerable savings in time and money.

Conclusions

Overall Innovations Presented algorithms for –Incorporation of real objects into VEs –Handling interactions between real and virtual objects Conducted formal studies to evaluate –Interaction with real vs. virtual objects (significant effect) –Visually faithful vs. generic avatars (no significant effect) Applied the system to a real-world task

Future Work Improved model fidelity Improved collision detection and response Further studies to illuminate the relationship between avatar kinematic fidelity and visual fidelity Apply system to upcoming NASA payload projects.

Current Projects at UNC-Charlotte with Dr. Larry Hodges Digitizing Humanity –If a virtual human gave you a compliment, would it brighten your day? –Do people interact with virtual characters the same way they do with real people? (carry over from reality -> virtual)

Diana

Current Projects at UNC-Charlotte with Dr. Larry Hodges Digitizing Humanity –Basic research into virtual characters What is important? How does personality affect interaction? –Applications: Social situations Human Virtual-Human Interaction Virtual Reality –Basic Research: Incorporating Avatars Locomotion Effect on Cognitive Performance –Applications: Balance Disorders (w/ Univ. of Pittsburgh)

Current Projects at UNC-Charlotte with Dr. Larry Hodges Combining Computer Graphics with: –Computer Vision Dr. Min Shin –Human Computer Interaction Dr. Larry Hodges, Dr. Jee-In Kim –Virtual Reality Dr. Larry Hodges –Graduate and Undergraduate Research Future Computing Lab has 4 PhD, 3 MS, and 6 undergraduates

Collaboration on Research Digital Media –Applying VR/Computer Graphics to: –Digital Archaeology (Digital records of historic data) –Digital Media Program (Getting non-CS people involved in VR) –Mixed Reality Computer Vision/Image Processing –Using VR technology to aid in object tracking –Using computer vision to augment VR interaction Computer Graphics Lab –Photorealistic Rendering Novel Visualization –Evolutionary Computing Lab –Central Florida Remote Sensing Lab

Future Directions Long Term Goals –Enhance other CS projects with Graphics, Visualization, VR. –Computer Scientists are Toolsmiths –Help build the department into a leader in using graphics for visualization, simulation, and training. –Effective Virtual Environments (Graphics, Virtual Reality, and Psychology) –Digital Characters (Graphics & HCI) Additional benefit of having nearby companies (Disney) and military –Assistive Technology (Graphics, VR, and Computer Vision)

Thanks

Collaborators Dr. Frederick P. Brooks Jr. (PhD Advisor) Dr. Larry F. Hodges (Post-doc advisor) Prof. Mary Whitton Samir Naik Danette Allen (NASA LaRC) UNC-CH Effective Virtual Environments UNC-C Virtual Environments Group For more information: (VR2003, I3D2003) Funding Agencies The LINK Foundation NIH (Grant P41 RR02170) National Science Foundation Office of Naval Research

Object Pixels Identify new objects Perform image subtraction Separate the object pixels from background pixels current image - background image = object pixels
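A minimal NumPy sketch of this step; the per-channel difference and the threshold value are illustrative choices, not the system's calibrated settings.

```python
import numpy as np

def image_subtraction(current_image, background_image, threshold=30):
    """Label a pixel as an object pixel when the current color frame differs
    from the stored background frame by more than a threshold in any channel."""
    difference = np.abs(current_image.astype(np.int16) -
                        background_image.astype(np.int16))
    return difference.max(axis=-1) > threshold    # boolean mask of object pixels
```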

Volume Querying Next we do volume querying on a plane

Volume Querying For an arbitrary view, we sweep a series of planes.
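In the actual system the planes are rendered with projected camera textures on graphics hardware; the CPU sketch below, reusing a point-in-hull test like the one sketched earlier, only conveys the idea of sampling a grid on each swept plane.

```python
import numpy as np

def sweep_planes(point_in_hull, x_range, y_range, depths, resolution=64):
    """Sample a grid of points on each depth plane of the view volume and
    volume query every sample; returns one boolean hull slice per plane."""
    xs = np.linspace(x_range[0], x_range[1], resolution)
    ys = np.linspace(y_range[0], y_range[1], resolution)
    slices = []
    for z in depths:                              # one plane per depth value
        grid = np.array([[point_in_hull((x, y, z)) for x in xs] for y in ys])
        slices.append(grid)
    return slices
```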

Detecting Collisions Approach For each virtual object i, volume query each of its triangles to determine whether there are real-virtual collisions. If yes, determine the points on the virtual object that are in collision and calculate a plausible collision response; if no, we are done with object i.
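A sketch of that loop, assuming each virtual object exposes a hypothetical triangles list and sampling only triangle vertices; the real system volume-queries whole triangles on graphics hardware.

```python
def detect_collisions(virtual_objects, point_in_hull):
    """For each virtual object, volume query its triangles; any sampled point
    inside the visual hull is recorded as a point in collision."""
    collisions = {}
    for obj in virtual_objects:
        in_collision = [vertex
                        for triangle in obj.triangles   # hypothetical mesh data
                        for vertex in triangle
                        if point_in_hull(vertex)]
        if in_collision:                                # "yes" branch of the flowchart
            collisions[obj] = in_collision
        # otherwise: done with this object ("no" branch)
    return collisions
```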

Research Interests Computer Graphics – computer scientists are toolsmiths –Applying graphics hardware to: 3D reconstruction simulation –Visualization –Interactive Graphics Virtual Reality –What makes a virtual environment effective? –Applying to assembly verification & clinical psychology Human Computer Interaction –3D Interaction –Virtual Humans Assistive Technology –Computer Vision and Mobile Technology to help people with disabilities