3D Interaction Techniques for Virtual Environments

Presentation transcript:

3D Interaction Techniques for Virtual Environments Doug A. Bowman

Terminology
Interaction technique (IT) – method for accomplishing a task
3D application – system that displays 3D information
3D interaction – performing user actions in three dimensions
(C) 2005 Doug Bowman, Virginia Tech

Didn’t we already cover input devices? ITs User interface software Input devices System Software Output devices User (C) 2005 Doug Bowman, Virginia Tech

Video example: ISAAC (C) 2005 Doug Bowman, Virginia Tech

Universal interaction tasks
Navigation
  Travel – motor component
  Wayfinding – cognitive component
Selection
Manipulation
System control
Symbolic input

We'll be considering four "basic" or "universal" interaction tasks for 3D/VE applications. Obviously, there are other tasks which are specific to an application domain, but these are some basic building blocks that can often be combined to create a more complex task.

Navigation is the most common VE task, and is actually composed of two tasks. Travel is the motor component of navigation, and just refers to the physical movement from place to place. Wayfinding is the cognitive or decision-making component of navigation, and it asks the questions "where am I?", "where do I want to go?", "how do I get there?", and so on.

Selection is simply the specification of an object or a set of objects for some purpose. Manipulation refers to the specification of object properties (most often position and orientation, but also other attributes). Selection and manipulation are often used together, but selection may be a stand-alone task. For example, the user may select an object in order to apply a command such as "delete" to that object.

System control is the task of changing the system state or the mode of interaction. This is usually done with some type of command to the system (either explicit or implicit). Examples in 2D systems include menus and command-line interfaces. It is often the case that a system control technique is composed of the other three tasks (e.g. a menu command involves selection), but it's also useful to consider it separately, since special techniques have been developed for it and it is quite common.

(C) 2005 Doug Bowman, Virginia Tech

Selection & Manipulation
Selection: specifying one or more objects from a set
Manipulation: modifying object properties (position, orientation, scale, shape, color, texture, behavior, etc.)
(C) 2005 Doug Bowman, Virginia Tech


Goals of selection
Indicate action on object
Query object
Make object active
Travel to object location
Set up manipulation
(C) 2005 Doug Bowman, Virginia Tech

Selection performance
Variables affecting user performance:
  Object distance from user
  Object size
  Density of objects in area
  Occluders
(C) 2005 Doug Bowman, Virginia Tech

Common selection techniques
Touching with virtual hand
Ray/cone casting
Occlusion / framing
Naming
Indirect selection
(C) 2005 Doug Bowman, Virginia Tech

Enhancements to basic techniques
Arm-extension: mapping, "reeling"
2D / 3D World in Miniature: select iconic objects
(C) 2005 Doug Bowman, Virginia Tech

Go-Go implementation
Requires "torso position" t – tracked or inferred
Each frame:
  Get physical hand position h in world CS
  Calculate physical distance from torso: dp = dist(h, t)
  Calculate virtual hand distance: dv = gogo(dp)
  Normalize the torso-hand vector and place the virtual hand (in world CS): v = t + dv * (h - t) / ||h - t||

To implement Go-Go, we first need the concept of the position of the user's body. This is needed because we stretch our hands out from the center of our body, not from our head (which is usually the position that is tracked). I have implemented this using an inferred torso position, which is defined as a constant offset in the negative y direction from the head. You could also place a tracker on the user's torso.

Before rendering each frame, you get the physical hand position in the world CS and then calculate its distance from the torso object using the distance formula. The virtual hand distance can then be obtained by applying the function shown in the graph on the previous slide. I have used the function dv = dp^2.3 (starting at the point (D, D)) as a useful function in my environments, but the exponent used depends on the size of the environment and the desired accuracy of selection at a distance.

Now that we know the distance at which to place the virtual hand, we need to determine its position. The most common implementation is to keep the virtual hand on the ray extending from the torso and going through the physical hand. Therefore, if we get a vector between these two points, normalize it, multiply it by the distance, and then add this vector to the torso point, we obtain the position of the virtual hand.

(C) 2005 Doug Bowman, Virginia Tech
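A minimal per-frame sketch of this mapping, assuming numpy vectors. The torso offset and the threshold D are illustrative values, and the mapping function used here is the quadratic one from the original Go-Go paper (Poupyrev et al., 1996) rather than the 2.3-exponent variant mentioned in the notes:

```python
import numpy as np

# Illustrative constants -- tune for your environment (assumptions, not from the slide)
TORSO_OFFSET = np.array([0.0, -0.5, 0.0])  # inferred torso: fixed offset below the head (m)
D = 0.4    # physical distance (m) at which the nonlinear mapping begins
K = 1 / 6  # nonlinearity coefficient (value from the original Go-Go paper)

def gogo(dp):
    """Map physical torso-hand distance to virtual hand distance.
    One-to-one within D; quadratic growth beyond it."""
    return dp if dp < D else dp + K * (dp - D) ** 2

def gogo_virtual_hand(head_pos, hand_pos):
    """Per-frame virtual hand position in world coordinates.
    Assumes the hand is not exactly at the torso position."""
    t = head_pos + TORSO_OFFSET      # inferred torso position
    v = hand_pos - t                 # torso-to-hand vector
    dp = np.linalg.norm(v)           # physical distance dp = dist(h, t)
    dv = gogo(dp)                    # virtual hand distance
    return t + dv * (v / dp)         # virtual hand on the torso-hand ray
```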


Selection classification
Selection
  Object indication: object touching, pointing, indirect selection
  Indication to select: button, gesture, voice
  Feedback: graphical, tactile, audio
(C) 2005 Doug Bowman, Virginia Tech

Evaluation: selection task
Ray-casting and image-plane generally more effective than Go-Go
  Exception: selection of very small objects can be more difficult with pointing
Ray-casting and image-plane techniques result in the same performance (2 DOF)
Image-plane technique less comfortable

This slide summarizes some of the results from the experiments conducted by Poupyrev et al. (1998) and Bowman et al. (1999). Poupyrev evaluated two interaction techniques, ray-casting and Go-Go, with 5 levels of distance and 3 levels of object size; a within-subjects design with 12 participants was used. Bowman evaluated three interaction techniques, ray-casting, image-plane, and Go-Go, with 3 levels of distance and 2 levels of object size; a between-subjects design with 5 participants per technique was used.

Several general conclusions emerge from the experiments. First, in a general-purpose selection task, pointing techniques demonstrate better performance than Go-Go, requiring less physical movement from the user. The only exception reported by Poupyrev is when very high precision of selection was required, e.g. for small objects (about 4 degrees of field of view), especially those located far away. In this case pointing was not as effective as Go-Go, which is quite intuitive: pointing at small objects is more difficult than at large ones, especially when they are farther away. This replicates observations of other researchers (Liang, 1994; Forsberg, 1996); Fitts' law provides a theoretical explanation for this phenomenon (in Bowman's experiments, size and distance did not significantly affect user performance for pointing techniques).

There were significant differences in user performance between these two experiments, which can be explained by the different implementations of the interaction techniques. For example, in Poupyrev's experiments, ray-casting was implemented as a short ray with which the user pointed at objects. Bowman, on the other hand, implemented ray-casting so that the user could actually touch objects with the ray, which might be a more effective implementation of this technique. Furthermore, different settings in the Go-Go technique can also greatly affect user performance.

(C) 2005 Doug Bowman, Virginia Tech
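To make the Fitts' law point concrete: selection time grows with the target's index of difficulty. A small illustration using the common Shannon formulation (the distance and width values here are made up for the example, and treating them as angular quantities for 3D pointing is an assumption, not part of the cited studies):

```python
from math import log2

def index_of_difficulty(distance, width):
    """Shannon formulation of Fitts' law: ID = log2(D/W + 1).
    For pointing in a VE, D and W can be read as the angular distance
    and angular size of the target from the user's viewpoint."""
    return log2(distance / width + 1)

# A large target vs. a small one at the same distance (illustrative numbers):
print(index_of_difficulty(30.0, 10.0))  # 2.0 bits -- easy target
print(index_of_difficulty(30.0, 2.0))   # 4.0 bits -- much harder target
```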

Implementation issues for selection techniques
How to indicate the selection event
Object intersections
Feedback: graphical, aural, tactile
Virtual hand avatar
List of selectable objects

There are several common issues for the implementation of selection techniques. One of the most basic is how to indicate that the selection event should take place (e.g. you are touching the desired object, and now you want to pick it up). This is usually done via a button press, gesture, or voice command, but it might also be done automatically if the system can infer the user's intent. You also have to have efficient algorithms for object intersections for many of these techniques; we'll discuss a couple of possibilities. The feedback you give to the user regarding which object is about to be selected is also very important. Many of the techniques require an avatar (virtual representation) of the user's hand. Finally, you should consider keeping a list of objects that are "selectable", so that your techniques do not have to test every object in the world for selection; this increases efficiency. A sketch of such an intersection test appears below.

(C) 2005 Doug Bowman, Virginia Tech
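One way to combine these last two points is a ray-casting pick that tests only the selectable list, using bounding spheres for the intersection test. A minimal sketch; the `.center`/`.radius` object interface is hypothetical, not from the slides:

```python
import numpy as np

def ray_sphere_hit(origin, direction, center, radius):
    """Ray vs. bounding-sphere test; returns the distance along the ray
    at closest approach, or None on a miss. `direction` must be normalized."""
    oc = center - origin
    t = np.dot(oc, direction)            # closest approach along the ray
    if t < 0:
        return None                      # sphere is behind the ray origin
    d2 = np.dot(oc, oc) - t * t          # squared ray-to-center distance
    return t if d2 <= radius * radius else None

def pick(origin, direction, selectables):
    """Return the nearest object hit by the ray, or None.
    Only the registered selectable objects are tested."""
    best, best_t = None, float("inf")
    for obj in selectables:
        t = ray_sphere_hit(origin, direction, obj.center, obj.radius)
        if t is not None and t < best_t:
            best, best_t = obj, t
    return best   # highlight this object as selection feedback
```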

Goals of manipulation
Object placement: design, layout, grouping
Tool usage
Travel
(C) 2005 Doug Bowman, Virginia Tech

Manipulation metaphors
Simple virtual hand
  Natural but limited
Ray-casting
  Little effort required
  Exact positioning and orienting very difficult (lever-arm effect)
(C) 2005 Doug Bowman, Virginia Tech

Manipulation metaphors II
Hand position mapping
  Natural, easy placement
  Limited reach, fatiguing, overshoot
Indirect depth mapping
  Infinite reach, not tiring
  Not natural, separates DOFs
(C) 2005 Doug Bowman, Virginia Tech

HOMER technique
Hand-Centered Object Manipulation Extending Ray-Casting
Select: ray-casting
Manipulate: hand
[Figure: timeline — selection by ray, then manipulation by hand]
(C) 2005 Doug Bowman, Virginia Tech

Manipulation metaphors III
HOMER (ray-casting + arm-extension)
  Easy selection & manipulation
  Expressive over a range of distances
  Hard to move objects away from you
(C) 2005 Doug Bowman, Virginia Tech

HOMER implementation
Requires torso position t
Upon selection:
  Detach virtual hand from tracker, move v. hand to object position in world CS, and attach object to v. hand (without moving the object)
  Get physical hand position h and distance dh = dist(h, t)
  Get object position o and distance do = dist(o, t)

Like Go-Go, HOMER requires a torso position, because you want to keep the virtual hand on the ray between the user's body (torso) and the physical hand. The problem here is that HOMER moves the virtual hand from the physical hand position to the object upon selection, and it is not guaranteed that the torso, physical hand, and object will all line up at this time. Therefore, in my implementation, I calculate where the virtual hand would be if it were on this ray initially, then calculate the offset to the position of the virtual object, and maintain this offset throughout manipulation.

When an object is selected via ray-casting, you must first detach the virtual hand from the hand tracker. This is because if it remained attached and you moved the virtual hand model away from the physical hand location, a rotation of the physical hand would cause both a rotation and a translation of the virtual hand. You then move the virtual hand in the world CS to the position of the selected object, and attach the object to the virtual hand in the scene graph (again, without moving the object in the world CS).

To implement the linear depth mapping, we need to know the initial distance between the torso and the physical hand, and between the torso and the selected object. The ratio do/dh will be the scaling factor.

(C) 2005 Doug Bowman, Virginia Tech
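A sketch of the selection-time bookkeeping, assuming numpy vectors (the function name is illustrative; the scene-graph operations described above are omitted):

```python
import numpy as np

def homer_on_select(torso, hand, obj_pos):
    """At selection time, record the linear depth scale factor do/dh.
    Detaching the virtual hand and re-parenting the selected object
    are scene-graph operations, not shown here."""
    dh = np.linalg.norm(hand - torso)      # torso -> physical hand distance
    do = np.linalg.norm(obj_pos - torso)   # torso -> selected object distance
    return do / dh                         # scaling factor used each frame
```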

HOMER implementation (cont.)
Each frame:
  Copy hand tracker matrix to v. hand matrix (to set orientation)
  Get physical hand position hcurr and distance: dh-curr = dist(hcurr, t)
  V. hand distance: dvh = (do / dh) * dh-curr
  Normalize the torso-hand vector; v. hand position: vh = t + dvh * (hcurr - t) / ||hcurr - t||

Now, each frame you need to set the position and orientation of the virtual hand. The selected object is attached to the virtual hand, so it will follow along. Setting the orientation is relatively easy: you can simply copy the transformation matrix for the hand tracker to the virtual hand, so that their orientations match. To set the position, we need to know the correct depth and the correct direction. The depth is found by applying the linear mapping to the current physical hand depth: the physical hand distance is simply the distance between the hand and the torso, and we multiply this by the scale factor do/dh to get the virtual hand distance. We then obtain a normalized vector between the physical hand and the torso, multiply this vector by the v. hand distance, and add the result to the torso position to obtain the virtual hand position.

(C) 2005 Doug Bowman, Virginia Tech
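The per-frame position update, continuing the sketch above (orientation is handled separately by copying the tracker matrix, as the notes describe):

```python
import numpy as np

def homer_each_frame(torso, hand_curr, scale):
    """Per-frame virtual hand position; `scale` is do/dh from selection time."""
    v = hand_curr - torso
    dh_curr = np.linalg.norm(v)          # current physical hand depth
    dvh = scale * dh_curr                # linearly mapped virtual depth
    return torso + dvh * (v / dh_curr)   # v. hand on the torso-hand ray

# Usage: call homer_on_select(...) once at selection time to get `scale`,
# then homer_each_frame(...) every frame while the object is manipulated.
```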

Manipulation metaphors IV
Scaled-world grab
  Easy, natural manipulation
  User discomfort with use
World-in-miniature
  All manipulation in reach
  Doesn't scale well, indirect
(C) 2005 Doug Bowman, Virginia Tech


Technique classification by metaphor

This slide presents a simple taxonomy of current VE manipulation techniques. They are categorized according to their basic interaction metaphors into exocentric or egocentric techniques. These two terms originated in studies of cockpit displays and are now used to distinguish between two fundamental styles of interaction within VEs. In exocentric interaction, also known as the God's-eye viewpoint, users interact with the VE from the outside (the outside-in, world-referenced display); examples are the World-in-Miniature and world-scaling techniques. In egocentric interaction, which is the most common in immersive VEs, the user interacts from inside the environment, i.e., the VE embeds the user. There are currently two basic metaphors for egocentric manipulation: virtual hand and virtual pointer. With techniques based on the virtual hand metaphor, users can reach and grab objects by "touching" and "picking" them with a virtual hand; the choice of the mapping function discriminates among these techniques. With techniques based on the virtual pointer metaphor, the user interacts with objects by pointing at them; the selection volume (i.e., conic or ray), the definition of the ray direction, and the disambiguation function are some of the design parameters for such techniques.

(C) 2005 Doug Bowman, Virginia Tech

Technique classification by components

The component-based taxonomy (Bowman, 1999) is another way to classify manipulation techniques. It is based on the observation that all techniques consist of several basic components with similar purposes: in a manipulation task, for example, the object must be positioned and rotated, feedback must be provided, and so on. Instead of classifying the techniques as a whole, we classify the components that accomplish these subtasks within techniques. Theoretically, any technique can then be constructed simply by picking appropriate components and putting them together.

(C) 2005 Doug Bowman, Virginia Tech

Evaluation: positioning task
Ray-casting effective if the object is repositioned at a constant distance
Scaling techniques (HOMER, scaled-world grab) difficult for outward positioning of objects: e.g. picking an object located within reach and moving it far away
If outward positioning is not needed, then scaling techniques might be effective

Evaluating techniques for the positioning task is difficult because of the vast number of variables affecting user performance during manipulation: depending on the direction of positioning (inward or outward, left to right, at a constant distance from the user or with a change of distance, etc.), the travel distance, the required accuracy of manipulation, and the degrees of freedom involved, the performance of the user can differ drastically. One reason behind these differences is that different groups of muscles come into action under different task conditions. However, there have been exploratory evaluations (Poupyrev et al. 1998, Bowman et al. 1999). These studies found, for example, that the ray-casting technique, while not very efficient for positioning with a change of distance from the user, can be very efficient when the starting and target positions of the object are located at a constant distance. Scaling techniques such as HOMER (Bowman et al. 1997) and scaled-world grab (Mine, 1998) cannot be used to perform outward positioning of objects: if the user wants to pick an object that is already located within reach and move it away to a distance, the coefficient of scaling will be equal to 1 and the operation might be difficult to accomplish. However, if outward movements are rare, then the scaling techniques can be effective.

(C) 2005 Doug Bowman, Virginia Tech

Evaluation: orientation task
Setting precise orientation can be very difficult
Shape of objects is important
Orienting at-a-distance harder than positioning at-a-distance
Techniques should be hand-centered
(C) 2005 Doug Bowman, Virginia Tech

Manipulation notes
No universally best technique
Constraints and reduced DOFs
Naturalism not always desirable
If the VE is not based on the real world, design it so that manipulation is optimized
(C) 2005 Doug Bowman, Virginia Tech

Manipulation enhancements
Constraints
Two-handed manipulation
Haptic feedback
Multi-modal manipulation
(C) 2005 Doug Bowman, Virginia Tech

Implementation issues for manipulation techniques
Integration with the selection technique
Disable selection and selection feedback while manipulating
What happens upon release?

One important issue for any manipulation technique is how well it integrates with the chosen selection technique. Many techniques, as we have said, do both: e.g. simple virtual hand, ray-casting, and Go-Go. Another issue is that while an object is being manipulated, you should take care to disable the selection technique and the feedback you give the user for selection. If this is not done, serious problems can occur if, for example, the user tries to release the currently selected object but the system also interprets this as trying to select a new object. Finally, you should think in general about what happens when the object is released. Does it remain at its last position, possibly floating in space? Does it snap to a grid? Does it fall via gravity until it contacts something solid? The application requirements will determine this choice. A sketch of one release policy appears below.

(C) 2005 Doug Bowman, Virginia Tech
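For instance, a snap-to-grid release policy takes only a few lines. A minimal sketch; the policy names and cell size are made up for illustration, and gravity would normally be delegated to a physics engine rather than handled here:

```python
import numpy as np

def release_object(position, policy="float", cell=0.25):
    """Decide where a released object ends up (illustrative policies only).
    'float': keep the last manipulated position
    'snap':  snap each coordinate to a grid with the given cell size"""
    if policy == "snap":
        return np.round(position / cell) * cell
    return position  # 'float': object stays where the user left it
```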