Virtual Characters and Environments: New Modalities for Human Computer Interaction October 23rd, 2003 Benjamin Lok

Overview Computer-generated characters and environments have amazing visuals and audio, but interacting with them is limited! Does this reduce their applicability? Goals: create new methods to interact, and evaluate the effectiveness of those interaction methods. [Images: Aki from Final Fantasy: The Spirits Within; the walking experiment in the PIT at UNC]

Virtual Environments VEs have been around for almost 30 years, yet the number of systems in research labs exceeds the number in day-to-day use. Why? Interaction with the virtual environment is too poor; everything being virtual isn't necessarily good (example: changing a light bulb). Approach: use real objects as interfaces to the virtual world, merge the real and virtual spaces, and evaluate what VR is good for!

Collaborators Benjamin Lok (University of Florida), Samir Naik (Disney VR Studios), Mary Whitton and Frederick P. Brooks Jr. (University of North Carolina at Chapel Hill) Good afternoon. Today, I'll be discussing our research into incorporating dynamic real objects into virtual environments.

Objects in Immersive VEs Most are virtual (tools, parts, users' limbs) and not registered with a corresponding real object (limited shape and motion information) Example: unscrewing a virtual oil filter from a car engine model Ideally: handle real objects for improved look, feel, affordances, and interaction Solution: tracking and modeling dynamic objects With only a few exceptions we'll mention later, in current VEs almost all objects, such as the user, tools, and parts, are virtual. That is, the virtual objects are not registered with any physical object. We are focusing specifically on fully immersive virtual environments, systems that use head-mounted displays, as opposed to CAVE or augmented reality systems. Since fully modeling and tracking the participant, tools, parts, or other real objects is difficult, the system does not have shape and motion information for real objects.

Previous Work Incorporating real objects into VEs: Virtualized Reality (Kanade, et al.), Image-Based Visual Hulls [Matusik00, 01], 3D Tele-Immersion [Daniilidis00] Interaction: commercial solutions (tracked mice, gloves, joysticks); augmenting specific objects for interaction: doll's head with trackers [Hinckley1994], plate [Hoffman1998]; virtual object collision detection [Ehmann2000, Hoff2001]; virtual object – real object a priori modeling and tracking [Breen1996] Our work builds on the research areas of incorporating real objects and human avatars in VEs. We want to generate models of specific objects, and for this task prebuilt models or measuring-and-modeling packages are usually inadequate. The main distinction between approaches is whether a system is designed for real-time model generation or optimized for models of static objects. The Virtualized Reality project at Carnegie Mellon, led by Kanade, has a room filled with 49 cameras; recorded events can be played back from a "virtual camera". It uses a baseline stereo approach to create volumetric representations of the room. At SIGGRAPH 2000, Matusik from MIT presented an image-based rendering approach that examines camera images and generates visual hull models in real time; our approach is similar in approximating real-object shape. The 3D Tele-Immersion project at the University of Pennsylvania uses dense stereo approaches to compute models of the parties in a communication and transmits them to the other site. To give participants real objects to interact with, others have augmented specific objects. Hinckley at the University of Virginia engineered a doll's head with sliding rods as an I/O device to help doctors select cutting planes for visualizing MRI data of a patient's head. Hoffman at the HITLab at the University of Washington attached a magnetic tracker to a real plate to register a virtual model of the plate rendered in the VE, as shown in the images. This allowed the participant to interact with a real plate where he saw a virtual one.

Visual Incorporation of Dynamic Real Objects in a VE I3D2001 We’ll now discuss a method to visually incorporate virtual representations of real objects within a virtual environment.

Real-time Object Reconstruction System Inputs: outside-looking-in camera images of real objects in a scene Outputs: an approximation of the real objects (the visual hull) Handle dynamic objects (generate a virtual representation) Interactive rates Our approach has the following goals. First, we'd like to handle dynamic scenes; that is, we'd like to generate representations on the fly and not use pre-built models. Second, we want to generate these representations at interactive rates. This leads us to bypass an explicit 3D modeling stage, as computing full models of real objects can be extremely complex and time-consuming. For many types of tasks an approximate model of the real objects will do, and the visual hull is one such approximation. We present a real-time approach to computing approximate models of real objects that exploits the tremendous recent advances in graphics hardware. We start with a set of outside-looking-in cameras. Given a set of camera images, we want to generate a 3D representation of the objects within a volume.

Reconstruction Algorithm 1. Start with live camera images 2. Image subtraction 3. Use the images to calculate the volume intersection 4. Composite with the VE We want to reconstruct the visual hull from a novel viewpoint. To do this, the reconstruction algorithm has four major steps to go from live camera images to the visual hull to inserting it into the VE. We start with camera images, then perform image subtraction to get object pixels. Then we use the graphics hardware to accelerate volume intersection to calculate the visual hull. Next we render the visual hull and composite it with the VE. So the first step is to identify the real objects in the source camera images.
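A minimal sketch of this four-step loop, assuming the object_mask and in_visual_hull helpers sketched with the image-subtraction and volume-querying slides; render_hull and composite_with_ve are hypothetical placeholders for the rendering stages, not the system's actual API:

```python
import numpy as np

def reconstruct(frames, backgrounds, proj_mats, voxel_centers, virtual_frame):
    # 1. Start with live camera images (`frames`, one per camera).
    # 2. Image subtraction: one boolean silhouette per camera.
    masks = [object_mask(f, b) for f, b in zip(frames, backgrounds)]
    # 3. Volume intersection: keep sample points inside the visual hull.
    inside = in_visual_hull(voxel_centers, masks, proj_mats)
    hull_points = voxel_centers[inside]
    # 4. Render the hull and composite it with the virtual environment
    #    (render_hull / composite_with_ve are hypothetical stand-ins).
    return composite_with_ve(render_hull(hull_points), virtual_frame)
```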

Visual Hull Computation Visual hull - tightest volume given a set of object silhouettes Intersection of the projection of object pixels So given these object pixels for each camera, we’d like to determine the shape and appearance of the objects in the volume. So instead of computing an exact model for the real objects, we compute the visual hull of the real objects. The visual hull is the tightest volume given a set of object silhouettes. In our case, if we project the object pixels from each camera into a volume, the visual hull is the intersection. So let’s take a look at this example.

Volume Querying in Hardware Volume querying: if a point P is inside the visual hull VH(real objects), it projects onto an 'object pixel' from each camera: P ∈ VH(real objects) iff ∀i ∃j : P = Ci⁻¹(Oi,j) Perform this test in graphics hardware. We have said that the intersection of the object-pixel projections results in the visual hull. We can reverse the statement and say that if a point is inside the visual hull, it projects onto an object pixel from every camera. This volume-querying approach asks the question, "for a 3D point, am I inside the visual hull?" Let's look at these examples.
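A CPU sketch of the same test in NumPy, assuming each camera i is described by a 3x4 projection matrix; the actual system performs this in graphics hardware, so this only illustrates the logic:

```python
import numpy as np

def in_visual_hull(points, masks, proj_mats):
    """points: N x 3 world points; masks: per-camera boolean silhouettes
    (H x W) from image subtraction; proj_mats: per-camera 3 x 4 projection
    matrices. A point is inside the visual hull iff it projects onto an
    object pixel in every camera."""
    inside = np.ones(len(points), dtype=bool)
    homog = np.c_[points, np.ones(len(points))]          # homogeneous coords
    for mask, P in zip(masks, proj_mats):
        uvw = homog @ P.T                                # project into camera
        u = np.round(uvw[:, 0] / uvw[:, 2]).astype(int)
        v = np.round(uvw[:, 1] / uvw[:, 2]).astype(int)
        ok = (u >= 0) & (u < mask.shape[1]) & (v >= 0) & (v < mask.shape[0])
        hit = np.zeros(len(points), dtype=bool)
        hit[ok] = mask[v[ok], u[ok]]                     # lands on object pixel?
        inside &= hit                                    # must hit all cameras
    return inside
```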

Implementation 1 HMD-mounted and 3 wall-mounted cameras SGI Reality Monster – handles up to 7 video feeds 15-18 fps Estimated error: 1 cm Performance will increase as graphics hardware continues to improve We implemented a system with 3 wall-mounted cameras for reconstruction. The reconstruction is texture-mapped with the HMD-mounted camera's image. We get between 15-18 frames per second, with most of the work being done in the image subtraction stage. There is about 0.3 seconds of latency, with an estimated reconstruction error of 1 centimeter.

Managing Collisions Between Virtual and Dynamic Real Objects We want virtual objects to respond to real objects' avatars. This requires detecting when real and virtual objects intersect and, if intersections exist, determining plausible responses. Both real and virtual objects are assumed stationary at the moment of collision; only the virtual objects can move or deform in response. To physically incorporate real objects, we want to have virtual objects react to the real objects' avatars. This requires being able to detect when virtual objects are in intersection with the real objects, and then, if there are intersections, to determine plausible responses. I'll present an approach that uses volume querying to determine collisions and responses between real and virtual objects.

Detecting Collisions The fundamental operation is finding points on the surface of the virtual object that are in collision with the visual hull.

Resolving Collisions 1. Estimate the point of deepest virtual object penetration, CPobj. 2. Define a plausible recovery vector, Vrec = RPobj - CPobj, pointing from CPobj toward the object's reference point RPobj (its center). 3. Back out the virtual object along Vrec until CPobj reaches CPhull, the collision point on the visual hull. Once we know there is a collision, we return that information to the application. The application can then request additional information to determine plausible responses. We provide one way to resolve collisions. Let's look at this diagram. Recall that the collision points are those pixels that were highlighted as "on" in volume querying; they are illustrated here as blue dots. They are points on the virtual object lying within the visual hull. We first estimate the point of deepest virtual object penetration. We take the point farthest from the object center as our best estimate of the point on the virtual object that first made contact with the visual hull (the green dot here). Since we can't backtrack both objects, this is only an estimate. Second, we define the recovery vector (the purple vector here) from this farthest point toward the object center. This is the direction we'll move the virtual object to get it out of collision. Third, we want to determine the point of collision on the visual hull. To do this, we volume-query again and search for the point of collision on the visual hull. The red vector is an estimate of the direction and distance to move the virtual object to get it out of collision.
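A sketch of the three-step response, reading RPobj as the virtual object's center and reusing the in_visual_hull test from the volume-querying slide; the step size and search range are illustrative values, not the paper's GPU implementation:

```python
import numpy as np

def resolve_collision(collision_pts, obj_center, masks, proj_mats,
                      step=0.001, max_dist=0.5):
    """collision_pts: M x 3 points on the virtual object's surface found
    inside the visual hull. Returns a translation to back the object out."""
    # 1. CPobj: deepest-penetration estimate, the collision point
    #    farthest from the object center.
    dists = np.linalg.norm(collision_pts - obj_center, axis=1)
    cp_obj = collision_pts[np.argmax(dists)]
    # 2. Recovery vector Vrec = RPobj - CPobj, toward the object center.
    v_rec = obj_center - cp_obj
    v_rec /= np.linalg.norm(v_rec)
    # 3. March CPobj along Vrec, volume-querying until it exits the hull;
    #    the exit point estimates CPhull, and t the backout distance.
    t = 0.0
    while t < max_dist and in_visual_hull((cp_obj + t * v_rec)[None, :],
                                          masks, proj_mats)[0]:
        t += step
    return t * v_rec  # translate the virtual object by this vector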

Results

Collision Detection / Response Performance We volume-query about 5000 triangles per second. The error of the collision points is ~0.75 cm and depends on the average size of the virtual object's triangles; there is a tradeoff between accuracy and time. There is plenty of room for optimizations: the approach is unoptimized, and we are confident that improved performance can easily be obtained.

Case Study: NASA Langley Research Center (LaRC) Payload Assembly Task Given payload models, designers and engineers want to evaluate: assembly feasibility, assembly training, repairability Current approaches: measurements, design drawings, step-by-step assembly instruction lists, low-fidelity mock-ups So we gave them information in the same format they usually receive it. Given a CAD model of a photomultiplier tube (PMT) box, part of a weather-imaging satellite, we abstracted a task that had the participant screw in a tube and attach a power cable down its center shaft while interacting with the virtual model.

Task We wanted a plausible task given common assembly jobs, so we abstracted a payload layout task: screw in the tube and attach the power cable. Determine how much space should be allocated between the TOP of the PMT and the BOTTOM of Payload A. Typically one assembles first and then makes the cable connections.

Videos of Task The virtual models flash so that the participant can see the other virtual models and the constraints they impose on the task.

Results The tube was 14 cm long and 4 cm in diameter. (Values for participants #1-#4, where recorded:)
- (Pre-experience) How much space is necessary? 14 cm / 14.2 cm / 15-16 cm / 15 cm
- (Pre-experience) How much space would you actually allocate? 21 cm / 16 cm / 20 cm
- Actual space required in the VE: 22.5 cm / 22.3 cm / 23 cm
- (Post-experience) How much space would you actually allocate? 18 cm (modify tool) / 25 cm
Again, the tube was 14 cm long and 4 cm in diameter. The first two rows show how much space the participants thought was necessary and would actually allocate after looking only at assembly-task drawings and descriptions. Payload space is scarce, and they were stingy with it. Participant #1 performed this task with a cable stiff enough that she could force it in without needing a tool; we replaced the cable with a more flexible one for the later participants, so the interesting cases are #2-4. After the task we see how much space was actually required: none of the original estimates allowed enough space for the tool that was eventually required.

Results (Values for participants, where recorded:)
- Time cost of the spacing error: days to months / 30 days / months
- Financial cost of the spacing error: $100,000s-$1,000,000+ / largest cost is the huge hit in schedule / $100,000s
The participants were asked after the experience how much time and money finding a spacing error like this one at the integration stage would cost. As you can see, it is days to months in time (the most precious commodity at the later stages), and the financial implications can run upwards of hundreds of thousands of dollars. Late discovery of similar problems is not uncommon.

Case Study Conclusions Benefits of object-reconstruction VEs: Specialized tools and parts require no modeling. Short development time to try multiple designs. Shows promise for early testing of subassembly integration from multiple suppliers. It is possible to identify assembly, design, and integration issues early, resulting in considerable savings of time and money.

Future Work Improved model fidelity Improved collision detection and response Apply the system to upcoming NASA payload projects. There are plenty of areas for future research. Advanced image processing techniques, camera calibration, and graphics functions (such as pixel shaders) would improve collision detection and refine the visuals. We feel that additional studies should be conducted to investigate more closely the importance of visual fidelity compared to kinematic fidelity. If the virtuality of objects so hindered task performance, we believe the same hindrances would affect task training and learning. Finally, and perhaps most rewarding, would be continuing to apply hybrid systems to upcoming NASA payload projects.

Collaborators Graduate Students: Cathy Zanbaka, Jonathan Jackson, Sabarish Babu, Dan Xiao, Amy Ulinski, Jee-In Kim; Min Shin, Larry Hodges (University of North Carolina at Charlotte)

Avatars in VE Most virtual environments do not provide an avatar (user self-representation) Why? Because tracking the human body is difficult. Solution: Use simple computer vision to track colored markers to generate an avatar
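A minimal OpenCV sketch of this style of colored-marker tracking; the HSV bounds are placeholder values you would tune per marker color, and this is an illustration rather than the actual STRAPS implementation:

```python
import cv2
import numpy as np

def track_marker(frame_bgr, hsv_lo=(40, 80, 80), hsv_hi=(80, 255, 255)):
    """Return the (x, y) centroid of the marker-colored blob, or None.
    The default bounds roughly select a green marker (an assumption)."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, np.array(hsv_lo), np.array(hsv_hi))
    # Remove speckle noise before computing the blob centroid.
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))
    m = cv2.moments(mask)
    if m["m00"] == 0:
        return None
    return (m["m10"] / m["m00"], m["m01"] / m["m00"])
```

Tracking a handful of such markers (head, arms, upper body) from one or more cameras yields enough joint positions to pose a simple articulated avatar.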

STRAPS Video

Locomotion in VR The most common locomotion method uses a 'virtual walking' metaphor. Does this reduce effectiveness? We can test this because of new wide-area tracking technologies.

Results Natural interaction is better! [Chart: measures compared were number of questions answered correctly, sketch-of-room accuracy, and sense of presence.]

Collaborators Graduate Students: George Mora. Undergraduate Students: Sam Preston, Andrew Joubert, Sayed Hashimi.

What is a Virtual Character? Virtual character - a character who represents an executing system In TRON (1982), humans and characters that represent software interact within a world that represents the hardware.

What is a Virtual Character? We look to have humans and virtual humans that represent software interact in the real world. http://movies.yahoo.com/shop?d=hv&id=1807432839&cf=trailer

Digital Characters A new medium to interact with system information Spectrum from the paper clip -> AIBO Examples: Agents (AI, NLP) Robots (Robotics)

Life-Sized Virtual Characters Virtual characters as a way to interact with information If virtual characters are presented with an adequate level of realism, would people respond to them as they would to other people? Effective interaction: natural (more so than keyboard and mouse), 3D, dynamic (augmentable) Effective collaboration: non-verbal communication (60%), high impact?

Comparing Human-Human Interaction Modalities Modalities compared: IM, Phone, Video Phone, Digital Character, 3D Tele-Immersion, Face to Face. Dimensions: Input (verbal, non-verbal, privacy); Output (dynamic, efficacy, impact); Interaction logistics (bandwidth, cost, availability). Rating scale: N = none, L = low, M = medium, H = high.

Interaction Each participant in a communication has three stages: perception, cognition, and response Investigate: display, perception, efficacy [Diagram: the virtual character and the participant each cycle through perceiving, thinking, and responding as they interact.]

What we plan on studying There are many components to digital characters; we will focus only on interaction. Input (perception): How can the system capture non-verbal information (emotion, gestures, eye-gaze, etc.)? Proposed solution: initially STRAPS, then advanced computer vision techniques. Output (response): How do we display the digital character? Proposed solution: evaluate different display technologies. Eventually we will look at efficacy (future research).

How we plan on studying it Perception: STRAPS tracks the head, arms, upper body, and objects (e.g., a telephone). Develop techniques to transmit eye-gaze and expressions. User studies: evaluate the effectiveness of each component. Response: evaluate the display modality. User studies: immersive vs. non-immersive, life-sized vs. non-life-sized, 2D vs. 3D. Goal: TV + webcams + laptop.

Projects underway Interpersonal communication Teaching Distributed acting rehearsal (DAS student) Teaching deaf students Future work: universal access for the disabled, minorities, and rural communities

Recruiting Work with: cognitive psychologists; computer scientists (computational geometry, aesthetic computing, simulation/modeling); Digital Arts and Science. Looking for: MS and PhD students. Equipment: wide-area 12'x12' tracker, V8 HMD, 42" plasma TV, Cyberware scanner, 6 data projectors (stereo), FireWire cameras, high-end PCs, video.

Thanks Collaborators: Danette Allen (NASA LaRC), UNC-CH Effective Virtual Environments, UNC-C Virtual Environments Group. For more information: http://www.cise.unc.edu/~lok (I3D2001, I3D2003, VR2003, VR2004). Funding Agencies: The LINK Foundation, NIH (Grant P41 RR02170), National Science Foundation, Office of Naval Research.

Case Study: NASA Langley Research Center (LaRC) Payload Assembly Task We wanted to see how applicable the object reconstruction system would be for a real world problem. So we began collaborating with a payload-engineering group at NASA Langley Research Center (NASA LaRC) in Hampton, Virginia. In a first exploratory study, four experts in payload design and engineering used the reconstruction system to evaluate an abstracted version of payload integration layout design.

Ideally Would like: Accurate virtual representations, or avatars, of real objects Virtual objects responding to real objects Haptic feedback Correct affordances Constrained motion Example: unscrewing a virtual oil filter from a car engine model What we would really like in a virtual world are accurate visual representations, or avatars, of real objects. We extend our definition of an avatar to include virtual representations of any real object, not just of the user. Virtual objects should react and respond to real objects. Both assembly and servicing are hands-on tasks and would benefit from haptic feedback, manual affordances, and constrained motion when interacting with virtual objects; that is, you can feel the object and interact with it naturally. For example, imagine trying to simulate something as basic as unscrewing an oil filter from a car engine when everything is virtual! It would be easier to perform this task with a real oil filter and real tools, like an oil wrench, while interacting with a virtual model.

Object Pixels Identify new objects Perform image subtraction Separate the object pixels from background pixels current image - background image = object pixels For each frame, we subtract the current frame from a background reference frame and compare the result against a threshold. Here we see a single frame for the current image, background, and the subtracted image with the resulting pixels whose difference is greater than the threshold. We call these “object pixels”.
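A per-frame NumPy sketch of this subtraction (this is the object_mask helper referenced in the reconstruction-loop sketch earlier); the threshold is a tunable placeholder, and color frames are assumed:

```python
import numpy as np

def object_mask(frame, background, thresh=25):
    """current image - background image = object pixels: mark pixels whose
    difference from the background reference exceeds the threshold."""
    diff = np.abs(frame.astype(np.int32) - background.astype(np.int32))
    return diff.max(axis=-1) > thresh  # H x W boolean silhouette
```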

Current Projects at UNC-Charlotte with Dr. Larry Hodges Digitizing Humanity: basic research into virtual characters. What is important? How does personality affect interaction? Applications: social situations, human virtual-human interaction. Virtual Reality basic research: incorporating avatars, locomotion's effect on cognitive performance, balance disorders (with the University of Pittsburgh).

Research Interests Computer Graphics – computer scientists are toolsmiths: applying graphics hardware to 3D reconstruction, simulation, visualization, and interactive graphics. Virtual Reality: what makes a virtual environment effective? Applying it to assembly verification and clinical psychology. Human Computer Interaction: 3D interaction, virtual humans. Assistive Technology: computer vision and mobile technology to help the disabled.

Future Directions Long-term goals: help build the department into a leader in using graphics for visualization, simulation, and training. Effective Virtual Environments (graphics, virtual reality, and psychology). Digital Characters (graphics and HCI), with the additional benefit of nearby companies (Disney) and the military. Assistive Technology (graphics, VR, and computer vision).

Occlusion
