3DDI: 3D Direct Interaction
John Canny, Computer Science Division, UC Berkeley

3DDI: Goals
The goals of the project are:
- Allow natural and transparent access to 3D models, simulation output, and remote environments.
  - No cognitive transitions from real to virtual.
  - Direct interaction with 3D worlds: no gloves or glasses.
  - 3D content is animated; interaction is in real time.
  - Content is real-world: 3D models come either from live capture or from offline capture and modeling.

3DDI: Motivating Examples
Training / Collaborative Design Interfaces
- Users interact with animated virtual objects.
- Autostereoscopic displays render the object in space.
- 3D capture provides hand-gesture input to the virtual world.
[Figure: laser scanner, 3D display, virtual object]

3DDI: Examples - Collaboration
- Spatial cues such as position and gaze are essential for natural interaction.
- 2D video provides one viewpoint and distorts those cues.
- 3D video preserves spatial cues.
[Figure: remote and local participants]

3DDI: Examples - 3D Video (aka tele-immersion)
- 3D (depth) data as well as color is captured by laser scanners.
- 3D data is transmitted as texture-mapped polygons.
- 3D data is rendered using autostereoscopic displays.
- 100k polygons/sec → … bytes/sec, plus low latency.
[Figure: 3D scanners, 3D displays]
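The slide leaves the byte rate unspecified; as a rough back-of-envelope estimate (the per-vertex sizes below are illustrative assumptions, not figures from the project), 100k textured triangles per second works out to a few megabytes per second before any compression or vertex sharing:

```java
// Back-of-envelope bandwidth estimate for streaming texture-mapped
// polygons. All per-vertex sizes are illustrative assumptions.
public class BandwidthEstimate {
    public static void main(String[] args) {
        final int polysPerSec  = 100_000;  // rate quoted on the slide
        final int vertsPerPoly = 3;        // triangles, no vertex sharing
        final int bytesPerVert = 3 * 4     // xyz position, 32-bit floats
                               + 2 * 4;    // uv texture coordinates
        long bytesPerSec = (long) polysPerSec * vertsPerPoly * bytesPerVert;
        System.out.printf("~%.1f MB/s uncompressed%n", bytesPerSec / 1e6);
        // prints ~6.0 MB/s with these assumptions
    }
}
```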

3DDI: 3D Capture
Currently, two classes of devices are used for shape and motion acquisition. In the future, ranging devices will support both functions with greater speed, accuracy, and versatility.
- 3D scanners (static): detailed geometry.
- Motion capture devices: motion of rigid parts.
- Video-rate range scanners + real-time modeling: geometry and motion of both rigid and non-rigid parts.

3D Camera Imaging System
[Figure: imaging system with RGB CCD, ICCD, and VCSEL array source]

Physical Behaviors Using Java
Francesca Barrientos, Brian Mirtich
- A Java "physics" interpreter was added to the UCB simulator IMPULSE.
- Supports interactive, distributed, and extensible simulation.
- A remote-controlled blimp simulation is included, with:
  - Buoyancy
  - Aerodynamic drag
  - Remote user propulsion control
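The actual interpreter API is not shown on the slide; the sketch below is a hypothetical Java shape for such pluggable behaviors, using the standard formulas for buoyancy (air density × displaced volume × g, upward) and quadratic aerodynamic drag. The Vec3, Body, and ForceBehavior names are invented for illustration, not the real IMPULSE interfaces.

```java
// Hypothetical pluggable physics behaviors in the spirit of the IMPULSE
// "physics interpreter" slide; these types are not the actual IMPULSE API.
final class Vec3 {
    final double x, y, z;
    Vec3(double x, double y, double z) { this.x = x; this.y = y; this.z = z; }
    double length() { return Math.sqrt(x * x + y * y + z * z); }
    Vec3 scale(double s) { return new Vec3(s * x, s * y, s * z); }
}

interface Body {
    double volume();        // displaced volume, m^3
    double crossSection();  // frontal area, m^2
    Vec3 velocity();        // current velocity, m/s
}

interface ForceBehavior {
    Vec3 force(Body b);     // force to apply this simulation step
}

class Buoyancy implements ForceBehavior {
    static final double RHO_AIR = 1.2, G = 9.81;
    public Vec3 force(Body b) {
        // Archimedes: upward force = air density * displaced volume * g
        return new Vec3(0, RHO_AIR * b.volume() * G, 0);
    }
}

class AerodynamicDrag implements ForceBehavior {
    static final double RHO_AIR = 1.2, CD = 0.4;  // drag coefficient (assumed)
    public Vec3 force(Body b) {
        Vec3 v = b.velocity();
        // Quadratic drag: F = -1/2 * rho * Cd * A * |v| * v
        return v.scale(-0.5 * RHO_AIR * CD * b.crossSection() * v.length());
    }
}
```

Remote propulsion control would then just be another ForceBehavior whose force comes from user input rather than from the environment.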

Future Work
- Demonstrate 3D video using scanner data (S98).
- Extend real-time simulation to larger environments (computation-limited).
- Develop model-based 3D to deal with occlusion and to assist with gestural input.
  - Involves an offline training phase for kinematics.
  - Online tracking should then be feasible in real time.
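The slide doesn't name an algorithm for the online phase; one plausible predict-then-correct loop over incoming range data, with every type and method name below hypothetical, would look roughly like this:

```java
// Hypothetical skeleton of the online model-based tracking loop sketched
// above: predict from offline-trained kinematics, then correct against
// observed range data. All names are illustrative.
interface RangeScanner {
    double[][] nextFrame();                           // depth samples
}

interface KinematicModel {                            // trained offline
    double[] predict(double[] pose, double dt);       // motion prior
    double[] refine(double[] pose, double[][] depth); // fit to the data
}

class Tracker {
    private final RangeScanner scanner;
    private final KinematicModel model;
    private double[] pose;                            // e.g. joint angles

    Tracker(RangeScanner s, KinematicModel m, double[] initialPose) {
        scanner = s; model = m; pose = initialPose;
    }

    // Per frame: propagate the pose with the kinematic prior, then
    // register the predicted model against the new depth frame
    // (e.g. by an ICP-style fit) to recover the corrected pose.
    void step(double dt) {
        double[] predicted = model.predict(pose, dt);
        pose = model.refine(predicted, scanner.nextFrame());
    }
}
```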