3D Tele-Collaboration over Internet2
Herman Towles, UNC-CH, representing members of the National Tele-Immersion Initiative (NTII)
ITP 2002, Juan-les-Pins, France


1 3D Tele-Collaboration over Internet2
Herman Towles, UNC-CH
representing members of the National Tele-Immersion Initiative (NTII)
ITP 2002, Juan-les-Pins, France, 06 December 2002

2 NTII Collaborators & Co-authors
University of North Carolina at Chapel Hill
– Wei-Chao Chen, Ruigang Yang, Sang-Uok Kum, and Henry Fuchs
University of Pennsylvania
– Nikhil Kelshikar, Jane Mulligan, and Kostas Daniilidis
Brown University
– Loring Holden, Bob Zeleznik, and Andy Van Dam
Advanced Network & Services
– Amela Sadagic and Jaron Lanier

3 Clear Motivation to Provide
Higher Resolution
Larger, more immersive Field-of-View
Participants at Accurate Geometric Scale
Eye Contact
Spatialized Audio (Group settings)
More Natural Human-Computer Interfaces

4 Related Work
Improved Resolution & FOV
– Access Grid – Childers et al., 2000
– Commercial, multi-channel extensions of '1-camera to 1-display'
Gaze-Awareness
– MONJUnoCHIE System – Aoki et al., 1998
– Blue-C Project – Kunz and Spagno
– VIRTUE Project – Cooke, Kauff, Schreer et al.
3D Reconstruction / Novel Views
– CMU's Virtualized Reality Project – Narayanan, Kanade, 1998
– Visual Hull Methods – Matusik, McMillan et al., 2000
– VIRTUE Project – Cooke, Kauff, Schreer et al.
Human-Computer Interfaces
– T-I Data Exploration (TIDE) – Leigh, DeFanti et al., 1999
– VisualGlove Project – Constanzo, Iannizzotto, 2002

5 XTP: 'Xtreme Tele-Presence'
UNC 'Office of the Future' – Andrei State, 1998

6 Research Snapshots

7 Presentation Outline
Motivation and Related Work
NTII Tele-Collaboration Testbed
– Acquisition and 3D Reconstruction
– Collaborative Graphics & User Interfaces
– Rendering & Display
– Network
Results
Future Challenges

8 Scene Acquisition & Reconstruction
Foreground: Real-Time Stereo Algorithm
– Frame Rate: 2-3 fps (550MHz Quad-CPU) – REAL-TIME!
– Volume: 1 cubic meter
– Resolution: 320x240 (15K-25K foreground points)
Background: Scanning Laser Rangefinder
– Frame Rate: 1 frame in minutes – OFFLINE!
– Volume: Room-size
– Resolution: More data than you can handle!
Composite Live Foreground & Static Background

9 Real-Time Foreground Acquisition
Trinocular Stereo Reconstruction Algorithm
– After background segmentation, find corresponding pixels in each image using the MNCC method
– 3D ray intersection yields pixel depth
– Median filter the disparity map to reduce outliers
Produce 320x240 Depth Maps (1/z, R,G,B)
Images courtesy of UPenn GRASP Lab
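As an illustration of the matching-and-filtering steps on this slide, here is a minimal brute-force stereo sketch in Python/NumPy: windowed normalized cross-correlation picks a winner-take-all disparity per pixel, and a 3x3 median filter suppresses outliers. This is not the NTII real-time trinocular MNCC implementation; it uses plain two-view NCC, omits background segmentation and ray intersection, and the window size and disparity range are arbitrary.

```python
import numpy as np

def ncc(patch_a, patch_b):
    """Normalized cross-correlation of two equal-size grayscale patches."""
    a = patch_a - patch_a.mean()
    b = patch_b - patch_b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return (a * b).sum() / denom if denom > 0 else 0.0

def disparity_map(left, right, max_disp=16, win=3):
    """Winner-take-all disparity via NCC on a rectified image pair."""
    h, w = left.shape
    disp = np.zeros((h, w), dtype=np.int32)
    for y in range(win, h - win):
        for x in range(win + max_disp, w - win):
            ref = left[y - win:y + win + 1, x - win:x + win + 1]
            scores = [ncc(ref, right[y - win:y + win + 1,
                                     x - d - win:x - d + win + 1])
                      for d in range(max_disp)]
            disp[y, x] = int(np.argmax(scores))
    return disp

def median_filter3(disp):
    """3x3 median filter on the disparity map to reduce outliers."""
    h, w = disp.shape
    out = disp.copy()
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            out[y, x] = np.median(disp[y - 1:y + 2, x - 1:x + 2])
    return out
```

In the real system, the trinocular constraint and the MNCC score make the correspondence search far more robust than this two-view sketch, and the per-pixel depth then comes from intersecting the back-projected rays of the matched pixels.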

10 UNC Acquisition Array
Five Dell 6350 Quad-Processor Servers
Seven Sony Digital 1394 Cameras – Five Trinocular Views

11 Stereo Processing Sequence
Camera Views
Disparity Maps
3 Views of Combined Point Clouds
Images courtesy of UPenn GRASP Lab

12 Collaborative Graphics & User I/F

13 Shared 3D Objects
Scene Graph Sharing
– Distributed, Common Scene Graph Dataset
– Local Changes, Shared Automatically with Remote Nodes
Object Manipulation with 2D & 3D Pointers
– 3D Virtual Laser Pointing Device
– Embedded magnetic tracker
– Laser beam rendered as part of Scene Graph
– One event/behavior button
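The replication idea on this slide (every site keeps a full copy of the scene graph, and local edits are propagated to remote copies) can be sketched as a toy model. The class name, message format, and method names below are illustrative, not the NTII wire protocol.

```python
import json

class SharedSceneGraph:
    """Toy replicated scene graph: each site holds a full copy; local
    edits are serialized and replayed on every connected remote copy."""

    def __init__(self):
        self.nodes = {}   # node id -> attribute dict
        self.peers = []   # remote replicas to notify

    def connect(self, peer):
        self.peers.append(peer)

    def set_local(self, node_id, **attrs):
        """Apply a local change, then share it with all remote nodes."""
        self._apply(node_id, attrs)
        msg = json.dumps({"id": node_id, "attrs": attrs})
        for peer in self.peers:
            peer.receive(msg)

    def receive(self, msg):
        """Apply a change that originated at a remote site."""
        change = json.loads(msg)
        self._apply(change["id"], change["attrs"])

    def _apply(self, node_id, attrs):
        self.nodes.setdefault(node_id, {}).update(attrs)
```

In this model the tracked virtual laser pointer is just another node: moving it calls `set_local("laser_beam", pos=...)`, so remote participants see the beam move because it is rendered from their replica of the same scene graph.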

14 Rendering System Overview

15 3D Stereo Display
Passive Stereo & Circular Polarization
– Custom Filters on Projectors
– Lightweight Glasses
– Silvered Display Surface
Front Projection
– Usable in any office/room
– Ceiling-mounted Configurations
Two-Projector Stereo
– 100% Duty Cycle
– Brighter & No Flicker
– Permits multi-PC Rendering

16 View-Dependent Rendering
HiBall 6DOF Tracker
– 3D Position & Orientation
– Accurate, Low Latency & Noise
– Headband-mounted Sensor
– HiBall-to-Eyeball Calibration
PC Network Server
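View-dependent rendering on a fixed display wall is conventionally done by building an asymmetric ("off-axis") frustum from the tracked eye position and the known screen corners. The sketch below is the standard generalized perspective projection simplified to an orthonormal screen basis; it is not taken from the NTII code, and the coordinate conventions are assumptions.

```python
import numpy as np

def off_axis_frustum(eye, screen_ll, screen_lr, screen_ul, near, far):
    """Asymmetric frustum for a head-tracked planar display.

    eye                            -- tracked eye position in tracker space
    screen_ll, screen_lr, screen_ul -- lower-left, lower-right, upper-left
                                      corners of the display surface
    Returns glFrustum-style (left, right, bottom, top, near, far).
    """
    eye = np.asarray(eye, dtype=float)
    ll, lr, ul = (np.asarray(p, dtype=float)
                  for p in (screen_ll, screen_lr, screen_ul))
    vr = lr - ll; vr /= np.linalg.norm(vr)   # screen "right" axis
    vu = ul - ll; vu /= np.linalg.norm(vu)   # screen "up" axis
    vn = np.cross(vr, vu)                    # screen normal, toward viewer
    d = np.dot(vn, eye - ll)                 # eye-to-screen distance
    # Project the screen corners (relative to the eye) onto the screen
    # axes and scale from the screen plane to the near plane.
    left   = np.dot(vr, ll - eye) * near / d
    right  = np.dot(vr, lr - eye) * near / d
    bottom = np.dot(vu, ll - eye) * near / d
    top    = np.dot(vu, ul - eye) * near / d
    return left, right, bottom, top, near, far
```

For the two-projector passive-stereo wall, this would be evaluated twice per frame, with the calibrated eye position offset by plus and minus half the interocular distance along the head's right vector.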

17 Rendering Configurations
One-PC Configuration (Linux)
– Dual-channel NVIDIA graphics
Three-PC Configuration (Linux)
– Separate left & right-eye rendering PCs w/NVIDIA graphics
– One PC used as network interface, multicasts depth map stream to rendering PCs
Performance – 933MHz PCs & GeForce2
– Interactive Display Rates of fps
– Asynchronous updates of 3D Reconstruction (2-3Hz) & Scene Graph (20Hz)
Newest Rendering Configuration 10-20X
– 2.4GHz, GeForce4, Multi-Threaded, VAR Arrays
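Multicasting the depth-map stream to both rendering PCs implies each (1/z, R,G,B) frame is serialized into packets. A hypothetical wire layout is sketched below; the header fields and byte order are invented for illustration and are not the NTII format.

```python
import struct
import numpy as np

def pack_depth_frame(inv_z, rgb, frame_id):
    """Serialize one (1/z, R, G, B) depth-map frame for transmission.
    inv_z: float32 array (H, W); rgb: uint8 array (H, W, 3).
    Hypothetical layout: 12-byte header (frame id, height, width,
    network byte order) + little-endian inv_z + raw RGB bytes."""
    h, w = inv_z.shape
    header = struct.pack("!III", frame_id, h, w)
    return header + inv_z.astype("<f4").tobytes() + rgb.astype(np.uint8).tobytes()

def unpack_depth_frame(buf):
    """Inverse of pack_depth_frame, as run on each rendering PC."""
    frame_id, h, w = struct.unpack("!III", buf[:12])
    n = h * w
    inv_z = np.frombuffer(buf[12:12 + 4 * n], dtype="<f4").reshape(h, w)
    rgb = np.frombuffer(buf[12 + 4 * n:], dtype=np.uint8).reshape(h, w, 3)
    return frame_id, inv_z, rgb
```

Because both eye-rendering PCs consume identical frames, sending each frame once to a multicast group halves the network interface's transmit load compared with unicasting to each renderer.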

18 Network Considerations
All Tests over Internet2
Data Rates of ~20-75 Mbps from Armonk, NY and Philadelphia into Chapel Hill
– 320 x 240 Resolution
– Up to 5 Reconstruction Views per Site
– Frame Rates 2-3 fps
TCP/IP
Latency of 2-3 seconds typical
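A back-of-the-envelope check of these rates is consistent with the slide's parameters, assuming each 320x240 depth-map pixel carries a 4-byte 1/z value plus 3 bytes of RGB and no compression (an assumption; the actual per-pixel size is not stated):

```python
def stream_rate_mbps(width, height, views, fps, bytes_per_pixel=7):
    """Rough per-site data rate for uncompressed depth-map streams.
    bytes_per_pixel=7 assumes a 4-byte 1/z plus 3 bytes of RGB."""
    bits_per_frame = width * height * bytes_per_pixel * 8
    return bits_per_frame * views * fps / 1e6

low  = stream_rate_mbps(320, 240, views=5, fps=2)   # ~43 Mbps
high = stream_rate_mbps(320, 240, views=5, fps=3)   # ~65 Mbps
```

Both figures fall inside the ~20-75 Mbps range measured over Internet2, which suggests the measured rates were dominated by the raw depth-map payload rather than by protocol overhead.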

19 Presentation Outline
Motivation and Related Work
NTII Tele-Collaboration Testbed
– Acquisition and 3D Reconstruction
– Collaborative Graphics & User Interfaces
– Rendering & Display
– Network
Results
Future Challenges

20 Results ‘Roll the Tape’

21 Summary
'One-on-One' 3D Tele-Immersion Testbed
Life-size, view-dependent, passive stereo display
Interact with shared 3D objects using a virtual laser pointer
Half-Duplex Operation today
Operation over Internet2 between Chapel Hill, Philadelphia and Armonk
Audio over H.323 or POTS

22 Future Challenges
Improved 3D Reconstruction Quality
– Larger Working Volume, Faster Frame Rates – 60 cameras
– Fewer Reconstruction Errors (using structured light and adaptive correlation kernels)
Reduce System Latency and Susceptibility to Network Congestion
– Pipelined architecture
– Shunt Protocol (between TCP/UDP and IP layers) that allows multiple flows to do coordinated congestion control
Full-Duplex Operation
Unobtrusive Operation
– No headmounts, no eyeglasses!

23 Thank You
Research funded by Advanced Network and Services, Inc. and the National Science Foundation (USA)

24 UPenn Acquisition Array
Fifteen Sony Digital 1394 Cameras – Five Trinocular Views

25 System Overview

26 Past Experiments
[Diagram: UPenn (Philadelphia), Advanced (Armonk, NY) and UNC (Chapel Hill) exchanging 3D Data + 2D Images and 2D Video + Audio over a scene bus, shown with and without collaboration]