Electronic Visualization Laboratory, University of Illinois at Chicago Collaborative Visualization Architecture in Scalable Adaptive Graphics Environment.

Collaborative Visualization Architecture in Scalable Adaptive Graphics Environment
Byungil Jeong
Electronic Visualization Laboratory, University of Illinois at Chicago

Introduction
- Data-intensive domains need to visualize huge amounts of data, which requires data storage, computation, and visualization resources
- Falling networking costs enable sharing of remote computation and visualization resources and data storage: the fundamental premise behind shared cyber-infrastructure
- OptIPuter: designing advanced cyber-infrastructure for data-intensive science using optical networks
- User requirements
  – Visualize large data in real time using remote visualization resources
  – See and interact with multiple high-resolution visualizations at a time
  – Distant collaboration in high-resolution display environments
- Scalable Adaptive Graphics Environment (SAGE): specialized visualization middleware to support these user requirements

Scalable Adaptive Graphics Environment (SAGE)
- Content shown on the wall: remote laptop screens, high-resolution maps, live video feeds, 3D surface rendering, volume rendering, remote sensing data
- Information must be able to flexibly move around the wall

SAGE Rendering Model

SAGE Demonstration at EVL
- LambdaVision: 11x5 tiled display, 100-megapixel resolution
- Uses ultra-high-speed networks (multi-ten gigabits/sec)

User-Driven Features of SAGE
- Sharing remote visualization resources and data
  – Reduces visualization cost
  – Keeps the original data securely in the visualization center
- Unified visualization environment
  – Any visualization application can be integrated into SAGE
  – Simple pixel streaming API (10 to 20 lines of code)
- Scalability in performance and resolution
- Fully multitasking environment on scalable displays
  – Scalable displays have typically been used for a single application
  – SAGE concurrently runs multiple visualization applications, resizing and repositioning their windows
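To illustrate how small the integration burden is, here is a sketch of what a pixel-streaming client in the 10-to-20-line range could look like. The class and method names (`SailStub`, `get_buffer`, `swap_buffer`) are illustrative stand-ins, not the real SAIL API.

```python
# Hypothetical sketch of a minimal SAGE/SAIL pixel-streaming client;
# names are illustrative, not the actual library interface.

class SailStub:
    """Stand-in for the SAGE Application Interface Library (SAIL)."""
    def __init__(self, app_name, width, height):
        self.app_name, self.width, self.height = app_name, width, height
        self.frames_sent = 0

    def get_buffer(self):
        # Real SAIL hands back a shared pixel buffer; here, a plain bytearray.
        return bytearray(self.width * self.height * 3)  # RGB24

    def swap_buffer(self, buf):
        # Real SAIL would partition the buffer into pixel blocks and stream
        # them to the SAGE receivers; the stub just counts frames.
        assert len(buf) == self.width * self.height * 3
        self.frames_sent += 1

sail = SailStub("imageviewer", 1024, 768)
for _ in range(3):                 # render loop
    frame = sail.get_buffer()
    frame[0:3] = b"\xff\x00\x00"   # the application draws into the buffer
    sail.swap_buffer(frame)        # hand the frame to SAGE for streaming
print(sail.frames_sent)            # 3
```

The point is the shape of the loop: the application only fills a buffer and swaps it; partitioning and streaming stay inside the middleware.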

Comparison with Other Approaches

                                                 SAGE  DCV      SGE  DMX  Chromium
  Multi-tasking (window operation)               Y     DMX      Y    Y    -
  Display-rendering decoupling                   Y     Y (RVN)  Y    -    -
  Scalable parallel application support          Y     Y        Y    -    Y
  High-bandwidth wide-area streaming             Y     -        -    -    -
  Image multicasting for scalable tiled displays Y     -        -    -    -

SAGE Architecture
[Diagram: rendering nodes (render1 ... renderM) run applications linked with SAIL and stream pixels to SAGE Receivers on the display nodes (disp1, disp2, ..., dispN) driving the tiled display; the FreeSpace Manager coordinates them through SAGE messages and a synchronization channel, controlled from SAGE UIs]
- SAIL: SAGE Application Interface Library

How to Manage Dynamic Pixel Streams?
[Diagram: as a window moves across the tiled display, renderers A and B must reconfigure their image partitions and network connections; receiving buffers and texture rectangles are reallocated, new connections are made, and the set of active connections changes]

How to Manage Dynamic Pixel Streams? (cont.)
- Obvious approach (100 to 1000 ms latency)
  – The SAGE Receiver doesn't know when newly partitioned data arrives
  – The FSManager has to pause pixel streams for reconfiguration
- Improved approach (10 to 100 ms latency)
  – SAIL sends control data together with the pixel data
  – Streams are reconfigured without stopping the streaming
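The improved approach can be sketched as a receiver loop that reads a small control header in-band with every pixel message. The message format here (a layout version plus a block id) is hypothetical; the real SAGE wire format is not shown on the slide.

```python
# Sketch of the "improved approach": each pixel message carries a small
# control header (here, a layout version), so the receiver reconfigures
# its buffers the moment newly partitioned data arrives, without the
# FreeSpace Manager pausing the stream. The message format is illustrative.

def receive(messages):
    layout_version = None
    buffers = {}
    for header, payload in messages:      # header = (version, block_id)
        version, block_id = header
        if version != layout_version:     # control data arrived in-band:
            layout_version = version      # reconfigure receiving buffers
            buffers = {}                  # on the fly, mid-stream
        buffers[block_id] = payload
    return layout_version, sorted(buffers)

stream = [((1, 0), b"aa"), ((1, 1), b"bb"),                   # old partition
          ((2, 0), b"cc"), ((2, 1), b"dd"), ((2, 2), b"ee")]  # repartitioned
print(receive(stream))    # (2, [0, 1, 2])
```

Because the version rides with the pixels, there is no separate pause/resume handshake, which is where the latency saving comes from.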

How to Partition Pixel Data?
- Pixel block streaming: independent of window layouts
  – Good for streaming pixels to multiple endpoints
  – Small pixel blocks increase the number of system calls for network sends and pixel downloading
  – Remedy: aggregation of pixel blocks
[Diagram: pixel block streaming vs. image frame streaming from the streamer to the SAGE display]
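The two ideas on this slide, layout-independent blocks and block aggregation, can be sketched as follows. Block size and blocks-per-send are illustrative parameters, not values from the slide.

```python
# Sketch of pixel-block partitioning: a frame is cut into fixed-size blocks
# independent of any window layout, and several blocks are aggregated into
# one network send to reduce per-send system-call overhead.

def make_blocks(width, height, block):
    """Yield (x, y, w, h) rectangles tiling the frame."""
    for y in range(0, height, block):
        for x in range(0, width, block):
            yield (x, y, min(block, width - x), min(block, height - y))

def aggregate(blocks, blocks_per_send):
    """Group several pixel blocks into one network send."""
    batch = []
    for b in blocks:
        batch.append(b)
        if len(batch) == blocks_per_send:
            yield batch
            batch = []
    if batch:
        yield batch

blocks = list(make_blocks(300, 200, 100))   # 3x2 = 6 blocks
sends = list(aggregate(blocks, 4))          # 6 blocks -> 2 sends
print(len(blocks), len(sends))              # 6 2
```

Because blocks are cut on a fixed grid, the same stream can serve any window layout; aggregation then recovers the efficiency lost to small sends.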

Synchronization of Dynamic Streams
- Each application has a different refresh rate, so each must be synchronized independently
- The groups of active and inactive nodes change dynamically
- A low-latency TCP channel is used for sending sync signals
[Diagram: active and inactive display nodes partitioned into sync groups A and B]
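A per-application sync group can be sketched like this. The class structure and membership-update calls are illustrative; the slide only specifies that each application is synchronized independently over a low-latency TCP channel.

```python
# Sketch of per-application synchronization groups: each application has its
# own refresh rate, so it gets its own sync group; only the display nodes
# currently showing that application ("active" nodes) receive its sync signal.

class SyncGroup:
    def __init__(self, app):
        self.app = app
        self.members = set()          # active display nodes for this app
        self.frame = 0

    def update_members(self, nodes):
        # Groups change dynamically as windows move and resize on the wall.
        self.members = set(nodes)

    def broadcast_sync(self):
        # Real SAGE sends this over a low-latency TCP channel; the sketch
        # just returns which nodes would swap buffers for the next frame.
        self.frame += 1
        return {node: self.frame for node in sorted(self.members)}

group_a = SyncGroup("imageviewer")
group_a.update_members(["disp1", "disp2"])
print(group_a.broadcast_sync())              # {'disp1': 1, 'disp2': 1}
group_a.update_members(["disp2", "disp3"])   # window moved across the wall
print(group_a.broadcast_sync())              # {'disp2': 2, 'disp3': 2}
```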

Network Streaming Protocol
- TCP module: low or unstable performance on long round-trip-time wide-area networks (10 Gbit networks)
- UDP module: no data flow control, causing packet loss and image artifacts
- The UDP module was extended to keep the data transfer rate below a user-definable upper bound
- Packet loss was reduced to below 1%
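The core of such a rate-controlled UDP sender is pacing: from the user-definable bandwidth bound, derive the minimum interval between packet sends. The formula below is the standard pacing calculation, not code from SAGE, and the numbers are illustrative.

```python
# Sketch of the rate-control idea: given a bandwidth upper bound, compute
# the minimum interval between packet sends so the stream never exceeds it.

def send_interval_us(packet_bytes, rate_gbps):
    """Microseconds to wait between packets to stay under rate_gbps."""
    bits_per_packet = packet_bytes * 8
    rate_bps = rate_gbps * 1e9
    return bits_per_packet / rate_bps * 1e6

# 9 KB jumbo-frame payloads capped at 0.9 Gbps:
interval = send_interval_us(9000, 0.9)
print(round(interval, 1))      # 80.0 microseconds between sends
```

Spacing sends this way keeps the stream under the bound without TCP's congestion-control stalls on long round-trip paths.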

SAGE Users
- SAGE users around the world want to collaborate with each other using SAGE

Visualcasting
- Supports distant collaboration with multiple endpoints
  – All participants interact with one another, as well as with their data
  – Display configurations vary at each endpoint
- This increases the complexity of the pixel routing problem
  – Independent application layout at each endpoint
  – Dynamic changes in the number of applications and endpoints

SAGE Extended by Visualcasting
- SAGE only: multiple sources, single destination
- SAGE with Visualcasting: multiple sources to multiple destinations

SAGE Bridge
- A new software component of SAGE
- Broadcasts pixel data to the endpoints
- Runs on a high-performance PC cluster
- Placed at core hubs in the network

How to Distribute Pixels to Multiple Endpoints?
[Diagram: without a bridge, the sending side is overloaded; the rendering nodes must duplicate and partition streams for every display endpoint, e.g. a 10 Mpixel / 10 Gbps stream and a 4 Mpixel / 1 Gbps stream]

How to Distribute Pixels to Multiple Endpoints? (cont.)
[Diagram: with SAGE Bridge between the senders and the display endpoints, the bridge takes over duplication, regrouping, and partitioning of the pixel streams, e.g. the 10 Mpixel / 10 Gbps and 4 Mpixel / 1 Gbps streams]
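The regrouping step can be sketched as a routing table: the sender streams each pixel block once to the bridge, and the bridge duplicates and regroups blocks according to each endpoint's local partition. The endpoint names and layouts below are illustrative.

```python
# Sketch of SAGE Bridge regrouping: blocks arrive once from the sender;
# the bridge duplicates and regroups them per endpoint, so adding endpoints
# does not increase the sender's load. Layouts are illustrative.

def bridge_route(blocks, endpoint_layouts):
    """Map each endpoint to the pixel blocks its local partition needs.

    blocks: {block_id: payload} received once from the sender.
    endpoint_layouts: {endpoint: [block_ids its displays cover]}.
    """
    return {ep: [blocks[b] for b in wanted]
            for ep, wanted in endpoint_layouts.items()}

blocks = {0: b"aa", 1: b"bb", 2: b"cc", 3: b"dd"}
layouts = {"evl":   [0, 1, 2, 3],   # full-resolution tiled wall
           "sara":  [0, 1],         # smaller display: left half only
           "umich": [2, 3]}         # another layout: right half
routed = bridge_route(blocks, layouts)
print(sorted(len(v) for v in routed.values()))   # [2, 2, 4]
```

Layout-independent blocks (from the partitioning slide) are what make this routing a simple lookup rather than a re-render.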

Visualcasting at SC06
- EVL (Calit2/UCSD), SARA (Dutch Consortium), University of Michigan (Research Channel)

Performance Evaluation
- Two 28-node LambdaVision clusters, in San Diego (UCSD) and Chicago (UIC)
- National LambdaRail provided a 10-gigabit dedicated optical network link between the two clusters (CAVEWave)
- Gigabit Ethernet (GigE) network interfaces, fully connected to each other through a gigabit network switch
- Other traffic on the network was excluded
- Jumbo frames were used (9 KB data packet size)

SAGE Streaming Performance
- A high-resolution image viewer streaming from San Diego to Chicago over the 10 Gbit dedicated link; UDP with a 0.9 Gbps upper bound
- Over 80% network utilization
- Scalability in frame rate and resolution

SAGE Bridge Performance
- Measured total output bandwidth of a SAGE Bridge node streaming 1-Mpixel images to up to nine endpoints
- Bridge node: two dual-core 64-bit AMD CPUs, a Myricom 10 Gbit network interface
- Supported up to 7 endpoints without losing scalability
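A back-of-the-envelope calculation shows why a single 10 Gbit bridge node saturates somewhere around this endpoint count. The frame rate and pixel depth below are assumptions for illustration, not measured figures from the slide.

```python
# Illustrative bandwidth arithmetic (assumed fps and pixel depth): the
# output bandwidth a bridge node needs to duplicate a 1-Mpixel stream
# to N endpoints.

def output_gbps(megapixels, bytes_per_pixel, fps, endpoints):
    bits = megapixels * 1e6 * bytes_per_pixel * 8 * fps * endpoints
    return bits / 1e9

# A 1-Mpixel, 24-bit stream at an assumed 20 fps duplicated to 7 endpoints:
print(round(output_gbps(1, 3, 20, 7), 2))   # 3.36 Gbps
```

At higher frame rates or resolutions the per-endpoint cost multiplies, so the node's 10 Gbit interface, CPU, and memory bus bound how many endpoints one bridge node can serve.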

Current Research
- How can simultaneous data distribution to multiple receivers be scaled arbitrarily?
  – Incremental bridge node allocation: if the initially allocated nodes become overloaded, SAGE Bridge allocates additional nodes for the Visualcasting session
  – If no additional node is available: request senders to down-sample or compress pixel blocks
- What are the conditions for adding or removing SAGE Bridge nodes?
- How can heterogeneity of endpoints in network bandwidth, computing power, and display resolution be supported?
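The incremental-allocation policy under discussion can be sketched as a simple capacity check. The per-node capacity, thresholds, and the fallback trigger are all illustrative; the slide poses the allocation conditions as an open question.

```python
# Sketch of incremental bridge-node allocation: add nodes when the session's
# aggregate output bandwidth exceeds what the allocated nodes can carry;
# if no free node remains, senders must down-sample or compress pixel blocks.
# Capacities and thresholds are illustrative.

def plan_bridge_nodes(required_gbps, node_capacity_gbps, free_nodes, allocated=1):
    """Return (nodes_to_use, downsample_needed)."""
    needed = -(-required_gbps // node_capacity_gbps)   # ceiling division
    if needed <= allocated:
        return allocated, False
    extra = min(needed - allocated, free_nodes)        # allocate incrementally
    total = allocated + extra
    # If still overloaded, ask senders to down-sample or compress.
    return total, total * node_capacity_gbps < required_gbps

print(plan_bridge_nodes(25, 10, free_nodes=3))   # (3, False)
print(plan_bridge_nodes(25, 10, free_nodes=1))   # (2, True)
```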

Conclusion
- Wide-area distributed visualization is now feasible at ultra-high resolutions while maintaining interactivity
- SAGE Visualcasting supports global collaboration with multiple endpoints in scalable display environments
- Ongoing work: scalability and heterogeneous-endpoint support for Visualcasting

Acknowledgement
- Support from the National Science Foundation: OptIPuter project, cooperative agreement OCI to the University of California, San Diego; MRI: LambdaVision
- Thanks to Jason Leigh, Andrew Johnson, Luc Renambot, and Tom DeFanti
- Thanks to the EVL support team for their excellent support

More Info
- SAGE:
- My web page: