Oak Ridge National Laboratory / U.S. Department of Energy

Enabling Supernova Computations by Integrated Transport and Provisioning Methods Optimized for Dedicated Channels

Nagi Rao, Bill Wing, Tony Mezzacappa, Oak Ridge National Laboratory
Malathi Veeraraghavan, University of Virginia

DOE MICS PI Meeting: High-Performance Networking Program
September 14-16, 2004, Fermi National Accelerator Laboratory

Outline
- Background
- ORNL Tasks
- Preliminary Results
- UVA Tasks
- Preliminary Results

Terascale Supernova Initiative (TSI)
- Science objective: understand supernova evolution
- DOE SciDAC project: ORNL and 8 universities
- Teams of field experts across the country collaborate on computations
  - Experts in hydrodynamics, fusion energy, and high-energy physics
- Massive computational code
  - Terabyte/day generated currently
  - Archived at nearby HPSS
  - Visualized locally on clusters, using archival data only
- Current networking challenges (a back-of-the-envelope check follows this list)
  - Limited transfer throughput: the hydro code takes 8 hours to generate a dataset and 14 hours to transfer it out
  - Runaway computations: scientists find out only after the fact that parameters needed adjustment
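To make the bottleneck concrete, here is a back-of-the-envelope check of the numbers above. The 1 TB run size is an assumption (consistent with the "terabyte/day" figure), not stated on the slide:

```python
# Rough check of the transfer bottleneck described above.
# Assumption (not on the slide): one hydro run produces ~1 TB.
TB = 1e12                      # bytes
run_output = 1.0 * TB          # assumed output per run, bytes
gen_hours, xfer_hours = 8.0, 14.0

achieved = run_output * 8 / (xfer_hours * 3600)  # bits/s actually sustained
needed = run_output * 8 / (gen_hours * 3600)     # bits/s to keep up with generation

print(f"achieved: {achieved/1e6:.0f} Mb/s, needed: {needed/1e6:.0f} Mb/s")
# -> roughly 159 Mb/s achieved vs. ~278 Mb/s needed: the shared IP path,
#    not the computation, is the pacing item -- hence dedicated channels.
```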

TSI Desired Capabilities
- Data and file transfers (terabyte to petabyte)
  - Move data from computations on supercomputers
  - Supply data to visualizations on clusters and supercomputers
- Interactive computations and visualization
  - Monitor, collaborate on, and steer computations
  - Collaborative and comparative visualizations
[Figure: computation and visualization sites connected by dedicated visualization, visualization-control, and steering channels]

Background on the NSF CHEETAH Project
- Circuit-switched High-speed End-to-End Transport arcHitecture (CHEETAH)
- Team: UVA, ORNL, NCSU, CUNY
- Concept
  - Share bandwidth on a dynamic, call-by-call basis
  - End-to-end circuit: Ethernet - Ethernet over SONET - Ethernet
- Network
  - Second NICs at hosts in compute and visualization clusters
  - Connected to MSPPs that perform Ethernet-SONET mapping
  - GMPLS-enabled SONET crossconnects
- Transport protocols and middleware
  - To support file transfers on dedicated circuits
  - To support remote visualization and computational steering
- Applications to support TSI scientists
  - SFTP
  - EnSight plus new visualization programs

Current DOE ORNL-UVA Project: Complementary Roles
Project components:
- Provisioning for UltraScience Net (GMPLS)
- File transfers for dedicated channels
- Peering the DOE UltraScience Net and NSF CHEETAH networks
- Network-optimized visualizations for TSI
- TSI application support over UltraScience Net + CHEETAH
This project leverages two existing projects: DOE UltraScience Net (ORNL) and NSF CHEETAH (UVA).
[Figure: division of labor between ORNL and UVA across provisioning, peering, file transfers, visualization, and TSI application support]

Peered UltraScienceNet-CHEETAH
- Enables coast-to-coast dedicated channels
- Phase I: TL1-GMPLS cross-conversion (see the sketch below)
- Phase II: GMPLS-based peering
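A minimal sketch of what the Phase I cross-conversion involves: rendering a GMPLS-style circuit request as TL1 cross-connect commands for a SONET switch. The command layout follows the generic TL1 form VERB-MOD:TID:AID:CTAG; but the request fields, access identifiers, and target IDs below are hypothetical, and real switches differ in modifiers and AID formats:

```python
# Illustrative gateway logic: accept a GMPLS-style circuit request and
# emit the equivalent TL1 commands toward a SONET crossconnect.
from dataclasses import dataclass
from itertools import count

_ctag = count(1)  # TL1 correlation-tag counter

@dataclass
class CircuitRequest:          # simplified GMPLS-style request (hypothetical)
    switch: str                # target network element (TID)
    ingress: str               # ingress port/timeslot AID
    egress: str                # egress port/timeslot AID
    rate: str                  # e.g. "STS3C" for a ~150 Mb/s channel

def to_tl1_setup(req: CircuitRequest) -> str:
    """Render the request as a TL1 ENT-CRS (enter cross-connect) command."""
    return f"ENT-CRS-{req.rate}:{req.switch}:{req.ingress},{req.egress}:{next(_ctag)};"

def to_tl1_release(req: CircuitRequest) -> str:
    """Render the matching DLT-CRS (delete cross-connect) command."""
    return f"DLT-CRS-{req.rate}:{req.switch}:{req.ingress},{req.egress}:{next(_ctag)};"

req = CircuitRequest("USN-ORNL-1", "STS-1-1-1", "STS-2-1-1", "STS3C")
print(to_tl1_setup(req))   # ENT-CRS-STS3C:USN-ORNL-1:STS-1-1-1,STS-2-1-1:1;
```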

ORNL: Year 1 Activities
- Peering: CHEETAH - UltraScienceNet
- Visualization
  - Decomposable visualization pipeline
  - Analytical formulation
  - First implementation
- TSI support
  - Monitoring visualizations

ORNL Personnel
- Nagi Rao, Bill Wing, Tony Mezzacappa (PIs)
- Qishi Wu (postdoctoral fellow)
- Mengxia Zhu (PhD student, Louisiana State University)
Conference papers
- M. Zhu, Q. Wu, N. S. V. Rao, S. S. Iyengar, "Adaptive Visualization Pipeline Partition and Mapping on Computer Network", International Conference on Image Processing and Graphics (ICIG), 2004.
- M. Zhu, Q. Wu, N. S. V. Rao, S. S. Iyengar, "On Optimal Mapping of Visualization Pipeline onto Linear Arrangement of Network Nodes", International Conference on Visualization and Data Analysis, 2005.

Modules of the Visualization Pipeline
- The pipeline consists of several modules
- Some modules are better suited to certain network nodes
  - Visualization clusters
  - Computation clusters
  - Power walls
- Data transfers between modules vary in size and rate
Note: commercial tools do not support efficient decomposition.

Grouping Visualization Modules
- Decompose the pipeline into modules, then combine the modules into groups
- Transfers within a single node are generally fast; transfers between nodes take place over the network
- Align bottleneck network links with the inter-module transfers that have the smallest data requirements

Optimal Mapping of the Visualization Pipeline: Minimizing Total Delay
Dynamic programming solution (a sketch follows this slide):
- Combine modules into groups
- Align bottleneck network links with the inter-module transfers that have the smallest data requirements
- Polynomial-time solvable; not NP-complete
Notes:
1. Commercial tools (EnSight) are not readily amenable to optimal network deployment.
2. This method can be implemented in tools that provide appropriate hooks.
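The slides give only the formulation; the following is a minimal sketch of one plausible dynamic program for the total-delay objective, under an assumed cost model (compute time = work/speed, transfer time = data/bandwidth, module order preserved along a linear path). Names and notation are illustrative, not the paper's:

```python
from math import inf

def map_pipeline(work, data, speed, bw):
    """
    work[j]  : operations required by module j              (m modules)
    data[j]  : bytes sent from module j to module j+1       (m-1 entries)
    speed[i] : processing speed of node i, in ops/s         (n nodes)
    bw[i]    : bandwidth of the link from node i to i+1     (n-1 entries)
    Returns the minimum total delay with module 0 pinned to node 0
    (the data source) and module order preserved along the path.
    """
    m, n = len(work), len(speed)
    # T[j][i] = best total delay with module j executing on node i
    T = [[inf] * n for _ in range(m)]
    T[0][0] = work[0] / speed[0]
    for j in range(1, m):
        for i in range(n):
            best = T[j - 1][i]              # co-locate with module j-1: free
            hop = 0.0                       # or arrive from an earlier node,
            for k in range(i - 1, -1, -1):  # paying data/bw per link crossed
                hop += data[j - 1] / bw[k]
                best = min(best, T[j - 1][k] + hop)
            T[j][i] = best + work[j] / speed[i]
    return min(T[m - 1])                    # O(m*n^2): polynomial, as claimed

# Example: 4-module pipeline over a 3-node path (source -> cluster -> display)
print(map_pipeline(work=[5, 40, 10, 2], data=[80, 8, 1],
                   speed=[1, 20, 4], bw=[10, 2]))
```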

Optimal Mapping of the Visualization Pipeline: Maximizing Frame Rate
Dynamic programming solution (a companion sketch follows this slide):
- Align bottleneck network links with the inter-module transfers that have the smallest data requirements
- Polynomial-time solvable; not NP-complete
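A companion sketch for the frame-rate objective, under the same assumed cost model: in steady state a pipelined mapping is paced by its slowest stage, so the recurrence minimizes the maximum stage time rather than the sum. It also assumes contiguous module groups on consecutive nodes, a simplification of the paper's setting:

```python
from math import inf

def max_frame_rate(work, data, speed, bw):
    """Inputs as in map_pipeline: work/speed give compute times, data/bw give
    transfer times. The frame rate is the reciprocal of the slowest stage
    (a group's compute time or the transfer feeding it)."""
    m, n = len(work), len(speed)
    # B[j][i] = best bottleneck, modules 0..j-1 placed, last group on node i-1
    B = [[inf] * (n + 1) for _ in range(m + 1)]
    B[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            group_work = 0.0
            for j0 in range(j, 0, -1):       # last group = modules j0-1..j-1
                group_work += work[j0 - 1]
                stage = group_work / speed[i - 1]
                if i > 1 and j0 > 1:         # link feeding this group
                    stage = max(stage, data[j0 - 2] / bw[i - 2])
                B[j][i] = min(B[j][i], max(B[j0 - 1][i - 1], stage))
    bottleneck = min(B[m][i] for i in range(1, n + 1))
    return 1.0 / bottleneck                  # frames per second; O(n*m^2)

print(max_frame_rate(work=[5, 40, 10, 2], data=[80, 8, 1],
                     speed=[1, 20, 4], bw=[10, 2]))
```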

First Implementation
- Client/server OpenGL implementation (leveraged from CHEETAH)
- Case 1: small cube, geometry or frame buffer
- Case 2: small geometry
- Case 3: small geometry
- CT scan: raw image or frame buffer

Dataset | Dimensions | Estimated bandwidth | Minimum delay | Raw data size/delay | Geometry size/delay | Frame-buffer size/delay
Cube 1 | 10x6x8 | 0.284 Mbps | 0.032 sec | 8 K / 0.257 sec | 1 K / 0.032 sec | 1.8 M / 50.73 sec
Cube 2 | 50x20x… | … Mbps | 0.034 sec | 610 K / 16.3 sec | 16 K / 0.46 sec | 1.8 M / 48.03 sec
Cube 3 | 150x210x… | … Mbps | 0.033 sec | 71.6 M / 34.4 min | 2.4 M / 69.34 sec | 1.8 M / 52.01 sec
Hand | 256x256x… | … Mbps | 0.033 sec | 81.9 M / 45.69 min | NA | 1.8 M / 60.28 sec

O AK R IDGE N ATIONAL L ABORATORY U. S. D EPARTMENT OF E NERGY  Requirements  Light-weight server located at the computation site  Remote client provides constant monitoring of variables  Our first implementation  OpenGL server and client  Client  Geometric operations  Point, iso-surface, vector view  Commercial Visualization tools  Not light weight – server on supercomputers  Expensive – collaborative visualization by team  Not optimized for network deployment Monitoring Visualization

ORNL: Year 2 Activities
- Peering: GMPLS peering with CHEETAH
- Visualizations
  - Computational monitoring
  - Collaborative visualization
- TSI support
  - Collaborative steering
  - Integrated data transfers