Building the National Fusion Grid. Presented by David P. Schissel at the 2003 Joint US-European Transport Task Force Meeting, Madison, WI.

THE GOAL OF THE NFC IS TO ADVANCE SCIENTIFIC UNDERSTANDING & INNOVATION IN FUSION RESEARCH
Experimental Facilities — More efficient use resulting in greater progress with less cost
Theory & Modeling — Integrate theory & experiment
Facilitate multi-institution collaboration — Integrate geographically diverse groups
Create standard tool set — To build in these services in the future

VISION FOR THE FUSION GRID
Data, codes, analysis routines, and visualization tools should be thought of as network-accessible services
Shared security infrastructure
The collaborative nature of the research requires shared visualization applications and widely deployed collaboration technologies
— Integrate geographically diverse groups
Not focused on CPU-cycle scavenging or "distributed" supercomputing (typical Grid justifications)
— Optimize the most expensive resource: people's time

VISION – RESOURCES AS SERVICES
Access is stressed rather than portability
Users are shielded from implementation details
Transparency and ease of use are crucial elements
A shared toolset enables collaboration between sites and across sub-disciplines
Knowledge of the relevant physics is, of course, still required

VISION – SECURITY INFRASTRUCTURE
Strong authentication identifies users (currently based on X.509 certificates from the DOE Grids CA)
Distributed authorization allows stakeholders to control their own resources
— Facility owners can protect computers, data, and experiments
— Code developers can control intellectual property
— Fair use of shared resources can be demonstrated & controlled

VISION – VISUALIZATION AND A/V TOOLS
Maximum interactivity for visualization of very large datasets
Use of extended tool sets for remote collaboration
— Flexible collaboration environment
— Shared applications

FUSION SCIENCE PROGRAMS' INPUT ACTIVELY SOUGHT
Presence at scientific meetings in April 2002 and at APS/DPP in November 2002
— Both the experimental and theoretical user communities
— First-of-their-kind demonstrations at these meetings
Demonstrations to the large experimental teams
— Shared visualization ANL to San Diego and PCS to PPPL
Comments and discussions with the Oversight Committee
— Represents a broad cross-section of the fusion community

FIRST YEAR’S WORK CULMINATED IN A FULL DEMONSTRATION AT THE NOV 2002 APS/DPP MEETING

NFC TOOLS USED TO CALCULATE AND PRESENT SCIENTIFIC RESULTS AT THE APS/DPP MEETING
M. Murakami, 2002 APS/DPP — Greater number of TRANSP calculations than previously possible via FusionGrid
D. Brennan, 2002 APS/DPP — New capability in 3D visualization & animation via MDSplus-stored data

NFC'S TOOLS AND TECHNOLOGIES
Secure MDSplus using Globus GSI available
— Authentication and authorization using the DOE Grids CA
TRANSP available for worldwide usage on FusionGrid
— Beowulf cluster, client application, complete job monitoring
— Secure access via Globus GSI, Akenti, and the DOE Grids CA
Personal Access Grid (PIG) software and specifications available
— Installed at MIT and GA; PPPL has a large AG node
SCIRun for 3D visualization, including MDSplus-stored fusion data
Toolkits for sharing visualization wall to wall and on the AG
— Tiled walls at GA and PPPL

A GRID IS A COLLECTION OF HARDWARE & SOFTWARE RESOURCES SHARED BY ONE PROJECT
Resource owners decide which of their resources to contribute
— Authentication keeps the Grid secure
Not all Grid members can use all resources
— Authorization allows finer-grained access control to resources
Single sign-on eliminates the need to log into multiple machines
— The user logs into the Grid only one time
— A delegated credential accompanies all resource requests
"The Grid" refers to the infrastructure that enables the integrated collaborative use of high-end computers, networks, databases, and scientific instruments owned and managed by multiple organizations.

MDSplus IS WIDELY USED IN THE FUSION COMMUNITY

MDSplus ENHANCEMENTS FOR THE COLLABORATORY
MDSplus is the data management system for FusionGrid (a minimal client read is sketched below)
MDSplus data system secured with Globus GSI
— Underlying technologies are X.509 certificates/SSL
Will use the MDSplus layer to secure a PC-based relational database (SQL Server)
Parallel MDSplus I/O – gridpst (Grid parallel socket tunnel)
Important shared fusion codes modified to use MDSplus for I/O
— EFIT, TRANSP, GS2, NIMROD
Looking at remote job submission through MDSIP
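To illustrate what "MDSplus as a network service" looks like from a client, the sketch below uses the MDSplus Python bindings to open a tree on a remote data server and fetch one signal. The server name, tree name, shot number, and signal path are placeholders rather than actual FusionGrid resources, and the sketch shows the plain mdsip connection; the GSI-secured variant described on this slide adds X.509 authentication underneath the same client calls.

# Minimal thin-client MDSplus read (hypothetical server/tree/signal names).
from MDSplus import Connection

conn = Connection('mdsplus.example.fusiongrid.org')   # remote MDSplus data server (placeholder)
conn.openTree('transp', 123456)                       # tree name and shot number are placeholders
ne = conn.get('\\TRANSP::TOP.OUTPUTS:NE')             # fetch a stored signal by its tree path
print(ne.data())                                      # convert to a numpy array on the client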

TRANSP IMPLEMENTED AS A FUSION GRID SERVICE
TRANSP – Tools for time-dependent analysis & simulation of tokamak plasmas

TRANSP SERVICE
Advantages of the Grid implementation
— Remote sites avoid costly installation and code maintenance
— PPPL maintains and supports a single production version of the code on a well-characterized platform
— Trouble-shooting occurs at the central site, where the TRANSP experts reside
— Benefits to users and service providers alike
Production system, in operation since October 1, 2002
— 16-processor Linux cluster
— Dedicated PBS queue
— Tools for job submission, cancellation, and monitoring (see the job-submission sketch below)
— 21 Grid certificates issued across 7 institutions (1 in Europe)
Four European machines interested: JET, MAST, ASDEX-U, TEXTOR
— Concerns: understanding the software, firewalls, Grid security
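For context on what remote job submission to a Grid service of this era involved, here is a hedged sketch that shells out to Globus Toolkit 2 commands from Python. The gatekeeper host, jobmanager name, and executable path are hypothetical; the real FusionGrid TRANSP client wraps submission and monitoring in its own tools, so this only illustrates the underlying GRAM/PBS mechanics.

# Sketch: submit a batch job through a GT2 GRAM gatekeeper to a PBS queue (placeholder endpoints).
import subprocess

# A short-lived proxy credential must already exist (normally created with grid-proxy-init).
subprocess.run(['grid-proxy-info'], check=True)

# Submit to the PBS jobmanager on a (hypothetical) TRANSP server; stdout is the job contact URL.
result = subprocess.run(
    ['globus-job-submit', 'transp.example.pppl.gov/jobmanager-pbs', '/usr/local/bin/transp_run'],
    capture_output=True, text=True, check=True)
job_contact = result.stdout.strip()

# The contact string can then be polled or cancelled with globus-job-status / globus-job-cancel.
print(subprocess.run(['globus-job-status', job_contact], capture_output=True, text=True).stdout)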

FUSION GRID MONITOR: A FAST AND EFFICIENT MONITORING SYSTEM FOR THE GRID ENVIRONMENT
Derived from the DIII-D between-pulse data analysis monitoring system
Users track and monitor the state of applications on FusionGrid
— Output rendered dynamically as HTML; detailed run log files accessible (the idea is sketched below)
Code maintenance notification
— Users notified, queuing turned off, code rebuilt, queue restarted
Built as a Java Servlet (using JDK2.1)
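The actual Fusion Grid Monitor is a Java servlet; purely as an illustration of the pattern it implements, the Python sketch below serves a dynamically generated HTML table of job states over HTTP. The job identifiers and states are invented, and this is not the project's code.

# Illustration only: render current (hypothetical) run states as a dynamic HTML page.
from http.server import BaseHTTPRequestHandler, HTTPServer

JOBS = {'12345A01': 'RUNNING', '12345A02': 'QUEUED', '12344B07': 'COMPLETE'}  # placeholder data

class MonitorHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        rows = ''.join(f'<tr><td>{rid}</td><td>{state}</td></tr>' for rid, state in JOBS.items())
        body = ('<html><body><table border="1"><tr><th>Run</th><th>State</th></tr>'
                f'{rows}</table></body></html>')
        self.send_response(200)
        self.send_header('Content-Type', 'text/html')
        self.end_headers()
        self.wfile.write(body.encode())

if __name__ == '__main__':
    HTTPServer(('', 8080), MonitorHandler).serve_forever()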

FUSION GRID TRANSP RUN PRODUCTION
755 runs: Oct. 1, 2002 – Mar. 21, 2003
Research users: C-Mod, DIII-D, and NSTX projects; PPPL collaborations

FUSION GRID TRANSP SERVER UTILIZATION
4,657 CPU hours: Oct 1, 2002 – Mar 21, 2003
Pentium-4 (Linux) cluster – capacity for 16 concurrent runs

HOW TO JOIN FUSIONGRID
Each user and host on FusionGrid is represented by an X.509 certificate and its corresponding private key
— Used by Globus GSI (built on SSL) to establish authenticated, secure connections between two processes on two different machines
— A Registration Authority (a human) verifies that the request is valid
— Certificate Authority (CA) software issues the X.509 certificates
Requesting a user certificate
— FusionGrid uses the DOE Grids CA
— Export the certificate where Globus can find it (see the sketch below)
Install the Globus client software
— Install the DOE Grids CA certificate
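Once the certificate has been issued and exported, it helps to confirm that it sits where Globus conventionally looks for it and to read back the Distinguished Name it carries, since the DN is what resource owners authorize (next slide). A minimal sketch, assuming the usual ~/.globus file names and the third-party cryptography package; adjust paths if your site installs credentials elsewhere.

# Sketch: check the conventional Globus credential layout and print the certificate's subject DN.
import os
from cryptography import x509   # third-party 'cryptography' package

cert_path = os.path.expanduser('~/.globus/usercert.pem')
key_path = os.path.expanduser('~/.globus/userkey.pem')

for path in (cert_path, key_path):
    print(path, 'found' if os.path.exists(path) else 'MISSING')

with open(cert_path, 'rb') as f:
    cert = x509.load_pem_x509_certificate(f.read())
print('Subject DN:', cert.subject.rfc4514_string())
print('Issuer    :', cert.issuer.rfc4514_string())
print('Expires   :', cert.not_valid_after)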

PLACING A RESOURCE ON FUSIONGRID
Globus must be installed on the server to be placed on FusionGrid
— The resource can be a code or an MDSplus data server
The Distinguished Name (DN) of each person allowed to use the resource must be specified (a grid-mapfile sketch follows below)
— Allows control over who may use the resource
Finer-grained access control can be accomplished with the Akenti software
— Allows restricted access to a limited set of executables
The NFC Project will help anyone interested in contributing a FusionGrid resource
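In Globus Toolkit 2, the usual way a server maps an authorized DN to a local account is a grid-mapfile entry of the form "DN" localuser. The sketch below appends such an entry; the DN, local account name, and mapfile path are examples only, and sites using Akenti for finer-grained policies manage those rules separately.

# Sketch: add an authorized user to a Globus grid-mapfile (example DN and account name).
GRIDMAP = '/etc/grid-security/grid-mapfile'   # conventional location on the resource server

def add_mapping(dn: str, local_account: str, path: str = GRIDMAP) -> None:
    entry = f'"{dn}" {local_account}\n'
    with open(path, 'a') as f:                # appending requires root on a real server
        f.write(entry)

# Example: map a hypothetical DOE Grids CA identity to a local analysis account.
add_mapping('/DC=org/DC=doegrids/OU=People/CN=Jane Physicist 12345', 'transpuser')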

COLLABORATIVE NATURE OF FES RESEARCH NECESSITATES A SHARED VIS ENVIRONMENT
Strive to dramatically reduce the hurdles that presently exist for collaborative scientific visualization
Leverage existing technology where possible
— SCIRun for advanced scientific visualization
— Access Grid (AG) for large remote audio/video interactions
— Integrate existing AG collaborative tools with tiled display walls
Collaborative control room
— Large on-site group interactively works with small to large off-site groups
— ELVis, Java-based graphics for TRANSP and other sources
— VNC and DMX for sharing any X graphics code
New visualization software
— Simultaneous sharing of complex visualizations
— Error representation in complex experimental & simulation data

ADVANCED VISUALIZATION USING SCIRun
SCIRun adapted for fusion
— University of Utah Scientific Computing and Imaging (SCI) Institute
— Open source, low cost
— Runs on low-cost platforms
NIMROD data from MDSplus
— Testing storage paradigm
Raises the challenge of very large data sets
Figure: SCIRun visualization of a NIMROD pressure simulation

TILED DISPLAY WALLS ALLOW A LARGE GROUP TO COLLABORATIVELY EXPLORE INFORMATION MORE EFFECTIVELY
Figure: General Atomics Advanced Imagery Laboratory

COLLABORATIVE CONTROL ROOM FOR FUSION EXPERIMENTS
Workstation-to-workstation or workstation-to-wall control room communication possible
Demonstrated from Chicago to San Diego and from PCS to PPPL with VNC, DMX, and ELVis
Investigating deployment in the NSTX control room
Figures: Shared tiled walls at ANL; testing a tiled wall in the NSTX control room

REMOTE COMMUNICATION WITH A PERSONAL AG NODE
Targeted at the small research center
— For one-to-one and one-to-many interactions
Usage example: communication to a tokamak control room
— Includes sharing of complex visualization

INTERNATIONAL THERMONUCLEAR EXPERIMENTAL REACTOR: THE NEXT-GENERATION WORLDWIDE FUSION EXPERIMENT
~$5B-class device, over 20 countries
— Thousands of scientists; US rejoining
Pulsed experiment with simulations
— ~TBs of data in 30 minutes
International collaboration
— Productive, engaging work environment for off-site personnel
Successful operation requires
— Large simulations, shared visualization, and decisions fed back to the control room
— Remote collaboration!

1ST YEAR NFC ACCOMPLISHMENTS (1)
FusionGrid created: MDSplus data system secured with Globus GSI
FusionGrid released with complete monitoring: TRANSP fusion code remotely accessible via Globus (GRAM), with fine-grained authorization via Akenti
— FusionGrid replaced the old system and now supports U.S. TRANSP usage
— Sample statistics for October 2002: 300 runs, 1,474 CPU hours
Large demonstrations to the user community at 3 major fusion science meetings
— Both user education and user feedback to the NFC team
FusionGrid used for scientific calculations presented at the APS/DPP meeting
— Advancing the science
Prototyped: between-pulse pre-emptive scheduling, parallel MDSplus I/O
GS2 low-frequency turbulence code being tested on FusionGrid
— Considerably less time needed to grid-enable the second code

1ST YEAR NFC ACCOMPLISHMENTS (2)
SCIRun 3D visualization of NIMROD fusion data via MDSplus
— New capability in 3D visualization & animation from MDSplus-stored data
SCIRun visualizations used for scientific work presented at APS/DPP
— Advancing the science
Access Grid functional on the tiled wall as well as on a small-scale system (PIG)
— Allows investigation of diverse AG usage in fusion science
Collaborative visualization: wall to wall/workstation (VNC, DMX), ELVis
— Detailed analysis fed back into the control room
— Collaborative working meetings

ISSUES FOR FUTURE WORK
Ease of use and ease of installation still need a lot of work
Site security versus Grid security conflicts remain
Globus is undergoing a major transformation
— Can "lighter weight", small-footprint software provide some of the needed capabilities?
Toolset "hardening" is required
Manipulating very large multi-dimensional data sets is still a challenge
— Need to test new approaches

CONCLUDING COMMENTS
The National Fusion Collaboratory Project is implementing and testing new collaborative technologies for fusion research
— Grid computing
— Shared visualization and communication
Collaborative technology is critical to the success of the FE program
— Experiments: fewer, larger machines in the future (e.g. ITER)
— Computation: moving toward integrated simulation
User feedback is critical to the success of the NFC Project
— The tools exist to help researchers push their science forward