Slide 1: Greenbook/OASCR Activities
– Focus on technology to enable SCIENCE to be conducted, i.e.:
  – Software tools
  – Software libraries
Mike Minkoff/ANL
Slide 2: Outline
– MPI Extensions and Programming Models
– Grid-based computing: MPICH-G2
– Grid-based Climate Simulation
Slide 3: Standards-Based Programming Environments
– The MPI (Message-Passing Interface) standard defines a portable library interface for the message-passing model of parallel computation.
– MPICH is a portable, high-performance implementation of the standard.
– Many vendor implementations of MPI have been based on MPICH.
– Research continues on implementation issues necessary for increased performance and scalability.
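For readers unfamiliar with the interface, here is a minimal sketch (my illustration, not taken from the slides) of the portable, standard MPI programming style that MPICH implements: rank 1 sends an integer to rank 0 with the basic point-to-point calls.

```c
/* Minimal MPI sketch: rank 1 sends an integer to rank 0.
 * Illustrative only; any conforming MPI implementation (e.g. MPICH) runs it. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, size, value = 0;
    MPI_Status status;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    if (size >= 2) {
        if (rank == 1) {
            value = 42;
            MPI_Send(&value, 1, MPI_INT, 0, 0, MPI_COMM_WORLD);
        } else if (rank == 0) {
            MPI_Recv(&value, 1, MPI_INT, 1, 0, MPI_COMM_WORLD, &status);
            printf("rank 0 received %d from rank 1\n", value);
        }
    }

    MPI_Finalize();
    return 0;
}
```

The same source compiles and runs unchanged across conforming implementations, which is the portability argument of the slide; with MPICH it would typically be built with mpicc and launched with mpirun.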
Slide 4: Beyond Message-Passing
– MPI-2 is a standard extension to the message-passing model specification, with:
  – Parallel I/O
  – Dynamic creation of processes
  – One-sided remote memory access
– MPICH will soon provide a portable implementation of MPI-2.
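To make the one-sided model concrete, here is an illustrative sketch (mine, not from the slides): every process exposes one integer as an RMA window, and each rank writes its rank number into the next process's window with MPI_Put, with no matching receive on the target.

```c
/* MPI-2 one-sided (RMA) sketch: each rank puts its rank number into the
 * window of the next rank; fences open and close the access epoch. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, size, local = -1;
    MPI_Win win;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    /* Expose one int of local memory for remote access. */
    MPI_Win_create(&local, sizeof(int), sizeof(int),
                   MPI_INFO_NULL, MPI_COMM_WORLD, &win);

    MPI_Win_fence(0, win);
    /* One-sided write: the target process makes no explicit receive call. */
    MPI_Put(&rank, 1, MPI_INT, (rank + 1) % size, 0, 1, MPI_INT, win);
    MPI_Win_fence(0, win);

    printf("rank %d now holds %d\n", rank, local);

    MPI_Win_free(&win);
    MPI_Finalize();
    return 0;
}
```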
Slide 5: Experimental Programming Models
– Robust implementations of language extensions, such as co-array Fortran, are needed if such approaches are to evolve into real productivity enhancers for DOE applications.
– More speculatively, new memory-centric programming and execution models will be needed for future machine architectures.
– Argonne is leading an effort to explore both near-term and longer-term programming model issues.
Slide 6: MPICH-G2
– Developed by Karonis (NIU) and Toonen (ANL)
– Based on ANL’s MPICH library (Gropp & Lusk)
– A grid-enabled MPI
– http://www.niu.edu/mpi
Slide 7: MPICH-G2: A grid-enabled MPI
– Uses many Globus services:
  – job startup
  – GSI for security
  – data conversion
  – asynchronous socket communication (Globus I/O)
– Multi-protocol support:
  – vendor-supplied MPI for intra-machine messages
  – TCP for inter-machine messages
Slide 8: MPI “grid” Applications
– MPI applications wanting to solve problems too big for any single computer
– Use MPICH-G2 to couple multiple computers, forming a computational grid
– Modify the application to respect slower LAN/WAN performance (a sketch of one such modification follows below)
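One common way to respect a slow wide-area link is to overlap the inter-machine exchange with computation that does not depend on the incoming data, using nonblocking calls. The following is an illustrative sketch under that assumption; the ring pattern and message size are arbitrary choices, not code from the slides.

```c
/* Latency-hiding sketch for grid runs: post the (slow) inter-machine
 * exchange, overlap it with independent local work, then wait. */
#include <mpi.h>
#include <stdio.h>

#define N 1024   /* arbitrary message length for illustration */

int main(int argc, char **argv)
{
    double send[N], recv[N], local = 0.0;
    int rank, size, next, prev, i;
    MPI_Request reqs[2];
    MPI_Status  stats[2];

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    next = (rank + 1) % size;            /* imagine these neighbors */
    prev = (rank + size - 1) % size;     /* live on other machines   */

    for (i = 0; i < N; i++) send[i] = rank + i;

    /* Start the wide-area exchange first ... */
    MPI_Irecv(recv, N, MPI_DOUBLE, prev, 0, MPI_COMM_WORLD, &reqs[0]);
    MPI_Isend(send, N, MPI_DOUBLE, next, 0, MPI_COMM_WORLD, &reqs[1]);

    /* ... overlap it with computation that needs no remote data ... */
    for (i = 0; i < N; i++) local += send[i] * send[i];

    /* ... and pay for the slow link only when the result is needed. */
    MPI_Waitall(2, reqs, stats);
    printf("rank %d: local sum %g, first remote value %g\n",
           rank, local, recv[0]);

    MPI_Finalize();
    return 0;
}
```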
Slide 9: Cactus
– Developed at the Max Planck Institute for Gravitational Physics, Germany
– Originally developed as a framework for the numerical solution of Einstein’s equations
– Evolved into a general-purpose problem-solving environment that provides a modular and parallel computational framework atop MPI
Slide 10: Cactus-G: A Case Study
– Cactus-G: coupled Cactus and MPICH-G2
– Multiple machines:
  – T3E 900, IBM SP2 (NERSC)
  – Origin O2K, IBM SP (ANL)
  – Origin O2K (NCSA)
  – IBM SP (SDSC)
Slide 11: Our Experience
– Primary problem: WAN performance between sites was far below what was “advertised” (a simple measurement sketch follows below)
– Machines with a single portal (processor/network interface) to the WAN contributed to the WAN performance problem
– Our conclusions:
  – We do not need a bigger machine
  – We need better communication performance between existing machines
  – Machine architectures need to support high-bandwidth, multi-stream, off-machine communication
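A simple way to compare delivered and advertised WAN performance is a ping-pong between one rank on each machine. The sketch below is my illustration (the 1 MB message size and repetition count are arbitrary choices); it reports the average round-trip time and the effective bandwidth seen by the application.

```c
/* Ping-pong sketch: ranks 0 and 1 (placed on different machines) bounce a
 * buffer back and forth to measure delivered latency and bandwidth. */
#include <mpi.h>
#include <stdio.h>

#define MSG_BYTES (1 << 20)   /* 1 MB per message, an arbitrary choice */
#define REPS 100

int main(int argc, char **argv)
{
    static char buf[MSG_BYTES];
    int rank, size, i;
    double t0, t1;
    MPI_Status status;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    if (size < 2) {
        if (rank == 0) printf("needs at least 2 processes\n");
        MPI_Finalize();
        return 0;
    }

    MPI_Barrier(MPI_COMM_WORLD);
    t0 = MPI_Wtime();
    for (i = 0; i < REPS; i++) {
        if (rank == 0) {
            MPI_Send(buf, MSG_BYTES, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
            MPI_Recv(buf, MSG_BYTES, MPI_CHAR, 1, 0, MPI_COMM_WORLD, &status);
        } else if (rank == 1) {
            MPI_Recv(buf, MSG_BYTES, MPI_CHAR, 0, 0, MPI_COMM_WORLD, &status);
            MPI_Send(buf, MSG_BYTES, MPI_CHAR, 0, 0, MPI_COMM_WORLD);
        }
    }
    t1 = MPI_Wtime();

    if (rank == 0) {
        double rtt = (t1 - t0) / REPS;             /* seconds per round trip */
        double bw  = 2.0 * MSG_BYTES / rtt / 1e6;  /* MB/s, counting both legs */
        printf("avg round trip %.3f ms, effective bandwidth %.1f MB/s\n",
               rtt * 1e3, bw);
    }

    MPI_Finalize();
    return 0;
}
```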
Slide 12: Next Generation NERSC - Grid Computing
– Grid access using Globus to enable:
  – remote job submission to all NERSC computer systems
  – data transfer to/from NERSC computers, including tape storage, using Grid technologies
  – development of a new generation of Grid-enabled tools to facilitate job submission, monitoring, and analysis
Slide 13: Next Generation NERSC - Climate Computing
– Integrating climate models requires:
  – multi-teraflop-scale computer systems able to deliver teraflop performance to a single user on a routine basis
  – the ability to perform long-duration simulations, which requires access to a large number of processors and TB of disk space
  – enhanced software tools for analysis and visualization of TB of model results
Slide 14: Science Projects
– Three-dimensional premixed turbulent flames with a full chemistry model (J. Bell, LBNL)
– Discrete Event Simulation (M. Novotny, FSU)
Slide 15: Questions to Ponder
What should NERSC support spend resources on?
– Remote and secure job submission and storage access
  – Support for PSEs
– Role of nanotech/SciDAC simulation requirements
– Diversity vs. a single-source machine
– Role of libraries (IMSL, EISPACK) that are dated and/or single-processor
– Role of the NERSC Math Server
– Integrated debugging/performance tools
– Extend UHU to OASCR/App. matching