Supercomputing the Next Century: Talk to the Max-Planck-Institut für Gravitationsphysik (Albert-Einstein-Institut), Potsdam, Germany, June 15, 1998.

Presentation transcript:

Supercomputing the Next Century: Talk to the Max-Planck-Institut für Gravitationsphysik (Albert-Einstein-Institut), Potsdam, Germany, June 15, 1998

NCSA is the Leading Edge Site for the National Computational Science Alliance

The Alliance Team Structure
– Leading Edge Center
– Enabling Technology: Parallel Computing; Distributed Computing; Data and Collaborative Computing
– Partners for Advanced Computational Services: Communities; Training; Technology Deployment; Computational Resources & Services
– Strategic Industrial and Technology Partners
– Application Technologies: Cosmology; Environmental Hydrology; Chemical Engineering; Nanomaterials; Bioinformatics; Scientific Instruments
– EOT: Education; Evaluation; Universal Access; Government

Alliance ’98 Hosts 1,000 Attendees, With Hundreds On-Line!

The “Experts” Are Not Always Right! Seek a New Vision and Stick to It
“We do not believe that workstation-class systems (even if they offer more processing power than the old Cray Y-MPs) can become the mainstay of a National center. At $500K purchase prices, departments should be able to afford the SGI Power Challenge systems on their own. For these reasons, we wonder whether the role of the proposed SGI systems in NCSA’s plan might be different from that of a ‘production machine.’”
– Program Plan Review Panel, February 1994

The SGI Power Challenge Array as NCSA’s Production Facility for Four Years
[Chart: number of users per month, September 1994 through May 1998, for the SGI Power Challenge Array, the CM-5 (retired 1/97), the Convex C3880 (retired 10/95), the HP/Convex SPP-1200, the Cray Y-MP (retired 12/94), the SGI Origin, and the HP/Convex SPP-2000]

The NCSA Origin Array Doubles Again This Month

Let’s Blow This Up! The Growth Rate of the National Capacity Is Slowing Down Again
Source: Quantum Research; Lex Lane, NCSA

Major Gap Developing in National Usage at NSF Supercomputer Centers
70% Annual Growth Rate Is the Historical Rate of National Usage Growth. It Is Also Slightly Greater Than the Rate of Moore’s Law, So Lesser Growth Means Desktops Gain on Supers
[Chart: historical usage versus projection]
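To make the Moore’s Law comparison concrete (a back-of-the-envelope check, assuming the conventional 18-month doubling period):

    $\text{annual factor} = 2^{12/18} \approx 1.59$

That is about 59% growth per year, so the historical 70% national usage growth did run slightly ahead of it, and any growth below that pace lets commodity desktops close the gap on the centers.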

Monthly National Usage at NSF Supercomputer Centers
[Chart: monthly usage, FY96 through FY99, showing the projection, the capacity level NSF proposed at the 3/97 NSB meeting, and the transition period]

Accelerated Strategic Computing Initiative Is Coupling DOE DP Labs to Universities
– Access to ASCI Leading Edge Supercomputers
– Academic Strategic Alliances Program
– Data and Visualization Corridors

Comparison of the DoE ASCI and the NSF PACI Origin Array Scale Through FY99 (/Hardware/schedule.html)
– Los Alamos Origin System FY processors
– NCSA Proposed System FY99: 6x128 and 4x64 = 1,024 processors

High-End Architecture: Scalable Clusters of Shared Memory Modules (Each Is 4 Teraflops Peak)
– NEC SX-5: 32 x 16 vector-processor SMP; 512 processors; 8 gigaflops peak per processor
– IBM SP: 256 x 16 RISC-processor SMP; 4,096 processors; 1 gigaflop peak per processor
– SGI Origin follow-on: 32 x 128 RISC-processor DSM; 4,096 processors; 1 gigaflop peak per processor
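As a quick arithmetic check of the peak figures (not on the slide itself):

    $512 \times 8\,\text{GF} = 4096\,\text{GF} \approx 4\,\text{TF}, \qquad 4096 \times 1\,\text{GF} \approx 4\,\text{TF}$

All three designs reach the same 4-teraflop peak through different trade-offs among module count, processors per module, and per-processor speed.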

Emerging Portable Computing Standards
– HPF
– MPI
– OpenMP
– Hybrids of MPI and OpenMP
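The hybrid item above can be made concrete with a minimal sketch (illustrative only, not code from the talk): MPI distributes work across the cluster’s SMP nodes while OpenMP threads fill each shared-memory node, matching the clusters-of-shared-memory-modules architecture on the earlier slide. The example approximates pi by midpoint-rule integration of 4/(1+x^2) over [0,1).

    #include <mpi.h>
    #include <omp.h>
    #include <stdio.h>

    /* Hybrid MPI+OpenMP sketch: each MPI rank owns one SMP node and
       integrates its slice of [0,1) with OpenMP threads; MPI then
       combines the slices. */
    int main(int argc, char **argv)
    {
        MPI_Init(&argc, &argv);
        int rank, nranks;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &nranks);

        const long n = 1000000;          /* quadrature points per rank */
        double local = 0.0;

        /* Threads share the node's memory; the reduction avoids a race. */
        #pragma omp parallel for reduction(+:local)
        for (long i = 0; i < n; i++) {
            double x = (rank + (i + 0.5) / n) / nranks; /* point in this rank's slice */
            local += 4.0 / (1.0 + x * x);               /* integrand whose integral is pi */
        }
        local /= (double)n * nranks;     /* scale by the step width */

        double pi = 0.0;                 /* message passing combines node results */
        MPI_Reduce(&local, &pi, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);
        if (rank == 0) printf("pi ~ %.12f\n", pi);

        MPI_Finalize();
        return 0;
    }

Each rank handles one slice of the interval; the OpenMP reduction covers intra-node parallelism, and a single MPI_Reduce merges the node results, which is the whole point of the hybrid style.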

Top500 Shared Memory Systems: Vector Processors vs. Microprocessors
[Charts from the TOP500 reports, June 1993 through June 1998: number of PVP systems (Europe, Japan, USA) and number of SMP + DSM systems (USA)]

The Exponential Growth of NCSA’s SGI Shared Memory Supercomputers: Doubling Every Nine Months!
[Chart: capacity growth across the Challenge, Power Challenge, Origin, and SN1 generations]
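For scale (simple compounding, not a figure from the slide): a nine-month doubling time amounts to an annual growth factor of

    $2^{12/9} \approx 2.52$

or roughly 152% per year, far ahead of the roughly 59% per year implied by an 18-month Moore’s Law doubling computed above.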

Extreme and Large PIs Dominate Usage of the NCSA Origin, January Through April 1998

Disciplines Using the NCSA Origin 2000: CPU-Hours in March 1998

Users, NCSA, SGI, and the Alliance Parallel Team Working to Make Better Scaling Routine
NCSA 128-processor Origin, IRIX 6.5
Source: Mitas, Hayes, Tafti, Saied, Balsara, NCSA; Wilkins, OSU; Woodward, UMinn; Freeman, NW

Solving a 2D Navier-Stokes Kernel: Performance of Scalable Systems
Preconditioned Conjugate Gradient Method with Multi-level Additive Schwarz Richardson Preconditioner (2D, 1024x1024)
Source: Danesh Tafti, NCSA
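Since the benchmark above is a preconditioned conjugate gradient (PCG) solve, a generic PCG skeleton may help fix ideas. This is a minimal sketch, not the NCSA kernel: a 1D Laplacian stands in for the Navier-Stokes operator, and plain Jacobi stands in for the multi-level additive Schwarz Richardson preconditioner.

    #include <stdio.h>
    #include <stdlib.h>
    #include <math.h>

    static double dot(const double *u, const double *v, int n) {
        double s = 0.0;
        for (int i = 0; i < n; i++) s += u[i] * v[i];
        return s;
    }

    /* Stand-in operator A: 1D Laplacian stencil (illustrative). */
    static void apply_A(const double *x, double *y, int n) {
        for (int i = 0; i < n; i++) {
            y[i] = 2.0 * x[i];
            if (i > 0)     y[i] -= x[i - 1];
            if (i < n - 1) y[i] -= x[i + 1];
        }
    }

    /* Stand-in preconditioner M ~ A: Jacobi, z = D^{-1} r (diag of A is 2). */
    static void apply_M(const double *r, double *z, int n) {
        for (int i = 0; i < n; i++) z[i] = r[i] / 2.0;
    }

    /* Standard PCG iteration for a symmetric positive definite A x = b. */
    void pcg(const double *b, double *x, int n, int maxit, double tol) {
        double *r = malloc(n * sizeof *r), *z = malloc(n * sizeof *z);
        double *p = malloc(n * sizeof *p), *q = malloc(n * sizeof *q);
        for (int i = 0; i < n; i++) { x[i] = 0.0; r[i] = b[i]; } /* r = b - A*0 */
        apply_M(r, z, n);
        for (int i = 0; i < n; i++) p[i] = z[i];
        double rz = dot(r, z, n);
        for (int k = 0; k < maxit && sqrt(dot(r, r, n)) > tol; k++) {
            apply_A(p, q, n);
            double alpha = rz / dot(p, q, n);
            for (int i = 0; i < n; i++) { x[i] += alpha * p[i]; r[i] -= alpha * q[i]; }
            apply_M(r, z, n);
            double rz_new = dot(r, z, n);
            double beta = rz_new / rz;  rz = rz_new;
            for (int i = 0; i < n; i++) p[i] = z[i] + beta * p[i];
        }
        free(r); free(z); free(p); free(q);
    }

    int main(void) {
        enum { N = 8 };
        double b[N], x[N];
        for (int i = 0; i < N; i++) b[i] = 1.0;
        pcg(b, x, N, 100, 1e-10);
        for (int i = 0; i < N; i++) printf("x[%d] = %g\n", i, x[i]);
        return 0;
    }

Swapping apply_A and apply_M for the real operator and preconditioner leaves the iteration logic unchanged, which is why the same method could be benchmarked across all of the scalable systems being compared.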

A Variety of Discipline Codes: Single-Processor Performance, Origin vs. T3E

Alliance PACS Origin2000 Repository
Kadin Tseng, BU; Gary Jensen, NCSA; Chuck Swanson, SGI
John Connolly, U Kentucky, Is Developing a Repository for the HP Exemplar

Simulation of the Evolution of the Universe on a Massively Parallel Supercomputer
Virgo Project: Evolving a Billion Pieces of Cold Dark Matter in a Hubble Volume on the CRAY T3E at the Garching Computing Centre of the Max Planck Society
[Images: 12 billion light years; 4 billion light years]

Limitations of Uniform Grids for Complex Scientific and Engineering Problems
Gravitation Causes a Continuous Increase in Density Until There Is a Large Mass in a Single Grid Zone
512x512x512 Run on a 512-node CM-5
Source: Greg Bryan, Mike Norman, NCSA

Use of Shared-Memory Adaptive Grids to Achieve Dynamic Load Balancing
64x64x64 Run with Seven Levels of Adaptation on the SGI Power Challenge, Locally Equivalent to 8192x8192x8192 Resolution
Source: Greg Bryan, Mike Norman, John Shalf, NCSA
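A toy flagging pass illustrates the adaptive idea behind these two slides (an illustrative sketch with a made-up threshold, not the Bryan/Norman code): cells whose density grows past a multiple of the mean are marked for coverage by a finer subgrid, so resolution follows the collapsing mass. Applied recursively seven times to a 64^3 base grid, this reaches the slide’s locally equivalent 8192^3 resolution (64 x 2^7 = 8192).

    #include <stdio.h>

    #define N 64            /* coarse grid cells per side, as on the slide */
    #define OVERDENSE 4.0   /* hypothetical refinement threshold */

    /* Mark coarse cells whose density exceeds a multiple of the mean so
       they can be covered by a finer subgrid.  Production AMR codes use
       more elaborate criteria; this is the bare idea. */
    int flag_for_refinement(double rho[N][N][N], char flag[N][N][N]) {
        double mean = 0.0;
        for (int i = 0; i < N; i++)
            for (int j = 0; j < N; j++)
                for (int k = 0; k < N; k++)
                    mean += rho[i][j][k];
        mean /= (double)N * N * N;

        int nflagged = 0;
        for (int i = 0; i < N; i++)
            for (int j = 0; j < N; j++)
                for (int k = 0; k < N; k++) {
                    flag[i][j][k] = rho[i][j][k] > OVERDENSE * mean;
                    nflagged += flag[i][j][k];
                }
        return nflagged;   /* cells to re-grid at the next finer level */
    }

    static double rho[N][N][N];
    static char flag[N][N][N];

    int main(void) {
        /* Smooth background with one synthetic overdense clump. */
        for (int i = 0; i < N; i++)
            for (int j = 0; j < N; j++)
                for (int k = 0; k < N; k++)
                    rho[i][j][k] = 1.0;
        rho[32][32][32] = 1000.0;   /* large mass collecting in one zone */
        printf("flagged %d cells\n", flag_for_refinement(rho, flag));
        return 0;
    }

On shared-memory machines like the Power Challenge, the refined subgrids can be handed to whichever processors are free, which is the dynamic load balancing the slide title refers to.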

NCSA Visualization: VRML Viewers (John Shalf, on Greg Bryan’s Cosmology AMR Data)

NT Workstation Shipments Rapidly Surpassing UNIX
Source: IDC; Wall Street Journal, 3/6/98

Current Alliance LES NT Cluster Testbed: Compaq Computer and Hewlett-Packard
Schedule of NT Supercluster Goals:
– 1998: Deploy First Production Clusters
– Scientific and Engineering Tuned Cluster (Andrew Chien, Alliance Parallel Computing Team; Rob Pennington, NCSA C&C); currently 256 processors of HP and Compaq Pentium II SMPs
– Data Intensive Tuned Cluster
– 1999: Enlarge to 512 Processors in Cluster
– 2000: Move to Merced; Achieve Teraflop Performance
UNIX/RISC and NT/Intel Will Co-exist for 5 Years
– Move Applications to NT/Intel
– Convergence Toward NT/Merced

First Scaling Tests of ZEUS-MP on the CRAY T3E and Origin vs. the NT Supercluster
ZEUS-MP Hydro Code Running Under MPI, Alliance Cosmology Team
“Supercomputer performance at mail-order prices” – Jim Gray, Microsoft
access.ncsa.uiuc.edu/CoverStories/SuperCluster/super.html
Andrew Chien, UIUC; Rob Pennington, NCSA

NCSA NT Supercluster Solving the Navier-Stokes Kernel
Preconditioned Conjugate Gradient Method with Multi-level Additive Schwarz Richardson Preconditioner (2D, 1024x1024)
Single-Processor Performance: MIPS R10k, 117 MFLOPS; Intel Pentium II, 80 MFLOPS
Danesh Tafti, Rob Pennington, Andrew Chien, NCSA

Near-Perfect Scaling of Cactus, a 3D Dynamic Solver for the Einstein GR Equations
Ratio of GFLOPs: Origin = 2.5x NT Supercluster
Danesh Tafti, Rob Pennington, Andrew Chien, NCSA
Cactus Was Developed by Paul Walker (MPI-Potsdam, UIUC, NCSA)

NCSA Symbio: A Distributed Object Framework Bringing Scalable Computing to NT Desktops
Parallel Computing on NT Clusters: Briand Sanderson, NCSA; Microsoft Co-Funds Development
Features:
– Based on Microsoft DCOM
– Batch or Interactive Modes
– Application Development Wizards
Current Status & Future Plans:
– Symbio Developer Preview 2 Released
– Princeton University Testbed

The Road to Merced

Assembling the Links in the Grid with NSF’s vBNS Connections Program
[Map legend: vBNS backbone node; vBNS-connected Alliance site; Alliance site scheduled for connection; NCSA; StarTAP; FY98]
27 Alliance Sites Running, 16 More in Progress
1999: Expansion via Abilene; vBNS & Abilene at 2.4 Gbit/s
Source: Charlie Catlett, Randy Butler, NCSA; NCSA Distributed Applications Support Team for vBNS

Globus Ubiquitous Supercomputing Testbed (GUSTO): Alliance Middleware for the Grid, Distributed Computing Team
– GII Next Generation Winner
– SF Express, NPACI/Alliance DoD Modernization Demonstration: Largest Distributed Interactive Simulation Ever Performed
– The Grid: Blueprint for a New Computing Infrastructure, Edited by Ian Foster and Carl Kesselman, July 1998
– IEEE Symposium on High Performance Distributed Computing, July 29-31, 1998, Chicago, Illinois
– NASA IPG, Most Recent Funding Addition

Alliance National Technology Grid Workshop and Training Facilities
Powered by Silicon Graphics, Linked by the NSF vBNS
Jason Leigh and Tom DeFanti, EVL; Rick Stevens, ANL

Using NCSA Virtual Director to Explore the Structure of Density Isosurfaces of an MHD Star Formation Simulation
Simulation by Dinshaw Balsara, NCSA, Alliance Cosmology Team; Visualization by Bob Patterson, NCSA
Red Iso = 4x Mean Density; Yellow Iso = 8x Mean Density; Red Iso = 12x Mean Density
Isosurface Models Generated by Vis5D, Choreographed with Cave5D/VirDir, Rendered with Wavefront on SGI Onyx

Linking CAVE to Vis5D = CAVE5D; Then Use Virtual Director to Analyze Simulations
Donna Cox, Robert Patterson, Stuart Levy, NCSA Virtual Director Team

Environmental Hydrology Collaboration: From CAVE to Desktop
Java 3D API HPC Application: VisAD, Environmental Hydrology Team (Bill Hibbard, Wisconsin); Steve Pietrowicz, NCSA Java Team
Standalone or CAVE-to-Laptop Collaborative
NASA IPG Is Adding Funding to Collaborative Java3D

Caterpillar’s Collaborative Virtual Prototyping Environment
Real-Time Linked VR and Audio-Video Between NCSA and Germany, Using SGI Indy/Onyx and HP Workstations
Data courtesy of Valerie Lehner, NCSA