1
Supercomputing the Next Century
Talk at the Max-Planck-Institut für Gravitationsphysik (Albert-Einstein-Institut), Potsdam, Germany, June 15, 1998
2
NCSA is the Leading Edge Site for the National Computational Science Alliance www.ncsa.uiuc.edu
3
The Alliance Team Structure
–Leading Edge Center
–Enabling Technology: Parallel Computing; Distributed Computing; Data and Collaborative Computing
–Partners for Advanced Computational Services: Communities; Training; Technology Deployment; Computational Resources & Services
–Strategic Industrial and Technology Partners
–Application Technologies: Cosmology; Environmental Hydrology; Chemical Engineering; Nanomaterials; Bioinformatics; Scientific Instruments
–EOT (Education, Outreach & Training): Education; Evaluation; Universal Access; Government
4
Alliance ’98 Hosts 1,000 Attendees, With Hundreds On-Line!
5
The “Experts” Are Not Always Right! Seek a New Vision and Stick to It
“We do not believe that workstation-class systems (even if they offer more processing power than the old Cray Y-MPs) can become the mainstay of a national center. At $500K purchase prices, departments should be able to afford the SGI Power Challenge systems on their own. For these reasons, we wonder whether the role of the proposed SGI systems in NCSA’s plan might be different from that of a ‘production machine.’”
–Program Plan Review Panel, February 1994
6
The SGI Power Challenge Array as NCSA’s Production Facility for Four Years
[Chart: number of users per month, Sep 1994 - May 1998, for the SGI Power Challenge Array, CM-5 (retired 1/97), Convex C3880 (retired 10/95), HP/Convex SPP-1200, Cray Y-MP (retired 12/94), SGI Origin, and HP/Convex SPP-2000]
7
The NCSA Origin Array Doubles Again This Month
8
Let’s Blow This Up! The Growth Rate of the National Capacity Is Slowing Down Again
Source: Quantum Research; Lex Lane, NCSA
9
Major Gap Developing in National Usage at NSF Supercomputer Centers
70% annual growth is the historical rate of national usage growth. It is also slightly greater than the rate implied by Moore’s Law (a doubling every 18 months is roughly 59% per year), so any slower growth means desktops gain on supercomputers.
[Chart includes a projection]
10
Monthly National Usage at NSF Supercomputer Centers
[Chart: monthly usage, FY96-FY99, with a projection, the capacity level NSF proposed at the 3/97 NSB meeting, and the transition period marked]
11
Accelerated Strategic Computing Initiative Is Coupling DOE DP Labs to Universities
–Access to ASCI Leading Edge Supercomputers
–Academic Strategic Alliances Program
–Data and Visualization Corridors
http://www.llnl.gov/asci-alliances/centers.html
12
Comparison of the DOE ASCI and the NSF PACI Origin Array Scale Through FY99
www.lanl.gov/projects/asci/bluemtn/Hardware/schedule.html
Los Alamos Origin system, FY99: 5,000-6,000 processors
NCSA proposed system, FY99: 6x128 + 4x64 = 1,024 processors
13
High-End Architecture 2000: Scalable Clusters of Shared Memory Modules, Each 4 Teraflops Peak
–NEC SX-5: 32 x 16-processor vector SMPs = 512 processors at 8 gigaflops peak each
–IBM SP: 256 x 16-processor RISC SMPs = 4,096 processors at 1 gigaflop peak each
–SGI Origin follow-on: 32 x 128-processor RISC DSMs = 4,096 processors at 1 gigaflop peak each
(512 x 8 GF = 4,096 x 1 GF = 4 teraflops)
14
Emerging Portable Computing Standards
–HPF
–MPI
–OpenMP
–Hybrids of MPI and OpenMP (see the sketch below)
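As a concrete illustration of the last item, here is a minimal hybrid MPI + OpenMP sketch in C (our example, not code from the talk): each MPI process sums its slice of an array inside an OpenMP-threaded loop, and MPI_Reduce combines the per-process results, matching the cluster-of-SMPs machines described elsewhere in this talk. The array size N is an arbitrary illustrative choice.

```c
/* Hybrid MPI + OpenMP sketch: OpenMP threads parallelize within an SMP
 * node, MPI communicates across nodes. Build with an MPI C compiler and
 * OpenMP enabled, e.g.: mpicc -fopenmp hybrid.c -o hybrid */
#include <mpi.h>
#include <omp.h>
#include <stdio.h>

#define N 1000000  /* elements per MPI rank (illustrative) */

static double data[N];

int main(int argc, char **argv)
{
    int rank, nranks;
    double local = 0.0, total = 0.0;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &nranks);

    /* Fill this rank's slice with sample values. */
    for (int i = 0; i < N; i++)
        data[i] = rank + 1.0;

    /* Shared-memory parallelism: threads split the local loop. */
    #pragma omp parallel for reduction(+:local)
    for (int i = 0; i < N; i++)
        local += data[i];

    /* Distributed-memory parallelism: combine per-rank sums. */
    MPI_Reduce(&local, &total, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);

    if (rank == 0)
        printf("total = %.0f over %d ranks\n", total, nranks);

    MPI_Finalize();
    return 0;
}
```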
15
Top500 Shared Memory Systems
TOP500 Reports: http://www.netlib.org/benchmark/top500.html
[Two charts, Jun 1993 - Jun 1998: number of PVP (vector processor) systems, broken out for Europe, Japan, and the USA; and number of SMP + DSM (microprocessor) systems for the USA]
16
The Exponential Growth of NCSA’s SGI Shared Memory Supercomputers: Doubling Every Nine Months!
(Challenge, Power Challenge, Origin, SN1)
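A nine-month doubling time implies roughly 2.5x growth per year, well ahead of Moore’s Law’s 18-month doubling; a one-line statement of the implied growth law (our formula, not from the slide):

$$ C(t) = C_0 \cdot 2^{t/9\,\mathrm{mo}}, \qquad C(12\,\mathrm{mo}) = C_0 \cdot 2^{4/3} \approx 2.5\,C_0 . $$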
17
Extreme and Large PIs Dominate Usage of the NCSA Origin, January Through April 1998
18
Disciplines Using the NCSA Origin 2000: CPU-Hours in March 1998
19
Users, NCSA, SGI, and the Alliance Parallel Team Working to Make Better Scaling Routine
NCSA 128-processor Origin, IRIX 6.5
Source: Mitas, Hayes, Tafti, Saied, Balsara, NCSA; Wilkins, OSU; Woodward, UMinn; Freeman, NW
20
Solving a 2D Navier-Stokes Kernel: Performance of Scalable Systems
Preconditioned conjugate gradient method with a multi-level additive Schwarz Richardson preconditioner (2D, 1024x1024)
Source: Danesh Tafti, NCSA
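For reference, the method named here fits in a few dozen lines of C. The sketch below is self-contained but is not the NCSA kernel: a 5-point Laplacian on a 1024x1024 grid stands in for the Navier-Stokes operator, and a plain Jacobi (diagonal) preconditioner stands in for the multi-level additive Schwarz Richardson preconditioner; all function and variable names are our own.

```c
/* Minimal preconditioned conjugate gradient (PCG) sketch. */
#include <math.h>
#include <stdio.h>
#include <string.h>

#define G 1024
#define N (G * G)          /* unknowns on a 1024x1024 grid */

static double r[N], z[N], p[N], Ap[N];

/* y = A x: 5-point Laplacian with zero Dirichlet boundaries. */
static void apply_A(const double *x, double *y)
{
    for (int j = 0; j < G; j++)
        for (int i = 0; i < G; i++) {
            long k = (long)j * G + i;
            double s = 4.0 * x[k];
            if (i > 0)     s -= x[k - 1];
            if (i < G - 1) s -= x[k + 1];
            if (j > 0)     s -= x[k - G];
            if (j < G - 1) s -= x[k + G];
            y[k] = s;
        }
}

/* z = M^-1 r: Jacobi preconditioner (the diagonal of A is 4). */
static void apply_M_inv(const double *rr, double *zz)
{
    for (long k = 0; k < N; k++) zz[k] = rr[k] / 4.0;
}

static double dot(const double *a, const double *b)
{
    double s = 0.0;
    for (long k = 0; k < N; k++) s += a[k] * b[k];
    return s;
}

/* Solve A x = b; returns the number of iterations used. */
int pcg(const double *b, double *x, double tol, int max_iter)
{
    apply_A(x, Ap);
    for (long k = 0; k < N; k++) r[k] = b[k] - Ap[k];  /* r = b - A x */
    apply_M_inv(r, z);                                 /* z = M^-1 r  */
    memcpy(p, z, sizeof p);
    double rz = dot(r, z);
    for (int it = 0; it < max_iter; it++) {
        apply_A(p, Ap);
        double alpha = rz / dot(p, Ap);
        for (long k = 0; k < N; k++) { x[k] += alpha * p[k]; r[k] -= alpha * Ap[k]; }
        if (sqrt(dot(r, r)) < tol) return it + 1;
        apply_M_inv(r, z);
        double rz_new = dot(r, z);
        for (long k = 0; k < N; k++) p[k] = z[k] + (rz_new / rz) * p[k];
        rz = rz_new;
    }
    return max_iter;
}

int main(void)
{
    static double b[N], x[N];
    for (long k = 0; k < N; k++) b[k] = 1.0;  /* uniform source term */
    int iters = pcg(b, x, 1e-6, 1000);
    printf("converged in %d iterations\n", iters);
    return 0;
}
```

In the real kernel these vector operations and the matrix-vector product are distributed across processors, which is exactly what the scaling curves on this slide measure.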
21
A Variety of Discipline Codes - Single Processor Performance Origin vs. T3E
22
Alliance PACS Origin2000 Repository
http://scv.bu.edu/SCV/Origin2000/
Kadin Tseng, BU; Gary Jensen, NCSA; Chuck Swanson, SGI
John Connolly, U Kentucky, is developing a repository for the HP Exemplar
23
Simulation of the Evolution of the Universe on a Massively Parallel Supercomputer
Virgo Project: evolving a billion particles of cold dark matter in a Hubble volume on the 688-processor CRAY T3E at the Garching Computing Centre of the Max-Planck-Society
[Images: views at 12 billion light years and 4 billion light years]
http://www.mpg.de/universe.htm
24
Limitations of Uniform Grids for Complex Scientific and Engineering Problems
512x512x512 run on a 512-node CM-5: gravitation causes a continuous increase in density until there is a large mass in a single grid zone
Source: Greg Bryan, Mike Norman, NCSA
25
Use of Shared Memory Adaptive Grids to Achieve Dynamic Load Balancing
64x64x64 run with seven levels of adaption on the SGI Power Challenge, locally equivalent to 8192x8192x8192 resolution (64 x 2^7 = 8192)
Source: Greg Bryan, Mike Norman, John Shalf, NCSA
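The point of the adaptive grid is that refinement is local: only cells whose density crosses a threshold are subdivided, so seven nested 2x levels over a 64^3 base grid give the full 8192 effective cells per dimension only where the collapse demands it. The C sketch below illustrates that kind of refinement criterion; the overdensity factor and all names are our assumptions, not the Bryan/Norman code.

```c
/* Illustrative adaptive-refinement criterion: a cell earns one more 2x
 * refinement level each time its density exceeds a geometrically growing
 * multiple of the mean. Seven levels over a 64^3 base grid give a local
 * effective resolution of 64 * 2^7 = 8192 cells per dimension. */
#include <stdio.h>

#define BASE      64   /* base grid cells per dimension */
#define MAX_LEVEL 7    /* deepest refinement level */

int refinement_level(double density, double mean, double factor)
{
    int level = 0;
    double threshold = factor * mean;
    while (level < MAX_LEVEL && density > threshold) {
        level++;
        threshold *= factor;  /* e.g. 4x, 16x, 64x ... the mean */
    }
    return level;
}

int main(void)
{
    double mean = 1.0;
    double samples[] = { 0.5, 3.0, 20.0, 5000.0, 1e7 };
    for (int i = 0; i < 5; i++) {
        int lev = refinement_level(samples[i], mean, 4.0);
        printf("density %10.1f -> level %d, local resolution %d^3\n",
               samples[i], lev, BASE << lev);
    }
    return 0;
}
```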
26
NCSA Visualization: VRML Viewers
John Shalf on Greg Bryan’s cosmology AMR data
http://infinite-entropy.ncsa.uiuc.edu/Projects/AmrWireframe/
27
NT Workstation Shipments Rapidly Surpassing UNIX
Source: IDC, Wall Street Journal, 3/6/98
28
Current Alliance LES NT Cluster Testbed: Compaq Computer and Hewlett-Packard
Schedule of NT Supercluster goals:
–1998: Deploy first production clusters
–Scientific and engineering tuned cluster (Andrew Chien, Alliance Parallel Computing Team; Rob Pennington, NCSA C&C); currently 256 processors of HP and Compaq Pentium II SMPs
–Data intensive tuned cluster
–1999: Enlarge to 512 processors in the cluster
–2000: Move to Merced
–2002-2005: Achieve teraflop performance
UNIX/RISC and NT/Intel will co-exist for 5 years:
–1998-2000: Move applications to NT/Intel
–2000-2005: Convergence toward NT/Merced
29
First Scaling Testing of ZEUS-MP on CRAY T3E and Origin vs. NT Supercluster
“Supercomputer performance at mail-order prices” -- Jim Gray, Microsoft
ZEUS-MP hydro code running under MPI; Alliance Cosmology Team
Andrew Chien, UIUC; Rob Pennington, NCSA
access.ncsa.uiuc.edu/CoverStories/SuperCluster/super.html
30
NCSA NT Supercluster Solving the Navier-Stokes Kernel
Preconditioned conjugate gradient method with a multi-level additive Schwarz Richardson preconditioner (2D, 1024x1024)
Single-processor performance: MIPS R10k, 117 MFLOPS; Intel Pentium II, 80 MFLOPS
Danesh Tafti, Rob Pennington, Andrew Chien, NCSA
31
Near-Perfect Scaling of Cactus, a 3D Dynamic Solver for the Einstein GR Equations
Ratio of GFLOPS: Origin = 2.5x NT Supercluster
Cactus was developed by Paul Walker, MPI-Potsdam, UIUC, NCSA
Danesh Tafti, Rob Pennington, Andrew Chien, NCSA
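“Near-perfect scaling” has a standard quantitative reading, worth stating since the slide compares absolute GFLOPS across machines; these are the textbook definitions, not formulas from the talk:

$$ S(p) = \frac{T(1)}{T(p)}, \qquad E(p) = \frac{S(p)}{p}, $$

where $T(p)$ is the runtime on $p$ processors. Near-perfect scaling means $E(p)$ stays close to 1 as $p$ grows, which can hold on the Origin and the NT Supercluster alike even though their absolute per-processor speeds differ by the 2.5x factor quoted above.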
32
NCSA Symbio: A Distributed Object Framework Bringing Scalable Computing to NT Desktops
http://access.ncsa.uiuc.edu/Features/Symbio/Symbio.html
Parallel computing on NT clusters
–Briand Sanderson, NCSA
–Microsoft co-funds development
Features
–Based on Microsoft DCOM
–Batch or interactive modes
–Application development wizards
Current status & future plans
–Symbio Developer Preview 2 released
–Princeton University testbed
33
The Road to Merced http://developer.intel.com/solutions/archive/issue5/focus.htm#FOUR
34
Assembling the Links in the Grid with NSF’s vBNS Connections Program
27 Alliance sites running... …16 more in progress.
1999: Expansion via Abilene; vBNS & Abilene at 2.4 Gbit/s
[Map legend: vBNS connected Alliance site; Alliance site scheduled for FY98 connection; vBNS backbone node; StarTAP; NCSA]
Source: Charlie Catlett, Randy Butler, NCSA
NCSA Distributed Applications Support Team for vBNS
35
Globus Ubiquitous Supercomputing Testbed (GUSTO)
Alliance middleware for the Grid; Distributed Computing Team
–GII Next Generation winner
–SF Express, the NPACI / Alliance DoD Modernization demonstration: the largest distributed interactive simulation ever performed
–The Grid: Blueprint for a New Computing Infrastructure, edited by Ian Foster and Carl Kesselman, July 1998
–IEEE Symposium on High Performance Distributed Computing, July 29-31, 1998, Chicago, Illinois
–NASA IPG is the most recent funding addition
36
Alliance National Technology Grid Workshop and Training Facilities
Powered by Silicon Graphics, linked by the NSF vBNS
Jason Leigh and Tom DeFanti, EVL; Rick Stevens, ANL
37
Using NCSA Virtual Director to Explore the Structure of Density Isosurfaces of a 256^3 MHD Star Formation Simulation
Simulation by Dinshaw Balsara, NCSA, Alliance Cosmology Team; visualization by Bob Patterson, NCSA
Red iso = 4x mean density; yellow iso = 8x mean density; red iso = 12x mean density
Isosurface models generated by Vis5D, choreographed with Cave5D/VirDir, rendered with Wavefront on an SGI Onyx
38
Linking the CAVE to Vis5D = CAVE5D, Then Using Virtual Director to Analyze Simulations
Donna Cox, Robert Patterson, Stuart Levy, NCSA Virtual Director Team
39
Environmental Hydrology Collaboration: From CAVE to Desktop
Java 3D API HPC application: VisAD, Environmental Hydrology Team (Bill Hibbard, Wisconsin); Steve Pietrowicz, NCSA Java Team
Standalone or CAVE-to-laptop collaborative
NASA IPG is adding funding to collaborative Java3D
40
Caterpillar’s Collaborative Virtual Prototyping Environment
Real-time linked VR and audio-video between NCSA and Germany, using SGI Indy/Onyx and HP workstations
Data courtesy of Valerie Lehner, NCSA