National Computational Science Alliance: NCSA is the Leading Edge Site for the National Computational Science Alliance (www.ncsa.uiuc.edu).

Presentation transcript:

National Computational Science Alliance NCSA is the Leading Edge Site for the National Computational Science Alliance

National Computational Science Alliance Scientific Applications Continue to Require Exponential Growth in Capacity
[Chart: machine requirement in FLOPS vs. memory in bytes for QCD, atomic/diatomic interaction, turbulent convection in stars, computational cosmology, and molecular dynamics for biological molecules, plotted against the NSF Capability and NSF Leading Edge levels and the projected NSF level in 2004. Markers distinguish recent computations and next-step projections by NSF Grand Challenge research teams from long-range projections from a recent applications workshop; ASCI and a 100-hour climate-model point are also shown. From Bob Voigt, NSF.]

National Computational Science Alliance The Promise of the Teraflop - From Thunderstorm to National-Scale Simulation Simulation by Wilhelmson, et al.; Figure from Supercomputing and the Transformation of Science, Kaufmann and Smarr, Freeman, 1993

National Computational Science Alliance The Accelerated Strategic Computing Initiative is Coupling DOE Defense Labs to Universities
–Access to ASCI Leading Edge Supercomputers
–Academic Strategic Alliances Program
–Data and Visualization Corridors

National Computational Science Alliance Comparison of the DoE ASCI and the NSF PACI Origin Array Scale Through FY99 (/Hardware/schedule.html)
–Los Alamos Origin System, FY processors
–NCSA Proposed System, FY99: 6x128 and 4x64 = 1024 processors

National Computational Science Alliance NCSA Combines Shared Memory Programming with Massive Parallelism (CM-2, CM-5); Future Upgrade Under Negotiation with NSF

National Computational Science Alliance The Exponential Growth of NCSA’s SGI Shared Memory Supercomputers: Doubling Every Nine Months! (Challenge, Power Challenge, Origin, SN1)
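The "doubling every nine months" claim implies a concrete growth rate; a minimal sketch (the function name and numbers below are illustrative, not from the slides) makes the per-year and three-year multipliers explicit:

```python
# Growth implied by "doubling every nine months": capacity multiplier
# after t months is 2**(t/9). (Hypothetical helper for illustration.)
def growth(months, doubling_period=9.0):
    return 2.0 ** (months / doubling_period)

print(round(growth(12), 2))  # per-year factor, ~2.52
print(round(growth(36), 1))  # three-year factor, 16.0
```

So a nine-month doubling time compounds to roughly 2.5x per year, noticeably faster than the classic 18-month Moore's-law pace.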

National Computational Science Alliance TOP500 Systems by Vendor
[Chart from the TOP500 Reports: number of systems per vendor (CRI, SGI, IBM, Convex, HP, Sun, TMC, Intel, DEC, Japanese vendors, other) for each list from Jun-93 through Jun-98.]

National Computational Science Alliance Why NCSA Switched From Vector to RISC Processors
[Chart: average performance (MFLOPS) vs. number of users for the NCSA 1992 supercomputing community, for users with more than 0.5 CPU hour. The Cray Y-MP4/64 averaged 70 MFLOPS per user, far below peak; the chart compares this against the peak speeds of a single Y-MP processor and the MIPS R8000.]

National Computational Science Alliance Replacement of Shared Memory Vector Supercomputers by Microprocessor SMPs
[Chart from the TOP500 Reports: installed supercomputers by class (MPP, SMP/DSM, PVP), Jun-93 through Jun-98.]

National Computational Science Alliance Top500 Shared Memory Systems: Vector Processors vs. Microprocessors
[Charts from the TOP500 Reports, Jun-93 through Jun-98: number of PVP systems by region (USA, Japan, Europe) alongside number of SMP + DSM systems (USA).]

National Computational Science Alliance Simulation of the Evolution of the Universe on a Massively Parallel Supercomputer
Virgo Project: evolving a billion pieces of cold dark matter in a Hubble volume (views at 12 billion and 4 billion light years) on the CRAY T3E at the Garching Computing Centre of the Max-Planck-Society.

National Computational Science Alliance Limitations of Uniform Grids for Complex Scientific and Engineering Problems
512x512x512 run on a 512-node CM-5. Gravitation causes a continuous increase in density until there is a large mass in a single grid zone. Source: Greg Bryan, Mike Norman, NCSA

National Computational Science Alliance Use of Shared Memory Adaptive Grids to Achieve Dynamic Load Balancing
64x64x64 run with seven levels of adaption on the SGI Power Challenge, locally equivalent to 8192x8192x8192 resolution. Source: Greg Bryan, Mike Norman, John Shalf, NCSA
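The "locally equivalent to 8192x8192x8192" figure follows directly from the refinement arithmetic; a small sketch (assuming, as is standard for this kind of adaptive mesh refinement, a factor-of-2 refinement per level) shows the numbers:

```python
# Effective resolution of an adaptively refined grid: a 64^3 base grid
# with seven levels of factor-2 refinement resolves its finest features
# as if it were a uniform 64 * 2**7 = 8192 grid per dimension.
base, levels, refine = 64, 7, 2
effective = base * refine ** levels
print(effective)  # 8192

# A uniform grid at that resolution would need this many zones,
# which is why adaptive refinement is the only practical route.
print(8192 ** 3)  # 549755813888
```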

National Computational Science Alliance Extreme and Large PIs Dominate Usage of the NCSA Origin, January Through April 1998

National Computational Science Alliance Disciplines Using the NCSA Origin 2000 CPU-Hours in March 1995

National Computational Science Alliance Solving a 2D Navier-Stokes Kernel: Performance of Scalable Systems
Preconditioned conjugate gradient method with a multi-level additive Schwarz Richardson preconditioner (2D, 1024x1024). Source: Danesh Tafti, NCSA
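The solver structure named on the slide can be sketched as follows. This toy substitutes a simple diagonal (Jacobi) preconditioner for the multi-level additive Schwarz Richardson preconditioner used in the actual 1024x1024 kernel, and runs on a small SPD test system; it is an illustration of the PCG iteration, not the Alliance code:

```python
# Sketch of the preconditioned conjugate gradient (PCG) iteration.
def pcg(A, b, precond, tol=1e-10, max_iter=100):
    n = len(b)
    x = [0.0] * n
    r = b[:]                        # residual r = b - A*x with x = 0
    z = precond(r)                  # apply preconditioner
    p = z[:]
    rz = sum(ri * zi for ri, zi in zip(r, z))
    for _ in range(max_iter):
        Ap = [sum(A[i][j] * p[j] for j in range(n)) for i in range(n)]
        alpha = rz / sum(pi * api for pi, api in zip(p, Ap))
        x = [xi + alpha * pi for xi, pi in zip(x, p)]
        r = [ri - alpha * api for ri, api in zip(r, Ap)]
        if sum(ri * ri for ri in r) ** 0.5 < tol:
            break
        z = precond(r)
        rz_new = sum(ri * zi for ri, zi in zip(r, z))
        p = [zi + (rz_new / rz) * pi for zi, pi in zip(z, p)]
        rz = rz_new
    return x

# Small SPD (1D Laplacian-like) test system, solution [1, 2, 3].
A = [[4.0, -1.0, 0.0], [-1.0, 4.0, -1.0], [0.0, -1.0, 4.0]]
b = [2.0, 4.0, 10.0]
jacobi = lambda r: [ri / A[i][i] for i, ri in enumerate(r)]
print([round(xi, 6) for xi in pcg(A, b, jacobi)])
```

The matrix-vector product inside the loop is where the parallel work lives; on the scalable systems the slide compares, that product is distributed across processors while the additive Schwarz preconditioner solves overlapping subdomain problems concurrently.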

National Computational Science Alliance A Variety of Discipline Codes - Single Processor Performance Origin vs. T3E

National Computational Science Alliance Alliance PACS Origin2000 Repository
Kadin Tseng, BU; Gary Jensen, NCSA; Chuck Swanson, SGI. John Connolly, U Kentucky, is developing a repository for the HP Exemplar.

National Computational Science Alliance High-End Architecture: Scalable Clusters of Shared Memory Modules, Each 4 Teraflops Peak
NEC SX-5 –32 x 16-processor vector SMP –512 processors –8 Gigaflops peak per processor
IBM SP –256 x 16-processor RISC SMP –4096 processors –1 Gigaflop peak per processor
SGI Origin Follow-on –32 x 128-processor RISC DSM –4096 processors –1 Gigaflop peak per processor
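The "each is 4 Teraflops peak" claim is just processors times per-processor peak; a quick check over the three configurations on the slide:

```python
# Peak-performance arithmetic for the three configurations:
# processor count x per-processor peak (Gflops) = system peak.
systems = {
    "NEC SX-5 (32 x 16-proc vector SMP)":       (512, 8.0),
    "IBM SP (256 x 16-proc RISC SMP)":          (4096, 1.0),
    "SGI Origin follow-on (32 x 128-proc DSM)": (4096, 1.0),
}
for name, (procs, gflops_per_proc) in systems.items():
    tflops = procs * gflops_per_proc / 1000
    print(f"{name}: {tflops:.1f} Tflops peak")
```

All three reach the same 4-Tflops aggregate by very different routes: few fast vector processors versus many commodity-class RISC processors.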

National Computational Science Alliance Emerging Portable Computing Standards
–HPF
–MPI
–OpenMP
–Hybrids of MPI and OpenMP

National Computational Science Alliance Basket of Applications: Average Performance as a Percentage of Linpack Performance
[Chart: per-system averages of 22%, 25%, 14%, 19%, 33%, and 26% over application codes in CFD, biomolecular science, chemistry, materials, and QCD.]

National Computational Science Alliance Harnessing Distributed UNIX Workstations: the University of Wisconsin Condor Pool
Condor cycles; CondorView courtesy of Miron Livny and Todd Tannenbaum (UWisc)

National Computational Science Alliance NT Workstation Shipments Rapidly Surpassing UNIX Source: IDC, Wall Street Journal, 3/6/98

National Computational Science Alliance First Scaling Testing of ZEUS-MP on the CRAY T3E and Origin vs. the NT Supercluster
“Supercomputer performance at mail-order prices” -- Jim Gray, Microsoft
access.ncsa.uiuc.edu/CoverStories/SuperCluster/super.html
The ZEUS-MP hydro code running under MPI. Alliance Cosmology Team; Andrew Chien, UIUC; Rob Pennington, NCSA

National Computational Science Alliance NCSA NT Supercluster Solving the Navier-Stokes Kernel
Preconditioned conjugate gradient method with a multi-level additive Schwarz Richardson preconditioner (2D, 1024x1024). Single-processor performance: MIPS R10k, 117 MFLOPS; Intel Pentium II, 80 MFLOPS. Danesh Tafti, Rob Pennington, Andrew Chien, NCSA

National Computational Science Alliance Near-Perfect Scaling of Cactus, a 3D Dynamic Solver for the Einstein GR Equations
Ratio of GFLOPs: Origin = 2.5x NT SC. Danesh Tafti, Rob Pennington, Andrew Chien, NCSA. Cactus was developed by Paul Walker, MPI-Potsdam, UIUC, NCSA

National Computational Science Alliance NCSA Symbio - A Distributed Object Framework Bringing Scalable Computing to NT Desktops Parallel Computing on NT Clusters –Briand Sanderson, NCSA –Microsoft Co-Funds Development Features –Based on Microsoft DCOM –Batch or Interactive Modes –Application Development Wizards Current Status & Future Plans –Symbio Developer Preview 2 Released –Princeton University Testbed

National Computational Science Alliance The Road to Merced