National Center for Supercomputing Applications, University of Illinois at Urbana-Champaign. HPC User Forum, Stuttgart, Germany, October 2010. Merle Giles, Private Sector Program & Economic Development.

Presentation transcript:

National Center for Supercomputing Applications University of Illinois at Urbana-Champaign HPC USER FORUM Stuttgart Germany October 2010 Merle Giles Private Sector Program & Economic Development

NCSA Basic & Applied Research across campus: Beckman Institute, ECE, CSL, Digital Computer Lab, College of Engineering, Biotechnology Center, Institute for Genomic Biology, Computer Science. Imaginations unbound

NCSA Bridges the Gap Between Basic Research & Commercialization

Product life cycle: Phase 0 Concept/Vision → Phase 1 Feasibility → Phase 2 Design/Development → Phase 3 Prototyping → Phase 4 Production/Deployment

Universities & labs: theoretical & basic research. NCSA: applied prototyping & development, optimization & robustification, over-the-horizon development. Private industry: commercialization & production (.com or .org). NCSA bridges basic research and commercialization with application.

Value Creation and Economic Development
- 3D virtual prototyping at NCSA => Caterpillar’s Global Simulation Center in Champaign
- Full-scale simulation of cell tower activity
- Reduced reliance on wet labs through computation
- Cluster design and architecture
- HPC-capable fast-network hard drives
- World’s first Internet GUI interface
- Prototyping the Windows® HPC operating system

Value Creation and Economic Development
- The HDF Group (a PDF-equivalent for data)
- Linux OS made standard for HPC
- Apache server software
- Telnet remote access
- R Systems hosts Wolfram Alpha
- Music data analytics at One Llama
- River Glass spinout – recent Boeing buyout
- TOTAL $$: $1 trillion, per founder Larry Smarr

Industry Partners over time

CURRENT PARTNERS

CLASSICAL HPC MISSION
- Historically, HPC has focused on science discovery
- Economic value has also been achieved in HPC derivatives
- Industrial value keys on discovery and optimization
- Increasingly, industry brings world-class problems

Tomorrow’s Industrial Challenge
[Chart: tomorrow’s HPC challenge versus today, along axes of time, complexity, and fidelity]

Discovery is No Longer Sufficient
[Chart: the industrial design cycle moves from today toward tomorrow’s HPC challenge along axes of time, complexity, and fidelity]

Product Lifecycle Management (PLM)
[Diagram: a process platform spanning product specification, product design (CAD/CAE), product modeling, production planning, manufacturing, assembly, quality assurance, and shipping, tied by data integration to an information platform covering supply chain management, bills of materials, cost accounting, and technical regulations and standards, with HPC applied throughout]

Forbes Magazine, publisher Rich Karlgaard, Digital Rules, September 13, 2010: “Smart-aggregation rules: Some forms of content will always need human curating. Dumb-aggregation rules: And some forms of content won’t. The trick is to figure out where algorithms beat humans, and vice versa.”

Competitive Advantage Needs Human Expertise
- Materials, fluids, mechanical, aerothermal, electrical, etc.
- Parallelization, scaling, multiphysics, algorithms, etc.
- Petascale, ISV code, data management, architecture, etc.
- Strategy, process, services, partnerships, etc.

Headlines
- Germany Trade & Invest – Partnership is the key to the country’s thriving R&D landscape.
- IBM Smarter Planet – Thanks to pervasive instrumentation and global interconnection, we are now capturing data in unprecedented volume and variety.
- IBM Smarter Planet – The world’s network traffic will soon total more than half a zettabyte.
- WSJ, Steve Conway – “There is growing recognition of the close link between supercomputing and scientific advancement as well as industrial competitiveness.”
- Forbes – Consumer technology is now ahead of most industrial technology.

National Center for Supercomputing Applications University of Illinois at Urbana-Champaign EXTREME COMPUTING

New Performance Driver: NCSA’s Blue Waters is the first open-access system tasked to achieve ≥1 petaflop/s on real applications.

Guess What This Is?

Guess What This Is? A hard disk drive with 5 MB of storage.

Leading-Edge Collaboration
Mega → Giga → Tera → Peta → Exa → Zetta → Yotta (bits, bytes, flop/s)
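The prefix ladder maps directly onto powers of ten. As a minimal sketch (the `PREFIXES` table and `to_base_units` helper are illustrative names, not from the slides):

```python
# Metric prefixes from the slide, as powers of ten (hypothetical helper).
PREFIXES = {
    "mega": 6, "giga": 9, "tera": 12, "peta": 15,
    "exa": 18, "zetta": 21, "yotta": 24,
}

def to_base_units(value, prefix):
    """Convert a prefixed quantity (bits, bytes, or flop/s) to base units."""
    return value * 10 ** PREFIXES[prefix]

# Blue Waters' ~10 Pf/s peak, expressed in flop/s:
peak_flops = to_base_units(10, "peta")  # 1e16 flop/s
```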

Blue Waters Expected to Beat 2008’s TOP500® COMBINED!

U.S. Leadership Computing Programs
U.S. Department of Energy:
- Oak Ridge National Laboratory: Jaguar + follow-on
- Argonne National Laboratory: Intrepid + follow-on
- Lawrence Livermore National Laboratory: Dawn + Sequoia
National Science Foundation:
- University of Illinois/NCSA: Blue Waters
NASA:
- Ames Research Center: Pleiades

Petaflop/s Comparison

System attribute | NCSA Abe | DOE Jaguar | NCSA Blue Waters
Vendor | Dell | Cray | IBM
Processor | Intel Xeon 5300 | AMD 2435 | IBM Power7
Peak performance (Pf/s) | ?? | ?? | ~10.0
Sustained performance (Pf/s) | ~0.005 | ?? | ≥1.0
# cores/chip | 4 | 6 | 8
# cores (total) | 9,600 | 224,256 | >300,000
Memory (TB) | ?? | ?? | >1,200
Online disk storage (TB) | 100 | 10,000 | >18,000
Archival storage (PB) | 5 | 20 | up to 500
Sustained disk transfer (TB/s) | n/a | 0.24 | >1.5
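The gap between the peak and sustained columns is the point of the comparison. A small sketch (the helper name is illustrative) using the Blue Waters targets of ~10 Pf/s peak and ≥1 Pf/s sustained:

```python
def sustained_fraction(sustained_pflops, peak_pflops):
    """Fraction of theoretical peak actually delivered on applications."""
    return sustained_pflops / peak_pflops

# Blue Waters target: >=1 Pf/s sustained on ~10 Pf/s peak, i.e. ~10% of peak.
bw_efficiency = sustained_fraction(1.0, 10.0)
```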

Machine Comparison

System attribute | ASC Purple (IBM) | DOE Jaguar (Cray) | Blue Waters (IBM, 1 rack) | Blue Waters (IBM, complete)
Year deployed | ?? | ?? | ?? | ??
Processor | IBM P5 | AMD | IBM P7 | IBM P7
# cores | 12,000 | 224,256 | 3,072 | 300,000+
Peak performance (Tf/s) | 100 | ?? | ~100 | 10,000+
Disk storage (TB) | 2,000 | ?? | ?? | 18,000+
# disk drives | 10,000 | ?? | 384 | 40,000
Memory (TB) | ?? | ?? | ?? | 1,200
Cost | $290M | ?? | ?? | $208M
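One way to read this table is price-performance. A rough sketch (the function name is illustrative) using the surviving figures, $290M for ASC Purple’s 100 Tf/s peak versus $208M for the complete Blue Waters system’s 10,000+ Tf/s peak:

```python
def musd_per_tflop(cost_musd, peak_tflops):
    """Cost in millions of dollars per peak Tf/s."""
    return cost_musd / peak_tflops

purple = musd_per_tflop(290, 100)          # $2.9M per peak Tf/s
blue_waters = musd_per_tflop(208, 10_000)  # ~$0.02M per peak Tf/s
```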

Integrated/Scalable System

Power7 chip: 8 cores, 32 threads; L1, L2, L3 cache (32 MB); up to 256 GF (peak); 128 GB/s memory bandwidth
Quad-chip module: 4 Power7 chips; 128 GB memory; 512 GB/s memory bandwidth; 1 TF (peak)
Hub chip: ?? TB/s bandwidth
IH server node: 8 QCMs (256 cores); 8 TF (peak); 1 TB memory; 4 TB/s memory bandwidth; 8 hub chips; power supplies; PCIe slots; fully water cooled
Blue Waters building block: 32 IH server nodes; 256 TF (peak); 32 TB memory; 128 TB/s memory bandwidth; 4 storage systems (>500 TB); 10 tape drive connections
Blue Waters system: ~10 PF peak; ~1 PF sustained; >300,000 cores; ~1.2 PB of memory; >18 PB of disk storage; 500 PB of archival storage; ≥100 Gbps connectivity

Blue Waters is built from components that can be used to build other systems with a wide range of capabilities, from servers to systems beyond Blue Waters. Blue Waters will be the most powerful computer in the world for scientific research when it comes online.
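The component figures on this slide compose multiplicatively. A sketch (variable names are illustrative) checking that the building-block hierarchy reaches the ~10 PF system peak:

```python
GF, TF, PF = 1e9, 1e12, 1e15

chip_peak  = 256 * GF         # Power7 chip: 8 cores, up to 256 GF peak
qcm_peak   = 4 * chip_peak    # quad-chip module: 4 chips, ~1 TF
node_peak  = 8 * qcm_peak     # IH server node: 8 QCMs, ~8 TF
block_peak = 32 * node_peak   # building block: 32 nodes, ~256 TF

# Building blocks needed to reach the ~10 PF system peak: roughly 38.
blocks_needed = 10 * PF / block_peak
```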

From Chip to Entire Integrated System
P7 chip (8 cores) → SMP node (32 cores) → drawer (256 cores) → SuperNode (1,024 cores) → building block → Blue Waters system, housed in the NPCF with on-line and near-line storage

IBM P7IH SuperNode = 128 CPUs / 1,024 cores

Data Center in a Rack

BPA: 200 to 480 Vac; 370 to 575 Vdc; redundant power; direct site power feed; PDU elimination
WCU: facility water input; 100% heat to water; redundant cooling; CRAH eliminated
Storage unit: 4U; 0–6 per rack; up to 384 SFF DASD per unit; file system
CECs: 2U; 1–12 CECs per rack; 256 cores; 128 SN DIMM slots per CEC; 8, 16, (32) GB DIMMs; 17 PCIe slots; embedded switch; redundant DCA; NW fabric; up to 3,072 cores, 24.6 TB (49.2 TB)
Rack: 39"w × 72"d × 83"h (990.6 mm wide); ~2,948 kg (~6,500 lbs)
Compute, storage, switch in one rack; 100% cooling; PDU eliminated
Input: 8 water lines, 4 power cords
Output: ~100 TFLOPs / 24.6 TB / 153.5 TB; 192 PCIe 16x / 12 PCIe 8x

Diverse Large Scale Computational Science

Programming Environment

Hardware: multicore POWER7 processor with Simultaneous MultiThreading (SMT) and Vector MultiMedia Extensions (VSX); private L1 and L2 cache per core, shared L3 cache per chip; high-performance, low-latency interconnect supporting RDMA
Full-featured OS (AIX or Linux): sockets, threads, shared memory, checkpoint/restart
Languages: C/C++, Fortran (including CAF), UPC
Programming models: MPI/MPI-2, OpenMP, PGAS, Charm++, Cactus
Libraries: MASS, ESSL, PESSL, PETSc, visualization…
IO model: global, parallel shared file system (>10 PB) and archival storage (GPFS/HPSS); MPI I/O
Low-level communications API supporting active messages (LAPI)
Environment: traditional (command line); Eclipse IDE (application development, debugging, performance tuning, job and workflow management)
Resource manager: batch and interactive access
Performance tuning: HPC and HPCS toolkits, open-source tools
Parallel debugging at full scale

Blue Waters Benchmark Codes
NSF challenge: ≥1 sustained petaflop/s. Photos courtesy of NERSC, UIUC, IBM.

Path to Petascale

2 Paths to Blue Waters

National Petascale Computing Facility
- $72.5M, 25 MW, LEED Gold+
- Military-grade security
- Non-classified
- 88,000 ft²

National Center for Supercomputing Applications University of Illinois at Urbana-Champaign THANK YOU!