
An Introduction to Princeton’s New Computing Resources: IBM Blue Gene, SGI Altix, and Dell Beowulf Cluster
PICASso Mini-Course
October 18, 2006
Curt Hillegas

Introduction
- SGI Altix - Hecate
- IBM Blue Gene/L - Orangena
- Dell Beowulf Cluster - Della
- Storage
- Other resources

TIGRESS High Performance Computing Center
(Terascale Infrastructure for Groundbreaking Research in Engineering and Science)

Partnerships
- Princeton Institute for Computational Science and Engineering (PICSciE)
- Office of Information Technology (OIT)
- School of Engineering and Applied Science (SEAS)
- Lewis-Sigler Institute for Integrative Genomics
- Astrophysical Sciences
- Princeton Plasma Physics Laboratory (PPPL)

SGI Altix - Hecate
- Itanium2 processors
- 256 GB RAM (4 GB per processor)
- NUMAlink interconnect
- 5 TB local disk
- 360 GFlops

SGI Altix - Itanium2 processor
- 4 MB L3 cache
- 256 KB L2 cache
- 32 KB L1 cache

SGI Altix - NUMAlink
- NUMAlink interconnect; per-direction link bandwidth in the GB/s range
- Physical latency: 28 ns
- MPI latency: ~1 μs
- Up to 256 processors

SGI Altix - Software
- SLES 9 with SGI ProPack (sn2 kernel)
- Intel Fortran compilers v8.1
- Intel C/C++ compilers v8.1
- Intel Math Kernel Library v7
- Intel VTune
- Torque/Maui
- OpenMP
- MPT (SGI mpich libraries)
- fftw-2.1.5, fftw
- hdf4, hdf5
- ncarg
- petsc
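Since the software stack above includes OpenMP alongside the Intel 8.1 compilers, a shared-memory code can treat the Altix as one large multiprocessor. The sketch below is a generic OpenMP example, not a Hecate-specific recipe; the compile and run lines in the comment assume the usual Intel compiler driver name and its -openmp flag, which may differ from what is actually installed.

    /*
     * omp_hello.c - minimal OpenMP sketch for a shared-memory system such as the Altix.
     * Illustrative build/run (assumed Intel compiler driver and flag, not verified on Hecate):
     *   icc -openmp omp_hello.c -o omp_hello
     *   OMP_NUM_THREADS=8 ./omp_hello
     */
    #include <stdio.h>
    #include <omp.h>

    int main(void)
    {
        /* Every thread executes this region; on the Altix the threads can be
           spread across many physical processors in the single shared memory. */
        #pragma omp parallel
        {
            printf("Hello from thread %d of %d\n",
                   omp_get_thread_num(), omp_get_num_threads());
        }
        return 0;
    }

Because the machine is a single system image, the thread count can in principle be raised toward the processor count, subject to whatever limits the Torque/Maui batch system imposes.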

IBM Blue Gene/L - Orangena
- PowerPC 440 processors (2 per node)
- 1024 nodes
- 512 MB RAM per node (256 MB per processor)
- 5 interconnects, including a 3D torus
- 8 TB local disk
- Aggregate performance in the TFlops range

IBM Blue Gene/L - Full system architecture
- 1024 compute nodes
  - 2 PowerPC 440 CPUs each
  - 512 MB RAM each
  - 1 rack
  - 35 kVA
  - 100 kBTU/hr
- 2 racks of supporting servers and disks
  - Service node
  - Front-end node
  - 8 storage nodes
  - 8 TB GPFS storage
  - 1 Cisco switch
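For scale, 100 kBTU/hr of heat corresponds to roughly 29 kW (using the standard conversion 1 kBTU/hr ≈ 0.293 kW), which is consistent with a rack drawing on the order of 35 kVA of electrical power.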

IBM Blue Gene/L

IBM Blue Gene/L - Networks
- 3D torus network
- Collective (tree) network
- Barrier network
- Functional network
- Service network

IBM Blue Gene/L - Software
- LoadLeveler (coming soon)
- mpich
- XL Fortran Advanced Edition V9.1 (mpxlf, mpf90, mpf95)
- XL C/C++ Advanced Edition V7.0 (mpcc, mpxlc, mpCC)
- fftw
- hdf
- netcdf
- BLAS, LAPACK, ScaLAPACK
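The wrappers listed above (mpxlf, mpcc, and the rest) produce MPI executables for the compute nodes. The sketch below is a generic MPI example rather than an Orangena-specific recipe; the compile line in the comment assumes the mpcc wrapper named above, and the actual launch mechanism (mpirun options, partition size, the forthcoming LoadLeveler) is system-specific.

    /*
     * mpi_hello.c - minimal MPI sketch for a Blue Gene/L-class machine.
     * Illustrative build (assumed wrapper from the list above):
     *   mpcc mpi_hello.c -o mpi_hello
     * How the job is launched on a partition is site-specific and not shown here.
     */
    #include <stdio.h>
    #include <mpi.h>

    int main(int argc, char **argv)
    {
        int rank, size, sum;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        /* A global reduction: the kind of collective operation that the
           collective (tree) network listed earlier is designed to speed up,
           while ordinary point-to-point messages travel over the 3D torus. */
        MPI_Allreduce(&rank, &sum, 1, MPI_INT, MPI_SUM, MPI_COMM_WORLD);

        if (rank == 0)
            printf("%d ranks, sum of ranks = %d\n", size, sum);

        MPI_Finalize();
        return 0;
    }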

IBM Blue Gene/L – More…

Dell Beowulf Cluster - Della
- Xeon processors
- 256 nodes
- 2 TB RAM (4 GB per processor)
- Gigabit Ethernet; 64 nodes also connected to InfiniBand
- 3 TB local disk
- Aggregate performance in the TFlops range

Dell Beowulf Cluster - Interconnects
- All nodes connected with Gigabit Ethernet
  - 1 Gb/s
  - MPI latency ~30 μs
- 64 nodes connected with InfiniBand
  - 10 Gb/s
  - MPI latency ~5 μs
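One way to see the difference between the two fabrics is a simple MPI ping-pong test. The sketch below is a generic microbenchmark, not a tool provided on Della; it assumes any working MPI installation (for example the OpenMPI 1.1 on the next slide) and reports a rough one-way latency that can be compared against the ~30 μs and ~5 μs figures above.

    /*
     * pingpong.c - rough one-way MPI latency estimate; run with exactly 2 ranks.
     * Half of the averaged round-trip time for a 1-byte message approximates
     * the MPI latency; actual numbers depend on the MPI library, the fabric
     * the two ranks happen to use, and node placement.
     */
    #include <stdio.h>
    #include <mpi.h>

    #define REPS 1000

    int main(int argc, char **argv)
    {
        int rank;
        char byte = 0;
        MPI_Status status;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        MPI_Barrier(MPI_COMM_WORLD);
        double t0 = MPI_Wtime();
        for (int i = 0; i < REPS; i++) {
            if (rank == 0) {
                MPI_Send(&byte, 1, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
                MPI_Recv(&byte, 1, MPI_CHAR, 1, 0, MPI_COMM_WORLD, &status);
            } else if (rank == 1) {
                MPI_Recv(&byte, 1, MPI_CHAR, 0, 0, MPI_COMM_WORLD, &status);
                MPI_Send(&byte, 1, MPI_CHAR, 0, 0, MPI_COMM_WORLD);
            }
        }
        double t1 = MPI_Wtime();

        if (rank == 0)
            printf("approx. one-way latency: %.2f us\n",
                   (t1 - t0) / (2.0 * REPS) * 1e6);

        MPI_Finalize();
        return 0;
    }

Running it once between two Ethernet-only nodes and once between two of the InfiniBand-connected nodes gives a direct comparison of the two interconnects.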

Dell Beowulf Cluster - Software
- Elders RHEL 4-based image (ELsmp kernel)
- Intel compilers
- Torque/Maui
- OpenMPI-1.1
- fftw-2.1.5, fftw
- R
- Matlab R2006a

Dell Beowulf Cluster – More…

Storage
- 38 TB delivered GPFS filesystem
- At least 200 MB/s throughput
- Installation at the end of this month
- Fees to recover half the cost
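For a sense of scale, at 200 MB/s a 100 GB dataset would take roughly 100,000 MB / 200 MB/s ≈ 500 seconds, a bit over eight minutes, to stream in or out; the 100 GB figure is purely illustrative.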

Getting Access
- 1-3 page proposal
- Scientific background and merit
- Resource requirements
  - Number of concurrent CPUs
  - Total CPU hours
  - Memory per process / total memory
  - Disk space
- A few references
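As a purely illustrative example of the resource-requirement arithmetic: a code that uses 64 CPUs at a time for 500 wall-clock hours over the allocation period would request 64 × 500 = 32,000 total CPU hours; the numbers are invented for the example, not guidance on typical awards.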

Other resources
- adrOIT
- Condor
- Programming help

Questions