SPEC Benchmarks at HLRS
Matthias Müller, High Performance Computing Center Stuttgart (Höchstleistungsrechenzentrum Stuttgart, HLRS)

The role of HLRS in Germany
Three national high performance computing centers:
– HLRS at Stuttgart (1996)
– NIC at Jülich (1997)
– LRZ at Munich (1999)
They provide supercomputing performance to the German research community.
Coordinated procurements:
– Provide different kinds of architectures
– Provide state-of-the-art technology
Procurement history:
1996: NEC SX-4 (HLRS)
1997: Cray T3E (HLRS and NIC)
1999: NEC SX-5 (HLRS)
2000: Hitachi SR8000 (Munich)
2002: IBM (NIC)
2003: ??? (HLRS)

Target Platforms at HLRS
MPP, SMP, and cluster-of-SMP systems:
– Cray SV1/20
– SGI Origin 2000
– NEC SX-4/32 H2
– NEC SX-5/32 M2e
– Cray T3E-900/512
– Hitachi SR8000
– hpcLine
– hp N-Class
– NEC Azusa
– HP zx2600 cluster
– Intel Tiger

Intel Itanium 2 evaluation at HLRS
General steps in the evaluation of new systems:
– Portability issues (compiler, libraries)
– Correctness
– Stability
– Performance
The SPEC OMP benchmarks offer a convenient framework for all of the issues above (see the sketch below).
The performance of our own applications is more important, but:
– SPEC benchmarks can represent the non-power users
– You can look at the individual application results and pick the numbers that are most relevant
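Below is a minimal sketch of how such an evaluation run might be driven with the SPEC toolset. It is an assumption-laden illustration, not the exact HLRS setup: the config file name, the Intel IA-64 compilers (ecc/efc), the optimization flag, and the "medium" suite alias are all illustrative, and the runspec options shown follow the SPEC CPU2000-era tool on which SPEC OMP2001 is built.

    # Hypothetical fragment of a SPEC OMP config file (itanium2.cfg)
    ext      = ia64-linux
    CC       = ecc       # Intel C compiler for Itanium (assumed)
    FC       = efc       # Intel Fortran compiler for Itanium (assumed)
    OPTIMIZE = -O3

    # Hypothetical invocation: build, run, and validate the medium suite
    # with baseline tuning, using the config above
    runspec --config=itanium2.cfg --tune=base medium

Because the tool builds each code, checks its output against reference results, and reports run times, a single pass already touches portability (does it compile and link?), correctness (does it validate?), stability (does it run repeatedly?), and performance.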

SMP Performance Gain Itanium/Itanium 2
(Slide figure not captured in this transcript.)

SPEC HPC2002 effort
Full application benchmarks (including I/O) targeted at HPC platforms
OpenMP and/or MPI
Currently three applications:
– SPECenv: weather forecasting
– SPECseis: seismic processing
– SPECchem: computational chemistry
HLRS efforts:
– Increasing portability
– Candidate codes for new benchmarks
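To illustrate the "OpenMP and/or MPI" programming model the suite targets, here is a small, self-contained hybrid example in C. It is not taken from any SPEC code; it is only a sketch of the common pattern of MPI across nodes and OpenMP threads within each SMP node.

    /* Hybrid MPI + OpenMP sketch: each MPI rank sums part of a series with
       OpenMP threads, then the partial sums are combined with MPI_Reduce.
       Compile e.g. with: mpicc -fopenmp -std=c99 hybrid.c -o hybrid */
    #include <mpi.h>
    #include <omp.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        MPI_Init(&argc, &argv);

        int rank, nranks;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &nranks);

        const int n = 1000000;            /* terms handled per rank */
        double local = 0.0;

        /* Thread-level parallelism inside one SMP node / MPI rank */
        #pragma omp parallel for reduction(+:local)
        for (int i = 0; i < n; i++)
            local += 1.0 / (double)(rank * n + i + 1);

        double global = 0.0;
        MPI_Reduce(&local, &global, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);

        if (rank == 0)
            printf("partial harmonic sum = %f  (ranks=%d, threads per rank=%d)\n",
                   global, nranks, omp_get_max_threads());

        MPI_Finalize();
        return 0;
    }

Run, for example, with mpirun -np 4 and OMP_NUM_THREADS set per node; a full application benchmark such as SPECenv can apply the same split, MPI between nodes and OpenMP within a node, to its real computation and I/O.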