SPEC HPG Benchmarks for HPC Systems — Kumaran Kalyanasundaram for SPEC High-Performance Group. Kumaran Kalyanasundaram, PhD, Chair, SPEC HPG; Manager, SGI Performance Engineering.



SPEC HPG's Purpose

- The High Performance Group focuses on the development of application benchmarks for high-performance computers.

SPEC HPG

- Founded in 1994 (the Perfect Benchmarks initiative became HPG).
- Members from industry and academia.
- Two active benchmark suites: SPEC OMP and SPEC HPC2002.
- New MPI2006 benchmark currently under development.

SPEC HPG Benchmark Suites

- Jan 1994: Founding of SPEC HPG
- Oct 1995: HPC96
- June 2001: OMP2001
- June 2002: HPC2002
- OMPL2001; MPI2006 (Jan)

SPEC OMP

- Benchmark suite developed by SPEC HPG (High Performance Group)
- Benchmark suite for performance testing of shared-memory processor systems
- Uses OpenMP versions of SPEC CPU2000 benchmarks and candidates

Why Did SPEC Choose OpenMP?

- The benchmark suite is focused on SMP systems.
- OpenMP is a standard, and is applicable to Fortran, C, and C++.
- Directive-based OpenMP allows the serial version to remain largely intact.
- Quickest path to parallel code conversion.

OMP/CPU2000 Similarities

- Same tools used to run the benchmarks
- Similar run and reporting rules
- Uses the geometric mean to calculate overall performance relative to a baseline system
- Similar output format

SPEC OMP Benchmark Principles

- Source code based
- Limited code and directive modifications
- Focused on SMP performance
- Requires a base run:
  - with no source modifications
  - single set of compiler flags for all benchmarks
- SPEC-supplied tools required to run the benchmark

OMPM2001 Benchmarks

OMP vs CPU2000

Program Memory Footprints

SPEC HPC2002 Benchmark

- Full application benchmarks (including I/O) targeted at HPC platforms
- Serial and parallel (OpenMP and/or MPI)
- Currently three applications:
  - SPECenv: weather forecasting
  - SPECseis: seismic processing, used in the search for oil and gas
  - SPECchem: computational chemistry, used in the chemical and pharmaceutical industries (GAMESS)
- All codes include several data sizes

SPEC MPI2006

- An application benchmark suite that measures CPU, memory bandwidth, interconnect, compiler, and MPI performance.
- The benchmark search program is open until March 31, 2006.
- Candidate codes in the areas of computational chemistry, weather forecasting, high-energy physics, oceanography, CFD, etc.

Future Goals

- Very large data sets for MPI2006.
- Follow-on to SPEC OMPM(L)2001.
- Update the SPEC HPC2002 suite.