Running Mantevo Benchmark on a Bare-metal Server Mohammad H. Mofrad January 28, 2016

Contents: Mantevo benchmark (CloverLeaf, CoMD, MiniFE); running Mantevo on bare metal; results

Mantevo Benchmark Mantevo is a collection of application performance proxies known as mini-applications (miniapps). Miniapps offer two advantages: they encapsulate the most important computational operations of a scientific application, and they consolidate physics capabilities that belong to a variety of scientific applications.

Mantevo Benchmark – Selected Miniapps CloverLeaf: a mini-app that solves the compressible Euler equations on a Cartesian grid using an explicit, second-order accurate method. CoMD: a proxy for classical molecular dynamics algorithms and workloads as used in materials science. MiniFE: a proxy application for unstructured implicit finite element codes.

Mantevo Benchmark – Selected Libraries OpenMP: Open Multi-Processing (OpenMP) is an Application Programming Interface (API) that supports multi-platform shared-memory multiprocessing programming in C, C++, and Fortran. MPI: the Message Passing Interface (MPI) is a standardized and portable message-passing system; the mpicc wrapper compiles and links C programs against the MPI libraries.
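
For illustration only, the following is a minimal hybrid MPI+OpenMP "hello" in C, a generic sketch (not taken from the Mantevo suite) showing how the two libraries are typically combined; it would be built with something like mpicc -fopenmp and launched with mpirun.

    /* Minimal MPI + OpenMP sketch (not part of Mantevo).
     * Typical build:  mpicc -fopenmp hello_hybrid.c -o hello_hybrid
     * Typical run:    mpirun -np 2 ./hello_hybrid                   */
    #include <mpi.h>
    #include <omp.h>
    #include <stdio.h>

    int main(int argc, char **argv) {
        MPI_Init(&argc, &argv);                 /* start the MPI runtime     */

        int rank, nprocs;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);   /* this process's id         */
        MPI_Comm_size(MPI_COMM_WORLD, &nprocs); /* number of MPI processes   */

        /* Each MPI process spawns its own team of OpenMP threads. */
        #pragma omp parallel
        {
            printf("MPI rank %d/%d, OpenMP thread %d/%d\n",
                   rank, nprocs, omp_get_thread_num(), omp_get_num_threads());
        }

        MPI_Finalize();                         /* shut down the MPI runtime */
        return 0;
    }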

Single Node Bare-metal Server Specification
Linux kernel: el7.x86_64
Linux distribution: CentOS, 64-bit
CPU: Intel Core i5, 4 GHz
HDD: Seagate 1 TB Serial ATA
Network interface: Realtek Gigabit Ethernet, 1000 Mb/s
RAM: Samsung DDR MHz
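
As a side note (a small sketch, not from the original slides), the kernel release and online core count of such a node can be confirmed programmatically with standard POSIX calls:

    /* Sketch: report kernel release and online CPU core count (POSIX). */
    #include <stdio.h>
    #include <unistd.h>
    #include <sys/utsname.h>

    int main(void) {
        struct utsname u;
        if (uname(&u) == 0)   /* release string ends in .el7.x86_64 on this node */
            printf("kernel: %s %s (%s)\n", u.sysname, u.release, u.machine);

        long cores = sysconf(_SC_NPROCESSORS_ONLN);   /* online CPU cores */
        printf("online cores: %ld\n", cores);
        return 0;
    }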

Results Each experiment is performed 10 times. The OpenMP implementation of Mantevo runs with 1, 2, and 4 threads; the MPI implementation of Mantevo runs with 1, 2, and 4 CPUs.
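
As a rough illustration of this methodology, the sketch below (an assumed structure only, not the actual Mantevo driver script) times an OpenMP kernel over 10 repetitions, with the thread count controlled through OMP_NUM_THREADS:

    /* Sketch of the repeated-run timing loop (not the Mantevo script). */
    #include <omp.h>
    #include <stdio.h>

    #define RUNS 10
    #define N    (1L << 24)

    /* A trivial stand-in for a miniapp kernel. */
    static double kernel(void) {
        double sum = 0.0;
        #pragma omp parallel for reduction(+:sum)
        for (long i = 0; i < N; i++)
            sum += 1.0 / (double)(i + 1);
        return sum;
    }

    int main(void) {
        /* Thread count is chosen via the OMP_NUM_THREADS environment variable,
         * e.g. OMP_NUM_THREADS=4 ./timing_sketch                              */
        printf("threads = %d\n", omp_get_max_threads());
        for (int r = 0; r < RUNS; r++) {
            double t0 = omp_get_wtime();
            double s  = kernel();
            double t1 = omp_get_wtime();
            printf("run %d: %.3f s (checksum %.6f)\n", r + 1, t1 - t0, s);
        }
        return 0;
    }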

Results - OpenMP

Results - MPI

What’s done? Reading Yuyu’s Supercomputing conference poster; CentOS 7 configuration; Mantevo installation; tweaking the Mantevo script; collecting results.

What’s next? Extending the experiments by introducing the Kernel-based Virtual Machine (KVM): installing and configuring KVM (done); installing the Mantevo benchmark on a virtual machine (ongoing); running the Mantevo benchmark on KVM; comparing KVM with bare metal.

References Yuyu’s poster at the Supercomputing 2015 conference; Yuyu’s scaletest GitHub repository; Mantevo benchmark homepage.