
The High Performance Cluster for Lattice QCD Calculations: System Monitoring and Benchmarking
Lucas Fernandez Seivane (quevedin@mail.desy.de)
Michał Kapałka (kapalka@icslab.agh.edu.pl)
Supervisor: Andreas Gellrich, IT Division
DESY Summer Student Programme 2002, September 2002

A cluster?!?
- Lattice QCD calculations are complicated
- We need a lot of power, so what do we do? Buy a parallel computer? Or maybe a cluster… a good solution, but we have to MANAGE it!
- Manage = install software + test + monitor + benchmark + analyse + repair + etc.

Monitoring
- Observing how things are going
- Measuring CPU/network/whatever load
- Making logs (so nobody plays Quake on a $500,000 cluster)
- And calling the system administrator at 4 am when a node refuses to work…
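The "measuring load and making logs" part can be pictured with a tiny per-node probe like the sketch below. This is a hypothetical illustration, not the cluster's actual monitoring code: it reads the 1-minute load average from Linux's /proc/loadavg and appends a timestamped line to a log file (the path /var/log/node-load.log is a made-up placeholder). Run periodically on every node, for example from cron, such probes produce the data a central monitor can collect, plot, and alert on.

```c
/* Minimal sketch of a per-node load probe (illustration only; not the
 * monitoring code actually used on the cluster).  Reads the 1-minute
 * load average from /proc/loadavg and appends a timestamped line to a
 * log file, which a central collector could gather from every node. */
#include <stdio.h>
#include <time.h>

int main(void)
{
    FILE *in = fopen("/proc/loadavg", "r");
    if (!in) {
        perror("fopen /proc/loadavg");
        return 1;
    }

    double load1;                      /* 1-minute load average */
    if (fscanf(in, "%lf", &load1) != 1) {
        fclose(in);
        return 1;
    }
    fclose(in);

    /* Hypothetical log location; a real setup would pick its own path. */
    FILE *log = fopen("/var/log/node-load.log", "a");
    if (!log) {
        perror("fopen log");
        return 1;
    }
    fprintf(log, "%ld %.2f\n", (long)time(NULL), load1);
    fclose(log);
    return 0;
}
```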

Monitoring ‘lattice’ – before
- The ‘lattice’ cluster = master + 32 nodes
- Each node = dual Xeon CPUs (16 nodes at 1.7 GHz + 16 at 2.0 GHz), 1 GB RAM, 18 GB SCSI HDD
- Network: Fast Ethernet + Myrinet

Monitoring – after

Benchmarking
- Benchmark = compare or test (in order to improve things)
- On a cluster we test: single nodes (CPU, memory, …) and communication between them (here: MPI)
- MPI – a standard: sending data = passing messages between processes
- (diagram: Node A and Node B exchange a message MSG through matching send/receive calls)
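To make the "passing messages between processes" pattern concrete, here is a minimal MPI ping-pong sketch in C. It is an illustration only, not the benchmark code actually run on the cluster, and the MSG_SIZE and REPS values are arbitrary choices: rank 0 bounces a message off rank 1, and half the averaged round-trip time gives a rough one-way latency for that message size. Built with mpicc and launched with mpirun -np 2, it exercises exactly the send/receive exchange sketched on the slide; sweeping the message size separates latency-dominated from bandwidth-dominated behaviour, e.g. Fast Ethernet versus Myrinet.

```c
/* Minimal MPI ping-pong sketch (illustration of the send/receive pattern
 * above, not the cluster's actual benchmark).  Rank 0 sends a message to
 * rank 1 and waits for the reply; half the averaged round-trip time
 * approximates the one-way message latency. */
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

#define MSG_SIZE 1024   /* bytes per message; vary to probe latency vs. bandwidth */
#define REPS     1000   /* repetitions, to average out timer noise */

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    if (size < 2) {
        if (rank == 0) fprintf(stderr, "run with at least 2 processes\n");
        MPI_Finalize();
        return 1;
    }

    char *buf = calloc(1, MSG_SIZE);
    MPI_Status status;

    double start = MPI_Wtime();
    for (int i = 0; i < REPS; i++) {
        if (rank == 0) {
            MPI_Send(buf, MSG_SIZE, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
            MPI_Recv(buf, MSG_SIZE, MPI_CHAR, 1, 0, MPI_COMM_WORLD, &status);
        } else if (rank == 1) {
            MPI_Recv(buf, MSG_SIZE, MPI_CHAR, 0, 0, MPI_COMM_WORLD, &status);
            MPI_Send(buf, MSG_SIZE, MPI_CHAR, 0, 0, MPI_COMM_WORLD);
        }
    }
    double elapsed = MPI_Wtime() - start;

    if (rank == 0)
        printf("%d-byte messages: avg one-way time %.2f us\n",
               MSG_SIZE, elapsed / (2.0 * REPS) * 1e6);

    free(buf);
    MPI_Finalize();
    return 0;
}
```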

Benchmarking – results
- Interpretation is usually… NOT trivial