HPC2N - High Performance Computing Center North: System Hardware
Igor Monastyrnyi, Lappeenranta University of Technology, June 2015


System hardware and software overview
INTRODUCTION TO PARALLEL COMPUTING
Professor, TkT Jari Porras

HPC2N - High Performance Computing Center North
Umeå University, Sweden

HPC2N Super Cluster
- 120 nodes, each with dual Athlon MP2000+ CPUs (1.66 GHz)
- 1 GB memory per node
- Wulfkit SCI high-speed interconnect
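As a rough sanity check on the cluster's scale, its aggregate peak floating-point rate can be estimated from the node count and clock speed above. The sketch below assumes two floating-point operations per cycle per CPU; that per-cycle figure is an assumption about the Athlon MP, not a number from the slides.

```python
# Rough peak-performance estimate for the HPC2N Super Cluster.
# Assumption (not from the slides): each Athlon MP CPU retires
# 2 floating-point operations per clock cycle.
nodes = 120
cpus_per_node = 2
clock_ghz = 1.66
flops_per_cycle = 2  # assumed, for illustration

peak_gflops = nodes * cpus_per_node * clock_ghz * flops_per_cycle
print(f"Estimated peak: {peak_gflops:.0f} Gflop/s")  # close to the quoted 800 Gflop/s
```

Under that assumption the estimate lands within a few percent of the 800 Gflop/s peak quoted on the technical-characteristics slide.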

Facts
Hardware:
- Rack chassis
- 2 AMD Athlon MP2000+ CPUs running at 1.66 GHz
- 1 Tyan Tiger MPX motherboard
- 1-4 GB memory
- 1 Western Digital 20 GB hard disk
- 1 Fast Ethernet NIC
- 1 Wulfkit3 SCI adapter
Software:
- Linux 2.4.x
- Debian 3.0
- ScaMPI
- ATLAS
- ScaLAPACK
- PGI Compiler suite

Technical characteristics
- 3-dimensional torus organized as a 4 × 5 × 6 grid
- Bandwidth of 667 Mbytes/s
- Peak performance of 800 Gflops/s
- Floating-point performance of 3334 Gflops/s
- Application-level latency of 1.46 µs
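The 4 × 5 × 6 torus means each of the 120 nodes can be addressed by a 3-D coordinate, with wraparound links on every axis. A small sketch of that addressing follows; the x-fastest rank-to-coordinate ordering is an illustrative choice, not something documented in the slides.

```python
# Map a linear node rank (0..119) to coordinates in a 4 x 5 x 6 torus.
# The x-fastest ordering is an illustrative assumption.
DIMS = (4, 5, 6)

def rank_to_coords(rank):
    x = rank % DIMS[0]
    y = (rank // DIMS[0]) % DIMS[1]
    z = rank // (DIMS[0] * DIMS[1])
    return (x, y, z)

def neighbors(rank):
    """Torus neighbours: one step along each axis, wrapping at the edges."""
    x, y, z = rank_to_coords(rank)
    result = []
    for axis, size in enumerate(DIMS):
        for step in (-1, 1):
            c = [x, y, z]
            c[axis] = (c[axis] + step) % size  # wraparound link
            result.append(tuple(c))
    return result

print(rank_to_coords(0))   # (0, 0, 0)
print(len(neighbors(0)))   # 6 neighbours in a 3-D torus
```

The modulo in `neighbors` is what makes the grid a torus rather than a mesh: node (0, 0, 0) is directly linked to (3, 0, 0), (0, 4, 0), and (0, 0, 5).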

Applications
- Molecular dynamics simulations
- Theoretical studies of the structure and function of metal proteins
- Virtual radiography
- Real-time visualization of motion

Links
- Dolphin Interconnect Solutions
- Scali A/S
- Wulfkit
- Debian

ScaMPI - High Performance MPI
- Highly optimised implementation
- Multi-thread-safe with hot fault tolerance
- Automatic selection of physical transport mechanism
- MIMD support
- Heterogeneous clusters
- Exact message size option
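ScaMPI is an MPI implementation, so applications use standard message-passing semantics (blocking send and receive between ranks) over the SCI fabric. The slides show no code, but the point-to-point pattern can be modelled with plain Python queues standing in for the send/receive calls; this is purely an illustration of the programming model, not the ScaMPI or MPI API.

```python
# Toy model of MPI-style point-to-point messaging between two "ranks".
# A queue stands in for the interconnect; this illustrates the
# programming model only, not actual ScaMPI/MPI code.
import threading
import queue

channel = queue.Queue()  # one-directional link: rank 0 -> rank 1

def rank0():
    # analogue of a blocking send to rank 1
    channel.put(("rank0", list(range(5))))

results = []

def rank1():
    # analogue of a blocking receive from rank 0
    sender, payload = channel.get()
    results.append(sum(payload))

t0 = threading.Thread(target=rank0)
t1 = threading.Thread(target=rank1)
t0.start(); t1.start()
t0.join(); t1.join()
print(results)  # [10]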

Lappeenranta University of Technology June 2015Igor Monastyrnyi SCI - Scalable Coherent Interface High-perfomance bus Extremely high speed Point-to-Point packet link protocol