A PRACTICAL INTRODUCTION TO THE ANSELM SUPERCOMPUTER: Infrastructure, access and user support. David Hrbáč, 2013-09-27.

A PRACTICAL INTRODUCTION TO THE ANSELM SUPERCOMPUTER: Infrastructure, access and user support – David Hrbáč

Intro: What is a supercomputer – Infrastructure – Access to the cluster – Support – Log-in

Why Anselm: 6000 name suggestions – the very first coal mine in the region – the very first mine to have a steam engine – Anselm of Canterbury

Early days

Future - Hal

What is a supercomputer: a bunch of computers – a lot of CPU power – a lot of RAM – local storage – shared storage – high-speed interconnect – Message Passing Interface (MPI)
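
Processes on different nodes typically cooperate via MPI. As a hedged illustration (the module name and the program name below are assumptions, not taken from the slides), launching an MPI program on the allocated nodes could look like this:

  module load bullxmpi              # load an MPI stack; the exact module name may differ
  mpirun -np 32 ./my_mpi_program    # start 32 MPI ranks spread across the allocated nodes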

Supercomputer

Supercomputer ?!?

Anselm HW: 209 compute nodes – 3344 cores – 15 TB RAM – 300 TB /home – 135 TB /scratch – Bull Extreme Computing – Linux (RHEL clone)

Types of nodes: 180 compute nodes – 23 GPU-accelerated nodes – 4 MIC-accelerated nodes – 2 fat nodes

General nodes: 180 nodes, 2880 cores in total – two Intel Sandy Bridge E5-2665 8-core 2.4 GHz processors per node – 64 GB of physical memory per node – one 500 GB SATA 2.5″ 7.2 krpm HDD per node – bullx B510 blade servers – cn[1-180]

GPU-accelerated nodes: 23 nodes, 368 cores in total – two Intel Sandy Bridge E5-2470 8-core 2.3 GHz processors per node – 96 GB of physical memory per node – one 500 GB SATA 2.5″ 7.2 krpm HDD per node – one NVIDIA Tesla Kepler K20 GPU accelerator per node – bullx B515 blade servers – cn[181-203]
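
A hedged sketch of inspecting the K20 (the qnvidia queue name and the PBS options are assumptions based on IT4I documentation of the time, not the slides):

  qsub -I -q qnvidia -l select=1    # request one GPU node interactively via PBS
  nvidia-smi                        # on the node: shows the Tesla K20 and its current utilization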

MIC-accelerated nodes: Intel Many Integrated Core architecture – 4 nodes, 64 cores in total – two Intel Sandy Bridge E5-2470 8-core 2.3 GHz processors per node – 96 GB of physical memory per node – one 500 GB SATA 2.5″ 7.2 krpm HDD per node – one Intel Xeon Phi 5110P MIC accelerator per node – bullx B515 blade servers – cn[204-207]

Fat nodes: 2 nodes, 32 cores in total – two Intel Sandy Bridge E5-2665 8-core 2.4 GHz processors per node – 512 GB of physical memory per node – two 300 GB SAS 3.5″ 15 krpm HDDs (RAID 1) per node – two 100 GB SLC SSDs per node – bullx R423-E3 servers – cn[208-209]

Storage: 300 TB /home – 135 TB /scratch – InfiniBand 40 Gb/s (native 3600 MB/s, over TCP 1700 MB/s) – Ethernet 114 MB/s – LustreFS

Lustre File System: clustered – OSS (object storage server) – MDS (metadata server) – limits in petabytes – parallel, striped
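
A quick way to see the metadata target and the object storage targets behind a Lustre mount, and their usage (a minimal sketch using the standard lfs client tool):

  lfs df -h /scratch    # lists the MDT and all OSTs of the file system with their free space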

Stripes: stripe count – parallel access – mind the script processes – stripe per gigabyte – lfs setstripe | lfs getstripe
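
A minimal sketch of checking and setting the stripe count on /scratch (the paths below are illustrative assumptions):

  lfs getstripe /scratch/$USER/mydata       # show the stripe count and layout of an existing file
  lfs setstripe -c 8 /scratch/$USER/newdir  # files created in newdir will be striped over 8 OSTs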

Quotas: /home – 250 GB; /scratch – no quota. Example: lfs quota -u hrb33 /home
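
The same check for your own account (a small sketch; hrb33 above is the presenter's login, and the /scratch/$USER path is an assumption):

  lfs quota -u $USER /home    # current usage against the 250 GB /home quota
  du -sh /scratch/$USER       # /scratch has no quota, but you can still watch your footprint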

Access to Anselm: Internal Access Call – 4x a year – 3rd round; Open Access Call – 2x a year – 2nd round

Proposals: proposals undergo evaluation – scientific – technical – economic; Principal Investigator – list of collaborators

Login credentials: personal certificate – signed request – credentials delivered encrypted: login, password, SSH keys, passphrase for the key
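
A hedged sketch of putting the received SSH key into use (the key file name is an assumption):

  chmod 600 ~/.ssh/id_rsa_anselm           # the private key must be readable only by you
  ssh-keygen -p -f ~/.ssh/id_rsa_anselm    # change the passphrase of the provided key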

Credentials lifetime: active project(s) or affiliation with IT4Innovations – deleted 1 year after the last project – removal announced 3 months, 1 month and 1 week before

Support: bug tracking and trouble ticketing system – documentation – IT4I internal command-line tools – IT4I web applications – IT4I Android application – end-user courses

Main means of support: Request Tracker

Documentation: documentation/ – still evolving – changes almost every day

IT4I internal command-line tools: it4free – rspbs – licenses allocation – internal in-house scripts (automation to handle the credentials, cluster automation, PBS accounting)

IT4I web applications: internal information system (project management, project accounting, user management) – cluster monitoring

IT4I Android application: internal tool – considering release to end-users – features: news, graphs – feature requests: accounting, support, node allocation, job status

Log-in to Anselm – finally! SSH protocol, via anselm.it4i.cz – login1.anselm.it4i.cz – login2.anselm.it4i.cz
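
A minimal log-in sketch (the key file name is an assumption; hrb33 is the login used elsewhere in the slides):

  ssh -i ~/.ssh/id_rsa_anselm hrb33@anselm.it4i.cz    # lands on one of the login nodes
  ssh hrb33@login2.anselm.it4i.cz                     # or target a specific login node directly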

VNC: ssh anselm -L 5961:localhost:5961 – Remmina – vncviewer :5961
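
Putting the slide's commands together (a sketch assuming a VNC server is already running as display :61, i.e. port 5961, on the login node):

  ssh anselm.it4i.cz -L 5961:localhost:5961    # forward local port 5961 to the login node
  vncviewer localhost:5961                     # connect through the tunnel (Remmina works the same way)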

Links: documentation/

Questions? Thank you.