2011/08/23 National Center for High-performance Computing (NCHC): Advanced Large-scale Parallel Supercluster (ALPS)

Hostname: alps.nchc.org.tw
Login Node: alps.nchc.org.tw
Interactive Nodes: alpi1.nchc.org.tw ( ), alpi2.nchc.org.tw ( ), alpi3.nchc.org.tw ( ), alpi4.nchc.org.tw ( ), alpi5.nchc.org.tw ( )
Directories:
  /home (209 TB), for users' home directories
  /pkg (16 TB), for software packages
  /work (162 TB), for working scratch space
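A typical first session might look like the sketch below. The hostnames and filesystem paths come from the list above; the account name and the per-user /work subdirectory are illustrative assumptions, not site-confirmed details.

    # Log in to the login node (replace "user" with your NCHC account name).
    ssh user@alps.nchc.org.tw

    # /home holds source code and small files; jobs should run from scratch.
    # A per-user directory under /work is an assumption; check the site docs.
    cd /work/$USER

    # Check free space on the three shared filesystems.
    df -h /home /pkg /work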

System Family: Acer Group Cluster
System Model: Acer AR585 F1 Cluster
Processors:
  AMD Opteron 6174, 12 cores, 2.2 GHz (compute nodes)
  AMD Opteron 6136, 8 cores, 2.4 GHz (fat nodes)
Main Memory (per node):
  128 GB (compute nodes)
  256 GB (fat nodes)

Operating System: Novell SuSE Linux Enterprise 11 SP1
Job Scheduler & Queuing System: Platform LSF (Load Sharing Facility) 7.06
Parallel Filesystem: Lustre
Development Tools:
  Compilers: Intel Cluster Toolkit, PGI CDK/Server, x86 Open64, GCC
  Debugger: Allinea DDT
Libraries:
  MPI/OpenMP: Platform MPI (formerly HP-MPI), Intel MPI, PGI MPI/OpenMP, MVAPICH/MVAPICH2
  Math: Intel MKL (Math Kernel Library), AMD ACML (AMD Core Math Library)
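To tie this stack together, here is a minimal sketch of an LSF batch script for an MPI job. It is hypothetical: the job name, rank count, output file names, and mpirun launch behavior under LSF are assumptions, since the slides do not show the site's actual queue configuration.

    #!/bin/bash
    # Hypothetical LSF batch script for ALPS (a sketch, not a site template).
    #BSUB -J hello_mpi         # job name
    #BSUB -n 24                # number of MPI ranks
    #BSUB -o %J.out            # stdout file; %J expands to the job ID
    #BSUB -e %J.err            # stderr file

    # Launch with Platform MPI's mpirun; under LSF the allocated hosts are
    # usually picked up automatically (an assumption; some sites require a
    # hostfile or an explicit -lsf flag).
    mpirun -np 24 ./hello_mpi

Compile the binary first with whichever MPI compiler wrapper you load (for example, mpicc -O2 -o hello_mpi hello_mpi.c), then submit the script with bsub < job.lsf and monitor it with bjobs.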