Running OpenIFS on HECToR


Running OpenIFS on HECToR
Glenn Carver (ECMWF), Rosalyn Hatcher (NCAS-CMS), Grenville Lister (NCAS-CMS), Mark Richardson (NAG)

Running OpenIFS on HECToR
- CMS hosts an OpenIFS FCM code repository on PUMA
- Code is available over the web
- Accessible to those holding an OpenIFS licence
- Reading and Manchester hold licences
- Glenn & Ros maintain the list of authorised users
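Checking a working copy out of an FCM repository is a one-line operation. The sketch below is illustrative only: the repository URL is a placeholder, since the real PUMA location is issued to licensed users.

```shell
# Hypothetical checkout of the OpenIFS sources from the PUMA FCM repository.
# The URL is a placeholder; licensed users are given the real location.
fcm checkout https://puma.example.ac.uk/svn/openifs/trunk openifs

# Confirm the working copy and show its repository details
fcm info openifs
```

FCM wraps Subversion, so the usual `checkout`/`update`/`commit` workflow applies to the OpenIFS sources.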

Running OpenIFS on HECToR
HECToR code installation:
- FCM
- grib_api
- eckit
- ecbuild
- FDB
- MARS
- Metview

Running OpenIFS on HECToR
HECToR OpenIFS build:
- a few lines of scripting (template provided)
- driven by a simple FCM configuration file
HECToR job submission:
- simple batch scheduler (PBS) script
- sample jobs (T159, T511, T1279) supplied by Glenn: start data, namelists
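A PBS submission script for a job like this really is only a handful of lines. The sketch below is not the distributed template: the job size, budget code, executable name, and experiment id are all placeholders, and the `aprun` launcher reflects HECToR's Cray architecture.

```shell
#!/bin/bash
#PBS -N openifs_t159
#PBS -l mppwidth=64        # number of processing elements (Cray-style PBS directive)
#PBS -l walltime=01:00:00
#PBS -A n02-xxxx           # placeholder budget code

cd $PBS_O_WORKDIR

# Launch the model with Cray's aprun: 16 MPI tasks, 4 OpenMP threads each.
# 'master.exe -e oifs' follows the usual OpenIFS naming convention, but
# treat the executable path and experiment id as placeholders here.
export OMP_NUM_THREADS=4
aprun -n 16 -d 4 ./master.exe -e oifs
```

The script is submitted with `qsub` in the usual way; the T511 and T1279 sample jobs differ mainly in the requested core count and walltime.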


Running OpenIFS on HECToR
DCSE Project: OpenIFS FDB IO
- single writer replaced with a parallel IO model, hiding the IO cost
- important for high-resolution modelling and/or runs that create large volumes of data


Running OpenIFS on a desktop
- OpenIFS supports the gfortran, PGI, Intel, IBM and Cray compilers
- SUSE desktop: 8-core Intel i7, 32 GB RAM
- T159L60: 4 OpenMP threads, 20 days/hr (30 min timestep, gfortran)
- T511L60: 2 MPI tasks x 4 OpenMP threads, 1 day per 1.6 hrs (15 min timestep)
- performance is IO-dependent (non-FDB)
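For a desktop run like the T511L60 case above (2 MPI tasks x 4 OpenMP threads on the 8-core i7), the launch looks roughly like this; the executable path and experiment id are placeholders.

```shell
# Hybrid MPI+OpenMP desktop launch, matching the T511L60 case:
# 2 MPI tasks, each running 4 OpenMP threads (8 cores in total).
export OMP_NUM_THREADS=4
mpirun -np 2 ./master.exe -e abcd   # 'abcd' is a placeholder experiment id
```

Oversubscribing the threads (tasks x threads > physical cores) usually hurts throughput, so the task/thread split is chosen to fill, not exceed, the available cores.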