Complementary Capability Computing on HPCx
Dr Alan Gray
HPCx User Group Meeting, 12th November 2008


HPCx and HECToR: complementary services

Complementary capability computing

Complementary services provide a unique opportunity to maximise the benefits for UK research:
- HECToR is our leading HPC facility.
- HPCx is our "National Supercomputer", trading overall utilisation in favour of a more flexible service.

The main principles behind providing complementary services are:
- to maximise the combined research benefits of HECToR and HPCx for the UK HPC user community;
- to ensure that the most appropriate service is chosen for the scientific research to be conducted.

HPCx: a more flexible service

- HPCx's operation has been modified to provide a more flexible service: we have adapted policies and enabled new features and functionality.
- HPCx can handle more unusual jobs, e.g. jobs that cannot easily or readily be accommodated on HECToR.
- Certain job classes have an adverse impact on overall utilisation on the prime national service, but they can be accommodated on a complementary service.
- HPCx users have helped to guide these changes: we visited all consortia with at least 1M AUs (allocation units) remaining.

HPCx flexibility

(Diagram slide; the figure itself was not captured in the transcript.)

Flexible Policies

Very long jobs:
- HPCx now has 48-hour queues; requests for even longer jobs will also be considered. (A checkpointing sketch for very long jobs follows below.)

Interactive and responsive computing:
- HPCx now has short, high-priority 20-minute debug queues.
- HPCx can now accommodate more flexible access patterns, for example to exploit grid technologies.
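Jobs that outgrow even a 48-hour queue slot are normally split across several slots using application-level checkpoint/restart. The sketch below shows the general idea only: the file name, state layout and checkpoint interval are hypothetical choices, not an HPCx-specific mechanism.

```c
/*
 * Application-level checkpoint/restart sketch for jobs that outgrow a
 * single queue slot. Illustrative only: the file name, state layout and
 * checkpoint interval are hypothetical, not an HPCx-specific mechanism.
 */
#include <stdio.h>
#include <stdlib.h>

#define N 1000000              /* size of the (stand-in) simulation state */
#define TOTAL_STEPS 100000L    /* total work, spanning several queue slots */
#define CHECKPOINT_EVERY 1000L /* steps between checkpoints */

static double state[N];

int main(void)
{
    long step = 0;

    /* Resume from a previous queue slot if a checkpoint exists. */
    FILE *f = fopen("checkpoint.dat", "rb");
    if (f) {
        if (fread(&step, sizeof step, 1, f) != 1 ||
            fread(state, sizeof(double), N, f) != (size_t)N)
            step = 0;          /* unreadable checkpoint: start afresh */
        fclose(f);
    }

    for (; step < TOTAL_STEPS; step++) {
        for (long i = 0; i < N; i++)
            state[i] += 1e-6;  /* stand-in for the real computation */

        if ((step + 1) % CHECKPOINT_EVERY == 0) {
            f = fopen("checkpoint.dat", "wb");
            if (!f) { perror("checkpoint"); return EXIT_FAILURE; }
            long next = step + 1;
            fwrite(&next, sizeof next, 1, f);
            fwrite(state, sizeof(double), N, f);
            fclose(f);
        }
    }
    return EXIT_SUCCESS;
}
```

Each run picks up from the last checkpoint, so a simulation needing far more than 48 hours can be resubmitted repeatedly.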

New Features and Functionality

Advance reservation:
- HPCx now supports the Highly Available Robust Co-scheduler (HARC).
- This allows users to reserve processors for a particular time in the future.
- It is relevant for meta-computing, computational steering and on-line visualisation.
- You may simply wish to reserve a set of processors for one afternoon. (A toy timing illustration follows below.)
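The transcript does not capture how a HARC reservation is actually made, so no HARC commands are shown here. As a toy illustration of the timing idea only, the hypothetical sketch below idles until a reserved start time before beginning work; real reservations are enforced by the scheduler, not by the application.

```c
/*
 * Toy illustration of the advance-reservation timing idea ONLY: this is
 * NOT the HARC interface, and real reservations are made through the
 * scheduler. The program idles until a reserved start time, passed as a
 * Unix epoch value, then begins its coupled/steered computation.
 */
#include <stdio.h>
#include <stdlib.h>
#include <time.h>
#include <unistd.h>

int main(int argc, char **argv)
{
    if (argc != 2) {
        fprintf(stderr, "usage: %s <reserved-start-epoch>\n", argv[0]);
        return EXIT_FAILURE;
    }

    time_t start = (time_t)atoll(argv[1]); /* hypothetical reserved time */
    time_t now = time(NULL);
    if (start > now)
        sleep((unsigned)(start - now));    /* wait for the window to open */

    puts("reservation window open: starting steered/coupled computation");
    return EXIT_SUCCESS;
}
```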

New Features and Functionality (continued)

Ensembles of jobs:
- We have developed functionality that allows multiple independent jobs to be submitted easily from the same job script and run simultaneously (one common MPI pattern is sketched below).

Visualisation:
- The ParaView parallel visualisation tool is now installed.

Computational steering:
- We are working directly with consortia to enable this for their specific use cases.
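The transcript does not describe the ensemble mechanism itself. One common way to run several independent ensemble members inside a single parallel job is to split MPI_COMM_WORLD into one communicator per member, as in this hedged sketch; the ensemble size is an illustrative choice, and the transcript does not confirm this is the mechanism HPCx provides.

```c
/*
 * Sketch of one common way to run an ensemble inside a single parallel
 * job: split MPI_COMM_WORLD into one communicator per independent member.
 * Illustrative only; ENSEMBLE_SIZE is a hypothetical choice.
 */
#include <mpi.h>
#include <stdio.h>

#define ENSEMBLE_SIZE 4

int main(int argc, char **argv)
{
    int world_rank, member, member_rank;
    MPI_Comm member_comm;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &world_rank);

    /* Assign each process to a member and give each member its own
     * communicator; the members then compute independently, e.g. with
     * different input files or random seeds. */
    member = world_rank % ENSEMBLE_SIZE;
    MPI_Comm_split(MPI_COMM_WORLD, member, world_rank, &member_comm);
    MPI_Comm_rank(member_comm, &member_rank);

    printf("ensemble member %d, local rank %d\n", member, member_rank);

    MPI_Comm_free(&member_comm);
    MPI_Finalize();
    return 0;
}
```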

Additional flexibility

Shared memory:
- HPCx is a cluster of fat shared-memory nodes, each with several processors under the control of a single operating system.
- This is advantageous for applications that exploit shared-memory parallelism and for users with large-memory jobs.
- Ways of accessing the full memory of a node:
  - under-populating the node (running fewer tasks than there are processors, so each task has a larger share of the node's memory);
  - shared-memory segments and shared-memory parallelism (sketched below).
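Shared-memory parallelism on a fat node is most commonly expressed with OpenMP threads, which all run under the node's single operating system image. A minimal sketch, with an illustrative array size:

```c
/*
 * Minimal OpenMP sketch of shared-memory parallelism on a fat node: all
 * threads run under the node's single operating system image and share
 * one large array. The array size and work are illustrative.
 */
#include <omp.h>
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    const long n = 100000000L;            /* ~0.8 GB shared array */
    double *a = malloc(n * sizeof *a);
    if (!a) { perror("malloc"); return EXIT_FAILURE; }

    #pragma omp parallel for              /* threads share the array */
    for (long i = 0; i < n; i++)
        a[i] = 2.0 * (double)i;

    printf("ran with up to %d threads\n", omp_get_max_threads());
    free(a);
    return EXIT_SUCCESS;
}
```

Built with an OpenMP-capable compiler (for example with a -fopenmp or equivalent flag), the loop can use every processor on the node while all threads share the one array; the same large-memory benefit applies to a single under-populated MPI task.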

Additional flexibility (continued)

Large-memory jobs:
- As part of the Phase 4 upgrade of HPCx, two of the IBM 575+ servers will have 128 GB of memory.

Data-intensive jobs:
- HPCx has a significant tape store.
- This is ideally suited to users who need to archive large amounts of data to tape for local processing.

How to exploit HPCx's new flexibility

For the remainder of the HPCx service we will, in addition to running the helpdesk, focus our additional support on complementarity:
- In particular, we will support the new Complementary Capability Computing (CCC) projects in any way we can.

Please get in touch with the helpdesk with:
- questions;
- suggestions or requests;
- requirements that cannot easily be met within the current operational policies of HECToR and HPCx;
- feedback.

Guidelines on Complementarity are available at: [URL not captured in the transcript]