White Rose Grid Infrastructure Overview
Chris Cartledge, Deputy Director
Corporate Information and Computing Services, The University of Sheffield


Contents
- History
- Web site
- Current computation capabilities
- Planned machines
- Usage
- YHMAN
- Grid capabilities
- Contacts
- Training
- FEC, Futures

White Rose Grid History
- 2001: SRIF opportunity, joint procurement; Leeds led (Peter Dew, Joanna Schmidt)
- 3 clusters of Sun SPARC systems running Solaris:
  - Leeds, Maxima: 6800 (20 processors), 4 * V880 (8 processors each)
  - Sheffield, Titania: 10 (later 11) * V880 (8 processors each)
  - York, Pascali: 6800 (20 processors); Fimbrata: V880
- 1 cluster of 2.2 and 2.4 GHz Intel Xeon with Myrinet:
  - Leeds, Snowdon: 292 CPUs, Linux

White Rose Grid History (continued)
- Joint working to enable use across sites, but heterogeneous: a range of systems
- Each system primarily meets local needs, with up to 25% for users from the other sites
- Key common services:
  - Sun Grid Engine to control work in the clusters
  - Globus to link the clusters
  - Registration
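As an aside on how Sun Grid Engine "controls work" programmatically: besides the usual qsub job scripts, SGE 6 also ships a DRMAA C binding. The sketch below is a minimal, illustrative use of that binding, assuming the DRMAA library and headers are available on a cluster login node; it is not taken from White Rose Grid documentation, and the submitted command (/bin/sleep) is only a placeholder.

```c
/* submit_sleep.c - sketch of submitting one job to Sun Grid Engine via
 * the DRMAA C binding shipped with SGE 6. Illustrative only. */
#include <stdio.h>
#include "drmaa.h"

int main(void)
{
    char error[DRMAA_ERROR_STRING_BUFFER];
    char jobid[DRMAA_JOBNAME_BUFFER];
    drmaa_job_template_t *jt = NULL;
    const char *args[] = { "60", NULL };

    /* Connect to the local Grid Engine cell. */
    if (drmaa_init(NULL, error, sizeof(error)) != DRMAA_ERRNO_SUCCESS) {
        fprintf(stderr, "drmaa_init failed: %s\n", error);
        return 1;
    }

    /* Describe the job: run "/bin/sleep 60". */
    drmaa_allocate_job_template(&jt, error, sizeof(error));
    drmaa_set_attribute(jt, DRMAA_REMOTE_COMMAND, "/bin/sleep",
                        error, sizeof(error));
    drmaa_set_vector_attribute(jt, DRMAA_V_ARGV, args, error, sizeof(error));

    /* Hand the job to the scheduler and report the id it was given. */
    if (drmaa_run_job(jobid, sizeof(jobid), jt, error, sizeof(error))
            == DRMAA_ERRNO_SUCCESS)
        printf("Submitted job %s\n", jobid);
    else
        fprintf(stderr, "drmaa_run_job failed: %s\n", error);

    drmaa_delete_job_template(jt, error, sizeof(error));
    drmaa_exit(error, sizeof(error));
    return 0;
}
```

Compiled with something like `gcc submit_sleep.c -ldrmaa` (plus the include and library paths of the local SGE installation), this queues work on whichever cluster it runs on; Globus sits above this layer to route jobs between the three sites.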

WRG Web Site
- There is a shared web site, linked to/from the local sites
- Covers other related projects and resources:
  - e-Science Centre of Excellence
  - Leeds: SAN and specialist graphics equipment
  - Sheffield: ppGrid node
  - York: UKLight work

Current Facilities: Leeds
- Everest: supplied by Sun/Streamline
- Dual-core Opteron: power and space efficient
- 404 CPU cores, 920GB memory
- 64-bit Linux (SuSE 9.3) OS
- Low-latency Myrinet interconnect
- 7 * 8-way nodes (4 chips with 2 cores each), 32GB
- 64 * 4-way nodes (2 chips with 2 cores each), 8GB

Leeds (continued)
- SGE, Globus/GSI
- Intel, GNU, and PGI compilers
- Shared-memory and Myrinet MPI
- NAG, FFTW, BLAS, LAPACK, etc. libraries
- 32- and 64-bit software versions
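The MPI installations listed above (shared-memory and Myrinet) are what parallel users compile against. A minimal MPI program in C follows for orientation; it is a generic sketch, and the `mpicc` wrapper and `mpirun` launcher mentioned afterwards are common conventions rather than confirmed Everest commands.

```c
/* hello_mpi.c - minimal MPI example; a generic sketch, not a
 * documented White Rose Grid workflow. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char *argv[])
{
    int rank, size;

    MPI_Init(&argc, &argv);                 /* start the MPI runtime */
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);   /* this process's rank */
    MPI_Comm_size(MPI_COMM_WORLD, &size);   /* total number of processes */

    printf("Hello from rank %d of %d\n", rank, size);

    MPI_Finalize();                         /* shut down MPI cleanly */
    return 0;
}
```

It would typically be built with a wrapper such as `mpicc hello_mpi.c -o hello_mpi` and launched (for example `mpirun -np 8 ./hello_mpi`) from inside a Sun Grid Engine parallel-environment job rather than run by hand on a login node.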

Maxima transition
- Maintenance to June 2006, expensive
- Need to move all home directories to the SAN
- Users can still use it, but "at risk"

Snowdon transition
- Maintenance until June 2007
- Home directories already on the SAN
- Users encouraged to move

Sheffield
- Iceberg: Sun Microsystems/Streamline
- 160 * 2.4GHz AMD Opteron (PC technology) processors
- 64-bit Scientific Linux (Red Hat based)
- 20 * 4-way nodes, 16GB, fast Myrinet, for parallel/large jobs
- 40 * 2-way nodes, 4GB, for high throughput
- GNU and Portland Group compilers, NAG
- Sun Grid Engine (6), MPI, OpenMP, Globus
- Abaqus, Ansys, Fluent, Maple, Matlab
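With 4-way shared-memory nodes, OpenMP is the natural complement to MPI on Iceberg. Below is a minimal OpenMP example in C, purely illustrative; the build flags mentioned afterwards (PGI's `-mp`, GNU's `-fopenmp`) are generic compiler conventions, not instructions taken from Iceberg documentation.

```c
/* hello_omp.c - minimal OpenMP example for a shared-memory node;
 * illustrative sketch only. */
#include <omp.h>
#include <stdio.h>

int main(void)
{
    /* Each thread on the node reports its own id. */
    #pragma omp parallel
    {
        printf("Hello from thread %d of %d\n",
               omp_get_thread_num(), omp_get_num_threads());
    }
    return 0;
}
```

Built with, for example, `pgcc -mp hello_omp.c -o hello_omp` (Portland Group) or a GNU compiler recent enough to support `-fopenmp`, and run with OMP_NUM_THREADS set to match the cores requested from Sun Grid Engine.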

Also at Sheffield
- GridPP (Particle Physics Grid) node
- 160 * 2.4GHz AMD Opteron: 80 * 2-way nodes, 4GB
- 32-bit Scientific Linux
- ppGrid stack
- 2nd most productive; very successful!

Popular! (Sheffield)
- Lots of users: 827; White Rose: 37
- Utilisation high:
  - Since installation: 40%
  - Last 3 months: 80%
  - White Rose: 26%

York
- £205k from SRIF 3:
  - £100k computing systems
  - £50k storage system
  - Remainder: ancillary equipment, contingency
- Shortlist agreed(?) for June
- Compute: possibly core, Opteron
- Storage: possibly 10TB

Other Resources
- YHMAN:
  - Leased fibre, 2Gb/s performance
  - Wide area MetroLAN
  - UKLight
- Archiving
- Disaster recovery

Grid Resources
- Queuing: Sun Grid Engine (6)
- Globus Toolkit 2.4 is installed and working
  - Issue over GSI-SSH on 64-bit OS (ancient GTK)
- Globus 4 being looked at
- Storage Resource Broker being worked on

Training
- Available across the White Rose universities
- Sheffield: RTP - 4 units, 5 credits each:
  - High Performance and Grid Computing
  - Programming and Application Development for Computational Grids
  - Techniques for High Performance Computing including Distributed Computing
  - Grid Computing and Application Development

Contacts
- Leeds: Joanna Schmidt, +44 (0)
- Sheffield: Michael Griffiths or Peter Tillotson, +44 (0) , +44 (0)
- York: Aaron Turner, (0) 190

Futures
- FEC will have an impact:
  - Can we maintain 25% use from other sites?
  - How can we fund continuing grid work?
- Different funding models a challenge:
  - Leeds: departmental shares
  - Sheffield: unmetered service
  - York: based in Computer Science
- Relationship opportunities: NGS, WUN, region, suppliers?

Achievements
- White Rose Grid: not hardware, but services
- People(!): familiar with working on the Grid
- Experience of working as a virtual organisation
- Intellectual property in training
- Success:
  - Research
  - Engaging with industry
  - Solving user problems