The Birmingham Environment for Academic Research
Setting the Scene
Peter Watkins, School of Physics and Astronomy (on behalf of the Blue Bear team)

One starting point
• Several Schools had computing clusters but very limited campus-wide facilities
• SRIF 1
  – Applications Service (capps) running on 6 dual-processor HP J6700 PA-RISC machines
  – limited disc space: 1 TByte

• Some areas, for example the humanities, have heavy compute, data and visualisation requirements

• Major investments in computing at most research-led universities in recent years through SRIF and other e-Science initiatives
• Midlands e-Science Centre (MeSC) – included several universities in the West Midlands
• Access Grid Node – videoconferencing facilities
• SRIF 2 support from the University – e-Science cluster of 54 dual-processor Xeons
• Started to build strong campus-wide support for a large cluster dedicated to research computing

Some of the requirements include
• Computer hardware (processing, memory, fast interconnect and storage), but there's more to it than that...
  – flexibility
  – ease of use for non-specialist users
  – sustainability
  – reliability

The procurement process
• SRIF 3 process co-ordinated by Heriot-Watt, with the aim of ensuring best value for the funding bodies
• strong support from Procurement (Helen Bignell, Keith McKenzie)
• external expertise on clusters, especially from Daresbury (Martyn Guest, Christine Kitchen), was essential
• everything takes longer than expected

A very long process...
• February 2005 – bid to University
• May 2005 – bid approved by University
• September 2005 – initial funds available
• March 2006 – Invitation to Tender submitted to Heriot-Watt, issued 18 April 2006
• Easter 2006 (April) – data centre power and cooling upgrade
• May 2006 – deadline for return of tenders
• January 2007 – selected vendor notified
• June 2007 – phase 1 cluster delivered
• March 2008 – full cluster installed

First phase installed (May 2007)

• Strategic Collaboration in place between the University and IBM/Clustervision/Mechdyne

Current configuration – Blue Bear
• 384 dual-processor, dual-core IBM x3455 Opteron nodes (1536 cores) with 8 GB RAM, running Linux
• 2 quad-processor, dual-core nodes with 32 GB RAM
• Infiniband interconnect throughout
• 144 TB raw disk (GPFS)
• 24 TB tape library
• 8-node Microsoft compute cluster
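The slides do not say which programming model or batch system Blue Bear users rely on, but a cluster of this shape (multi-core nodes on an Infiniband fabric) is typically driven by MPI jobs spread across the cores. As an illustrative sketch only (the actual Blue Bear software stack and scheduler are not described here), a minimal MPI program in C that reports which node and rank each process runs on might look like this:

```c
/* Minimal MPI sketch: each process reports its rank and the node it runs on.
   Illustrative only - the Blue Bear toolchain and job scheduler are not
   specified in these slides. Compile with e.g. "mpicc -O2 hello.c -o hello"
   and launch with mpirun/mpiexec on however many cores the site allows. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, size, namelen;
    char nodename[MPI_MAX_PROCESSOR_NAME];

    MPI_Init(&argc, &argv);                     /* start the MPI runtime        */
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);       /* this process's rank          */
    MPI_Comm_size(MPI_COMM_WORLD, &size);       /* total processes in the job   */
    MPI_Get_processor_name(nodename, &namelen); /* which cluster node we are on */

    printf("Rank %d of %d running on %s\n", rank, size, nodename);

    MPI_Finalize();
    return 0;
}
```

On a fully populated run such a job would print one line per process, up to one per core (1536 in total), confirming that work has been spread across the dual-core Opteron nodes over the Infiniband interconnect.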

Registered users of Blue Bear (>230)
• Archaeology and Antiquity
• Biosciences
• Business School
• Computer Science
• Chemical Engineering
• Chemistry
• Civil Engineering
• Electronic, Electrical and Computer Engineering
• English
• Geography, Earth and Environmental Science
• Primary Care and General Practice
• Health Services Research
• Mathematics
• Mechanical and Manufacturing Engineering
• Metallurgy and Materials
• Obstetrics and Gynaecology
• Public and Environmental Health
• Physics and Astronomy
• Psychology
• Sports and Exercise Science
New research work using Blue Bear has already been published.
Help us to keep broadening the user base across the University.

Many thanks to all the contributors from the Blue Bear team
• Aslam Ghumra
• Paul Hatton
• Jonathan Hunt
• Lawrie Lowe
• John Owen
• Alan Reed
• PMW
• Marcin Mogielnicki (Clustervision)

Future plans include
• more user meetings, to help existing users and encourage new users from all disciplines
• developing the links with IBM and other partners to support our research projects
• increasing related training courses and the expertise level of users
• developing long-term University support for excellence in research computing