Campus grids: e-Infrastructure within a University
Mike Mineter, National e-Science Centre

2 Thanks to: Mark Calleja (Cambridge), David McBride (Imperial College), David Wallom (Oxford), Jonathan Giddy (Welsh e-Science Centre) and John Brodholt (UCL) for descriptions of their campus initiatives.

3 Overview
Goals
Methods & examples
Some opportunities and implications

4 Goals of campus grids
Resource utilisation
– Computers: in each university, thousands of PCs are little used
– Research data
Collaboration
– Access Grids: meetings across sites
– Sharing visualisation, data, programs
Infrastructure for research
– From resource sharing in a university to international collaboration with one middleware stack

5 Overview
Goals
Methods & examples
– Harvesting CPU cycles
– Crossing administrative domains: Globus, Storage Resource Broker
Some opportunities and implications

6 Harvesting CPU time
Teaching labs: often-idle processors!
Researchers: analyses constrained by CPU time!

7 Harvesting CPU time
Teaching lab machines lie idle for most of the time
Harvest spare compute cycles to create a low-cost "high throughput computing" (HTC) platform
– Goal: run many tasks in a week, month, …
– Typically: many similar tasks invoked from workflow or a script
  - Monte-Carlo simulation
  - parameter sweeps
Pool processors as a batch processing resource
Submit jobs that run when a machine is free
Condor: the most common approach
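To make the submit-when-idle model concrete, here is a minimal sketch of a Condor submit description for a parameter sweep; the executable, argument scheme and file names are hypothetical, not taken from the slides:

  # sweep.sub: one cluster of 100 similar jobs, one per parameter value
  universe    = vanilla
  executable  = simulate.exe
  arguments   = $(Process)
  output      = out.$(Process)
  error       = err.$(Process)
  log         = sweep.log
  queue 100

Running condor_submit sweep.sub queues the cluster; condor_q shows the jobs waiting, and Condor runs them as machines become free.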

8 Example: viewshed analyses
Derive viewsheds for all points in a "digital elevation model" (DEM)
Build a database to allow
– derivation of indices to characterise viewsheds
– applications to access pre-calculated viewsheds
(Figure: viewsheds, i.e. what can be seen from the point marked "+")
Mineter, Dowers, Caldwell

9 Example: viewshed analyses
(Diagram: a coordinating processor starts and monitors Condor jobs on worker processors running Windows; viewshed creation reads the DEM, intermediate files are written, and the coordinator builds the database)
Typical run: 39 PCs (Windows), 330 CPU days in 9.94 days elapsed
Mineter, Dowers, Caldwell
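The coordinator role in this diagram could be scripted around Condor's own tools; this is a hedged sketch only, with the submit-file name, log name and database-loading command all assumptions:

  #!/bin/sh
  # Submit one cluster of viewshed jobs, one job per DEM tile
  condor_submit viewsheds.sub
  # Block until every job in the cluster has left the queue,
  # watching the log file named in viewsheds.sub
  condor_wait viewsheds.log
  # Load the workers' intermediate files into the database
  ./build_database intermediate/*.dat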

10 The UCL Condor Pool
(Diagram: submission machines and a central manager (running Linux) in front of the Windows pool)
UCL pool: > 1000 CPUs
Platforms: Windows XP and 2000
Capacity: 256 MB-1 GB RAM; Intel 3 (1 GHz) to Intel 4 (3.2 GHz)
Different projects can have their own submission machine
Note: need a Windows executable (.exe file)
John Brodholt, UCL
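Because the execute nodes are Windows machines, submit descriptions have to request them explicitly; a hedged sketch, with the executable name hypothetical and the OpSys strings being the values Condor used at the time for Windows 2000 and XP (they differ between Condor releases):

  universe     = vanilla
  executable   = analysis.exe
  # Match only 32-bit Intel machines running Windows 2000 or XP
  requirements = (Arch == "INTEL") && (OpSys == "WINNT50" || OpSys == "WINNT51")
  queue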

11 Mineral surfaces
Calculations: investigate surfaces
– 2 to 5 surface terminations
– 4 to 16 impurity positions
– > 4 concentrations
Total number of calculations per impurity:
(Figure: CaO-terminated and TiO2-terminated {001} surfaces of CaTiO3)
M. Alfredsson, J.P. Brodholt, G.D. Price et al., submitted to Nature Materials

12 Surface Energy for doped surfaces
X_M:bulk + M_M:surface → M_M:bulk + X_M:surface, where M_M = un-doped material and X_M = doped material
(Ca,M)TiO3:bulk + CaTiO3:surface → CaTiO3:bulk + (Ca,M)TiO3:surface
E_S(n) = E_S^0 + (n/S)·E_seg
  E_S^0 = un-doped surface energy
  E_seg = segregation energy
  n = dopants on surface
  S = surface area
Langmuir isotherm: determine equilibrium situation → beyond the Langmuir isotherm

13 Surface Energy for doped surfaces
E_S(n) = E_S^0 + (n/S)·E_seg
  E_S^0 = un-doped surface energy
  E_seg = segregation energy
  n = dopants on surface
  S = surface area
Determine equilibrium situation → beyond the Langmuir isotherm
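For reference, the "equilibrium situation" determined from a Langmuir isotherm is normally written in the Langmuir-McLean form below; this is the textbook expression rather than one quoted on the slides, with θ the fractional surface coverage of dopants and c_b their bulk concentration:

  θ/(1 − θ) = [c_b/(1 − c_b)] · exp(−E_seg/(k_B·T))

Going "beyond the Langmuir isotherm" then typically means relaxing the assumption that E_seg is independent of coverage.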

14 Use these surface energies to predict crystal forms as a function of dopant type AND concentration. Maria Alfredsson, UCL

15 Crystal Morphologies of Ni-doped CaTiO3
(Figures: SEM picture of Ni-doped CaTiO3; predicted morphology of Ni-doped CaTiO3)
(submitted to Nature Materials) Maria Alfredsson, UCL

16 Campus grids
Resources in many administrative domains
Need basic services that provide:
– Authentication and authorisation mechanisms
  - based on certificates
  - single sign-on to access many resources
  - control of who can do what
– Job submission services
  - submit jobs to batch queues on clusters or Condor pools
– Information systems
  - so you know what can be used
– Ability to share data
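With the Globus-based stack described on the next slides, single sign-on looks roughly like this from the user's side (a hedged sketch of a typical session; the target host is the NGS node named later in the talk):

  # Create a short-lived proxy credential from the user's e-Science certificate
  grid-proxy-init
  Enter GRID pass phrase for this identity: ********
  Creating proxy .................................. Done
  # The proxy is then presented automatically, with no further passwords,
  # for every job submission or data access until it expires (12 hours by default)
  globus-job-run grid-data.rl.ac.uk /bin/hostname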

17 Middleware for campus grids
Globus Toolkit
– tools built on the Grid Security Infrastructure, including:
  - job submission: run a job on a remote computer
  - information services: so I know which computer to use
Storage Resource Broker
– virtual filesystem: for files held in multiple locations
– NIEES offers a testbed to give experience with SRB
SRB and Globus Toolkit 2 are part of the National Grid Service stack

18 Globus
A software toolkit: a modular "bag of technologies"
– made available under a liberal open source license
Not turnkey solutions, but building blocks and tools for application developers and system integrators
International production grids are (currently) based on the Globus Toolkit release 2
Globus Alliance:

19 Globus is a Toolkit
To submit a job to run on a remote computer:
  globus-job-submit grid-data.rl.ac.uk/jobmanager-pbs /bin/hostname -f
  (prints a job contact URL, used by the next two commands)
  globus-job-status <job contact>
  DONE
  globus-job-get-output <job contact>
  grid-data12.rl.ac.uk
Needs higher level services
– brokers to allow jobs to be submitted to "a grid"
– VO-specific developments, e.g. portals
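The same three commands can be scripted end to end; a minimal sketch, with the polling interval an arbitrary choice rather than anything from the slides:

  #!/bin/sh
  # globus-job-submit prints a job contact URL; keep it for later queries
  CONTACT=$(globus-job-submit grid-data.rl.ac.uk/jobmanager-pbs /bin/hostname -f)
  # Poll until the job reaches a terminal state
  while :; do
      STATE=$(globus-job-status "$CONTACT")
      case "$STATE" in
          DONE|FAILED) break ;;
      esac
      sleep 30
  done
  # Fetch the job's standard output from the gatekeeper
  globus-job-get-output "$CONTACT"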

20 Storage Resource Broker
(Diagram: SRB servers, each with a resource driver, in front of filesystems in different administrative domains)
User sees a virtual filesystem:
– command line (S-Commands)
– MS Windows (InQ)
– web based (MySRB)
– Java (JARGON)
– Web Services (MATRIX)
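A hedged sketch of the S-Commands in use (the collection and file names are hypothetical, and exact options vary between SRB releases):

  Sinit                       # authenticate and start an SRB session
  Smkdir analysis             # create a collection in the virtual filesystem
  Sput results.dat analysis   # copy a local file into the collection
  Sls analysis                # list the collection, wherever the files physically live
  Sget analysis/results.dat   # retrieve a copy to the local filesystem
  Sexit                       # end the session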

21 How SRB Works
(Diagram: an SRB client talking to SRB servers A and B and to the MCAT server, which queries the MCAT database)
4 major components:
– the Metadata Catalogue (MCAT)
– the MCAT-enabled SRB server
– the SRB storage server
– the SRB client

22 Example: OxGrid, a University Campus Grid
Single entry point for Oxford users to shared and dedicated resources
Seamless access to the National Grid Service and OSC for registered users
Single sign-on using PKI technology integrated with current methods
(Diagram: Oxford users → OxGrid central management (broker, MDS/VOM, storage) → college resources, departmental resources, NGS and OSC)
David Wallom

23 Observations
Need specific initial user communities >> vague sense this is a good idea!
Engage with systems managers from first thoughts
– Operations effort must be factored in and justifiable!
Researcher enthusiasm is not enough!
Be alert to National Grid Service deployment of middleware, and potential for interoperability

24 Summary
Campus grids
– enable high throughput computing
– improve resource utilisation
– support collaboration: sharing processes, data, storage
They build culture, skills and infrastructure for service-oriented research…
Advantages of using the same stack as the National Grid Service
– expertise and support available
– smaller leaps to partnership in, and use of, the NGS
– the White Rose Grid pioneered this!... and provides the expertise for others…