RAL Computing
Implementing the computing model: SAM and the Grid
Nick West, Oxford, January 2005


Slide 2: An aside – do we want another list?
Currently all RAL software changes
– get reported to:-
Are people happy with this spam?
– Or should we have a separate list dealing specifically with technical computing issues?

Slide 3: SAM at RAL
SAM (Sequential data Access with Metadata)
– Tracks and records metadata at the file and dataset level.
– Delivers files through a range of protocols, e.g. dCache.
SAM at remote sites
– It is inefficient to repeatedly retrieve data from FNAL ENSTORE.
– Solution: a SAM station with a local cache (sketched below):
  – optimised to use the local copy where present;
  – the local copy remains until clients release it.
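The station's cache policy is easiest to see as a small sketch. The code below is not SAM code; the cache directory, the fetch command and the reference counting are assumptions made purely to illustrate "use the local copy if present, and never evict a file a client still holds":

    # Hypothetical sketch of a SAM-station-style cache: serve the local copy
    # if present, otherwise fetch it once, and only evict unreferenced files.
    import os
    import subprocess

    CACHE_DIR = "/ral/sam/cache"                  # assumed local cache area
    REMOTE_URL = "http://fnal.example/files"      # placeholder for the FNAL source

    refcount = {}   # file name -> number of clients currently using it

    def get_file(name):
        """Return a local path for `name`, fetching it only on a cache miss."""
        local = os.path.join(CACHE_DIR, name)
        if not os.path.exists(local):
            subprocess.check_call(["wget", "-q", "-O", local,
                                   "%s/%s" % (REMOTE_URL, name)])
        refcount[name] = refcount.get(name, 0) + 1    # pin while in use
        return local

    def release_file(name):
        """A client has finished with the file; it may now be evicted."""
        refcount[name] = max(0, refcount.get(name, 0) - 1)

    def evict_if_unused(name):
        """Free cache space, but never while a client still holds the file."""
        if refcount.get(name, 0) == 0:
            path = os.path.join(CACHE_DIR, name)
            if os.path.exists(path):
                os.remove(path)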

Slide 4: SAM at RAL
DCM (Data Cache Manager) versus SAM
– DCM was designed (from an initial request by Alfons) to:
  – allow sharing of a pool of data files;
  – provide a simple retrieval system from FNAL:
    – a MySQL query infers the directory from the run number (see the sketch below);
    – the file is then retrieved with wget or FTP.
– Its functionality therefore overlaps with SAM's.
DCM advantages
– Better disk management: audits all files by user.
– Simple interface (a directory of soft links) for one-off jobs.
– Potential for expansion, cribbing Jeff's use of COMPLETE_FILE_LISTING.
SAM advantages
– Better catalogue of file locations, and it will track future changes.
– Support for datasets: their formation and tracking.
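A minimal sketch of the DCM retrieval path described above, assuming hypothetical database, table and column names and a placeholder FNAL URL (the real DCM scripts will differ in detail):

    # Hypothetical DCM-style retrieval: infer the directory from a run number
    # via MySQL, fetch missing files with wget, and expose them as soft links.
    import os
    import subprocess
    import MySQLdb   # MySQL bindings for Python

    POOL_DIR = "/ral/dcm/pool"                    # assumed shared file pool

    def fetch_run(run_number, link_dir="./my_links"):
        # The schema (file_catalogue with directory/filename/run columns) is invented.
        db = MySQLdb.connect(host="db.example.rl.ac.uk", user="reader",
                             passwd="xxxx", db="dcm")
        cur = db.cursor()
        cur.execute("SELECT directory, filename FROM file_catalogue"
                    " WHERE run = %s", (run_number,))
        rows = cur.fetchall()
        if not rows:
            raise RuntimeError("No files catalogued for run %s" % run_number)

        if not os.path.isdir(link_dir):
            os.makedirs(link_dir)                 # the one-off job's private view
        for directory, filename in rows:
            cached = os.path.join(POOL_DIR, directory, filename)
            if not os.path.exists(cached):
                # Cache miss: pull the file from FNAL over HTTP.
                url = "http://fnal.example/%s/%s" % (directory, filename)
                subprocess.check_call(["wget", "-q", "-O", cached, url])
            # The user-facing interface is just a directory of soft links.
            os.symlink(cached, os.path.join(link_dir, filename))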

Slide 5: SAM at RAL
Hybrid proposal: DCM over SAM
– Rather than a SAM station, use the SAM web client (already installed at RAL and Oxford).
– It provides full database functionality without a local cache.
– Advantage: minimal coupling to SAM, just some Python scripts plus a web interface (see the sketch below).
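To make "some Python scripts plus a web interface" concrete, a DCM-over-SAM lookup might reduce to one HTTP query against the SAM web service followed by the usual DCM fetch step. The URL, the query parameters and the one-location-per-line reply format below are all assumptions for illustration, not the real SAM web API:

    # Hypothetical DCM-over-SAM lookup: ask a SAM web interface where the
    # files of a dataset live, then hand the locations to the existing DCM
    # download/soft-link machinery.
    import urllib.parse
    import urllib.request

    SAM_WEB = "http://sam.example.fnal.gov/cgi-bin/sam"   # placeholder URL

    def locate_dataset(dataset_name):
        """Return a list of file locations for a dataset (illustrative only)."""
        query = urllib.parse.urlencode({"action": "locate",
                                        "dataset": dataset_name})
        with urllib.request.urlopen("%s?%s" % (SAM_WEB, query)) as reply:
            text = reply.read().decode()
        # Assume a trivial one-location-per-line text reply.
        return [line.strip() for line in text.splitlines() if line.strip()]

    if __name__ == "__main__":
        for location in locate_dataset("example_dataset"):
            print(location)   # feed these into the DCM fetch/soft-link step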

Slide 6: The Grid
What is the Grid?
– A set of tools and protocols allowing secure distributed computing across a heterogeneous community of computing sites.
Why have a Grid?
– To handle projects that require more resources than a single centre can provide; in HEP that means, above all, the LHC.
How is it managed?
– Groups are organised into VOs (Virtual Organisations).
– Individuals are identified by Grid certificates and then join VOs.
Why do we care?
– After all, we only want a small slice of one site.
– Because RAL will move all its resources over to the Grid:
  – they do not want to have to maintain two systems;
  – they expect to be Grid-only by the end of 2006.

Slide 7: The Grid
How will it impact us?
– The original model mandated:
  – no external connectivity: software and data are parcelled up and sent to the worker node;
  – all data access via Grid tools, e.g. SRM (Storage Resource Manager), as sketched below.
– Potentially it could be very serious:
  – Database access? In principle it would mean programming to another API and breaking our DB support libraries.
  – SAM access? That requires web access.
  – Event data access? We currently have a homogeneous farm in which all nodes share the same disks; that is not the Grid model, where each node has only local disks.
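For the data-access requirement, "all data access via Grid tools" in practice means staging files onto the worker node with an SRM client rather than reading a shared disk. The sketch below simply wraps the srmcp command; the storage-element hostname, port, path and exact URL forms are placeholders and should be checked against the site's SRM documentation:

    # Illustrative stage-in via SRM: copy a file from a storage element to the
    # worker node's local disk before the job opens it.
    import subprocess

    def stage_in(remote_path, local_path):
        # The srm:// endpoint and port are assumptions for this sketch.
        src = "srm://dcache.example.rl.ac.uk:8443%s" % remote_path
        dst = "file:///%s" % local_path.lstrip("/")
        subprocess.check_call(["srmcp", src, dst])

    stage_in("/pnfs/example/data/run12345/file001.root", "/tmp/file001.root")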

Slide 8: The Grid
Database access: exploiting cracks in the model
– There have already been arguments between groups and system managers over issues such as databases.
– Solution: exceptions to the model are "VO Boxes" that contain non-Grid software.
– I think the MySQL database at RAL is one such box.
Taking "Distributed" out of Distributed Computing
– To use RAL's MySQL server our jobs must run at RAL.
– The Grid has access to a set of CEs (Computing Elements); RAL is only one CE, so jobs could run elsewhere.
– But we can target jobs at a specific CE (see the sketch below).
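On the EDG/LCG middleware of the time, targeting a specific CE amounts to adding a Requirements clause to the job's JDL description before submitting it. The sketch below writes such a JDL and submits it with edg-job-submit; the CE hostname, queue, VO name and script names are all placeholders:

    # Sketch of pinning a Grid job to the RAL CE via a JDL Requirements clause.
    import subprocess

    JDL = """\
    Executable    = "run_analysis.sh";
    StdOutput     = "job.out";
    StdError      = "job.err";
    InputSandbox  = {"run_analysis.sh"};
    OutputSandbox = {"job.out", "job.err"};
    Requirements  = other.GlueCEUniqueID == "ce.example.rl.ac.uk:2119/jobmanager-pbs-short";
    """

    with open("ral_job.jdl", "w") as f:
        f.write(JDL)

    # Assumes an EDG/LCG user interface machine with a valid Grid proxy.
    subprocess.check_call(["edg-job-submit", "--vo", "ourvo", "ral_job.jdl"])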

Slide 9: The Grid
SAM access
– It looks like web access will be allowed.
Data access
– We probably won't be able to do an end-run round the Grid.
– Perhaps we will then have to use SAM: the SAMGrid project?
– We need to talk to the people at RAL, particularly Steve Traylen, system manager for the RAL CE.

Slide 10: The Grid
What Grid tools?
– Larger experiments have posts dedicated to integrating their production systems into the Grid.
GridPP Portal Project
– Aims to support small experiments.
– Contact: Gidon Moont, Imperial College.
– Already used by projects such as CALICE (Linear Collider calorimetry R&D) and MICE.
– He is keen to help us when we are ready.
It's not all LHC
– Older experiments, e.g. ZEUS and H1, are already exploiting the Grid; it could be worth looking at what they have done.

Slide 11: The Grid
Our next steps
– Mike is planning to talk to some experts (I think).
– We need to talk to:
  – Steve Traylen, about satisfying our requirements at RAL;
  – Gidon Moont, about a Portal Project.
– Establish a VO and get some Grid certificates.
– Have some trial runs to identify and eliminate problems ahead of any forced migration (a first pre-flight check is sketched below).
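As a small first trial, it may be worth scripting the "do we have a usable Grid identity?" check that every submission needs. The sketch below shells out to the standard Globus proxy commands; whether we end up using plain grid-proxy-init or a VOMS-aware variant, and under which VO, is still to be decided:

    # Minimal pre-flight check for trial Grid runs: confirm a proxy certificate
    # exists and still has enough lifetime before trying to submit anything.
    import subprocess
    import sys

    def have_valid_proxy(hours=12):
        """True if grid-proxy-info reports a proxy valid for at least `hours`."""
        result = subprocess.call(["grid-proxy-info", "-exists",
                                  "-valid", "%d:00" % hours])
        return result == 0

    if __name__ == "__main__":
        if not have_valid_proxy():
            print("No usable proxy: run grid-proxy-init (or the VOMS equivalent)")
            sys.exit(1)
        print("Proxy OK, ready to submit trial jobs")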