GridPP CM, ICL 16 September 2002 Roger Jones

RWL Jones, Lancaster University
EDG Integration
- EDG decision to put the short-term focus of effort on making the ATLAS DC1 production work on EDG 1.2 and later
- Good input from the EDG side; effort from ATLAS, especially UK, Italy and CERN (notably Frederic Brochu, but also others: Stan Thompson, RJ, and Alvin Tam joining now)
- The submission seems to work `after a fashion' using multiple UIs and one production site
- More problems when using multiple production sites, especially data replication
- Inability to access Castor using Grid tools was a major problem, now fixed
- Large file sizes were also a problem
- Interfaces to the catalogues need work
- Encouraging effort; expect a full production-quality service for demonstrations in November
- Note: analysis requires more work – the code is not already `out there'
- Integration with other ATLAS Grid efforts is needed
- Evaluation session on Thursday at RHUL Software Week

RWL Jones, Lancaster University
Exercises
- Common ATLAS environment pre-installed on the sites
- Exercise 1: simulation jobs
  - 50 jobs, 5000 events each, 5 submitters
  - One-by-one submission because of RB problems (a sketch of such a loop follows below)
  - Running restricted to CERN
  - One job takes about 25 h real time and produces 1 GB of output
  - Output stored on the CERN SE
- Exercise 2: patched RB and massive submission
  - 250 shorter jobs (several hours, 100 MB output each)
  - CERN, CNAF, CCIN2P3, Karlsruhe, NIKHEF, RAL
  - One single user
  - Output back to CERN – SE space is an issue
- Also have `fast simulation in a box' running on the EDG testbed, a good exercise for analysis/non-installed tasks
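
The exercise above relied on submitting the 50 simulation jobs one at a time from an EDG User Interface. As a rough illustration, such a one-by-one loop might look like the sketch below; it assumes the standard edg-job-submit command of EDG Testbed 1, but the JDL file names, the pacing delay and the parsing of the returned job identifier are illustrative assumptions rather than the actual DC1 production scripts.

```python
#!/usr/bin/env python
# Hedged sketch: submit DC1 simulation jobs one by one from an EDG UI.
# File names, delays and output parsing are assumptions for illustration.
import subprocess
import time

JOBS = ["dc1_sim_%03d.jdl" % i for i in range(50)]   # 50 jobs, 5000 events each

job_ids = []
for jdl in JOBS:
    # edg-job-submit prints the job identifier (an https:// URL) on stdout
    result = subprocess.run(["edg-job-submit", jdl],
                            capture_output=True, text=True)
    for line in result.stdout.splitlines():
        if line.strip().startswith("https://"):
            job_ids.append(line.strip())
    time.sleep(5)   # pace submissions to avoid stressing the Resource Broker

with open("submitted_jobs.txt", "w") as f:
    f.write("\n".join(job_ids) + "\n")
```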

RWL Jones, Lancaster University
Issues
- There is no defined `system software'
- The system managers cannot dictate the shells, compilers etc. to be used
- Multiple users will lead to multiple copies of the same tools unless there is a system advertising what is installed and where (PACMAN does this, for instance; a minimal sketch of the idea follows below)
- A user service requires binaries to be `installed' on the remote sites – trust has to work both ways
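
As a minimal sketch of the "advertise what is installed and where" idea, a site could publish a small catalogue of installed releases which a job consults before unpacking its own private copy. The file location, format and release name below are assumptions; PACMAN provides a real mechanism along these lines.

```python
#!/usr/bin/env python
# Hedged sketch: check a site-published catalogue of installed software
# before installing a private copy.  Paths and names are hypothetical.
import json
import os

CATALOGUE = "/etc/grid-software/installed.json"   # hypothetical site file

def find_installation(package, version):
    """Return the install path if the site already advertises package/version."""
    if not os.path.exists(CATALOGUE):
        return None
    with open(CATALOGUE) as f:
        catalogue = json.load(f)
    return catalogue.get(package, {}).get(version)

path = find_installation("atlas-release", "3.2.1")   # release name is illustrative
if path:
    print("Using the site installation at", path)
else:
    print("Not advertised here: fall back to a private installation")
```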

RWL Jones, Lancaster University
Software Away from CERN
- Several cases:
  - Software copy for developers at remote sites
  - Software (binary) installation for Grid productions
  - Software (binary) download for Grid jobs
  - Software (source) download for developers on the Grid
- Initially rely on by-hand packaging and installation
- True Grid use requires automation to be scalable
- The task decomposes into three requirements:
  - Relocatable code and environment
  - Packaging of the above
  - A deployment tool (something more than human+ftp!)

RWL Jones, Lancaster University
World Relocatable Code
- Six months ago, ATLAS code was far from deployable
- Must be able to work with several cases:
  - afs used for the installation
  - No afs available
  - afs present but not to be used for the installation because of speed (commonplace!)
- Significant improvement in this area, with help from John Couchman, John Kennedy, Mike Gardner and others
- Big effort in reducing package dependencies – work with US colleagues
- However, it takes a long time for this knowledge to become the default – central procedures are being improved (Steve O'Neale)
- For the non-afs installation the cvsupd mechanism seems to work in general, but patches are needed for e.g. Motif problems
- Appropriate for developers at institutes, but not a good Grid solution in the long term

RWL Jones, Lancaster University
An Aside: Installation Method Evaluation
- Previous work from Lancaster, now a big effort from John Kennedy
- Cvsupd:
  - Problems with pserver/kserver; makefiles needed editing
  - Download took one night
  - Problems with CMT paths on the CERN side?
  - Problems fixing afs paths into local paths – the previously developed script does not catch them all (see the sketch below)
- NorduGrid rpms:
  - Work `after a fashion'. They do not mirror CERN; much fixing by hand
  - Much editing of makefiles to do anything real
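
The afs-to-local path fixing mentioned above amounts to rewriting hard-coded /afs/cern.ch/... prefixes in the downloaded configuration files. The sketch below shows the general idea under assumed prefixes and file patterns; as the slide notes, plain textual substitution of this kind misses paths that are constructed at run time, which is why the existing script does not catch everything.

```python
#!/usr/bin/env python
# Hedged sketch: rewrite hard-coded afs prefixes to a local installation
# prefix after a cvsupd download.  Prefixes and file patterns are assumptions.
import pathlib

AFS_PREFIX = "/afs/cern.ch/atlas/software"      # assumed CERN layout
LOCAL_PREFIX = "/opt/atlas/software"            # assumed local layout

def fix_file(path: pathlib.Path) -> None:
    text = path.read_text(errors="ignore")
    if AFS_PREFIX in text:
        path.write_text(text.replace(AFS_PREFIX, LOCAL_PREFIX))

# CMT requirements files and setup scripts are the usual offenders
for pattern in ("**/requirements", "**/*.sh", "**/*.csh"):
    for f in pathlib.Path(LOCAL_PREFIX).glob(pattern):
        if f.is_file():
            fix_file(f)
```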

RWL Jones, Lancaster University
An Aside: Installation Method Evaluation (II)
- Rsync method:
  - Works reasonably at Lancaster
  - Hard to be selective – easy to end up with huge downloads taking more than a day in the first instance (see the sketch below)
  - Only hurts the first time – better for updates
- Official DC1 rpms:
  - Circularity in the dependencies causes problems with zsh
  - Require root privilege for installation
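
Being more selective with rsync essentially means restricting the transfer to the release actually needed rather than mirroring the whole tree. Below is a sketch under assumed names: the host, directory layout and release number are illustrative, not the real ATLAS mirror.

```python
#!/usr/bin/env python
# Hedged sketch: pull a single release tree with rsync include/exclude
# filters instead of mirroring everything.  Host and paths are assumptions.
import subprocess

SOURCE = "rsync://atlas-sw.example.org/atlas/software/"   # hypothetical server
TARGET = "/opt/atlas/software/"
RELEASE = "3.2.1"                                         # illustrative release

subprocess.run([
    "rsync", "-av",
    "--include=/releases/",
    "--include=/releases/%s/" % RELEASE,
    "--include=/releases/%s/**" % RELEASE,   # everything under that release
    "--exclude=*",                           # and nothing else
    SOURCE, TARGET,
], check=True)
```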

RWL Jones, Lancaster University
Installation Tools
- To use the Grid, deployable software must be deployed on the Grid fabrics and the deployable run-time environment established
- Installable code and run-time environment/configuration:
  - Both ATLAS and LHCb use CMT for software management and environment configuration
  - CMT knows the package interdependencies and the external dependencies
  - It is therefore the obvious tool to prepare the deployable code and to `expose' the dependencies to the deployment tool (MG testing; see the sketch below)
- Grid-aware tool to deploy the above:
  - PACMAN is a candidate which seems fairly easy to interface with CMT – see the following talk
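
One way to `expose' the dependencies is simply to parse what CMT itself reports. The sketch below runs `cmt show uses` and extracts (package, version) pairs; the exact output format differs between CMT versions, so the parsing is an assumption to be adapted rather than a definitive interface.

```python
#!/usr/bin/env python
# Hedged sketch: expose a package's CMT dependencies to a deployment tool
# by parsing "cmt show uses".  The output-format parsing is an assumption.
import subprocess

def cmt_uses(cmt_dir):
    """Return (package, version) pairs reported by 'cmt show uses'."""
    result = subprocess.run(["cmt", "show", "uses"], cwd=cmt_dir,
                            capture_output=True, text=True, check=True)
    deps = []
    for line in result.stdout.splitlines():
        # lines of interest look roughly like "use <Package> <version> [...]",
        # sometimes prefixed with '#'
        tokens = line.replace("#", " ").split()
        if len(tokens) >= 3 and tokens[0] == "use":
            deps.append((tokens[1], tokens[2]))
    return deps

if __name__ == "__main__":
    for package, version in cmt_uses("./cmt"):
        print(package, version)
```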

RWL Jones, Lancaster University
CMT and deployable code
- Christian Arnault and Charles Loomis have a beta release of CMT that will produce package rpms, which is a large step along the way
- Still need to clean up the site dependencies
- Need to make the package dependencies explicit
- Rpm requires root to install into the system database (but not for a private installation)
- Developer and binary installations are being prepared; we probably also need binary+headers+source for a single package
- A converter making PACMAN cache files from the auto-generated rpms seems to work (a rough sketch of the idea follows below)
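
The rpm-to-PACMAN conversion can be illustrated roughly as follows: read the name, version and requirements out of an auto-generated rpm and emit a PACMAN-style description pointing at that rpm. The rpm query options are standard, but the PACMAN directive names in the output and the URL are illustrative assumptions rather than the exact PACMAN syntax.

```python
#!/usr/bin/env python
# Hedged sketch of an rpm -> PACMAN cache-file converter.  The rpm queries
# are standard; the emitted directive names and the URL are assumptions.
import subprocess
import sys

def rpm_query(rpm_path, fmt):
    result = subprocess.run(["rpm", "-qp", "--queryformat", fmt, rpm_path],
                            capture_output=True, text=True, check=True)
    return result.stdout.strip()

def make_pacman(rpm_path, base_url):
    name = rpm_query(rpm_path, "%{NAME}")
    version = rpm_query(rpm_path, "%{VERSION}")
    requires = subprocess.run(["rpm", "-qp", "--requires", rpm_path],
                              capture_output=True, text=True, check=True)
    lines = [
        "description('%s %s, generated from %s')" % (name, version, rpm_path),
        "rpmInstall('%s/%s')" % (base_url, rpm_path.split("/")[-1]),
    ]
    for dep in requires.stdout.splitlines():
        lines.append("# requires: %s" % dep.strip())
    return "\n".join(lines) + "\n"

if __name__ == "__main__":
    sys.stdout.write(make_pacman(sys.argv[1], "http://example.org/atlas/rpms"))
```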

RWL Jones, Lancaster University
The way forward?
- An ATLAS group exists for deployment tools (RWLJ convening)
- LCG have `decided' to use SCRAM
  - The grounds for the decision seemed narrow, with little thought given to the implications outside of LCG
  - If this is to be followed generally, we should reconsider our strategy
- What about using SlashGrid to create a virtual file system?