LHC experiments' use of the EDG Testbed
F Harris (Oxford/CERN)
EDG/WP6 meeting, Budapest, 2 Sep 2002

Outline of presentation
Experiment plans in relation to data challenges and the use of Grid facilities/services:
–ATLAS (currently Task Force activity): O Smirnova, L Perini
–ALICE: P Cerello
–CMS: P Capiluppi
–LHCb: E van Herwijnen
Comments on services required by all experiments
Experiments and the 'loose cannons'
Summary

ATLAS
Currently in the middle of Phase 1 of DC1 (Geant3 simulation, Athena reconstruction, analysis). Many sites in Europe, the US, Australia, Canada, Japan and Taiwan are involved.
Phase 2 of DC1 will begin Oct-Nov 2002, using the new event model.
DC2 starts at the end of Oct 2003 and will operate in the framework of LCG, with inter-operability between EU, US and other sites. For DC2 ATLAS will interface to the Grid from the architecture as well as from scripts. The EDG 1.2 and 2 testbeds will be used to prepare this work (scale and performance tests).
Plans for the use of Grid tools in the DCs:
–ATLAS-EDG Task Force to repeat, with EDG 1.2, ~1% of the Geant3 simulations already done, using CERN, CNAF, Nikhef, RAL and Lyon (see the O Smirnova talk in the WP8 meeting, 3 Sep, Budapest)
–Phase 2 will make larger use of Grid tools. Different sites may use different tools, and there will be more sites; this is to be defined in September. Scale: ~10**6 CPU hours, 20 TB of input to reconstruction, 5 TB of output (how much of this on the testbed?)

Current ATLAS task description
Input: a set of generated events as ROOT files (each input partition ca 1.8 GB, 100 000 events); master copies are stored in CERN CASTOR
Processing: ATLAS detector simulation using a pre-installed software release
–Each input partition is processed by 20 jobs (5000 events each)
–Full simulation is applied only to filtered events, ca 450 per job
–A full event simulation takes ca 150 seconds per event on a 1 GHz processor
Output: simulated events are stored in ZEBRA files (ca 1 GB per output partition); an HBOOK histogram file and a log file (stdout+stderr) are also produced
Total: 9 GB of input, 2000 CPU-hours of processing, 100 GB of output (the arithmetic is checked in the sketch below)
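The quoted totals follow from the per-job numbers; here is a minimal sketch of the arithmetic, assuming the 9 GB of input corresponds to five 1.8 GB partitions (the partition count is not stated explicitly on the slide):

```python
# Minimal sketch (not from the slide) reproducing the quoted DC1 task totals,
# assuming 9 GB of input = five partitions of 1.8 GB each.

PARTITIONS = 5                  # assumption: 9 GB / 1.8 GB per partition
JOBS_PER_PARTITION = 20
FILTERED_EVENTS_PER_JOB = 450   # only these get the full simulation
SECONDS_PER_EVENT = 150         # on a 1 GHz processor
OUTPUT_GB_PER_JOB = 1.0         # one ZEBRA output partition per job

jobs = PARTITIONS * JOBS_PER_PARTITION
input_gb = PARTITIONS * 1.8
cpu_hours = jobs * FILTERED_EVENTS_PER_JOB * SECONDS_PER_EVENT / 3600
output_gb = jobs * OUTPUT_GB_PER_JOB

print(f"{jobs} jobs: {input_gb:.0f} GB in, ~{cpu_hours:.0f} CPU-hours, {output_gb:.0f} GB out")
# -> 100 jobs: 9 GB in, ~1875 CPU-hours (quoted as ~2000), 100 GB out
```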

ALICE
ALICE assume that as soon as a stable version of EDG 1.2.n is tested and validated, it will be progressively installed on 'all' EDG testbed sites.
As new sites come online, an automatic tool will be used to submit test jobs of increasing output size and duration.
They believe AliEn has more functionality than EDG in data management, e.g.:
–hierarchy of data/metadata catalogues
–a C++ API which allows data access via 'open' calls from AliRoot
They are implementing an interface between AliEn and EDG (see the sketch below):
–Job submission will work in pull mode according to the availability of WNs on CEs; a job is moved to an EDG UI and submitted to the RB
–EDG nodes will have access to the AliEn catalogue to register/query data
–AliEn nodes will be configured as EDG SEs, so EDG nodes can read/write/retrieve data on ALICE-controlled farms
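A hypothetical Python sketch of the pull-mode bridging described above; the function names and the AliEn/EDG wrappers are illustrative placeholders, not the real AliEn or EDG APIs:

```python
# Hypothetical sketch of pull-mode job forwarding from AliEn to EDG.
import subprocess

def edg_resources_free(minimum_cpus: int = 1) -> bool:
    """Assumed check for free worker nodes behind the EDG CEs (e.g. via the
    information index); here just a placeholder returning True."""
    return True

def pull_job_from_alien_queue():
    """Placeholder for pulling the next waiting job from the AliEn task queue."""
    return {"id": 42, "jdl": "/tmp/alien_job_42.jdl"}

def submit_via_edg_ui(job) -> str:
    """Move the job description to the EDG UI and hand it to the Resource
    Broker. 'edg-job-submit' is recalled as the EDG 1.2 CLI, but treat the
    exact invocation as an assumption."""
    out = subprocess.run(["edg-job-submit", job["jdl"]],
                         capture_output=True, text=True, check=True)
    return out.stdout.strip()   # job identifier printed by the broker

if edg_resources_free():
    job = pull_job_from_alien_queue()
    job_id = submit_via_edg_ui(job)
    print(f"AliEn job {job['id']} forwarded to EDG as {job_id}")
```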

ALICE planning for tests
At the moment ALICE do not plan a "data challenge" with EDG. They will concentrate on the AliEn/EDG interface and on the AliRoot/EDG interface, in particular for items concerning data management. However, they do plan a data transfer test, as close as possible to the expected data transfer rate for a real production and analysis.
Concerning the AliEn/EDG interface, they have configured the Torino farm both as an AliEn client and as an EDG UI, CE and SE. They plan to try there the first jobs sent to AliEn and automatically moved to EDG, and also to test access to data registered in the AliEn catalogue from an EDG job.
In addition to Torino, CERN, CNAF, Nikhef, Catania and Lyon will be used for the first tests.
CPU and storage requirements can be tailored to the availability of facilities on the testbed, but some scheduling and priorities will be needed.

CMS
CMS are currently running production for the DAQ Technical Design Report (TDR). This requires the full chain of CMS software and production tools, including the use of Objectivity.
The 5% Data Challenge (DC04) will start in Summer 2003 and last ~7 months, producing 5*10**7 events. In the last month all data will be reconstructed and distributed to Tier-1/2 centres for analysis.
–1000 CPUs for 5 months, 100 TB of output (see the rough per-event sketch below)
Use of Grid tools:
–Will not be used for the current production
–Plan to use them in DC04
–Currently testing the integration of the CMS environment in the EDG+EDT testbeds; will also test integration in the EDT+iVDGL test layout
–EDG 1.2 will be used for scale and performance tests (proof of concept): tests on the RB, RC and GDMP. Objectivity will be needed for the tests.
–V2 of EDG seems the best candidate for DC04 starting Summer 2003 (it has the functionality required by CMS)
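For orientation, a rough back-of-the-envelope sketch (assuming 30-day months and full CPU efficiency, neither of which is stated on the slide) of what the DC04 numbers imply per event:

```python
# Back-of-the-envelope check (not from the slide) of the implied per-event cost.
events = 5e7
cpus, months = 1000, 5
output_tb = 100

cpu_seconds = cpus * months * 30 * 86400
print(f"~{cpu_seconds / events:.0f} CPU-seconds per event")  # ~259 s/event
print(f"~{output_tb * 1e6 / events:.1f} MB per event")        # ~2.0 MB/event
```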

Facilities for CMS use of the testbed up to Christmas
Sites (current; will be expanded for DC04):
–EDG 1.2: IC, RAL, CNAF/BO, Padova, CERN, Nikhef, IN2P3, Ecole Poly, ITEP
–EDT: IC, CNAF/BO, Padova, Ecole Poly
CPU/storage (modest requirements):
–~50 CPUs distributed over the sites (quasi-dedicated for priority running in 'agreed' periods)
–~200 GB of disk store at a few SEs

LHCb
The first intensive Data Challenge starts in Oct 2002; intensive pre-tests are currently being done at all sites.
Participating sites for 2002:
–CERN, Lyon, Bologna, Nikhef, RAL
–Bristol, Cambridge, Edinburgh, Imperial, Oxford, ITEP Moscow, Rio de Janeiro
Use of the EDG Testbed:
–Install the latest OO environment on the testbed sites; flexible job submission, Grid/non-Grid
–First tests (now): run a 500-event MC generation, store the output on an SE, recover data and histograms to CERN, run reconstruction with output to an SE, recover log files and histograms, write the reconstruction output to Castor, read the Castor data back with an analysis job (a sketch of this chain is given below)
–Large-scale production tests (by October)
–Production (if the tests are OK): aim to do a percentage of the production on the Testbed
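A hypothetical outline of the "first tests" chain; the step functions are placeholders standing in for the real LHCb production scripts and the EDG data-management commands, which are not shown on the slide:

```python
# Hypothetical outline of the LHCb first-test chain on the EDG testbed.

def run_mc_generation(n_events=500): ...
def store_on_se(files): ...
def recover_to_cern(files): ...
def run_reconstruction(inputs): ...
def write_to_castor(outputs): ...
def run_analysis_from_castor(castor_paths): ...

def first_test_chain():
    mc = run_mc_generation(500)             # 500-event MC generation on the testbed
    store_on_se(mc)                         # register the output on a Storage Element
    recover_to_cern(mc)                     # bring data and histograms back to CERN
    recon = run_reconstruction(mc)          # reconstruction, output again to an SE
    store_on_se(recon)
    castor_paths = write_to_castor(recon)   # archive the reconstruction output in Castor
    run_analysis_from_castor(castor_paths)  # read it back with an analysis job
```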

Facility requirements for LHCb DC1
–Total of ~3*10**7 events (signal + background) for detector studies
–Total CPU ~30K GHz-days, i.e. 50 days on 600 1-GHz CPUs. A very rough scenario, not contractual: 300 CERN, 100 Bologna, 100 RAL, 50 Lyon, 50 Nikhef + others. How much of this could be on the Testbed?
–Storage for 3*10**7 events: brute force (2 MB/event) 60 TB! Compressed to 200 kB/event, 6 TB (the demand on Castor at CERN); further compressed for analysis (20 kB/event), 0.6 TB
–With 600 CPUs (1 GHz), 47 days are needed (45 days if we do not reconstruct the minimum bias); the arithmetic is sketched below
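A quick check of these figures (treating 1 TB as 10^9 kB):

```python
# Quick check (not from the slide) of the DC1 CPU and storage figures.
events = 3e7
cpu_ghz_days = 30_000          # total CPU quoted as ~30K GHz-days
farm_ghz = 600 * 1.0           # 600 CPUs at 1 GHz

print(f"{cpu_ghz_days / farm_ghz:.0f} days on the 600-CPU farm")  # 50 days

for label, kb_per_event in [("raw (2 MB/ev)", 2000),
                            ("compressed (200 kB/ev)", 200),
                            ("analysis (20 kB/ev)", 20)]:
    tb = events * kb_per_event / 1e9   # kB -> TB (decimal units)
    print(f"{label}: {tb:.1f} TB")     # 60, 6 and 0.6 TB respectively
```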

General comments from the experiments and the 'loose cannons' on expectations for the Testbed

Operation of sites
Stability is more important than size:
–Application testbed sites have to be available most of the time
–Smaller testbed sites might introduce instability into the whole Grid
–Jobs typically run for 24 hours; this sets the scale of the needed stability (~weeks)
Configuration of sites:
–Must be tested and documented (a flag in the Information Index?)
–Probably not all sites will be set up properly; EDG must cope with this situation
Various testbeds:
–The need for separate Development, Validation and Application testbeds has been clearly demonstrated

Monitoring and documentation
Status of the Grid:
–Up to now no real overview of the status of the Grid is available
–Users can only guess about the resources available
–A central status web page would be useful (a la the CERN IT status pages)
Documentation:
–It is all there, but very scattered
–"Unified" modular documentation is needed (and thanks for the big current efforts): User Guide, Release Notes, Installation Guide, ...
–Please don't mix these: 150-page manuals that cover EVERYTHING will not be read, and the user will not find the relevant information

Support
Middleware WP contact persons are needed:
–It is sometimes difficult for the applications to identify the proper person in a middleware WP; dedicated contacts could help
–The turn-around time in Bugzilla is sometimes too long; these contacts could speed things up
Debugging:
–In certain cases it takes a long time until the proper person is assigned to a given bug in Bugzilla (see above)
–More intensive testing is required during integration; this should be covered by the EDG test group (as agreed at Chavannes)

Summary
The current WP8 top-priority activity is the ATLAS/EDG Task Force work
–Please come to the O Smirnova talk tomorrow afternoon in the WP8 session
Task Force flavoured activities will continue with the other experiments
Current use of the Testbed is focused on the main sites (CERN, Lyon, Nikhef, CNAF, RAL), mainly for reasons of support
Once stability is achieved (see the ATLAS/EDG work) this will expand to other sites, but we should be careful in the selection of these sites; local support is essential
MAIN ISSUES (process, management and people):
–When will we be ready for a broader distribution of EDG 1.2.n with good documentation?
–Installation, validation and support of software at 'non-major' sites
–Relationship between support for 'gridified' applications and middleware support at sites (user support)
–Organisation of VOs in the EDG testbed: quotas and priorities within a VO will need to be defined, priority periods for particular VOs will need to be organised, and use will need to be scheduled (ATLAS may have a big demand?)
THANKS
–To the members of IT for their heroic efforts over the past months