NA4/HEP work
F. Harris (Oxford/CERN), M. Lamanna (CERN)
NA4 Open Meeting, Catania, Jul 15-16
EGEE is a project funded by the European Commission under contract IST-2003-508833

Slide 2: Contents
- Reminder of NA4/HEP aims in the LCG/EGEE context
- Overview of LHC experiment data challenges and issues
- Overview of ARDA aims and status

Slide 3: LHC Experiments (ALICE, ATLAS, CMS, LHCb)
- Storage: raw recording rate 0.1-1 GByte/s, accumulating at 5-8 PetaByte/year, with 10 PetaByte of disk
- Processing: the equivalent of 100,000 of today's fastest PCs
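As a rough cross-check of these figures (an illustration only; the assumed ~1e7 seconds of effective data taking per year is not stated on the slide):

    seconds_per_year = 1.0e7                    # assumed effective data-taking time per year
    rate_low_gb_s, rate_high_gb_s = 0.1, 1.0    # raw recording rate from the slide, in GByte/s
    gb_per_pb = 1.0e6                           # 1 PetaByte = 1e6 GByte

    low_pb = rate_low_gb_s * seconds_per_year / gb_per_pb
    high_pb = rate_high_gb_s * seconds_per_year / gb_per_pb
    print(f"{low_pb:.0f} to {high_pb:.0f} PB/year")   # ~1 to 10 PB/year, bracketing the 5-8 PB/year quoted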

Slide 4: LHC Computing Model (simplified!!)
- Tier-0 (the accelerator centre): filter raw data; reconstruction producing summary data (ESD); record raw data and ESD; distribute raw and ESD to the Tier-1s
- Tier-1: permanent storage and management of raw, ESD, calibration data, metadata, analysis data and databases (a grid-enabled data service); data-heavy analysis; re-processing raw to ESD; national and regional support; "online" to the data acquisition process; high availability, long-term commitment, managed mass storage
- Tier-2: well-managed, grid-enabled disk storage; simulation; end-user analysis, batch and interactive; high-performance parallel analysis (PROOF)
- Sites shown on the slide include RAL, IN2P3, BNL, FZK, CNAF, PIC, ICEPP, FNAL, USC, NIKHEF, Krakow, CIEMAT, Rome, Taipei, TRIUMF, CSCS, Legnaro, UB, IFCA, IC, MSU, Prague, Budapest and Cambridge, ranging from Tier-1 centres through smaller Tier-2 centres down to desktops and portables
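A minimal sketch of the Tier-0 pass described above (a hypothetical illustration; the function names are placeholders, not LCG code):

    TIER1_SITES = ["RAL", "IN2P3", "BNL", "FZK", "CNAF", "PIC", "ICEPP", "FNAL"]

    def passes_filter(event):
        return True                                  # stub for the raw-data filter

    def reconstruct(event):
        return {"esd_of": event}                     # stub producing the event summary data (ESD)

    def record(raw, esd):
        print(f"recorded {len(raw)} raw events and {len(esd)} ESD records")   # Tier-0 mass storage

    def distribute(site, raw, esd):
        print(f"shipped {len(raw)} raw + {len(esd)} ESD events to {site}")    # stub transfer

    def process_run(raw_events):
        accepted = [e for e in raw_events if passes_filter(e)]    # filter raw data
        esd = [reconstruct(e) for e in accepted]                  # reconstruction -> ESD
        record(accepted, esd)                                     # record raw data and ESD
        for site in TIER1_SITES:
            distribute(site, accepted, esd)                       # distribute raw and ESD to the Tier-1s

    process_run(raw_events=list(range(10)))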

Slide 5: Data distribution, ~70 Gbits/sec
[Map of the sites from the previous slide (RAL, IN2P3, BNL, FZK, CNAF, USC, PIC, ICEPP, FNAL, NIKHEF, Krakow, Taipei, CIEMAT, TRIUMF, Rome, CSCS, Legnaro, UB, IFCA, IC, MSU, Prague, Budapest, Cambridge) with the data-distribution flows between them]

Slide 6: LCG service status, Jul 9, 2004

Slide 7: HEP applications and data challenges using LCG-2
- All experiments follow the same pattern of event simulation, reconstruction and analysis in production mode (with some limited 'user' analysis)
- All are testing their computing models using Tier-0/1/2 with different levels of ambition, and doing physics with the data; larger-scale user analysis will come with ARDA
- All have LCG-2 co-existing with the use of other grids
- ALICE and CMS started around February, LHCb in May, ATLAS is just getting going; D0 is also making some use of LCG
- The next slides give a flavour of the work done so far by the experiments; regular reports are given in the LCG GDB and PEB (see the reports of June 14 at LCG/GDB)
- All are very happy with the LCG user-support 'attitude', which has been very cooperative

Slide 8: CMS DC04 layout (included US Grid3 and LCG-2 resources)
[Diagram: the Tier-0 at CERN comprises Castor, the IB, a fake on-line process, the RefDB, the POOL RLS catalogue, the TMDB, ORCA RECO jobs, the GDB and EB, and Tier-0 data distribution agents feeding the LCG-2 services; each Tier-1 runs a Tier-1 agent with T1 storage, MSS, ORCA analysis jobs and ORCA grid jobs; each Tier-2 holds T2 storage on which physicists run local ORCA jobs]
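A minimal sketch of what one of the "Tier-0 data distribution agents" in this layout might look like; the catalogue and transfer helpers are hypothetical placeholders, not the actual DC04 implementation:

    import time

    TIER1_SITES = ["CNAF", "FNAL", "PIC"]            # Tier-1s named on the next slide

    def new_reco_files():
        return []                                    # stub: e.g. LFNs newly registered in the RLS/TMDB

    def transfer_to(site, lfn):
        print(f"copying {lfn} to {site}")            # stand-in for an SRM / gridftp copy

    def mark_distributed(lfn):
        print(f"{lfn} distributed")                  # record completion so the file is not re-sent

    def distribution_agent(cycles=3, poll_seconds=1):
        for _ in range(cycles):                      # a real agent would loop forever
            for lfn in new_reco_files():
                for site in TIER1_SITES:
                    transfer_to(site, lfn)
                mark_distributed(lfn)
            time.sleep(poll_seconds)

    distribution_agent()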

Slide 9: CMS DC04 processing rate
- Processed about 30M events, but DST "errors" make this pass not useful for analysis
- Generally kept up at the Tier-1s at CNAF, FNAL and PIC
- Got above 25 Hz on many short occasions, but only one full day above 25 Hz with the full system
- Working now to document the many different problems

Slide 10: Structure of ALICE event production
[Diagram: the central AliEn servers handle master job submission, job splitting (500 sub-jobs), the RB, the file catalogue, process control and the SEs; sub-jobs are processed on AliEn CEs and, via the AliEn-LCG interface and the LCG RB, on LCG CEs (LCG is one AliEn CE); output files go to storage in CERN CASTOR (disk servers, tape) through the AIOD file transfer system]
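The master-job / sub-job splitting pattern described here can be illustrated with a short sketch (hypothetical helper names, not AliEn code):

    def pick_compute_element(subjob):
        return f"CE-{subjob['id'] % 3}"              # stub: the RB/AliEn chooses the CE; LCG appears as one AliEn CE

    def submit_to_ce(ce, subjob):
        print(f"sub-job {subjob['id']} -> {ce}")     # stub submission

    def run_master_job(total_events, n_subjobs=500):
        per_job = total_events // n_subjobs          # split the workload into 500 sub-jobs
        subjobs = [{"id": i, "first_event": i * per_job, "n_events": per_job}
                   for i in range(n_subjobs)]
        for job in subjobs:
            submit_to_ce(pick_compute_element(job), job)
        return subjobs

    run_master_job(total_events=500_000)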

Slide 11: ALICE data challenge history
- Start 10/03, end 29/05 (58 days active)
- Maximum jobs running in parallel: 1450; average during the active period: 430
- CEs: Bari, Catania, Lyon, CERN-LCG, CNAF, Karlsruhe, Houston (Itanium), IHEP, ITEP, JINR, LBL, OSC, Nantes, Torino, Torino-LCG (grid.it), Pakistan, Valencia (+12 others)
- [Plot: running jobs vs. time, summed over all sites]

Slide 12: LHCb grid production (driven by DIRAC)
[Diagram: the production manager and users (GANGA UI, user CLI, BK query web page, FileCatalog browser) interact with the DIRAC services, namely the Job Management Service, JobMonitorSvc, JobAccountingSvc (with an AccountingDB and job monitor), InformationSvc, FileCatalogSvc, MonitoringSvc and BookkeepingSvc; the DIRAC resources are DIRAC CEs, DIRAC sites running an agent in front of CE 1, CE 2 and CE 3, and the LCG Resource Broker with its CEs; DIRAC storage covers disk files accessed via gridftp, bbftp and rfio]
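DIRAC works as a pull model: agents running at the sites ask the central Job Management Service for work that matches the local resources and hand it to the local CE. A minimal sketch of that loop (hypothetical helper names, not DIRAC code):

    import time

    def request_job(site):
        return None                                   # stub: ask the Job Management Service for matching work

    def submit_to_local_batch(job):
        return "local-0001"                           # stub: hand the job to the local CE / batch system

    def report_status(job_id, status, local_id):
        print(job_id, status, local_id)               # stub: feeds the monitoring / accounting services

    def site_agent(site_name, cycles=3, poll_seconds=1):
        for _ in range(cycles):                       # a real agent would loop forever
            job = request_job(site=site_name)
            if job is None:
                time.sleep(poll_seconds)              # nothing matched, back off and poll again
                continue
            local_id = submit_to_local_batch(job)
            report_status(job["id"], "submitted", local_id)

    site_agent("example-dirac-site")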

Slide 13: LHCb DC'04 accounting

Slide 14: ATLAS DC2: goals
The goals include:
- Full use of Geant4, POOL and the LCG applications
- Pile-up and digitization in Athena
- Deployment of the complete Event Data Model and the Detector Description
- Simulation of the full ATLAS detector and the 2004 combined test beam
- Test of the calibration and alignment procedures
- Large-scale physics analysis
- Computing model studies (document end 2004)
- Wide use of the Grid middleware and tools
- Run as much as possible of the production on grids
- Demonstrate the use of multiple grids (LCG-2, NorduGrid, US Grid3)

Slide 15: "Tiers" in ATLAS DC2 (rough estimate)

Country          "Tier-1"   Sites   Grid        kSI2k
Australia        -          -       NG             12
Austria          -          -       LCG             7
Canada           TRIUMF     7       LCG           331
CERN             -          1       LCG           700
China            -          -       LCG            30
Czech Republic   -          -       LCG            25
France           CCIN2P3    1       LCG          ~140
Germany          GridKa     3       LCG            90
Greece           -          -       LCG            10
Israel           -          2       LCG            23
Italy            CNAF       5       LCG           200
Japan            Tokyo      1       LCG           127
Netherlands      NIKHEF     1       LCG            75
NorduGrid        NG         ~30     NG            380
Poland           -          -       LCG            80
Russia           -          -       LCG           ~70
Slovakia         -          -       LCG             -
Slovenia         -          -       NG              -
Spain            PIC        4       LCG            50
Switzerland      -          -       LCG            18
Taiwan           ASTW       1       LCG            78
UK               RAL        8       LCG         ~1000
US               BNL        28      Grid3/LCG   ~1000
Total                                           ~4500
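A quick consistency check of the quoted total (illustration only; the blank Slovakia and Slovenia entries are treated as zero and the "~" values as exact):

    ksi2k = {
        "Australia": 12, "Austria": 7, "Canada": 331, "CERN": 700, "China": 30,
        "Czech Republic": 25, "France": 140, "Germany": 90, "Greece": 10,
        "Israel": 23, "Italy": 200, "Japan": 127, "Netherlands": 75,
        "NorduGrid": 380, "Poland": 80, "Russia": 70, "Spain": 50,
        "Switzerland": 18, "Taiwan": 78, "UK": 1000, "US": 1000,
    }
    print(sum(ksi2k.values()))   # 4446, consistent with the ~4500 kSI2k quoted on the slide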

Slide 16: General comments on data challenges
- All experiments are making production use of LCG-2, and stability has steadily improved; LCG has learned from the experiments' feedback, and this will be ongoing until 2005
- Some issues are being followed up with LCG in a cooperative manner (following input from CMS, ALICE and LHCb):
  - Mass storage (SRM) support for a variety of devices (not just CASTOR)
  - Debugging is hard when problems arise (develop monitoring)
  - Flexible software installation for analysis is still being developed
  - Site certification is being developed
  - Disk storage reservation for jobs
  - Developing use of the RB (batches of jobs, job distribution to sites, ...)
  - File transfer stability
  - RLS performance issues (use of metadata and catalogues)
- We are learning; data challenges are continuing
- Experiments are using multiple grids
- Looking to ARDA for prototype user analysis

Slide 17: ARDA (Architectural Roadmap for Distributed Analysis)
[Diagram: the four distributed-analysis prototypes (ALICE, ATLAS, CMS and LHCb Distributed Analysis) sit between the EGEE middleware and resource providers on one side and the experiment communities on the other, building on components such as SEAL, PROOF, GAE and POOL; ARDA provides collaboration, coordination, integration, specification, priorities, planning and experience, driven by use cases, with input from EGEE NA4 (application identification and support) and the LCG-GAG (Grid Application Group)]