Current Monte Carlo calculation activities in ATLAS (ATLAS Data Challenges) Oxana Smirnova LCG/ATLAS, Lund University SWEGRID Seminar (April 9, 2003, Uppsala)

ATLAS: preparing for data taking

Data Challenge 1 (DC1)
Event generation was completed during DC0. Main goals of DC1:
- Produce simulated data for the High Level Trigger and the physics groups
- Reconstruction and analysis on a large scale: learn about the data model and I/O performance, identify bottlenecks, etc.
- Data management: use and evaluate persistency technology, learn about distributed analysis
- Involvement of sites outside CERN
- Use of the Grid as and when possible and appropriate

DC1, Phase 1: Task Flow
Example: one sample of di-jet events.
- PYTHIA event generation: 1.5 × 10^7 events, split into partitions (read: ROOT files) of 10^5 events each
- Detector simulation: 20 jobs per partition, ZEBRA output
[Task-flow diagram: Pythia6 di-jet event generation (Athena-Root I/O, HepMC) feeds the Atlsim/Geant3 + Filter detector-simulation jobs (Zebra output); each job reads 5000 events and writes Hits/Digits/MCTruth for the ~450 events that pass the filter.]
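For concreteness, here is a minimal sketch in Python of the job arithmetic implied by these numbers; all figures are taken from the slide, and the variable names are illustrative, not ATLAS tools.

```python
# Minimal sketch of the DC1 Phase 1 job arithmetic for the di-jet sample.
# All numbers come from the slide; names are illustrative only.

TOTAL_EVENTS = 15_000_000        # 1.5 x 10^7 generated di-jet events
EVENTS_PER_PARTITION = 100_000   # one partition = one set of ROOT files
JOBS_PER_PARTITION = 20          # detector-simulation jobs per partition
FILTER_EFFICIENCY = 450 / 5000   # ~450 of 5000 simulated events pass the filter

partitions = TOTAL_EVENTS // EVENTS_PER_PARTITION            # 150 partitions
simulation_jobs = partitions * JOBS_PER_PARTITION            # 3000 jobs
events_per_job = EVENTS_PER_PARTITION // JOBS_PER_PARTITION  # 5000 events
filtered_events = int(TOTAL_EVENTS * FILTER_EFFICIENCY)      # ~1.35 x 10^6

print(f"{partitions} partitions, {simulation_jobs} simulation jobs,")
print(f"{events_per_job} events per job, ~{filtered_events:,} events after the filter")
```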

Piling up events

Future: DC2, DC3, DC4, …
DC2: originally Q3/2003 – Q2/2004, will be delayed.
- Goals: full deployment of the Event Data Model & Detector Description; transition to the new generation of software tools and utilities; test the calibration and alignment procedures; perform large-scale physics analysis; further tests of the computing model
- Scale: as for DC1, ~10^7 fully simulated events
DC3: Q3/2004 – Q2/2005; goals to be defined; scale: 5 × DC2
DC4: Q3/2005 – Q2/2006; goals to be defined; scale: 2 × DC3
Sweden can try to provide a ca. 3-5% contribution (?)
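A back-of-the-envelope sketch of what these multipliers imply, assuming DC2 stays at the DC1-like ~10^7 fully simulated events and taking the tentative 3-5% Swedish share at face value (projections from the slide, not agreed figures):

```python
# Projected fully simulated event counts from the scale factors on the slide.
DC2_EVENTS = 1e7             # "as for DC1: ~10^7 fully simulated events"
DC3_EVENTS = 5 * DC2_EVENTS  # "scale: 5 x DC2" -> 5e7
DC4_EVENTS = 2 * DC3_EVENTS  # "scale: 2 x DC3" -> 1e8

for name, events in [("DC2", DC2_EVENTS), ("DC3", DC3_EVENTS), ("DC4", DC4_EVENTS)]:
    low, high = 0.03 * events, 0.05 * events   # tentative 3-5% Swedish share
    print(f"{name}: {events:.0e} events, Swedish share {low:.0e}-{high:.0e}")
```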

DC requirements so far
Integrated DC1 numbers: 50+ institutes in 20+ countries. Sweden enters together with the other Nordic countries via NorduGrid.
- 3500 "normalized CPUs" (80000 CPU-days); Nordic share: equivalent of 320 "normalized CPUs" (ca. 80 in real life)
- 5 × 10^7 events generated (no Nordic participation)
- 1 × 10^7 events simulated; Nordic: ca. 3 × …
- … TB produced (… files of output); Nordic: ca. 2 TB, 4600 files
More precise quantification is VERY difficult because of the orders-of-magnitude differences in complexity between different physics channels and processing steps.
1. CPU time consumption: largely unpredictable, VERY irregular
2. OS: GNU/Linux, 32-bit architecture
3. Inter-processor communication: never been a concern so far (no MPI needed)
4. Memory consumption: depends on the processing step/data set; so far 512 MB has been enough
5. Data volumes: vary from KB to GB per job
6. Data access pattern: mostly unpredictable, irregular
7. Databases: each worker node is expected to be able to access a remote database
8. Software: under constant development, will certainly exceed 1 GB, includes multiple dependencies on HEP-specific software, sometimes licensed
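For orientation, a small sketch of what the integrated DC1 accounting above implies; the numbers are derived purely from the figures quoted on this slide, and the "normalization factor" between normalized and physical CPUs is an inference from the Nordic share, not an official ATLAS definition.

```python
# Rough bookkeeping derived from the integrated DC1 numbers quoted above.
TOTAL_NORMALIZED_CPUS = 3500
TOTAL_CPU_DAYS = 80_000
NORDIC_NORMALIZED_CPUS = 320
NORDIC_PHYSICAL_CPUS = 80     # "ca. 80 in real life"
NORDIC_DATA_TB = 2.0
NORDIC_FILES = 4600

# Average running time if 80000 CPU-days are spread over 3500 normalized CPUs.
avg_days_per_cpu = TOTAL_CPU_DAYS / TOTAL_NORMALIZED_CPUS             # ~23 days
# How many "normalized CPU" units each physical Nordic CPU corresponds to.
normalization_factor = NORDIC_NORMALIZED_CPUS / NORDIC_PHYSICAL_CPUS  # ~4
# Average size of a Nordic output file.
avg_file_gb = NORDIC_DATA_TB * 1024 / NORDIC_FILES                    # ~0.45 GB

print(f"~{avg_days_per_cpu:.0f} CPU-days per normalized CPU, "
      f"normalization factor ~{normalization_factor:.0f}, "
      f"average Nordic file ~{avg_file_gb:.2f} GB")
```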

And a bit about the Grid
ATLAS DC has run on the Grid since summer 2002 (NorduGrid, US Grid). Future DCs will be to a large extent (if not entirely) gridified; allocated computing facilities must have all the necessary Grid middleware in place (but ATLAS will not provide support for it).
Grids that we have tried:
- NorduGrid – a Globus-based solution developed in the Nordic countries; provides a stable and reliable facility and executes the entire Nordic share of the DCs
- US Grid (iVDGL) – basically the Globus tools, hence missing high-level services, but still serves ATLAS well, executing ca. 10% of the US DC share
- EU DataGrid (EDG) – a far more complex solution (but also Globus-based), still in development and not yet suitable for production, though it can perform simple tasks; did not contribute to the DCs
Grids that are coming:
- LCG: will initially be strongly based on EDG, hence may not be reliable before 2004
- EGEE: another continuation of EDG, still at the proposal-preparation stage
- Globus is moving towards a Grid Services architecture – this may imply major changes both in existing solutions and in planning
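To illustrate the kind of matchmaking that "allocated facilities must have all the necessary Grid middleware" boils down to, here is a toy sketch; it is not any ATLAS or middleware tool, and the thresholds simply restate the requirements numbered on the previous slide.

```python
# Toy matchmaking check: does a site satisfy the DC job requirements listed above?
# Illustrative only; real brokering was done by the respective Grid middleware.

DC_REQUIREMENTS = {
    "os": "GNU/Linux",     # requirement 2: GNU/Linux, 32-bit architecture
    "min_memory_mb": 512,  # requirement 4: 512 MB has been enough so far
    "min_scratch_gb": 1,   # requirement 8: the software alone will exceed 1 GB
}
KNOWN_MIDDLEWARE = {"NorduGrid", "iVDGL/Globus", "EDG"}  # the Grids tried so far

def site_can_run_dc(site: dict) -> bool:
    """Return True if a site description meets the minimal DC requirements."""
    return (site.get("os") == DC_REQUIREMENTS["os"]
            and site.get("memory_mb", 0) >= DC_REQUIREMENTS["min_memory_mb"]
            and site.get("outbound_network", False)  # requirement 7: remote DB access
            and site.get("scratch_gb", 0) >= DC_REQUIREMENTS["min_scratch_gb"]
            and site.get("middleware") in KNOWN_MIDDLEWARE)

example_site = {"os": "GNU/Linux", "memory_mb": 1024, "outbound_network": True,
                "scratch_gb": 10, "middleware": "NorduGrid"}
print(site_can_run_dc(example_site))  # True
```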