The ALICE Framework at GSI
Kilian Schwarz, ALICE Meeting, August 1, 2005

Overview
- ALICE framework
- Which parts of the ALICE framework are installed where at GSI, and how they can be accessed/used
- ALICE computing model (Tier architecture)
- Resource consumption of individual tasks
- Resources at GSI and GridKa

ALICE Framework (diagram, F. Carminati, CERN): AliRoot is built on top of ROOT and interfaced to AliEn. The STEER module drives simulation and reconstruction; the Virtual MC layer gives uniform access to the transport engines G3, G4 and FLUKA; event generators (HIJING, MEVSIM, PYTHIA6, PDF, ISAJET) are integrated via EVGEN; detector modules include CRT, EMCAL, ZDC, FMD, ITS, MUON, PHOS, PMD, TRD, TPC, TOF, START, RICH and STRUCT; further packages include RALICE, HBTP and HBTAN.

Software installed at GSI: AliRoot
- Installed at: /d/alice04/PPR/AliRoot
- Newest version: AliRoot v
- Environment setup via:
  >. gcc32login
  >. alilogin dev/new/pro/version-number
- gcc not supported anymore
- The corresponding ROOT version is initialized, too
- Responsible person: Kilian Schwarz
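To make the "resource consumption of individual tasks" discussed later more concrete, here is a minimal sketch of how an AliRoot simulation and reconstruction pass is typically driven from a ROOT macro. It assumes the AliSimulation/AliReconstruction steering classes of this AliRoot generation; the detector setup and generator come from the Config.C of the selected version, and the macro name is hypothetical.

  // runsimrec.C (hypothetical name): steer one simulation + reconstruction pass
  void runsimrec()
  {
    AliSimulation sim;        // event generation, transport and digitisation, configured by Config.C
    sim.Run(10);              // simulate 10 events
    AliReconstruction rec;    // reconstruction chain producing ESDs
    rec.Run();                // reconstruct the events just produced
  }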

Software installed at GSI: ROOT (AliRoot is heavily based on ROOT)
- Installed at: /usr/local/pub/debian3.0/gcc323-00/rootmgr
- Newest version:
- Environment setup via:
  >. gcc32login / alilogin or rootlogin
- Responsible persons: Joern Adamczewski / Kilian Schwarz
- See also:

Software installed at GSI: geant3 (needed for simulation, accessed via the VMC)
- Installed at: /d/alice04/alisoft/PPR/geant3
- Newest version: v1-3
- Environment setup via gcc32login/alilogin
- Responsible person: Kilian Schwarz
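As a reminder of what "accessed via the VMC" means in practice, here is a minimal Config.C-style sketch, assuming the TGeant3 interface class of geant3_vmc; the physics switch shown is only an illustrative example.

  // Config.C fragment (sketch): select Geant3 as transport engine through the Virtual MC
  new TGeant3("C++ Interface to Geant3");   // after this, the global gMC points to Geant3
  gMC->SetProcess("DCAY", 1);               // example Geant3 physics flag: enable decays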

Software at GSI: geant4/Fluka (simulation, accessed via the VMC)
- Both are so far not heavily used by ALICE
- Geant4: standalone versions up to G4.7.1
- Newest VMC version: geant4_vmc_1.3
- Fluka: not installed so far by me
- Environment setup via:
  >. gsisimlogin [-vmc] dev/new/prod/version
- See also: http://www-linux.gsi.de/~gsisim/g4vmc.html
- Responsible person: Kilian Schwarz

Software at GSI: event generators (task: simulation)
- Installed at: /d/alice04/alisoft/PPR/evgen
- Available:
  - Pythia5
  - Pythia6
  - Venus
- Responsible person: Kilian Schwarz
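A minimal sketch of how one of these generators is typically attached to the simulation in Config.C, assuming the AliGenPythia wrapper from the EVGEN module shown on the framework slide; process, energy and momentum settings are illustrative placeholders.

  // Config.C fragment (sketch): plug in a Pythia6-based event generator
  AliGenPythia *gener = new AliGenPythia(-1);  // -1: multiplicity chosen by Pythia
  gener->SetProcess(kPyMb);                    // example: minimum-bias p-p events
  gener->SetEnergyCMS(14000.);                 // placeholder centre-of-mass energy
  gener->SetMomentumRange(0., 999.);
  gener->Init();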

Software at GSI: AliEn, the ALICE Grid Environment
- Currently being set up in version 2 (AliEn2)
- Installed at: /u/aliprod/alien
- Idea: global production and analysis
- Environment setup via: . alienlogin
- Copy certificates from /u/aliprod/.globus or register your own certificates
- Usage: /u/aliprod/bin/alien (proxy-init/login)
- Then: register files and submit Grid jobs
- Or: directly from ROOT !!! (see the sketch after this slide)
- Status: a global AliEn2 production testbed is currently being set up; it will be used for LCG SC3 in September
- Individual analysis of globally distributed Grid data at the latest during LCG SC via AliEn/LCG/PROOF
- Non-published analysis is possible already now:
  - create an AliEn-ROOT collection (an xml file readable via AliEn)
  - analyse via ROOT/PROOF (TFile::Open("alien://alice/cern.ch/production/…"))
  - a web frontend is being created via ROOT/QT
- Responsible person: Kilian Schwarz
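A minimal sketch of the "directly from ROOT" path mentioned above: connect to AliEn through ROOT's TGrid plugin, then open a catalogue entry with an alien:// URL. The file path is a placeholder, not a real catalogue entry, and a valid Grid certificate/proxy is assumed.

  // alienopen.C (hypothetical name): read an AliEn-registered file from ROOT
  void alienopen()
  {
    TGrid::Connect("alien://");    // authenticate against the AliEn services
    TFile *f = TFile::Open("alien:///alice/cern.ch/production/example/galice.root");  // placeholder path
    if (f && !f->IsZombie())
      f->ls();                     // list the objects stored in the file
  }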

AliEn2 services (diagram)
- ALICE VO, central services: Central Task Queue, job submission, File Catalogue, configuration, accounting, user authentication
- ALICE VO, site services: Computing Element (workload management), Job Monitoring, Storage Element(s), data transfer, Cluster Monitor
- Existing site components integrated by the site services: local scheduler, database, disk and MSS

Software at GSI: Globus
- Installed at: /usr/local/globus2.0 and /usr/local/grid/globus
- Versions: globus2.0 and 2.4
- Idea: can be used to send batch jobs to GridKa (far more resources available than at GSI)
- Environment setup via: . globuslogin
- Usage:
  > grid-proxy-init (Grid certificate needed !!!)
  > globus-job-run/submit alice.fzk.de Grid/Batch job
- Responsible persons: Victor Penso / Kilian Schwarz

GermanGrid CA
- How to get a certificate in detail: see

Software at GSI: LCG
- Installed at: /usr/local/grid/lcg
- Newest version: LCG2.5
- Idea: global batch farm
- Environment setup via: . lcglogin
- Usage:
  > grid-proxy-init (Grid certificate needed !!!)
  > edg-job-submit batch/grid job (jdl-file)
- Responsible persons: Victor Penso, Anar Manafov, Kilian Schwarz
- See also:

LCG: the LHC Computing Grid project (with ca. 11k CPUs the world's largest Grid testbed)

Software at GSI: PROOF
- Installed at: /usr/local/pub/debian3.0/gcc323-00/rootmgr
- Newest version: ROOT
- Idea: parallel analysis of larger data sets for quick/interactive results
- A personal PROOF cluster at GSI, integrated into the batch farm, can be set up via:
  > prooflogin (e.g. number of slaves, data to be analysed, -h (help))
- Later, a personal PROOF cluster spanning GSI and GridKa via Globus will be possible
- Later, a global PROOF cluster via AliEn/D-Grid will be possible
- See also:
- Responsible persons: Carsten Preuss, Robert Manteufel, Kilian Schwarz

Parallel Analysis of Event Data (diagram)
- A local PC running ROOT connects to a remote PROOF cluster: one master server and several slave servers (node1, node2, node3, node4, listed in #proof.conf), which read the *.root files via TFile/TNetFile and send results (stdout/objects) back to the client.
- Example session, going from local to parallel processing (see also the sketch after this slide):
  $ root
  root [0] tree.Process("ana.C")
  root [1] gROOT->Proof("remote")
  root [2] dset->Process("ana.C")
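A minimal sketch of the same session written as a macro, mirroring the dset->Process() step of the diagram; the master host, file names and the tree name "esdTree" are placeholders/assumptions, and ana.C stands for a TSelector-based analysis macro.

  // proofana.C (sketch): process a data set on the PROOF cluster
  void proofana()
  {
    TDSet *dset = new TDSet("TTree", "esdTree");   // assumed tree name inside the files
    dset->Add("AliESDs_run1.root");                // placeholder input files
    dset->Add("AliESDs_run2.root");
    gROOT->Proof("proofmaster.gsi.de");            // placeholder PROOF master host
    dset->Process("ana.C");                        // run the selector in parallel on the slaves
  }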

LHC Computing Model (MONARC and Cloud)
- One Tier 0 site at CERN for data taking. ALICE (Tier 0+1) in 2008: 500 TB disk (8%), 2 PB tape, 5.6 MSI2K (26%)
- Multiple Tier 1 sites for reconstruction and scheduled analysis: 3 PB disk (46%), 3.3 PB tape, 9.1 MSI2K (42%)
- Tier 2 sites for simulation and user analysis: 3 PB disk (46%), 7.2 MSI2K (33%)

ALICE computing model in more detail:
- T0 (CERN): long-term storage of the raw data, calibration and first reconstruction
- T1 (5 sites, in Germany GridKa): long-term storage of a second copy of the raw data, two subsequent reconstruction passes, scheduled analysis tasks, reconstruction of MC Pb-Pb data, long-term storage of data processed at T1s and T2s
- T2 (many sites, in Germany GSI): generation and reconstruction of simulated MC data, chaotic analysis
- T0/T1/T2: short-term storage of multiple copies of the active data
- T3 (many sites, in Germany Münster, Frankfurt, Heidelberg, GSI): chaotic analysis

CPU requirements and event size (tables)
- CPU per event (kSI2k x s/ev) for p-p and heavy-ion: reconstruction, scheduled analysis, chaotic analysis, simulation (event creation and reconstruction; 2-4 hours on a standard PC)
- Event sizes (MB) for p-p and heavy-ion: Raw, ESD, AOD, Raw MC, ESD MC

ALICE Tier resources (table)
- Columns: Tier 0, Tier 1s, Tier 2s, Total
- Rows: CPU (MSI2k), Disk (PB), Tape (PB), Bandwidth in (Gb/s), Bandwidth out (Gb/s)

GridKa (1 of 5 T1s)
- ALICE T1 sites: IN2P3, CNAF, GridKa, NIKHEF, (RAL), Nordic, USA (effectively ~5)
- Ramp-up: due to shorter runs and reduced luminosity at the beginning, not the full resources are needed: 20% in 2007, 40% in 2008, 100% end of
- Resources (table): CPU (kSI2k), Disk (TB), Tape (TB), per year, total and current status; disk status: 28 TB (50% used)

GSI + T3 (support for the 10% German ALICE members)
- CPU (kSI2k): 64 Dual P4, 20 Dual P3, (80 Dual Opteron newly bought), (800), + T3
- Disk (TB): 2.23 (0.3 free), 15 TB new, + T3
- Tape (TB): 190 (100 used), + T3
- T3: Münster, Frankfurt, Heidelberg, GSI