ALICE RRB-T 2001-661 ALICE Computing – an update F.Carminati 23 October 2001.

Slide 1: ALICE Computing – an update
F.Carminati, 23 October 2001 (ALICE RRB-T 2001-661)

Slide 2: ALICE Computing
- ALICE computing is of the same order of magnitude as that of ATLAS or CMS
- Major decisions already taken (DAQ, Off-line)
- Move to C++ completed
  - TDRs all produced with the new framework
- Adoption of the ROOT framework
- Tightly knit Off-line team, single development line
- Physics performance and computing in a single team
- Aggressive Data Challenge (DC) program on the LHC prototype
- ALICE DAQ/Computing integration realised during data challenges in collaboration with IT/CS and IT/PDP
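As an illustration of the ROOT-based approach mentioned above, the sketch below shows the kind of TTree-based event I/O the offline framework builds on. It is a minimal, hypothetical macro: the function, branch names and toy event content are not the actual AliRoot data model.

```cpp
// Minimal sketch of ROOT-based event I/O of the kind AliRoot builds on.
// Branch names and the toy event summary are illustrative only.
#include "TFile.h"
#include "TTree.h"
#include "TRandom3.h"

void writeEvents(const char* fname = "events.root", int nEvents = 100) {
  TFile f(fname, "RECREATE");          // output file
  TTree tree("T", "toy event tree");   // one entry per event

  Int_t   ntracks = 0;                 // hypothetical per-event summary
  Float_t meanPt  = 0.f;
  tree.Branch("ntracks", &ntracks, "ntracks/I");
  tree.Branch("meanPt",  &meanPt,  "meanPt/F");

  TRandom3 rng(0);
  for (int i = 0; i < nEvents; ++i) {
    ntracks = 5000 + int(rng.Gaus(0, 500));   // toy central Pb-Pb multiplicity
    meanPt  = 0.5f + 0.05f * rng.Gaus();      // toy mean pT in GeV/c
    tree.Fill();
  }
  tree.Write();
  f.Close();
}
```

Such a macro would be run inside a ROOT session (e.g. `.x writeEvents.C`, a hypothetical file name).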

Slide 3: ALICE Physics Performance Report
- Evaluation of acceptance, efficiency, signal resolution
- Step 1: simulation of ~10,000 central Pb-Pb events
- Step 2: signal superposition + reconstruction of 10,000 events
- Step 3: event analysis
- Starting in November 2001
- Distributed production on several ALICE sites using GRID tools
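The sketch below illustrates the idea behind Step 2, signal superposition: a simulated signal event is embedded into a central Pb-Pb background at the digit level before reconstruction. The tree name, branch name and per-channel amplitude model are hypothetical assumptions; the real AliRoot merging machinery is more elaborate.

```cpp
// Sketch of digit-level signal superposition: add the signal amplitudes
// on top of a simulated Pb-Pb background event, then write the merged
// digits for reconstruction. Tree/branch names are hypothetical.
#include "TFile.h"
#include "TTree.h"
#include <algorithm>
#include <vector>

void mergeDigits(const char* bkgFile, const char* sigFile, const char* outFile) {
  TFile fb(bkgFile), fs(sigFile);
  TTree* tb = (TTree*)fb.Get("digits");        // background (Pb-Pb) events
  TTree* ts = (TTree*)fs.Get("digits");        // signal events

  std::vector<float>* bkg = nullptr;           // per-channel amplitudes
  std::vector<float>* sig = nullptr;
  tb->SetBranchAddress("amp", &bkg);
  ts->SetBranchAddress("amp", &sig);

  TFile fo(outFile, "RECREATE");
  TTree out("digits", "background + embedded signal");
  std::vector<float> merged;
  out.Branch("amp", &merged);

  Long64_t n = std::min(tb->GetEntries(), ts->GetEntries());
  for (Long64_t i = 0; i < n; ++i) {           // one signal per background event
    tb->GetEntry(i);
    ts->GetEntry(i);
    merged = *bkg;                             // start from the background
    for (size_t c = 0; c < sig->size() && c < merged.size(); ++c)
      merged[c] += (*sig)[c];                  // add the signal on top
    out.Fill();
  }
  out.Write();
  fo.Close();
}
```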

Slide 4: ALICE GRID resources
- Sites: Yerevan, CERN, Saclay, Lyon, Dubna, Capetown (ZA), Birmingham, Cagliari, NIKHEF, GSI, Catania, Bologna, Torino, Padova, IRB, Kolkata (India), OSU/OSC, LBL/NERSC, Merida, Bari
- 37 people, 21 institutions

Slide 5: ALICE Data Challenge III
- Need to run yearly Data Challenges of increasing complexity and size to reach 1.25 GB/s
- ADC III gave excellent system stability during 3 months
- DATE throughput: 550 MB/s (max), 350 MB/s (ALICE-like)
- DATE+ROOT+CASTOR throughput: 120 MB/s, … MB/s
- 2200 runs, 2×10^7 events, 86 hours, 54 TB DATE run
- 500 TB in DAQ, 200 TB in DAQ+ROOT I/O, 110 TB in CASTOR
- 10^5 files > 1 GB in CASTOR and in the MetaData DB
- HP SMPs: cost-effective alternative to inexpensive disk servers
- Online monitoring tools developed
[Chart: throughput in MB/s, writing to local disk vs. migration to tape]
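As a rough cross-check of the figures above, assuming decimal units (1 TB = 10^6 MB), 54 TB written in 86 hours corresponds to a sustained rate of about 175 MB/s, which sits between the 120 MB/s full-chain and the 350 MB/s ALICE-like DATE figures:

```cpp
// Back-of-the-envelope check of the ADC III sustained rate quoted above.
// Assumes decimal units (1 TB = 1e6 MB); the actual bookkeeping may differ.
#include <cstdio>

int main() {
  const double dataTB  = 54.0;                 // data written in the DATE run
  const double hours   = 86.0;                 // duration of the run
  const double mbPerTB = 1.0e6;                // decimal TB -> MB
  const double seconds = hours * 3600.0;

  const double rate = dataTB * mbPerTB / seconds;          // sustained rate in MB/s
  std::printf("sustained DATE rate: %.0f MB/s\n", rate);   // prints ~174 MB/s
  return 0;
}
```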

Slide 6: ADC IV (2002)
- Increase performance (200 MB/s to tape, 1 GB/s through the switch)
- Focus on computers and fabric architecture
- Include some L3 trigger functionality
- Involve 1 or 2 regional centres
- Use new tape generation and 10 Gbit Ethernet
[Diagram: LDCs feeding GDCs at ~1000 MB/s, physics data flowing out to regional Centre 1 and Centre 2]
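For scale, the targets above translate into the following daily volumes (simple arithmetic on the quoted rates, decimal units assumed):

```cpp
// What the ADC IV rate targets above imply per day of continuous running.
// Pure arithmetic on the quoted targets; decimal units assumed.
#include <cstdio>

int main() {
  const double secPerDay = 86400.0;
  const double tapeMBs   = 200.0;      // target rate to tape
  const double switchMBs = 1000.0;     // target rate through the switch

  std::printf("to tape:        %.1f TB/day\n", tapeMBs   * secPerDay / 1e6);  // ~17 TB/day
  std::printf("through switch: %.1f TB/day\n", switchMBs * secPerDay / 1e6);  // ~86 TB/day
  return 0;
}
```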

Slide 7: ALICE Offline Computing Structure
- Boards and partners: Offline Board (Chair: F.Carminati), Management Board, International Computing Board, Detector Projects, Regional Tiers, DAQ (P.Vande Vyvre), Technical Support, EC DataGRID WP8 Coordination
- Software Projects:
  - Production Environment & Coordination (P.Buncic): simulation production, reconstruction production, production farms, database organisation, relations with IT & other computing centres
  - Framework & Infrastructure (F.Rademakers): framework development, database technology, HLT farm, distributed computing (GRID), Data Challenge, industrial joint projects, technology tracking, documentation
  - Simulation (A.Morsch): detector simulation, physics simulation, physics validation, GEANT 4 integration, FLUKA integration, radiation studies, geometrical modeller
  - Reconstruction & Physics (K.Safarik): tracking, detector reconstruction, HLT algorithms, global reconstruction, analysis tools, analysis algorithms, calibration & alignment algorithms
[Org chart also carries the labels: ROOT framework, Data Challenges, HLT algorithms]

Slide 8: CERN Off-line effort strategy
- ALICE opted for a light core CERN offline team…
- 17 FTEs are needed: for the moment 3 are missing, plus people to be provided by the collaboration
- To be formalised by Software Agreements / MoU
  - Good precedents: GRID coordination (Torino), ALICE World Computing Model (Nantes), Detector database (Warsaw)
  - We would like to avoid a full MoU!
- Imbalance between manpower for the experiments and for GRID
  - Enough people pledged for GRID
  - Both are needed for the success of the project
- Candidate Tier centres should provide people during phase 1
  - We have to design the global model, and we need outside people to develop it with us

Slide 9: CERN Off-line effort strategy
- Staffing of the offline team is critical, otherwise:
  - Coordination activities jeopardized (cascade effect)
  - Data Challenges & Physics Performance Report delayed
  - Readiness of ALICE at stake
- Efforts are made to alleviate the shortage:
  - Technology transfer with Information Technology experts: IRST Trento, CRS4 Cagliari, HP
  - Recruit additional temporary staff (Project Associates)
  - Adopt mature Open Source products (e.g. ROOT)
  - Ask support from IT for core software (FLUKA, ROOT), as recommended by the LHC Computing Review

Slide 10: Conclusion
- The development of the ALICE Offline continues successfully
- The understaffing of the core Offline team will soon become a problem as its activity expands
- Additional effort will be requested from the collaboration
- We hope to avoid a MoU here and to work with bilateral software agreements (à la ATLAS)
- Where the GRID project has found good resonance, the experiment part still needs to be solved