CERN Computing Review Recommendations ATLAS Plenary 22 February 2001 Gilbert Poulard / CERN-EP-ATC

Outline
– Purpose & Mandate of the review
– Steering Group and Panels
– Key parameters
– Main conclusions & associated recommendations
  – LHC Computing model
  – Software
  – Management & Resources
  – General

Mandate of the Review
“It is an appropriate moment to review the computing plans of the LHC experiments and the corresponding preparations of IT division with the following aims:
– update the assessment of regional and CERN facilities, their relative roles and the estimates of the resources required
  – identify software packages to operate in various places
  – identify services to be provided centrally
  – identify activities which have to be done at CERN
– assess the analysis software projects, their organisational structures, the role of CERN and possible common efforts
– review and comment the overall and individual computing management structures and review the resources required”

Mandate of the Review
“The results of the review will be the basis for the formulation of Computing Memoranda of Understanding which will describe:
– commitments of institutes inside the collaborations
– commitments of CERN to provide computing infrastructure and central facilities for the LHC era
The review team should recommend actions, in particular
– common actions between experiments and IT division
The review reports to the Research Board and the Director General”

Working model
– Steering Committee (21 meetings)
– 3 independent panels
  – Worldwide Analysis and Computing Model (5 meetings)
  – Software Project (14 meetings)
  – Management and Resources (16 meetings)
– Report from each panel (not public)
– Final report by the Steering Committee

Steering Committee
Members:
– S. Bethke, Chair
– H.F. Hoffmann, CERN Director responsible for Scientific Computing
– D. Jacobs, Secretary
– M. Calvetti, Chair of the Management and Resources Panel
– M. Kasemann, Chair of the Software Project Panel
– D. Linglin, Chair of the Computing Model Panel
In attendance:
– IT Division: M. Delfino, L. Robertson
– ALICE: F. Carminati, K. Safarik
– ATLAS: N. McCubbin, G. Poulard
– CMS: M. Pimia, H. Newman
– LHCb: J. Harvey, M. Cattaneo
Observers:
– R. Cashmore, CERN Director for collider programmes
– J. Engelen, LHCC chairman

Computing Model panel
– D. Linglin, Chair
– F. Gagliardi, Secretary
Experiment representatives:
– ALICE: A. Masoni, A. Sandoval
– ATLAS: A. Putzer, L. Perini
– CMS: H. Newman, W. Jank
– LHCb: F. Harris, M. Schmelling
Experts:
– Y. Morita, L. Perini, C. Michau, etc.

Software Project panel
– M. Kasemann, Chair
– A. Pfeiffer, Secretary and CERN-IT representative
Experiment representatives:
– ALICE: R. Brun, A. Morsch
– ATLAS: D. Barberis, M. Bosman
– CMS: L. Taylor, T. Todorov
– LHCb: P. Mato, O. Callot
Experts:
– V. White, etc.

Management & Resources panel
Members:
– M. Calvetti, Chair
– M. Lamanna, Secretary
Experiment representatives:
– ALICE: P. Vande Vyvre, K. Safarik
– ATLAS: J. Huth, H. Meinhard
– CMS: P. Capiluppi, I. Willers
– LHCb: J. Harvey, J.P. Dufey
Experts:
– L. Robertson, T. Wenaus, etc.

Key parameters
LHC design
– E = 14 TeV
– L = 10^34 cm^-2 s^-1
– σ = 100 mb = 10^-25 cm^2
– Collision rate = L·σ = 10^9 Hz p-p collisions
– 4 experiments approved

Key parameters
Approximate numbers to remember (per experiment)
– 10^6 bytes (1 MB): size of a recorded p-p event (ATLAS 2 MB; ALICE up to 40 MB)
– 10^2 Hz data-taking rate (ATLAS 270 Hz)
– 10^7 recorded p-p events per day (out of ~10^14 collisions)
– 10^7 data-taking seconds per year (except ALICE)
– 10^9 recorded p-p events per year
– 10^15 bytes (1 PB) of recorded data per year
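A quick back-of-envelope check that these round numbers hang together (a sketch using the generic per-experiment values above, not the ATLAS- or ALICE-specific ones):

```python
# Back-of-envelope check of the "numbers to remember" above.
luminosity = 1e34        # cm^-2 s^-1, LHC design luminosity
cross_section = 1e-25    # cm^2, ~100 mb total p-p cross section
trigger_rate = 1e2       # Hz written to storage
event_size = 1e6         # bytes, ~1 MB per recorded event
live_seconds = 1e7       # effective data-taking seconds per year

collision_rate = luminosity * cross_section        # ~1e9 Hz
events_per_year = trigger_rate * live_seconds      # ~1e9 recorded events
data_per_year = events_per_year * event_size       # ~1e15 bytes = 1 PB

print(f"collision rate       ~ {collision_rate:.0e} Hz")
print(f"recorded events/year ~ {events_per_year:.0e}")
print(f"raw data/year        ~ {data_per_year:.0e} bytes (~1 PB)")
```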

LHC Computing Model
(1) The scale of the resource requirements is accepted
(2) A multi-Tier hierarchical model should be the key element of the LHC computing model (similar to the model developed by the MONARC project)
(3) Grid technology will be used to attempt to contribute solutions to this model
(4) It is vital that a well-supported Research Networking infrastructure is available by 2006
– the bandwidth between the Tier0 and Tier1 centres is estimated at 1.5 to 3 Gbps for one experiment
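As a rough sanity check (mine, not from the review), a link sustained at 1.5–3 Gbps moves a few petabytes per year, i.e. several times one experiment's raw data volume, leaving headroom for reprocessed data, replication and link inefficiency:

```python
# Volume a sustained Tier0->Tier1 link of the quoted size could move in a year.
SECONDS_PER_YEAR = 3.15e7

for gbps in (1.5, 3.0):
    bytes_per_year = gbps * 1e9 * SECONDS_PER_YEAR / 8   # bits -> bytes
    print(f"{gbps} Gbps sustained ~ {bytes_per_year / 1e15:.1f} PB/year")
# ~5.9 and ~11.8 PB/year, versus ~1 PB of raw data per experiment per year.
```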

Multi-Tier hierarchical model
Tier0
– raw data storage (large capacity); first calibration; reconstruction
Tier1
– further calibrations; reconstruction; analysis; Monte Carlo data generation; data distribution; large storage capacity
– “national” or “supranational”
Tier2 and “lower”
– balance of simulation and analysis; in aggregate as important in computing power as the Tier1s
– “national” or “intranational”
– Tier3 (“institutional”); Tier4 (end-user workstations)

Multi-Tier hierarchical model
Tier0 & Tier1
– open to all members of a collaboration under conditions to be specified in MoUs
Tier1 centres
– should be available for the lifetime of LHC
– should be managed coherently (Data Grid system)
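A minimal sketch (my own summary, not part of the review text) that makes the tier roles from the two slides above explicit as a simple mapping:

```python
# Tier roles as described in the slides above; grouping and wording are mine.
TIER_ROLES = {
    "Tier0 (CERN)": ["raw data storage", "first calibration", "reconstruction"],
    "Tier1":        ["further calibrations", "reconstruction", "analysis",
                     "Monte Carlo generation", "data distribution",
                     "large storage capacity"],
    "Tier2":        ["simulation", "analysis"],
    "Tier3":        ["institutional resources"],
    "Tier4":        ["end-user workstations"],
}

for tier, roles in TIER_ROLES.items():
    print(f"{tier:13s} {', '.join(roles)}")
```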

Software
(5) Joint efforts and common projects between experiments and CERN/IT are recommended
– support of widely used products should be provided
(6) Data challenges of increasing size and complexity must be performed
(7) CERN should sponsor a coherent program to ease the transition from Fortran to OO (physicists)
(8) Concerns:
– maturity of current planning and resource estimates
– development and support of simulation packages
– support and future evolution of analysis tools

Software “common projects”
Common projects
– All data Grid systems
– Data management systems
  – persistency and the data store
  – mass storage system
  – common prototype for data challenges
– Interactive data analysis tools
– Simulation packages
– Common Tools projects

Software Data Challenges
ATLAS data challenges
– DC0: 100 GB (0.01%), end 2001
– DC1: 1 TB (0.1%), 2002
– DC2: 100 TB (10%), 2003
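The quoted percentages are consistent with the ~1 PB expected per nominal year of data taking (my reading, based on the Key parameters slide; the reference volume is not stated explicitly here):

```python
# Data-challenge targets as fractions of a nominal ~1 PB year (assumption).
NOMINAL_YEAR_BYTES = 1e15   # ~1 PB of recorded data per year

challenges = {"DC0": 100e9, "DC1": 1e12, "DC2": 100e12}   # target sizes in bytes
for name, size in challenges.items():
    print(f"{name}: {size / 1e12:g} TB = {100 * size / NOMINAL_YEAR_BYTES:g}%")
```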

Software “support of packages”
Particular importance is attached to finding satisfactory solutions for the ongoing CERN support of Geant4, Anaphe, the object database and the general libraries, along with the introduction of CERN support for FLUKA and ROOT.

Management & Resources
(9) Current cost estimates are based on the forecast evolution of price and performance of computer hardware
(10) On this basis the hardware costs for the initial set-up are estimated at 230 MCHF
– for the 4 experiments, including Tier0, Tier1 and Tier2 centres
– CERN Tier0 + Tier1 ~ 1/3
(11) The total investment for the initial system is due to be spent in …, approximately in equal portions, assuming
– LHC starts up in 2006
– LHC reaches design luminosity in 2007
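A rough split of the 230 MCHF figure under the assumptions stated above (the ~1/3 CERN share and an even spread over three years; the exact years are not recovered in this transcript):

```python
# Rough split of the quoted 230 MCHF initial hardware investment.
total_mchf = 230.0

cern_share = total_mchf / 3               # CERN Tier0 + Tier1, ~1/3 of the total
regional_share = total_mchf - cern_share  # Tier1/Tier2 centres outside CERN
per_year = total_mchf / 3                 # if spent in three roughly equal portions

print(f"CERN Tier0 + Tier1 ~ {cern_share:.0f} MCHF")
print(f"Regional centres   ~ {regional_share:.0f} MCHF")
print(f"Spend per year     ~ {per_year:.0f} MCHF")
```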

Management & Resources
(12) The Core Software teams of all four experiments are seriously understaffed
– their contribution will be just as vital to the experiments as major sub-detectors
– senior Management of the Collaborations must seek a solution to this problem with extreme urgency
(13) The staffing level of CERN/IT (as envisaged) is incompatible with an efficient running of the CERN-based LHC computing system and software support

Required human resources (FTEs) to write the Core Software

Management & Resources
(14) The approach to Maintenance and Operation of the LHC computing system includes the strategy of rolling replacement within a constant budget
– Maintenance and Operation will require, within each three-year operating period, an amount roughly equal to the initial investment

Management & Resources
(15) The construction of a common prototype of the distributed computing system should be launched urgently as a joint project of the four experiments and CERN/IT, along with the Regional Centres
– it should grow progressively in complexity
– it should scale to reach ~50% of the overall computing and data handling structure of one LHC experiment
(16) An agreement should be reached amongst the partners in this project, defining
– construction; cost sharing; goals; technical solution; etc.

General recommendations
(17) An LHC Software and Computing Steering Committee (SC2) must be established
– composed of the highest-level software and computing management in the experiments, CERN/IT and Regional Tier1 Centres
– to oversee the deployment of the entire LHC hierarchical computing structure
(18) Under the auspices of this committee, Technical Assessment Groups (TAGs) must be established to prepare, for example, agreement on a common data management strategy

General recommendations
(19) Each Collaboration must prepare, on a common pattern, a Memorandum of Understanding for LHC computing, to be signed between CERN and the funding agencies, describing
– the funding of and responsibilities for the hardware and the software
– the human resources
– the policy for access to the computing systems
Interim MoUs or software agreements should be set up and signed by the end of 2001 to ensure appropriate development of the software.

ATLAS Computing Resources

Computing power & storage capacity at CERN

Required human resources (FTEs) to write the Core Software