Development of LHCb Computing Model F Harris


Development of LHCb Computing Model
F Harris
Overview of the proposed workplan to produce a 'baseline computing model for LHCb'
26 Nov 1999, LHCb computing workshop

WHY are we worrying NOW about this?
- The Hoffmann Review (starting Jan 2000) asks how the LHC experiments will do their computing, with answers due in late 2000 on:
  - the basic logical data flow model, patterns of use, and resources for tasks
  - the preferred distributed resource model (CERN, regions, institutes)
- Computing MoUs follow in 2001
- Countries (UK, Germany, ...) are planning now for 'development' facilities

Proposed project organisation to do the work: Tasks and Deliverables (1)
- Logical data flow model (all data-sets and processing tasks)
  Deliverable: Data Flow Model Specification
- Resource requirements: data volumes, rates and CPU needs by task (these are the essential parameters for model development); measure the current status and predict the future
  Deliverable: URD giving the distributions
- Use Cases: map demands for reconstruction, simulation, analysis, calibration and alignment onto the model (e.g. physics groups working); document 'patterns of usage' and the resulting demands on resources
  Deliverable: 'workflow specification'
A minimal sketch of such a data flow model appears below.
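As an illustration of what a Data Flow Model Specification might capture, here is a minimal sketch in Python: data-sets and the processing tasks that connect them. All names, sizes and CPU costs below are hypothetical placeholders, not LHCb's actual specification.

```python
from dataclasses import dataclass

@dataclass
class Dataset:
    name: str
    event_size_kb: float      # storage per event, kB

@dataclass
class ProcessingTask:
    name: str
    inputs: list              # Dataset objects consumed
    outputs: list             # Dataset objects produced
    cpu_per_event: float      # CPU cost per event (arbitrary units, to be measured)

# Hypothetical stages; the real model would enumerate every LHCb
# data-set and processing task with measured parameters.
raw  = Dataset("RAW", 100.0)
dst  = Dataset("DST", 150.0)
ntup = Dataset("Group ntuple", 10.0)

reconstruction = ProcessingTask("Reconstruction", [raw], [dst], cpu_per_event=250.0)
group_analysis = ProcessingTask("Group analysis", [dst], [ntup], cpu_per_event=20.0)

for task in (reconstruction, group_analysis):
    ins  = ", ".join(d.name for d in task.inputs)
    outs = ", ".join(d.name for d in task.outputs)
    print(f"{task.name}: {ins} -> {outs}")
```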

[Figure: LHCb datasets and processing stages (CPU and storage requirements still to be updated). The diagram shows an event rate of 200 Hz and per-event dataset sizes of 200 kB, 150 kB, 100 kB, 70 kB, 10 kB and 0.1 kB at the various stages.]
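To give a feel for the scale these numbers imply, the short sketch below converts the per-event sizes into yearly data volumes, using the 200 Hz rate from the figure and assuming a canonical 10^7 seconds of data-taking per year (the running time is my assumption, not a figure from the slide; the stage labels are likewise assumed).

```python
RATE_HZ = 200            # event rate from the figure
SECONDS_PER_YEAR = 1e7   # assumed effective running time per year

# Per-event sizes (kB) taken from the figure; stage labels are assumptions.
event_sizes_kb = {"RAW-like": 100, "DST-like": 150, "ntuple-like": 10, "tag-like": 0.1}

for stage, size_kb in event_sizes_kb.items():
    tb_per_year = RATE_HZ * SECONDS_PER_YEAR * size_kb * 1e3 / 1e12
    print(f"{stage:12s}: {tb_per_year:8.1f} TB/year")

# e.g. 200 Hz * 1e7 s * 100 kB = 2e14 bytes = 200 TB/year
```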

A General view of Analysis (G Corti) (? patterns of group and user analysis)
[Diagram: Data Acquisition produces Raw Data; an MC generator plus Detector Simulation produce MC 'truth' data; Reconstruction produces the DST (reconstructed particles, primary vertex); (Group) Analysis and Physicist Analysis then yield 'reconstructed' physics channels.]

Status of simulated event production (since June), E van Herwijnen

Tasks and Deliverables (2)
- Resource distribution: produce a description of the distribution of LHCb institutes, regional centres and resources (equipment and people), and the connectivity between them
  Deliverable: Resource Map with network connectivity; list of people and equipment
- Special requirements for remote working (OS platforms, s/w distribution, videoconferencing, ...)
  Deliverable: URD on 'Remote working...'
- Technology tracking (follow PASTA; data management s/w; GAUDI data management, ...)
  Deliverables: technology trend figures; capabilities of data management s/w
A toy sketch of such a resource map follows.
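One lightweight way to hold a Resource Map with network connectivity is as a small graph of sites and link capacities. The sites, equipment figures and bandwidths below are hypothetical placeholders for illustration only.

```python
# Hypothetical resource map: sites with equipment, links with bandwidth.
sites = {
    "CERN":  {"cpu_si95": 20000, "disk_tb": 100},
    "RAL":   {"cpu_si95": 5000,  "disk_tb": 20},
    "IN2P3": {"cpu_si95": 5000,  "disk_tb": 20},
}

# Network connectivity as (site_a, site_b) -> bandwidth in Mbps.
links = {
    ("CERN", "RAL"):   155,
    ("CERN", "IN2P3"): 155,
}

def bandwidth(a: str, b: str) -> int:
    """Look up link bandwidth regardless of direction; 0 if unconnected."""
    return links.get((a, b)) or links.get((b, a)) or 0

print(bandwidth("RAL", "CERN"))  # -> 155
```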

Mock-up of an Offline Computing Facility for an LHC Experiment at CERN (Les Robertson, July 99, with 'old' experiment estimates)
http://nicewww.cern.ch/~les/monarc/lhc_farm_mock_up/index.htm
Purpose: investigate the feasibility of building LHC computing facilities using current cluster architectures and conservative assumptions about technology evolution, covering:
- scale & performance
- technology
- power
- footprint
- cost
- reliability
- manageability
A toy version of the technology-evolution arithmetic is sketched below.
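The feasibility argument rests on extrapolating today's price/performance forward with a conservative annual improvement factor. The sketch below shows only the shape of that arithmetic; the 1999 baseline and the 40% yearly improvement are assumptions for illustration, not figures from the mock-up study.

```python
# Toy technology extrapolation: capacity purchasable per unit cost,
# compounding an assumed annual price/performance improvement.
BASE_YEAR = 1999
IMPROVEMENT_PER_YEAR = 1.40          # assumed conservative factor
CPU_SI95_PER_CHF_1999 = 0.05         # hypothetical 1999 baseline

def cpu_per_chf(year: int) -> float:
    return CPU_SI95_PER_CHF_1999 * IMPROVEMENT_PER_YEAR ** (year - BASE_YEAR)

for year in (1999, 2003, 2006):
    print(year, f"{cpu_per_chf(year):.3f} SI95/CHF")
```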

[Diagram: CMS Offline Farm at CERN circa 2006 (lmr for the MONARC study, April 1999). Processors: 1400 boxes in 160 clusters and 40 sub-farms on a 480 Gbps* farm network. Disks: 5400 disks in 340 arrays. Tapes: 100 drives. Storage network at 12 Gbps; LAN-SAN routers; LAN-WAN routers at 250 Gbps; DAQ input at 0.8 Gbps; further internal links between 0.8 and 8 Gbps. *Assumes all disk & tape traffic stays on the storage network; double these numbers if all disk & tape traffic goes through the LAN-SAN routers.]

Tasks and Deliverables (3)
- Candidate computing models evaluation: map data and tasks to facilities (try different scenarios)
  - develop a spreadsheet model with key parameters to get 'average answers'
  - develop a simulation model with distributions
- Evaluate the different models (performance, cost, risk, ...)
- Establish a BASELINE MODEL
  Deliverable: BASELINE COMPUTING MODEL together with cost, performance and risk analysis
A minimal version of the spreadsheet-style model is sketched after this list.
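The 'spreadsheet model with key parameters' is essentially a handful of multiplications. Below is a minimal sketch; every parameter value is a hypothetical placeholder to be replaced by the measured numbers from the resource-requirements task.

```python
# Minimal 'spreadsheet' computing model: key parameters in, average answers out.
# All values are illustrative placeholders.
params = {
    "event_rate_hz":        200,    # trigger output rate
    "seconds_per_year":     1e7,    # assumed effective running time
    "raw_kb_per_event":     100,
    "dst_kb_per_event":     150,
    "reco_si95s_per_event": 250,    # CPU cost to reconstruct one event
    "reprocessings":        2,      # passes per year, including the first pass
}

def average_answers(p: dict) -> dict:
    events = p["event_rate_hz"] * p["seconds_per_year"]
    return {
        "events_per_year": events,
        "raw_tb_per_year": events * p["raw_kb_per_event"] * 1e3 / 1e12,
        "dst_tb_per_year": events * p["dst_kb_per_event"] * 1e3 / 1e12
                           * p["reprocessings"],
        # Average CPU power needed to keep up with reconstruction (SI95 units).
        "reco_cpu_si95":   events * p["reco_si95s_per_event"] * p["reprocessings"]
                           / p["seconds_per_year"],
    }

for key, value in average_answers(params).items():
    print(f"{key:18s} {value:12.3g}")
```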

The Structure of the Simulation Program (I Legrand)
[Diagram: the MONARC simulation framework. A user directory, config files, initialising data and activity (job) definitions feed a GUI and the SIMULATION ENGINE. Around the engine sit the Monarc, Network, Data Model and Processing packages plus auxiliary tools (graphics, parameters, prices, statistics). Together these model Regional Centres, Farms, AMS, CPU nodes, disks and tapes; jobs, active jobs, physics activities and init functions; LAN, WAN and messages; data containers, databases and database indices. Modules are dynamically loadable.]
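MONARC is a discrete-event simulation of jobs competing for resources at regional centres. The toy below is not the MONARC code, just a minimal sketch of the same idea: jobs queue for a centre's CPUs and we measure when the last one finishes.

```python
import heapq

def simulate(jobs, n_cpus):
    """Toy discrete-event simulation: schedule jobs (durations in seconds)
    onto n_cpus identical CPUs, first-come first-served.
    Returns the makespan, i.e. when the last job finishes."""
    cpu_free = [0.0] * n_cpus        # heap of times at which each CPU is free
    heapq.heapify(cpu_free)
    finish = 0.0
    for duration in jobs:
        start = heapq.heappop(cpu_free)   # earliest available CPU
        end = start + duration
        finish = max(finish, end)
        heapq.heappush(cpu_free, end)
    return finish

# 1000 hypothetical reconstruction jobs of 300 s each on a 50-CPU farm.
print(simulate([300.0] * 1000, n_cpus=50))   # -> 6000.0 s
```

The real framework replaces the fixed durations with distributions and adds network, storage and database components, which is exactly why a simulation model is needed alongside the spreadsheet's 'average answers'.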

Proposed composition and organisation of the working group
- Contacts from each country
- Contacts from the other LHCb projects (people can/will have multiple roles):
  - DAQ
  - Reconstruction
  - Analysis
- MONARC
- IT (PASTA + ?)
Project plan (constraint: timescales must match the requests from the review)
- Monthly meetings? (with videoconferencing)
- 1st meeting the week after LHCb week (first try at planning the execution of the tasks)
- Documentation all on WWW (need a WEBMASTER)