Slide 1: Development of LHCb Computing Model
F Harris, LHCb computing workshop, 26 Nov 1999
Overview of proposed workplan to produce ‘baseline computing model for LHCb’

Slide 2: WHY are we worrying NOW about this?
- Hoffmann Review (starting Jan 2000)
  - How will the LHC experiments do their computing? Answers in late 2000:
    - the basic logical data-flow model, patterns of use, resources for tasks
    - the preferred distributed resource model (CERN, regions, institutes)
  - Computing MoUs in 2001
- Countries (UK, Germany, …) are planning now for ‘development’ facilities

Slide 3: Proposed project organisation to do the work
Tasks and Deliverables (1)
- Logical data-flow model (all data sets and processing tasks)
  - Deliverable: Data Flow Model Specification
- Resource requirements
  - Data volumes, rates, CPU needs by task (these are essential parameters for model development): measure the current status and predict the future
  - Deliverable: URD giving the distributions
- Use cases
  - Map demands for reconstruction, simulation, analysis, calibration and alignment onto the model (e.g. physics groups working)
  - Deliverable: document of ‘patterns of usage’ and the resulting demands on resources, a ‘workflow specification’

Slide 4: LHCb datasets and processing stages (must update CPU and storage requirements)
[Diagram of the LHCb datasets and processing stages, annotated with per-event sizes (200 kB, 100 kB, 70 kB, 0.1 kB, 10 kB, 150 kB, 0.1 kB) and a 200 Hz event-logging rate.]
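To put the per-event figures above in context, here is a minimal back-of-the-envelope sketch (Python) of how a yearly data volume follows from an event size and the 200 Hz logging rate. The choice of 100 kB as the example event size and the 1e7 seconds of effective data taking per year are assumptions, not values taken from the slide.

```python
# Minimal sketch: yearly data volume for one dataset from event size and rate.
# The 200 Hz rate is from the slide; the 1e7 s/year of effective data taking
# and the 100 kB example event size are assumptions.

SECONDS_PER_YEAR = 1.0e7  # assumed effective data-taking time per year

def annual_volume_tb(event_size_kb, rate_hz, live_seconds=SECONDS_PER_YEAR):
    """Yearly data volume in TB for one dataset."""
    bytes_per_year = event_size_kb * 1e3 * rate_hz * live_seconds
    return bytes_per_year / 1e12

if __name__ == "__main__":
    print(f"100 kB events at 200 Hz -> {annual_volume_tb(100, 200):.0f} TB/year")
```

With these assumptions a single 100 kB-per-event dataset logged at 200 Hz amounts to roughly 200 TB per year.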

Slide 5: A general view of analysis (G Corti) (? patterns of group and user analysis)
[Flow diagram: the MC generator feeds detector simulation, producing MC ‘truth’ data; data acquisition produces raw data; reconstruction yields DSTs (reconstructed particles, primary vertex); (group) analysis selects ‘reconstructed’ physics channels, which feed physicist analysis.]

Slide 6: Status of simulated event production (since June), E van Herwijnen

Slide 7: Tasks and Deliverables (2)
- Resource distribution
  - Produce a description of the distribution of LHCb institutes, regional centres and resources (equipment and people), and the connectivity between them
  - Deliverable: resource map with network connectivity
  - Deliverable: list of people and equipment…
- Special requirements for remote working (OS platforms, s/w distribution, videoconferencing, …)
  - Deliverable: URD on ‘Remote working…’
- Technology tracking (follow PASTA; data management s/w, GAUDI data management, …); see the trend sketch after this slide
  - Deliverable: technology trend figures
  - Deliverable: capabilities of data management s/w
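As an illustration of how a technology trend figure could be applied once PASTA provides one, the sketch below extrapolates disk capacity (or capacity per unit cost) forward in time. The 1.6x-per-year improvement factor and the 50 GB starting point are placeholder assumptions, not PASTA results.

```python
# Illustrative sketch of applying a technology trend figure: project disk
# capacity (or capacity per unit cost) forward in time. The 1.6x annual
# improvement factor and the 50 GB starting point are placeholder assumptions.

def projected_capacity(capacity_today, years_ahead, annual_factor=1.6):
    """Capacity obtainable after `years_ahead` years at a fixed annual factor."""
    return capacity_today * annual_factor ** years_ahead

if __name__ == "__main__":
    for years in (3, 6):
        print(f"+{years} years: ~{projected_capacity(50.0, years):.0f} GB per disk")
```

The same one-line extrapolation applies to CPU and tape once their own trend factors are available.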

Slide 8: Mock-up of an offline computing facility for an LHC experiment at CERN (Les Robertson, July 99, with ‘old’ experiment estimates)
- Purpose
  - Investigate the feasibility of building LHC computing facilities using current cluster architectures and conservative assumptions about technology evolution, in terms of:
    - scale & performance
    - technology
    - power
    - footprint
    - cost
    - reliability
    - manageability

Slide 9: [no text transcribed for this slide]

Slide 10: [Diagram: CMS Offline Farm at CERN circa 2006 (lmr, for the MONARC study, April 1999). Processors: ~1400 boxes in 160 clusters and 40 sub-farms; storage: ~5400 disks in 340 arrays plus ~100 tape drives; a farm network and a storage network joined by LAN-SAN routers, with LAN-WAN routers to the outside and a 0.8 Gbps DAQ link. Annotated link bandwidths range from 0.8 Gbps to 480 Gbps; figures marked * assume all disk & tape traffic stays on the storage network and should be doubled if it all passes through the LAN-SAN routers.]
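To get a feel for the scale of this mock-up, the counts quoted in the diagram can be combined directly; the short sketch below just does that arithmetic, assuming boxes are spread evenly over clusters and sub-farms and disks evenly over arrays.

```python
# Quick arithmetic on the scale figures from the farm mock-up diagram above.
# The counts come from the slide; the even-grouping interpretation is an
# assumption made only to obtain average sizes.

boxes, clusters, sub_farms = 1400, 160, 40
disks, arrays = 5400, 340

print(f"~{boxes / clusters:.1f} boxes per cluster")
print(f"~{clusters / sub_farms:.1f} clusters per sub-farm")
print(f"~{disks / arrays:.1f} disks per array")
```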

Slide 11: Tasks and Deliverables (3)
- Candidate computing models evaluation (see the spreadsheet-style sketch after this slide)
  - Map data and tasks to facilities (try different scenarios)
  - Develop a spreadsheet model with key parameters to get ‘average answers’
  - Develop a simulation model with distributions
  - Evaluate the different models (performance, cost, risk, …)
  - Establish a BASELINE MODEL
  - Deliverable: BASELINE COMPUTING MODEL, together with cost, performance and risk analysis
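As a sketch of what the proposed spreadsheet model with key parameters could look like, the toy model below turns a handful of parameters into average yearly storage and CPU figures for one candidate scenario. Every number in it is an illustrative placeholder, not an LHCb estimate.

```python
# Toy version of a 'spreadsheet model with key parameters': average yearly
# storage and CPU needs for one candidate model. All values are illustrative
# placeholders, not LHCb estimates.

params = {
    "events_per_year":      2.0e9,  # e.g. 200 Hz over an assumed 1e7 s of running
    "raw_kb_per_event":     100.0,  # assumed RAW size per event
    "dst_kb_per_event":     70.0,   # assumed DST size per event
    "reco_cpu_s_per_event": 2.0,    # assumed reconstruction cost (unit-CPU seconds)
    "reprocessings":        2,      # assumed full reconstruction passes per year
}

def storage_tb(p):
    """Yearly RAW + DST storage in TB."""
    per_event_kb = p["raw_kb_per_event"] + p["dst_kb_per_event"]
    return p["events_per_year"] * per_event_kb * 1e3 / 1e12

def average_cpus(p):
    """Average number of unit CPUs needed to keep up with reprocessing."""
    total_cpu_s = p["events_per_year"] * p["reco_cpu_s_per_event"] * p["reprocessings"]
    return total_cpu_s / 3.15e7  # wall-clock seconds in a calendar year

if __name__ == "__main__":
    print(f"storage: ~{storage_tb(params):.0f} TB/year")
    print(f"CPU:     ~{average_cpus(params):.0f} unit CPUs on average")
```

Trying different scenarios then amounts to varying the parameter set and comparing the resulting totals (and costs) per candidate model.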

Slide 12: The structure of the simulation program (I Legrand)
[Architecture diagram of the MONARC simulation framework: a central SIMULATION ENGINE with dynamically loadable modules; user inputs (user directory, config files, initialising data, definition of activities/jobs, parameters, prices) and a GUI; the Monarc package (Regional Centre, Farm, AMS, CPU node, disk, tape), Processing package (job, active jobs, physics activities, init functions), Network package (LAN, WAN, messages) and Data Model package (data container, database, database index); plus auxiliary tools, graphics and statistics.]
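The toy script below is not the MONARC framework, but it illustrates the kind of discrete-event simulation the slide refers to: jobs competing for a fixed set of CPU nodes at a single regional centre. All job counts and durations are arbitrary placeholders.

```python
# Toy discrete-event simulation (not the MONARC framework itself): jobs queue
# for a fixed number of CPU nodes at one regional centre. Job counts and
# durations are arbitrary placeholders.

import heapq
import random

def simulate(n_jobs=50, n_cpus=10, mean_job_hours=4.0, seed=1):
    """Return the time (in hours) at which the last of n_jobs finishes."""
    random.seed(seed)
    cpu_free_at = [0.0] * n_cpus   # time at which each CPU next becomes free
    heapq.heapify(cpu_free_at)
    last_finish = 0.0
    for _ in range(n_jobs):
        start = heapq.heappop(cpu_free_at)            # earliest available CPU
        duration = random.expovariate(1.0 / mean_job_hours)
        end = start + duration
        heapq.heappush(cpu_free_at, end)
        last_finish = max(last_finish, end)
    return last_finish

if __name__ == "__main__":
    print(f"50 jobs on 10 CPUs finish after ~{simulate():.1f} hours")
```

A full model in this spirit would add distributions for data access over LAN/WAN and several regional centres, which is what the simulation framework above provides.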

Slide 13: Proposed composition and organisation of the working group
- Contacts from each country
- Contacts from other LHCb projects (can/will have multi-function people):
  - DAQ
  - Reconstruction
  - Analysis
  - MONARC
  - IT (PASTA + ?)
- Project plan (constraint: timescales to match the requests from the review)
- Monthly meetings? (with videoconferencing)
  - 1st meeting the week after the LHCb week (first attempt at planning the execution of the tasks)
- Documentation
  - All on the WWW (need a WEBMASTER)