Initial Planning towards The Full Dress Rehearsal
Michael Ernst, Tier 2 Meeting, UCSD, 8 March 2007

Slide 2: Outline
- Motivation
- What is to be exercised
- Progressive steps to ramp up
- Monitoring
- Schedule (to be discussed)

Slide 3: Introduction
- So far, more questions than answers ...

Slide 4: Scope of the Full Dress Rehearsal
- Stated by Fabiola in her introduction
- Generate O(10^7) events: a few days of data taking, ~1 pb^-1 at L = 10^31
- Mix and filter events to get the correct physics mixture expected at the HLT output
- Pass events through Geant4 simulation (as-installed, misaligned, distorted geometry)
- Run the Level-1 trigger simulation
- Produce byte streams, i.e. emulate the raw data format
- Send raw data to Point 1, pass through the HLT nodes and SFO, write out events into streams, closing files at luminosity-block boundaries
- Send events from Point 1 to Tier-0; manipulate/merge files according to the final model
- Perform calibration and alignment at Tier-0 (and possibly also outside)
- Run reconstruction at Tier-0 (and maybe at Tier-1s?) to produce ESD, AOD, TAG, and DPD
- Distribute ESD, AOD, TAG, and DPD to Tier-1s and Tier-2s; replicate databases
- Perform distributed analysis using TAGs, producing additional group-specific DPDs, etc.
- Run Data Quality at all levels of data production
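To make the ordering and dependencies of these steps explicit, here is a minimal, purely schematic sketch in Python. The stage names and the pipeline-runner structure are illustrative assumptions, not ATLAS software; the slide's final bullet, Data Quality at every level, is modeled as a check after each stage.

```python
# Purely schematic sketch of the FDR chain above (not ATLAS software).
# Stage names and the runner are illustrative; Data Quality is modeled
# as a check after every stage, per the last bullet on the slide.

FDR_PIPELINE = [
    ("generate",    "Generate O(1e7) events (~1 pb^-1 at L = 10^31)"),
    ("mix_filter",  "Mix/filter to the physics mixture expected at HLT output"),
    ("g4_sim",      "Geant4 simulation, as-installed misaligned geometry"),
    ("lvl1_sim",    "Level-1 trigger simulation"),
    ("bytestream",  "Produce byte streams (emulate the raw data format)"),
    ("point1",      "Inject at Point 1: HLT nodes, SFO, stream files per lumi block"),
    ("tier0",       "Transfer to Tier-0; manipulate/merge files per final model"),
    ("calib_align", "Calibration & alignment at Tier-0 (possibly also outside)"),
    ("reco",        "Reconstruction at Tier-0 -> ESD, AOD, TAG, DPD"),
    ("distribute",  "Distribute outputs to Tier-1s/Tier-2s; replicate databases"),
    ("analysis",    "Distributed analysis with TAGs; group-specific DPDs"),
]

def run_pipeline(stages):
    """Print the stages in order, with a data-quality check after each."""
    for name, description in stages:
        print(f"[{name:>11}] {description}")
        print(f"[{'dq':>11}] data-quality check on the output of '{name}'")

if __name__ == "__main__":
    run_pipeline(FDR_PIPELINE)
```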

Slide 5: Timescale & Duration
- Timescale?
  - Between June and October 2007
- Duration?
  - One problem is that the FDR competes for resources with the ongoing ATLAS detector commissioning, which by then will be reaching its final stages
    - Primarily in the TDAQ, Tier-0, and Data Quality Monitoring areas
    - This will require careful scheduling
  - A series of one-week "runs" separated by 2-3 weeks of analysis and preparation for the next one?
    - This would allow for 3-4 runs prior to low-energy running
  - These runs will provide good testbeds for exercising the ATLAS global shift operations infrastructure
    - In fact, they might be the schedule driver for having this infrastructure in place
    - Procedures must be in place to ensure all shift slots are covered, on-call rotas, etc.

Slide 6: Scale
- While the scope of the FDR should be as complete as possible, it is important to scale it optimally
- The ability to inject events into the TDAQ system is likely to be the limiting factor
  - The system is designed to get events out of Point 1 at high rate, not to get them in
- Need to carefully evaluate the injection alternatives
  - ROD, Lvl2, SFI, EF, SFO
- Understand what the rate restrictions are, based on the existing hardware
  - It makes little sense to design and install additional hardware just to support the FDR
- Understand whether event cloning/replication can be used to increase throughput
- We assume Lvl2/EF will be run in pass-through mode
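As a back-of-envelope illustration of why the injection rate matters, the sketch below computes how long it would take to push O(10^7) events through the system at an assumed injection rate, with and without event cloning. The 200 Hz figure comes from the data-flow slide that follows; the injection rate and cloning factor are invented for illustration only.

```python
# Back-of-envelope sketch of the injection-rate limitation (illustrative).
# 200 Hz is the nominal output rate from the data-flow slide; the
# injection rate and cloning factors below are assumptions, not TDAQ
# measurements.

N_EVENTS       = 1e7    # O(10^7) FDR events
TARGET_RATE_HZ = 200.0  # nominal output rate at Point 1
INJECT_RATE_HZ = 50.0   # assumed achievable injection rate (illustrative)

def days_to_feed(n_events, rate_hz, clone_factor=1):
    """Wall-clock days to feed n_events if each injected event can be
    cloned/replicated clone_factor times inside the TDAQ system."""
    return n_events / (rate_hz * clone_factor) / 86400.0

print(f"inject at {INJECT_RATE_HZ:.0f} Hz, no cloning: {days_to_feed(N_EVENTS, INJECT_RATE_HZ):.1f} days")
print(f"inject at {INJECT_RATE_HZ:.0f} Hz, 4x cloning: {days_to_feed(N_EVENTS, INJECT_RATE_HZ, 4):.1f} days")
print(f"for comparison, {TARGET_RATE_HZ:.0f} Hz output: {days_to_feed(N_EVENTS, TARGET_RATE_HZ):.1f} days")
```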

Slide 7: Data Flow (adapted from Rob, Richard & Claude)
[Diagram: events flow from the detector front-end through LVL1, ROD(B)s, LVL2, SFI(s), the EF, and the SFOs to Tier-0, with RAW data at 200 Hz / 320 MB/s plus express and calibration streams. Tier-0 runs fast reconstruction/calibration and prompt (bulk) reconstruction, verifies the database from online (configuration, calibration, DCS, monitoring) with prompt calibration and digested status added, and transfers ESD at 100 MB/s and AOD at 20 MB/s to Tier-1s; Tier-2 transfers add the TAG DB (DQ status). Conditions data are replicated via Tier-1 Oracle replicas and Tier-2 replicas. Online and offline DQA produce a status summary per event and luminosity block. Analysis Model components are missing from the picture.]
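The per-event sizes implied by the rates in the diagram follow from simple division; the sketch below assumes the ESD and AOD streams carry the same 200 Hz event rate as RAW.

```python
# Per-event sizes implied by the bandwidths in the diagram above,
# assuming all three streams run at the 200 Hz RAW event rate.

RATE_HZ = 200.0
STREAMS_MB_PER_S = {"RAW": 320.0, "ESD": 100.0, "AOD": 20.0}

for stream, bandwidth in STREAMS_MB_PER_S.items():
    print(f"{stream}: {bandwidth:6.1f} MB/s / {RATE_HZ:.0f} Hz"
          f" = {bandwidth / RATE_HZ:.2f} MB/event")
```

This works out to 1.6 MB per RAW event, 0.5 MB per ESD event, and 0.1 MB per AOD event.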

Slide 8: Proposed Strategy (D. Quarrie)
- So far the emphasis has been on focused, orthogonal component tests
  - E.g. the Tier-0 tests, SC4, the TDAQ Large Scale Test, the 3D project tests, etc.
- The Data Streaming test spans many of these
  - Besides meeting its primary goal, it provides valuable feedback on missing capabilities
- The proposal is to use this test as the vehicle towards the FDR
- In addition, we now have some manpower to use it (at a smaller scale) as a regression testbed using the RTT
  - D. Quarrie's much-delayed Full Chain Test
- Identify missing or kludged functionality and adiabatically replace it
- Couple into the ongoing component tests

Slide 9: Coupling to Tier-0 Tests?
- Running for about two weeks now (mixed experience; many problems, in particular at CERN)
- Another test is planned for May
- One constraint is the number of nodes available at CERN for achieving the bandwidth goals
  - The real reconstruction code cannot be used without significantly reducing the CPU time per event
- David proposes to use the real reconstruction code for the May tests
  - Perhaps with some algorithms and/or output EDM disabled, to fit the bandwidth goals within the CPU constraints
  - An interim release 14.0.X should be available for that
- Subsequent tests should use the full reconstruction as far as possible
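A rough way to see this CPU-versus-bandwidth tension is to estimate how many Tier-0 nodes are needed to keep up with the input rate as a function of the per-event reconstruction time. The per-event times and jobs-per-node figure below are illustrative assumptions, not measured release-13/14 numbers; only the 200 Hz input rate comes from the slides.

```python
# Illustrative sketch of the CPU-vs-bandwidth constraint on this slide.
# The per-event CPU times and jobs-per-node value are assumptions; 200 Hz
# is the nominal input rate from the data-flow slide.

INPUT_RATE_HZ = 200.0

def nodes_needed(cpu_s_per_event, jobs_per_node=2):
    """Nodes required so aggregate throughput matches the input rate."""
    return INPUT_RATE_HZ * cpu_s_per_event / jobs_per_node

for label, cpu_s in [("full reconstruction (assumed)",         30.0),
                     ("some algorithms disabled (assumed)",    10.0),
                     ("fast/reduced reconstruction (assumed)",  2.0)]:
    print(f"{label:38s} {cpu_s:5.1f} s/event -> ~{nodes_needed(cpu_s):.0f} nodes")
```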

Slide 10: Coupling to the Calibration Data Challenge?
- The final phase of the CDC is to test the ability to determine and correct misalignment on a timescale of 24 hours
  - As required by the computing model prior to Tier-0 processing
- The early CDC will use release 13.0.X
- The final phase could also use release 14.0.X
  - The timescales match
- How best to couple this to the May Tier-0 tests?

Slide 11: Other Couplings
- Data Quality Monitoring
- DDM & 3D
- Metadata Catalog (AMI)
- TAG Database
- COOL Database
- Analysis Model
- Physicist involvement
  - Absolutely crucial
- We need to understand the proposed component test schedules and create a plan coupling them together
- The master planning document is at:
  ssioning?rev=6;filename=ATLAS-Offline-Computing.pdf

Slide 12: Some Technical Issues
- Implications of the >2 GB file size
- File sizes, and how/where to merge?
  - File sizes must be matched not only to e.g. DDM optimization, but also to job processing times (e.g. well under 24 hours per node at Tier-0) and operational constraints (e.g. run duration)
- Would it be technically feasible to perform merging via file concatenation instead of via read-in/write-out?
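For the merging question, a first-order sizing exercise is to ask how many input files can be grouped per merged output while staying under the 2 GB boundary; the input file size below is an illustrative assumption, only the 2 GB cap comes from the slide.

```python
# First-order merge sizing for the 2 GB question above. The input file
# size is an illustrative assumption; the 2 GB cap is from the slide.

INPUT_FILE_MB = 150.0       # assumed size of one unmerged file
SIZE_CAP_MB   = 2 * 1024.0  # stay under the 2 GB boundary

def merge_group_size(input_mb, cap_mb):
    """Largest number of whole input files that fit under the cap."""
    return int(cap_mb // input_mb)

n = merge_group_size(INPUT_FILE_MB, SIZE_CAP_MB)
print(f"merge {n} files of {INPUT_FILE_MB:.0f} MB -> "
      f"{n * INPUT_FILE_MB / 1024.0:.2f} GB per merged file")
```

On the concatenation question itself: plain byte-level concatenation only yields a valid merged file for formats designed to be appended, and structured event formats such as ROOT/POOL generally require a read-in/write-out merge, which is presumably why the slide flags it as an open question rather than a given.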