D0 Computing Model and Plans
Amber Boehnlein, FNAL
D0 Financial Committee, November 18, 2002

Computing Status
D0 has a highly successful computing structure in place:
- Sequential Access by MetaData (SAM) catalogs and manages data access (see the sketch below)
- Robotic storage with reliable drives and media
- dØmino provides high I/O capacity and user access to large amounts of data
- Commissioning the commodity backend
- FNAL reconstruction production farm processing increasing to 35 Hz
- Basic software infrastructure in place
- Fruitful collaboration with the Computing Division on joint projects
- MC generation performed at collaborating institutions
- D0RECO has basic functionality
- Basic filtering at L3
- Online output rate is at design
The building blocks are in place... we are producing first physics results.
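
SAM's real interfaces are not reproduced in this transcript, so the following is only a minimal, self-contained Python sketch of the idea behind metadata-cataloged data access: files are declared to a catalog with metadata, and analyses request datasets by metadata query rather than by explicit file lists or tape locations. All class, field, and file names below are illustrative assumptions, not SAM's API.

    # Toy illustration of metadata-cataloged data access (NOT the SAM API).
    # Files are declared to a catalog with metadata; analysis jobs ask for
    # datasets by metadata query and leave file location to the data handling layer.

    class MetadataCatalog:
        def __init__(self):
            self.files = {}  # filename -> metadata dict

        def declare_file(self, filename, **metadata):
            """Register a file and its metadata (data tier, trigger stream, run, ...)."""
            self.files[filename] = metadata

        def get_dataset(self, **query):
            """Return all files whose metadata matches every key/value in the query."""
            return [
                name for name, meta in self.files.items()
                if all(meta.get(key) == value for key, value in query.items())
            ]

    catalog = MetadataCatalog()
    # Hypothetical run numbers and file names, for illustration only.
    catalog.declare_file("raw_run151817_001.dat", tier="raw", run=151817, stream="all")
    catalog.declare_file("tmb_run151817_001.dat", tier="thumbnail", run=151817, stream="2MU")
    catalog.declare_file("tmb_run151900_004.dat", tier="thumbnail", run=151900, stream="2MU")

    # An analysis asks for "all thumbnail files in the 2MU stream" and does not
    # need to know whether they live in disk cache, robotic storage, or offsite.
    print(catalog.get_dataset(tier="thumbnail", stream="2MU"))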

Computing Architecture
[Architecture diagram showing the DØ detector, Central Data Storage, dØmino, the Central Analysis System, the Central Farm, Remote Farms, Remote Analysis sites, and Linux desktops (ClueDØ).]

Data Flow
[Data-flow diagram: the Data Handling System connects Robotic Storage with the Central Farm, Regional Farms, Central Analysis, ClueD0, and Remote Centers; the data tiers moved are Raw Data, RECO Data, RECO MC, and User Data.]
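
To make the flow concrete, here is a small, self-contained sketch that encodes the diagram as a routing table: each data tier has a producer and a set of consumers, all moved through the data handling system. The exact arrows are not recoverable from the transcript, so the mapping (and every name in it) is an assumption for illustration, not a D0 configuration.

    # Sketch of the data flow above as a simple routing table. The producer and
    # consumer assignments paraphrase the diagram and other slides in this talk;
    # they are illustrative assumptions, not an exact D0 configuration.

    DATA_FLOW = {
        # tier:       (produced by,         delivered to)
        "raw data":   ("online system",     ["robotic storage", "central farm"]),
        "RECO data":  ("central farm",      ["robotic storage", "central analysis", "ClueD0", "remote centers"]),
        "RECO MC":    ("regional farms",    ["robotic storage", "central analysis", "remote centers"]),
        "user data":  ("analysis systems",  ["robotic storage", "ClueD0", "remote centers"]),
    }

    def inbound_tiers(site):
        """List the data tiers that the data handling system delivers to a site."""
        return [tier for tier, (_, consumers) in DATA_FLOW.items() if site in consumers]

    print(inbound_tiers("remote centers"))  # ['RECO data', 'RECO MC', 'user data']
    print(inbound_tiers("central farm"))    # ['raw data']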

Analysis Model
[Diagram: dØmino with its CAS back-end, ClueDØ desktops, and RACs (Regional Analysis Centers).]
Analysis CPU to be provided by:
- Central Analysis System (CAS) at FCC: a PC/Linux dØmino back-end supplied and administered by the Computing Division
- Regional Analysis Centers (RACs): institutions with the CPU, disk, and personnel resources to serve collaborators
Emphasis on remote analysis with formal agreements; FNAL is pursuing improved offsite connectivity.
Analysis needs must be better understood and modeled.

Analysis Patterns
Current access and analysis patterns:
- Physics-group coordinated efforts
- Derived data sets produced by skimming through data sets (DST or TMB); see the sketch below
- Picked event samples of raw data for re-reco studies
- Specialized reprocessing of small data sets
- Physics-topic analysis includes generation of test samples, trigger simulation, background studies, and efficiency studies
- User-level analysis primarily takes place on skimmed data samples at a high-level tier
We need to acquire more information in order to size the analysis systems and tape plant.
We need to simulate the system to identify bottlenecks.
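
As an illustration of the skimming pattern referenced above, here is a minimal, self-contained Python sketch: a physics-group selection is run over thumbnail-level events and only passing events are kept as a derived data set. The event records, the dimuon selection, the thresholds, and all names are hypothetical, not D0 code.

    # Toy skim: run a physics-group selection over thumbnail-level events and keep
    # only those that pass, producing a smaller derived data set for user analysis.

    def dimuon_selection(event):
        """Example group-level selection: at least two muons with pT > 15 GeV."""
        good_muons = [m for m in event["muons"] if m["pt"] > 15.0]
        return len(good_muons) >= 2

    def skim(events, selection):
        """Return the derived (skimmed) sample: the events passing the selection."""
        return [event for event in events if selection(event)]

    # A handful of fake thumbnail-level events standing in for a TMB input stream.
    tmb_events = [
        {"run": 151817, "event": 101, "muons": [{"pt": 22.0}, {"pt": 18.5}]},
        {"run": 151817, "event": 102, "muons": [{"pt": 9.0}]},
        {"run": 151900, "event": 7,   "muons": [{"pt": 31.2}, {"pt": 12.0}, {"pt": 16.4}]},
    ]

    derived_set = skim(tmb_events, dimuon_selection)
    print(f"kept {len(derived_set)} of {len(tmb_events)} events")  # kept 2 of 3 events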

Cost Estimate - June 2002
DØ cost estimate (five-year planning process, with yearly review; all figures in USD):

    Item                 Yr 1        Yr 2        Yr 3        Yr 4        Yr 5        Total
    Infrastructure       400,000     310,000     570,000     645,000     295,000     2,220,000
    Analysis machines    1,152,500   865,000     1,152,500   1,025,000   680,000     4,875,000
    FNAL ClueD0          50,000      50,000      50,000      50,000      50,000      250,000
    Reconstruction       225,000     325,000     575,000     150,000     200,000     1,475,000
    Central disk cache   150,000     150,000     150,000     200,000     200,000     850,000
    Robotic storage      75,000      0           150,000     150,000     150,000     525,000
    Tape drives          450,000     450,000     300,000     600,000     600,000     2,400,000
    Backup facility      100,000     -           -           -           -           100,000
    Sum                  2,602,500   2,150,000   2,997,500   2,820,000   2,175,000   12,745,000

Infrastructure includes database machines, networking, web servers, code servers, and build machines.
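
As a quick arithmetic cross-check of the table, the sketch below recomputes the five-year totals for the rows whose yearly figures appear in full in the source (Infrastructure, Analysis machines, Reconstruction) and for the yearly Sum row. The figures are copied from the table; the check itself is illustrative only.

    # Cross-check: each row's five-year total should equal the sum of its yearly figures.

    rows = {
        "Infrastructure":    [400_000, 310_000, 570_000, 645_000, 295_000],
        "Analysis machines": [1_152_500, 865_000, 1_152_500, 1_025_000, 680_000],
        "Reconstruction":    [225_000, 325_000, 575_000, 150_000, 200_000],
        "Sum (per year)":    [2_602_500, 2_150_000, 2_997_500, 2_820_000, 2_175_000],
    }

    quoted_totals = {
        "Infrastructure":    2_220_000,
        "Analysis machines": 4_875_000,
        "Reconstruction":    1_475_000,
        "Sum (per year)":    12_745_000,
    }

    for name, yearly in rows.items():
        total = sum(yearly)
        status = "ok" if total == quoted_totals[name] else "MISMATCH"
        print(f"{name:20s} computed {total:>12,d}  quoted {quoted_totals[name]:>12,d}  {status}")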

Costing Assumptions
- Analysis cost was estimated by modeling possible user scenarios. This demonstrated the need for collaboration resources beyond FNAL for computing, but it does not reflect full system costs:
  - Disk and servers
  - Networking
  - Infrastructure such as gateway machines and code servers
  - Mass storage: cache machines and drives to support extensive data export
- Project disk not included
- Disk estimate covers only the FNAL central SAM cache on the IRIX machine
- Monte Carlo estimates not included
- Reprocessing not included

FNAL Equipment Budget
The guidance to the experiments is $2M/year; we have been asked to provide $1.5M scenarios for FY 2003, with the reduction coming from the farm purchase and tape drives.
Use the FNAL equipment budget to provide a basic level of functionality:
- Database and other infrastructure
- Reconstruction farm
- Robotic storage and tape drives
- Disk cache
- Basic analysis computing
- Support for data access to enable offsite computing

Institution Contributions
- All Monte Carlo production takes place at regional centers.
- Secondary reprocessing at the Michigan National Partnership for Advanced Computing Initiative center: targeting reprocessing of 15% of the data set, with a goal of March. Additional sites are needed.
- Contributions at FNAL to project disk and to ClueD0.
- Offsite analysis:
  - Task force prototype sites at GridKa and Lyon, with routine SAM delivery of the thumbnail data sample to GridKa
  - SAM-Grid prototype demonstration

Conclusions
- The D0 computing model is successful. SAM, an integrated data handling system, enables flexibility in the allocation of resources, effective use of disk cache and robotic storage, and a path into the Grid era.
- Use the FNAL computing budget to provide the base for infrastructure, robotic storage, reconstruction, and analysis.
- Use collaborating institution resources for project disk, analysis computing, MC generation, and secondary reconstruction.