DØ RAC Working Group Report


DØ RAC Working Group Report
DØRACE Meeting, June 6, 2002
Jae Yu

- Progress
- Definition of an RAC
- Services provided by an RAC
- Requirements of an RAC
- Pilot RAC program
- Open issues

DØRAC Working Group Progress
- The working group was formed at the February workshop
- Members: I. Bertram, R. Brock, F. Filthaut, L. Lueking, P. Mattig, M. Narain, P. Lebrun, B. Thooris, J. Yu, C. Zeitnitz
- Held many weekly meetings and a face-to-face meeting at FNAL a few weeks ago to hash out some unclear issues
- A proposal (DØ Note #3984) specifying the services and requirements for RACs has been drafted
  - Document at: http://www-hep.uta.edu/~d0race/d0rac-wg/d0rac-spec-050802-8.pdf
  - All comments are in; the target is to release it after the discussion at the ADM tomorrow

Proposed DØRAM Architecture
[Architecture diagram: the Central Analysis Center (CAC) sits at the top; Regional Analysis Centers (RACs) provide various services; Institutional Analysis Centers (IACs) connect Desktop Analysis Stations (DAS). The legend distinguishes normal interaction/communication paths from occasional interactions.]
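To make the tiering concrete, here is a minimal sketch of the DØRAM hierarchy as a data structure. The tier names come from the diagram; the class, field, and example site names are illustrative only, not part of the proposal.

```python
# A minimal sketch of the proposed DØRAM tiers as a parent/child hierarchy.
# Tier names (CAC, RAC, IAC, DAS) come from the diagram; the class, field,
# and example site names are illustrative only.
from dataclasses import dataclass, field
from typing import List, Optional


@dataclass
class Site:
    name: str
    tier: str                                              # "CAC", "RAC", "IAC", or "DAS"
    parent: Optional["Site"] = field(default=None, repr=False)
    children: List["Site"] = field(default_factory=list)

    def attach(self, child: "Site") -> "Site":
        """Register a lower-tier site below this one."""
        child.parent = self
        self.children.append(child)
        return child


# Normal interaction follows the tree edges (CAC <-> RAC <-> IAC <-> DAS);
# occasional interaction (e.g. an IAC talking directly to the CAC) bypasses a level.
cac = Site("Fermilab", "CAC")
rac = cac.attach(Site("Example regional center", "RAC"))
iac = rac.attach(Site("Example institute", "IAC"))
iac.attach(Site("Physicist desktop", "DAS"))
```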

What is a DØRAC?
- An institute with large, concentrated, and available computing resources
  - Many 100s of CPUs
  - Many 10s of TB of disk cache
  - Many 100s of MB/s of network bandwidth
  - Possibly equipped with HPSS
- An institute willing to provide services to a few smaller institutes in the region
- An institute willing to provide increased infrastructure as the data from the experiment grows
- An institute willing to provide support personnel if necessary
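As a rough illustration only, the resource profile above can be read as a simple eligibility check. The numeric thresholds below are a literal reading of "many 100s" and "many 10s" and are not official cut-offs.

```python
# An illustrative reading of the resource profile above as an eligibility
# check. The numeric thresholds are a literal reading of "many 100s of CPUs",
# "many 10s of TB", and "many 100s of MB/s"; they are not official cut-offs.
from dataclasses import dataclass


@dataclass
class SiteResources:
    cpus: int
    disk_cache_tb: float
    network_mb_per_s: float
    has_mass_storage: bool            # e.g. HPSS; optional per the slide, so not tested below


def qualifies_as_rac(site: SiteResources) -> bool:
    """Rough test against the minimum RAC resource profile."""
    return (
        site.cpus >= 200
        and site.disk_cache_tb >= 20
        and site.network_mb_per_s >= 100
    )


print(qualifies_as_rac(SiteResources(400, 50.0, 155.0, True)))   # True
```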

What services do we want a DØRAC to do?
- Provide intermediary code distribution
- Generate and reconstruct MC data sets
- Accept and execute analysis batch job requests
- Store data and deliver it upon request
- Participate in re-reconstruction of data
- Provide database access
- Provide manpower support for the above activities

Code Distribution Service
- Current releases: 4 GB total, expected to grow to >8 GB(?)
- Why needed?
  - Downloading 8 GB once a week is not a big load on network bandwidth
  - Efficiency of release updates relies on network stability
  - Exploits remote human resources
- What is needed?
  - Release synchronization must be done at all RACs every time a new release becomes available
  - Potentially large disk space to keep releases
  - UPS/UPD deployment at the RACs (FNAL-specific; interaction with other systems?)
  - Administrative support for bookkeeping
- The current DØRACE procedure works well, even for individual users, so we do not see the need for this service at this point
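Although the slide concludes that a dedicated distribution service is not needed yet, release synchronization, if it were ever implemented, could be as simple as mirroring the release area. The sketch below assumes rsync is available; the server address and paths are placeholders, not real DØ locations.

```python
# A sketch of release synchronization by mirroring the release area with
# rsync, should the service ever be needed. The server address and paths are
# placeholders, not real DØ locations.
import subprocess

CENTRAL = "releases.example.org::d0-releases/"    # hypothetical rsync module
LOCAL_RELEASE_AREA = "/data/d0/releases/"


def sync_releases() -> None:
    """Pull new or changed release files; files already up to date are skipped."""
    subprocess.run(
        ["rsync", "-a", "--delete", CENTRAL, LOCAL_RELEASE_AREA],
        check=True,
    )


# sync_releases()   # would mirror the central release area into the local one
```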

Generate and Reconstruct MC Data
- Currently done 100% at remote sites
- What is needed?
  - A mechanism to automate request processing
  - A Grid that can:
    - Accept job requests
    - Package the job
    - Identify and locate the necessary resources
    - Assign the job to the selected institution
    - Provide status to the users
    - Deliver or keep the results
  - Database access for noise and minimum-bias overlay
- Perhaps the most indisputable task of a DØRAC
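A minimal sketch of the request life cycle listed above (accept, package, assign, report status) follows. All class, function, and site names are illustrative and do not correspond to the actual DØ/SAM-Grid interface.

```python
# A minimal sketch of the MC request life cycle: accept a request, package
# it, pick a site, and track status. All names are illustrative; this is not
# the actual DØ/SAM-Grid interface.
from dataclasses import dataclass
from typing import Dict


@dataclass
class JobRequest:
    request_id: int
    generator_card: str                 # e.g. event-generator settings
    n_events: int
    status: str = "submitted"           # submitted -> packaged -> assigned
    assigned_site: str = ""


def pick_site(free_cpus_by_site: Dict[str, int]) -> str:
    """Choose the site with the most free CPUs (a stand-in for real matchmaking)."""
    return max(free_cpus_by_site, key=free_cpus_by_site.get)


def process(request: JobRequest, free_cpus_by_site: Dict[str, int]) -> JobRequest:
    request.status = "packaged"         # bundle executable, cards, run-time files
    request.assigned_site = pick_site(free_cpus_by_site)
    request.status = "assigned"
    return request


req = process(JobRequest(1, "ttbar_card.txt", 100_000), {"Site A": 60, "Site B": 120})
print(req.assigned_site, req.status)    # Site B assigned
```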

Batch Job Processing
- Currently relies on FNAL resources (DØmino, ClueDØ, CLUBS, etc.)
- What is needed?
  - Sufficient computing infrastructure to process requests: network, CPU, and cache storage to hold job results until transfer
  - Access to relevant databases
  - A Grid that can:
    - Accept job requests
    - Package the job
    - Identify and locate the necessary resources
    - Assign the job to the selected institution
    - Provide status to the users
    - Deliver or keep the results
- This task definitely needs a DØRAC
- Open question: bring the input to the user, or bring the executable to the input?
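The closing question can be framed as a simple transfer-cost comparison. Here is a sketch under assumed sizes and bandwidth (all placeholders); with a single shared link the decision reduces to comparing data volumes.

```python
# A sketch of the "move the data or move the executable" question as a simple
# transfer-cost comparison. The sizes and bandwidth below are placeholders.

def transfer_time_s(size_gb: float, bandwidth_mb_per_s: float) -> float:
    """Time to ship size_gb over a link of the given bandwidth."""
    return size_gb * 1024.0 / bandwidth_mb_per_s


def send_job_to_data(input_gb: float, exe_env_gb: float,
                     link_mb_per_s: float) -> bool:
    """True if shipping the executable and its environment beats shipping the input."""
    return (transfer_time_s(exe_env_gb, link_mb_per_s)
            < transfer_time_s(input_gb, link_mb_per_s))


# Example: 200 GB of TMB input vs. a 4 GB release environment over a 10 MB/s link
print(send_job_to_data(200.0, 4.0, 10.0))   # True: send the job to the data
```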

Data Caching and Delivery Service
- Currently only at FNAL (the CAC)
- Why needed? Data should be readily available to users with minimal latency
- What is needed?
  - Decide what data, and how much of it, we want to store:
    - 100% of the TMB
    - 10-20% of the DST? (so that 100% of the DST is available across the network)
    - Any RAW data at all?
    - What about MC? (~50% of the volume of the actual data)
  - Data should be on disk to minimize caching latency
    - How much disk space? (~50 TB if 100% TMB and 10% DST for Run IIa)
  - Constant shipment of data from the CAC to all RACs
    - Constant bandwidth occupation (14 MB/s for Run IIa RAW)
    - Resources at the CAC are needed
  - A Grid that can:
    - Locate the data (SAM can already do this)
    - Tell the requester the extent of the request
    - Decide whether to move the data or pull the job over
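The disk estimate above is just per-event size times the stored fraction, summed over formats. The sketch below walks through that arithmetic; the TMB size (10 kB/event) is quoted later in this deck, while the event count and DST event size are placeholders chosen only to illustrate the ~50 TB scale.

```python
# A sketch of the cache-sizing arithmetic behind the "~50 TB" figure:
# disk = N_events * (TMB fraction * TMB size + DST fraction * DST size).
# The TMB size (10 kB/event) is quoted later in this deck; the event count
# and DST event size are placeholders chosen only to illustrate the scale.

def rac_disk_tb(n_events: float,
                tmb_kb_per_evt: float = 10.0,    # from the deck
                dst_kb_per_evt: float = 150.0,   # assumed, for illustration
                tmb_fraction: float = 1.0,
                dst_fraction: float = 0.1) -> float:
    per_event_kb = tmb_fraction * tmb_kb_per_evt + dst_fraction * dst_kb_per_evt
    return n_events * per_event_kb / 1e9         # kB -> TB (decimal units)


# With 2 billion events (placeholder) this gives 50 TB, the order of
# magnitude quoted on the slide for Run IIa.
print(f"{rac_disk_tb(2e9):.0f} TB")
```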

Data Reprocessing Services
- These include:
  - Re-reconstruction of the actual and MC data (from DST? from RAW?)
  - Re-streaming of data
  - Re-production of TMB data sets
  - Re-production of ROOT trees
  - Ab initio reconstruction
- Currently done only at the CAC offline farm

Reprocessing Services, cont'd
- What is needed?
  - Sufficiently large bandwidth to transfer the necessary data, or HPSS(?)
    - DSTs: RACs will already have 10% or so permanently stored
    - RAW: transfer should begin when the need arises; RACs reconstruct as the data is transferred
  - Large data storage
  - Constant data transfer from the CAC to the RACs as the CAC reconstructs fresh data
    - Dedicated file server at the CAC for data distribution to the RACs
    - Constant bandwidth occupation
    - Sufficient buffer storage at the CAC in case the network goes down
  - Reliable and stable network
  - Access to relevant databases: calibration, luminosity, geometry, and magnetic field map
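The buffer storage item above is a simple rate-times-outage calculation. A sketch follows; the 14 MB/s rate is the Run IIa RAW figure quoted earlier in the deck, and the outage length is a placeholder.

```python
# A sketch of sizing the CAC buffer: it must absorb the outgoing stream for
# the longest network outage to be ridden out. The 14 MB/s rate is the
# Run IIa RAW figure quoted earlier; the outage length is a placeholder.

def buffer_needed_tb(rate_mb_per_s: float, outage_hours: float) -> float:
    """Disk needed to hold the stream produced during an outage."""
    return rate_mb_per_s * outage_hours * 3600.0 / 1e6   # MB -> TB (decimal)


# Riding out a 48-hour outage at 14 MB/s needs roughly 2.4 TB of buffer disk.
print(f"{buffer_needed_tb(14.0, 48.0):.1f} TB")
```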

Reprocessing Services, cont'd
- RACs have to transfer the new TMB to the other sites
  - Since only 10% or so of the DSTs reside at a given RAC, only the TMB equivalent to that portion can be regenerated there
- Well-synchronized reconstruction executable and run-time environment
- A Grid that can:
  - Identify resources on the net
  - Optimize resource allocation for the most expeditious reproduction
  - Move data around if necessary
- A dedicated block of time for concentrated CPU usage if disaster strikes
- Questions:
  - Do we keep copies of all data at the CAC?
  - Do we ship DSTs and TMBs back to the CAC?

Database Access Service
- Currently done only at the CAC
- What is needed?
  - Remote DB access software services
  - Some copy of the DB at the RACs
    - A substitute for the Oracle DB at remote sites
    - A means of synchronizing the DBs
- A possible solution is a proxy server at the central location, supplemented with a few replicated DBs for backup
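A minimal sketch of the access pattern suggested above: clients query a central proxy and fall back to a replicated copy if the proxy is unreachable. The endpoints and the fetch() call are hypothetical stand-ins for whatever remote DB access layer is actually deployed.

```python
# A sketch of proxy-first, replica-fallback database access. The endpoints
# and the fetch() call are hypothetical stand-ins for the real remote DB
# access layer.
from typing import List

PROXY = "db-proxy.central.example.org"
REPLICAS = ["db-replica.rac1.example.org", "db-replica.rac2.example.org"]


def fetch(host: str, sql: str) -> dict:
    """Placeholder for the real remote-access call (e.g. via a DB server API)."""
    raise ConnectionError(host)        # stub: pretend every host is down


def query_calibration(run_number: int, endpoints: List[str]) -> dict:
    for host in endpoints:
        try:
            return fetch(host, f"SELECT * FROM calib WHERE run = {run_number}")
        except ConnectionError:
            continue                   # try the next endpoint
    raise RuntimeError("no database endpoint reachable")


# query_calibration(123456, [PROXY] + REPLICAS)   # proxy first, then replicas
```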

What services do we want a DØRAC to do? (Recap: intermediary code distribution, MC generation and reconstruction, analysis batch job execution, data storage and delivery, participation in re-reconstruction, database access, and manpower support for these activities.)

DØRAC Implementation Timescale
- Implement the first RACs by Oct. 1, 2002
  - CLUBs cluster at FNAL and Karlsruhe, Germany(?)
  - Cluster the associated IACs
  - Transfer the TMB (10 kB/evt) data set constantly from the CAC to the RACs
- Workshop on RACs in Nov. 2002
- Implement the next set of RACs by Apr. 1, 2003
- Implement and test DØGridware as it becomes available
- The DØGrid should work by the end of Run IIa (2004), retaining the DØRAM architecture
- The next-generation DØGrid, a truly gridified network without …

Pilot DØRAC Program
- Pilot RAC sites:
  - Karlsruhe, Germany: in the process of discussion with the management at the institute
  - CLUBS, Fermilab: needs to be verified (bring it up at the CPB?)
- What should we accomplish?
  - Transfer TMB files as they get produced
    - A file server (both hardware and software) at the CAC for this job
    - Request-driven or constant push?
  - Network monitoring tools to observe network occupation and stability
    - From CAC to RAC
    - From RAC to IAC
  - Allow IAC users to access the TMB
  - Observe:
    - Use of the data set
    - Access patterns
    - Performance of the access system
    - SAM system performance for locating data
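Here is a sketch of the kind of network monitoring the pilot calls for: time a test transfer between two sites and log the achieved throughput. The use of scp and the host/path names are placeholders for whatever transfer tool the pilot settles on.

```python
# A sketch of CAC->RAC throughput monitoring: copy a test file and log the
# achieved rate. scp and the host/path names are placeholders.
import subprocess
import time

TEST_FILE = "/data/monitor/testfile_1gb"              # hypothetical local file
REMOTE = "rac.example.org:/data/monitor/incoming/"    # hypothetical destination


def measure_throughput_mb_s(size_mb: float) -> float:
    """Copy the test file and return the achieved rate in MB/s."""
    start = time.time()
    subprocess.run(["scp", "-q", TEST_FILE, REMOTE], check=True)
    return size_mb / (time.time() - start)


if __name__ == "__main__":
    rate = measure_throughput_mb_s(1024.0)
    print(f"{time.ctime()}  CAC->RAC  {rate:.1f} MB/s")
```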

Pilot DØRAC Program, cont'd
- User account assignment?
- Resource (CPU and disk space) needs?
- What Grid software functionality is needed?
  - To interface with the users
  - To locate the input and the necessary resources
  - To gauge the resources
  - To package the job requests

Some Open Issues
- What do we do with MC data?
  - Iain's suggestion is to keep the DØStar format, not the DST
    - Additional storage for minimum-bias event samples
  - What is the analysis scheme? The only way is to re-do the remaining steps of the MC chain:
    - Requires additional CPU resources: DØSim, reconstruction, reco-analysis
    - Additional disk space to buffer intermediate files
  - Keep the DST and ROOT tuple? Where?
- What other questions do we want the pilot RAC program to answer?
- How do we acquire sufficient funds for these resources?
- Which institutions are candidates for RACs?
- Do we have full support from the collaboration?
- Other detailed issues are covered in the proposal