DØ Monte Carlo Challenge: A HEP Application (Dutch Datagrid, SARA, November 7, 2001)

Presentation transcript:

November 7, 2001 Dutch Datagrid SARA 1 DØ Monte Carlo Challenge: A HEP Application

November 7, 2001 Dutch Datagrid SARA 2 Outline
The DØ experiment
The application
The NIKHEF DØ farm
SAM (aka the DØ grid)
Conclusions

November 7, 2001 Dutch Datagrid SARA 3 The DØ experiment
Fermi National Accelerator Lab
Tevatron
–Collides protons and antiprotons of 980 GeV/c
–Run II
DØ detector
DØ collaboration
–500 physicists, 72 institutions, 19 countries

November 7, 2001 Dutch Datagrid SARA 4 The DØ experiment
Detector Data
–1,000,000 channels
–Event size 250 KB
–Event rate ~50 Hz
–On-line data rate 12 MBps (~250 KB/event × 50 Hz)
–Est. 2-year totals (incl. processing and analysis): 1 × 10^9 events, ~0.5 PB
Monte Carlo Data
–5 remote processing centers
–Estimate ~300 TB in 2 years

November 7, 2001 Dutch Datagrid SARA 5 (figure slide, no text)

November 7, 2001 Dutch Datagrid SARA 6 The application
Generate events
Follow particles through detector
Simulate detector response
Reconstruct tracks
Analyse results

November 7, 2001 Dutch Datagrid SARA 7 The application
Starts with the specification of the events
Generates (intermediate) data
Stores data in tape robots
Declares files in database

November 7, 2001 Dutch Datagrid SARA 8 The application
consists of
–Monte Carlo programs: gen, d0gstar, sim, reco, recoanalyze
–mc_runjob: a bunch of Python scripts
runs on
–SGI Origin (Fermilab, SARA)
–Linux farms
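The slide above lists the chain of executables; a minimal sketch of how such a chain could be driven is given below, assuming each step consumes the previous step's output (the first step starts from the event specification). The step names come from the slide, but the file naming, command-line flags, and the run_chain helper are hypothetical illustrations, not the real mc_runjob interface.

# Hypothetical sketch of the MC chain: each step reads the previous step's
# output and writes a new file. Executable names follow the slide; the file
# naming and command-line style are illustrative assumptions only.
import subprocess

STEPS = ["gen", "d0gstar", "sim", "reco", "recoanalyze"]

def run_chain(request_id, workdir):
    """Run the MC job steps in order, chaining intermediate files."""
    previous_output = None
    for step in STEPS:
        output = f"{workdir}/{request_id}.{step}.out"
        cmd = [step]
        if previous_output:                      # the first step starts from the
            cmd += ["--input", previous_output]  # event specification instead
        cmd += ["--output", output]
        subprocess.run(cmd, check=True)          # abort the chain on failure
        previous_output = output
    return previous_output

# run_chain("mcc_request_0042", "/data/d0/jobs")   # hypothetical request name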

November 7, 2001 Dutch Datagrid SARA 9 mc_runjob
Creates directory structure for job
Creates scripts for each job step
Creates scripts for submission of metadata
Creates job description file
Submits job to batch system
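A minimal sketch of these preparation steps follows. It only illustrates the flow the slide describes: the directory layout, script contents, metadata file name, and the exact FBS submit command are assumptions, not the actual mc_runjob code.

# Sketch of the job-preparation flow listed on the slide; everything concrete
# here (paths, file contents, the fbs command form) is an assumption.
import os
import subprocess

def prepare_and_submit(request_id, steps, jobroot="/d0/jobs"):
    jobdir = os.path.join(jobroot, request_id)
    os.makedirs(jobdir, exist_ok=True)           # directory structure for the job

    for step in steps:                           # one wrapper script per job step
        script = os.path.join(jobdir, f"run_{step}.sh")
        with open(script, "w") as f:
            f.write(f"#!/bin/sh\n{step} --request {request_id}\n")
        os.chmod(script, 0o755)

    meta = os.path.join(jobdir, "declare_metadata.sh")   # metadata submission script
    with open(meta, "w") as f:
        # the metadata file name is a hypothetical placeholder
        f.write(f"#!/bin/sh\nsam declare import_{request_id}.py\n")
    os.chmod(meta, 0o755)

    jdf = os.path.join(jobdir, "job.jdf")        # job description file; placeholder
    with open(jdf, "w") as f:                    # contents (the real FBS job has
        f.write("\n".join(f"SECTION {s}" for s in steps) + "\n")  # mcc/rcp/sam sections)

    # hand the job to the batch system (FBS on this farm; command form assumed)
    subprocess.run(["fbs", "submit", jdf], check=True)

# prepare_and_submit("mcc_request_0042", ["gen", "d0gstar", "sim", "reco", "recoanalyze"])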

November 7, 2001 Dutch Datagrid SARA 10 The NIKHEF DØ farm
Batch server (hoeve)
–Boot/software server
–Runs mc_runjob
File server (schuur)
–Runs SAM
50 – 70 nodes
–Run MC jobs

November 7, 2001 Dutch Datagrid SARA 11 (figure slide, no text)

November 7, 2001 Dutch Datagrid SARA 12 node
At boot time:
–Boots via network from batch server
–NFS mounts DØ directories on batch server
At runtime:
–Copies input from batch server to local disk
–Runs MC job steps
–Stores (intermediate) output on local disk
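The runtime part of this procedure could look roughly like the sketch below; the host names, paths, and the rcp-based staging step are illustrative assumptions, not the farm's actual scripts.

# Sketch of what a worker node does at run time, per the slide: stage input
# from the batch server, run the MC steps, keep output on local disk.
import subprocess

BATCH_SERVER = "hoeve"           # batch/boot/software server (slide 10)
NFS_JOBDIR = "/d0/jobs"          # DØ directories NFS-mounted from the batch server
LOCAL_SCRATCH = "/scratch/d0"    # the node's own local disk

def run_on_node(request_id, steps):
    # copy the job input from the batch server to local disk (path assumed)
    subprocess.run(["rcp", f"{BATCH_SERVER}:{NFS_JOBDIR}/{request_id}/input.tar",
                    f"{LOCAL_SCRATCH}/input.tar"], check=True)
    # run the MC job steps; each leaves its (intermediate) output on local disk
    for step in steps:
        script = f"{NFS_JOBDIR}/{request_id}/run_{step}.sh"
        subprocess.run([script], check=True, cwd=LOCAL_SCRATCH)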

November 7, 2001 Dutch Datagrid SARA 13 File server
Copies output from node to file server
Declares files to SAM
Stores files with SAM in robot (FNAL, SARA)

November 7, 2001 Dutch Datagrid SARA 14 (diagram: an mcc request goes to the farm server; mcc input and output flow between the farm server, 50+ nodes (40 GB disk each) and the file server (1.2 TB); metadata goes to the SAM DB and data to the datastore at FNAL and SARA; the fbs job has three sections: 1 mcc, 2 rcp, 3 sam; control, data and metadata paths are shown separately)

November 7, 2001 Dutch Datagrid SARA 15 NIKHEF
Stores metadata in database at FNAL
–sam declare import_.py
–scripts prepared by mc_runjob
Stores files
–on tape at FNAL via cache on d0mino
–on disk of teras.sara.nl and migrated to tape
–sam store --descrip=import_.py [--dest=teras.sara.nl:/sam/samdata/y01/w42]
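A small sketch of how these two commands might be wrapped on the file server is given below. The command forms are the ones shown on this slide; the metadata file name and the wrapper function itself are hypothetical.

# Sketch of the declare/store step, wrapping the sam commands from the slide.
import subprocess

def declare_and_store(metadata_file, dest=None):
    # declare the file's metadata in the SAM database at FNAL
    subprocess.run(["sam", "declare", metadata_file], check=True)

    # store the file: to tape at FNAL by default, or to the SARA disk
    # (teras.sara.nl) when an explicit destination is given
    cmd = ["sam", "store", f"--descrip={metadata_file}"]
    if dest:
        cmd.append(f"--dest={dest}")
    subprocess.run(cmd, check=True)

# declare_and_store("import_run0042.py",                       # hypothetical name
#                   dest="teras.sara.nl:/sam/samdata/y01/w42") # path from the slide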

November 7, 2001 Dutch Datagrid SARA 16 SARA
No need to install SAM
Declare teras directories in SAM as destination
Access protocol
–May 2001: rcp
–October 2001: bbftp
–??: gridftp
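For illustration only, a transfer wrapper that switches between these access protocols might look like the sketch below; the exact command-line forms for rcp, bbftp, and globus-url-copy are assumptions, not the commands the farm actually ran.

# Sketch: pick the copy command for moving a file to SARA according to the
# access protocol in use (rcp from May 2001, bbftp from October 2001,
# GridFTP planned). Command-line details are assumptions.
import subprocess

def copy_to_sara(local_file, remote_path, protocol="bbftp"):
    if protocol == "rcp":
        cmd = ["rcp", local_file, f"teras.sara.nl:{remote_path}"]
    elif protocol == "bbftp":
        cmd = ["bbftp", "-e", f"put {local_file} {remote_path}", "teras.sara.nl"]
    elif protocol == "gridftp":
        cmd = ["globus-url-copy", f"file://{local_file}",
               f"gsiftp://teras.sara.nl{remote_path}"]
    else:
        raise ValueError(f"unknown protocol: {protocol}")
    subprocess.run(cmd, check=True)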

November 7, 2001 Dutch Datagrid SARA 17 SAM on the Global Scale
Locate files
–Monte Carlo data
–Raw data from detector
–Calibration data
–Accelerator data
Submit (analysis) jobs on local station
Stores results in SAM

November 7, 2001 Dutch Datagrid SARA 18 SAM on the Global Scale
Interconnected network of primary cache stations communicating and replicating data where it is needed.
(diagram: stations at FNAL, including Central Analysis, Datalogger, Reco-farm and ClueD0, connected over LAN/WAN to the MSS and to remote stations)
Current active stations
–FNAL (several)
–Lyon FR (IN2P3)
–Amsterdam NL (NIKHEF)
–Lancaster UK
–Imperial College UK
–Others in US

November 7, 2001 Dutch Datagrid SARA 19 Future Plans for SAM
Better specification of remote data storage locations, especially in MSS.
Universal user registration that allows different usernames, uid, etc. on various stations.
Integration with additional analysis frameworks, Root in particular (almost ready).
Event level access to data.
Movement toward Grid components, GridFTP, GSI…

November 7, 2001 Dutch Datagrid SARA 20 Conclusions
NIKHEF DØ farm is
–easy to use (Antares, L3)
–easy to clone (KUN)
–part of DØ data grid
–moving (slowly) to grid standards

November 7, 2001 Dutch Datagrid SARA 21 (figure slide: Antares, L3)