D0 Grid: CCIN2P3 at Lyon
Patrice Lebrun
D0RACE Workshop, Feb. 12, 2002

IN2P3 Computing Center
- National computing center of IN2P3
  - Resources shared with many other experiments (CMS, ATLAS, BaBar, nuclear physics, ...)
  - 3 platforms supported:
    - Solaris
    - AIX
    - Linux (250 dual-processor workers)
  - Network (current):
    - 155 Mbit/s to Chicago, but only 20 Mbit/s Chicago - FNAL; should be improved
  - Foreign logins are possible:
    - webcc.in2p3.fr/user/logon
- French D0RACE contacts: Laurent Duflot, Patrice Lebrun

Data Grid Project vs D0
- The IN2P3 Computing Center is committed to being part of the European DataGrid project testbed; this may be useful for the D0 collaboration.
- In this context, the IN2P3 Computing Center will provide its operational infrastructure and its expertise to the project testbed.
- Fully integrated with the current production platform, the testbed platform will be composed of two main components:
  - Computing element
  - Storage element

Computing Element
- Approximately 400 CPUs running Linux RedHat 6.1 (but shared with other experiments)
- Controlled by the home-grown Globus-enabled batch scheduler, BQS (Batch Queueing System)
- These resources are not dedicated to testing purposes, yet they are all available to the testbed
- 130 more machines running Solaris and AIX, which represent about ... SpecInt95, may be integrated into the testbed platform as the project evolves

Storage Element
- Data storage and retrieval is a key element.
- The current storage capacity is 35 TB on disk.
- Besides the tape staging facility, a hierarchical mass storage service based on HPSS (High Performance Storage System) is provided.
- 50 TB of data are currently served by HPSS with a disk cache of 1 TB (about 4 TB are already used by D0).
- The tape vault has an online capacity of ... high-end cartridges.
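The slides do not show it, but the natural way for a testbed job to read such HPSS-resident data is through the RFIO layer discussed later in this talk. Below is a minimal, hedged sketch using the classic SHIFT/RFIO C API (rfio_open, rfio_read, rfio_close from libshift); the prototypes are declared by hand rather than including the real shift.h header, and the HPSS path is a hypothetical example, not taken from the slides.

```cpp
// Minimal sketch: reading an HPSS-resident file through RFIO (libshift).
// The prototypes below mirror the classic SHIFT/RFIO C API; in real code
// one would #include <shift.h> and link with -lshift.
#include <cstdio>
#include <fcntl.h>

extern "C" {
int rfio_open(const char *path, int flags, int mode);
int rfio_read(int fd, void *buffer, int nbytes);
int rfio_close(int fd);
}

int main() {
    // Hypothetical HPSS path, for illustration only.
    const char *path = "/hpss/in2p3.fr/group/d0/mc/some_file.dat";

    int fd = rfio_open(path, O_RDONLY, 0);
    if (fd < 0) {
        std::fprintf(stderr, "rfio_open failed for %s\n", path);
        return 1;
    }

    char buf[8192];
    long total = 0;
    int n;
    while ((n = rfio_read(fd, buf, sizeof(buf))) > 0)
        total += n;  // a real job would process the data here

    rfio_close(fd);
    std::printf("read %ld bytes from HPSS via RFIO\n", total);
    return 0;
}
```

The call pattern is deliberately identical to POSIX open/read/close, which is why recompiling an experiment's I/O packages against RFIO, as described on the following slides, is feasible.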

SAM at CCIN2P3
- A SAM station is running at Lyon (ccin2p3-analysis)
  - ccd0 station (the only station dedicated to D0)
  - Very low cache size (at this time only 5 GB, and not used)
  - HPSS used as local disk (many TB)
    - 500 GB of disk cache
    - RFIO is used to access data, but this is not an official D0 implementation
  - "sam store" intensively used by MC production to transfer files from Lyon to FNAL
    - File importation still has to be tested
    - bbftp with RFIO is used:
      - recompiled with RFIO to get files directly from HPSS, with no intermediate disk used
      - new bbrcp (a null file size means the file is in HPSS); here too we need an official implementation that takes HPSS into account (see the sketch below)
  - A new station will be created, used only for MC production
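The bbrcp convention above (a zero-length file on local disk standing in for data that actually lives in HPSS) is easy to picture with a small sketch; ensure_local_copy and fetch_from_hpss are hypothetical names for illustration, not D0 or bbftp code.

```cpp
// Hedged sketch of the convention described above: a zero-length stub file
// on local disk signals that the real content must be fetched from HPSS.
#include <sys/stat.h>
#include <cstdio>

// Hypothetical stand-in for a real copy from HPSS (e.g. through RFIO).
static bool fetch_from_hpss(const char *path) {
    std::printf("would fetch %s from HPSS via RFIO\n", path);
    return true;
}

static bool ensure_local_copy(const char *path) {
    struct stat st;
    if (stat(path, &st) != 0) {
        std::fprintf(stderr, "%s: no such file\n", path);
        return false;
    }
    if (st.st_size == 0) {
        // Null file size: by convention the data lives in HPSS, not on disk.
        return fetch_from_hpss(path);
    }
    return true;  // non-empty file: the data really is on local disk
}

int main(int argc, char **argv) {
    const char *path = (argc > 1) ? argv[1] : "stub.dat";  // hypothetical stub
    return ensure_local_copy(path) ? 0 : 1;
}
```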

Software Installation...
- No major problem, but:
  - It needs some time and some space (8 GB AFS partition used)
  - We need to recompile the I/O package to access files on HPSS using RFIO (performance is identical to local disk access)
    - /d0dist is a problem due to AFS:
      - all paths start with /afs/in2p3.fr/...
      - CCIN2P3 refuses to create this directory on all frontal (login) machines
      - only one machine (ccd0) is dedicated to hosting /d0dist
    - Solution: a new environment variable that redefines this path?
  - A CCIN2P3 version of ROOT
    - A shared libRFIO.so linked against a local libshift.a is used, but I/O performance is low (under investigation); see the macro sketch below
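For context, once a libRFIO.so linked against the local libshift.a is in place, reading a file from ROOT looks roughly like the unnamed macro below; the HPSS path is a hypothetical example, and the exact plugin setup depends on the ROOT build.

```cpp
// ROOT macro sketch: opening an HPSS-resident file through the RFIO plugin.
// Assumes a ROOT build where libRFIO.so (linked against libshift) exists.
{
    gSystem->Load("libRFIO");  // pull in the RFIO I/O layer

    // Hypothetical path, for illustration only.
    TFile *f = TFile::Open("rfio:/hpss/in2p3.fr/group/d0/some_file.root");
    if (f && !f->IsZombie()) {
        f->ls();    // quick sanity check: list the file contents
        f->Close();
    }
}
```

Every read in such a macro goes through the RFIO layer rather than local POSIX I/O, which is where the low I/O performance mentioned above would show up.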

Comments
- A D0 grid made of clones seems easy, but
- D0 has to make some effort to take into account sites where the architecture is a little bit different:
  - I/O (RFIO)
  - Disk sharing (AFS)
  - Storage (HPSS)
  - OS (Linux 6.1 vs 7.2)
  - ...
- The work is done but not official...
  - It has to be taken into account in the official D0 packages

Manpower
- Software installation
  - Smain Kermiche (CPPM Marseille)
  - Michel Jaffré (LAL Paris)
- SAM
  - Laurent Duflot (LAL Paris)
  - Patrice Lebrun (IPN Lyon)
- MC production
  - Patrice Lebrun
- D0 Grid at CCIN2P3
  - Patrice Lebrun and Laurent Duflot
  - Someone else at FNAL would be very useful?

MOU
- Memorandum of Understanding between D0 and CCIN2P3 (maybe one for each RAC)
  - CPU
    - platforms
    - CPU time needed
  - Storage
    - disk
    - tapes
  - Network
    - bandwidth
  - Middleware and software
  - Logins
  - AOB