SAM at CCIN2P3 configuration issues


SAM at CCIN2P3 configuration issues
Patrice Lebrun, IPNL/IN2P3
Oklahoma D0 workshop, 07/10/02

CCIN2P3 present actions
- Computing and data storage services for about 45 experiments
- Regional Center services for: EROS II, BaBar (Tier A), D0, ..., AUGER, LHC

CCIN2P3 services

Linux Workers
- 96 dual PIII 750 MHz + 100 dual PIII 1 GHz running Linux (RedHat 6.1)
- 16 GB of SCSI disk per worker, with 2 GB/CPU for scratch
- Fast Ethernet, 100 Mb/s
- As of now, half of the workers have been upgraded to RedHat 7.2
- 28 dual PIII 1.3 GHz will be installed in September 2002

Networking
- Academic public network "Renater 2", based on virtual networking (ATM) with guaranteed bandwidth (VPN on ATM)
- Lyon to CERN at 155 Mb/s
- Lyon to the US goes through CERN
- Bottleneck: FNAL to Chicago (~30 Mb/s); see the rough estimate below
- Lyon to STARtap at 100 Mb/s, STARtap to ESnet at 50 Mb/s (for BaBar)
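As a rough back-of-the-envelope illustration (an assumed scenario, not a measured figure), if the roughly 8 TB of D0 data mentioned later in this talk all had to cross the ~30 Mb/s FNAL bottleneck, the transfer would take on the order of weeks:

$$\frac{8\ \mathrm{TB} \times 8\ \mathrm{bits/byte}}{30\ \mathrm{Mb/s}} \approx \frac{6.4 \times 10^{13}\ \mathrm{b}}{3.0 \times 10^{7}\ \mathrm{b/s}} \approx 2.1 \times 10^{6}\ \mathrm{s} \approx 25\ \mathrm{days}$$

(ignoring protocol overhead and assuming the full nominal bandwidth is available).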

HPSS (High Performance Storage System)
HPSS, local developments:
- Interface with RFIO (see the sketch below):
  - API: C, Fortran (via cfio)
  - API: C++ (iostream), for gcc and KCC
- Development of 64-bit RFIO; it will be implemented in HPSS
- bbftp: secure parallel ftp using the RFIO interface
HPSS and D0:
- 800 GB of disk cache (could be larger in the future)
- About 8 TB stored (MC and imported data)
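To make the RFIO access path concrete, here is a minimal sketch (not the CCIN2P3 code) of reading an HPSS file from C++ through the RFIO C API. The function names (rfio_open, rfio_read, rfio_close) are the standard RFIO calls; the prototypes are declared by hand only to keep the sketch self-contained, whereas a real build would include the RFIO header from the site installation and link against the RFIO library. The HPSS path is a hypothetical example modelled on the rfcp paths shown later in this talk.

    #include <cstdio>
    #include <fcntl.h>

    // Standard RFIO C calls; a real build would take these from the RFIO
    // header shipped with the site installation instead of declaring them.
    extern "C" {
        int rfio_open(const char *path, int flags, int mode);
        int rfio_read(int fd, void *buffer, int nbytes);
        int rfio_close(int fd);
    }

    int main() {
        // Hypothetical HPSS file, in the host:/path form also used by rfcp.
        const char *path =
            "cchpssd0.in2p3.fr:/hpss/in2p3.fr/group/d0/mcprod/some_file";

        int fd = rfio_open(path, O_RDONLY, 0);
        if (fd < 0) {
            std::fprintf(stderr, "rfio_open failed\n");
            return 1;
        }

        char buffer[8192];
        int n;
        while ((n = rfio_read(fd, buffer, sizeof(buffer))) > 0) {
            // ... process n bytes of data ...
        }
        rfio_close(fd);
        return 0;
    }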

SAM Stations
Two SAM stations:
- ccin2p3-analysis (general purpose)
- ccin2p3-production (dedicated to MC production), used to store files at FNAL
Hardware: dual PIII 1 GHz with 1 GB of memory, RedHat 6.1
Administrators: L. Duflot and P. Lebrun
Version: v4_1_0; flavor: Linux+2.2
bbftp recompiled at Lyon with RFIO
The current space limit for the SAM cache is 10 GB, and the cache is not seen by other computers (not used).

SAM - HPSS Issues
[Data-flow diagram: a client job on a worker (> 400 CPUs, connected by Gigabit Ethernet) uses SAM running on ccd0, which has a locally mounted 10 GB cache; the files live in HPSS (cchpssd0), backed by an STK 3490 tape robot; FNAL is reached with bbftp over a 100 Mb/s IP link. Numbered steps: (1) sam locate from the client job, (2) get the file name and location, (3) rfstream (iostream) access, (4) locate the file on tape (about 30 s), (5) copy the file to HPSS's disk cache at about 12 MB/s (rfcp), (6) copy the file to the worker over RFIO.]

Comments
Advantages:
- low cost (manpower, hardware, ...)
- efficiency (a high number of workers and jobs)
- low maintenance (no daemon (SAM) running and/or no mount (NFS) on each worker)
Disadvantages:
- no optimization performed by SAM, only by HPSS (not physics oriented)
- any other? ...

Some words about SAM
How a store is done: the file is first copied from HPSS to local disk with rfcp, then stored with SAM:
    rfcp cchpssd0.in2p3.fr:/hpss/in2p3.fr/group/d0/mcprod/rec_filename /d0cache/import/
    cp /<path>/import_description_file /d0cache/import
    cd /d0cache/import
    sam store --source=. --descrip=import_description_file
    rm -f *        # clean the buffer
If SAM gave only the file location, bbftp could read the file directly from HPSS.
Problem: the import description file has to be in the same directory as the file being stored. To the SAM experts: is this true or false?

Work to do
D0 code is not touched: the implementation goes into the rfstream class only.
Mode 1 (copy to local disk; see the sketch below):
- check the space on the local disk
- rfcp (RFIO copy) the file from HPSS and open it locally
- close and remove the file from disk afterwards
Mode 2:
- open and read the file directly from HPSS through the API (but about 5 times slower than mode 1)
SAM requirement: when a file is located in HPSS (the cchpssd0 server), SAM has to return only the file name and location, without any check on the file (that is performed by HPSS).
Exportation from FNAL to CCIN2P3 still has to be investigated.
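Purely as an illustration of the mode 1 sequence above, and not the actual rfstream implementation, the job-side logic might look like the following sketch. The scratch path is a hypothetical example, and the free-space check mentioned in the slide is only indicated by a comment.

    #include <cstdio>    // std::remove
    #include <cstdlib>   // std::system
    #include <fstream>
    #include <iostream>
    #include <string>

    int main() {
        // HPSS file name as delivered by SAM (hypothetical example path).
        std::string hpss_file =
            "cchpssd0.in2p3.fr:/hpss/in2p3.fr/group/d0/mcprod/rec_filename";
        std::string local_copy = "/scratch/rec_filename";

        // Mode 1: RFIO copy from HPSS to the worker's local scratch disk
        // (a real implementation would first check the free space there).
        std::string cmd = "rfcp " + hpss_file + " " + local_copy;
        if (std::system(cmd.c_str()) != 0) {
            std::cerr << "rfcp from HPSS failed\n";
            return 1;
        }

        // Open the local copy; the D0 code then reads it like any local file.
        std::ifstream in(local_copy.c_str(), std::ios::binary);
        // ... read events ...
        in.close();

        // Close and remove the file from disk once the job is done with it.
        std::remove(local_copy.c_str());
        return 0;
    }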

DATA GRID
- A testbed is involved in the framework of WP6 (DataGRID)
- Participation in the WPx groups (x = 8, 7, 9, 10)
- Integration of the BQS batch system into Globus
[Diagram: jobs from a user's home laboratory enter the IN2P3 Computing Center through a Globus Gatekeeper; a Computing Element drives the batch workers via the BQS batch scheduler, and a Storage Element gives access to HPSS and Xtage.]

Conclusion
- D0 code is not changed; only the ccin2p3 iostream (rfstream) classes are modified.
- SAM has to see cchpssd0 (the HPSS server) like a local disk.
- Files should be accessed locally after an RFIO copy (mode 1), or opened and read through the API (mode 2).
- Need help from a SAM expert.
- Hope to do some tests in September.
- SAM with Data Grid (D0GRID) should also be tested at CCIN2P3.