CCIN2P3 Site Report - BNL, Oct 18, 2004
Wojciech A. Wojcik, IN2P3 Computing Center



Services
 CPU
 Networking
 Data storage and access
 Databases
 WEB
 Electronic Document Management (EDMS) and CAD
 LDAP (OpenLDAP)
 MCU
 Win2000 domain service

Supported platforms
 Linux RedHat 7.2  SL3
 Solaris 2.8  Solaris 2.9
 AIX 5.1

Disk space
 Need to make the disk storage independent of the operating system.
 Disk servers based on:
   A3500 from Sun with 3.5 TB
   ESS-F20 from IBM with 21.4 TB
   ESS from IBM with 5.9 TB
   9960 from Hitachi with 18 TB
   FAStT900 from IBM with 32 TB (not yet in production)

Mass storage
 Supported media (all in the STK robots):
   DLT4000/ (Eagles)
   9940 (200 GB)
 HPSS – local developments:
   Interface with RFIO:
     API: C, Fortran (via cfio)
     API: C++ (iostream) (for g++ and KCC)
   bbftp – secure parallel ftp using the RFIO interface
   Interface with SRB

Mass storage
 HPSS: $HPSS_SERVER:/hpss/in2p3.fr/…
 HPSS usage: 645 TB (123 TB in May 2002, 60 TB in Oct 2001):
   BaBar – 245 TB
   AUGER – 40 TB
   EROS II – 32 TB
   D0 – 110 TB
   Virgo – 13 TB
   Other experiments: ATLAS, SNovae, DELPHI, ALICE, PHENIX, CMS
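As a sanity check on the figures above, a short script (illustrative only; every number is taken from this slide) totals the per-experiment usage and the growth since October 2001:

```python
# Per-experiment HPSS usage in TB, as listed on the slide.
usage_tb = {"BaBar": 245, "AUGER": 40, "EROS II": 32, "D0": 110, "Virgo": 13}

total_tb = 645      # total HPSS usage, Oct 2004
oct_2001_tb = 60    # usage in Oct 2001

listed = sum(usage_tb.values())
other = total_tb - listed         # ATLAS, SNovae, DELPHI, ALICE, PHENIX, CMS, ...
growth = total_tb / oct_2001_tb   # growth factor over three years

print(listed, other, round(growth, 2))  # 440 205 10.75
```

So the five named experiments account for 440 TB, the remaining experiments for about 205 TB, and total usage has grown roughly elevenfold in three years.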

Networking - LAN
 Fast Ethernet (100 Mb/s full duplex) --> interactive and batch services
 Gigabit Ethernet (1 Gb/s full duplex) --> disk servers and Objectivity/DB servers

Networking - WAN
 Academic public network "Renater 3":
   Backbone: 2.5 Gb/s
   Access to USA: 2 × 2.5 Gb/s
   CCIN2P3 access: 1 Gb/s
 Throughput tests give:
   400 Mb/s to SLAC
   800 Mb/s to CERN
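To put the measured rates in perspective, a small sketch (assuming decimal units and a fully sustained link, which is optimistic in practice) estimates how long moving 1 TB would take at those rates:

```python
def transfer_hours(size_tb, rate_mbps):
    """Hours to move size_tb terabytes at rate_mbps megabits per second.

    Assumes decimal units (1 TB = 10**12 bytes, 1 Mb = 10**6 bits) and a
    fully sustained link with no protocol overhead.
    """
    bits = size_tb * 10**12 * 8
    seconds = bits / (rate_mbps * 10**6)
    return seconds / 3600

print(round(transfer_hours(1, 400), 2))  # to SLAC: ~5.56 h
print(round(transfer_hours(1, 800), 2))  # to CERN: ~2.78 h
```

At the tested rates, a terabyte-scale dataset could in principle reach SLAC in under six hours and CERN in under three.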

BAHIA - interactive front-end
Based on multi-processor machines:
 Linux (RH72, RH73, SL3) -> 16 dual Pentium III 1 GHz
 Solaris 2.8 -> 2 Ultra-4/E450
 AIX 5.1 -> 2 F40

Batch system - configuration
Batch based on BQS (developed at CCIN2P3):
 Linux (RH72) -> 652 cpu (PIII)
 Linux (RH73/LCG) -> 100 cpu (PIII)
 Linux (SL3) -> 68 cpu (PIV)
 Solaris 2.8 -> 38 cpu (Ultra60)
 AIX 5.1 -> 18 cpu (43P-B50)
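The CPU counts above show how dominant Linux already was in the farm; a quick tally (numbers taken from this slide only):

```python
# Batch worker CPUs per platform, as listed on the slide.
cpus = {"Linux RH72": 652, "Linux RH73/LCG": 100, "Linux SL3": 68,
        "Solaris 2.8": 38, "AIX": 18}

total = sum(cpus.values())
linux = sum(n for name, n in cpus.items() if name.startswith("Linux"))

print(total, linux, round(100 * linux / total, 1))  # 876 820 93.6
```

Roughly 94% of the 876 batch CPUs run Linux, with Solaris and AIX reduced to small legacy pools.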

Support for big experiments
 BaBar:
   Objectivity/DB servers (v7.1 on Solaris 2.8 and 2.9): 2 on 440, 8 on Netra-T, 2 on 450, 5 on 480 (common with xrootd)
   HPSS with interfaces to Objectivity (ams/oofs), RFIO and xrootd – 245 TB (57 TB for root files)
   Disk cache for Objectivity and xrootd – 45 TB (20 TB will be added soon)
   SRB for import/export
   xrootd is replacing Objectivity/DB

Support for big experiments
 D0:
   SAM server (on Linux)
   bbftp for import/export with FNAL
   Usage of HPSS as SAM caching space

Present actions
 Computing and data storage services for about 45 experiments (HEP, Nuclear Physics, Astro, Bio)
 Support Center for EGEE (10 FTE):
   ROC – Regional Operation Center
   CIC – Core Infrastructure Center
 Integration of the BQS batch system into LCG

Present actions
 LCG for the LHC experiments
 SRB for BaBar and SNovae (Astro and Bio soon)
 xrootd for BaBar and D0

Present actions
 Regional Center services for:
   EROS II
   BaBar (Tier A)
   D0
   AUGER
   LHC