CC-IN2P3 Site Report – HEPiX Fall Meeting 2009 – Berkeley

Overview
o General information about CC-IN2P3
o Farms
o Storage
o Building

CC-IN2P3 - Lyon
o French national computing centre of IN2P3 / CNRS, in association with IRFU (CEA)
o Users:
  o T1 for the LHC experiments (65%), plus D0 and BaBar
  o ~ 60 experiments or groups: HEP, astroparticle physics, biology, humanities
  o ~ 3000 non-grid users
  o 1 FTE of dedicated support for ATLAS and CMS, 1 FTE for LHCb + ALICE, one for astroparticle experiments
o Staff: computing teams (Operation, Infrastructure, Development): 59; others (administration, facility management): 18; total 77, ~ 45% non-permanent
o Neither experiments nor users on site

Grid projects
o CC-IN2P3 is strongly involved in Grid projects:
  o EGEE operations
  o CIC portal development
  o EGEE ROC management
  o EGI (European Grid Initiative) and NGI
o Regional grid over Rhône-Alpes

Farms - 1/2
o Home-made batch system BQS, in the process of migrating to another batch system
o Main cluster: anastasie
  o ~ 8600 cores - 61 kHS06 (903 machines); x2 in 2010
  o Migration to SL5 in progress -> Q2 2010
    o SL4: 4704 cores (33 kHS06) - 55%
    o SL5: 3904 cores (28 kHS06) - 45% -> 80% by mid-November
    (a quick consistency check on these figures is sketched below)
  o ~ 9 k running jobs, ~ 70 k jobs/day
  o Farm shared between directly submitted jobs and Grid-submitted jobs
o Decommissioning Dell PowerEdge 1950 machines
o New machines: Dell PowerEdge M610 blades
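The SL4/SL5 split quoted above should reproduce the headline anastasie numbers. A minimal Python sketch of that arithmetic, using only the figures on this slide:

# Consistency check of the anastasie figures quoted on this slide.
# All numbers come from the slide itself; nothing is measured here.
sl4_cores, sl4_khs06 = 4704, 33
sl5_cores, sl5_khs06 = 3904, 28

total_cores = sl4_cores + sl5_cores          # 8608, i.e. "~ 8600 cores"
total_khs06 = sl4_khs06 + sl5_khs06          # 61 kHS06, as quoted

print(f"total cores : {total_cores}")
print(f"total kHS06 : {total_khs06}")
print(f"SL5 share   : {sl5_cores / total_cores:.0%}")          # ~ 45%, as quoted
print(f"HS06 / core : {1000 * total_khs06 / total_cores:.1f}")
# The "x2 in 2010" on the slide would roughly double both totals.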

Farms - 2/2
o Parallel cluster pistoo (MPI, PVM): 544 cores, 3 kHS06
  o Migration to SL5 under way: ~ 1024 cores, 96 GB per machine
  o No InfiniBand, but 10 Gb Ethernet
o LAF: Lyon Analysis Facility
  o 1 PROOF master node with 20 PROOF worker nodes (160 cores), 1 Xrootd server
  o Still in test status
  o Mainly used/tested by ALICE, being explored by CMS
o Services monitored by Nagios (see the Thursday 29 talk [33], "Monitoring CC-IN2P3 services with Nagios"); a toy Nagios-style probe is sketched below
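Purely as an illustration, and not CC-IN2P3's actual probe, here is a minimal Nagios-style plugin in Python following the standard plugin exit-code convention (0 = OK, 1 = WARNING, 2 = CRITICAL); the host name and port below are hypothetical:

#!/usr/bin/env python
"""Minimal Nagios-style check: is a TCP service reachable?
Illustrative sketch only; host/port are hypothetical, not CC-IN2P3's."""
import socket
import sys

OK, WARNING, CRITICAL, UNKNOWN = 0, 1, 2, 3   # standard Nagios exit codes

def check_tcp(host, port, timeout=5.0):
    try:
        with socket.create_connection((host, port), timeout=timeout):
            print(f"OK - {host}:{port} reachable")
            return OK
    except socket.timeout:
        print(f"WARNING - {host}:{port} timed out after {timeout}s")
        return WARNING
    except OSError as exc:
        print(f"CRITICAL - {host}:{port} unreachable ({exc})")
        return CRITICAL

if __name__ == "__main__":
    # e.g. a hypothetical Xrootd redirector
    sys.exit(check_tcp("xrootd.example.in2p3.fr", 1094))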

Storage 1/3
o Automated tape libraries
  o No more STK PowderHorn 9310
  o 3 Sun SL8500 libraries with 10,000 slots each (one more in 2010 => optimal capacity = 40 PB, see the estimate sketched below)
  o 36 T10K-A and 32 T10K-B drives
  o 10 LTO-4 drives (Tivoli backup)
  o 13 9840 drives, models A and B (HPSS small files < 100 GB)
  o T10K Sport media are replacing the 9840B media
  o Monitored by StorSentry (see the Wednesday 28 talk [23], "Monitoring tape drives and media")
o HPSS: migrated to disk movers (480 TB DDN disk array), 38 tape movers
  o Now using NIS for user authentication/authorization, replacing DCE (see the Tuesday 27 talk [24], "IN2P3 HPSS Migration (v5.1 to 6.2) report")
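The 40 PB figure is consistent with four fully populated SL8500 libraries holding 1 TB (native) T10K-B cartridges, an assumption not spelled out on the slide. A back-of-the-envelope check in Python:

# Back-of-the-envelope check of the "40 PB optimal capacity" figure.
# Assumption (not stated on the slide): every slot filled with a 1 TB
# native-capacity T10K-B cartridge.
libraries     = 3 + 1      # three SL8500 today, one more in 2010
slots_per_lib = 10000
tb_per_cart   = 1.0        # T10K-B native capacity

capacity_pb = libraries * slots_per_lib * tb_per_cart / 1000
print(f"optimal capacity ~ {capacity_pb:.0f} PB")   # -> 40 PB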

Storage 2/3
o dCache: upgraded from 1.9.1 in September, then to the "golden release" in November
  o PNFS replaced by Chimera (September, a 72-hour operation)
  o ~ 150 servers, ~ 2 PB; 15 TB are "custodial"
  o Interfaced with HPSS through TReqS (see the Tuesday 27 talk [45], "Optimizing tape data access")
o AFS: ~ 30 servers, ~ 40 TB
o SRB: disk cache 243 TB + 107 TB; 2 PB in HPSS
o iRODS: 5 servers, 100 TB (a minimal usage sketch follows below)
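For readers unfamiliar with iRODS, a minimal sketch of how data would typically be pushed to and pulled from such a service with the standard icommands (iput / ils / iget); the zone, collection and file names are hypothetical, and an already initialised iRODS environment (iinit) is assumed:

"""Tiny wrapper around the iRODS icommands (iput / ils / iget).
Sketch only: zone, collection and file names are hypothetical."""
import subprocess

def irods(cmd, *args):
    """Run one icommand and return its standard output."""
    result = subprocess.run([cmd, *args], capture_output=True,
                            text=True, check=True)
    return result.stdout

if __name__ == "__main__":
    irods("iput", "run1234.root", "/exampleZone/home/user/data/")   # upload
    print(irods("ils", "/exampleZone/home/user/data/"))             # list
    irods("iget", "/exampleZone/home/user/data/run1234.root", "copy.root")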

Storage 3/3
o SPS/GPFS: … TB allocated, … TB used; 1200 client nodes, 60 filesystems, millions of files
o Xrootd: ~ 20 servers, 400 TB + 100 TB, interfaced with HPSS and dCache
  o ~ 10 groups: BaBar (analysis), analysis facilities (PROOF)
  o (a minimal access example is sketched below)
o Oracle 10g: 6 clusters (55 TB -> 100 TB)
o Disks: decommissioning Thumpers in favour of Thors
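Access to Xrootd-managed data is typically done with the xrdcp client. A minimal, hypothetical example (the redirector host and file path are made up, not CC-IN2P3's actual namespace):

"""Copy a file out of an Xrootd server with xrdcp.
Illustrative only: the redirector host and file path are hypothetical."""
import subprocess

def xrd_copy(source_url, destination):
    # xrdcp root://<server>//<path> <local destination>
    subprocess.run(["xrdcp", source_url, destination], check=True)

if __name__ == "__main__":
    xrd_copy("root://xrootd.example.in2p3.fr//store/user/example.root",
             "/tmp/example.root")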

Building
o Latest enhancements (2009):
  o Electrical power increased from 1.5 MW to 3 MW
  o A third 600 kW chiller
  o An 880 kW diesel generator installed
  o Trend towards water-chilled racks (iDataPlex and Rittal racks)
  o Full monitoring of the infrastructure, with automatic actions, in progress
o Next additional computing room: a very slow process, for administrative reasons
  o Budget for an 800 m² room (computing room only)
  o Trying to get a green building (energy recycling)
  o Construction work will start in April 2010 and last about 10 months

Thank you! Any questions?