CC-IN2P3 Site Report
HEPiX Spring Meeting 2011, Darmstadt, May 3rd
Philippe.Olivero@cc.in2p3.fr

Overview
- General information
- Batch systems
- Computing farms
- Storage
- Infrastructure (link)
- Quality

General information about CC-IN2P3 (Lyon)
- French national computing centre of IN2P3 / CNRS, in association with IRFU (CEA, Commissariat à l'Energie Atomique)
- Users: Tier-1 for the LHC experiments (65%), plus D0, BaBar, ...
- ~60 experiments or groups in HEP, astroparticle physics, biology and humanities; ~3000 non-grid users
- Dedicated support for each LHC experiment and for astroparticle physics

New batch system: Grid Engine (GE)
- The home-made batch system BQS is being migrated to Grid Engine, after a careful evaluation of several batch systems (2009):
  - "Selecting a new batch system at CC-IN2P3" (B. Chambon) https://indico.cern.ch/contributionDisplay.py?contribId=1&confId=118192
  - "Grid Engine setup at CC-IN2P3" (B. Chambon) https://indico.cern.ch/contributionDisplay.py?contribId=2&confId=118192
- Migration process started in June 2010; the pre-production cluster has been open to users since mid-April
- Official production cluster planned for end of May
- End of BQS planned before end of 2011

New batch system: Grid Engine
- New Grid Engine cluster for the migration from BQS to Grid Engine: Open Source version 6.2u5
- Mainly sequential jobs: 2000 cores (~17% of the total sequential resources at CC)
- 128 cores for parallel jobs and 32 cores for interactive jobs (a submission sketch follows below)
- New racks added progressively, to reach at least 60% of capacity in GE by the end of June
- CreamCE successfully installed, not open yet
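As an illustration of how jobs would be routed to the sequential, parallel and interactive resources listed above, here is a minimal, hedged sketch of a submission wrapper around Grid Engine's qsub. The queue and parallel-environment names (seq_q, par_q, openmpi) are hypothetical placeholders, not the actual CC-IN2P3 configuration.

```python
import subprocess

def submit_ge_job(script, queue="seq_q", pe=None, slots=1):
    """Submit a job script to Grid Engine via qsub.

    The queue and PE names are hypothetical placeholders; the real
    CC-IN2P3 queue/PE names are not given in this report.
    """
    cmd = ["qsub", "-q", queue]
    if pe is not None:
        # Parallel jobs request a parallel environment and a slot count.
        cmd += ["-pe", pe, str(slots)]
    cmd.append(script)
    result = subprocess.run(cmd, capture_output=True, text=True, check=True)
    return result.stdout.strip()  # qsub prints the assigned job id

if __name__ == "__main__":
    # Sequential job on the ~2000-core sequential pool (hypothetical queue name).
    print(submit_ge_job("run_analysis.sh"))
    # 8-slot parallel job on the 128-core parallel pool (hypothetical PE name).
    print(submit_ge_job("run_mpi.sh", queue="par_q", pe="openmpi", slots=8))
```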

Computing farms (BQS + GE)
- ~120 K-HS06 total CPU resources
- ~8 K-HS06 for parallel jobs
- ~12 K simultaneous jobs
- ~110 K jobs/day
- ~30 K-HS06 to be added next month (36 Dell PowerEdge C6100, 72 GB)

Current farms: BQS vs GE

Cores          Sequential   Parallel   Interactive   Total
BQS                10 792        896             0    11 688
GE                  1 976        128            32     2 136
Total cores        12 768      1 024            32    13 824

K-HS06         Sequential (%)    Parallel   Interactive   Total (%)
BQS             91.06 (82.90)       7.26          0.00     98.31 (83.04)
GE              18.78 (17.10)       1.04          0.26     20.08 (16.96)
Total K-HS06   109.84 (100.00)      8.29          0.26    118.39 (100.00)
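The percentage columns in this table are simply each batch system's share of the combined capacity. A minimal sketch reproducing them from the K-HS06 figures above (values taken from the table, not measured independently):

```python
# Reproduce the percentage columns of the "Current farms BQS vs GE" table.
bqs = {"seq_khs06": 91.06, "total_khs06": 98.31}
ge  = {"seq_khs06": 18.78, "total_khs06": 20.08}

seq_total   = bqs["seq_khs06"]   + ge["seq_khs06"]    # 109.84 K-HS06 sequential
grand_total = bqs["total_khs06"] + ge["total_khs06"]  # 118.39 K-HS06 overall

print(f"GE share of sequential K-HS06: {100 * ge['seq_khs06'] / seq_total:.2f} %")     # 17.10 %
print(f"GE share of total K-HS06:      {100 * ge['total_khs06'] / grand_total:.2f} %")  # 16.96 %
print(f"BQS share of total K-HS06:     {100 * bqs['total_khs06'] / grand_total:.2f} %") # 83.04 %
```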

Storage: what's new, what's planned?
- Robotics: T10K-C tests planned for September; ACSLS upgrade from 7.3 to 8.0 in September
- HPSS: migration to 7.3.2 done
- dCache: EGEE instance from 1.9.0-9 to 1.9.5-25 in May; 30 additional servers before summer
- TSM: migration planned from 5.5 to 6.1, with 4 AIX 6 servers
- GPFS: migration to GPFS 3.4 planned for the second half of 2011 (currently v3.2.1-25/26)


Storage
[Diagram: storage services as seen by users (dCache, Xrootd, iRODS, SRB, SPS/GPFS, AFS, TSM), the RFIO transport layer, and HPSS; HPSS uses disk + tape, the other services disk.]
Credit: Pierre-Emmanuel Brinette

Storage summary
- dCache: 2 instances, LCG (1.9.5-24) and EGEE (1.9.0-9); 300+ pools on 160 servers (x4540 Thor / Solaris 10 / ZFS and Dell R510 / SL5); 5.5 PiB; 16M+ files
- SRB: V3.5; 250 TiB disk + 3 PiB on HPSS; 8 Thors 32 TB Solaris 10/ZFS + MCAT on Oracle
- iRODS: V2.5; 11 Thors 32/64 TB + 1 Dell R510 54 TB
- Xrootd: March 2010 release; 30 Thors 32/64 TB, 1.2 PiB; Thors 32/64 TB + Dell R510 planned for ALICE
- HPSS: V7.3.2 on AIX p550; 9.5 PB / 26M files, 2000+ different users; 60 T10K-B drives / 34 T10K-A drives on 39 AIX (p505/p510/p520) tape servers; 12 IBM x3650 attached to 4 DDN disk arrays (4 x 160 TiB) / 2 FC 4 Gb / 10 Gb Ethernet
- openAFS: V1.4.14.1 (servers) and V1.5.78 (worker nodes); 71 TB; 50 servers, 1600 clients
- GPFS: V3.2.1r22; ~900 TiB, 200M files, 65 filesystems; +500 TiB on DCS9550 with 12 x3650 (internal redistribution); -50 TiB on DS4800 (decommissioned)
- TSM: 2 servers (AIX 5, TSM 5.5), each with 4 TiB on DS8300; 1 billion files, ~840 TiB; 16 LTO-4 drives, 1700 LTO-4 tapes; 3 TiB/day
Credit: Pierre-Emmanuel Brinette and the storage experts
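For the iRODS instance listed above, data is typically moved with the standard icommands client; the following sketch simply wraps two of them (iput, ils) from Python. The zone and collection paths are hypothetical examples, and the snippet assumes an already-initialised iRODS environment (iinit), which this report does not describe.

```python
import subprocess

def irods_put(local_file, collection):
    """Upload a local file into an iRODS collection with the standard iput icommand.

    Assumes the iRODS client environment was initialised beforehand (iinit);
    the collection path used below is a hypothetical example.
    """
    subprocess.run(["iput", local_file, collection], check=True)

def irods_list(collection):
    """List the contents of an iRODS collection with ils."""
    out = subprocess.run(["ils", collection], capture_output=True, text=True, check=True)
    return out.stdout

if __name__ == "__main__":
    # Hypothetical zone and collection path, for illustration only.
    irods_put("results.root", "/exampleZone/home/user/data")
    print(irods_list("/exampleZone/home/user/data"))
```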

Infrastructure
- New machine room in service
- See "Overview of the new computing" (P. Trouvé), Wednesday May 4th, 2:15 PM

Quality
- One dedicated person working as quality manager
- Several works in progress:
  - CMDB: study under way to acquire one
  - Ticketing system: study under way to replace Xoops/xHelp; ticket management to be improved; incident and change management to be included
  - Event management: Nagios probe enhancements to feed a new service status page (a minimal probe sketch follows below)
  - Control room with 2 people on duty (operation and service desk)
- Goal: implement ITIL best practices
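As a hedged illustration of the Nagios probes mentioned above, here is a minimal Nagios-style check written in Python. The endpoint URL and thresholds are hypothetical; only the standard Nagios exit-code convention (0 OK, 1 WARNING, 2 CRITICAL) is assumed, not the actual CC-IN2P3 probes.

```python
#!/usr/bin/env python3
"""Minimal Nagios-style probe: checks that a service endpoint answers quickly.

The URL and thresholds are hypothetical placeholders; only the standard
Nagios exit-code convention (0 OK, 1 WARNING, 2 CRITICAL) is assumed.
"""
import sys
import time
import urllib.request

URL = "http://status.example.in2p3.fr/ping"  # hypothetical endpoint
WARN_S, CRIT_S = 2.0, 5.0                    # response-time thresholds in seconds

try:
    start = time.time()
    with urllib.request.urlopen(URL, timeout=CRIT_S) as resp:
        ok = resp.status == 200
    elapsed = time.time() - start
except Exception as exc:
    print(f"CRITICAL - {exc}")
    sys.exit(2)

if not ok:
    print("CRITICAL - unexpected HTTP status")
    sys.exit(2)
elif elapsed > WARN_S:
    print(f"WARNING - slow response: {elapsed:.2f}s|time={elapsed:.2f}s")
    sys.exit(1)
else:
    print(f"OK - responded in {elapsed:.2f}s|time={elapsed:.2f}s")
    sys.exit(0)
```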

Thank you! Questions?
Philippe.Olivero@cc.in2p3.fr