Dominique Boutigny, December 12, 2006 – CC-IN2P3, a Tier-1 for W-LCG – 1st Chinese-French Workshop on LHC Physics and associated Grid Computing, IHEP, Beijing.

CC-IN2P3 presentation (1)
CC-IN2P3 provides computing resources for the whole French community working on:
- Particle physics
- Nuclear physics
- Astro-particle physics
CC-IN2P3 is located in Villeurbanne, near Lyon.

CC-IN2P3 presentation (2)
- ~70 groups use CC-IN2P3 resources; ~10 are very active
- This represents more than 2500 users
- 3700 jobs running in parallel, ~ jobs in waiting state, jobs completed each day
(Plot: batch system load – waiting and running jobs)
CC-IN2P3 operates 24 hours a day, 7 days a week.

CPU resources (1)
- 2.5 M SpecInt2000 installed
- CPU consumed / available: ~ %
- Sept 2004 – Sept 2006: CPU capacity × ~6

CPU resources (2)
In 2006 we bought a new rack-mountable IBM computer system:
- Dual-CPU / dual-core Opteron machines
- 265 computers → 1060 cores
- Total power: 1.7 M SpecInt2000 according to IBM, 1.2 M SpecInt2000 measured by us
- Cost is ~0.59 € / SI2K
This system is in the list of the 500 fastest computers in the world (exact rank: 483).
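As a rough cross-check of the figures above, a minimal Python sketch; the total cost printed at the end is simply the measured capacity times the quoted per-SI2K price, an assumption derived from the slide's numbers rather than a figure stated on it:

```python
# Rough cross-check of the 2006 IBM purchase figures quoted on the slide.
# Assumption: total cost = measured capacity x price per SI2K; the slide
# only quotes the per-SI2K price, not the total invoice.

computers = 265
cores_per_machine = 4            # dual-CPU / dual-core Opteron
measured_si2k = 1.2e6            # SpecInt2000 measured at CC-IN2P3
price_per_si2k_eur = 0.59        # quoted on the slide

total_cores = computers * cores_per_machine
si2k_per_core = measured_si2k / total_cores
estimated_cost_eur = measured_si2k * price_per_si2k_eur

print(f"cores: {total_cores}")                                 # 1060
print(f"measured SI2K per core: {si2k_per_core:.0f}")
print(f"estimated total cost: {estimated_cost_eur / 1e6:.2f} M EUR")
```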

CPU sharing between experiments
(Pie chart: CPU share per experiment; nuclear physics 11%)

Storage resources (1)
(Plot: mass storage and disk volumes in TB, showing the start of the LHC ramp-up)

Storage resources (2)
In 2006 we bought a 400 TB disk storage system from SUN:
- One high-end server per 24 TB
- Cost is ~1.5 € / GB
A new tape silo is currently being purchased:
- Very likely SUN/STK or IBM
- slots
- 500 GB tapes, evolving to 1 TB tapes next year → 10 PB capacity
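A back-of-the-envelope sketch of the storage figures; the silo slot count is not given on the slide, so the value used below is a hypothetical parameter chosen to reproduce the quoted 10 PB capacity:

```python
# Back-of-the-envelope figures for the 2006 storage purchases.
# Assumptions: disk cost = capacity x price per GB (only the per-GB price is
# quoted); the silo slot count is hypothetical, not taken from the slide.

disk_capacity_tb = 400
disk_price_per_gb_eur = 1.5
disk_cost_eur = disk_capacity_tb * 1000 * disk_price_per_gb_eur
print(f"disk system cost: ~{disk_cost_eur / 1e3:.0f} k EUR")       # ~600 k EUR

silo_slots = 10_000              # hypothetical slot count
tape_capacity_gb = 1000          # 1 TB tapes expected the following year
silo_capacity_pb = silo_slots * tape_capacity_gb / 1e6
print(f"silo capacity with 1 TB tapes: ~{silo_capacity_pb:.0f} PB")  # ~10 PB
```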

The CC-IN2P3 Tier-1 and Analysis Facility
CC-IN2P3 has started to build a Tier-1 and an Analysis Facility for the 4 LHC experiments:
- CC-IN2P3 will provide 12 to 15% of the total Tier-1 computing
We also need to continue providing resources for other experiments:
- Astro-particle physics needs more and more resources
The plan is to keep 20% of the resources available for non-LHC experiments in 2008.

CPU sharing between LHC experiments
(Pie chart: 2006 CPU share between ATLAS, CMS, ALICE and LHCb)
In principle the sharing should have been:
- 45% ATLAS
- 25% CMS
- 15% ALICE
- 15% LHCb
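A minimal sketch of how the nominal shares translate into absolute allocations; the total CPU budget reserved for the LHC experiments below is a hypothetical figure, the slide only gives the percentages (which sum to 100%):

```python
# Turn the nominal LHC sharing (45/25/15/15) into absolute CPU allocations.
# Assumption: a hypothetical 1.0 M SI2k budget reserved for LHC experiments.

nominal_shares = {"ATLAS": 0.45, "CMS": 0.25, "ALICE": 0.15, "LHCb": 0.15}
lhc_cpu_budget_si2k = 1.0e6      # hypothetical, not from the slide

for expt, share in nominal_shares.items():
    print(f"{expt:5s}: {share * lhc_cpu_budget_si2k / 1e3:,.0f} kSI2k")
```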

LHC computing ramp-up
(Plot: LHC computing ramp-up, ~500 kSI2k·month)

Planned resources (1)
(Charts: planned CPU, disk and tape resources for the Tier-1 component and the Analysis Facility – CPU: 22 M SI2k, disk: 8.5 PB, tapes: 10 PB)
Numbers have been revised according to the new LHC schedule.

Planned resources (2)
The purchase process is a long operation (6-8 months), so we need to anticipate buying in order to be ready in time. We plan to buy 40% of year N's resources during year N-1, with a two-step delivery.
In 2007 we will buy:
- 5.7 M SI2k
- 2.1 PB of disk
- 4 PB of tapes
- ~100% of % of non-LHC
This is ~5000 of today's cores! The 2007 CC-IN2P3 budget will be ~9.2 M€ (excluding salaries).
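A short sketch cross-checking the "~5000 of today's cores" statement against the per-core performance measured on the 2006 purchase, and illustrating the "buy 40% of year N during year N-1" rule; the year-N requirement used in the second part is hypothetical, the slides only state the rule:

```python
# Cross-check: 2007 CPU purchase expressed in 2006-generation cores,
# using 1.2 M SI2K measured over 1060 cores on the 2006 IBM system.
si2k_2007_purchase = 5.7e6
si2k_per_core_2006 = 1.2e6 / 1060
equivalent_cores = si2k_2007_purchase / si2k_per_core_2006
print(f"equivalent 2006-generation cores: ~{equivalent_cores:.0f}")   # ~5035

# Illustration of the "buy 40% of year N during year N-1" rule.
# The year-N requirement below is hypothetical.
year_n_requirement_si2k = 10e6
bought_in_year_n_minus_1 = 0.40 * year_n_requirement_si2k
bought_in_year_n = year_n_requirement_si2k - bought_in_year_n_minus_1
print(f"bought in year N-1: {bought_in_year_n_minus_1 / 1e6:.1f} M SI2k")
print(f"bought in year N:   {bought_in_year_n / 1e6:.1f} M SI2k")
```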

Network
- The 10 Gbps optical connection (LHC-OPN) to CERN has been up and running since the beginning of the year.
- Another 10 Gbps link will be set up with Karlsruhe (FZK), the backup T1 site.
- A 2 x 1 Gbps dedicated connection between CC-IN2P3 and FNAL is up and running:
  - Done within the framework of a research project on grid interoperability and massive data transfer
  - 10 months to set up, with impressive coordination between many actors

Required network bandwidth for LHC
(Table: required bandwidth in and out of CC-IN2P3, in Gb/s, for T0–T1, T1–T1, T1–T2 (France) and T1–T2 (international) links)

T1-T2 connections
T1-T2 connections are under active discussion at CERN (C. Eck):
- The situation is still… confused!
Connections to:
- All French T2/T3 sites
- Belgium (CMS)
- Romania (ATLAS)
- Korea (ALICE), South Africa (ALICE), Spain (ALICE)
- China and Japan are mentioned as being connected to both CC-IN2P3 and ASGC (Taipei)

Manpower and Operation
Running a Tier-1 requires manpower and a strong organization:
- In 2007, large chunks of new computing equipment will arrive every 2-3 months.
Manpower:
- Total CC-IN2P3 manpower is 65 FTE
- 3 computing engineers hired in 2006
- We will continue to hire 3 to 4 engineers per year up to 2008
Operation:
- Grid operation is mainly done by people hired under EGEE contracts: 12 EGEE people at CC-IN2P3, ~5 FTE dedicated to grid operation
- Very strong involvement in the LCG worldwide operation framework
User support is also crucial:
- We will put 1 support engineer on each LHC experiment; at the moment we have 2.5 FTE.

Infrastructure (1)
The exponential increase of computing resources has a significant impact on the computing centre infrastructure.
(Plot: CC-IN2P3 average electrical power in kW)
Important work is going on to upgrade the computer room:
- Electrical distribution
- Cooling
- Uninterruptible power supply
→ Up to 1.6 MW of computing equipment + cooling?
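A minimal sketch of the kind of sizing behind the "1.6 MW + cooling" figure; the overhead factor below is an assumption (a typical cooling/distribution overhead), not a number from the slides:

```python
# Rough facility power sizing: IT load plus cooling/distribution overhead.
# Assumption: ~50% overhead (factor 1.5); the slide only quotes the 1.6 MW
# of computing equipment, not the overhead.

it_load_mw = 1.6                 # computing equipment, from the slide
overhead_factor = 1.5            # hypothetical cooling/distribution overhead
total_facility_mw = it_load_mw * overhead_factor
print(f"total facility power: ~{total_facility_mw:.1f} MW")   # ~2.4 MW
```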

Infrastructure (2)
The computer room upgrade is not enough to house all the computing hardware. A project to build a new building with a new computer room has already started:
- 800 m² new computer room
- Up to 2.5 MW of computing equipment (on top of the existing 1 MW)

Conclusions
- CC-IN2P3 is building up its Tier-1 + Analysis Facility.
- A substantial budget has been allocated for 2007; this is clearly a strong priority for IN2P3 and CEA/DAPNIA.
- The impact on infrastructure is huge.
- An efficient collaboration between China and CC-IN2P3 on computing matters requires setting up a good network connection now.