Computing activities in France
Dominique Boutigny, CC-IN2P3
Centre de Calcul de l’IN2P3 et du DAPNIA
Restricted ECFA Meeting in Paris, May 12, 2006

HEP Computing Organization in France up to the BaBar / D0 Era
- The computing is organized around the IN2P3 / DAPNIA Computing Center
  - Hosting data
  - Securing data
  - Providing most of the computing power
- Physicists are located in their laboratories
  - Connected to CC-IN2P3
  - Small local computing infrastructure
    - Oriented toward DAQ developments
    - Desktop computing
    - Software developments
    - Physics analysis

Computing Evolution in the LHC Era
- CC-IN2P3 still plays a central role as a Tier-1
  - Hosting most of the data
  - Securing data
  - Providing a large fraction of the computing power
- But significant resources are building up within the Tier-2s and Tier-3s
- And opening to multidisciplinary applications
→ Grid architecture

A Strong Network Infrastructure
- Originally based on a private network: PhyNet
- Now relying on the National Research and Education Network: RENATER
  - Backbone of the Grid architecture
  - Opening to the European and international networks
- Notice the 10 Gbit/s link between CERN and CC-IN2P3
  → LHC Optical Private Network
  → First optical circuit of the RENATER 4 infrastructure

CC-IN2P3 Status
- CC-IN2P3 is a CNRS Unit of Service and Research located in Lyon, serving IN2P3 and DAPNIA
- Funding (2006):
  - 5.75 M€ – CNRS, not including salaries
  - 1.60 M€ – CEA / DAPNIA
  - 0.78 M€ – contracts
  - ~2 M€ – CNRS salaries
- Manpower: 60.5 Full Time Equivalents

CPU resources
- ~1.7 MSpecInt2000 available, roughly equivalent to 1700 of today's single-core processors
- Major upgrades: ×10 in 5 years
- 80–85% efficiency
[Plot: CPU consumption at CC-IN2P3; photo: NEC bi-Xeon farm]

CPU Usage
- All the CPU nodes can be accessed through the Grid
- Running 7 days a week, 24 hours a day
- Very high reliability: CC-IN2P3 is 20 years old – our engineers know how to run and operate a large computing center
[Plot: batch jobs, queued vs. running]

Disk resources
- 350 TB of disk storage available to experiments
- Highly reliable disk system (IBM DS 8300), but very expensive (7 €/GB)
- After a careful evaluation, CC-IN2P3 is now purchasing new storage solutions
  → Cheaper (< 2 €/GB)
  → But with performance adequate for storage-intensive applications (LHC)
  → 60 high-end servers dedicated to data access
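To make the price difference concrete, here is a back-of-the-envelope comparison; only the 350 TB capacity and the per-GB prices come from the slide, the rest is illustrative arithmetic.

```python
# Back-of-the-envelope cost comparison at the capacity quoted above.
# Illustrative arithmetic only: the 350 TB figure and the EUR/GB prices come
# from the slide, everything else is an assumption (no vendor pricing implied).
capacity_gb = 350 * 1000   # 350 TB expressed in GB
for label, eur_per_gb in [("high-end disk system (7 EUR/GB)", 7.0),
                          ("new storage solution  (2 EUR/GB)", 2.0)]:
    print(f"{label}: ~{capacity_gb * eur_per_gb / 1e6:.2f} MEUR for 350 TB")
```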

Mass Storage System (MSS)
- 6 StorageTek silos (36,000 cartridges)
- Access to the MSS mainly through HPSS (High Performance Storage System), with dynamic automatic staging
- A lot of experience accumulated during the BaBar era on hierarchical storage: MSS + disk cache
- ~1.2 PB in HPSS now
- A file on tape is copied to disk and available within 2 minutes
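A minimal, purely illustrative Python sketch of this staging pattern (tape file copied to a disk cache on first access, then read from disk); the dictionaries, helper names, paths and delays are hypothetical placeholders, not HPSS client calls.

```python
# Purely illustrative sketch of the dynamic-staging pattern described above.
# The data structures and helpers are hypothetical placeholders, NOT HPSS APIs;
# the 2-second delay stands in for the ~2 minutes quoted on the slide.
import threading
import time

DISK_CACHE = {}                                           # pretend disk cache
TAPE = {"/hpss/babar/run123.data": b"...event data..."}   # pretend tape archive

def request_stage(path):
    """Simulate an asynchronous tape-to-disk copy completing after a delay."""
    threading.Timer(2.0, lambda: DISK_CACHE.__setitem__(path, TAPE[path])).start()

def read_with_staging(path, poll=0.5, timeout=300.0):
    """Read a file, transparently staging it from tape to disk if needed."""
    if path not in DISK_CACHE:
        request_stage(path)
        waited = 0.0
        while path not in DISK_CACHE:
            if waited >= timeout:
                raise TimeoutError(f"stage of {path} did not complete")
            time.sleep(poll)
            waited += poll
    return DISK_CACHE[path]

print(len(read_with_staging("/hpss/babar/run123.data")), "bytes read from disk cache")
```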

CC-IN2P3 Users
- ~70 groups, ~2500 users
- CC-IN2P3 has been open to non-French users since the beginning of BaBar
- ~20% of CPU resources are used by foreign physicists / institutes
[Pie chart of CPU share by field: Nuclear Physics 14%, Astroparticles 24% (AUGER, HESS, NEMO, ANTARES, SNovae, ...)]

A remark on Astroparticles
- Astroparticles are now consuming ¼ of the Lyon Computing Center
- CC-IN2P3 is a Tier-0 for AUGER
- PLANCK has plans for its computing at the PByte storage scale and with large CPU needs
- The analysis pattern may be very demanding on:
  - Input / output performance
  - Memory requirements
- Dumb "pizza boxes" may not be enough for analysis
  → Supercomputer models?
  → Project at SLAC to develop a multi-TByte central-memory computer with "cheap" technology
→ Astroparticle computing may become very challenging in the near future

Specificities of HEP computing
- HEP computing:
  - 1 event → 1 complete and self-contained treatment
  - N events (1 run) → 1 CPU
  - M runs → M CPUs
- Parallelism is obvious in experimental HEP: providing CPU resources is easy (with enough money)
- CPU for meteorology, earth science or lattice QCD is much more complicated
- In IT language: weakly coupled parallelism
- So what are we good at?
  → High-performance storage is an HEP specificity
    - Huge mass of data
    - High-performance I/O
    - Intensive data distribution
  - BaBar was absolutely crucial to gain this experience
  - It is knowledge that we can bring to other sciences
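A minimal sketch of this weakly coupled parallelism, using toy events and a placeholder per-event treatment: events are independent, so one run maps to one worker and M runs map to M workers with no communication between them.

```python
# Minimal sketch (toy data, placeholder per-event treatment) of the weakly
# coupled parallelism described above: every event gets a complete, independent
# treatment, so one run -> one CPU and M runs -> M CPUs.
from multiprocessing import Pool

def process_event(event):
    """Placeholder 'reconstruction': each event is treated entirely on its own."""
    return sum(event) / len(event)

def process_run(run):
    """One run -> one CPU: loop over the run's events independently."""
    run_id, events = run
    return run_id, [process_event(e) for e in events]

if __name__ == "__main__":
    # M toy runs of N toy events each (lists of numbers stand in for real events).
    runs = [(run_id, [[run_id, i, i + 1] for i in range(1000)])
            for run_id in range(8)]
    with Pool() as pool:                      # M runs -> M CPUs
        for run_id, results in pool.map(process_run, runs):
            print(f"run {run_id}: {len(results)} events processed")
```

In this model the workers share nothing but the input data, which is why the slide argues that high-performance storage, rather than CPU, is the distinguishing HEP challenge.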

The W-LCG project
A distributed computing model on a Grid architecture for LHC computing.
[Diagram: Tier-0 at CERN; Tier-1 centres ASCC/Taipei, RAL/UK, CCIN2P3/FR, TRIUMF/CA, GridKa/DE, CNAF/IT, PIC/SP, BNL/US, FNAL/US, SARA/NL; Tier-2 and Tier-3 sites including GRIF, IN2P3-LPC, IN2P3-Subatech, LAPP, CPPM, USC, Krakow, CIEMAT, Rome, Taipei, CSCS, UB, IFCA, IC, MSU, Cambridge, IFIC, and laboratory / university clusters]
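As a compact illustration of the tiered model, a toy Python structure with the sites named on the slide; the role summaries are a simplification, and the French Tier-2 / Tier-3 split follows the later slides of this talk.

```python
# Toy sketch of the tiered W-LCG topology shown on this slide. Site names come
# from the slide; role summaries are simplified, and the French Tier-2/Tier-3
# split follows the later slides of this talk.
WLCG_TIERS = {
    "Tier-0": {"sites": ["CERN"],
               "role": "raw data recording and first-pass processing"},
    "Tier-1": {"sites": ["ASCC/Taipei", "RAL/UK", "CCIN2P3/FR", "TRIUMF/CA",
                         "GridKa/DE", "CNAF/IT", "PIC/SP", "BNL/US",
                         "FNAL/US", "SARA/NL"],
               "role": "data custody, reprocessing, serving Tier-2s"},
    "Tier-2": {"sites": ["GRIF", "IN2P3-LPC", "IN2P3-Subatech"],
               "role": "Monte Carlo production and analysis"},
    "Tier-3": {"sites": ["LAPP (Annecy)", "CPPM (Marseille)"],
               "role": "local physics analysis"},
}

for tier, info in WLCG_TIERS.items():
    print(f"{tier}: {', '.join(info['sites'])} -- {info['role']}")
```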

The French LCG Organization
[Organization chart: LHC application groups; the CC-IN2P3 team providing LCG Tier-1 grid services, operation and support; French and remote Tier-2s; links to the other Tier-1s and to the WLCG bodies (CB, OB, MB, GDB); a Steering Committee; and a Management Board]
Management Board:
  - Scientific Coordinator: F. Malek
  - Technical Coordinator: F. Hernandez
  - CC-IN2P3 head: D. Boutigny
  - Technical Coordinator T2/T3: F. Chollet

Worldwide Tier-1
Institution                Country          Experiments served with priority
                                            ALICE  ATLAS  CMS  LHCb
TRIUMF                     Canada                   ✓
CC-IN2P3                   France            ✓      ✓     ✓     ✓
FZK-GridKA                 Germany           ✓      ✓     ✓     ✓
CNAF                       Italy             ✓      ✓     ✓     ✓
NIKHEF/SARA                Netherlands       ✓      ✓           ✓
Nordic Data Grid Facility  DK/FI/NO/SE       ✓      ✓
PIC                        Spain                    ✓     ✓     ✓
ASGC                       Taiwan                   ✓     ✓
RAL                        United Kingdom    ✓      ✓     ✓     ✓
BNL                        USA                      ✓
FNAL                       USA                            ✓
Total                                        6      10    7     6

The French Tier-1
- Contribution to the computing effort at the level of the French financial investment in the experiments
- ATLAS: 45% – CMS: 25% – ALICE: 15% – LHCb: 15%
[Chart: French Tier-1 share of the offered capacity]

Planned resources for W-LCG
- 2005: ~1500 processors (~1.3 MSpecInt2000), ~350 TB disk, ~1 PB MSS
- 2008: ~10 MSpecInt2000 (×8), ~4 PB disk (×11), 4 PB MSS (×4)
- Moore's law over the same period: ×~4
- One order of magnitude change in 3 years → the impact on the Computing Center is big
- 500 to 1000 disk servers will be necessary
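A quick numeric check of the growth factors quoted above, assuming an 18-month doubling time for Moore's law (the doubling time is an assumption; the 2005 and 2008 figures come from the slide).

```python
# Quick check of the growth factors quoted above (2005 vs. 2008 figures from
# the slide; the 18-month doubling time for Moore's law is an assumption).
cpu_2005, cpu_2008 = 1.3, 10.0     # MSpecInt2000
disk_2005, disk_2008 = 0.35, 4.0   # PB
mss_2005, mss_2008 = 1.0, 4.0      # PB

print(f"CPU  growth: x{cpu_2008 / cpu_2005:.0f}")            # ~x8
print(f"Disk growth: x{disk_2008 / disk_2005:.0f}")          # ~x11
print(f"MSS  growth: x{mss_2008 / mss_2005:.0f}")            # x4
print(f"Moore's law over 3 years: ~x{2 ** (36 / 18):.0f}")   # ~x4
```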

Expected budget
- Large investment peak in 2007/2008
- We are planning to hire ~6 people between now and 2008
- We also expect 3 to 4 people per year, during 4 years, to switch from non-LHC to LHC activities
→ LHC computing is a major effort at IN2P3 / DAPNIA
- In 2008 we foresee devoting the equivalent of the current Computing Center to non-LHC experiments

Connections to T0, T1, T2
- Connection to CERN at 10 Gb/s
- 3 Tier-2s and at least 3 Tier-3s in France
  - We will have to make sure that data transfers between the T1 and the T2s run smoothly and with adequate network bandwidth
- CC-IN2P3 and BNL will be T1 partners to ensure ATLAS data redundancy
- ANR project on grid interoperability and massive data transfers with Fermilab
  → Start with a 2×1 Gb/s dedicated bandwidth, then evolve to a dedicated 10 Gb/s link
  → Strong involvement from RENATER in this project – collaboration with IT science
- Several projects to connect foreign "orphan" T2s to CC-IN2P3
  → China, Japan, Korea, Morocco, South Africa, Romania, Belgium
- IN2P3 strategy to establish collaborations with Asia: an MOU has already been signed with IHEP Beijing
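A rough illustration of why the dedicated bandwidth matters: only the link speeds come from the slide, the 100 TB dataset size is an assumed example, and protocol overhead is ignored.

```python
# Rough transfer-time estimate to illustrate why dedicated bandwidth matters.
# Only the link speeds (2x1 Gb/s, then 10 Gb/s) come from the slide; the
# 100 TB dataset size is an assumed example and protocol overhead is ignored.
def transfer_days(dataset_tb, link_gbps):
    bits = dataset_tb * 1e12 * 8          # dataset size in bits
    seconds = bits / (link_gbps * 1e9)    # ideal transfer time
    return seconds / 86400

for link_gbps in (2.0, 10.0):             # 2x1 Gb/s aggregated, then 10 Gb/s
    print(f"100 TB over {link_gbps:.0f} Gb/s: ~{transfer_days(100, link_gbps):.1f} days")
```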

The French Tier-2s / AF
- 3 Tier-2s in France:
  - Nantes (ALICE)
  - Clermont-Ferrand (ALICE – ATLAS – LHCb)
  - GRIF – T2 for the Paris region (all 4 experiments), an association of 5 laboratories (DAPNIA, LAL, IPNO, LLR, LPNHE-Paris) connected through a high-performance network
- Plus 1 Analysis Facility embedded within the CC-IN2P3 T1
- The T2s rely on the T1 for mass storage: T2 MC production → MSS, physics data storage
[Chart: current CC-IN2P3 CPU and disk capacity]

T3: the Annecy Example
- A significant amount of computing resources, mainly oriented towards physics analysis
- 3 Tier-3s in France: Annecy, Marseille and Strasbourg
- IN2P3 is concentrating its budget on the T1 → T2s and T3s have to find other resources (IN2P3 will only finance T2/T3 hardware renewal after 2008)
- Annecy:
  - Budget obtained up to now: Ministry 200 k€, University 80 k€; required budget: ~1 M€
  - Goal: 20 TB, 200 CPUs
  - Manpower: 3 FTE in the running phase
  - Connected to the W-LCG Grid, but with a strong priority for local users
- The T3 is strongly related to the "CREDO" project, a meeting center devoted to physics (theory / experiment):
  → LHC
  → Dark Matter
  → ILC

EGEE
- Enabling Grids for E-sciencE is a multidisciplinary project funded by the EU and driven by CERN
- Phase I is over – Phase II is starting now
- 90 partners in 32 countries
- Total funding for EGEE II: ~32 M€
- France is involved in several Work Packages: Applications, Operation, Network, Training, Quality Assurance
- Deep involvement from CC-IN2P3: ~10 people funded by EGEE
  → Very important for the success of LHC computing in France
- New involvement in the EGEE Network Operation Center
- Strong involvement from Clermont-Ferrand

After EGEE → ICAD
- It is crucial to maintain the Grid operational infrastructure (and developments) after the EGEE II shutdown
- Proposal from Guy Wormser to CNRS for a multidisciplinary research project: ICAD
  - Provide a distributed computing and storage infrastructure as a resource to the scientific community
  - Built upon the present EGEE/LCG infrastructure
  - Budget growing from 0.5 M€ to 1 M€/year
- Very promising project – the initial approval step in CNRS has been successful

Conclusions
- LHC computing is a major challenge and IN2P3/DAPNIA is putting a lot of effort into making it a success → first priority (the Tier-1)
- T2s / T3s will bring important resources to LHC computing and analysis
  - The T3 model, strongly related to physics analysis, is in my opinion an interesting approach
- Astroparticle experiments may become significant resource consumers very soon
- Computing can no longer be considered a local matter: international partnerships are crucial
- The international network capacity has built up much faster than expected
  - This is what has made LHC computing feasible