1 Computing activities in France
Dominique Boutigny, CC-IN2P3
Centre de Calcul de l'IN2P3 et du DAPNIA
Restricted ECFA Meeting in Paris, May 12, 2006

2 HEP Computing Organization in France up to the BaBar / D0 Era
Computing is organized around the IN2P3 / DAPNIA Computing Center:
- Hosting data
- Securing data
- Providing most of the computing power
Physicists are located in their laboratories, connected to CC-IN2P3, with a small local computing infrastructure oriented toward:
- DAQ developments
- Desktop computing
- Software developments
- Physics analysis

3 Computing Evolution in the LHC Era
CC-IN2P3 still plays a central role as a Tier-1:
- Hosting most of the data
- Securing data
- Providing a large fraction of the computing power
But significant resources are building up within Tier-2s and Tier-3s, and the center is opening to multidisciplinary applications → Grid architecture

4 A Strong Network Infrastructure
Originally based on a private network: PhyNet
Now relying on the National Research and Education Network: RENATER
- Backbone of the Grid architecture
- Opening to the European and international networks
Note the 10 Gbit/s link between CERN and CC-IN2P3:
→ LHC Optical Private Network
→ First optical circuit of the RENATER 4 infrastructure

5 CC-IN2P3 Status
CC-IN2P3 is a CNRS Unit of Service and Research located in Lyon.
Funding (2006):
- 5.75 M€ - CNRS (not including salaries)
- 1.60 M€ - CEA / DAPNIA
- 0.78 M€ - Contracts
- ~2 M€ - CNRS salaries
Manpower: 60.5 full-time equivalents

6 CPU resources
~1.7 MSpecInt2000 available, roughly equivalent to 1700 of today's single-core processors
Major upgrades: ×10 in 5 years
80-85% efficiency
[Plot: CPU consumption at CC-IN2P3; photo: NEC bi-Xeon farm]
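To make the equivalence quoted above concrete, here is a minimal sketch of the arithmetic; the ~1 kSI2k rating per 2006-era single-core processor and the mid-point efficiency are assumptions used for illustration, not figures from the slide.

```python
# Rough capacity arithmetic for the figures quoted on this slide.
# The ~1000 SI2k per single-core processor is an assumption used to
# recover the "~1700 single-core processors" equivalence.

TOTAL_SI2K = 1.7e6          # ~1.7 MSpecInt2000 installed
SI2K_PER_CORE = 1000.0      # assumed rating of one 2006-era single core
EFFICIENCY = 0.825          # mid-point of the quoted 80-85% efficiency

equivalent_cores = TOTAL_SI2K / SI2K_PER_CORE
usable_si2k = TOTAL_SI2K * EFFICIENCY

print(f"Equivalent single-core processors: {equivalent_cores:.0f}")
print(f"Effectively delivered capacity:    {usable_si2k / 1e6:.2f} MSI2k")
```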

7 CPU Usage
→ All the CPU nodes can be accessed through the Grid
Running 7 days a week, 24 hours a day
Very high reliability: CC-IN2P3 is 20 years old – our engineers know how to run and operate a large computing center
[Plot: queued vs. running jobs]

8 Disk resources
350 TB of disk storage available to experiments
Highly reliable disk system (IBM DS 8300), but very expensive (7 €/GB)
After a careful evaluation, CC-IN2P3 is now purchasing new storage solutions:
→ Cheaper (<2 €/GB)
→ But with performance adequate for storage-intensive applications (LHC)
→ 60 high-end servers dedicated to data access
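A small sketch of the cost trade-off behind the purchase decision above, using the per-GB prices quoted on the slide; the <2 €/GB figure is treated as an upper bound, and the comparison at 350 TB is only illustrative.

```python
# Compare the cost of the existing high-end disk with the cheaper
# storage being purchased, at the per-GB prices quoted on the slide.

CAPACITY_TB = 350                 # disk currently available to experiments
HIGH_END_EUR_PER_GB = 7.0         # IBM DS 8300 class storage
COMMODITY_EUR_PER_GB = 2.0        # upper bound for the new solutions

gb = CAPACITY_TB * 1000           # 1 TB = 1000 GB (decimal convention)
print(f"350 TB at 7 EUR/GB : {gb * HIGH_END_EUR_PER_GB  / 1e6:.2f} MEUR")
print(f"350 TB at <2 EUR/GB: {gb * COMMODITY_EUR_PER_GB / 1e6:.2f} MEUR (upper bound)")
```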

9 Mass Storage System (MSS)
6 StorageTek silos (36,000 cartridges)
Access to the MSS mainly through HPSS (High Performance Storage System) with dynamic automatic staging
A lot of experience accumulated during the BaBar era with hierarchical storage: MSS + disk cache
~1.2 PB in HPSS now
A file on tape is copied to disk and available within 2 minutes
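The staging behaviour described above (request a tape-resident file, then find its copy in the disk cache within about two minutes) can be sketched as a simple request-and-poll loop. This is only an illustration: the `stage_file` command and the polling strategy are hypothetical, not the actual HPSS interface.

```python
# Hedged sketch of the tape-to-disk staging pattern described above:
# ask the mass storage system for a file, then poll the disk cache
# until the copy is available or a time budget is exceeded.
# "stage_file" is a hypothetical wrapper, not a real HPSS command.

import os
import subprocess
import time

def stage_and_wait(tape_path: str, cache_path: str, timeout_s: int = 120) -> bool:
    """Request staging of a tape file and wait until it appears in the disk cache."""
    subprocess.run(["stage_file", tape_path], check=True)   # hypothetical staging request
    deadline = time.monotonic() + timeout_s                  # ~2 minutes, as quoted on the slide
    while time.monotonic() < deadline:
        if os.path.exists(cache_path):                       # copy has landed on disk
            return True
        time.sleep(5)                                        # poll every few seconds
    return False                                             # staging did not complete in time
```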

10 CC-IN2P3 Users
~70 groups, ~2500 users
CC-IN2P3 has been open to non-French users since the beginning of BaBar
~20% of CPU resources are used by foreign physicists / institutes
[Chart: usage by field – Nuclear Physics 14%, Astroparticles 24% (AUGER, HESS, NEMO, ANTARES, SNovae, …)]

11 A remark on Astroparticles
Astroparticles now consume ¼ of the Lyon Computing Center
CC-IN2P3 is a Tier-0 for AUGER
PLANCK plans its computing at the PByte storage scale, with large CPU needs
The analysis pattern may be very demanding on:
- Input / output performance
- Memory requirements
Dumb "pizza boxes" may not be enough for analysis
→ Supercomputer models?
→ Project at SLAC to develop a multi-TByte central-memory computer with "cheap" technology
→ Astroparticle computing may become very challenging in the near future

12 Specificities of HEP computing
HEP computing:
- 1 event → 1 complete and self-contained treatment
- N events (1 run) → 1 CPU
- M runs → M CPUs
Parallelism is obvious in experimental HEP, so providing CPU resources is easy (with enough money). CPU for meteorology, earth science or lattice QCD is much more complicated. In IT language: weakly coupled parallelism (illustrated in the sketch below).
So what are we good at?
→ High-performance storage is an HEP specificity:
- Huge mass of data
- High-performance I/O
- Intensive data distribution
BaBar was absolutely crucial to gain this experience; it is knowledge that we can bring to other sciences.
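A minimal sketch of the weakly coupled parallelism described on this slide: runs are independent, self-contained units of work, so M runs map directly onto a pool of workers with no communication between them. The `process_run` body is a placeholder.

```python
# Minimal illustration of HEP-style weakly coupled parallelism:
# every run is a self-contained unit of work, so M runs map
# trivially onto M (or fewer) workers with no communication between them.

from multiprocessing import Pool

def process_run(run_number: int) -> int:
    """Placeholder for the full, self-contained treatment of one run's events."""
    n_events_processed = 1000  # stand-in for reading and reconstructing the run's events
    return n_events_processed

if __name__ == "__main__":
    runs = range(100)                       # M runs
    with Pool() as pool:                    # pool of independent CPUs
        results = pool.map(process_run, runs)
    print(f"Processed {sum(results)} events over {len(runs)} runs")
```

By contrast, the meteorology or lattice QCD workloads mentioned above need communication between workers during the computation, which is why provisioning CPU for them is much harder.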

13 The W-LCG project
A distributed computing model on a Grid architecture for LHC computing
[Diagram: Tier-0 at CERN; Tier-1 centres (ASCC/Taipei, RAL/UK, CC-IN2P3/FR, TRIUMF/CA, GridKa/DE, CNAF/IT, PIC/SP, BNL/US, FNAL/US, SARA/NL, ...); Tier-2 sites (GRIF, IN2P3-LPC, IN2P3-Subatech, LAPP, ...) and Tier-3 sites (CPPM, ...) in the various countries, attached to the Tier-1s]

14 The French LCG Organization
[Organization chart: LHC application groups, the CC-IN2P3 "LCG Tier-1 Grid Services, Operation & Support" team, French and remote Tier-2s, other Tier-1s, a Steering Committee, and the WLCG bodies (CB, OB, MB, GDB)]
Management Board:
→ Scientific Coordinator: F. Malek
→ Technical Coordinator: F. Hernandez
→ CC-IN2P3 head: D. Boutigny
→ Technical Coordinator T2/T3: F. Chollet

15 Worldwide Tier-1
Tier-1 institutions and the experiments they serve with priority:
- TRIUMF (Canada): ATLAS
- CC-IN2P3 (France): ALICE, ATLAS, CMS, LHCb
- FZK-GridKA (Germany): ALICE, ATLAS, CMS, LHCb
- CNAF (Italy): ALICE, ATLAS, CMS, LHCb
- NIKHEF/SARA (Netherlands): ALICE, ATLAS, LHCb
- Nordic Data Grid Facility (DK/FI/NO/SE): ALICE, ATLAS
- PIC (Spain): ATLAS, CMS, LHCb
- ASGC (Taiwan): ATLAS, CMS
- RAL (United Kingdom): ALICE, ATLAS, CMS, LHCb
- BNL (USA): ATLAS
- FNAL (USA): CMS
Total: ALICE 6, ATLAS 10, CMS 7, LHCb 6

16 The French Tier-1
Contributions to the computing effort are at the level of the French financial investment in the experiments:
ATLAS: 45% – CMS: 25% – ALICE: 15% – LHCb: 15%
[Plot: French T1 share of the offered capacity]
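As an illustration of how these shares translate into capacity, a small sketch that splits a pledged total according to the percentages above; the 1.7 MSI2k total is taken from the earlier CPU slide and used here purely as an example.

```python
# Split a pledged Tier-1 capacity according to the experiment shares
# quoted on this slide. The total capacity used here is illustrative.

SHARES = {"ATLAS": 0.45, "CMS": 0.25, "ALICE": 0.15, "LHCb": 0.15}
PLEDGED_MSI2K = 1.7   # illustrative total, from the CPU resources slide

for experiment, share in SHARES.items():
    print(f"{experiment:6s}: {share * PLEDGED_MSI2K:.2f} MSI2k ({share:.0%})")

assert abs(sum(SHARES.values()) - 1.0) < 1e-9   # shares add up to 100%
```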

17 Planned resources for W-LCG
2005: ~1500 processors – ~1.3 MSpecInt2000, ~350 TB disk, MSS ~1 PB
2008: ~10 MSpecInt2000 (×8), 4 PB disk (×11), 4 PB MSS (×4)
Moore's law over the same period: ×~4
One order of magnitude of change in 3 years → the impact on the Computing Center is big
500 to 1000 disk servers will be necessary
[Plots: CPU and disk ramp-up vs. current status]
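The comparison with Moore's law can be made explicit with a short calculation; the 18-month doubling time is an assumption behind the slide's "×~4", and the 2005/2008 figures are those quoted above.

```python
# Compare the 2005 -> 2008 growth factors requested for W-LCG with
# what Moore's-law evolution alone would give over the same 3 years.
# The 18-month doubling time is an assumption behind the slide's "x~4".

DOUBLING_TIME_YEARS = 1.5
YEARS = 3
moore_factor = 2 ** (YEARS / DOUBLING_TIME_YEARS)   # ~4

growth = {
    "CPU (MSI2k)": (1.3, 10.0),   # ~1.3 -> ~10     (x8)
    "Disk (PB)":   (0.35, 4.0),   # ~350 TB -> 4 PB (x11)
    "MSS (PB)":    (1.0, 4.0),    # ~1 PB -> 4 PB   (x4)
}

print(f"Moore's law over {YEARS} years: x{moore_factor:.1f}")
for name, (start, end) in growth.items():
    print(f"{name:12s}: x{end / start:.0f}")
```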

18 Expected budget
We are planning to hire ~6 people between now and 2008
Large investment peak in 2007/2008
We also expect 3 to 4 people per year, over 4 years, to switch from non-LHC to LHC activities
→ LHC computing is a major effort at IN2P3 / DAPNIA
In 2008 we plan to devote the equivalent of the current Computing Center to non-LHC experiments.

19 Connections to T0, T1, T2
Connection to CERN at 10 Gb/s
3 T2s and at least 3 T3s in France
We will have to make sure that data transfers between the T1 and the T2s run smoothly and with adequate network bandwidth
CC-IN2P3 and BNL will be T1 partners to ensure ATLAS data redundancy
ANR project on grid interoperability and massive data transfers with Fermilab:
→ Start with a 2 × 1 Gb/s dedicated bandwidth, then evolve to a dedicated 10 Gb/s link
→ Strong involvement from RENATER in this project – collaboration with IT science
Several projects to connect foreign "orphan" T2s to CC-IN2P3:
→ China, Japan, Korea, Morocco, South Africa, Romania, Belgium
IN2P3 strategy to establish collaborations with Asia: an MOU has already been signed with IHEP Beijing
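To give a feel for what these link capacities mean for massive data transfers, a short sketch of transfer times; the dataset size and the usable-throughput fraction are illustrative assumptions, not numbers from the presentation.

```python
# Illustrative transfer-time arithmetic for the link capacities quoted
# on this slide. Dataset size and usable-throughput fraction are
# assumptions for the example, not figures from the presentation.

DATASET_TB = 100          # illustrative dataset to replicate between sites
USABLE_FRACTION = 0.7     # assumed fraction of nominal bandwidth actually achieved

def transfer_days(link_gbps: float) -> float:
    bits = DATASET_TB * 1e12 * 8                          # dataset size in bits (decimal TB)
    seconds = bits / (link_gbps * 1e9 * USABLE_FRACTION)  # effective throughput in bits/s
    return seconds / 86400

for label, gbps in [("2 x 1 Gb/s (initial Fermilab link)", 2.0),
                    ("10 Gb/s (CERN link / later upgrade)", 10.0)]:
    print(f"{label:36s}: {transfer_days(gbps):.1f} days for {DATASET_TB} TB")
```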

20 The French Tier-2s / AF
3 T2s in France:
- Nantes (ALICE)
- Clermont-Ferrand (ALICE – ATLAS – LHCb)
- GRIF, the T2 for the Paris region (all 4 experiments) – an association of 5 laboratories (DAPNIA, LAL, IPNO, LLR, LPNHE-Paris) connected through a high-performance network
+ 1 Analysis Facility embedded within the CC-IN2P3 T1
The T2s rely on the T1 for mass storage: T2 MC production → MSS, physics data storage
[Chart: current CC-IN2P3 CPU and disk capacity]

21 T3: the Annecy Example
→ A significant amount of computing resources, mainly oriented towards physics analysis
3 T3s in France: Annecy, Marseille and Strasbourg
IN2P3 is concentrating its budget on the T1
→ T2s and T3s have to find other resources (IN2P3 will only finance T2/T3 hardware renewal after 2008)
Annecy:
- Budget obtained up to now: Ministry 200 k€, University 80 k€; required budget: ~1 M€
- Connected to the W-LCG Grid, but with a strong priority for local users
- Goal: 20 TB, 200 CPUs; manpower: 3 FTE in the running phase
The T3 is strongly related to the "CREDO" project, a meeting center devoted to physics: LHC, Dark Matter, ILC, theory / experiment

22 EGEE
Enabling Grids for E-sciencE is a multidisciplinary project funded by the EU and driven by CERN
Phase I is over – Phase II is starting now
90 partners in 32 countries; total funding for EGEE II ~32 M€
France is involved in several Work Packages: Applications, Operation, Network, Training, Quality Assurance
Deep involvement from CC-IN2P3: ~10 people funded by EGEE
Very important for the success of LHC computing in France
New involvement in the EGEE Network Operation Center; strong involvement from Clermont-Ferrand

23 After EGEE → ICAD
It is crucial to maintain the Grid operational infrastructure (and developments) after the EGEE II shutdown
Proposal from Guy Wormser to CNRS for a multidisciplinary research project: ICAD
- Provide a distributed computing and storage infrastructure as a resource to the scientific community
- Built upon the present EGEE/LCG infrastructure
- Budget growing from 0.5 M€ to 1 M€/year
Very promising project – the initial approval step in CNRS has been successful

24 Conclusions
LHC computing is a major challenge, and IN2P3/DAPNIA is putting a lot of effort into making it a success → first priority (→ T1)
T2s / T3s will bring important resources to LHC computing / analysis; the T3 model, strongly related to physics analysis, is in my opinion an interesting approach
Astroparticle experiments may become significant resource consumers very soon
Computing can no longer be considered a local matter – international partnerships are crucial
The international network capacity has built up much faster than expected; this is what has made LHC computing feasible

