November 28, 2007, Dominique Boutigny (CC-IN2P3): CC-IN2P3 Update Status

Presentation transcript:

Slide 1: CC-IN2P3 Update Status – Dominique Boutigny, CC-IN2P3, November 28, 2007

Slide 2: CPU Consumption (October 2007). Efficiency is back to 72% after falling to 52% during the computer room upgrade work. The new farm is ranked 229th in the latest Top 500 list of supercomputers.

Slide 3: CPU sharing between scientific domains.

Slide 4: Storage allocation as of September 2007.
xrootd (allocated: 182 TB, used: 104 TB): BaBar 83 TB, Virgo 11.5 TB, CMS 3.5 TB, Hess 1.5 TB, Indra 1 TB.
dCache: LHC 592 TB, EGEE non-LHC 45 TB, Phenix 10 TB, Calice 11 TB.
HPSS cache (allocated: 66 TB): CMS 6.5 TB, Virgo 2 TB, D0 12 TB, general 39 TB.
sps (allocated: 154 TB): Snovae 71.5 TB, Planck 21 TB, BaBar 12 TB, CMS 8 TB, Compass 4.5 TB, D0 3.5 TB, Integral 4 TB, Auger 6.5 TB, Phenix 4 TB.
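The per-experiment shares on this slide can be totalled per storage service. A minimal sketch, with the numbers copied from the slide (the dictionary layout is purely illustrative, not anything from CC-IN2P3's tooling):

```python
# Per-experiment storage shares (TB), September 2007, copied from the slide.
storage_tb = {
    "xrootd": {"BaBar": 83, "Virgo": 11.5, "CMS": 3.5, "Hess": 1.5, "Indra": 1},
    "dCache": {"LHC": 592, "EGEE non-LHC": 45, "Phenix": 10, "Calice": 11},
    "HPSS cache": {"CMS": 6.5, "Virgo": 2, "D0": 12, "General": 39},
    "sps": {"Snovae": 71.5, "Planck": 21, "BaBar": 12, "CMS": 8,
            "Compass": 4.5, "D0": 3.5, "Integral": 4, "Auger": 6.5, "Phenix": 4},
}

# Sum the listed shares for each service.
for service, shares in storage_tb.items():
    print(f"{service}: {sum(shares.values()):.1f} TB listed")
```

Note that the listed shares need not add up to the quoted allocations (e.g. the sps shares sum to 135 TB against 154 TB allocated); the remainder is presumably unassigned or unlisted space.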

Slide 5: CC-IN2P3 usage by LHC: 27%.

Slide 6: Grid usage at CC-IN2P3 (HEP Grid / total). Main experiments using the Grid at CC-IN2P3: LHC, H1, ILC, CDF, VIRGO. Grid usage is largely dominated by the LHC experiments. Non-HEP Grid usage comes mainly from the biomed VO, but it accounts for only 0.4% of the CPU.

Slide 7: Main hardware investments in 2007.
CPU: 479 Dell computers, dual-CPU quad-core (Intel 2.33 GHz), 16 GB of memory (2 GB per core), for 9.4 MSI2k (4.5 MSI2k after normalization). Resources dedicated to non-LHC experiments are approximately equivalent to the total CC-IN2P3 capacity in % for LHC. A new bid is ongoing, probably for ~400 computers similar to the first bid or better.
Disk (dCache, xrootd, HPSS cache, ...): 1.2 PB of SUN X4500 (Thumper) servers, 80% for LHC; 600 more TB are on order; very good performance. Total Thumper capacity by January 2008: 2.6 PB.
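As a back-of-the-envelope check of the CPU figures above (assuming "dual-CPU quad-core" means 8 cores per box, which the "2 GB / core" note is consistent with):

```python
# Sanity check of the 2007 CPU purchase figures from the slide.
boxes = 479
cores_per_box = 2 * 4                    # dual-CPU, quad-core (assumption)
total_cores = boxes * cores_per_box
print(total_cores)                       # 3832 cores in the new farm

total_msi2k = 9.4                        # quoted rating for the whole purchase
ksi2k_per_core = total_msi2k * 1000 / total_cores
print(round(ksi2k_per_core, 2))          # ~2.45 kSI2k per core

mem_per_core_gb = 16 / cores_per_box
print(mem_per_core_gb)                   # 2.0 GB/core, matching the slide
```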

Slide 8: Main investments in 2007 (continued).
Disks for GPFS: bidding in progress; we will probably get 500 to 600 TB of disk storage. Decision to deploy the GPFS global file system to replace the old NFS space, with a global negotiation for all the IN2P3 laboratories: 125 k€ per year. This is necessary in order to scale up the so-called semi-permanent storage, and crucial for the LHC analysis facility.
Big cartridge order (T10K and LTO-4) to populate the SUN/STK SL8500 library.

Slide 9: Manpower. Four people have just been hired: development, project leader (Grid), communication, infrastructure. We already know that four more people, all computing experts, will be hired in 2008.

Slide 10: Computer room upgrade. The computer room upgrade is now over: 1.5 M€ was invested in electrical work, air conditioning, and UPS. The computer room is now able to provide 1 MW of electrical power usable for computing equipment.

Slide 11: Computer room upgrade. A Tier-1 is also a big cooling and power factory!

Slide 12: New building. The financial situation is being clarified, and we now have good hope of getting the budget for a new building. Nevertheless, the budget will only allow the construction of a new computer room; the office and meeting room space will be delayed. The new computer room will be designed in such a way as to reduce its environmental impact.

Slide 13: A new organization chart.

Slide 14: Institut des Grilles (Grid Institute). CNRS has decided to create a Grid Institute in order to federate the efforts on Grid development within CNRS, covering both production Grids and research Grids. The Grid Institute will play an important role in setting up the environment for the future French National Grid Initiative (NGI). The European NGIs will be the basis for the European Grid Initiative.