News and computing activities at CC-IN2P3

News and computing activities at CC-IN2P3
FJPPL Meeting at CC-IN2P3, February 17th, 2010

The infrastructure issue… is solved
(The accompanying plot shows a slope of about 500 W per day.)
The construction of the new computer room will start very soon; we expect to have it by early 2011.

Capacity of the new computer room
We have built a simple computing model with the following ingredients:
- Serve ~40 groups with "standard" needs
- Fulfill LHC computing commitments and provide first-class analysis capability
- Expect a very significant growth of the astroparticle community's needs (LSST, for instance)
- Add some services

New computer room
Assuming Moore's law still holds, we extrapolate the computing model up to 2019 and end up with:

Year   Racks   Power (computing only)
2011   50      600 kW
2015   125     1.5 MW
2019   216     3.2 MW

The indicated power is for computing only; power for cooling has to be added, and this comes on top of the existing computer room (1 MW). Due to budget constraints the new computer room will have to start with limited power, hence a modular, easily scalable design: the chilled water and electricity distribution is sized for the 2019 target value, while the equipment (transformers, chillers, UPS, etc.) will be added later. (A back-of-the-envelope sketch of this extrapolation is given below.)
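To make the growth assumption concrete, here is a purely illustrative Python sketch: it infers the doubling time implied by the slide's 2011 and 2019 figures and interpolates from it. The exponential-growth model and the function names are assumptions of mine, not part of the actual CC-IN2P3 planning model, and the interpolated 2015 value only roughly matches the 1.5 MW quoted above.

```python
import math

# Illustrative sketch only: infer the doubling time implied by the slide's
# 2011 (600 kW) and 2019 (3.2 MW) computing-power figures, then interpolate.
# The exponential-growth assumption is mine, not the CC-IN2P3 model.

def doubling_time(p_start_kw: float, p_end_kw: float, years: float) -> float:
    """Doubling time (in years) for exponential growth between two power values."""
    return years * math.log(2) / math.log(p_end_kw / p_start_kw)

def power_kw(p_start_kw: float, year0: int, year: int, t_double: float) -> float:
    """Extrapolated power in `year`, starting from p_start_kw in `year0`."""
    return p_start_kw * 2 ** ((year - year0) / t_double)

t_double = doubling_time(600.0, 3200.0, 2019 - 2011)
print(f"implied doubling time: ~{t_double:.1f} years")
for y in (2011, 2015, 2019):
    print(f"{y}: ~{power_kw(600.0, 2011, y, t_double) / 1000:.2f} MW")
```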

Design considerations
The two-storey building will be devoted entirely to computing equipment (no offices):
- First storey: services (UPS, chillers, etc.), 900 m2
- Second storey: computer room, 900 m2
We decided to forgo a raised floor; all services (chilled water, power, network) will be distributed from above.

[Building drawings: inverted beams, chillers on the roof, pumps, and the connection between the old and new computer rooms plus an elevator.]

Analysis farm
We are setting up a prototype analysis farm for the LHC experiments:
- PROOF based, I/O intensive, with a fast turnaround to speed up analysis code development
- Blade centre with 16 Dell blades: 16 threads per blade, 3 GB per thread, and two 10 Gb/s network cards (blade interconnection and iSCSI storage)
- 3 iSCSI SAN crates (Dell EqualLogic PS 6010XV), 20 TB: 16 x 600 GB 15,000 RPM SAS disks
- Connection to an xrootd server
The idea is to test this setup with different configurations and to compare with other analysis farms; this requires help from the experiments. The system will be extended once we have a satisfactory configuration. (A minimal illustration of accessing such a farm follows below.)
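To give a flavour of how analysts would use such a farm, here is a minimal, hedged PyROOT sketch that opens a PROOF session and reads a file served by xrootd. The server names and the file path are hypothetical placeholders made up for illustration; they are not the actual CC-IN2P3 endpoints.

```python
# Minimal PyROOT sketch: connect to a PROOF master and open a file over
# xrootd.  "proof.example.in2p3.fr" and the root:// URL are hypothetical
# placeholders, not real CC-IN2P3 endpoints.
import ROOT

# Open a PROOF session on the (hypothetical) analysis-farm master.
proof = ROOT.TProof.Open("proof.example.in2p3.fr")
print("active PROOF workers:", proof.GetParallel())

# Read a file directly from an (equally hypothetical) xrootd server.
f = ROOT.TFile.Open("root://xrootd.example.in2p3.fr//store/user/demo/events.root")
if f and not f.IsZombie():
    f.ls()       # list the objects stored in the file
    f.Close()
```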

A task force is working in order to propose a new batch system
We have decided to stop developing our own batch system (BQS) and to migrate to a new product; a task force is preparing a proposal for its replacement:
- Extensive comparison work
- A review of the systems adopted at other sites
- The choice has been restricted to two products: SGE and LSF
The final choice will be made in early March; we will then proceed with the migration. (A hedged job-submission sketch follows below.)
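Both candidate products can be driven programmatically. As one hedged illustration (an assumption of mine, not the migration plan from the slides), a job could be submitted through the DRMAA standard interface, which SGE supports and for which LSF implementations also exist, using the Python drmaa bindings:

```python
# Hedged sketch: submit a job through the DRMAA standard API with the
# Python "drmaa" bindings.  The command and arguments are placeholders;
# this is not the actual CC-IN2P3 submission interface.
import drmaa

with drmaa.Session() as session:
    jt = session.createJobTemplate()
    jt.remoteCommand = "/bin/echo"            # placeholder job
    jt.args = ["hello from the batch farm"]
    job_id = session.runJob(jt)
    print("submitted job", job_id)

    # Block until the job finishes and report its exit status.
    info = session.wait(job_id, drmaa.Session.TIMEOUT_WAIT_FOREVER)
    print("job finished with exit status", info.exitStatus)

    session.deleteJobTemplate(jt)
```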

Grid future strategy
- EGEE will become EGI in a couple of months; the NGI, France Grille, is being set up with 8 partners
- CC-IN2P3 will continue to play a central role in grid operation, both at the EGI level (grid operations portal) and at the NGI level
- We would also like to leverage the JSAGA work to improve grid accessibility (illustrated below)
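JSAGA is a Java implementation of the OGF SAGA API. Purely to give a flavour of the uniform job-submission abstraction it aims to provide, here is a sketch using the separate Python SAGA bindings rather than JSAGA itself; the service URL and executable are placeholders, and this is not CC-IN2P3's actual interface.

```python
# Illustrative SAGA-style job submission, the abstraction JSAGA implements
# in Java.  This uses the Python "saga" package instead; the service URL
# and executable are placeholders.
import saga

js = saga.job.Service("fork://localhost")   # could point at a grid middleware endpoint
jd = saga.job.Description()
jd.executable = "/bin/date"

job = js.create_job(jd)
job.run()
job.wait()
print("final job state:", job.state)
```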

Ideas…
- Integration of grid and cloud frameworks. Well… I still have to figure out what exactly the cloud model for particle physics is.
- Create a France-Asia Virtual Organization / Grid: support for AIL applications, training; it could support both NAREGI and gLite.

Grid future strategy
TIDRA is a regional grid oriented toward data processing, spanning Annecy, Grenoble and Lyon.
- Open to any regional scientific application
- Easy access is crucial… and difficult to set up!
- We hope to open it to the industrial world as well.