NDGF Site Report
Mattias Wadenstein
HEPiX 2012 Spring Meeting, 2012-04-22, Prague

NDGF Overview
- NDGF is a distributed site
- Most of the big HPC centers in Norway, Sweden, Finland and Denmark
- Tier1 storage at 7 sites, plus IJS in Slovenia
- 2010 numbers: CPU figures are total CPU, not the WLCG-allocated share

NDGF Organisation
- NDGF is the LHC Tier1 activity at NeIC (Nordic e-Infrastructure Collaboration)
- NeIC is hosted by NordForsk
- NeIC must/should/may have other activities
- The Director of NeIC starts work on August 1st
- Much smaller organisation than previously
- Some critical operations positions filled
- Expect to hire more as soon as the director is in office
- I'm now Technical Manager of the Tier1

Local site reports
- HPC2N machine room update
- UiO new cluster update
- NSC update

HPC2N new machine room
- "As seen on HEPiX"
- New cluster installed
- About 25 kW per rack
- Moderate ΔT: 35-40°C in the hot aisle
- Needed an air flow control module to balance airflow between the two cold aisles

HPC2N cluster install

HPC2N grid cluster upgrade: Ritsem
- From Ubuntu 8.04 with Torque and Maui
- To Ubuntu 10.04 with Slurm
- Memory enforcement on resident memory by means of cgroups: a small per-job swap allowance and then OOM (see the configuration sketch below)
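A minimal sketch of what this kind of setup can look like in Slurm; the parameter values below are illustrative assumptions, not HPC2N's actual configuration. slurm.conf enables the cgroup plugins, and cgroup.conf constrains resident memory to the job's allocation with a small swap allowance on top, beyond which the kernel OOM killer ends the job:

    # slurm.conf (excerpt)
    ProctrackType=proctrack/cgroup
    TaskPlugin=task/cgroup

    # cgroup.conf (excerpt)
    CgroupAutomount=yes
    ConstrainRAMSpace=yes      # cap resident memory at the job's allocation
    ConstrainSwapSpace=yes     # also cap RAM+swap
    AllowedRAMSpace=100        # percent of allocated memory usable as RSS
    AllowedSwapSpace=5         # small extra swap allowance, as a percentage
                               # of the allocation; exceeding RAM+swap
                               # triggers the cgroup OOM killer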

UiO new cluster
OSLO, Norway, March 30 -- The University of Oslo (UiO) and MEGWARE Computer Vertrieb und Service GmbH have signed a contract for a new HPC resource to be installed at the University’s central IT department (USIT). The coming system, named Abel after the famous Norwegian mathematician, will have an aggregated theoretical peak performance of nearly 260 TFLOPS, and be geared towards a wide range of research with demanding data handling applications.

NSC new clusters
- One with 1200 nodes
- One with 240 nodes for general academic use
- One with 240 nodes for weather and climate research
- One with 72 nodes for SAAB, to calculate how to build aircraft
- First deliveries are currently expected mid-May
- All are HP SL230s, with 16 cores of Intel Xeon Sandy Bridge; the interconnect is Mellanox FDR InfiniBand

NSC new machine room
- NSC is also building a new computer room
- Originally planned to already be finished, but delayed by over a year
- Will be finished next year

Questions?