GridKa Site Report
Andreas Petzold, Steinbuch Centre for Computing (SCC)
KIT – University of the State of Baden-Württemberg and National Laboratory of the Helmholtz Association

GridKa Batch Farm
- Univa Grid Engine is running fine
- ~150 kHS06, ~10k job slots
- 98 replacement machines this summer: SysGen 2U 4-node chassis, 2x Intel Xeon E5-2670 (8-core, 2.6 GHz, 312 HS06), 3 GB/core, 3x 500 GB HDD
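A quick back-of-the-envelope check, as a minimal sketch using only the figures quoted on this slide:

```python
# Sanity check of the batch-farm figures quoted above.
farm_hs06 = 150_000       # ~150 kHS06 total farm capacity
job_slots = 10_000        # ~10k job slots

node_hs06 = 312           # per replacement node (2x 8-core Xeon)
cores_per_node = 16

print(f"HS06 per job slot:      {farm_hs06 / job_slots:.1f}")       # ~15
print(f"HS06 per physical core: {node_hs06 / cores_per_node:.1f}")  # ~19.5 on the new nodes
```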

WN Migration to SL6
- Migration of the GridKa compute fabric to SL6 is finished
- Performance: +5.4%
- Intel Xeon E5-2670 (8 cores, 2.6 GHz), HT off / HT on:
  - SL5 + default compiler: 267 HS06 / 335 HS06
  - SL6 + default compiler: 283 HS06 (+5.8%) / 348 HS06 (+3.9%)
  - SL5 + gcc-4.8.1: 289 HS06 / 353 HS06
- AMD Opteron 6168 (12 cores, 1.9 GHz):
  - SL5 + default compiler: 183 HS06
  - SL6 + default compiler: 193 HS06 (+5.6%)
  - SL5 + gcc-4.8.1: 187 HS06
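The quoted gains follow directly from the HS06 scores; a minimal sketch (the slide's percentages were presumably derived from unrounded scores, so the last digit may differ slightly):

```python
# Recompute the SL5 -> SL6 gains from the rounded HS06 scores quoted above.
def rel_gain(new, old):
    """Relative improvement of `new` over `old`, in percent."""
    return 100.0 * (new - old) / old

# Intel Xeon E5-2670, default compiler
print(f"HT off: +{rel_gain(283, 267):.1f}%")  # ~6.0% (slide quotes +5.8%)
print(f"HT on:  +{rel_gain(348, 335):.1f}%")  # ~3.9%

# AMD Opteron 6168, default compiler
print(f"AMD:    +{rel_gain(193, 183):.1f}%")  # ~5.5% (slide quotes +5.6%)
```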

Ivy Bridge Benchmarks
- New Intel Ivy Bridge processors on the market (E5-26## v2)
- Manufacturing process: 0.022 micron (Sandy Bridge: 0.032 micron)
- Up to 12 cores (Sandy Bridge: up to 8 cores)
- HS06 score increases roughly in line with the number of cores:
  - E5-2670 (8 cores, 2.6 GHz, HT on, SL6, default compiler): 348 HS06
  - E5-2670 v2 (10 cores, 2.5 GHz, HT on, SL6, default compiler): 411 HS06
- Power saving of around …%
- Thanks to DELL for providing the test machine
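The per-core view makes the scaling statement concrete; a small sketch using only the scores quoted above:

```python
# HS06 per physical core for the two benchmarked CPUs (HT on, SL6, default compiler).
results = {
    "E5-2670 (Sandy Bridge, 8 cores)":   (348, 8),
    "E5-2670 v2 (Ivy Bridge, 10 cores)": (411, 10),
}
for cpu, (hs06, cores) in results.items():
    print(f"{cpu}: {hs06 / cores:.1f} HS06/core")
# ~43.5 vs ~41.1 HS06/core: per-core performance stays roughly flat,
# so the total score grows essentially with the core count.
```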

Power Efficiency
- Chart: power usage (W) per performance score (HS06), i.e. Watts per HS06, for worker-node-class machines at GridKa: AMD 246, AMD 270, Intel 5160, Intel E5345, Intel L5420, Intel E5430, Intel E5520 (HT on), AMD 6168, Intel E5-2670 (HT on), Intel E5-2670v2 (HT on)
- The E5-2670v2 is a test system provided by DELL

GridKa dCache & xrootd
- 6 production dCache instances + pre-production setup
- 5 instances running 2.6, 1 running …; … PB, 287 pools on 58 servers
- Upgrade to 2.6 instead of 2.2 recommended by dCache.org; last-minute decision one week before the planned downtime
- Full support for SHA-2 and xrootd monitoring
- Great support from the dCache devs
- CMS disk-tape separation: most CMS tape pools converted to disk-only pools; last CMS config changes today; GridKa is the first CMS T1 successfully migrated
- Two xrootd instances for ALICE: 2.7 PB, 15 servers

GridKa Disk Storage
- 9x DDN S2A: … enclosures, 9000 disks, 796 LUNs
- SAN: Brocade DCX
- 1x DDN SFA10K: 10 enclosures, 600 disks
- 1x DDN SFA12K: 5 enclosures, 360 disks
- 14 PB usable storage

Evaluating new Storage Solutions
- DDN SFA12K-E allows running server VMs directly in the storage controller
- DDN is testing a complete dCache instance inside the controller
- Expected benefits: shorter I/O paths (no SAN + FC HBAs, reduced latency); less hardware (lower power consumption, improved MTBF)
- Possible drawbacks: limited resources for VMs in the storage controllers; loss of redundancy

DDN SFA12K-E

Glimpse at Performance
- Preliminary performance evaluation: IOZONE testing with parallel threads on an XFS file system (see the sketch below)
- Still a lot of work ahead: no tuning yet, neither of the file system nor of the controller setup
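A minimal sketch of the kind of IOZONE throughput run described above; the mount point, thread count and sizes are illustrative assumptions, not GridKa's actual test parameters, and the iozone binary must be installed:

```python
# Drive an IOZONE throughput test with parallel threads on an XFS mount.
# Paths, sizes and thread count are illustrative assumptions.
import subprocess

mount = "/mnt/xfs-test"                     # hypothetical XFS file system under test
threads = 16                                # number of parallel worker threads
files = [f"{mount}/iozone.{i}" for i in range(threads)]

cmd = [
    "iozone",
    "-i", "0", "-i", "1",                   # test 0: write/rewrite, test 1: read/reread
    "-r", "1m",                             # 1 MiB record size
    "-s", "8g",                             # 8 GiB file per thread
    "-t", str(threads),                     # throughput mode with N parallel threads
    "-F", *files,                           # one target file per thread
]
subprocess.run(cmd, check=True)             # prints aggregate throughput per test
```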

GridKa Tape Storage
- 2x Oracle/Sun/STK SL8500: 2x … slots, 22 LTO5 + 16 LTO4 drives
- 1x IBM TS library: … slots, 24 LTO4 drives
- 1x GRAU XL: 5376 slots, 16 LTO3 + 8 LTO4 drives
- >20k cartridges, 17 PB
- Migration to HPSS planned for 2014
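A rough consistency check of the cartridge and volume figures; the LTO native capacities in the sketch are general knowledge, not taken from the slide:

```python
# Average data per cartridge, from the figures quoted above.
total_pb = 17
cartridges = 20_000                     # ">20k cartridges"
avg_tb = total_pb * 1000 / cartridges
print(f"average per cartridge: {avg_tb:.2f} TB")   # ~0.85 TB

# For reference (general knowledge, not from the slide):
# LTO3 ~0.4 TB, LTO4 ~0.8 TB, LTO5 ~1.5 TB native capacity,
# so the average sits in the LTO4 range, consistent with a largely LTO4 library.
```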

100G WAN at GridKa
- Current WAN setup: 7x 10 Gb/s links to LHCOPN, LHCONE, the German research network and FZU Prague, plus 1x 1 Gb/s link to Poznan
- Participation in 100G tests at SC: 100G equipment provided by CISCO; 100G connection provided by DFN, time-shared by Aachen, Dresden and KIT
- Plan to move LHCOPN and LHCONE to the 100G link in 2014
- Replace the old Catalyst border routers: procurement of new Nexus 7k with 100G line cards already underway
- Requires a new arrangement of LHCOPN operation between KIT and DFN
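For context, the aggregate capacity of the current setup compared with the planned 100G link, using only the link counts quoted above:

```python
# Aggregate WAN capacity today vs. the planned single 100G link.
current_gbps = 7 * 10 + 1 * 1   # 7x 10 Gb/s + 1x 1 Gb/s = 71 Gb/s
planned_gbps = 100              # LHCOPN/LHCONE moved to one 100G link in 2014
print(f"current aggregate: {current_gbps} Gb/s, planned link: {planned_gbps} Gb/s")
```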

Configuration Management
- Still mostly using Cfengine 2
- Middleware services used as a testbed for Puppet: started in early 2012; still based on the old homegrown deployment infrastructure CluClo; very smooth operation
- Now starting to draw up plans for the Puppet migration: we'd like to try many new things (git integration, deployment management with Foreman, MCollective, …); will be a step-by-step process

bwLSDF – News from SCC/KIT outside GridKa
- New services for the state of Baden-Württemberg, run by SCC/KIT
- bwSync&Share: "dropbox for scientists"; winner of the software evaluation: PowerFolder; start of production Jan 1st 2014; expecting 55k active users from all universities, 10 GB quota
- bwFileStorage: simple/overflow storage for scientific data; access via SCP, SFTP, HTTPS (read-only); provided by IBM SONAS; start of production Dec 1st 2013
- bwBlockStorage: iSCSI storage over WAN for universities
- All services are based on storage hosted at the Large Scale Data Facility
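Since bwFileStorage offers read-only HTTPS access alongside SCP/SFTP, a client-side download could look like the following minimal sketch; the hostname, path and use of basic authentication are illustrative assumptions, not the service's documented interface:

```python
# Hypothetical read-only HTTPS download from bwFileStorage.
# Hostname, path and authentication scheme are illustrative assumptions.
import requests

URL = "https://bwfilestorage.example.org/users/alice/results/run42.tar.gz"

resp = requests.get(URL, auth=("alice", "password"), stream=True, timeout=60)
resp.raise_for_status()

with open("run42.tar.gz", "wb") as out:
    for chunk in resp.iter_content(chunk_size=1 << 20):   # 1 MiB chunks
        out.write(chunk)
```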

bwIDM – The bwIDM Project: Vision
- Federated access to services of the State of Baden-Württemberg
- Access control based on local accounts of the home organizations
- bwIDM is not about establishing IDM systems, it's about federating existing IDM systems and services.

bwIDM Overview
- bwIDM is a federation of 9 universities of the state of Baden-Württemberg
- bwIDM federates the access to (non) web-based services such as grid, cloud, and HPC resources
LDAP Facade
- A deployable, operable, and maintainable approach to federating non web-based services:
  - the LDAP facade makes active use of the SAML-ECP and AssertionQuery profiles
  - the LDAP facade offers users high usability in trustworthy federations
  - the LDAP facade facilitates temporary trust for scientific portals
  - easy-to-deploy solution for service collaborations of universities, research centres or companies
  - single registration process per service
  - successfully deployed in testing environments
Deployed Services
- Federated HPC service bwUniCluster (8640 cores, 40.8 TiB RAM, IB FDR), going live in Q4/2013
- Federated Sync&Share service, going live in Q1/2014
Any questions? Feel free to contact me. If you have to bring non web-based services together with SAML, make use of the LDAP facade!
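To make the LDAP-facade idea concrete: a non web-based service that already speaks LDAP can keep issuing plain bind/search calls while the facade handles the SAML-ECP/AssertionQuery exchanges with the home organization behind the scenes. The sketch below is a hypothetical client-side view; the host, base DN and attribute names are illustrative assumptions, not bwIDM's actual schema:

```python
# Hypothetical client-side view of the LDAP facade: the service performs an
# ordinary LDAP bind + search; the facade translates this into SAML-ECP /
# AssertionQuery exchanges with the user's home organization.
# Host, base DN and attribute names are illustrative assumptions.
from ldap3 import Server, Connection, ALL

server = Server("ldaps://ldap-facade.bwidm.example.org", get_info=ALL)

# Bind with the user's federated identity as mapped by the facade.
conn = Connection(
    server,
    user="uid=alice,ou=users,dc=bwidm,dc=example,dc=org",
    password="home-organisation-password",
    auto_bind=True,
)

# Look up the POSIX attributes an HPC service (e.g. bwUniCluster) would need.
conn.search(
    search_base="ou=users,dc=bwidm,dc=example,dc=org",
    search_filter="(uid=alice)",
    attributes=["uidNumber", "gidNumber", "homeDirectory"],
)
print(conn.entries)
conn.unbind()
```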