PSI CMS T3 Status & '1[6-7-8-9] HW Plan - March '16 - Fabio Martinelli

Presentation transcript:

PSI CMS T3 Status & '1[6-7-8-9] HW Plan March '16 Fabio Martinelli

CPUs, VMs, dCache Storage, ZFS NFSv4 Storage ( news )

WNs/UIs                       | CPU      | Cores/Node | HS06/node | HS06/core | Tot cores | Tot HS06
20 * WN SL6 ( disposed soon ) | X5560    | 8          | 117.53    | 14.69     | 160       | 2350
11 * WN SL6                   | E5-2670  | 16         | 263       | 16.44     | 176       | 2893
4 * WN SL6                    | AMD 6272 | 32         | 241       | 7.53      | 128       | 964
Tot. 36                       |          |            |           |           | Tot. ~460 | Tot. ~6200
6 * UI SL6                    |          |            |           |           | 192       | 1446

dCache Storage                  | TB Net per System
3 * SUN x4500 ( Read-Only )     | 15
5 * SUN x4540 ( Read-Only )     | 31
1 * SGI IS5500 ( Read-Write )   | 270
1 * NetApp E5400 ( Read-Write ) |
Tot. ~200 ( Read-Only ), Tot. ~550 ( Read-Write )
Files are replicated on the Read-Only pools to improve the storage bandwidth.

VMs:
- Sun Grid Engine master + MySQL DB
- Site BDII, dCache SRM, dCache PostgreSQL
- Ganglia Web, LDAP Server, Nagios 4
- CMS Frontier ( Squid ), CMS PhEDEx
- OSSEC, SALTSTACK
- CVMFS ( Squid )

ZFS NFSv4 Storage            | TB Net per System
1 * HP G9 ( NFSv4 server )   | ~10 ( 24 * 600GB 15K SAS disks )
1 * HP G9 ( backup server )  | ~23 ( 12 * 3TB 7.2K SATA disks )
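A quick cross-check of the worker-node totals quoted above, using only the per-node figures from the table (the rounding to ~460 and ~6200 is the slide's):

awk 'BEGIN {
  cores = 20*8 + 11*16 + 4*32            # 464   -> quoted as "Tot. ~460"
  hs06  = 20*117.53 + 11*263 + 4*241     # ~6208 -> quoted as "Tot. ~6200"
  printf "WN cores: %d   WN HS06: %.0f\n", cores, hs06
}'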

The new T3 shared /home ( NFSv4 server )

1 x HP DL380 G9 featuring:
- 24 x SAS 600GB 15k disks
- 2 x SAS 146GB 15k disks ( mdadm RAID1 )
- 1 x HBA Controller P440 for the 24+2 disks
- 256GB RAM
- 2 x CPU E5-2660v3 ( Tot 40 cores )
- 8 x 1Gbit/s Ethernet, to be replaced in Spring by 2 x 10Gbit/s BASE-T
- Cost ~25KCHF
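A minimal setup sketch for a box like this, assuming Linux software RAID for the two OS disks and ZFS over the 24 data disks; the device names and the 2 x 12-disk raidz2 split are illustrative assumptions, only the pool name data01 is taken from the zfs list output shown later:

# OS disks: mdadm RAID1 over the two 146GB SAS disks (device names assumed)
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda /dev/sdb

# Data disks: one ZFS pool over the 24 x 600GB SAS disks behind the P440;
# the two 12-disk raidz2 vdevs are an assumption, not stated on the slide
zpool create data01 raidz2 /dev/sd[c-n] raidz2 /dev/sd[o-z]

# LZ4 compression set once at the top, inherited by every descendant filesystem
zfs set compression=lz4 data01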

The new T3 shared /home ( backup server )

1 x HP DL380 G9 featuring:
- 12 x SATA 3TB disks
- 2 x SAS 146GB 15k disks ( mdadm RAID1 )
- 1 x HBA Controller P440 for the 12+2 disks
- 64GB RAM
- 2 x CPU E5-2660v3 ( Tot 40 cores )
- 8 x 1Gbit/s Ethernet, to be replaced in Spring by 2 x 10Gbit/s BASE-T
- Cost ~10KCHF
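The slides do not say how the backup server is fed; a common pattern with a ZFS pair like this is snapshot replication over SSH. A sketch under that assumption (the host name t3nfs02, the pool name backup01 and the snapshot names are hypothetical):

TODAY=$(date +%F)
PREV=2016-03-01                         # previous snapshot name, placeholder
zfs snapshot -r data01/shome@$TODAY     # recursive snapshot of all user filesystems
zfs send -R -i @$PREV data01/shome@$TODAY | ssh t3nfs02 zfs receive -F backup01/shome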

[root@nfs4-server ~]# zfs list -t filesystem -o name,used,mountpoint,compressratio
NAME                    USED   MOUNTPOINT                  LZ4 RATIO
data01                  4.86T  /zfs/data01                 1.35x   ← SETTINGS ARE INHERITED BY DESCENDANTS
data01/shome            4.34T  /zfs/data01/shome           1.33x
data01/shome/amarini    43.2G  /zfs/data01/shome/amarini   1.12x
data01/shome/aspiezia   6.27G  /zfs/data01/shome/aspiezia  2.14x
data01/shome/bbilin     1.61M  /zfs/data01/shome/bbilin    1.00x
data01/shome/bianchi    74.1G  /zfs/data01/shome/bianchi   1.14x
data01/shome/caber      19.1G  /zfs/data01/shome/caber     1.19x
data01/shome/casal      123G   /zfs/data01/shome/casal     1.32x
...

[root@wn ~]# grep nfs4 /proc/mounts   ← THE WN MOUNTS/UMOUNTS A USER FS WHEN HIS JOB RUNS/ENDS
t3nfs01:/zfs/data01/ /mnt nfs4 rw,relatime,vers=4,rsize=1048576,wsize=1048576,namlen=255,hard,proto=tcp,port=0,timeo=600,retrans=2,sec=sys,clientaddr=192.33.123.90,minorversion=0,local_lock=none,addr=192.33.123.71 0 0
t3nfs01:/zfs/data01/shome /mnt/shome nfs4 rw,relatime,vers=4,rsize=1048576,wsize=1048576,namlen=255,hard,proto=tcp,port=0,timeo=600,retrans=2,sec=sys,clientaddr=192.33.123.90,minorversion=0,local_lock=none,addr=192.33.123.71 0 0
t3nfs01:/zfs/data01/shome/amarini /mnt/shome/amarini nfs4 rw,relatime,vers=4,rsize=1048576,wsize=1048576,namlen=255,hard,proto=tcp,port=0,timeo=600,retrans=2,sec=sys,clientaddr=192.33.123.90,minorversion=0,local_lock=none,addr=192.33.123.71 0 0
...
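The on-demand mounting visible above is what an automounter provides; a sketch of how it could be wired up, assuming autofs on the WNs. The map file name, the example user and the quota value are assumptions; t3nfs01, /mnt/shome and the per-user filesystems are the ones shown above:

# Server side: one ZFS filesystem per user; compression=lz4 is inherited from data01
zfs create -o quota=200G data01/shome/newuser     # "newuser" and the quota are hypothetical

# WN side, /etc/auto.master: hand the /mnt/shome directory to autofs
/mnt/shome  /etc/auto.shome  --timeout=600

# WN side, /etc/auto.shome: wildcard map, the accessed key replaces "&", so the
# NFSv4 mount appears on first access by a job and expires again after the timeout
*  -fstype=nfs4,rw,hard  t3nfs01:/zfs/data01/shome/&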

Questions ?