University of Wisconsin-Madison CMS Tier-2 Site Report
D. Bradley, S. Dasu, A. Mohapatra, T. Sarangi, C. Vuosalo
HEPiX, Lincoln, NE

Presentation transcript:

Slide 1: University of Wisconsin-Madison CMS Tier-2 Site Report
D. Bradley, S. Dasu, A. Mohapatra, T. Sarangi, C. Vuosalo, HEP Computing Group
Outline: Infrastructure, Resources, Management & Operation, Contributions to CMS, Summary

Slide 2: History
Started out as a Grid3 site more than a decade ago.
Played a key role in the formation of the Grid Laboratory of Wisconsin (GLOW), a HEP/CS (Condor team) collaboration.
Designed a standalone MC production system, adapted CMS software, and ran it robustly in non-dedicated environments (UW grid and beyond).
Selected as one of the seven CMS Tier-2 sites in the US.
Became a member of WLCG and subsequently OSG; serving all OSG-supported VOs besides CMS.

Slide 3: Infrastructure
3 machine rooms, 16 racks.
Power supply: 650 kW.
Cooling: chilled-water-based air coolers and POD-based hot aisles.

Slide 4: Compute / Storage Resources
[Chart: compute hours over the last year: CMS ~25M hrs; Chem; IceCube 39M hrs.]
Compute (SL6 OS):
T2 HEP Pool – 5300 cores (54K HS06), dedicated to CMS; a new purchase this year will add 1400 cores.
CHTC Pool – cores (opportunistic).
Storage:
File system: Hadoop (OSG has provided a hadoop-2.0 release since March).
PB of non-replicated storage with replication factor = 2; 800 TB will be added soon.
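Since the storage figure above is quoted as non-replicated capacity while each block is stored twice, raw disk and usable space differ by the replication factor. A minimal sketch of that arithmetic, with a made-up capacity figure (the site's actual number is not stated above):

```python
# Minimal sketch: usable vs. raw HDFS capacity with a fixed replication factor.
# The capacity value below is a made-up placeholder, not the site's number.

def usable_capacity_pb(raw_pb: float, replication_factor: int = 2) -> float:
    """Return the non-replicated (usable) capacity for a given raw capacity."""
    return raw_pb / replication_factor

if __name__ == "__main__":
    raw_pb = 7.0  # hypothetical raw disk capacity in PB
    print(f"{raw_pb} PB raw with replication=2 -> "
          f"{usable_capacity_pb(raw_pb):.1f} PB usable")
```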

Slide 5: Network Configuration
[Network diagram: Internet2, NLR, ESnet, Chicago, Purdue, FNAL, Nebraska, UW Campus, T2 LAN, server switch; link speeds of 1 Gb, 10 Gb, 3x10 Gb and 4x10 Gb, plus new 100 Gb uplinks.]
Perfsonar (latency and bandwidth) nodes are used to debug LAN and WAN (plus USCMS cloud) issues.

Slide 6: 100Gb Upgrade
Strong support from the UW campus network team.
Upgraded to a 100 Gb/s WAN switch this summer.
Room-to-room bandwidth will be 60 Gb/s by the end of the year.
This will push data transfers to more than 20 Gb/s; the current maximum transfer rate to Wisconsin is ~10 Gb/s.
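For a rough sense of scale (not from the slides), a sustained 20 Gb/s corresponds to roughly 216 TB moved per day; a quick conversion:

```python
# Back-of-the-envelope conversion: sustained network rate -> daily data volume.
def tb_per_day(rate_gbps: float) -> float:
    """Convert a sustained rate in Gb/s to TB transferred per day."""
    bytes_per_sec = rate_gbps * 1e9 / 8      # bits/s -> bytes/s
    return bytes_per_sec * 86400 / 1e12      # seconds per day, bytes -> TB

print(f"20 Gb/s sustained ~ {tb_per_day(20):.0f} TB/day")   # ~216 TB/day
print(f"10 Gb/s sustained ~ {tb_per_day(10):.0f} TB/day")   # ~108 TB/day
```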

Slide 7: IPv4 → IPv6
A total of 350 machines (compute and storage nodes/elements) connect to the outside world using a dual-stack IPv4/IPv6 network.
IPv6 is currently statically configured (IPv4 via DHCP initially).
OSG services work in IPv6-only and dual-stack modes: IPv6 is enabled for the GridFTP servers and works with IPv4/IPv6.
Xrootd (non-OSG release) has also been tested to work in IPv6-only and dual-stack modes.
Hadoop and SRM communications haven't been tested with IPv6.
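A simple way to spot-check whether a service host publishes both IPv4 and IPv6 addresses is a dual-stack DNS lookup. The sketch below is illustrative only; the hostname is a made-up placeholder, not one of the site's servers, and port 2811 is simply the standard GridFTP control port.

```python
# Minimal dual-stack spot check (illustrative; the hostname is hypothetical).
import socket

def address_families(host: str, port: int = 2811) -> set[str]:
    """Return the IPv4/IPv6 addresses that DNS reports for host:port."""
    families = set()
    for family, _, _, _, sockaddr in socket.getaddrinfo(host, port):
        if family == socket.AF_INET:
            families.add("IPv4 " + sockaddr[0])
        elif family == socket.AF_INET6:
            families.add("IPv6 " + sockaddr[0])
    return families

if __name__ == "__main__":
    print(address_families("gridftp.example.org"))  # hypothetical GridFTP host
```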

Slide 8: Software and Services
File systems and proxy services: AFS, NFS, CernVM-FS, Frontier/Squid.
Job batch system: HTCondor.
OSG software stack: Globus, GUMS, glexec, GRAM CEs, SEs, HTCondor-CE (new).
Storage and services: Hadoop (HDFS), BeStMan2 SRM, GridFTP, Xrootd.
Cluster management and monitoring: Puppet, Ganeti, Nagios, Ganglia.

Slide 9: Cluster Management & Monitoring
Puppet: in use for the last 2+ years; controls every aspect of software deployment for the T2; integrated with Foreman for monitoring.
Ganeti (Debian): virtual machine manager, with DRBD and KVM as the underlying technologies; VMs host SRM, GUMS, HTCondor-CE, RSV, CVMFS, Puppet and PhEDEx.
Nagios: hardware, disk temperatures, etc.
Ganglia: services, memory, CPU/disk usage, I/O, network, storage.
OSG and CMS dedicated tools: RSV, SAM, HammerCloud, Dashboard.
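Checks in a Nagios setup like the one above are typically small scripts that follow the Nagios plugin exit-code convention (0 = OK, 1 = WARNING, 2 = CRITICAL). The sketch below is a generic illustration, not one of the site's actual checks, and the thresholds are made up.

```python
#!/usr/bin/env python3
# Generic Nagios-style disk-usage check (illustrative; thresholds are made up).
import shutil
import sys

WARN_PCT = 85.0   # hypothetical warning threshold
CRIT_PCT = 95.0   # hypothetical critical threshold

def check_disk(path: str = "/") -> int:
    usage = shutil.disk_usage(path)
    used_pct = 100.0 * usage.used / usage.total
    if used_pct >= CRIT_PCT:
        print(f"CRITICAL - {path} is {used_pct:.1f}% full")
        return 2
    if used_pct >= WARN_PCT:
        print(f"WARNING - {path} is {used_pct:.1f}% full")
        return 1
    print(f"OK - {path} is {used_pct:.1f}% full")
    return 0

if __name__ == "__main__":
    sys.exit(check_disk(sys.argv[1] if len(sys.argv) > 1 else "/"))
```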

Slide 10: Any Data, Anytime, Anywhere
Goal: make all CMS data transparently available to any CMS physicist, anywhere.
Transparent and efficient local/remote data access: no need to know where the data is located.
Reliable access, i.e. failures are hidden from the user's view.
Ability to run CMS software from non-CMS-managed worker nodes.
Excess demand for CPUs can be scheduled to overflow to remote resources.
The technologies that make this possible: Xrootd (read data from anywhere), CVMFS + Parrot (read software from anywhere), glideinWMS/HTCondor (send jobs anywhere).
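As an illustration of the "read data from anywhere" piece, files can be opened over Xrootd through a redirector instead of a local path. The sketch below assumes a PyROOT installation and a valid grid proxy; the redirector hostname and file path are placeholders, not values from this talk.

```python
# Minimal sketch of remote reads over Xrootd with PyROOT.
# The redirector and file path below are placeholders, not values from the slides.
import ROOT

def open_remote(url: str) -> None:
    """Open a ROOT file through an Xrootd redirector and list its contents."""
    f = ROOT.TFile.Open(url)
    if not f or f.IsZombie():
        raise RuntimeError(f"could not open {url}")
    f.ls()      # print the keys stored in the file
    f.Close()

if __name__ == "__main__":
    # "root://<redirector>//store/<path>" is the generic remote-access pattern.
    open_remote("root://xrootd-redirector.example.org//store/user/somefile.root")
```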

Slide 11: Any Data, Anytime, Anywhere Scale Tests
AAA scale test using HTCondor: the Tier-2 cluster at Wisconsin provides 10K Condor job slots for the scale test (running in parallel to the main Condor pool).
Underlying technology for data access: Xrootd, which works with heterogeneous storage systems.
[Plot: 10K-file read test from/to Wisconsin through the FNAL global Xrootd redirector on 08/08/14.]
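For pool-level bookkeeping during a test like this (e.g. counting available job slots), the HTCondor Python bindings can query the collector. This is a generic sketch, not the harness used for the test; it assumes the htcondor bindings are installed and a collector is reachable from the local configuration.

```python
# Count slots by state in an HTCondor pool (generic sketch, not the test harness).
# Assumes the htcondor Python bindings and access to the pool's collector.
import collections
import htcondor

def slot_states() -> collections.Counter:
    coll = htcondor.Collector()           # default collector from the local config
    ads = coll.query(htcondor.AdTypes.Startd, projection=["Name", "State"])
    return collections.Counter(ad.get("State", "Unknown") for ad in ads)

if __name__ == "__main__":
    for state, count in slot_states().items():
        print(f"{state}: {count}")
```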

Slide 12: Summary
The site is in good health and performing well.
We are making our best effort to maintain high availability and reliability while productively serving CMS and the grid community.

Slide 13: Thank You! Questions / Comments?