BNL Tier1 Report

Presentation transcript:

BNL Tier1 Report
- Worker nodes: Tier 1 added 88 Dell R430 nodes
- Easy Condor quota GUI
- dCache storage:
  - DDN will be retired; a Hitachi G400 will be added
  - Replication of data has been enabled for certain storage, transparently to users
  - NFS 4.1 enabled using a different NFS door
    - Newer kernel on clients
    - New dCache version and a change in the configuration
- Tier 3: currently about 70 users with Condor queues and storage space
- Ceph
- HPSS tape

Condor Group Quota Web GUI
https://github.com/fubarwrangler/group-quota
By William Strecker-Kellogg

Condor Group Quota Web GUI (screenshot; callouts: overflow, queue, easy edit)

Condor Group Quota Web GUI (screenshot; callouts: slider, name)

Tier 3 Setup
- Separate Condor queues
  - Each user belongs to an institution; each institution has its own group
- Separate storage
  - dCache: part of the production dCache; own 5 TB space; write once; access via https (browser), WebDAV, XRootD, NFS 4.1/pNFS, GridFTP and SRM (see the sketch after this slide)
  - GPFS: own space
- NX Desktop
  - Remote desktop; Mac, Windows, Linux and Android supported
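The dCache space can be reached with ordinary HTTP clients over the WebDAV door. Below is a minimal, hedged sketch of such a download with Python requests; the door URL, file path, proxy location and CA directory are illustrative placeholders, not the actual BNL endpoints.

```python
# Minimal sketch: fetch a file from the Tier 3 dCache space over https/WebDAV
# with an X.509 proxy.  Door URL, path, proxy file and CA directory below are
# assumptions for illustration, not BNL's real configuration.
import requests

DOOR = "https://dcache-door.example.org:2880"   # assumed WebDAV door
PROXY = "/tmp/x509up_u1000"                     # assumed grid proxy (cert+key)
CA_PATH = "/etc/grid-security/certificates"     # assumed CA directory

url = f"{DOOR}/pnfs/example.org/data/tier3/myuser/ntuple.root"
with requests.get(url, cert=PROXY, verify=CA_PATH, stream=True) as resp:
    resp.raise_for_status()
    with open("ntuple.root", "wb") as out:
        for chunk in resp.iter_content(chunk_size=1 << 20):
            out.write(chunk)
```

The same path is also mountable via NFS 4.1 or readable via XRootD and GridFTP with the corresponding clients.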

Tier 3 Condor groups
- Each institution has its own group with an easily adjustable quota
- The group quota can overflow into the parent quota (see the configuration sketch after this slide)
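For context, this overflow behaviour maps onto HTCondor's hierarchical group-quota settings (GROUP_QUOTA_<group> and GROUP_ACCEPT_SURPLUS_<group>). The snippet below is only an illustrative way to inspect such settings with the HTCondor Python bindings; the group names are hypothetical, and the real quotas are managed through the web GUI shown above.

```python
# Illustrative sketch: read hierarchical group-quota settings through the
# HTCondor Python bindings.  Group names are made-up examples.
import htcondor

def lookup(macro, default="<unset>"):
    """Return a config macro value, or a default if it is not defined."""
    try:
        return htcondor.param[macro]
    except KeyError:
        return default

for group in ("group_atlas", "group_atlas.tier3", "group_atlas.tier3.duke"):
    quota = lookup(f"GROUP_QUOTA_{group}")
    surplus = lookup(f"GROUP_ACCEPT_SURPLUS_{group}", "False")
    print(f"{group}: quota={quota}, accept_surplus={surplus}")
```

Enabling accept-surplus for a group is what lets it spill beyond its own quota when unused slots are available elsewhere in the hierarchy.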

Tier 3 Condor groups
- Group quotas can be adjusted easily through the web GUI

Storage download tool
Deficiencies of the current rucio download:
- The client host becomes a proxy between the source and the destination
- Transfers are not controlled; many simultaneous requests amount to a denial of service against the source storage
- No catalogs or other management tools
New tool:
- Uses the Rucio and FTS RESTful APIs; FTS performs the transfers (see the sketch after this slide)
- Provides a local personal catalog in an sqlite3 database
- Retries against many sources; retry is not automatic, but a retry will use a different source
- Provides tools for deletion
- Can be used with any other FTS-aware storage; a DDM endpoint is not necessary
- Could easily use a better database (e.g. MySQL, Postgres)
- Retry could be automated
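The following is a rough Python sketch of the idea, not the actual BNL tool: resolve replicas with the Rucio client, submit one FTS job that lists every source, and record the request in a local sqlite3 catalog. The FTS endpoint, proxy path, CA directory and catalog file are assumptions, and the X.509 proxy is assumed to be already delegated to FTS.

```python
# Rough sketch of the download-tool approach: Rucio for replica lookup,
# the FTS3 REST API for the actual transfer, sqlite3 for a personal catalog.
import sqlite3

import requests
from rucio.client import Client

FTS_ENDPOINT = "https://fts.example.org:8446"   # assumed FTS3 REST endpoint
PROXY = "/tmp/x509up_u1000"                     # assumed grid proxy
CA_PATH = "/etc/grid-security/certificates"     # assumed CA directory
CATALOG = "downloads.db"                        # local personal catalog

def list_sources(scope, name, scheme="gsiftp"):
    """Ask Rucio for every replica of a DID; extra sources become retries."""
    sources = []
    for rep in Client().list_replicas([{"scope": scope, "name": name}],
                                      schemes=[scheme]):
        for urls in rep["rses"].values():
            sources.extend(urls)
    return sources

def submit_transfer(sources, destination):
    """Submit one FTS job; FTS drives the transfer, not the client host."""
    job = {"files": [{"sources": sources, "destinations": [destination]}]}
    resp = requests.post(f"{FTS_ENDPOINT}/jobs", json=job,
                         cert=PROXY, verify=CA_PATH)
    resp.raise_for_status()
    return resp.json()["job_id"]

def record(scope, name, destination, job_id):
    """Append the request to the local sqlite3 catalog."""
    con = sqlite3.connect(CATALOG)
    con.execute("CREATE TABLE IF NOT EXISTS files "
                "(scope TEXT, name TEXT, dest TEXT, fts_job TEXT)")
    con.execute("INSERT INTO files VALUES (?, ?, ?, ?)",
                (scope, name, destination, job_id))
    con.commit()
    con.close()
```

A manual retry would call submit_transfer() again without the source that failed, which is how "a retry will use a different source" can be realised; automating that loop is the step listed as future work.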

NX Desktop
- GNOME, KDE and XDM desktops are supported
- Provides a persistent work environment: a network outage will not terminate your desktop

Current Ceph Setup

Some activities in the recent weeks (plot: Ceph S3 network traffic)
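For reference, the Ceph object store is exposed through an S3 gateway, so any standard S3 client can generate this kind of traffic. Below is a hedged boto3 example; the endpoint URL, bucket name and credentials are placeholders, not the production BNL configuration.

```python
# Illustrative only: talk to a Ceph object store through its S3 gateway with
# boto3.  Endpoint, bucket and credentials are assumed placeholders.
import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="https://cephgw.example.org:8443",  # assumed RadosGW endpoint
    aws_access_key_id="ACCESS_KEY",
    aws_secret_access_key="SECRET_KEY",
)

# Upload a local file and list the bucket contents.
s3.upload_file("results.root", "mybucket", "user/results.root")
for obj in s3.list_objects_v2(Bucket="mybucket").get("Contents", []):
    print(obj["Key"], obj["Size"])
```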

Ceph Project
- Monthly Ceph meeting to share working knowledge
  - The first meeting was among CERN, BNL and RAL; plan to expand a bit to other sites in WLCG
- Ceph Day during the next ATLAS Computing and Software workshop: http://www.eventbrite.com/e/ceph-day-switzerland-tickets-17342577115
- BNL XRootD cache over CephFS
  - Data cache used by ATLAS; uses FRM (the XRootD File Residency Manager)
- Ceph as a front end to the HPSS tape system

HPSS Performance Monitor (by David Yu)
- The fraction of tape drives used for reading from and writing to tape can be adjusted
- Over 20 tape drives
- Performance varies with the size of the files

Data Size Distribution in HPSS Tape
- (Plots of the data-size distribution for tape writes and reads.)
- ATLAS use of tape is dominated by small files, and small files are not as efficient as large ones in the tape system (a rough illustration follows)
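A back-of-the-envelope model shows why small files hurt: each file pays a roughly fixed positioning/metadata cost before the drive can stream. The rate and overhead below are assumed round numbers, not measured HPSS figures.

```python
# Back-of-the-envelope illustration of why small files are inefficient on
# tape.  Both constants are assumptions for illustration only.
DRIVE_RATE_MB_S = 250.0      # assumed sustained streaming rate of one drive
PER_FILE_OVERHEAD_S = 5.0    # assumed per-file positioning/metadata cost

def effective_rate(file_size_mb: float) -> float:
    """Average MB/s delivered once the per-file overhead is included."""
    return file_size_mb / (file_size_mb / DRIVE_RATE_MB_S + PER_FILE_OVERHEAD_S)

for size_mb in (10, 100, 1000, 10000):
    print(f"{size_mb:>6} MB files -> {effective_rate(size_mb):6.1f} MB/s effective")
```

With these assumed numbers a drive delivers only about 2 MB/s on 10 MB files but over 200 MB/s on 10 GB files, which is the pattern behind the distribution above.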