BeStMan/DFS support in VDT
OSG Site Administrators Workshop, Indianapolis, August 6-7, 2009
Tanya Levshina, Fermilab

Storage in VDT
- BeStMan/FS
  - BeStMan full mode
  - BeStMan-gateway
- Xrootd
  - XrootdFS
  - GridFTP-Xrootd
- Hadoop: will be distributed via VDT by the end of 2009; can currently be installed via RPMs
  - GridFTP-HDFS
- Gratia transfer probe
- dCache
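As a hedged illustration of the RPM route for Hadoop mentioned above: the repository and exact package names below are placeholders, since the RPM source shown on the slide was not captured.

    # Sketch only: install Hadoop-based storage from RPMs
    # (repo/package names are illustrative, taken from the component names on the slide)
    yum install hadoop hadoop-fuse gridftp-hdfs   # data node + GridFTP-HDFS packages
    service hadoop status                         # verify the service after configuration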

Why use VDT?
- Provides an easy way to configure and enable services
  - One method to start/stop/enable/disable all services (vdt-control; see the sketch after this slide)
  - Common syntax for all configuration scripts
- Limits configuration options to the most commonly used ones
- Provides support for handling CAs, CRLs, and log rotation
- Installs grid software and common libraries
  - Globus, GridFTP, GUMS client, VOMS client, SRM clients
- Relatively easy way to upgrade software
  - Some problems remain with preserving configuration across upgrades
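A minimal sketch of the common service-control syntax, assuming a standard VDT install under $VDT_LOCATION; the service names used are examples.

    # Sketch of vdt-control usage (service names shown are examples)
    . $VDT_LOCATION/setup.sh        # set up the VDT environment
    vdt-control --list              # show known services and whether they are enabled
    vdt-control --on                # start all enabled services
    vdt-control --off bestman       # stop a single service
    vdt-control --enable bestman    # mark a service to be started by --on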

When VDT is not enough
- For a more sophisticated configuration you should use the "native" configuration scripts. The following additional options can be specified for BeStMan (see the sketch after this slide):
  - Different paths for the cache and the CA directory
  - Different checksum type (e.g., MD5)
  - Event log path or logging level
  - Number of concurrent file transfers
  - Number of GridFTP parallel streams, GridFTP buffer size
- VDT provides only very minimal Xrootd configuration
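A hedged sketch of tuning these settings directly in BeStMan's own configuration file, using the path from the "Where to find configuration and log files?" slide; the parameter names in the comments are illustrative placeholders, not confirmed keys, so check the BeStMan manual.

    # Sketch: adjust BeStMan beyond what VDT exposes by editing bestman.rc directly
    $EDITOR $VDT_LOCATION/bestman/conf/bestman.rc
    #   (illustrative placeholder keys: cache path, event log level,
    #    max concurrent transfers, gridftp parallel streams)
    # Restart BeStMan so the changes take effect:
    vdt-control --off bestman && vdt-control --on bestman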

BeStMan in OSG 1.2
- BeStMan can be installed from the OSG pacman cache (see the sketch after this slide):
  - pacman -get
- Installs the following services:
  - fetch-crl
  - vdt-rotate-logs
  - vdt-update-certs
  - gsiftp
  - gratia-gridftp-transfer
  - gums-host-cron
  - edg-mkgridmap
  - bestman
- Installs libraries (PRIMA, Globus, OpenSSL)
- You can configure BeStMan in two modes:
  - Full mode: full implementation of the SRM protocol
  - Gateway mode: partial implementation of the SRM protocol (does not support dynamic space allocation)
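A hedged sketch of the install; the pacman cache URL did not survive the transcript, so the cache location below is a placeholder rather than the one shown on the slide.

    # Sketch: install BeStMan from the OSG 1.2 pacman cache
    # (<osg-1.2-pacman-cache> is a placeholder for the URL on the slide)
    mkdir /opt/bestman && cd /opt/bestman
    pacman -get <osg-1.2-pacman-cache>:Bestman
    . ./setup.sh
    vdt-control --list        # verify the services listed above were installed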

BeStMan full mode in VDT
- Limited set of options you can specify (see the configuration sketch after this list):
  - --server (default "y"; adds bestman start/stop to /etc/init.d)
  - --user (default "daemon")
  - --cert (default /etc/grid-security/hostcert.pem)
  - --key (default /etc/grid-security/hostkey.pem)
  - --http-port (default)
  - --https-port (default)
  - --globus-tcp-port-range (default none)
  - --volatile-file-lifetime (default 1800)
  - --cache-size (default: your file system size)
  - --gums-host (default none)
  - --gums-port (default none)
  - --with-transfer-servers (default localhost, port 2811)
  - --with-allowed-paths (default "any")
  - --with-blocked-paths (default "none")
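A hedged example of putting these options together; the configure script path is an assumption based on the usual VDT naming, and the hostnames and argument formats are illustrative.

    # Sketch: configure BeStMan in full mode via the VDT configure script
    # (script path assumed; option names are the ones listed above;
    #  hostnames and argument formats are examples)
    $VDT_LOCATION/vdt/setup/configure_bestman \
        --server y \
        --user daemon \
        --cert /etc/grid-security/hostcert.pem \
        --key  /etc/grid-security/hostkey.pem \
        --gums-host gums.example.edu --gums-port 8443 \
        --with-transfer-servers gsiftp://gridftp.example.edu:2811 \
        --with-allowed-paths /data/cache
    vdt-control --enable bestman && vdt-control --on bestman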

BeStMan-gateway mode in VDT
- Same options as for BeStMan full mode
- Additional options (see the sketch after this slide):
  - --enable-gateway (mandatory)
  - --with-tokens-list (if you want to use static space reservation)
- If you are planning to run BeStMan-gateway/Xrootd:
  - --use-xrootd instead of --enable-gateway
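A hedged gateway-mode variant of the previous sketch; the script path and the token-list syntax are assumptions for illustration only, so consult the BeStMan documentation for the exact format.

    # Sketch: configure BeStMan in gateway mode with a static space token
    # (script path and token-list syntax are assumptions)
    $VDT_LOCATION/vdt/setup/configure_bestman \
        --enable-gateway \
        --with-allowed-paths /data \
        --with-tokens-list "DATADISK[desc:DATADISK][20000]"
    # For a BeStMan-gateway/Xrootd setup, replace --enable-gateway with --use-xrootd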

Gratia Transfer Probe
- Gratia is the accounting service for OSG
  - Provides the stakeholders with a reliable and accurate set of views of Grid resource usage
  - Job and other accounting information gathered by Gratia probes, which run on the compute element or other site nodes, is reported to a Gratia collector (OSG collector)
  - Records are forwarded to the EGEE accounting system, APEL
- The transfer probe:
  - Reports the details of each file transfer made through the GridFTP server
  - Gets this information from the gridftp and gridftp-authorization logs
  - Runs as a cron job (see the sketch after this slide)
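A hedged way to confirm the probe is wired up, using the ProbeConfig and log paths from the "Where to find configuration and log files?" slide; the grep patterns are illustrative, not exact ProbeConfig attribute names.

    # Sketch: check that the gridftp-transfer probe is configured and scheduled
    grep -iE 'collector|sitename|enableprobe' \
        $VDT_LOCATION/gratia/probe/gridftp-transfer/ProbeConfig   # attribute names may differ
    crontab -l 2>/dev/null | grep -i gratia     # cron entry location may vary by install
    ls -ltr $VDT_LOCATION/gratia/var/logs | tail                  # recent probe logs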

Gratia transfer sites report

Where to find configuration and log files?
- VDT installation and configuration log:
  - $VDT_LOCATION/vdt-install.log
- BeStMan
  - Logs: $VDT_LOCATION/vdt-app-data/bestman/logs/event.srm.log
  - Cache: $VDT_LOCATION/vdt-app-data/bestman/cache
  - Configuration: $VDT_LOCATION/bestman/conf/bestman.rc
- GridFTP
  - Logs: $VDT_LOCATION/globus/var/log
- Gratia probe
  - Logs: $VDT_LOCATION/gratia/var/logs
  - Configuration: $VDT_LOCATION/gratia/probe/gridftp-transfer/ProbeConfig
- Xrootd
  - Logs: $VDT_LOCATION/xrootd/var/log
  - Configuration: $VDT_LOCATION/xrootd/etc
- XrootdFS
  - Configuration: $VDT_LOCATION/xrootdfs/bin/start.sh
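For quick troubleshooting, the paths above can be checked directly; a minimal sketch, with all paths taken from this slide.

    # Sketch: quick look at the most useful logs after an install or a failed transfer
    tail -n 50 $VDT_LOCATION/vdt-app-data/bestman/logs/event.srm.log   # BeStMan SRM events
    tail -n 100 $VDT_LOCATION/vdt-install.log                          # install/config history
    ls -ltr $VDT_LOCATION/globus/var/log | tail                        # recent GridFTP logs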

BeStMan and FS
- BeStMan-gateway has been installed and is working successfully on the following file systems:
  - NFS
  - Ibrix
  - PVFS2
  - GPFS
  - Xrootd
  - Hadoop
  - Lustre
  - REDDnet

BeStMan-gateway/FS
- BeStMan-gateway/Ibrix (ATLAS T2, Oklahoma University)
  - 3.2 GHz Xeon, 2 GB, 73 GB, 10 Gbps, plus 12 TB Ibrix
- BeStMan-gateway/NFS (ATLAS T2, Oklahoma University, ITB cluster)
  - 1.4 GHz P4, 512 MB, 20 GB, 100 Mbps
  - Uses an NFS-mounted ext3 file system as the storage location

BeStMan-gateway/Hadoop (I)
- Caltech:
  - BeStMan SRM server: 8 cores, 2.33 GHz, 12 GB RAM, 2x1 GbE, 4x750 GB SATA drives
  - 4 GridFTP servers with 2x10 GbE
  - Name node: 8 cores, 16 GB RAM (2 GB for the name node JVM)
  - Data nodes: 82 data nodes, 277 TB available space, batch worker nodes with FUSE mount
    - (62) 8 cores, 2.5 GHz, 16 GB RAM, 1 GbE, 2x1 TB SATA drives
    - (12) 8 cores, 2.33 GHz, 12 GB RAM, 1 GbE, 4x750 GB SATA drives
    - (4) 8 cores, 2.33 GHz, 8 GB RAM, 1 GbE, 13 TB hardware RAID across 24 drives
    - (4) 8 cores, 2.33 GHz, 8 GB RAM, 1 GbE, 6.5 TB hardware RAID across 12 drives
    - (2) Sun X4500 Thumpers with 36 TB ZFS and 10 GbE
- Nebraska:
  - 3 BeStMan servers
  - 5 GridFTP servers for HDFS; 1 with a 10 Gbps card, 4 with 1 Gbps cards
  - Name node: 8 cores (2.2 GHz Opteron), 16 GB RAM
  - Data nodes: 140 nodes, ~350 TB raw online currently, 500+ TB soon; most are batch worker nodes
    - 53x 4-core 2.2 GHz Opterons, 4 GB RAM
    - 78x 8-core 2.2 GHz Opterons, 16 GB RAM
    - 2x 4-core 2.8 GHz Opterons, 8 GB RAM (Sun X4500 'Thumpers' with 48x1 TB disks)
    - 3x 4-core 2.2 GHz Opterons, 8 GB RAM
    - 2x 4-core 2.0 GHz Xeons, 4 GB RAM (SCSI-attached vaults)

BeStMan-gateway/Hadoop (II)
- UCSD:
  - BeStMan: 8 cores, 8 GB RAM, dynamic GridFTP selector
  - Name node: 8 cores, 16 GB RAM
  - Data nodes: 4/8 cores, 8/16 GB RAM, 1 Gb uplink; 15 data nodes, 42 TB, batch worker nodes with FUSE mount
- LIGO:
  - 1 BeStMan, 1 GridFTP server
  - Data nodes: 13 storage nodes, 23 TB, batch worker nodes with FUSE mount

BeStMan-gateway/Xrootd
- SLAC:
  - BeStMan: dual 1.8 GHz, 2 GB RAM, 1 GB disk
  - 2 GridFTP servers: dual AMD 2218 (2.6 GHz), 8 GB RAM, 3x1 GbE
  - Redirector and CNS: dual 1.8 GHz, 2 GB RAM, disk GB
  - Data servers, total space ~268 TB usable:
    - 3x Sun X4500 (Thumper), dual AMD 285, 16 GB, 4x1 GbE, 17 TB usable space on ZFS
    - 7x Sun X4500 (Thumper), dual AMD 290, 16 GB, 4x1 GbE, 31 TB usable space on ZFS
- BNL, IN2P3, INFN, FZK, RAL all have Xrootd-only installations

BeStMan-gateway/Lustre
- LQCD (Fermilab):
  - 65 TB
  - 4 servers serve 2 SATABeast arrays; the metadata server is co-hosted on one of these nodes
- TTU

BeStMan-gateway/REDDnet
- Vanderbilt University:
  - 2 quad-core Opteron CPUs, 16 GB RAM, 10 Gbit Ethernet
  - 1700 batch slots

Useful links
- BeStMan documentation
- Hadoop documentation
- Xrootd documentation
- Lustre documentation
- REDDnet web page
- OSG Gratia report
- VDT documentation
- OSG Installation Guides
- OSG Storage group mailing list