SE Installation and configuration (Disk Pool Manager)


SE Installation and configuration (Disk Pool Manager) Africa 4 2010 - Joint EUMEDGRID-Support/EPIKH School for Site Admin Cairo, 17-22 October 2010 Federico Bitelli bitelli<@>fis.uniroma3.it Physics Dep. Roma TRE University / INFN ROMA3

Outline
Overview of Grid Data Management
DPM Overview
DPM Installation
Troubleshooting

Grid Overview
We know that HPC (High Performance Computing) can be summarized in two main challenges: CPU power and storage. The Grid offers a solution to both, and in this talk we analyze the Grid design of the storage system.

OVERVIEW
Assumptions:
  Users and programs produce and require data.
  The lowest granularity of the data is at the file level (we deal with files rather than data objects or tables).
  Data = files.
Files:
  Mostly write once, read many.
  Located in Storage Elements (SEs).
  Several replicas of one file may exist in different sites.
  Accessible by Grid users and applications from "anywhere".
Also:
  The WMS can send (small amounts of) data to/from jobs: Input and Output Sandbox.
  Files may be copied between local filesystems (WNs, UIs) and the Grid (SEs).

The Grid Data Management Challenge (needs, requirements, solutions)
Heterogeneity: data are stored on different storage systems using different access technologies. A common interface to storage resources is required in order to hide the underlying complexity. Solutions: Storage Resource Manager (SRM) interface; gLite File I/O server.
Distribution: data are stored in different locations; in most cases there is no shared file system or common namespace. Data need to be moved between different locations, and there is a need to keep track of where data are stored. Solutions: File Transfer Service (FTS) to move files among Grid sites; catalogs to keep track of where data are stored.
Data retrieval: applications are located in different places from where data are stored, so a scheduled, reliable file transfer service is needed. Solutions: File Transfer Service, Data Scheduler, File Placement Service, Transfer Agent, File Transfer Library.
Security: data must be managed according to the VO membership access control policy; a centralized access control service is needed. Solution: File Authorization Service.

Overview
The Data Management System (DMS) is the subsystem of the gLite middleware that takes care of data manipulation for all other Grid services and for user applications. The DMS provides all the operations that users can perform on data:
  creating files/directories
  renaming files/directories
  deleting files/directories
  moving files/directories
  listing directories
  creating symbolic links
  etc.

DMS – Objectives
The DMS provides two main capabilities: File Management and Metadata Management. A file is the simplest way to organize data; metadata are "attributes" that describe other data.
File Management covers storage (save, copy, read, list, ...), placement (replication, transfer, ...) and security (access control, ...).
Metadata Management covers cataloguing, secure database access and database schema virtualization.

Data Management Services
The Data Management System is composed of three main modules: the Storage Element, the catalogs and the File Transfer Service.
Storage Element – common interface to storage:
  Storage Resource Manager: Castor, dCache, DPM, StoRM, ...
  POSIX-like I/O: gLite-I/O, rfio, dcap, xrootd
  Access protocols: gsiftp, https, rfio, ...
Catalogs – keep track of where data are stored:
  File Catalog, Replica Catalog, File Authorization Service, Metadata Catalog
File Transfer – scheduled, reliable file transfer:
  Data Scheduler (only designs exist so far)
  File Transfer Service: gLite FTS and glite-url-copy (manage the physical transfer); Globus RFT, Stork
  File Placement Service: gLite FPS (FTS and catalog interaction in a transactional way)

OVERVIEW
The Storage Element is the service that saves/loads files to/from local storage, which can be either a single disk or a large storage system.
Functions: file storage, administration interface for storage resources, storage space administration.
gLite 3.1 data access protocols:
  File transfer: GSIFTP (GridFTP)
  File I/O (remote file access): gsidcap, insecure RFIO, secure RFIO (gsirfio)

SE Types
Classic SE:
  GridFTP server
  Insecure RFIO daemon (rfiod) – file access limited to the LAN only
  Single disk or disk array
  No quota management
  Not supported anymore
Mass Storage Systems (Castor):
  Files migrated between front-end disk and back-end tape storage hierarchies
  Insecure RFIO (Castor)
  Provide an SRM interface with all its benefits
Disk pool managers (dCache and gLite DPM):
  Manage distributed storage servers in a centralized way
  Physical disks or arrays are combined into a common (virtual) file system
  Disks can be dynamically added to the pool
  Secure remote access protocols (gsidcap for dCache, gsirfio for DPM)
  SRM interface
StoRM:
  Solution best suited to cope with large storage (> or >> 100 TB)
  Takes full advantage of parallel filesystems (GPFS, Lustre)
  SRM v2.2 interface

Overview
Storage Resource Managers (SRMs) are middleware components whose function is to provide dynamic space allocation and file management on shared, distributed storage systems. They provide the technology needed to manage the rapidly growing distributed data volumes produced by ever faster and larger computational facilities.

SRM (Storage Resource Manager)
Without SRM, you as a user would need to know all the underlying systems (Castor, dCache, StoRM, gLite DPM). SRM talks to them on your behalf: it will even allocate space for your files and will use transfer protocols to send your files there.

Disk Pool Manager Overview
The Disk Pool Manager (DPM) is a lightweight solution for disk storage management that offers SRM interfaces. It may act as a replacement for the obsolete classic SE, with the following advantages:
  SRM interface (both v1.1 and v2.2)
  Better scalability: DPM can manage 100+ TB, distributing the load over several servers
  High performance
  Lightweight management
The DPM head node has to have one filesystem in the pool; an arbitrary number of disk servers can then be added through YAIM. The DPM disk servers can have multiple filesystems in the pool. The DPM head node also hosts the DPM and DPNS databases, as well as the SRM web service interfaces.

Disk Pool Manager Overview

DPM architecture (diagram)
Namespace: /dpm/<domain>/home/<vo>
Clients: CLI, C API, SRM-enabled clients, etc.
DPM head node:
  DPM Name Server – namespace, authorization, physical file locations
  DPM Server – request queuing and processing, space management
  SRM servers (v1.1, v2.1, v2.2)
DPM disk servers – hold the physical files
Data transfers go directly from/to the disk servers (no bottleneck on the head node).

DPM architecture
Usually the DPM head node hosts:
  SRM server (srmv1 and/or srmv2): receives the SRM requests and passes them to the DPM server
  DPM server: keeps track of all the requests
  DPM name server (DPNS): handles the namespace for all the files under DPM control
  DPM RFIO server: handles the transfers for the RFIO protocol
  DPM GridFTP server: handles the transfers for the GridFTP protocol
A quick way to check that these daemons are running is sketched below.
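A minimal sketch of such a check, assuming the usual gLite init script names (dpnsdaemon, dpm, srmv2.2, rfiod, dpm-gsiftp); the exact names may vary between middleware releases:
#service dpnsdaemon status (DPM name server)
#service dpm status (DPM server)
#service srmv2.2 status (SRM front-end; srmv1 can be checked the same way)
#service rfiod status (RFIO server)
#service dpm-gsiftp status (GridFTP server)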

Installing DPM

Installing pre-requisites /1
Start from a fresh install of SLC 5.X (in this tutorial we use x86_64). The installation will pull in all dependencies, including the other necessary gLite modules and the external dependencies.

Installing pre-requisites /2
We need a dedicated partition for the storage area. Check the partitions:
# df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/sda1             9.7G  820M  8.4G   9% /
/dev/sda2              19G   33M   18G   1% /storage
none                  125M     0  125M   0% /dev/shm

Adding a disk (FOR THIS TUTORIAL: ADD A DISK TO THE VIRTUAL MACHINE)
Edit the VM settings before starting the VM and add a second disk (SCSI); 20 GB is enough for the tutorial. Then start the virtual machine, log in and run:
fdisk -l (to check that the disk exists)
fdisk /dev/sdb (create a primary partition: n p 1 enter enter)
print and write (p w)
mkfs /dev/sdb1
mkdir /storage
mount /dev/sdb1 /storage
(edit /etc/fstab to properly mount the disk at boot!)
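A minimal sketch of the corresponding /etc/fstab entry; the ext3 filesystem type is an assumption, so adapt it to whatever mkfs actually created:
/dev/sdb1    /storage    ext3    defaults    0 2
After adding the line, run mount -a to verify that the entry mounts cleanly before rebooting.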

Just for the GILDA INFRASTRUCTURE
Repository settings:
cd /etc/yum.repos.d
wget http://server1.eun.eg/mrepo/repo/sl5/x86_64/dag.repo
wget http://server1.eun.eg/mrepo/repo/sl5/x86_64/ig.repo
wget http://server1.eun.eg/mrepo/repo/sl5/x86_64/lcg-ca.repo
Specific repository for DPM:
wget http://server1.eun.eg/mrepo/repo/sl5/x86_64/glite-se_dpm_mysql.repo -O /etc/yum.repos.d/glite-se_dpm_mysql.repo
Copy the gilda utils repository:
wget http://grid018.ct.infn.it/mrepo/repos/gilda.repo -O /etc/yum.repos.d/gilda.repo

Installing pre-requisites /3
Synchronization among all gLite nodes is mandatory, so install ntp:
#yum install ntp
You can then check ntpd's status.
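A minimal sketch of that check; the ntpq peer query assumes the ntpd service has already been started and enabled at boot:
#service ntpd start
#chkconfig ntpd on
#service ntpd status
#ntpq -p (lists the peers ntpd is synchronizing against)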

Installing pre-requisites /4
Check the FQDN (fully qualified domain name) and ensure that the hostnames of your machines are correctly set. Run the command:
#hostname -f
If your hostname is incorrect, edit the file /etc/sysconfig/network, set the HOSTNAME variable, and then restart the network service.
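A minimal sketch of that fix; se1.mydomain.org is a hypothetical host name used only for illustration:
#cat /etc/sysconfig/network
NETWORKING=yes
HOSTNAME=se1.mydomain.org
#service network restart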

Installing pre-requisites /5
Request host certificates for the SE from your RA ( http://roc.africa-grid.org/index.php?option=com_content&view=article&id=1149&Itemid=110 , http://roc.africa-grid.org/index.php?option=com_content&view=article&id=1171&Itemid=492 ).
Copy the host certificate and key (hostcert.pem and hostkey.pem) into /etc/grid-security and change the file permissions:
#chmod 644 /etc/grid-security/hostcert.pem
#chmod 400 /etc/grid-security/hostkey.pem
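A quick sanity check of the installed certificate, a sketch using standard openssl options to confirm that the subject matches the host FQDN and that the certificate has not expired:
#openssl x509 -in /etc/grid-security/hostcert.pem -noout -subject -dates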

Installation
#yum clean all
#yum update
#yum install java
#yum install lcg-CA (installs the CAs)
#yum install mysql-server (correction with respect to the previous slide)
For this tutorial also add the package gilda_utils:
#yum install gilda_utils
Install the metapackages (yum install <metapackage>):
#yum install ig_SE_dpm_mysql
#yum install ig_SE_dpm_disk
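A quick way to verify that the metapackages actually pulled in the DPM components; these are generic rpm queries, not DPM-specific tools:
#rpm -qa | grep -i dpm (lists the installed DPM packages)
#rpm -qa | grep -i yaim (checks that the YAIM configuration tool is present)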

DPM Configuration
Create a copy of the site-info.def template in your reference directory for the installation (e.g. /root/sitedir):
cp /opt/glite/yaim/examples/siteinfo/ig-site-info.def /root/sitedir/mysite-info.def
Copy the 'services' directory to the same location:
cp -r /opt/glite/yaim/examples/siteinfo/services /root/sitedir/.
# ls /root/sitedir/
mysite-info.def services
#ls /root/sitedir/services
glite-se_dpm_disk glite-se_dpm_mysql ig-hlr
Edit the site-info.def file. A good syntax test for your site configuration file is to try to source it manually once you have finished editing:
#source mysite-info.def

site.def
MY_DOMAIN=mydomainname              # your domain name (check it)
JAVA_LOCATION="/usr/java/latest"    # Java location (check it)
MYSQL_PASSWORD=passwd_root          # the MySQL root password
VOS="gilda eumed"                   # the VOs we want to support ...
ALL_VOMS_VOS="gilda eumed"

Support for the GILDA VO
WMS_HOST=gilda-wms-01.ct.infn.it
LB_HOST="gilda-wms-01.ct.infn.it:9000"
LFC_HOST=lfc-gilda.ct.infn.it
BDII_HOST=gilda-bdii.ct.infn.it
VOS="eumed gilda"    # add here the VOs you want to support
GILDA_GROUP_ENABLE="gilda"
VO_GILDA_VOMS_CA_DN="/C=IT/O=INFN/CN=INFN CA"
VO_GILDA_SW_DIR=$VO_SW_DIR/gilda
VO_GILDA_DEFAULT_SE=$DPM_HOST
VO_GILDA_STORAGE_DIR=$CLASSIC_STORAGE_DIR/gilda
VO_GILDA_VOMS_SERVERS="voms://voms.ct.infn.it:8443/voms/gilda?/gilda"
VO_GILDA_VOMSES="gilda voms.ct.infn.it 15001 /C=IT/O=INFN/OU=Host/L=Catania/CN=voms.ct.infn.it gilda"

Support for the GILDA VO
Add gilda pool accounts in /opt/lcg/yaim/examples/users.conf according to the following format:
UID:LOGIN:GID:GROUP:VO:FLAG:
Example:
4451:gildasgm:4400:gilda:gilda:sgm:
4401:gilda001:4400:gilda:gilda::
4402:gilda002:4400:gilda:gilda::
4403:gilda003:4400:gilda:gilda::
4404:gilda004:4400:gilda:gilda::
(... continue with the number of users you want)
Add the following lines to /opt/lcg/yaim/examples/groups.conf:
"/VO=gilda/GROUP=/gilda/ROLE=SoftwareManager":::sgm:
"/VO=gilda/GROUP=/gilda"::::
"/gilda/":::::

site.def (DPM)
In the files glite-se_dpm_disk and glite-se_dpm_mysql, set the variables:
DPM_HOST=<your host>.$MY_DOMAIN       # FQDN of the DPM head node
DPM_DB_USER=dpmmgr                    # the user for our database
DPM_DB_PASSWORD=mysql_pass            # MySQL password for that user
DPMFSIZE=200M                         # the space reserved by default for a file stored in the DPM
DPMPOOL=Permanent                     # the name of the pool (e.g. Permanent); see ** below for the file system types
DPM_FILESYSTEMS="$DPM_HOST:/storage"  # the filesystems that are part of the pool
DPM_DB_HOST=$DPM_HOST
DPM_INFO_PASS=the-dpminfo-db-user-pwd
SE_GRIDFTP_LOGFILE=/var/log/dpm-gsiftp/dpm-gsiftp.log
** The DPM can handle two different kinds of file systems:
  volatile: the files contained in a volatile file system can be removed by the system at any time, unless they are pinned by a user;
  permanent: the files contained in a permanent file system cannot be removed by the system.

Firewall configuration
The following ports have to be open:
  DPM server: port 5015/tcp must be open at least locally at your site (can be incoming access as well)
  DPNS server: port 5010/tcp must be open at least locally at your site (can be incoming access as well)
  SRM servers: ports 8443/tcp (SRMv1) and 8444/tcp (SRMv2) must be open to the outside world (incoming access)
  RFIO server: port 5001/tcp must be open to the outside world (incoming access), in case your site wants to allow direct RFIO access from outside
  GridFTP server: control port 2811/tcp and data ports 40000-45000/tcp (or any range specified by GLOBUS_TCP_PORT_RANGE) must be open to the outside world (incoming access)
FOR THIS TUTORIAL, JUST STOP IPTABLES:
#service iptables stop
A sketch of production-style iptables rules for these ports is given below.
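The rules for a production setup could look roughly like the following sketch (plain iptables commands; it assumes a policy that drops other traffic, so adapt interfaces, source ranges and the GridFTP data port range to your site, and make sure the rules land before any final REJECT rule):
#iptables -A INPUT -p tcp --dport 5015 -j ACCEPT    (DPM server)
#iptables -A INPUT -p tcp --dport 5010 -j ACCEPT    (DPNS server)
#iptables -A INPUT -p tcp --dport 8443 -j ACCEPT    (SRMv1)
#iptables -A INPUT -p tcp --dport 8444 -j ACCEPT    (SRMv2)
#iptables -A INPUT -p tcp --dport 5001 -j ACCEPT    (RFIO, only if remote RFIO access is wanted)
#iptables -A INPUT -p tcp --dport 2811 -j ACCEPT    (GridFTP control)
#iptables -A INPUT -p tcp --dport 40000:45000 -j ACCEPT    (GridFTP data ports)
#service iptables save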

Middleware Configuration
/opt/glite/yaim/bin/ig_yaim -c -s <your-site-info.def> -n ig_SE_dpm_mysql
/opt/glite/yaim/bin/ig_yaim -c -s <your-site-info.def> -n ig_SE_dpm_disk
If you want to install the disk servers on another machine, run on that machine:
/opt/glite/yaim/bin/ig_yaim -c -s site-info.def -n ig_SE_dpm_disk
Then run (on the dpm_mysql machine):
dpm-addfs --poolname Permanent --server diskserverhostname --fs /storage2

After configuration, remember to manually run the script /etc/cron.monthly/create-default-dirs-DPM.sh, as suggested by the YAIM log. This script creates and sets the correct permissions on the VO storage directories; afterwards it will be run monthly via cron.
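A sketch of that step and a simple way to verify the result; the dpns-ls path is an example, so replace eumedgrid.eun.eg with your own domain (and export DPNS_HOST=<your DPM host> first if the command cannot find the name server):
#sh /etc/cron.monthly/create-default-dirs-DPM.sh
#dpns-ls -l /dpm/eumedgrid.eun.eg/home    (one directory per supported VO should appear)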

DPM Server Testing

Testing DPM
A simple test to check whether the DPM server is correctly exporting the filesystem is:
dpm-qryconf

Post configuration
Log into the UI and set the variables:
  DPM_HOST: export DPM_HOST=<your DPM host>
  DPNS_HOST: export DPNS_HOST=<your DPM host>
Execute the following commands:
  dpm-qryconf
  dpns-ls /
  dpns-mkdir
  dpns-rm
#uberftp yourdpmhost.domain (check if this connection works!)
Then try to really copy a file using Globus:
# globus-url-copy file:/tmp/myfile gsiftp://yourdpmhost/dpm/roma3.infn.it/home/gilda/testfile
If you have an FTS server you could try to use your new DPM as an endpoint using glite-transfer-submit.
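If lcg_util is available on the UI, an SRM-level test can also be done with the standard lcg-* commands. This is only a sketch: it assumes the gilda VO is supported on the SE, a valid VOMS proxy, and a UI already configured with LCG_GFAL_INFOSYS and LFC_HOST for the VO:
voms-proxy-init --voms gilda
lcg-cr --vo gilda -d yourdpmhost.domain file:/tmp/myfile    (copies the file to the SE and registers it, printing a GUID)
lcg-cp --vo gilda guid:<the returned GUID> file:/tmp/myfile.back    (copies it back to verify the transfer)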

Other commands to build the namespace:
  dpns-mkdir
  dpns-chmod
  dpns-chown
  dpns-setacl
And commands to add pools and filesystems:
  dpm-addfs
  dpm-addpool
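A short usage sketch on a hypothetical directory; the path, the pool name Scratch and the disk server name are examples, not values mandated by the tutorial:
dpns-mkdir /dpm/mydomain.org/home/gilda/data
dpns-chmod 775 /dpm/mydomain.org/home/gilda/data
dpm-addpool --poolname Scratch    (creates a new pool)
dpm-addfs --poolname Scratch --server yourdiskserver.mydomain.org --fs /storage02    (adds a filesystem to it)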

MySQL
The critical point of DPM is the database (MySQL). In a production site, take the appropriate precautions to back up the database: if you lose your database, you lose all your data!!! Consider taking a full backup of the machine or using MySQL replication (http://dev.mysql.com/doc/refman/5.0/en/replication-howto.html).
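A minimal dump-based backup sketch, assuming the two DPM databases are named dpm_db and cns_db as shown on the next slide; run it regularly from cron and keep copies off the head node:
#mysqldump -u root -p --databases dpm_db cns_db > /root/dpm-backup-$(date +%Y%m%d).sql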

mysql DB
Take a look at the MySQL databases:
#mysql -p -u dpmmgr
Enter password:*****
mysql> show databases;
+----------+
| Database |
+----------+
| cns_db   |
| dpm_db   |
| mysql    |
| test     |
+----------+
mysql> connect dpm_db;
mysql> show tables;
+------------------+
| Tables_in_dpm_db |
+------------------+
| dpm_copy_filereq |
| dpm_fs           |
| dpm_get_filereq  |
| dpm_pending_req  |
| dpm_pool         |
| dpm_put_filereq  |
| dpm_req          |
| dpm_space_reserv |
| dpm_unique_id    |
| schema_version   |
+------------------+
mysql> connect cns_db;
mysql> show tables;
+--------------------+
| Tables_in_cns_db   |
+--------------------+
| Cns_class_metadata |
| Cns_file_metadata  |
| Cns_file_replica   |
| Cns_groupinfo      |
| Cns_symlinks       |
| Cns_unique_gid     |
| Cns_unique_id      |
| Cns_unique_uid     |
| Cns_user_metadata  |
| Cns_userinfo       |
| schema_version     |
+--------------------+

Log files
If you have problems, try to analyze your log files:
/var/log/dpns/log
/var/log/dpm/log
/var/log/dpm-gsiftp/dpm-gsiftp.log
/var/log/srmv1/log
/var/log/srmv2/log
/var/log/srmv2.2/log
/var/log/rfio/log
SE_GRIDFTP_LOGFILE=/var/log/globus-gridftp.log
(Files can be in different locations depending on the version of the packages installed.)
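A simple way to watch them while reproducing a problem (plain tail; pick the logs that actually exist on your installation):
#tail -f /var/log/dpm/log /var/log/dpns/log /var/log/srmv2.2/log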

Log files
The DPM and DPNS logs are similar; below we describe only the DPM log. Each line contains:
  a timestamp;
  the process id of the daemon and the number of the thread taking care of the request;
  the name of the method called;
  the kind of request (put, get or copy);
  the error number (POSIX error numbers);
  useful information about the request (token/request id, file, etc.).

/var/log/dpm/log
04/14 20:37:26 29958,22 dpm_srv_put: DP092 - put request by /C=IT/O=GILDA/OU=Personal Certificate/L=Universita Roma Tre Dipartimento di Fisica/CN=Federico Bitelli/Email=bitelli@fis.uniroma3.it (101,102) from se1.eumedgrid.eun.eg
04/14 20:37:26 29958,22 dpm_srv_put: DP098 - put 1 02cb2c08-d89e-4c39-94b0-
04/14 20:37:26 29958,22 dpm_srv_put: DP098 - put 0 /dpm/eumedgrid.eun.eg/home/eumed/TESTSABATO1
04/14 20:37:26 29958,22 dpm_serv: incrementing reqctr
04/14 20:37:26 29958,22 dpm_serv: msthread signalled
04/14 20:37:26 29958,22 dpm_srv_put: returns 0, status=DPM_QUEUED
04/14 20:37:26 29958,1 msthread: calling Cpool_assign_ext
04/14 20:37:26 29958,1 msthread: decrementing reqctr
04/14 20:37:26 29958,1 msthread: calling Cpool_next_index_timeout_ext
04/14 20:37:26 29958,1 msthread: thread 1 selected
04/14 20:37:26 29958,1 msthread: calling Cthread_mutex_lock_ext
04/14 20:37:26 29958,1 msthread: reqctr = 0
04/14 20:37:26 29958,2 dpm_srv_proc_put: processing request 02cb2c08-d89e-4c39-94b0-b85487ce32fb from /C=IT/O=GILDA/OU=Personal Certificate/L=Universita Roma Tre Dipartimento di Fisica/CN=Federico Bitelli/Email=bitelli@fis.uniroma3.it
04/14 20:37:27 29958,2 dpm_srv_proc_put: calling Cns_stat
04/14 20:37:27 29958,2 dpm_srv_proc_put: calling Cns_creatx
04/14 20:37:27 29958,2 dpm_srv_proc_put: calling dpm_selectfs
04/14 20:37:27 29958,2 dpm_selectfs: selected pool: Permanent
04/14 20:37:27 29958,2 dpm_selectfs: selected file system: se1.eumedgrid.eun.eg:/storage
04/14 20:37:27 29958,2 dpm_selectfs: se1.eumedgrid.eun.eg:/storage reqsize=209715200, elemp->free=43106873344, poolp->free=43106873344
04/14 20:37:27 29958,2 dpm_srv_proc_put: returns 0, status=DPM_SUCCESS

/var/log/dpns/log
04/12 18:21:33 32276,0 Cns_srv_readdir: returns 0
04/12 20:45:29 32276,0 Cns_serv: Could not establish security context: _Csec_get_voms_creds: Cannot find certificate of AC issuer for vo eumed !
04/12 20:45:43 32276,0 Cns_serv: Could not establish security context: _Csec_get_voms_creds: Cannot find certificate of AC issuer for vo eumed !......
04/13 16:46:47 20681,0 Cns_srv_readdir: NS092 - closedir request by /C=IT/O=GILDA/OU=Personal Certificate/L=Universita Roma Tre Dipartimento di Fisica/CN=Federico Bitelli/Email=bitelli@fis.uniroma3.it (101,102) from rb.eumedgrid.eun.eg
04/13 16:46:47 20681,0 Cns_srv_readdir: returns 0
04/13 16:46:55 20681,0 Cns_srv_lstat: NS092 - lstat request by /C=IT/O=GILDA/OU=Personal Certificate/L=Universita Roma Tre Dipartimento di Fisica/CN=Federico Bitelli/Email=bitelli@fis.uniroma3.it (101,102) from rb.eumedgrid.eun.eg
04/13 16:46:55 20681,0 Cns_srv_lstat: NS098 - lstat 0 /dpm/eumedgrid.eun.eg/home/eumed
04/13 16:46:55 20681,0 Cns_srv_lstat: returns 0
04/13 16:46:55 20681,0 Cns_srv_opendir: NS092 - opendir request by /C=IT/O=GILDA/OU=Personal Certificate/L=Universita Roma Tre Dipartimento di Fisica/CN=Federico Bitelli/Email=bitelli@fis.uniroma3.it (101,102) from rb.eumedgrid.eun.eg
(In the example above, the "Cannot find certificate of AC issuer" errors appeared because the VOMS certificate for the eumed VO was missing.)

dpm-gsiftp.log
DATE=20070412192530.997070 HOST=se1.eumedgrid.eun.eg PROG=wuftpd NL.EVNT=FTP_INFO START=20070412192530.967208 USER=eumed001 FILE=/dpm/eumedgrid.eun.eg/home/eumed/PROVA BUFFER=87380 BLOCK=65536 NBYTES=5 VOLUME=?(rfio-file) STREAMS=1 STRIPES=1 DEST=1[192.168.0.221] TYPE=STOR CODE=226
DATE=20070412192645.807072 HOST=se1.eumedgrid.eun.eg PROG=wuftpd NL.EVNT=FTP_INFO START=20070412192645.777019 USER=eumed001 F...
DATE=20070412192820.436793 HOST=se1.eumedgrid.eun.eg PROG=wuftpd NL.EVNT=FTP_INFO START=20070412192820.407125 USER=eumed001 F...
DATE=20070412224551.260282 HOST=se1.eumedgrid.eun.eg PROG=wuftpd NL.EVNT=FTP_INFO START=20070412224551.217144 USER=eumed002 FILE=/dpm/eumedgrid.eun.eg/home/eumed/pippo2 BUFFER=87380 BLOCK=65536 NBYTES=0 VOLUME=?(rfio-file) STREAMS=1 STRIPES=1 DEST=1[192.168.0.221] TYPE=STOR CODE=226

SRM logs
The SRM (v1 and v2) logs contain:
  a timestamp;
  the process id of the daemon and the number of the thread taking care of the request;
  the DN of the user;
  the kind of request (PrepareToPut, PrepareToGet, etc.);
  the SRM error number;
  useful information about the request (GUID, SURL, etc.).

Adding a new Disk Server
On the disk server, repeat all the steps you made for the head node and then:
  edit the site.def and add your new file system: DPM_FILESYSTEMS="YourDiskServer:/storage02"
  # yum install ig_SE_dpm_disk
  # /opt/glite/yaim/bin/ig_yaim -c -s site-info.def -n ig_SE_dpm_disk
On the head node:
  # dpm-addfs --poolname Permanent --server YourDiskServer --fs /storage02

DPM ADVANCED TOOLS

DPM quotas
DPM terminology: a DPM pool is a set of filesystems on DPM disk servers.
Unix-like quotas:
  Quotas are defined per disk pool.
  Usage in a given pool is per DN and per VOMS FQAN; the primary group gets charged for usage.
  Quotas in a given pool can be defined/enabled per DN and/or per VOMS FQAN.
  Quotas can be assigned by the admin.
  Default quotas can be assigned by the admin and applied to new users/groups contacting the DPM.

DPM quotas
Unix-like quota interfaces:
User interface:
  dpns-quota – gives quota and usage information for a given user/group (restricted to the user's own information)
Administrator interface:
  dpns-quotacheck – computes the current usage on an existing system
  dpns-repquota – lists the usage and quota information for all users/groups
  dpns-setquota – sets or changes quotas for a given user/group

References
http://www.gridpp.ac.uk/wiki/Disk_Pool_Manager
https://twiki.cern.ch/twiki/bin/view/LCG/DpmGeneralDescription
http://igrelease.forge.cnaf.infn.it/doku.php?id=doc:guides:install-3_2