SE Installation and configuration (Disk Pool Manager)


SE Installation and configuration (Disk Pool Manager)
Nabil Talhaoui (talhaoui@cnrst.ma)
Joint EPIKH/EUMEDGRID-Support Event in Rabat, Morocco, 03.06.2011

Overview of grid Data Management
DPM Overview
DPM Installation
Troubleshooting

Grid Overview
HPC (High Performance Computing) can be reduced to two main challenges: CPU power and storage. The Grid offers a solution to both, and in this talk we analyze the Grid design of the storage system.

OVERVIEW
Assumptions:
- Users and programs produce and require data
- The lowest granularity of the data is at the file level (we deal with files rather than data objects or tables)
- Data = files

Files:
- Mostly write once, read many
- Located in Storage Elements (SEs)
- Several replicas of one file in different sites
- Accessible by Grid users and applications from "anywhere"

Also:
- The WMS can send (small amounts of) data to/from jobs: Input and Output Sandbox
- Files may be copied between local filesystems (WNs, UIs) and the Grid (SEs)

Overview
The Data Management System (DMS) is the subsystem of the gLite middleware that takes care of data manipulation, both for the other Grid services and for user applications. The DMS provides all the operations that users can perform on data:
- Creating files/directories
- Renaming files/directories
- Deleting files/directories
- Moving files/directories
- Listing directories
- Creating symbolic links
- etc.

DMS – Objectives
The DMS provides two main capabilities: File Management and Metadata Management. A file is the simplest way to organize data; metadata are "attributes" that describe other data.

File Management:
- storage (save, copy, read, list, …)
- placement (replica, transfer, …)
- security (access control, …)

Metadata Management:
- cataloguing
- secure database access
- database schema virtualization

Data Management Services
The Data Management System is composed of three main modules: the Storage Element, the Catalogs and the File Transfer Service.

Storage Element – common interface to storage:
- Storage Resource Manager: Castor, dCache, DPM, StoRM, …
- POSIX-like I/O: gLite-I/O, rfio, dcap, xrootd
- Access protocols: gsiftp, https, rfio, …

Catalogs – keep track of where data is stored:
- File Catalog
- Replica Catalog
- File Authorization Service
- Metadata Catalog

File Transfer – scheduled, reliable file transfer:
- Data Scheduler (only designs exist so far)
- File Transfer Service: gLite FTS and glite-url-copy (manage the physical transfer); Globus RFT, Stork
- File Placement Service: gLite FPS (FTS and catalog interaction in a transactional way)

OVERVIEW
The Storage Element is the service that saves/loads files to/from local storage, which can be either a single disk or a large storage system.

Functions:
- File storage
- Storage resources administration interface
- Storage space administration

gLite 3.2 data access protocols:
- File transfer: GSIFTP (GridFTP)
- File I/O (remote file access): gsidcap, insecure RFIO, secured RFIO (gsirfio)

SE Types
Classic SE:
- GridFTP server
- Insecure RFIO daemon (rfiod) – file access limited to the LAN
- Single disk or disk array
- No quota management
- Not supported anymore

Mass Storage Systems (Castor):
- Files migrated between front-end disk and back-end tape storage hierarchies
- Insecure RFIO (Castor)
- Provide an SRM interface with all its benefits

Disk pool managers (dCache and gLite DPM):
- Manage distributed storage servers in a centralized way
- Physical disks or arrays are combined into a common (virtual) file system
- Disks can be dynamically added to the pool
- Secure remote access protocols (gsidcap for dCache, gsirfio for DPM)
- SRM interface

StoRM:
- Solution best suited to cope with large storage (> or >> 100 TB)
- Takes full advantage of parallel filesystems (GPFS, Lustre)
- SRM v2.2 interface

Overview
Storage Resource Managers (SRMs) are middleware components whose function is to provide dynamic space allocation and file management on shared distributed storage systems. They provide the technology needed to manage the rapidly growing distributed data volumes produced by ever faster and larger computational facilities.

SRM (Storage Resource Manager)
Without SRM, you as a user would need to know all the storage systems (Castor, dCache, gLite DPM, StoRM)! SRM talks to them on your behalf: it will even allocate space for your files, and it will use transfer protocols to send your files there.

Disk Pool Manager Overview
The Disk Pool Manager (DPM) is a lightweight solution for disk storage management, which offers the SRM interfaces. It may act as a replacement for the obsolete classic SE, with the following advantages:
- SRM interface (both v1.1 and v2.2)
- Better scalability: DPM can manage 100+ TB, distributing the load over several servers
- High performance
- Lightweight management

The DPM head node has to have one filesystem in the pool; an arbitrary number of disk servers can then be added via YAIM, and the DPM disk servers can have multiple filesystems in the pool. The DPM head node also hosts the DPM and DPNS databases, as well as the SRM web service interfaces.

Disk Pool Manager Overview (architecture diagram)

DPM architecture (diagram)
The namespace (/dpm/<domain>/home/<vo>) is served by the DPM head node, which runs the DPM Name Server (namespace, authorization, physical file locations), the DPM server (request queuing and processing, space management) and the SRM servers (v1.1, v2.1, v2.2). Clients (CLI, C API, SRM-enabled clients, etc.) transfer file data directly from/to the DPM disk servers, which hold the physical files, so the head node is not a bottleneck.

DPM architecture
Usually the DPM head node hosts:
- SRM server (srmv1 and/or srmv2): receives the SRM requests and passes them to the DPM server
- DPM server: keeps track of all the requests
- DPM name server (DPNS): handles the namespace for all the files under DPM control
- DPM RFIO server: handles transfers for the RFIO protocol
- DPM GridFTP server: handles transfers for the GridFTP protocol

Installing DPM

Installing pre-requisites /1
Start from a fresh install of SLC 5.X (in this tutorial we use x86_64). The installation will pull in all dependencies, including the other necessary gLite modules and external dependencies.

Installing pre-requisites /2
We need a dedicated partition for the storage area. Check the partitions:

# df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/sda1             9.7G  820M  8.4G   9% /
/dev/sda2              19G   33M   18G   1% /storage
none                  125M     0  125M   0% /dev/shm

Adding Disk
FOR THIS TUTORIAL, ADD A DISK TO THE VIRTUAL MACHINE:
- Edit the VM settings before starting the VM
- Add a second disk (SCSI) – 20 GB is enough for the tutorial
- Start the virtual machine and log in
- fdisk -l (to check the disk exists)
- fdisk /dev/sdb (create a primary partition: n p 1 <enter> <enter>, then print and write: p w)
- mkfs /dev/sdb1
- mkdir /storage
- mount /dev/sdb1 /storage
- Edit /etc/fstab to properly mount the disk at boot:
  LABEL=/storage /storage ext3 defaults 1 2
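The disk-preparation steps above can be collected into a script. This is a hedged sketch assuming the tutorial's example device (/dev/sdb1) and mount point (/storage); the destructive commands are left commented so the helper can be reviewed and tested safely.

```shell
#!/bin/sh
# Sketch of the "Adding Disk" steps above. /dev/sdb1 and /storage are the
# tutorial's example device and mount point; adapt them to your VM.
DEV=/dev/sdb1
MNT=/storage

# Build the /etc/fstab entry shown on the slide (ext3, dump 1, fsck pass 2).
fstab_line() {
    echo "LABEL=$MNT $MNT ext3 defaults 1 2"
}

# On the real VM (as root) the sequence would be roughly:
#   fdisk /dev/sdb            # n p 1 <enter> <enter>, then p w
#   mkfs -t ext3 "$DEV"
#   e2label "$DEV" "$MNT"     # so the LABEL= entry resolves at boot
#   mkdir -p "$MNT" && mount "$DEV" "$MNT"
#   fstab_line >> /etc/fstab
fstab_line
```

Using a LABEL= entry (set with e2label) keeps the fstab line valid even if the device name changes after adding more disks.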

Repository settings
cd /etc/yum.repos.d/

Specify the mrepo host:
export MREPO=http://repo.magrid.ma/yumrepo/glite32

Configure the repository list as follows (note: no .repo suffix in the names, it is appended by the loop):
REPOS="dag lcg-CA glite-SE_dpm_mysql glite-SE_dpm_disk"

Get the repositories with:
for name in $REPOS; do wget $MREPO/$name.repo -O /etc/yum.repos.d/$name.repo; done
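A cleaned-up, runnable form of the repository step above; the wget call is commented because the mirror URL is the tutorial's and is only reachable from its network.

```shell
#!/bin/sh
# Repository download step from the slide above. Note that the repo *names*
# must not carry the .repo suffix, because the loop appends it.
MREPO=http://repo.magrid.ma/yumrepo/glite32
REPOS="dag lcg-CA glite-SE_dpm_mysql glite-SE_dpm_disk"

for name in $REPOS; do
    url="$MREPO/$name.repo"
    echo "fetching $url"
    # wget "$url" -O "/etc/yum.repos.d/$name.repo"   # uncomment on the SE node
done
```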

Installing pre-requisites /3
Synchronization among all gLite nodes is mandatory, so install ntp:
# yum install ntp
You can then check ntpd's status.

Installing pre-requisites /4
Check the FQDN (fully qualified domain name) and ensure that the hostnames of your machines are correctly set. Run the command:
# hostname -f
If your hostname is incorrect, edit the file /etc/sysconfig/network, set the HOSTNAME variable, then restart the network service.
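The FQDN check above can be sketched as a tiny helper. This is a hypothetical function (not part of gLite): it only checks that a host name contains a domain part, which is the symptom the slide warns about.

```shell
#!/bin/sh
# Hypothetical helper illustrating the FQDN check above: a grid host name
# must be fully qualified, i.e. contain a domain part.
is_fqdn() {
    case "$1" in
        *.*) return 0 ;;   # has at least one dot: looks fully qualified
        *)   return 1 ;;   # short name: fix /etc/sysconfig/network
    esac
}

# On the SE node you would check the real name:
#   is_fqdn "$(hostname -f)" || echo "set HOSTNAME in /etc/sysconfig/network"
is_fqdn "se01.magrid.ma" && echo "se01.magrid.ma is fully qualified"
```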

Installation
#yum clean all
#yum update
#yum install lcg-CA          (install the CAs)
#yum install mysql-server
#yum install mysql-devel

Install the metapackages – yum install <metapackage>:
#yum install glite-SE_dpm_mysql
#yum install glite-SE_dpm_disk

Installation – Host certificate
Copy the host certificate and key, located in /root/ as pcXXcert.pem and pcXXkey.pem, to /etc/grid-security (as hostcert.pem and hostkey.pem). Change the file permissions:
#chmod 644 /etc/grid-security/hostcert.pem
#chmod 400 /etc/grid-security/hostkey.pem
ftp://repo.magrid.ma/pub/GridSchoolConfFiles/

DPM Configuration #ls /root/sitedir/services Create a copy of site-info.def template to your reference directory for the installation (e.g. /root/sitedir): cp /opt/glite/yaim/examples/siteinfo/site-info.def /root/sitedir/mysite-info.def Copy the directory ‘services’ in the same location cp –r /opt/glite/yaim/examples/siteinfo/services /root/sitedir/. # ls /root/sitedir/ my-site-info.def services #ls /root/sitedir/services glite-se_dpm_disk glite-se_dpm_mysq ig-hlr Edit the site-info.def file A good syntax test for your site configuration file is to try to source it manually running the command: #source site-info.def #(after you end editing)

site.def
MY_DOMAIN=mydomainname        # your domain name (check it)
MYSQL_PASSWORD=passwd_root    # the MySQL root password
VOS="eumed"                   # the VO we want to support
ALL_VOMS_VOS="eumed"

Support for eumed VO
WMS_HOST=wms-01.eumedgrid.eu
LB_HOST="wms-01.eumedgrid.eu:9000"
LFC_HOST=lfc.ulakbim.gov.tr
BDII_HOST=wms-01.eumedgrid.eu
VOS="eumed"    # add here the VOs you want to support
VO_EUMED_SW_DIR=$VO_SW_DIR/eumed
VO_EUMED_DEFAULT_SE=$SE_HOST
VO_EUMED_STORAGE_DIR=$CLASSIC_STORAGE_DIR/eumed
VO_EUMED_VOMS_SERVERS="'vomss://voms2.cnaf.infn.it:8443/voms/eumed?/eumed' 'vomss://voms-02.pd.infn.it:8443/voms/eumed?/eumed'"
VO_EUMED_VOMSES="'eumed voms2.cnaf.infn.it 15016 /C=IT/O=INFN/OU=Host/L=CNAF/CN=voms2.cnaf.infn.it eumed' 'eumed voms-02.pd.infn.it 15016 /C=IT/O=INFN/OU=Host/L=Padova/CN=voms-02.pd.infn.it eumed'"
VO_EUMED_VOMS_CA_DN="'/C=IT/O=INFN/CN=INFN CA' '/C=IT/O=INFN/CN=INFN CA'"
VO_EUMED_WMS_HOSTS="prod-wms-01.pd.infn.it wms.ulakbim.gov.tr wms-01.eumedgrid.eu"

Support for eumed VO
Add the eumed pool accounts in /opt/glite/yaim/examples/users.conf, according to the format UID:LOGIN:GID:GROUP:VO:FLAG:, for example:
3101:eumed001:2418:eumed:eumed::
3102:eumed002:2418:eumed:eumed::
3103:eumed003:2418:eumed:eumed::
3104:eumed004:2418:eumed:eumed::
3105:eumed005:2418:eumed:eumed::
...

Add the following lines to /opt/glite/yaim/examples/groups.conf:
"/eumed/ROLE=SoftwareManager":::sgm:
"/eumed"::::
ftp://repo.magrid.ma/pub/GridSchoolConfFiles/

site.def (DPM)
In the files glite-se_dpm_disk and glite-se_dpm_mysql, set the variables:

DPM_HOST=<your host>.$MY_DOMAIN            # FQDN of the DPM head node
DPM_DB_USER=dpmmgr                         # the user for our database
DPM_DB_PASSWORD=grid2011                   # MySQL password
DPMFSIZE=200M                              # space reserved by default for a file stored in the DPM
DPMPOOL=Permanent                          # name of the pool including the filesystem (e.g. Permanent)
DPM_FILESYSTEMS="$DPM_HOST:/storage"       # the filesystems that are part of the pool
DPM_DB_HOST=$DPM_HOST
DPM_INFO_PASS=the-dpminfo-db-user-pwd
SE_GRIDFTP_LOGFILE=/var/log/dpm-gsiftp/dpm-gsiftp.log

The DPM can handle two different kinds of filesystems:
- volatile: the files contained in a volatile filesystem can be removed by the system at any time, unless they are pinned by a user
- permanent: the files contained in a permanent filesystem cannot be removed by the system

Firewall configuration
The following ports have to be open:
- DPM server: port 5015/tcp must be open at least locally at your site (can be incoming access as well)
- DPNS server: port 5010/tcp must be open at least locally at your site (can be incoming access as well)
- SRM servers: ports 8443/tcp (SRMv1) and 8444/tcp (SRMv2) must be opened to the outside world (incoming access)
- RFIO server: port 5001/tcp must be open to the outside world (incoming access), in case your site wants to allow direct RFIO access from outside
- GridFTP server: control port 2811/tcp and data ports 40000-45000/tcp (or any range specified by GLOBUS_TCP_PORT_RANGE) must be opened to the outside world (incoming access)

FOR THIS TUTORIAL, JUST STOP IPTABLES:
#service iptables stop
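Instead of stopping iptables, a production site would open exactly the ports listed above. This sketch only *prints* candidate iptables rules so they can be reviewed before being applied as root; it is an illustration, not the official site configuration.

```shell
#!/bin/sh
# Print iptables rules covering the DPM port list above.
dpm_firewall_rules() {
    # DPNS, DPM, RFIO, SRMv1, SRMv2, GridFTP control
    for port in 5010 5015 5001 8443 8444 2811; do
        echo "iptables -A INPUT -p tcp --dport $port -j ACCEPT"
    done
    # GridFTP data channels (or whatever GLOBUS_TCP_PORT_RANGE specifies)
    echo "iptables -A INPUT -p tcp --dport 40000:45000 -j ACCEPT"
}

dpm_firewall_rules
```

Review the output, adjust the RFIO rule to your site policy, then run the commands (and save with service iptables save) on the SE node.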

Middleware Configuration
/opt/glite/yaim/bin/yaim -c -s <your-site-info.def> -n glite-SE_dpm_mysql
/opt/glite/yaim/bin/yaim -c -s <your-site-info.def> -n glite-SE_dpm_disk

If you want to install the disks on another machine, run on the other machine:
/opt/glite/bin/yaim -c -s site-info.def -n ig_SE_dpm_disk
Then run (on the dpm_mysql machine):
dpm-addfs --poolname Permanent --server diskserverhostname --fs /storage2

After configuration, remember to manually run the script /etc/cron.monthly/create-default-dirs-DPM.sh, as suggested by the yaim log. This script creates and sets the correct permissions on the VO storage directories; it will be run monthly via cron.

DPM Server Testing

Testing DPM
A simple test to check whether the DPM server is correctly exporting the filesystem:
/opt/lcg/bin/dpm-qryconf

Post configuration
Log in to the UI (ui01.magrid.ma) and set the variables:
export DPM_HOST=pcXX.magrid.ma
export DPNS_HOST=pcXX.magrid.ma

Execute the following commands:
dpm-qryconf
dpns-ls /
dpns-mkdir
dpns-rm

#uberftp yourdpmhost.domain    (check if this connection works!)

Then try to really copy a file using globus:
#globus-url-copy file:/tmp/myfile gsiftp://yourdpmhost/dpm/magrid/home/eumed/testfile
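The UI-side smoke test above can be sketched as a script. pcXX.magrid.ma is the tutorial's placeholder for the student's DPM host; the dpm_dest helper is hypothetical and just builds the gsiftp destination used in the globus-url-copy test.

```shell
#!/bin/sh
# Sketch of the post-configuration smoke test on the UI.
DPM_HOST=pcXX.magrid.ma
DPNS_HOST=$DPM_HOST
export DPM_HOST DPNS_HOST

# Build the gsiftp destination for a file under a VO's home directory.
dpm_dest() {
    # $1 = VO name, $2 = file name
    echo "gsiftp://$DPM_HOST/dpm/magrid/home/$1/$2"
}

# With a valid proxy on the UI you would then run, e.g.:
#   dpm-qryconf
#   dpns-ls /dpm/magrid/home
#   globus-url-copy file:/tmp/myfile "$(dpm_dest eumed testfile)"
dpm_dest eumed testfile
```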

Other commands to build the namespace:
dpns-mkdir
dpns-chmod
dpns-chown
dpns-setacl

And commands to add pools and filesystems:
dpm-addfs
dpm-addpool

Mysql
The critical point of DPM is the database (MySQL). In a production site, take the appropriate precautions to back up the database: if you lose the database, you lose all your data! Consider taking a full backup of the machine, or using MySQL replication (http://dev.mysql.com/doc/refman/5.0/en/replication-howto.html).
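One hedged way to act on the backup advice above is a nightly mysqldump of the two DPM databases (cns_db holds the namespace, dpm_db the requests and pools). The user, path and options here are illustrative; the function only *prints* the commands so credential handling can be reviewed first.

```shell
#!/bin/sh
# Print mysqldump commands for the DPM databases (sketch, adapt to your site).
DBS="cns_db dpm_db"
BACKUP_DIR=/var/backups/dpm

backup_cmd() {
    # $1 = database name; --single-transaction avoids locking InnoDB tables
    echo "mysqldump -u dpmmgr -p --single-transaction $1 > $BACKUP_DIR/$1-$(date +%F).sql"
}

for db in $DBS; do
    backup_cmd "$db"
done
```

Wired into cron (with the password supplied via a protected ~/.my.cnf rather than -p), this gives point-in-time dumps to complement full-machine backups or replication.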

mysql DB
And take a look at the MySQL databases:

#mysql -p -u dpmmgr
Enter password: *****
mysql> show databases;
+----------+
| Database |
+----------+
| cns_db   |
| dpm_db   |
| mysql    |
| test     |
+----------+

mysql> connect dpm_db;
mysql> show tables;
+------------------+
| Tables_in_dpm_db |
+------------------+
| dpm_copy_filereq |
| dpm_fs           |
| dpm_get_filereq  |
| dpm_pending_req  |
| dpm_pool         |
| dpm_put_filereq  |
| dpm_req          |
| dpm_space_reserv |
| dpm_unique_id    |
| schema_version   |
+------------------+

mysql> connect cns_db;
mysql> show tables;
+--------------------+
| Tables_in_cns_db   |
+--------------------+
| Cns_class_metadata |
| Cns_file_metadata  |
| Cns_file_replica   |
| Cns_groupinfo      |
| Cns_symlinks       |
| Cns_unique_gid     |
| Cns_unique_id      |
| Cns_unique_uid     |
| Cns_user_metadata  |
| Cns_userinfo       |
| schema_version     |
+--------------------+

Log-files
If you have a problem, try to analyze the log files:
/var/log/dpns/log
/var/log/dpm/log
/var/log/dpm-gsiftp/dpm-gsiftp.log
/var/log/srmv1/log
/var/log/srmv2/log
/var/log/srmv2.2/log
/var/log/rfio/log
SE_GRIDFTP_LOGFILE=/var/log/globus-gridftp.log
(Files can be in different locations depending on the version of the packages installed.)

Reference
http://www.gridpp.ac.uk/wiki/Disk_Pool_Manager
https://twiki.cern.ch/twiki/bin/view/LCG/DpmGeneralDescription
http://igrelease.forge.cnaf.infn.it/doku.php?id=doc:guides:install-3_2