gLite Data Management System – Giuseppe Andronico
FP6−2004−Infrastructures−6-SSA-026409 – www.eu-eela.org – E-infrastructure shared between Europe and Latin America


gLite Data Management System
Giuseppe Andronico – INFN Sezione di Catania
2° EELA Tutorial, Merida
FP6−2004−Infrastructures−6-SSA – E-infrastructure shared between Europe and Latin America

Outline
– Grid Data Management Challenge
– Storage Elements and SRM
– File and Replica Catalogs (LFC)
– Data Movement (File Transfer Components)

The Grid DM Challenge
Heterogeneity
– Data are stored on different storage systems using different access technologies
Distribution
– Data are stored in different locations – in most cases there is no shared file system or common namespace
– Data need to be moved between different locations
Consequently:
– Need a common interface to storage resources → Storage Resource Manager (SRM)
– Need to keep track of where data are stored → File and Replica Catalogs
– Need scheduled, reliable file transfer → File transfer and placement services

Data Management Services Overview
Storage Element – stores data and provides a common interface
– Storage Resource Manager (SRM): Castor, dCache, DPM, …
– Native access protocols: rfio, dcap, nfs, …
– Transfer protocols: gsiftp, ftp, …
Catalogs – keep track of where data are stored
– File Catalog
– Replica Catalog
– File Authorization Service
– Metadata Catalog
– In gLite/LCG: the LCG File Catalog (LFC) and the AMGA Metadata Catalogue
File Transfer – schedules reliable file transfer
– Data Scheduler (only designs exist so far)
– File Transfer Service: gLite FTS (manages physical transfers)

SRM in an example
A user is running a job which needs:
– data for physics event reconstruction
– simulated data
– some data analysis files
She will also write files remotely. The inputs are spread across sites: at CERN in dCache, at Fermilab in a disk array, and at Nikhef in a classic SE.

SRM in an example
– dCache: its own system, with its own protocols and parameters
– Castor: no connection with dCache or the classic SE
– classic SE: independent of dCache and Castor
Without SRM, you as a user would need to know all these systems. The SRM says: "I talk to them on your behalf, I will even allocate space for your files, and I will use transfer protocols to send your files there."

Storage Resource Management
Data are stored on disk pool servers or Mass Storage Systems. Storage resource management needs to take into account:
– Transparent access to files (migration to/from disk pool)
– File pinning
– Space reservation
– File status notification
– Lifetime management
SRM (Storage Resource Manager) takes care of all these details:
– SRM is a Grid service that handles local storage interaction and provides a Grid interface to the outside world

Grid Storage Requirements
Manage local storage and interface to Mass Storage Systems like
– HPSS, CASTOR, DiskeXtender (UNITREE), …
Provide an SRM interface
Support basic file transfer protocols
– GridFTP mandatory
– Others if available (https, ftp, etc.)
Support a native I/O access protocol
– POSIX-like I/O client library for direct access to data

gLite Storage Element

File and Replica Catalogs – the LCG-2 File and Replica Catalog (LFC)

What is a catalog? (Diagram: gLite UI, File Catalog, SE)

LCG-2 File & Replica Catalog (I)
Users and applications need to locate files (or replicas) anywhere on the Grid. The File Catalog is the service that allows this: it maintains the mappings between LFNs, GUIDs and SURLs.
In LCG-2, file cataloguing operations are provided by the LFC (LCG File Catalog), the replacement for the older RLS (Replica Location Service).

LCG-2 File & Replica Catalog (II)
The past
– RLS was the first catalog used in the LCG middleware
– It works with two sub-services: the RMC (Replica Metadata Catalog), which maps LFNs onto GUIDs, and the LRC (Local Replica Catalog), which maps GUIDs onto SURLs
The present
– LFC is deployed as a centralized service and its endpoint (the URL of the service) is published in the Information Service, so it can be found by the LCG DMS tools and/or other Grid services
– Note: if both RLS and LFC are deployed at a site, remember that they are not mirrored; it is the user's responsibility to ensure consistency among the entries of the different catalogs

Files & replicas: Name Conventions (LFC)
Logical File Name (LFN)
– An alias created by a user to refer to some item of data, e.g. "lfn:cms/ /run2/track1"; symbolic links (additional LFNs) live in the same logical filename space
Globally Unique Identifier (GUID)
– A non-human-readable unique identifier for an item of data, e.g. "guid:f81d4fae-7dec-11d0-a765-00a0c91e6bf6"
Site URL (SURL) (or Physical File Name (PFN) or Site FN)
– The location of an actual piece of data on a storage system, e.g. "srm://pcrd24.cern.ch/flatfiles/cms/output10_1" (SRM) or "sfn://lxshare0209.cern.ch/data/alice/ntuples.dat" (Classic SE)
Transport URL (TURL)
– Temporary locator of a replica plus an access protocol, understood by an SE, e.g. "rfio://lxshare0209.cern.ch//data/alice/ntuples.dat"
(Diagram: an LFN and its symbolic links 1…n map to one GUID; the GUID maps to one or more physical files/SURLs, each reachable through TURLs 1…n.)
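A minimal shell sketch of how these names relate in practice, assuming a valid proxy and an already registered file; the VO name, LFN and hostnames are hypothetical, not taken from the slides. The GUID shown is the example GUID from above.

    # Get the GUID behind a logical file name
    lcg-lg --vo myvo lfn:/grid/myvo/run2/track1
    # -> guid:f81d4fae-7dec-11d0-a765-00a0c91e6bf6

    # List the replicas (SURLs) registered for that LFN
    lcg-lr --vo myvo lfn:/grid/myvo/run2/track1
    # -> srm://se1.example.org/dpm/example.org/home/myvo/run2/track1

    # List the aliases (additional LFNs / symbolic links) behind the same GUID
    lcg-la --vo myvo guid:f81d4fae-7dec-11d0-a765-00a0c91e6bf6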

The LFC
It keeps track of the location of copies (replicas) of Grid files. The LFN acts as the main key in the database. Each LFN has:
– Symbolic links to it (additional LFNs)
– A Unique Identifier (GUID)
– System metadata
– Information on replicas
– One field of user metadata

LFC Features
– Cursors for large queries
– Timeouts and retries from the client
– User-exposed transactional API (+ automatic rollback on failure)
– Hierarchical namespace and namespace operations (for LFNs)
– Integrated GSI Authentication + Authorization
– Access Control Lists (Unix permissions and POSIX ACLs)
– Checksums
– Integration with VOMS

Data Management CLIs & APIs
lcg_utils: lcg-* commands + lcg_* API calls
– Provide (all) the functionality needed by the LCG user
– Transparent interaction with file catalogs and storage interfaces when needed
– Abstraction from the technology of specific implementations
Grid File Access Library (GFAL): API
– Adds file I/O and explicit catalog interaction functionality
– Still provides the abstraction and transparency of lcg_utils
edg-gridftp tools: CLI
– Complement lcg_utils with low-level GridFTP operations
– Functionality available as API in GFAL
– May be generalized as lcg-* commands
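A brief sketch of the layering; the VO name, hostnames and paths are illustrative, and the edg-gridftp-ls invocation is an assumption about the low-level tool set named above.

    # High level (lcg_utils): works on LFNs, hides catalog and SRM interaction
    lcg-cp --vo myvo lfn:/grid/myvo/demo/data1.res file:/tmp/data1.res

    # Low level (edg-gridftp tools): raw GridFTP operation on an explicit gsiftp URL
    edg-gridftp-ls gsiftp://se1.example.org/data/myvo/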

lcg-utils commands
Replica Management
– lcg-cp: copies a Grid file to a local destination
– lcg-cr: copies a file to an SE and registers the file in the catalog
– lcg-del: deletes one file
– lcg-rep: replication between SEs and registration of the replica
– lcg-gt: gets the TURL for a given SURL and transfer protocol
– lcg-sd: sets the file status to "Done" for a given SURL in an SRM request
File Catalog Interaction
– lcg-lr: lists the replicas for a given GUID, SURL or LFN
– lcg-lg: gets the GUID for a given LFN or SURL
– lcg-la: lists the aliases for a given SURL, GUID or LFN
– lcg-rf: registers in the LFC a file placed on an SE
– lcg-uf: unregisters in the LFC a file placed on an SE
– lcg-aa: adds an alias in the LFC for a given GUID
– lcg-ra: removes an alias in the LFC for a given GUID
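A hedged end-to-end sketch of the replica-management commands above; the VO name, SE hostnames and LFN are hypothetical, and the exact output format varies per site.

    # Upload a local file to an SE and register it in the catalog under an LFN
    lcg-cr --vo myvo -d se1.example.org -l lfn:/grid/myvo/demo/data1.res file:/home/user/data1.res
    # -> guid:...

    # Replicate the file to a second SE and register the new replica
    lcg-rep --vo myvo -d se2.example.org lfn:/grid/myvo/demo/data1.res

    # List all replicas now known to the catalog
    lcg-lr --vo myvo lfn:/grid/myvo/demo/data1.res

    # Remove only the replica held on se2 (the catalog keeps the remaining one)
    lcg-del --vo myvo -s se2.example.org lfn:/grid/myvo/demo/data1.res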

LFC C API
Low-level methods (many POSIX-like):
lfc_aborttrans, lfc_access, lfc_addreplica, lfc_apiinit, lfc_chclass, lfc_chdir, lfc_chmod, lfc_chown, lfc_closedir, lfc_creat, lfc_delcomment, lfc_delete, lfc_deleteclass, lfc_delreplica, lfc_endtrans, lfc_enterclass, lfc_errmsg, lfc_getacl, lfc_getcomment, lfc_getcwd, lfc_getpath, lfc_lchown, lfc_listclass, lfc_listlinks, lfc_listreplica, lfc_lstat, lfc_mkdir, lfc_modifyclass, lfc_opendir, lfc_queryclass, lfc_readdir, lfc_readlink, lfc_rename, lfc_rewind, lfc_rmdir, lfc_selectsrvr, lfc_setacl, lfc_setatime, lfc_setcomment, lfc_seterrbuf, lfc_setfsize, lfc_starttrans, lfc_stat, lfc_symlink, lfc_umask, lfc_undelete, lfc_unlink, lfc_utime, send2lfc

LFC commands
Summary of the LFC catalog commands:
– lfc-chmod: change the access mode of an LFC file/directory
– lfc-chown: change the owner and group of an LFC file/directory
– lfc-delcomment: delete the comment associated with a file/directory
– lfc-getacl: get file/directory access control lists
– lfc-ln: make a symbolic link to a file/directory
– lfc-ls: list file/directory entries in a directory
– lfc-mkdir: create a directory
– lfc-rename: rename a file/directory
– lfc-rm: remove a file/directory
– lfc-setacl: set file/directory access control lists
– lfc-setcomment: add/replace a comment
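A short usage sketch of these commands, assuming the LFC endpoint is exported in LFC_HOST; the hostname and VO directory are hypothetical.

    export LFC_HOST=lfc.example.org      # LFC endpoint, normally obtained from the information system

    lfc-mkdir /grid/myvo/demo            # create a directory in the LFC namespace
    lfc-ls -l /grid/myvo/demo            # long listing, shows Unix-style permissions
    lfc-ln -s /grid/myvo/demo/data1.res /grid/myvo/demo/latest   # symbolic link (an additional LFN)
    lfc-chmod 750 /grid/myvo/demo        # restrict access to the directory
    lfc-getacl /grid/myvo/demo           # inspect the ACLs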

LFC other commands
Managing ownership and permissions: lfc-chmod, lfc-chown
Managing ACLs: lfc-getacl, lfc-setacl
Renaming: lfc-rename
Removing: lfc-rm
Remember that the per-user mapping can change in every session; the default is for LFNs and directories to be VO-wide readable. Consistent user mapping will be added soon.
An LFN can only be removed if it has no SURLs associated with it, so LFNs should be removed with lcg-del rather than lfc-rm (see the sketch below).
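To illustrate the last point, a sketch of removing a file the recommended way; the VO name and LFN are hypothetical.

    # Deletes every replica on the SEs and then the LFN/GUID entries in the LFC
    lcg-del --vo myvo -a lfn:/grid/myvo/demo/data1.res

    # By contrast, lfc-rm only acts on the catalog and refuses to remove
    # an LFN that still has SURLs associated with it.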

File names and identifiers in gLite
(Diagram: Logical File Name, Globally Unique Identifier, Site URL, Transport URL (includes protocol); the user needs to see only the logical identifiers.)

gLite File I/O at work
1. Ask for the file by GUID/LFN: the I/O client library on the worker node accepts either an LFN or a GUID as input to the API and presents it to the I/O server.
2. Check for the right access: the File Authorization Service (FAS) checks whether the user is allowed to access the file in the requested way.
3. Access the file by SURL: the GUID or LFN is resolved into the SURL via the File and Replica Catalogs, and the SURL is used by the local SRM to access the file on the Storage Element/MSS.

Data Movement Service: gLite FTS

Data Movement Service (1)
Many Grid applications will distribute a lot of data across the Grid sites, so an efficient and easy way to manage data movement is needed: the gLite File Transfer Service (FTS).
– Manages the network and the storage at both ends
– Defines the concept of a CHANNEL: a link between two SEs
– Channels can be managed by the channel administrators, i.e. the people responsible for the network link and storage systems; these are potentially different people for different channels
– Optimizes channel bandwidth usage – lots of parameters that can be tuned by the administrator
– VOs using the channel can apply their own internal policies for queue ordering (e.g. a professor's transfer jobs are more important than a student's)

Data Movement Service (3)
File movement is asynchronous – the user submits a job, which is held in a file transfer queue.
– The FPS fetches job transfer requests and contacts the File Catalogue to obtain the source/destination SURLs
– Task execution is delegated to the FTS, which maintains the state of the job transfers
– The user can monitor the job status through the jobID
– When the job is done, the FPS updates the file entry in the catalogue, adding the new replica
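A hedged sketch of this asynchronous model with the FTS command-line client; the FTS endpoint, SURLs and job ID are placeholders, and the exact CLI options may differ between gLite releases.

    # Submit a transfer job: returns immediately with a job identifier
    glite-transfer-submit -s https://fts.example.org:8443/glite-data-transfer-fts/services/FileTransfer \
        srm://se1.example.org/dpm/example.org/home/myvo/demo/data1.res \
        srm://se2.example.org/dpm/example.org/home/myvo/demo/data1.res
    # -> <jobID>

    # Poll the job state (e.g. Submitted, Pending, Active, Done, Failed) using the job ID
    glite-transfer-status -s https://fts.example.org:8443/glite-data-transfer-fts/services/FileTransfer <jobID>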

Baseline: GridFTP
Data transfer and access protocol for secure and efficient data movement. Standardized in the Global Grid Forum, it extends the standard FTP protocol with:
– Public-key-based Grid Security Infrastructure (GSI) or Kerberos support (both accessible via GSS-API)
– Third-party control of data transfer
– Parallel data transfer
– Striped data transfer
– Partial file transfer
– Automatic negotiation of TCP buffer/window sizes
– Support for reliable and restartable data transfer
– Integrated instrumentation for monitoring ongoing transfer performance
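As an illustration of these features, a sketch using the standard Globus client; the hostnames and paths are hypothetical.

    # Third-party transfer between two GridFTP servers, 4 parallel streams,
    # with an explicit TCP buffer size (GSI authentication via the user's proxy)
    globus-url-copy -p 4 -tcp-bs 2097152 \
        gsiftp://se1.example.org/data/myvo/data1.res \
        gsiftp://se2.example.org/data/myvo/data1.res

    # Simple download to the local file system
    globus-url-copy gsiftp://se1.example.org/data/myvo/data1.res file:///tmp/data1.res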

Reliable File Transfer
GridFTP is the basis of most transfer systems.
Retry functionality is limited
– Only retries in case of network problems; no possibility to recover from a GridFTP server crash
GridFTP handles one transfer at a time
– No possibility to do bulk optimization
– No possibility to schedule parallel transfers
A layer on top of GridFTP is needed that provides reliable, scheduled file transfer:
– FTS/FPS
– Globus RFT (layer on top of a single GridFTP server)
– Condor Stork

Data Movement Stack

References
– gLite homepage
– DM subsystem documentation
– FTS/FPS user guide: https://edms.cern.ch/file/591792/1/EGEE-TECH Transfer-CLI-v1.0.pdf

Questions…

Data services in gLite
File access patterns:
– Write once, read many
– Rare append-only updates with one owner
– Frequently updated at one source – replicas check/pull the new version
– (NOT frequent updates, many users, many sites)
File naming
– Mostly, users see the "logical file name" (LFN)
– The LFN must be unique: it includes a logical directory name in a VO namespace
– E.g. /gLite/myVOname.org/runs/12aug05/data1.res
Three service types for data:
– Storage
– Catalogs
– Movement

SRM Interactions
1. The client asks the SRM for the file, providing a SURL (Site URL)
2. The SRM asks the storage system to provide the file
3. The storage system notifies the availability of the file and its location
4. The SRM returns a TURL (Transfer URL), i.e. the location from where the file can be accessed
5. The client interacts with the storage using the protocol specified in the TURL
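From the user's side, this dialogue is what a command such as lcg-gt triggers: given a SURL and a protocol, the SRM returns a TURL. The SURL and hostname below are hypothetical.

    # Ask the SRM for a TURL usable with the rfio protocol
    lcg-gt srm://se1.example.org/dpm/example.org/home/myvo/demo/data1.res rfio
    # -> rfio://se1.example.org//dpm/example.org/home/myvo/demo/data1.res  (plus request identifiers)

    # When finished, lcg-sd sets the SRM request to "Done",
    # using the identifiers printed by lcg-gt.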

I/O server interactions
(Diagram: components provided by the site vs. components provided by the VO)

Data Movement Service (2)
File movement is asynchronous – submit a job
– Held in a file transfer queue
Data scheduler
– Single service per VO – can be distributed
– The VO can apply policies (priorities, preferred sites, recovery modes, …)
Client interfaces:
– Browser
– APIs
– Web service
"File transfer"
– Uses SURLs
"File placement"
– Uses LFNs or GUIDs; accesses the catalogues to resolve them (work in progress)

FTS vs FPS (1)
File Transfer Service (FTS)
– Acts only on SRM SURLs or gsiftp URLs
– submit(source-SURL, destination-SURL)
File Placement Service (FPS)
– A plug-in into the File Transfer Service that allows it to act on logical file names (LFNs)
– Interacts with replica catalogs (similar to gLite-I/O)
– Registers replicas in the catalog
– submit(transferJobs), where transferJob = (sourceLFN, destinationSE)
(Diagram: FTS Web Service with FPS plugin, Job DB, Catalog)

FTS vs FPS (2)
Using the File Transfer Service (FTS)
– Look up the source SURL in the replica catalog
– Initiate and monitor the transfer
– After a successful transfer, register the new replica in the catalog
Using the File Placement Service (FPS)
– Initiate and monitor the transfer
– The plugin takes care of the catalog interactions
FTS and FPS offer the same interface
– Difference only in the input parameters to the submit command: SURLs vs. LFNs
– Different configuration: FPS requires a catalog endpoint

An overview of Data Movement Services
– Data Scheduler (DS): keeps track of user/service transfer requests
– File Transfer/Placement Service (FTS/FPS)
– Transfer Queue (Table)
– Transfer Agent (Network)