
WLCG Demonstrator: WLCG storage, Cloud Resources and volatile storage into HTTP/WebDAV-based regional federations
R. Seuster (UVic), 09 November 2016

Status at UVic
Installation of Dynafed was easy; so far we have federated two Canadian sites on the west coast, SFU and UVic. Adding more sites is merely a matter of uncommenting lines in a configuration file (see the sketch below).
Discussing with CERN how to run tests in a production setup.
Did a few tests and comparisons of different protocols vs. different network latencies.
Web location: https://dynafed01.heprc.uvic.ca:8443/myfed/
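As an illustration only, here is a minimal sketch of what such configuration entries can look like, assuming Dynafed's UGR-style "locplugin" lines; the plugin path, site names and endpoint URLs are placeholders, not the actual SFU/UVic endpoints.

    # /etc/ugr/conf.d/endpoints.conf (sketch; names, paths and URLs are placeholders)
    # One locplugin line per federated endpoint: plugin library, site name, max parallelism, base URL.
    glb.locplugin[]: /usr/lib64/ugr/libugrlocplugin_dav.so uvic 30 https://storage.uvic.example.ca/atlas
    glb.locplugin[]: /usr/lib64/ugr/libugrlocplugin_dav.so sfu 30 https://storage.sfu.example.ca/atlas
    # A further site is just another line, kept commented out until it should join the federation:
    # glb.locplugin[]: /usr/lib64/ugr/libugrlocplugin_dav.so newsite 30 https://storage.example.org/atlas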

Temporarily stored some ATLAS data locally on the web server. It was accessible via http(s), xrootd and ssh, and was used for some comparisons of transfer rates between the different protocols; this copy has been removed now.
Federated ATLAS data at UVic and SFU: accessible e.g. via curl, aria2c, and also davix-get/put; a valid X.509 certificate is required (see the sketch below).
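As a rough usage sketch (not taken from the slides), the same federated read can be done from Python with the requests library and a grid proxy; the file path, proxy location and CA directory below are assumptions chosen for illustration, and curl, aria2c or davix-get achieve the same from the command line.

    # Sketch: read one file from the federation over HTTPS using an X.509 proxy.
    import requests

    FED = "https://dynafed01.heprc.uvic.ca:8443/myfed"
    PROXY = "/tmp/x509up_u1000"                     # proxy from voms-proxy-init (example path)
    CA_DIR = "/etc/grid-security/certificates"      # typical CA directory on grid nodes

    resp = requests.get(
        FED + "/atlas/some/dataset/file.root",      # hypothetical path inside the federation
        cert=(PROXY, PROXY),                        # the proxy file holds both certificate and key
        verify=CA_DIR,
        allow_redirects=True,                       # Dynafed redirects to a site that holds the file
        stream=True,
    )
    resp.raise_for_status()
    with open("file.root", "wb") as out:
        for chunk in resp.iter_content(chunk_size=1 << 20):
            out.write(chunk)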

Plans
Utilize a local test queue and a test storage element. The ATLAS analysis queue is more flexible with input and output data than the production system, so with it we can test access to local and federated data without involving ATLAS DDM.
Next, test things closer to production: use the test storage element, and possibly copy data locally and into dedicated space on other storage elements (details still under discussion with ATLAS DDM).
First tests will be read-only; writing to the storage element will come later (a sketch of such a write test follows below).
Further investigation is needed into how the production system knows what data is on which storage elements. This is currently unclear and under discussion with ATLAS DDM (own copy of the data, federate all data on many SEs, ...).
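Purely as a sketch of what the later write test could look like, again via plain HTTPS/WebDAV with the requests library: the storage-element URL is a made-up placeholder, and davix-put would be the command-line equivalent.

    # Sketch: upload a small test file to a (hypothetical) test storage element via WebDAV PUT.
    import requests

    SE = "https://test-se.example.ca/webdav/atlas/scratch"   # placeholder test storage element
    PROXY = "/tmp/x509up_u1000"
    CA_DIR = "/etc/grid-security/certificates"

    with open("write_test.dat", "rb") as payload:
        resp = requests.put(SE + "/write_test.dat", data=payload,
                            cert=(PROXY, PROXY), verify=CA_DIR)
    resp.raise_for_status()
    print("write test succeeded with HTTP status", resp.status_code)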