Data access and Storage: new directions
Fabrizio Furano, 11 July 2008

Presentation transcript:

[Slides 1-8: no transcript text]

Slide 9: Three data access models, compared (each shown on the slide as a timeline of data processing vs. data access):
- Pre-transfer data "locally": transfer overhead before processing can start, a need for potentially useless replicas, and huge bookkeeping.
- Legacy remote access: latency and wasted CPU cycles, but easy to understand.
- Remote access+: interesting! Efficient and practical.
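
The plain "remote access" model is what a ROOT user gets by opening a file through xrootd instead of copying it first. A minimal sketch (the host and path below are illustrative, not taken from the slides):

    // ROOT macro: open a remote file in place over the xroot protocol.
    // Reads are served over the wire on demand; no local replica is made.
    {
        TFile *f = TFile::Open("root://alirdr.cern.ch//alice/data/example.root");
        if (f && !f->IsZombie()) {
            f->ls();      // list the file's contents
            f->Close();
        }
    }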

Slide 10: Inside the application, Client1, Client2 and Client3 talk to the server over one TCP control stream plus separate TCP data streams. Clients still see one physical connection per server; asynchronous data transfers are automatically split across the data streams.
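
A conceptual sketch of the idea, not the actual xrootd client code: one logical connection keeps a single control stream and fans the chunks of an async transfer out over N parallel data sockets (the stream count and chunk size below are invented for illustration):

    // Sketch only: models how chunks of one async read could be spread
    // round-robin over parallel TCP data streams, while requests still
    // travel on a single control stream. Not the real xrootd internals.
    #include <cstdio>
    #include <cstddef>

    struct LogicalConnection {
        int controlStream = 0;   // all requests/responses use this one
        int dataStreams   = 3;   // parallel sockets carrying payload

        // Pick the data stream for a given chunk, round-robin.
        int streamFor(std::size_t chunk) const {
            return 1 + static_cast<int>(chunk % dataStreams);
        }
    };

    int main() {
        LogicalConnection conn;
        const std::size_t chunkSize = 512 * 1024;       // 512 KiB chunks
        const std::size_t readSize  = 4 * 1024 * 1024;  // one 4 MiB read

        for (std::size_t off = 0, i = 0; off < readSize; off += chunkSize, ++i)
            std::printf("chunk at offset %8zu -> data stream %d\n",
                        off, conn.streamFor(i));
        return 0;
    }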

[Slides 11-15: no transcript text]

Slide 16: The ALICE global redirector (alirdr) is an xrootd/cmsd pair configured with "all.role meta manager" and "all.manager meta alirdr.cern.ch:1312". The site clusters (CERN, GSI, Prague, NIHAM, any other) subscribe to it with "all.role manager" plus the same "all.manager meta alirdr.cern.ch:1312" line. The namespace root://alirdr.cern.ch/ then includes the CERN, GSI and other xroot clusters. Meta-managers can be geographically replicated: several can run in different places for region-aware load balancing.
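
Put as configuration files, the roles on the slide could look like this. A sketch: the all.role and all.manager meta lines come from the slide, while the export path and the data server's manager host/port are invented placeholders:

    # Global (meta) redirector running on alirdr.cern.ch -- sketch
    all.role meta manager
    all.export /alice

    # Site redirector (CERN, GSI, Prague, NIHAM, ...), subscribing
    # to the global one
    all.role manager
    all.manager meta alirdr.cern.ch:1312
    all.export /alice

    # Data server inside a site cluster, pointing at its own
    # redirector (host and port are placeholders)
    all.role server
    all.manager mysite-redirector.example:1213
    all.export /alice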

Slide 17: The same picture in operation. Local clients work normally against their own cluster (GSI, CERN, Prague, NIHAM, ...). But missing a file? The site asks the global meta-manager and gets the file from any other collaborating cluster.
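
From a user's point of view, fetching through the federation is a single copy against the global redirector; it does not matter which cluster actually holds the file. A sketch (the file path is a made-up example):

    # Copy a file out of the federated namespace; the global
    # redirector locates a cluster that has it.
    xrdcp root://alirdr.cern.ch//alice/data/example.root /tmp/example.root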

[Slides 18-26: no transcript text]

Slides 27-28: Courtesy of Gerardo Ganis (CERN PH-SFT)