R&D Storage
Silvio Pardi – spardi@na.infn.it

EXPERIENCES

RECOMMENDATION
Daniele Bonaccorsi, Gerd Behrmann, Luca Dell'Agnello

FUTURE WORK AND POSSIBLE SYNERGIES

Alex Martin

Storage R&D Action Item

People Involved
Domenico Del Prete – INFN-Naples
Domenico Diacono – INFN-Bari
Giacinto Donvito – INFN-Bari
Armando Fella – INFN-Pisa
Silvio Pardi – INFN-Naples
Vincenzo Spinoso – INFN-Bari
Guido Russo – INFN-Naples
Alex Martin – Queen Mary
Brian Bockelman (in testing HDFS?)
EMI people? Other people?
Incoming people are welcome!

Action and Activity
Writing the Storage R&D part of the general document "The SuperB Computing R&D Program".
My duty: the document is at 50%; I plan to complete it with the outcome of this meeting.

Activity on-going: "Local"
Storage system tests:
- Ongoing testing of user code against interesting storage solutions: Lustre, Hadoop, Xrootd, GlusterFS, …
- This is interesting not only from a site-admin point of view but also for the end user, in order to "monitor" the performance of the experiment code (see Brian's talk, slide 26).
- Tests of "WN co-located storage solutions" vs. "server-based solutions"; this will also help us understand the implications of a geographically distributed Hadoop-like storage (a minimal throughput-test sketch follows below).
- A dedicated cluster at INFN-Naples hosts these activities.
All these activities are ongoing and will continue for a long time; people from INFN-Bari and INFN-Naples are already working on them. Results will go into the CTDR.
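As a rough illustration of the kind of "local" storage comparison described above, here is a minimal sequential-read throughput sketch in Python. The mount points and the test-file name are assumptions for illustration, not part of the original test plan.

```python
#!/usr/bin/env python
"""Minimal sketch of a sequential-read throughput test, assuming the
candidate filesystems (e.g. Lustre, GlusterFS, FUSE-mounted HDFS) are
mounted locally. Mount points and the test file are hypothetical."""
import os
import time

MOUNT_POINTS = ["/lustre/superb", "/gluster/superb", "/hdfs/superb"]  # assumption
TEST_FILE = "testdata_1gb.bin"   # hypothetical pre-staged test file
BLOCK_SIZE = 4 * 1024 * 1024     # 4 MiB reads

def read_throughput(path, block_size=BLOCK_SIZE):
    """Return MB/s for a full sequential read of `path`."""
    total = 0
    start = time.time()
    with open(path, "rb") as f:
        while True:
            chunk = f.read(block_size)
            if not chunk:
                break
            total += len(chunk)
    return total / (time.time() - start) / 1e6

for mp in MOUNT_POINTS:
    path = os.path.join(mp, TEST_FILE)
    if os.path.exists(path):
        print("%-25s %8.1f MB/s" % (mp, read_throughput(path)))
    else:
        print("%-25s not mounted or test file missing" % mp)
```

A real test would also drop the page cache between runs and add random-access patterns closer to what the experiment code actually does.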

Activity on-going: "Grid"
Data Management:
- Ongoing testing of DIRAC to exploit its DMS features.
- Start submitting (with DIRAC) a few simple SuperB grid jobs, using the DIRAC DMS features to retrieve and store data (see the sketch below).
- Testing dataset replication using the DIRAC DMS features.
- Testing the DIRAC metadata-handling features, trying to gather requirements from the physics people (starting with FastSim?).
This activity has already started at INFN-Bari. It is related to an activity in Distributed Computing that is evaluating DIRAC as a job-submission tool. First results are expected 6-7 months from now. The results of both activities (Distributed Computing and Distributed Storage) will go into the CTDR.
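A minimal sketch of what such a test job could look like through the DIRAC Python API, with input and output data declared so the DMS resolves replicas and uploads results. The LFNs, storage-element name and wrapper script are hypothetical, and exact method names can vary across DIRAC releases.

```python
"""Minimal sketch of a SuperB test job submitted via the DIRAC API."""
from DIRAC.Core.Base import Script
Script.parseCommandLine(ignoreErrors=True)  # initialise DIRAC configuration

from DIRAC.Interfaces.API.Dirac import Dirac
from DIRAC.Interfaces.API.Job import Job

job = Job()
job.setName("superb-dms-test")
job.setExecutable("run_fastsim.sh")                            # hypothetical wrapper script
job.setInputData(["/superb/test/fastsim/input_001.root"])      # LFN resolved via the DMS
job.setOutputData(["output_001.root"], outputSE="INFN-BARI-SE")  # hypothetical SE
job.setCPUTime(3600)

dirac = Dirac()
result = dirac.submitJob(job)   # named submit() in some older DIRAC releases
print(result)                   # {'OK': True, 'Value': <jobID>} on success
```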

Activity on-going – 2: "Grid"
Software Distribution:
- How to distribute the software over the sites using an automatic file-distribution system.
- Tests based on CVMFS, as shown by Alessandro De Salvo in the previous session (a worker-node sanity-check sketch follows below).
- This activity has already started at INFN-Naples for the ATLAS collaboration.
- Understand whether this solution is also applicable to other use cases.
The results of both activities (Distributed Computing and Distributed Storage) will go into the CTDR.
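To make the CVMFS idea concrete, here is a minimal worker-node sanity check in Python, assuming a hypothetical superb.infn.it repository mounted under /cvmfs; the repository and release names are assumptions.

```python
"""Minimal sketch of a worker-node check for CVMFS-distributed software."""
import os

REPO = "/cvmfs/superb.infn.it"                            # hypothetical repository
RELEASE = os.path.join(REPO, "releases", "fastsim-1.0")   # hypothetical release path

def cvmfs_available():
    """A stat on the repository root triggers the autofs mount."""
    return os.path.isdir(REPO)

if cvmfs_available() and os.path.isdir(RELEASE):
    print("CVMFS release found:", RELEASE)
    print(os.listdir(RELEASE))   # list the distributed software content
else:
    print("CVMFS repository or release not visible on this node")
```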

Future Activity
"Job Locality":
- Trying to understand whether we can use a paradigm in which the job runs as close as possible to the data (see "bricks" in Lustre, "racks" in Hadoop, or similar concepts in GlusterFS).
"Distributed Sites":
- Testing how we can set up a distributed Tier1 in Italy: which file system works better in this environment (e.g. Hadoop, GlusterFS, dCache)? How to replicate services? How to distribute the jobs? When will we have a 10 Gbps network with GARR-X?
- These activities will start at INFN-Bari and INFN-Naples in September.
NFSv4.1:
- Testing whether pNFS could be of interest for the SuperB community in the next year.
- Sections interested: INFN-Bari, INFN-Pisa and INFN-Naples, but no solution is usable at the moment => we need to follow the issue.
Remote data access:
- Testing whether the SuperB code is already usable on a high-latency network, and providing feedback to the developers (a latency-probe sketch follows below). This activity will start in Bari in October.
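For the remote-data-access point, a minimal latency probe over XRootD could look like the sketch below, using the XRootD Python bindings; the server URL, file path and read pattern are assumptions chosen to mimic sparse ntuple-style access.

```python
"""Minimal sketch of a remote-read latency probe over XRootD, to see how
small sparse reads behave on a high-latency WAN link. URL is hypothetical."""
import time
from XRootD import client

URL = "root://xrootd.example.infn.it//superb/test/sample.root"  # assumption

f = client.File()
status, _ = f.open(URL)
if not status.ok:
    raise RuntimeError(status.message)

# Emulate sparse, small reads (ntuple-style access) and time each one.
offsets = [i * 32 * 1024 * 1024 for i in range(16)]   # 16 reads, 32 MiB apart
for off in offsets:
    t0 = time.time()
    status, data = f.read(off, 64 * 1024)             # 64 KiB per read
    print("offset %10d: %6.1f ms" % (off, (time.time() - t0) * 1e3))
f.close()
```

On a WAN link each small read costs roughly one round-trip time, which is exactly the effect that determines whether the SuperB code is usable remotely or needs vector reads/prefetching.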

Thank you