SuperB – INFN-Bari Giacinto DONVITO.

Site configuration
- ~1000 cores shared among several VOs/groups; about 10% of these resources can be used by "non-funding" VOs
- Farm accessible both from the local batch system and from the Grid, through two different CEs: an lcg-CE and a CREAM-CE
- ~500 TB of storage, available locally through a Lustre (distributed) file system
- SRM end-point provided by means of a StoRM installation (a sketch of this dual access mode follows this list)
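
As an illustration of the dual access mode above: a minimal Python sketch of how a job could fetch an input file by preferring a direct read from the Lustre mount and falling back to an SRM transfer through the StoRM end-point. The mount point, the endpoint URL and the use of the gfal-copy utility are assumptions for illustration, not details taken from the slides.

```python
import os
import shutil
import subprocess

# Hypothetical locations: the actual Lustre mount point and StoRM endpoint
# used at Bari are not given in the slides.
LUSTRE_AREA = "/lustre/superb"
SRM_ENDPOINT = "srm://storm.example.infn.it:8444/srm/managerv2?SFN=/superb"

def fetch_input(relative_path, workdir):
    """Copy one input file into the job work directory.

    Prefer a plain POSIX copy from the Lustre mount; if the mount is not
    visible on this worker node, fall back to an SRM transfer (here via the
    gfal-copy command-line utility, assuming it is available on the WN).
    """
    local_src = os.path.join(LUSTRE_AREA, relative_path)
    dest = os.path.join(workdir, os.path.basename(relative_path))

    if os.path.exists(local_src):
        shutil.copy(local_src, dest)                      # direct Lustre read
    else:
        src_url = f"{SRM_ENDPOINT}/{relative_path}"
        dest_url = f"file://{os.path.abspath(dest)}"
        subprocess.run(["gfal-copy", src_url, dest_url], check=True)
    return dest
```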

Experience supporting SuperB
The farm has been reconfigured to also support SuperB activities:
- The required list of (32-bit) libraries was installed on all the WNs, exploiting a local tool that enforces consistency between WNs (a sketch of such a check follows this slide)
- The Lustre file system is configured in such a way that SuperB users can access the SuperB area both through the SRM interface and locally:
  - Reading from Lustre as a local file system is already possible
  - Specific ACLs are configured to also allow writing to Lustre without going through the SRM interface; this increases the performance and reliability of writing job output
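
The slides mention a local tool that enforces library consistency between WNs; a minimal sketch of what such a check could look like is given below. The library names and the use of `ldconfig -p` are assumptions for illustration, not details of the actual Bari tool.

```python
import subprocess

# Hypothetical list of 32-bit libraries required by SuperB jobs; the real
# list installed at Bari is not given in the slides.
REQUIRED_LIBS = ["libstdc++.so.6", "libgfortran.so.3", "libz.so.1"]

def missing_libraries(required=REQUIRED_LIBS):
    """Return the required libraries that the dynamic linker cannot find.

    `ldconfig -p` lists the libraries known to the linker cache; here we
    only check that each required name appears in that list (a real check
    would also verify that the 32-bit variant is present).
    """
    cache = subprocess.run(["ldconfig", "-p"],
                           capture_output=True, text=True, check=True).stdout
    return [lib for lib in required if lib not in cache]

if __name__ == "__main__":
    missing = missing_libraries()
    if missing:
        print("WN inconsistent, missing:", ", ".join(missing))
    else:
        print("All required libraries are installed on this WN.")
```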

Problems encountered
- Writing to the Lustre file system requires a particular "umask" set-up at the user level (see the sketch below)
- Long jobs were failing due to a configuration error at site level => Fixed!
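
As a concrete illustration of the umask point above: output written directly to the shared Lustre area has to be created with permissions the rest of the SuperB workflow can use. A minimal sketch, assuming the desired policy is group-writable files (the value 0o002 and the path are assumptions):

```python
import os

# Assumed policy: files created on the shared Lustre area must be
# group-writable, so relax the default umask (e.g. 0o022) to 0o002.
old_umask = os.umask(0o002)

output_path = "/lustre/superb/prod/output.root"   # hypothetical output path
with open(output_path, "wb") as f:
    f.write(b"...")        # job output, created with mode 0o664

os.umask(old_umask)        # restore the previous umask afterwards
```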

SuperB activity: future
- In the future, Bari will join the distributed computing infrastructure in order to provide Tier-1 computational resources to the SuperB experiment
- Both the network (1 Gbit/s now, 10 Gbit/s in the near future) and the infrastructure will easily meet the requirements of the experiment's Tier-1