ERANET - HEP
Group info (if required)
Your name ….

Consortium
NTUA – Prof. Evangelos Gazis
IASA – Prof. Paris Sphicas
AUTH – Prof. Chariclia Petridou
Demokritos – Prof. Christos Markou
UoIoannina – tbs

Current Grid Resources
NTUA: 1 Grid cluster (GR-03-HEPNTUA)
– 74 cores, 4.5 TB storage, dedicated to HEP VOs
IASA: 2 Grid clusters (GR-06-IASA, HG-02-IASA)
– 140 cores (additional 160 cores in the next days)
– 10 TB storage (additional 10 TB in the next days)
– more than 30% allocated for the LHC/HEP VOs
– The HG-02-IASA cluster is part of the HellasGrid infrastructure, owned by GRNET
AUTH: 2 Grid clusters (GR-01-AUTH, HG-03-AUTH)
– 300 cores, 70 TB storage
– more than 30% allocated for the LHC/HEP VOs
– The HG-03-AUTH cluster is part of the HellasGrid infrastructure, owned by GRNET
Demokritos: 1 Grid cluster (GR-05-Demokritos)
– XX cores, YY TB storage, dedicated to HEP VOs
UoIoannina: 1 Grid cluster (GR-07-UOI-HEPLAB)
– 112 cores, 200 TB storage, dedicated to the CMS experiment
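
A minimal bookkeeping sketch (Python, hypothetical) of the inventory above: the per-site numbers are taken from this slide, while the `hep_share` field and the grouping of clusters per institute are my own shorthand. The Demokritos entries are left as None because the slide only gives XX/YY placeholders.

```python
# Summary of the consortium's current Grid resources as listed on this slide.
# "hep_share" is the minimum fraction allocated to LHC/HEP VOs (shorthand, not
# a field from the slide); Demokritos figures are None (XX / YY placeholders).
resources = {
    "GR-03-HEPNTUA (NTUA)":    {"cores": 74,   "storage_tb": 4.5,  "hep_share": 1.0},
    "GR-06-IASA / HG-02-IASA": {"cores": 140,  "storage_tb": 10,   "hep_share": 0.3},
    "GR-01-AUTH / HG-03-AUTH": {"cores": 300,  "storage_tb": 70,   "hep_share": 0.3},
    "GR-05-Demokritos":        {"cores": None, "storage_tb": None, "hep_share": 1.0},
    "GR-07-UOI-HEPLAB (UoI)":  {"cores": 112,  "storage_tb": 200,  "hep_share": 1.0},
}

known = [r for r in resources.values() if r["cores"] is not None]
print("cores (known sites):  ", sum(r["cores"] for r in known))             # 626
print("storage (known sites):", sum(r["storage_tb"] for r in known), "TB")  # 284.5 TB
```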

Expertise in Grid technology (1)
At least two teams of the consortium, IASA & AUTH:
– Have strong cooperation with GRNET
– Are two of the major stakeholders of the Greek National Grid Initiative (NGI_GRNET/HellasGrid)
– Have operated production-level Grid sites since 2003
– Have deep knowledge of the day-to-day operation of distributed Grid core services (including Virtual Organization Management Systems, Information Systems, Workload Management services, Logical File Catalogs, etc.)
– Are experienced in testing and certification of Grid services and middleware
– Datacenter/Grid monitoring
– Clustering, data management, network monitoring

Expertise in Grid technology (2)
Participation in pan-European projects:
– CROSSGRID, GRIDCC, EGEE-I, EGEE-II, EGEE-III, EUChinaGrid, SEEGRID, and, since 6/2010, EGI
Additionally, both teams are responsible for running services at the national and/or international level:
– HellasGrid Certification Authority […]
– European Grid Application Database […]
– Unified Middleware Deployment global repository […]

Expertise in Grid management
On behalf of GRNET, IASA and AUTH personnel have carried out responsibilities/roles at the regional and/or national level:
– EGI task leader for Reliable Infrastructure Provision
– EGI task leader for User Community Technical Services
– Deputy Regional Coordinator for operations in the South Eastern Europe region (period 4/2009-4/…, EGEE-III)
– Country representative/coordinator for HellasGrid (period 5/2009-4/…, EGEE-III)
– Manager for the Policy, International Cooperation & Standardization Status Report Activity of EGEE-III

HEP data transfers (1)
CMS: PhEDEx commissioned links for data transfers
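
For illustration, a hedged Python sketch of how the status of transfer links to a site could be checked programmatically, assuming the PhEDEx data service exposes a `links` call at the usual cmsweb URL; the endpoint name, the JSON layout and the site name used in the example are assumptions to verify against the PhEDEx data-service documentation.

```python
# Hypothetical sketch: query the PhEDEx data service for the status of
# transfer links involving a given site.  The "links" call, the cmsweb URL
# and the JSON field names are assumptions to be verified against the
# PhEDEx data-service documentation.
import requests

DATASVC = "https://cmsweb.cern.ch/phedex/datasvc/json/prod/links"

def link_status(site):
    """Return (from, to, status) tuples for links ending at `site`."""
    resp = requests.get(DATASVC, params={"to": site}, timeout=30)
    resp.raise_for_status()
    return [(l["from"], l["to"], l["status"])
            for l in resp.json()["phedex"]["link"]]

# Example (site name is illustrative only):
# for src, dst, status in link_status("T2_GR_Ioannina"):
#     print(src, "->", dst, ":", status)
```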

HEP data transfers (2)
ATLAS: DQ2 - Don Quijote (second release)
– DQ2 is built on top of Grid data transfer tools
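
A hedged sketch of how the DQ2 end-user tools could be driven from Python on a UI: the `dq2-ls` / `dq2-get` command names come from the standard ATLAS DDM client, but the available options and the environment setup should be checked on the installed UI, and the dataset pattern is purely illustrative.

```python
# Hypothetical wrapper around the DQ2 end-user command-line tools.
# Assumes the DQ2 client environment is already set up on the UI;
# options and the dataset pattern below are illustrative only.
import subprocess

def dq2_ls(pattern):
    """List DQ2 datasets matching a pattern."""
    out = subprocess.run(["dq2-ls", pattern],
                         capture_output=True, text=True, check=True)
    return out.stdout.splitlines()

def dq2_get(dataset, destdir="."):
    """Fetch a dataset into destdir via the underlying Grid transfer tools."""
    subprocess.run(["dq2-get", dataset], check=True, cwd=destdir)

# Example (pattern is illustrative only):
# for ds in dq2_ls("user10.*.myanalysis.*"):
#     print(ds)
```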

Installed HEP SW
CRAB (CMS Remote Analysis Builder) is installed on the UI hosted at IASA
The GANGA front-end for job management is installed on the UI at AUTH
Almost all the sites of the consortium have the most up-to-date software installed
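
As a concrete illustration of the GANGA front-end mentioned above, a minimal submission sketch meant to be typed inside a `ganga` session, where `Job`, `Executable` and `LCG` are provided by the Ganga GPI; attribute names and defaults should be checked against the version installed on the AUTH UI.

```python
# Minimal GANGA submission sketch (run inside a "ganga" session, where the
# GPI classes Job, Executable and LCG are available without imports).
j = Job()
j.name = "hello-grid"
j.application = Executable(exe="/bin/echo", args=["Hello from the Grid"])
j.backend = LCG()        # submit through the gLite/LCG workload management
j.submit()

# Later, in the same session:
#   jobs                 # overview of all jobs and their statuses
#   j.peek("stdout")     # inspect the output once the job completes
```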

Some indicative statistics
More than 580k jobs since 1/2005
More than 1.5M normalized CPU hours since 1/2005
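
A trivial back-of-the-envelope check of the figures above (both are lower bounds):

```python
# Back-of-the-envelope average from the figures quoted above (lower bounds).
jobs = 580_000                 # jobs since 1/2005
norm_cpu_hours = 1_500_000     # normalized CPU hours since 1/2005
print(f"~{norm_cpu_hours / jobs:.1f} normalized CPU hours per job on average")
# -> ~2.6 normalized CPU hours per job
```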

Needs …
What our HEP consortium needs in terms of physics, and why … – [ChristosM]

Specs of a T2 site
Based on CMS and ATLAS specifications, the minimum requirements for a T2 site are:
Computing: 5-6k HEPSpec2006 (approx. 70 nodes with dual quad-core CPUs)
Storage: > 200 TB of available disk space (approx. 240 TB of raw disk, with 1/12 redundancy - RAID)
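
A quick consistency check of the compute sizing, assuming roughly 10 HEPSpec2006 per core, which is a typical figure for hardware of that generation and not a number taken from the slide:

```python
# Rough consistency check of the T2 compute sizing above.
# The HS06-per-core figure is an assumption, not taken from the slide.
nodes = 70
cores_per_node = 2 * 4          # dual quad-core
hs06_per_core = 10              # assumed, typical for that hardware generation
total_cores = nodes * cores_per_node
print(f"{total_cores} cores -> ~{total_cores * hs06_per_core / 1000:.1f}k HEPSpec2006")
# -> 560 cores, ~5.6k HEPSpec2006, consistent with the 5-6k requirement
```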

Proposed distributed T2 site
Distributed over two locations in order to take advantage of the existing infrastructure and support teams.
Scenario 1: Computing and storage infrastructure at both locations
– One Tier 2 with two sub-clusters
– Higher availability: redundancy in case of site failure; allows for flexible maintenance windows
Scenario 2: Computing and storage infrastructure decoupled
– One Tier 2 with one sub-cluster
– Split requirements on technical expertise: each support team focuses on specific technical aspects (storage vs. computing)
– Reduced requirement for overlapping manpower and homogeneity effort

Things to be discussed … (this slide is only for internal discussion/decisions during the EVO meeting on 18/6)
Offering a segment (e.g. 20%) of the resources to the SEE VO – would benefit the academic/scientific communities of the SEE VO.
Pledged resources will be provided/guaranteed by us (MoU maybe ???)
For example:
– Guaranteed 50k/year from the consortium, divided as:
  – 50% for computing resource upgrades and/or expansion
  – 40% for additional storage
  – 10% for maintenance of the infrastructure (A/C, UPS, electrical infrastructure, …)