HEP in Greece
Group info (if required)
Your name …

LHC HEP groups in Greece
– AUTH - ATLAS – ?????
– Demokritos - CMS – 6 senior physicists, 2 post-docs, 3 Ph.D. students
– NTUA - ATLAS – ?????
– UoA - ATLAS, CMS, ALICE – ???????
– UoI - CMS – ???????

Current HEP Grid Resources
– AUTH: 2 Grid clusters (GR-01-AUTH, HG-03-AUTH), 300 cores, 70 TB storage; more than 30% allocated to the LHC/HEP VOs – Prof. Chariclia PETRIDOU. The HG-03-AUTH cluster is part of the HellasGrid infrastructure, owned by GRNET.
– Demokritos: 1 Grid cluster (GR-05-Demokritos), 120 cores, 25 TB storage; dedicated to HEP VOs – Dr. Christos MARKOU.
– NTUA: 1 Grid cluster (GR-03-HEPNTUA), 74 cores, 4.5 TB storage; dedicated to HEP VOs – Prof. Evagelos GAZIS.
– UoA - IASA: 2 Grid clusters (GR-06-IASA, HG-02-IASA), 140 cores (an additional 160 cores in the coming days), 10 TB storage (an additional 10 TB in the coming days); more than 30% allocated to the LHC/HEP VOs – Prof. Paris SPHICAS. The HG-02-IASA cluster is part of the HellasGrid infrastructure, owned by GRNET.
– UoI (Ioannina): 1 Grid cluster (GR-07-UOI-HEPLAB), 112 cores, 200 TB storage; dedicated to the CMS experiment – Prof. Kostas FOUDAS.

Expertise in Grid technology (1)
At least two of the consortium teams (IASA, AUTH, Demokritos):
– Have a strong cooperation with GRNET
– Are major stakeholders of the Greek National Grid Initiative (NGI_GRNET/HellasGrid)
– Have operated production-level Grid sites since 2003
– Have deep knowledge of the day-to-day operation of the distributed core Grid services (including Virtual Organization Management Systems, Information Systems, Workload Management services, Logical File Catalogs, etc.)
– Are experienced in testing and certification of Grid services and middleware
– Datacenter/Grid monitoring
– Clustering, data management, network monitoring

Expertise in Grid technology (2)
Participation in pan-European projects:
– CROSSGRID, GRIDCC, EGEE-I, EGEE-II, EGEE-III, EUChinaGrid, SEE-GRID, and since 6/2010 EGI
Additionally, the teams are responsible for running services at a national and/or international level:
– HellasGrid Certification Authority […]
– European Grid Application Database […]
– Unified Middleware Deployment global repository […]

Expertise in Grid management
On behalf of GRNET, personnel of our teams have carried out regional- and/or national-level responsibilities/roles:
– EGI task leader for Reliable Infrastructure Provision
– EGI task leader for User Community Technical Services
– Deputy Regional Coordinator for operations in the South-Eastern Europe region (period 4/2009 - 4/2010, EGEE-III)
– Country representative/coordinator for HellasGrid (period 5/2009 - 4/2010, EGEE-III)
– Manager for the Policy, International Cooperation & Standardization Status Report activity of EGEE-III
– Coordinator of the Direct User Support group in EGEE-III

HEP data transfers (1)
CMS – PhEDEx: commissioned links for data transfers

HEP data transfers (2)
ATLAS – DQ2 (Don Quijote, second release)
– DQ2 is built on top of the Grid data transfer tools
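For illustration only (not from the slides): from a user's point of view, DQ2 datasets are typically listed and retrieved with the DQ2 end-user command-line clients (dq2-ls, dq2-get). The small Python sketch below simply wraps those commands; it assumes the DQ2 client environment is already set up on the UI, and the dataset name is a hypothetical placeholder.

    # Illustrative sketch: wrap the DQ2 end-user command-line tools from Python.
    # Assumes the DQ2 client environment has been sourced on the UI.
    import subprocess

    DATASET = "user.example:example.dataset/"  # hypothetical placeholder name

    def dq2(*args):
        """Run a DQ2 client command and return its standard output."""
        result = subprocess.run(args, capture_output=True, text=True, check=True)
        return result.stdout

    if __name__ == "__main__":
        # List matching datasets, then download one into the current directory.
        print(dq2("dq2-ls", DATASET))
        print(dq2("dq2-get", DATASET))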

Installed HEP SW
– CRAB (CMS Remote Analysis Builder) is installed on the UI hosted at IASA
– The GANGA frontend for job management is installed on the UI of AUTH
– Almost all sites of the consortium have the most up-to-date software installed
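As an illustration (not part of the slides): inside an interactive Ganga session on the AUTH UI, a grid job can be defined and submitted with a few lines of Ganga's Python interface. The job name, executable and arguments below are hypothetical placeholders, and the exact backend class available depends on the Ganga/gLite version installed.

    # Minimal Ganga sketch (run inside a "ganga" session, where the GPI
    # objects Job, Executable and LCG are already available).
    j = Job()
    j.name = 'hep-demo'                          # hypothetical job name
    j.application = Executable(exe='/bin/echo',  # placeholder executable
                               args=['hello', 'grid'])
    j.backend = LCG()                            # gLite/LCG backend (version dependent)
    j.submit()

    # Later, inspect progress and output:
    # jobs               # list all jobs and their statuses
    # j.peek('stdout')   # look at the job's standard output once it has completed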

Some indicative statistics
– More than 580k jobs since 1/2005
– More than 1.5M normalized CPU hours since 1/2005

Needs …
What our HEP consortium needs in terms of physics, and why …
– [ChristosM]

Specs of a T2 site
Based on the CMS and ATLAS specifications, the minimum requirements that should be covered by a T2 site are:
– Computing: 5-6k HEPSpec2006, approx. 560 cores (~70 nodes with dual quad-core CPUs)
– Storage: > 250 TB of available disk space (approx. 300 TB of raw disk, with 1/6 redundancy - RAID)
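A quick back-of-the-envelope check of these figures (not in the slides; the per-core HEP-SPEC06 value is an assumption, typical of hardware from that period):

    # Sanity check of the T2 sizing quoted above.
    nodes = 70
    cores_per_node = 2 * 4               # dual-socket, quad-core CPUs
    cores = nodes * cores_per_node       # -> 560 cores

    hs06_per_core = 10                   # assumed value for ~2010-era CPUs
    total_hs06 = cores * hs06_per_core   # -> 5600 HEP-SPEC06, within the 5-6k range

    raw_disk_tb = 300
    usable_tb = raw_disk_tb * (1 - 1/6)  # 1/6 lost to RAID redundancy -> 250 TB

    print(cores, total_hs06, usable_tb)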

Proposed distributed T2 site
Distributed over two locations in order to take advantage of the existing infrastructure and support teams.
Scenario 1: computing and storage infrastructure in both locations
– One Tier-2 with two sub-clusters
– Higher availability: redundancy in case of site failure, and flexible maintenance windows
Scenario 2: computing and storage infrastructure decoupled
– One Tier-2 with one sub-cluster
– Split requirement on technical expertise: each support team focuses on specific technical aspects (storage or computing)
– Reduced requirement for overlapping manpower and homogeneity effort

Things to be discussed …
(this slide is only for internal discussion/decisions during the EVO meeting on 18/6)
Offering a segment (e.g. 20%) of the resources to the SEE VO
– To benefit the academic/scientific communities of the SEE VO
Pledged resources will be provided/guaranteed by us (MoU maybe ???)
For example:
– Guaranteed 50k/year from the consortium, divided as:
– 50% for computing resource upgrades and/or expansion
– 40% for additional storage
– 10% for maintenance of the infrastructure (A/C, UPS, electrical infrastructure, …)