The S.Co.P.E. Project and its model of procurement. Prof. Guido Russo, University of Naples.

Presentation transcript:

1 The S.Co.P.E. Project and its model of procurement. Prof. Guido Russo, University of Naples.

2 Table of contents: The S.Co.P.E. Project; The Procurement; The INFN Prototype. (Slide footer throughout: The S.Co.P.E. Project, PON MiUR.)

3 S.Co.P.E.: the Italian acronym for a high-Performance Cooperative and distributed System for scientific Elaboration (i.e., scientific computing).

4 The S.Co.P.E. Project – The topology. [Diagram: optical-fiber links connecting the Campus Grid, Medicine, CSI and Engineering sites.]

5 The S.Co.P.E. Project – The Campus Grid. [Diagram: the Campus Grid sites DMA, DiChi, DSF and C.S.I., connected to the GARR network.]

6 The S.Co.P.E. Project – Available resources. A Metropolitan Area Grid shares the available resources among the Campus Grid, Medicine, Engineering and CSI sites, over a 2.4 Gbit/s network.

7 The S.Co.P.E. Project – The services.
Workload Management Services: schedule tasks on the distributed resources, transparently to the users, using optimization algorithms in order to obtain a high QoS.
Data Management Services: manage data replication, the metadata catalog and the collective data catalogs.
Information Index Services: discover and collect the resources available on the Grid.
Logging & Bookkeeping Services: store the history of the jobs submitted to the Grid, from which accounting information is extracted.
Resource Monitoring Services: constantly check the status of grid resources, in order to prevent faults and guarantee high availability.
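The matchmaking step performed by the Workload Management Services can be illustrated with a minimal Python sketch. All record fields, host names and the ranking rule below are illustrative assumptions, not the actual INFN-GRID middleware API; ranking by free CPU slots is a crude stand-in for the real optimization algorithms.

```python
# Minimal sketch of WMS-style matchmaking (hypothetical records, not the
# real INFN-GRID API): keep the compute elements (CEs) that satisfy a
# job's requirements, then rank the candidates by free CPU slots.

def match_resource(job, compute_elements):
    """Return the best-ranked CE for the job, or None if none qualifies."""
    candidates = [
        ce for ce in compute_elements
        if ce["free_slots"] >= job["cpus"] and job["os"] in ce["os_supported"]
    ]
    # Rank by free slots, descending: a crude stand-in for the scheduler's
    # real load-balancing logic.
    return max(candidates, key=lambda ce: ce["free_slots"], default=None)

ces = [
    {"name": "ce01.example.it", "free_slots": 4,  "os_supported": ["SL3"]},
    {"name": "ce02.example.it", "free_slots": 12, "os_supported": ["SL3", "SL4"]},
]
job = {"cpus": 2, "os": "SL4"}
best = match_resource(job, ces)  # selects ce02.example.it
```

In the real middleware the job requirements are expressed in JDL and matched against the Information Index; the sketch only shows the shape of the decision, not the protocol.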

8 The S.Co.P.E. Project – The Procurement. Integration of the existing computational and storage resources with new high-performance systems.

9 The S.Co.P.E. Project – The governance. On-site technical support for system monitoring, fault recovery and low-level management.

10 The Procurement – Goals. Integrate the existing hardware; support for heterogeneous platforms is needed; implement an all-inclusive, ready-to-use system.

11 The Procurement – The requirements. The supplied hardware must support the chosen operating system, the grid middleware and the scientific applications: Scientific Linux (SL) 3.x / 4.x; INFN-GRID (latest available release); the scientific libraries.

12 The Procurement – The location. The new hardware will be installed at a central site in the Campus Grid.

13 The Procurement – Supplies. The procurement is organized into four main supplies: (1) servers; (2) power supply, electrical and security systems; (3) computational and storage systems, with software implementation; (4) network systems and cabling.

14 The Procurement – Supply no. 1: servers. Management server units; disk pool (S.A.N.).

15 The Procurement – Supply no. 2: site preparation. Power supply systems, security systems, environmental systems, air conditioning systems, electrical panels.

16 The Procurement – Supply no. 3: computational and storage systems, software implementation. Cluster of dual-processor nodes for high-performance and high-throughput applications; SMP system for strongly coupled applications; storage system in a grid environment; racks and cabling; environmental monitoring systems; software implementation and systems integration.

17

18 The Procurement – Supply no. 4: network systems and cabling. Wired or wireless link to the site in Coroglio; wired link to the engineering departments in Agnano; wireless link to the Astronomical Observatory of Capodimonte; network systems.

19 The Procurement – Network supply. [Diagram: optical-fiber network linking the Campus Grid, Medicine, CSI, Engineering, the Astronomical Observatory, the new Engineering site and the Coroglio site.]

20 The prototype for the ATLAS Tier-2. [Diagram: the hierarchical model: CERN Tier-0; Tier-1 (national and international "regional" centres); Tier-2 ("regional" centres; for the ATLAS experiment: RM1, NA); Tier-3/4 (departments and institutes); each level with data servers, CPU servers and desktops.]

21 The prototype for the ATLAS Tier-2 – existing resources:

Project            | Proc.      | N. proc.     | GHz | Storage | O.S.                 | s/w | grid m/w
INFN-GRID.IT       | Intel      | 19 bi-proc   | 2.8 | 1 TB    | RH7.3 / SciLinux     | PBS | GT2, LCG
INFN-ATLAS         | Intel      | 17 bi-proc   | 2.8 | 8 TB    |                      |     |
INFN-AstroParticle | Intel      | 8 bi-proc    | 2.8 |         |                      |     |
INFN-BABAR-CMS     | Intel      | 10 bi-proc   | 2.8 |         |                      |     |
ASTRO              | AMD        | 9 bi-proc    |     | TB      | Gentoo / SciLinux    | PBS |
AMRA               | AMD        | 99 bi-proc   |     | TB      | Slackware / SciLinux | PBS |
DSF-INFM           | Alpha EV7  | 16 bi-proc   | 1.2 | 140 GB  | Tru64                | LSF |
DiChi-Bioteknet    | Alpha EV68 | 10 quad-proc | 1.4 | 1 TB    | Tru64                | LSF |
DiChi-LSDM         | Alpha DS20 | 4 bi-proc    |     |         |                      |     |
INSTM              | Opteron    | 12 bi-proc   | 2.4 |         |                      |     |
CNR-SPACI          | Itanium2   | 68 bi-proc   | 1.5 | 1.2 TB  | Linux-HP             | LSF |

22 [Diagram: the Tier-2 ATLAS room. Four monitoring PCs: PC1, environmental monitoring (T, HV, UPS); PC2, s/w monitoring (servers, storage); PC3, h/w monitoring (operating systems); PC4, grid monitoring (infrastructure, services). A server with a high-level GUI and database; NAS backup system; 60 kVA UPS; two Rittal racks with CMC units, each holding 8 dual-core bi-processor nodes; 20 TB of storage; cooling unit and switch inside, chiller outside; fire-suppression system; connection to the Campus Grid network.]
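The environmental-monitoring role of PC1 in the diagram above amounts to comparing sensor readings against allowed ranges and raising alarms. A minimal sketch follows; the thresholds, sensor names and readings are illustrative assumptions, not values from the project.

```python
# Sketch of an environmental-monitoring check like the one run by PC1:
# compare sensor readings against allowed ranges and report alarms.
# All thresholds and sensor names are illustrative assumptions.

THRESHOLDS = {
    "temp_c": (15.0, 27.0),        # machine-room temperature, Celsius
    "humidity_pct": (30.0, 60.0),  # relative humidity, percent
}

def alarms(readings):
    """Return the sensors whose readings fall outside their allowed range."""
    out = []
    for sensor, value in readings.items():
        low, high = THRESHOLDS[sensor]
        if not (low <= value <= high):
            out.append(sensor)
    return out

triggered = alarms({"temp_c": 31.5, "humidity_pct": 45.0})  # ["temp_c"]
```

In practice such a loop would poll the rack CMC units periodically and notify the operators; the sketch only shows the threshold logic.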