Portuguese Grid Infrastructure(s)
Gonçalo Borges
Jornadas LIP 2010, Braga, January 2010

2 Portuguese EGEE Infrastructure
8 sites:
– LIP-Lisboa, LIP-Coimbra
– NCG-INGRID-PT
– UPorto (3 clusters)
– UMINHO-CP, DI-UMINHO
– IEETA
– CFP-IST
Resources:
– 2500 job slots
– Hundreds of terabytes of storage space
Support for more than 20 VOs:
– WLCG, EGEE, EELA
– INGRID, IBERGRID

RCTS
10 Gbps link between Lisbon and Porto
– ~1 Gbps to other important regions
Important improvements are foreseen:
– Better international high-speed connectivity
  o New links through Spain are (almost) operational: Minho and Galicia, Spanish Extremadura, at 5 Gbps
– Better GÉANT connectivity
– Better redundancy
  o Ring between both countries at 5 Gbps

4 INGRID overview
INGRID: Iniciativa Nacional GRID, the Portuguese National Grid Initiative
– Push for resource sharing in Portugal
– Follows the path of EGI
– Helps to fulfil the Portuguese responsibilities in the framework of present European projects
  o EGEE, WLCG, IBERGRID, ...
INGRID Management Committee:
– FCT, UMIC and LIP (technical coordination)
Steps towards INGRID deployment:
– Selection of 13 pilot applications
  o HEP, weather forecast, civil protection, ...
– Deployment of a dedicated data centre to host resources
  o Located at LNEC
  o Works as the INGRID seed / core infrastructure

5 INGRID main node
Computer room area: 370 m²
– 85 cm raised floor
Electrical power:
– 1st step: 1000 kVA
– 2nd step: 2000 kVA
Protected power:
– 5x 200 kVA UPS
– Diesel generator
Chilled water cooling:
– Chillers with free-cooling (2x 375 kW)
– Close-control units (6x 150 kW + 47 kW)
Fire detection:
– Very early warning smoke detection
– Fire extinction system being installed

6 INGRID main node
Central 10 Gigabit Ethernet switch with separate VLANs: Grid Cluster, Core Services, Site Services, iSCSI Arrays, Support Services, User Services
– External connectivity via routers: 3.5 Gbps to the Internet, 10 Gbps towards LIP-Lisbon + LIP-Coimbra
Different VLANs bring scalability, security and easier management
– Inbound/outbound traffic to the local grid farm passes through 2 Linux boxes (FW/NAT 1 and FW/NAT 2)
  o FW/NAT servers connected to the central switch at 10 Gbps (a NAT configuration sketch follows below)
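The slides only state that two Linux boxes provide firewalling and NAT for the farm; they do not show the actual configuration. Below is a minimal sketch of what such a gateway typically does, assuming iptables-based masquerading, a hypothetical uplink interface eth1 and a hypothetical 10.0.0.0/16 farm subnet (none of these values come from the talk):

    #!/usr/bin/env python3
    # Sketch of a Linux FW/NAT gateway setup (all names and subnets are assumptions).
    import subprocess

    WAN_IF = "eth1"           # assumed 10 Gbps uplink towards the central switch
    FARM_NET = "10.0.0.0/16"  # assumed private subnet of the grid farm VLAN

    def run(cmd):
        print("+", " ".join(cmd))
        subprocess.run(cmd, check=True)

    # enable packet forwarding between the farm VLAN and the outside
    run(["sysctl", "-w", "net.ipv4.ip_forward=1"])
    # hide outbound farm traffic behind the gateway address (NAT)
    run(["iptables", "-t", "nat", "-A", "POSTROUTING",
         "-s", FARM_NET, "-o", WAN_IF, "-j", "MASQUERADE"])
    # allow return traffic and traffic originating in the farm, drop the rest
    run(["iptables", "-A", "FORWARD", "-m", "state",
         "--state", "ESTABLISHED,RELATED", "-j", "ACCEPT"])
    run(["iptables", "-A", "FORWARD", "-s", FARM_NET, "-j", "ACCEPT"])
    run(["iptables", "-P", "FORWARD", "DROP"])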

7 INGRID main node
Core 10 Gigabit Ethernet switch connecting:
– Computing blade centres (SGE cluster)
– Storage (Lustre + StoRM)
– Support services blade centres
– Network firewall
(A job submission sketch for the SGE cluster follows below.)
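The computing blade centres form an SGE (Grid Engine) batch cluster. Purely as an illustration of how work reaches such a cluster, here is a small submission sketch using the DRMAA Python binding; the payload command, the grid.q queue name and the availability of the drmaa module on the submit host are assumptions:

    #!/usr/bin/env python3
    # Sketch: submit one job to an SGE cluster through the DRMAA Python binding.
    import drmaa

    session = drmaa.Session()
    session.initialize()
    try:
        jt = session.createJobTemplate()
        jt.remoteCommand = "/bin/hostname"    # hypothetical payload
        jt.nativeSpecification = "-q grid.q"  # assumed queue name
        job_id = session.runJob(jt)
        print("submitted job", job_id)
        # block until the job finishes and report its exit status
        info = session.wait(job_id, drmaa.Session.TIMEOUT_WAIT_FOREVER)
        print("job", job_id, "finished, exit status", info.exitStatus)
        session.deleteJobTemplate(jt)
    finally:
        session.exit()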

8 Core Services Fault Tolerance
Redundant services distributed between 2 blade centres
– Solution based on Xen virtual machines (a guest configuration sketch follows below)
  o Xen images kept on storage accessible via Internet SCSI (iSCSI)
  o Controlled by the OCFS2 shared cluster file system, which guarantees full data coherence and data integrity for simultaneous accesses from multiple hosts
Diagram: the Xen image repository lives on iSCSI arrays 1 to 6 (each with 12x 1 TB SATA disks, RAID 10, 6 LUNs, 2x 512 MB iSCSI controllers), reached through the central 10 Gigabit Ethernet switch and mounted on the blade centres as an OCFS2 filesystem.
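Because the guest images sit on the shared iSCSI/OCFS2 storage, a core-service VM can be started on either blade centre. A hypothetical Xen (xm-style) guest definition along those lines is sketched below; the name, memory size, image paths and bridge are illustrative and not taken from the slides (xm configuration files use Python syntax):

    # /etc/xen/core-service.cfg -- hypothetical Xen guest definition (Python syntax)
    name       = "core-service"
    memory     = 2048                      # MB of RAM for the guest (assumed)
    vcpus      = 2
    bootloader = "/usr/bin/pygrub"         # boot the guest's own SL5 kernel
    disk       = [
        "file:/xen/images/core-service/root.img,xvda1,w",  # assumed path on the OCFS2 volume
        "file:/xen/images/core-service/swap.img,xvda2,w",
    ]
    vif        = ["bridge=xenbr0"]         # assumed bridge onto the Core Services VLAN
    on_crash   = "restart"                 # restart the guest automatically if it crashes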

9 Computing (INGRID main node)
Core services (failures have a critical impact on infrastructure availability):
– 2 different blade centres with server direct-attached storage
– Redundant power supplies, management and network connectivity
– IBM blades running SL5 x86_64 Xen Dom0 kernels:
  o 24 LS22 IBM blades (2 quad-core AMD Opteron CPUs)
  o 2 Gigabit Ethernet interfaces; Intelligent Platform Management Interface (version 2.0, see the sketch below)
  o 146 GB of local storage on 2 SAS disks in RAID mirror
  o 192 cores, 3 GB of RAM per core (24 GB RAM / blade)
High Throughput Computing servers:
– Blade centres hosting 12 / 14 blades running SL5 x86_64:
  o 100 LS22/HS21 IBM blades (2 quad-core AMD Opteron CPUs)
  o 42 ProLiant BL460c G6 HP blades (2 quad-core Intel Xeon 2.67 GHz)
  o 1136 cores, 3 GB of RAM per core (24 GB RAM / blade)
High Performance Computing servers:
– IBM BladeCenter H with InfiniBand switch, running SL5 x86_64:
  o 14 LS22 IBM blades (2 quad-core AMD Opteron CPUs)
  o 20 Gbps Double Data Rate Host Channel Adapters
  o 112 cores, 4 GB of RAM per core (32 GB RAM / blade)
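The blades expose IPMI 2.0 for out-of-band management. As a generic illustration only (the BMC host names, credentials and the use of ipmitool are assumptions, not an INGRID procedure), the power state of a set of blades can be polled like this:

    #!/usr/bin/env python3
    # Sketch: poll blade power state over IPMI 2.0 with ipmitool (hypothetical hosts).
    import subprocess

    BLADES = ["blade01-ipmi", "blade02-ipmi"]   # assumed BMC host names
    USER, PASSWORD = "admin", "secret"          # placeholder credentials

    for bmc in BLADES:
        result = subprocess.run(
            ["ipmitool", "-I", "lanplus", "-H", bmc,
             "-U", USER, "-P", PASSWORD, "chassis", "power", "status"],
            capture_output=True, text=True)
        status = result.stdout.strip() or result.stderr.strip()
        print(f"{bmc}: {status}")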

10 Storage (INGRID main node)
Storage servers and expansion boxes:
– 13 IBM x3650 servers running SL5 x86_64
  o 2 quad-core Intel Xeon L-series CPUs
  o 2 SAS disks (74 GB) deployed in RAID mirror
  o Each server has 40 TB of effective storage attached (see the capacity sketch below)
    - 2 LSI MegaRAID controllers (linking to the expansion boxes)
    - Expansion boxes in RAID 5 volumes with 1 TB SATA-II disks
  o Total of ~620 TB of online grid storage space
  o Storage network interfaces:
    - Two built-in 10/100/1000BASE-T Broadcom Gigabit Ethernet
    - One NetXen 10 Gigabit Ethernet PCIe interface
– 2 HP DL380 G6 servers running SL5 x86_64
  o Expansion boxes with 4x 12 SAS disks (450 GB, 15K RPM)
  o 10 Gigabit Ethernet: 2 NetXen 10 Gigabit Ethernet PCIe interfaces
– 2 HP DL360 G5 servers dedicated to running gridftp services
– Grid access enabled via the StoRM SRM interface
– Sun Microsystems' Lustre cluster shared file system
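A quick sanity check of the quoted capacities: slide 11 labels each x3650 with 4x 12 TB expansions, i.e. four boxes of 12x 1 TB disks. The exact RAID 5 volume layout per box is an assumption, so the sketch below is only indicative of how raw capacity turns into the quoted effective figures:

    #!/usr/bin/env python3
    # Sketch: rough capacity accounting for the IBM x3650 storage servers.
    # Assumes one RAID 5 volume per 12-disk expansion box (layout not stated on the slides).

    def raid5_usable_tb(disks: int, disk_tb: float) -> float:
        # RAID 5 sacrifices one disk's worth of capacity for parity
        return (disks - 1) * disk_tb

    DISKS_PER_BOX = 12     # 12x 1 TB SATA-II disks per expansion box
    BOXES_PER_SERVER = 4   # "4x 12 TB expansions" per server (slide 11)
    IBM_SERVERS = 13

    per_box = raid5_usable_tb(DISKS_PER_BOX, 1.0)
    per_server = per_box * BOXES_PER_SERVER
    print(f"usable per expansion box: {per_box:.0f} TB")
    print(f"usable per server:        {per_server:.0f} TB  (quoted: 40 TB effective)")
    print(f"13 IBM servers:           {per_server * IBM_SERVERS:.0f} TB  "
          f"(quoted total, incl. HP servers: ~620 TB)")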

11 INGRID main node hardware (photos)
Labels: IBM BladeCenter E (computing), IBM BladeCenter E (core services), IBM System x3650 servers with 4x 12 TB expansions, iSCSI (SAN), HP BladeCenter C7000 with ProLiant BL460c G6 blades, HP ProLiant DL380 G6.