GRID and ROC-LA
Javier Magnin, Brazilian Center for Research in Physics (CBPF) & ROC-LA
Workshop in Physics and Technology at CERN, Bogota, Oct. 20-22, 2010

Outline
The concept of GRID
Different GRID “flavors”
The Worldwide LHC Computing GRID: WLCG
The ROC-LA
How to become a site member of ROC-LA
The typical structure of a site under ROC-LA
ROC-LA outreach
Conclusions

The concept of GRID
The Web is a service for sharing information over the Internet; the Grid is a service for sharing computing power and data storage capacity over the Internet.

The concept of GRID
The five big ideas:
Resource sharing
Secure access
Resource use
The death of distances
Open standards

The concept of GRID
Isolated resources (diagram: separate clusters, not yet interconnected)

The concept of GRID (diagram: interconnected clusters)
Resource sharing → direct access to remote computers, software and data
Death of distances → high-speed connections between computers
Open standards → applications made to run on one resource will run on all others

The concept of GRID (diagram: interconnected clusters)
Security: digital certificates to identify clusters in the GRID
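As an illustration of how certificate-based identity works in practice, the sketch below inspects a grid user certificate with openssl. It is only an example: the certificate path follows the usual ~/.globus convention and is not specific to ROC-LA.

```python
# Illustrative sketch only: inspect a grid user certificate with openssl to
# see the identity (subject DN), the issuing CA and the validity window.
# The path follows the usual ~/.globus convention; it is not ROC-LA-specific.
import os
import subprocess

def describe_certificate(cert_path="~/.globus/usercert.pem"):
    """Print the subject, issuer and validity dates of an X.509 certificate."""
    result = subprocess.run(
        ["openssl", "x509", "-in", os.path.expanduser(cert_path),
         "-noout", "-subject", "-issuer", "-dates"],
        capture_output=True, text=True, check=True,
    )
    print(result.stdout)

if __name__ == "__main__":
    describe_certificate()
```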

The concept of GRID (diagram: clusters and virtual organizations)
Secure access → access policy, authentication and authorization
Resource use → you should be able to calculate the optimal allocation of resources

The concept of GRID
VO (Virtual Organization): a group of people sharing common interests.
Individuals are identified by digital certificates.
Individuals belonging to a given VO are granted access to the resources supporting that VO.
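A minimal sketch of how a VO member typically proves membership before using grid resources, assuming the classic VOMS client tools are installed; the VO name and proxy lifetime are placeholders, not a prescription from ROC-LA.

```python
# Minimal sketch, assuming the classic VOMS client tools (voms-proxy-init,
# voms-proxy-info) are installed and a user certificate is in place.
# The VO name "lhcb" and the 12-hour lifetime are placeholders.
import subprocess

def create_vo_proxy(vo="lhcb", lifetime="12:00"):
    """Create a short-lived proxy carrying the user's VO membership attributes."""
    subprocess.run(["voms-proxy-init", "-voms", vo, "-valid", lifetime], check=True)
    # Show the VO, role and remaining lifetime embedded in the proxy.
    subprocess.run(["voms-proxy-info", "-all"], check=True)

if __name__ == "__main__":
    create_vo_proxy()
```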

The concept of GRID (diagram: ROC/NGI monitoring the clusters and VOs)
ROC/NGI monitoring services:
ROC/NGIs in the European GRID
Something else in the American GRID

The concept of GRID (diagram: the COD layer on top of the ROC/NGI monitoring structure)

The concept of GRID
In summary, the GRID is:
Hardware → a large number of computers across the world
Network → to interconnect the hardware
Middleware → the software which brings all the hardware together across the Internet to set up the GRID infrastructure
A set of usage rules → digital certificates, VOs, monitoring structures and goals, etc.
All of which adds up to a “virtual super computer”.
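To make the “virtual super computer” idea concrete, here is a hedged sketch of submitting a trivial job through the gLite workload management system; it assumes a gLite user interface with glite-wms-job-submit available and a valid VOMS proxy, and the JDL content and file names are illustrative only.

```python
# Hedged sketch of a trivial job submission through the gLite WMS.
# Assumes a gLite user interface and a valid VOMS proxy; the JDL content
# and file names are illustrative placeholders.
import subprocess

JDL = """\
Executable    = "/bin/hostname";
Arguments     = "-f";
StdOutput     = "out.txt";
StdError      = "err.txt";
OutputSandbox = {"out.txt", "err.txt"};
"""

def submit_test_job(jdl_path="hello_grid.jdl"):
    """Write a minimal job description and hand it to the workload manager."""
    with open(jdl_path, "w") as f:
        f.write(JDL)
    # -a delegates the user's proxy automatically for this single submission.
    subprocess.run(["glite-wms-job-submit", "-a", jdl_path], check=True)

if __name__ == "__main__":
    submit_test_job()
```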

Different GRID “flavors”
Several GRID initiatives exist around the world:
Europe: EGI, NorduGrid, WLCG, etc.
National projects: Open Science Grid (OSG) – USA, TeraGrid – USA, CNGrid – China, Garuda – India, National Grid Service – United Kingdom, INFN Grid – Italy, etc.

Different GRID “flavors”
They mostly differ in the middleware: gLite, UNICORE, ARC, dCache.org, Globus Toolkit, etc.

EMI (European Middleware Initiative):
Common middleware
Three-year project, started in May 2010
26 partners across Europe
Conversations with OSG

Different GRID “flavors”
EMI objectives:
Consolidate the middleware distribution, simplifying services and components
Evolve functionality following the requirements of the community
Improve usability
Improve security
Standardization: interoperability, service integration
Integration with new technologies: messaging (for monitoring, accounting, service management, etc.) and virtualization (usage of virtual machines)

The Worldwide LHC Computing GRID: WLCG
WLCG is a collaboration linking grid infrastructures and computer centers worldwide
More than 130 computing centers in 34 countries
210,190 CPU cores (Oct. 2010) available to process and analyze the data produced at the LHC
Large storage capacity (several PB)
Equally available to all partners, regardless of their physical location
Individual institutions contribute to WLCG with hardware and human resources

The Worldwide LHC Computing GRID: WLCG
WLCG consists of three main layers, or Tiers, made up of computer centers:
Tier-0: one site (the CERN computing center). All data from the LHC passes through this central hub. It contributes less than 20% of the total computing power.
Tier-1: 11 large computer centers with large storage capacity and round-the-clock support. Used for processing of raw data, data analysis and data storage.
Tier-2: about 160 smaller computing centers with enough computing power and storage for data analysis and MC generation.

The Worldwide LHC Computing GRID: WLCG
WLCG consists of three main layers, or Tiers, made up of computer centers (diagram of the Tier structure)

The Worldwide LHC computing GRID: WLCG

The ROC-LA
Created as a joint effort by CBPF, ICN-UNAM and UNIANDES
A response to the end of ROC-CERN as a catch-all ROC and the absence of a ROC infrastructure in LA
ROC-LA operations started on Sept. 30, 2009
CBPF, ICN-UNAM and UNIANDES were under ROC-CERN until Sept. 30, 2009
ROC-LA technical team trained at CERN for 6 months in 2009 (financial support from WLCG and the host institutions)
Fully funded by the host institutions

The ROC-LA
Structure of ROC-LA
Management Board: Dr. Carlos Avila (UNIANDES), Dr. Javier Magnin (CBPF), Dr. Lukas Nellen (ICN-UNAM)
Technical Board: Eng. Luciano Diaz (ICN-UNAM), Eng. Andrés Olguin (UNIANDES), Eng. Renato Santana (CBPF)
Technical support team provided by the host institutions

The ROC-LA
Objectives of ROC-LA:
To provide a GRID infrastructure for LA (HEP and non-HEP sites)
To support the GRID infrastructure by certifying, testing and monitoring sites in Latin America
Services provided by ROC-LA:
Information → through the central GOCDB, site-BDII, APEL
Support → GGUS tickets
Monitoring → Nagios, SAM tests, GSTAT
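The toy probe below only illustrates the idea behind the Nagios/SAM checks listed above: it verifies that two typical site services answer on their usual ports. The host names are placeholders, the CREAM CE port is an assumption, and this is in no way the production monitoring framework.

```python
# Toy availability probe, only to illustrate the idea behind the Nagios/SAM
# checks: verify that two typical site services answer on their usual ports.
# Host names are placeholders (not real ROC-LA endpoints).
import socket

SERVICES = {
    "site-BDII": ("bdii.example-site.org", 2170),  # information system
    "CREAM CE":  ("ce.example-site.org", 8443),    # computing element (assumed port)
}

def probe(host, port, timeout=5.0):
    """Return True if a TCP connection to host:port succeeds within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    for name, (host, port) in SERVICES.items():
        print(f"{name:10s} {'OK' if probe(host, port) else 'FAILED'}")
```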

The ROC-LA
Responsibilities of ROC-LA:
Follow up and dispatch tickets addressed to ROC-LA
Review the backlog of tickets on a daily basis and take appropriate actions
Review alarms on a daily basis and take appropriate actions
Certify new sites and follow up sites in the certification process
Provide a support mailing list

The ROC-LA (map of sites: CBPF 312 cores, UNIANDES 216 cores, SAMPA 120 cores, ICN-UNAM, UTFSM 44 cores)

The ROC-LA
ROC-LA in numbers:
5 sites in production (WLCG), 750 cores: CBPF 312 cores, UNIANDES 216 cores, SAMPA 120 cores, ICN-UNAM 58 cores, UTFSM 44 cores
1 site in final tests (LNCC-COMCIDIS)
1 site in the certification process (UFRJ-NACAD)
Support to the VOs: LHCb, CMS, ALICE, ATLAS, biomed, fusion, Auger, oper.vo.eu-eela.eu, prod.vo.eu-eela.eu

How to become a site member of ROC-LA
First: send an e-mail to support@roc-la.org asking to become a site member, describing your hardware, network infrastructure and human resources
Also describe your needs in terms of the VOs your site will support
Sign the Letter of Agreement
Then: the ROC-LA Technical Board will start the process of certification and setup of your site

The typical structure of a site under ROC-LA
It depends on your needs/resources, but at least you will have:
A Computing Element (CE) → the portal of your site
A Storage Element (SE) → for data storage, sized depending on your needs (VO dependent)
A number of Worker Nodes (WNs) → for processing; typically a few (as many as you can!) servers with multi-core processors
RAM memory for the WNs → depending on the data you will have to process (VO dependent)
A system manager (of course!)
An Internet connection (of course!)
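To see what such a site publishes about itself, one can query its site-BDII. The sketch below is an assumption-laden example (placeholder host and site names, GLUE 1.3 schema, OpenLDAP ldapsearch client) rather than a ROC-LA procedure.

```python
# Hedged sketch: ask a site's BDII which computing elements it advertises and
# how many CPUs they expose. Host and site names are placeholders; it assumes
# the OpenLDAP "ldapsearch" client and a site publishing the GLUE 1.3 schema.
import subprocess

def list_computing_elements(bdii_host="bdii.example-site.org",
                            site_name="EXAMPLE-SITE"):
    """Print the GlueCEUniqueID and total CPU count of each published CE."""
    cmd = [
        "ldapsearch", "-x", "-LLL",
        "-H", f"ldap://{bdii_host}:2170",
        "-b", f"mds-vo-name={site_name},o=grid",
        "(objectClass=GlueCE)",
        "GlueCEUniqueID", "GlueCEInfoTotalCPUs",
    ]
    result = subprocess.run(cmd, capture_output=True, text=True, check=True)
    print(result.stdout)

if __name__ == "__main__":
    list_computing_elements()
```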

ROC-LA outreach
Home page: http://www.roc-la.org
I ROC-LA Workshop, CERN, Oct. 6-8, 2010: http://indico.cern.ch/conferenceDisplay.py?ovw=True&confId=108833
Annual workshop open to all ROC-LA users

Conclusions
GRID in LA shows slow but consistent growth
GRID activities in LA are mainly driven by the HEP community (the largest user in the region)
ROC-LA's commitment is to provide a sustainable infrastructure for the GRID community/users in LA: monitoring services, technical support, certification of sites/users
ROC-LA was born to support the HEP community, but is open to non-HEP users too
ROC-LA is financially supported by its host institutions