
Grid for CBM Kilian Schwarz, GSI

What is Grid? ● Sharing of distributed resources within one Virtual Organisation

LHC scientists worldwide – Europe: 267 institutes, 4603 users; other regions: 208 institutes, 1632 users

Start of CBM Grid ● There are considerations to start a CBM Grid ● Task: distributed MC production ● Potential sites: 3 (Bergen, Dubna, GSI) ● After positive experience, the Grid can be extended to more sites and tasks, such as distributed analysis

Requirements
* Globus-style X.509 user certificates issued for CBM by the GermanGrid CA
* How to get a certificate at GSI:
  > . globuslogin
  > grid-cert-request -cn " "
  The certificate request file and the private key will be stored in $HOME/.globus.
  The request file has to be signed (openssl) by the person responsible for the CA and mailed to the GermanGrid CA; the certificate will be mailed back by e-mail.
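A minimal sketch of the same steps as a shell session (the globuslogin script name is taken from the slide above; the request file name, the inspection step and the mail address are assumptions, and the common name is a placeholder):

  # set up the Globus environment at GSI (script name from the slide above)
  . globuslogin

  # create the certificate request; key pair and request end up in $HOME/.globus
  grid-cert-request -cn "Firstname Lastname"

  # optionally inspect the request with plain OpenSSL before sending it
  openssl req -in $HOME/.globus/usercert_request.pem -text -noout

  # mail the request to the responsible CA/RA contact (address is a placeholder)
  mail -s "CBM certificate request" ra-contact@example.org < $HOME/.globus/usercert_request.pem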

GermanGrid CA ● How to get a certificate in detail: see the GermanGrid CA web pages

Requirements: CBM VO server (one per VO) ● additional sites: Bergen, Dubna ● additional users: to be added
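For reference, a VO server of this kind can usually be queried with a standard LDAP client to check the registered members; a sketch assuming anonymous read access, with host, port and base DN copied from the glite-mkgridmap configuration on the following slide:

  # list the entries registered under the CBM VO branch
  # (endpoint and base DN taken from the glite-mkgridmap example below)
  ldapsearch -x -H ldap://glite001.gsi.de:8389 -b "o=cbm,dc=de,dc=de"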

Globus/LCG – creation of a grid-mapfile is necessary for each site
● E.g. with the gLite security tools:
- adjust $GLITE_LOCATION/etc/glite-mkgridmap.conf and add:
  group ldap://glite001.gsi.de:8389/o=cbm,dc=de,dc=de
- create the grid-mapfile:
  $GLITE_LOCATION/sbin/glite-mkgridmap --output=/etc/grid-security/grid-mapfile
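The resulting grid-mapfile maps certificate subjects to local accounts; an illustrative fragment (the DN is invented, and the pool name is inferred from the account names on the next slide):

  # /etc/grid-security/grid-mapfile -- one line per authorised certificate subject
  # pool-account mapping as used on EGEE/LCG sites (leading dot selects the cbmvo pool)
  "/O=GermanGrid/OU=GSI/CN=Some User" .cbmvo
  # direct mapping to a single production account on a plain Globus or AliEn site
  "/O=GermanGrid/OU=GSI/CN=Some User" cbmprod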

User creation on each site (support of the CBM VO) ● Each site has to create CBM user IDs onto which the Grid users will be mapped: ● EGEE/LCG: a certain number of pool accounts, e.g. cbmvo00 – cbmvo10 ● Globus & AliEn: one production user via whose ID the jobs will be submitted, e.g. cbmprod
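A sketch of how a site administrator might create these accounts (the group name, UID handling and shell are assumptions):

  # create a cbm group and the pool accounts cbmvo00 .. cbmvo10 (run as root)
  groupadd cbm
  for i in $(seq -w 0 10); do
      useradd -m -g cbm -s /bin/bash "cbmvo$i"
  done

  # single production account for Globus/AliEn-style submission
  useradd -m -g cbm -s /bin/bash cbmprod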

CBM software environment ● To be able to send real CBM jobs to the Grid, the participating sites have to * install the CBM software and prepare the environment, * or the job has to bring its own environment (statically linked binaries)
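A minimal wrapper illustrating the second option; the site installation path, environment script and binary names are placeholders, not part of the setup described in these slides:

  #!/bin/sh
  # use a site installation of the CBM software if one is available,
  # otherwise fall back to a statically linked binary shipped in the job's input sandbox
  if [ -f /opt/cbm/env.sh ]; then        # hypothetical site installation
      . /opt/cbm/env.sh
      cbmsim sim.C                       # placeholder executable and macro
  else
      chmod +x ./cbmsim_static
      ./cbmsim_static sim.C              # binary brought along with the job
  fi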

Agreement on a common Grid middleware ● Basically, the possibilities are: - Globus - NorduGrid - LCG-2 - AliEn - gLite (EGEE) - gLite (AliEn)

LHC Computing Grid Project
● Fundamental goal of the LCG: to help the experiments' computing projects
– Phase 1: prepare and deploy the environment for LHC computing
– Phase 2: acquire, build and operate the LHC computing service
● SC2 – Software & Computing Committee
– includes the four experiments and the Tier 1 Regional Centres
– identifies common solutions and sets requirements for the project
● PEB – Project Execution Board
– manages the implementation
– organising projects, work packages
– coordinating between the Regional Centres

EDG Middleware Architecture (layer diagram)
● Applications: local application, local database; grid application layer with data management, job management, metadata management, service index
● Collective services: information & monitoring, replica manager, grid scheduler
● Underlying Grid services: computing element services, storage element services, replica catalog, authorization/authentication and accounting, SQL database services
● Fabric services: configuration management, node installation & management, monitoring and fault tolerance, resource management, fabric storage management
● Middleware built on Globus and Condor-G (via VDT)

Dubna (JINR): LCG-2 site

Dubna (JINR): LCG-2 site ● LCG tests mostly successful

JINR (LCG-2 site): job submission
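For illustration, job submission on an LCG-2 user interface typically uses a small JDL file (here called hello.jdl, a made-up example) together with the standard LCG-2 user tools:

  Executable    = "/bin/hostname";
  StdOutput     = "std.out";
  StdError      = "std.err";
  OutputSandbox = {"std.out", "std.err"};

  # submit the job to the CBM VO, then check it and fetch the output sandbox
  # (<jobID> is the identifier printed by edg-job-submit)
  edg-job-submit --vo cbm hello.jdl
  edg-job-status <jobID>
  edg-job-get-output <jobID>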

Timeline ● After only 2 years of development, we have deployed a distributed computing environment which meets the needs of the ALICE experiment: simulation & reconstruction, event mixing, analysis ● Using Open Source components (representing 99% of the code), internet standards (SOAP, XML, PKI, …) and a scripting language (Perl) was the key element that allowed quick prototyping and very fast development cycles ● Milestones: first production (distributed simulation), 10% DC (analysis) (P. Buncic, CERN)

Building AliEn (P. Saiz, CERN)

AliEn Grid (ALICE VO): ● 77 configured sites worldwide

DC Monitoring ● MonALISA

lxts05.gsi.de: AliEn client (PANDA VO)

JINR and Bergen: AliEn sites

Grids and Open Standards (evolution towards increased functionality and standardization over time)
● App-specific services and custom solutions
● De facto standards – Globus Toolkit; GGF: GridFTP, GSI, X.509, LDAP, FTP, …
● Web services
● Open Grid Services Architecture – GGF: OGSI, … (+ OASIS, W3C); multiple implementations, including the Globus Toolkit

Architecture Guiding Principles
● Lightweight (existing) services
– Easily and quickly deployable
– Use existing services where possible as basis for re-engineering
● Interoperability
– Allow for multiple implementations
● Resilience and fault tolerance
● Co-existence with deployed infrastructure
– Run as an application (e.g. on LCG-2, Grid3)
– Reduce requirements on site components: basically Globus and SRM
– Co-existence (and convergence) with LCG-2 and Grid3 is essential for the EGEE Grid service
● Service-oriented approach
– WSRF is still being standardized; no mature WSRF implementations exist to date and there is no clear picture of the impact of WSRF, hence: start with plain WS
– WSRF compliance is not an immediate goal, but we follow the WSRF evolution
– WS-I compliance is important

Approach
● Exploit experience and components from existing projects
– AliEn, VDT, EDG, LCG, and others
● Design team works out architecture and design
– Architecture:
– Design:
● Components are initially deployed on a prototype infrastructure
– Small scale (CERN & Univ. Wisconsin)
– Get user feedback on service semantics and interfaces
● After internal integration and testing, components are delivered to SA1 and deployed on the pre-production service
(diagram: components from EDG, VDT, LCG and AliEn flow into EGEE)

gLite (AliEn) * From now on used by ALICE for globally distributed analysis in connection with PROOF (see also: PROOF at GSI)

gLite (EGEE) * Will replace LCG-2.x in the near(?) future, but nobody has real experience with it yet

Summary (middlewares)
● LCG-2: GSI and Dubna
– pro: large distribution, support
– contra: difficult to set up, no distributed analysis
● AliEn: GSI, Dubna, Bergen
– pro: in production since
– contra: uncertain future, no support
● Globus 2: GSI, Dubna, Bergen?
– pro/contra: simple but functioning (no RB, no FC, no support)
● gLite/GT4: new on the market
– pro/contra: nobody has production experience yet (gLite)

lxg01-05.gsi.de ● LCG test installation, visible in the LCG pre-production testbed ● Trying to port LCG to Debian Linux