An Introduction to the Grid ("Eine Einführung ins Grid"). Andreas Gellrich, IT Training, DESY Hamburg, 28.10.2008.

Presentation transcript:

An Introduction to the Grid. Andreas Gellrich, IT Training, DESY Hamburg

Andreas Gellrich, DESY: The Power Grid

Andreas Gellrich, DESY: The Grid Dream
[Diagram: Grid middleware connects distributed resources: visualization, supercomputers and PC clusters, data storage, sensors and experiments, linked via the Internet and networks, with access from desktop and mobile devices.]

Andreas Gellrich, DESY: Grid Computing
Grid Computing is about the virtualization of global resources: transparent access to globally distributed resources such as data and compute cycles.
A Grid infrastructure consists of services to access resources and, of course, of the resources themselves.
- In contrast to classical distributed computing, Grid resources are not centrally controlled.
- Hence it is mandatory to use standard, open, general-purpose protocols and interfaces.
- A Grid must deliver nontrivial qualities of service.
In general, Grid infrastructures are generic, without dependencies on particular applications or experiments.

Andreas Gellrich, DESY: Grid Types
- Data Grids: provide transparent access to data which can be physically distributed within Virtual Organizations (VOs).
- Computational Grids: allow for large-scale compute resource sharing within Virtual Organizations (VOs).
- (Information Grids): provide information and data exchange, using well-defined standards and web services.
Jobs are transient; data is persistent.

Andreas Gellrich, DESY: Grid Building Blocks
A Virtual Organization (VO) is a dynamic collection of individuals, institutions, and resources which is defined by certain sharing rules.
- A VO represents a collaboration.
- Users authenticate with personal certificates (authentication).
- Users are members of a certain VO (authorization).
- Certificates are issued by a Certification Authority (CA).
Grid infrastructure:
- Core services (mandatory per VO): VO membership services, Grid information services, workload management system.
- Resources, brought in by the partners (Grid sites).
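As an illustration of these building blocks (not part of the original slides): on a gLite User Interface the personal certificate and the VO membership are combined into a short-lived VOMS proxy. The VO name 'desy' is taken from the slide on DESY VOs further below; the command options are the usual gLite 3.x ones and may differ in other middleware releases.

  # Personal certificate, issued by a Certification Authority (CA), kept on the
  # User Interface (UI) as ~/.globus/usercert.pem and ~/.globus/userkey.pem

  # Authentication + authorization: create a short-lived proxy carrying the VO attributes
  voms-proxy-init -voms desy

  # Inspect the proxy: the certificate DN says who you are (authentication),
  # the VO/group/role attributes say what you are allowed to do (authorization)
  voms-proxy-info -all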

Andreas Gellrich, DESY: Overview (02-Feb-2007)

Andreas Gellrich, DESY: Services View
[Diagram: the user logs in to a UI (ssh) with certificates (usercert.pem / userkey.pem) and submits a job described in JDL to the WMS. Central Grid services: VOMS, LFC, the BDII/GIIS/GRIS information system with R-GMA, and grid-mapfile based authorization. Each site (Site1, Site2, Site3) provides a CE with a batch system and WNs (with VO-exclusive experiment software) plus an SE as data store; output is returned to the user.]
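To make the services view concrete, here is a sketch of the user's side of this chain on the UI. The file name myjob.jdl and its contents are a hypothetical example; the JDL attributes and the glite-wms-job-submit command are the standard gLite 3.x ones.

  # Contents of myjob.jdl, a minimal job description in JDL (hypothetical example):
  #   Executable          = "/bin/hostname";
  #   StdOutput           = "std.out";
  #   StdError            = "std.err";
  #   OutputSandbox       = {"std.out", "std.err"};
  #   VirtualOrganisation = "desy";

  # Submit from the UI: the WMS matches the JDL against the information system (BDII)
  # and forwards the job to a suitable CE, which runs it on a WN via the local batch system.
  glite-wms-job-submit -a myjob.jdl -o jobid.txt   # -a: automatic proxy delegation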

Andreas Gellrich, DESY: VO & Grid DESY
VOs hosted at DESY:
- Global: 'hone', 'ilc', ('xfel.eu'), 'zeus'
- Regional: 'calice', 'ildg'
- Local: 'desy', 'hermes', 'icecube'
VOs supported at DESY:
- Global: 'atlas', 'biomed', 'cms', 'lhcb', 'dteam', 'ops'
- Regional: 'dech', 'xray.vo.egee-eu.org'
Grid computing resources at DESY (CE):
- grid-ce3.desy.de: hosts all VOs
- hera-ce0.desy.de: hosts 'hone', 'hermes', 'zeus'
Grid storage resources at DESY (SE):
- [srm-dcache.desy.de]: O(50 TB) with tape backend
- dcache-se-desy.desy.de: O(50 TB) with tape backend
- dcache-se-atlas.desy.de: O(200 TB) with tape backend
- dcache-se-cms.desy.de: O(200 TB) with tape backend
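The CEs and SEs listed above are published through the Grid information system. A sketch of how a user would query them from the UI; lcg-infosites is the standard gLite query tool, and the BDII hostname in the LDAP query is a placeholder (port and base DN are the usual defaults).

  # List the computing and storage elements available to the 'desy' VO
  lcg-infosites --vo desy ce
  lcg-infosites --vo desy se

  # The same data can be read directly from a BDII via LDAP (GLUE schema);
  # replace some-bdii.example.org with a real top-level BDII host
  ldapsearch -x -H ldap://some-bdii.example.org:2170 -b o=grid '(GlueCEUniqueID=*grid-ce3.desy.de*)'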

Andreas Gellrich, DESY: Operations View
[Diagram: the same chain as in the services view, now for the DESY-HH site: the user logs in to the UI (ssh) with usercert.pem / userkey.pem and submits a JDL job to the WMS, which schedules it onto the DESY-HH CE running Torque/Maui as batch system; WNs execute the job (with VO-exclusive software), the SE acts as data store, and VOMS, LFC, the GRIS/GIIS/BDII information system with R-GMA and the grid-mapfile provide the central Grid services; output is returned to the user.]
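From the user's point of view the operations chain is completed by monitoring the job and retrieving its output; a sketch using the job-ID file written by the submission example above (file name hypothetical).

  # Follow the job on its way through WMS, CE and the Torque/Maui batch system
  glite-wms-job-status -i jobid.txt

  # Once the job has reached the Done status, fetch the OutputSandbox back to the UI
  glite-wms-job-output -i jobid.txt --dir ./myjob-output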

Andreas Gellrich, DESY: Jobs are transient; data is persistent.
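One way this principle shows up in practice: the (transient) job writes its result locally on the WN, and the data are made persistent by copying them to a Storage Element and registering them in the LFC file catalogue. A sketch using the lcg-utils commands; the SE name is taken from the DESY slide above, while the logical file name (LFN) path is hypothetical.

  # Copy a result file to the DESY SE and register it under a logical file name
  lcg-cr --vo desy -d dcache-se-desy.desy.de \
         -l lfn:/grid/desy/user/somebody/result.dat \
         file://$PWD/result.dat

  # Any later job (or the user on the UI) can retrieve the file by its LFN
  lcg-cp --vo desy lfn:/grid/desy/user/somebody/result.dat file://$PWD/result_copy.dat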

Andreas Gellrich, DESY: The LHC Computing Model
[Diagram of the tiered LHC computing model:
- Online system: one bunch crossing per 25 ns, 100 triggers per second, each event ~1 MByte; ~PBytes/sec off the detector, ~100 MBytes/sec into the offline farm (~20 TIPS).
- Tier 0: CERN Computer Centre, >20 TIPS; connected to the Tier 1 regional centres (GridKa, US, French, Italian) at ~Gbits/sec or by air freight.
- Tier 2: centres of ~1 TIPS each (e.g. DESY ~1 TIPS), connected at ~Gbits/sec.
- Tier 3: institutes (~0.25 TIPS) and workstations, connected at Mbits/sec; physicists work on analysis "channels", each institute has ~10 physicists working on one or more channels, and data for these channels should be cached by the institute server (physics data cache).
- 1 TIPS = 25,000 SpecInt95; a PC (1999) corresponds to ~15 SpecInt95.]