Future of AliEn | ALICE Offline Week | 18-21 June 2013 | Predrag Buncic

Overview
- Grid in ALICE
  - Dreaming about Grid (2001 – 2005)
  - Waking up (2005 – 2006)
  - Grid Future (2007+)
- Conclusions

Twelve years ago…

ALICE Grid (diagram): the AliEn stack sits between the User Interface and the VDT/OSG and EDG stacks. "Nice! Now I do not have to worry about the ever changing GRID environment…"

AliEn v1.0 (2001)
New approach
- Using standard protocols and widely used Open Source components
- Interface to many Grids
End-to-end solution
- SOA (Service Oriented Architecture)
  - SOAP/Web Services (18)
    - Core Services (Brokers, Optimizers, etc.)
    - Site Services
    - Package Manager
    - Other (non-Web) services (LDAP, database proxy, POSIX I/O)
- Distributed file and metadata catalogue (see the toy model below)
- API and a set of user interfaces
Used as the production system for ALICE since the end of 2001
- Survived 5 software years
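
As a rough illustration of the catalogue idea only (not the actual AliEn schema), a minimal sketch could map a logical file name to its physical replicas plus free-form metadata; all structure names, paths and fields below are assumptions.

```cpp
// Toy model of a file and metadata catalogue (illustrative only, not the real
// AliEn schema): a logical file name maps to physical replicas on storage
// elements plus key/value metadata that queries can filter on.
#include <iostream>
#include <map>
#include <string>
#include <vector>

struct FileEntry {
    std::vector<std::string> replicas;            // physical locations (SE URLs)
    std::map<std::string, std::string> metadata;  // e.g. run number, data type
};

int main() {
    std::map<std::string, FileEntry> catalogue;   // key: logical file name (LFN)

    catalogue["/alice/sim/2001/run7/galice.root"] = {
        {"root://se01.cern.ch//castor/alice/galice.root"},
        {{"run", "7"}, {"type", "simulation"}}};

    // A "metadata query": find all LFNs tagged as simulation output.
    for (const auto& [lfn, entry] : catalogue)
        if (entry.metadata.count("type") && entry.metadata.at("type") == "simulation")
            std::cout << lfn << " -> " << entry.replicas.front() << "\n";
}
```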

Technology matrix
- Globus: 2.x, 3.x (OGSA), 4.x (WS)
- EDG: *.x
- LCG: 1.x, 2.x
- gLite: 0.x, 1.x/2.x, 3.x
- AliEn: 1.x (WS), 2.x (WS)
Protocols: proprietary protocol (Globus); OGSA = Open Grid Services Architecture (Globus); WS = Web Services (W3C)

Before: Globus model (diagram: Resource Broker and Sites A–F)
- Flat grid; each user interacts directly with site resources

AliEn model (diagram: Sites A–F grouped into V.O. #1–#3)
- Grid Service Provider (Supersite): hosts Core Services (per V.O.)
- Resource Provider (Site): hosts an instance of CE, SE services (per V.O.)
- Virtual Organisation: collection of Sites, Users & Services

Reducing M/W Scope (layer diagram)
- API
- Grid middleware
- Common Service Layer (description, discovery, resource access)
- Low-level network, message transport layer (TCP/IP -> HTTP -> SOAP)
- Baseline Services

AliEn v2.0
Implementation of the gLite architecture
- The gLite architecture was derived from AliEn
New API Service and ROOT API
- Shell, C++, Perl, Java bindings
Analysis support
- Batch and interactive
- ROOT/PROOF interfaces
- Complex XML datasets and tag files for event-level metadata
- Handling of complex workflows
New (tactical) SE and POSIX I/O
- Using the xrootd protocol in place of aiod (gLite I/O)
Job Agent model
- Improved job execution efficiency (late binding) – see the sketch below
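
To make the late-binding idea concrete, here is a minimal, self-contained sketch of a pull-style Job Agent loop; it is not AliEn code, and names such as fetchMatchingJob and the NodeInfo/JobSpec structures are invented for illustration.

```cpp
// Minimal late-binding Job Agent sketch (illustrative only, not AliEn code).
// The agent reports the worker node's actual capabilities and pulls a matching
// job from the central broker only once it is already running on the node.
#include <cstdlib>
#include <iostream>
#include <optional>
#include <string>

struct NodeInfo {            // what the agent reports to the broker
    std::string site;
    long free_disk_mb;
    int  free_slots;
};

struct JobSpec {             // what the broker hands back
    long        id;
    std::string command;     // payload to run under the VO identity
};

// Hypothetical broker call: in a real system this would be a request to the
// central Job Broker service; here it is stubbed out.
std::optional<JobSpec> fetchMatchingJob(const NodeInfo& node) {
    if (node.free_slots == 0) return std::nullopt;
    return JobSpec{42, "echo simulate-and-reconstruct"};
}

int main() {
    NodeInfo node{"CERN", 20000, 1};
    // Late binding: a job is assigned only now, when the slot is known good.
    while (auto job = fetchMatchingJob(node)) {
        std::cout << "running job " << job->id << ": " << job->command << "\n";
        std::system(job->command.c_str());   // execute the user payload
        node.free_slots = 0;                 // single-shot agent in this sketch
    }
    return 0;
}
```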

Distributed Analysis (workflow diagram)
1. A File Catalogue query selects the input data set (ESDs, AODs) for the user job (many events).
2. The Job Optimizer splits the job into sub-jobs 1…n, grouped by the SE location of the files (see the sketch below).
3. The Job Broker submits each sub-job to a CE with the closest SE.
4. The sub-jobs are processed at the CE/SE pairs, each producing an output file (output files 1…n).
5. A file-merging job collects the sub-job outputs into the final job output.
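
As an illustration of the splitting step only (not the actual Job Optimizer code), the sketch below groups catalogue entries by the storage element holding them, so that each sub-job can be sent to a CE close to its data; the file names and SE names are made up.

```cpp
// Illustrative splitting step: group catalogue entries by the storage element
// that holds them, so each sub-job runs close to its input data.
#include <iostream>
#include <map>
#include <string>
#include <vector>

struct CatalogueEntry {
    std::string lfn;   // logical file name from the file catalogue
    std::string se;    // storage element holding a replica
};

// One sub-job per storage element, carrying only the files stored there.
std::map<std::string, std::vector<std::string>>
splitBySE(const std::vector<CatalogueEntry>& files) {
    std::map<std::string, std::vector<std::string>> subjobs;
    for (const auto& f : files) subjobs[f.se].push_back(f.lfn);
    return subjobs;
}

int main() {
    std::vector<CatalogueEntry> files = {
        {"/alice/data/run1/AliESDs.root", "CERN::SE"},
        {"/alice/data/run2/AliESDs.root", "FZK::SE"},
        {"/alice/data/run3/AliESDs.root", "CERN::SE"},
    };
    for (const auto& [se, lfns] : splitBySE(files))
        std::cout << "sub-job for " << se << " with " << lfns.size() << " files\n";
    return 0;
}
```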

ROOT / AliEn UI (figure)
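
For readers unfamiliar with this interface, a minimal ROOT macro along the following lines connects to AliEn, queries the file catalogue and opens a file through the same API; the catalogue path is a made-up example, and a valid grid token plus the AliEn plugin are assumed.

```cpp
// rootAliEnExample.C -- minimal sketch of using AliEn from a ROOT session.
// The catalogue path below is a made-up example.
#include "TGrid.h"
#include "TGridResult.h"
#include "TFile.h"

void rootAliEnExample()
{
   // Connect to the AliEn API service; this sets the global gGrid pointer.
   if (!TGrid::Connect("alien://")) return;

   // Query the distributed file catalogue for ESD files under a given path.
   TGridResult *res = gGrid->Query("/alice/data/2013", "AliESDs.root");
   if (res) res->Print();

   // Open one catalogue entry directly; the alien:// protocol redirects the
   // read to an xrootd storage element.
   TFile *f = TFile::Open("alien:///alice/data/2013/run/AliESDs.root");
   if (f) f->ls();
}
```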

Data movement (xrootd)
Step 1: produced data is sent to CERN
- Up to 150 MB/s data rate (limited by the number of available CPUs) – ½ of the rate during Pb+Pb data export
- Total of 0.5 PB of data registered in CASTOR2 (300K files, 1.6 GB/file)

Data movement (FTS)
Step 2: data is replicated from CERN to the T1s
- Test of the LCG File Transfer Service
- Goal is 300 MB/s – the exercise is still ongoing

Next Step: AliEn v3.0
Addressing the issues and problems encountered so far and trying to guess the technology trends
- Scalability and complexity: we would like the Grid to grow, but how do we manage the complexity of such a system?
- Intra-VO scheduling: how to manage priorities within a VO, in particular for analysis?
- Security: how to reconcile a (Web) Service Oriented Architecture with growing security paranoia aimed at closing all network ports? How to fulfil the legal requirements for process and file traceability?
- Grid collaborative environment: how to work together on the Grid?

Grid economy model
Simple economy concept on top of the existing fair-share model
- Users pay (virtual) money for utilizing Grid resources
- Sites earn money by providing resources
- The more a user is prepared to 'pay' for job execution, the sooner it is likely to be executed (see the illustrative rule below)
Rationale
- To motivate sites to provide more resources with better QoS
- To make users aware of the cost of their work
Implementation (in AliEn v2-12)
- Lightweight Banking Service for Grid (LBSG)
  - Account creation/deletion
  - Funds addition
  - Funds transaction
  - Retrieval of transaction list and balance
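
The slide does not spell out how price and fair share are combined, so the following is only one possible rule, sketched under stated assumptions: the structure fields, the weighting factor and the function name are all invented, not the actual AliEn/LBSG algorithm.

```cpp
// Illustrative only: one possible way to fold a user's offered (virtual) price
// into an existing fair-share priority. Not the actual AliEn/LBSG rule.
#include <algorithm>
#include <iostream>

struct UserState {
    double fairShareTarget;   // fraction of VO resources the user should get
    double recentUsage;       // fraction actually consumed recently
    double offeredPrice;      // virtual credits offered per job
    double balance;           // credits left in the user's LBSG account
};

// Higher value -> scheduled earlier. Fair share dominates; the offered price
// only boosts jobs the user can actually pay for.
double jobPriority(const UserState& u) {
    double fairShare  = u.fairShareTarget - u.recentUsage;     // under-used users first
    double affordable = std::min(u.offeredPrice, u.balance);   // cannot bid beyond balance
    return fairShare + 0.1 * affordable;                       // weight is an assumption
}

int main() {
    UserState frugal{0.10, 0.02, 0.0, 50.0};
    UserState bidder{0.10, 0.02, 5.0, 50.0};
    std::cout << "frugal: " << jobPriority(frugal)
              << "  bidder: " << jobPriority(bidder) << "\n";  // bidder ranks higher
}
```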

Overlay Messaging Networks
Due to security concerns, any service that listens on an open network port is seen as very risky.
Solution
- We can use Instant Messaging protocols to create an overlay network and avoid opening ports
  - IM can be used to route SOAP messages between central and site services (see the sketch below)
    - No need for incoming connectivity on the site head node
  - It provides presence information for free
    - Simplifies configuration and discovery
- XMPP (Jabber)
  - A set of open technologies for streaming XML between two clients
    - Many open-source implementations
    - Distributed architecture
    - Clients connect to servers
    - Direct connections between servers
  - Jabber is used by Google IM/Talk
    - This channel could be used to connect grid users with Google collaborative tools
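
To make the routing idea concrete, here is a minimal, library-free sketch that wraps a SOAP envelope in an XMPP message stanza addressed to a site service; the JIDs, service names and operation are invented, and no real XMPP library is used.

```cpp
// Conceptual sketch: a SOAP request carried as the body of an XMPP <message>
// stanza, so the site service only needs an outgoing connection to the XMPP
// server. In a real deployment the payload would be XML-escaped or carried in
// a dedicated extension element rather than pasted into <body> verbatim.
#include <iostream>
#include <string>

std::string soapEnvelope(const std::string& op, const std::string& arg) {
    return "<soap:Envelope xmlns:soap=\"http://schemas.xmlsoap.org/soap/envelope/\">"
           "<soap:Body><" + op + ">" + arg + "</" + op + "></soap:Body>"
           "</soap:Envelope>";
}

std::string xmppMessage(const std::string& from, const std::string& to,
                        const std::string& payload) {
    return "<message from=\"" + from + "\" to=\"" + to + "\" type=\"normal\">"
           "<body>" + payload + "</body></message>";
}

int main() {
    // A central broker asks a site service for more job agents (invented names).
    std::string soap = soapEnvelope("requestJobAgents", "<count>10</count>");
    std::cout << xmppMessage("jobbroker@alien.example.org",
                             "clustermonitor.site-a@alien.example.org", soap)
              << "\n";
}
```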

Sandboxing and Auditing
The concept of VO identity is gaining recognition
- The model is accepted by OSG and exploited by the LHC experiments
- The VO acts as an intermediary on behalf of its users
  - Task Queue – repository for user requests (AliEn, DIRAC, PanDA)
  - Computing Element – requests jobs and submits them to the local batch system
- Recently this model was extended to the worker node
  - A Job Agent (pilot job) running under the VO identity on the worker node serves many real users
The next big step in enhancing Grid security would be to run the Job Agents (pilot jobs) within a Virtual Machine
- This can provide perfect process and file sandboxing
- Software which runs inside a VM cannot negatively affect the execution of another VM

The Big Picture
Layers: Virtual Cluster (User layer) on top of Virtual Grid (V.O. layer) on top of Physical Grid (Common layer)
Large "physical grid"
- Reliably execute jobs; store, retrieve and move files
An individual V.O. will, at a given point in time, have access to a subset of these resources
- Using standard tools to submit jobs (Job Agents as well as other required components of the VO grid infrastructure) to physical grid sites
- This way the V.O. 'upper' middleware layer creates an overlay, a grid tailored to V.O. needs but on a smaller scale
- At this scale, depending on the size of the VO, some of the existing solutions might be applicable
Individual users interacting with the V.O. middleware will typically see a subset of the resources available to the entire VO
- Each session will have a certain number of resources allocated
- In the most complicated case, users will want to interactively steer a number of jobs running concurrently on many Grid sites
- Once again an overlay (Virtual Cluster), valid for the duration of the user session
- AliEn/PROOF demo at SC05

Conclusions
ALICE has been using Grid resources to carry out production (and analysis) since 2001
At present, common Grid software does not provide a sufficient and complete solution
- Experiments, including ALICE, have developed their own (sometimes heavy) complementary software stacks
In ALICE, we are reaching a 'near production' level of service based on AliEn components combined with baseline LCG services
- Testing of the ALICE computing model with ever increasing complexity of tasks
- Seamless integration of interactive and batch processing models; strategic alliance with ROOT
- Gradual build-up of the distributed infrastructure in preparation for data taking in 2007
- Improvements of the AliEn software
  - Hidden thresholds are only uncovered under high load
  - Storage still requires a lot of work and attention
Possible directions
- Convergence of P2P and Web technologies
- Complete virtualization of distributed computational resources
- By layering the experiment software stack on top of a basic physical Grid infrastructure we can reduce the scale of the problem and make the Grid really work

7 years later we are almost there…

Grid performance
- Stable operation
- Average of 38.5K jobs running in parallel

And where are the others?

(Slides 25–31: figures only.)

For discussion…
- Unlike 7-8 years ago, we are not leading the pack
- We are operating a 24x7 system which is difficult to evolve
- We are losing people
- Unimaginable things are happening: CMS and ATLAS are collaborating and working on common software solutions
  - Shall we stand by or work with them? Can we afford to go it alone? Which way carries more risk?
- Go back to the original AliEn ideas?
- Keep AliEn as the user interface and shop for components?
- It works, why do we need to fix it?