Joint JRA1/JRA3/NA4 session


Joint JRA1/JRA3/NA4 session
M. Lamanna, NA4-HEP and LCG ARDA

Programme
- Introduction
- Presentations:
  - "Generic Application"
  - Biomed
  - ARDA
- Panel discussion

JRA1-JRA3-NA4 session
Objectives
- Where are we within NA4 in using the new middleware?
- Focus on technical key points
- Way forward?
Presentations
- "Generic" + Biomed + HEP (ARDA)
Panel discussion
- Why: technical discussions ≠ presentations
- Who: panelists + YOU
- What: preparation work (list of hot issues); discuss issues emerging from the presentations/discussions

Panel discussion
Panelists:
- JRA1: Frederic Hemmer, Erwin Laure
- JRA3: Olle Mulmo, David Groep
- NA4: Birger Koblitz, Massimo Lamanna, Dietrich Liko (HEP); Johan Montagnat, Ignacio Blanquer (Biomed); Roberto Barbera, Giuseppe Andronico ("Generic" applications)
- And all the audience!

Themes for the panel discussion (1)
- Discovery services: more or less missing
- Security related:
  - Encryption both in data storage (to prevent people with local administrator rights from compromising privacy) and in the communication layer
  - ACL granularity: groups / users; files / groups of files ("directories")
  - Hook on privacy management through applications: a stub to allow anonymisation by third-party applications
- Grid access via shell: value and limitations of the current approaches; possible extensions
- Job Definition Language: motivation is ease of use and interoperability (see the sketch after this slide)
- Job submission services: missing functionality, namely access to stdout/stderr and worker node details during execution; motivation is debugging
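As a rough illustration of the "ease of use" point, the sketch below composes a minimal JDL description and hands it to a gLite-style command-line user interface. The JDL attributes shown (Executable, StdOutput, OutputSandbox, ...) are standard, but the exact submission command and its options varied between middleware releases, so the call to glite-job-submit should be read as an assumption about the installed UI, not a prescription.

```python
# Minimal sketch: compose a JDL file and submit it through a gLite-style UI.
# Assumes a configured user interface with `glite-job-submit` on the PATH;
# command name and options differed between middleware releases.
import subprocess
import tempfile

jdl = """
Executable    = "/bin/hostname";
Arguments     = "-f";
StdOutput     = "std.out";
StdError      = "std.err";
OutputSandbox = {"std.out", "std.err"};
"""

with tempfile.NamedTemporaryFile("w", suffix=".jdl", delete=False) as f:
    f.write(jdl)
    jdl_path = f.name

# Submission returns a job identifier that is later used to query the job
# status and to retrieve the output sandbox.
result = subprocess.run(["glite-job-submit", jdl_path],
                        capture_output=True, text=True)
print(result.stdout)
```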

Themes for the panel discussion (2)
- Compute Element:
  - Compare the experience of DIRAC, gPTM3D and others with the current plans
  - Motivation: best exploit successful experience on LCG2 and other systems
  - Goals: will this work on gLite? Should gLite explicitly account for it?
- Metadata:
  - Present the current status of the joint ARDA/gLite effort for feedback and discussion (a hypothetical sketch follows this slide)
- Software installation services:
  - Compare the use cases (experience), not only in HEP
  - Motivation: it is a critical component; more applications should be involved (almost only HEP so far)
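To make the metadata discussion concrete, here is a hypothetical sketch of the kind of interface under consideration: key/value attributes attached to logical file names, plus simple attribute-based queries. The class and method names are illustrative only; they are not the actual ARDA/gLite metadata API.

```python
# Hypothetical illustration of an attribute catalogue for logical file names.
# NOT the ARDA/gLite metadata interface, just the concept being discussed.
from dataclasses import dataclass, field
from typing import Dict, List


@dataclass
class MetadataCatalogue:
    entries: Dict[str, Dict[str, str]] = field(default_factory=dict)

    def add_entry(self, lfn: str, attributes: Dict[str, str]) -> None:
        """Attach (or extend) key/value attributes for a logical file name."""
        self.entries.setdefault(lfn, {}).update(attributes)

    def query(self, **conditions: str) -> List[str]:
        """Return the LFNs whose attributes match all key=value conditions."""
        return [lfn for lfn, attrs in self.entries.items()
                if all(attrs.get(k) == v for k, v in conditions.items())]


catalogue = MetadataCatalogue()
catalogue.add_entry("lfn:/grid/atlas/run1234/file1.root",
                    {"run": "1234", "type": "ESD"})
print(catalogue.query(run="1234"))
```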

OSG contributions
As a follow-up of the 3rd ARDA workshop, we have agreed to maintain a persistent link with OSG.
- Preparation for the next OSG workshop (mid December) is ongoing
- Agreed to select issues to discuss
Current brainstorming ideas from OSG:
- Data Management (a data-model sketch follows this slide):
  - What are the definition, role and scope of LFNs?
  - What are the responsibilities of the Replica Catalog? What are the precise meanings of the requirements and constraints on its capabilities?
  - What are the responsibilities of the File Catalog? What are its requirements?
- JDL: an interesting subject
Sign up to arda@cern.ch (from arda.cern.ch) to join our meetings (the idea is to have one meeting every fortnight, by phone + VRVS).
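To anchor the LFN discussion, the sketch below models the usual EGEE data-management layering: human-readable logical file names (LFNs) are aliases for an immutable GUID, and the replica mapping associates each GUID with its physical copies (SURLs). The class and field names are illustrative, not part of any catalogue API.

```python
# Illustrative model of the LFN / GUID / replica layering under discussion.
# Names are hypothetical; real catalogues expose this mapping through their
# own interfaces.
from dataclasses import dataclass, field
from typing import List


@dataclass
class FileEntry:
    guid: str                                        # immutable, globally unique identifier
    lfns: List[str] = field(default_factory=list)    # user-visible aliases
    surls: List[str] = field(default_factory=list)   # physical replicas


entry = FileEntry(
    guid="guid:example-0001",
    lfns=["lfn:/grid/biomed/images/scan42"],
    surls=["srm://se01.example.org/data/scan42",
           "srm://se02.example.org/data/scan42"],
)

# The file catalogue answers "which GUID does this LFN point to?";
# the replica catalogue answers "where are the physical copies of this GUID?".
print(entry.guid, entry.surls)
```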

Job submission service (from the HepCAL document):
1. Position in the queue
2. Splitting information (if applicable)
3. Estimated time before running
4. Estimated cost (arbitrary units)
5. Actual cost (arbitrary units)
6. Time and date of submission
7. Time of start of execution
8. Time of completion
9. Priority
10. Completion status
11. CPU time used
12. Real time elapsed
13. Input I/O (amount and rate)
14. Output I/O (amount and rate)
15. CPU time left
16. Executable running
17. Dataset accessed
18. CE, WN and SE used
19. Current stdout, stderr
20. Job status
21. Job environment variables
22. User who has submitted the job
23. User attributes for the job specified in the job catalogue
24. Queue used
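One way to read this list is as the record a job submission service would return when queried about a job. The sketch below groups a subset of the HepCAL attributes into such a record; the field names are hypothetical and do not correspond to any gLite API.

```python
# Hypothetical record grouping a subset of the HepCAL job-information
# attributes listed above; field names are illustrative, not a gLite API.
from dataclasses import dataclass
from datetime import datetime
from typing import Optional


@dataclass
class JobInfo:
    job_id: str
    status: str                       # e.g. Submitted, Scheduled, Running, Done
    queue_position: Optional[int]     # 1. position in the queue
    submitted_at: datetime            # 6. time and date of submission
    started_at: Optional[datetime]    # 7. time of start of execution
    completed_at: Optional[datetime]  # 8. time of completion
    cpu_time_used: float              # 11. CPU time used (seconds)
    wall_time: float                  # 12. real time elapsed (seconds)
    ce: Optional[str] = None          # 18. computing element used
    worker_node: Optional[str] = None
    stdout_tail: str = ""             # 19. current stdout (the "missing
                                      # functionality" noted on an earlier slide)
```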

Compute element
Compare the experience of DIRAC, gPTM3D and others with the current plans.
- Agent-based ("mobile") systems: in DIRAC these agents are distributed by the LCG2 WMS and then collect tasks from an experiment-specific service (a pull-model sketch follows this slide)
  - Robustness
  - Handcrafted control channels: in DIRAC, the communication layer is built on instant-messaging technologies
  - Duplication of effort on the application side?
- Provisions for interactivity control: gPTM3D and ALICE (and maybe others)
  - Response time
  - Quality-of-service and (low) latency requirements
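A rough sketch of the pull model described above: an agent is started on a worker node by the generic WMS and then fetches work from an experiment-specific task queue. The HTTP endpoint and payload format are assumptions made for this illustration; DIRAC's real services and transport differ (the slide notes instant-messaging technologies).

```python
# Sketch of an agent in a pull-based ("pilot") model. The task-queue URL and
# payload format are hypothetical; this is not DIRAC code.
import json
import subprocess
import time
import urllib.request

TASK_QUEUE_URL = "https://tasks.experiment.example.org/next"  # assumption


def fetch_task():
    """Ask the experiment-specific service for the next task, or None if idle."""
    try:
        with urllib.request.urlopen(TASK_QUEUE_URL, timeout=30) as resp:
            return json.load(resp)
    except Exception:
        return None


def main():
    while True:
        task = fetch_task()
        if task is None:
            time.sleep(60)   # nothing to do: back off and retry
            continue
        # Run the payload locally; reporting the outcome back to the queue
        # is omitted in this sketch.
        result = subprocess.run(task["command"], shell=True)
        print(f"task {task['id']} finished with code {result.returncode}")


if __name__ == "__main__":
    main()
```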

Security
- Data encryption:
  - Fine-grained access control on disk AND during network transmission raises the problem of who controls the encryption keys (an encryption sketch follows this slide)
- Fine-grained access control:
  - Access control is needed at individual AND group level
  - Both data and metadata are concerned: metadata may be even more confidential than data, so the same level of protection and the same access patterns are expected
- Hook on privacy manager:
  - Data privacy sometimes needs to be controlled at the application level (a middleware hook is needed to filter data being sent out)
- Need to protect the privacy of grid users: hiding user names and the programs being executed
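To illustrate the first point, the sketch below encrypts data on the client before it is handed to any storage element, so that a site administrator with local root access only ever sees ciphertext; key distribution, the hard part flagged on the slide, is deliberately left out. It uses the third-party Python cryptography package purely as an example, not any gLite component.

```python
# Client-side encryption before upload: the storage element (and its local
# administrators) only see ciphertext. Key management and distribution,
# which the slide identifies as the real problem, are not addressed here.
# Requires the third-party `cryptography` package (pip install cryptography).
from cryptography.fernet import Fernet

# In a real system the key would come from a trusted key store shared only
# with authorised group members, never stored alongside the data.
key = Fernet.generate_key()
cipher = Fernet(key)

plaintext = b"patient record: sensitive biomedical data"
ciphertext = cipher.encrypt(plaintext)

# `ciphertext` is what would be written to the grid storage element.
assert cipher.decrypt(ciphertext) == plaintext
```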