WP18: High-Speed Data Recording Krzysztof Wrona, European XFEL 07 October 2011 CRISP.


Objective
Provide solutions for:
 high-speed recording of data to permanent storage and archive
 optimised and secured access to data using standard protocols
2 K.Wrona CERN,

Motivations
The rapid development and increasing complexity of experimental techniques in many scientific domains, together with the use of highly advanced instruments and detectors, result in extremely high data rates exceeding tens of GB/s. Cost-effective recording of data to storage systems and archives becomes an increasingly complex and challenging task, especially where real-time data reduction is not possible and the complete data streams must be recorded on the storage systems:
 no simple selection criteria, noisy images
 experiments last for a very short time (scale of hours or days)
 experiment setup is changed frequently (feedback)

Initial considerations
Aim for a cost-effective solution
Data originates from multiple sources
Integration of commercial devices
Parallel data streams for transfer optimisation
Writing data to the storage system with high throughput (~10 GB/s per device)
Writing data to the archive
 Splitting data streams for archiving and offline storage
 Concurrent write and read
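The two archive-related points above — splitting the incoming detector stream into parallel sinks and writing them concurrently — can be sketched as follows. This is an illustrative model only; the file names, chunk size and the tee-style splitter are assumptions for the sketch, not part of the WP18 design.

```python
import threading
import queue

def split_stream(chunks, sinks):
    """Tee-style splitter: hand every chunk of the incoming
    stream to each sink queue (e.g. online storage and archive)."""
    for chunk in chunks:
        for q in sinks:
            q.put(chunk)
    for q in sinks:
        q.put(None)  # end-of-stream marker

def writer(q, path, written):
    """One parallel writer thread draining a queue into a file."""
    total = 0
    with open(path, "wb") as f:
        while True:
            chunk = q.get()
            if chunk is None:
                break
            f.write(chunk)
            total += len(chunk)
    written[path] = total

# Simulated detector data: 8 chunks of 1 MiB each.
chunks = [bytes(1024 * 1024) for _ in range(8)]

online_q, archive_q = queue.Queue(), queue.Queue()
written = {}
threads = [
    threading.Thread(target=writer, args=(online_q, "online.bin", written)),
    threading.Thread(target=writer, args=(archive_q, "archive.bin", written)),
]
for t in threads:
    t.start()
split_stream(chunks, [online_q, archive_q])
for t in threads:
    t.join()

print(written)  # both sinks received the full 8 MiB stream
```

In a real system each sink would itself fan out into several parallel streams per storage device to reach the ~10 GB/s target; the queue-per-sink structure stays the same.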

Participants
Participating institutes / contact persons:
 XFEL (36pm) – K.Wrona
 ESRF (12pm) – A.Goetz
 DESY (19pm) – F.Schluenzen
 ESS (8pm) – K.Leffman, S.Skelboe
 ILL (2pm) – J.F.Perrin
 UOXF.DB (36pm) – D.Wallom
 GANIL (1pm) – ?

Task 1
Assembling requirements and use cases for high-speed data recording to storage systems and data archives:
 Description of use cases
 Identification of common critical issues
 Requirement document by month 9
Reviewing available technologies, selecting tools, and investigating their usability for the defined use cases:
 Survey of hardware and software technologies
 Exchange of experience
 Mapping concrete solutions to the identified use cases
Coordination activities:
 Monthly video conference/phone meetings
 Face-to-face meeting after 6 months

Task 2
Collecting requirements for data protection and understanding their implications for high-speed data recording and data access:
 Description of use cases
 Definition of a data protection model
 Integration with the authentication system (WP16)
 Requirement document (month 12)
Evaluating existing data-protection schemes, including simple and advanced access-control models (e.g. NFSv4 and POSIX ACLs), and mapping them to the defined requirements.
Coordination activities:
 See Task 1
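To illustrate what an ACL-based access-control model adds over plain mode bits, the following toy evaluator checks read access against a POSIX-ACL-style entry list. The entry format and all names are invented for the sketch; real NFSv4 ACLs are considerably richer (allow/deny entries, inheritance flags, many more permission bits).

```python
def may_read(user, groups, owner, acl):
    """Toy POSIX-ACL-style check: named entries first, then
    owner/other fallback. acl is a list of (tag, qualifier, perms)
    tuples, e.g. ("user", "wrona", "rw-") or ("group", "xfel", "r--")."""
    for tag, qualifier, perms in acl:
        if tag == "user" and qualifier == user:
            return "r" in perms
        if tag == "group" and qualifier in groups:
            return "r" in perms
    for tag, qualifier, perms in acl:
        if tag == "owner" and user == owner:
            return "r" in perms
        if tag == "other":
            return "r" in perms
    return False

acl = [
    ("user", "wrona", "rw-"),   # named user entry
    ("group", "xfel", "r--"),   # named group entry
    ("other", None, "---"),     # everyone else: no access
]
print(may_read("wrona", [], "goetz", acl))          # True
print(may_read("guest", ["xfel"], "goetz", acl))    # True (via group)
print(may_read("guest", ["public"], "goetz", acl))  # False
```

The point of the exercise in Task 2 is to decide which of these entry types a facility actually needs, and whether the storage and archive layers can evaluate them at recording speed.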

Task 3
Defining and selecting use-case applications requiring high-throughput data access:
 Selection of several flagship applications
 Definition of requirements and benchmark criteria
Evaluating the usability of standard access protocols:
 e.g. NFSv4.1 (pNFS)
 Integration issues with storage, network and computing infrastructures
Defining the data-access architecture:
 Parallel access, caching strategies, etc.
 Architecture document (month 24)
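The key idea behind pNFS (NFSv4.1) is that a metadata server hands clients a layout, and the clients then read the file's stripes directly from the data servers in parallel. A minimal sketch of such a round-robin striping layout follows; the stripe size and server names are arbitrary illustrative values.

```python
def data_server_for(offset, stripe_size, servers):
    """Return the data server holding the stripe that contains the
    given byte offset, under a simple round-robin striping layout."""
    stripe_index = offset // stripe_size
    return servers[stripe_index % len(servers)]

servers = ["ds0", "ds1", "ds2", "ds3"]  # hypothetical data servers
stripe = 1024 * 1024                    # 1 MiB stripes

# The first four 1 MiB stripes land on ds0..ds3, then the layout wraps.
layout = [data_server_for(i * stripe, stripe, servers) for i in range(6)]
print(layout)  # ['ds0', 'ds1', 'ds2', 'ds3', 'ds0', 'ds1']
```

Because consecutive stripes live on different servers, a client can fetch them concurrently, which is what makes the protocol a candidate for the high-throughput access benchmarks in this task.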

Task 4
Implementing the prototype system for the selected data-protection and data-access models according to the architecture design:
 A complete test-bed system will be implemented at the selected facility(ies)
 Other institutes may implement the system partially
Re-evaluating and refining the system architecture. Improving the implementation.
Deploying the prototype system and demonstrating its functionality.

Timeline
Overall timeline:
 Definition of use cases and requirements – start month 1
 Evaluation of software and hardware components – start month 10
 Designing the system architecture – start month 10
 Implementation and testing – start month 24
 Improvements and optimisation – month 30
 Report drafting – month 33

Milestones & Deliverables
M1. Requirements for data recording to storage media
 30 June 2012
 Document approved by all WP members
M2. Identification of data protection requirements and storage implications
 30 September 2012
 Document approved by all WP members
M3. High-speed data access architecture design
 30 September 2013
 Architecture document approved by all WP members
D1. Report on prototype system and future work
 30 September