KIT visit to Cascina , 22 March 2019


Virgo Computing
Franco Carbognani

Virgo Computing Reorg
- Virgo Offline and Online Computing management has been reorganized as one of the outcomes of the 2018 External Computing Committee (ECC) recommendations.
- This has also led to the definition of the "AdVirgo Computing and Data Processing Infrastructure Work Breakdown Structure (WBS)" document.
- The AdVirgo Computing Model, Implementation Plan and Management Plan documents are being reviewed and updated as deliverables of WP10.

AdVirgo Data Flow schema
(figure: data flow diagram from the Advanced Virgo Computing Model)

AdVirgo Data Flow: The GW170814 case
1. The signal arrives
2. Data composed into frames
3. Calibration of the data
4. Veto and DQ flags production
5. h(t) transfer
6. Low-latency matched-filter pipelines
7. Upload to GraceDB
8. Data written into on-line storage
9. Low-latency data quality
10. Low-latency sky localization
11. GCN Circular sent out
12. Data written into Cascina Mass Storage
13. Data transfer toward aLIGO and CCs

AdVirgo Computing Layers
(figure: computing layers diagram, Advanced Virgo Computing Model, 17/10/2018)

Data Processing Infrastructure (Virgo Week, Jan 2019)

Computing and Data Processing (CDP)
- WP8 Offline Computing Services
  - WP8.1 CVMFS support at CCs
  - WP8.2 Usage reporting in LVC accounting
- WP9 Data Analysis Tools and Pipelines
  - WP9.1 Continuous Waves (CW)
  - WP9.2 Compact Binary Coalescence (CBC)
- WP10 LVC Computing Strategy
  - WP10.1 Updated Virgo Computing Model
- WP11 Public Data Open Science
  - WP11.1 O2 Data Release
  - WP11.2 Open Data Science Workshop
- WP12 New Developments
  - WP12.1 ML/DL strategies for GW detections

Virgo Platform & Services (WP1)
- Expansion of the Raw Data Circular Buffer: expanded to 700 TB => ~6.3 months at 45 MB/s; final expansion to 1 PB (~12 months) ongoing
- Online Storage buffer upgraded
- Condor farm (used for Noemi, DetChar and the cWB low-latency pipeline) expanded to reach 160 CPUs
- Cascina site link bandwidth increased from 1 Gb/s to 10 Gb/s
- Bulk data transfer to Tier1 CCs: legacy GFAL2/iRODS-based solution running smoothly during ER14
- LIGO/Virgo common web access via ligo.org authorization: completed for VIM; other relevant web pages will follow
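The buffer-retention figures above follow from capacity divided by the average raw-data rate. A minimal sketch of that arithmetic, using the slide's numbers (700 TB, 45 MB/s) with decimal units; the slide's ~6.3 months suggests binary units or a slightly lower average rate, so treat the exact month count as approximate:

```python
# Rough retention estimate for a circular buffer: how long until it wraps.
# Figures (700 TB capacity, ~45 MB/s rate) are taken from the slide;
# decimal (SI) units are an assumption made here.
def retention_days(capacity_tb: float, rate_mb_s: float) -> float:
    """Days of data a circular buffer holds before overwriting."""
    capacity_bytes = capacity_tb * 1e12   # decimal terabytes
    rate_bytes_s = rate_mb_s * 1e6        # decimal megabytes per second
    return capacity_bytes / rate_bytes_s / 86400

print(round(retention_days(700, 45)))    # ~180 days (~6 months)
print(round(retention_days(1000, 45)))   # ~257 days after the 1 PB expansion
```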

Software Management (WP2)
- Supporting release cycles associated with Engineering Runs and Science Runs
- A joint LIGO-Virgo Software Configuration Control Board (SCCB) is in place for the relevant low-latency software
- Software stored in a centralized SVN archive at Cascina; transition from SVN to Git ongoing, currently mirroring relevant packages from Virgo SVN to git.ligo.org
- Setting up CVMFS for Virgo software distribution
- Joint LIGO-Virgo proposal "Sustainable software development and distribution with the conda package manager and conda-forge" accepted; initial work has started
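With the conda/conda-forge approach mentioned above, a software stack is declared in an environment file and resolved from the conda-forge channel. A hypothetical sketch of such a file (the environment name and package list are illustrative, not the actual Virgo packaging):

```yaml
# environment.yml - illustrative only; names are placeholders
name: virgo-analysis
channels:
  - conda-forge
dependencies:
  - python=3.7
  - numpy
  - scipy
  - gwpy        # example GW analysis package available on conda-forge
```

Such a file makes the same environment reproducible on laptops, clusters, and CVMFS-distributed installations alike, which is the motivation for the joint proposal.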

Online System Processes (WP3)

Data Quality & DetChar (WP4)
- All analyses run at Cascina on EGO machines:
  - One dedicated machine for high-CPU/memory analyses
  - 5 virtual machines for online analyses
  - Condor farm for the data quality reports (DQRs); outputs uploaded to GraceDB to produce joint LIGO-Virgo DQRs
  - Analysis of the past day run on a nightly basis
  - Interactive machines: farmn & ctrl
- Main input: raw data
- A few TB of storage
- DBs: segments, noise lines, etc.
- Web server to display results and browse the associated reports

Low Latency data distribution (WP5): Activities and plans
- Transfer LIGO h(t) frames from LLO and LHO to EGO directly (bypassing the CIT node)
- Evaluating the use of Kafka instead of Cm for inter-site h(t) frame exchange (WP5.1)

Low Latency data distribution (WP5): Kafka evaluation
- We are evaluating the use of Kafka for the Cascina-aLIGO link as a replacement for the current Cm-based (Virgo-specific) solution.
- Kafka is a modern message queue which embeds a smart fail-over mechanism and will be used by aLIGO for all other low-latency links (including toward KAGRA).
- Status: parallel live-data Kafka streams (CIT <-> Cascina) have been set up and are being compared with the Cm-based ones.
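The fail-over property that makes a Kafka-style link attractive here is that frames land in an append-only log, so a consumer that drops out can reconnect and resume from its last committed offset without losing data. A toy stdlib-only sketch of that idea (all names invented for illustration; this is not the Virgo/LIGO code and not the Kafka API):

```python
# Toy append-only log with offset-based reads, mimicking the recovery
# semantics a Kafka-style transport gives an h(t) frame link.
class FrameLog:
    def __init__(self):
        self._log = []                  # append-only list of frames

    def append(self, frame):
        self._log.append(frame)

    def read_from(self, offset):
        """Return (frames, next_offset) starting at a committed offset."""
        return self._log[offset:], len(self._log)

log = FrameLog()
for i in range(5):
    log.append(f"h(t)-frame-{i}")

committed = 2          # consumer processed frames 0-1, then disconnected

log.append("h(t)-frame-5")   # producer keeps appending during the outage

# On reconnect the consumer resumes from its committed offset: no gaps.
missed, committed = log.read_from(committed)
print(missed)          # frames 2..5, nothing lost during the outage
```

By contrast, a point-to-point push protocol with no replay forces the sender to buffer and retransmit itself, which is the role Cm currently plays.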

Bulk Data Handling (WP7)
The main problem:
- We have been confronted with a strong diversity among the Tier1 CCs, which has so far prevented the development of a unified solution.
For the post-O3 timeframe we are aiming at a unified solution:
- Investigate and implement a unifying solution based on an EGI/OSG framework, in coordination with the Tier1 CCs
- It must provide a fully integrated data transfer and access solution for a full-mesh topology among all AdVirgo/LIGO data endpoints
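A full-mesh topology implies one transfer path per pair of endpoints, i.e. n*(n-1)/2 pairs, which is why per-site ad-hoc solutions scale poorly compared with one common framework. A small sketch (the endpoint list is a hypothetical example, not the official AdVirgo/LIGO endpoint set):

```python
# Count the pairwise links a full-mesh transfer topology requires.
from itertools import combinations

# Hypothetical endpoint list for illustration only.
endpoints = ["Cascina", "CNAF", "CCIN2P3", "Nikhef", "CIT"]
links = list(combinations(endpoints, 2))

print(len(links))   # 10 pairwise links for 5 endpoints
```

Each added endpoint adds n-1 new links, so a unified EGI/OSG-based framework that any endpoint can join is preferable to negotiating each link separately.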

Offline computing (WP8 - WP9)
The main problem: the majority of the LVC common pipelines
- run on HTCondor clusters fully devoted to LIGO;
- look for data and libraries in a local shared file system.
Such pipelines are intrinsically non-compliant with the Grid, and it is difficult to have them running at CNAF, CCIN2P3, Nikhef, ...
For a better utilization of Virgo computing resources:
- Pilot projects to enable the main CBC detection pipelines to run on Virgo resources directly, via European Grid submission, or via OSG
- Exploration of DIRAC and Rucio for workflows
- Support of O2 CW analyses with additional computing power provided at the CNAF and Nikhef/SURFSara clusters
- Setting up CVMFS areas at Virgo CCs for data
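The HTCondor dependency mentioned above refers to pipelines driven by submit description files like the minimal sketch below (the executable name, arguments and resource figures are hypothetical; real pipeline submit files are far more elaborate and often generated by a workflow layer such as Pegasus):

```
# Minimal HTCondor submit description - illustrative names only
universe       = vanilla
executable     = run_cbc_pipeline.sh
arguments      = --gps-start 1186741850 --gps-end 1186745450
output         = cbc_$(Cluster).$(Process).out
error          = cbc_$(Cluster).$(Process).err
log            = cbc_$(Cluster).log
request_cpus   = 4
request_memory = 8 GB
queue 1
```

The portability problem is that such jobs assume a local shared file system for input data and libraries; CVMFS for software and Grid/OSG submission layers are the ways around that assumption listed above.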

Offline computing (WP10 - WP12)
- New draft of the updated Virgo computing model (May 2019); longer-term goal: an integrated LVC computing model
- Open Data preparation, formatting, publication, etc.; translation into local languages; mirror server in Europe
- Open Data Science Workshop in Paris (Apr 8-10, 2019)
- Exploration and development of ML/DL pipelines
- 1st Conference on ML for GWs, Geophysics, Robotics and Control Systems (Jan 2019 at EGO)