Remote Online Farms, TDAQ, Sander Klous, ACAT 2007

Presentation transcript:

Remote Online Farms
Sander Klous, on behalf of the Remote Online Farms Working Group, TDAQ
ACAT 2007, 25 April

Large Hadron Collider and ATLAS
The experiments at the Large Hadron Collider in general, and ATLAS in particular, need to solve yet another problem.

Data processing nightmare
There is no way to store all the information produced by ATLAS: 40 million events per second x 1.5 MB/event = 60 TB per second. In fact, 99.9995% of the data is thrown away. So... the data processing nightmare is all about storage? Unfortunately, no. A rigorous multilevel trigger system does the selection: the first level in hardware, the higher levels in software. But what if your favorite channel is not in the 0.0005%?
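
A back-of-the-envelope check of the numbers on this slide, in Python:

```python
# Rates and event size as quoted on the slide.
event_rate = 40e6        # events per second (40 MHz bunch crossing rate)
event_size = 1.5e6       # bytes per event (1.5 MB)

raw_rate = event_rate * event_size                        # bytes per second
print(f"raw data rate: {raw_rate / 1e12:.0f} TB/s")       # -> 60 TB/s

kept_fraction = 1 - 0.999995                              # 99.9995% thrown away
print(f"kept: {kept_fraction * 100:.4f}% of all events")  # -> 0.0005%
```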

Online bottleneck
Well, that's a problem... Scarce CPU resources. Physics events are selected on inclusive signatures: high energy leptons, missing ET, high pT jets, etc. Resource assignment must be balanced carefully between the trigger levels (LVL1, HLT). So... the data processing nightmare is all about CPU? No! Sorry, but there is more to it...

Some are more equal than others...
Physics selection and detector calibration compete for the same resources; networking enables us to prioritize these activities. So... the data processing nightmare is all about networking?

Economics
In fact, it is about balance: maximize performance, minimize costs. Data acquisition runs at 40 MHz; Level 1 accepts 1 in 500 events, Level 2 accepts 1 in 50, Level 3 accepts 1 in 10. A network switch links the online system to the computing grid in Amsterdam (NIKHEF/SARA).
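
A quick Python sketch of the resulting rate cascade, using the reduction factors quoted on the slide:

```python
# Rate cascade through the trigger levels.
rate_hz = 40e6                      # 40 MHz input rate
for level, reduction in [("Level 1", 500), ("Level 2", 50), ("Level 3", 10)]:
    rate_hz /= reduction
    print(f"{level}: accept 1 in {reduction} -> {rate_hz:,.0f} Hz")

# Final output: 40 MHz / 500 / 50 / 10 = 160 Hz to mass storage,
# i.e. about 160 * 1.5 MB = 240 MB/s.
```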

Is it worth the effort?
January 2001: Gary Stix, editor of Scientific American.

The basics (this can be difficult...)
Diagram of the dataflow: the ATLAS detectors and Level 1 trigger in the North Area; ROBs feeding the Data Collection Network; SFIs and the Back End Network; L2PUs and the Event Filter, with SFOs writing to mass storage in Bldg. 513; a local farm (the "Magni" cluster); and a lightpath over the packet-switched GEANT network to Remote Event Processing Farms in Copenhagen, Edmonton, Krakow, Manchester and Amsterdam.

Stream implementation
Diagram of the event-filter dataflow: events from LVL1 (RoIB, ROS/Robin, pROS, L2SV, L2PU, DFM) are (partially) built by the SFI and enter the EFD. The input stage creates a RoutingTag, which the routing step and the processing tasks (ExtPT with Athena/PESA and Athena calibration tasks) extend; stream selection then creates and extends the StreamTag. After duplicating and stripping, the output stage hands each (partial) event, with its LVL1 info, RoutingTag and StreamTag, to the SFO for Stream 1..n or the Remote stream, or discards it (Trash).
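
To make the tagging idea concrete, here is a minimal Python sketch; all names and the selection criteria are hypothetical illustrations of the scheme in the diagram, not the actual TDAQ code:

```python
from dataclasses import dataclass, field

@dataclass
class Event:
    lvl1_info: dict
    routing_tag: list = field(default_factory=list)
    stream_tag: list = field(default_factory=list)

def efd_input(event):
    event.routing_tag.append("created-on-input")   # Create RoutingTag

def processing_task(event):
    event.routing_tag.append("calib")              # Add to RoutingTag

def stream_selection(event):
    # Create StreamTag: duplicate to every matching stream, or trash.
    if "calib" in event.routing_tag:
        event.stream_tag.append("remote")          # remote online farm stream
    if event.lvl1_info.get("high_pt_lepton"):
        event.stream_tag.append("physics")
    return event.stream_tag or ["trash"]

evt = Event(lvl1_info={"high_pt_lepton": True})
efd_input(evt)
processing_task(evt)
print(stream_selection(evt))   # -> ['remote', 'physics']
```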

Grid/Proxy implementation
Diagram: the SFI feeds the EFD, whose Proxy PT dispatches events through a Dispatcher and Buffer to remote PTs. On the int.eu.grid side, a UI and resource Broker submit to CEs and worker nodes running the remote PTs; the HEP VO database and application monitoring, together with the infrastructure monitoring, track the system alongside the local PT farm.
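
The Dispatcher/Buffer idea can be sketched in a few lines of Python. This is a hypothetical illustration: send_to_remote_pt() and the thread-per-PT layout are assumptions for the sketch, not the actual implementation.

```python
import queue
import threading

buffer = queue.Queue(maxsize=100)   # Buffer: bounded, so backpressure reaches the EFD

def send_to_remote_pt(pt_id, event):
    print(f"event {event} -> remote PT {pt_id}")  # stand-in for the real transport

def dispatcher(remote_pt_id):
    """One worker per remote PT: pull buffered events and ship them out."""
    while True:
        event = buffer.get()
        if event is None:                          # shutdown sentinel
            break
        send_to_remote_pt(remote_pt_id, event)

# Start a few dispatcher threads and feed them events from the EFD side.
workers = [threading.Thread(target=dispatcher, args=(i,)) for i in range(3)]
for w in workers:
    w.start()
for event in range(10):            # events handed over by the EFD
    buffer.put(event)
for _ in workers:                  # one sentinel per worker
    buffer.put(None)
```

The bounded queue is the key design point: it decouples the online dataflow from grid-side latency while still stalling the producer if the remote farms fall behind.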


Open issues
Data management; software management; database access; authentication, authorization and accounting; performance and reliability. We are looking for a PhD student. If you are interested, mail to sander@nikhef.nl.

Conclusion
Remote online farms are interesting from a physics perspective, from a computer science perspective, and from an organizational perspective. The infrastructure has been put in place, but many open questions remain. More news next year. PhD candidates: mail to sander@nikhef.nl.