Computer Instrumentation: Introduction
Jos Vermeulen, UvA / NIKHEF
Topical lectures, 28 June 2007

Computing is an essential ingredient of high-energy physics instrumentation, used for:
- trigger
- data acquisition
- on-line monitoring
- experiment control
- calibration

These lectures: relevant hardware and software techniques; the ATLAS T/DAQ/DCS system serves as illustration.

A Toroidal LHC ApparatuS (ATLAS): one of the two general-purpose experiments at the Large Hadron Collider at CERN. Under construction at Point 1, opposite the main entrance of CERN.

Layout of CERN Point 1 (diagram): surface building SDX1, underground counting rooms USA15 and US15, experimental cavern UX15, the main control room and the CERN main entrance. A side towards Geneva, C side towards the Jura; the ATLAS coordinate axes x, y, z are indicated.

Cut-away view of the ATLAS detector.

June 13, 2007: lowering of the side-A End-Cap Toroid (ECT).

Descent through the access shaft.

Descent into the cavern.

Landed. The vessel was produced by Schelde Exotech, the cold-mass components by Brush HMA BV.

The relation with computing... From the June 2007 issue of ATLAS e-news: Richard Stallman, the president and founder of the Free Software Foundation, went on a pit tour, where he had a chance to admire our freshly installed end-cap toroid!

Computer instrumentation in high-energy physics:
- Detector Control System (DCS)
- Trigger and Data-AcQuisition (DAQ)

ATLAS Detector Control System (DCS)

Block diagram of the DCS hierarchy:
- Global Control Stations (in SDX1): Operator Interface (Data Viewer, Alarm, Status, Web), DCS Information Service (DCS_IS), connections via LAN/WAN to CERN, the LHC, the Detector Safety System (DSS) and the magnet system.
- Subdetector Control Stations: Pixel, SCT, TRT, LAr, Tile, MDT, TGC, RPC, CSC, plus the Common Infrastructure Controls (CIC).
- Local Control Stations (LCS), underground (USA15 levels 1 and 2, US15, SDX1): Cooling, Racks, Environment, ELMB (Embedded Local Monitor Board), HEC HV, Temp, Barrel HV, FE Crates, HV, LV, Purity.
- Front-End Systems.
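The essence of such a hierarchy is that commands flow downwards while states and alarms flow upwards. A minimal sketch of the downward direction, with an invented ControlStation class; the real ATLAS DCS is built on the PVSS SCADA system with a finite-state-machine toolkit, not on hand-written C++ like this:

```cpp
#include <iostream>
#include <memory>
#include <string>
#include <vector>

// Hypothetical sketch: commands issued at a Global Control Station
// propagate down through Subdetector and Local Control Stations to the
// front-end. All names here are invented for illustration.
class ControlStation {
public:
    explicit ControlStation(std::string name) : name_(std::move(name)) {}

    void addChild(std::shared_ptr<ControlStation> child) {
        children_.push_back(std::move(child));
    }

    // Commands propagate downwards through the tree.
    void sendCommand(const std::string& cmd) {
        std::cout << name_ << ": executing '" << cmd << "'\n";
        for (auto& child : children_) child->sendCommand(cmd);
    }

private:
    std::string name_;
    std::vector<std::shared_ptr<ControlStation>> children_;
};

int main() {
    auto global = std::make_shared<ControlStation>("Global Control Station");
    auto pixel  = std::make_shared<ControlStation>("Subdetector CS: Pixel");
    auto lcs    = std::make_shared<ControlStation>("LCS: Cooling");
    pixel->addChild(lcs);
    global->addChild(pixel);
    global->sendCommand("GOTO_READY");  // e.g. bring the detector to READY
}
```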

ATLAS Trigger / DAQ DataFlow overview: diagram spanning UX15, USA15 and SDX1, built up on the next slides.

ATLAS Trigger / DAQ DataFlow Overview (read-out part of the diagram):
- ATLAS detector (UX15): data of events accepted by the first-level trigger are sent over dedicated links to the Read-Out Drivers (RODs, VME crates in USA15).
- First-level trigger: accept rate ≤ 100 kHz; accepts are distributed via the Timing Trigger Control (TTC); Regions of Interest are sent to the RoI Builder.
- 1600 Read-Out Links carry the event data (1600 fragments of ~1 kByte each per event) to the Read-Out Subsystems (ROSs, ~160 PCs).
- Gigabit and 10 Gigabit Ethernet (~320x1 and ~40x10 Gbit/s links, ~20 switches) connect the ROSs to SDX1 and carry event data requests, requested event data and delete commands.

ATLAS Trigger / DAQ DataFlow Overview (complete diagram). In SDX1, connected to the ROSs by the network switches:
- Second-level trigger: LVL2 Supervisor and LVL2 farm (+ switches); partial event data are pulled from the ROSs at ≤ 100 kHz; the pROS stores the LVL2 output.
- Event Builder: DataFlow Manager and SubFarm Inputs (SFIs, ~100); full events are pulled at ~3 kHz.
- The LVL2 farm and the Event Filter (EF) consist of dual 1-, 2- or 4-core CPU nodes (the diagram quotes ~870 and ~1500 nodes).
- Local Storage SubFarm Outputs (SFOs, ~30): event rate ~200 Hz; data storage at the CERN computer centre.

RODs, ROS PCs and ROBINs:
- Read-Out Drivers (RODs): subdetector-specific; collect and process data (no event selection); output via Read-Out Links (ROLs, 200 MByte/s optical fibres) to buffers on ROBIN cards in Read-Out Subsystem (ROS) PCs.
- The same type of ROLs, ROBINs and ROS PCs is used for all subdetectors.
- ROBINs: 64-bit, 66 MHz PCI-X cards with 3 ROL inputs.
- ROS PCs: 4U rack-mounted PCs with 4 ROBINs => 12 ROLs per ROS PC.
(The DataFlow diagram of the previous slides is repeated alongside.)
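The buffering role of a ROBIN can be sketched as follows. This is only an illustration: the class and method names are invented, and a real ROBIN manages fixed-size memory pages on the card in FPGA and embedded-processor firmware, not a std::map on a PC.

```cpp
#include <cstdint>
#include <map>
#include <optional>
#include <vector>

// Hypothetical sketch of what a ROBIN does per ROL: buffer event
// fragments keyed by the level-1 event identifier, serve data requests
// from LVL2 / the Event Builder, and honour delete ("clear") commands.
class RobinChannelBuffer {
public:
    // Fragment arriving over the Read-Out Link (~1 kByte on average).
    void store(uint32_t l1_id, std::vector<uint8_t> fragment) {
        buffer_[l1_id] = std::move(fragment);
    }

    // Data request: return the fragment if it is still buffered.
    std::optional<std::vector<uint8_t>> fetch(uint32_t l1_id) const {
        auto it = buffer_.find(l1_id);
        if (it == buffer_.end()) return std::nullopt;
        return it->second;
    }

    // Delete command from the DataFlow Manager (sent for up to
    // 100 events at a time): free the corresponding buffer space.
    void clear(const std::vector<uint32_t>& l1_ids) {
        for (uint32_t id : l1_ids) buffer_.erase(id);
    }

private:
    std::map<uint32_t, std::vector<uint8_t>> buffer_;
};
```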

Trigger/DAQ DataFlow associated with the second-level (LVL2) trigger:
- The Region of Interest (RoI) Builder receives, for each first-level accept, information from the first-level trigger and passes formatted information to one of the LVL2 supervisors.
- The LVL2 supervisor selects one of the processors in the LVL2 farm and sends it the RoI information.
- The LVL2 processor requests data from the ROSs as needed (possibly in several steps), produces an accept or reject, and informs the LVL2 supervisor. For an accept, the result of the processing is stored in the pseudo-ROS (pROS).
- The LVL2 supervisor passes the decision to the DataFlow Manager.
(The DataFlow diagram of the previous slides is repeated alongside.)
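The RoI-driven request loop can be summarised in a short sketch. All names and the stub implementations are invented for illustration; the real LVL2 processing is done by a distributed, multi-threaded C++ application exchanging messages over Ethernet.

```cpp
#include <cstdint>
#include <iostream>
#include <vector>

// All types and helpers here are invented for illustration.
struct Fragment { std::vector<uint8_t> data; };

// Stub: which ROSs hold data for this RoI at this refinement step.
std::vector<int> rosIdsForRoI(int roi, int step) { return {roi % 8, step % 8}; }

// Stub: pull the fragments of one event from the given ROSs.
std::vector<Fragment> requestFragments(const std::vector<int>& ros_ids,
                                       uint32_t l1_id) {
    (void)l1_id;                          // would be part of the request message
    return std::vector<Fragment>(ros_ids.size());
}

// Stub: one step of the trigger algorithm; false means "reject".
bool refine(const std::vector<Fragment>& frags, int step) {
    (void)step;
    return !frags.empty();
}

// One LVL2 processor handling one first-level accept: data are pulled
// from the ROSs only for the Regions of Interest, possibly in several
// steps, so most event data never have to cross the network.
bool processEvent(uint32_t l1_id, const std::vector<int>& rois) {
    for (int step = 0; step < 3; ++step)
        for (int roi : rois) {
            auto frags = requestFragments(rosIdsForRoI(roi, step), l1_id);
            if (!refine(frags, step))
                return false;             // reject -> inform the LVL2 supervisor
        }
    return true;                          // accept -> result goes to the pROS
}

int main() {
    std::cout << (processEvent(42, {1, 5}) ? "accept\n" : "reject\n");
}
```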

Trigger/DAQ DataFlow associated with Event Building:
- For each accepted event the DataFlow Manager selects a SubFarm Input (SFI) and sends it a request to take care of the building of a complete event.
- The SFI sends requests to all ROSs for the data of the event to be built. Completion of the building is reported to the DataFlow Manager.
- For rejected events, and for events for which event building has completed, the DataFlow Manager sends "clears" to the ROSs, for up to 100 events together.
- On request, the event data are passed from the SFI to an Event Filter processor.
- Event-building rate: ~3 kHz.
(The DataFlow diagram of the previous slides is repeated alongside; the bookkeeping is sketched below.)
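A minimal sketch of that bookkeeping, assuming invented names (DataFlowManager, assignSfi, markDone); the real DataFlow Manager is a networked application, not a single class:

```cpp
#include <cstdint>
#include <vector>

// Hypothetical sketch of the DataFlow Manager's bookkeeping: assign each
// accepted event to a SubFarm Input (SFI) and batch "clears" to the ROSs.
class DataFlowManager {
public:
    explicit DataFlowManager(int n_sfis) : n_sfis_(n_sfis) {}

    // LVL2 accept: pick an SFI (here simple round-robin); the SFI then
    // requests the event's ~1600 fragments from all ROSs and builds the event.
    int assignSfi(uint32_t l1_id) {
        (void)l1_id;
        return next_sfi_++ % n_sfis_;
    }

    // LVL2 reject, or SFI reports "event built": the event can be cleared.
    void markDone(uint32_t l1_id) {
        pending_clears_.push_back(l1_id);
        if (pending_clears_.size() >= 100) flushClears();  // group up to 100
    }

private:
    void flushClears() {
        sendClearsToAllRoss(pending_clears_);  // one grouped delete message
        pending_clears_.clear();
    }
    static void sendClearsToAllRoss(const std::vector<uint32_t>&) {}

    int n_sfis_;
    int next_sfi_ = 0;
    std::vector<uint32_t> pending_clears_;
};
```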

Second-Level (LVL2) Trigger and Event Building: rates
- A simple model ("paper model") is used to predict the average number of ROS PCs and ROLs from which data are needed for LVL2 trigger processing; for example, for the design-luminosity trigger menu, per first-level accept: 16.2 ROLs or 8.4 ROS PCs.
- RoI-driven processing is a key property of the ATLAS LVL2 system, but it also makes the system more complex and its performance not so straightforward to predict.
- With ~1 kByte per fragment, the LVL2 traffic needs a network bandwidth of ~2 GByte/s at a 100 kHz first-level accept rate, instead of the ~150 GByte/s that full read-out at 100 kHz would require.
- Event-building rate ~3 kHz (bandwidth ~5 GByte/s for 1600 fragments of ~1 kByte); event rate to storage ~200 Hz.
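The quoted bandwidths follow directly from the numbers on these slides; a few lines of code reproduce them (the inputs are taken from the slides, the rest is multiplication):

```cpp
#include <cstdio>

// Back-of-the-envelope version of the "paper model" bandwidth numbers.
int main() {
    const double l1_rate_hz = 100e3;   // first-level accept rate
    const double frag_kbyte = 1.0;     // average fragment size
    const double rols_total = 1600;    // all Read-Out Links
    const double rols_lvl2  = 16.2;    // average ROLs needed per LVL2 decision
    const double eb_rate_hz = 3e3;     // event-building rate

    // Divide by 1e6 to convert kByte/s to GByte/s.
    std::printf("LVL2 traffic:   %.1f GByte/s\n",
                l1_rate_hz * rols_lvl2 * frag_kbyte / 1e6);   // ~1.6, quoted "~2"
    std::printf("Full read-out:  %.0f GByte/s\n",
                l1_rate_hz * rols_total * frag_kbyte / 1e6);  // ~160, quoted "~150"
    std::printf("Event building: %.1f GByte/s\n",
                eb_rate_hz * rols_total * frag_kbyte / 1e6);  // ~4.8
}
```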

Computing hardware:
- Application Specific Integrated Circuits (ASICs): expensive, design by specialists, cannot be changed after production.
- Field Programmable Gate Arrays (FPGAs): "programmed" in VHDL or Verilog + "fitting"; specialists only, mostly electronic engineers; in many designs the "firmware" can be changed in-circuit.
- Digital Signal Processors (DSPs), microcontrollers, embedded microprocessors: usually no OS; cross-development of software written in C or C++, exceptionally in assembler.
- Crate processors, Personal Computers (PCs), compute servers: Linux; multi-threaded C++; Java for GUIs.

Connection technology:
- VME bus: parallel bus in crates.
- PCI, PCI-X: parallel buses in PCs.
- PCI Express: serial point-to-point connections inside a PC.
- CAN: serial bus system used for controls.
- JTAG: serial connections between integrated circuits.
- SPI, I2C: short-distance serial connections.
- RocketIO links of Xilinx FPGAs: multi-Gbit/s serial point-to-point links for connecting FPGAs.
- GOL: Gigabit Optical Link, developed by CERN (radiation-hard sender).
- S-LINK: protocol for point-to-point links, developed by CERN.
- TTC: Timing and Trigger Control system, for broadcasting trigger decisions, but it can also transmit control information; developed by CERN.
- Switched Ethernet: 100 Mbit, 1 Gigabit, 10 Gigabit.
- Other switched network technology (not in ATLAS, but e.g. in CMS: Myrinet).

Techniques:
- Interrupts
- Direct Memory Access (DMA)
- Memory-mapped I/O
- Error detection using parity bits or a Cyclic Redundancy Check (CRC) -> see the presentation by Sander Klous, and the sketch below
- Memory management
- Drivers
- Multi-threaded programming
- Inter-process communication
- Remote process invocation
- Traffic shaping to avoid queueing
- Databases
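As a small illustration of one of these techniques, error detection with a CRC: a bit-by-bit CRC-32 using the reflected polynomial 0xEDB88320, as in Ethernet. This is a minimal sketch; production implementations are table-driven or done in hardware.

```cpp
#include <cstdint>
#include <cstdio>
#include <vector>

// Minimal bit-by-bit CRC-32 (reflected polynomial 0xEDB88320, as used by
// Ethernet). The sender appends the CRC to the data; the receiver
// recomputes it and compares, detecting transmission errors.
uint32_t crc32(const std::vector<uint8_t>& data) {
    uint32_t crc = 0xFFFFFFFFu;
    for (uint8_t byte : data) {
        crc ^= byte;
        for (int bit = 0; bit < 8; ++bit)
            crc = (crc >> 1) ^ (0xEDB88320u & (0u - (crc & 1u)));
    }
    return ~crc;
}

int main() {
    std::vector<uint8_t> frame = {'d', 'a', 't', 'a'};
    std::printf("CRC-32 = 0x%08X\n", crc32(frame));
    // A single corrupted bit changes the CRC, so the receiver can
    // detect the error and flag or discard the fragment.
    frame[0] ^= 0x01;
    std::printf("CRC-32 after 1-bit error = 0x%08X\n", crc32(frame));
}
```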

Tomorrow: examples and discussion of different types of computing hardware and interconnection technology, and of the relevant techniques, in the context of:
I. ATLAS DCS and front-end readout (TTC, on-detector electronics, RODs)
II. ATLAS Triggering and DAQ