Niko Neufeld PH/LBC

[Diagram: upgraded LHCb readout architecture, showing the detector front-end electronics, 8800 Versatile Links (clock & fast commands, throttle from PCIe40), eventbuilder PCs (software LLT) and eventbuilder network, subfarm switches, the eventfilter farm of up to 4000 servers, online storage (6 x 100 Gbit/s), TFC and ECS; the system is split between UX85B and the Point 8 surface.]
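The numbers in the diagram allow a rough estimate of the aggregate bandwidth the eventbuilder has to absorb. A minimal back-of-the-envelope sketch in Python, assuming roughly 4.5 Gbit/s of usable payload per Versatile Link (the per-link rate is an assumption, not a figure from these slides):

# Rough aggregate-bandwidth estimate for the upgraded LHCb readout.
# The per-link payload is an assumed figure, not taken from the slides.
N_LINKS = 8800                  # Versatile Links from the detector
PAYLOAD_PER_LINK_GBPS = 4.5     # assumed usable bandwidth per link

aggregate_tbps = N_LINKS * PAYLOAD_PER_LINK_GBPS / 1000
print(f"Aggregate detector readout: ~{aggregate_tbps:.0f} Tbit/s")

# Spread over the 500-600 eventbuilder servers quoted on a later slide,
# every node must sustain several tens of Gbit/s of input.
for n_servers in (500, 600):
    per_node_gbps = N_LINKS * PAYLOAD_PER_LINK_GBPS / n_servers
    print(f"{n_servers} servers -> ~{per_node_gbps:.0f} Gbit/s per node")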

DAQ cost is driven by the number and type of interconnects
- shorter → cheaper
- faster → cheaper per unit of data transported
- price of the switching technology: telecom (feature-rich and expensive) vs. data-centre (high-volume and inexpensive)
Data-centre operation is much easier on the surface, in a non-controlled area
- the current LHCb data-centre is in UX85A
Data-centre cost is definitely lowest for a pre-fabricated ("container") solution
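To make "cheaper per unit of data transported" concrete, a small illustrative sketch comparing interconnect options on cost per Gbit/s; all link names, prices and rates below are hypothetical placeholders, not numbers from this presentation:

# Illustrative cost-per-bandwidth comparison of interconnect options.
# ALL link names, prices and rates are made-up placeholders; only the
# reasoning (cost divided by transported bandwidth) follows the slide.
options = {
    # name: (link rate in Gbit/s, assumed cost per port + optics in CHF)
    "10G long-reach (telecom-style)": (10, 400),
    "40G short-reach (data-centre)": (40, 600),
    "100G short-reach (data-centre)": (100, 1000),
}
for name, (rate_gbps, cost_chf) in options.items():
    print(f"{name:32s}: {cost_chf / rate_gbps:5.1f} CHF per Gbit/s")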

The most compact system is achieved by locating all Online components in a single location
Power, space and cooling constraints allow such an arrangement only on the surface: containerized data-centre
Versatile Links connecting the detector to the readout boards need to cover 300 m
[Diagram: D2 & D1 barracks (current ECS & farm), long-distance optical fibres, container data-centre]

9000 links from the detector
Eventbuilder system of 500 – 600 servers and O(10) switches
Eventfilter farm of up to 4000 servers (will start with ~1000 servers), O(100) switches
Experiment Control System infrastructure: O(100) servers; storage of O(10) Petabyte
Power for cooling and air-conditioning systems
- depends on the adopted cooling solution (→ tomorrow)
- but certainly < 10% of the total

Item / Power (per item):
- Eventbuilder server: 500 W
- Eventbuilder switch: 5 kW
- Eventfilter server: up to 350 W
- Eventfilter / Controls switch: 300 W
- Controls server: 300 W
- Storage: 25 kW
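A minimal sketch adding up these per-item figures, reading the ranges at their upper end ("O(10)" as 10, "O(100)" as 100); this is only an estimate, but it shows how the table translates into the ~2 MW envelope requested on the next slide:

# Rough total-power estimate from the per-item figures on this slide.
# Counts use the upper end of the quoted ranges; an estimate, not an
# official budget.
items = {
    # name: (count, power per item in W)
    "eventbuilder server":         (600,  500),
    "eventbuilder switch":         (10,  5000),
    "eventfilter server":          (4000, 350),
    "eventfilter/controls switch": (100,  300),
    "controls server":             (100,  300),
    "storage":                     (1,  25000),
}
total_w = sum(count * power for count, power in items.values())
print(f"IT load: ~{total_w / 1e6:.1f} MW")            # roughly 1.8 MW
# Cooling and air-conditioning add < 10% (bullet above), which brings
# the estimate close to the 2 MW quoted on the next slide.
print(f"With <10% cooling overhead: < {1.1 * total_w / 1e6:.1f} MW")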

The data-centre will house the event-builder, part of the ECS and TFC, and the event-filter farm, plus the required network equipment
No central storage
Require up to 2 MW (min. 800 kW) of 3-phase current (220 V)
Mandatory: 20 kW EOD for 30 min for critical ECS & TFC services
Desirable, but not mandatory: ~400 kW EOD for 2 minutes for the event-builder nodes, for a clean shutdown (custom electronics inside)
Power should be available with the arrival of the containers, starting from
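As a quick check, the backed-up power requests above translate into the following stored-energy figures (pure arithmetic on the numbers quoted, assuming ideal conversion):

# Stored energy implied by the backed-up (EOD) power requests above.
requests = [
    # (description, power in kW, duration in minutes)
    ("critical ECS & TFC services (mandatory)",  20, 30),
    ("event-builder clean shutdown (desirable)", 400, 2),
]
for desc, power_kw, minutes in requests:
    energy_kwh = power_kw * minutes / 60
    print(f"{desc}: {power_kw} kW for {minutes} min -> {energy_kwh:.1f} kWh")
# -> 10.0 kWh for the critical services, ~13.3 kWh for the event-builder.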

Base-line scenario:
- very little – some ECS infrastructure in the D2 / D3 barracks
- more than covered with the 100 kW EOD available today ✔
Plan B, in case of a problem with the long-distance versatile link:
- no indication of any problem yet → should know by the end of the year
- in that case an additional 400 kW is needed (available today)
- battery backup would be desirable to ensure a clean shutdown of the servers (2 minutes)

Existing infrastructure: 70 kW on EOD (dual-feed, redundant)
- used for critical services (ECS and storage)
- will be kept for Run 3 and beyond
A small increase (30 kW) in the same configuration is desirable, but not mandatory
More on-site storage
Cooling should be made redundant (→ tomorrow)

Summary table: maximum load [kW] and the part on battery, per location: UX85A (D1 & D2), R-007 (SX85, additional), and the S8 data-centre (up to 2000 kW, min. 800 kW, of which 20 kW on battery).
The upgraded LHCb online system has significantly larger power needs than the current one, in particular in the event building and the event filter.
All new power needs are in the new data-centre, to be built from 2017.
For the existing, re-used locations the current power and battery backup are sufficient.

Long-distance optical fibres run from UX85B via PM to the S8 data-centre
- won't talk about patch-cords here (SD responsibility)
Installation successfully verified in collaboration with EN/EL and EN/MEF (thanks!)
- verified for both types: pre-connectorized cables, and blown fibres with spliced pig-tails
Long-distance fibres are OM3 with MPO12 elite connectors
Will need between 1000 and 1200 MPO12 fibres
Needed as soon as possible after the start of LS2, because they are indispensable for commissioning
Clearly a major cost factor in the upgrade: need a competitive price
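A small sanity check of the fibre count, interpreting "1000 to 1200 MPO12 fibres" as MPO12 trunk cables of 12 fibres each and assuming a spare/overhead fraction; both are assumptions for illustration, not statements from the slide:

# Sanity check of the long-distance fibre count.
# Assumes 12 fibres per MPO12 trunk; the spare fraction is a guess.
detector_links = 9000          # links from the detector (earlier slide)
fibres_per_mpo12 = 12
spare_fraction = 0.20          # assumed spares/overhead, not from the slide

needed_trunks = detector_links * (1 + spare_fraction) / fibres_per_mpo12
print(f"~{needed_trunks:.0f} MPO12 trunks needed")     # ~900

# The quoted 1000-1200 MPO12 cables would correspond to:
for trunks in (1000, 1200):
    print(f"{trunks} MPO12 -> {trunks * fibres_per_mpo12} individual fibres")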