ALICE – First paper
ALICE Set-up
[Detector overview figure: TOF, TRD, HMPID, ITS, PMD, Muon Arm, PHOS, TPC]
Size: 16 x 26 meters
Weight: 10,000 tons
ALICE TPC
Large-volume gas detector: drift volume and MWPCs at the end caps
3-dim. "continuous" tracking device for charged particles: x, y from the pad position, z derived from the drift time
Designed to record up to 20,000 tracks
Event rate: about 1 kHz
Typical event size for a central Pb+Pb collision: about 75 MByte
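A minimal sketch of how a TPC space point is formed from the pad coordinates and the drift time, as described above. All constants and names (kDriftVelocity, kDriftLength, kPadPitch, kTimeBinWidth, MakeSpacePoint) are illustrative assumptions, not the actual ALICE TPC parameters or code.

```cpp
struct SpacePoint { double x, y, z; };

// Assumed constants, order of magnitude only
constexpr double kDriftVelocity = 2.7;   // cm per microsecond
constexpr double kDriftLength   = 250.0; // cm, end cap to central electrode
constexpr double kPadPitch      = 0.4;   // cm, pad spacing along a row
constexpr double kTimeBinWidth  = 0.1;   // microseconds per time bin

SpacePoint MakeSpacePoint(double rowRadius, double padCentroid,
                          double timeCentroid, bool aSide) {
  SpacePoint p;
  p.x = rowRadius;                 // radial position of the pad row (local frame)
  p.y = padCentroid * kPadPitch;   // coordinate along the pad row from the pad centroid
  // z is derived from the drift time: the later the signal arrives at the end
  // cap, the closer to the central electrode the ionisation was produced.
  double driftDistance = timeCentroid * kTimeBinWidth * kDriftVelocity;
  p.z = aSide ? (kDriftLength - driftDistance) : (driftDistance - kDriftLength);
  return p;
}
```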
ALICE TPC: 5 years of construction
Trigger system
Why? interaction rate (e.g. 8 kHz for Pb+Pb) > detector readout rate (e.g. 1 kHz for TPC) > DAQ archiving rate (50 - 100 Hz)
Minimal requirements:
- Detect collisions
- Initialise readout of detectors
- Initialise data transfer to data acquisition (DAQ)
- Protection against pile-up
High-level requirements:
- Select interesting events
- Needs real-time processing of raw data and extraction of physics observables
[Data-flow schematic: trigger detector, trigger system, detectors, readout electronics, raw data, high-level trigger, DAQ, processed data]
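Purely illustrative arithmetic, using the example rates quoted on the slide, to show the rejection factors the trigger chain has to provide; the numbers are examples, not a specification.

```cpp
#include <cstdio>

int main() {
  const double interactionRate = 8000.0;  // Hz, Pb+Pb example
  const double readoutRate     = 1000.0;  // Hz, TPC readout limit
  const double archivingRate   = 100.0;   // Hz, DAQ mass-storage limit

  std::printf("rejection needed before readout  : %.0fx\n",
              interactionRate / readoutRate);   // ~8x (hardware trigger)
  std::printf("rejection needed before archiving: %.0fx\n",
              readoutRate / archivingRate);     // ~10x (high-level trigger)
  return 0;
}
```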
What to trigger on?
Every central Pb+Pb collision produces a QGP - no need for a QGP trigger
But hard probes are (still) rare at high momentum
In addition, the reconstruction efficiency of heavy-quark probes is very low
E.g. detection of hadronic charm decays: D0 → K− π+
about 1 D0 per event (central Pb-Pb) in ALICE acceptance; after cuts: signal/event = 0.001, background/event = 0.01
PHOS L0 trigger
PbWO4 crystal calorimeter for photons and neutral mesons, 1 to > 100 GeV
Array of crystals + APD + preamp + trigger logic + readout / DAQ
L0 trigger tasks: shower finder, energy sum
Implementation: FPGA, VHDL firmware for the L0/L1 trigger
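A minimal software sketch of the sliding-window energy sum that an L0 shower finder of this kind performs in firmware; the crystal-matrix size, the 4x4 window, and the threshold are illustrative assumptions, not the PHOS values or its VHDL implementation.

```cpp
#include <array>

constexpr int kRows = 56, kCols = 64;  // assumed crystal matrix size
using CrystalMap = std::array<std::array<float, kCols>, kRows>;  // energies in GeV

// Returns true if any 4x4 window of summed crystal energies exceeds the threshold.
bool L0ShowerTrigger(const CrystalMap& e, float thresholdGeV) {
  for (int r = 0; r + 4 <= kRows; ++r) {
    for (int c = 0; c + 4 <= kCols; ++c) {
      float sum = 0.f;
      for (int i = 0; i < 4; ++i)
        for (int j = 0; j < 4; ++j)
          sum += e[r + i][c + j];
      if (sum > thresholdGeV) return true;  // fire L0 on the first window above threshold
    }
  }
  return false;
}
```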
PHOS – muon tracks
D0 trigger
Detection of hadronic charm decays: D0 → K− π+ (6.75%), cτ = 124 μm
HLT code:
- D0 finder: cut on d0(K)*d0(π)
- TPC tracker
- TPC+ITS track fitter
- displaced decay vertex finder
Preliminary result: invariant mass resolution is within a factor of two compared to offline
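A sketch of the selection idea named above: cut on the product of the daughters' impact parameters, then form the K-π invariant mass. The Track struct and the cut value are illustrative assumptions, not the actual HLT D0-finder code.

```cpp
#include <cmath>

struct Track {
  double d0;          // signed transverse impact parameter w.r.t. the primary vertex (cm)
  double px, py, pz;  // momentum components (GeV/c)
};

constexpr double kMassK  = 0.4937;  // GeV/c^2
constexpr double kMassPi = 0.1396;  // GeV/c^2

double InvariantMass(const Track& k, const Track& pi) {
  auto energy = [](const Track& t, double m) {
    return std::sqrt(m * m + t.px * t.px + t.py * t.py + t.pz * t.pz);
  };
  const double E  = energy(k, kMassK) + energy(pi, kMassPi);
  const double px = k.px + pi.px, py = k.py + pi.py, pz = k.pz + pi.pz;
  return std::sqrt(E * E - px * px - py * py - pz * pz);
}

// D0 daughters come from a displaced vertex, so the signed impact parameters of
// K and pi tend to have opposite signs; the cut value here is a placeholder.
bool PassesD0Cut(const Track& k, const Track& pi, double productCut = -1e-4) {
  return k.d0 * pi.d0 < productCut;
}
```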
Introducing the High Level Trigger
ALICE data rates (example TPC):
- TPC is the largest data source with 570,132 channels, 512 time bins and 10 bit ADC values
- Central Pb+Pb collisions: event rates ~200 Hz (past/future protected), event sizes ~75 MByte (after zero-suppression), data rates ~15 GByte/sec
- The TPC data rate alone exceeds by far the total DAQ bandwidth of 1.25 GByte/sec
HLT tasks:
- Event selection based on software trigger
- Efficient data compression
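A back-of-the-envelope check of the numbers quoted on the slide; purely illustrative arithmetic, not ALICE code.

```cpp
#include <cstdio>

int main() {
  const double eventSizeMB     = 75.0;   // central Pb+Pb event after zero-suppression
  const double eventRateHz     = 200.0;  // past/future protected readout rate
  const double daqBandwidthGBs = 1.25;   // total DAQ bandwidth to mass storage

  const double tpcRateGBs = eventSizeMB * eventRateHz / 1024.0;  // ~14.6 GByte/s
  std::printf("TPC data rate ~ %.1f GByte/s vs DAQ bandwidth %.2f GByte/s\n",
              tpcRateGBs, daqBandwidthGBs);
  std::printf("=> HLT must reject and/or compress by a factor of ~%.0f\n",
              tpcRateGBs / daqBandwidthGBs);  // roughly 12x
  return 0;
}
```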
HLT requirements
Full event reconstruction in real-time
- Main task: reconstruction of up to 10,000 charged-particle trajectories
- Method: pattern recognition in the TPC: cluster finder, track finder, track fit
- Global track fit ITS-TPC-TRD
- Vertex finder
Event analysis
Trigger decision
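A schematic of the reconstruction chain listed above as a sequence of processing stages. All types and function names are invented stubs for illustration; the real HLT components are far more involved and run as separate processes connected by the data-transport framework rather than direct calls.

```cpp
#include <vector>

struct Digit   { int row, pad, time, adc; };   // zero-suppressed TPC raw data
struct Cluster { float x, y, z, charge; };     // space point from the cluster finder
struct Track   { std::vector<int> clusterIds; float pt = 0; };
struct Vertex  { float x = 0, y = 0, z = 0; };

// Placeholder stages that only show the data flow between the steps.
std::vector<Cluster> ClusterFinder(const std::vector<Digit>&)    { return {}; }
std::vector<Track>   TrackFinder(const std::vector<Cluster>&)    { return {}; }
void   TrackFit(std::vector<Track>&, const std::vector<Cluster>&) {}
Vertex VertexFinder(const std::vector<Track>&)                   { return {}; }
bool   TriggerDecision(const std::vector<Track>& t, const Vertex&) { return !t.empty(); }

bool ProcessEvent(const std::vector<Digit>& digits) {
  auto clusters = ClusterFinder(digits);   // step 1: clusters from raw digits
  auto tracks   = TrackFinder(clusters);   // step 2: combine clusters into track candidates
  TrackFit(tracks, clusters);              // step 3: refit the track parameters
  Vertex v = VertexFinder(tracks);         // step 4: find the interaction vertex
  return TriggerDecision(tracks, v);       // step 5: accept or reject the event
}
```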
HLT architecture
HLT is a generic high-performance cluster
[Schematic: Detectors, DAQ, HLT, Mass storage]
HLT building blocks (1)
Hardware: nodes, network, infrastructure
Sufficient computing power for p+p:
- 121 Front-End PCs: 968 CPU cores, 1.935 TB RAM, equipped with custom PCI cards for receiving detector data
- 51 Computing PCs: 408 CPU cores, 1.104 TB RAM
Network: InfiniBand backbone, Gigabit Ethernet
Infrastructure: 20 redundant servers for all critical systems
HLT building blocks (2)
Software:
- Cluster management and monitoring
- Data transport and process synchronisation framework
- Interfaces to online systems: experiment control system, detector control system, offline DB, ...
- Event reconstruction and trigger applications
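A rough sketch of what a processing component in such a data-transport framework can look like: each component receives the data blocks it subscribed to, processes one event, and publishes its output blocks to the next component in the chain. The interface here (DataBlock, ProcessingComponent, ProcessEvent) is invented for illustration and is not the actual ALICE HLT framework API.

```cpp
#include <cstdint>
#include <vector>

struct DataBlock {
  uint32_t dataType;             // e.g. raw digits, clusters, tracks
  std::vector<uint8_t> payload;  // serialised detector or reconstruction data
};

// One event in, zero or more output blocks out.
class ProcessingComponent {
 public:
  virtual ~ProcessingComponent() = default;
  virtual std::vector<DataBlock> ProcessEvent(
      const std::vector<DataBlock>& inputs) = 0;
};

// Example: a pass-through component that republishes its inputs,
// standing in for a real cluster finder or tracker.
class PassThrough : public ProcessingComponent {
 public:
  std::vector<DataBlock> ProcessEvent(
      const std::vector<DataBlock>& inputs) override {
    return inputs;  // a real component would transform the data here
  }
};
```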
First paper
Planning
pp run:
- November: 200 collisions @ 900 GeV
- December: 10^6 collisions @ 900 GeV, some collisions @ 2.4 TeV
- February onwards: collisions @ 7 TeV