L2 CPUs and DAQ Interface: Progress and Timeline


1 L2 CPUs and DAQ Interface: Progress and Timeline
Kristian Hahn, Paul Keener, Joe Kroll, Chris Neu, Fritz Stabenau, Rick Van Berg, Daniel Whiteson, Peter Wittich (UPenn)
Presented by Daniel Whiteson, February 17, 2019

2 Requirements
Data flow:
- Keep the data processing path simple; make rejection very fast
- Add no slow controls to event processing
Control:
- Configuration (load trigger exe, prescales, etc.)
- System reset (HRR)
Monitoring:
- Trigger rates, algorithm times, resource usage, etc.
- Vital for commissioning of the system
Commissioning:
- Phase I: Trigger Evaluation Director + a single box
- Phase II: Trigger Evaluation Director + 4 boxes

3 System & DAQ Architecture
[Diagram: overall system and DAQ architecture, centered on the Trigger Evaluation Director]

4 L2 Nodes
Nodes can be mostly ignorant of the state of the rest of the system; keep it as simple as possible. Only two states are needed:
- "Thirsty": wait for events to process
- "Drunk": disregard events
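A minimal sketch of the two-state node described above, in C++ with hypothetical names (the slides specify only the two states, not an interface):

```cpp
#include <atomic>

// The only two states a node needs, per the slide.
enum class NodeState { Thirsty,   // wait for events to process
                       Drunk };   // disregard events

class L2Node {
  std::atomic<NodeState> state_{NodeState::Drunk};  // boot ignoring events
public:
  void goThirsty() { state_ = NodeState::Thirsty; }
  void goDrunk()   { state_ = NodeState::Drunk; }
  bool accepting() const { return state_ == NodeState::Thirsty; }
};
```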

5 L2 Buffers
Rules:
- Events sit in an L2 buffer between L1A and the L2 decision
- Decisions are made in order
- If all buffers are full on L1A: deadtime!
[Diagram: four L2 buffers feeding a CPU]
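The rules above amount to a small amount of bookkeeping. A sketch, assuming C++ and hypothetical names (the slides state the rules, not this code):

```cpp
#include <deque>
#include <optional>

constexpr int kNumBuffers = 4;  // four L2 buffers, per the diagram

struct Event { unsigned l1aNumber; /* payload elided */ };

class L2BufferManager {
  std::deque<Event> inFlight_;  // ordered by L1A; front is the oldest
public:
  // On L1A: if all buffers are occupied, the L1A cannot be taken -> deadtime.
  bool tryAccept(const Event& ev) {
    if (inFlight_.size() >= kNumBuffers) return false;  // deadtime!
    inFlight_.push_back(ev);
    return true;
  }
  // Decisions are made in order: only the oldest event may be decided next.
  std::optional<Event> nextToDecide() const {
    if (inFlight_.empty()) return std::nullopt;
    return inFlight_.front();
  }
  void decided() { inFlight_.pop_front(); }  // frees that buffer slot
};
```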

6 L2 Buffers: Parallel by Event
Same rules as above. Parallelize by event:
- Map buffers to CPUs
- Each CPU processes a whole event
- Phase I: one box handles every event, FIFO

7 L2 Buffers: Phase II
- Easy to extend to one box per buffer
- Process events in parallel
- Minimize deadtime!

8 L2 Buffers: Future?
- Consider splitting events up to process them in parallel
- If triggers are partitionable, this may reduce the tails

9 TL2D Creation
Current L2 system: TL2D bank built by the Alpha.
Pulsar + Nodes:
- Bank created by the nodes, sent to L2toTS for completion
- Bundle the TL2D with the L2 decision: the data size is small, the per-message overhead is large
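To make the bundling concrete, a hypothetical message layout (field names and types are illustrative only; the slides do not specify a format):

```cpp
#include <cstdint>
#include <vector>

// Illustrative only: the point is that the TL2D payload is small, so shipping
// it in the same message as the L2 decision pays the per-message overhead once.
struct L2DecisionMessage {
  uint32_t l1aNumber;               // which event this decides
  uint32_t decisionBits;            // pass/fail per trigger path
  std::vector<uint32_t> tl2dWords;  // partial TL2D bank; L2toTS completes it
};
```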

10 Prescales
Current L2 system: all events pass through a single Alpha
- Standard prescales are trivial
- Rate-limited prescales require one counter each
Phase I (Pulsars + TED + 1 node): a single CPU handles all events
- Prescales done in L2toTS (may be done in the node as a temporary measure for Phase I, but not useful in Phase II)
Phase II (Pulsars + TED + 4 nodes):
- Standard prescales: easy to do in L2toTS, one counter per trigger
- Rate-limited prescales: one clock per trigger; must be done in L2toTS
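A sketch of the two prescale flavours named above, one counter or one clock per trigger (interfaces hypothetical; the slides specify only the counting rules):

```cpp
#include <chrono>

// Standard prescale: one counter per trigger; accept every Nth event (n >= 1).
class StandardPrescale {
  unsigned n_, count_ = 0;
public:
  explicit StandardPrescale(unsigned n) : n_(n) {}
  bool accept() { return ++count_ % n_ == 0; }
};

// Rate-limited prescale: one clock per trigger; accept at most one event
// per minimum interval.
class RateLimitedPrescale {
  std::chrono::steady_clock::duration minGap_;
  std::chrono::steady_clock::time_point last_{};  // epoch: first call passes
public:
  explicit RateLimitedPrescale(std::chrono::milliseconds gap) : minGap_(gap) {}
  bool accept() {
    auto now = std::chrono::steady_clock::now();
    if (now - last_ < minGap_) return false;  // too soon after last accept
    last_ = now;
    return true;
  }
};
```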

11 Scalar Monitoring
Current L2 system: scalars record:
- Number of events into L2
- Number of events which pass
- Number of events out of L2 (after prescales)
Data is sent:
- In the TL2D bank
- To ScalarMon over ethernet from the crate controller
Pulsar + Nodes: gather the information in one place:
- Incoming event rate
- CPU decision
- Prescale influence
Phase I: L2toTS has all the information (may be done on the node as a temporary measure for Phase I)
Phase II: cannot be done on the node
"In the Alpha, scalar incrementation takes 1-3 ms" - Greg Feild
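The three scalars above, as a per-trigger record (names hypothetical):

```cpp
#include <cstdint>

// One instance per trigger path. Usage: bump eventsIn on arrival, eventsPass
// when the algorithm accepts, eventsOut when the prescale also accepts.
struct TriggerScalars {
  uint64_t eventsIn   = 0;  // events into L2
  uint64_t eventsPass = 0;  // events which pass the L2 algorithm
  uint64_t eventsOut  = 0;  // events out of L2, after prescales
};
```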

12 SVT Information
Current L2 system: wait for SVT before processing.
Pulsar + Nodes:
- If the SVT L1 bits are zero: ignore the SVT information and process the event; leave the SVT data out of the TL2D.
- If the SVT L1 bits are nonzero: wait for the SVT information, then process the event; include the SVT data in the TL2D.
Note: we must wait for SVT info on rate-limited triggers. This may be often!
[Diagram: SVT feeding the merger]
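The policy above reduces to one branch per event. A hedged sketch (names hypothetical; the slides give the policy, not the code):

```cpp
#include <cstdint>

struct EventHeader { uint32_t svtL1Bits; };

// Decide whether this event must wait for SVT data before processing.
bool needSvt(const EventHeader& h, bool hasRateLimitedTrigger) {
  // Per the note above: rate-limited triggers must always wait for SVT.
  if (hasRateLimitedTrigger) return true;
  // Otherwise wait only if an SVT-based L1 bit fired; if all bits are zero,
  // process immediately and leave the SVT data out of the TL2D.
  return h.svtL1Bits != 0;
}
```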

13 Run Control
[Diagram: Run Control issues RunControl commands (Partition, Configure, Activate, HRR, End) to the Trigger Supervisor, which sends L1As to the front-end crates; event data flows to the L2 paths & processors, which return trigger decisions]

14 Run Control Transitions

Signal    | Action
----------|-------------------------------------------------------------------
Partition | TED stores the partition number
Configure | TED sends trigger table/exe; TED starts monitoring; nodes go to "thirsty" mode
Activate  | (none)
L1A       | Process, generate decision
Halt      | Go to "drunk". Flush?
Recover   | Go to "thirsty"
Run       | (none)
End       | Dump trigger table; close stats; go to "drunk" mode; retain partition number
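Node-side, the table above is a plain dispatch. A sketch reusing the hypothetical L2Node from the earlier slide (signal names from the table; handler bodies elided):

```cpp
enum class RcSignal { Partition, Configure, Activate, L1A, Halt, Recover, Run, End };

void handle(RcSignal s, L2Node& node) {
  switch (s) {
    case RcSignal::Partition: /* TED stores partition number */            break;
    case RcSignal::Configure: /* load trigger exe */ node.goThirsty();     break;
    case RcSignal::Activate:  /* none */                                   break;
    case RcSignal::L1A:       /* process event, generate decision */       break;
    case RcSignal::Halt:      node.goDrunk();   /* flush? */               break;
    case RcSignal::Recover:   node.goThirsty();                            break;
    case RcSignal::Run:       /* none */                                   break;
    case RcSignal::End:       /* dump table, close stats */ node.goDrunk(); break;
  }
}
```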

15 TED Control
[Diagram: Run Control talks to a Run Control Client inside the TED; TED Control connects to an L2 Node Controller and an L2 Node Monitor on each node; the node monitors feed an L2 Monitor Server, which drives the L2 Mon GUI]

16 Run Control Interface
Communication:
- Run Control talks to clients over ethernet, with SmartSockets, via publish/subscribe
Progress:
- Prototype client is built and talks to RunControl
- Tests: joined a partition, received configuration data, received RC transitions, ACKed back to RC
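For orientation only, the shape of that exchange as a generic publish/subscribe client. This is NOT the SmartSockets API (its real calls differ); subject names and the interface are hypothetical:

```cpp
#include <functional>
#include <string>

// Generic pub/sub facade standing in for the real transport.
struct PubSubClient {
  virtual void subscribe(const std::string& subject,
                         std::function<void(const std::string&)> onMsg) = 0;
  virtual void publish(const std::string& subject, const std::string& msg) = 0;
  virtual ~PubSubClient() = default;
};

// The prototype client's tested behavior: join the partition's subject,
// receive RC transitions, and ACK back to RC.
void joinPartition(PubSubClient& bus, int partition) {
  const std::string subject = "/rc/partition" + std::to_string(partition);
  bus.subscribe(subject, [&bus, subject](const std::string& cmd) {
    // ... apply the transition locally, then acknowledge ...
    bus.publish(subject + "/ack", "OK:" + cmd);
  });
}
```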

17 Monitoring Server
Asynchronous operation:
- Nodes push data regularly, at low priority
- Nodes are never blocked by monitoring
L2 Mon GUI:
- Web-based interface; pulls monitoring information from the nodes
- Access from anywhere, anytime. Security issues?
Statistics:
- CPU/memory usage by node, by trigger, etc.
- L1A rates, trigger rates, event sizes, processing times by trigger...
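One way to honor the "never blocked" rule is to try-lock and drop rather than wait. A sketch under that assumption (the slides state the rule, not the mechanism):

```cpp
#include <deque>
#include <mutex>
#include <string>

class MonitoringQueue {
  std::mutex m_;
  std::deque<std::string> samples_;
public:
  // Event-processing side: if the lock is busy, drop the sample. Monitoring
  // loses a data point; event processing never stalls.
  void push(const std::string& sample) {
    std::unique_lock<std::mutex> lk(m_, std::try_to_lock);
    if (!lk.owns_lock()) return;               // busy: drop, never block
    if (samples_.size() < 1024) samples_.push_back(sample);
  }
  // Monitor-server side drains at low priority.
  bool pop(std::string& out) {
    std::lock_guard<std::mutex> lk(m_);
    if (samples_.empty()) return false;
    out = samples_.front();
    samples_.pop_front();
    return true;
  }
};
```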

18 TED <=> Nodes Communication
Message passing: need a flexible protocol for sending commands, configuration data, and monitoring statistics.
Serialization: convert any message into a serial buffer.
Communication: send/receive serial buffers; considering TCP/IP.
Interface design is independent of the implementation.
[Diagram: TED and each node's L2 Node Controller/Monitor each pair a Serializer/Deserializer with a Communicator]
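A sketch of that layering, assuming C++ interfaces (names hypothetical): message types know only how to flatten themselves, and the transport sees only buffers, so swapping TCP/IP for something else touches one class:

```cpp
#include <cstdint>
#include <vector>

using Buffer = std::vector<uint8_t>;

// Serializer/Deserializer: any message <-> flat buffer.
struct Message {
  virtual Buffer serialize() const = 0;
  virtual void deserialize(const Buffer& buf) = 0;
  virtual ~Message() = default;
};

// Communicator: send/receive buffers; a TCP/IP socket pair is one implementation.
struct Communicator {
  virtual void send(const Buffer& buf) = 0;
  virtual Buffer receive() = 0;
  virtual ~Communicator() = default;
};
```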

19 Example: Configuration
[Diagram: on a Run Control Configure, the TED pulls the trigger table, node IDs/prescales, exe location, and monitoring rate from the trigger & hardware databases, pushes them to each node's L2 Node Controller, and each step is ACKed back up the chain to Run Control]

20 Node: 2 CPUs, 1 Box
Kristian and Fritz have shown that the OS and interrupts can be isolated on one CPU, freeing the second CPU to do data I/O and algorithms:
- CPU 1: OS, interrupts, monitoring, TED interface (ethernet)
- CPU 2: data I/O, algorithms; emits decision + TL2D
Timing: software tails are reduced when all interrupts are sent to one CPU.
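On Linux this split can be expressed with CPU affinity; a minimal sketch, assuming a Linux node (the slides describe the isolation result, not this specific mechanism):

```cpp
#include <sched.h>   // Linux-only: sched_setaffinity, CPU_* macros
#include <cstdio>

// Pin the calling (algorithm) process to one CPU, mirroring the CPU-2 role
// above. Device interrupts are steered to the other CPU separately, e.g. by
// writing a mask to /proc/irq/<N>/smp_affinity.
bool pinToCpu(int cpu) {
  cpu_set_t set;
  CPU_ZERO(&set);
  CPU_SET(cpu, &set);
  if (sched_setaffinity(0 /* this process */, sizeof(set), &set) != 0) {
    std::perror("sched_setaffinity");
    return false;
  }
  return true;
}
```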

21 Monitoring Details
Sharing data between CPUs:
- The algorithm CPU writes monitoring data to shared memory
- The manager CPU looks for monitoring data
- Must ensure that the data is locked
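A sketch of the locking requirement, assuming POSIX shared memory between the two processes (e.g. inherited across a fork): a process-shared mutex guards the block so the manager CPU never reads a half-written record. Layout and names are hypothetical:

```cpp
#include <pthread.h>
#include <sys/mman.h>
#include <new>

struct MonBlock {
  pthread_mutex_t lock;     // process-shared: guards the fields below
  double algoTimeMs;
  unsigned eventsProcessed;
};

// Map an anonymous shared region and install a process-shared mutex in it.
MonBlock* createSharedBlock() {
  void* mem = mmap(nullptr, sizeof(MonBlock), PROT_READ | PROT_WRITE,
                   MAP_SHARED | MAP_ANONYMOUS, -1, 0);
  if (mem == MAP_FAILED) return nullptr;
  auto* blk = new (mem) MonBlock{};
  pthread_mutexattr_t attr;
  pthread_mutexattr_init(&attr);
  pthread_mutexattr_setpshared(&attr, PTHREAD_PROCESS_SHARED);
  pthread_mutex_init(&blk->lock, &attr);
  return blk;
}

// Algorithm CPU (writer) and manager CPU (reader) both bracket access:
//   pthread_mutex_lock(&blk->lock); /* read or update fields */ pthread_mutex_unlock(&blk->lock);
```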

22 Big Picture: Commissioning Schedule
Phase I: begin testing in parasitic mode with a single node in June.
Phase II: extend to 4 nodes. Designed so that minimal work is required:
- Send events to a specific box rather than a single box [simple mask in the merger]
- Add nodes to the TED [simple in the current design]
- L2toTS receives events from multiple nodes
We'd like to avoid reworking the prescale/scalar system.

23 Phase I Tasks
[Timeline chart spanning today through April, May, June 1, and June 21; tasks, roughly in order:]
- Node <-> Pulsar
- Specify data & decision format
- TED <-> RunControl
- TED's Brain
- Specify config data
- TED <=> L2Mon
- SVT => PC
- Merger => PC
- TED <=> Node control and interface design
- CPU control
- CPU data sharing
- TL2D formation
- PC => L2toTS
- Prescaling/scalars (node or L2toTS?)
- Monitor GUI has node data
- Integrate control & configure
- RC-controlled configuration and event processing
- Parasitic running & decisions

24 Phase I Task Schedule

Date       | Task to complete                 | Names              | Needs
-----------|----------------------------------|--------------------|---------------
March 12th | Pulsar <=> PC testing            | Kristian + Fritz   |
           | Control & interface design       | All                |
           | TED <=> RunControl               | Daniel             |
           | Specify data & decision format   | All                |
March 26th | TED's Brain                      | Daniel             |
           | Prescaling/scalars in nodes?     | Kristian/Cheng Ju? |
           | Specify config data              | Daniel             |
           | TED <=> L2Mon                    | Daniel             |
April 8th  | Begin: SVT => PC                 | Kristian + Daniel  | SVT Pulsar
           | Begin: Merger => PC              | Kristian + Daniel  | Merger Pulsar
           | TED <=> Node                     | Fritz              |
           | CPU control                      | Kristian           |
           | CPU data sharing                 | Kristian           |
April 15th | TL2D formation                   | Daniel + Kristian  |
           | Begin: PC => L2toTS              | Kristian + Daniel  | L2toTS Pulsar
May 1st    | Monitoring GUI has node data     | Daniel             |
           | Begin: integrate control pieces  | Daniel + Kristian  |
June 1st   | Begin: RC-controlled testing     | Daniel + Kristian  |
June 21st  | Begin: parasitic running         | All                |

