HPAD 2D-pixel detector Outline - Basics of the AP-HPAD


1 HPAD 2D-pixel detector
Peter Göttlicher, DESY-FEB - March 11th, 2008
Outline
- Basics of the AP-HPAD
- Interface to the backend
  Actions inside the detector head and/or the backend system
  Ideas from the proposal: protocol, algorithms for data sorting
- Interface to the control electronics
  Synchronization to the XFEL and between modules
  Infrastructure
  Idea of bunch-to-bunch validate/veto monitoring, data to the backend
- Time line and summary

2 Basics of the AP-HPAD
The consortium:
- PSI
- University of Bonn
- University of Hamburg
- DESY

3 Modular structure of active plane
- 1 Mpixel: 1024 × 1024 pixels
- Quadrants to allow a beam hole
- 32 modules of 256 × 128 pixels each
- 8 ASICs/module: 64 × 64 pixels each
- Pixel size: 200 µm × 200 µm
Module: size ~51.2 × 25.6 mm², 32K pixels (K = 1024)
Connected to the backend via a fast link, controlled via TCP/IP or fieldbus, plus hard wires to the infrastructure
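As a quick cross-check of these numbers, a minimal Python sketch (only the pixel counts and pitch quoted above are used; the 4 × 2 ASIC arrangement per module is the one implied by 256 × 128 pixels):

    # Cross-check of the AP-HPAD granularity quoted above (illustrative only).
    PIXEL_PITCH_MM = 0.2                  # 200 µm x 200 µm pixels
    ASIC_PIXELS = (64, 64)                # pixels per ASIC
    MODULES = 32                          # modules, arranged in quadrants

    module_pixels = (ASIC_PIXELS[0] * 4, ASIC_PIXELS[1] * 2)        # 8 ASICs -> 256 x 128
    module_size_mm = (module_pixels[0] * PIXEL_PITCH_MM,
                      module_pixels[1] * PIXEL_PITCH_MM)            # ~51.2 x 25.6 mm^2
    total_pixels = MODULES * module_pixels[0] * module_pixels[1]

    print(module_size_mm)   # (51.2, 25.6)
    print(total_pixels)     # 1048576 = 1024 * 1024, i.e. the "1 Mpixel" plane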

4 Building blocks / Work packages
- Detector head: integrating, analogue storage at 5 MHz and analogue multiplexing
- Interface electronics: digitizing, data sorting and formatting, ASIC/link control
- Backend: main fast data stream
- Control electronics: control, monitoring, synchronizing, veto
- Infrastructure: e.g. cooling, power
Rack/crate/instrument-style systems common for all XFEL detectors?

5 Functions of ASIC: 64 × 64 pixels
[ASIC block diagram: per pixel an amplifier with discriminator, DAC and filter feeds an analogue pipeline of 400 cells; the control logic selects one of three gain stages (C1, C2, C3 - highest gain: 0 to 256 photons) and records the choice in a digital pipeline as 2 bits per bunch per pixel; readout multiplexes between pixels/bunches onto 4 analogue lines. Control inputs: bunch clock, train start/stop, usage-of-bunch, fast veto control from the interface electronics, readout control; boot data of a few bits per pixel.]
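A toy Python model of the per-pixel storage this block diagram implies (illustrative only; the 400-cell depth and the 2-bit gain code are the numbers on the slide, everything else - class and method names, the veto handling - is assumed):

    ANALOG_CELLS = 400                 # analogue pipeline depth per pixel
    GAIN_CODES = ("C1", "C2", "C3")    # three gain stages, encoded in 2 bits per bunch

    class PixelPipeline:
        """Per-pixel storage: analogue samples plus a 2-bit gain code per stored bunch."""
        def __init__(self):
            self.analog = [0.0] * ANALOG_CELLS
            self.gain = [0] * ANALOG_CELLS
            self.ptr = 0               # next cell to write

        def store_bunch(self, amplitude, gain_code, vetoed=False):
            self.analog[self.ptr] = amplitude
            self.gain[self.ptr] = gain_code        # 0..2 for C1..C3
            if not vetoed:                         # a fast veto lets the cell be reused
                self.ptr = (self.ptr + 1) % ANALOG_CELLS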

6 Interface electronics for 256 × 128 pixels: digitizing to internal stage
- 8 ASICs × 4 analogue outputs (+ digital) = 32 lines into 32 ADCs; defined ADC protocol at 700 Mbit/s × 32 lines; 32 × 16 bit at 50 MS/s = 25.6 Gbit/s; the ADCs are in use only ~8% of the time
- Per pixel and frame: 14 ADC bits + 2 gain bits; 12 of the 14 bits are above noise (oversampling), enough to distinguish 0, 1, 2 photons up to 256 photons, so there is room for reduction
- The data volume is defined: 16 bit/pixel/frame × 400 frames/train = 200 Mbit/interface/train; × 32 interfaces/1 Mpixel = 6.4 Gbit/train/1 Mpixel
- Oversampling of 13-18%, but 2 bytes/pixel/frame is convenient
- 50 MS/s: compromise between signal settling time (+) and signal storage time (-)
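A back-of-envelope check of the rates quoted above, in Python (only numbers from the slide are used; the slide's 200 Mbit and 6.4 Gbit figures are rounded):

    # Data-rate check for one interface (256 x 128 pixels) and for 1 Mpixel.
    PIXELS_PER_INTERFACE = 256 * 128
    ADC_LINES, ADC_BITS, ADC_RATE = 32, 16, 50e6      # 32 lines, 16 bit, 50 MS/s
    FRAMES_PER_TRAIN, INTERFACES_PER_MPIXEL = 400, 32

    adc_stream = ADC_LINES * ADC_BITS * ADC_RATE                  # bits/s into the interface
    train_bits = PIXELS_PER_INTERFACE * 16 * FRAMES_PER_TRAIN     # bits/interface/train

    print(adc_stream / 1e9)                           # 25.6 Gbit/s
    print(train_bits / 1e6)                           # ~210 Mbit/interface/train ("200 Mbit")
    print(INTERFACES_PER_MPIXEL * train_bits / 1e9)   # ~6.7 Gbit/train/1 Mpixel ("6.4 Gbit")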

7 Interface electronics, end of task?
Now the signals are on digital lines: any further step can be done in the interface or at the backend.
But:
- the ADCs are defined
- there is no redundancy
- the total speed is very high, yet used only 8% of the time
⇒ make use of that time inside the interface!

8 Interface electronics for 256 × 128 pixels: internal stage to backend, minimum
Proposal: 4 × 1 Gbit Ethernet over optical fibre; option: 10G
The minimum is an FPGA plus RAM plus optics. What is adequate with that?
- Gbit Ethernet with UDP on 4 links: 1G is available inside the FPGA
- Usual hardware: 4 links instead of 3; optical to avoid ground loops
- RAM size defined by speed: 26 Gb/s (write) + 4 Gb/s (read) = 30 Gb/s; 2 × 64 data I/Os at 400 Mb/s/pin; with a DIMM > 1 GB, i.e. many trains
- With 10G: a minor increase in rate to 36 Gb/s, same number of fibres
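A quick Python check of the RAM bandwidth argument (the 25.6 Gb/s write figure is the ADC stream from slide 6, which this slide rounds to 26; the pin count and per-pin rate are the ones quoted above):

    # RAM bandwidth check for the interface buffer (illustrative).
    write_gbps = 25.6                  # ADC stream written to RAM (slide rounds to 26)
    read_1g = 4 * 1.0                  # read back over 4 x 1G Ethernet links
    read_10g = 10.0                    # option: one 10G link

    pins, rate_per_pin = 2 * 64, 0.4   # 2 x 64 data I/Os at 400 Mb/s per pin
    available = pins * rate_per_pin    # 51.2 Gb/s raw

    print(write_gbps + read_1g)        # ~30 Gb/s needed with 1G links
    print(write_gbps + read_10g)       # ~36 Gb/s needed with a 10G link
    print(available)                   # 51.2 Gb/s available, so both fit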

9 Module mechanics
- Each interface has to fit behind the active area; there is no depth limitation
- Separate PCB constructions for low-EMI analogue and high-density digital parts
- Heat per module: 35 W at the sensor and (rough) W at the interface (voltage regulators and RAM)

10 Interface to backend
Baseline for the AP-HPAD proposal:
- The backend is standard IT equipment, no custom designs
- Complete frames are needed in the backend
Concept:
- Time-scheduled data sending from all interfaces
- Avoids conflicts: never two senders to one backend device
Backend construction:
- One backend input device per link used in parallel
- Switches and CPUs for 1G Ethernet with UDP
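A minimal sketch of the kind of UDP transfer this baseline implies: one interface pushes a frame record to its assigned backend input device during its time slot. The address, port and record header are hypothetical (the real sender would be the interface FPGA, not Python):

    import socket, struct

    BACKEND = ("192.0.2.10", 50000)    # hypothetical backend input device for this frame group
    MTU_PAYLOAD = 1400                 # keep datagrams below a typical Ethernet MTU

    def send_frame(sock, train_id, frame_id, pixels):
        """Split one 2-byte/pixel frame into numbered UDP datagrams (ad-hoc 8-byte header)."""
        for seq, off in enumerate(range(0, len(pixels), MTU_PAYLOAD)):
            header = struct.pack("!IHH", train_id, frame_id, seq)
            sock.sendto(header + pixels[off:off + MTU_PAYLOAD], BACKEND)

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    send_frame(sock, train_id=1, frame_id=0, pixels=bytes(256 * 128 * 2))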

11 Time schedule to the XFEL
- Use one bunch train for digitizing
- Transfer only during the next train, or even later
That allows:
- sorting algorithms on the complete data set
- cancelling frames inside the interface on demand of the control electronics, within 100 ms
That needs: 50 MByte of RAM per module, plus 25 MByte more per additional train of delay
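Where the 50 MByte figure comes from, as a short Python check (2 bytes/pixel and 400 frames/train as above):

    bytes_per_train = 256 * 128 * 2 * 400      # one stored train per module
    print(bytes_per_train / 2**20)             # 25.0 MiB per train
    print(2 * bytes_per_train / 2**20)         # 50 MiB: digitize one train while sending the previous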

12 Ethernet connectivity to backend - 1G
Each backend CPU (FPGA) needs a connection to each interface and to the control electronics.
To each switch belongs a group of frames; on the time schedule the groups do not interfere with each other.
[Diagram: several switches, each serving one group of frames, e.g. frames 1-100.]

13 Time schedule to backend: no access conflict
[Schedule diagram, explained for a system of 8 interface modules: the colour code marks a group of frames and its backend CPU; the horizontal axis is time, the vertical axis the interface modules.]
- Extendable to 32 modules and more
- The switch-over time defines the number of trains that must be stored in the interface electronics to complete trains
- 1 GByte/interface for 1 Mpixel and 400 frames
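A sketch of such a conflict-free rotation in Python, for the 8-module example (the slot layout and the one-device-per-group mapping are assumptions; the point is only that no two interfaces ever target the same backend device in the same slot):

    N_INTERFACES = 8                    # interface modules in the example above
    N_SLOTS = N_INTERFACES              # one full rotation of the schedule

    def schedule(slot):
        """Map each interface module to the backend device it sends to in this slot."""
        return {iface: (iface + slot) % N_INTERFACES for iface in range(N_INTERFACES)}

    for slot in range(N_SLOTS):
        targets = schedule(slot).values()
        assert len(set(targets)) == N_INTERFACES    # never two senders on one device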

14 Usage of 10G Ethernet?
- Each interface module has one link and can send its data in p = 20% of the train repetition time.
- With 1 GByte in each interface, n = 40 trains can be stored in the interface electronics.
⇒ Time-scheduled data transfer would allow sorting trains to CPUs with one 10G input each, as long as the number of interface modules stays below n/p × efficiency = 200 × efficiency (6.2 Mpixel × efficiency) ... 4 Mpixel
⇒ Required parallel CPUs: number of interfaces × p / efficiency. For 4 Mpixel: 26 CPUs / efficiency.
Support of 10G: switch and CPU? xTCA?
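The same sizing argument as a worked Python check (efficiency is left at 1 as a free parameter; 4 Mpixel corresponds to 128 interface modules of 256 × 128 pixels):

    p = 0.20           # fraction of the train period one interface needs on its 10G link
    n = 40             # trains buffered in 1 GByte of interface RAM (~25 MB/train)
    efficiency = 1.0   # protocol/link efficiency, a free parameter

    max_interfaces = n / p * efficiency                      # 200
    interfaces_4mpix = 4 * 2**20 // (256 * 128)              # 128
    cpus_needed = interfaces_4mpix * p / efficiency          # 25.6 -> "26 CPUs"
    print(max_interfaces, interfaces_4mpix, cpus_needed)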

15 Why move tasks to the backend?
- The money for the interface electronics would decrease only marginally
+ The tasks look common for all 2D detectors: the manpower is spent once
+ The first interface stage towards the backend might be detector specific, but the rest of the upgrades can follow the IT market; maintenance and operation experience would become common
? What is the cost of the tasks in the backend ?
· With the time schedule of writing to RAM and sending a train delayed, any sorting is possible; because of transmit errors it is good to put a group of frames into one record

16 Interface to control electronics
Expectations towards the control electronics:
- All fast signals to synchronize to the accelerator and between the interface modules
- Infrastructure, e.g. power, cooling, ...
- Train-to-train data and synchronization, validity checks
- GUI: configure, monitor status, check data online
- Operating also the rest of the experiment and the user equipment
- Generating fast bunch/target quality information (200 ns, 100 ms)

17 Control: When to specify?
- A basic concept can be developed from the wish list on the last slide
- The exact specification will only develop with progress in the ASIC and interface development and with growing experience
- July 2009: test of a first module (16 × 16 pixels), operated at LCLS, PSI, PETRA3, ... - quite different from the XFEL
- For that, mostly commercial modules are used, but under common control: e.g. a PXI-based pulse pattern generator (National), laboratory power supplies
Try to get some ideas and take some steps NOW!

18 Control: Generating the fast veto
Idea: get a fast VETO to the ASIC, via the interface, within 200 ns at the latest
- Good for reusing the last storage cell
- Reminder: 400 cells for 3000 bunches - it might not reduce the data volume!
At the control electronics, one idea: a PMT looking at the fluorescence light; expect user specifics with growing experience
- NIM/VME electronics such as discriminators and coincidence units; rumour has it that similar modules are appearing with USB (Mpot, WIENER)
- 200 ns requires hard-wired signals (an FPGA is possible)
- No request, or less stringent timing, for data taking before the XFEL

19 Control: Timing for the fast veto
Rough estimates of the time allowance:
  Sensor + logic                              40 ns
  Cable to control                            40 ns   (20 m up- or 5 m downstream)
  Cable to interface and inside interface     40 ns   (5 m between control and interface)
  ASIC and jitter                             40 ns
  Sum                                        160 ns
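The same budget summed up against the 200 ns target of slide 18, as a trivial Python check:

    budget_ns = {"sensor + logic": 40, "cable to control": 40,
                 "cable to / inside interface": 40, "ASIC + jitter": 40}
    total = sum(budget_ns.values())
    print(total, 200 - total)    # 160 ns used, 40 ns of headroom against the 200 ns target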

20 Control: Generating the slow veto (100 ms)
Time scale: before the start of the next train (100 ms)
+ Data per train are available from the accelerator/experiment
+ The control electronics can perform calculations
- No access to the HPAD data itself
Signal distribution via broadcast during the train pause to all interfaces and/or the backend
? Would it help at all, if the transfer to the backend is managed ?
Alternative: just tell the backend to throw the train away?

21 Control: experimental area
Control of the area means supervising commercial-like devices: vacuum, temperature, cooling, power supplies, ...
Some fieldbusses: TCP/IP, CAN, GPIB
User: what will (s)he bring? Targets, pluggable sensors; university standards, X-ray centre standards
Most measurement products nowadays are PXI, but are distributed USB or TCP/IP devices developing?
Standard:
- Crate system and/or distributed systems?
- Support for imports from users: analogue/digital I/O, fieldbus, pattern generators
Don't be too stringent: the devices will define the requirements, and only to a limited extent vice versa

22 Control: accelerator
Fast control: bunch clock, start of train
Slow control: train numbers, bunches expected and actually delivered, quality
Fast needs come from the accelerator with its standards and its modules
- the final step is AP-HPAD specific, to keep the interface simple
- HPAD: bunch clock, train start, "train end" or a fake of it to check the counting
Slow needs via records: "bunches to be filled into the pipeline"
- at boot: download a pattern, or a few
- "coincidence" of detector and experiment requirements
- well before the train: download/select the pattern (3000 bits)
- "download" to interface and backend, with a consistency check
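One way such a 3000-bit bunch-usage pattern could be packed and cross-checked between interface and backend, as a Python sketch (the bit packing and the checksum are assumptions, not the proposal's format):

    import hashlib

    N_BUNCHES = 3000

    def pack_pattern(used_bunches):
        """Pack the indices of used bunches into a 3000-bit (375-byte) mask."""
        mask = bytearray((N_BUNCHES + 7) // 8)
        for b in used_bunches:
            mask[b // 8] |= 1 << (b % 8)
        return bytes(mask)

    pattern = pack_pattern(range(0, N_BUNCHES, 8))        # e.g. every 8th bunch used
    digest = hashlib.md5(pattern).hexdigest()             # compare digests on interface and backend
    print(len(pattern), digest[:8])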

23 Control: AP-HPAD itself
Fast signals: bunch clock, 5 MHz, train start, veto-last-bunch, ready, busy
Slow communication via fieldbus (TCP/IP): configure, monitor status and performance, transfer train-to-train information (train number, used bunches, ...)
Power: floating per interface module, GND at the detector head
- All signals start at a common point of the control electronics
- Values (V, I) are only defined during/after development (~2 kW)
- Try to avoid noisy DC/DC conversion, but mind the cable diameter
- Take advantage of low-ripple supplies

24 Control: Putting it together into a general concept
- Different standards do not necessarily mean two crates
- Who is the master? We go to different beams - what has the best support in the X-ray community at the XFEL?
- The master is defined by the experiment; a minimized, defined interface to adapt to other beam lines
- Link to the interface: 100 Mb/s has support from µC (TCP/IP) and FPGA (UDP)

25 Control: Synchronizing
- Fast: bunch clock, train, veto, a few spares?
- Busy, ready, error, start-transfer handshakes between the trains, with interfaces and backend, via fieldbus
- Synchronizing the data transfer with the link switch-over (1 µs) would require a hard-wired ready; busy can be left out if that becomes a task of the backend
- Counters for checks: trains, bunches, checksums
Optical links have a bit-error rate of 1/10^12: an error every 16 s!
Try to program (control/interface) such that only groups of frames get lost
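Where the "every 16 s" estimate comes from, as a Python check for the full 1 Mpixel stream at 10 trains/s (using the 16 bit/pixel and 400 frames/train numbers from slide 6):

    ber = 1e-12                                     # bit-error rate of the optical links
    bits_per_second = 1024 * 1024 * 16 * 400 * 10   # 1 Mpixel, 16 bit, 400 frames, 10 trains/s
    print(bits_per_second / 1e9)                    # ~67 Gbit/s in total
    print(1 / (ber * bits_per_second))              # one bit error every ~15 s (slide: 16 s)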

26 Data reduction, frame suppression
- Fast veto (200 ns): mostly for better usage of the pipeline
- Slow veto (100 ms): generated in the control electronics; who acts on it, the interface or the backend?
- Rejection based on the frames themselves: only in the backend, since complete frames exist only there
- Data compression: mentioned by application scientists; not effective, to be counterbalanced with additional CPU power

27 Data reduction, zero suppression
The pedestal is a function of:          1 Mpixel         per interface
  pixel                                 [0:1M]           [0:32K]
  cell                                  [1:400]          [1:400]
  gain                                  [1:3]            [1:3]
  storage time                          (small dependence, tied to the bunch number)
  product                               3.5 Gconstants   0.1 Gconstants
If zero suppression uses only a threshold in the highest gain: still 40 Mconstants or 400 Mbit
Nothing is foreseen in the interface: it would need more memory accesses than plain reads of the memory
Now: the pedestal behaviour and the effort are not yet known
Counterbalance: thrown-away pixels vs. the addressing overhead
Not really promising: scientists claim the "pictures are busy"
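An order-of-magnitude check of those constant counts in Python (one reading of the slide; the storage-time dependence is folded in as an assumed factor of ~3 to roughly reproduce the quoted figures):

    PIXELS_1M, PIXELS_IFACE = 1024 * 1024, 256 * 128
    CELLS, GAINS = 400, 3
    STORAGE_TIME_FACTOR = 3       # assumption, see lead-in

    print(PIXELS_1M * CELLS * GAINS * STORAGE_TIME_FACTOR / 1e9)     # ~3.8  ("3.5 Gconstants")
    print(PIXELS_IFACE * CELLS * GAINS * STORAGE_TIME_FACTOR / 1e9)  # ~0.12 ("0.1 Gconstants")
    print(PIXELS_IFACE * CELLS * STORAGE_TIME_FACTOR / 1e6)          # ~39   ("40 Mconstants",
                                                                     #  threshold in highest gain only)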

28 Time line
Backend:
- Ongoing: tests for getting multiple 1G links from a Virtex evaluation board
- Combined test soon, then follow up the progress
First module in July 2009, testing until July 2010:
- hardware test with ONE module
- data taking in 2010: ONE module with ONE link
Control:
- 2008: settle the concept
- July 2009: operate a single module from commercial products
- after the first-module test: design of the final fanout/supplies

29 Summary
The concepts written down for the AP-HPAD have been presented:
- data transfer to a backend system based on 1G technology, with a possible upgrade to 10G
- control electronics allowing products in commercial, HEP/X-ray and XFEL standards, and allowing adaptation to other accelerators
- ideas for synchronized control of the AP-HPAD as a whole
Questions:
- What can be done as a common task in a central XFEL DAQ? Advantage of common systems
- What would a common solution look like?

