Slide 1: Test Systems Software / FEE Controls
Peter Chochula
Slide 2: PTS Status
- PTS v2.0
- Analysis and DBMS decoupled from the system (now easy to upgrade)
- System configuration via ASCII files
- Possibility to dump settings to new config files
- Loadable Maskbit and Testbit matrices
- Fully integrated bus
- Updated panels
- ... and bugs fixed
Slide 3: PTS Version 2.0 – Main CP
- Help available (to be extended)
- DBMS integration
- Status overview
- Simplified configuration
Slide 4: PTS 2.0 – JTAG Integration
Supported controllers:
- Corelis MVME 1149.1, with or without external multiplexer
- Corelis 100f (ISA)
- JTAG Technologies 3710 PCI (testbeams)
- KEJTAG v2.0
- Automatic controller test
Slide 5: PTS 2.0 – Supported Testbeam Setup
(diagram: reference planes, tested object, scintillators)
Slide 6: PTS 2.0 – DAQ Software
- 3 planes
- 1-10 chips per plane
- Automatic data-integrity checks
Slide 7: PTS 2.0 – New Debugging Tool: Data Analyser
Plugins:
- Run conditions
- Buffered beam profile
- Data frame decoder
- Event display
- Single-event processing
Slide 8: PTS 2.0 – A1 and BUS Manual Controls Integrated with Pilot MCM (Beta)
- Status of MCM JTAG configuration
- MCM manual control – JTAG configuration
- ... the Analog Pilot is not yet fully integrated
Slide 9: PTS 2.0 – Threshold Scans: New Data Format
- New (flexible) data format
- The ROOT interface recognizes the data format
Slide 10: PTS 2.0 – DAC Sweep
- Ready for any BUS configuration
- Uses MB DACs or an external device
- Integration of an external device is easy (only 1 VI)
Slide 11: LabVIEW Upgrade to v6?
- If yes, then all institutes must upgrade at the same time
- CERN can upgrade only as the last one
Slide 12: SPD Front-End and Readout Electronics Setup & Configuration
- Based on a talk given at the ALICE TB, January 2003
- Please see also the related document on the ALICE DCS web (Documents -> FERO)
Slide 13: ALICE Online Software Hierarchy
(diagram: the ECS sits above DCS, DAQ/RC, TRG and HLT; the DCS branches into per-detector trees, e.g. TPC with FERO, Gas, LV and HV nodes and SPD with FERO, LV and HV nodes; DAQ/RC and TRG likewise branch into TPC and SPD)
(Source: S. Vascotto, TB presentation, October 2002)
Slide 14: Partitioning of ALICE Online Systems
(diagram: each partition, e.g. "Partition A", groups DAQ/RC, DCS and TRG under a PCA; the ECA stands above the partitions)
(Source: S. Vascotto, TB presentation, October 2002)
Slide 15: Example: The Design of the SPD
(diagram: half-stave components – Pilot MCM, sensor, readout chips, bus)
Slide 16: Summary: ALICE FERO Architectures
- Four architecture classes: FERO Class A, B, C and D
- There are two options to configure the FERO: DDL-based (as in Class A) and non-DDL (Ethernet, etc.)
- Class A: the DDL is used to configure the FERO; monitoring is based on a different technology
- Where the DDL is not involved in configuration, configuration and monitoring share the access path to the FERO
Slide 17: Controls Technologies
- The DCS interacts with devices via well-defined interfaces
- Hardware details are usually transparent to the upper layers (examples: CAEN, ISEG)
- The preferred communication technologies are OPC and DIM
- Layering: device hardware -> process management (PLC, ...) -> communications (OPC, DIM) -> supervision (SCADA) -> customization, FSM
Slide 18: Concept of the Front-End Device (FED)
(diagram: the FED combines a CPU, a DIM server and the FERO hardware; the DCS connects as a DIM client and also reaches a PLC and the LV power supply over Profibus, JTAG, etc. as an additional monitoring path; on the DAQ side, a workstation (LDC) talks to the FED over the DDL; the PCA coordinates DAQ/RC and DCS)
Slide 19: SPD – FED Interface to DCS
(diagram: the SPD FED couples the half-stave controls and JTAG through a router; DAQ data leave over the DDL while the JTAG return path goes to memory on a dedicated CPU/workstation; the workstation runs the time-critical private software and publishes VR control, VR status, currents, voltages and temperatures to the DCS (PVSS) over a standard DIM interface)
Slide 20: DIM Protocol
- Service-based protocol
- A client can subscribe to a service and define the update policy
- Easy to implement on different platforms
- DIM is a custom protocol
(diagram: the server registers its services with the name server; the client requests service info, subscribes to a service, receives service data and sends commands back to the server)
(Source: C. Gaspar)
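The register/subscribe flow on the slide can be paraphrased in a few lines of code. This is an illustrative sketch of the service pattern only, not the real DIM C/C++ API; the class and service names are our invention.

```python
class NameServer:
    """Registry mapping service names to the server that publishes them."""
    def __init__(self):
        self.services = {}

    def register(self, name, server):
        self.services[name] = server

    def lookup(self, name):
        return self.services[name]


class Server:
    """Publishes named services and pushes updates to subscribers."""
    def __init__(self, name_server):
        self.name_server = name_server
        self.data = {}
        self.subscribers = {}

    def add_service(self, name, initial_value):
        self.data[name] = initial_value
        self.subscribers[name] = []
        self.name_server.register(name, self)

    def update(self, name, value):
        self.data[name] = value
        for callback in self.subscribers[name]:
            callback(value)            # push to every subscribed client


class Client:
    """Subscribes to a service via the name server (on-change policy)."""
    def __init__(self, name_server):
        self.name_server = name_server
        self.last = None

    def subscribe(self, name):
        server = self.name_server.lookup(name)
        server.subscribers[name].append(self.receive)
        self.receive(server.data[name])    # initial value on subscription

    def receive(self, value):
        self.last = value


ns = NameServer()
srv = Server(ns)
srv.add_service("SPD/TEMPERATURE", 24.0)   # hypothetical service name

cli = Client(ns)
cli.subscribe("SPD/TEMPERATURE")           # cli.last is now 24.0
srv.update("SPD/TEMPERATURE", 25.5)        # pushed: cli.last is now 25.5
```

The point of the name server is that clients never need to know where a server runs; they look services up by name, which is what makes the protocol easy to port across platforms.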
Slide 21: Controls Hierarchy is Based on Functionality
- CU – control unit; DU – device unit
- Commands flow down the hierarchy, status flows up
- Configuration, monitoring and trigger-status CUs and DUs sit between the DCS and the FED/FERO hardware; the PCA stands above DCS, DAQ/RC and the Trigger
- The definition and implementation of device units is the detector's responsibility
- See C. Gaspar: "Hierarchical Controls Configuration & Operation", published as a CERN JCOP framework document: http://clara.home.cern.ch/clara/fw/FSMConfig.pdf
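The CU/DU idea above can be sketched minimally: a control unit fans commands out to its children and summarizes their states, while device units talk to the hardware. This is a toy illustration under our own assumptions, not the JCOP framework or SMI++ API; unit names, commands and states are invented.

```python
class DeviceUnit:
    """Leaf of the hierarchy: drives one piece of hardware."""
    def __init__(self, name):
        self.name = name
        self.state = "OFF"

    def command(self, cmd):
        # Trivial device behaviour for the sketch.
        if cmd == "GO_READY":
            self.state = "READY"
        elif cmd == "GO_OFF":
            self.state = "OFF"


class ControlUnit:
    """Inner node: commands flow down, a summarized state flows up."""
    def __init__(self, name, children):
        self.name = name
        self.children = children

    def command(self, cmd):
        for child in self.children:
            child.command(cmd)

    @property
    def state(self):
        states = {c.state for c in self.children}
        return states.pop() if len(states) == 1 else "MIXED"


spd = ControlUnit("SPD", [DeviceUnit("FERO_configuration"),
                          DeviceUnit("FERO_monitoring")])
spd.command("GO_READY")    # spd.state is now "READY"
```

If one child later falls out of step, the parent reports a mixed state instead of a clean one, which is exactly the information an operator (or the PCA) needs to see propagate upward.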
Slide 22: Time Flow of FERO Configuration
(diagram: numbered time flow (1-3) between the PCA, DAQ/RC, DCS, the FERO CPU and the FERO hardware)
- The definition and implementation of the FSM is the detector's responsibility
Slide 23: SPD Readout Layout
- One router services 6 half-staves
- The SPD contains 20 routers
(diagram: the DCS reaches the router crates via a PCI-MXI bridge, MXI-2 and MXI-VME; the DAQ reads out the routers directly)
Slide 24: Controlling the VME Crates – MXI Daisy-Chain
- Only one PCI controller needed
- Programming is easy: the chain is transparent to software
- Performance-related questions remain
Slide 25: Controlling the VME Crates – 2 PCI-MXI Bridges in One PC
- Two PCI controllers needed
- Programming still easy (lookup table?)
- Performance: we could gain by using parallel processes
Slide 26: Controlling the VME Crates – 2 PCI-MXI Bridges in Two PCs
- Two PCI controllers and two computers needed
- Programming more complicated at the upper level
- Performance: probably the best
Slide 27: Tasks Running on the Control Workstation
- PVSS ("slow"), alongside DIM servers and local monitoring (fast, time-critical tasks)
- Can a single machine handle this load?
- Do we need to separate PVSS from local control?
- Do we need to separate the two sides of the SPD?
- Do we even need 3 computers?
- The answer will be obtained from prototypes
Slide 28: SPD Needs Additional Processing of Configuration Data
- We need to develop a procedure for fast detection of the bus status
- Configuration data must be correctly formatted
Slide 29: Internal Chip Problems Can Affect the Configuration Strategy
(figure not recoverable)
Slide 30: Internal Chip Problems Can Affect the Configuration Strategy (cont.)
- We need to develop a mechanism for problem recovery
- This should not be implemented as a patch in the configuration routine!
- Problems should be described in a "recipe" which is loaded from the configuration database together with the configuration data
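One way to picture the "recipe" idea: per-chip workarounds stored in the database alongside the configuration data and applied by a generic loader, instead of being hard-coded in the configuration routine. All field names, actions and the chip identifier below are illustrative assumptions, not a defined SPD format.

```python
# Hypothetical recipe record, as it might be fetched from the
# configuration database next to the configuration data itself.
recipe = {
    "chip_id": "A1_r03_hs2_c7",          # invented identifier
    "problems": ["stuck_lsb"],
    "workarounds": [
        {"action": "mask_bits", "mask": 0x1},
        {"action": "retry_load", "max_retries": 3},
    ],
}


def apply_recipe(config_word, recipe):
    """Apply the bit-masking workarounds from a recipe to one config word."""
    for step in recipe["workarounds"]:
        if step["action"] == "mask_bits":
            config_word &= ~step["mask"]
    return config_word


patched = apply_recipe(0b1011, recipe)   # LSB masked out -> 0b1010
```

Because the loader only interprets recipe records, a newly discovered chip problem becomes a database entry rather than a code change, which is the point the slide is making.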
Slide 31: Detector Calibration – Standard Approach
- ONLINE (under the PCA, coordinating DCS and DAQ/RC): load thresholds and test patterns, run the DAQ, log the data
- OFFLINE: analyze the data, prepare the configuration data
Slide 32: Detector Calibration – Standard Approach (cont.)
- Synchronization between DAQ and DCS via the PCA will add some overhead
- Conservative estimate: ~7680 synchronization cycles would add about 2 hours (or even more) of dead time
- We need a local calibration procedure
- The SPD will be put into an "ignored" state during the calibration
- We need to define the FSM and the DCS recipe
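A back-of-envelope check of the dead-time figure quoted above. The per-cycle cost of roughly one second for a PCA round trip is our assumption, chosen to show how the slide's ~2 hours arises from ~7680 cycles; the slide itself only gives the totals.

```python
cycles = 7680                    # synchronization cycles (from the slide)
overhead_per_cycle_s = 1.0       # assumed DAQ/DCS/PCA round-trip cost

dead_time_h = cycles * overhead_per_cycle_s / 3600.0
# dead_time_h ~= 2.1 hours, consistent with the "about 2 hours" estimate
```

Even a modest increase of the per-cycle cost to 2 s would double the dead time, which is why a local calibration procedure that bypasses the PCA handshake is attractive.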
Slide 33: ...But...
- This was still not the bad news
Slide 34: Software/Hardware Overhead
- Loading a single chip takes ~300 ms
- More than 99% of this time is communication overhead
- This time seems negligible ... but ...
- The ALICE1 chip is really big and complicated
- Remember: when we started, we needed some 2 hours to scan a single chip; several tricks reduced this to some 5 minutes
- The time needed to scan a bus is still ~45 minutes (or 15 with less statistics) and cannot be reduced (the amount of data is larger by an order of magnitude)
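Scaling the per-chip numbers above to the full detector puts the overhead in perspective. The chip count assumes 20 routers x 6 half-staves x 10 readout chips per half-stave; the per-half-stave chip count is our assumption, not stated on this slide.

```python
chips_total = 20 * 6 * 10        # routers x half-staves x chips (assumed)
t_chip_s = 0.300                 # ~300 ms to load one chip (from the slide)
comm_fraction = 0.99             # >99% is communication overhead

load_time_min = chips_total * t_chip_s / 60.0   # ~6 minutes for a full load
useful_time_s = chips_total * t_chip_s * (1.0 - comm_fraction)
# only a few seconds of that is actual data transfer
```

So a plain configuration load is cheap; it is the repeated scans, not single loads, that dominate the calibration time discussed on the next slide.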
Slide 35: Detector Calibration
- We cannot simply implement the present procedures
- Estimated time for a scan is ~30 hours, with 8 hours of JTAG activity
- Ways to reduce the time needed:
  - Run scans in parallel ... but only one router can be addressed at a time
  - Use the built-in macro option of the KE-JTAG controller
  - Implement part of the scanning procedures in the router's hardware
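A rough model of how far parallel scans could go, given the constraint above. It assumes the ~30 h scan splits into the 8 h of JTAG activity quoted on the slide, which stays serial because only one router can be addressed at a time, plus ~22 h of remaining work that parallelizes freely; the parallelism model is our assumption, not the slide's.

```python
total_h, jtag_h = 30.0, 8.0      # figures from the slide
other_h = total_h - jtag_h       # part we assume can run in parallel

def scan_time(parallel_streams):
    # JTAG addressing stays serial (one router at a time);
    # the remaining work is divided over the parallel streams.
    return jtag_h + other_h / parallel_streams

# scan_time(1) -> 30.0 h; scan_time(20) -> ~9.1 h
```

Under this model the serial JTAG part becomes the hard floor at ~8 hours, which is why the slide's other two options, controller macros and moving work into the router hardware, attack exactly that part.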
Slide 36: SEU Monitoring
- Standard approach: write the configuration data into the ALICE1 chips, then compare the output with the previously written configuration
- ...But...
- The analysis routines must understand how the configuration is written (bus configuration)
- Part of the data will be lost:
  - due to the nature of the ALICE1 chips (stuck LSB)
  - due to the tricks used to load chips with internal problems
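The write-then-compare check described above can be sketched as a masked comparison: bits known to be unreadable (such as the stuck LSB mentioned on the slide) are excluded so they do not show up as false SEU candidates. Word width, mask value and data layout here are illustrative assumptions.

```python
def seu_check(written, readback, ignore_mask=0x1):
    """Return indices of words whose unmasked bits differ (SEU candidates)."""
    flips = []
    for i, (w, r) in enumerate(zip(written, readback)):
        if (w ^ r) & ~ignore_mask:      # XOR finds flipped bits; mask LSB
            flips.append(i)
    return flips


written  = [0b1010, 0b1100, 0b0111]
readback = [0b1011, 0b0100, 0b0111]   # word 0 differs only in the masked LSB
flips = seu_check(written, readback)  # -> [1]: a real bit flip in word 1
```

The mask is also where the "tricks used to load chips with internal problems" would have to be declared, so the comparison stays consistent with however the configuration was actually written.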
Slide 37: DCS Architecture: Data Flow (Configuration & Logging)
(diagram: the configuration DB holds the PVSS configuration, DCS recipes, FERO configuration and device configuration; data flow through PVSS to the subsystems and hardware, with logging to the archive and to the conditions DB)
Slide 38: Required Tasks
- Definition of the configuration data
- Definition of the monitoring limits (recipes)
- Definition of the data subset written to the conditions DB
- Development of offline analysis tools
Slide 39: A Few Recommendations
- Base the development on reverse engineering of the PTS
- Use Windows XP and, if possible, Visual Studio .NET as the development platform (at least for final product testing)
- Use MySQL for database prototyping
- Restrict database programming to standard SQL: we will probably change the underlying database for the final system (ORACLE?)
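The "standard SQL only" recommendation is what keeps a prototype portable from MySQL to ORACLE. A minimal sketch, using Python's stdlib sqlite3 driver purely as a stand-in backend; the table and column names are illustrative, not an SPD schema.

```python
import sqlite3

conn = sqlite3.connect(":memory:")   # throwaway in-memory database
cur = conn.cursor()

# Plain ANSI SQL only: portable types, no engine-specific syntax.
cur.execute("CREATE TABLE fero_config (chip_id VARCHAR(32), dac_value INTEGER)")
cur.execute("INSERT INTO fero_config VALUES (?, ?)", ("A1_c0", 128))
cur.execute("SELECT dac_value FROM fero_config WHERE chip_id = ?", ("A1_c0",))
row = cur.fetchone()                 # -> (128,)
```

Only the driver and its parameter-placeholder style would change when swapping the backend; the SQL statements themselves would move unmodified, which is the whole point of the restriction.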
Slide 40: Conclusions
- PTS 2.0 is available
- FERO configuration & monitoring still needs a lot of work