Towards a better testability of the control system components


1 Towards a better testability of the control system components
S. Deghaye with help from M. Peryt and input from L. Burdzanowski, G. Kruk, W. Sliwinski, J. Lauener, J-C. Bau, A. Dworak, F. Hoguin, J. Wojniak, R. Gorbonosov, L. Cseppento

2 Introduction
- What is a testbed? "A testbed is a platform for conducting rigorous, transparent, and replicable testing of scientific theories, computational tools, and new technologies."
- Two cases in our context:
  - Sandbox: no persistency except the environment; release-less, cheap, throw-away tests or prototypes
  - Full PRO-grade validation: no risk of breaking PRO, but very close to PRO; we don't mock any part under test; users expect a PRO-level SLA
- Who are we targeting? Developers/integrators of complete full-stack solutions, from hardware to GUI
- CO3 recommendation (EDMS)
- Docker could help for the sandbox
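The Docker idea for the sandbox case could be as simple as a throw-away compose file. This is only a sketch of the "no persistency except environment" idea; the images and service names are assumptions for illustration, not an existing BE-CO setup:

```yaml
# Hypothetical sketch of a "sandbox" testbed: throw-away containers, no
# persistent volumes, so tearing the stack down discards everything.
# Images and service names are illustrative placeholders.
services:
  sandbox-db:
    image: postgres:15            # stand-in for a configuration database
    environment:
      POSTGRES_PASSWORD: sandbox
    tmpfs:
      - /var/lib/postgresql/data  # data lives in RAM only: no persistency
  prototype-server:
    image: alpine:3.19            # placeholder for the prototype under trial
    command: ["sleep", "infinity"]
    depends_on:
      - sandbox-db
```

`docker compose up` gives a release-less environment for cheap experiments; `docker compose down` leaves no trace, matching the sandbox requirements above.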

3 Current situation
- Integration tests in BE-CO
  - Control Testbed (aka CTB), set up several years ago; main users are CMW & FESA
  - Dedicated timing, all available platforms, isolated environment on the GPN
  - Timing testbed
  - NXCALS FESA devices, old CCS test environment; uses the CMW test environment
  - InCA had 2500 devices on inca1-5, but they were unused => removed
- Unit tests in BE-CO
  - The vast majority of components have unit tests (out of scope of this talk)
- We provide very little to our clients => their feedback is…

4 Common feedback from Equipment Groups
- Low-level development cycle: Develop -> Release -> Deploy -> Import -> Configure -> Test
- Operational timing ONLY
- Vertical testing not possible
- Long iteration time
- Impossible to be GPN-only
- Versioning too strict
- Incompatible features (square peg into a round hole)
- No DEV environment => need to go to PRO => need to release, with PRO timing only

5 Proposal
- Provide a complete infrastructure to facilitate the setup of testbed(s)
  - Not one CTB, but as many independent testbeds as needed => no cross-talk
- Take into account the whole workflow, not just testing
  - Transition to PRO: a staging environment, instead of a separate environment
- Developments to be integrated in individual service plans
- Incremental availability depending on needs (e.g. TE-MPE)
- This infrastructure and its maintenance come at a cost
  - Difficult to estimate

6 Proposal

7 Current situation
(architecture diagram: local development feeds SVN; CBNG builds the test code and application unit tests; these use the PRO services (RBAC auth server, CALS, InCA/LSA with the PRO LSA DB, PRO CMW directory server, PRO CCDB); on the hardware side, FECs with CTR hardware, the device under test, a CPU and a FESA process, served via NFS)

8 Proposal
(same diagram, now with dedicated testbed services next to the PRO ones: a dev-version LSA DB and CCDB alongside the PRO instances, a dev version of the CMW directory server, everything GPN-only, and simulated timing driving the FEC hardware)

9 Jenkins, GUI testers, JUnit, etc.
(same diagram as slide 8, wrapped in a context/container, with an automation layer (Jenkins, GUI testers, JUnit, etc.) driving the test code)

10 Control system missing features
- Run-time infrastructure is incomplete
- Integration environment does not exist for most components
- Dev release cycle for the low-level control system
- PRO upgrades (2nd phase)
- Timing simulation is a must
- Tests must be replicable

11 Status & plans per service/component
- BE-CO initiative CS-378 to collect work packages and track progress
- The initiative involves: CCS, CMW, FESA, FEC environment, InCA/LSA, CALS, timing, HW commissioning sequencer, CBNG, PM server

12 CCS
- Two environments for testing:
  - INT (integration): empty DB with dictionary entries only; for testbeds that need to be set up from scratch
  - TESTING: periodic, on-demand snapshot of PRO; no persistence
- For further details, see: …
- Known issues:
  - APEX not available
  - Too many constraints from PRO (e.g. versioning) (FESA/CCS)
  - Synchronisation to/from PRO
- Sandbox use case: would a CCDB in Docker be worthwhile?

13 CMW
- RBAC authentication server & directory server
  - Test instances available on the GPN
  - Connected to the old ACCINT CCDB; connecting to the new INT CCS would require a few days' work
- Integration in INT to be improved
  - Requires several flags (testbed users shouldn't have to care)
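Hiding those flags could be a thin launcher that expands a single environment selector into the individual settings. A minimal sketch, assuming invented names: the real CMW/testbed property names are not given in the slides, so everything below is a placeholder for the pattern, not the actual API:

```python
import os

# Hypothetical flag names: the real CMW/testbed properties are not shown in
# the slides, so these names and hosts are placeholders for illustration only.
ENV_PRESETS = {
    "PRO": {
        "cmw.directory.url": "cmw-dir-pro.example.cern.ch",
        "rbac.auth.url": "rbac-pro.example.cern.ch",
    },
    "INT": {
        "cmw.directory.url": "cmw-dir-int.example.cern.ch",
        "rbac.auth.url": "rbac-int.example.cern.ch",
    },
}

def cmw_flags(selector=None):
    """Expand one environment selector into the individual -D flags,
    so testbed users set a single variable instead of several flags."""
    env = (selector or os.environ.get("TESTBED_ENV", "PRO")).upper()
    preset = ENV_PRESETS[env]
    return [f"-D{key}={value}" for key, value in sorted(preset.items())]
```

A launcher would then prepend `cmw_flags()` to the JVM command line; the testbed user only exports one variable (here called `TESTBED_ENV`).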

14 FESA
- Dev release feature
  - Build chain to be added + integration with the Eclipse plug-in
  - Allow re-release with build info
  - Cost: 0.5 PM
- Upgrade to PRO
  - Shouldn't have to recompile: use what was tested!
  - CODEINE is a must to avoid a FESA-specific solution
  - Cost: 1.5 PM

15 FEC environment
- Generation scripts to be re-written
  - First step towards the eradication of transfer.ref/new_dtab
  - Quite risky, as it touches 30-year-old code
  - Huge impact (2000+ FECs)
- As a lightweight first step, modify the existing scripts

16 InCA/LSA
- LSA testbed DB to be put in place
  - One DB for all testbeds, but more might be needed later if there is too much cross-talk
  - The ongoing Liquibase effort should help
- LSA could be a single instance; InCA must be one instance per testbed (acquisition-ready event)
  - Cost: 1 PM
- Good opportunity to clean up the configuration
- Automatic insertion upon FESA class dev-release is not obvious (details to be checked)
- Support effort hard to estimate
  - Lack of tools
  - Is 1 testbed ≡ 1 accelerator when it comes to support?

17 CALS
- Hypothesis: NXCALS only
- Several environments already available, but a PRO-level instance is required
  - Hardware or OpenStack?
  - The Hadoop development environment provided by IT is PRO-grade (limited space)
- Data policy to be discussed: cannot keep everything forever
- Connect to CCDB INT
- GPN without TN visibility
- Cost: <0.5 PM

18 Timing
- Local timing: not a problem
  - LTIM PRO versions must be available as part of the infrastructure
- Central timing
  - Operational timing is not sufficient:
    - Prevents reproducibility
    - No control over what is played
    - No possibility to validate corner cases
  - Simulation/testbed must take the hardware into account
    - Cannot be FESA event simulation (lack of interrupts and pulses)
  - How to simulate the GMT?

19 Timing simulation (courtesy J-C. Bau)
- Simulated timing requires hardware:
  - CTSYN: can it work without GPS?
  - MTT to drive the GMT cable
  - CTR for the distribution to applications (TIDE)
- Keep it simple to start with, but…
  - An XML file is required to configure the MTT (complex and support-intensive)
  - A basic tool is required to generate the file, adding a few checks (GUI not necessary)
  - Complexity can be increased later, if needed
- Cost: 3-4 PM for the timing team

20 HWC sequencer
- Needs the INT LSA DB
- Communication with acc-testing
  - Framework to coordinate and schedule the tests, with automatic analysis (10 years old)
- Deploy a test instance of the system server (MPE)

21 CBNG
- Currently either a DEV (e.g. …) or PRO release, with the version taken from product.xml
- Artifactory contains 3rd-party, DEV, and PRO releases
- Deploy repository: ~pcrops
- Services on the GPN
- Uses the deploy tool (copera user) with --dev to access the dev deploy repository
- Future: promote a DEV release to PRO

22 PM server
- FESA should send data to pm-test when in the INT environment
- To be studied: how FESA can redirect automatically, without changing the FESA class design
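The automatic redirection amounts to resolving the PM endpoint from the runtime environment rather than from the class design. A minimal sketch of that pattern, assuming invented names: the slides only mention "pm-test", so the PRO hostname and the environment variable below are hypothetical:

```python
import os

# Hypothetical names: the slides only mention "pm-test" for INT; the PRO
# hostname and the FESA_RUNTIME_ENV variable are invented for illustration.
PM_ENDPOINTS = {
    "PRO": "pmserver.example.cern.ch",
    "INT": "pm-test.example.cern.ch",
}

def pm_endpoint():
    """Pick the PM server from the runtime environment, so the FESA class
    itself never has to know which environment it is deployed in."""
    env = os.environ.get("FESA_RUNTIME_ENV", "PRO").upper()
    return PM_ENDPOINTS.get(env, PM_ENDPOINTS["PRO"])
```

The same lookup could live in the FESA framework's transport layer, keeping class designs untouched, which is exactly the open question on this slide.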

23 Bamboo vs Jenkins
- The current testbed uses Bamboo
- A few teams have already moved to Jenkins
  - Jenkins appears to be more flexible and easier to script
  - Jenkins is the market leader, and a product well known by students
- Need to decide what we recommend
  - Phasing out Bamboo comes with a cost
  - Bamboo's total cost of ownership

24 Network
- Icing on the cake: add a separate network domain to further protect the testbed environment from the risks incurred from the GPN

25 SLA?
- What is expected?
- Proposal:
  - Remain within working hours: no out-of-hours service
  - The testbed must be reasonably stable: intended to be PRO-grade, not test-grade
  - Aim for a reaction time of a few hours; at least look at the problem within a day
- Next steps: milestones (MPE, LS2)

26 Milestone plan

27 Milestone plan
- Phase 1: separate environment on the GPN
  - Versioning less strict than in PRO
  - End of 2017; InCA/LSA end of 2018
- Phase 2: INT environment part of a pipeline => promote from INT to PRO
- Phase 3: simulated timing
  - Hardware aspects (availability, redesign, etc.) to be considered
  - Could be part of White Rabbit for timing?

