Towards a better testability of the control system components

S. Deghaye, with help from M. Peryt and input from L. Burdzanowski, G. Kruk, W. Sliwinski, J. Lauener, J-C. Bau, A. Dworak, F. Hoguin, J. Wojniak, R. Gorbonosov, L. Cseppento

Introduction

What is a testbed?
- A testbed is a platform for conducting rigorous, transparent, and replicable testing of scientific theories, computational tools and new technologies.

Two cases in our context:
- Sandbox
  - No persistency except the environment
  - Release-less, cheap, throw-away tests or prototypes
- Full PRO-grade validation
  - No risk to break PRO, but very close to PRO
  - We don't mock any part under test
  - Users expect a PRO-level SLA

Who are we targeting?
- Developers/integrators of complete full-stack solutions, from hardware to GUI (CO3 recommendation, EDMS)

Docker could help for the sandbox.

Current situation

Integration tests in BE-CO:
- Control Testbed (aka CTB), set up several years ago
  - Main users are CMW & FESA
  - Dedicated timing, all available platforms, isolated environment on the GPN
- Timing Testbed
- NXCALS: FESA devices, old CCS test environment; uses the CMW test environment
- InCA had 2500 devices (on inca1-5), but they were unused => removed

Unit tests in BE-CO:
- The vast majority of components have unit tests (out of scope of this talk)

We provide very little to our clients => their feedback follows…

Common feedback from Equipment Groups

Low-level development cycle: Develop => Release => Deploy => Import => Configure => Test
- Operational timing ONLY
- Vertical testing not possible
- Long iteration time
- Impossible to be GPN-only
- Versioning too strict
- …
- Incompatible features (square peg into round hole)
- No DEV environment => need to go to PRO => need to release, with PRO timing only

Proposal

- Provide a complete infrastructure to facilitate the setup of testbed(s)
  - Not one CTB but as many independent testbeds as needed => no cross-talk
- Take into account the whole workflow, not just testing
  - Transition to PRO: a staging environment instead of a separate environment
- Developments to be integrated in the individual service plans
- Incremental availability depending on needs (e.g. TE-MPE)
- This infrastructure and its maintenance come at a cost
  - Difficult to estimate

Current situation (diagram): on the testbed side there is only the software (SVN, CBNG, test code, applications, unit tests) and the FECs (CTR hardware, device under test, CPU, FESA processes); everything else is shared with production: RBAC authentication server, CALS, InCA/LSA with the PRO LSA DB, the PRO version of the CMW Directory server, the PRO CCDB, NFS. Only local development is possible.

Proposal (diagram): same structure, but the testbed gets its own instances next to the PRO ones: a testbed LSA DB, a dev version of the CMW Directory server, a testbed CCDB. The whole testbed is GPN-only and runs on simulated timing.

Automation (diagram): the proposed testbed, wrapped in a context/container and driven by automation tools such as Jenkins, GUI testers, JUnit, etc.
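To make the automation layer concrete, here is a minimal sketch of the kind of JUnit test Jenkins could run against a testbed device. DeviceClient and the device name are placeholders, stubbed here so the example compiles on its own; in practice the test would go through the real middleware client.

```java
import org.junit.Assert;
import org.junit.Test;

// Illustrative only: the shape of an integration test the automation layer
// would run against a testbed FEC. DeviceClient stands in for the real
// middleware client; it is stubbed so the example is self-contained.
public class DeviceUnderTestIT {

    /** Stand-in for the real middleware client; replace with CMW/JAPC calls. */
    static final class DeviceClient implements AutoCloseable {
        private double current;
        static DeviceClient connect(String device) { return new DeviceClient(); }
        void set(String property, double value) { this.current = value; }
        double get(String property) { return current; }
        @Override public void close() {}
    }

    @Test
    public void settingIsReadBack() {
        try (DeviceClient device = DeviceClient.connect("TESTBED.DUT.1")) {
            device.set("Setting#current", 1.5);
            // With simulated timing the acquisition is deterministic, so the
            // assertion is replicable run after run.
            Assert.assertEquals(1.5, device.get("Acquisition#current"), 1e-9);
        }
    }
}
```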

Control System missing features

- Run-time infrastructure is incomplete: an integration environment does not exist for most components
- Dev release cycle for the low-level control system
- PRO upgrades (2nd phase)
- Timing simulation is a must: tests must be replicable

Status & plans per service/component

- BE-CO initiative CS-378 to collect work packages and track progress
- The initiative involves: CCS, CMW, FESA, FEC environment, InCA/LSA, CALS, Timing, HW commissioning sequencer, CBNG, PM server

CCS

Two environments for testing:
- INT (integration): empty DB with dictionary entries only, for testbeds that need to be set up from scratch
- TESTING: periodic, on-demand snapshot of PRO; no persistence
For further details see https://wikis.cern.ch/display/config/New+CCS+test+environment

Known issues:
- APEX not available
- Too many constraints from PRO, e.g. versioning (FESA/CCS)
- Synchronisation to/from PRO

Sandbox use case: would a CCDB in Docker be worthwhile? (see the sketch below)
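On the Docker question, a minimal sketch of what a throw-away CCDB sandbox could look like using Testcontainers. The image name is hypothetical (no such CCDB image exists today); only the Testcontainers calls themselves are real.

```java
import org.testcontainers.containers.GenericContainer;

// Illustrative only: a release-less, throw-away CCDB sandbox. The image
// "ccdb-sandbox:latest" is hypothetical; nothing persists after close().
public final class CcdbSandbox {

    public static void main(String[] args) {
        try (GenericContainer<?> ccdb =
                     new GenericContainer<>("ccdb-sandbox:latest") // hypothetical image
                             .withExposedPorts(1521)) {            // Oracle listener port
            ccdb.start();
            String url = "jdbc:oracle:thin:@" + ccdb.getHost() + ":"
                    + ccdb.getMappedPort(1521) + "/CCDB";
            System.out.println("Throw-away CCDB available at " + url);
            // Run prototype/sandbox tests here; the DB vanishes when we leave the block
        }
    }
}
```

This matches the sandbox use case from the introduction: no persistency except the environment, cheap to create and to throw away.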

CMW

RBAC authentication server & Directory server:
- Test instances available on the GPN
- Connected to the old ACCINT CCDB; connecting to the new INT CCS would require a few days of work
- Integration in INT to be improved: it requires several flags, which testbed users shouldn't have to care about (see the sketch below)
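To illustrate the flag problem, a sketch of the kind of per-JVM switches a testbed user ends up setting today. Every property name and endpoint below is invented for illustration; the real CMW/RBAC configuration keys differ.

```java
// Hypothetical example only: the property names and endpoints below are
// invented for illustration; the real CMW/RBAC configuration keys differ.
public final class TestbedCmwSetup {

    /** Points a JVM at the testbed CMW infrastructure instead of PRO. */
    public static void configureForTestbed() {
        // Directory server of the testbed (hypothetical key and host)
        System.setProperty("cmw.directory.url", "cmw-dir-test.cern.ch:5021");
        // RBAC authentication server of the testbed (hypothetical)
        System.setProperty("rbac.auth.url", "rbac-auth-test.cern.ch:8080");
        // Relax PRO-only checks that make no sense on a testbed (hypothetical)
        System.setProperty("cmw.env", "TEST");
    }

    public static void main(String[] args) {
        configureForTestbed();
        // ... create CMW clients as usual from here on
    }
}
```

The improvement proposed above is precisely that none of this should be the user's job: the testbed environment should resolve these endpoints itself.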

FESA

Dev release feature:
- Build chain to be added, plus integration with the Eclipse plug-in
- Allow re-release with build info, e.g. 1.2.3-20170923034923 (see the sketch below)
- Cost: 0.5 PM

Upgrade to PRO:
- Shouldn't have to recompile: use what was tested!
- CODEINE is a must, to avoid a FESA-specific solution
- Cost: 1.5 PM
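A minimal sketch of how such a dev-release version could be derived, following the convention shown above (base version plus a build timestamp). The helper itself is illustrative, not part of the FESA tooling.

```java
import java.time.ZoneOffset;
import java.time.ZonedDateTime;
import java.time.format.DateTimeFormatter;

// Illustrative helper, not part of the FESA tooling: builds a dev-release
// version such as "1.2.3-20170923034923" from a base version and the build time.
public final class DevVersion {

    private static final DateTimeFormatter BUILD_STAMP =
            DateTimeFormatter.ofPattern("yyyyMMddHHmmss");

    public static String of(String baseVersion) {
        String stamp = ZonedDateTime.now(ZoneOffset.UTC).format(BUILD_STAMP);
        return baseVersion + "-" + stamp;
    }

    public static void main(String[] args) {
        System.out.println(DevVersion.of("1.2.3")); // e.g. 1.2.3-20170923034923
    }
}
```

Because the timestamp makes every build unique, the same base version can be re-released as often as needed during development without ever colliding with a PRO release.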

FEC environment

- Generation scripts to be re-written
  - First step towards the eradication of transfer.ref/new_dtab
  - Quite risky, as it touches 30-year-old code
  - Huge impact (2000+ FECs)
- As a lightweight first step, modify the existing scripts

InCA/LSA

- LSA testbed DB to be put in place
  - One DB for all testbeds; more might be needed later if there is too much cross-talk
  - The ongoing LiquiBase effort should help
- LSA could be a single instance; InCA must be one instance per testbed (Acquisition-ready event)
  - Cost: 1 PM
  - Good opportunity to clean up the configuration
- Automatic insertion upon FESA class dev-release is not obvious (details to be checked)
- Support effort hard to estimate
  - Lack of tools
  - Is 1 testbed ≡ 1 accelerator when it comes to support?

CALS

- Hypothesis: NXCALS only
- Several environments already available, but a PRO-level instance is required
  - Hardware or OpenStack?
  - The Hadoop development environment provided by IT is PRO-grade (limited space)
- Data policy to be discussed: cannot keep everything forever
- Connect to the CCDB INT
- GPN without TN visibility
- Cost: <0.5 PM

Timing

Local timing:
- Not a problem
- LTIM PRO versions must be available as part of the infrastructure

Central timing:
- The operational timing is not sufficient
  - Prevents reproducibility: no control over what is played
  - No possibility to validate corner cases
- Simulation/testbed must take the hardware into account: it cannot be FESA event simulation (lack of interrupts and pulses)
- How to simulate the GMT?

Timing simulation (courtesy J-C. Bau)

Simulated timing requires hardware:
- CTSYN: can it work without GPS?
- MTT to drive the GMT cable
- CTR for the distribution to the applications (TIDE)

Keep it simple to start with, but…
- An XML file is required to configure the MTT (complex and support-intensive)
- A basic tool is required to generate the file, adding a few checks; a GUI is not necessary (see the sketch below)
- Complexity can be increased later, if needed
- Cost: 3-4 PM for the timing team
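A minimal sketch of what that generation tool could look like. The XML element and attribute names, and the event names, are invented for illustration (the real MTT format is defined by the timing team); the point is generating the file from a checked description instead of hand-writing it.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.List;

// Illustrative only: generates a hypothetical MTT configuration file with a
// couple of sanity checks, instead of asking users to hand-write the XML.
public final class MttConfigGenerator {

    /** A simulated timing event: name plus repetition period in milliseconds. */
    record SimEvent(String name, int periodMs) {}

    static String toXml(String machine, List<SimEvent> events) {
        StringBuilder xml = new StringBuilder();
        xml.append("<mtt machine=\"").append(machine).append("\">\n");
        for (SimEvent e : events) {
            // The kind of check a hand-written file would not get
            if (e.periodMs() <= 0) {
                throw new IllegalArgumentException("Non-positive period for " + e.name());
            }
            xml.append("  <event name=\"").append(e.name())
               .append("\" periodMs=\"").append(e.periodMs()).append("\"/>\n");
        }
        xml.append("</mtt>\n");
        return xml.toString();
    }

    public static void main(String[] args) throws IOException {
        // Hypothetical machine and event names, for illustration only
        String xml = toXml("LHC.SIM",
                List.of(new SimEvent("HX.INJ", 1200), new SimEvent("HX.START-CYCLE", 1200)));
        Files.writeString(Path.of("mtt-sim.xml"), xml);
    }
}
```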

HWC sequencer

- Needs the INT LSA DB
- Communication with acc-testing
  - Deploy a test instance of the system server (MPE)
  - Framework to coordinate and schedule the tests, with automatic analysis (10 years old)

CBNG

- Currently either a DEV release (e.g. 1.2.3-20170831142300) or a PRO release, with the version taken from product.xml (see the sketch below)
- Artifactory contains 3rd-party, DEV and PRO releases
- Deploy repository: ~pcrops
- Services on the GPN use the deploy tool (copera user) with --dev to access the dev deploy repository
- Future: promote a DEV release to PRO
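A minimal sketch of how the two release flavours can be told apart from the version formats quoted above (a DEV release carries a 14-digit build timestamp suffix); the class and regex are illustrative only.

```java
import java.util.regex.Pattern;

// Illustrative only: distinguishes DEV releases (timestamp-suffixed)
// from PRO releases, based on the version formats quoted in the slide.
public final class ReleaseKind {

    // e.g. 1.2.3-20170831142300 -> DEV; 1.2.3 -> PRO
    private static final Pattern DEV_VERSION =
            Pattern.compile("\\d+\\.\\d+\\.\\d+-\\d{14}");

    public static boolean isDev(String version) {
        return DEV_VERSION.matcher(version).matches();
    }

    public static void main(String[] args) {
        System.out.println(isDev("1.2.3-20170831142300")); // true
        System.out.println(isDev("1.2.3"));                // false
    }
}
```

Promotion from DEV to PRO would then amount to stripping the timestamp and republishing the already-built artefact, consistent with the "use what was tested" principle from the FESA slide.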

PM server

- FESA should send its data to pm-test when running in the INT environment
- To be studied: how FESA can redirect automatically, without changing the FESA class design (see the sketch below)
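One possible shape of the automatic redirection, sketched under the assumption that the environment can be detected at start-up; the variable name and host names are hypothetical.

```java
// Hypothetical sketch: resolves the Post-Mortem server endpoint from the
// runtime environment so that FESA classes need no code change.
public final class PmEndpointResolver {

    /** Returns the PM server to use, based on a hypothetical CS_ENV variable. */
    public static String resolve() {
        String env = System.getenv().getOrDefault("CS_ENV", "PRO");
        return "INT".equals(env)
                ? "pm-test.cern.ch"   // testbed instance (hypothetical host)
                : "pm-pro.cern.ch";   // production instance (hypothetical host)
    }

    public static void main(String[] args) {
        System.out.println("Sending PM data to " + resolve());
    }
}
```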

Bamboo vs Jenkins

- The current testbed uses Bamboo
- A few teams have already moved to Jenkins
  - Jenkins appears to be more flexible and easier to script
  - Jenkins is the market leader, and a product that is well known by students
- We need to decide what we recommend
  - Phasing out Bamboo comes with a cost, to be weighed against Bamboo's total cost of ownership

Network

Icing on the cake: add a separate network domain to further protect the testbed environment from the risks incurred from the GPN.

SLA? What is expected?

Proposal:
- Remain within working hours: no out-of-hours service
- The testbed must be reasonably stable: it is intended to be PRO-grade, not test-grade
- Aim for a reaction time of a few hours; at least look at the problem within a day

Next steps: milestones (MPE, LS2)

Milestone plan

Phase 1:
- Separate environment on the GPN
- Versioning less strict than in PRO
- End of 2017; InCA/LSA end of 2018

Phase 2:
- INT environment part of a pipeline => promote from INT to PRO

Phase 3:
- Simulated timing
- Hardware aspect (availability, redesign, etc.) to be considered
- Could it be part of White Rabbit for timing?