Conformance Test Experiments for Distributed Real-Time Systems
Rachel Cardell-Oliver
Complex Systems Group, Department of Computer Science & Software Engineering
The University of Western Australia
July 2002
Talk Overview
Research Goal: to build correct distributed real-time systems
1. Distributed Real-Time Systems
2. Correctness: Formal Methods & Testing
3. Experiments: A New Test Method
1. Distributed Real-Time Systems
Real Time Reactions
Distributed
System Characteristics
React or interact with their environment
Must respond to events within fixed time
Distributed over two or more processors
Fixed network topology
Each processor runs a set of tasks
Processors embedded in other systems
Built with limited HW & SW resources
Testing Issues for these Systems
Many sources of non-determinism:
two or more processors with independent clocks
a set of tasks scheduled on each processor
independent but concurrent subsystems
inputs from an uncontrolled environment, e.g. people
Limited resources affect test control, e.g. speed
Our goal: to develop robust test specification and execution methods
2. Correctness: Formal Methods & Testing
Goal: Building Correct Systems
Does the implementation behave like the design (the intended behaviour)?
Software Tests are experiments designed to answer the question "does this implementation behave as intended?"
Defect tests are tests which try to force the implementation NOT to behave as intended
Our focus is to specify and execute robust defect tests
Related Work on Test Case Generation
Chow, TSE 1978: deterministic Mealy FSMs
Clarke & Lee 1997: timed requirements graphs
Nielsen, TACAS 2000: event recording automata
Cardell-Oliver, FACJ 2000: Uppaal timed automata
Specific experiments are described by a test case: a timed sequence of inputs and outputs
Non-determinism is not handled well (if at all)
Not robust enough for our purposes
3. Experiments: A New Test Method
Our Method for Defect Testing
1. Identify types of behaviour which are likely to uncover implementation defects (e.g. extreme cases)
2. Describe these behaviours using a formal specification language
3. Translate the formal test specification into a test program to run on a test driver
4. Connect the test driver to the system under test and execute the test program
5. Analyse test results (on-the-fly or off-line)
Example System to Test
Step 1 – Identify interesting behaviours
Usually extreme behaviours, such as:
inputs at the maximum allowable rate
maximum response time to events
timely scheduling of tasks
Example Property to Test
Whenever the light level changes from low to high, the valve starts to open within 60 cs, assuming the light level alternates between high and low every 100 cs.
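One way to make this precise is as a bounded-response formula in a metric temporal logic; the notation below is our illustration (not from the slides), with time in centiseconds:

  \Box \bigl( \mathit{light\_low\_to\_high} \;\rightarrow\; \Diamond_{\le 60}\, \mathit{valve\_starts\_opening} \bigr)

That is: at every instant, a low-to-high light change implies the valve starts opening within 60 cs.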
Step 2 – choose a formal specification language
It must be able to model:
real-time clocks
persistent data
concurrency and communication
We use Uppaal Timed Automata (UTA)
Example UTA for timely response
[automaton diagram; transition labels include the clock reset m := 0]
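The diagram itself is not reproduced in this transcript. The following is a hand-written sketch of what such a tester automaton might look like, in an Uppaal-style textual notation; the location and channel names are our illustration, not the original model:

  clock m;                        // measures time since the light went high
  chan light_high, valve_open;

  process TimelyResponseTest {
    state WaitLow, WaitResponse, Pass, Fail;
    init WaitLow;
    trans
      WaitLow -> WaitResponse { sync light_high!; assign m := 0; },  // drive the input, reset the clock
      WaitResponse -> Pass    { guard m <= 60; sync valve_open?; },  // response observed in time
      WaitResponse -> Fail    { guard m > 60; };                     // deadline missed: defect found
  }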
Writing Robust Tests with UTA
Test cases specify all valid test inputs: no need to test outside these bounds
Test cases specify all expected test outputs: if an output doesn't match then it's wrong
No need to model the implementation explicitly
Test cases may be concurrent programs
Test cases are executed multiple times
Step 3 – Translate Spec to Exec
UTA specs are already program-like
Identify test inputs and how they will be controlled by the driver
Identify test outputs and how they will be observed by the driver
Then the translation into NQC (Not Quite C) programs is straightforward
Example NQC for timely response

  task dolightinput() {              // drives the SUT's light sensors
    while (i <= MAXRUNS) {
      Wait(100);
      setlighthigh(OUT_C);
      setlighthigh(OUT_A);
      record(FastTimer(0), HIGHLIGHT);   // log the time of the low-to-high change
      i++;
      Wait(100);
      setlightlow(OUT_C);
      setlightlow(OUT_A);
      record(FastTimer(0), LOWLIGHT);    // log the time of the high-to-low change
      i++;
    } // end while
  } // end task

  task monitormessages() {           // observes IR messages from the SUT
    while (i <= MAXRUNS) {
      monitor (EVENT_MASK(1)) {
        Wait(LONGINTERVAL);
      } catch (EVENT_MASK(1)) {
        record(FastTimer(0), Message()); // log arrival time and message content
        i++;
        ClearMessage();
      }
    } // end while
  } // end task
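The slide omits the surrounding declarations. Below is a minimal sketch of how the missing pieces might be written in NQC; record(), setlighthigh()/setlightlow() and the constants are not shown on the slide, so their definitions here are our assumptions, using the RCX2 datalog and event APIs:

  #define MAXRUNS 100               // assumed run count
  #define LONGINTERVAL 1000         // assumed monitor timeout
  #define HIGHLIGHT 1               // assumed codes for logged input events
  #define LOWLIGHT 0

  int i;                            // run counter (shared by both tasks, as on the slide)

  void record(int time, int value) {    // log a (time, value) pair to the datalog
    AddToDatalog(time);
    AddToDatalog(value);
  }

  void setlighthigh(const int out) { On(out); }    // lamp on: SUT's light sensor reads high
  void setlightlow(const int out)  { Off(out); }   // lamp off: SUT's light sensor reads low

  task main() {
    CreateDatalog(6 * MAXRUNS);            // generous upper bound on logged values
    SetEvent(1, 0, EVENT_TYPE_MESSAGE);    // event 1 fires when an IR message arrives (RCX2)
    ClearTimer(0);                         // FastTimer(0) is the shared test clock
    i = 1;
    start dolightinput;
    start monitormessages;
  }

After a run, the datalog is uploaded to a PC for the off-line analysis of step 5.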
Step 4 – test driver
Step 4 – connect tester and execute tests
Step 5: Analyse Results
Scheduling Deadlines Test Results
Concluding Observations
Defect testing requires active test drivers, able to control extreme inputs and observe relevant outputs
Test generation methods must take into account the constraints of executing test cases:
robust to non-determinism in the SUT
measure what can be measured
Engineers must design for testability
Results 1: Observation Issues
Things you can't see
Probe effect
Clock skew
Tester speed
Things you can't see
Motor outputs can't be observed directly (because of power drain), so we used IR messages to signal motor changes
But we can observe:
touch & light sensors, via piggybacked wires
broadcast IR messages
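For example, the SUT can be instrumented to announce each motor change over IR. A minimal sketch in NQC; the message code and task are our illustration, not the project's actual code:

  #define MSG_VALVE_OPENING 1         // assumed message code for this state change

  task openvalve() {
    On(OUT_A);                        // start opening the valve
    SendMessage(MSG_VALVE_OPENING);   // announce the change over IR for the tester
  }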
The probe effect
We can instrument program code to observe program variables, but the time taken to record results disturbs the timing of the system under test
Solutions:
observe only externally visible outputs
design for testability: allow for probe effects
Clock Skew
Clocks may differ for local results from two or more processors
Solutions:
use observations timed only by the tester
including tester events gives a partial order
Tester speed
Tester must be sufficiently fast to observe and record all interesting events
Beware:
scheduling and monitoring overheads
execution time variability
Solution: use NQC parallel tasks and off-line analysis for speed
Results 2: Input Control Issues
Input value control
Input timing control
Input Values can be Controlled
Touch sensor input (0..1): good, by piggybacked wire
Light sensor input (0..100): OK, by piggybacked wire
Broadcast IR messages: good, from the tester
Also use inputs directly from the environment: natural light, or a button pushed by hand
Input Timing is Hard to Control
Can't control input timing precisely, e.g. whether an input is offered just before the SUT task is called
Solution: run tests multiple times and analyse the average and spread of results (see the sketch below)
Can't predict all system timings for a fully accurate model; cf. WCET research, but our problem is harder
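To make "average and spread" concrete: with response times r_1, ..., r_n measured over n runs, one natural summary (our formulation, not the slides') is:

  \bar{r} = \frac{1}{n}\sum_{i=1}^{n} r_i, \qquad
  s = \sqrt{\frac{1}{n-1}\sum_{i=1}^{n} (r_i - \bar{r})^2}, \qquad
  \text{pass if } \max_i r_i \le 60 \text{ cs}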
Conclusions from Experiments
Defect testing requires active test drivers, able to control extreme inputs and observe relevant outputs
Test generation methods must take into account the constraints of executing test cases:
robust to non-determinism in the SUT
measure what can be measured
Engineers must design for testability