Monitoring IVHM Systems using a Monitor-Oriented Programming Framework
S. Ghoshal, S. Manimaran - QSI
G. Rosu, T. Serbanuta, G. Stefanescu - UIUC
IVHM System Analysis
IVHM systems pose significantly higher safety and dependability requirements than most other systems
Formal analysis of IVHM systems is therefore highly desirable …
… but also challenging, due to their highly integrated nature (different technologies, hardware, software, sensors, etc.) and combined complexity
Overview
Our Approach
MOP (University of Illinois at Urbana)
TEAMS (Qualtech Systems Inc.)
Project Research Plan
Conclusion and Future Work
Our Approach to IVHM Analysis
Separation of concerns
1. State “health” assessment, or diagnosis
2. Temporal behaviors of state sequences
[Diagram: the IVHM system feeds model-based observation (TEAMS), which emits abstract events/states to a temporal behavior monitor (MOP); violations/validations drive steering/recovery of the IVHM system]
Overview
Our Approach
MOP (University of Illinois at Urbana)
TEAMS (Qualtech Systems Inc.)
Project Research Plan
Conclusion and Future Work
Monitoring-Oriented Programming (MOP)
http://fsl.cs.uiuc.edu/mop - proposed in 2003
– RV’03, ICFEM’04, RV’05, CAV’05, TACAS’05, CAV’06, CAV’07, OOPSLA’07, ICSE’08, …
[Diagram: MOP architecture: logic plugins (ERE, LTL, ptLTL, ptCaRet, CFG, …) over language instances (JavaMOP, BusMOP, …)]
What is MOP?
Framework for reliable software development
– Monitoring is a basic design discipline … rather than an “add on” grafted onto existing code
– Recovery allowed and encouraged
– Provides to programmers, and hides under the hood, a large body of formal methods knowledge/techniques
Monitor synthesis algorithms
– Generic for different languages and application domains
Example: Correct and efficient sorting
Run heap sort, O(n log n); then a monitor checks in O(n) whether the vector is sorted
– yes: done; no: fall back to insertion sort, O(n^2), which is provably correct
Works in MOP
We have an efficient and provably correct sorting algorithm!
We avoided proving heap sort correct, which is hard!
(We still need to show that sorting does not destroy the multiset of elements)
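As a concrete illustration of the scheme above, here is a hand-written sketch (not MOP-generated code; the class and method names are hypothetical) of sorting with a runtime sortedness check and a provably correct fallback:

    public class MonitoredSort {

        // Run the fast, unverified heap sort, then check the cheap O(n) property;
        // fall back to the provably correct O(n^2) insertion sort if the check fails.
        public static void sort(int[] a) {
            heapSort(a);
            if (!isSorted(a)) {
                insertionSort(a);
            }
        }

        static boolean isSorted(int[] a) {
            for (int i = 1; i < a.length; i++)
                if (a[i - 1] > a[i]) return false;
            return true;
        }

        static void insertionSort(int[] a) {
            for (int i = 1; i < a.length; i++) {
                int key = a[i], j = i - 1;
                while (j >= 0 && a[j] > key) { a[j + 1] = a[j]; j--; }
                a[j + 1] = key;
            }
        }

        static void heapSort(int[] a) {
            int n = a.length;
            for (int i = n / 2 - 1; i >= 0; i--) siftDown(a, i, n);
            for (int end = n - 1; end > 0; end--) {
                int t = a[0]; a[0] = a[end]; a[end] = t;
                siftDown(a, 0, end);
            }
        }

        static void siftDown(int[] a, int i, int n) {
            while (true) {
                int l = 2 * i + 1, r = l + 1, largest = i;
                if (l < n && a[l] > a[largest]) largest = l;
                if (r < n && a[r] > a[largest]) largest = r;
                if (largest == i) return;
                int t = a[i]; a[i] = a[largest]; a[largest] = t;
                i = largest;
            }
        }
    }

As the slide notes, the sortedness check alone does not guarantee that the output is a permutation of the input; that multiset-preservation argument (or a further monitor) is still needed.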
MOP Example: “Authentication before use”
Execution 1, correct: authenticate (begin … end) followed by use (begin … end)
Execution 2, incorrect: use is not preceded by a completed authenticate
MOP Example: “Authentication before use”
class Resource {
  /*@ class-scoped SafeUse() {
        event authenticate : after(exec(* authenticate()))
        event use : before(exec(* access()))
        ptltl : use -> authenticate
      } @*/
  void authenticate() {...}
  void access() {...}
  ...
}
MOP Example: “Enforce authentication before use”
Execution 1, correct: authenticate (begin … end) followed by use (begin … end)
Execution 2, incorrect but corrected: the monitor calls authenticate() before use proceeds
MOP Example: “Enforce authentication before use”
class Resource {
  /*@ class-scoped SafeUse() {
        event authenticate : after(exec(* authenticate()))
        event use : before(exec(* access()))
        ptltl : use -> authenticate
        violation { @this.authenticate(); }
      } @*/
  void authenticate() {...}
  void access() {...}
  ...
}
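Conceptually, the monitor synthesized from this past-time LTL formula reduces to one boolean flag per Resource object; the hand-written approximation below (illustrative only, not the code JavaMOP actually generates) shows both the check and the recovery performed by the violation handler.

    // Hand-written approximation of the SafeUse monitor: a flag records whether
    // authenticate() has completed, and the recovery step authenticates on the
    // caller's behalf before access proceeds, mirroring the violation handler.
    class MonitoredResource {
        private boolean authenticated = false;   // monitor state: "authenticate has completed"

        void authenticate() {
            // ... real authentication logic ...
            authenticated = true;                // corresponds to event: after authenticate()
        }

        void access() {
            if (!authenticated) {                // event: before access(); property violated
                authenticate();                  // recovery, like "@this.authenticate();"
            }
            // ... real access logic ...
        }
    }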
MOP Example: “Correcting method matching”
Method openRegKey should be followed by closeRegKey, not by closeHandle
/*@ class-scoped SafeClose() {
      event openRegKey : after(exec(* openRegKey()))
      event closeHandle : before(exec(* closeHandle()))
      event closeRegKey : before(exec(* closeRegKey()))
      ere : any* openRegKey closeHandle
      validation { @this.closeRegKey(); return; }
    } @*/
MOP Example: Profiling
How many times is a file opened, written to, and then closed?
/*@ class-scoped FileProfiling() {
      [ int count = 0; int writes = 0; ]
      event open : after(call(* open(..))) { writes = 0; }
      event write : after(call(* write(..))) { writes++; }
      event close : after(call(* close(..)))
      ere : (open write+ close)*
      violation { @RESET; }
      validation { count++; File2.log(count + ": " + writes); }
    } @*/
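Since the pattern is a regular expression, the synthesized monitor is in essence a small finite automaton over the open/write/close events; a hand-rolled sketch of that automaton (illustrative only, with hypothetical method names) could look like this:

    // Hand-rolled finite-state monitor for the pattern (open write+ close)*:
    // state 0 = no file open (accepting), 1 = opened but not yet written,
    // 2 = written at least once; any unexpected event is a violation and resets.
    class FileProfilingMonitor {
        private int state = 0;
        private int count = 0;   // completed open-write+-close cycles
        private int writes = 0;  // writes in the current cycle

        void onOpen()  { if (state == 0) { state = 1; writes = 0; } else violation(); }
        void onWrite() { if (state == 1 || state == 2) { state = 2; writes++; } else violation(); }
        void onClose() {
            if (state == 2) {                        // pattern completed: validation handler
                count++;
                System.out.println(count + ": " + writes);
                state = 0;
            } else violation();
        }

        private void violation() { state = 0; writes = 0; }  // analogue of @RESET
    }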
Fail-Fast Iterators
The following code throws an exception in Java (a ConcurrentModificationException, raised once the iterator is subsequently used):
Vector v = new Vector();
v.add(new Integer(1));
Iterator i = v.iterator();
v.add(new Integer(2));
No exception is raised if one uses Enumeration instead of Iterator
– A Java language decision, showing that properties referring to sets of objects are important
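For contrast, here is a small self-contained example (illustrative, not from the slides) of the Enumeration behavior mentioned above: modifying the vector after obtaining the enumeration raises no exception, which is exactly the gap the SafeEnum property on the following slides addresses.

    import java.util.Enumeration;
    import java.util.Vector;

    // Unlike Iterator, the Enumeration returned by Vector.elements() is not
    // fail-fast, so changing the vector after creating the enumeration goes
    // unnoticed at runtime; SafeEnum is meant to catch this situation.
    public class EnumerationPitfall {
        public static void main(String[] args) {
            Vector<Integer> v = new Vector<>();
            v.add(1);
            Enumeration<Integer> e = v.elements(); // enumeration created here
            v.add(2);                              // vector changes afterwards
            while (e.hasMoreElements()) {
                System.out.println(e.nextElement()); // no exception is thrown
            }
        }
    }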
MOP Example: Safe Enumeration
Basic safety property:
– If nextElement() is invoked on an enumeration object, then the corresponding collection (vector) must not have changed after the creation of the enumeration object
MOP Example: Safe Enumeration
/*@ global validation SafeEnum(Vector v, Enumeration+ e) {
      event create : after(call(Enumeration+.new(v,..))) returning e
      event updatesource : after(call(* v.add*(..))) \/ …
      event next : before(call(Object e.nextElement()))
      ere : create next* updatesource+ next
    } @*/
AspectJ code generated from the above: ~700 LOC
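For intuition, the illustrative client code below produces, for the pair (v, e), the event trace create, updatesource, next, an instance of the unsafe pattern create next* updatesource+ next; exactly how each event maps onto these calls depends on the spec's pointcuts, so read this only as the intended ordering.

    import java.util.Enumeration;
    import java.util.Vector;

    // Illustrative client code exhibiting the behavior SafeEnum targets for one
    // parameter instance (v, e): create, then updatesource, then next.
    public class SafeEnumViolation {
        public static void main(String[] args) {
            Vector<String> v = new Vector<>();
            v.add("a");
            Enumeration<String> e = v.elements(); // event: create (for the pair v, e)
            v.add("b");                           // event: updatesource (v changes)
            e.nextElement();                      // event: next, the unsafe enumeration
        }
    }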
MOP Example: Safe Locking Policy Each lock should be released as many times as it was acquired
MOP Example: Safe Locking
/*@ method-scoped SafeLock(Lock l) {
      event acquire : before(call(* l.acquire()))
      event release : before(call(* l.release()))
      cfg : S -> epsilon | S acquire S release
    } @*/
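Because the grammar generates exactly the balanced acquire/release sequences, the property can be approximated by hand with a per-lock counter, as sketched below (hypothetical names, not MOP output); the unbounded counting involved is also why the property lies outside regular expressions and motivates the CFG plugin.

    // Hand-written approximation of SafeLock for one lock within one method scope:
    // acquires and releases must balance like parentheses.
    class SafeLockMonitor {
        private int depth = 0;            // acquires not yet matched by a release
        private boolean violated = false;

        void onAcquire() { depth++; }

        void onRelease() {
            if (depth == 0) violated = true;  // release without a matching acquire
            else depth--;
        }

        // Called when the monitored method returns (end of the method scope).
        boolean satisfied() {
            return !violated && depth == 0;   // every acquire was eventually released
        }
    }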
MOP Approach to Monitoring
Keep the following distinct and generic:
– specification formalisms
– event definitions
– validation handlers
MOP Distinguished Features: Extensible logic framework
Observation: there is no silver-bullet logic for specifications
MOP logic plugins (the “How”): encapsulate monitor synthesizers; so far we have plugins for
– ERE (extended regular expressions), PtLTL (past-time LTL), FtLTL (future-time LTL), ATL (Allen temporal logic), JML (Java Modeling Language), PtCaRet (past-time Call/Return), CFG (context-free grammars)
Generic universal parameters
– Allow monitor instances per group of objects
MOP Distinguished Features: Configurable monitors
Working scope
– Checkpoint: check spec at a defined place
– Method: within a method call
– Class: check spec everywhere during the object's lifetime
– Interface: check spec at the boundaries of methods
– Global: may refer to more than one object
Running mode
– Inline: shares resources with the application
– Outline: communicates with the application via sockets
– Offline: generated monitor has random access to a log
MOP Distinguished Features: Decentralized Monitoring/Indexing
The problem: how to monitor a universally quantified specification efficiently
[Diagram: the events create, updatesource, next observed for a parameter instance (v, e) are matched against the pattern create next* updatesource+ next]
Decentralized Monitoring
Monitor instances (one per parameter instance): M_p1, M_p2, M_p3, …, M_p1000
Indexing …
The problem: how can we retrieve all needed monitor instances efficiently?
[Diagram: an updatesource event must be routed to just the relevant instances, e.g. M_(v,e1), M_(v,e2), …, among M_p1 … M_p1000]
A naïve implementation is very inefficient (both time- and memory-wise)
MOP’s Decentralized Indexing
Monitors scattered all over the program
Monitor states piggybacked onto object states
Weak references
[Diagram: the SafeEnum events create, updatesource, next dispatched to monitors through the objects they involve]
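A rough sketch of the indexing idea in a simplified, single-parameter form (real JavaMOP indexing handles parameter combinations and is considerably more elaborate; all names here are hypothetical): a weak-keyed map routes each event to exactly the monitors registered for that object and lets monitors be collected along with it.

    import java.util.ArrayList;
    import java.util.List;
    import java.util.Map;
    import java.util.WeakHashMap;

    // Simplified illustration of decentralized indexing: monitor instances are
    // looked up from the parameter objects themselves, so an event on a Vector
    // reaches only the monitors created for enumerations over that Vector.
    class MonitorIndex<M> {
        // Weak keys: once the source object is otherwise unreachable, its entry
        // (and the monitors it holds) becomes eligible for garbage collection.
        private final Map<Object, List<M>> bySource = new WeakHashMap<>();

        void register(Object source, M monitor) {
            bySource.computeIfAbsent(source, k -> new ArrayList<>()).add(monitor);
        }

        // Dispatch an event to the monitors indexed under this particular object,
        // instead of scanning every monitor instance in the program.
        List<M> monitorsFor(Object source) {
            return bySource.getOrDefault(source, List.of());
        }
    }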
MOP: Evaluation
More than 100 program-property pairs
– DaCapo benchmark, Tracematches benchmark, Eclipse, …
Overhead < 8% in most cases; close to hand-optimized monitors
[Chart: overhead in %, MOP monitors vs. hand-optimized monitors]
MOP: Evaluation (cont.)
Even significantly faster than logic-specific solutions
Results for Tracematches benchmarks, overhead in %:
Property    Program     LOC      Hand Optimized   MOP    Tracematches   PQL
Reweave     ABC         51.2K    11.1             20.2   63.5           N/A
HashSet     Aprove      438.7K   21.2             23.9   124.3          N/A
Hashtable   Weka        9.9K     –                3.3    15.2           N/A
NullTrack   CerRevSim   1.4K     210              232    452            N/A
SafeEnum    jHotDraw    9.5K     0.1              136    1509           7084
Listener    ajHotDraw   21.1K    0                6.6    3354           219
Overview
Our Approach
MOP (University of Illinois at Urbana)
TEAMS (Qualtech Systems Inc.)
Project Research Plan
Conclusion and Future Work
QSI’s TEAMS
Model-based diagnosis system
– TEAMS model = dependency model capturing relationships between failure modes and observable effects
QSI’s TEAMS Tool Set
– TEAMS Designer: helps create models
– TEAMS-RT: processes data in real time
– TEAMATE: infers health status + optimal tests
– TEAMS-RDS: remote diagnostic server
TEAMS Designer
Helps users create models
– (models can also be imported)
– Captures component and data dependencies + other aspects that allow efficient diagnosis
Model = hierarchical, multi-layered directed graph
– Node: physical component
– Test-point: “observation” node
– Edge: cause-effect dependency
Overview
Our Approach
MOP (University of Illinois at Urbana)
TEAMS (Qualtech Systems Inc.)
Project Research Plan
Conclusion and Future Work
Project Objectives
1. Develop tools, techniques and ultimately an integrated framework for IVHM system monitoring, control and verification
2. Show that runtime verification and monitoring can play a crucial role in the development of safe, robust, reliable, scalable and operational IVHM systems
Project Plan
TEAMS: capture system “health”
MOP: generate and integrate monitors
Integrated system: check IVHM system at runtime, steering if failures are detected
[Diagram: the IVHM system feeds model-based observation (TEAMS), which emits abstract events/states to a temporal behavior monitor (MOP); violations/validations drive steering/recovery]
What is done: TEAMS side
Case study: B-737 Autoland
With data provided by Celeste M. Belcastro and Kenneth Eure, a model for B-737 is being developed
What is done: MOP side
Case study: B-737 Autoland
Two new logic plugins
– Context-free patterns
– Past-time LTL with Calls/Returns
– (still missing timed logic plugins)
Improved monitor garbage collection
– Current MOP more than an order of magnitude faster than other RV systems
Overview
Our Approach
MOP (University of Illinois at Urbana)
TEAMS (Qualtech Systems Inc.)
Project Research Plan
Conclusion and Future Work
Conclusion and Future Work
Discussed initial steps towards an integrated framework for IVHM system monitoring, control and verification
– Separation of concerns: observation/diagnosis of system “health” (TEAMS) and monitoring of temporal behaviours (MOP)
A lot remains to be done
– Complete the TEAMS model for the B-737 autoland
– Automate the integration of TEAMS and MOP