Garbage Collection for Monitoring Parametric Properties
Dongyun Jin, Patrick Meredith, Dennis Griffith, Grigore Rosu
University of Illinois at Urbana-Champaign
This talk is about how to monitor parametric properties "efficiently".
Outline
Motivation
Property Static Analysis
Double Lazy Garbage Collection
Evaluation
Conclusion
First, the motivation.
3
? Monitoring Scalable technique for ensuring software reliability
Observe run of a program Analyze execution trace against desired properties React/Report using handlers (if needed) … Program Event 1 Execute & Observe Analyze Event 2 ? Event 3 Event 4 Monitoring is a scalable technique for ensuring software reliability. It observes run of a program, then generates an execution trace consisting of events. We analyze execution trace against desired properties and react or report according to the analysis result, by using handlers. Execution Trace …
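The observe/analyze/react loop above can be sketched in a few lines. This is our own illustrative sketch, not MOP's implementation; `property_check` and `handler` are hypothetical names.

```python
def monitor(events, property_check, handler):
    """Observe a stream of events, analyze the growing execution trace
    against a desired property, and react through a handler on a match."""
    trace = []
    for event in events:              # observe the running program
        trace.append(event)           # extend the execution trace
        if property_check(trace):     # analyze the trace so far
            handler(list(trace))      # react/report via the handler
```

For example, a property forbidding two consecutive `open` events would pass a check such as `lambda t: t[-2:] == ["open", "open"]`.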
Applications of Monitoring
Development: debugging, testing
Deployment: security, reliability, runtime verification
Monitoring can be used in many ways. During development it helps with debugging and testing; in a deployed system it can provide security, enhanced reliability, or runtime verification.
Parametric Properties
Properties referring to object instances. The following property describes a bad behavior between each Collection c and Iterator i, over the events create(c, i), update(c), and next(i): after an iterator is created, updating the collection and then advancing the iterator is an error. In other words, you should not update a collection while you are using its iterator.
Parametric properties generalize typestates: a typestate is a parametric property with only one parameter, whereas a parametric property can have multiple parameters.
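The property can be encoded as a small finite-state machine. The state names below are our own invention; only the transition structure follows the description above (create, then update, then next is the error pattern).

```python
# Transitions of the Collection/Iterator property as an FSM.
# State names ("start", "created", ...) are illustrative, not from the paper.
DELTA = {
    ("start",   "create"): "created",
    ("start",   "update"): "start",    # updates before creation are harmless
    ("created", "next"):   "created",  # iterating normally is fine
    ("created", "update"): "updated",  # collection changed under a live iterator
    ("updated", "update"): "updated",
    ("updated", "next"):   "error",    # iterator used after the update: violation
}

def run(events, state="start"):
    """Drive one monitor over one sequence of events."""
    for e in events:
        state = DELTA.get((state, e), state)  # missing entries: stay put
    return state
```

`run(["create", "next", "update"])` leaves the monitor one step short of the error state, while `run(["create", "update", "next"])` reaches it.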
Parametric Property Monitoring Systems
Tracematches [de Moor et al. 05]: regular patterns
PQL [Lam et al. 05]: context-free patterns
PTQL [Aiken et al. 05]: SQL-like language
Eagle, then RuleR [Barringer et al. 04]: rule-based rewrite properties (e.g., context-free patterns)
MOP [Rosu et al. 03]: formalism-independent approach
…and many others
While most systems fix their formalism, MOP is formalism-independent: it supports multiple formalisms from which the user can choose.
Parametric Monitoring in MOP
Keep one monitor for each parameter instance. A parameter instance binds parameters to objects, e.g., (c ↦ c2, i ↦ i3). Each monitor knows nothing of parameters; it operates exclusively on one trace slice.
Here is a possible parametric trace: create(c1, i1) create(c1, i2) next(i1) update(c1) next(i2) — two creation events, then next, update, and next.
Slicing this trace per parameter instance: the slice for (c1, i1) is create next update, and the slice for (c1, i2) is create update next.
We keep a monitor for each parameter instance and update it with the corresponding trace slice. In this example, the monitor for the second slice, create update next, reaches the final state: the property is violated.
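Trace slicing can be sketched as a projection: an event belongs to an instance's slice when the event's parameter bindings are contained in the instance. This is a simplified illustration of slicing, not MOP's implementation (real implementations index incrementally rather than re-scanning the trace).

```python
def slice_for(trace, instance):
    """Keep the events whose parameter bindings agree with `instance`."""
    return [name for name, binding in trace
            if all(instance.get(p) == obj for p, obj in binding.items())]

# The trace from the slide, with explicit parameter bindings:
trace = [
    ("create", {"c": "c1", "i": "i1"}),
    ("create", {"c": "c1", "i": "i2"}),
    ("next",   {"i": "i1"}),
    ("update", {"c": "c1"}),
    ("next",   {"i": "i2"}),
]
```

`slice_for(trace, {"c": "c1", "i": "i1"})` yields create next update, and the (c1, i2) slice yields create update next, matching the slides.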
The challenge is the large number of monitors: since there can be an unbounded number of parameter instances, there can be an unbounded number of monitors.
Previously… a benchmark based on DaCapo 9.12 [Blackburn et al. 06]
Averages over 65 cases (13 programs × 5 properties):
Tracematches [de Moor et al. 05], one of the most memory-efficient monitoring systems: 142% runtime overhead¹, 12% memory overhead¹
MOP [Rosu et al. 03], the lowest runtime overhead of any runtime monitoring system we know of: 33% runtime overhead, 140% memory overhead
Tracematches manages memory quite well, but its memory management is formalism-dependent and costs considerable time. MOP is fast but has a memory problem.
¹ Excluding 6 (of 65) crashes
Worst Cases – Runtime Overhead
From DaCapo 9.12 and DaCapo MR2. [Chart of runtime overheads for the seven worst cases, including examples from the old version of DaCapo; overheads reach 19194%, with two Tracematches out-of-memory crashes and one case creating ~4M monitors.] Tracematches shows much higher runtime overhead here. JavaMOP does better, but these numbers are still high in our opinion.
Worst Cases – Memory Overhead
From DaCapo 9.12 and DaCapo MR2. [Chart of memory overheads on a logarithmic scale; overheads range from 5% to 3439%, with two out-of-memory crashes.] Overall, Tracematches manages memory better except in three cases, two of which are crashes. We want to improve both the memory and the runtime performance of MOP.
Goal: Reduce the Runtime/Memory Overhead
Static approaches either analyze the program or the property. Much ongoing research targets static program analysis [Bodden et al. 09, 10], [Dwyer et al. 10], …, which analyzes the target program and applies monitoring only where needed. Alternatively, one can statically analyze the property and use the result to optimize the monitoring system.
Our approach has two parts: we statically analyze the property, then dynamically use that information to garbage collect monitors when they become useless during monitoring.
Outline
Motivation · Property Static Analysis · Double Lazy Garbage Collection · Evaluation · Conclusion
Let us look at Property Static Analysis first.
Property Static Analysis
Property analysis tells us when monitors become unnecessary. Our analysis is formalism-independent; we show it only for FSM monitors (see the paper for CFG monitors). Consider again the Collection/Iterator property from the beginning. Some formalisms, such as context-free grammars, cannot be analyzed in terms of states, so we analyze properties in terms of the last observed event; that way, any formalism can be analyzed uniformly. We want to know which parameters are essential for a monitor to still be able to trigger: if a monitor has no way to trigger, we can collect it. The result is a table mapping each possible last event to the parameter instances that need to be alive.
Case 1: the last event is create.
After create, the monitor must be in the state just past the create transition.
From that state, two more events, update and then next, are needed for the monitor to trigger.
Those events mention both parameters, so for last event create the needed set is {c, i}.
If any of these parameters is garbage collected, the monitor can never trigger, so it can be collected as well.
Case 2: the last event is update.
Now there are two possibilities: the monitor may still be in the initial state (update before create), or in the state reached by updating after create.
In the latter state, only a next event is needed to trigger.
In that case only parameter i is needed: the set is {i}.
In the initial state, three events are still needed: create, update, and next.
Then both parameters are needed. Since we do not know which of the two states the monitor is in, we must consider both cases. If c is garbage collected, the case needing only next can still trigger, so we keep the monitor; if i is garbage collected, neither case can trigger, so we collect the monitor. Hence for last event update the needed set is {i}.
Case 3: the last event is next. By similar reasoning, both parameters are needed: the set is {c, i}.
The completed table: create → {c, i}, update → {i}, next → {c, i}. If any parameter in the set for the last event is dead, the current monitor can be collected.
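One way to mechanize the table computation: a parameter is essential in a state when removing every event that mentions it disconnects that state from the error state, and a dead parameter dooms the monitor only if it is essential in every state the last event may have led to. This is our reconstruction of the analysis walked through above, not the paper's algorithm; state names and helpers are our own.

```python
def reachable(delta, src, goal, blocked):
    """Can the FSM reach `goal` from `src` without using a blocked event?"""
    frontier, seen = [src], {src}
    while frontier:
        s = frontier.pop()
        if s == goal:
            return True
        for (st, ev), t in delta.items():
            if st == s and ev not in blocked and t not in seen:
                seen.add(t)
                frontier.append(t)
    return False

def needed_params(delta, final, event_params):
    """For each possible last event, the parameters that must all stay
    alive for the monitor to still be able to trigger."""
    all_params = set().union(*event_params.values())
    states_after = {}
    for (s, ev), t in delta.items():
        if t != final:                 # a triggered monitor is handled already
            states_after.setdefault(ev, set()).add(t)
    table = {}
    for ev, states in states_after.items():
        cases = []
        for s in states:
            # p is essential in state s if blocking every p-event cuts
            # all paths from s to the final (error) state
            cases.append({p for p in all_params
                          if not reachable(delta, s, final,
                                           {e for e, ps in event_params.items()
                                            if p in ps})})
        # a dead parameter dooms the monitor only if it is essential in
        # every possible state, hence the intersection over the cases
        table[ev] = set.intersection(*cases)
    return table

# The Collection/Iterator property (illustrative state names):
delta = {
    ("start",   "create"): "created",
    ("start",   "update"): "start",
    ("created", "next"):   "created",
    ("created", "update"): "updated",
    ("updated", "update"): "updated",
    ("updated", "next"):   "error",
}
event_params = {"create": {"c", "i"}, "update": {"c"}, "next": {"i"}}
table = needed_params(delta, "error", event_params)
```

Running this reproduces the table from the slides: create → {c, i}, update → {i}, next → {c, i}.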
Outline
Motivation · Property Static Analysis · Double Lazy Garbage Collection · Evaluation · Conclusion
We now know how to decide whether to collect a monitor; let us see how to actually collect monitors efficiently.
Double Lazy Garbage Collection
Indexing trees map parameter instances to monitors; we use multi-level indexing trees for this.
Lazy propagation of information about dead parameter instances: eager propagation yields high runtime overhead (as in Tracematches), so we propagate the information only when monitors are updated for other reasons. This way, we spend no extra time accessing monitors.
Lazy removal of useless monitors: removal is expensive, because a monitor can belong to many indexing trees, so we just mark unnecessary monitors and remove them when we encounter them later for other reasons. With luck, whole indexing trees become garbage collectible, and then there is no need to remove their monitors at all. (This is technical; see the paper for details and a proof of correctness.)
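A minimal sketch of the two ideas, assuming Python weak references stand in for the JVM's: each monitor holds weak references to the parameter objects the static analysis says it needs, and an indexing-tree entry drops dead monitors only when it is traversed anyway. Class and method names are illustrative, not JavaMOP's.

```python
import gc
import weakref

class Monitor:
    def __init__(self, needed_objects):
        # Weak refs to the parameters this monitor needs (per the static
        # analysis); once any of them dies, the monitor can never trigger.
        self._needed = [weakref.ref(o) for o in needed_objects]

    def collectable(self):
        return any(ref() is None for ref in self._needed)

class IndexingTree:
    """One level of an indexing tree: maps a key to its monitors."""
    def __init__(self):
        self._map = {}

    def add(self, key, monitor):
        self._map.setdefault(key, []).append(monitor)

    def monitors(self, key):
        # Lazy removal: dead monitors are filtered out only when this
        # entry is touched for other reasons (e.g., dispatching an event).
        live = [m for m in self._map.get(key, []) if not m.collectable()]
        self._map[key] = live
        return live

# Demo: the monitor dies with its iterator object.
class Obj:            # stands in for a Collection or Iterator
    pass

coll, it = Obj(), Obj()
tree = IndexingTree()
mon = Monitor([coll, it])       # the analysis said both must stay alive
tree.add("coll1", mon)
before = len(tree.monitors("coll1"))
del it                          # the program drops its iterator
gc.collect()
after = len(tree.monitors("coll1"))
```

`before` is 1 and `after` is 0: the useless monitor is discarded only when its entry is accessed again, never via an eager sweep.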
Outline
Motivation · Property Static Analysis · Double Lazy Garbage Collection · Evaluation · Conclusion
Let us see how much this improves performance.
Evaluation – Overall
We named our new system RV (MOP + garbage collection). On average, on the DaCapo 9.12 benchmark (13 programs, 5 properties):
Runtime overhead: Tracematches 142%¹, MOP 33%, RV 15%
Memory overhead: Tracematches 12%¹, MOP 140%, RV 39%
In the same setting, RV improves MOP's runtime overhead from 33% to 15% and its memory overhead from 140% to 39%. The runtime improves because useless monitors no longer need to be updated, and collecting monitors results in faster garbage collection.
¹ Excluding 6 (of 65) crashes
Evaluation – Worst Cases: Runtime Overhead
From DaCapo 9.12 and DaCapo MR2. [Chart of runtime overheads on the worst cases; RV ranges from 116% to 251%, while the existing systems show overheads in the hundreds to thousands of percent, including two out-of-memory crashes and one case creating ~4M monitors.] On these worst cases, RV shows much lower runtime overhead than both MOP and Tracematches.
Evaluation – Worst Cases: Memory Overhead
From DaCapo 9.12 and DaCapo MR2. [Chart of memory overheads on a logarithmic scale; RV ranges from 159% to 1512%.] RV always shows less memory overhead than MOP, but in many cases more than Tracematches.
Conclusions
Our garbage collection for parametric monitoring is:
Formalism-independent
Directed by property static analysis
Double lazy: lazy propagation of information about dead parameter instances and lazy removal of monitors, reducing both the overhead of alerting monitors to garbage-collected parameters and the overhead of removing monitors
Our evaluation shows the lowest runtime overhead of any parametric monitoring system we know of, with reasonable memory overhead.