MaC: Monitoring and Checking at Runtime (Continued). Presented by Usa Sammapun, CIS 700, Oct 12, 2005.



Recap: MaC
Runtime verification technique:
–Ensures that the current program execution follows its formal requirements at run-time

MaC Architecture
–MaC Compiler: compiles the MaC specification (PEDL, MEDL, SADL); PEDL maps variable updates and method calls/returns to events and conditions, MEDL describes system properties using events and conditions, and SADL specifies where/when to steer
–Instrumented Program: emits execution information (variable updates, method calls/returns) to the Event Recognizer
–Event Recognizer: turns the execution information into events and conditions
–Checker (MaC Verifier): evaluates the properties over those events and conditions and reports violations
–Injector: feeds steering commands back into the running program

Events
–e : primitive event (variable update, start/end of a method)
–e1 || e2 : or
–e1 && e2 : and
–start(c) : instant when condition c becomes true
–end(c) : instant when condition c becomes false
–e when c : e occurs when condition c is true
Alarms: events that must never occur

Conditions
Conditions are interpreted over three values: true, false, and undefined.
–c : boolean expression
–!c : not c
–c1 || c2 : or
–c1 && c2 : and
–c1 -> c2 : implies
–defined(c) : true when c is defined
–[e1, e2) : interval, true from an occurrence of e1 until the next occurrence of e2
Safety properties: conditions that must always hold true
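A minimal sketch of this three-valued semantics in Python, using None for undefined (the function names are illustrative, not MaC's API). Conjunction and disjunction follow Kleene semantics, in which a definite false (for &&) or true (for ||) dominates an undefined operand:

```python
# Three-valued logic: True, False, None (= undefined). Assumed Kleene-style
# semantics; function names are hypothetical, not part of MaC.

def c_not(c):
    return None if c is None else not c

def c_and(a, b):
    # False dominates; otherwise undefined propagates
    if a is False or b is False:
        return False
    if a is None or b is None:
        return None
    return True

def c_or(a, b):
    # True dominates; otherwise undefined propagates
    if a is True or b is True:
        return True
    if a is None or b is None:
        return None
    return False

def c_implies(a, b):
    return c_or(c_not(a), b)

def defined(c):
    return c is not None
```

For example, c_and(False, None) is False (a conjunction with a false operand is false even if the other operand is undefined), while c_and(True, None) stays undefined.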

Current Work
–Timing properties: regular expressions
–Probabilistic properties
–Dynamic MaC

Regular Expressions in MEDL
MEDL is based on temporal logic, but regular expressions (REs) may be a better fit:
–Engineers understand them
–More concise than temporal logic for expressing temporal ordering
REs range over MaC events:
–events a, b, c
–a.b*.c

Challenges
–When to accept, when several inputs could match (e.g., ab*c*): shortest input, longest input, or all inputs?
–Identifying which events are relevant
–Overlapping REs
–Simultaneous events

Identifying Which Events Are Relevant
–An unexpected event fails the RE check
–But a trace may contain irrelevant events, which should not make the RE fail

Example: no sends after a read
Property: open.send*.read*.close
Which traces should be accepted or rejected?
–open.send.read.close : accept
–open.send.read.send.close : reject
–open.send.send.read : continue (no verdict yet)
–open.send.delete : reject (delete is in the relevant set but unexpected)
–open.send.chdir.close : accept (chdir is not in the relevant set, so it is ignored)
Declaration: RE fileaccess {open, send, read, close, delete} = open.send*.read*.close
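The accept/reject/continue verdicts above can be sketched with a small hand-built automaton. This is illustrative only, not MaC's implementation; the relevant set and state table are transcribed from the example:

```python
# Checking open.send*.read*.close over a trace, ignoring events outside the
# relevant set. A tiny DFA yields accept / reject / continue verdicts.

RELEVANT = {"open", "send", "read", "close", "delete"}

# States: 0 = expect open, 1 = sends allowed, 2 = reads allowed, 3 = closed
TRANSITIONS = {
    (0, "open"): 1,
    (1, "send"): 1,
    (1, "read"): 2,
    (1, "close"): 3,
    (2, "read"): 2,
    (2, "close"): 3,
}

def check(trace):
    """Return 'accept', 'reject', or 'continue' for a list of event names."""
    state = 0
    for event in trace:
        if event not in RELEVANT:
            continue            # irrelevant events do not affect the RE
        nxt = TRANSITIONS.get((state, event))
        if nxt is None:
            return "reject"     # relevant but unexpected event fails the RE
        state = nxt
    return "accept" if state == 3 else "continue"
```

Running it on the slide's traces reproduces the verdicts: open.send.chdir.close is accepted because chdir is filtered out, while open.send.delete is rejected because delete is relevant but has no transition.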

MaC with Regular Expressions
Regular expressions over events:
–Declaration: RE R {Ē} = regular expression, where the relevant set {Ē} lists the events that can contribute to RE failure
–Grammar of R: R ::= e | R.R | R+R | R*
REs are neither events nor conditions:
–Events associated with an RE R: start(R), success(R), fail(R)
–Example: alarm badAccess = fail(fileaccess)

Overlapping REs
Property: open.send*.read*.close
Two overlapping instances of the property produce interleaved events, but the monitor sees only the flat sequence: open open send read send read.
–It cannot distinguish between the two overlapping instances; events lack attribution
–What is the right way to index events?

Simultaneous Events
The checker operates on a stream of observations:
–Observations are primitive events that reflect changes of system state
–One primitive event can trigger several different derived events
What if those events appear in the same RE, e.g., a.(a || b).b?
–At step i, a occurs, so (a || b) also occurs
–How do we order a and (a || b)?

Probabilistic Properties
Probability calculation:
–Numerical technique
–Statistical technique: 1. simulate, 2. collect several samples, 3. estimate probabilities
(Diagram: a task starts and either finishes within 100 time units or does not.)

Statistical Technique
Usually, we 1) execute the system X times, 2) use the executions as samples, and 3) estimate probabilities.
(Diagram: repeated runs of a task that either finishes within 100 time units or does not.)

1. Simulate and 2. Collect Samples
Runtime verification observes only one execution path, so we cannot re-execute the system to collect samples. Instead, samples are drawn from repeated sub-paths within the single execution (e.g., each task start, or each client request followed by a server update of v).

MaC Probabilistic Properties
Experiment: an element that indicates a sub-path
–e exp (previous example: task start)
–c exp
Probabilistic event, with a comparison operator from {<, <=, >, >=, =}:
–e prob(op p, e exp)
Probabilistic condition:
–c prob(op p, c exp)

Example
A soft real-time task must not miss a deadline of 100 time units with probability 0.2:
event missDeadline = end([startT, endT) {100})
alarm soft_rt_task = missDeadline prob(> 0.2, startT)
A car's velocity must be < 50 mph with probability 0.9 in work zones:
property speed = (v < 50) prob(> 0.9, work_zone)

3. Estimating Probability
Estimate the probability from the program execution:
–Compute the experimental probability p̂ (p̂ condition and p̂ event)
–Condition: c prob(> p, c exp); Event: e prob(> p, e exp)
Example: a car's velocity must be < 50 mph with probability 0.9 in work zones: (v < 50) prob(> 0.9, work_zone)

Example
A task must not miss a deadline of 100 time units with probability 0.2:
alarm soft_rt_task = missDeadline prob(> 0.2, startT)
–# missDeadline events = 40
–# startT (task start) events = 150
–p̂ = 40 / 150 = 0.267
The estimate p̂ = 0.267 exceeds the threshold p = 0.2.
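The counting behind p̂ can be sketched as follows (the class and event names are hypothetical, used only to mirror the deadline example):

```python
# Estimate p̂ by counting experiment occurrences (task starts) and monitored
# event occurrences (missed deadlines) over one execution.

class ProbEstimator:
    def __init__(self):
        self.n_exp = 0    # occurrences of the experiment event (startT)
        self.n_event = 0  # occurrences of the monitored event (missDeadline)

    def observe(self, event):
        if event == "startT":
            self.n_exp += 1
        elif event == "missDeadline":
            self.n_event += 1

    def p_hat(self):
        # No estimate until at least one experiment has been observed
        return self.n_event / self.n_exp if self.n_exp else None

# Slide's numbers: 150 task starts, 40 missed deadlines
est = ProbEstimator()
for e in ["startT"] * 150 + ["missDeadline"] * 40:
    est.observe(e)
print(round(est.p_hat(), 3))  # 0.267
```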

Statistical Hypothesis Testing
Given:
–A probability estimate
–A confidence interval (CI), e.g., CI = 95%
Statistical hypothesis testing yields one of three outcomes:
–Satisfied
–Not satisfied
–Need more samples

Probability Estimation: Z-Score
Use the z-score to measure how far apart p̂ and p are:
z = (p̂ - p) / sqrt(p(1 - p) / n)
–For an event, n = |occurrences of e exp|
–For a condition, n = |states s_i s.t. c exp = true|
–The sign of z gives the direction: +z means p̂ > p, -z means p̂ < p
–The magnitude of z says how far apart p̂ and p are
Deadline example: p = 0.2, p̂ = 0.267 gives z_p = 2.05.
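Under the usual normal approximation to the binomial, the z-score for the deadline example can be computed directly. Note the slide rounds p̂ to 0.267 before dividing and reports 2.05; computing from the raw counts gives about 2.04:

```python
import math

# z-score of an estimated proportion p̂ against a target p over n samples:
# z = (p̂ - p) / sqrt(p (1 - p) / n)

def z_score(p_hat, p, n):
    return (p_hat - p) / math.sqrt(p * (1 - p) / n)

# Deadline example: target p = 0.2, 40 misses out of n = 150 task starts
z = z_score(40 / 150, 0.2, 150)
print(round(z, 2))  # 2.04
```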

Continued: Deciding with a Confidence Interval
Given a confidence interval CI, we compute its critical z-score z* (e.g., CI = 95% has z* = 1.96). For alarm soft_rt_task = missDeadline prob(> 0.2, startT), decide:
–No alarm: z_p < -z* (p̂ < p with confidence CI)
–Raise alarm: z_p > z* (p̂ > p with confidence CI)
–More samples: -z* < z_p < z* (p̂ is close to p, so either action wouldn't cause serious error)
Deadline example: z_p = 2.05 > z* = 1.96, so the alarm is raised.
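The three-way decision rule above can be sketched as (function name illustrative):

```python
# Compare the z-score against the critical value z* for the chosen
# confidence level (z* = 1.96 for 95%) and pick one of three outcomes.

def decide(z, z_star=1.96):
    if z < -z_star:
        return "no alarm"        # p̂ < p with the given confidence
    if z > z_star:
        return "raise alarm"     # p̂ > p with the given confidence
    return "more samples"        # p̂ is close to p: keep monitoring

print(decide(2.05))   # raise alarm
print(decide(-3.0))   # no alarm
print(decide(0.5))    # more samples
```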

Dynamic MaC
From fixed to dynamic object sets: what if tasks can be added dynamically? Then the set of events and conditions changes dynamically.
–Events and conditions are parameterized
–Example (client):
event clientReq(ID i) = startM(Client.request()) { clientReq.i = Client.id; }
condition clientValid(ID i) = [clientReq(i), clientDropped(i));
–A special event adds or removes an object in the object set
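One way to sketch parameterized events is to keep one condition instance per object ID, created and removed dynamically (the class, method, and event names here are hypothetical, not MaC's API; clientDropped is assumed to be the event closing the interval):

```python
# Per-ID condition instances for a dynamic object set: clientValid(i) holds
# on the interval [clientReq(i), clientDropped(i)).

class DynamicMonitor:
    def __init__(self):
        self.valid = {}  # clientValid(i), keyed by client ID

    def client_req(self, i):      # event clientReq(ID i)
        self.valid[i] = True      # interval condition becomes true

    def client_dropped(self, i):  # event clientDropped(ID i)
        self.valid[i] = False     # interval condition becomes false

    def remove(self, i):          # special event removing an object
        self.valid.pop(i, None)   # its events/conditions disappear

m = DynamicMonitor()
m.client_req("a")
m.client_req("b")
m.client_dropped("a")
print(m.valid)  # {'a': False, 'b': True}
```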