Markov Reward Models By H. Momeni Supervisor: Dr. Abdollahi Azgomi.


1. Markov Reward Models
By H. Momeni
Supervisor: Dr. Abdollahi Azgomi

2. Contents
- Modeling taxonomy
- Markov reward model definition
- Reliability measures
- Availability measures
- Performance measures
- Conclusion

3. Modeling Taxonomy
"All models are wrong; some models are useful." (George Box)
Modeling divides into simulation and analytic modeling; analytic modeling divides into non-state-space methods and state-space methods.

4. Non-State-Space Modeling Taxonomy
Non-state-space methods divide into performance models (queuing models) and dependability models (reliability block diagram models and fault tree models).

5. State-Space Modeling Taxonomy
State-space models divide into Markovian models (discrete-time Markov chains, continuous-time Markov chains, and Markov reward models) and non-Markovian models (semi-Markov processes, Markov regenerative processes, and non-homogeneous Markov models).

6. Motivation
- Extending CTMCs to Markov reward models makes them even more useful.
- Markov reward models are used as a means to obtain performance and dependability measures.

7. Dependability Concepts
- Threats: faults, errors, failures. Faults are the cause of errors that may lead to failures (fault -> error -> failure).
- Attributes: availability, reliability, safety, confidentiality, integrity, maintainability. Confidentiality, integrity, and availability together constitute security.
- Means: fault prevention, fault removal, fault tolerance, fault forecasting.

8. MRM Formal Definition
- A Markov reward model consists of a continuous-time Markov chain X = {X(t), t >= 0} with a finite state space S, and a reward function r: S -> R.
- Usually, for each state i in S, r(i) represents the reward obtained per unit time in that state.
- With MRMs, rewards can be assigned to states or to transitions.
- The reward rates are defined based on the system requirements (availability, reliability, performance, ...).

9. Formal Definitions
- Z(t) = r_{X(t)} is the system reward rate at time t.
- The accumulated reward in the interval [0, t) is Y(t) = integral_0^t Z(tau) d tau.
- The expected accumulated reward is E[Y(t)] = sum_{i in S} r_i L_i(t), where L_i(t) denotes the expected total time the CTMC spends in state i during the interval [0, t].

10. Formal Definitions (cont'd)
- Let pi_i be the steady-state probability of state i.
- The expected steady-state reward rate is E[Z] = sum_{i in S} r_i pi_i.
- The expected instantaneous reward rate is E[Z(t)] = sum_{i in S} r_i p_i(t), where p_i(t) = P{X(t) = i}.
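The steady-state expected reward rate above can be sketched numerically. The example below is an assumption (a hypothetical two-state up/down model with failure rate 1.0 and repair rate 2.0, not from the slides); it computes pi by power iteration on the uniformized DTMC and then E[Z] = sum_i r_i pi_i.

```python
# Minimal sketch: steady-state expected reward rate E[Z] = sum_i r_i * pi_i.
# The generator Q is a hypothetical 2-state up/down model (failure rate 1.0,
# repair rate 2.0); with r = (1, 0), E[Z] is the steady-state availability.

def steady_state_reward(Q, r, iters=1000):
    n = len(Q)
    q = 1.1 * max(-Q[i][i] for i in range(n))          # uniformization rate
    # DTMC transition matrix P = I + Q/q
    P = [[(1.0 if i == j else 0.0) + Q[i][j] / q for j in range(n)]
         for i in range(n)]
    pi = [1.0 / n] * n
    for _ in range(iters):                             # power iteration
        pi = [sum(pi[i] * P[i][j] for i in range(n)) for j in range(n)]
    return pi, sum(r[i] * pi[i] for i in range(n))

Q = [[-1.0, 1.0],
     [2.0, -2.0]]
pi, EZ = steady_state_reward(Q, (1.0, 0.0))
# balance gives pi = (2/3, 1/3), so E[Z] = 2/3
```

Uniformization keeps every row of P stochastic, so the power iteration converges to the CTMC's steady-state vector without solving pi Q = 0 directly.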

11. Example
- A three-state Markov reward model.
- The reward rate vector is r = (3, 1, 0).
- An initial probability vector is also given.
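A numerical sketch of this example follows; since the slide's images are lost, the generator matrix Q and the initial vector p0 below are hypothetical stand-ins. The code integrates dp/dt = p Q with an explicit Euler scheme and evaluates both E[Z(t)] and E[Y(t)] for r = (3, 1, 0).

```python
# Sketch: transient measures of a 3-state MRM with r = (3, 1, 0).
# Q and p0 are hypothetical stand-ins for the values on the original slide.

def transient_reward(Q, r, p0, t, dt=1e-4):
    n = len(p0)
    p = list(p0)
    L = [0.0] * n                          # L_i(t): expected time in state i
    for _ in range(int(t / dt)):           # Euler step of dp/dt = p*Q
        for i in range(n):
            L[i] += p[i] * dt
        p = [p[j] + dt * sum(p[i] * Q[i][j] for i in range(n))
             for j in range(n)]
    EZ = sum(r[i] * p[i] for i in range(n))    # expected instantaneous reward
    EY = sum(r[i] * L[i] for i in range(n))    # expected accumulated reward
    return EZ, EY

Q = [[-2.0, 2.0, 0.0],     # state 0 -> 1 (e.g. first failure)
     [0.0, -1.0, 1.0],     # state 1 -> 2 (second failure)
     [3.0, 0.0, -3.0]]     # state 2 -> 0 (repair)
r = (3.0, 1.0, 0.0)
EZ, EY = transient_reward(Q, r, (1.0, 0.0, 0.0), t=2.0)
```

Since the rows of Q sum to zero, the Euler step preserves the total probability mass exactly, and sum_i L_i(t) = t.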


13. Case Study
Consider a multiprocessor system with n processor elements processing a given workload.

14. System Availability
- Definition: the availability of a system at time t, A(t), is the probability that the system is accessible to perform its tasks correctly at time t.
- Availability measures are based on a binary reward structure.
- One working processor is sufficient for the system to be up; otherwise it is considered down.
- The state space is partitioned into a set U of up states and a set D of down states.
- Reward rate 1 is attached to the states in U and reward rate 0 to those in D.

15. System Availability
- The reward function r is: r_i = 1 for i in U, r_i = 0 for i in D (availability reward rates).
- The instantaneous availability is A(t) = E[Z(t)] = sum_{i in U} p_i(t).

16. System Availability
- Unavailability can be calculated with the reverse reward assignment (reward rate 1 on down states, 0 on up states).
- The steady-state availability is A = sum_{i in U} pi_i.
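As an illustration of the steady-state availability, here is a sketch for an assumed two-processor instance of the case study (per-processor failure rate lam, a single repair facility with rate mu; both rates and the birth-death structure are assumptions, not from the slides).

```python
# Sketch: steady-state availability of an assumed 2-processor model.
# States: 2 up, 1 up, 0 up; failure rate k*lam in state k, repair rate mu.

def steady_state_availability(lam, mu):
    # unnormalized steady-state weights via birth-death balance equations
    w2 = 1.0
    w1 = w2 * (2 * lam / mu)       # pi_2 * 2*lam = pi_1 * mu
    w0 = w1 * (lam / mu)           # pi_1 * lam   = pi_0 * mu
    Z = w2 + w1 + w0
    pi = [w2 / Z, w1 / Z, w0 / Z]
    r = [1, 1, 0]                  # binary reward: up if >= 1 processor works
    return sum(ri * p for ri, p in zip(r, pi))

A = steady_state_availability(0.1, 1.0)   # (1 + 0.2) / 1.22
```

The reverse reward assignment r = [0, 0, 1] would give the steady-state unavailability 1 - A.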

17. System Availability
- There are related measures that do not rely on the binary reward structure (e.g., uptime, number of repair calls).
- The mean transient uptime is the expected accumulated reward E[Y(t)] = sum_{i in U} L_i(t) under the mean-uptime reward rates.

18. System Availability
- Measures related to the frequency of certain events of interest are very important (e.g., the average number of repair calls in [0, t)).
- Attaching the repair rate of each state as its reward rate yields the transient average number of repair calls and its steady-state counterpart (reward rates for the average number of repair calls in [0, t)).
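A minimal sketch of this event-frequency idea, assuming a single-unit up/down model (a hypothetical instance, not the slide's system): attaching the repair rate mu as the reward of the down state gives the steady-state rate of repair calls.

```python
# Sketch: steady-state repair-call rate for an assumed single-unit model
# (up -> down with rate lam, down -> up with rate mu).

def repair_call_rate(lam, mu):
    pi_down = lam / (lam + mu)     # steady-state probability of the down state
    return mu * pi_down            # reward rate mu attached to the down state

rate = repair_call_rate(1.0, 1.0)  # mu * lam / (lam + mu)
```

Integrating the same reward over [0, t) would give the expected number of repair calls in that interval.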

19. System Reliability
- Definition: the reliability of a system at time t, R(t), is the probability that the system operates properly throughout the interval [0, t].
- A binary reward function r is defined that assigns reward rate 1 to up states and reward rate 0 to down states.

20. System Reliability
- Reliability is the likelihood that an unwanted (failure) event has not yet occurred since the beginning of system operation: R(t) = P{T > t}, where T is the time to the next occurrence of an unwanted (failure) event (reward rates for reliability).


22. System Reliability
- The mean time to the occurrence of an unwanted (failure) event is given by MTTF = E[T] = integral_0^infinity R(t) dt.
- The unreliability follows as the complement: F(t) = 1 - R(t).
- The unreliability could also be calculated based on a reward assignment complementing the one in the table.
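The MTTF can also be read off the CTMC directly once the failure state is made absorbing. A sketch for the same assumed two-processor model used earlier (per-processor failure rate lam, repair rate mu; the model itself is a hypothetical stand-in for the case study):

```python
# Sketch: MTTF of an assumed 2-processor model with state 0 (both failed)
# made absorbing. m_k = expected time to absorption from k processors up:
#   m2 = 1/(2*lam) + m1
#   m1 = 1/(lam + mu) + (mu/(lam + mu)) * m2
# Solving this 2x2 linear system for m2 gives:

def mttf_two_proc(lam, mu):
    return (lam + mu) / (2 * lam ** 2) + 1 / lam

# Sanity check: with no repair (mu = 0) the time to failure is two
# exponential stages, 1/(2*lam) + 1/lam.
```

With lam = 1 and mu = 0 this yields 1.5; adding repair (mu > 0) lengthens the MTTF, as expected.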

23. System Reliability
- Related to reliability measures, the expected number of catastrophic events C(t) in [0, t) is important (reward assignment for predicting the number of catastrophic incidents).

24. System and Task Performance
- Definition: performance is a measure of responsiveness.
- The use of reward rates is not restricted to availability, reliability, and performability models.
- The concept can also be used in pure (failure-free) performance models (e.g., throughput, response time, utilization, total task loss probability).

25. System and Task Performance
- Per-state loss values are used to characterize the percentage of tasks arriving at the system in each state that are lost (reward rates for computing the total loss probability; reward rates for throughput).

26. System and Task Performance
- The expected total loss probability in steady state, TLP, and in the transient case, TLP(t), follow as the expected steady-state and instantaneous reward rates under the loss reward assignment.
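A sketch of the total loss probability for an assumed M/M/1/K queue (a hypothetical instance; the slides do not specify the queue): tasks are lost only in the buffer-full state, so the per-state loss values are q_K = 1 and q_i = 0 otherwise, and TLP = sum_i q_i pi_i.

```python
# Sketch: steady-state total loss probability TLP = sum_i q_i * pi_i for an
# assumed M/M/1/K queue; all arrivals are lost only when the buffer is full.

def total_loss_probability(lam, mu, K):
    rho = lam / mu
    w = [rho ** k for k in range(K + 1)]   # unnormalized pi_k
    Z = sum(w)
    pi = [x / Z for x in w]
    q = [0.0] * K + [1.0]                  # per-state loss probabilities
    return sum(qi * p for qi, p in zip(q, pi))

TLP = total_loss_probability(1.0, 1.0, 4)  # rho = 1: pi is uniform over 5 states
```

With rho = 1 and K = 4 the steady-state distribution is uniform, so TLP = pi_4 = 0.2.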

27. System and Task Performance
- Throughput is obtained by assigning, as reward rates, the state transition rates corresponding to departures from a queue (service completions).
- Mean response time is obtained (via Little's law) from the mean number of customers, which uses the number of customers present in a state as its reward rate.
- Utilization is based on a binary reward structure: if a particular resource is occupied in a given state, reward rate 1 is assigned; otherwise reward rate 0 indicates the idleness of the resource.

28. System and Task Performance
- Imagine customers arriving at a single-server system with arrival rate lambda and service rate mu.
- (Tables: mean number of customers reward rates; throughput reward rates; utilization reward rates.)

29. Performance Measures
- Throughput
- Mean number of customers
- Mean response time (use Little's law)
- Utilization
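These four measures can be sketched for the single-server (M/M/1) system by truncating the state space at a large N (an implementation assumption) and applying the reward assignments above; they match the closed forms throughput = lambda, mean customers = rho/(1-rho), utilization = rho, and response time = 1/(mu-lambda).

```python
# Sketch: M/M/1 performance measures via reward rates on a truncated chain.
# State k holds k customers; pi_k is proportional to rho**k (rho = lam/mu < 1).

def mm1_measures(lam, mu, N=2000):
    rho = lam / mu
    w = [rho ** k for k in range(N + 1)]
    Z = sum(w)
    pi = [x / Z for x in w]
    throughput = sum(mu * pi[k] for k in range(1, N + 1))  # reward mu when busy
    mean_n = sum(k * pi[k] for k in range(N + 1))          # reward k in state k
    util = 1.0 - pi[0]                                     # binary reward
    resp = mean_n / throughput                             # Little's law
    return throughput, mean_n, util, resp

X, Ln, U, W = mm1_measures(0.5, 1.0)
```

For lam = 0.5 and mu = 1 (rho = 0.5), the truncation error at N = 2000 is negligible and the four values agree with the closed forms 0.5, 1.0, 0.5, and 2.0.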

30. Conclusion
- An MRM is a state-space model.
- MRMs are more useful than plain CTMCs for obtaining performance and dependability measures.
- Reward rates are assigned based on system requirements.
- The structure of the reward rates can vary (it is often binary).
- Stochastic reward nets (SRNs) are an extension of SPNs that assigns reward rates to transitions.

31. References
- G. Bolch et al., Queueing Networks and Markov Chains, 2nd ed., John Wiley and Sons, 2006.
- J. C. Laprie, Fundamental Concepts of Dependability, IEEE Transactions, 2004.
- K. Trivedi, Probability and Statistics with Reliability, Queuing, and Computer Science Applications, 2nd ed., John Wiley and Sons, New York, 2001.
- B. Haverkort et al., Performability Modelling, John Wiley, 2001.

