
Probability and Statistics with Reliability, Queuing and Computer Science Applications: Chapter 1 Introduction.


1 Probability and Statistics with Reliability, Queuing and Computer Science Applications: Chapter 1 Introduction

2 Sample Space Probability implies random experiments.
A random experiment can have many possible outcomes; each outcome, known as a sample point (a.k.a. elementary event), has some probability assigned. This assignment may be based on measured data or guesstimates ("equally likely" is a convenient and often-made assumption). Sample space S: the set of all possible outcomes (elementary events) of a random experiment. The sample space may be: Finite (e.g., an if statement execution; two outcomes) Countable (e.g., the number of times a while statement is executed; countably many outcomes) Continuous (e.g., the time to failure of a component)

3 Events An event E is a collection of zero or more sample points from S.
S is the universal event and ∅ is the null (impossible) event. S and E are sets, so set operations apply. For example, the complement of E1 is E'1 = {x | x ∈ S and x ∉ E1}.

4 Algebra of events The sample space is a set and events are subsets of this (universal) set. Use set algebra and its laws (p. 9 of the text). Mutually exclusive (disjoint) events have no sample points in common.

5 Probability axioms For any event E, 0 ≤ P(E) ≤ 1; P(S) = 1; and for mutually exclusive events E1 and E2, P(E1 ∪ E2) = P(E1) + P(E2) (see the text for additional relations).

6 Combinatorial problems
Deal with counting the number of sample points in the event of interest. Assume equally likely sample points: P(E) = (number of sample points in E) / (number in S). Example: two successive executions of an if statement. S = {(T,T), (T,E), (E,T), (E,E)} = {s1, s2, s3, s4}. P(s1) = P(s2) = P(s3) = P(s4) = 0.25 (equally likely assumption). E1: at least one execution of the then clause = {s1, s2, s3}. E2: exactly one execution of the else clause = {s2, s3}. P(E1) = 3/4; P(E2) = 1/2.
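The slide's equally-likely calculation can be checked with a short enumeration (a minimal sketch; the names S, E1, E2 follow the slide):

```python
from itertools import product

# Two successive executions of an if statement:
# each execution takes the then (T) or the else (E) branch.
S = list(product("TE", repeat=2))          # [('T','T'), ('T','E'), ('E','T'), ('E','E')]
p = {s: 1 / len(S) for s in S}             # equally-likely assumption

E1 = [s for s in S if "T" in s]            # at least one then-clause execution
E2 = [s for s in S if s.count("E") == 1]   # exactly one else-clause execution

P_E1 = sum(p[s] for s in E1)               # 3/4
P_E2 = sum(p[s] for s in E2)               # 1/2
```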

7 Conditional probability
In some experiments, prior information may be available, e.g.: What is the probability that the Blue Devils will win the opening game, given that they were the 2000 national champions? P(A|B): the probability that A occurs, given that B has occurred. In general, P(A|B) = P(A ∩ B) / P(B), provided P(B) > 0.

8 Mutual Independence A and B are said to be (mutually) independent iff P(A ∩ B) = P(A) P(B).
It then also follows that P(A|B) = P(A) and P(B|A) = P(B).

9 Independence vs. Exclusiveness
The probability of the union of mutually exclusive events is the sum of their probabilities: P(A ∪ B) = P(A) + P(B), while the probability of the intersection of two independent events is the product of their probabilities: P(A ∩ B) = P(A) P(B).
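The distinction can be verified numerically on a fair die (an illustrative example; the events A, B, C are mine, not from the slides):

```python
from fractions import Fraction

DIE = {1, 2, 3, 4, 5, 6}

def P(event):
    """Probability of an event under equally likely faces of a fair die."""
    return Fraction(len(event), len(DIE))

A = {1, 2}
B = {3, 4}        # A and B are mutually exclusive (disjoint)
C = {2, 4, 6}     # "even face": A and C happen to be independent

# Mutually exclusive: probability of the union is the SUM
assert P(A | B) == P(A) + P(B)
# Independent: probability of the intersection is the PRODUCT
assert P(A & C) == P(A) * P(C)
# Exclusive events are maximally dependent: their intersection is impossible
assert P(A & B) == 0
```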

10 Independent set of events
A set of n events {A1, A2, .., An} is mutually independent iff, for each subset {Ai1, .., Aik} with 2 ≤ k ≤ n, P(Ai1 ∩ .. ∩ Aik) = P(Ai1) ··· P(Aik). Complements of such events also satisfy these relations. The product condition must hold at every level: k = 2 gives nC2 distinct pairs, k = 3 gives nC3 distinct triplets, and so on, each of which must satisfy it. Pairwise independence (k = 2 only) does not imply mutual independence.

11 Reliability Block Diagrams

12 Reliability Block Diagrams (RBDs)
A schematic representation or model that shows the reliability structure (logic) of a system. It can be used to determine whether the system is operating or failed, given the information on whether each block is in an operating or failed state. A block can be viewed as a "switch" that is "closed" when the block is operating and "open" when the block is failed. The system is operational if a path of "closed switches" is found from the input to the output of the diagram.

13 Reliability Block Diagrams: RBDs
A combinatorial (non-state-space) model type. Each component of the system is represented as a block; system behavior is represented by connecting the blocks. Blocks that are all required are connected in series. Blocks among which only one is required are connected in parallel. When at least k out of n are required, use a k-of-n structure. Failures of individual components are assumed to be independent for easy solution. For a series-parallel RBD with independent components, use series-parallel reductions to obtain the final answer.

14 Series-Parallel Reliability Block Diagrams (RBDs)

15 Series system Series system: n statistically independent components.
Let Ri = P(Ei). Then the series system reliability is Rs = R1 R2 ··· Rn. For now, reliability is simply a probability; later it will be a function of time.

16 Series system (Continued)
Rs = R1 R2 ··· Rn. This simple PRODUCT LAW OF RELIABILITIES is applicable to series systems of independent components.

17 Series system (Continued)
Assuming independent repair, we have the product law of availabilities: As = A1 A2 ··· An.

18 Parallel system A system consisting of n independent parallel components. The system fails to function iff all n components fail. Ei = "component i is functioning properly"; Ep = "the parallel system of n components is functioning properly"; Rp = P(Ep).

19 Parallel system (Continued)
Therefore: 1 − Rp = (1 − R1)(1 − R2) ··· (1 − Rn), i.e., Rp = 1 − ∏(1 − Ri).

20 Parallel system (Continued)
1 − Rp = (1 − R1)(1 − R2) ··· (1 − Rn). Parallel systems of independent components follow the PRODUCT LAW OF UNRELIABILITIES.

21 Parallel system (Continued)
Assuming independent repair, we have the product law of unavailabilities: 1 − Ap = (1 − A1)(1 − A2) ··· (1 − An).
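The two product laws can be written as two small helpers (a minimal sketch; the function names are mine):

```python
from math import prod

def series_rel(r):
    """Product law of reliabilities: a series system works iff every block works."""
    return prod(r)

def parallel_rel(r):
    """Product law of unreliabilities: a parallel system fails iff every block fails."""
    return 1 - prod(1 - ri for ri in r)
```

The same two functions give the availability laws when per-block availabilities Ai are passed instead of reliabilities.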

22 Series-Parallel System
Series-parallel system: n series stages, the ith with ni parallel components. Assuming that all the parallel components in the ith stage have the same reliability Ri, the series-parallel system reliability is Rsp = ∏i [1 − (1 − Ri)^ni].

23 Series-Parallel system (example)
Example: 2 Control and 3 Voice Channels. [RBD: two control channels in parallel, in series with three voice channels in parallel]

24 Series-Parallel system (Continued)
Each control channel has reliability Rc; each voice channel has reliability Rv. The system is up if at least one control channel and at least one voice channel are up. Reliability: R = [1 − (1 − Rc)^2][1 − (1 − Rv)^3].
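Evaluating the formula numerically (a sketch; the function name and the sample values Rc = Rv = 0.9 are illustrative):

```python
def control_voice_rel(Rc, Rv):
    """2 control channels in parallel, in series with 3 voice channels in parallel."""
    return (1 - (1 - Rc)**2) * (1 - (1 - Rv)**3)

r = control_voice_rel(0.9, 0.9)   # 0.99 * 0.999 = 0.98901
```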

25 Homework: For the following system, write
down the expression for system reliability, assuming that block i has failure probability qi. [RBD with blocks A, B, C, D, E]

26 Non-Series-Parallel Systems

27 Methods for non-series-parallel RBDs
State enumeration (Boolean truth table). Factoring or conditioning (implemented in SHARPE). First find minpaths, then apply: inclusion/exclusion (Relation Rd on p. 15 of the text); SDP (Sum of Disjoint Products; Relation Re on p. 16 of the text; implemented in SHARPE); or BDD (Binary Decision Diagram; implemented in SHARPE).

28 Non-series-parallel RBD: Bridge with Five Components
[Bridge network from source S to terminal T: components 1 and 2 form the upper path, 4 and 5 the lower path, and component 3 bridges the two paths]

29 Truth Table for the Bridge
[Truth table enumerating all 2^5 = 32 states of components 1–5, marking for each state whether the system is up, together with the state's probability]

30 Truth Table for the Bridge
[Truth table, continued; the bridge reliability is the sum of the probabilities of the up states]

31 Bridge Reliability From the truth table: summing the probabilities of all operational states yields the bridge reliability.

32 Conditioning & The Theorem of Total Probability
Partition the sample space into mutually exclusive and exhaustive events B1, B2, …. Then for any event A, P(A) = Σi P(A|Bi) P(Bi). P(Bi|A) is the a-posteriori probability: if A is the observed radar signal and Bi is the target-present event, then P(Bi|A) is the probability of detection AFTER observing the signal.

33 Example Binary communication channel
Given: P(R0|T0) = 0.92; P(R1|T1) = 0.95; P(T0) = 0.45; P(T1) = 0.55. Hence P(R0|T1) = 0.05 and P(R1|T0) = 0.08. P(R0) = P(R0|T0) P(T0) + P(R0|T1) P(T1) (TTP) = 0.92 × 0.45 + 0.05 × 0.55 = 0.4415. Probability of error: P(E) = P(R0|T1) P(T1) + P(R1|T0) P(T0) = 0.05 × 0.55 + 0.08 × 0.45 = 0.0635.
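The channel numbers can be reproduced directly with the theorem of total probability (variable names are mine):

```python
# Given channel parameters
p_R0_T0, p_R1_T1 = 0.92, 0.95     # correct-reception probabilities
p_T0, p_T1 = 0.45, 0.55           # source symbol probabilities
p_R1_T0 = 1 - p_R0_T0             # 0.08, bit flipped 0 -> 1
p_R0_T1 = 1 - p_R1_T1             # 0.05, bit flipped 1 -> 0

# Theorem of total probability: probability a 0 is received
p_R0 = p_R0_T0 * p_T0 + p_R0_T1 * p_T1    # 0.4415

# Probability the received symbol differs from the transmitted one
p_err = p_R0_T1 * p_T1 + p_R1_T0 * p_T0   # 0.0635
```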

34 Bridge Reliability using conditioning/factoring

35 Bridge: Conditioning
Factor (condition) on C3. [Non-series-parallel block diagram from S to T. With C3 down, the diagram reduces to the series path 1–2 in parallel with the series path 4–5. With C3 up, it reduces to (1 parallel 4) in series with (2 parallel 5).]

36 Bridge (Continued) Component C3 is chosen to factor on (condition on). Upper resulting block diagram: C3 is down. Lower resulting block diagram: C3 is up. Series-parallel reliability formulas are applied to both resulting block diagrams; the theorem of total probability then gives the final result.

37 Bridge (Continued) RC3down = 1 − (1 − R1R2)(1 − R4R5)
RC3up = (1 − Q1Q4)(1 − Q2Q5) = [1 − (1 − R1)(1 − R4)] [1 − (1 − R2)(1 − R5)]
Rbridge = (1 − R3) · RC3down + R3 · RC3up
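The factoring result can be cross-checked against a brute-force truth-table enumeration (a sketch; function names are mine):

```python
from itertools import product

def state_prob(state, R):
    """Probability of one up/down assignment of the five components."""
    p = 1.0
    for up, r in zip(state, R):
        p *= r if up else 1 - r
    return p

def bridge_enumeration(R):
    """Sum the probabilities of every state with an S-T path (truth-table method)."""
    total = 0.0
    for x1, x2, x3, x4, x5 in product([0, 1], repeat=5):
        # Minpaths: 1-2, 4-5, 1-3-5, 4-3-2
        up = (x1 and x2) or (x4 and x5) or (x1 and x3 and x5) or (x4 and x3 and x2)
        if up:
            total += state_prob((x1, x2, x3, x4, x5), R)
    return total

def bridge_factoring(R):
    """Condition on component 3, then apply series-parallel formulas."""
    R1, R2, R3, R4, R5 = R
    r_c3_down = 1 - (1 - R1 * R2) * (1 - R4 * R5)
    r_c3_up = (1 - (1 - R1) * (1 - R4)) * (1 - (1 - R2) * (1 - R5))
    return (1 - R3) * r_c3_down + R3 * r_c3_up
```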

38 Fault Trees A combinatorial (non-state-space) model type.
Components are represented as nodes. Components or subsystems in series are connected to OR gates. Components or subsystems in parallel are connected to AND gates. Components or subsystems in a k-of-n (RBD) structure are connected to an (n−k+1)-of-n gate.

39 Fault Trees (Continued)
Failure of a component or subsystem causes the corresponding input to the gate to become TRUE. Whenever the output of the topmost gate becomes TRUE, the system is considered failed. Extensions to fault trees include a variety of different gates: NOT, EXOR, priority AND, cold spare gate, functional dependency gate, sequence enforcing gate.

40 Fault Tree: without repeated events or with repeated events
Reliability of series-parallel or non-series-parallel systems may be modeled using a fault tree. State vector X = {x1, x2, …, xn} and structure function φ(X) ∈ {0, 1}.

41 Fault Tree Without Repeated Events
2 Control and 3 Voice Channels example. [Fault tree: top OR gate fed by an AND gate over control channels c1, c2 and an AND gate over voice channels v1, v2, v3.] The structure function and the reliability of the system follow from the tree.

42 Another Fault tree (w/o repeated events)
Example with subsystems DS1, DS2, DS3, NIC1, NIC2, and CPU. Using the failure functions and reliability functions of the different subsystems, R can be computed.

43 2 control and 3 voice channels example with Fault Tree
Change the problem so that a control channel can also function as a voice channel We need to use a fault tree with repeated events to model the reliability of the system

44 Fault tree with repeated events

45 2 Proc 3 Mem Fault Tree
A fault tree is specialized for dependability analysis: it represents all sequences of individual component failures that cause system failure in a tree-like structure. Top event: system failure. Gates: AND, OR, (NOT), k-of-N. Input of a gate: a component (1 for failure, 0 for operational) or the output of another gate. Basic components and repeated components. [Fault tree example: top OR gate ("failure") fed by an AND gate over processors p1, p2 and an AND gate over memories m1, m2, m3.]

46 Fault Tree (Cont.) For a fault tree without repeated nodes:
We can map the fault tree into an RBD and use the RBD algorithm to compute reliability. Mapping: AND gate → parallel system; OR gate → series system; k-of-n gate → (n−k+1)-of-n system. For a fault tree with repeated nodes: factoring algorithm, SDP algorithm, or BDD algorithm.

47 Factoring Algorithm for Fault Tree
Basic idea: condition on a repeated component. [Example: the 2-proc/3-mem fault tree is factored on m3, yielding one simplified tree for "m3 has failed" and another for "m3 has not failed".]

48 Fault tree (Continued)
Major characteristics: Fault trees without repeated events can be solved in linear time. Fault trees with repeated events have theoretical complexity exponential in the number of components. Approaches: find all minimal cut-sets and then use sum of disjoint products to compute reliability; use factoring (conditioning); use the BDD approach. Fault trees with hundreds of components can be solved.

49 Bernoulli Trial(s) A random experiment with outcomes 1/0, T/F, Head/Tail, etc.
Two outcomes on each trial. Successive trials are independent. The probability of success does not change from trial to trial. A sequence of Bernoulli trials: n independent repetitions, e.g., n consecutive executions of an if-then-else statement. Sn: sample space of n Bernoulli trials. For S1: S1 = {0, 1}.

50 Bernoulli Trials (contd.)
Problem: assign probabilities to points in Sn. P(s): probability of k successes followed by (n − k) failures = p^k (1 − p)^(n−k). What about any arrangement of k successes out of n? There are C(n, k) such sequences, each with the same probability.
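The two probabilities can be sketched as two helpers (function names are mine):

```python
from math import comb

def p_one_sequence(k, n, p):
    """Probability of one particular arrangement of k successes and n-k failures."""
    return p**k * (1 - p)**(n - k)

def p_any_k(k, n, p):
    """Probability of k successes in any order: C(n,k) equally likely arrangements."""
    return comb(n, k) * p_one_sequence(k, n, p)
```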

51 Bernoulli Trials (contd.)
The probability of at least k successes in n trials is Σ_{i=k..n} C(n, i) p^i (1 − p)^(n−i). With component reliability as the success probability: k = n gives a series system, k = 1 a parallel system.

52 Homework Consider a 2-out-of-3 system
Write down the expression for its reliability, assuming that the reliability of each individual component is R. Find the conditions under which RTMR is larger than R.

53 Homework: The probability of error in the transmission of a bit over a communication channel is p = 10^−4. What is the probability of more than three errors in transmitting a block of 1,000 bits?

54 Homework: Consider a binary communication channel transmitting coded words of n bits each. Assume that the probability of successful transmission of a single bit is p (and the probability of an error is q = 1 − p), and the code is capable of correcting up to e (where e > 0) errors. For example, if no coding or parity checking is used, then e = 0. If a single-error-correcting Hamming code is used, then e = 1. If we assume that the transmissions of successive bits are independent, give the probability of successful word transmission.

55 Homework : Assume that the probability of successful transmission of a single bit over a binary communication channel is p. We desire to transmit a four-bit word over the channel. To increase the probability of successful word transmission, we may use 7-bit Hamming code (4 data bits + 3 check bits). Such a code is known to be able to correct single-bit errors. Derive the probabilities of successful word transmission under the two schemes, and derive the condition under which the use of Hamming code will improve performance.

56 K-of-N System in RBD System consisting of n independent components
System is up when k or more components are operational. Identical K-of-N system: each component has the same failure and/or repair distribution Non-identical K-of-N system: each component may have different failure and/or repair distributions

57 Nonhomogeneous Bernoulli Trials
Success probability for the ith trial = pi. Example: Ri = reliability of the ith component. Non-homogeneous case: n parallel components such that k or more out of n are working. The system reliability sums, over all choices i1 < i2 < … < im with k ≤ m ≤ n, the probability that exactly the components i1, …, im are working and the remaining ones have failed; equivalently, it is 1 minus the probability that more than n − k components have failed.

58 Reliability for Non-identical K-of-N System
The reliability of a non-identical k-of-n system, where ri is the reliability of component i, is: R = Σ over all subsets W of {1, .., n} with |W| ≥ k of ∏_{i∈W} ri ∏_{i∉W} (1 − ri).
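For modest n the formula can be evaluated by direct enumeration of component states (a sketch; exponential in n, so for illustration only):

```python
from itertools import product

def k_of_n_rel(k, r):
    """Reliability of a non-identical k-of-n system with component reliabilities r."""
    n = len(r)
    total = 0.0
    for state in product([0, 1], repeat=n):
        if sum(state) >= k:                  # at least k components up
            p = 1.0
            for up, ri in zip(state, r):
                p *= ri if up else 1 - ri
            total += p
    return total
```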

59 BTS Sector/Transmitter Example

60 BTS Sector/Transmitter Example
Three RF carriers (transceiver + power amp) on two antennas. Path 1: Transceiver 1 + Power Amp 1 (XCVR 1) → 2:1 Combiner → Duplexer 1. Path 2: Transceiver 2 + Power Amp 2 (XCVR 2) → 2:1 Combiner → Duplexer 1. Path 3: Transceiver 3 + Power Amp 3 (XCVR 3) → Pass-Thru → Duplexer 2. At least two functional transmitter paths are needed to meet demand (system available). Failure of the 2:1 Combiner or Duplexer 1 disables both Path 1 and Path 2.

61 Methodology Reliability Block Diagram Factoring
Fault tree with repeated events (later)

62 We use Factoring If either the 2:1 Combiner or Duplexer 1 fails, then the system is down. If the 2:1 Combiner and Duplexer 1 are both up, the system availability is given by the RBD: a 2-out-of-3 structure over XCVR1, XCVR2, and XCVR3 + Pass-Thru + Duplexer 2.

63 Hence the overall system availability is captured by the RBD: the 2:1 Combiner and Duplexer 1 in series with a non-identical 2-out-of-3 structure over XCVR1, XCVR2, and XCVR3 + Pass-Thru + Duplexer 2.

64 BTS Sector/Transmitter Example Revisited as a fault tree

65 [Fault tree for the BTS sector/transmitter example]

66 Homework: Solve for the bridge reliability
using minpaths followed by inclusion/exclusion, and then using SDP.

67 Generalized Bernoulli Trials
In contrast to a (binary) Bernoulli trial, each trial has exactly k possible outcomes b1, b2, .., bk, with pi the probability that the outcome of a trial is bi. The outcome of a typical experiment of n trials is a sequence s in which bi occurs ni times; P(s) = p1^n1 p2^n2 ··· pk^nk. Applications: multistate devices such as diode networks and the VAXcluster.

68 Probability and Statistics with Reliability, Queuing and Computer Science Applications: Chapter 2 on Discrete Random Variables
Dept. of Electrical & Computer Engineering, Duke University 2/5/2018

69 Random Variables The sample space is often too large to deal with directly
Recall that in a sequence of Bernoulli trials, if we don't need detailed information about the actual pattern of 0's and 1's but only the number of 0's and 1's, we can reduce the sample space from size 2^n to size (n + 1). Such abstractions lead to the notion of a random variable, which may be integer- or real-valued. Examples of discrete rvs: the number of job or packet arrivals in unit time, the number of rainy days in a year, etc. Bharat B. Madan, Department of Electrical and Computer Engineering, Duke University

70 Discrete Random Variables
A random variable (rv) X is a mapping (function) from the sample space S to the set of real numbers. If image(X) is finite or countably infinite, X is a discrete rv. The inverse image of a real number x is the set of all sample points that are mapped by X into x: X⁻¹(x) = {s ∈ S | X(s) = x}. It is easy to see that these inverse images partition S.

71 Probability Mass Function (pmf)
Ax: the set of all sample points s such that X(s) = x. The pmf is pX(x) = P(X = x) = P(Ax). The pmf may also be called the discrete density function.

72 pmf Properties pX(x) ≥ 0 for all x. Since a discrete rv X takes a finite or countably infinite set of values, the pmf also satisfies Σi pX(xi) = 1, where the sum runs over all values of X.

73 Distribution Function
The pmf is defined for a specific rv value, i.e., the probability of a single point. The Cumulative Distribution Function (CDF) gives the probability of a set: FX(t) = P(X ≤ t) = Σ_{x ≤ t} pX(x).

74 Distribution Function properties
0 ≤ FX(x) ≤ 1; FX is non-decreasing and right-continuous; FX(x) → 0 as x → −∞ and FX(x) → 1 as x → +∞.

75 Discrete Random Variables
Equivalence: probability mass function, discrete density function. For an integer-valued random variable: cdf: FX(k) = Σ_{i ≤ k} pX(i); pmf: pX(k) = FX(k) − FX(k − 1).

76 Common discrete random variables
Constant Uniform Bernoulli Binomial Geometric Poisson

77 Constant Random Variable
pmf: pX(c) = 1. CDF: FX(x) = 0 for x < c, and 1 for x ≥ c.

78 Discrete Uniform Distribution
A discrete rv X that assumes n discrete values with equal probability 1/n. Discrete uniform pmf: pX(xi) = 1/n for i = 1, .., n. Discrete uniform distribution function: FX(x) = (number of xi ≤ x)/n.

79 Bernoulli Random Variable
An rv generated by a single Bernoulli trial with a binary-valued outcome {0, 1}. Such a binary-valued random variable X is called the indicator or Bernoulli random variable. Probability mass function: pX(1) = p, pX(0) = q = 1 − p.

80 Bernoulli Distribution
CDF (with p + q = 1): FX(x) = 0 for x < 0; q for 0 ≤ x < 1; 1 for x ≥ 1.

81 Binomial Random Variable
A binomial rv arises from a fixed number n of Bernoulli trials (BTs). RV Yn: the number of successes in n BTs. Binomial pmf: b(k; n, p) = C(n, k) p^k (1 − p)^(n−k). Binomial CDF: B(k; n, p) = Σ_{i ≤ k} b(i; n, p). Notes: the p^k term signifies k successes; C(n, k) counts the ways in which k 1's may appear in an n-long sequence of 1's and 0's, e.g., (0,0,1,0,1,1,1,0,0,1,0,0,…,1). Important: each trial is assumed to be independent. Example 2.3 notes: 3 Bernoulli trials with p = 0.5 have 8 possible outcomes {000, 001, 010, 100, 011, 101, 110, 111}, each of probability 0.125. FX(0) → 0 successes: P(000) = 0.125. FX(1) → at most 1 success: 0.125 + 3 × 0.125 = 0.5. FX(2) → at most 2 successes: 1 − P(111) = 0.875. FX(3) → 1. The binomial is symmetric for p = 0.5, positively skewed for p < 0.5, and negatively skewed for p > 0.5. As the number of trials n increases (to infinity), B(k; n, p) can be approximated by a normal (Gaussian) distribution.
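The Example 2.3 numbers (3 trials, p = 0.5) follow from a direct pmf/CDF computation; note FX(2) = 1 − P(111) = 0.875:

```python
from math import comb

def binom_pmf(k, n, p):
    """b(k; n, p) = C(n,k) p^k (1-p)^(n-k)."""
    return comb(n, k) * p**k * (1 - p)**(n - k)

def binom_cdf(k, n, p):
    """B(k; n, p): probability of at most k successes."""
    return sum(binom_pmf(i, n, p) for i in range(k + 1))

# Example 2.3: three Bernoulli trials with p = 0.5
F = [binom_cdf(k, 3, 0.5) for k in range(4)]   # [0.125, 0.5, 0.875, 1.0]
```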

82 Binomial Random Variable
In fact, the number of successes in n Bernoulli trials can be seen as the sum of the number of successes in each trial: Yn = X1 + X2 + … + Xn, where the Xi's are independent, identically distributed Bernoulli random variables.

83 Binomial Random Variable: pmf
[Plot of the binomial pmf pk versus k]

84 Binomial Random Variable: CDF

85 Applications of the binomial
Reliability of a k-out-of-n system with identical components: R = Σ_{i=k..n} C(n, i) R^i (1 − R)^(n−i). Series system: k = n, giving R^n. Parallel system: k = 1, giving 1 − (1 − R)^n.

86 Applications of the binomial
Transmitting an LLC frame using MAC blocks p is the prob. of correctly transmitting one block Let pK(k) be the pmf of the rv K that is the number of LLC transmissions required to transmit n MAC blocks correctly; then

87 Geometric Distribution
The number of trials up to and including the 1st success. Here S has countably infinite size and Z has image {1, 2, 3, …}. Because of independence, pZ(k) = (1 − p)^(k−1) p. That is, count the number of trials until the 1st success occurs; a typical outcome is {00001…}.

88 Geometric Distribution (contd.)
The geometric distribution is the only discrete distribution that exhibits the MEMORYLESS property: future outcomes are independent of past events. Suppose n trials have completed, all failures, and Y additional trials are performed before success, i.e., Z = n + Y, or Y = Z − n. Then, conditioned on Z > n, the number of trials remaining until the 1st success, Y = Z − n, has the same pmf as Z had originally. In other words, the system does not remember how many failures it has already encountered.
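The memoryless property can be verified numerically (a sketch; the values p = 0.3 and n = 5 are illustrative):

```python
def geom_pmf(k, p):
    """P(Z = k): first success on trial k, k = 1, 2, ..."""
    return (1 - p)**(k - 1) * p

def geom_tail(n, p):
    """P(Z > n): the first n trials are all failures."""
    return (1 - p)**n

# Memoryless: P(Z = n + j | Z > n) equals the original pmf P(Z = j)
p, n = 0.3, 5
for j in range(1, 20):
    cond = geom_pmf(n + j, p) / geom_tail(n, p)
    assert abs(cond - geom_pmf(j, p)) < 1e-12
```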

89 Geometric Distribution (contd.)
Z rv: the total number of trials up to and including the 1st success, pZ(k) = (1 − p)^(k−1) p, k = 1, 2, …. Modified geometric pmf: does not include the successful trial, i.e., Z = X + 1. Then X is a modified geometric random variable with pX(k) = (1 − p)^k p, k = 0, 1, 2, ….

90 Applications of the geometric
The number of times the following statement is executed: "repeat S until B" is geometrically distributed, assuming …. "while B do S" is modified geometrically distributed, assuming ….

91 Negative Binomial Distribution
RV Tr: the number of trials until the rth success. Image of Tr = {r, r+1, r+2, …}. Define events: A: Tr = n; B: exactly r−1 successes in n−1 trials; C: the nth trial is a success. Clearly A = B ∩ C, and since B and C are independent, P(A) = P(B) P(C) = C(n−1, r−1) p^r (1 − p)^(n−r). We can also define the modified negative binomial distribution: the resulting rv counts just the number of failures until the rth success. Event A becomes "Tr = n + r", the event that there are n failures; event B: exactly r−1 successes in n+r−1 trials.

92 Poisson Random Variable
An rv such as the "number of arrivals in an interval (0, t)". Divide the interval (0, t) into n subintervals, each of length t/n. For sufficiently large n, these n subintervals can be thought of as a sequence of Bernoulli trials with success probability p = λt/n, so in a small interval Δt the probability of a new arrival is λΔt. The problem then becomes one of Bernoulli trials and the binomial distribution: the probability of k arrivals in a total of n intervals each of duration t/n is b(k; n, λt/n), with CDF B(k; n, λt/n). What happens when n → ∞?
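The limit can be checked numerically: for large n, b(k; n, λt/n) approaches the Poisson pmf (a sketch; λt = 4 and n = 100000 are illustrative):

```python
from math import comb, exp, factorial

def binom_pmf(k, n, p):
    return comb(n, k) * p**k * (1 - p)**(n - k)

def poisson_pmf(k, lam_t):
    """P(N = k) = e^(-lambda t) (lambda t)^k / k!"""
    return exp(-lam_t) * lam_t**k / factorial(k)

lam_t, k, n = 4.0, 3, 100_000
approx = binom_pmf(k, n, lam_t / n)   # Bernoulli-trials view with p = lambda*t/n
exact = poisson_pmf(k, lam_t)         # limiting Poisson probability
```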

93 Poisson Random Variable (contd.)
Poisson Random Variable often occurs in situations, such as, “no. of packets (or calls) arriving in t sec.” or “no. of components failing in t hours” etc.

94 Poisson Failure Model Let N(t) be the number of (failure) events that occur in the time interval (0, t). Then a (homogeneous) Poisson model for N(t) assumes: 1. The probability mass function (pmf) of N(t) is P[N(t) = k] = e^(−λt) (λt)^k / k!, where λ > 0 is the expected number of event occurrences per unit time. 2. The numbers of events in two non-overlapping intervals are mutually independent.

95 Note: For a fixed t, N(t) is a random variable (in this case a discrete random variable known as the Poisson random variable). The family {N(t), t ≥ 0} is a stochastic process, in this case the homogeneous Poisson process.

96 Poisson Failure Model (cont.)
The successive interevent times X1, X2, … in a homogeneous Poisson model are mutually independent and have a common exponential distribution: P(X1 ≤ x) = 1 − e^(−λx). To show this: P(X1 > x) = P(N(x) = 0) = e^(−λx). Thus the discrete random variable N(t), with the Poisson distribution, is related to the continuous random variable X1, which has an exponential distribution. The mean interevent time is 1/λ, which in this case is the mean time to failure.

97 Poisson Distribution Probability mass function (pmf) (or discrete density function): pk = e^(−λt) (λt)^k / k!. Distribution function (CDF): F(k) = Σ_{i=0..k} e^(−λt) (λt)^i / i!.

98 Poisson pmf pk t=1.0 Bharat B. Madan, Department of Electrical and Computer Engineering, Duke University

99 Poisson CDF [Plot of the Poisson CDF, λt = 1.0]

100 Poisson pmf pk t=4.0 t=4.0 Bharat B. Madan, Department of Electrical and Computer Engineering, Duke University

101 Poisson CDF [Plot of the Poisson CDF, λt = 4.0]

102 Probability Generating Function (PGF)
Helps in dealing with operations (e.g., sums) on rvs. Letting P(X = k) = pk, the PGF of X is defined by GX(z) = Σk pk z^k. One-to-one mapping: pmf (or CDF) ↔ PGF. See page 98 for the PGF of some common pmfs. GX(z) is closely related to the z-transform (as used in digital filtering) of a discrete-time function. For |z| ≤ 1 the summation is guaranteed to converge, and GX(1) = 1.
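The key property used for sums, that the PGF of a sum of independent rvs is the product of their PGFs, can be checked for the binomial viewed as a sum of Bernoullis (a sketch; names and values are mine):

```python
from math import comb

def binom_pmf(k, n, p):
    return comb(n, k) * p**k * (1 - p)**(n - k)

def pgf(pmf_vals, z):
    """G_X(z) = sum_k p_k z^k for a finite pmf given as [p_0, p_1, ...]."""
    return sum(pk * z**k for k, pk in enumerate(pmf_vals))

n, p, z = 5, 0.3, 0.7
G_binom = pgf([binom_pmf(k, n, p) for k in range(n + 1)], z)
G_product = (1 - p + p * z)**n        # product of n identical Bernoulli PGFs
```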

103 Discrete Random Vectors
Examples: Z = X + Y (X and Y random execution times), Z = min(X, Y), Z = max(X1, X2, …, Xk). X = (X1, X2, …, Xk) is a k-dimensional rv defined on S: for each sample point s in S, X(s) = (X1(s), X2(s), …, Xk(s)).

104 Discrete Random Vectors (properties)
The joint pmf satisfies p(x1, …, xk) ≥ 0 and sums to 1 over all value combinations; marginal pmfs are obtained by summing the joint pmf over the remaining variables.

105 Independent Discrete RVs
X and Y are independent iff the joint pmf satisfies pX,Y(x, y) = pX(x) pY(y) for all (x, y). Mutual independence of a set of rvs implies the corresponding product form for every subset. Note the distinction between pairwise independence and set-wide (mutual) independence.

106 Discrete Convolution Let Z = X + Y. Then, if X and Y are independent,
pZ(z) = Σx pX(x) pY(z − x). In general, the pmf of a sum of independent rvs is the convolution of their pmfs.
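A direct implementation of the discrete convolution (a sketch; the two-dice example is illustrative):

```python
def convolve_pmf(px, py):
    """pmf of Z = X + Y for independent X, Y, pmfs given as lists indexed from 0."""
    pz = [0.0] * (len(px) + len(py) - 1)
    for i, a in enumerate(px):
        for j, b in enumerate(py):
            pz[i + j] += a * b
    return pz

# Sum of two fair dice (index 0 unused so that index = face value)
die = [0.0] + [1 / 6] * 6
two_dice = convolve_pmf(die, die)     # two_dice[7] is the most likely total
```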

107 Continuous Random Variables
Probability and Statistics with Reliability, Queuing and Computer Science Applications: Chapter 3 Continuous Random Variables

108 Definitions Distribution function:
FX(x) = P(X ≤ x). If FX(x) is a continuous function of x, then X is a continuous random variable. FX(x) discrete in x → discrete rvs; FX(x) piecewise continuous → mixed rvs. For X to qualify as an rv on the probability space (S, F, P), the probability measure P(X ≤ x) must be defined for all x; this requires that the event {s | X(s) ≤ x} belong to F. F(x) is absolutely continuous, i.e., its derivative is well defined except possibly at isolated points.

109 Definitions (Continued)
Equivalence: CDF (cumulative distribution function) PDF (probability distribution function) Distribution function FX(x) or FX(t) or F(t)

110 Probability Density Function (pdf)
If X is a continuous rv, then the pdf is f(x) = dFX(x)/dx. pdf properties: f(x) ≥ 0 and ∫ f(x) dx = 1. Sometimes we may have to deal with mixed (discrete + continuous) rvs as well; see Fig. 3.2 and understand it.

111 Definitions (Continued)
Equivalence: pdf, probability density function, density function, density. f(t) = dF(t)/dt. For a non-negative random variable, F(t) = ∫ from 0 to t of f(x) dx.

112 Exponential Distribution
Arises commonly in reliability and queuing theory. A non-negative random variable. It exhibits the memoryless (Markov) property. Related to the (discrete) Poisson distribution: the number of failures in a given interval may follow a Poisson distribution. Examples: interarrival time between two IP packets (or voice calls), time to failure, time to repair, etc. Mathematically (CDF and pdf, respectively): F(t) = 1 − e^(−λt), f(t) = λe^(−λt), t ≥ 0.

113 CDF of an exponentially distributed random variable with λ = 0.0001
[Plot of F(t) versus t]

114 Exponential Density Function (pdf)
[Plot of f(t) versus t]

115 Memoryless property Assume X > t, i.e., we have observed that the component has not failed until time t. Let Y = X − t, the remaining (residual) lifetime. The distribution of the remaining life Y does not depend on how long the component has been operating; the distribution of Y is identical to that of X.

116 Memoryless property Assume X > t, i.e., we have observed that the component has not failed until time t. Let Y = X − t, the remaining (residual) lifetime. Then Gt(y) = P(Y ≤ y | X > t) = 1 − e^(−λy).

117 Memoryless property (Continued)
Thus Gt(y) is independent of t and is identical to the original exponential distribution of X. The distribution of the remaining life does not depend on how long the component has been operating. Its eventual breakdown is the result of some suddenly appearing failure, not of gradual deterioration.
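The exponential memoryless property is easy to verify numerically (the λ, t, y values are illustrative):

```python
from math import exp

def exp_sf(t, lam):
    """Survivor function P(X > t) = e^(-lambda t) of EXP(lambda)."""
    return exp(-lam * t)

# P(X > t + y | X > t) should equal P(X > y), regardless of t
lam, t, y = 0.0001, 3000.0, 1500.0
cond = exp_sf(t + y, lam) / exp_sf(t, lam)
```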

118 Reliability as a Function of Time
Reliability R(t): the probability that failure occurs after time t. Let X be the lifetime of a component subject to failures. Let N0 be the total number of components (fixed), Ns(t) the number surviving, and Nf(t) the number failed by time t. Failure density function: f(t)Δt is the probability that the component will fail in the interval (t, t + Δt]. Instantaneous failure rate function h(t): h(t)Δt is the conditional probability that the component will fail in the interval (t, t + Δt], given that it survived until time t.

119 Definitions (Continued)
Equivalence: Reliability Complementary distribution function Survivor function R(t) = 1 -F(t)

120 Failure Rate or Hazard Rate
Instantaneous failure rate: h(t) (e.g., failures per 10k hrs). Let the rv X be EXP(λ). Then h(t) = f(t)/R(t) = λe^(−λt)/e^(−λt) = λ, a constant. Using simple calculus, h(t) = f(t)/R(t) applies to any rv, where R(t) = P(X > t). For the EXP(λ) distribution, P(X ≤ t) = 1 − e^(−λt), therefore R(t) = P(X > t) = 1 − (1 − e^(−λt)) = e^(−λt).

121 Hazard Rate and the pdf h(t)Δt = conditional probability that the system will fail in (t, t + Δt), given that it has survived until time t. f(t)Δt = unconditional probability that the system will fail in (t, t + Δt). Difference between: the probability that someone will die between 90 and 91, given that he lives to 90; and the probability that someone will die between 90 and 91.

122 Weibull Distribution Frequently used to model fatigue failure, ball bearing failure, etc. (very long tails). Reliability: R(t) = e^(−λt^α). The Weibull distribution is capable of modeling DFR (α < 1), CFR (α = 1) and IFR (α > 1) behavior. α is called the shape parameter and λ is the scale parameter. The Weibull distribution sometimes includes a third parameter, a location parameter θ that shifts the origin: F(t) = 1 − exp{−λ(t − θ)^α}.
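The three hazard-rate regimes can be checked directly from h(t) = λαt^(α−1), which follows from R(t) = e^(−λt^α) (a sketch; the parameter values are illustrative):

```python
def weibull_hazard(t, lam, alpha):
    """Hazard rate h(t) = lam * alpha * t**(alpha - 1) for R(t) = exp(-lam * t**alpha)."""
    return lam * alpha * t**(alpha - 1)

# alpha < 1: decreasing failure rate (infant mortality)
# alpha = 1: constant failure rate (reduces to the exponential, h = lam)
# alpha > 1: increasing failure rate (wear-out)
```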

123 Failure rate of the Weibull distribution with various values of α and λ = 1
[Plot of h(t) for several shape parameters]

124 Infant Mortality Effects in System Modeling
Bathtub curves: early-life period, steady-state period, wear-out period. Failure rate models.

125 Bathtub Curve Until now we assumed that the failure rate of equipment is time (age) independent. In real life, variation following the bathtub shape has been observed. [Plot: failure rate λ(t) versus operating time, showing infant mortality (early-life failures), steady-state, and wear-out regions.]

126 Early-life Period Also called the infant mortality phase or reliability growth phase. Caused by undetected hardware/software defects that are being fixed, resulting in reliability growth. Can cause significant prediction errors if steady-state failure rates are used. Availability models can be constructed and solved to include this effect. The Weibull model can be used.

127 Steady-state Period Failure rate much lower than in the early-life period
Either a constant (age-independent) or slowly varying failure rate. Failures are caused by environmental shocks. The arrival process of environmental shocks can be assumed to be a Poisson process; hence the time between two shocks has the exponential distribution.

128 Wear out Period Failure rate increases rapidly with age
Properly qualified electronic hardware does not exhibit wear-out failures during its intended service life (Motorola). Applicable to mechanical and other systems. A Weibull failure model can be used.

129 Bathtub curve DFR phase: initial design, constant bug fixes (burn-in period). CFR phase: normal operational phase (useful life). IFR phase: aging behavior (wear-out phase). Figure: hazard rate h(t) vs. t, with decreasing, then constant, then increasing failure rate.

130 Failure Rate Models We use a truncated Weibull model: the infant-mortality phase is modeled by a DFR Weibull and the steady-state phase by the exponential. Figure: failure-rate multiplier vs. operating time (hrs).

131 Failure Rate Models (cont.)
This model has the form given in the text, where λss is the steady-state failure rate, α is the Weibull shape parameter, and the failure-rate multiplier is the ratio of the early-life failure rate to λss.

132 Failure Rate Models (cont.)
There are several ways to incorporate time-dependent failure rates in availability models. The easiest way is to approximate a continuous function by a decreasing step function. Figure: step-function approximation of the failure-rate multiplier vs. operating time (hrs).

133 Failure Rate Models (cont.)
Here the discrete failure-rate model is defined by:

134 Uniform Random Variable
All (pseudo-)random generators generate random deviates of the U(0,1) distribution; that is, if you generate a large number of random variables and plot their empirical distribution function, it will approach this distribution in the limit. U(a,b): the pdf is constant over the (a,b) interval and the CDF is a ramp function.

135 Uniform density

136 Uniform distribution The distribution function is given by: F(x) = 0 for x < a; F(x) = (x − a)/(b − a) for a ≤ x < b; F(x) = 1 for x ≥ b.

137 Uniform distribution (Continued)

138 HypoExponential HypoExp: multiple Exp stages in series.
A 2-stage HypoExp is denoted HYPO(λ1, λ2); the density, distribution and hazard-rate functions are given in the text. HypoExp results in IFR: the hazard rate increases from 0 toward min(λ1, λ2). Disk service time may be modeled as a 3-stage hypoexponential, as the overall time is the sum of the seek, the latency and the transfer time.
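A small numerical check of the IFR claim, using the standard closed forms for HYPO(λ1, λ2) with λ1 ≠ λ2 (the rates 1.0 and 2.0 are illustrative):

```python
import math

def hypo_pdf(t, l1, l2):
    # 2-stage hypoexponential density, valid for l1 != l2
    return l1 * l2 / (l2 - l1) * (math.exp(-l1 * t) - math.exp(-l2 * t))

def hypo_cdf(t, l1, l2):
    return 1.0 - l2 / (l2 - l1) * math.exp(-l1 * t) + l1 / (l2 - l1) * math.exp(-l2 * t)

def hazard(t, l1, l2):
    # h(t) = f(t) / R(t)
    return hypo_pdf(t, l1, l2) / (1.0 - hypo_cdf(t, l1, l2))

# hazard rate rises from near 0 toward min(l1, l2) = 1.0 -> IFR
print(hazard(0.01, 1.0, 2.0), hazard(10.0, 1.0, 2.0))
```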

139 HypoExponential used in software rejuvenation models
Preventive maintenance is useful only if the failure rate is increasing. A simple and useful model of increasing failure rate: robust state → failure-probable state (aging) → failed state. Time to failure: hypo-exponential distribution (increasing failure rate).

140 Erlang Distribution Special case of HypoExp: all stages have the same rate. [X > t] = [Nt < r] (Nt: number of stresses applied in (0, t]), and Nt is Poisson with parameter λt. This interpretation gives the distribution function; the r = 1 case of the Erlang distribution reduces to the EXP(λ) case.

141 Erlang Distribution Is used to approximate the deterministic one, since if you keep the same mean but increase the number of stages, the pdf approaches the delta function in the limit. Can also be used to approximate the uniform distribution. The r = 1 case of the Erlang distribution reduces to the EXP(λ) case.
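The concentration toward a deterministic value can be seen from the variance alone: an r-stage Erlang with rate λ has mean r/λ and variance r/λ2, so holding the mean at m (an illustrative value below) while growing r shrinks the variance as m2/r:

```python
# Keep the mean r/lam fixed at m while increasing the number of stages r:
# the r-stage Erlang variance r / lam**2 = m**2 / r shrinks toward 0,
# i.e. the pdf concentrates into an impulse at m (a deterministic value).
m = 5.0
variances = []
for r in (1, 10, 100):
    lam = r / m                    # mean r/lam stays equal to m
    variances.append(r / lam ** 2)
print(variances)  # roughly [25, 2.5, 0.25]
```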

142 probability density functions (pdf)
If we vary r keeping r/λ constant, the pdf of the r-stage Erlang approaches an impulse function at r/λ.

143 cumulative distribution functions (cdf)
And the cdf approaches a step function at r/λ. In other words, the r-stage Erlang can approximate a deterministic variable.

144 Comparison of probability density functions (pdf)

145 Comparison of cumulative distribution functions (cdf)

146 Gamma Random Variable The Gamma density function is given in the text. The Gamma distribution can capture all three failure modes: DFR (α < 1), CFR (α = 1) and IFR (α > 1). Gamma with λ = 1/2 and α = n/2 is known as the chi-square random variable with n degrees of freedom.

147 HyperExponential Distribution
Hypo or Erlang → sequential EXP() stages; alternating (parallel) EXP() stages → HyperExponential. CPU service time may be modeled as HyperExp. In the workload-based software rejuvenation model we found that the sojourn times in many workload states have this distribution.

148 Log-logistic Distribution
Log-logistic can model DFR, CFR and IFR failure behavior simultaneously, unlike the previous distributions. For κ > 1, the failure rate first increases with t (IFR); after momentarily leveling off (CFR), it decreases with time (DFR). This is known as the inverse bathtub-shaped curve. Used in modeling software reliability growth.

149 Hazard rate comparison

150 Defective Distribution
If F(t) approaches a limit c < 1 as t → ∞, the distribution is defective. Example: this defect (also known as the mass at infinity) could represent the probability (1 − c) that the program will not terminate. The continuous part can model the completion time of the program. There can also be a mass at the origin.

151 Pareto Random Variable
Also known as the power-law or long-tailed distribution. Found to be useful in modeling: CPU time consumed by a request, Web file sizes, number of data bytes in FTP bursts, thinking time of a Web browser user.

152 Gaussian (Normal) Distribution
Bell-shaped pdf – intuitively pleasing! Central Limit Theorem: the mean of a large number n of mutually independent rv's (having arbitrary distributions) approaches the Normal distribution as n → ∞. μ: mean, σ: standard deviation, σ2: variance (N(μ, σ2)); μ and σ completely describe the statistics. This is significant in statistical estimation, signal processing, communication theory, etc. In these areas we very often have to solve optimization problems that call for minimizing the variance when estimating a parameter, detecting a signal, or testing a hypothesis. By assuming the distributions to be Normal, variance-minimization (least-mean-square-error (LMSE) or minimum-mean-square-error (MMSE)) results are globally optimal.

153 Normal Distribution (contd.)
N(0,1) is called the normalized (standard) Gaussian. N(0,1) is symmetric, i.e., f(x) = f(−x), so F(−z) = 1 − F(z). The failure rate h(t) follows IFR behavior; hence N(μ, σ2) is suitable for modeling long-term wear or aging-related failure phenomena.

154 Functions of Random Variables
Often, rv's need to be transformed/operated upon. Y = Φ(X): so, what is the density of Y? Example: Y = X2. If X is N(0,1), then the above Y is also known as the χ2 distribution (with 1 degree of freedom).

155 Functions of RV’s (contd.)
If X is uniformly distributed over (0,1), then Y = −λ−1 ln(1 − X) follows the EXP(λ) distribution. Such transformations may be used to generate random variates (or deviates) with desired distributions.

156 Functions of RV’s (contd.)
Given a monotone differentiable function Φ, the above method suggests a way to obtain random variates with a desired distribution: choose Φ to be F. Since Y = F(X), FY(y) = y and Y is U(0,1). To generate a random variate x with X having the desired distribution F, generate a U(0,1) random variate y, then transform y to x = F−1(y). This inversion can be done in closed form, graphically, or using a table.
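The inverse-transform recipe above can be sketched for the exponential case, where F−1 has a closed form (λ = 2 and the sample size are illustrative choices):

```python
import math
import random

random.seed(42)
lam = 2.0
# Inverse transform: if U ~ U(0,1), then X = F^{-1}(U) = -ln(1 - U) / lam
# has the EXP(lam) distribution, since F(x) = 1 - exp(-lam * x).
samples = [-math.log(1.0 - random.random()) / lam for _ in range(100_000)]
mean = sum(samples) / len(samples)
print(mean)  # should be close to E[X] = 1/lam = 0.5
```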

157 Jointly Distributed RVs
Joint Distribution Function: Independent rv’s: iff the following holds:

158 Joint Distribution Properties

159 Joint Distribution Properties (contd)

160 Order statistics: k-of-n, TMR

161 Order Statistics: KofN
X1 ,X2 ,..., Xn iid (independent and identically distributed) random variables with a common distribution function F(). Let Y1 ,Y2 ,...,Yn be random variables obtained by permuting the set X1 ,X2 ,..., Xn so as to be in increasing order. To be specific: Y1 = min{X1 ,X2 ,..., Xn} and Yn = max{X1 ,X2 ,..., Xn}

162 Order Statistics: KofN (Continued)
The random variable Yk is called the k-th ORDER STATISTIC. If Xi is the lifetime of the i-th component in a system of n components, then: Y1 is the overall series-system lifetime; Yn denotes the lifetime of a parallel system; Yn−k+1 is the lifetime of a k-out-of-n system.

163 Order Statistics: KofN (Continued)
To derive the distribution function of Yk, we note that the probability that exactly j of the Xi's lie in (−∞, y] and (n − j) lie in (y, ∞) is:

164 Applications of order statistics
Reliability of a k-out-of-n system. Series system: Y1 = min{X1, …, Xn}; the minimum of n EXP random variables is a special case – if Xi ~ EXP(λi), then Y1 ~ EXP(Σλi). Parallel system: Yn = max{X1, …, Xn}; this (an exponentially distributed result) is not true for the parallel case.
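A quick simulation of the series-system fact Y1 = min{X1, …, Xn} ~ EXP(Σλi), so the series MTTF is 1/Σλi (the three component rates below are illustrative):

```python
import random

random.seed(1)
rates = [0.5, 1.0, 1.5]        # illustrative component failure rates
n = 100_000
# series-system lifetime is the minimum of the component lifetimes
mins = [min(random.expovariate(l) for l in rates) for _ in range(n)]
est = sum(mins) / n
print(est, 1.0 / sum(rates))   # sample mean vs. theoretical 1 / sum(lambda_i)
```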

165 Triple Modular Redundancy (TMR)
An interesting case of order statistics occurs when we consider the Triple Modular Redundant (TMR) system (n = 3 and k = 2): three units, each with reliability R(t), feed a voter. Y2 then denotes the time until the second component fails. We get:

166 TMR (Continued) Assuming that the reliability of a single component is given by R(t) = e−λt, we get:
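A minimal sketch comparing RTMR(t) = 3R2(t) − 2R3(t) against a single unit with R(t) = e−λt (λ = 1 and the mission times are illustrative); the crossover is at R = 1/2, i.e. t = ln 2 / λ:

```python
import math

def r_single(t, lam=1.0):
    # reliability of one component with EXP(lam) lifetime
    return math.exp(-lam * t)

def r_tmr(t, lam=1.0):
    # 2-out-of-3 majority voting: up if at least two of three units are up
    r = r_single(t, lam)
    return 3 * r ** 2 - 2 * r ** 3

# TMR beats a single unit for short missions (R > 1/2)...
print(r_tmr(0.2) > r_single(0.2))  # True
# ...but is worse for long missions (R < 1/2); crossover at t = ln 2 / lam
print(r_tmr(2.0) < r_single(2.0))  # True
```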

167 TMR (Continued) In the following figure, we have plotted RTMR(t) vs t as well as R(t) vs t.

168 TMR (Continued)

180 Cold standby (dynamic redundancy)
Lifetime of active unit X: EXP(λ); lifetime of spare Y: EXP(λ). Total lifetime X + Y: 2-stage Erlang. Assumptions: detection and switching are perfect; the spare does not fail.

181 Sum of RVs: Standby Redundancy
Two independent components, X and Y. Series system: Z = min(X, Y). Parallel system: Z = max(X, Y). Cold standby: the lifetime is Z = X + Y.

182 Sum of Random Variables
Z = Φ(X, Y) ((X, Y) may not be independent). For the special case Z = X + Y, the resulting pdf (assuming independence) is the convolution integral (modify for the non-negative case). If X and Y are mutually independent, the transform (LST) of the resulting pdf is just the product of the two transforms, rather than a convolution.

183 Convolution (non-negative case)
Z = X + Y, where X and Y are independent (in this case, non-negative) random variables. The above integral is often called the convolution of fX and fY. Thus the density of the sum of two non-negative, independent, continuous random variables is the convolution of the individual densities.
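A numeric sketch of this convolution: folding the EXP(1) density with itself should reproduce the 2-stage Erlang density t·e−t (trapezoidal integration; the evaluation point t = 2 is illustrative):

```python
import math

def convolve_at(f, g, t, n=2000):
    # trapezoidal approximation of (f * g)(t) = integral_0^t f(x) g(t - x) dx
    h = t / n
    s = 0.5 * (f(0.0) * g(t) + f(t) * g(0.0))
    for i in range(1, n):
        x = i * h
        s += f(x) * g(t - x)
    return s * h

exp1 = lambda x: math.exp(-x)          # EXP(1) density
approx = convolve_at(exp1, exp1, 2.0)
exact = 2.0 * math.exp(-2.0)           # 2-stage Erlang pdf t * exp(-t) at t = 2
print(approx, exact)
```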

184 Cold standby derivation
X and Y are both EXP(λ) and independent. Then:

185 Cold standby derivation (Continued)
Z is two-stage Erlang Distributed

186 Convolution: Erlang Distribution
The general case of r-stage Erlang Distribution When r sequential phases have independent identical exponential distributions, then the resulting density is known as r-stage (or r-phase) Erlang and is given by:

187 Convolution: Erlang (Continued)
EXP()

188 Warm standby With Warm spare, we have:
Active unit time-to-failure: EXP(λ); spare unit time-to-failure: EXP(μ). System lifetime: 2-stage hypoexponential distribution – EXP(λ + μ) followed by EXP(λ).

189 Warm standby derivation
The first event to occur is that either the active unit or the spare fails. The time to this event is min{EXP(λ), EXP(μ)}, which is EXP(λ + μ). Then, due to the memoryless property of the exponential, the remaining time to failure is still EXP(λ). Hence the system lifetime has a two-stage hypoexponential distribution with parameters λ1 = λ + μ and λ2 = λ.
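The two-stage argument can be checked by simulation: sampling EXP(λ + μ) followed by EXP(λ) should give mean lifetime 1/(λ + μ) + 1/λ (the rates λ = 1, μ = 0.5 are illustrative):

```python
import random

random.seed(7)
lam, mu = 1.0, 0.5   # illustrative active-unit and warm-spare failure rates
n = 100_000
# two-stage hypoexponential: EXP(lam + mu) until the first failure,
# then (by memorylessness) EXP(lam) for the surviving active unit
life = [random.expovariate(lam + mu) + random.expovariate(lam) for _ in range(n)]
est = sum(life) / n
print(est, 1.0 / (lam + mu) + 1.0 / lam)  # sample mean vs. theoretical mean
```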

190 Warm standby derivation (Continued)
X is EXP(1) and Y is EXP(2) and are independent 1 = 2 Then fZ(t) is

191 Hot standby With hot spare, we have:
Active unit time-to-failure: EXP(λ); spare unit time-to-failure: EXP(λ). System lifetime: 2-stage hypoexponential – EXP(2λ) followed by EXP(λ).

192 TMR and TMR/simplex as hypoexponentials

193 Hypoexponential: general case
Z = X1 + X2 + … + Xr, where X1, X2, …, Xr are mutually independent and Xi is exponentially distributed with parameter λi (λi ≠ λj for i ≠ j). Then Z is an r-stage hypoexponentially distributed random variable: EXP(λ1) → EXP(λ2) → … → EXP(λr).

194 Hypoexponential: general case

195 KofN system lifetime as a hypoexponential
At least k out of n units should be operational for the system to be up. Stage holding times: Y1 ~ EXP(nλ), Y2 ~ EXP((n−1)λ), …, Yn−k+1 ~ EXP(kλ), Yn−k+2 ~ EXP((k−1)λ), …, Yn ~ EXP(λ).

196 KofN with warm spares At least k out of n + s units should be operational for the system to be up. Initially n units are active and s units are warm spares. Stage rates: EXP(nλ + sμ), EXP(nλ + (s−1)μ), …, EXP(nλ + μ), EXP(nλ), …, EXP(kλ).

197 Sum of Normal Random Variables
X1, X2, …, Xk are normal iid rv's; then the rv Z = X1 + X2 + … + Xk is also normal. If X1, X2, …, Xn are N(0,1), then ΣXi2 follows the Gamma distribution – the χ2 with n degrees of freedom.

198 Probability and Statistics with Reliability, Queuing and Computer Science Applications: Chapter 4 on Expected Value and Higher Moments
Dept. of Electrical & Computer Engineering, Duke University

199 Expected (Mean, Average) Value
There are several ways to abstract the information in the CDF into a single number: median, mode, mean. Mean: E(X) may also be computed using the distribution function. If the summation or the integration is not absolutely convergent, then E(X) does not exist. Bharat B. Madan, Department of Electrical and Computer Engineering, Duke University

200 Higher Moments RV’s X and Y (=Φ(X)). Then,
Φ(X) = Xk, k = 1, 2, 3, …; E[Xk] is the kth moment. k = 1: mean; k = 2 (central moment): variance (measures the degree of variability). Example: EXP(λ) → E[X] = 1/λ; σ2 = 1/λ2. Compare the shape of the pdf (or pmf) for small and large variance values. σ is commonly referred to as the 'standard deviation'.

201 Bernoulli Random Variable
For a fixed t, X(t) is a random variable. The family of random variables {X(t), t ≥ 0} is a stochastic process. The random variable X(t) is the indicator or Bernoulli random variable, so that: Probability mass function: Mean E[X(t)]:

202 Binomial Random Variable (cont.)
Y(t) is binomial with parameters n, p

203 Poisson Distribution Probability mass function (pmf) (or discrete density function): Mean E[N(t)]:

204 Exponential Distribution
Distribution Function: Density Function: Reliability: Failure Rate: failure rate is age-independent (constant) MTTF:

205 Exponential Distribution
Distribution Function: Density Function: Reliability: Failure Rate (CFR): Failure rate is age-independent (constant) Mean Time to Failure:

206 Weibull Distribution (cont.)
Failure Rate: IFR for α > 1, DFR for α < 1. MTTF: α is the shape parameter and λ the scale parameter.

207 Using Equations of the underlying Semi-Markov Process (Continued)
Time to the next diagnostic is uniformly distributed over (0, T)

208 Using Equations of the underlying Semi-Markov Process (Continued)

209 E[ ] of a function of multiple RV's
If Z = X + Y, then E[X + Y] = E[X] + E[Y] (X, Y need not be independent). If Z = XY, then E[XY] = E[X]E[Y] (if X, Y are mutually independent).

210 Variance: function of multiple RV's
Var[X + Y] = Var[X] + Var[Y] (if X, Y independent). Cov[X,Y] = E{[X − E[X]][Y − E[Y]]}; Cov[X,Y] = 0 if X, Y are independent. Cross-covariance terms may appear if they are not independent. (Cross-)correlation coefficient:

211 Moment Generating Function (MGF)
For dealing with complex functions of rv's, use transforms (similar to the z-transform for a pmf). If X is a non-negative continuous rv, then, If X is a non-negative discrete rv, then, M[θ] is not guaranteed to exist, but for most distributions of our interest it does exist.

212 MGF (contd.)
In the complex-number domain the corresponding transform is the characteristic function. If X is Gaussian N(μ, σ2), then,

213 MGF Properties If Y = aX + b (translation & scaling), then,
Uniqueness property. Summation in one domain ↔ convolution in the other domain.

214 MGF Properties For the LST: For the z-transform case:
For the characteristic function,

215 MGF of Common Distributions
Read sec pp

216 MTTF Computation R(t) = P(X > t), X: Lifetime of a component
Expected lifetime, or MTTF, is the integral of R(t); in general the kth moment is given in the text. Series of components, each with lifetime EXP(λi): overall lifetime distribution EXP(Σλi), and MTTF = 1/Σλi. The last equality follows from integration by parts: ∫0∞ t f(t) dt = [−t R(t)]0∞ + ∫0∞ R(t) dt, and t R(t) → 0 as t → ∞ since R(t) → 0 faster than t grows, so the first term vanishes. Note that the MTTF of a series system is much smaller than the MTTF of an individual component: failure of any component implies failure of the overall system.
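A tiny numeric illustration of the series-system formula MTTF = 1/Σλi and of how it sits well below every individual MTTF (the per-hour rates are illustrative, not from the slides):

```python
# Series system of independent EXP(l_i) components:
# R(t) = exp(-(sum l_i) * t), so MTTF = integral_0^inf R(t) dt = 1 / sum(l_i).
rates = [0.001, 0.002, 0.005]           # illustrative per-hour failure rates
mttf_series = 1.0 / sum(rates)
mttf_each = [1.0 / l for l in rates]    # individual component MTTFs
print(mttf_series)                       # about 125 hours
print(mttf_each)                         # about [1000, 500, 200] hours
```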

217 Series system (Continued)
Other versions of Equation (2)

218 Series System MTTF (contd.)
RV Xi: the ith component's lifetime (arbitrary distribution). Case of the least common denominator. To prove the above:

219 Homework 2: For a 2-component parallel redundant system
with EXP(λ) behavior, write down expressions for: Rp(t), MTTFp. Further assuming EXP(µ) repair behavior and independent repair, write down expressions for: Ap(t), Ap, downtime.

220 Homework 3: For a 2-component parallel redundant system
with EXP(λ1) and EXP(λ2) behavior, write down expressions for: Rp(t), MTTFp. Assuming independent repair at rates µ1 and µ2, write down expressions for: Ap(t), Ap, downtime.

221 TMR (Continued) Assuming that the reliability of a single component is given by R(t) = e−λt, we get:

222 TMR (Continued) In the following figure, we have plotted RTMR(t) vs t as well as R(t) vs t.

223 Homework 5: specialize the bridge reliability formula to the case
where Ri(t) = find Rbridge(t) and MTTF for the bridge

224 MTTF Computation (contd.)
Parallel system: the lifetime of the ith component is rv Xi; X = max(X1, X2, …, Xn). If all Xi's are EXP(λ), then, As n increases, the MTTF also increases, as does the variance.

225 Standby Redundancy A system with 1 component and (n-1) cold spares.
Lifetime: if all the Xi's are the same → Erlang distribution. Read the sections on TMR and k-out-of-n, and the section on inequalities and limit theorems.

226 Cold standby Lifetime of active unit: EXP(λ); lifetime of spare: EXP(λ). Total lifetime: 2-stage Erlang. Assumptions: detection and switching are perfect; the spare does not fail.

227 Warm standby With Warm spare, we have:
Active unit time-to-failure: EXP(λ); spare unit time-to-failure: EXP(μ). System lifetime: 2-stage hypoexponential distribution – EXP(λ + μ) followed by EXP(λ).

228 Warm standby derivation
The first event to occur is that either the active unit or the spare fails. The time to this event is min{EXP(λ), EXP(μ)}, which is EXP(λ + μ). Then, due to the memoryless property of the exponential, the remaining time to failure is still EXP(λ). Hence the system lifetime has a two-stage hypoexponential distribution with parameters λ1 = λ + μ and λ2 = λ.

229 Hot standby With hot spare, we have:
Active unit time-to-failure: EXP(λ); spare unit time-to-failure: EXP(λ). System lifetime: 2-stage hypoexponential – EXP(2λ) followed by EXP(λ).

230 The WFS Example File Server Computer Network Workstation 1

231 RBD for the WFS Example Workstation 1 File Server Workstation 2

232 RBD for the WFS Example (cont.)
Rw(t): workstation reliability Rf(t): file-server reliability System reliability R(t) is given by: Note: applies to any time-to-failure distributions

233 RBD for the WFS Example (cont.)
Assuming exponentially distributed times to failure: λw = failure rate of a workstation, λf = failure rate of the file-server. The system mean time to failure (MTTF) is given by:

234 Comparison Between Exponential and Weibull

235 Homework 2: For a 2-component parallel redundant system
with EXP(λ) behavior, write down expressions for: Rp(t), MTTFp.

236 Solution 2:

237 Homework 3

238 Homework 3: For a 2-component parallel redundant system
with EXP(λ1) and EXP(λ2) behavior, write down expressions for: Rp(t), MTTFp.

239 Solution 3:

240 Homework 4: Specialize formula (3) to the case where:
Derive expressions for system reliability and system mean time to failure.

241 Homework 4

242 Control channels-Voice channels Example:

243 Homework 5

244 Homework 5: specialize the bridge reliability formula to the case
where Ri(t) = find Rbridge(t) and MTTF for the bridge

245 Bridge: conditioning A non-series-parallel block diagram: components C1–C5 between terminals S and T, with C3 bridging the two paths. Factor (condition) on C3: either C3 is working or C3 fails.

246 Bridge: Rbridge(t) When C3 is working, the bridge reduces to (C1 in parallel with C2) in series with (C4 in parallel with C5).

247 Bridge: Rbridge(t) When C3 fails, the bridge reduces to (C1 in series with C4) in parallel with (C2 in series with C5).

248 Bridge: Rbridge(t)

249 Bridge: MTTF

250 Homework 7

251 Homework 7: Derive & compare reliability expressions for Cold, Warm and Hot standby cases.

252 Cold spare: EXP() Bharat B. Madan, Department of Electrical and Computer Engineering, Duke University

253 Warm spare: EXP(+ ) EXP()
Bharat B. Madan, Department of Electrical and Computer Engineering, Duke University

254 Hot spare: EXP(2) EXP()
Bharat B. Madan, Department of Electrical and Computer Engineering, Duke University

255 Comparison graph:

256 Homework 8

257 Homework 8: For the 2-component system with non-shared repair, use a reliability block diagram to derive the formula for instantaneous and steady-state availability.

258 Solution 8:

259 TMR and TMR/simplex as hypoexponentials

260 Probability and Statistics with Reliability, Queuing and Computer Science Applications: Chapter 5 on Conditional Probability and Expectation
Dept. of Electrical & Computer Engineering, Duke University

261 Conditional pmf Conditional probability:
The above works if X is a discrete rv. For discrete rv's X and Y, the conditional pmf is, The above relationship also implies, Hence we have another version of the theorem of total probability. If X and Y are mutually independent, then p(y|x) = p(y).

262 Independence, Conditional Distribution
Conditional distribution function Using conditional pmf,

263 Example Two servers, A and B. A Poisson(λ) job stream is split by a Bernoulli trial: each job goes to server A with probability p and to B with probability 1 − p; of n total jobs, k are passed on to server A. Speaker note: pY(k) = Σn=k..∞ e−λ (λn/n!) C(n,k) pk(1−p)n−k = e−λ ((λp)k/k!) Σn=k..∞ (λ(1−p))n−k/(n−k)!; substituting m = n − k, the sum equals eλ(1−p), giving pY(k) = e−λp (λp)k/k! – the stream routed to A is Poisson(λp).
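The thinning result can be checked by simulation – generate Poisson(λ) arrivals over a unit interval via exponential gaps and route each to A with probability p; the count at A should have mean λp (λ = 4, p = 0.3 and the sample size are illustrative):

```python
import random

random.seed(3)
lam, p, t = 4.0, 0.3, 1.0
n = 50_000
counts = []
for _ in range(n):
    k = 0
    # Poisson(lam * t) arrivals generated via exponential inter-arrival gaps;
    # each arrival is routed to server A with probability p (Bernoulli trial)
    clock = random.expovariate(lam)
    while clock <= t:
        if random.random() < p:
            k += 1
        clock += random.expovariate(lam)
    counts.append(k)
mean = sum(counts) / n
print(mean, lam * p)  # thinned stream to A is Poisson(lam * p)
```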

264 Conditional pdf For continuous rv’s X and Y, conditional pdf is, Also,
Independent X, Y → fY|X(y|x) = fY(y). Marginal pdf (continuous version of the theorem of total probability): Conditional distribution function:

265 Conditional Reliability
Software system after having incurred (i − 1) faults: Ri(t) = P(Ti > t) (Ti: inter-failure times). The Ti are independent, exponentially distributed EXP(λi). The failure rate λi may itself be random; the conditional reliability is then: λi random → with each fault occurrence, followed by its removal, the failure rate λi changes (in a random manner, denoted by the random variable Λi). Speaker note: if ψi = a1 + a2 i and E[Λi] = 1/ψi, then E[Λi] decreases with i, and there is a likelihood (but no guarantee) that Ri(t) improves with i.

266 Mixture Distribution Conditional distribution: continuous and discrete rvs combined. Examples: (Response time | that there k processors), (Reliability| k components) etc. (Y: continuous, X:discrete) Compute server with r classes of jobs (i=1,2,..,r) Hence, Y follows an r-stage HyperExpo distribution. Bharat B. Madan, Department of Electrical and Computer Engineering, Duke University

267 Mixture Distribution (contd.)
What if fY|X(y|i) is not exponential? The unconditional pdf and CDF are, Using the LST, moments can now be found as,

268 Mixture Distribution (contd.)
Such mixture distributions arise in reliability studies. Software system: modules (or objects) may have been written by different groups or companies; the ith group contributes an ai fraction of the modules and has reliability characteristic given by Fi. Group 1: EXP(λ1) (α fraction); Group 2: r-stage Erlang (1 − α fraction).

269 Mixture Distribution (contd.)
Y: continuous; X: continuous or uncountable – e.g., the lifetime Y depends on the impurities X. Finally: Y discrete, X continuous.

270 Mixture Distribution (contd.)
X: Web-server response time; Y: number of requests arriving while a request is being serviced. For a given value of X = x, Y is Poisson. The joint pdf is f(x, y) = pY|X(y|x) fX(x). Unconditional pmf: pY(y) = P(Y = y). With (λ + μ)x = w,

271 Conditional Moments Conditional Expectation is E[Y|X=x] or E[Y|x]
E[Y|x], a.k.a. the regression function. For the discrete case, In general, then, Note: the mathematical function used to describe the deterministic variation in the response variable is sometimes called the 'regression function', the 'regression equation', the 'smoothing function', or the 'smooth'.

272 Conditional Moments (contd.)
This can be specialized to: kth moment of Y: E[Yk|X=x]. Conditional MGF: MY|X(θ|x) = E[eθY|X=x]. Conditional LST: LY|X(s|x) = E[e−sY|X=x]. Conditional PGF: GY|X(z|x) = E[zY|X=x]. Total expectation: Total moments:

273 Conditional Moments (contd.)
Total transforms: In the previous example, the total expectation: Therefore, we can also talk of a conditional MTTF – the MTTF may depend on impurities or operating temperature.

274 Conditional MTTF Y: time-to-failure may depend on the temperature, and the conditional MTTF may be: Let Temp be normal, Unconditional MTTF is:

275 Imperfect Fault Coverage
Hybrid k-out-of-n system, with m cold standbys. Reliability depends on recovery from a failure. What if the failed module cannot be substituted by a standby? Such faults are called not covered. The probability that a fault is covered is c (the coverage factor).

276 Fault Handling Phases Fault handling involves 3 distinct phases.
A finite success probability for each phase → finite coverage. c = P("ok recovery" | "fault occurs") = P("fault detected" & "fault located" & "fault corrected" | "fault occurs") = cd · cl · cr.

277 Near Coincident Faults
Coincident fault: a 2nd fault occurs while the 1st one has not yet been completely processed. Y: random time to process a fault. X: time at which the coincident fault occurs (EXP(γ)). Fault coverage: probability that Y < X.

278 Near Coincidence: Fault Coverage
Fault handling has multiple phases. This gives: X: lifetime of a system with one active unit + one standby; λ: the active component's failure rate; Y = 1 → fault covered, Y = 0 → fault not covered. Speaker note: Ca is currently active and Cs acts as standby, so that when Ca fails and the fault is covered, Cs becomes active (and similarly with the roles reversed). Hence when faults are covered, the lifetime is the sum of the lifetimes of the active and standby units (mean 2/λ); compare the extreme cases c = 0 and c = 1.
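The covered/not-covered mixture can be simulated directly: with probability c the standby adds a second EXP(λ) stage, so the mean lifetime is c·(2/λ) + (1 − c)·(1/λ) = (1 + c)/λ (λ = 1 and c = 0.9 are illustrative values):

```python
import random

random.seed(5)
lam, c = 1.0, 0.9    # illustrative failure rate and coverage factor
n = 100_000
life = []
for _ in range(n):
    x = random.expovariate(lam)      # active unit fails
    if random.random() < c:          # fault covered: standby takes over
        x += random.expovariate(lam)
    life.append(x)
est = sum(life) / n
print(est, (1.0 + c) / lam)  # sample mean vs. (1 + c)/lam
```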

279 Life Time Distribution-Limited Coverage
fX|Y(t|0): lifetime of the active component alone ~ EXP(λ). fX|Y(t|1): lifetime of active + standby ~ 2-stage Erlang. Joint density: Marginal density: Reliability:

