
Slide 1: 2. Review of Probability and Statistics
Refs: Law & Kelton, Chapter 4 (K. Salah)

Slide 2: Random Variables
- Experiment: a process whose outcome is not known with certainty.
- Sample space: S = {all possible outcomes of an experiment}.
- Sample point: an outcome (a member of sample space S). Example: in coin flipping, S = {Head, Tail}; Head and Tail are outcomes.
- Random variable: a function that assigns a real number to each point in sample space S. Example: in flipping two coins, if X is the random variable counting the number of heads that occur, then X = 1 for outcomes (H,T) and (T,H), and X = 2 for (H,H).
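
As a minimal sketch of this idea (the names are illustrative, not from the slides), the two-coin example can be written as an explicit function from sample points to real numbers:

```python
from itertools import product

# Sample space for flipping two coins: all ordered outcomes.
sample_space = list(product("HT", repeat=2))  # [('H','H'), ('H','T'), ('T','H'), ('T','T')]

# Random variable X: maps each sample point to a real number (number of heads).
def X(outcome):
    return sum(1 for coin in outcome if coin == "H")

for s in sample_space:
    print(s, "->", X(s))  # X = 2 for (H,H); X = 1 for (H,T) and (T,H); X = 0 for (T,T)
```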

Slide 3: Random Variables: Notation
- Denote random variables by upper-case letters: X, Y, Z, ...
- Denote values of random variables by lower-case letters: x, y, z, ...
- The distribution function (cumulative distribution function) F(x) of random variable X for real number x is F(x) = P(X ≤ x) for -∞ < x < ∞, where P(X ≤ x) is the probability associated with the event {X ≤ x}.
- F(x) has the following properties:
  - 0 ≤ F(x) ≤ 1 for all x
  - F(x) is nondecreasing: if x₁ ≤ x₂, then F(x₁) ≤ F(x₂).
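
One way to make these properties concrete is an empirical CDF built from data; the sketch below (a small hand-made sample, not from the slides) checks both properties directly:

```python
def empirical_cdf(data):
    """Return a function F(x) = proportion of observations <= x."""
    ordered = sorted(data)
    n = len(ordered)
    def F(x):
        return sum(1 for v in ordered if v <= x) / n
    return F

F = empirical_cdf([1.2, 3.4, 2.2, 0.7, 3.4])
assert 0.0 <= F(2.0) <= 1.0   # bounded between 0 and 1
assert F(1.0) <= F(3.0)       # nondecreasing
print(F(2.0))                 # 0.4: two of five observations are <= 2.0
```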

Slide 4: Discrete Random Variables
- A random variable is discrete if it takes on at most a countable number of values.
- The probability that random variable X takes on value xᵢ is given by p(xᵢ) = P(X = xᵢ) for i = 1, 2, ...
- p(x) is the probability mass function of the discrete random variable X.
- F(x) is the probability distribution function of the discrete random variable X: F(x) = Σ p(xᵢ) over all xᵢ ≤ x.
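
A minimal sketch (a fair six-sided die, chosen here for illustration) showing how F(x) accumulates the probability mass function:

```python
# Probability mass function of a fair six-sided die.
pmf = {i: 1/6 for i in range(1, 7)}

def F(x):
    """Distribution function: sum of p(x_i) over all x_i <= x."""
    return sum(p for xi, p in pmf.items() if xi <= x)

print(F(3))   # 0.5  (outcomes 1, 2, 3)
print(F(6))   # 1.0  (all outcomes)
```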

Slide 5: Continuous Random Variables
- X is a continuous random variable if there exists a non-negative function f(x) such that for any set of real numbers B, P(X ∈ B) = ∫_B f(x) dx.
- f(x) is the probability density function of the continuous RV X.
- F(x) is the probability distribution function of the continuous RV X: F(x) = ∫_{-∞}^{x} f(y) dy.
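
As a hedged sketch (an exponential distribution, chosen for illustration because its F(x) has a closed form), the relation F(x) = ∫ f can be checked numerically:

```python
import math

lam = 2.0  # rate parameter of an exponential distribution (illustrative choice)

def f(x):                       # density: f(x) = lam * exp(-lam * x) for x >= 0
    return lam * math.exp(-lam * x) if x >= 0 else 0.0

def F_exact(x):                 # closed-form distribution function
    return 1 - math.exp(-lam * x) if x >= 0 else 0.0

def F_numeric(x, steps=10_000): # numerically integrate the density from 0 to x
    h = x / steps
    return sum(f(i * h) * h for i in range(steps))

print(F_exact(1.0), F_numeric(1.0))   # both close to 0.8647
```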

Slide 6: Distribution and Density Functions of a Continuous Random Variable
Given an interval I = [a, b], P(X ∈ I) = ∫_a^b f(x) dx = F(b) - F(a). [Figure: density curve f(x) over the x axis, with the area under the curve between a and b shaded.]

Slide 7: Joint Random Variables
- In the M/M/1 queueing system, the input can be represented as two sets of random variables: arrival times of customers A₁, A₂, ..., Aₙ and service times of customers S₁, S₂, ..., Sₙ.
- The output can be a set of random variables: delays in queue of customers D₁, D₂, ..., Dₙ.
- The Dᵢ's are not independent.

Slide 8: Jointly Discrete Random Variables
- Consider the case of two discrete random variables X and Y. The joint probability mass function is p(x, y) = P(X = x, Y = y) for all x, y.
- X and Y are independent if p(x, y) = p_X(x) p_Y(y) for all x, y, where p_X(x) = Σ_y p(x, y) and p_Y(y) = Σ_x p(x, y) are the marginal probability mass functions.
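
A minimal sketch (a toy joint pmf, constructed to be independent) that computes the marginals and tests the independence condition:

```python
# Toy joint pmf over (x, y) pairs; X and Y here are independent by construction.
joint = {(x, y): px * py
         for x, px in [(0, 0.3), (1, 0.7)]
         for y, py in [(0, 0.5), (1, 0.5)]}

# Marginal pmfs: sum the joint pmf over the other variable.
p_X = {x: sum(p for (xx, _), p in joint.items() if xx == x) for x in (0, 1)}
p_Y = {y: sum(p for (_, yy), p in joint.items() if yy == y) for y in (0, 1)}

independent = all(abs(joint[(x, y)] - p_X[x] * p_Y[y]) < 1e-12
                  for (x, y) in joint)
print(independent)   # True: p(x, y) = p_X(x) * p_Y(y) for all x, y
```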

Slide 9: Jointly Continuous Random Variables
- Consider the case of two continuous random variables X and Y. X and Y are jointly continuous random variables if there exists a non-negative function f(x, y) (the joint probability density function of X and Y) such that for all sets of real numbers A and B, P(X ∈ A, Y ∈ B) = ∫_B ∫_A f(x, y) dx dy.
- X and Y are independent if f(x, y) = f_X(x) f_Y(y) for all x, y, where f_X(x) = ∫ f(x, y) dy and f_Y(y) = ∫ f(x, y) dx are the marginal probability density functions.

Slide 10: Measuring Random Variables: Mean and Median
- Consider n random variables X₁, X₂, ..., Xₙ. The mean of the random variable Xᵢ (i = 1, 2, ..., n) is μᵢ = E(Xᵢ), which equals Σ_j xⱼ p(xⱼ) in the discrete case and ∫_{-∞}^{∞} x f(x) dx in the continuous case.
- The mean is the expected value, and is a measure of central tendency.
- The median, x₀.₅, is the smallest value of x such that F_X(x) ≥ 0.5. For a continuous random variable, F(x₀.₅) = 0.5.
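
A small sketch (a skewed sample, made up for illustration) contrasting the two measures of central tendency:

```python
import statistics

# A right-skewed sample: one large value pulls the mean above the median.
data = [1, 2, 2, 3, 3, 4, 20]
print(statistics.mean(data))     # 5.0
print(statistics.median(data))   # 3
```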

Slide 11: Measuring Random Variables: Variance
- The variance of the random variable Xᵢ, denoted by σᵢ² or Var(Xᵢ), is σᵢ² = E[(Xᵢ - μᵢ)²] = E(Xᵢ²) - μᵢ². This is a measure of dispersion. [Figure: two densities with the same mean μ, one wide (large σ²) and one narrow (small σ²).]
- Some properties of variance:
  - Var(X) ≥ 0
  - Var(cX) = c² Var(X)
  - If the Xᵢ's are independent, then Var(X + Y) = Var(X) + Var(Y), or more generally Var(ΣXᵢ) = Σ Var(Xᵢ).
- The standard deviation is σᵢ = √(σᵢ²).
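
A minimal Monte Carlo sketch (independent Uniform(0,1) samples, chosen for illustration) of the scaling and additivity properties:

```python
import random, statistics

random.seed(1)
n = 200_000
x = [random.random() for _ in range(n)]   # independent Uniform(0,1), Var = 1/12
y = [random.random() for _ in range(n)]

var = statistics.pvariance
print(var(x))                              # ~0.0833 (= 1/12)
print(var([3 * v for v in x]))             # ~0.75   (= 9/12): Var(cX) = c^2 Var(X)
print(var([a + b for a, b in zip(x, y)]))  # ~0.1667 (= 2/12): Var(X+Y) = Var(X) + Var(Y)
```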

Slide 12: Variance Is Not Dimensionless
For example, if we collect data on service times at a bank machine with mean 0.5 minutes and variance 0.1 minutes², the same data in seconds will give mean 30 seconds (×60) and variance 360 seconds² (×60²); i.e., variance depends on the scale of the data!
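
A two-line check of this scaling, using the numbers from the slide:

```python
mean_min, var_min = 0.5, 0.1   # service times: mean in minutes, variance in minutes^2
mean_sec = mean_min * 60       # the mean scales by 60
var_sec = var_min * 60**2      # the variance scales by 60^2
print(mean_sec, var_sec)       # 30.0 360.0
```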

Slide 13: Measures of Dependence
- The covariance between random variables Xᵢ and Xⱼ, where i = 1, ..., n and j = 1, ..., n, is Cᵢⱼ = Cov(Xᵢ, Xⱼ) = E[(Xᵢ - μᵢ)(Xⱼ - μⱼ)] = E[XᵢXⱼ] - μᵢμⱼ.
- Some properties of covariance:
  - Cᵢⱼ = Cⱼᵢ
  - If i = j, then Cᵢⱼ = Cᵢᵢ = σᵢ².
  - If Cᵢⱼ > 0, then Xᵢ and Xⱼ are positively correlated: Xᵢ > μᵢ and Xⱼ > μⱼ tend to occur together, and Xᵢ < μᵢ and Xⱼ < μⱼ tend to occur together.
  - If Cᵢⱼ < 0, then Xᵢ and Xⱼ are negatively correlated: Xᵢ > μᵢ and Xⱼ < μⱼ tend to occur together, and Xᵢ < μᵢ and Xⱼ > μⱼ tend to occur together.
- Cᵢⱼ is NOT dimensionless, i.e., its value is influenced by the scale of the data.
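
A minimal sketch (small paired samples, made up for illustration) computing the covariance directly from the definition:

```python
def covariance(xs, ys):
    """Covariance estimate: average of (x - mean_x)(y - mean_y)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    return sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / n

x = [1, 2, 3, 4, 5]
print(covariance(x, [2, 4, 6, 8, 10]))   # 4.0: positive (y rises with x)
print(covariance(x, [10, 8, 6, 4, 2]))   # -4.0: negative (y falls as x rises)
```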

Slide 14: A Dimensionless Measure of Dependence
- Correlation is a dimensionless measure of dependence: Cor(Xᵢ, Xⱼ) = Cᵢⱼ / (σᵢ σⱼ) = Cov(Xᵢ, Xⱼ) / √(Var(Xᵢ) Var(Xⱼ)).
- Var(a₁X₁ + a₂X₂) = a₁² Var(X₁) + 2a₁a₂ Cov(X₁, X₂) + a₂² Var(X₂)
- Var(X - Y) = Var(X) + Var(Y) - 2 Cov(X, Y)
- Var(X + Y) = Var(X) + Var(Y) + 2 Cov(X, Y)

Slide 15: The covariance between the random variables X and Y, denoted by Cov(X, Y), is defined by Cov(X, Y) = E{[X - E(X)][Y - E(Y)]} = E(XY) - E(X)E(Y). The covariance is a measure of the dependence between X and Y. Note that Cov(X, X) = Var(X).

Slide 16: Definitions

Cov(X, Y)   X and Y are
= 0         uncorrelated
> 0         positively correlated
< 0         negatively correlated

Independent random variables are also uncorrelated.

Slide 17: Note that, in general, we have Var(X - Y) = Var(X) + Var(Y) - 2Cov(X, Y). If X and Y are independent, then Var(X - Y) = Var(X) + Var(Y). The correlation between the random variables X and Y, denoted by Cor(X, Y), is defined by Cor(X, Y) = Cov(X, Y) / √(Var(X) Var(Y)).

Slide 18: It can be shown that -1 ≤ Cor(X, Y) ≤ 1.
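
A minimal sketch (toy samples, made up for illustration) computing Cor(X, Y) from the covariance and checking that it stays within [-1, 1]:

```python
import math

def variance(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

def covariance(xs, ys):
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    return sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / len(xs)

def correlation(xs, ys):
    return covariance(xs, ys) / math.sqrt(variance(xs) * variance(ys))

x = [1, 2, 3, 4, 5]
print(correlation(x, [2, 4, 6, 8, 10]))   # 1.0: perfect positive linear dependence
print(correlation(x, [5, 4, 3, 2, 1]))    # -1.0: perfect negative linear dependence
```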

Slide 19: 4.2. Simulation Output Data and Stochastic Processes
- A stochastic process is a collection of "similar" random variables ordered over time, all defined relative to the same experiment.
- If the collection is X₁, X₂, ..., then we have a discrete-time stochastic process.
- If the collection is {X(t), t ≥ 0}, then we have a continuous-time stochastic process.

Slide 20: Example 4.3
Consider the single-server queueing system of Chapter 1 with independent interarrival times A₁, A₂, ... and independent processing times P₁, P₂, .... Relative to the experiment of generating the Aᵢ's and Pᵢ's, one can define the discrete-time stochastic process of delays in queue D₁, D₂, ... as follows:
D₁ = 0
Dᵢ₊₁ = max{Dᵢ + Pᵢ - Aᵢ₊₁, 0}   for i = 1, 2, ...
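
A minimal sketch of this recursion (exponential interarrival and processing times assumed, i.e., an M/M/1 queue with utilization ρ = 0.5; the rates are illustrative choices):

```python
import random

random.seed(42)
n = 10_000
arrival_rate, service_rate = 1.0, 2.0   # rho = lambda/mu = 0.5

A = [random.expovariate(arrival_rate) for _ in range(n)]   # interarrival times
P = [random.expovariate(service_rate) for _ in range(n)]   # processing times

D = [0.0]                                # D_1 = 0
for i in range(n - 1):
    # The recursion: D_{i+1} = max{D_i + P_i - A_{i+1}, 0}
    D.append(max(D[i] + P[i] - A[i + 1], 0.0))

print(sum(D) / n)   # average delay in queue; roughly rho/(mu - lambda) = 0.5 for this M/M/1
```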

Slide 21: Thus, the simulation maps the input random variables into the output process of interest. Other examples of stochastic processes:
- N₁, N₂, ..., where Nᵢ = number of parts produced in the ith hour for a manufacturing system
- T₁, T₂, ..., where Tᵢ = time in system of the ith part for a manufacturing system

Slide 22:
- {Q(t), t ≥ 0}, where Q(t) = number of customers in queue at time t
- C₁, C₂, ..., where Cᵢ = total cost in the ith month for an inventory system
- E₁, E₂, ..., where Eᵢ = end-to-end delay of the ith message to reach its destination in a communications network
- {R(t), t ≥ 0}, where R(t) = number of red tanks in a battle at time t

Slide 23: Example 4.4
Consider the delay-in-queue process D₁, D₂, ... for the M/M/1 queue with utilization factor ρ. Then the correlation function ρⱼ between Dᵢ and Dᵢ₊ⱼ is given in Figure 4.8.

Slide 24: [Figure 4.8. Correlation function ρⱼ of the process D₁, D₂, ... for the M/M/1 queue, plotted for lags j = 0 to 10, with one curve for ρ = 0.9 and one for ρ = 0.5.]
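
A hedged sketch estimating this lag-j correlation from a simulated delay process (same exponential inputs with ρ = 0.5 as in the earlier sketch; the estimator assumes the series is covariance-stationary):

```python
import random

def autocorrelation(series, lag):
    """Estimate Cor(X_i, X_{i+j}) for a covariance-stationary series."""
    n = len(series)
    m = sum(series) / n
    var = sum((x - m) ** 2 for x in series) / n
    cov = sum((series[i] - m) * (series[i + lag] - m)
              for i in range(n - lag)) / (n - lag)
    return cov / var

# Delay process for an M/M/1 queue with rho = 0.5 (same recursion as above).
random.seed(7)
n = 50_000
A = [random.expovariate(1.0) for _ in range(n)]
P = [random.expovariate(2.0) for _ in range(n)]
D = [0.0]
for i in range(n - 1):
    D.append(max(D[i] + P[i] - A[i + 1], 0.0))

for j in (1, 2, 5, 10):
    print(j, round(autocorrelation(D, j), 3))   # rho_j decreases as the lag j grows
```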

Slide 25: Estimating Distribution Parameters
- Assume that X₁, X₂, ..., Xₙ are independent, identically distributed (IID) random variables with finite population mean μ and finite population variance σ². We would like to estimate μ and σ².
- The sample mean is X̄(n) = (1/n) Σᵢ₌₁ⁿ Xᵢ.
- This is an unbiased estimator of μ, i.e., E[X̄(n)] = μ: for a very large number of independent calculations of X̄(n), the average of the X̄(n)'s will tend to μ.
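
A quick sketch of the unbiasedness claim (a Uniform(0,1) population assumed, so μ = 0.5): averaging many independent sample means recovers μ:

```python
import random

random.seed(3)
mu = 0.5                       # population mean of Uniform(0,1)
n, replications = 10, 100_000

# Many independent computations of the sample mean X-bar(n).
means = [sum(random.random() for _ in range(n)) / n for _ in range(replications)]
print(sum(means) / replications)   # ~0.5: the average of the X-bar(n)'s tends to mu
```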

Slide 26: Estimating Variance
- The sample variance is s²(n) = Σᵢ₌₁ⁿ [Xᵢ - X̄(n)]² / (n - 1).
- This is an unbiased estimator of σ², i.e., E[s²(n)] = σ².
- The quantity s²(n)/n estimates Var[X̄(n)] = σ²/n, a measure of how well X̄(n) estimates μ.
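
A sketch (Uniform(0,1) population assumed, so σ² = 1/12) checking that dividing by n - 1 makes s²(n) unbiased:

```python
import random

random.seed(4)
n, replications = 5, 200_000

def sample_variance(xs):                   # divides by n - 1, not n
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

vars_ = [sample_variance([random.random() for _ in range(n)])
         for _ in range(replications)]
print(sum(vars_) / replications)           # ~0.0833 = 1/12, the Uniform(0,1) variance
```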

Slide 27: Two Interesting Theorems
- Central Limit Theorem: for large enough n, X̄(n) tends to be normally distributed with mean μ and variance σ²/n. This means that we can assume the average of a large number of random samples is normally distributed, which is very convenient.
- Strong Law of Large Numbers: over an infinite number of experiments, each resulting in X̄(n), for sufficiently large n, X̄(n) will be arbitrarily close to μ for almost all experiments. This means that we must choose n carefully: every sample must be large enough to give a good X̄(n).
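
A minimal CLT sketch (an Exponential(1) population, chosen because it is skewed and far from normal, with μ = σ² = 1): the sample means behave approximately like N(μ, σ²/n):

```python
import random, statistics

random.seed(5)
n, replications = 50, 20_000

means = [statistics.mean(random.expovariate(1.0) for _ in range(n))
         for _ in range(replications)]

print(statistics.mean(means))       # ~1.0  (= mu)
print(statistics.pvariance(means))  # ~0.02 (= sigma^2 / n = 1/50)

# About 68% of the means fall within one standard deviation of mu,
# as expected for an approximately normal distribution:
sd = (1 / n) ** 0.5
print(sum(1 for m in means if abs(m - 1.0) < sd) / replications)  # ~0.68
```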

Slide 28: [Figure 4.7. Density function for a N(μ, σ²) distribution; the area within one standard deviation of the mean is 0.68.]

Slide 29: Confidence Intervals
[Figure: density f(x) with central area 1 - α.]
- Interpretation of confidence intervals: if one constructs a very large number of independent 100(1 - α)% confidence intervals, each based on n observations with n sufficiently large, then the proportion of these confidence intervals that contain μ should be 1 - α.
- What is "n sufficiently large"? The more non-normal the distribution of the Xᵢ's, the larger n must be to get good coverage (1 - α).

Slide 30: An Exact Confidence Interval
- If the Xᵢ's are normal random variables, then tₙ = [X̄(n) - μ] / √(s²(n)/n) has a t distribution with n - 1 degrees of freedom (df), and an exact 100(1 - α)% confidence interval for μ is X̄(n) ± t_{n-1,1-α/2} √(s²(n)/n).
- This is better than the z confidence interval, because it provides better coverage (1 - α) for small n, and it converges to the z confidence interval for large n.
- Note: this estimator assumes that the Xᵢ's are normally distributed. This is reasonable if the central limit theorem applies, i.e., each Xᵢ is itself the average of one sample (large enough to give a good average), and we take a large number of samples (enough samples to get a good confidence interval).
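
A minimal sketch computing this interval for a small sample (SciPy assumed available, used only for the t quantile; the data are made up for illustration):

```python
import math
from scipy import stats  # assumed available; supplies the t quantile t_{n-1, 1-alpha/2}

def t_confidence_interval(data, alpha=0.05):
    """Exact 100(1 - alpha)% CI for mu, assuming the data are normal."""
    n = len(data)
    xbar = sum(data) / n
    s2 = sum((x - xbar) ** 2 for x in data) / (n - 1)   # sample variance s^2(n)
    t_crit = stats.t.ppf(1 - alpha / 2, df=n - 1)       # t_{n-1, 1-alpha/2}
    half_width = t_crit * math.sqrt(s2 / n)
    return xbar - half_width, xbar + half_width

sample = [5.2, 4.8, 5.1, 4.9, 5.3, 5.0, 4.7, 5.2, 5.1, 4.9]
print(t_confidence_interval(sample))   # a 95% CI for mu, centered at 5.02
```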

