Week 5

Theorem
For g: R → R:
If X is a discrete random variable, then E[g(X)] = Σ_x g(x) P(X = x).
If X is a continuous random variable with density f_X, then E[g(X)] = ∫ g(x) f_X(x) dx.
Proof: We prove it for the discrete case. Let Y = g(X); then
E(Y) = Σ_y y P(Y = y) = Σ_y y Σ_{x: g(x) = y} P(X = x) = Σ_y Σ_{x: g(x) = y} g(x) P(X = x) = Σ_x g(x) P(X = x).
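A quick numerical sanity check of the theorem is to compare the sum Σ_x g(x) P(X = x) with the long-run average of g(X) over repeated draws. This is only a sketch: the pmf (a fair die) and the function g(x) = x^2 below are illustrative choices, not taken from the notes.

    import numpy as np

    rng = np.random.default_rng(0)

    values = np.array([1, 2, 3, 4, 5, 6])   # possible values of X (fair die, assumed)
    pmf = np.full(6, 1 / 6)                  # P(X = x)
    g = lambda x: x ** 2                     # an arbitrary g: R -> R

    lotus = np.sum(g(values) * pmf)          # the theorem: sum of g(x) P(X = x)
    sample = rng.choice(values, size=200_000, p=pmf)
    mc = np.mean(g(sample))                  # long-run average of g(X)

    print(lotus, mc)                         # the two numbers should be close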
Example, to illustrate the steps in the proof
Suppose X has a given pmf with a small set of possible values, so the possible values of Y = g(X) are the corresponding values g(x); then E(Y) is obtained by grouping the terms g(x) P(X = x) according to the value y = g(x), exactly as in the proof above.
Examples
1. Suppose X ~ Uniform(0, 1) and let Y = g(X); then E(Y) = ∫_0^1 g(x) dx.
2. Suppose X ~ Poisson(λ) and let Y = g(X); then E(Y) = Σ_{x=0}^∞ g(x) e^{-λ} λ^x / x!.
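As a numerical illustration of Example 1 with an assumed g (the specific function used on the slide is not shown above): take g(x) = e^x, so the theorem gives E[g(X)] = ∫_0^1 e^x dx = e - 1 ≈ 1.718. A minimal sketch:

    import numpy as np

    rng = np.random.default_rng(1)
    x = rng.uniform(0.0, 1.0, size=500_000)   # X ~ Uniform(0, 1)
    print(np.mean(np.exp(x)), np.e - 1)        # Monte Carlo E[e^X] vs. the integral e - 1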
Properties of Expectation
For X, Y random variables and a, b constants:
1. E(aX + b) = aE(X) + b.
Proof (continuous case): E(aX + b) = ∫ (ax + b) f_X(x) dx = a ∫ x f_X(x) dx + b ∫ f_X(x) dx = aE(X) + b.
2. E(aX + bY) = aE(X) + bE(Y). Proof to come…
3. If X is a non-negative random variable, then E(X) = 0 if and only if X = 0 with probability 1.
4. If X is a non-negative random variable, then E(X) ≥ 0.
5. E(a) = a.
Moments
The k-th moment of a distribution is E(X^k). We are usually interested in the 1st and 2nd moments (sometimes in the 3rd and 4th).
Some second moments:
1. Suppose X ~ Uniform(0, 1); then E(X^2) = ∫_0^1 x^2 dx = 1/3.
2. Suppose X ~ Geometric(p), counting the number of trials up to and including the first success; then E(X^2) = (2 - p)/p^2.
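Both second moments can be checked by simulation. This is only a sketch; p = 0.3 is an arbitrary choice, and the Geometric draws below count trials up to and including the first success, matching the convention above.

    import numpy as np

    rng = np.random.default_rng(2)

    u = rng.uniform(0, 1, size=500_000)
    print(np.mean(u ** 2), 1 / 3)                       # E(X^2) for Uniform(0, 1)

    p = 0.3
    g = rng.geometric(p, size=500_000)                   # trials until the first success
    print(np.mean(g ** 2), (2 - p) / p ** 2)             # E(X^2) for Geometric(p)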
Variance
The expected value of a random variable, E(X), is a measure of the “center” of a distribution. The variance is a measure of how closely concentrated about the center (µ = E(X)) the probability is. It is also called the 2nd central moment.
Definition: The variance of a random variable X is Var(X) = E[(X - µ)^2], where µ = E(X).
Claim: Var(X) = E(X^2) - (E(X))^2.
Proof: Var(X) = E[(X - µ)^2] = E[X^2 - 2µX + µ^2] = E(X^2) - 2µE(X) + µ^2 = E(X^2) - µ^2 = E(X^2) - (E(X))^2.
We can use this formula for convenience of calculation.
The standard deviation of a random variable X is denoted by σ_X; it is the square root of the variance, i.e. σ_X = √Var(X).
Properties of Variance
For X, Y random variables and a, b constants:
1. Var(aX + b) = a^2 Var(X).
Proof: Var(aX + b) = E[(aX + b - E(aX + b))^2] = E[(aX - aE(X))^2] = a^2 E[(X - E(X))^2] = a^2 Var(X).
2. Var(aX + bY) = a^2 Var(X) + b^2 Var(Y) + 2ab E[(X - E(X))(Y - E(Y))].
Proof: Expand (aX + bY - E(aX + bY))^2 = (a(X - E(X)) + b(Y - E(Y)))^2 = a^2 (X - E(X))^2 + b^2 (Y - E(Y))^2 + 2ab (X - E(X))(Y - E(Y)) and take expectations of each term.
3. Var(X) ≥ 0.
4. Var(X) = 0 if and only if X = E(X) with probability 1.
5. Var(a) = 0.
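Property 1 can be sanity-checked numerically. The distribution of X and the constants a, b below are arbitrary choices, so this is only a sketch.

    import numpy as np

    rng = np.random.default_rng(3)
    x = rng.exponential(2.0, size=400_000)   # any X will do; Exponential(scale 2) is arbitrary
    a, b = -3.0, 7.0

    print(np.var(a * x + b))                 # Var(aX + b)
    print(a ** 2 * np.var(x))                # a^2 Var(X); agrees up to simulation error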
Examples
1. Suppose X ~ Uniform(0, 1); then E(X) = 1/2 and E(X^2) = 1/3, and therefore Var(X) = 1/3 - (1/2)^2 = 1/12.
2. Suppose X ~ Geometric(p) (trials up to and including the first success); then E(X) = 1/p and E(X^2) = (2 - p)/p^2, and therefore Var(X) = (2 - p)/p^2 - 1/p^2 = (1 - p)/p^2.
3. Suppose X ~ Bernoulli(p); then E(X) = p and E(X^2) = p, and therefore Var(X) = p - p^2 = p(1 - p).
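A quick simulation check of Examples 1 and 3; p = 0.4 is an arbitrary choice, so this is only a sketch.

    import numpy as np

    rng = np.random.default_rng(4)

    u = rng.uniform(0, 1, size=500_000)
    print(np.var(u), 1 / 12)                    # Var(X) for Uniform(0, 1)

    p = 0.4
    bern = (rng.random(500_000) < p).astype(float)
    print(np.var(bern), p * (1 - p))            # Var(X) for Bernoulli(p)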
Example
Suppose X ~ Uniform(2, 4) and let Y be a given function of X. Find E(Y). What if X ~ Uniform(-4, 4)?
Functions of Random Variables
In some cases we would like to find the distribution of Y = h(X) when the distribution of X is known.
Discrete case: Y = h(X) has pmf P(Y = y) = Σ_{x: h(x) = y} P(X = x).
Examples
1. Let Y = aX + b, a ≠ 0; then P(Y = y) = P(aX + b = y) = P(X = (y - b)/a).
2. Let Y = h(X) for another given h; the same formula applies, summing P(X = x) over all x with h(x) = y.
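A minimal sketch of the discrete case, building the pmf of Y by summing P(X = x) over each preimage. X uniform on {-2, …, 2} and h(x) = x^2 are assumed illustrative choices, not the example from the slide.

    from collections import defaultdict

    pmf_x = {x: 1 / 5 for x in (-2, -1, 0, 1, 2)}   # assumed pmf of X
    h = lambda x: x ** 2                             # assumed transformation

    pmf_y = defaultdict(float)
    for x, p in pmf_x.items():
        pmf_y[h(x)] += p     # P(Y = y) = sum of P(X = x) over {x : h(x) = y}

    print(dict(pmf_y))        # {4: 0.4, 1: 0.4, 0: 0.2}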
Continuous case – Examples
1. Suppose X ~ Uniform(0, 1) and let Y = h(X) for a given h; then the cdf of Y can be found as follows: F_Y(y) = P(Y ≤ y) = P(h(X) ≤ y), which is rewritten as a statement about X and evaluated from the distribution of X. The density of Y is then given by f_Y(y) = d/dy F_Y(y).
2. Let X have the exponential distribution with parameter λ. Find the density of the given function of X by the same cdf method.
3. Suppose X is a random variable with a given density. Check that it is a valid density (non-negative and integrating to 1) and find the density of the given function of X.
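A sketch of the cdf method on an assumed example (the transformations used on the slide are not shown above): X ~ Uniform(0, 1) and Y = -ln X, so F_Y(y) = P(-ln X ≤ y) = P(X ≥ e^{-y}) = 1 - e^{-y} for y > 0, i.e. Y ~ Exponential(1).

    import numpy as np

    rng = np.random.default_rng(5)
    y = -np.log(rng.uniform(0, 1, size=500_000))   # draws of Y = -ln X

    for t in (0.5, 1.0, 2.0):
        print(np.mean(y <= t), 1 - np.exp(-t))     # empirical cdf vs. derived cdf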
Question
Can we formulate a general rule for densities, so that we don’t have to go through the cdf each time?
Answer: sometimes…
Suppose Y = h(X); then X = h^{-1}(Y) and f_Y(y) = f_X(h^{-1}(y)) |d/dy h^{-1}(y)|, but we need h to be monotone on the region where the density of X is non-zero.
Check with the previous examples:
1. X ~ Uniform(0, 1) with the Y of Example 1.
2. X ~ Exponential(λ) with the Y of Example 2.
3. X with the density of Example 3 and the corresponding Y.
In each case the general rule reproduces the density found with the cdf method.
Theorem
If X is a continuous random variable with density f_X(x) and h is a strictly increasing and differentiable function from R to R, then Y = h(X) has density f_Y(y) = f_X(h^{-1}(y)) d/dy h^{-1}(y) for y in the range of h.
Proof: F_Y(y) = P(Y ≤ y) = P(h(X) ≤ y) = P(X ≤ h^{-1}(y)) = F_X(h^{-1}(y)), since h (and hence h^{-1}) is increasing. Differentiating with respect to y and applying the chain rule gives f_Y(y) = f_X(h^{-1}(y)) d/dy h^{-1}(y).
Theorem
If X is a continuous random variable with density f_X(x) and h is a strictly decreasing and differentiable function from R to R, then Y = h(X) has density f_Y(y) = -f_X(h^{-1}(y)) d/dy h^{-1}(y) = f_X(h^{-1}(y)) |d/dy h^{-1}(y)| for y in the range of h.
Proof: F_Y(y) = P(Y ≤ y) = P(h(X) ≤ y) = P(X ≥ h^{-1}(y)) = 1 - F_X(h^{-1}(y)), since h is decreasing. Differentiating with respect to y gives f_Y(y) = -f_X(h^{-1}(y)) d/dy h^{-1}(y), which is non-negative because d/dy h^{-1}(y) ≤ 0.
Summary
If Y = h(X) and h is monotone, then f_Y(y) = f_X(h^{-1}(y)) |d/dy h^{-1}(y)|.
Example
X has a given density; let Y = h(X) for the given monotone h. Compute the density of Y using the formula above.
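A sketch of the summary formula on an assumed example (not the one from the slide): X ~ Uniform(0, 1) and Y = h(X) = X^2, which is monotone on (0, 1). Here h^{-1}(y) = √y, so f_Y(y) = f_X(√y) · 1/(2√y) = 1/(2√y) for 0 < y < 1. The code compares P(a < Y < b) with ∫_a^b 1/(2√t) dt = √b - √a.

    import numpy as np

    rng = np.random.default_rng(6)
    y = rng.uniform(0, 1, size=500_000) ** 2        # draws of Y = X^2

    a, b = 0.1, 0.4
    empirical = np.mean((a < y) & (y < b))           # P(a < Y < b) from simulation
    theoretical = np.sqrt(b) - np.sqrt(a)            # integral of the derived density
    print(empirical, theoretical)                    # should agree closely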
Indicator Functions and Random Variables
Indicator function – definition
Let A be a set of real numbers. The indicator function of A is defined by I_A(x) = 1 if x ∈ A, and I_A(x) = 0 if x ∉ A.
Some properties of indicator functions: for example, I_A(x)^2 = I_A(x), I_{A∩B}(x) = I_A(x) I_B(x), and I_{A^c}(x) = 1 - I_A(x).
The support of a discrete random variable X is the set of values of x for which P(X = x) > 0. The support of a continuous random variable X with density f_X(x) is the set of values of x for which f_X(x) > 0.
Examples
A discrete random variable with a given pmf can have its pmf written compactly using an indicator of its support, i.e. p_X(x) = g(x) I_A(x) for all real x, where A is the support.
A continuous random variable with a given density can likewise have its density written as f_X(x) = g(x) I_A(x), so that a single expression is valid for all real x.
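A minimal sketch of a density written with an indicator; the specific density from the slide is not shown above, so f(x) = 2x I_(0,1)(x) is an assumed example. The code checks numerically that it integrates to 1 over the whole real line.

    import numpy as np

    def f(x):
        return 2 * x * ((0 < x) & (x < 1))   # 2x * I_(0,1)(x), valid for all real x

    x = np.linspace(-1, 2, 300_001)
    dx = x[1] - x[0]
    print(np.sum(f(x)) * dx)                  # Riemann sum; approximately 1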
Important Indicator Random Variable
If A is an event, then I_A is the random variable which is 0 if A does not occur and 1 if it does; I_A is an indicator random variable. I_A is also called a Bernoulli random variable. If we perform a random experiment repeatedly and each time record I_A, we get a sequence such as 1, 1, 0, 0, 0, 0, 1, 0, … The average of this list in the long run is E(I_A); it gives the proportion of repetitions in which A occurs, which in the long run is P(A). That is, P(A) = E(I_A).
Example: for a Bernoulli random variable X with success probability p we have E(X) = 0 · (1 - p) + 1 · p = p.
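A sketch of P(A) = E(I_A): estimate P(A) as the long-run average of the indicator. The event A = {a fair die shows 5 or 6} is an assumed example, not one from the slides.

    import numpy as np

    rng = np.random.default_rng(7)
    rolls = rng.integers(1, 7, size=200_000)   # repeated rolls of a fair die
    indicator = (rolls >= 5).astype(float)     # I_A for each repetition

    print(indicator.mean(), 2 / 6)             # average of I_A vs. P(A) = 1/3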
Use of Indicator Random Variables
Suppose X ~ Binomial(n, p). Let Y_1, …, Y_n be independent Bernoulli random variables with probability of success p. Then X can be thought of as X = Y_1 + ⋯ + Y_n, and therefore E(X) = E(Y_1) + ⋯ + E(Y_n) = np.
A similar trick works for the Negative Binomial. Suppose X ~ Negative Binomial(r, p). Let
X_1 be the number of trials until the 1st success,
X_2 be the number of trials between the 1st and 2nd successes,
⋮
X_r be the number of trials between the (r - 1)th and rth successes.
Then X = X_1 + ⋯ + X_r with each X_i ~ Geometric(p), and we have E(X) = E(X_1) + ⋯ + E(X_r) = r/p.
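A sketch of the Binomial decomposition: summing n independent Bernoulli(p) indicators gives one Binomial(n, p) draw, and the sample mean should be close to np. The values n = 20 and p = 0.3 are arbitrary choices.

    import numpy as np

    rng = np.random.default_rng(8)
    n, p = 20, 0.3
    bernoullis = rng.random((100_000, n)) < p   # 100,000 rows of n Bernoulli(p) indicators
    x = bernoullis.sum(axis=1)                  # each row sums to one Binomial(n, p) draw

    print(x.mean(), n * p)                      # E(X) = np from the decomposition
    print(x.var(), n * p * (1 - p))             # Var(X) = np(1 - p), since the Y_i are independent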