
Chapter 3 DeGroot & Schervish

Functions of a Random Variable. Often we need the distribution of some function of a random variable X. For example, suppose X is the rate at which customers are served in a queue; then 1/X is the average waiting time. If we have the distribution of X, we should be able to determine the distribution of 1/X, or of any other function of X.

Random Variable with a Discrete Distribution. Example: Distance from the Middle. Let X have the uniform distribution on the integers 1, 2, ..., 9. Suppose that we are interested in how far X is from the middle of the distribution, namely 5. We could define Y = |X − 5| and compute probabilities such as Pr(Y = 1) = Pr(X ∈ {4, 6}) = 2/9.

Function of a Discrete Random Variable. Let X have a discrete distribution with p.f. f, and let Y = r(X) for some function r defined on the set of possible values of X. For each possible value y of Y, the p.f. g of Y is g(y) = Pr(Y = y) = Pr(r(X) = y) = Σ f(x), where the sum is taken over all x such that r(x) = y.

Distance from the Middle. The possible values of Y in the previous example are 0, 1, 2, 3, and 4. We see that Y = 0 if and only if X = 5, so g(0) = f(5) = 1/9. For every other value of Y, there are two values of X that give that value of Y. For example, {Y = 4} = {X = 1} ∪ {X = 9}. So g(y) = 2/9 for y = 1, 2, 3, 4.
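The discrete formula lends itself to a direct computation: accumulate f(x) into g(r(x)) for each possible value x. Here is a minimal Python sketch that reproduces the example above.

```python
# Compute the p.f. of Y = |X - 5| for X uniform on the integers 1..9.
from collections import defaultdict
from fractions import Fraction

f = {x: Fraction(1, 9) for x in range(1, 10)}  # p.f. of X

def r(x):
    return abs(x - 5)                          # Y = r(X)

g = defaultdict(Fraction)                      # Fraction() is 0
for x, px in f.items():
    g[r(x)] += px                              # add f(x) into g(r(x))

print(dict(g))  # {4: 2/9, 3: 2/9, 2: 2/9, 1: 2/9, 0: 1/9}
```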

Random Variable with a Continuous Distribution. If a random variable X has a continuous distribution, then the procedure for deriving the probability distribution of a function of X differs from that for a discrete distribution. One way to proceed is by direct calculation of the c.d.f., as in the next example.

Average Waiting Time. Let Z be the rate at which customers are served in a queue, and suppose that Z has a continuous c.d.f. F. The average waiting time is Y = 1/Z. If we want to find the c.d.f. G of Y, we can write, for y > 0, G(y) = Pr(Y ≤ y) = Pr(1/Z ≤ y) = Pr(Z ≥ 1/y) = 1 − F(1/y), since Z > 0 and F is continuous.
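As a sanity check of G(y) = 1 − F(1/y), here is a Monte Carlo sketch assuming, purely for illustration, that Z ~ Exponential(1), so F(z) = 1 − exp(−z) and G(y) = exp(−1/y) for y > 0.

```python
import numpy as np

rng = np.random.default_rng(0)
z = rng.exponential(size=100_000)   # assumed service rates Z ~ Exponential(1)
w = 1.0 / z                         # Y = 1/Z, the average waiting time

for y in (0.5, 1.0, 2.0, 5.0):
    # empirical Pr(Y <= y) vs. the exact value G(y) = exp(-1/y)
    print(y, (w <= y).mean(), np.exp(-1.0 / y))
```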

Random Variable with a Continuous Distribution. In general, suppose that the p.d.f. of X is f and that another random variable is defined as Y = r(X). For each real number y, the c.d.f. G(y) of Y can be derived as follows: G(y) = Pr(Y ≤ y) = Pr(r(X) ≤ y) = ∫ f(x) dx, where the integral is taken over the set {x : r(x) ≤ y}. If the random variable Y also has a continuous distribution, its p.d.f. g can be obtained from the relation g(y) = dG(y)/dy.
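The c.d.f. method can also be carried out symbolically. The sketch below uses sympy with assumed inputs, not taken from the text: X uniform on (0, 1) and Y = r(X) = X².

```python
import sympy as sp

x, y = sp.symbols('x y', positive=True)
f = sp.Integer(1)                        # p.d.f. of X on (0, 1)

# G(y) = Pr(X**2 <= y) = Pr(X <= sqrt(y)): integrate f over {x : r(x) <= y}
G = sp.integrate(f, (x, 0, sp.sqrt(y)))
g = sp.diff(G, y)                        # g(y) = dG(y)/dy

print(G)  # sqrt(y)
print(g)  # 1/(2*sqrt(y)), valid for 0 < y < 1
```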

Direct Derivation of the p.d.f. Let r be a differentiable one-to-one function on the open interval (a, b). Then r is either strictly increasing or strictly decreasing. Because r is also continuous, it maps the interval (a, b) to another open interval (α, β), called the image of (a, b) under r. That is, for each x ∈ (a, b), r(x) ∈ (α, β), and for each y ∈ (α, β) there is an x ∈ (a, b) such that y = r(x); this x is unique because r is one-to-one. So the inverse s of r exists on the interval (α, β), meaning that for x ∈ (a, b) and y ∈ (α, β) we have r(x) = y if and only if s(y) = x.

Theorem. Let X be a random variable for which the p.d.f. is f and for which Pr(a < X < b) = 1. Here, a and/or b can be either finite or infinite. Let Y = r(X), and suppose that r(x) is differentiable and one-to-one for a < x < b. Let (α, β) be the image of the interval (a, b) under the function r. Let s(y) be the inverse function of r(x) for α < y < β. Then the p.d.f. g of Y is g(y) = f(s(y)) |ds(y)/dy| for α < y < β, and g(y) = 0 otherwise.

Proof. If r is increasing, then s is increasing, and for each y ∈ (α, β), G(y) = Pr(Y ≤ y) = Pr(r(X) ≤ y) = Pr(X ≤ s(y)) = F(s(y)), so g(y) = dG(y)/dy = f(s(y)) ds(y)/dy. Because s is increasing, ds(y)/dy is positive; hence it equals |ds(y)/dy|, and this equation implies the theorem. Similarly, if r is decreasing, then s is decreasing, and for each y ∈ (α, β), G(y) = Pr(r(X) ≤ y) = Pr(X ≥ s(y)) = 1 − F(s(y)), so g(y) = −f(s(y)) ds(y)/dy. Since s is strictly decreasing, ds(y)/dy is negative, so that −ds(y)/dy equals |ds(y)/dy|. It follows that the equation implies the theorem.
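A numerical check of the theorem, with assumed inputs: X uniform on (0, 1) and r(x) = −log x, which is decreasing with inverse s(y) = e^(−y), so the theorem gives g(y) = f(s(y))|ds(y)/dy| = e^(−y) for y > 0.

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.uniform(size=100_000)
y = -np.log(x)                      # Y = r(X)

# Compare a histogram density estimate of Y with g(y) = exp(-y).
counts, edges = np.histogram(y, bins=30, range=(0, 5), density=True)
mids = (edges[:-1] + edges[1:]) / 2
print(np.max(np.abs(counts - np.exp(-mids))))  # small, e.g. below 0.05
```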

The Probability Integral Transformation. Let X be a continuous random variable with p.d.f. f(x) = exp(−x) for x > 0 and 0 otherwise. The c.d.f. of X is F(x) = 1 − exp(−x) for x > 0 and 0 otherwise. If we let F be the function r, we can find the distribution of Y = F(X). The c.d.f. of Y is, for 0 < y < 1, G(y) = Pr(Y ≤ y) = Pr(1 − exp(−X) ≤ y) = Pr(X ≤ −log(1 − y)) = F(−log(1 − y)) = y, which is the c.d.f. of the uniform distribution on the interval [0, 1]. It follows that Y has the uniform distribution on the interval [0, 1].

Theorem. Let X have a continuous c.d.f. F, and let Y = F(X). This transformation from X to Y is called the probability integral transformation. The distribution of Y is the uniform distribution on the interval [0, 1].

Proof. First, because F is the c.d.f. of a random variable, 0 ≤ F(x) ≤ 1 for −∞ < x < ∞. Therefore, Pr(Y < 0) = Pr(Y > 1) = 0. Since F is continuous, the set of x such that F(x) = y is a nonempty closed and bounded interval [x0, x1] for each y in the interval (0, 1). Let F^−1(y) denote the lower endpoint x0 of this interval, which was called the y quantile of F. In this way, Y ≤ y if and only if X ≤ x1. Let G denote the c.d.f. of Y. Then G(y) = Pr(Y ≤ y) = Pr(X ≤ x1) = F(x1) = y. Hence, G(y) = y for 0 < y < 1. Because this function is the c.d.f. of the uniform distribution on the interval [0, 1], that uniform distribution is the distribution of Y.
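An empirical sketch of the probability integral transformation, using the exponential example from the previous slide: samples of Y = F(X) = 1 − exp(−X) should behave like uniform draws on [0, 1].

```python
import numpy as np

rng = np.random.default_rng(2)
x = rng.exponential(size=100_000)
y = 1.0 - np.exp(-x)                # Y = F(X)

# The empirical c.d.f. of Y should be close to G(y) = y on (0, 1).
grid = np.linspace(0.05, 0.95, 19)
ecdf = np.array([(y <= t).mean() for t in grid])
print(np.max(np.abs(ecdf - grid)))  # small, e.g. below 0.01
```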

Functions of Two or More Random Variables When we observe data consisting of the values of several random variables, we need to summarize the observed values in order to be able to focus on the information in the data. Summarizing consists of constructing one or a few functions of the random variables. We now describe the techniques needed to determine the distribution of a function of two or more random variables.

Random Variables with a Discrete Joint Distribution. Suppose that n random variables X1, ..., Xn have a discrete joint distribution for which the joint p.f. is f, and that m functions Y1, ..., Ym of these n random variables are defined as follows:
Y1 = r1(X1, ..., Xn),
Y2 = r2(X1, ..., Xn),
...
Ym = rm(X1, ..., Xn).

Random Variables with a Discrete Joint Distribution. For given values y1, ..., ym of the m random variables Y1, ..., Ym, let A denote the set of all points (x1, ..., xn) such that
r1(x1, ..., xn) = y1,
r2(x1, ..., xn) = y2,
...
rm(x1, ..., xn) = ym.
Then the value of the joint p.f. g of Y1, ..., Ym is specified at the point (y1, ..., ym) by the relation g(y1, ..., ym) = Pr(Y1 = y1, ..., Ym = ym) = Σ f(x1, ..., xn), where the sum is taken over all points (x1, ..., xn) in A.
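A small sketch of this relation with an assumed example, not from the text: X1 and X2 are two independent fair dice and the single function is Y1 = X1 + X2, so A = {(x1, x2) : x1 + x2 = y1}.

```python
from collections import defaultdict
from fractions import Fraction

# joint p.f. of (X1, X2): 1/36 on each of the 36 outcomes
f = {(x1, x2): Fraction(1, 36) for x1 in range(1, 7) for x2 in range(1, 7)}

g = defaultdict(Fraction)
for (x1, x2), p in f.items():
    g[x1 + x2] += p                 # sum f over A = {(x1, x2) : x1 + x2 = y1}

print(g[7])  # 1/6, the most likely total
```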

Random Variables with a Continuous Joint Distribution

Direct Transformation of a Multivariate p.d.f. Let X1, ..., Xn have a continuous joint distribution for which the joint p.d.f. is f. Assume that there is a subset S of R^n such that Pr[(X1, ..., Xn) ∈ S] = 1. Define n new random variables Y1, ..., Yn as follows:
Y1 = r1(X1, ..., Xn),
Y2 = r2(X1, ..., Xn),
...
Yn = rn(X1, ..., Xn),
where we assume that the n functions r1, ..., rn define a one-to-one differentiable transformation of S onto a subset T of R^n.

Direct Transformation of a Multivariate p.d.f. Let the inverse of this transformation be given as follows:
x1 = s1(y1, ..., yn),
x2 = s2(y1, ..., yn),
...
xn = sn(y1, ..., yn).

Direct Transformation of a Multivariate p.d.f. Then the joint p.d.f. g of Y1, ..., Yn is
g(y1, ..., yn) = f(s1(y1, ..., yn), ..., sn(y1, ..., yn)) |J| for (y1, ..., yn) ∈ T, and 0 otherwise,
where J is the determinant of the n × n matrix of partial derivatives ∂si/∂yj and |J| denotes the absolute value of that determinant. The determinant J is called the Jacobian of the transformation specified by the equations above.
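A symbolic sketch of the Jacobian computation, for an assumed example not from the text: X1 and X2 i.i.d. Exponential(1), with Y1 = X1 + X2 and Y2 = X1/(X1 + X2); the inverse transformation is x1 = y1·y2, x2 = y1·(1 − y2).

```python
import sympy as sp

y1, y2 = sp.symbols('y1 y2', positive=True)
s1 = y1 * y2                        # x1 = s1(y1, y2)
s2 = y1 * (1 - y2)                  # x2 = s2(y1, y2)

# Jacobian: determinant of the matrix of partial derivatives of (s1, s2)
J = sp.Matrix([[sp.diff(s1, y1), sp.diff(s1, y2)],
               [sp.diff(s2, y1), sp.diff(s2, y2)]]).det()

f = sp.exp(-s1) * sp.exp(-s2)       # joint p.d.f. of (X1, X2) at (s1, s2)
g = sp.simplify(f * sp.Abs(J))      # g(y1, y2) = f(s1, s2) |J|

print(J)  # -y1, so |J| = y1
print(g)  # y1*exp(-y1): Y1 ~ Gamma(2, 1) and Y2 ~ Uniform(0, 1), independent
```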

Linear Transformations. Let X = (X1, ..., Xn) have a continuous joint distribution for which the joint p.d.f. is f. Define Y = (Y1, ..., Yn) by Y = AX, where A is a nonsingular n × n matrix. Then Y has a continuous joint distribution with p.d.f.
g(y) = f(A^−1 y) / |det A| for y ∈ R^n,
where A^−1 is the inverse of A.
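A rough Monte Carlo check of this p.d.f., with assumed inputs: X a standard bivariate normal vector and an arbitrary nonsingular 2 × 2 matrix A. The fraction of samples of Y = AX falling in a small box around a point, divided by the box area, should approximate g at that point.

```python
import numpy as np

rng = np.random.default_rng(3)
A = np.array([[2.0, 1.0], [0.0, 1.0]])        # assumed nonsingular matrix
x = rng.standard_normal((100_000, 2))         # rows are samples of X
y = x @ A.T                                   # Y = AX for each sample

def g(pt):
    """Density of Y at pt from the theorem: f(A^-1 pt) / |det A|."""
    x_pt = np.linalg.solve(A, pt)             # A^-1 pt
    f = np.exp(-x_pt @ x_pt / 2) / (2 * np.pi)
    return f / abs(np.linalg.det(A))

pt, h = np.array([1.0, 0.5]), 0.2
inside = np.all(np.abs(y - pt) <= h / 2, axis=1).mean()
print(inside / h**2, g(pt))                   # both roughly 0.07
```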