Analysis of Engineering and Scientific Data


Analysis of Engineering and Scientific Data
Semester 1 – 2019
Sabrina Streipert, s.streipert@uq.edu.au

Example - revisited
Joint pmf: P(X ≤ 2, Y < 5) =
Marginal expected values: E(X) = , E(Y) =

Covariance and Correlation
Definition: The covariance of X and Y is
cov(X, Y) := E[(X − E[X])(Y − E[Y])].
Basically, it is a measure of the amount of linear dependency between the variables.
The correlation (correlation coefficient) of X and Y is
ρ(X, Y) = cov(X, Y) / (√Var(X) √Var(Y)) ∈ [−1, 1].
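As an illustration (not part of the original slides), a minimal Python sketch that estimates covariance and correlation from samples; the sample size and the linear relationship Y = 2X + noise are arbitrary assumptions chosen only for demonstration.

import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=10_000)              # X ~ N(0, 1)
y = 2 * x + rng.normal(size=10_000)      # Y depends linearly on X (assumed example)

# Sample versions of the definitions above
cov_xy = np.mean((x - x.mean()) * (y - y.mean()))
rho_xy = cov_xy / (x.std() * y.std())

print(cov_xy)   # close to 2 = cov(X, 2X + Z)
print(rho_xy)   # close to 2 / sqrt(5) ≈ 0.894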

Properties of Variance and Covariance
► cov(X, Y) = cov(Y, X) = E[XY] − E[X]E[Y].
► cov(aX + bY, Z) = a cov(X, Z) + b cov(Y, Z).
► cov(X, X) = Var(X).
► Marginal Variance: Var(X) = E[X^2] − (E[X])^2.
► Var(X + Y) = Var(X) + Var(Y) + 2 cov(X, Y).
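A short numerical check of the last identity (an illustrative addition; the distributions and the dependence between x and y are arbitrary choices):

import numpy as np

rng = np.random.default_rng(1)
x = rng.exponential(2.0, size=100_000)
y = 0.5 * x + rng.normal(size=100_000)   # correlated with x by construction

lhs = np.var(x + y)
rhs = np.var(x) + np.var(y) + 2 * np.cov(x, y, bias=True)[0, 1]
print(lhs, rhs)   # the two values agree (the identity also holds for sample moments)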

Example - revisited
Variance: Var(X) =
Covariance: cov(X, Y) =
Correlation: ρ(X, Y) =

Conditional Probability Mass Function
Definition: If X, Y are discrete R.V. and P(X = x) > 0, then the conditional probability mass function of Y given X = x is:
P(Y = y | X = x) = P(X = x, Y = y) / P(X = x).
Definition: If X, Y are continuous R.V. and fX(x) > 0, then the conditional probability density function of Y given X = x is:
fY(y | x) = fX,Y(x, y) / fX(x).
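A small sketch of the discrete case (added for illustration): the joint pmf is stored as a 2-D array whose entries are invented for this example; dividing each row by the marginal of X gives the conditional pmf of Y given X.

import numpy as np

# Hypothetical joint pmf: joint[i, j] = P(X = i+1, Y = j+1) (values invented for illustration)
joint = np.array([[0.10, 0.05, 0.05],
                  [0.20, 0.10, 0.10],
                  [0.15, 0.15, 0.10]])

p_x = joint.sum(axis=1)                  # marginal pmf of X
cond_y_given_x = joint / p_x[:, None]    # row i holds P(Y = . | X = i+1)

print(cond_y_given_x.sum(axis=1))        # each conditional pmf sums to 1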

Conditional Cumulative Distribution Function
Conditional cdf: FY(y | X = x) = P(Y ≤ y | X = x).
► If X, Y are discrete R.V. and P(X = x) > 0, then FY(y | X = x) = Σ_{t ≤ y} P(Y = t | X = x).
► If X, Y are continuous R.V., then FY(y | X = x) = ∫_{−∞}^{y} fY(t | x) dt.

Conditional Expectation
► If X, Y are discrete R.V., then the conditional expectation of Y given X = x is:
E[Y | X = x] = Σ_y y P(Y = y | X = x),
and the conditional expectation of X given Y = y is:
E[X | Y = y] = Σ_x x P(X = x | Y = y).
► If X, Y are continuous R.V., then the conditional expectation of Y given X = x is:
E[Y | X = x] = ∫_{−∞}^{∞} y fY(y | x) dy.
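Continuing the same hypothetical joint pmf as in the earlier sketch (again an invented table, added for illustration), the conditional expectations E[Y | X = x] follow from one matrix-vector product:

import numpy as np

# Hypothetical joint pmf: joint[i, j] = P(X = i+1, Y = j+1)
joint = np.array([[0.10, 0.05, 0.05],
                  [0.20, 0.10, 0.10],
                  [0.15, 0.15, 0.10]])
y_vals = np.array([1, 2, 3])

cond = joint / joint.sum(axis=1, keepdims=True)   # P(Y = y | X = x), row-wise
e_y_given_x = cond @ y_vals                       # E[Y | X = x] for x = 1, 2, 3
print(e_y_given_x)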

Example: Now, we draw at random a point (X, Y ) from the 10 points on the triangle D below.

► Joint pmf: P(X = i, Y = j) = 1/10, (i, j) ∈ D.
► Marginal pmf of X: P(X = i) = , i ∈ {1, 2, 3, 4}.
► Marginal pmf of Y: P(Y = j) = , j ∈ {1, 2, 3, 4}.
► Conditional pmf: P(Y = j | X = i) =
► Conditional Expectation: E[Y | X = i] =
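A small enumeration sketch for this example (added for illustration). Since the figure of D is not reproduced here, the code assumes the triangle consists of the 10 lattice points with 1 ≤ j ≤ i ≤ 4; treat that set as a placeholder for the actual figure.

from fractions import Fraction
from collections import defaultdict

# Assumed triangle: D = {(i, j) : 1 <= j <= i <= 4}, 10 points, each with probability 1/10
D = [(i, j) for i in range(1, 5) for j in range(1, i + 1)]
p = Fraction(1, 10)

marg_x = defaultdict(Fraction)
marg_y = defaultdict(Fraction)
for i, j in D:
    marg_x[i] += p
    marg_y[j] += p

# Conditional pmf and conditional expectation of Y given X = i
for i in range(1, 5):
    cond = {j: p / marg_x[i] for (ii, j) in D if ii == i}
    e_y = sum(j * q for j, q in cond.items())
    print(i, dict(cond), e_y)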

Independence of two Random Variables
Definition: X, Y are independent R.V. if any event defined by X is independent of every event defined by Y, i.e.,
P((X ∈ A) ∩ (Y ∈ B)) = P(X ∈ A) P(Y ∈ B) for any A and B,
i.e., F(x, y) = FX(x) FY(y).
► i.e., (if X, Y are discrete R.V.): P(X = x, Y = y) = P(X = x) P(Y = y).
► i.e., (if X, Y are continuous R.V.): fX,Y(x, y) = fX(x) fY(y).

Independence - Properties
► If X, Y are independent −→ cov(X, Y) = 0.
► The converse does not hold: cov(X, Y) = 0 does not imply that X, Y are independent.
► If X, Y are independent −→ ρ(X, Y) = 0.
► If X, Y are independent −→ Var(aX + bY) = a^2 Var(X) + b^2 Var(Y).
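A classic counterexample for the second bullet (added for illustration): with X standard normal and Y = X^2, the covariance is zero even though Y is a deterministic function of X.

import numpy as np

rng = np.random.default_rng(2)
x = rng.normal(size=200_000)
y = x ** 2                       # completely determined by x, hence not independent of x

print(np.cov(x, y)[0, 1])        # approximately 0: E[X^3] - E[X]E[X^2] = 0
print(np.corrcoef(x, y)[0, 1])   # approximately 0 as well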

Example - revisited: Now, we draw at random a point (X, Y) from the 10 points on the triangle D below.
P(X = 2, Y = 2) =? P(X = 2) P(Y = 2) −→ Are X and Y independent?

Example: We draw at random a point (X, Y) from the 16 points on the square E below.
P(X = 2, Y = 2) =? P(X = 2) P(Y = 2) −→ Are X and Y independent?

Multiple Random Variables - Generalization
Let X1, X2, . . . , Xn be random variables (a random vector):
► If the Xi's are discrete, there exists a joint pmf p: p(x1, . . . , xn) = P(X1 = x1, . . . , Xn = xn).
► If the Xi's are continuous, there exists a joint pdf f: f(x1, . . . , xn) = ∂^n F(x1, . . . , xn) / (∂x1 · · · ∂xn).
► Joint cdf F: F(x1, x2, . . . , xn) = P(X1 ≤ x1, X2 ≤ x2, . . . , Xn ≤ xn).

Independence of multiple R.V.
► If X1, X2, . . . , Xn are discrete R.V., then they are independent if:
P(X1 = x1, . . . , Xn = xn) = P(X1 = x1) · P(X2 = x2) · · · P(Xn = xn), for all x1, x2, . . . , xn.
► If X1, X2, . . . , Xn are continuous R.V., then they are independent if:
f(x1, . . . , xn) = fX1(x1) · · · fXn(xn), for all x1, x2, . . . , xn.
► An infinite sequence X1, X2, . . . of R.V. is called independent if for any finite choice of indices i1, i2, . . . , in (none of them the same), Xi1, . . . , Xin are independent.

Expected value
► Let X1, . . . , Xn be discrete R.V.s with means µ1, . . . , µn.
► Let Y = a + b1X1 + b2X2 + · · · + bnXn, where a, b1, . . . , bn are constants. Then
E[Y] = E[a + b1X1 + b2X2 + · · · + bnXn] = a + b1E[X1] + · · · + bnE[Xn] = a + b1µ1 + · · · + bnµn.
► If X1, . . . , Xn are independent, then E[X1X2 · · · Xn] = E[X1]E[X2] · · · E[Xn].
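A quick Monte Carlo sanity check of the linearity rule (an illustrative addition; the distributions and coefficients are arbitrary choices):

import numpy as np

rng = np.random.default_rng(3)
n = 500_000
x1 = rng.poisson(4, size=n)          # mean 4
x2 = rng.binomial(10, 0.3, size=n)   # mean 3
a, b1, b2 = 2.0, 1.5, -0.5

y = a + b1 * x1 + b2 * x2
print(y.mean())                      # simulation estimate
print(a + b1 * 4 + b2 * 3)           # exact value: 2 + 6 - 1.5 = 6.5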

Jointly Gaussian RVs
► The n-dimensional density of the random vector X = (X1, . . . , Xn)^T (column vector), with X1, . . . , Xn independent and standard normal, is
fX(x) = (2π)^(−n/2) exp(−(1/2) x^T x).
► We now consider the transformation Z = µ + BX. The pdf of Z is
fZ(z) = 1 / √((2π)^n |Σ|) · exp(−(1/2) (z − µ)^T Σ^(−1) (z − µ)),
where Σ = BB^T.
► Z is said to have a multi-variate Gaussian (or normal) distribution with expectation vector µ and covariance matrix Σ.
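A minimal sketch of the construction Z = µ + BX (added for illustration): draw standard normal X, apply an assumed matrix B, and compare the sample covariance of Z with Σ = BB^T. The values of µ and B are arbitrary.

import numpy as np

rng = np.random.default_rng(4)
mu = np.array([1.0, -2.0])
B = np.array([[2.0, 0.0],
              [1.0, 0.5]])                # assumed, arbitrary choice

X = rng.standard_normal((100_000, 2))     # rows of independent N(0, 1) pairs
Z = mu + X @ B.T                          # Z = mu + B X, applied row by row

print(B @ B.T)                   # theoretical covariance Sigma
print(np.cov(Z, rowvar=False))   # sample covariance, close to Sigma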

Jointly Gaussian RVs
A very important property of the normal distribution holds for independent Xi ∼ N(µi, σi^2), i = 1, . . . , n. Specifically, the random variable
Y = a + Σ_{i=1}^{n} bi Xi
is distributed
N(a + Σ_{i=1}^{n} bi µi, Σ_{i=1}^{n} bi^2 σi^2).

Jointly Gaussian RVs
Example: A machine produces ball bearings with a N(1, 0.01) diameter (cm). The balls are placed on a sieve with a N(1.1, 0.04) diameter. The diameters of the balls and the sieve are assumed to be independent of each other. What is the probability that a ball will fall through?
Solution
► Let X ∼ N(1, 0.01) and Y ∼ N(1.1, 0.04).
► We need to calculate P(Y > X) = P(Y − X > 0).
► But T := Y − X ∼ N(0.1, 0.05). Hence,
P(T > 0) = P((T − 0.1)/√0.05 > −0.1/√0.05) = P(Z > −0.447) = Φ(0.447) ≈ 0.67.
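The same computation in a short script (illustrative; it uses scipy.stats.norm, where the scale parameter is the standard deviation):

from math import sqrt
from scipy.stats import norm

# X ~ N(1, 0.01), Y ~ N(1.1, 0.04); variances add for the independent difference T = Y - X
mean_t = 1.1 - 1.0
sd_t = sqrt(0.04 + 0.01)

p_fall_through = 1 - norm.cdf(0, loc=mean_t, scale=sd_t)   # P(T > 0)
print(p_fall_through)   # ≈ 0.67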

Jointly Gaussian RV - Example
Consider the 2-dimensional case with µ = (µ1, µ2)^T and
B = ( σ1, 0 ; ρσ2, √(1 − ρ^2) σ2 )   (rows separated by semicolons).
The covariance matrix is now
Σ = BB^T = ( σ1^2, ρσ1σ2 ; ρσ1σ2, σ2^2 ).
Therefore, the density is
fZ(z) = fZ(z1, z2) = 1 / (2πσ1σ2 √(1 − ρ^2)) × exp{ −1/(2(1 − ρ^2)) [ (z1 − µ1)^2/σ1^2 − 2ρ (z1 − µ1)(z2 − µ2)/(σ1σ2) + (z2 − µ2)^2/σ2^2 ] }.
This is the pdf of the bi-variate Gaussian distribution, which we encountered earlier.
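A small check of this formula (added for illustration; the parameter values are arbitrary): evaluate the expression above directly and compare with scipy's multivariate normal pdf.

import numpy as np
from scipy.stats import multivariate_normal

mu1, mu2, s1, s2, rho = 0.5, -1.0, 1.2, 0.8, 0.6   # arbitrary example parameters
z1, z2 = 1.0, -0.5

# Density written out as on the slide
coef = 1.0 / (2 * np.pi * s1 * s2 * np.sqrt(1 - rho**2))
q = ((z1 - mu1)**2 / s1**2
     - 2 * rho * (z1 - mu1) * (z2 - mu2) / (s1 * s2)
     + (z2 - mu2)**2 / s2**2)
by_hand = coef * np.exp(-q / (2 * (1 - rho**2)))

cov = [[s1**2, rho * s1 * s2], [rho * s1 * s2, s2**2]]
by_scipy = multivariate_normal(mean=[mu1, mu2], cov=cov).pdf([z1, z2])

print(by_hand, by_scipy)   # the two values agree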

Transformations of R.V. - Motivation
Let X1 be the daily sugar intake of Australians, X2 that of Europeans, and X3 that of Asians. Suppose we are interested in the mean daily sugar intake across these groups, that is, (X1 + X2 + X3)/3.
Let X1, . . . , Xn be the lifetimes of n components in a series system. Then the lifetime of the system is min{X1, . . . , Xn}.
Let X1, . . . , Xn be the risks of the n financial assets in a portfolio. A risk-averse person will look at max{X1, . . . , Xn} to analyse the risk of the portfolio.

Transformations of R.V. - Properties
► Let Xi be discrete R.V. for i = 1, . . . , n and Z = g(X1, . . . , Xn), then
E[Z] = Σ_{x1} · · · Σ_{xn} g(x1, . . . , xn) P(X1 = x1, . . . , Xn = xn).
► Let Xi be continuous R.V. for i = 1, . . . , n and Z = g(X1, . . . , Xn), then
E[Z] = ∫_{−∞}^{∞} · · · ∫_{−∞}^{∞} g(x1, . . . , xn) f(x1, . . . , xn) dx1 · · · dxn.
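A short sketch of the discrete formula (illustrative; the pmfs and the function g are invented for the example): compute E[max(X1, X2)] for two independent variables, each uniform on {1, . . . , 6}, by the double sum, and check it by simulation.

import itertools
import numpy as np

# Hypothetical example: X1, X2 independent, each uniform on {1, ..., 6}; g = max
vals = range(1, 7)
p = 1 / 6

exact = sum(max(x1, x2) * p * p for x1, x2 in itertools.product(vals, vals))

rng = np.random.default_rng(5)
sim = np.max(rng.integers(1, 7, size=(1_000_000, 2)), axis=1).mean()

print(exact)   # 161/36 ≈ 4.472
print(sim)     # close to the exact value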