Generating Random Matrices


Generating Random Matrices. BIOS 524 Project by Brett Kliner and Abigail Robinson.

Goals of Project To use simulation to create a random vector X, where X ~ N(μ, Σ). To estimate the probability that W ≥ w, where W is a scalar quadratic form generated from the X matrix. Two cases are considered: W generated from the mean vector μ, and W generated from a k x 1 zero vector.

Applications This exercise is mostly academic, with uses in matrix algorithms and general linear models: for example, testing the hypothesis that the mean vector is equal to the zero vector. This will be useful in Dr. Johnson’s General Linear Models class next semester.

The Random X Vector The X vector (k x 1) will be replicated n times. X has mean vector μ (k x 1) and is formed using the covariance matrix Σ (k x k). The user may specify n, μ, and Σ. The mean vector μ replicated n times gives an n x k matrix.

The Random X Vector μ and Σ must have conformable dimensions so that matrix multiplication can occur. The covariance matrix must be symmetric, that is, Σ = Σ′. Σ must also be positive definite, which means that all of its eigenvalues must be positive.
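The project itself uses SAS/IML, but the generation step and the two checks above (symmetry and positive definiteness) can be sketched in a language-neutral way. The following is a minimal NumPy illustration, with the sample inputs (n, μ, Σ) chosen arbitrarily:

```python
import numpy as np

rng = np.random.default_rng(0)

# User-specified inputs: n replications, mean vector mu (k x 1),
# covariance matrix Sigma (k x k).  Values here are arbitrary examples.
n = 1000
mu = np.array([1.0, 2.0, 3.0])
Sigma = np.array([[2.0, 0.5, 0.0],
                  [0.5, 1.0, 0.3],
                  [0.0, 0.3, 1.5]])
k = len(mu)

# Sigma must be symmetric (Sigma = Sigma') ...
assert np.allclose(Sigma, Sigma.T)
# ... and positive definite: all eigenvalues strictly positive.
assert np.all(np.linalg.eigvalsh(Sigma) > 0)

# Cholesky factor L (Sigma = L L') turns i.i.d. standard normals z
# into draws with the requested covariance: x = mu + L z.
L = np.linalg.cholesky(Sigma)
Z = rng.standard_normal((n, k))
X = mu + Z @ L.T          # n x k matrix of multivariate normal draws
print(X.shape)            # (1000, 3)
```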

The Random X Vector Each column of the new n x k matrix will be summarized using PROC MEANS: the mean of each column, the standard deviation, and a 95% confidence interval on the mean. The n x k matrix will be compared to the VNORMAL matrix.

The Random X Vector The CALL VNORMAL routine will be used to generate an n x k matrix. PROC MEANS will be used to analyze each column: the mean of each column, the standard deviation, and a 95% confidence interval.
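The per-column summaries produced by PROC MEANS can be mimicked in NumPy. A hedged sketch follows; the inputs are illustrative, and the interval uses the normal critical value 1.96 rather than the t critical value that PROC MEANS would use:

```python
import numpy as np

rng = np.random.default_rng(1)
n, mu = 500, np.array([10.0, -2.0])
Sigma = np.array([[4.0, 1.0],
                  [1.0, 2.0]])
X = rng.multivariate_normal(mu, Sigma, size=n)   # n x k sample

# Per-column summaries analogous to PROC MEANS output:
means = X.mean(axis=0)
sds = X.std(axis=0, ddof=1)           # sample standard deviation
half = 1.96 * sds / np.sqrt(n)        # normal-approximation 95% half-width
ci_lower, ci_upper = means - half, means + half

for j in range(X.shape[1]):
    print(f"col {j}: mean={means[j]:.3f} sd={sds[j]:.3f} "
          f"95% CI=({ci_lower[j]:.3f}, {ci_upper[j]:.3f})")
```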

Computing W A quadratic form occurs when q = x′Ax. W is a quadratic form where W = (x − v)′Σ⁻¹(x − v), and v is a k x 1 vector of constants. We will consider two cases of v: v = μ and v = 0.
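The quadratic form above can be computed directly for a single draw. A minimal NumPy sketch (function name `quad_form` is hypothetical, not from the project):

```python
import numpy as np

# One draw x from N(mu, Sigma), and the quadratic form
# W = (x - v)' Sigma^{-1} (x - v) for a constant vector v.
mu = np.array([1.0, 2.0])
Sigma = np.array([[2.0, 0.5],
                  [0.5, 1.0]])
rng = np.random.default_rng(2)
x = rng.multivariate_normal(mu, Sigma)

def quad_form(x, v, Sigma):
    d = x - v
    # Solve Sigma w = d rather than forming an explicit inverse.
    return float(d @ np.linalg.solve(Sigma, d))

W_mu = quad_form(x, mu, Sigma)                 # case v = mu
W_0 = quad_form(x, np.zeros_like(mu), Sigma)   # case v = 0
print(W_mu, W_0)
```

Because Σ is positive definite, both values are guaranteed to be nonnegative.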

When v = μ When v = μ, the distribution of W is chi-square with k degrees of freedom. The value of w is specified by the user. The estimated probability that W ≥ w is compared to 1 − probchi(w, k).
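The comparison against 1 − probchi(w, k) can be illustrated by Monte Carlo. The sketch below avoids any statistics library by choosing k = 2, where the chi-square survival function has the closed form P(W ≥ w) = exp(−w/2); the inputs are assumptions for illustration only:

```python
import numpy as np

# Monte Carlo check that W = (x - mu)' Sigma^{-1} (x - mu) is
# chi-square with k degrees of freedom.  For k = 2 the tail
# probability is exactly exp(-w/2), playing the role that
# 1 - probchi(w, k) plays in the SAS program.
rng = np.random.default_rng(3)
mu = np.array([1.0, -1.0])
Sigma = np.array([[2.0, 0.6],
                  [0.6, 1.0]])
k, n, w = 2, 200_000, 4.0

X = rng.multivariate_normal(mu, Sigma, size=n)
D = X - mu
Sinv = np.linalg.inv(Sigma)
W = np.einsum("ij,jk,ik->i", D, Sinv, D)   # all n quadratic forms at once

empirical = np.mean(W >= w)
theoretical = np.exp(-w / 2)               # chi-square(2) tail, about 0.135
print(empirical, theoretical)
```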

When v = 0 When v = 0, the distribution of W is a non-central chi-square distribution with k degrees of freedom and non-centrality parameter ncp. Notice that when v = 0, W = x′Σ⁻¹x. The ncp is calculated by ncp = μ′Σ⁻¹μ.

When v = 0 The estimated probability that W ≥ w, where W = x′Σ⁻¹x, can be compared to 1 − probchi(w, k, ncp).
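The non-central case can be sanity-checked without a non-central chi-square CDF by using the moment identity E[W] = k + ncp. A hedged NumPy sketch, with illustrative inputs:

```python
import numpy as np

# When v = 0, W = x' Sigma^{-1} x is non-central chi-square with
# k degrees of freedom and ncp = mu' Sigma^{-1} mu.  A quick
# check uses the known mean E[W] = k + ncp.
rng = np.random.default_rng(4)
mu = np.array([1.0, 2.0, 0.5])
Sigma = np.array([[2.0, 0.3, 0.0],
                  [0.3, 1.0, 0.2],
                  [0.0, 0.2, 1.5]])
k, n = 3, 100_000

Sinv = np.linalg.inv(Sigma)
ncp = float(mu @ Sinv @ mu)                # non-centrality parameter

X = rng.multivariate_normal(mu, Sigma, size=n)
W = np.einsum("ij,jk,ik->i", X, Sinv, X)   # W = x' Sigma^{-1} x for each row

print(W.mean(), k + ncp)                   # should agree closely
```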

The SAS Code Let’s take a look at the SAS code that accomplishes these tasks. Please ask questions when they arise.