Numerical Computations and Random Matrix Theory


1 Numerical Computations and Random Matrix Theory
Alan Edelman
MIT: Department of Mathematics; Computer Science & AI Laboratories
Friday, February 25, 2005

2 Wigner's Semi-Circle
The classical and most famous random eigenvalue theorem.
Let S be a random symmetric Gaussian matrix. MATLAB: A=randn(n); S=(A+A')/2;
S is known as the Gaussian Orthogonal Ensemble. The normalized eigenvalue histogram is a semi-circle. (Precise statements require n → ∞, etc.)

n=20; s=30000; d=.05;   % matrix size, samples, bin width
e=[];                   % gather up eigenvalues
im=1;                   % imaginary (1) or real (0)
for i=1:s,
  a=randn(n)+im*sqrt(-1)*randn(n);
  a=(a+a')/(2*sqrt(2*n*(im+1)));
  v=eig(a)';
  e=[e v];
end
hold off;
[m x]=hist(e,-1.5:d:1.5);
bar(x,m*pi/(2*d*n*s));
axis('square'); axis([-1.5 1.5 -1 2]);  % axis limits reconstructed; originals lost
hold on;
t=-1:.01:1; plot(t,sqrt(1-t.^2),'r');
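For readers without MATLAB, the same experiment can be sketched in Python with NumPy (an illustrative translation of the real case, im=0, with smaller sample counts chosen here for speed; they are not from the slide):

```python
import numpy as np

# Sample GOE matrices, normalize as on the slide, and collect eigenvalues.
# The normalized histogram should approximate the semicircle density
# (2/pi) * sqrt(1 - x^2) on [-1, 1].
rng = np.random.default_rng(0)
n, s = 50, 200                  # matrix size, number of samples (illustrative)
eigs = []
for _ in range(s):
    a = rng.standard_normal((n, n))
    a = (a + a.T) / (2 * np.sqrt(2 * n))   # same normalization as the slide (im=0)
    eigs.extend(np.linalg.eigvalsh(a))
eigs = np.asarray(eigs)

# Histogram as a density; the value near x = 0 should be close to 2/pi ~ 0.64.
hist, edges = np.histogram(eigs, bins=np.arange(-1.5, 1.5, 0.05), density=True)
centers = (edges[:-1] + edges[1:]) / 2
```

Plotting `hist` against `centers` alongside sqrt(1 - t^2), as the MATLAB code does, shows the semicircle emerging already at modest n.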

3 Tidbit of interest to "Matrix Computations" Audience
Condition numbers and Jacobians of matrix functions and factorizations, or: what is matrix calculus?

4 Matrix Functions and Factorizations
e.g. f(A) = A^2, or [L,U] = lu(A), or [Q,R] = qr(A)
U, R: n(n+1)/2 parameters
L, Q: n(n-1)/2 parameters
(Q parametrized globally via Householder; locally, the tangent space at Q is Q*antisymmetric)
The Jacobian, or "df", or "linearization", is n^2 x n^2
f: S -> S^2 (symmetric): df is n(n+1)/2 x n(n+1)/2
f: Q -> Q^2 (orthogonal): df is n(n-1)/2 x n(n-1)/2

5 Condition number of a matrix function or factorization
Jacobian determinant: J = ∏ σ_i(df) = |det(df)|
Example 1: f(A) = A^2, so df(A) = kron(I, A) + kron(A^T, I)
Example 2: f(A) = A^(-1), so df(A) = -kron(A^(-T), A^(-1)); then ||df(A)|| = ||A^(-1)||^2, giving κ = ||A|| ||A^(-1)||
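Both df formulas are easy to check numerically. The sketch below (Python/NumPy, with the column-major vec convention vec(AXB) = kron(B^T, A) vec(X), matching MATLAB's kron) verifies them on a random 4 x 4 example; the matrix sizes and seed are illustrative choices, not from the slide:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 4
A = rng.standard_normal((n, n))
X = rng.standard_normal((n, n))          # a perturbation direction dA

vec = lambda M: M.flatten(order='F')     # column-major vec, as in MATLAB

# Example 1: f(A) = A^2, df(A)[X] = AX + XA  <->  kron(I, A) + kron(A^T, I)
df1 = np.kron(np.eye(n), A) + np.kron(A.T, np.eye(n))
assert np.allclose(df1 @ vec(X), vec(A @ X + X @ A))

# Example 2: f(A) = A^{-1}, df(A)[X] = -A^{-1} X A^{-1}  <->  -kron(A^{-T}, A^{-1})
Ainv = np.linalg.inv(A)
df2 = -np.kron(Ainv.T, Ainv)
assert np.allclose(df2 @ vec(X), vec(-Ainv @ X @ Ainv))

# ||df(A)||_2 = ||A^{-1}||_2^2, since operator norms of Kronecker factors multiply
assert np.isclose(np.linalg.norm(df2, 2), np.linalg.norm(Ainv, 2) ** 2)
```

The last assertion is exactly the step that turns the Jacobian into the familiar condition number κ = ||A|| ||A^{-1}||.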

6 Matrix Factorization Jacobians
General:
A = LU:           ∏ u_ii^(n-i)
A = QR:           ∏ r_ii^(m-i)
A = UΣV^T:        ∏ (σ_i^2 - σ_j^2)
A = QS (polar):   ∏ (σ_i + σ_j)
A = XΛX^(-1):     ∏ (λ_i - λ_j)^2
Symmetric:
S = QΛQ^T:        ∏ (λ_i - λ_j)
S = LL^T:         2^n ∏ l_ii^(n+1-i)
S = LDL^T:        ∏ d_i^(n-i)
Orthogonal:
Q = U [C S; S -C] V^T:  ∏ sin(θ_i + θ_j) sin(θ_i - θ_j)
Tridiagonal:
T = QΛQ^T:        ∏ t_(i+1,i) / ∏ q_i

7 Tidbit of interest to “Matrix Computations” Audience and pure mathematicians!
The most analytical random matrices seen from on high:

8 Same structure everywhere!
(MATLAB: A=randn(n); B=randn(n))
Hermite  | Sym Eig | eig(A+A')
Laguerre | SVD     | eig(A*A')
Jacobi   | GSVD    | gsvd(A,B)
Fourier  | Eig     | [U,R]=qr(A+i*B)

9 Same structure everywhere!
Ensemble | Matrix comp | Weight          | Statistics  | Graph theory    | Symmetric space
Hermite  | Sym Eig     | exp(-x^2)       | Normal      | Complete graph  | non-compact A, AI, AII
Laguerre | SVD         | x^α e^(-x)      | Chi-squared | Bipartite graph | AIII, BDI, CII
Jacobi   | GSVD        | (1-x)^α (1+x)^β | Beta        | Regular graph   | compact A, AI, AII, C, D, CI, DIII
Fourier  | Eig         | e^(iθ)          |             |                 | AIII, BDI, CDI

10 Tidbit of interest to "Matrix Computations" Audience and combinatorialists!
The longest increasing subsequence

11 Longest Increasing Subsequence (n=4)
(Figure: permutations of 1..4 colored by LIS length. Green: 4, Yellow: 3, Red: 2, Purple: 1)

12 Random Matrix Result
The number of permutations of 1..n with longest increasing subsequence ≤ k is E(|tr(Q_k)|^(2n)), the 2n-th moment of the absolute trace of a random k x k orthogonal matrix.
The longest increasing subsequence is also the parallel complexity of an upper triangular solve whose sparsity is given by U_ij(π) ≠ 0 if π(i) ≤ π(j) and i ≤ j.
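The combinatorial side of this result is easy to check directly. A sketch (patience sorting for the LIS, brute force over S_4): the count of permutations of 1..n with LIS ≤ 2 is the Catalan number C_n, which for n = 4 is 14.

```python
import bisect
from itertools import permutations

def lis_length(seq):
    """Length of the longest increasing subsequence (patience sorting, O(n log n))."""
    piles = []
    for x in seq:
        i = bisect.bisect_left(piles, x)
        if i == len(piles):
            piles.append(x)       # start a new pile
        else:
            piles[i] = x          # place on the leftmost feasible pile
    return len(piles)

# Count permutations of 1..4 with LIS at most 2; the answer is Catalan C_4 = 14.
count = sum(1 for p in permutations(range(1, 5)) if lis_length(p) <= 2)
```

The same brute-force loop, run against a Monte Carlo estimate of E(|tr(Q_k)|^(2n)), is a pleasant way to test the random matrix identity for small n and k.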

13 Haar or not Haar?

14 Tidbit! Random tridiagonalization leads to eigenvalues of a billion-by-billion matrix!

15 Reducing a symmetric matrix to tridiagonal form
Householder reduction of a random symmetric Gaussian matrix yields a tridiagonal matrix with independent entries: Gaussians (G) on the diagonal and chi-distributed off-diagonals χ_6, χ_5, χ_4, χ_3, χ_2, χ_1 (shown for n=7).
Same eigenvalue distribution as A+A': O(n) storage!! O(n) compute.

16 General beta
Replacing the off-diagonals by χ_(6β), χ_(5β), ..., χ_β gives the general-β tridiagonal ensemble:
β = 1: reals; β = 2: complexes; β = 4: quaternions.
The bidiagonal version corresponds to the Wishart matrices of statistics.

17 Largest Eigenvalue of Hermite

18 Painlevé Equations

19 MATLAB
beta=1; n=1e9; opts.disp=0; opts.issym=1;
alpha=10; k=round(alpha*n^(1/3));              % cutoff parameter
d=sqrt(chi2rnd(beta*(n:-1:(n-k-1))))';         % chi-distributed off-diagonal
H=spdiags(d,1,k,k)+spdiags(randn(k,1),0,k,k);  % sparse k x k corner
H=(H+H')/sqrt(4*n*beta);                       % symmetrize and scale
eigs(H,1,1,opts)                               % largest eigenvalue (near 1)

20 Tricks to get O(n^9) speedup
- Sparse matrix storage (only O(n) storage is used).
- Tridiagonal ensemble formulas (any beta is available via the tridiagonal ensemble).
- The Lanczos algorithm for eigenvalue computation (computes the extreme eigenvalue faster than typical general-purpose eigensolvers).
- The shift-and-invert accelerator to Lanczos and Arnoldi (since we know the eigenvalues are near 1, we can accelerate convergence to the largest eigenvalue).
- The ARPACK software package, made available seamlessly in MATLAB (the Arnoldi package contains state-of-the-art data structures and numerical choices).
- The observation that if k = 10 n^(1/3), then the largest eigenvalue is determined numerically by the top k x k segment of the n x n matrix (an interesting mathematical statement related to the decay of the Airy function).
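The last trick above can be sketched in Python. This is a scaled-down illustration, not the slide's computation: n is taken as 10^6 rather than 10^9 so a dense symmetric eigensolver on the k x k corner suffices (k = 10 n^(1/3) as on the slide); the seed and the choice of a dense solver are my assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)
beta = 1
n = 10**6                                 # scaled down from the slide's 1e9
k = round(10 * n ** (1 / 3))              # k = 1000: the corner that matters

# Top k x k corner of the scaled beta-Hermite tridiagonal:
# Gaussian diagonal, chi-distributed off-diagonals chi_{beta*(n-1)}, ...
diag = rng.standard_normal(k)
off = np.sqrt(rng.chisquare(beta * np.arange(n - 1, n - k, -1.0)))  # k-1 values
H = np.diag(diag) + np.diag(off, 1)
H = (H + H.T) / np.sqrt(4 * n * beta)     # symmetrize and scale as in the slide

largest = np.linalg.eigvalsh(H)[-1]       # near 1, with O(n^(-2/3)) fluctuation
```

The point of the Airy-decay observation is visible here: a 1000 x 1000 dense solve recovers the edge eigenvalue of a million-by-million (or, with `eigs`, billion-by-billion) matrix.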

21 Tidbit of interest to "Matrix Computations" Audience: Stochastic Eigenequations
Continuous vs discrete:
Differential Equations : Matrix Computations :: Continuous Eigenproblems : Matrix Eigenproblems
Add probability:
Stochastic Differential Equations :: Stochastic Eigenequations
The finite case = Random Matrix Theory

22 Spacings of eigs of A+A’

23 Riemann Zeta Zeros

24 Stochastic Operator

25 Everyone's Favorite Tridiagonal
n^2 * tridiag(1, -2, 1)  ≈  d^2/dx^2
(the scaled second-difference matrix discretizes the second-derivative operator)

26 Everyone's Favorite Tridiagonal
n^2 * tridiag(1, -2, 1) + (βn)^(-1/2) * diag(G)  ≈  d^2/dx^2 + β^(-1/2) dW
(adding scaled Gaussian noise to the second-difference matrix discretizes a stochastic operator)

27 Stochastic Operator Limit
The β-Hermite tridiagonal H_n^β (diagonal entries N(0,2), off-diagonal entries χ_(β(n-1)), χ_(β(n-2)), ..., all scaled by 1/√2) converges, suitably centered and rescaled at the spectral edge, to the stochastic Airy operator
d^2/dx^2 - x + (2/√β) dW

28 Tidbit eig(A+B) = eig(A) + eig(B) ?????
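The tidbit's question can be answered numerically before free probability answers it structurally. A minimal sketch (sizes and seed are my illustrative choices): for independent symmetric A and B, the sorted eigenvalues of A+B are not the sums of the sorted individual eigenvalues; eigenvalues only add when the matrices commute.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 5
A = rng.standard_normal((n, n)); A = (A + A.T) / 2
B = rng.standard_normal((n, n)); B = (B + B.T) / 2

sum_of_eigs = np.sort(np.linalg.eigvalsh(A)) + np.sort(np.linalg.eigvalsh(B))
eigs_of_sum = np.sort(np.linalg.eigvalsh(A + B))
# Generically these differ; free probability describes eigs_of_sum as n grows.

# Sanity check of the commuting case: B = A, where eigenvalues DO add.
commuting_ok = np.allclose(np.linalg.eigvalsh(A + A), 2 * np.linalg.eigvalsh(A))
```

This is exactly the gap that free probability (next slide) fills: it predicts the spectrum of A+B from the spectra of A and B for large independent random matrices.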

29 Free Probability vs Classical Probability

30 Random Matrix Calculator

31 How to use calculator

32 Steps 1 and 2

33 Steps 3 and 4

34 Steps 5 and 6

35 Multivariate Orthogonal Polynomials & Hypergeometrics of Matrix Argument
The important special functions of the 21st century.
Begin with a weight w(x) on an interval I and require
∫ p_κ(x) p_λ(x) Δ(x)^β ∏_i w(x_i) dx_i = δ_κλ
Jack polynomials: orthogonal for w = 1 on the unit circle; the analogs of x^m.

36 Multivariate Hypergeometric Functions

37 Multivariate Hypergeometric Functions

38 Plamen's clever idea

39 Smallest eigenvalue statistics
% collect the smallest squared singular value over many samples
s=1000; v=zeros(1,s);   % s: number of samples (added; the slide drew a single sample)
for i=1:s, A=randn(m,n); v(i)=min(svd(A))^2; end
hist(v)
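The same experiment in Python/NumPy, for readers following along there (the sizes m, n, the sample count, and the seed are illustrative assumptions; the slide leaves m and n unspecified):

```python
import numpy as np

# Histogram the smallest squared singular value of random m x n Gaussian
# matrices, i.e. the smallest eigenvalue of the Wishart matrix A^T A.
rng = np.random.default_rng(4)
m, n, s = 20, 20, 500                      # illustrative sizes and sample count
samples = np.array([
    np.linalg.svd(rng.standard_normal((m, n)), compute_uv=False).min() ** 2
    for _ in range(s)
])
hist, edges = np.histogram(samples, bins=30)
```

For square matrices (m = n) the smallest eigenvalue crowds toward zero, which is exactly the regime where its exact distribution is known in closed form.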

40 Symbolic MOPS applications
A=randn(n); S=(A+A')/2;
trace(S^4)
det(S^3)
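Quantities like these have exact expected values that symbolic MOPS computes in closed form; a Monte Carlo sketch can confirm one simple case. For S = (A+A')/2 with A = randn(n), the diagonal entries have variance 1 and off-diagonals variance 1/2, so E[trace(S^2)] = n + n(n-1)/2 = n(n+1)/2 (the moment choice, n = 4, and the seed are mine, not from the slide):

```python
import numpy as np

# Monte Carlo estimate of E[trace(S^2)] for the GOE-style S = (A+A')/2.
# Exact value: n(n+1)/2 = 10 for n = 4.
rng = np.random.default_rng(5)
n, s = 4, 20000
total = 0.0
for _ in range(s):
    A = rng.standard_normal((n, n))
    S = (A + A.T) / 2
    total += np.trace(S @ S)
estimate = total / s    # should be close to 10
```

Symbolic MOPS delivers such moments (and far more complicated ones, like trace(S^4) or det(S^3)) exactly, with no sampling error.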

41 Summary Linear Algebra + Randomness !!!

