Advances in Random Matrix Theory (stochastic eigenanalysis)


1 Advances in Random Matrix Theory (stochastic eigenanalysis)
Alan Edelman, MIT: Department of Mathematics, Computer Science & AI Laboratories

2 Stochastic Eigenanalysis
Counterpart to stochastic differential equations
Emphasis on applications to engineering & finance
Beautiful mathematics: Random Matrix Theory, Free Probability
Raw material from: Physics, Combinatorics, Numerical Linear Algebra, Multivariate Statistics

3 Scalars, Vectors, Matrices
Mathematics: notation = power & less ink!
Computation: use those caches!
Statistics: Classical, Multivariate, Modern (Random Matrix Theory)
The Stochastic Eigenproblem:
* Mathematics of probabilistic linear algebra
* Emerging computational algorithms
* Emerging statistical techniques
Ideas from numerical computation that stand the test of time are right for mathematics!

4 Open Questions
Find new applications of spacing (or other) statistics
Cleanest derivation of Tracy-Widom?
“Finite” free probability?
Finite meets infinite: Muirhead meets Tracy-Widom
Software for stochastic eigen-analysis

5 Wigner’s Semi-Circle
The classical & most famous random eigenvalue theorem
Let S = random symmetric Gaussian
MATLAB: A=randn(n); S=(A+A')/2;
S is known as the Hermite ensemble
The normalized eigenvalue histogram is a semicircle
Precise statements require n→∞, etc.

6 Wigner’s Semi-Circle
The classical & most famous random eigenvalue theorem
Let S = random symmetric Gaussian
MATLAB: A=randn(n); S=(A+A')/2;   (A: n x n iid standard normals)
S is known as the Hermite ensemble
The normalized eigenvalue histogram is a semicircle
Precise statements require n→∞, etc.

7 Wigner’s Semi-Circle
The classical & most famous random eigenvalue theorem
Let S = random symmetric Gaussian
MATLAB: A=randn(n); S=(A+A')/2;
S is known as the Hermite ensemble
The normalized eigenvalue histogram is a semicircle
Precise statements require n→∞, etc.
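A minimal sketch of the experiment on this slide (the √(n/2) rescaling and the number of trials are my additions, not from the slide):

n = 1000; trials = 20; lam = [];
for t = 1:trials
    A = randn(n); S = (A + A')/2;        % the slide's Hermite ensemble
    lam = [lam; eig(S)/sqrt(n/2)];       % rescale so the limit lives on [-2, 2]
end
histogram(lam, 'Normalization', 'pdf'); hold on
x = linspace(-2, 2, 200);
plot(x, sqrt(4 - x.^2)/(2*pi), 'r')      % the semicircle density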

8 Wigner’s original proof
Compute E(tr A^{2p}) as n→∞.
Terms with too many distinct indices have some matrix element appearing to the first power; these vanish since the mean is 0.
Terms with too few distinct indices are not numerous enough to matter as n→∞.
When all is said and done, only a Catalan number is left for the 2p-th moment: C_p = binomial(2p, p)/(p+1).
The semicircle is the only distribution whose even moments are the Catalan numbers.
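A small numerical check of the moment argument (a sketch, not from the slides; the scaling of S is my assumption, chosen so the limiting 2p-th moments are exactly the Catalan numbers 1, 2, 5, 14):

n = 2000;
A = randn(n); S = (A + A') / (2*sqrt(n/2));    % Hermite matrix, spectrum scaled to [-2, 2]
lam = eig(S);
for p = 1:4
    fprintf('p = %d: empirical moment %.3f vs Catalan %d\n', ...
        p, mean(lam.^(2*p)), nchoosek(2*p, p)/(p + 1));
end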

9 Finite Versions of the semicircle
[Plots of the exact finite-n level densities for n = 2, 3, 4, 5.]

10 Finite Versions
[Plots of the exact finite-n level densities for n = 2, 3, 4, 5.]
Area under the curve on (-∞, x): can be expressed as sums of probabilities that certain tridiagonal determinants are positive.

11 Wigner’s Semi-Circle
Real numbers x: β=1.  Complex numbers x+iy: β=2.  Quaternions x+iy+jz+kw: β=4.  β=2½? (x+iy+jz?)
Defined through the joint eigenvalue density: const × ∏_{i<j} |x_i - x_j|^β × ∏_i exp(-x_i^2/2)
β = repulsion strength; β=0 means “no interference” and the spacings are Poisson.
Classical research treated only β=1, 2, 4, missing the link to Poisson, continuous techniques, etc.

12 Largest eigenvalue “convection-diffusion?”
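A hedged sketch related to this slide (my illustration, not the slide’s content): largest-eigenvalue fluctuations for S=(A+A')/2, with the centering and scaling below assumed from the standard GOE edge scaling, so the histogram should resemble the β=1 Tracy-Widom law.

n = 200; trials = 2000; v = zeros(trials, 1);
for t = 1:trials
    A = randn(n); S = (A + A')/2;
    v(t) = (max(eig(S)) - sqrt(2*n)) * sqrt(2) * n^(1/6);   % GOE edge centering/scaling
end
histogram(v, 'Normalization', 'pdf')     % compare with the beta = 1 Tracy-Widom density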

13 Haar or not Haar? “Uniform distribution on orthogonal matrices”
Gram-Schmidt, or [Q,R]=qr(randn(n))

14 Haar or not Haar? “Uniform distribution on orthogonal matrices”
Gram-Schmidt, or [Q,R]=qr(randn(n))
Eigenvalues wrong!
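A sketch of the point being made (the sign correction below is a standard remedy, my addition rather than a quote from the slide): the Q from a plain qr of a Gaussian matrix is not Haar distributed, and its eigenvalue angles show it; forcing diag(R) > 0 recovers the Gram-Schmidt convention and a Haar Q.

n = 50; trials = 400; naive = []; fixed = [];
for t = 1:trials
    [Q, R] = qr(randn(n));
    naive = [naive; angle(eig(Q))];            % plain qr: eigenvalue angles clump
    Qh = Q * diag(sign(diag(R)));              % force diag(R) > 0 (Gram-Schmidt convention)
    fixed = [fixed; angle(eig(Qh))];           % Haar: angles essentially uniform
end
subplot(1,2,1); histogram(naive, 40); title('qr only')
subplot(1,2,2); histogram(fixed, 40); title('sign-corrected')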

15 Longest Increasing Subsequence (n=4) (Baik-Deift-Johansson) (Okounkov’s proof)
[Figure: the permutations of n=4, colored by longest increasing subsequence length: green 4, yellow 3, red 2, purple 1.]
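For readers who want to experiment with this statistic, a sketch (my illustration; the simple O(n^2) dynamic program is a convenience, not the slide’s method):

n = 1000;
p = randperm(n);                 % a random permutation
L = ones(1, n);                  % L(i) = length of the LIS ending at position i
for i = 2:n
    for j = 1:i-1
        if p(j) < p(i)
            L(i) = max(L(i), L(j) + 1);
        end
    end
end
max(L)    % Baik-Deift-Johansson: ~ 2*sqrt(n) + n^(1/6) * (Tracy-Widom, beta = 2)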

16 Bulk spacing statistics “convection-diffusion?”
Bus wait times in Mexico
Energy levels of heavy atoms
Parked cars in London
Zeros of the Riemann zeta function
Mice brain wave spikes
Telltale sign: repulsion + optimality
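A sketch of the telltale sign (my illustration, not from the slide): bulk GOE spacings repel, unlike Poisson spacings. The crude unfolding (rescaling bulk spacings to unit mean) and the β=1 Wigner surmise used for comparison are my assumptions.

n = 1000; trials = 20; s = [];
for t = 1:trials
    A = randn(n); S = (A + A')/2;                 % GOE (beta = 1)
    lam = sort(eig(S));
    bulk = lam(abs(lam) < 0.2*sqrt(n));           % central eigenvalues, nearly constant density
    d = diff(bulk); s = [s; d/mean(d)];           % spacings rescaled to unit mean
end
histogram(s, 'Normalization', 'pdf'); hold on
x = linspace(0, 4, 200);
plot(x, (pi*x/2).*exp(-pi*x.^2/4), 'r')           % Wigner surmise: repulsion at s = 0
plot(x, exp(-x), 'k--')                           % Poisson: no repulsion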

17 “What’s my β?” web page
Cy’s tricks:
Maximum Likelihood Estimation
Bayesian probability
Kernel density estimation (Epanechnikov kernel)
Confidence intervals

18 Open Questions
Find new applications of spacing (or other) distributions
Cleanest derivation of Tracy-Widom?
“Finite” free probability?
Finite meets infinite: Muirhead meets Tracy-Widom
Software for stochastic eigen-analysis

19 Everyone’s Favorite Tridiagonal
n^2 · tridiag(1, -2, 1)   ≈   d^2/dx^2
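A quick check of this approximation (a sketch; the Dirichlet boundary conditions implicit in the finite matrix are my assumption): the top eigenvalues of n^2 · tridiag(1, -2, 1) approach -(kπ)^2 as n grows.

n = 200;
T = n^2 * (diag(-2*ones(n,1)) + diag(ones(n-1,1), 1) + diag(ones(n-1,1), -1));
lam = sort(eig(T), 'descend');
[lam(1:5), -((1:5)'*pi).^2]     % top eigenvalues vs -(k*pi)^2, k = 1..5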

20 Everyone’s Favorite Tridiagonal
n^2 · tridiag(1, -2, 1)  +  (1/(βn)^{1/2}) · diag(G_1, ..., G_n)   ≈   d^2/dx^2  +  (1/β^{1/2}) dW   (G_i: independent standard Gaussians)

21 Stochastic Operator Limit
β-Hermite tridiagonal model:  H_n^β  ~  (1/√2) · [symmetric tridiagonal with N(0,2) entries on the diagonal and χ_{(n-1)β}, χ_{(n-2)β}, ..., χ_β on the off-diagonal]
Stochastic operator limit:  d^2/dx^2  -  x  +  (2/β^{1/2}) dW
Cast of characters: Dumitriu, Sutton, Rider
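A sketch of sampling the tridiagonal model above for an arbitrary β (assumptions: the convention written above, a √(βn/2) rescaling chosen to put the spectrum on [-2, 2], and chi2rnd from the Statistics Toolbox):

n = 500; beta = 2.5;                             % any beta > 0, not just 1, 2, 4
d  = sqrt(2) * randn(n, 1);                      % N(0,2) diagonal entries
od = sqrt(chi2rnd(beta * (n-1:-1:1)))';          % chi_{(n-1)beta}, ..., chi_{beta}
H  = (diag(d) + diag(od, 1) + diag(od, -1)) / sqrt(2);
lam = eig(H) / sqrt(beta*n/2);                   % rescale so the spectrum sits on [-2, 2]
histogram(lam, 'Normalization', 'pdf'); hold on
x = linspace(-2, 2, 200); plot(x, sqrt(4 - x.^2)/(2*pi), 'r')   % semicircle for any beta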

22 Open Questions
Find new applications of spacing (or other) distributions
Cleanest derivation of Tracy-Widom?
“Finite” free probability?
Finite meets infinite: Muirhead meets Tracy-Widom
Software for stochastic eigen-analysis

23 Is it really the random matrices?
The excitement is that random matrix statistics are everywhere.
Random matrices, properly tridiagonalized, are discretizations of stochastic differential operators!
Eigenvalues of SDOs are not as well studied.
Deep down, this is what I believe is the important mechanism behind the spacings, not the random matrices! (See Brian Sutton’s thesis and Brian Rider’s papers on the connection to Schrödinger operators.)
For other statistics, though, deep down it is the matrices.

24 Open Questions
Find new applications of spacing (or other) distributions
Cleanest derivation of Tracy-Widom?
“Finite” free probability?
Finite meets infinite: Muirhead meets Tracy-Widom
Software for stochastic eigen-analysis

25 Open Questions
Find new applications of spacing (or other) distributions
Cleanest derivation of Tracy-Widom?
“Finite” free probability?
Finite meets infinite: Muirhead meets Tracy-Widom
Software for stochastic eigen-analysis

26 Free Probability
Free probability (the name refers to “free algebras”, meaning no strings attached)
Gets us past Gaussian ensembles and Wishart matrices

27 The flipping coins example
Classical probability: a coin is +1 or -1 with p = .5
[Figure: bar charts for x and y, each 50% at +1 and 50% at -1, and for x+y.]

28 The flipping coins example
Classical probability: a coin is +1 or -1 with p = .5; and the free analogue.
[Figure: eig(A) and eig(B) are each 50% at +1 and 50% at -1; histogram of eig(A+QBQ').]
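A sketch of the free version of the coin example (my illustration; the Haar Q via the sign-corrected qr carries over the assumption from the earlier Haar slide):

n = 2000;
A = diag(sign(randn(n, 1)));           % "coin" matrix: eigenvalues +1 or -1
B = diag(sign(randn(n, 1)));
[Q, R] = qr(randn(n));
Q = Q * diag(sign(diag(R)));           % Haar orthogonal Q (sign-corrected qr)
histogram(eig(A + Q*B*Q'), 'Normalization', 'pdf')
% classically x + y sits on {-2, 0, 2}; freely the spectrum spreads into
% the arcsine law 1/(pi*sqrt(4 - x^2)) on (-2, 2)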

29 From Finite to Infinite

30 From Finite to Infinite
Gaussian (m=1)

31 From Finite to Infinite
Gaussian (m=1); wiggly finite-m densities

32 From Finite to Infinite
Gaussian (m=1); wiggly finite-m densities; Wigner semicircle in the limit

33 Semi-circle law for different betas

34 Open Questions
Find new applications of spacing (or other) distributions
Cleanest derivation of Tracy-Widom?
“Finite” free probability?
Finite meets infinite: Muirhead meets Tracy-Widom
Software for stochastic eigen-analysis

35 Matrix Statistics
Many worked out in the 1950s and 1960s; see Muirhead, “Aspects of Multivariate Statistical Theory”.
Are two covariance matrices equal?
Does my matrix equal this matrix?
Is my matrix a multiple of the identity?
Answers require computation of hypergeometric functions of matrix argument, long thought computationally intractable.

36 The special functions of multivariate statistics
Hypergeometric functions of matrix argument: β=2 gives Schur polynomials; other values give Jack polynomials.
Orthogonal polynomials of matrix argument: begin with a weight w(x) on an interval I and require
∫ p_κ(x) p_λ(x) Δ(x)^β ∏_i w(x_i) dx_i = δ_κλ,
where Δ(x) = ∏_{i<j} (x_i - x_j). Jack polynomials, the analogues of x^m, are orthogonal for w = 1 on the unit circle.
Plamen Koev’s revolutionary computation; Dumitriu’s MOPS symbolic package.

37 Multivariate Orthogonal Polynomials & Hypergeometrics of Matrix Argument
The important special functions of the 21st century.
Begin with a weight w(x) on I:
∫ p_κ(x) p_λ(x) Δ(x)^β ∏_i w(x_i) dx_i = δ_κλ
Jack polynomials, the analogues of x^m, are orthogonal for w = 1 on the unit circle.

38 Smallest eigenvalue statistics
A=randn(m,n); hist(min(svd(A).^2))
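The one-liner above draws a single matrix; a sketch of the repeated experiment presumably intended (the sizes and trial count are my assumptions):

m = 100; n = 100; trials = 2000; v = zeros(trials, 1);
for t = 1:trials
    A = randn(m, n);
    v(t) = min(svd(A))^2;        % smallest eigenvalue of A*A'
end
hist(v, 50)                      % histogram of smallest-eigenvalue samples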

39 Multivariate Hypergeometric Functions

40 Multivariate Hypergeometric Functions

41 Open Questions
Find new applications of spacing (or other) distributions
Cleanest derivation of Tracy-Widom?
“Finite” free probability?
Finite meets infinite: Muirhead meets Tracy-Widom
Software for stochastic eigen-analysis

42 Plamen Koev’s clever idea

43 Symbolic MOPS applications
A=randn(n); S=(A+A')/2;
trace(S^4), det(S^3)

44 MOPS (Ioana Dumitriu): symbolic

45 Random Matrix Calculator

46 Encoding the semicircle: the algebraic secret
f(x) = sqrt(4 - x^2)/(2π)
m(z) = (-z + i·sqrt(4 - z^2))/2
L(m, z) ≡ m^2 + zm + 1 = 0
m(z) = ∫ (x - z)^{-1} f(x) dx   (Stieltjes transform)
Practical encoding: a polynomial L whose root m is the Stieltjes transform.
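A sketch of how the encoding is used in practice (my illustration): recover the density from the polynomial via the Stieltjes inversion f(x) = lim Im m(x + iε)/π.

xs = linspace(-2.5, 2.5, 501); f = zeros(size(xs)); ep = 1e-6;
for k = 1:numel(xs)
    m = roots([1, xs(k) + 1i*ep, 1]);      % roots of m^2 + z*m + 1 at z = x + i*ep
    f(k) = max(imag(m)) / pi;              % pick the root in the upper half plane
end
plot(xs, f); hold on
plot(xs, real(sqrt(4 - xs.^2))/(2*pi), '--')   % the semicircle, for comparison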

47 The Polynomial Method (RMTool)
The polynomial method for random matrices. Eigenvectors as well!

48 Plus +
X=randn(n,n), A=X+X':  m^2 + zm + 1 = 0
Y=randn(n,2n), B=Y*Y':  zm^2 + (2z-1)m + 2 = 0
A+B:  m^3 + (z+2)m^2 + (2z-1)m + 2 = 0
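A sketch checking the A+B polynomial numerically (assumptions: the normalizations A = (X+X')/sqrt(2n) and B = Y*Y'/(2n) are mine, chosen so the individual limits match the two quadratics above; the cubic itself is taken from the slide):

n = 1000;
X = randn(n);        A = (X + X') / sqrt(2*n);    % limit: m^2 + zm + 1 = 0
Y = randn(n, 2*n);   B = (Y * Y') / (2*n);        % limit: zm^2 + (2z-1)m + 2 = 0
histogram(eig(A + B), 'Normalization', 'pdf'); hold on
xs = linspace(-2.5, 5.5, 400); f = zeros(size(xs)); ep = 1e-6;
for k = 1:numel(xs)
    z = xs(k) + 1i*ep;
    m = roots([1, z + 2, 2*z - 1, 2]);             % the slide's cubic in m
    f(k) = max(imag(m)) / pi;                      % Stieltjes inversion
end
plot(xs, f, 'r')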

49 Times *
X=randn(n,n), A=X+X':  m^2 + zm + 1 = 0
Y=randn(n,2n), B=Y*Y':  zm^2 + (2z-1)m + 2 = 0
A*B:  m^4 z^2 - 2m^3 z + m^2 + 4mz + 4 = 0

50 Open Questions
Find new applications of spacing (or other) distributions
Cleanest derivation of Tracy-Widom?
“Finite” free probability?
Finite meets infinite: Muirhead meets Tracy-Widom
Software for stochastic eigen-analysis

51 Matrix Versions of Classical Stats
(MATLAB: A=randn(n), B=randn(n))
Ensemble | Orthogonal matrix factorization | MATLAB | Classical statistic
Hermite | Sym Eig | eig(A+A') | Normal
Laguerre | SVD | eig(A*A') | Chi-squared
Jacobi | GSVD | gsvd(A,B) | Beta
Fourier | Eig | [U,R]=qr(A+i*B) | -

52 The big structure
Ensemble | Orthogonal matrix factorization | Weight | Statistic | Graph theory | Symmetric space
Hermite | Sym Eig | exp(-x^2) | Normal | Complete graph | non-compact A, AI, AII
Laguerre | SVD | x^α e^{-x} | Chi-squared | Bipartite graph | AIII, BDI, CII
Jacobi | GSVD | (1-x)^α (1+x)^β | Beta | Regular graph | compact A, AI, AII, C, D, CI, DIII
Fourier | Eig | e^{iθ} | - | - | AIII, BDI, CDI

53 Summary
Stochastic Eigenanalysis
Emerging Techniques
Open Problems

