A Sparsification Approach for Temporal Graphical Model Decomposition
Ning Ruan, Kent State University
Joint work with Ruoming Jin (KSU), Victor Lee (KSU), and Kun Huang (OSU)
Motivation: Financial Markets
Motivation: Biological Systems
[Figure: microarray time-series profiles (fluorescence counts) alongside a protein-protein interaction network]
Vector Autoregression
- Univariate autoregression is self-regression for a time series
- VAR (vector autoregression) is the multivariate extension of autoregression
[Figure: a time series observed at t = 0, 1, 2, ..., T]
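As a hedged illustration of the VAR idea, the sketch below simulates a toy 2-series VAR(1) process and recovers its coefficient matrix by least squares. The coefficient values, series length, and noise level are hypothetical choices, not taken from the talk.

```python
import numpy as np

# Toy VAR(1): x_t = Phi @ x_{t-1} + noise.
# A nonzero Phi[i, j] means series j helps predict series i.
rng = np.random.default_rng(0)
Phi = np.array([[0.5, 0.2],
                [0.0, 0.7]])  # series 2 influences series 1, not vice versa

T, N = 400, 2
X = np.zeros((T, N))
for t in range(1, T):
    X[t] = Phi @ X[t - 1] + 0.1 * rng.standard_normal(N)

# Fit the VAR(1) coefficients by least squares: regress X[1:] on X[:-1].
B, *_ = np.linalg.lstsq(X[:-1], X[1:], rcond=None)
Phi_hat = B.T  # x_t ~ Phi_hat @ x_{t-1}
```

With enough samples, `Phi_hat` should be close to the generating matrix, including the zero entry that encodes the missing causal link.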
Granger Causality
Goal: reveal the causal relationship between two univariate time series.
- Y is Granger-causal for X at time t if X_{t-1} and Y_{t-1} together are a better predictor of X_t than X_{t-1} alone
- i.e., compare the magnitude of the prediction errors ε(t) vs. ε′(t)
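The error comparison behind this definition can be sketched directly: fit a restricted model (X on its own past) and a full model (X on its own past plus Y's past), then compare residual errors. The data-generating coefficients below are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)
T = 500
y = rng.standard_normal(T)
x = np.zeros(T)
for t in range(1, T):
    # y genuinely drives x in this toy example
    x[t] = 0.5 * x[t - 1] + 0.8 * y[t - 1] + 0.1 * rng.standard_normal()

def sse(A, b):
    """Sum of squared residuals of the least-squares fit b ~ A."""
    coef, *_ = np.linalg.lstsq(A, b, rcond=None)
    r = b - A @ coef
    return r @ r

# Restricted model: x_t on x_{t-1}.  Full model: x_t on [x_{t-1}, y_{t-1}].
err_restricted = sse(x[:-1, None], x[1:])
err_full = sse(np.column_stack([x[:-1], y[:-1]]), x[1:])

# Y is (informally) Granger-causal for X if adding Y's past shrinks the error.
granger_causal = err_full < err_restricted
```

In practice the comparison is made with a formal F-test rather than a raw inequality, but the quantity being compared is the same pair of residual errors.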
Temporal Graphical Modeling
Recover the causal structure among a group of relevant time series.
[Figure: eight time series X1-X8 and the temporal graphical model linking them; edges carry regression coefficients such as Φ_12]
The Problem
Given a temporal graphical model, can we decompose it to get a simpler global view of the interactions among relevant time series? How do we interpret these causal relationships?
Extra Benefit
Consider time series clustering from a new perspective!
[Figure: the eight time series X1-X8 grouped by similarity-based clustering vs. grouped by decomposing the temporal graphical model]
Clustered Regression Coefficient Matrix
Vector Autoregression Model
- Φ(u) is an N×N coefficient matrix
Clustered Regression Coefficient Matrix
1) if Φ(u)_ij ≠ 0, then time series i and j are in the same cluster
2) if time series i and j are not in the same cluster, then Φ(u)_ij = 0
[Figure: block-diagonal coefficient matrix; each cluster corresponds to a submatrix]
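The two conditions above say the coefficient matrix is block-diagonal under the cluster assignment. A small sketch of that check (the example matrix and labels are hypothetical):

```python
import numpy as np

def is_clustered(Phi, labels):
    """True iff Phi[i, j] != 0 implies i and j share a cluster label,
    i.e. Phi is block-diagonal under the given clustering."""
    Phi = np.asarray(Phi)
    labels = np.asarray(labels)
    nz = Phi != 0                                  # positions of nonzero coefficients
    same = labels[:, None] == labels[None, :]      # same-cluster indicator matrix
    return bool(np.all(same[nz]))

# Example: two 2x2 blocks on the diagonal.
Phi = np.array([[1., 2., 0., 0.],
                [3., 4., 0., 0.],
                [0., 0., 5., 0.],
                [0., 0., 6., 7.]])
```

Under the matching labels `[0, 0, 1, 1]` the check passes; under a mismatched labeling it fails, because some nonzero coefficient crosses cluster boundaries.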
Temporal Graphical Model Decomposition Cost
Goal: preserve prediction accuracy while reducing representation cost.
Given a temporal graphical model, the cost for model decomposition is the sum of a prediction-error term and an L2 penalty.
Problem
- This cost tends to group all time series into one cluster
Refined Cost for Decomposition
Balance the size of clusters
- C is an N×K membership matrix
The overall cost is the sum of three parts: prediction error, L2 penalty, and a cluster-size constraint.
Optimal Decomposition Problem
- Find a cluster membership matrix C and its regression coefficient matrix Φ such that the decomposition cost is minimal
[Figure: example 4×3 membership matrix C with rows (1,0,0), (1,0,0), (0,1,0), (0,0,1)]
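A hedged sketch of a three-part cost of this shape is below. The exact size-constraint term and the trade-off weights `lam` and `gamma` are assumptions for illustration; the slide does not give the precise formula.

```python
import numpy as np

def decomposition_cost(X, Phi, C, lam=0.1, gamma=0.1):
    """Illustrative three-part decomposition cost:
    squared prediction error + L2 penalty on Phi + a cluster-size
    balance term (here: sum of squared cluster sizes, which grows
    when one cluster absorbs everything).  lam and gamma are
    hypothetical trade-off weights."""
    pred_err = np.sum((X[1:] - X[:-1] @ Phi.T) ** 2)  # VAR(1) prediction error
    l2 = lam * np.sum(Phi ** 2)                       # L2 penalty
    sizes = C.sum(axis=0)                             # series per cluster
    balance = gamma * np.sum(sizes ** 2)              # size constraint
    return pred_err + l2 + balance
```

With the other two terms held fixed, the size term makes a balanced 2+2 split of four series strictly cheaper than putting all four in one cluster, which is exactly the degeneracy the refined cost is meant to prevent.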
Hardness of the Decomposition Problem
- Combined integer (membership matrix) and numerical (regression coefficient matrix) optimization problem
- Large number of unknown variables
  - N×K variables in the membership matrix
  - N×N variables in the regression coefficient matrix
Basic Idea for the Iterative Optimization Algorithm
- Relax the binary membership matrix C to a probabilistic membership matrix P
- Optimize the membership matrix while fixing the regression coefficient matrix
- Optimize the regression coefficient matrix while fixing the membership matrix
- Apply the two optimization steps iteratively to reach a locally optimal solution
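The alternating structure of these steps can be sketched as a generic skeleton: fix one block of variables, update the other, and stop when the cost stops decreasing. The two per-block optimizers are supplied by the caller; this is a sketch of the control flow, not the paper's exact update rules.

```python
def alternate(optimize_P, optimize_Phi, cost, P, Phi, max_iter=50, tol=1e-6):
    """Alternating-minimization skeleton: update membership P with Phi
    fixed, then coefficients Phi with P fixed, until the cost stalls.
    Each step cannot increase the cost, so this converges to a local
    optimum (not necessarily a global one)."""
    prev = cost(P, Phi)
    for _ in range(max_iter):
        P = optimize_P(P, Phi)      # step 1: membership update, Phi fixed
        Phi = optimize_Phi(P, Phi)  # step 2: coefficient update, P fixed
        cur = cost(P, Phi)
        if prev - cur < tol:        # no meaningful improvement: stop
            break
        prev = cur
    return P, Phi
```

As a usage example, running this skeleton on a toy two-variable convex quadratic with closed-form coordinate updates converges to the quadratic's minimizer.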
Overview of the Iterative Optimization Algorithm
[Figure: pipeline from time series data to the temporal graphical model; Step 1 optimizes the cluster membership matrix (quasi-Newton method), Step 2 optimizes the regression coefficient matrix (generalized ridge regression)]
Step 1: Optimize the Membership Matrix
Apply the Lagrange multiplier method.
Quasi-Newton method
- Approximate the Hessian matrix by updating it iteratively
Step 2: Optimize the Regression Coefficient Matrix
Decompose the cost function into N subfunctions.
Generalized Ridge Regression
- y_k is a vector derived from P and X (length L)
- X_k is a matrix derived from P and X (size L×N)
- For k = 1, this reduces to traditional ridge regression
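Each such subproblem has the standard ridge closed form. The sketch below solves one subproblem via the normal equations; how `y_k` and `X_k` are built from P and X is specific to the paper, so here they are generic inputs.

```python
import numpy as np

def ridge(Xk, yk, lam):
    """Closed-form ridge regression for one subproblem:
    minimizes ||yk - Xk @ w||^2 + lam * ||w||^2 by solving the
    normal equations (Xk^T Xk + lam * I) w = Xk^T yk."""
    n = Xk.shape[1]
    return np.linalg.solve(Xk.T @ Xk + lam * np.eye(n), Xk.T @ yk)
```

With `lam = 0` this is ordinary least squares; increasing `lam` shrinks the coefficients toward zero, which is what the L2 penalty in the decomposition cost does to Φ.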
Complexity Analysis
Step 1 is the computational bottleneck of the entire algorithm.
- The quasi-Newton step optimizes N×K + N variables, so updating the Hessian matrix dominates the cost
[Figure: example matrices illustrating the Hessian update and the coefficient-matrix computation]
Basic Idea for a Scalable Approach
- Utilize the variable dependence relationships to optimize each variable (or a small group of variables) independently, assuming the other relationships are fixed
- Convert the problem to a Maximum Weight Independent Set (MWIS) problem
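MWIS asks for a set of mutually non-adjacent vertices with maximum total weight; it is NP-hard in general, so scalable methods typically use heuristics. The sketch below is one simple greedy heuristic (pick the heaviest remaining vertex, discard its neighbors), included only to make the MWIS objective concrete; it is not the paper's specific method.

```python
def greedy_mwis(weights, edges):
    """Greedy heuristic for maximum-weight independent set:
    repeatedly pick the heaviest remaining vertex and remove it
    together with all of its neighbors.  Returns chosen vertex ids."""
    adj = {v: set() for v in range(len(weights))}
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    chosen, alive = [], set(range(len(weights)))
    while alive:
        v = max(alive, key=lambda u: weights[u])  # heaviest remaining vertex
        chosen.append(v)
        alive -= adj[v] | {v}                     # drop v and its neighbors
    return sorted(chosen)
```

On a path graph 0-1-2 with weights (1, 5, 1) the greedy rule selects only the middle vertex; with weights (4, 1, 3) it selects both endpoints.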
Experiments: Synthetic Data
Synthetic data generator
- Generate a community-based graph as the underlying temporal graphical model [Girvan and Newman 05]
- Assign random weights to the graphical model and generate time series data by recursive matrix multiplication [Arnold et al. 07]
Decomposition accuracy
- Find a matching between the clustering results and the ground-truth clusters such that the number of intersected variables is maximal
- Decomposition accuracy is the number of intersected variables divided by the total number of variables
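The accuracy measure above can be sketched directly: try label mappings from predicted clusters to ground-truth clusters, keep the one with the most agreements, and normalize. The brute-force search over permutations below is only practical for small cluster counts (the Hungarian algorithm would be the scalable choice).

```python
from itertools import permutations

def decomposition_accuracy(pred, truth):
    """Best-match clustering accuracy: map predicted cluster labels
    onto ground-truth labels so the number of matching variables is
    maximal, then divide by the total number of variables.
    Brute-forces all label mappings, so only use with small K."""
    labels_p = sorted(set(pred))
    labels_t = sorted(set(truth))
    best = 0
    for perm in permutations(labels_t, len(labels_p)):
        mapping = dict(zip(labels_p, perm))
        hits = sum(mapping[p] == t for p, t in zip(pred, truth))
        best = max(best, hits)
    return best / len(pred)
```

For example, a clustering that is the ground truth with relabeled cluster ids scores 1.0, while one misplaced variable out of four scores 0.75.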
Experiments: Synthetic Data (cont.)
Applied algorithms
- Iterative optimization algorithm based on the quasi-Newton method (newton)
- Iterative optimization algorithm based on the MWIS method (mwis)
- Benchmark 1: Pearson correlation test to generate the temporal graphical model, then Ncut [Shi00] for clustering (Cor_Ncut)
- Benchmark 2: directed spectral clustering [Zhou05] on the ground-truth temporal graphical model (Dcut)
Experimental Results: Synthetic Data
- On average, newton outperforms Cor_Ncut and Dcut by 27% and 32%, respectively
- On average, mwis outperforms Cor_Ncut and Dcut by 24% and 29%, respectively
Experimental Results: Synthetic Data (cont.)
- mwis outperforms Cor_Ncut by an average of 30%
- mwis outperforms Dcut by an average of 52%
Experiment: Real Data
Data
- Annual GDP growth rate (downloaded from http://www.ers.usda.gov/Data/Macroeconomics)
- 192 countries
Four time periods
- 1969-1979
- 1980-1989
- 1990-1999
- 1998-2007
Hierarchically bipartition into 6 or 7 clusters
Experimental Result: Real Data
Summary
- We formulate a novel objective function for the decomposition problem in temporal graphical modeling.
- We introduce an iterative optimization approach utilizing the quasi-Newton method and generalized ridge regression.
- We employ a maximum-weight-independent-set-based approach to speed up the quasi-Newton method.
- The experimental results demonstrate the effectiveness and efficiency of our approaches.
Thank you