ILAS 2004
Threshold partitioning for the iterative aggregation–disaggregation method
Ivana Pultarova, Czech Technical University in Prague, Czech Republic
We consider a column-stochastic irreducible matrix B of order N × N. The Problem is to find the stationary probability vector x_p: B x_p = x_p, x_p > 0, ||x_p|| = 1. We explore the iterative aggregation–disaggregation (IAD) method.

Notation:
- Spectral decomposition of B: B = P + Z, P^2 = P, ZP = PZ = 0, r(Z) < 1 (r denotes the spectral radius).
- Number of aggregation groups n, n < N.
- Restriction matrix R of order n × N; its elements are 0 or 1 and all its column sums equal 1.
- Prolongation matrix S(x) of order N × n for any positive vector x: (S(x))_ij := x_i iff (R)_ji = 1; then every column is divided by its sum.
- Projection matrix P(x) = S(x) R of order N × N.
- ||·|| denotes the 1-norm.
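As an illustration, the restriction, prolongation, and projection matrices above can be built directly. This is a minimal sketch (the 4-state example and its grouping are hypothetical), checking that R S(x) = I and P(x) x = x:

```python
import numpy as np

def restriction(groups, N):
    """R (n x N): 0/1 entries, each column sums to 1 (each state in one group)."""
    n = max(groups) + 1
    R = np.zeros((n, N))
    for i, g in enumerate(groups):
        R[g, i] = 1.0
    return R

def prolongation(R, x):
    """S(x) (N x n): (S(x))_ij = x_i iff (R)_ji = 1, columns normalized to sum 1."""
    S = R.T * x[:, None]
    return S / S.sum(axis=0)

# Hypothetical example: N = 4 states aggregated into n = 2 groups.
groups = [0, 0, 1, 1]
R = restriction(groups, N=4)
x = np.array([0.1, 0.2, 0.3, 0.4])
S = prolongation(R, x)
P_x = S @ R                              # projection P(x) = S(x) R
assert np.allclose(R @ S, np.eye(2))     # R S(x) = I, hence P(x)^2 = P(x)
assert np.allclose(P_x @ x, x)           # P(x) reproduces x
```

Since R S(x) = I, the matrix P(x) is indeed a projection onto the range of S(x).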
The iterative aggregation–disaggregation (IAD) algorithm:
Step 1. Take a first approximation x_0 in R^N, x_0 > 0, and set k = 0.
Step 2. Solve R B^s S(x_k) z_{k+1} = z_{k+1}, z_{k+1} in R^n, ||z_{k+1}|| = 1, for an appropriate integer s (the solution on the coarse level).
Step 3. Disaggregate: x_{k+1,1} = S(x_k) z_{k+1}.
Step 4. Compute x_{k+1} = B^t x_{k+1,1} for an appropriate integer t (smoothing on the fine level).
Step 5. Test whether ||x_{k+1} - x_k|| is less than a prescribed tolerance. If not, increase k and go to Step 2. If yes, take x_{k+1} as the solution of the Problem.
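Steps 1–5 can be sketched as follows. This is a dense toy implementation (the function name iad and the eigensolver-based coarse solve in Step 2 are my choices, not from the talk), assuming B is column stochastic and irreducible:

```python
import numpy as np

def iad(B, groups, s=1, t=1, tol=1e-10, maxit=500):
    """Sketch of the IAD algorithm (Steps 1-5) for a column-stochastic B."""
    N = B.shape[0]
    n = max(groups) + 1
    R = np.zeros((n, N))
    for i, g in enumerate(groups):
        R[g, i] = 1.0
    Bs = np.linalg.matrix_power(B, s)
    Bt = np.linalg.matrix_power(B, t)
    x = np.full(N, 1.0 / N)                      # Step 1: positive x_0
    for _ in range(maxit):
        S = R.T * x[:, None]
        S /= S.sum(axis=0)                        # prolongation S(x_k)
        A = R @ Bs @ S                            # coarse n x n matrix R B^s S(x_k)
        w, V = np.linalg.eig(A)                   # Step 2: stationary vector of A
        z = np.abs(np.real(V[:, np.argmax(np.real(w))]))
        z /= z.sum()                              # ||z_{k+1}|| = 1 (1-norm)
        x_half = S @ z                            # Step 3: disaggregate
        x_new = Bt @ x_half                       # Step 4: smooth with B^t
        if np.abs(x_new - x).sum() < tol:         # Step 5: stopping test
            return x_new
        x = x_new
    return x
```

For a small positive column-stochastic B, iad(B, groups) converges to a vector satisfying B x = x, ||x|| = 1.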
Proposition 1. If s = t, then the computed approximations x_k, k = 1, 2, ..., satisfy
a) B^s P(x_k) x_{k+1} = x_{k+1},
b) x_{k+1} = (I - Z^s P(x_k))^{-1} x_p,
c) x_{k+1} - x_p = J(x_k) (x_k - x_p),
where J(x) = B^s (I - P(x) Z^s)^{-1} (I - P(x)) and also J(x) = B^s (I - P(x) + P(x) J(x)).

Proposition 2. Let V be a global core matrix associated with B^s. Then J(x) = V (I - P(x) V)^{-1} (I - P(x)) and J(x) = V (I - P(x) + P(x) J(x)).
Note. The global core matrix V is here ηP + Z^s. Using Z^k → 0 as k → ∞, we have V = ηP + Z^s ≥ 0 for a given η and a sufficiently large s. This is equivalent to B^s = P + Z^s ≥ (1 - η) P.
Local convergence. It is known that for arbitrary integers t ≥ 1 and s ≥ 1 there exists a neighborhood Ω of x_p such that if x_k ∈ Ω, then x_r ∈ Ω for r = k+1, k+2, ..., and
||x_{k+1} - x_p|| ≤ c α^k ||x_k - x_p||,
where c ∈ R and α ≤ min{ ||V_loc||_μ, ||(I - P(x_p)) Z (I - P(x_p))||_μ }, with ||·||_μ a special norm on R^N. Here V_loc is a local core matrix associated with B. Thus the local convergence rate of the IAD algorithm is the same as, or better than, that of the Jacobi (power) iteration applied to the original matrix B.
Global convergence. From Proposition 2 we have
||J(x_k)|| ≤ ||V|| ||I - P(x_k)|| + ||V|| ||P(x_k)|| ||J(x_k)||.
Since V ≥ 0 has all column sums equal to η, we get ||V|| = η; moreover ||P(x_k)|| = 1 and ||I - P(x_k)|| ≤ 2, so ||J(x_k)|| (1 - η) ≤ 2η, and ||J(x_k)|| < 1 whenever η < 1/3. Hence η < 1/3, i.e. B^s > (2/3) P, is a sufficient condition for the global convergence of the IAD method. (It also means r(Z^s) ≤ 1/3: B^s ≥ (2/3) P is equivalent to P/3 + Z^s ≥ 0. Then P + 3Z^s ≥ 0 is a spectral decomposition of an irreducible column-stochastic matrix, and therefore r(3Z^s) ≤ 1, i.e. r(Z^s) ≤ 1/3.)
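The step from the displayed inequality to ||J(x_k)|| (1 - η) ≤ 2η rests on ||V||_1 = η. A quick numerical check of that norm identity (using a random positive B and s = 1, both my choices) can be sketched as:

```python
import numpy as np

# For column-stochastic irreducible B with B^s >= (1 - eta) P, the matrix
# V = eta*P + Z^s is nonnegative with all column sums eta, so ||V||_1 = eta.
rng = np.random.default_rng(0)
N = 5
B = rng.random((N, N)) + 0.1
B /= B.sum(axis=0)                          # positive column-stochastic B
w, Vec = np.linalg.eig(B)
xp = np.abs(np.real(Vec[:, np.argmax(np.real(w))]))
xp /= xp.sum()                              # stationary vector x_p
P = np.outer(xp, np.ones(N))                # spectral projection P = x_p 1^T
Z = B - P
s = 1
Bs = np.linalg.matrix_power(B, s)
eta = 1.0 - (Bs / P).min()                  # largest eta with B^s >= (1-eta) P
V = eta * P + np.linalg.matrix_power(Z, s)  # global core matrix
assert V.min() >= -1e-12                    # V >= 0
assert np.allclose(V.sum(axis=0), eta)      # column sums eta -> ||V||_1 = eta
```

The column sums follow directly: P has column sums 1 and Z^s has column sums 0, so V has column sums η, and nonnegativity turns this into the 1-norm.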
In practical computations with large problems we cannot verify the relation B^s ≥ ηP > 0 in order to estimate the value of s. But we can bound the constant k for which B^k > 0: this value is known to be at most N^2 - 2N + 2 (Wielandt's bound on the exponent of a primitive matrix).
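The bound N^2 - 2N + 2 is attained by Wielandt's cycle-plus-chord matrix, as this sketch (column-stochastic convention, N = 4, my construction of the classical example) illustrates:

```python
import numpy as np

# Wielandt's primitive matrix: the cycle 1 -> 2 -> ... -> N -> 1 plus the
# chord N -> 2, written column-stochastically (column j = transitions from j).
N = 4
A = np.zeros((N, N))
for i in range(N - 1):
    A[i + 1, i] = 1.0            # arc i -> i+1
A[0, N - 1] = 1.0                # arc N -> 1
A[1, N - 1] = 1.0                # chord N -> 2
A /= A.sum(axis=0)

k = 1
Ak = A.copy()
while (Ak <= 0).any():           # smallest k with A^k entrywise positive
    Ak = Ak @ A
    k += 1
print(k)                         # equals N^2 - 2N + 2 for Wielandt's matrix
```

For generic matrices the exponent is far smaller; Wielandt's example is the worst case.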
We propose a new method for achieving B^s ≥ ηP > 0 with some η > 0. Let I - B = M - W be a regular splitting, M^{-1} ≥ 0, W ≥ 0. Then the solution of the Problem coincides with the solution of (M - W) x = 0. Denoting Mx = y and setting y := y / ||y||, we have (I - WM^{-1}) y = 0, where WM^{-1} is a column-stochastic matrix. Thus the solution of the Problem is transformed into the solution of WM^{-1} y = y, ||y|| = 1, for any regular splitting M, W of the matrix I - B.
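For instance, with the simple Jacobi-type splitting M = I - diag(B) (an illustrative choice, not the block choice used below), one can verify that WM^{-1} is column stochastic and that y = Mx carries the stationary vector over:

```python
import numpy as np

rng = np.random.default_rng(1)
N = 5
B = rng.random((N, N))
B /= B.sum(axis=0)                        # column-stochastic, positive B

# Regular splitting of I - B (Jacobi-type, an illustrative choice):
M = np.eye(N) - np.diag(np.diag(B))       # M^{-1} >= 0 (positive diagonal)
W = M - (np.eye(N) - B)                   # W = B - diag(B) >= 0
T = W @ np.linalg.inv(M)                  # T = W M^{-1}
assert np.allclose(T.sum(axis=0), 1.0)    # T is column stochastic

# The stationary vectors correspond via y = M x:
w, V = np.linalg.eig(B)
x = np.abs(np.real(V[:, np.argmax(np.real(w))]))
x /= x.sum()                              # B x = x
y = M @ x
y /= np.abs(y).sum()                      # ||y|| = 1
assert np.allclose(T @ y, y, atol=1e-10)  # W M^{-1} y = y
```

Column stochasticity follows from 1^T (I - B) = 0, which gives 1^T W = 1^T M and hence 1^T W M^{-1} = 1^T.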
A good choice of M and W. Following the IAD algorithm, we use a block-diagonal matrix M composed of blocks M_1, ..., M_n, each of them invertible. To achieve (WM^{-1})^s > 0 for a low s, we need
- M_i^{-1} > 0, i = 1, ..., n,
- nnz(WM^{-1}) >> nnz(B) (nnz = number of nonzeros).
Algorithm of a good partitioning.
Step 1. For an appropriate threshold τ, 0 < τ < 1, use Tarjan's parametrized algorithm to find the irreducible diagonal blocks B_i, i = 1, ..., n, of the permuted matrix B (we now suppose B := permuted B).
Step 2. Compose the block-diagonal matrix B_Tar from the blocks B_i, i = 1, ..., n, and set M = I - B_Tar/2 and W = M - (I - B).

Properties of WM^{-1}:
- WM^{-1} is irreducible.
- The diagonal blocks of WM^{-1} are positive.
- (WM^{-1})^s is positive for s ≤ n^2 - 2n + 3, where n is the number of aggregation groups (n = 3 → s = 2).
- The second largest eigenvalue of the aggregated n × n matrix is approximately the same as that of WM^{-1}.
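A rough sketch of the two steps. Here Tarjan's parametrized algorithm is approximated by thresholding B at τ and taking strongly connected components of the remaining graph (via SciPy, my choice); the test matrix with two strongly coupled blocks is hypothetical:

```python
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import connected_components

def threshold_partition(B, tau):
    """Step 1 (sketch): keep entries >= tau, return the strongly connected
    components of the remaining directed graph as aggregation groups."""
    G = csr_matrix((B >= tau).astype(float))
    _, labels = connected_components(G, directed=True, connection='strong')
    return labels

def tarjan_splitting(B, labels):
    """Step 2: B_Tar = block-diagonal part of B over the groups,
    M = I - B_Tar/2, W = M - (I - B)."""
    N = B.shape[0]
    same = labels[:, None] == labels[None, :]
    B_tar = np.where(same, B, 0.0)
    M = np.eye(N) - B_tar / 2.0               # r(B_Tar/2) <= 1/2, so M^{-1} >= 0
    W = M - (np.eye(N) - B)                   # = B - B_Tar/2 >= 0
    return M, W

# Hypothetical test matrix: two strongly coupled 3x3 blocks, weak coupling.
rng = np.random.default_rng(2)
B = np.kron(np.eye(2), rng.random((3, 3)) + 0.5) + 0.01 * rng.random((6, 6))
B /= B.sum(axis=0)
labels = threshold_partition(B, tau=0.05)     # recovers the two diagonal blocks
M, W = tarjan_splitting(B, labels)
T = W @ np.linalg.inv(M)
assert np.allclose(T.sum(axis=0), 1.0)        # W M^{-1} is column stochastic
assert W.min() >= 0                           # regular splitting: W >= 0
```

The factor 1/2 in M = I - B_Tar/2 keeps r(B_Tar/2) ≤ 1/2, so the Neumann series gives M^{-1} ≥ 0, and W = B - B_Tar/2 stays entrywise nonnegative.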
Example 1. The matrix B is composed of n × n blocks of size m. We set ε = 0.01 and δ = 0.01. Then B is normalized.
Example 1. a) The IAD method for WM^{-1} with the threshold Tarjan block matrix M, s = 1, r(Z_WM) = 0.9996 (exact solution in red, the last approximation as black circles). b) Power iterations for WM^{-1} with the same M as in a), s = 1, r(Z_WM) = 0.9996 (exact solution in red, the last of 500 approximations as black circles; no local convergence effect). c) Convergence rates of a) and b).
Example 2. The matrix B is composed of n × n blocks of size m. We set ε = 0.01 and δ = 0.01. Then B := B + C (10% of the entries of C are 0.1) and B is normalized.
Example 2. IAD for B and for WM^{-1}. The power method for B and for WM^{-1}. Convergence rates of IAD and of the power method.
Example 2 with different random entries. a) IAD for B and for WM^{-1}. b) The power method for B and for WM^{-1}. c) Convergence rates of IAD and of the power method.
References
I. Marek and P. Mayer, Convergence analysis of an aggregation/disaggregation iterative method for computation of stationary probability vectors, Numerical Linear Algebra with Applications, 5, pp. 253-274, 1998.
I. Marek and P. Mayer, Convergence theory of some classes of iterative aggregation-disaggregation methods for computing stationary probability vectors of stochastic matrices, Linear Algebra and Its Applications, 363, pp. 177-200, 2003.
G. W. Stewart, Introduction to the Numerical Solution of Markov Chains, 1994.
A. Berman and R. J. Plemmons, Nonnegative Matrices in the Mathematical Sciences, 1979.
G. H. Golub and C. F. Van Loan, Matrix Computations, 1996.
Etc.