
Iterative Aggregation/Disaggregation (IAD)
By: Jesse Ehlert, Dustin Wells, Li Zhang


1 Iterative Aggregation/Disaggregation (IAD) By: Jesse Ehlert, Dustin Wells, Li Zhang

2 Introduction What are we trying to do? We are trying to find a more efficient way than the power method to compute the PageRank vector. How are we going to do this? We are going to use an iterative aggregation/disaggregation (IAD) method from the theory of Markov chains to compute the PageRank vector, applying the power method to smaller matrices inside that method.
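Since the power method is the baseline being improved upon, here is a minimal sketch of it in Python/NumPy. The 4-page link matrix, the damping factor 0.85, and the uniform teleportation vector are illustrative assumptions, not data from the slides:

```python
import numpy as np

# Hypothetical 4-page web: H[i, j] = probability of following a link from
# page i to page j.  Page 3 has no out-links (a dangling node).
H = np.array([[0.0, 0.5, 0.5, 0.0],
              [0.0, 0.0, 1.0, 0.0],
              [1.0, 0.0, 0.0, 0.0],
              [0.0, 0.0, 0.0, 0.0]])
n = H.shape[0]
alpha = 0.85                        # damping factor (assumed)
a = np.array([0.0, 0.0, 0.0, 1.0])  # dangling-node indicator vector
u = np.full(n, 1.0 / n)             # uniform teleportation vector (assumed)
e = np.ones(n)

# Google matrix G = alpha*H + alpha*a*u^T + (1-alpha)*e*u^T; G is stochastic.
G = alpha * H + alpha * np.outer(a, u) + (1 - alpha) * np.outer(e, u)

# Power method: iterate v^T <- v^T G until successive iterates agree.
v = np.full(n, 1.0 / n)
for _ in range(500):
    v_next = v @ G
    if np.linalg.norm(v_next - v, 1) < 1e-13:
        v = v_next
        break
    v = v_next
```

At convergence v satisfies v^T = v^T G, i.e. v is the PageRank vector of this toy web.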

3 Markov Chains We will represent the web by a Markov chain. A Markov chain is a stochastic process describing a chain of events. It consists of a set of states S = {s1, …, sn}; the web pages will be the states. The probability of moving from state si to state sj in one step is pij. We can represent this by a stochastic matrix with entries pij. A probabilistic vector v is a stationary distribution if v^T = v^T G. This means that the PageRank vector is also a stationary distribution vector of the Markov chain represented by the matrix G.
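As a concrete illustration of v^T = v^T P (a hypothetical 2-state chain, not from the slides), the stationary distribution can be computed as a left eigenvector for eigenvalue 1:

```python
import numpy as np

# Hypothetical 2-state Markov chain; P[i, j] = p_ij, the probability of
# moving from state s_i to state s_j in one step.  Rows sum to 1.
P = np.array([[0.9, 0.1],
              [0.5, 0.5]])

# A stationary distribution satisfies v^T = v^T P, i.e. v is a left
# eigenvector of P with eigenvalue 1 (a right eigenvector of P^T).
eigvals, eigvecs = np.linalg.eig(P.T)
k = np.argmin(np.abs(eigvals - 1.0))
v = np.real(eigvecs[:, k])
v = v / v.sum()                     # normalize to a probability vector

# By hand: 0.1*v0 = 0.5*v1 together with v0 + v1 = 1 gives v = (5/6, 1/6).
```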

4 Aggregation/Disaggregation Approach The main idea for computing the PageRank vector v is to block the matrix G,

G = [ G11  G12 ]
    [ G21  G22 ],

so that the size of the problem is reduced to about the size of one of the diagonal blocks. In fact, (I − G11) is nonsingular. Then we define

L = [ I                    0 ]
    [ −G21 (I − G11)^(−1)  I ],

D = [ I − G11      0  ]
    [ 0        I − S  ],

U = [ I   −(I − G11)^(−1) G12 ]
    [ 0             I         ],

and S to be the stochastic complement S = G22 + G21 (I − G11)^(−1) G12.
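A numerical sketch of the blocking, on an illustrative 4-page Google matrix (an assumption made for the sketch), with S written out as the stochastic complement S = G22 + G21 (I − G11)^(−1) G12:

```python
import numpy as np

# Illustrative 4-page Google matrix (toy example, not from the slides).
H = np.array([[0.0, 0.5, 0.5, 0.0],
              [0.0, 0.0, 1.0, 0.0],
              [1.0, 0.0, 0.0, 0.0],
              [0.0, 0.0, 0.0, 0.0]])
n, alpha = 4, 0.85
a = np.array([0.0, 0.0, 0.0, 1.0])
u = np.full(n, 1.0 / n)
G = alpha * H + alpha * np.outer(a, u) + (1 - alpha) * np.outer(np.ones(n), u)

# Block G so each diagonal block is about half the size (n1 = 2 here).
n1 = 2
G11, G12 = G[:n1, :n1], G[:n1, n1:]
G21, G22 = G[n1:, :n1], G[n1:, n1:]

# (I - G11) is nonsingular: G11 is a strictly substochastic principal block.
inv_IG11 = np.linalg.inv(np.eye(n1) - G11)

# Stochastic complement S = G22 + G21 (I - G11)^(-1) G12; S is stochastic.
S = G22 + G21 @ inv_IG11 @ G12
```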

5 Aggregation/Disaggregation Approach Cont. From the previous slide we can show that I − G = LDU. Since v^T (I − G) = 0 and U is nonsingular, we have v^T L D = 0. From the last equation we can get v2^T = v2^T S, which implies that v2 is a stationary distribution of S. If u2 is the unique stationary distribution of S with u2^T e = 1, then we have v2^T = (v2^T e) u2^T, i.e. v2 is proportional to u2.
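The claim I − G = LDU can be checked numerically with the block factors from the previous slide (illustrative 4-page example; L and U are unit block-triangular, D = diag(I − G11, I − S)):

```python
import numpy as np

# Illustrative 4-page Google matrix (toy example) and a 2+2 blocking.
H = np.array([[0.0, 0.5, 0.5, 0.0],
              [0.0, 0.0, 1.0, 0.0],
              [1.0, 0.0, 0.0, 0.0],
              [0.0, 0.0, 0.0, 0.0]])
n, alpha = 4, 0.85
a = np.array([0.0, 0.0, 0.0, 1.0])
u = np.full(n, 1.0 / n)
G = alpha * H + alpha * np.outer(a, u) + (1 - alpha) * np.outer(np.ones(n), u)

n1 = 2
G11, G12 = G[:n1, :n1], G[:n1, n1:]
G21, G22 = G[n1:, :n1], G[n1:, n1:]
I1, I2 = np.eye(n1), np.eye(n - n1)
Z = np.zeros((n1, n - n1))
inv_IG11 = np.linalg.inv(I1 - G11)
S = G22 + G21 @ inv_IG11 @ G12

# Block LDU factors of I - G.
L = np.block([[I1, Z], [-G21 @ inv_IG11, I2]])
D = np.block([[I1 - G11, Z], [Z.T, I2 - S]])
U = np.block([[I1, -inv_IG11 @ G12], [Z.T, I2]])
```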

6 Aggregation/Disaggregation Approach Cont. We need to find an expression for v1. Let A be the aggregated matrix associated to G, defined as

A = [ G11        G12 e    ]
    [ u2^T G21   u2^T G22 e ],

where e is the vector of all ones. What we want to do now is find the stationary distribution of A. From v^T L D = 0, we can get v1^T (I − G11) − v2^T G21 = 0. If we rearrange things, we get v1^T = v2^T G21 (I − G11)^(−1).
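The rearranged relation v1^T = v2^T G21 (I − G11)^(−1) can be verified against a power-method solution (illustrative 4-page example, an assumption made for the sketch):

```python
import numpy as np

# Illustrative 4-page Google matrix (toy example).
H = np.array([[0.0, 0.5, 0.5, 0.0],
              [0.0, 0.0, 1.0, 0.0],
              [1.0, 0.0, 0.0, 0.0],
              [0.0, 0.0, 0.0, 0.0]])
n, alpha = 4, 0.85
a = np.array([0.0, 0.0, 0.0, 1.0])
u = np.full(n, 1.0 / n)
G = alpha * H + alpha * np.outer(a, u) + (1 - alpha) * np.outer(np.ones(n), u)

# Reference PageRank vector via the power method.
v = np.full(n, 1.0 / n)
for _ in range(500):
    v = v @ G

# Split v conformally with the blocking of G.
n1 = 2
v1, v2 = v[:n1], v[n1:]
G11, G21 = G[:n1, :n1], G[n1:, :n1]

# v1^T = v2^T G21 (I - G11)^(-1)
v1_from_v2 = v2 @ G21 @ np.linalg.inv(np.eye(n1) - G11)
```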

7 Aggregation/Disaggregation Approach Cont. From v2^T = v2^T S, we also have v2^T = v2^T G22 + v2^T G21 (I − G11)^(−1) G12. From the previous three statements we can get an expression for v1 in terms of the stationary distributions of the smaller matrices S and A.

8 Theorem 3.20 (Exact aggregation/disaggregation) Theorem 3.20: Let u2 be the unique stationary distribution of the stochastic complement S, and let w^T = (w1^T, w2) be the unique stationary distribution of the aggregated matrix A. Then the stationary distribution of G is v^T = (w1^T, w2 u2^T).
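A numerical check of the exact aggregation/disaggregation statement on an illustrative 4-page example: the stationary distribution w^T = (w1^T, w2) of the aggregated matrix, disaggregated as (w1^T, w2 u2^T), reproduces the PageRank vector. The small dense eigensolve stands in for whatever method one would really use:

```python
import numpy as np

def stationary(M):
    """Stationary distribution of a stochastic matrix M (left Perron vector)."""
    eigvals, eigvecs = np.linalg.eig(M.T)
    x = np.real(eigvecs[:, np.argmin(np.abs(eigvals - 1.0))])
    return x / x.sum()

# Illustrative 4-page Google matrix (toy example) and a 2+2 blocking.
H = np.array([[0.0, 0.5, 0.5, 0.0],
              [0.0, 0.0, 1.0, 0.0],
              [1.0, 0.0, 0.0, 0.0],
              [0.0, 0.0, 0.0, 0.0]])
n, alpha = 4, 0.85
a = np.array([0.0, 0.0, 0.0, 1.0])
u = np.full(n, 1.0 / n)
G = alpha * H + alpha * np.outer(a, u) + (1 - alpha) * np.outer(np.ones(n), u)

n1 = 2
G11, G12 = G[:n1, :n1], G[:n1, n1:]
G21, G22 = G[n1:, :n1], G[n1:, n1:]
e2 = np.ones(n - n1)

# u2: exact stationary distribution of the stochastic complement S.
S = G22 + G21 @ np.linalg.inv(np.eye(n1) - G11) @ G12
u2 = stationary(S)

# Aggregated matrix A and its stationary distribution w = (w1, w2).
A = np.block([[G11, (G12 @ e2)[:, None]],
              [(u2 @ G21)[None, :], np.array([[u2 @ G22 @ e2]])]])
w = stationary(A)

# Disaggregation: v = (w1, w2 * u2) is the stationary distribution of G.
v_agg = np.concatenate([w[:n1], w[n1] * u2])
```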

9 Theorem 3.20 Cont. Instead of finding the stationary distribution of G, we have broken the problem down to finding the stationary distributions of two smaller matrices. Problem: forming the matrix S and computing its stationary distribution u2 is very expensive and not very efficient. Solution: use an approximation. This leads us to the approximate aggregation matrix.

10 Approximate Aggregation Matrix We now define the approximate aggregation matrix as

Ã = [ G11        G12 e      ]
    [ ũ2^T G21   ũ2^T G22 e ],

where ũ2 is an arbitrary probabilistic vector. The only difference between this matrix and the previous aggregation matrix is the last row, where ũ2 plays the role of the exact stationary distribution u2. In general this approach does not give a very good approximation to the stationary distribution of the original matrix G. To improve the accuracy, we add a power method step.

11 Approximate Aggregation Matrix Typically, one aggregation step plus one power method step will not be accurate enough, so the actual algorithm to be implemented consists of repeated applications of the algorithm above. This gives us an iterative aggregation/disaggregation (IAD) algorithm.
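One way to sketch the resulting IAD loop on an illustrative 4-page example. The uniform starting vector for ũ2, the dense eigensolve for the small aggregated matrix, and the 1-norm stopping tolerance are all assumptions made for the sketch:

```python
import numpy as np

def stationary(M):
    """Stationary distribution of a stochastic matrix M."""
    eigvals, eigvecs = np.linalg.eig(M.T)
    x = np.real(eigvecs[:, np.argmin(np.abs(eigvals - 1.0))])
    return x / x.sum()

# Illustrative 4-page Google matrix (toy example) and a 2+2 blocking.
H = np.array([[0.0, 0.5, 0.5, 0.0],
              [0.0, 0.0, 1.0, 0.0],
              [1.0, 0.0, 0.0, 0.0],
              [0.0, 0.0, 0.0, 0.0]])
n, alpha = 4, 0.85
a = np.array([0.0, 0.0, 0.0, 1.0])
u = np.full(n, 1.0 / n)
G = alpha * H + alpha * np.outer(a, u) + (1 - alpha) * np.outer(np.ones(n), u)

n1 = 2
G11, G12 = G[:n1, :n1], G[:n1, n1:]
G21, G22 = G[n1:, :n1], G[n1:, n1:]
e2 = np.ones(n - n1)

u2 = np.full(n - n1, 1.0 / (n - n1))  # arbitrary probabilistic start for u2
v = np.full(n, 1.0 / n)
for _ in range(200):
    # Aggregation: small (n1+1) x (n1+1) matrix with u2 in the last row.
    A_approx = np.block([[G11, (G12 @ e2)[:, None]],
                         [(u2 @ G21)[None, :], np.array([[u2 @ G22 @ e2]])]])
    w = stationary(A_approx)
    # Disaggregation, then one power-method step to improve accuracy.
    v_new = np.concatenate([w[:n1], w[n1] * u2]) @ G
    v_new /= v_new.sum()
    if np.linalg.norm(v_new - v, 1) < 1e-13:
        v = v_new
        break
    v = v_new
    u2 = v[n1:] / v[n1:].sum()        # updated guess for the next aggregation
```

At convergence v is the PageRank vector; each pass only needs the stationary distribution of the small aggregated matrix plus one multiplication by G.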

12 Iterative Aggregation/Disaggregation Algorithm (IAD) using Power Method As you can see from above, at each pass we still need to compute the stationary distribution of the small approximate aggregation matrix.

13 IAD Cont. First, since G is stochastic we have G21 e + G22 e = e, so the last entry of the approximate aggregation matrix can be written as ũ2^T G22 e = 1 − ũ2^T G21 e; writing it this way gets rid of G22. We then let w^T = (w1^T, w2) denote the stationary distribution of the approximate aggregation matrix. From w^T Ã = w^T we have componentwise equations involving only G11, G12 e and ũ2^T G21.

14 IAD Cont. Now we will try to get some sparsity out of G. We will write G as before: G = αH + α a u^T + (1 − α) e u^T. From the blocking of G, we block the matrices H, a u^T and e u^T conformally, into blocks the slides label A, B, C, D, E, F, J, K. From here you can see how G11, G12 and G21 are built from the corresponding blocks of H, a u^T and e u^T.
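The blockwise form can be checked numerically; for instance, with a, e and u partitioned conformally, G11 = αH11 + α a1 u1^T + (1 − α) e1 u1^T. The 4-page example is illustrative, and the slide's block letters A…K are not reproduced here:

```python
import numpy as np

# Illustrative 4-page example: G = alpha*H + alpha*a*u^T + (1-alpha)*e*u^T.
H = np.array([[0.0, 0.5, 0.5, 0.0],
              [0.0, 0.0, 1.0, 0.0],
              [1.0, 0.0, 0.0, 0.0],
              [0.0, 0.0, 0.0, 0.0]])
n, alpha = 4, 0.85
a = np.array([0.0, 0.0, 0.0, 1.0])
u = np.full(n, 1.0 / n)
e = np.ones(n)
G = alpha * H + alpha * np.outer(a, u) + (1 - alpha) * np.outer(e, u)

# Partition everything conformally with the n1 x n1 leading block.
n1 = 2
H11 = H[:n1, :n1]
a1, e1, u1 = a[:n1], e[:n1], u[:n1]

# Blocks of a rank-one matrix are themselves rank one: (a u^T)_11 = a1 u1^T.
G11_blockwise = (alpha * H11 + alpha * np.outer(a1, u1)
                 + (1 - alpha) * np.outer(e1, u1))
```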

15 IAD Cont. We now take these expressions for G11, G12 and G21 and plug them into the iteration equations from the previous slides. For the iterative process of the power method within IAD, we give an arbitrary initial guess and iterate according to the formulas above for the next approximation until our tolerance is reached.

16 Combine Linear Systems and IAD Before, we had v1^T = v2^T G21 (I − G11)^(−1). This can be written as the linear system v1^T (I − G11) = v2^T G21, which can be solved for v1 without forming (I − G11)^(−1).

17 Combine Linear Systems and IAD Cont. The problem with this is that the matrices G11, G12 and G21 are full, which means the computations at each step are generally very expensive.
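Transposing v1^T (I − G11) = v2^T G21 gives (I − G11)^T v1 = G21^T v2, which a linear solver handles without ever forming the inverse. Here np.linalg.solve stands in for the sparse solver one would actually use, on an illustrative 4-page example:

```python
import numpy as np

# Illustrative 4-page Google matrix (toy example).
H = np.array([[0.0, 0.5, 0.5, 0.0],
              [0.0, 0.0, 1.0, 0.0],
              [1.0, 0.0, 0.0, 0.0],
              [0.0, 0.0, 0.0, 0.0]])
n, alpha = 4, 0.85
a = np.array([0.0, 0.0, 0.0, 1.0])
u = np.full(n, 1.0 / n)
G = alpha * H + alpha * np.outer(a, u) + (1 - alpha) * np.outer(np.ones(n), u)

# Reference PageRank vector via the power method.
v = np.full(n, 1.0 / n)
for _ in range(500):
    v = v @ G

n1 = 2
G11, G21 = G[:n1, :n1], G[n1:, :n1]
v1, v2 = v[:n1], v[n1:]

# Solve (I - G11)^T x = G21^T v2 instead of computing (I - G11)^(-1).
v1_solved = np.linalg.solve((np.eye(n1) - G11).T, G21.T @ v2)
```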

18 Combine Linear Systems and IAD Cont. We will return to the original matrix H to find some sparsity. Looking at G11 in more depth, the blocking of G = αH + α a u^T + (1 − α) e u^T gives G11 = αH11 + α a1 u1^T + (1 − α) e1 u1^T, where a, e and u are partitioned conformally with G. We then use the fact that v is a probabilistic vector (v^T e = 1) to simplify the resulting equation. Note: the sparse block H11 is where the savings come from.

19 Using Dangling Nodes We can reorder H by dangling nodes, putting the dangling pages last, so that H21 is a matrix of zeros (dangling pages have no out-links, so their rows of H are zero). Then our equation from before reduces to a smaller system involving only the sparse blocks of H. We then approximate the solution and can show that the approximation converges.
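In an illustrative 4-page web where page 3 is the only dangling node and is already ordered last, its row of H is zero, so the H21 block (and in fact H22 as well) vanishes:

```python
import numpy as np

# Illustrative hyperlink matrix with the dangling page (page 3) ordered last.
# A dangling page has no out-links, so its entire row of H is zero.
H = np.array([[0.0, 0.5, 0.5, 0.0],
              [0.0, 0.0, 1.0, 0.0],
              [1.0, 0.0, 0.0, 0.0],
              [0.0, 0.0, 0.0, 0.0]])

# Block with the n1 = 3 non-dangling pages first.
n1 = 3
H11, H12 = H[:n1, :n1], H[:n1, n1:]
H21, H22 = H[n1:, :n1], H[n1:, n1:]
```

Because the dangling rows vanish, the linear system from the previous slides only involves the sparse non-dangling blocks.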

20 Linear Systems and IAD Process Combined Now, we combine ideas from IAD and linear systems, with H arranged by dangling nodes, to get the process below:

21 Conclusion Instead of finding the stationary distribution of G directly, we broke the problem down to finding the stationary distributions of the smaller matrices S and A, which together give the stationary distribution of G. The problem was that forming S exactly is very inefficient, so we used an approximation of its stationary distribution together with power method steps to improve accuracy. Then we used linear systems along with our iterative aggregation/disaggregation algorithm to find another way to compute the PageRank vector.

