Presentation transcript:

Mining of Massive Datasets. Jure Leskovec, Anand Rajaraman, Jeff Ullman. Stanford University. Note to other teachers and users of these slides: We would be delighted if you found our material useful in giving your own lectures. Feel free to use these slides verbatim, or to modify them to fit your own needs. If you make use of a significant portion of these slides in your own lecture, please include this message, or a link to our web site:

 Assumption: Data lies on or near a low d-dimensional subspace.  Axes of this subspace are an effective representation of the data.

 Compress / reduce dimensionality:  10^6 rows; 10^3 columns; no updates  Random access to any cell(s); small error: OK. The above matrix is really “2-dimensional”: all rows can be reconstructed by scaling [ ] or [ ].

 Q: What is the rank of a matrix A?  A: The number of linearly independent columns of A.  For example:  Matrix A = has rank r=2.  Why? The first two rows are linearly independent, so the rank is at least 2, but all three rows are linearly dependent (the first is equal to the sum of the second and third), so the rank must be less than 3.  Why do we care about low rank?  We can write A as two “basis” vectors: [1 2 1] [ ]  And the new coordinates of the rows are: [1 0] [0 1] [1 1]
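A quick numeric check of this idea (a minimal sketch; the 3×3 matrix below is a hypothetical example chosen to satisfy the property stated above, since the slide's actual entries did not survive the transcript):

```python
import numpy as np

# Hypothetical matrix whose first row is the sum of the other two,
# so only two rows are linearly independent.
A = np.array([[ 1.0,  2.0, 1.0],
              [-2.0, -3.0, 1.0],
              [ 3.0,  5.0, 0.0]])

print(np.linalg.matrix_rank(A))            # -> 2

# Coordinates of every row in the 2-vector basis {row 2, row 3}:
basis = A[1:]                              # shape (2, 3)
coords, *_ = np.linalg.lstsq(basis.T, A.T, rcond=None)
print(coords.T)                            # each row of A expressed with 2 numbers
```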

 Cloud of points in 3D space:  Think of the point positions as a matrix A, one row per point: A, B, C.  We can rewrite the coordinates more efficiently!  Old basis vectors: [1 0 0] [0 1 0] [0 0 1]  New basis vectors: [1 2 1] [ ]  Then A has new coordinates [1 0], B: [0 1], C: [1 1].  Notice: We reduced the number of coordinates!

 The goal of dimensionality reduction is to discover the axes of the data! Rather than representing every point with 2 coordinates, we represent each point with 1 coordinate (corresponding to the position of the point on the red line). By doing this we incur a bit of error, as the points do not lie exactly on the line.

Why reduce dimensions?
 Discover hidden correlations/topics  Words that occur commonly together
 Remove redundant and noisy features  Not all words are useful
 Interpretation and visualization
 Easier storage and processing of the data

A[m×n] = U[m×r] Σ[r×r] (V[n×r])ᵀ
 A: Input data matrix, an m×n matrix (e.g., m documents, n terms)
 U: Left singular vectors, an m×r matrix (m documents, r concepts)
 Σ: Singular values, an r×r diagonal matrix (strength of each ‘concept’; r = rank of the matrix A)
 V: Right singular vectors, an n×r matrix (n terms, r concepts)
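As a minimal sketch of these shapes in NumPy (the small random matrix is only an illustrative assumption):

```python
import numpy as np

m, n = 6, 4                                # e.g., 6 documents, 4 terms
A = np.random.rand(m, n)

# "Thin" SVD: U is m x r, s holds the r singular values, Vt is r x n,
# where r = min(m, n) >= rank(A).
U, s, Vt = np.linalg.svd(A, full_matrices=False)
print(U.shape, s.shape, Vt.shape)          # (6, 4) (4,) (4, 4)

Sigma = np.diag(s)
print(np.allclose(A, U @ Sigma @ Vt))      # True: A = U Sigma V^T
```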

[Figure: the m×n matrix A written as the product U · Σ · Vᵀ.]

[Figure] Equivalently, A is a sum of rank-1 terms: A = σ₁u₁v₁ᵀ + σ₂u₂v₂ᵀ + …, where each σᵢ is a scalar and each uᵢ, vᵢ is a vector.

It is always possible to decompose a real matrix A into A = U Σ Vᵀ, where:
 U, Σ, V: unique
 U, V: column orthonormal  UᵀU = I; VᵀV = I (I: identity matrix)  (columns are orthogonal unit vectors)
 Σ: diagonal  Entries (singular values) are positive and sorted in decreasing order (σ₁ ≥ σ₂ ≥ … ≥ 0)
Nice proof of uniqueness:
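A small check of these properties with NumPy (the random matrix is used purely for illustration):

```python
import numpy as np

A = np.random.rand(5, 3)                   # any real matrix
U, s, Vt = np.linalg.svd(A, full_matrices=False)

print(np.allclose(U.T @ U, np.eye(3)))     # True: U is column orthonormal
print(np.allclose(Vt @ Vt.T, np.eye(3)))   # True: V is column orthonormal
print(np.all(np.diff(s) <= 0), np.all(s >= 0))  # sorted decreasing, non-negative
```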

 A = U Σ Vᵀ example: Users to Movies. [Figure: a ratings matrix with users as rows and movies (Matrix, Alien, Serenity, Casablanca, Amelie) as columns, factored into U, Σ, Vᵀ. The r dimensions shared by the factors are the “concepts”, AKA latent dimensions, AKA latent factors. Two user groups are visible: SciFi and Romance.]

 A = U Σ Vᵀ example: Users to Movies. [Figure: the same ratings matrix written out as the product of the three factor matrices.]

 A = U Σ Vᵀ example: Users to Movies. [Figure: the first latent dimension is the SciFi-concept, the second the Romance-concept.]

 A = U Σ Vᵀ example: [Figure: U is the “user-to-concept” similarity matrix; its columns correspond to the SciFi-concept and the Romance-concept.]

 A = U Σ Vᵀ example: [Figure: the diagonal entry of Σ gives the “strength” of the SciFi-concept.]

 A = U Σ Vᵀ example: [Figure: V is the “movie-to-concept” similarity matrix; its first column is the SciFi-concept.]

‘movies’, ‘users’ and ‘concepts’:
 U: user-to-concept similarity matrix
 V: movie-to-concept similarity matrix
 Σ: its diagonal elements give the ‘strength’ of each concept

[Figure: users plotted by Movie 1 rating vs. Movie 2 rating; v₁, the first right singular vector, points along the main direction of the data.]


 A = U Σ Vᵀ example:  V: “movie-to-concept” matrix  U: “user-to-concept” matrix. [Figure: the first right singular vector v₁ shown in the Movie 1 rating vs. Movie 2 rating plane.]

 A = U Σ Vᵀ example: [Figure: the singular value corresponds to the variance (‘spread’) of the data on the v₁ axis.]

A = U Σ Vᵀ example:  U Σ gives the coordinates of the points (users) on the projection axes; e.g., the first column of U Σ is the projection of the users onto the “SciFi” axis.

More details  Q: How exactly is dim. reduction done?

More details  Q: How exactly is dim. reduction done?  A: Set the smallest singular values to zero.

More details  A: Set the smallest singular values to zero. [Figure: reconstructing from the remaining singular values/vectors gives A ≈ U' Σ' V'ᵀ, an approximation of the original matrix.]

More details  Q: How exactly is dim. reduction done?  A: Set the smallest singular values to zero.  The approximation error is measured in the Frobenius norm: ‖M‖F = √(Σᵢⱼ Mᵢⱼ²), and ‖A−B‖F = √(Σᵢⱼ (Aᵢⱼ−Bᵢⱼ)²) is “small”.

[Figure: A = U Σ Vᵀ, and B = U S Vᵀ with the truncated singular-value matrix S; B is the best approximation of A.]

 Theorem: Let A = U Σ Vᵀ and B = U S Vᵀ, where S is an r×r diagonal matrix with sᵢ = σᵢ (i = 1…k) and sᵢ = 0 otherwise. Then B is a best rank(B)=k approximation to A.
What do we mean by “best”:
 B is a solution to min_B ‖A−B‖F where rank(B) = k
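A minimal sketch of this truncation in NumPy, also checking that the Frobenius error of the rank-k truncation equals the square root of the sum of the dropped σᵢ² (the test matrix and k are illustrative assumptions):

```python
import numpy as np

def best_rank_k(A, k):
    """Rank-k approximation B = U_k S_k V_k^T: zero all but the k largest
    singular values and reconstruct."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    return U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]

A = np.random.rand(8, 5)
k = 2
B = best_rank_k(A, k)

s = np.linalg.svd(A, compute_uv=False)
print(np.linalg.norm(A - B, 'fro'))        # truncation error
print(np.sqrt(np.sum(s[k:] ** 2)))         # equals sqrt of the dropped sigma_i^2
```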


Details! We apply:  P column orthonormal  R row orthonormal  Q is diagonal

Details! We used: U Σ Vᵀ − U S Vᵀ = U (Σ − S) Vᵀ.

Equivalent: ‘spectral decomposition’ of the matrix: A = [u₁ u₂ …] · diag(σ₁, σ₂, …) · [v₁ v₂ …]ᵀ.

Equivalent: ‘spectral decomposition’ of the matrix: A = σ₁ u₁ v₁ᵀ + σ₂ u₂ v₂ᵀ + …, a sum of k terms, where each uᵢ is m×1 and each vᵢᵀ is 1×n. Assume σ₁ ≥ σ₂ ≥ σ₃ ≥ … ≥ 0. Why is setting small σᵢ to 0 the right thing to do? Vectors uᵢ and vᵢ are unit length, so σᵢ scales them. So, zeroing small σᵢ introduces less error.

Keeping only the first few terms gives the approximation A ≈ σ₁ u₁ v₁ᵀ + σ₂ u₂ v₂ᵀ + …, again assuming σ₁ ≥ σ₂ ≥ σ₃ ≥ …

 To compute the SVD:  O(nm²) or O(n²m) (whichever is less)  But:  less work if we just want the singular values  or if we want only the first k singular vectors  or if the matrix is sparse  Implemented in linear algebra packages like  LINPACK, Matlab, SPlus, Mathematica...
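For instance, sparse-aware routines compute only the top-k singular triplets; a minimal SciPy sketch (the matrix sizes and density are illustrative assumptions):

```python
import numpy as np
from scipy.sparse import random as sparse_random
from scipy.sparse.linalg import svds

# A large, very sparse matrix (hypothetical sizes, for illustration only).
A = sparse_random(10000, 1000, density=0.001, format='csr', random_state=0)

k = 10                                     # only the top k singular triplets
U, s, Vt = svds(A, k=k)
order = np.argsort(-s)                     # svds returns sigma in ascending order
U, s, Vt = U[:, order], s[order], Vt[order, :]
print(s)
```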

 SVD: A = U Σ Vᵀ: unique  U: user-to-concept similarities  V: movie-to-concept similarities  Σ: strength of each concept  Dimensionality reduction:  keep the few largest singular values (80-90% of ‘energy’)  SVD: picks up linear correlations

 SVD gives us:  A = U Σ Vᵀ  Eigen-decomposition:  A = X Λ Xᵀ  A is symmetric  U, V, X are orthonormal (UᵀU = I), Λ, Σ are diagonal  Now let’s calculate:  AAᵀ = U Σ Vᵀ (U Σ Vᵀ)ᵀ = U Σ Vᵀ (V Σᵀ Uᵀ) = U Σ Σᵀ Uᵀ  AᵀA = V Σᵀ Uᵀ (U Σ Vᵀ) = V Σᵀ Σ Vᵀ

 So AAᵀ = U Σ² Uᵀ and AᵀA = V Σ² Vᵀ are themselves eigen-decompositions of the form X Λ Xᵀ: this shows how to compute the SVD using an eigenvalue decomposition!

 AAᵀ = U Σ² Uᵀ  AᵀA = V Σ² Vᵀ  (AᵀA)ᵏ = V Σ²ᵏ Vᵀ  E.g.: (AᵀA)² = V Σ² Vᵀ V Σ² Vᵀ = V Σ⁴ Vᵀ  (AᵀA)ᵏ ≈ v₁ σ₁²ᵏ v₁ᵀ for k ≫ 1
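This is exactly the observation behind power iteration: repeatedly multiplying a vector by AᵀA pulls it toward v₁. A minimal sketch (the random test matrix is an assumption):

```python
import numpy as np

def top_right_singular_vector(A, num_iters=100):
    """Power iteration on A^T A: repeated multiplication pushes any starting
    vector toward v_1, the eigenvector with the largest eigenvalue sigma_1^2."""
    v = np.random.rand(A.shape[1])
    for _ in range(num_iters):
        v = A.T @ (A @ v)
        v /= np.linalg.norm(v)
    sigma1 = np.linalg.norm(A @ v)
    return v, sigma1

A = np.random.rand(20, 5)
v1, sigma1 = top_right_singular_vector(A)
print(sigma1, np.linalg.svd(A, compute_uv=False)[0])   # the two should agree
```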

 Q: Find users that like ‘Matrix’  A: Map the query into a ‘concept space’. How? [Figure: the Users-to-Movies ratings matrix and its factors.]

 Q: Find users that like ‘Matrix’  A: Map the query into a ‘concept space’. How?  Project into concept space: take the inner product of the query vector q (its ratings of Matrix, Alien, Serenity, Casablanca, Amelie) with each ‘concept’ vector vᵢ. [Figure: q, a point in movie space, projected onto v₁ and v₂, giving the coordinate q·v₁.]

Compactly, we have: q_concept = q V, where V holds the movie-to-concept similarities. E.g., a query q that rates only ‘Matrix’ is mapped onto the SciFi-concept.

 How would a user d that rated (‘Alien’, ‘Serenity’) be handled? The same way: d_concept = d V, again using the movie-to-concept similarities V; d also maps onto the SciFi-concept.

 Observation: User d, who rated (‘Alien’, ‘Serenity’), will be similar to user q, who rated (‘Matrix’), although d and q have zero ratings in common! In concept space both map onto the SciFi-concept, so their similarity is non-zero even though their rating vectors do not overlap.
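A minimal end-to-end sketch of this effect (the 7×5 ratings matrix and the two query vectors are hypothetical, in the spirit of the slides):

```python
import numpy as np

# Columns = [Matrix, Alien, Serenity, Casablanca, Amelie]; rows = users.
A = np.array([[1, 1, 1, 0, 0],
              [3, 3, 3, 0, 0],
              [4, 4, 4, 0, 0],
              [5, 5, 5, 0, 0],
              [0, 0, 0, 4, 4],
              [0, 0, 0, 5, 5],
              [0, 0, 0, 2, 2]], dtype=float)

U, s, Vt = np.linalg.svd(A, full_matrices=False)
V = Vt[:2, :].T                         # keep the 2 strongest concepts; V is 5 x 2

q = np.array([5, 0, 0, 0, 0.0])         # rated only 'Matrix'
d = np.array([0, 4, 5, 0, 0.0])         # rated only 'Alien' and 'Serenity'

q_concept = q @ V
d_concept = d @ V
cos = q_concept @ d_concept / (np.linalg.norm(q_concept) * np.linalg.norm(d_concept))
print(cos)   # ~1: very similar in concept space despite zero common ratings
```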

+ Optimal low-rank approximation in terms of the Frobenius norm
- Interpretability problem:  A singular vector specifies a linear combination of all input columns or rows
- Lack of sparsity:  Singular vectors are dense!

 Goal: Express A as a product of matrices C, U, R, making ‖A − C·U·R‖F small.  “Constraints” on C and R: C is a set of actual columns of A and R a set of actual rows of A. (Frobenius norm: ‖X‖F = √(Σᵢⱼ Xᵢⱼ²))

 Goal: Express A as a product of matrices C, U, R, making ‖A − C·U·R‖F small.  U is built from the pseudo-inverse of the intersection of C and R.

 Let A_k be the “best” rank-k approximation to A (that is, A_k is the rank-k SVD of A).
Theorem [Drineas et al.]: CUR in O(m·n) time achieves ‖A − CUR‖F ≤ ‖A − A_k‖F + ε ‖A‖F with probability at least 1 − δ, by picking  O(k log(1/δ)/ε²) columns, and  O(k² log³(1/δ)/ε⁶) rows.
In practice: pick 4k columns/rows.

 Sampling columns (similarly for rows): each column is picked with probability proportional to the sum of the squares of its entries (i.e., its share of ‖A‖F²), and the sampled columns are rescaled accordingly. Note this is a randomized algorithm; the same column can be sampled more than once.

 Let W be the “intersection” of the sampled columns C and rows R  Let the SVD of W be W = X Z Yᵀ  Then: U = W⁺ = Y Z⁺ Xᵀ  Z⁺: reciprocals of the non-zero singular values: Z⁺ᵢᵢ = 1/Zᵢᵢ  W⁺ is the “pseudoinverse”.
Why does the pseudoinverse work? If W = X Z Yᵀ then W⁻¹ = (Yᵀ)⁻¹ Z⁻¹ X⁻¹. Due to orthonormality, X⁻¹ = Xᵀ and Y⁻¹ = Yᵀ. Since Z is diagonal, (Z⁻¹)ᵢᵢ = 1/Zᵢᵢ. Thus, if W is nonsingular, the pseudoinverse is the true inverse.
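A rough Python sketch of the whole CUR pipeline under these definitions (the low-rank test matrix, the number of samples, and the omission of the rescaling used in the formal analysis are all illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.random((100, 10)) @ rng.random((10, 50))    # hypothetical rank-10 matrix

def cur_sketch(A, c, rng):
    """Sample c columns and c rows with probability proportional to their
    squared norms (duplicates allowed), take W = the rows-by-columns
    intersection, and set U = pseudoinverse of W."""
    p_col = (A ** 2).sum(axis=0) / (A ** 2).sum()
    p_row = (A ** 2).sum(axis=1) / (A ** 2).sum()
    cols = rng.choice(A.shape[1], size=c, replace=True, p=p_col)
    rows = rng.choice(A.shape[0], size=c, replace=True, p=p_row)
    C, R = A[:, cols], A[rows, :]
    W = A[np.ix_(rows, cols)]
    U = np.linalg.pinv(W, rcond=1e-10)   # U = W+ via the SVD of W, as on the slide
    return C, U, R

C, U, R = cur_sketch(A, c=40, rng=rng)
rel_err = np.linalg.norm(A - C @ U @ R, 'fro') / np.linalg.norm(A, 'fro')
print(rel_err)    # small: C.U.R reconstructs the low-rank A well
```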

[Plot: reconstruction error versus number of sampled columns/rows, comparing SVD error and CUR error.] In practice: pick 4k columns/rows for a “rank-k” approximation.

+ Easy interpretation:  since the basis vectors are actual columns and rows
+ Sparse basis:  since the basis vectors are actual columns and rows
- Duplicate columns and rows:  columns with large norms will be sampled many times
[Figure: a dense singular vector vs. an actual (sparse) column of A.]

 If we want to get rid of the duplicates:  Throw them away  Scale (multiply) the remaining columns/rows by the square root of the number of duplicates. [Figure: A ≈ C_d · U · R_d with duplicates becomes A ≈ C_s · U · R_s with the deduplicated, rescaled C_s and R_s; then construct a small U.]
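A small sketch of that deduplication step (illustrative only; `cols` is assumed to be the index list produced by a sampler like the one above):

```python
import numpy as np

def dedup_and_scale(A, cols):
    """Collapse duplicate sampled columns: keep one copy of each distinct
    column and scale it by sqrt(multiplicity)."""
    uniq, counts = np.unique(cols, return_counts=True)
    C_s = A[:, uniq] * np.sqrt(counts)     # one column per distinct index
    return C_s, uniq

# e.g. with columns sampled with replacement:
A = np.random.rand(20, 8)
cols = np.array([1, 3, 3, 5, 3, 1])
C_s, kept = dedup_and_scale(A, cols)
print(kept, C_s.shape)                     # [1 3 5] (20, 3)
```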

SVD: A = U Σ Vᵀ, where A is huge but sparse and the factors U, Σ, Vᵀ are big and dense. CUR: A ≈ C U R, where A is huge but sparse, C and R are big but sparse, and U is dense but small.

 DBLP bibliographic data  Author-to-conference big sparse matrix  Aᵢⱼ: number of papers published by author i at conference j  428K authors (rows), 3659 conferences (columns)  Very sparse  Want to reduce dimensionality  How much time does it take?  What is the reconstruction error?  How much space do we need?

 Accuracy:  1 − relative sum of squared errors  Space ratio:  #output matrix entries / #input matrix entries  CPU time. [Plots: accuracy, space ratio, and CPU time for SVD, CUR, and CUR with no duplicates.] Sun, Faloutsos: Less is More: Compact Matrix Decomposition for Large Sparse Graphs, SDM ’07.

 SVD is limited to linear projections:  it finds a lower-dimensional linear projection that preserves Euclidean distances  Non-linear methods: Isomap  Data lies on a nonlinear low-dimensional curve, aka a manifold  Use the distance as measured along the manifold  How?  Build an adjacency graph  Geodesic distance is graph distance  SVD/PCA the graph's pairwise distance matrix
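A rough Isomap-style sketch of those three steps (the k-nearest-neighbor rule, the circle-shaped toy data, and the classical-MDS embedding step are illustrative assumptions, not the lecture's reference implementation):

```python
import numpy as np
from scipy.spatial.distance import cdist
from scipy.sparse.csgraph import shortest_path

def isomap_sketch(X, n_neighbors=10, n_components=2):
    """1) build a k-nearest-neighbor graph, 2) approximate geodesic distances
    by shortest paths in that graph, 3) embed with classical MDS (an
    eigendecomposition of the double-centered squared distances)."""
    D = cdist(X, X)                                   # Euclidean distances
    W = np.full_like(D, np.inf)                       # inf = no edge
    idx = np.argsort(D, axis=1)[:, 1:n_neighbors + 1]
    for i, nbrs in enumerate(idx):
        W[i, nbrs] = D[i, nbrs]
    G = shortest_path(W, method='D', directed=False)  # geodesic (graph) distances

    n = G.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n               # centering matrix
    B = -0.5 * J @ (G ** 2) @ J
    vals, vecs = np.linalg.eigh(B)
    top = np.argsort(vals)[::-1][:n_components]
    return vecs[:, top] * np.sqrt(np.maximum(vals[top], 0))

# Toy manifold: noisy circle in 2D, embedded into 2 Isomap coordinates.
t = np.linspace(0, 2 * np.pi, 200, endpoint=False)
X = np.c_[np.cos(t), np.sin(t)] + 0.01 * np.random.randn(200, 2)
print(isomap_sketch(X, n_neighbors=5).shape)          # (200, 2)
```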

 Drineas et al.: Fast Monte Carlo Algorithms for Matrices III: Computing a Compressed Approximate Matrix Decomposition, SIAM Journal on Computing.
 J. Sun, Y. Xie, H. Zhang, C. Faloutsos: Less is More: Compact Matrix Decomposition for Large Sparse Graphs, SDM 2007.
 P. Paschou, M. W. Mahoney, A. Javed, J. R. Kidd, A. J. Pakstis, S. Gu, K. K. Kidd, and P. Drineas: Intra- and interpopulation genotype reconstruction from tagging SNPs, Genome Research, 17(1), 2007.
 M. W. Mahoney, M. Maggioni, and P. Drineas: Tensor-CUR Decompositions For Tensor-Based Data, Proc. 12th Annual SIGKDD, 2006.