Concept-based P2P Search: How to find more relevant documents
Ingmar Weber, Max-Planck-Institute for Computer Science
Joint work with Holger Bast
Torino, December 8th, 2004
What's wrong with Google? First... it's centralized.
- Global authority: can we trust it?
  - "presidential election Ukraine"
  - "buy luxury car"
- Dependency: I need it.
  - single point of failure
- Size: even Google is only human.
  - unlikely to index everything that is of interest (deep web)
  - infeasible to run expensive algorithms on 8 billion documents
  - difficult to input human knowledge
=> Peer-to-peer search
What's wrong with Google? Second... it's term-based.
- Searching for "matrix factorization"... but what about "matrix decomposition"... or "decompose linear system"... or even "probabilistic latent semantic indexing"?
- We get some good hits, but far from all.
- No big problem if looking for popular things: then there are still enough good hits.
- A personal dilemma: mine is not a "Britney Spears"-like query.
=> Concept-based search
Peer-to-peer search: Approach 0
- Each peer has a local crawler and index.
- Nobody posts any information about local indices.
- Search can only be done by (limited) flooding.
- No way to know in advance where to find information.
- Very low recall for unpopular queries.
[Diagram: a query for "matrix factorization" floods the network, hoping to reach the one relevant nerd.]
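The flooding behavior above can be sketched as follows. The topology, the peer names, and the local indices are invented toy data, not part of the talk; the point is only that a relevant peer beyond the TTL horizon is never reached.

```python
# Sketch of Approach 0: a query is flooded with a time-to-live (TTL);
# only peers reached within TTL hops answer from their local index.
# The topology and the indices below are invented toy data.
neighbors = {
    "A": ["B", "C"],
    "B": ["A", "D"],
    "C": ["A"],
    "D": ["B"],
}
local_index = {
    "A": [],
    "B": ["doc1"],
    "C": [],
    "D": ["doc7"],  # the one relevant peer sits two hops away from A
}

def flood(start, ttl):
    # Breadth-first flooding; each hop consumes one unit of TTL.
    results, frontier, seen = [], [start], {start}
    for _ in range(ttl + 1):
        results += [d for p in frontier for d in local_index[p]]
        frontier = [n for p in frontier for n in neighbors[p] if n not in seen]
        seen |= set(frontier)
        if not frontier:
            break
    return results

print(flood("A", ttl=1))  # misses doc7: peer D is two hops from A
```

With ttl=1 the relevant document on peer D is simply never seen, which is exactly the low-recall problem of Approach 0.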
Peer-to-peer search: Approach 1
- Again, local crawler and index.
- The union of all indices is stored in a distributed hash table (DHT).
- Each peer is responsible for a few terms (i.e., keys in the DHT).
- A joining peer posts his full local index:
  - For each term in his collection, he sends the inverted list to the corresponding peer.
  - The receiving peer merges this list with his current list.
  - The new peer also becomes responsible for some terms.
- To search for "matrix factorization", retrieve the corresponding document lists and merge them to give the result ranking.
- Nice idea, but infeasible: far too much data traffic, even for medium-sized local collections.
[Diagram: DHT with term keys Britney, Spears, Matrix, Factorization, Linear, Decomposition, System; inverted lists such as "Linear: doc 10, doc 6, doc 17", "Factorization: doc 9, doc 7, doc 13", "Matrix: doc 7, doc 10, doc 5"; merged ranking "1. doc 7, 2. doc 9, ...".]
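The posting and merging steps above can be sketched like this. The DHT is simulated as a list of per-peer dictionaries; `num_peers`, the hash-based key assignment, and the toy inverted lists are illustrative assumptions.

```python
# Sketch of Approach 1: every peer posts its full inverted lists into a DHT.
# A term's responsible peer is derived from a stable hash of the term, so
# all peers agree on who owns which key. All names here are illustrative.
import hashlib

num_peers = 4
dht = [dict() for _ in range(num_peers)]  # term -> merged inverted list

def responsible_peer(term):
    # Stable hash of the term picks the owning peer.
    return int(hashlib.sha1(term.encode()).hexdigest(), 16) % num_peers

def post_index(local_index):
    # A joining peer sends each inverted list to the owning peer,
    # which merges it with the list it already holds.
    for term, doc_ids in local_index.items():
        owner = dht[responsible_peer(term)]
        owner[term] = sorted(set(owner.get(term, [])) | set(doc_ids))

def search(query_terms):
    # Retrieve the document list for each query term and rank documents
    # by how many query terms they match (a crude merge).
    counts = {}
    for term in query_terms:
        for doc in dht[responsible_peer(term)].get(term, []):
            counts[doc] = counts.get(doc, 0) + 1
    return sorted(counts, key=lambda d: (-counts[d], d))

post_index({"matrix": ["doc7", "doc10", "doc5"],
            "factorization": ["doc9", "doc7", "doc13"]})
post_index({"linear": ["doc10", "doc6", "doc17"]})
print(search(["matrix", "factorization"]))  # doc7 matches both terms
```

The traffic problem is visible even in the sketch: `post_index` ships every posting of every term, which for a realistic collection dwarfs the query traffic.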
Peer-to-peer search: Approach 2 ("Minerva" [Weikum et al.])
- Again, local crawler and index.
- Peers share a distributed hash table (DHT); each peer is responsible for a few terms (i.e., keys in the DHT).
- For each term we maintain a peer list with statistics.
- A joining peer posts only statistics about his terms:
  - For each term in his collection, he sends a short statistic (e.g., "Linear: 20 docs, max tf 10") to the corresponding peer.
  - The receiving peer merges this statistic with his current peer statistics for this term.
  - The new peer also becomes responsible for some terms.
- To search for "matrix factorization": retrieve the peer lists and select the most promising peers.
- Send the query to these peers; they perform a local search and return their best results.
- Merge the results.
- This works, but: performance depends heavily on term-based peer selection, and recall is still low.
[Diagram: peer lists "Factorization: peer 9, peer 13", "Matrix: peer 7, peer 9"; the query "matrix factorization" goes to peers 9 and 7, which return local results like "doc 2, doc 5, doc 7" and "doc 8, doc 7, doc 11"; merged into "1. doc 7, 2. doc ...".]
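The term-based peer selection can be sketched as follows. The scoring rule (document count times maximum term frequency) is a simple placeholder of my own, not Minerva's actual formula, and the per-term statistics are invented.

```python
# Sketch of Minerva-style peer selection: peers post only per-term
# statistics; a querying peer scores peers from these statistics and
# contacts the top k. The scoring formula is a stand-in, not Minerva's.

# peer_lists[term] maps a peer id to its statistic (doc count, max tf).
peer_lists = {
    "matrix":        {"peer7": (15, 4), "peer9": (30, 8)},
    "factorization": {"peer9": (12, 6), "peer13": (5, 2)},
    "linear":        {"peer3": (20, 10)},
}

def select_peers(query_terms, k=2):
    # Sum a crude per-term score (doc count * max tf) over all query terms.
    scores = {}
    for term in query_terms:
        for peer, (num_docs, max_tf) in peer_lists.get(term, {}).items():
            scores[peer] = scores.get(peer, 0) + num_docs * max_tf
    return sorted(scores, key=lambda p: -scores[p])[:k]

print(select_peers(["matrix", "factorization"]))  # ['peer9', 'peer7']
```

Only the selected peers ever see the query; everything hinges on these few statistics pointing at the right peers, which is exactly the weakness noted on the slide.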
Concept-based search: Improving recall
- Basic idea: don't compare queries with documents directly at the word level; introduce one level of abstraction.
- Query: "probabilistic latent semantic indexing"
- Document: "non-negative matrix factorization"
- Not similar at the word level, but strongly linked at the concept level.
- Shared concept: approximately decomposing a non-negative matrix into a product of two smaller non-negative matrices.
Concept-based search: How to derive concepts?
- Noise reduction: replace documents by combinations of "simpler" prototypes.
- Document clustering: partition documents into subsets and map the query to the best-matching set.
- Query expansion: study the co-occurrence patterns of terms and add "suitable" terms to the query.
- Note: in all cases the automatically derived concepts depend on the individual corpus.
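Of the three options, query expansion is the easiest to sketch. The toy corpus and the co-occurrence threshold below are invented for illustration; real systems use weighted co-occurrence statistics rather than raw counts.

```python
# Sketch of co-occurrence-based query expansion: terms that frequently
# occur in the same documents as a query term are added to the query.
# The corpus and the threshold are illustrative toy data.
docs = [
    {"matrix", "factorization", "decomposition"},
    {"matrix", "decomposition", "linear", "system"},
    {"britney", "spears"},
]

def expand(query_terms, min_cooc=2):
    expanded = set(query_terms)
    for term in query_terms:
        # Count how often each other term co-occurs with this query term.
        cooc = {}
        for d in docs:
            if term in d:
                for other in d - {term}:
                    cooc[other] = cooc.get(other, 0) + 1
        expanded |= {t for t, c in cooc.items() if c >= min_cooc}
    return expanded

print(sorted(expand({"matrix"})))  # ['decomposition', 'matrix']
```

Note that the derived expansion ("decomposition" for "matrix") is purely a property of this corpus, which is the point of the slide's closing remark.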
Concept-based P2P search: Approach 1
- Again, local crawler and index; peers share a DHT, each responsible for a few terms (keys in the DHT).
- For each term we maintain a peer list with statistics.
- Same peer selection process using query terms as in Minerva.
- The only difference: peers locally employ a concept-based retrieval scheme of their choice.
- Locally better recall; we still merge the individual results.
- This works, but: performance still depends heavily on term-based peer selection, and recall is still low.
Concept-based P2P search: A blueprint
What we would like to do:
1. Map a query to the most relevant concepts, either individually or with help from peers.
2. Find out which peers have documents related to these concepts.
3. Send the query to these peers and retrieve a ranking for each concept.
4. Merge the results.
Difficult: we would have to post summaries of concepts that others can then find, and it is not clear how to represent concepts universally and uniquely.
A new "concept"-based scheme: Using documents as concepts
Experimentally, the following works very well for a local collection:
1. Precompute, for each document, a ranking of all documents (doc-doc similarities) using cosine similarity.
2. For a query, find a few documents which, in combination, are most likely to generate the query (EM algorithm).
3. Merge the corresponding rankings to give the final output ranking.
- Observation 1: the highest-ranked documents need not contain the query terms.
- Observation 2: doc-doc similarities are more content-based than doc-query similarities.
[Diagram: for "matrix factorization", the precomputed rankings of the selected documents, e.g. "doc 11, doc 7, doc 8, ..." and "doc 5, doc 8, doc 3, ...", are merged into "1. doc 8, 2. doc ...".]
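The three steps can be sketched as follows. The toy term vectors are invented, and the greedy term-coverage selection in step 2 is a heavily simplified stand-in for the EM algorithm mentioned on the slide; only the overall structure (precomputed doc-doc rankings, selection, merge) follows the talk.

```python
# Sketch of "documents as concepts": precompute doc-doc cosine rankings,
# pick a few query-generating documents, and merge their rankings.
import math

# Toy tf vectors (term -> weight), invented for illustration.
docs = {
    "doc5":  {"matrix": 2, "decomposition": 1},
    "doc8":  {"matrix": 1, "factorization": 2},
    "doc11": {"linear": 1, "system": 2},
}

def cosine(a, b):
    dot = sum(w * b.get(t, 0) for t, w in a.items())
    na = math.sqrt(sum(w * w for w in a.values()))
    nb = math.sqrt(sum(w * w for w in b.values()))
    return dot / (na * nb)

# Step 1: precompute, for every document, a ranking of the others.
doc_rankings = {
    d: sorted((o for o in docs if o != d),
              key=lambda o: -cosine(docs[d], docs[o]))
    for d in docs
}

def select_generating_docs(query_terms, k=2):
    # Step 2, heavily simplified: greedily pick the documents whose term
    # vectors cover the most query terms (a stand-in for the EM step).
    def coverage(d):
        return sum(1 for t in query_terms if t in docs[d])
    return sorted(docs, key=lambda d: -coverage(d))[:k]

def search(query_terms):
    # Step 3: merge the selected documents' precomputed rankings
    # by reciprocal rank.
    scores = {}
    for d in select_generating_docs(query_terms):
        for rank, o in enumerate([d] + doc_rankings[d]):
            scores[o] = scores.get(o, 0) + 1.0 / (rank + 1)
    return sorted(scores, key=lambda o: -scores[o])

print(search(["matrix", "factorization"]))
```

Observation 1 is visible here: `doc11` appears in the final ranking via the doc-doc rankings even though it contains no query term at all.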
Concept-based P2P search: Approach 2
- As before, post short statistics about each term in the collection, but now also about each document (e.g., the number of documents with cosine similarity > 0.7).
- Still use term statistics for the first round of peer selection (as before).
- Send the query to these peers (as before).
- Each selected peer locally selects the documents which are most likely to generate the query and sends back only these few documents (term vectors).
- The initiating peer then selects a few of them (EM algorithm).
[Diagram: term peer lists as before ("Factorization: peer 9, peer 13", "Matrix: peer 7, peer 9"); for the query "matrix factorization", peers 9 and 7 return candidate documents such as "doc 2 w/ doc 5" and "doc 8 w/ doc 5".]
Concept-based P2P search: Approach 2 (continued)
- Then the peer lists for these documents are retrieved and a new set of peers is selected.
- We send these peers our query expressed in terms of the selected documents, i.e., "concepts" (here: "doc 5 w/ doc 8").
- The peers send back their most relevant documents.
- Merge the individual results.
[Diagram: document peer lists "doc 5: peer 2, peer 1" and "doc 8: peer 3, peer 2"; the concept query "doc 5 w/ doc 8" goes to the selected peers, whose rankings "doc 10, doc 5, doc 2" and "doc 2, doc 3, doc 8" are merged into "1. doc 2, 2. doc ...".]
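The second round can be sketched like this. The peer lists, the peers' local answers, the frequency-based peer choice, and the reciprocal-rank merge are all my own toy assumptions standing in for the system's actual selection and merging rules.

```python
# Sketch of the second round: look up the peer lists of the selected
# "concept" documents, pick the most promising peers, and merge their
# answers. All data and scoring rules here are invented stand-ins.
doc_peer_lists = {
    "doc5": ["peer2"],
    "doc8": ["peer3", "peer2"],
}

# What each peer locally returns for the concept query "doc5 w/ doc8".
local_answers = {
    "peer2": ["doc10", "doc5", "doc2"],
    "peer3": ["doc2", "doc3", "doc8"],
}

def second_round(concept_docs, k=2):
    # A peer appearing in several of the concept documents' peer lists
    # is most promising; select the k most frequent ones.
    counts = {}
    for d in concept_docs:
        for p in doc_peer_lists.get(d, []):
            counts[p] = counts.get(p, 0) + 1
    peers = sorted(counts, key=lambda p: -counts[p])[:k]
    # Merge the selected peers' local rankings by reciprocal rank.
    scores = {}
    for p in peers:
        for rank, doc in enumerate(local_answers.get(p, [])):
            scores[doc] = scores.get(doc, 0) + 1.0 / (rank + 1)
    return sorted(scores, key=lambda d: -scores[d])

print(second_round(["doc5", "doc8"]))  # doc2, returned by both peers, wins
```

A document returned by several concept peers rises to the top of the merged ranking, mirroring the "1. doc 2" outcome in the slide's diagram.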
Concept-based P2P search: Advantages of our approach
- Users only have to "agree" on documents: no need for a common taxonomy.
- If we can find some relevant documents, we can find more => increased recall.
  - Allows a content-based "More documents like this" button.
- Uses non-trivial doc-doc similarities: infeasible to compute for 8 billion documents, easy for a few thousand.