1
Unstructured P2P Overlay
2
Improving Search in Peer-to-Peer Networks, ICDCS 2002. Beverly Yang, Hector Garcia-Molina.
3
Current Techniques
Gnutella
– BFS with depth limit D
– Wastes bandwidth and processing resources
Freenet
– DFS with depth limit D
– Poor response time
4
Iterative Deepening
– The basic idea is to reduce the number of nodes that process a query.
– Under policy P = {a, b, c} with waiting time W, the source first sends a BFS of depth a; if the query is not satisfied after waiting W, it deepens the search to depth b, and then to depth c. See the example, and the sketch below.
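A minimal sketch of this policy loop (hedged: `flood` is an assumed primitive, and the depths and wait time are illustrative, not the paper's values):

```python
import time

def iterative_deepening(query, flood, policy=(2, 4, 6), wait=5.0):
    """Iterative deepening sketch (after Yang & Garcia-Molina).

    `flood(query, depth)` is an assumed primitive that floods the query
    to all nodes within `depth` hops and returns the results received;
    `policy` and `wait` play the roles of P = {a, b, c} and W.
    """
    results = []
    for depth in policy:      # successively deeper BFS floods
        results = flood(query, depth)
        if results:           # query satisfied at this depth: stop
            return results
        time.sleep(wait)      # wait W before deepening the search
    return results
```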
5
Directed BFS
– A source sends query messages to just a subset of its neighbors.
– A node maintains simple statistics on its neighbors:
  - Number of results received from each neighbor
  - Latency of the connection
6
Candidate Nodes
– Returned the highest number of results for past queries
– Returned response messages that took the lowest average number of hops
– Forwarded the highest number of messages (suggesting a stable, well-connected neighbor)
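A hedged sketch of how a node might rank neighbors from these statistics (field names and the tie-breaking order are mine, not the paper's):

```python
def pick_neighbors(stats, k=3):
    """Rank neighbors for Directed BFS using locally kept statistics.

    `stats` maps neighbor -> dict with hypothetical fields: 'results'
    (results returned on past queries), 'avg_hops' (average hops of its
    response messages), 'messages' (messages it has forwarded).
    """
    def score(n):
        s = stats[n]
        # prefer many results, then short paths, then chatty neighbors
        return (s['results'], -s['avg_hops'], s['messages'])
    return sorted(stats, key=score, reverse=True)[:k]

neighbors = {'A': dict(results=12, avg_hops=2.1, messages=300),
             'B': dict(results=12, avg_hops=3.4, messages=120),
             'C': dict(results=2,  avg_hops=1.5, messages=800)}
print(pick_neighbors(neighbors, k=2))   # -> ['A', 'B']
```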
7
Local Indices
– Each node n maintains an index over the data of all nodes within a radius of r hops.
– All nodes at depths not listed in the policy simply forward the query; nodes at listed depths evaluate it against their local index.
– Example: policy P = {1, 5}
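A sketch of the per-node dispatch rule (hedged: `node.local_index` is an assumed structure covering all data within r hops):

```python
def handle_query(node, query, depth, policy=frozenset({1, 5})):
    """Local Indices sketch: only nodes at depths listed in the policy
    evaluate the query; a hit at depth d answers on behalf of every
    node within r hops of it, so few depths can cover the whole region.
    """
    if depth in policy:
        return node.local_index.search(query)  # answer for d-r .. d+r
    return None  # depth not in policy: just forward the query onward
```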
8
Experimental Setup
For each response, we log:
– Number of hops taken
– IP address from which the Response message came
– Response time
– Individual results
9
Experimental results
10
Routing Indices for Peer-to-Peer Systems, ICDCS 2002
11
Introduction
– Search in a P2P system:
  - Mechanisms without an index
  - Mechanisms with specialized index nodes (centralized search)
  - Mechanisms with indices at each node: structured or unstructured P2P networks
– Parallel vs. sequential search:
  - Response time
  - Network traffic
12
Routing Indices (RI)
– Query model:
  - Documents are on zero or more "topics", and queries request documents on particular topics.
  - Document topics are independent.
– Local index
– RI: each node has a local routing index containing, for each neighbor, the number of documents along that path and the number of documents on each topic of interest.
– The RI allows a node to select the "best" neighbors to send a query to.
13
The RI may be “coarser” than the local indices
14
Using Routing Indices
– Goodness measure: the number of results expected along a path.
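A hedged reconstruction of this estimator (notation mine): if N_j documents are reachable through neighbor j and c_{tj} of them are on topic t, then under the topic-independence assumption the expected number of results for a query over topic set T is

$$\mathrm{goodness}(j, T) \;=\; N_j \prod_{t \in T} \frac{c_{tj}}{N_j}$$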
15
– Storage space:
  - N: number of nodes in the P2P network
  - b: branching factor
  - c: number of categories
  - s: counter size in bytes
  - Centralized index: s·(c+1)·N
  - Distributed system: s·(c+1)·b per node
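Illustrative arithmetic (numbers mine): with s = 4 bytes, c = 10 categories, and N = 100,000 nodes, a centralized index needs 4 × 11 × 100,000 ≈ 4.4 MB, whereas in the distributed scheme with branching factor b = 4 each node stores only 4 × 11 × 4 = 176 bytes.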
16
Creating routing indices
17
Maintaining Routing Indices
– Trade-off between RI freshness and update cost
– Updates must not require the participation of a disconnecting node
Discussion
– What if search topics are dependent?
– Can the number of "hops" necessary to reach a document be estimated?
18
Alternative Routing Indices
– Hop-count RI: aggregated RIs for each "hop", up to a maximum number of hops (the horizon), are stored.
19
– Search cost: number of messages.
– The goodness of a neighbor: the ratio between the number of documents available through that neighbor and the number of messages required to get those documents.
– In a regular tree with fanout F, it takes F^h messages to find all documents at hop h.
– Storage cost?
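A hedged reconstruction of the regular-tree cost model behind this ratio (notation mine): with N_j^{(h)} documents at hop h through neighbor j, fanout F, and horizon H, the goodness of neighbor j is

$$\mathrm{goodness}(j) \;=\; \sum_{h=1}^{H} \frac{N_j^{(h)}}{F^{\,h-1}}$$

so each additional hop discounts documents by a factor of F; the exponentially aggregated RI on the next slide stores this sum precomputed.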
20
Exponentially Aggregated RI
– Stores the result of applying the regular-tree cost formula to a hop-count RI.
– How to compute the goodness of a path for a query containing several topics?
21
Cycles in the P2P network
22
Efficient Content Location Using Interest-Based Locality in Peer-to-Peer Systems. Kunwadee Sripanidkulchai, Bruce Maggs, Hui Zhang. IEEE INFOCOM 2003.
23
Motivation
– Although flooding is simple and robust, it is not scalable.
– Goal: a content location solution in which peers organize into an interest-based structure on top of Gnutella.
– The algorithm is called interest-based shortcuts.
24
Interest-based locality
25
Shortcuts Architecture and Design Goals
– To create additional links on top of a peer-to-peer system's overlay
– A separate performance-enhancement layer on top of existing content location mechanisms
26
Content location paths
27
Shortcut Discovery
– The first lookup returns a set of peers that store the content; these are the potential candidates.
– One peer is selected at random from the set and added to the shortcut list.
– For scalability, each peer allocates a fixed-size amount of storage to implement shortcuts.
– Alternative for shortcut discovery: exchanging shortcut lists between peers.
28
Shortcut Selection
– Shortcuts are ranked based on their perceived utility.
– A peer locates content by sequentially asking the shortcuts on its list, in rank order.
29
Ranking Metrics
– Probability of providing content
– Latency of the path to the shortcut
– Load at the shortcut
– A combination of metrics can be used based on each peer's preference
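A hedged sketch combining the first metric (observed success probability) with a bounded shortcut list; the class and field names are mine:

```python
class ShortcutList:
    """Bounded, ranked list of interest-based shortcuts (sketch)."""

    def __init__(self, capacity=10):
        self.capacity = capacity
        self.stats = {}          # peer -> [successes, attempts]

    def _utility(self, peer):
        ok, tries = self.stats[peer]
        return ok / tries if tries else 0.0

    def add(self, peer):
        self.stats.setdefault(peer, [0, 0])
        if len(self.stats) > self.capacity:   # evict lowest-utility peer
            del self.stats[min(self.stats, key=self._utility)]

    def locate(self, content, probe):
        """Ask shortcuts in rank order. `probe(peer, content)` is an
        assumed primitive returning True if the peer has the content;
        returns the first peer that succeeds, else None (the caller
        then falls back to ordinary flooding)."""
        for peer in sorted(self.stats, key=self._utility, reverse=True):
            self.stats[peer][1] += 1
            if probe(peer, content):
                self.stats[peer][0] += 1
                return peer
        return None
```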
30
Potential and Limitations
– Adding 5 shortcuts at a time produces success rates that are close to the best possible.
– Slightly increasing the path length from 1 to 2 hops (following shortcuts of shortcuts) yields a better success rate.
31
Efficient and Scalable Query Routing for Unstructured Peer-to-Peer Networks. A. Kumar, J. Xu and E. W. Zegura.
32
Overview
– As the distance from the node hosting the object increases, fewer bits are used to represent information about the direction in which the object is located.
33
Design
– Exponential Decay Bloom Filter (EDBF)
  - A Bloom filter is a data structure for approximately answering set-membership questions: with k hash functions and a bit array A, inserting x sets A[h_i(x)] = 1 for i = 1…k.
  - θ(x) = |{i | A[h_i(x)] = 1, i = 1…k}|: the number of x's k positions that are set in the filter.
  - θ(x)/k roughly indicates the probability of finding x along a specific link in the overlay.
  - Noise?
  - When there is no noise: one hop away from the object x, θ(x) is approximately k; two hops away, it is approximately k/d.
– Decay implementation
  - The decay rate is 1/d: nodes reset each set bit in the EDBFs received from upstream neighbors with probability 1 − 1/d.
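A minimal, hedged EDBF sketch (array size, hash construction, and d are illustrative choices, not the paper's):

```python
import hashlib
import random

class EDBF:
    """Exponential Decay Bloom Filter sketch."""

    def __init__(self, m=1 << 16, k=8):
        self.m, self.k = m, k
        self.bits = bytearray(m)          # bit array A

    def _positions(self, x):
        for i in range(self.k):           # k independent hash functions
            digest = hashlib.sha256(f"{i}:{x}".encode()).digest()
            yield int.from_bytes(digest[:8], "big") % self.m

    def insert(self, x):
        for pos in self._positions(x):
            self.bits[pos] = 1            # A[h_i(x)] = 1, i = 1..k

    def theta(self, x):
        """theta(x): how many of x's k positions are set; theta(x)/k
        is a rough confidence of finding x along this link."""
        return sum(self.bits[pos] for pos in self._positions(x))

    def decayed(self, d=2):
        """Copy in which each set bit survives with probability 1/d,
        i.e. is reset with probability 1 - 1/d."""
        out = EDBF(self.m, self.k)
        out.bits = bytearray(b if random.random() < 1 / d else 0
                             for b in self.bits)
        return out
```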
34
Creation and Maintenance of routing tables
35
– The initial advertisement is created by taking the union of all advertisements received from neighbors other than the target neighbor.
– Decay the combined advertisement by the decay factor d.
– Union the result with the local EDBF; the local EDBF is propagated without attenuation.
– Loops
  - Split horizon with poisoned reverse: information received from a neighbor j is not advertised back to j.
  - Exponential decay: the count-to-infinity problem manifests itself as a "decay to an infinitely small amount of information".
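Building on the EDBF sketch above, the advertisement rule might look like this (hedged; the node fields are assumptions):

```python
def union(a, b):
    """Bitwise OR of two same-sized EDBFs."""
    out = EDBF(a.m, a.k)
    out.bits = bytearray(x | y for x, y in zip(a.bits, b.bits))
    return out

def advertisement_for(node, target, d=2):
    """EDBF advertised to neighbor `target`.

    Union everything heard from the *other* neighbors (split horizon
    with poisoned reverse), decay it once, then merge in the local
    EDBF without attenuation."""
    combined = EDBF()
    for neighbor, adv in node.received_advertisements.items():
        if neighbor is not target:        # never echo info back to j
            combined = union(combined, adv)
    combined = combined.decayed(d)        # attenuate remote information
    return union(combined, node.local_edbf)  # local info, full strength
```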
36
Query forwarding
37
– If the query is satisfied locally, it is answered.
– Otherwise, if the TTL of the query has not expired:
  - If the query was previously seen, it is forwarded to a randomly chosen neighbor.
  - Otherwise, it is forwarded to the neighbor with the highest θ(x).
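A hedged sketch of this forwarding rule (the node structure is assumed):

```python
import random

def route(node, query_id, x, ttl):
    """Forwarding rule from the slide; returns the chosen next hop."""
    if x in node.local_store:
        return None                           # satisfied locally: answer
    if ttl <= 0:
        return None                           # TTL expired: drop
    if query_id in node.seen:                 # seen before:
        return random.choice(node.neighbors)  # degrade to a random walk
    node.seen.add(query_id)
    # otherwise follow the strongest scent: the neighbor whose
    # advertised EDBF has the highest theta(x)
    return max(node.neighbors,
               key=lambda nb: node.received_advertisements[nb].theta(x))
```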
38
Structured P2P Overlay
39
Similarity Discovery in Structured P2P Overlays, ICPP
40
Introduction
– Structured P2P networks only support search with a single keyword.
– Similarity between two documents:
  - Keyword sets
  - Vector space
  - Measure
– Problems:
  - Search problem
  - New keywords?
41
Meteorograph
– Absolute angle
42
Publishing and Searching
– Publish:
  - Hash the item.
  - Publish the item to the node n_p whose key is closest to the hash value.
43
– Search problems:
  - Nearest answers
  - K-nearest answers
  - Partial vs. comprehensive
– Search strategy
– Discussion: what happens when the keyword vector is represented by …?
44
Other Issues
– Load balance
– Changes to the vector space:
  - Republish?
  - Use a comprehensive set of keywords?
  - Other methods?
45
SWAM: A Family of Access Methods for Similarity-Search in Peer-to-Peer Data Networks. Farnoush Banaei-Kashani, Cyrus Shahabi (CIKM 2004).
46
PDN Access Method
Defines:
– How to organize the PDN topology into an index-like structure
– How to use the index structure
47
Hilbert Space
– A Hilbert space (V, L_p):
  - Key k = (a_1, a_2, …, a_d), where d is the dimension of the vector space.
  - Each attribute's domain is a contiguous and finite interval of R.
– The L_p norm, with p ∈ Z+, serves as the distance function to measure dissimilarity.
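For reference, the standard L_p distance over such keys (the general definition, not anything SWAM-specific):

$$L_p(x, y) \;=\; \Big(\sum_{i=1}^{d} |x_i - y_i|^{p}\Big)^{1/p}, \qquad p \in \mathbb{Z}^{+}$$

with p = 2 giving Euclidean distance and p = 1 Manhattan distance.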
49
Topology
– The topology of a PDN can be modelled as a directed graph G(N, E); A(n) is the set of neighbors of node n.
– A node maintains a limited amount of information about its neighbors:
  - The keys of the tuples maintained at each neighbor
  - The physical addresses of the neighbors
50
– The processing of a query is complete when all expected tuples in the relevant result set have been visited.
– Access methods:
  - Join and Leave, for virtual nodes
  - Forward, which uses local information to process queries and make forwarding decisions
51
The Small-World Example
– Grid component
– Random-graph component
– Processing of queries (exact, range, kNN) in this high-locality topology
53
Flat Partitioning
SWAM also employs the space-partitioning idea: flat partitioning.
54
Query Processing
– Exact-match query processing
– Range query processing
– kNN query processing
55
Similarity Search in Peer-to-Peer Databases. IEEE International Conference on Distributed Computing Systems 2005.
56
Data and Query Model
– All data objects are unit vectors in a d-dimensional Euclidean space.
– Cosine distance is used as the dissimilarity measure.
57
Design Details
– The indexing scheme:
  - A locality-sensitive hashing function is used to reduce the dimensionality: each r_i is a d-dimensional unit vector, and h(x) is the concatenation of the bits b_{r_1}(x), b_{r_2}(x), …, b_{r_k}(x).
  - Objects with the same hash value belong to the same cluster and are stored at the node that owns the DHT key h(x).
  - Nearby objects are grouped into indices with low Hamming distance.
  - To avoid the situation where nearby objects differ in some bit positions of their index, t hashing functions are used (replication); this ensures a high probability that two related objects hash onto indices with low Hamming distance in at least one of these sets.
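A hedged sketch of one of the t hash functions, using the standard random-hyperplane construction for cosine distance, b_{r_i}(x) = [r_i · x ≥ 0] (dimensions and k are illustrative):

```python
import numpy as np

def lsh_index(x, hyperplanes):
    """h(x): concatenation of bits b_{r_i}(x) = [r_i . x >= 0]."""
    bits = (hyperplanes @ x >= 0).astype(int)
    return "".join(map(str, bits))      # e.g. '10110100', the DHT key

rng = np.random.default_rng(0)
d, k = 16, 8
planes = rng.standard_normal((k, d))    # random vectors r_1 .. r_k
planes /= np.linalg.norm(planes, axis=1, keepdims=True)

x = rng.standard_normal(d)
x /= np.linalg.norm(x)                  # data objects are unit vectors
print(lsh_index(x, planes))             # cluster index for object x
```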
58
The search algorithm
– Node u generates a query (x, …).
– Compute h(x).
– Compute the set V of all indices whose Hamming distance from h(x) is at most r.
– Node u queries each of the nodes in V; the nodes in V return all data objects that match u's query.
– How to determine r?
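The set V can be enumerated directly; a sketch (this also bears on the later discussion question about the cardinality of V, which is the sum of C(k, i) for i ≤ r):

```python
from itertools import combinations

def hamming_ball(index, r):
    """All bit strings within Hamming distance r of `index` (set V)."""
    ball = {index}
    for dist in range(1, r + 1):
        for positions in combinations(range(len(index)), dist):
            flipped = list(index)
            for p in positions:
                flipped[p] = '1' if flipped[p] == '0' else '0'
            ball.add("".join(flipped))
    return ball

# |V| = C(8,0) + C(8,1) + C(8,2) = 1 + 8 + 28 = 37 for k = 8, r = 2
print(len(hamming_ball("10110100", 2)))   # -> 37
```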
59
Adaptive Replication
– Ensures the number of copies of each key in the network is proportional to its popularity, i.e., to the rate at which queries arrive for that key.
Randomized Lookup
– The lookup for a specific key terminates uniformly at random at one of the copies of that key.
– This guarantees that load is balanced uniformly across all copies of all keys in the system.
60
Discussion
– Search cost?
– What is the cardinality of the set V?
– Availability?
61
Guaranteeing Correctness and Availability in P2P Range Indices, SIGMOD 2005
62
Introduction
– Hashing destroys the value ordering among search-key values, so it cannot be used to process range queries efficiently.
– Solution: range indices assign data items to peers directly based on their search-key values.
– But what about load balance?
63
P-Ring Overview
– Two types of peers:
  - Live peers: store data items; the number of entries stored at each live peer is kept between sf and 2·sf (sf: storage factor).
  - Free peers
– Overflow (> 2·sf): split the assigned range with a free peer.
– Underflow (< sf): merge with the successor in the ring to obtain more entries.
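A hedged sketch of this split/merge rule (the ring is represented as a plain list, and Peer and the thresholds are illustrative):

```python
from dataclasses import dataclass, field

@dataclass
class Peer:
    entries: list = field(default_factory=list)

def rebalance(ring, i, free_peers, sf):
    """Keep |entries| of ring[i] between sf and 2*sf (P-ring sketch)."""
    peer = ring[i]
    if len(peer.entries) > 2 * sf and free_peers:    # overflow: split
        helper = free_peers.pop()
        half = len(peer.entries) // 2
        helper.entries = peer.entries[half:]         # hand off upper half
        peer.entries = peer.entries[:half]
        ring.insert(i + 1, helper)                   # helper becomes successor
    elif len(peer.entries) < sf and len(ring) > 1:   # underflow: merge
        succ = ring[(i + 1) % len(ring)]
        peer.entries += succ.entries                 # absorb successor's range
        ring.remove(succ)
        succ.entries = []
        free_peers.append(succ)                      # successor goes free
```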
64
Incorrect query results
– Inconsistent ring
65
Concurrency in the data store
66
Solution
– Handling ring inconsistency:
  - Two states: joined and joining.
  - Peer p remains in the joining state until all relevant peers know about p.
  - Items are stored only at peers in the joined state.
– Handling data-store concurrency:
  - p stays in a locked state until its successor p_succ locks its range.
67
Supporting Complex Multi-dimensional Queries in P2P Systems. IEEE International Conference on Distributed Computing Systems 2005. (HW)
68
Data Indexing in Peer-to-Peer DHT Networks, ICDCS 2004
69
Locating data using incomplete information
– How to search for data in a DHT
Data descriptors and queries
– Semi-structured XML data
70
– Queries:
  - The most specific query for a descriptor d
  - Relationships between queries
71
– Given the most specific query, finding the location of the file is simple. What about less specific queries?
– Solution:
  - Provide a query-to-query service: for a given query q, the index service returns a list of more specific queries covered by q.
  - The DHT storage system must be extended: Insert(q, q_i), where q → q_i, adds a mapping (q, q_i) to the index of the node responsible for key q.
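A toy sketch of this extended index (a single object stands in for the DHT's per-node indices; all names are mine):

```python
class QueryIndex:
    """Toy query-to-query service: maps a query q to the more specific
    queries it covers, alongside a normal exact-match store."""

    def __init__(self):
        self.refinements = {}   # q -> set of more specific queries q_i
        self.store = {}         # most specific query -> file location

    def insert(self, q, qi):
        """Insert(q, q_i): add the mapping (q, q_i) at key q's node."""
        self.refinements.setdefault(q, set()).add(qi)

    def resolve(self, q):
        """Expand a less specific q into the queries it covers, then
        do an ordinary exact lookup for each of them."""
        return {qi: self.store.get(qi)
                for qi in self.refinements.get(q, ())}
```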