Topics in Database Systems: Data Management in Peer-to-Peer Systems

Topics in Database Systems: Data Management in Peer-to-Peer Systems. Search in Unstructured P2P

Outline: Search Strategies in Unstructured P2P; Routing Indexes

Topics in Database Systems: Data Management in Peer-to-Peer Systems D. Tsoumakos and N. Roussopoulos, “A Comparison of Peer-to-Peer Search Methods”, WebDB03

Overview. Centralized: a constantly updated directory hosted at central locations (does not scale well; heavy update load; single point of failure). Decentralized but structured: the overlay topology is highly controlled, and files (or metadata/indexes) are placed not at random nodes but at specified locations; "loosely" vs. "highly" structured; DHTs. Decentralized and unstructured: peers connect in an ad-hoc fashion, and the location of documents/metadata is not controlled by the system; no guarantee of search success and no bounds on search time.

Flooding on Overlays [Figure: a query for xyz.mp3 is flooded across the overlay until a node holding xyz.mp3 answers]

Search in Unstructured P2P. Must find a way to stop the search: Time-to-Live (TTL). Problems: exponential number of messages; cycles (duplicate messages).
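
To make termination concrete, here is a minimal Python sketch (not from the slides) of TTL-limited flooding with duplicate suppression; neighbors (adjacency lists) and has_object (a local-lookup predicate) are hypothetical stand-ins for real overlay state.

from collections import deque

def flood(neighbors, has_object, source, ttl):
    seen = {source}                     # visited set: breaks cycles
    frontier = deque([(source, ttl)])
    hits = []
    while frontier:
        node, t = frontier.popleft()
        if has_object(node):
            hits.append(node)           # BFS keeps going even after a hit
        if t == 0:
            continue                    # TTL exhausted: stop forwarding here
        for n in neighbors[node]:
            if n not in seen:           # drop duplicate copies of the query
                seen.add(n)
                frontier.append((n, t - 1))
    return hits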

Search in Unstructured P2P. BFS vs. DFS: BFS gives better response time but reaches a larger number of nodes (higher message overhead per node and overall). Note: a BFS search continues (until the TTL is reached) even if the object has already been located on a different path. Recursive vs. iterative: during search, does the node issuing the query contact each hop directly (iterative), or is the query forwarded recursively? Does the result follow the same path back?

Iterative vs. Recursive Routing. Iterative: the originator requests the IP address of each hop; message transport is actually done via direct IP from the originator. Recursive: the message is transferred hop-by-hop. [Figure: a retrieve(K1) lookup over nodes holding key-value pairs]
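
A toy sketch contrasting the two modes; next_hop(node, key) (returns a closer node, or None at the responsible node) and value_at(node, key) are hypothetical routing/storage helpers passed in by the caller.

def lookup_iterative(origin, key, next_hop, value_at):
    node = origin
    while True:
        nxt = next_hop(node, key)       # originator asks each hop directly
        if nxt is None:
            return value_at(node, key)  # final request goes via direct IP
        node = nxt

def lookup_recursive(node, key, next_hop, value_at):
    nxt = next_hop(node, key)           # the query travels hop-by-hop
    if nxt is None:
        return value_at(node, key)      # result may return along the same path
    return lookup_recursive(nxt, key, next_hop, value_at)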

Search in Unstructured P2P. Two general types of search in unstructured P2P: Blind: try to propagate the query to a sufficient number of nodes (e.g., Gnutella). Informed: utilize information about document locations (e.g., Routing Indexes). Informed search increases the cost of join in exchange for an improved search cost.

Blind Search Methods. Gnutella: uses flooding (BFS) to contact all accessible nodes within the TTL. Huge overhead: the query reaches a large number of peers and generates heavy overall network traffic. Unpopular items are hard to find. P2P file sharing was estimated at up to 60% of total Internet traffic.

Overlay Networks. P2P applications need to: track the identities and (IP) addresses of peers (there may be many, with significant churn, so it is best not to keep n^2 ID references), and route messages among peers (if you don't keep track of all peers, routing is "multi-hop"). This is an overlay network: peers do both naming and routing, IP becomes "just" the low-level transport, and all the IP-level routing is opaque.

P2P Cooperation Models. Centralized model: a global index held by a central authority (single point of failure); direct contact between requestors and providers. Example: Napster. Decentralized model: no global index, no central coordination; global behavior emerges from local interactions; contact between requestors and providers is direct (Gnutella) or mediated by a chain of intermediaries (Freenet). Examples: Freenet, Gnutella. Hierarchical model: introduces "super-peers"; a mix of the centralized and decentralized models. Example: DNS.

Free-riding on Gnutella [Adar00]. 24-hour sampling period: 70% of Gnutella users share no files; 50% of all responses are returned by the top 1% of sharing hosts. A social problem, not a technical one. Problems: degradation of system performance (collapse?); increased system vulnerability; a "centralized" ("backbone") Gnutella raises copyright issues. Verified hypotheses: H1: a significant portion of Gnutella peers are free riders. H2: free riders are distributed evenly across domains. H3: hosts often share files nobody is interested in (they are never downloaded).

Free-riding Statistics - 1 [Adar00]. H1: Most Gnutella users are free riders. Of 33,335 hosts: 22,084 (66%) of the peers share no files; 24,347 (73%) share ten or fewer files. The top 1 percent (333) of hosts share 37% (1,142,645) of the total files shared; the top 5 percent (1,667) share 70%; the top 10 percent (3,334) share 87% (2,692,082).

Free-riding Statistics - 2 [Adar00]. H3: Many servents share files nobody downloads. Of 11,585 sharing hosts: the top 1% of sites provide nearly 47% of all answers; the top 25% provide 98% of all answers; 7,349 (63%) never provide a query response.

Free Riders. File-sharing studies show that lots of people download but few people serve files. Is this bad? If there's no incentive to serve, why do people do so? What if there are strong disincentives to being a major server?

Simple Solution: Thresholds. Many programs allow a threshold to be set: don't upload a file to a peer unless it shares > k files. Problems: what's k? How do you ensure the shared files are interesting?

Categories of Queries [Sripanidkulchai01] Categorized top 20 queries

Popularity of Queries [Sripanidkulchai01]. Very popular documents are approximately equally popular. Less popular documents follow a Zipf-like distribution (i.e., the probability of seeing a query for the i-th most popular query is proportional to 1/i^α). The access frequency of web documents also follows Zipf-like distributions, so caching might also work for Gnutella.
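
A small worked example of the normalized Zipf-like formula (purely illustrative):

def zipf(m, alpha):
    weights = [1.0 / (i ** alpha) for i in range(1, m + 1)]
    total = sum(weights)
    return [w / total for w in weights]   # q_i, summing to 1

q = zipf(100, 1.0)
print(q[0] / q[9])   # the top query is ~10x as likely as the 10th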

Caching in Gnutella [Sripanidkulchai01]. Average bandwidth consumption in tests: 3.5 Mbps. Best case: trace 2 (73% hit rate = 3.7x traffic reduction).

Topology of Gnutella [Jovanovic01]. Power-law properties verified ("find everything close by"). Backbone + outskirts. Power-Law Random Graph (PLRG): the node degrees follow a power-law distribution: if one ranks all nodes from the most connected to the least connected, then the i-th most connected node has ω/i^a neighbors, where ω is a constant.

Gnutella Backbone [Jovanovic01]

Why does it work? It's a small world! [Hong01] Milgram: 42 out of 160 letters made it from Nebraska to Boston (~6 hops). Watts: between order and randomness; short-distance clustering + long-distance shortcuts. In 1967, Stanley Milgram conducted a classic experiment in which he instructed randomly chosen people in Nebraska to pass letters to a selected target person in Boston, using only intermediaries who knew one another on a first-name basis. He found that the letters needed a median of only six steps to reach their destination, giving rise to "six degrees of separation" and the "small-world effect." Duncan Watts and Steven Strogatz extended this work in 1998 with an influential paper in Nature that described small-world networks as an intermediate state between regular graphs and random graphs. Small-world graphs maintain the high local clustering of regular graphs (as measured by the clustering coefficient: the proportion of a node's neighbors that are also linked to each other) but also have the short path lengths of random graphs. They can be regarded as locally clustered graphs with shortcuts scattered in. Freenet networks can be shown to be small-world graphs (next slide). Regular graph: n nodes, k nearest neighbors, so path length ≈ n/2k (4096/16 = 256). Rewired graph (1% of links rewired): path length ≈ that of a random graph, clustering ≈ that of a regular graph. Random graph: path length ≈ log(n)/log(k) ≈ 4.

Links in the small World [Hong01]. "Scale-free" link distribution; scale-free: independent of the total number of nodes; characteristic of small-world networks. The proportion of nodes having a given number of links n is P(n) = 1/n^k. Most nodes have only a few connections; some have a lot of links, which is important for binding disparate regions together. A key characteristic of small-world graphs is the "scale-free" link distribution, which has no term related to the size of the network and thus applies at all scales from small to large. This distribution can be seen in Freenet: the nodes at the top left of the plot, with few connections, are the local clusters, while the nodes at the bottom right, with many connections, provide the shortcuts that tie the network together. The outlier at far right is the group of nodes whose datastores are completely filled, with 250 entries; with larger datastores, this column shifts further to the right.

Freenet: Links in the small World [Hong01]. Measured link distribution: P(n) ∝ 1/n^1.5 (see the note on the previous slide).

Freenet: "Scale-free" Link Distribution [Hong01]

Gnutella: New Measurements. [1] Stefan Saroiu, P. Krishna Gummadi, Steven D. Gribble: A Measurement Study of Peer-to-Peer File Sharing Systems, Proceedings of Multimedia Computing and Networking (MMCN) 2002, San Jose, CA, USA, January 2002. [2] M. Ripeanu, I. Foster, and A. Iamnitchi: Mapping the Gnutella Network: Properties of Large-Scale Peer-to-Peer Systems and Implications for System Design, IEEE Internet Computing Journal, 6(1), 2002. [3] Evangelos P. Markatos: Tracing a Large-Scale Peer-to-Peer System: An Hour in the Life of Gnutella, 2nd IEEE/ACM International Symposium on Cluster Computing and the Grid, 2002. [4] Y. Chawathe, S. Ratnasamy, L. Breslau, and S. Shenker: Making Gnutella-like P2P Systems Scalable, Proc. ACM SIGCOMM, August 2003. [5] Qin Lv, Pei Cao, Edith Cohen, Kai Li, Scott Shenker: Search and Replication in Unstructured Peer-to-Peer Networks, ICS 2002: 84-95.

Gnutella: Bandwidth Barriers. Clip2 measured Gnutella over 1 month: a typical query is 560 bits long (including TCP/IP headers); 25% of the traffic is queries, 50% pings, 25% other; on average each peer seems to have 3 other peers actively connected. Clip2 found a scalability barrier, with substantial performance degradation once queries/sec > 10: 10 queries/sec x 560 bits/query x 4 (to account for the other three quarters of message traffic) x 3 simultaneous connections = 67,200 bps. So 10 queries/sec is the maximum in the presence of many dialup users, and this won't improve (more bandwidth brings larger files).
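
The back-of-the-envelope estimate, reproduced as a few lines of Python:

queries_per_sec = 10
bits_per_query = 560   # including TCP/IP headers (Clip2 measurement)
traffic_factor = 4     # queries are only ~25% of all message traffic
connections = 3        # average simultaneous peer connections
print(queries_per_sec * bits_per_query * traffic_factor * connections)
# -> 67200 bps, roughly what a dialup link can sustain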

Gnutella: Summary. Completely decentralized. Hit rates are high. High fault tolerance. Adapts well and dynamically to changing peer populations. The protocol causes high network traffic (e.g., 3.5 Mbps); for example, with C = 4 connections per peer and TTL = 7, one ping packet can cause thousands of packets (up to 4 x (1 + 3 + 9 + 27 + 81 + 243 + 729) = 4372 ping forwards, assuming no duplicate suppression, plus the corresponding pongs). No estimates of the duration of queries can be given. No probability of successful queries can be given. The topology is unknown, so algorithms cannot exploit it. Free riding is a problem. Reputation of peers is not addressed. Simple, robust, and scalable (at the moment).

Hierarchical Networks (& Queries). DNS: hierarchical name space ("clients" + a hierarchy of servers); hierarchical routing with aggressive caching; 13 managed "root servers". Traditional pros/cons of hierarchical data management: works well for things aligned with the hierarchy (esp. physical locality); inflexible; no data independence!

Commercial Offerings. JXTA: a Java/XML framework for P2P applications; name resolution and routing are done with floods and super-peers (you can always add your own if you like). MS WinXP P2P networking: an unstructured overlay with flooded publication and caching; "does not yet support distributed searches". Both have some security support: authentication via signatures (assumes a trusted authority) and encryption of traffic.

Lessons and Limitations. Client-server performs well but is not always feasible, and ideal performance is often not the key issue! Things flood-based systems do well: organic scaling; decentralization of visibility and liability; finding popular stuff (e.g., caching); fancy local queries. Things flood-based systems do poorly: finding unpopular stuff [Loo et al., VLDB 04]; fancy distributed queries; vulnerabilities (data poisoning, tracking, etc.); guarantees about anything (answer quality, privacy, etc.).

Summary and Comparison of Approaches

More on Search. Search options: query expressiveness (type of queries); comprehensiveness (all results, or just the first k); topology; data placement; message routing.

Comparison [comparison table not reproduced in the transcript]

Parallel Clusters [Figure: clusters of nodes; links out of these clusters not shown]. Search at only a fraction of the nodes!

Other Open Problems besides Search: security; availability (e.g., coping with DoS attacks); authenticity; anonymity; access control (e.g., IP protection, payments, ...).

Trustworthy P2P. Many challenges here. Examples: authenticating peers; authenticating/validating data, both stored (poisoning) and in flight; ensuring communication; validating distributed computations; avoiding denial of service; ensuring fair resource/work allocation; ensuring privacy of messages (content, quantity, source, destination).

Authenticity [Figure: a query (title: origin of species; author: charles darwin; date: 1859) is matched against candidate documents; one returned copy has body "In an island far, far away ...", so which copy is authentic?]

More than Just File Integrity [Figure: a document (title: origin of species; author: charles darwin; date: 1859; body: "In an island far, far away ...") with a checksum; the checksum alone does not establish authenticity]

More than Fetching One File [Figure: a query (T=origin, Y=?, A=darwin, B=?) returns conflicting records, e.g., (T=origin, Y=1859, A=darwin, B=abcd) and (T=origin, Y=1800, A=darwin); which answer is authentic?]

Solutions. Authenticity function A(doc): returns T or F; evaluated at expert sites, or at all sites? Can use signatures: an expert signs the document (sig(doc)) and users verify. Voting-based: authentic is what the majority says. Time-based: e.g., the oldest available version is authentic.

Added Challenge: Efficiency. Example: in current music sharing, everyone has the authenticity function, but downloading files is expensive. Solution: track peer behavior and classify peers as good or bad.

Issues: trust computations in a dynamic system; overloading good nodes; bad nodes can provide good content sometimes; bad nodes can build up reputation; bad nodes can form collectives; ...

Security & Privacy Issues: anonymity; reputation; accountability; information preservation; information quality; trust; denial-of-service attacks.

Blind Search Methods. Modified-BFS: forward the query to only a fraction of the neighbors (some random subset). Iterative deepening: start a BFS with a small TTL and repeat the BFS at increasing depths if the first BFS fails. Works well when there is a stop condition and a "small" flood will satisfy the query; otherwise it produces even bigger loads than standard flooding (more later ...).

Blind Search Methods. Random Walks: the node that poses the query sends out k query messages to an equal number of randomly chosen neighbors. Each message follows its own path, at each step randomly choosing one neighbor to forward it to; each path is a walker. Two methods to terminate each walker: TTL-based, or the checking method (the walkers periodically check with the query source whether the stop condition has been met). This reduces the number of messages to k x TTL in the worst case, and provides some local load balancing.

Blind Search Methods. Random Walks: in addition, the protocol can bias its walks towards high-degree nodes (choose the highest-degree neighbor).

Blind Search Methods. Using super-nodes: super (or ultra) peers are connected to each other; each super-peer is also connected to a number of leaf nodes. Routing happens among the super-peers; the super-peers then contact their leaf nodes.

Blind Search Methods. Using super-nodes: Gnutella2. When a super-peer (or hub) receives a query from a leaf, it forwards it to its relevant leaves and to neighboring super-peers; those hubs process the query locally and forward it to their relevant leaves. Neighboring super-peers regularly exchange local repository tables to filter out traffic between them.

Blind Search Methods. Ultrapeers can be installed (KaZaA) or self-promoted (Gnutella). [Figure: interconnection between the super-peers]

Informed Search Methods. Local Indices: each node indexes all files stored at all nodes within a certain radius r and can answer queries on their behalf. The search proceeds in steps of r: the hop distance between two consecutive search depths is 2r+1, so the processing nodes' index radii exactly tile the network. Increased cost for join/leave: a node joining or leaving the network floods within radius r (TTL = r) to update the indices.
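
A minimal sketch (the mechanics are assumed, not taken from the paper) of which hop depths must process a query under Local Indices with radius r, given that consecutive processing depths are 2r+1 apart:

def processing_depths(r, max_depth):
    # nodes at these depths answer from their radius-r indices,
    # which together cover every node up to max_depth
    return list(range(0, max_depth + 1, 2 * r + 1))

print(processing_depths(1, 10))   # [0, 3, 6, 9] for r = 1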

Informed Search Methods. Intelligent BFS: nodes store simple statistics on their neighbors, namely (query, NeighborID) tuples for recently answered requests from or through each neighbor, so they can rank them. For each new query, a node finds similar past queries and selects a direction. How?

Informed Search Methods. Intelligent or Directed BFS: heuristics for selecting a direction: >RES: returned the most results for previous queries; <TIME: shortest satisfaction time; <HOPS: minimum hops for results; >MSG: forwarded the largest number of messages (all types), suggesting the neighbor is stable; <QLEN: shortest message queue; <LAT: shortest latency; >DEG: highest degree.
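
As an illustration, a hedged sketch of choosing a direction with the >RES heuristic; stats is a hypothetical per-neighbor record that the node maintains from past answers.

def pick_neighbor(neighbors, stats):
    # >RES: prefer the neighbor that returned the most results
    # for previous (similar) queries
    return max(neighbors, key=lambda n: stats[n]["results_returned"])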

Informed Search Methods. Intelligent or Directed BFS: there is no negative feedback, and the approach depends on the assumption that nodes specialize in certain documents.

Informed Search Methods. APS (Adaptive Probabilistic Search): each node keeps a local index with one entry per requested object per neighbor; the entry reflects the relative probability of that neighbor being chosen to forward the query for that object. Search uses k independent walkers with probabilistic forwarding: each node forwards the query to one of its neighbors based on the local index (for each object, choose a neighbor with the stored probability). If a walker succeeds, the probabilities along its path are increased; otherwise they are decreased. The update takes the reverse path back towards the requestor, after a walker miss (optimistic update) or after a hit (pessimistic update).
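
A rough sketch of APS-style forwarding and index maintenance, shown for the optimistic variant; the data layout indices[node][obj] = {neighbor: weight} and the update step delta are illustrative assumptions.

import random

def choose_neighbor(weights):
    # weights: {neighbor: positive float}; pick proportionally to weight
    total = sum(weights.values())
    r = random.uniform(0, total)
    for n, w in weights.items():
        r -= w
        if r <= 0:
            return n
    return n   # guard against floating-point rounding

def update_reverse_path(indices, path, obj, hit, delta=10):
    # optimistic variant: weights were raised while forwarding, so after
    # a miss the walker retraces its path and lowers them again
    if not hit:
        for node, neighbor in reversed(path):
            w = indices[node][obj][neighbor]
            indices[node][obj][neighbor] = max(1, w - delta)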

Topics in Database Systems: Data Management in Peer-to-Peer Systems Q. Lv et al, “Search and Replication in Unstructured Peer-to-Peer Networks”, ICS’02

Search and Replication in Unstructured Peer-to-Peer Networks. The type of replication depends on the search strategy used. The paper studies a number of blind-search variations of flooding and a number of (metadata) replication strategies. Evaluation method: study how they work for a number of different topologies and query distributions.

Methodology. Three aspects of P2P on which the performance of search depends: network topology (the graph formed by the P2P overlay network); query distribution (the distribution of query frequencies for individual files); replication (the number of nodes that have a particular file). Assumption: fixed network topology and fixed query distribution. The results still hold if one assumes that the time to complete a search is short compared to the time scale of change in network topology and query distribution.

Network Topology

Network Topology (1) Power-Law Random Graph (PLRG): a 9239-node random graph. Node degrees follow a power-law distribution: when ranked from the most connected to the least connected, the i-th ranked node has ω/i^a neighbors, where ω is a constant. Once the node degrees are chosen, the nodes are connected randomly.

Network Topology (2) Normal Random Graph (Random): a 9836-node random graph.

Network Topology (3) Gnutella Graph (Gnutella): a 4736-node graph obtained in October 2000. Node degrees roughly follow a two-segment power-law distribution.

Network Topology (4) Two-Dimensional Grid (Grid): a two-dimensional 100x100 grid.

Query Distribution. Assume m objects. Let q_i be the relative popularity of the i-th object (in terms of queries issued for it); values are normalized: Σ_{i=1..m} q_i = 1. (1) Uniform: all objects are equally popular, q_i = 1/m. (2) Zipf-like: q_i ∝ 1/i^α.

Replication. Each object i is replicated on r_i nodes, and the total number of stored copies is R, that is, Σ_{i=1..m} r_i = R. (1) Uniform: all objects are replicated at the same number of nodes, r_i = R/m. (2) Proportional: the replication of an object is proportional to its query probability, r_i ∝ q_i. (3) Square-root: the replication of object i is proportional to the square root of its query probability, r_i ∝ √q_i.
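
The three allocations can be computed directly from the query distribution q; a short illustrative sketch (integer rounding of the r_i is left out):

import math

def allocate(q, R, strategy):
    if strategy == "uniform":
        weights = [1.0] * len(q)
    elif strategy == "proportional":
        weights = list(q)
    elif strategy == "square-root":
        weights = [math.sqrt(x) for x in q]
    else:
        raise ValueError(strategy)
    total = sum(weights)
    return [R * w / total for w in weights]   # r_i for each object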

Query Distribution & Replication. When the replication is uniform, the query distribution is irrelevant (since all objects are replicated by the same amount, search times are equivalent for both hot and cold items). When the query distribution is uniform, all three replication distributions are equivalent (uniform!). Thus, there are three relevant query-distribution/replication combinations: Uniform/Uniform, Zipf-like/Proportional, and Zipf-like/Square-root.

Metrics. Pr(success): the probability of finding the queried object before the search terminates. #hops: the delay in finding an object, measured in number of hops.

Metrics. #msgs per node: the overhead of an algorithm, measured as the average number of search messages each node in the P2P network has to process. #nodes visited. Percentage of message duplication. Peak #msgs: the number of messages the busiest node has to process (to identify hot spots). These are per-query measures; for an aggregate performance measure, each query is weighted by its probability.

Simulation Methodology. For each experiment, first select the topology and the query/replication distributions. For each object i with replication r_i, generate numPlace different sets of random replica placements (each set contains r_i random nodes on which to place the replicas of object i). For each replica placement, randomly choose numQuery different nodes from which to initiate the query for object i. Thus, we get numPlace x numQuery queries. In the paper, numPlace = 10 and numQuery = 100, giving 1000 different queries per object.
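
A compact sketch of this query-generation scheme; the names numPlace and numQuery follow the slide, while the surrounding harness is assumed.

import random

def make_queries(nodes, r_i, num_place=10, num_query=100):
    experiments = []
    for _ in range(num_place):
        replicas = random.sample(nodes, r_i)        # where object i is stored
        sources = [random.choice(nodes) for _ in range(num_query)]
        experiments.append((replicas, sources))     # 10 x 100 = 1000 queries
    return experiments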

Limitation of Flooding. Choice of TTL: too low, and the node may not find the object even if it exists; too high, and the network is burdened unnecessarily. Example: search for an object replicated at 0.125% of the nodes (~11 nodes out of 9000). Note that the right TTL depends on the topology, and also on the replication (which is, however, unknown).

Limitation of Flooding. Choice of TTL: the message overhead also depends on the topology.

Limitation of Flooding. There are many duplicate messages (due to cycles), particularly in high-connectivity graphs: multiple copies of a query are sent to a node by multiple neighbors. Duplicate messages can be detected and not forwarded, BUT the number of duplicates can still be excessive, and it worsens as the TTL increases.

Limitation of Flooding [Figure: message load at different nodes]

Limitation of Flooding: Comparison of the Topologies. Power-law and Gnutella-style graphs are particularly bad with flooding: highly connected nodes mean more duplicate messages, because many nodes' neighbor sets overlap. The random graph is best, because in a truly random graph the duplication ratio (the likelihood that the next node has already received the query) equals the fraction of nodes visited so far, as long as that fraction is small. The random graph also gives a better load distribution among nodes.

Two New Blind Search Strategies. 1. Expanding ring: not a fixed TTL (iterative deepening). 2. Random walks (more details below): reduce the number of duplicate messages.

Expanding Ring or Iterative Deepening. Note that since flooding queries nodes in parallel, the search may not stop even once the object is located. Use successive floods with increasing TTL: a node starts a flood with a small TTL; if the search is not successful, the node increases the TTL and starts another flood; the process repeats until the object is found. Works well when hot objects are replicated more widely than cold objects.

Expanding Ring or Iterative Deepening (details). Need to define: a policy (at which depths the iterations occur, i.e., the successive TTLs) and a time period W between successive iterations. After waiting for a period W, if it has not received a positive response (i.e., the requested object), the query initiator resends the query with a larger TTL. Nodes maintain the IDs of queries for W + ε; a node that receives the same message as in the previous round does not process it, it just forwards it.
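
Putting the pieces together: a sketch of expanding ring built on the flood() sketch given earlier, using the linear policy of the next slide (start at TTL = 1, grow by 2); max_ttl and the empty-result convention are assumptions.

def expanding_ring(neighbors, has_object, source, max_ttl=9, step=2):
    ttl = 1
    while ttl <= max_ttl:
        hits = flood(neighbors, has_object, source, ttl)
        if hits:              # a positive response arrived within the wait
            return hits
        ttl += step           # widen the ring and flood again
    return []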

Expanding Ring. Start with TTL = 1 and increase it by a step of 2 each time. For replication over 10%, the search stops at TTL 1 or 2.

Expanding Ring. Comparison of message overhead between flooding and expanding ring: even for objects replicated at only 0.125% of the nodes, and even if flooding uses the best TTL for each topology, expanding ring still halves the per-node message overhead.

Expanding Ring. The improvement is more pronounced for the Random and Gnutella graphs than for the PLRG, partly because the very-high-degree nodes in the PLRG reduce the opportunity for incremental retries in the expanding ring. Introduces a slight increase in the delay of finding an object: from 2-4 hops in flooding to 3-6 in expanding ring.

Random Walks. Forward the query to a randomly chosen neighbor at each step; each message is a walker. k-walkers: the requesting node sends k query messages, and each query message takes its own random walk. k walkers after T steps should reach roughly the same number of nodes as 1 walker after kT steps, so the delay is cut by a factor of k. 16 to 64 walkers give good results.
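
A minimal k-walker sketch with TTL-based termination (the checking variant described next would instead poll the source every few steps); walkers run one after another here purely for simulation simplicity.

import random

def k_walkers(neighbors, has_object, source, k=32, ttl=1024):
    for _ in range(k):
        node = source
        for _ in range(ttl):
            if has_object(node):
                return node    # a walker hit a replica
            node = random.choice(neighbors[node])
    return None                # all k walkers exhausted their TTL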

Random Walks. When to terminate the walks: TTL-based, or checking (the walker periodically checks with the original requestor before walking to the next node; a larger TTL is still used, just to prevent loops). Experiments show that checking once every 4th step strikes a good balance between the overhead of the checking messages and the benefits of checking.

Random Walks. Compared to flooding: the 32-walker random walk reduces message overhead by roughly two orders of magnitude for all queries across all network topologies, at the expense of a slight increase in the number of hops (from 2-6 to 4-15). Compared to expanding ring: the 32-walker random walk outperforms expanding ring as well, particularly on the PLRG and Gnutella graphs.

Random Walks: Keeping State. Each query has a unique ID, and its k walkers are tagged with this ID. For each ID, a node remembers the neighbors to which it has already forwarded the query; when a new walker with the same ID arrives, the node forwards it to a different (randomly chosen) neighbor. This improves Random and Grid by reducing the message overhead by up to 30% and the number of hops by up to 30%; small improvements for Gnutella and PLRG.

Principles of Search. Adaptive termination is very important: use expanding ring or the checking method. Message duplication should be minimized: preferably, each query should visit a node just once. The granularity of the coverage should be small: each additional step should not significantly increase the number of nodes visited.

Replication Next time