1
Peer Communications
2
References Many of the slides are from the tutorial "P2P Systems" by Keith Ross and Dan Rubenstein. 1. Dejan S. Milojicic, Vana Kalogeraki, Rajan Lukose, Kiran Nagaraja, Jim Pruyne, Bruno Richard, Sami Rollins, Zhichen Xu, "Peer-to-Peer Computing," HP Technical Report HPL-2002-57. 2. http://www.bsdg.org/Jim/Peer2Peer/Paper/3214.html
3
Client-Server Model Let's look at the client-server model. Servers are centrally maintained and administered. Clients have fewer computing resources than servers. This is the way the Web works. No interaction between clients.
4
Client-Server Model Disadvantages of the client-server model. Reliability: the network depends on a possibly highly loaded server to function properly; the server needs to be replicated to some extent to provide better reliability. Scalability: more users imply more demand for computing power, storage space, and bandwidth.
5
Peer-to-Peer Model All nodes have similar resources. No reliance on central services or resources. A node acts both as a "server" and as a client. Considered more scalable.
6
Why P2P? The Internet has three valuable fundamental assets: information, computing resources, and bandwidth. All of these are vastly underutilized, partly due to the traditional client-server model.
7
Why P2P? No single search engine can locate and catalog the ever-increasing amount of information on the Web in a timely way. Moreover, a huge amount of information is transient and not subject to capture by techniques such as Web crawling. Google claims that it searches about 1.3×10^8 web pages. Finding useful information in real time is increasingly difficult!
8
Why P2P? Although miles of new fiber have been installed, the new bandwidth gets little use if everyone goes to Yahoo for content and to eBay. Instead, hot spots just get hotter while cold pipes remain cold. This is partly why most people still feel congestion over the Internet, even though a single fiber's bandwidth has increased by a factor of 10^6 since 1975, doubling every 16 months.
9
Why P2P? P2P can potentially eliminate the single-source bottleneck. P2P can be used to distribute data and control, and to load-balance requests across the Net. P2P potentially eliminates the risk of a single point of failure. P2P infrastructure allows direct access and shared space, which can enable remote maintenance capability.
10
Peer-to-Peer Applications File/object sharing (e.g., Gnutella). Distributed computing (e.g., SETI@Home). Collaboration (e.g., Jabber, Magi, Groove).
11
Environment Characteristics for Peer-to-Peer Systems Unreliable environments: peers connect and disconnect, and network failures disrupt participation. Random failures, e.g., power outages, cable and DSL failures, hackers. Personal machines are much more vulnerable than servers.
12
Evaluating Peer-to-Peer Systems A node's database: what does a node need to store in order to operate properly and efficiently? Success rate: if the file is in the network, what are the chances that a search will find it? Lookup cost: time and communication (bandwidth usage). Join/departure cost. Fault tolerance: resilience to faults, resilience to denial-of-service attacks, security.
13
Distributed Computing: SETI@Home The main server distributes a screen saver-based application to users; the application applies signal analysis algorithms to different data sets to process radio-telescope data. SETI@Home has more than 3 million users and has used over a million years of CPU time to date. Steps: 1. Install the screen saver. 2. The SETI client (screen saver) starts. 3. The SETI client gets radio-telescope data from the server and runs. 4. The client sends results back to the server.
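The workflow above can be sketched as a simple fetch-compute-report loop. This is only an illustration under assumed names: the server URL, the /get_work and /report endpoints, and the analyze() step are placeholders, not SETI@Home's real protocol.

```python
import json
import time
import urllib.request

SERVER = "http://example-coordinator.org"   # hypothetical work-server URL

def analyze(work_unit):
    # Placeholder for the real signal-analysis algorithms.
    return {"unit_id": work_unit["id"], "peak": max(work_unit["samples"])}

def client_loop():
    while True:
        # Step 3: the client fetches a chunk of radio-telescope data.
        with urllib.request.urlopen(f"{SERVER}/get_work") as resp:
            work_unit = json.load(resp)
        result = analyze(work_unit)          # runs while the machine is otherwise idle
        # Step 4: the client reports its result back to the server.
        req = urllib.request.Request(
            f"{SERVER}/report",
            data=json.dumps(result).encode(),
            headers={"Content-Type": "application/json"})
        urllib.request.urlopen(req)
        time.sleep(1)                        # back off briefly before the next unit
```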
14
Issues in File Sharing Services Publish – How to insert a new file into the network Lookup – Find a specific file Retrieval – Getting a copy of a file
15
P2P File Sharing Software Allows a user to open up a directory in their file system; anyone can retrieve a file from that directory (like a Web server). Allows the user to copy files from other users' open directories (like a Web client). Allows users to search nodes for content based on keyword matches (like Google).
16
Napster: How Did It Work? Application-level, client-server protocol over point-to-point TCP, with a centralized directory server. Steps: Connect to the Napster server. Give the server keywords to search the full list with. Select the "best" of the correct answers; one approach is to select based on ping response times – the host with the shortest response time is chosen.
17
Napster: How Did It Work? Step 1: a peer's file list and IP address are uploaded to the napster.com centralized directory.
18
Napster: How Did It Work? Step 2: the user requests a search at the server; the napster.com centralized directory returns the query results.
19
Napster: How Did It Work? Step 3: the user pings the hosts that apparently have the data, looking for the best transfer rate.
20
Napster: How Did It Work? Step 4: the user chooses a server and retrieves the file directly from it. Napster's centralized server farm had a difficult time keeping up with the traffic.
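A toy sketch of the centralized directory behind steps 1-4: peers publish their file lists to a single index, and a keyword search returns the peers that hold a match. The class, method names, and addresses are illustrative, not Napster's actual protocol.

```python
import re

class CentralDirectory:
    """Toy central index mapping filename keywords to peer addresses."""

    def __init__(self):
        self.index = {}                      # keyword -> {peer address, ...}

    def publish(self, peer_address, filenames):
        # Step 1: a connecting peer uploads its file list and IP address.
        for name in filenames:
            for keyword in re.split(r"[^a-z0-9]+", name.lower()):
                if keyword:
                    self.index.setdefault(keyword, set()).add(peer_address)

    def search(self, keyword):
        # Step 2: the server answers keyword queries with matching peers.
        return self.index.get(keyword.lower(), set())

directory = CentralDirectory()
directory.publish("10.0.0.5:6699", ["homer_live.mp3", "jazz_classics.mp3"])
directory.publish("10.0.0.7:6699", ["homer_live.mp3"])

# Steps 3-4 would then ping each returned peer and download from the fastest one.
print(directory.search("homer"))             # {'10.0.0.5:6699', '10.0.0.7:6699'}
```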
21
Napster History: 5/99: Shawn Fanning (freshman, Northeastern U.) founds Napster, an online music service. 12/99: first lawsuit. 3/00: 25% of UWisc traffic is Napster. 2/01: US Circuit Court of Appeals rules that Napster knew users were violating copyright laws. 7/01: number of simultaneous online users: Napster 160K, Gnutella 40K, Morpheus (KaZaA) 300K.
22
Napster A judge orders Napster to pull the plug in July '01; other file-sharing apps take over. [Chart: traffic in bits per second (0 to 8M) for Napster, Gnutella, and FastTrack (KaZaA).]
23
Napster: Discussion Locates files quickly. Vulnerable to censorship and technical failure. Popular data becomes less accessible because of the request load on the central server.
24
Gnutella The focus is on a decentralized method of searching for files. The central directory server is no longer the bottleneck, and it is more difficult to "pull the plug." Each application instance serves to: store selected files, route queries from and to its neighboring peers, respond to queries if the file is stored locally, and serve files.
25
Gnutella Gnutella history: 3/14/00: released by AOL, almost immediately withdrawn. Became open source. Many iterations to fix the poor initial design (which turned many people off). Issues: How much traffic does one query generate? How many hosts can it support at once? What is the latency associated with querying? Is there a bottleneck?
26
Gnutella: Searching Searching by flooding: a Query packet might ask, "Do you have any content that matches the string 'Homer'?" If a node does not have the requested file, then 7 of its neighbors (the default set by Gnutella) are queried. If the neighbors do not have it, they contact 7 of their neighbors, and so on. Maximum hop count: 10 (this is called the time-to-live, TTL). Responses (not files) are returned by reverse path forwarding.
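A small simulation of flooding with the fan-out and TTL quoted above (7 neighbors, 10 hops) and duplicate-query suppression. The Peer class and the tiny three-node network are hypothetical; in real Gnutella the results travel back along the reverse path rather than into a shared list.

```python
import random

class Peer:
    def __init__(self, name, files):
        self.name = name
        self.files = set(files)
        self.neighbors = []                  # other Peer objects
        self.seen = set()                    # query ids already handled

    def query(self, qid, keyword, ttl, hits):
        if qid in self.seen or ttl <= 0:     # drop duplicates and expired queries
            return
        self.seen.add(qid)
        if any(keyword in f for f in self.files):
            hits.append(self.name)           # a real QueryHit travels back on the reverse path
        # Forward to up to 7 neighbors (the Gnutella default), decrementing the TTL.
        for n in random.sample(self.neighbors, min(7, len(self.neighbors))):
            n.query(qid, keyword, ttl - 1, hits)

a, b, c = Peer("A", []), Peer("B", ["homer.mp3"]), Peer("C", ["homer_live.mp3"])
a.neighbors, b.neighbors, c.neighbors = [b], [a, c], [b]

hits = []
a.query(qid=1, keyword="homer", ttl=10, hits=hits)   # TTL 10 is the quoted default
print(hits)                                          # ['B', 'C']
```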
27
Gnutella: Searching – Downloading Peers respond with a "QueryHit" (which contains contact info). File transfers use a direct connection via the HTTP protocol's GET method. When the responder is behind a firewall, a "Push" packet is used to reroute the transfer via the Push path.
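Because transfers go directly between peers over HTTP, a download is essentially an ordinary GET against the address returned in the QueryHit. The peer address and path below are placeholders for illustration only.

```python
import urllib.request

# Hypothetical peer address and file reference taken from a QueryHit response.
peer_url = "http://10.0.0.7:6346/get/42/homer.mp3"

with urllib.request.urlopen(peer_url) as response, open("homer.mp3", "wb") as out:
    out.write(response.read())
```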
28
Gnutella: Searching
30
Gnutella: Discovering Peers A peer has to know at least one other peer to send requests to. Addresses of some peers have been published on a website. When a peer enters the network, it contacts a designated peer and receives a list of other peers that have recently entered the network.
31
Gnutella: Discussion Robust: the failure of a peer is not a failure of Gnutella. Performance: flooding leads to poor performance. Free riders: those who get data but do not share data.
32
KaZaA: The Service More than 3 million peers up, sharing over 3,000 terabytes of content. More popular than Napster ever was. More than 50% of Internet traffic? MP3s and entire albums, videos, games. Optional parallel downloading of files. Automatically switches to a new download server when the current server becomes unavailable. Provides estimated download times.
33
KaZaA: The Service A user can configure the maximum number of simultaneous uploads and the maximum number of simultaneous downloads. Queue management at server and client: frequent uploaders can get priority in the server queue. Keyword search: the user can configure "up to x" responses to keywords; responses to keyword queries come in waves, and the search stops when x responses are found.
34
KaZaA: The Technology Proprietary Control data encrypted Everything in HTTP request and response messages
35
KaZaA: Architecture Each peer is either a supernode (SN) or is assigned to a supernode (56 min average connection). Each SN has about 100-150 children; there are roughly 30,000 SNs. Each supernode has TCP connections with 30-50 other supernodes (23 min average connection).
36
KaZaA: Architecture Nodes that have more connection bandwidth and are more available are designated as supernodes Each supernode acts as a mini-Napster hub, tracking the content and IP addresses of its descendants A supernode tracks only the content of its children. Considered a cross between Napster and Gnutella
37
KaZaA: Finding Supernodes A list of potential supernodes is included within the software download. A new peer goes through the list until it finds an operational supernode. It connects and obtains a more up-to-date list with 200 entries; nodes in the list are "close" to the ordinary node (ON). The node then pings 5 nodes on the list and connects with one of them. If its supernode goes down, the node obtains an updated list and chooses a new supernode.
38
KaZaA Queries A node first sends its query to its supernode, which responds with matches. If x matches are found, done; otherwise, the supernode forwards the query to a subset of supernodes. If a total of x matches are found, done; otherwise, the query is forwarded further (probably by the original supernode rather than recursively).
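A sketch of the two-tier lookup described above: an ordinary node asks only its supernode, which answers from the index of its own children and widens the search to other supernodes only while fewer than x matches have been found. The names and addresses are illustrative; the real FastTrack protocol is proprietary and encrypted.

```python
class Supernode:
    def __init__(self):
        self.children = {}                   # child address -> list of shared filenames
        self.peer_supernodes = []            # other Supernode objects

    def register(self, child_address, filenames):
        # A supernode tracks only the content of its own children.
        self.children[child_address] = filenames

    def local_matches(self, keyword):
        return [(name, addr) for addr, files in self.children.items()
                for name in files if keyword in name]

    def search(self, keyword, x):
        matches = self.local_matches(keyword)
        # Not enough local hits: forward the query to a subset of other supernodes.
        for sn in self.peer_supernodes:
            if len(matches) >= x:
                break
            matches += sn.local_matches(keyword)
        return matches[:x]

sn1, sn2 = Supernode(), Supernode()
sn1.peer_supernodes, sn2.peer_supernodes = [sn2], [sn1]
sn1.register("10.0.0.5", ["homer.mp3"])
sn2.register("10.0.0.9", ["homer_live.mp3", "jazz.mp3"])

print(sn1.search("homer", x=2))              # one local hit plus one from sn2
```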
39
Peer Topologies Core topologies: Centralized, Ring, Hierarchical, Decentralized. Hybrid topologies: Centralized + Ring, Centralized + Centralized, Centralized + Decentralized.
40
Centralized Client/server. Examples: Web servers, databases, Napster search, instant messaging, Popular Power.
41
Ring Fail-over clusters, simple load balancing. Assumption: a single owner.
42
Hierarchical Tree structure Example: DNS
43
Decentralized Gnutella Freenet Internet routing
44
Centralized + Ring Robust web applications High availability of servers
45
Centralized + Centralized N-tier apps Database heavy systems Web services gateways Google.com uses this topology to deliver their service
47
Centralized + Decentralized The new wave of P2P. Examples: Clip2 Gnutella Reflector (next), FastTrack (KaZaA, Morpheus), email.
48
Structured P2P So far, the examples we have seen are considered unstructured P2P, in the sense that looking for content does not take the content itself into account when deciding where to look. Structured P2P allows for more directed searches based on the content; directed searches are an attempt to address the flooding problem.
49
Distributed Hash Tables (DHT) Used to overcome the flooded-search problem. Operationally like standard hash tables, but the data is distributed around the network. Features: Efficient – O(log N) messages per lookup. Even distribution of keys among nodes. Adaptable – network reconfiguration does not cascade to all nodes. Robust – replication of tables provides resilience to node failures.
50
DHT Step 1: The Hash Introduce a hash function to map the object being searched for to a unique identifier, e.g., h("NGC'02 Tutorial Notes") → 8045. Distribute the range of the hash function among all nodes in the network. Each node must "know about" at least one copy of each object that hashes within its range (when one exists). [Figure: a ring of nodes partitioning the identifier space 0-9999 into ranges; the identifier 8045 falls within one node's range.]
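A minimal sketch of Step 1: hash the object's name into a 0-9999 identifier space and assign it to the node whose range covers that identifier. The hash function and the range split points below are our own choices for illustration, so the identifier will not reproduce the 8045 of the slide's example.

```python
import bisect
import hashlib

SPACE = 10_000                         # identifier space 0 .. 9999, as on the slide

def h(name):
    """Map an object name to an identifier in [0, SPACE)."""
    return int(hashlib.sha1(name.encode()).hexdigest(), 16) % SPACE

# Each node is responsible for identifiers from its start value up to (but not
# including) the next node's start value; these split points are illustrative.
node_starts = [0, 1000, 2000, 4500, 7000, 8000, 9000, 9500]
node_names = [f"node-{s}" for s in node_starts]

def responsible_node(name):
    key = h(name)
    idx = bisect.bisect_right(node_starts, key) - 1
    return key, node_names[idx]

print(responsible_node("NGC'02 Tutorial Notes"))   # which node's range covers the key
```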
51
“Knowing about objects” Two alternatives: (1) a node can cache each (existing) object that hashes within its range; (2) pointer-based: a level of indirection – the node caches a pointer to the location(s) of the object.
52
DHT Step 2: Routing For each object, the node(s) whose range(s) cover that object must be reachable via a "short" path: by the querier node (which is assumed to be chosen arbitrarily), and by nodes that have copies of the object (when the pointer-based approach is used). The different approaches (CAN, Chord, Pastry, Tapestry) differ fundamentally only in the routing approach; any "good" random hash function will suffice. This is a major research topic.
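As one concrete illustration of the routing step (not taken from these slides), a Chord-style ring gives every node a finger table pointing at the successors of id + 2^k and greedily forwards a lookup to the farthest finger that still precedes the key, which keeps lookups to O(log N) hops. This is a simulation sketch, not any particular implementation.

```python
import hashlib
import random

M = 10                                   # identifier bits: ring positions 0 .. 2**M - 1
RING = 2 ** M

def h(name):
    return int(hashlib.sha1(name.encode()).hexdigest(), 16) % RING

def in_interval(x, a, b):
    """True if x lies in the half-open clockwise interval (a, b] on the ring."""
    return (a < x <= b) if a < b else (x > a or x <= b)

# Build a random ring of 64 node identifiers with successor pointers and finger tables.
ids = sorted(random.sample(range(RING), 64))
succ = {n: ids[(i + 1) % len(ids)] for i, n in enumerate(ids)}

def successor_of(x):
    return min((n for n in ids if n >= x), default=ids[0])

fingers = {n: [successor_of((n + 2 ** k) % RING) for k in range(M)] for n in ids}

def route(node, key, hops=0):
    if node == key:                      # the key hashes exactly onto this node
        return node, hops
    if in_interval(key, node, succ[node]):
        return succ[node], hops          # done: the immediate successor owns the key
    # Hop to the farthest finger that still precedes the key (Chord's greedy rule).
    candidates = [f for f in fingers[node] if in_interval(f, node, key)]
    nxt = max(candidates, key=lambda f: (f - node) % RING)
    return route(nxt, key, hops + 1)

owner, hops = route(ids[0], h("NGC'02 Tutorial Notes"))
print(owner, hops)                       # typically log2(64) = 6 hops or fewer
```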
53
Summary P2P is a major portion of Internet traffic and has exceeded Web traffic. There are different approaches to structuring P2P applications. There is a good deal of concern about future scalability and traffic, which is leading to a lot of research from academia and industry.