1  Peer-to-peer computing research: a fad?
Frans Kaashoek (kaashoek@lcs.mit.edu)
Joint work with: H. Balakrishnan, P. Druschel, J. Hellerstein, D. Karger, R. Karp, J. Kubiatowicz, B. Liskov, D. Mazières, R. Morris, S. Shenker, and I. Stoica
Berkeley, ICSI, MIT, NYU, and Rice

2  What is a P2P system?
- A distributed system architecture:
  - No centralized control
  - Nodes are symmetric in function
  - Larger number of unreliable nodes
- Enabled by technology improvements
(Figure: many nodes connected through the Internet.)

3  P2P: an exciting social development
- Internet users cooperating to share, for example, music files
  - Napster, Gnutella, Morpheus, KaZaA, etc.
- Lots of attention from the popular press
  - “The ultimate form of democracy on the Internet”
  - “The ultimate threat to copyright protection on the Internet”

4  How to build critical services?
- Many critical services use the Internet
  - Hospitals, government agencies, etc.
- These services need to be robust against:
  - Node and communication failures
  - Load fluctuations (e.g., flash crowds)
  - Attacks (including DDoS)

5  The promise of P2P computing
- Reliability: no central point of failure
  - Many replicas
  - Geographic distribution
- High capacity through parallelism:
  - Many disks
  - Many network connections
  - Many CPUs
- Automatic configuration
- Useful in public and proprietary settings

6  Traditional distributed computing: client/server
- Successful architecture, and will continue to be so
- Tremendous engineering necessary to make server farms scalable and robust
(Figure: clients connecting to a central server over the Internet.)

7  Application-level overlays
- One overlay per application
- Nodes are decentralized; the NOC is centralized
- P2P systems are overlay networks without central control
(Figure: overlay nodes N at sites spread across several ISPs.)

8  Distributed hash table (DHT)
- Layered design (from the figure):
  - Distributed application: put(key, data), get(key) -> data   (e.g., file sharing)
  - Distributed hash table: stores the data blocks   (e.g., DHash)
  - Lookup service: lookup(key) -> IP address of a node   (e.g., Chord)
- Application may be distributed over many nodes
- DHT distributes data storage over many nodes

9  A DHT has a good interface
- put(key, value) and get(key) -> value (see the sketch below)
- Simple interface!
- API supports a wide range of applications
  - DHT imposes no structure/meaning on keys
- Key/value pairs are persistent and global
  - Can store keys in other DHT values
  - And thus build complex data structures
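A minimal sketch of this put/get interface in Python, with a plain in-memory dictionary standing in for the distributed store; ToyDHT, content_key, and the linked-list layout are illustrative, not part of any real DHT API. Storing one block's key inside another block is what lets the flat interface carry richer data structures.

    import hashlib

    class ToyDHT:
        """Stand-in for a DHT: one dict plays the role of many nodes."""
        def __init__(self):
            self.store = {}

        def put(self, key, value):
            self.store[key] = value       # key/value pairs are persistent and global

        def get(self, key):
            return self.store.get(key)    # None if the key is absent

    def content_key(data: bytes) -> str:
        # The DHT imposes no meaning on keys; here we use a content hash.
        return hashlib.sha1(data).hexdigest()

    # Build a tiny linked list by storing one block's key inside another block.
    dht = ToyDHT()
    tail_key = content_key(b"world")
    dht.put(tail_key, {"data": b"world", "next": None})
    head_key = content_key(b"hello")
    dht.put(head_key, {"data": b"hello", "next": tail_key})

    # Walk the structure using nothing but get().
    node = dht.get(head_key)
    while node is not None:
        print(node["data"])
        node = dht.get(node["next"]) if node["next"] else None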

10  A DHT makes a good shared infrastructure
- Many applications can share one DHT service
  - Much as applications share the Internet
- Eases deployment of new applications
- Pools resources from many participants
  - Efficient due to statistical multiplexing
  - Fault-tolerant due to geographic distribution

11  Many recent DHT-based projects
- File sharing [CFS, OceanStore, PAST, Ivy, …]
- Web cache [Squirrel, …]
- Archival/backup store [HiveNet, Mojo, Pastiche]
- Censor-resistant stores [Eternity, FreeNet, …]
- DB query and indexing [PIER, …]
- Event notification [Scribe]
- Naming systems [ChordDNS, Twine, …]
- Communication primitives [I3, …]
- Common thread: data is location-independent

12  CFS: Cooperative file sharing
- DHT is a robust block store
- Client of the DHT implements the file system
  - Read-only: CFS, PAST
  - Read-write: OceanStore, Ivy
(Figure: a file system layered over the DHT, calling put(key, block) and get(key).)

13  File representation: self-authenticating data
- Key = SHA-1(content block) (see the sketch below)
- Files and file systems form Merkle hash trees
(Figure: a signed root block points to directory blocks, which point to i-node blocks, which point to data blocks; each pointer is the SHA-1 key of the block it names.)
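A minimal Python sketch of the self-authenticating scheme above: keys are SHA-1 hashes of block contents, so any fetched block can be verified by rehashing it, and a parent block that lists its children's keys gives a simple hash-tree layout. The block formats here are illustrative, not CFS's actual on-disk structures.

    import hashlib

    def block_key(block: bytes) -> str:
        # Key = SHA-1(content block): the key itself authenticates the data.
        return hashlib.sha1(block).hexdigest()

    def verify(key: str, block: bytes) -> bool:
        # A reader can check any fetched block without trusting the node that served it.
        return block_key(block) == key

    # A parent block that names its children by key forms a simple hash tree.
    data1, data2 = b"first data block", b"second data block"
    inode = (block_key(data1) + " " + block_key(data2)).encode()
    root_key = block_key(inode)     # signing root_key would authenticate the whole tree

    assert verify(block_key(data1), data1)
    assert verify(root_key, inode)
    print("root key:", root_key)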

14  DHT distributes blocks by hashing IDs
- DHT replicates blocks for fault tolerance
- DHT caches popular blocks for load balance
(Figure: the blocks of two signed files, root blocks 995 and 247, are spread across nodes A, B, C, and D according to their keys.)

15  Historical web archiver
- Goal: make and archive a daily checkpoint of the Web
- Estimates:
  - Web is about 57 Tbyte, compressed HTML + images
  - New data per day: 580 Gbyte
  - → 128 Tbyte per year with 5 replicas
- Design: 12,810 nodes, each with a 100 Gbyte disk and 61 Kbit/s of bandwidth

16  Implementation using DHT
- DHT usage (see the sketch below):
  - Crawler distributes crawling and storage load by hash(URL): put(sha-1(URL), page)
  - Clients retrieve Web pages by hash(URL): get(sha-1(URL))
  - DHT replicates data for fault tolerance
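A minimal Python sketch of this division of labor, with an in-memory dictionary standing in for the DHT; archive_page and fetch_page are illustrative names. Crawler and client compute the same SHA-1(URL) key, so pages can be retrieved without any index of who stored what.

    import hashlib

    dht = {}  # stand-in for the distributed hash table

    def url_key(url: str) -> str:
        # hash(URL) spreads both crawl work and storage across nodes
        return hashlib.sha1(url.encode()).hexdigest()

    def archive_page(url: str, page: bytes):
        # Crawler side: put(sha-1(URL), page)
        dht[url_key(url)] = page

    def fetch_page(url: str):
        # Client side: get the page by hashing the same URL
        return dht.get(url_key(url))

    archive_page("http://example.com/", b"<html>daily snapshot</html>")
    print(fetch_page("http://example.com/"))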

17  Archival/backup store
- Goal: archive on other users’ machines
- Observations:
  - Many user machines are not backed up
  - Archiving requires significant manual effort
  - Many machines have lots of spare disk space
- Using a DHT:
  - Merkle tree to validate integrity of data
  - Administrative and financial costs are lower for all participants
  - Archives are robust (automatic off-site backups)
  - Blocks are stored once if key = sha1(data) (see the sketch below)
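A small Python illustration of that deduplication property, under the assumption that the block key is simply SHA-1 of the block contents; the dictionary stands in for the DHT's block store.

    import hashlib

    store = {}  # stand-in for the DHT's block store

    def backup_block(block: bytes) -> str:
        key = hashlib.sha1(block).hexdigest()
        if key not in store:        # identical content hashes to the same key,
            store[key] = block      # so a block shared by many users is stored once
        return key

    # Two users backing up the same file contribute a single stored copy.
    k1 = backup_block(b"common system file")
    k2 = backup_block(b"common system file")
    assert k1 == k2 and len(store) == 1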

18  DHT implementation challenges
1. Scalable lookup
2. Balance load (flash crowds)
3. Handling failures
4. Coping with systems in flux
5. Network-awareness for performance
6. Robustness with untrusted participants
7. Programming abstraction
8. Heterogeneity
9. Anonymity
10. Indexing
Goal: simple, provably-good algorithms (a subset of these challenges is covered in this talk)

19  1. The lookup problem
- Publisher: Put(key=sha-1(data), value=data…)
- Client: Get(key=sha-1(data))
- Get() is a lookup followed by a check
- Put() is a lookup followed by a store
(Figure: a publisher and a client attached to nodes N1…N6 on the Internet; which node should hold the key?)

20  Centralized lookup (Napster)
- Publisher at N4 registers its content with a central database: SetLoc(“title”, N4), where key=“title”, value=file data…
- Client asks the database: Lookup(“title”), then fetches from N4
- Simple, but O(N) state and a single point of failure

21  Flooded queries (Gnutella)
- Publisher at N4 holds key=“title”, value=MP3 data…
- Client floods Lookup(“title”) through the overlay until it reaches N4
- Robust, but worst case O(N) messages per lookup

22  Algorithms based on routing
- Map keys to nodes in a load-balanced way
  - Hash keys and nodes into a string of digits
  - Assign each key to the “closest” node (see the sketch below)
- Forward a lookup for a key to a closer node
- Join: insert the node in the ring
- Examples: CAN, Chord, Kademlia, Pastry, Tapestry, Viceroy, …
(Figure: circular ID space with nodes N32, N60, N90, N105 and keys K5, K20, K80.)
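A minimal Python sketch of this key-to-node assignment, assuming the Chord-style rule that the "closest" node is the key's successor (the first node clockwise from it); the 16-bit ID space just keeps the numbers readable.

    import hashlib

    ID_BITS = 16
    MOD = 2 ** ID_BITS

    def ident(name: str) -> int:
        # Hash node names and keys into the same circular ID space.
        return int.from_bytes(hashlib.sha1(name.encode()).digest(), "big") % MOD

    def assign(key_id: int, node_ids) -> int:
        """Give the key to its successor: the first node at or after it on the circle."""
        ring = sorted(node_ids)
        for n in ring:
            if n >= key_id:
                return n
        return ring[0]              # wrap around past the top of the ID space

    nodes = [ident(f"node-{i}") for i in range(8)]
    key = ident("some-block")
    print(f"key {key} is assigned to node {assign(key, nodes)}")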

23  Chord’s routing table: fingers
(Figure: node N80’s fingers point to nodes ½, ¼, 1/8, 1/16, 1/32, 1/64, and 1/128 of the way around the ring.)
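A minimal Python sketch of how such a finger table could be built: finger i points to the first node at least 2^i past n on the ring, which produces the ½, ¼, 1/8, … spacing shown in the figure. The 7-bit ring and node IDs are illustrative.

    def finger_table(n: int, node_ids: list, id_bits: int) -> list:
        """finger[i] = successor(n + 2^i mod 2^id_bits), for i = 0 .. id_bits - 1."""
        mod = 2 ** id_bits
        ring = sorted(node_ids)

        def successor(x):
            for m in ring:
                if m >= x:
                    return m
            return ring[0]          # wrap around the ring

        return [successor((n + 2 ** i) % mod) for i in range(id_bits)]

    nodes = [5, 10, 20, 32, 60, 80, 99, 110]
    print(finger_table(80, nodes, 7))   # 7-bit ring (IDs 0..127): [99, 99, 99, 99, 99, 5, 20]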

24  Lookups take O(log N) hops
- Lookup: route to the closest predecessor of the key
(Figure: Lookup(K19) hops across a ring of nodes N5…N110 toward K19.)
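A minimal Python sketch of greedy finger-based routing on the same illustrative ring: each hop forwards the lookup to the known node that most closely precedes the key, so the remaining distance shrinks quickly and the path stays short, matching the O(log N) bound. This is a toy model, not Chord's actual protocol code.

    def successor(x, ring, mod):
        """First node at or after position x on the ring."""
        for m in sorted(ring):
            if m >= x % mod:
                return m
        return min(ring)

    def fingers(n, ring, bits):
        """Finger i of node n points at successor(n + 2^i)."""
        return sorted({successor(n + 2 ** i, ring, 2 ** bits) for i in range(bits)})

    def lookup(start, k, ring, bits):
        """Greedy routing: hop to the known node closest to, but not past, key k."""
        mod, path, cur = 2 ** bits, [start], start
        while successor(cur + 1, ring, mod) != successor(k, ring, mod):
            ahead = [f for f in fingers(cur, ring, bits)
                     if f != cur and (f - cur) % mod <= (k - cur) % mod]
            if not ahead:
                break
            cur = max(ahead, key=lambda f: (f - cur) % mod)   # closest preceding node
            path.append(cur)
        return path

    ring = [5, 10, 20, 32, 60, 80, 99, 110]
    print(lookup(80, 19, ring, 7))    # e.g. [80, 5, 10]: N10's successor N20 holds K19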

25  CAN: exploit d dimensions
- Each node is assigned a zone
- Nodes are identified by zone boundaries
- Join: choose a random point, split the zone containing it
(Figure: the unit square split into zones such as (0, 0, 0.5, 0.5) and (0.5, 0.25, 0.75, 0.5).)

26  Routing in 2 dimensions
- Routing is navigating a d-dimensional ID space
  - Route to the closest neighbor in the direction of the destination
- Routing table contains O(d) neighbors
- Number of hops is O(d·N^(1/d))
(Figure: a route stepping through neighboring zones of the unit square.)
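A quick numeric illustration (Python) of the tradeoff these bounds imply: per-node state grows linearly in d while path length falls off as N^(1/d). Constant factors are ignored; only the scaling is meaningful, and the roughly 2·d neighbor count is an assumption about a typical CAN-style grid.

    # Rough scaling of CAN routing cost for a fixed network size.
    N = 1_000_000
    for d in (2, 3, 4, 6, 10):
        hops = d * N ** (1 / d)            # O(d * N^(1/d)), up to a constant factor
        print(f"d={d:2d}  neighbors ~ {2 * d:2d}  hops ~ {hops:7.1f}")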

27  2. Balance load
- Hash function balances keys over nodes
- For popular keys, cache along the lookup path
(Figure: Lookup(K19) on the ring; copies of K19 appear at nodes along the path.)

28  Why Caching Works Well
- Only O(log N) nodes have fingers pointing to N20
- This limits the single-block load on N20

29  3. Handling failures: redundancy
- Each node knows the IP addresses of its next r nodes
- Each key is replicated at the next r nodes (see the sketch below)
(Figure: K19 is replicated on the nodes that follow it on the ring.)
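A minimal Python sketch of this replication rule: a key's replica set is its successor plus the following r-1 nodes, so the data survives as long as one of those r nodes is alive. The ring layout and r are illustrative.

    def replica_set(key_id: int, ring: list, r: int) -> list:
        """Return the r nodes that should hold copies of key_id."""
        nodes = sorted(ring)
        # index of the key's first successor on the ring
        start = next((i for i, n in enumerate(nodes) if n >= key_id), 0)
        return [nodes[(start + i) % len(nodes)] for i in range(r)]

    ring = [5, 10, 20, 32, 40, 60, 80, 99, 110]
    print(replica_set(19, ring, r=3))   # [20, 32, 40]: K19 survives even if N20 fails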

30  Lookups find replicas
- Tradeoff between latency and bandwidth [Kademlia]
(Figure: Lookup(K19) reaches a replica of K19 in four numbered hops across the ring N5…N110.)

31  4. Systems in flux
- Lookup takes log(N) hops
  - If the system is stable
- But the system is never stable!
- What we desire are theorems of the type:
  1. In the almost-ideal state, ….log(N)…
  2. The system maintains the almost-ideal state as nodes join and fail

32  Half-life [Liben-Nowell 2002]
- Doubling time: time for N new nodes to join
- Halving time: time for N/2 old nodes to fail
- Half-life: MIN(doubling time, halving time) (see the sketch below)
(Figure: a network of N nodes, with N new nodes joining and N/2 old nodes leaving.)
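A small Python illustration of the half-life definition, under the simplifying assumption of constant join and failure rates; the rates and units are made up.

    def half_life(n: int, joins_per_hour: float, fails_per_hour: float) -> float:
        doubling = n / joins_per_hour          # time for N new nodes to join
        halving = (n / 2) / fails_per_hour     # time for N/2 old nodes to fail
        return min(doubling, halving)

    # 1000 nodes, 100 joins/hour, 25 failures/hour -> half-life of 10 hours.
    print(half_life(1000, joins_per_hour=100, fails_per_hour=25))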

33  Applying half-life
- For any node u in any P2P network:
  - If u wishes to stay connected with high probability, then, on average, u must be notified about Ω(log N) new nodes per half-life
- And so on, …

34  5. Optimize routing to reduce latency
- Nodes close on the ring can be far away in the Internet
- Goal: put nodes in the routing table that result in few hops and low latency
(Figure: N20, N40, N41, and N80 are adjacent on the ring but geographically distant.)

35  The “close” metric impacts the choice of nearby nodes
- Chord’s numerical closeness and fixed table restrict choice
- Prefix-based routing allows for choice
- Kademlia offers choice in nodes and places nodes in an absolute order: close(a, b) = XOR(a, b) (see the sketch below)
(Figure: nodes N06, N32, N60, N103, and N105 located in the USA, Europe, and the Far East; key K104.)
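A minimal Python sketch of the XOR metric mentioned above: close(a, b) = a XOR b, read as an integer. Ranking the figure's nodes against key K104 shows the absolute order the metric imposes; the small numeric IDs are illustrative.

    def xor_distance(a: int, b: int) -> int:
        # Kademlia-style closeness: bitwise XOR of two IDs, interpreted as an integer.
        return a ^ b

    nodes = [6, 32, 60, 103, 105]      # N06, N32, N60, N103, N105 from the figure
    key = 104                          # K104

    # The metric imposes a single total order of closeness around the key.
    for n in sorted(nodes, key=lambda n: xor_distance(n, key)):
        print(f"N{n:<3}  XOR distance to K104 = {xor_distance(n, key)}")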

36  Neighbor set
- From k candidate nodes, insert the nearest node with the appropriate prefix into the routing table
- Assumption: the triangle inequality holds

37  Finding k near neighbors
1. Ping random nodes
2. Swap neighbor sets with neighbors
   - Combine with random pings to explore
3. Provably-good algorithm to find nearby neighbors based on sampling [Karger and Ruhl 02]

38  6. Malicious participants
- Attacker denies service
  - Floods the DHT with data
- Attacker returns incorrect data [detectable]
  - Self-authenticating data
- Attacker denies that data exists [liveness]
  - Bad node is responsible for the key, but says no
  - Bad node supplies incorrect routing info
  - Bad nodes make a bad ring, and a good node joins it
- Basic approach: use redundancy

39  Sybil attack [Douceur 02]
- Attacker creates multiple identities
- Attacker controls enough nodes to foil the redundancy
- ⇒ Need a way to control creation of node IDs
(Figure: a single attacker occupies many positions on the ring N5…N110.)

40  One solution: secure node IDs
- Every node has a public key
- A certificate authority signs the public keys of good nodes
- Every node signs and verifies messages
- Quotas per publisher

41  Another solution: exploit practical Byzantine protocols
- A core set of servers is pre-configured with keys and performs admission control [OceanStore]
- The servers achieve consensus with a practical Byzantine recovery protocol [Castro and Liskov ’99 and ’00]
- The servers serialize updates [OceanStore] or assign secure node IDs [Configuration service]

42  A more decentralized solution: weak secure node IDs
- ID = SHA-1(IP address of the node)
- Assumption: the attacker controls a limited number of IP addresses
- Before using a node, challenge it to verify its ID (see the sketch below)
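A minimal Python sketch of this check: a node's ID must equal SHA-1 of its IP address, so a peer can challenge it by recomputing the hash over the address it actually reached. Function names and addresses are illustrative.

    import hashlib

    def node_id_for(ip: str) -> str:
        # Weak secure ID: derived from the IP address, not chosen freely by the node.
        return hashlib.sha1(ip.encode()).hexdigest()

    def verify_node(claimed_id: str, observed_ip: str) -> bool:
        # Challenge: recompute the ID over the address we actually talked to.
        return claimed_id == node_id_for(observed_ip)

    good_id = node_id_for("18.26.4.9")
    print(verify_node(good_id, "18.26.4.9"))    # True: ID matches the address
    print(verify_node(good_id, "10.0.0.66"))    # False: another host cannot claim this ID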

43  Using weak secure node IDs
- Detect malicious nodes
  - Define verifiable system properties
    - Each node has a successor
    - Data is stored at its successor
  - Allow the querier to observe lookup progress
    - Each hop should bring the query closer
  - Cross-check routing tables with random queries
- Recovery: assume a limited number of bad nodes
  - Quota per node ID

44  Philosophical questions
- How decentralized should systems be?
  - Gnutella versus content distribution networks
  - Have a bit of both? (e.g., OceanStore)
- Why does the distributed systems community have more problems with decentralized systems than the networking community?
  - “A distributed system is a system in which a computer you don’t know about renders your own computer unusable”
  - Internet (BGP, NetNews)

45  What are we doing at MIT?
- Building a system based on Chord
  - Applications: CFS, Herodotus, Melody, Backup store, Ivy, …
- Collaborate with other institutions
  - P2P workshop, PlanetLab
  - Berkeley, ICSI, NYU, and Rice as part of ITR
- Building a large-scale testbed
  - RON, PlanetLab

46  Summary
- Once we have DHTs, building large-scale distributed applications is easy
  - Single, shared infrastructure for many applications
  - Robust in the face of failures and attacks
  - Scalable to large numbers of servers
  - Self-configuring across administrative domains
  - Easy to program
- Let’s build DHTs …. stay tuned ….
- http://project-iris.net

47  7. Programming abstraction
- Blocks versus files
- Database queries (join, etc.)
- Mutable data (writers)
- Atomicity of DHT operations

