
1 Chord and CFS Philip Skov Knudsen (skov@diku.dk) Niels Teglsbo Jensen (teglsbo@diku.dk) Mads Lundemann (thenox@diku.dk)

2 Distributed hash table
Stores values at nodes.
Hash function: name -> hash key; the name can be any string or byte array.
Note: the article mixes up key and ID.
Covered here: Chord and CFS.
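
A minimal sketch of the name-to-key mapping described above, assuming a Chord/CFS-style 160-bit identifier space produced by SHA-1 (the hash both papers use); the function name and example input are illustrative.

```python
# Hash an arbitrary name (any byte string) to a key on the identifier ring.
# Assumes a 160-bit SHA-1 identifier space, as in the Chord/CFS papers.
import hashlib

ID_BITS = 160                          # Chord identifiers live on a 2^160 ring

def name_to_key(name: bytes) -> int:
    """Map a name to an integer key on the identifier ring."""
    digest = hashlib.sha1(name).digest()
    return int.from_bytes(digest, "big") % (2 ** ID_BITS)

print(name_to_key(b"some-file-name"))
```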

3 Chord A scalable Peer-to-peer Lookup Protocol for Internet Applications

4 Chord purpose Map keys to nodes (Compared to Freenet: No anonymity)

5 Goals
Load balance, decentralization, scalability, availability, flexible naming.

6 Consistent hashing
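
A minimal sketch of consistent hashing as Chord applies it: a key is stored on its successor, the first node whose identifier is equal to or follows the key on the ring. The node identifiers below are made-up small integers, chosen only to show the wrap-around.

```python
# Assign a key to its successor node on the identifier ring.
from bisect import bisect_left

def successor(node_ids, key):
    """Return the node responsible for `key` among `node_ids` on a ring."""
    ids = sorted(node_ids)
    i = bisect_left(ids, key)
    return ids[i % len(ids)]            # wrap past the largest ID back to the smallest

nodes = [1, 8, 14, 21, 32, 38, 42, 48, 51, 56]
print(successor(nodes, 10))             # -> 14
print(successor(nodes, 54))             # -> 56
print(successor(nodes, 60))             # -> 1 (wraps around)
```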

7 Simple network topology

8 Efficient network topology

9 Lookup algorithm
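
A minimal, single-process sketch of Chord's lookup, following the find_successor / closest-preceding-finger scheme from the paper: each step forwards the query to the finger that most closely precedes the key. The whole ring is simulated locally here; real Chord performs these steps over RPC between nodes.

```python
M = 6                                   # identifier bits: a ring of 2^6 = 64 positions

def in_open_interval(x, a, b):
    """True if x lies strictly between a and b going clockwise on the ring."""
    return (a < x < b) if a < b else (x > a or x < b)

class Node:
    def __init__(self, node_id, ring):
        self.id = node_id
        self.ring = sorted(ring)        # global view of all node IDs (simulation only)

    def successor_of(self, point):
        """First node at or after `point` on the ring."""
        for nid in self.ring:
            if nid >= point:
                return nid
        return self.ring[0]

    def finger(self, i):
        """i-th finger: successor of (id + 2^i) mod 2^M."""
        return self.successor_of((self.id + 2 ** i) % (2 ** M))

    def closest_preceding_finger(self, key):
        for i in reversed(range(M)):
            f = self.finger(i)
            if in_open_interval(f, self.id, key):
                return f
        return self.id

    def find_successor(self, key):
        succ = self.successor_of((self.id + 1) % (2 ** M))
        if key == succ or in_open_interval(key, self.id, succ):
            return succ                 # key falls in (self, successor]
        nxt = self.closest_preceding_finger(key)
        if nxt == self.id:              # no finger is closer; our successor owns the key
            return succ
        return Node(nxt, self.ring).find_successor(key)

ring = [1, 8, 14, 21, 32, 38, 42, 48, 51, 56]
print(Node(8, ring).find_successor(54))   # -> 56
```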

10 Node joining
26.join(friend) -> 26.successor = 32
26.stabilize -> 32.notify(26)
21.stabilize -> 21.successor = 26 -> 26.notify(21)
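
A minimal sketch of join, stabilize, and notify using only successor and predecessor pointers (no fingers); the three calls at the end replay the 26-joins-between-21-and-32 trace above.

```python
def between(x, a, b):
    """True if x lies in (a, b) going clockwise on the ring."""
    return (a < x < b) if a < b else (x > a or x < b)

class Node:
    def __init__(self, node_id):
        self.id = node_id
        self.successor = self
        self.predecessor = None

    def join(self, friend):
        """Join via any known node: ask it to find our successor."""
        self.successor = friend.find_successor(self.id)

    def find_successor(self, key):      # linear walk; enough for this sketch
        node = self
        while not (key == node.successor.id or between(key, node.id, node.successor.id)):
            node = node.successor
        return node.successor

    def stabilize(self):
        """Adopt any node that has slid in between us and our successor."""
        x = self.successor.predecessor
        if x is not None and between(x.id, self.id, self.successor.id):
            self.successor = x
        self.successor.notify(self)

    def notify(self, candidate):
        """Accept `candidate` as predecessor if it is closer than the current one."""
        if self.predecessor is None or between(candidate.id, self.predecessor.id, self.id):
            self.predecessor = candidate

n21, n32 = Node(21), Node(32)           # existing two-node ring
n21.successor, n32.successor = n32, n21
n21.predecessor, n32.predecessor = n32, n21

n26 = Node(26)
n26.join(n21)          # 26.successor = 32
n26.stabilize()        # 32.notify(26) -> 32.predecessor = 26
n21.stabilize()        # 21 learns 26, sets 21.successor = 26 -> 26.notify(21)
print(n26.successor.id, n32.predecessor.id, n21.successor.id, n26.predecessor.id)
# -> 32 26 26 21
```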

11 Preventing lookup failure
Each node keeps a successor list of length r.
Disregarding network failures, and assuming each node fails within one stabilization period independently with probability p, a node loses connectivity (all r entries in its successor list fail) with probability p^r.
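
A quick numeric illustration of the p^r bound; the values p = 0.1 and r = 5 are made-up, not taken from the paper's evaluation.

```python
# Probability that all r successor-list entries fail within one stabilization period.
p, r = 0.1, 5          # illustrative values only
print(p ** r)          # -> 1e-05
```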

12 Path lengths from simulation Probability density function for path length in a network of 2^12 nodes. Path lengths with varying N
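
For reference, the Chord paper estimates the mean lookup path length at about (1/2)*log2(N); this just evaluates that estimate for the 2^12-node network in the figure.

```python
# Expected number of lookup hops under the paper's (1/2) * log2(N) estimate.
import math
N = 2 ** 12
print(0.5 * math.log2(N))   # -> 6.0 hops on average
```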

13 Load balance Nodes: 10^4, keys: 5*10^5

14 Virtual servers 10^4 nodes and 10^6 keys
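
A minimal sketch of the virtual-server idea: each physical host runs several Chord nodes with independent identifiers, which evens out each host's share of the key space. Using roughly log2(N) virtual servers per host follows the paper's suggestion; the host name here is invented.

```python
# Derive one identifier per virtual server hosted by a physical machine.
import hashlib, math

def vserver_ids(host: str, num_hosts: int):
    k = max(1, round(math.log2(num_hosts)))        # ~log2(N) virtual servers per host
    return [int.from_bytes(hashlib.sha1(f"{host}#{i}".encode()).digest(), "big")
            for i in range(k)]

print(vserver_ids("host-17.example.org", 10_000)[:3])
```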

15 Resilience to failed nodes In a network of 1000 nodes

16 Latency stretch
In a network of 2^16 nodes: c = Chord lookup latency, i = IP latency, stretch = c / i.

17 CFS Wide-area cooperative storage

18 Purpose Distributed cooperative file system

19 System design

20 File system using DHash
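
A minimal sketch of DHash-style content-addressed block storage as CFS uses it: a block's key is the SHA-1 hash of its contents, so any fetched block can be verified against its own key. The in-memory dict stands in for the distributed store, and the put/get names are illustrative.

```python
import hashlib

store = {}                                     # stand-in for the Chord/DHash ring

def put_block(data: bytes) -> bytes:
    key = hashlib.sha1(data).digest()          # content-hash key
    store[key] = data
    return key

def get_block(key: bytes) -> bytes:
    data = store[key]
    assert hashlib.sha1(data).digest() == key  # integrity check on retrieval
    return data

k = put_block(b"one file-system block")
print(get_block(k))
```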

21 Block placement
Tick mark: block ID.
Square: server responsible for the ID (in Chord).
Circles: servers holding replicas.
Triangle: servers receiving a copy of the block to cache.

22 Availability
r servers hold replicas of each block.
The server responsible for the block's ID detects failed replica servers.
If the server responsible for the ID itself fails, the first replica server takes over; it detects this when Chord stabilizes.
Replica nodes are found in the successor list.
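
A minimal sketch of the replica placement just described: the block's successor server is the primary, and copies go on the following nodes from its successor list, so a replica takes over if the primary fails. The node IDs and r below are illustrative.

```python
def replica_servers(sorted_node_ids, block_id, r):
    """Primary = successor of block_id; replicas = the next r-1 ring successors."""
    n = len(sorted_node_ids)
    start = next((i for i, nid in enumerate(sorted_node_ids) if nid >= block_id), 0)
    return [sorted_node_ids[(start + j) % n] for j in range(r)]

nodes = [1, 8, 14, 21, 32, 38, 42, 48, 51, 56]
print(replica_servers(nodes, 30, 3))   # -> [32, 38, 42]
```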

23 Persistence
Each server promises to keep a copy of a block available for at least an agreed-on interval.
Publishers can ask for extensions.
This applies to replicas, not to cached copies.
The server responsible for the ID also relays extension requests to the servers holding replicas.

24 Load balancing
Consistent hashing, virtual servers, caching.

25 Preventing flooding
Each CFS server limits any one IP address to using a certain percentage of its storage.
The percentage might be lowered as more nodes enter the network.
This can be circumvented by clients with dynamic IP addresses.
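
A minimal sketch of the per-IP quota just described. The quota fraction and the accounting structure are made-up; the slide only says each server caps the share of its storage any one IP address may use.

```python
QUOTA_FRACTION = 0.001              # illustrative: each IP may use 0.1% of local disk
DISK_BYTES = 100 * 2 ** 30          # illustrative: 100 GB of local storage

used_by_ip = {}                     # bytes currently stored per client IP

def accept_block(client_ip: str, block_size: int) -> bool:
    limit = QUOTA_FRACTION * DISK_BYTES
    if used_by_ip.get(client_ip, 0) + block_size > limit:
        return False                # reject: this IP already uses its share
    used_by_ip[client_ip] = used_by_ip.get(client_ip, 0) + block_size
    return True

print(accept_block("203.0.113.7", 8192))   # -> True
```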

26 Efficiency
Efficient lookups using Chord, prefetching, server selection.

27 Conclusion
Efficient, scalable, available, load-balanced, decentralized, persistent, prevents flooding.

