Implementation of a Tapestry Node
The main components (sketched below):
- The core router, which uses the routing and object-reference tables to handle messages
- The node membership component, responsible for integrating new nodes and for the graceful (or voluntary) exit of nodes
- The mesh repair component, responsible for adapting the Tapestry mesh as the network environment changes
- The patchwork component, which monitors loss rates and latency information for established communication channels
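As a rough illustration, here is a minimal Python sketch of how these four components might be composed inside a node. All class names and fields are hypothetical, chosen to mirror the list above; they are not taken from the actual Tapestry implementation.

```python
# Hypothetical component classes mirroring the four parts listed above;
# an illustrative sketch, not the actual Tapestry code.

class CoreRouter:
    """Handles messages using the routing and object-reference tables."""
    def __init__(self):
        self.routing_table = []       # per-digit-level neighbor maps
        self.object_references = {}   # object ID -> location pointers

class NodeMembership:
    """Integrates new nodes; handles graceful (voluntary) exits."""

class MeshRepair:
    """Adapts the Tapestry mesh as the network environment changes."""

class Patchwork:
    """Monitors loss rates and latency of established channels."""

class TapestryNode:
    def __init__(self, node_id: str):
        self.node_id = node_id
        self.router = CoreRouter()
        self.membership = NodeMembership()
        self.mesh_repair = MeshRepair()
        self.patchwork = Patchwork()
```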
The Central Role of the Core Routing Component
The Critical Path of Routing
Two kinds of lookups for the router:
- Object references: time complexity O(1), provided all references fit in memory
- Table lookup for nodes: time complexity O(log_B N), where B is the digit base and N the number of nodes (see the sketch below)
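To make the second lookup concrete, here is a simplified Python sketch of digit-by-digit next-hop selection against a neighbor-map-style routing table. The table layout and function names are assumptions for illustration, not the actual implementation.

```python
# Simplified table lookup on the critical path: route a message one digit
# closer to the destination ID per hop. Because each hop resolves one more
# digit of a base-B identifier, a lookup takes O(log_B N) hops.

BASE = 16  # digit base B of node identifiers (an illustrative choice)

def next_hop(routing_table, local_id: str, dest_id: str):
    """routing_table[level][digit] holds a neighbor that matches local_id
    in the first `level` digits and has `digit` at position `level`."""
    # Find the first digit where the destination diverges from this node.
    for level, (ours, theirs) in enumerate(zip(local_id, dest_id)):
        if ours != theirs:
            # May be None if no neighbor is known for that table entry.
            return routing_table[level].get(theirs)
    return None  # dest_id == local_id: this node is the root for the ID
```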
Toward a Higher-Performance Implementation
The Bloom Filter
It comes in handy because:
- In a large, global-scale deployment of Tapestry, the object pointers will not fit into main memory
- In Tapestry, most routing hops receive negative lookup results
From the article: "A Bloom filter is a lossy representation of a set that can detect the absence of a member quite quickly"
Definition: "A probabilistic algorithm to quickly test membership in a large set using multiple hash functions into a single array of bits" (a minimal sketch follows)
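The definition above translates directly into code. Below is a minimal, self-contained Python sketch of a Bloom filter; the bit-array size, hash count, and salted-SHA-1 hashing scheme are illustrative choices, not Tapestry's actual parameters.

```python
# Minimal Bloom filter matching the quoted definition: multiple hash
# functions map each member into a single array of bits.

import hashlib

class BloomFilter:
    def __init__(self, num_bits: int = 1 << 20, num_hashes: int = 4):
        self.num_bits = num_bits
        self.num_hashes = num_hashes
        self.bits = bytearray(num_bits // 8)

    def _positions(self, key: bytes):
        # Derive k independent bit positions from salted SHA-1 digests.
        for i in range(self.num_hashes):
            digest = hashlib.sha1(bytes([i]) + key).digest()
            yield int.from_bytes(digest[:8], "big") % self.num_bits

    def add(self, key: bytes):
        for pos in self._positions(key):
            self.bits[pos // 8] |= 1 << (pos % 8)

    def might_contain(self, key: bytes) -> bool:
        # False means definitely absent; True may be a false positive --
        # the "lossy" aspect quoted above.
        return all(self.bits[pos // 8] & (1 << (pos % 8))
                   for pos in self._positions(key))
```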
Using the Bloom Filter
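A sketch of how such a filter could sit on the routing critical path, assuming the BloomFilter class above and a hypothetical pointer_store holding the full (possibly disk-backed) object-reference table: since most hops receive negative lookup results, the in-memory filter answers "definitely absent" and the expensive lookup is skipped in the common case.

```python
# Hypothetical use of the filter to screen object-pointer lookups.

def find_object_pointer(bloom: "BloomFilter", pointer_store, object_id: bytes):
    if not bloom.might_contain(object_id):
        return None  # common case: fast in-memory test, no store access
    # Rare case (or false positive): consult the full pointer store.
    return pointer_store.get(object_id)
```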
Testing the Performance of Routing
Optimizing the Performance of Routing
Testing Scalability
Testing the Performance Against Massive Fall-outs and Joins
Testing the Performance Against Continuous Fall-outs and Joins
Conclusion
- Tapestry provides a stable interface under a variety of network conditions
- Tapestry supports efficient routing of messages
- It scales well
- Simple optimizations can be achieved by replicating data across multiple servers