1 PIER
2 Presentation overview
- PIER: core functionality and design principles.
- Distributed join example.
- CAN high-level functions.
- Application, PIER, DHT/CAN in detail.
- Distributed join / operations in detail.
- Simulation results.
- Conclusion.
3 What is PIER?
- Peer-to-Peer Information Exchange and Retrieval.
- A distributed query engine for widely distributed environments.
- A "general data retrieval system" that can use any data source.
- PIER internally uses a relational data format.
- Read-only system.
4 PIER overview (based on CAN)
Architecture diagram, three tiers:
- Tier 1, Applications: network monitoring and other user apps.
- Tier 2, PIER: core relational execution engine, catalog manager, query optimizer.
- Tier 3, DHT: wrapper, storage manager, overlay routing.
5 Relaxed consistency
- Brewer states that a distributed data system can have only two of the following three properties:
  1. (C)onsistency
  2. (A)vailability
  3. Tolerance of network (P)artitions
- PIER prioritizes A and P and sacrifices C, i.e. best-effort results.
- Detailed in the distributed join part.
6 Scalability
- Scalability: the amount of work scales with the number of nodes.
- The network can grow easily.
- Robust.
- PIER doesn't require a-priori allocation of resources.
7 Data sources
- Data remains in its original source.
- The source could be anything: a file system, a live feed from a process, etc.
- Wrappers or gateways have to be provided.
(Diagram: Source -> Wrapper -> Tuple)
8 Standard schemas
- Design goal for the application layer: reuse the schemas of popular software (for example tcpdump).
- Pro: bypasses a standardization process.
- Con: limited by current applications.
(Diagram: a record from popular software, e.g. a tcpdump tuple with IP 131.54.78.128, payload 10101001 and timestamp 03-02-2004 18:24:50, passes through a wrapper and becomes a tuple in a relation; a toy wrapper is sketched below.)
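To make the wrapper idea concrete, here is a minimal sketch, assuming a simple whitespace-separated tcpdump-style record; the field names and input format are illustrative assumptions, not PIER's actual wrapper interface.

```python
# Hypothetical wrapper turning a tcpdump-style record into a relational tuple
# matching an assumed standard (ip, payload, timestamp) schema. The input
# format and field names are illustrative, not PIER's actual wrapper.
from datetime import datetime
from typing import NamedTuple

class PacketTuple(NamedTuple):
    ip: str
    payload: str
    timestamp: datetime

def wrap(record: str) -> PacketTuple:
    # assumed input: "131.54.78.128 10101001 2004-02-03T18:24:50"
    ip, payload, ts = record.split()
    return PacketTuple(ip=ip, payload=payload, timestamp=datetime.fromisoformat(ts))

print(wrap("131.54.78.128 10101001 2004-02-03T18:24:50"))
```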
9
- PIER is independent of the DHT; currently it is CAN.
- Currently using multicast; other strategies are possible.
- (Further explained in the DHT part.)
10 PIER storage overview
11 Presentation overview
- PIER: core functionality and design principles.
- Distributed join example.
- CAN high-level functions.
- Application, PIER, DHT/CAN in detail.
- Distributed join / operations in detail.
- Simulation results.
- Conclusion.
12 DHT based distributed join: example
- Distributed execution of relational database query operations, e.g. join, is the core functionality of the PIER system.
- A distributed join is a relational database join performed to some degree in parallel by a number of processors (machines) that hold different parts of the relations being joined.
- Perspective: generic intelligent "keyword" based search built on distributed database query operations (e.g. like Google).
- The following example illustrates what PIER can do, and thus its main purpose, by means of a DHT based distributed join. Details of how, and of which layer does what, are provided later.
13 DHT based distributed join example: relational database join proper (1/2)
14 DHT based distributed join example: relational database join proper (2/2)
15 DHT based distributed join example (1/9)
16 DHT based distributed join example (2/9)
17 DHT based distributed join example (3/9)
18 DHT based distributed join example (4/9)
19 DHT based distributed join example (5/9)
20 DHT based distributed join example (6/9)
21 DHT based distributed join example (7/9)
22 DHT based distributed join example (8/9)
23 DHT based distributed join example (9/9)
24 Presentation overview
- PIER: core functionality and design principles.
- Distributed join example.
- CAN high-level functions.
- Application, PIER, DHT/CAN in detail.
- Distributed join / operations in detail.
- Simulation results.
- Conclusion.
25 CAN
- CAN is a DHT (Content Addressable Network).
- The basic operations on CAN are insertion, lookup and deletion of (key, value) pairs.
- Each CAN node stores a chunk (called a zone) of the entire hash table.
26
- Every node additionally keeps track of a small number of "adjacent" zones (its neighbours) in the table.
- Requests (insert, lookup, delete) for a particular key are routed by intermediate CAN nodes towards the CAN node whose zone contains the key.
27
- The design centers around a virtual d-dimensional Cartesian coordinate space on a d-torus.
- The coordinate space is completely virtual and has no relation to any physical coordinate system.
- The keyspace is probably SHA-1.
- Key k1 is mapped onto a point p1 using a uniform hash function; (k1, v1) is stored at the node Nx that owns the zone containing p1.
28 Retrieving a value for a given key
- Apply the deterministic hash function to map the key onto a point P, then retrieve the corresponding value from the node owning P.
- One hash function per dimension.
get(key) -> value:
- lookup(key) -> ip address
- ip.retrieve(key) -> value
29 Storing (key, value)
- The key is deterministically mapped onto a point P in the coordinate space using a uniform hash function.
- The corresponding (key, value) pair is then stored at the node that owns the zone within which the point P lies.
put(key, value):
- lookup(key) -> ip address
- ip.store(key, value)
(put/get are sketched in code below.)
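A minimal sketch of these put/get operations, assuming one uniform hash function per dimension and replacing real network messages with direct method calls; all names and the dict-based node store are illustrative assumptions, not CAN's or PIER's actual code.

```python
# Minimal sketch of CAN-style put/get: one uniform hash per dimension maps a
# key to a point, lookup() finds the owning node, and the node stores the pair
# locally. Direct method calls replace real network messages; all names are
# illustrative assumptions.
import hashlib

DIMENSIONS = 2
SPACE = 2 ** 16            # assumed coordinate range per dimension

def key_to_point(key: str) -> tuple:
    # one hash function per dimension: hash(dimension, key) -> coordinate
    return tuple(
        int(hashlib.sha1(f"{d}:{key}".encode()).hexdigest(), 16) % SPACE
        for d in range(DIMENSIONS)
    )

class Node:
    def __init__(self, zone):
        self.zone = zone       # ((x_lo, x_hi), (y_lo, y_hi))
        self.store_ = {}

    def owns(self, point):
        return all(lo <= c < hi for c, (lo, hi) in zip(point, self.zone))

    def store(self, key, value):
        self.store_[key] = value

    def retrieve(self, key):
        return self.store_.get(key)

def lookup(nodes, point):
    # stand-in for greedy CAN routing: return the node whose zone contains the point
    return next(n for n in nodes if n.owns(point))

def put(nodes, key, value):
    lookup(nodes, key_to_point(key)).store(key, value)

def get(nodes, key):
    return lookup(nodes, key_to_point(key)).retrieve(key)

# toy run with two nodes, each owning half of the space
nodes = [Node(((0, SPACE // 2), (0, SPACE))), Node(((SPACE // 2, SPACE), (0, SPACE)))]
put(nodes, "k1", "v1")
print(get(nodes, "k1"))    # -> v1
```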
30 Routing
- A CAN node maintains a routing table that holds the IP address and virtual coordinate zone of each of its immediate neighbors in the coordinate space.
- Using its neighbor coordinate set, a node routes a message towards its destination by simple greedy forwarding to the neighbor with coordinates closest to the destination coordinates (one hop choice is sketched below).
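An illustrative sketch of one greedy hop choice, assuming a 16x16 coordinate space (as in the figure on the next slide), per-dimension wraparound distance on the torus, and a routing table of (ip address, zone) entries; this is a simplification, not the CAN implementation.

```python
# Illustrative greedy hop choice: among the neighbours in the routing table,
# forward to the one whose zone centre is closest to the destination point.
# Distances wrap around in each dimension of the torus. The 16x16 space and
# the (ip, zone) routing-table entries are assumptions for this sketch.
SPACE = 16

def torus_distance(a, b):
    return sum(min(abs(x - y), SPACE - abs(x - y)) ** 2 for x, y in zip(a, b))

def zone_centre(zone):
    return tuple((lo + hi) / 2 for lo, hi in zone)

def next_hop(neighbors, destination):
    # neighbors: list of (ip_address, zone) entries from the routing table
    return min(neighbors, key=lambda n: torus_distance(zone_centre(n[1]), destination))

neighbors = [("10.0.0.1", ((0, 8), (8, 16))), ("10.0.0.2", ((8, 16), (8, 16)))]
print(next_hop(neighbors, (15, 14))[0])    # -> 10.0.0.2, the neighbour nearer the key
```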
31
- Each node maintains a routing table with its neighbors.
- Routing follows the straight-line path through the Cartesian space.
(Diagram: a 16x16 coordinate space with corners (0,0), (16,0), (0,16) and (16,16); a data key hashed to the point (15,14).)
32 Node joining
- A new node must first find a node already in the CAN.
- Next, using the CAN routing mechanisms, it must find a node whose zone will be split.
- The neighbors of the split zone must be notified so that routing can include the new node.
33 CAN: construction
1) The new node discovers some node "I" already in the CAN.
34 CAN: construction
2) The new node picks a random point (x,y) in the space.
35 CAN: construction
3) "I" routes to (x,y) and discovers node J.
36 CAN: construction
4) J's zone is split in half; the new node owns one half (the zone split is sketched below).
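A toy sketch of step 4, reusing the hypothetical Node class and key_to_point() from the earlier put/get sketch: the existing node's 2-d zone is split in half along its wider dimension, and the keys that now fall in the new half are handed over. This is an illustration under those assumptions, not CAN's actual join protocol.

```python
# Toy sketch of step 4, reusing the hypothetical Node class and key_to_point()
# from the earlier put/get sketch: split J's 2-d zone in half along its wider
# dimension, give one half to the new node and hand over the keys that now
# fall in that half. Not CAN's actual join protocol.
def split_zone(zone):
    (x_lo, x_hi), (y_lo, y_hi) = zone
    if (x_hi - x_lo) >= (y_hi - y_lo):                       # split along x
        mid = (x_lo + x_hi) // 2
        return ((x_lo, mid), (y_lo, y_hi)), ((mid, x_hi), (y_lo, y_hi))
    mid = (y_lo + y_hi) // 2                                 # split along y
    return ((x_lo, x_hi), (y_lo, mid)), ((x_lo, x_hi), (mid, y_hi))

def join(existing_node, new_node, key_to_point):
    old_half, new_half = split_zone(existing_node.zone)
    existing_node.zone, new_node.zone = old_half, new_half
    # transfer the (key, value) pairs whose points now lie in the new half
    moving = {k: v for k, v in existing_node.store_.items()
              if new_node.owns(key_to_point(k))}
    for k, v in moving.items():
        del existing_node.store_[k]
        new_node.store_[k] = v
```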
37 Node departure
- A departing node explicitly hands over its zone and the associated (key, value) database to one of its neighbors.
- In case of network failure this is handled by a take-over algorithm.
- Problem: the take-over mechanism does not regenerate the lost data.
- Solution: every node keeps a backup of its neighbours' data (the take-over is sketched below).
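A rough sketch of the take-over idea under the assumption that each node keeps a backup of its neighbours' (key, value) pairs and that failures are detected elsewhere (e.g. by missed heartbeats); attribute names are hypothetical, and the real CAN take-over involves more coordination.

```python
# Rough sketch of take-over on node failure, assuming every node keeps a
# backup copy of each neighbour's (key, value) pairs (the "solution" above).
# Attribute names (zone, store_, backups, node_id) are hypothetical; real CAN
# take-over also merges or re-splits zones and resolves competing claims.
def take_over(failed_node, neighbors):
    def zone_volume(zone):
        volume = 1
        for lo, hi in zone:
            volume *= (hi - lo)
        return volume

    # the neighbour with the smallest zone takes over the failed zone
    successor = min(neighbors, key=lambda n: zone_volume(n.zone))
    successor.extra_zones = getattr(successor, "extra_zones", []) + [failed_node.zone]
    # regenerate the failed node's data from the backup the successor holds
    successor.store_.update(successor.backups.get(failed_node.node_id, {}))
    return successor
```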
38 Presentation overview
- PIER: core functionality and design principles.
- Distributed join example.
- CAN high-level functions.
- Application, PIER, DHT/CAN in detail.
- Distributed join / operations in detail.
- Simulation results.
- Conclusion.
39 Querying the Internet with PIER (PIER = Peer-to-peer Information Exchange and Retrieval)
40 What is a DHT?
- Take an abstract ID space and partition it among a changing set of computers (nodes).
- Given a message with an ID, route the message to the computer currently responsible for that ID.
- Messages can be stored at the nodes.
- This is like a "distributed hash table": it provides a put()/get() API.
41
- Given a message with an ID, route the message to the computer currently responsible for that ID.
(Diagram: the 16x16 coordinate space with corners (0,0), (16,0), (0,16) and (16,16); a data key hashed to the point (15,14).)
42 Lots of effort is put into making DHTs better:
- scalable (thousands to millions of nodes)
- resilient to failure
- secure (anonymity, encryption, etc.)
- efficient (fast access with minimal state)
- load balanced
- etc.
43 (based on CAN)
(Diagram: declarative queries become a query plan executed over the overlay network, which runs on top of the physical network.)
Example query: SELECT R.cpr, R.name, S.address FROM R, S WHERE R.cpr = S.cpr
44 Applications
- Any distributed relational database application; network monitoring.
- Feasible applications:
  - intrusion detection
  - fingerprint queries
  - CPU load monitoring
  - split zone
45 DHTs
- Implemented with CAN (Content Addressable Network).
- A node is identified by a rectangle in d-dimensional space.
- A key is hashed to a point and stored at the corresponding node.
- A routing table of neighbours is maintained: O(d) state.
46 DHT Design
- Routing layer: mapping for keys (dynamic as nodes leave and join).
- Storage manager: DHT based data.
- Provider: storage access interface for higher levels.
47 DHT – Routing
- The routing layer maps a key into the IP address of the node currently responsible for that key.
- It provides exact lookups and calls back to higher levels when the set of keys has changed.
Routing layer API (sketched below):
- lookup(key) -> ipaddr (asynchronous function)
- join(landmarkNode), leave(), locationMapChange() (synchronous functions, local node)
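A sketch of how the routing layer API listed above could look as an interface, with the asynchronous lookup delivered through a callback. PIER itself is not written in Python, and the exact signatures here are assumptions based only on the calls named on this slide.

```python
# Sketch of the routing-layer API named above, written as an abstract Python
# interface with the asynchronous lookup delivered via callback. The exact
# signatures are assumptions based only on the calls listed on this slide.
from abc import ABC, abstractmethod
from typing import Callable

class RoutingLayer(ABC):
    @abstractmethod
    def lookup(self, key: bytes, callback: Callable[[str], None]) -> None:
        """Asynchronously resolve key -> IP address of the responsible node."""

    @abstractmethod
    def join(self, landmark_node: str) -> None:
        """Join the overlay via a node that is already in the network."""

    @abstractmethod
    def leave(self) -> None:
        """Leave the overlay, handing the local zone over to a neighbour."""

    @abstractmethod
    def location_map_change(self, handler: Callable[[], None]) -> None:
        """Register a callback fired when the locally owned key set changes."""
```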
48 DHT – Storage
- The storage manager stores and retrieves records, which consist of key/value pairs.
- Keys are used to locate items and can be any supported data type or structure.
Storage Manager API (sketched below):
- store(key, items)
- retrieve(key) -> items (structure)
- remove(key)
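A minimal sketch of the storage manager API above, backed by an in-memory dictionary that keeps a list of items per key; the data structure is an assumption, not PIER's actual storage manager.

```python
# Minimal sketch of the storage-manager API above, backed by an in-memory
# dictionary keeping a list of items per key; the data structure is an
# assumption, not PIER's actual storage manager.
from collections import defaultdict

class StorageManager:
    def __init__(self):
        self._items = defaultdict(list)

    def store(self, key, item):
        self._items[key].append(item)

    def retrieve(self, key):
        return list(self._items.get(key, []))

    def remove(self, key):
        self._items.pop(key, None)
```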
49 DHT – Provider (1)
- The provider ties together the routing and storage manager layers and provides an interface to them.
- Each object in the DHT has a namespace, resourceID and instanceID.
- DHT key = hash1(namespace, resourceID) + .. + hashN(namespace, resourceID), with one hash per dimension.
- namespace: application or group of objects; a table or relation.
- resourceID: primary key or any attribute (object).
- instanceID: an integer, used to separate items with the same namespace and resourceID.
- Lifetime: item storage duration (adheres to the principle of relaxed consistency).
- CAN's mapping of resourceID/object is equivalent to an index; it depends on the dimension.
50 DHT – Provider (2)
Provider API (sketched below):
- get(namespace, resourceID) -> item
- put(namespace, resourceID, item, lifetime)
- renew(namespace, resourceID, instanceID, lifetime) -> bool
- multicast(namespace, resourceID, item)
- lscan(namespace) -> items (structure/iterator)
- newData(namespace, item)
(Diagram: table R is a namespace; tuples 1..n are stored at node R1 and tuples n+1..m at node R2, each item keyed by its resourceID.)
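A sketch of the provider tying the two layers together: the DHT key is composed from (namespace, resourceID) with one hash per dimension, and put/get delegate to lookup and to the storage manager. For brevity lookup is shown as a synchronous call returning a node object; class names, the expiry-based lifetime handling and the hashing details are assumptions.

```python
# Sketch of the provider composing the DHT key from (namespace, resourceID),
# one hash per dimension, and delegating to the routing and storage layers.
# lookup() is shown synchronously returning a node object for brevity; class
# names, hashing details and the expiry-based lifetime are assumptions.
import hashlib
import time

DIMENSIONS = 2
SPACE = 2 ** 16

def dht_key(namespace: str, resource_id: str) -> tuple:
    return tuple(
        int(hashlib.sha1(f"{d}:{namespace}:{resource_id}".encode()).hexdigest(), 16) % SPACE
        for d in range(DIMENSIONS)
    )

class Provider:
    def __init__(self, routing, storage_of_node):
        self.routing = routing                  # maps a point to the owning node
        self.storage_of_node = storage_of_node  # returns that node's storage manager

    def put(self, namespace, resource_id, item, lifetime):
        node = self.routing(dht_key(namespace, resource_id))
        self.storage_of_node(node).store((namespace, resource_id),
                                         (item, time.time() + lifetime))

    def get(self, namespace, resource_id):
        node = self.routing(dht_key(namespace, resource_id))
        stored = self.storage_of_node(node).retrieve((namespace, resource_id))
        return [item for item, expiry in stored if expiry > time.time()]
```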
51 Query Processor
How does it work?
- It performs selection, projection, joins, grouping and aggregation -> operators.
- Push & pull ways of operating.
- Simultaneous execution of multiple operators pipelined together (a generic sketch of such a pipeline follows below).
- Results are produced and queued as quickly as possible.
How does it modify data?
- It inserts, updates and deletes items via the DHT interface.
How does it select data to process?
- Dilated-reachable snapshot: the data published by reachable nodes at the query arrival time.
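A generic illustration of operators pipelined together, written as pull-based Python generators: each operator pulls tuples from the operator below it and yields results as soon as they are produced. This shows the pipelining idea only; it is not PIER's operator code.

```python
# Generic illustration of operators pipelined together as pull-based Python
# generators: each operator pulls tuples from the one below it and yields
# results as soon as they are produced. Not PIER's operator code.
def scan(tuples):
    for t in tuples:
        yield t

def select(child, predicate):
    for t in child:
        if predicate(t):
            yield t

def project(child, columns):
    for t in child:
        yield {c: t[c] for c in columns}

rows = [{"cpr": 1, "name": "a", "salary": 10},
        {"cpr": 2, "name": "b", "salary": 30}]
plan = project(select(scan(rows), lambda t: t["salary"] > 20), ["cpr", "name"])
print(list(plan))    # -> [{'cpr': 2, 'name': 'b'}]
```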
52
(Diagram of a query plan: tuples from R and S, and from T and Q, are pushed into temporary-data namespaces in the DHT and pulled by join operators; the joined results, (R join S) and (T join Q) matched on CPR, feed a grouping & aggregation step over Salary.)
53 DHT based distributed join detailed: PIER layer procedures in node originating join (1/2).
54 DHT based distributed join detailed: PIER layer procedures in node originating join (2/2).
55 DHT based distributed join detailed: PIER layer procedures in node containing data of relation R.
56 DHT based distributed join detailed: PIER layer procedures in node containing data of relation S.
57 DHT based distributed join detailed: PIER layer procedures in node containing intermediate data (namespace NQ).
58 Presentation overview
- PIER: core functionality and design principles.
- Distributed join example.
- CAN high-level functions.
- Application, PIER, DHT/CAN in detail.
- Distributed join / operations in detail.
- Simulation results.
- Conclusion.
59 DHT based distributed joins detailed: important properties.
- The distributed join is performed by the PIER layer.
60 DHT based distributed joins detailed: important properties.
- The PIER distributed joins are adaptations of well-known distributed database join algorithms.
- Distributed databases and database operations have been the object of research for many years.
- The PIER distributed join leverages the DHT in many respects, because the DHT provides the desired scalability with network size.
61 DHT based distributed join detailed: CAN multicast design.
62 DHT based distributed join consistency: in comparison, complete consistency in traditional distributed database systems.
63 DHT based distributed join consistency: consistency defined by the dilated-reachable snapshot.
64 DHT based distributed joins detailed: important properties.
PIER's use of the DHT layer in the distributed join (a sketch of the resulting push/pull hash join follows below):
- Relation -> multicast group = namespace. The join operation is multicast to all nodes containing data for a given relation (the table is addressed by its multicast group). This multicast functionality is essential for achieving distribution and parallelism.
- Push / pull technique:
  - the intermediate result is stored (pushed) in a hashed table;
  - the nodes performing the next step in the workflow pull the intermediate result;
  - the DHT hashed tables with intermediate results are used as queues in the workflow;
  - the DHT-backed queue and the high degree of distribution and parallelism make network delay much less important.
- Hashing on the primary key (rehashing) ensures that related tuples end up on the same node (content addressable network / store based on the hashed key).
- The DHT is used as the exchange medium between nodes (in database terms).
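A compact sketch of the push/pull idea behind the pipelined symmetric hash join: every arriving tuple of R or S is pushed into a temporary DHT namespace rehashed on the join key, and is first joined (pulled) against the tuples of the other relation already stored there, so matching tuples meet at the same key. The dict standing in for the DHT, the namespace name NQ and all other names are assumptions for illustration.

```python
# Compact sketch of the push/pull pipelined symmetric hash join: every local
# tuple of R or S is rehashed on the join key and pushed into a temporary DHT
# namespace; before pushing, it is joined (pulled) against the tuples of the
# other relation already stored under the same key, so matches meet at the
# same node. The dict standing in for the DHT and all names are assumptions.
from collections import defaultdict

dht = defaultdict(list)    # stand-in for the temporary namespace "NQ" in the DHT

def push(namespace, join_key, tagged_tuple):
    dht[(namespace, join_key)].append(tagged_tuple)                           # put()

def probe(namespace, join_key, want_tag):
    return [t for tag, t in dht[(namespace, join_key)] if tag == want_tag]    # get()

def on_local_tuple(relation, t, join_attr, namespace="NQ"):
    key = t[join_attr]
    other = "S" if relation == "R" else "R"
    results = [{**t, **m} for m in probe(namespace, key, other)]  # pull existing matches
    push(namespace, key, (relation, t))                           # push for later matches
    return results

# toy run: R and S tuples may arrive in any order
print(on_local_tuple("R", {"cpr": 1, "name": "a"}, "cpr"))        # -> []
print(on_local_tuple("S", {"cpr": 1, "address": "x"}, "cpr"))     # -> one joined tuple
```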
65 DHT based distributed join detailed: important properties.
Essential properties of the DHT protocol used:
- reliability in inserting data (put)
- reliability in retrieving data (get)
- reliability in storing data (preserving data at node failure)
- scalability in terms of the number of nodes and the amount of work handled
  - traditional distributed database systems have this too, but DHT (at least CAN) scalability is larger
- robustness to node and network failures
  - traditional database systems do not have this (cf. Brewer's "CAP" conjecture: you can only have two of Consistency, Availability and tolerance of network Partitions; traditional distributed systems choose C and sacrifice A in the face of P)
- support for a quickly and frequently varying number of participating nodes
  - traditional distributed databases normally do not have this
66 DHT based distributed join detailed: important properties.
Comparison of traditional distributed databases and the PIER distributed query:
- The PIER query (as the name shows) is read only; there is no update.
- Committed update transactions and operational support facilities like backup (e.g. checkpoint based) are:
  - not a design goal of PIER and other P2P systems;
  - an essential property that traditional distributed database systems have had for many years.
67 DHT based distributed join detailed: important properties.
PIER: supported distributed join algorithms and query rewrite mechanisms (1/2). Note initially:
- The basic principles of PIER's use of the DHT for distributed queries are already shown by the example above, which contains a pipelined symmetric hash join.
- The pipelined symmetric hash join is PIER's most general-purpose equi-join operation.
- The other distributed query mechanisms primarily differ with respect to distributed database operation strategy and bandwidth-saving techniques.
- The other distributed query mechanisms are only briefly mentioned here.
68 DHT based distributed join detailed: important properties.
PIER: supported distributed join algorithms and query rewrite mechanisms (2/2):
- PIER supports DHT based versions of two distributed binary equi-join algorithms:
  - pipelined symmetric hash join: shown in the example above;
  - fetch matches: one of the tables is already hashed on the join key and attributes.
- PIER supports DHT based versions of two distributed bandwidth-reducing query rewrite strategies:
  - symmetric semi-join: first a local projection onto the "source" table's join keys and attributes, then a pipelined symmetric hash join;
  - Bloom join: roughly, an "index" (a Bloom filter) keyed on the join key of each participating table is put into an intermediate table, making it possible to identify the matching set of tuples in the participating tables efficiently (a sketch of the Bloom filter idea follows below).
- Note: the pipelined symmetric hash join (the example above) in itself saves much network bandwidth.
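An illustrative sketch of the Bloom filter that a Bloom join rewrite relies on: each site summarises its local join keys in a small bit array, the per-relation filters are OR-ed together, and a site holding the other relation rehashes only tuples whose join key passes the filter (false positives are possible, false negatives are not). The filter size, hash count and names are arbitrary assumptions, not PIER's parameters.

```python
# Illustrative Bloom filter of the kind a Bloom join rewrite relies on: each
# site summarises its local join keys in a bit array, the per-relation filters
# are OR-ed together, and a site holding the other relation rehashes only the
# tuples whose join key passes the filter (false positives possible, no false
# negatives). Filter size and hash count are arbitrary assumptions.
import hashlib

M, K = 1024, 3    # bits in the filter, number of hash functions

def _positions(key):
    return [int(hashlib.sha1(f"{i}:{key}".encode()).hexdigest(), 16) % M
            for i in range(K)]

def make_filter(join_keys):
    bits = [False] * M
    for key in join_keys:
        for p in _positions(key):
            bits[p] = True
    return bits

def union(f1, f2):    # combine filters built at different sites
    return [a or b for a, b in zip(f1, f2)]

def maybe_matches(bloom, key):
    return all(bloom[p] for p in _positions(key))

r_filter = union(make_filter([1, 2]), make_filter([5]))   # R's join keys from two sites
s_tuples = [{"cpr": 2, "address": "x"}, {"cpr": 9, "address": "y"}]
to_rehash = [t for t in s_tuples if maybe_matches(r_filter, t["cpr"])]
print(to_rehash)    # keeps cpr = 2; cpr = 9 is (very likely) filtered out
```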
69 Presentation overview
- PIER: core functionality and design principles.
- Distributed join example.
- CAN high-level functions.
- Application, PIER, DHT/CAN in detail.
- Distributed join / operations in detail.
- Simulation results.
- Conclusion.
70 Evaluation of peer scalability and robustness by means of simulation and limited-scale experimental operation:
- Subject of evaluation: the above two distributed join algorithms and the above two query rewrite methods.
- Simulation: a 10,000 node network.
- Experimental operation: a cluster of 64 PCs in a LAN.
- Simulation results:
  - the network scales well with both a growing number of nodes and a growing amount of work;
  - the network is robust to node failure; robustness is provided by the DHT layer, not by the PIER layer.
- Evaluation by the presenters: the PIER simulation test cases seem relevant and the results seem acceptable.
- Experimental results: negligible (the experimental operation must be extended).
- Reference: "Querying the Internet with PIER", Ryan Huebsch et al., see the course homepage; hereafter "the article".
- The most important simulation tests and results are shown below.
71 Simulation testbed
- The same test distributed join is used in all tests.
- The amount of work of the test distributed join, in terms of data traffic, can be (and is) increased proportionally with the number of nodes in the network.
72 Simulation: average join time when scaling network and work ("full mesh" topology) (1/2).
- Test case: average join time when both the amount of work and the number of nodes in the network scale.
- "Full mesh" network topology used.
- Result: the network scales well (performance degradation of a factor of 4 when the number of nodes, and proportionally the amount of work, scales from 2 to 10,000).
- See article fig. 3 below.
73 Simulation: average join time when scaling network and work ("full mesh" topology) (2/2).
74 Simulation: average join time for different mechanisms (1/2).
- Test case: average join time for:
  - the supported distributed join algorithms: pipelined symmetric hash join, fetch matches;
  - the supported query rewrite strategies: symmetric semi-join, Bloom join.
- Result: see article table 4 below.
75 Simulation: average join time for different mechanisms (2/2).
76 Simulation test cases and results: aggregate join traffic for different mechanisms (1/2).
- Test case: aggregate join traffic generated by the supported join algorithms and query rewrite strategies.
- Result: see article fig. 4 below.
77 Simulation test cases and results: aggregate join traffic for different mechanisms (2/2).
78 Simulation: average join time when scaling network and work ("transit stub" topology) (1/4).
- Note initially: the topologies examined are the underlying IP network topology, not the PIER / DHT overlay network topology.
- Test case: "transit stub" network topology compared to "full mesh" network topology.
- "Transit stub" is the realistic network topology of the two; "full mesh" is easier to simulate.
- "Transit stub" topology is a traditional hierarchical network topology:
  - 4 "transit domains", each having
  - 10 "transit nodes", each having
  - 3 "stub nodes".
- PIER nodes are distributed equally over the "stub nodes".
79 Simulation: average join time when scaling network and work ("transit stub" topology) (2/4).
- The "transit stub" topology is compared with the "full mesh" topology by comparing, in the two topologies, the average join time when both the amount of work and the number of nodes scale.
- The performance scaling ability of the "full mesh" topology was shown in fig. 3 above.
- The performance scaling ability of the "transit stub" topology is shown in fig. 7 below.
- Results:
  - the "transit stub" topology's performance scaling ability is close to that of the "full mesh" topology;
  - the "transit stub" absolute delay is bigger (due to the realistic topology and network delays);
  - conclusion: "full mesh", which is easier to simulate, is used in the simulation tests, since it is sufficiently close to the realistic "transit stub" topology.
80 Simulation: "transit stub" topology's ability to keep performance high when scaling (fig. 7): average join time when scaling network and work (3/4).
81 Simulation: in comparison "full mesh" topology's ability to keep performance high when scaling (fig. 3): average join time when scaling network and work (4/4).
82 Presentation overview
- PIER: core functionality and design principles.
- Distributed join example.
- CAN high-level functions.
- Application, PIER, DHT/CAN in detail.
- Distributed join / operations in detail.
- Simulation results.
- Conclusion.
83 Conclusion
- PIER is a promising technique.
- Network monitoring is limited to network state data, which does not need to be consistent with respect to a small time interval for updates.
- The only feasible applications are ones that can accept highly inconsistent data.