Distributed Monitoring and Management
Presented by: Ahmed Khurshid and Abdullah Al-Nayeem
CS 525, Spring 2009: Advanced Distributed Systems
Large Distributed Systems
Infrastructure: PlanetLab has 971 nodes in 485 sites. Applications: Hadoop at Yahoo! runs on 4,000 nodes; Google probably has more than 450,000 servers worldwide (Wikipedia). Not only the number of nodes, but also the data processed in commercial systems is enormous, e.g. Facebook stores over 10 billion uploaded pictures.
Monitoring and Management
Monitoring and management of both infrastructures and applications:
– Corrective measures against failures, attacks, etc.
– Ensuring better performance, e.g. load balancing
What resources are managed?
– Distributed application processes and objects (log files, routing tables, etc.)
– System resources: CPU utilization, free disk space, bandwidth utilization
Management and Monitoring Operations
Query current system status: CPU utilization, disk space, process progress rate, etc.
Push software updates, e.g. install the query program.
Monitor dynamically changing state.
(Figure: a management query issued to nodes n1–n6.)
Challenges
Managing today's large-scale systems is difficult:
– A centralized solution doesn't scale (no in-network aggregation).
– Self-organization capability is becoming a necessity.
– Responses are expected in seconds, not in minutes or hours.
– Node failures cause inconsistent results (network partitions).
Brewer's conjecture: it is impossible for a web service to provide all three of the following guarantees: Consistency, Availability, and Partition-tolerance (the CAP dilemma).
Astrolabe: A Robust and Scalable Technology for Distributed System Monitoring, Management, and Data Mining
Robbert van Renesse, Kenneth P. Birman, and Werner Vogels
Presented by: Abdullah Al-Nayeem
Overview
Astrolabe is an information management service:
– Locates and collects the status of a set of servers.
– Reports summaries of this information (aggregation expressed as SQL queries).
– Automatically updates and reports any changed summaries.
Design principles:
– Scalability through a hierarchy of resources
– Robustness through a gossip protocol (peer-to-peer)
– Flexibility through customizable queries (SQL)
– Security through certificates
Astrolabe Zone Hierarchy
(Figure: hosts n1–n8 organized into a zone tree with zones such as /berkeley, /cornell, /uiuc, and /uiuc/cs.)
Astrolabe Zone Hierarchy (2)
(Figure: the zone tree with leaf zones /uiuc/ece/n1, /uiuc/cs/n4, /uiuc/cs/n6, /cornell/n2, /cornell/cs/n3, /berkeley/eecs/n5, /berkeley/eecs/n7, and /berkeley/eecs/n8 under the internal zones /uiuc, /cornell, /berkeley and the root /.)
It is a virtual hierarchy: only the hosts in the leaf zones run Astrolabe agents.
– The zone hierarchy is determined by the administrators (less flexibility).
– Assumption: zone names are consistent with the physical topology.
Decentralized Hierarchical Database
An attribute list is associated with each zone.
– This attribute list is called the Management Information Base (MIB).
– Attributes include information on load, total free disk space, process information, etc.
Each internal zone has a relational table containing the MIBs of its child zones.
– The leaf zone is an exception (see the next slide).
Decentralized Hierarchical Database (2)
(Figure: the agent at /uiuc/cs/n6 keeps a local copy of the MIB tables of its ancestor zones /, /uiuc, and /uiuc/cs. The / table has rows for uiuc, cornell, and berkeley; the /uiuc table has rows for cs and ece; the /uiuc/cs table has rows for n4 (Load = 0.1) and n6 (Load = 0.3). Each leaf MIB holds system attributes (Load, Disk), process information (Service A(1.1), progress = 0.7 on n4; Service A(1.0), progress = 0.5 on n6), and files.)
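A minimal sketch of how an agent's local view of this hierarchy could be represented, assuming Python and illustrative attribute names (the real system stores signed MIBs and gossip metadata):

# Sketch of an agent's local view of the Astrolabe hierarchy.
# Leaf MIB of the local host, with "virtual child tables" such as processes and files.
leaf_mib = {
    "load": 0.3,
    "disk_free_tb": 0.6,
    "processes": [{"name": "ServiceA", "version": "1.0", "progress": 0.5}],
    "files": ["game.db"],
}

# Each internal zone keeps a relational table: one MIB row per child zone.
zone_tables = {
    "/uiuc/cs": {"n4": {"load": 0.1}, "n6": {"load": 0.3}},
    "/uiuc":    {"cs": {"load": 0.1}, "ece": {"load": 0.2}},
    "/":        {"uiuc": {"load": 0.1}, "cornell": {"load": 0.4}, "berkeley": {"load": 0.2}},
}

def ancestors(zone_path):
    """Zones whose tables this agent keeps locally: the leaf's ancestors."""
    parts = [p for p in zone_path.split("/") if p]
    return ["/"] + ["/" + "/".join(parts[:i + 1]) for i in range(len(parts))]

print(ancestors("/uiuc/cs/n6"))   # ['/', '/uiuc', '/uiuc/cs', '/uiuc/cs/n6']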
State Aggregation
Each internal zone aggregates the MIBs of its child zones using an SQL query.
(Figure: the /uiuc/cs table at agent n4 has rows n4 (Load = 0.3) and n6 (Load = 0.5); the query SELECT MIN(Load) AS Load yields Load = 0.3 for the /uiuc/cs zone, which then appears in the /uiuc and / tables.)
Other aggregation functions include MAX(attribute), SUM(attribute), AVG(attribute), and FIRST(n, attribute).
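A hedged Python sketch of this aggregation step; the SQL functions named on the slide are modeled directly, and the attribute names are illustrative:

# Sketch: compute an internal zone's MIB from its child MIBs, mimicking
# the slide's query "SELECT MIN(Load) AS Load" (attribute names assumed).

def aggregate_zone(child_mibs, attribute="load", func=min):
    """Aggregate one attribute over the child-zone MIB rows."""
    values = [mib[attribute] for mib in child_mibs.values() if attribute in mib]
    return {attribute: func(values)} if values else {}

uiuc_cs_children = {"n4": {"load": 0.3}, "n6": {"load": 0.5}}
print(aggregate_zone(uiuc_cs_children))            # {'load': 0.3}  ~ MIN(Load)
print(aggregate_zone(uiuc_cs_children, func=sum))  # {'load': 0.8}  ~ SUM(Load)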
State Merge – Gossip Protocol
Each agent periodically contacts some other agent and exchanges the MIB state; for each row, the copy with the newer timestamp replaces the older one.
(Figure: agents /uiuc/cs/n4 and /uiuc/cs/n6 merge their copies of the /uiuc/cs, /uiuc, and / tables by comparing the Time field of each row.)
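A minimal Python sketch of the timestamp-based merge two agents perform when they gossip, assuming one timestamp per child-zone row (the field names are illustrative):

# Sketch of the gossip merge: the fresher row wins.

def merge_tables(local, remote):
    """Merge a peer's copy of a zone table into ours, keeping fresher rows."""
    for child, row in remote.items():
        if child not in local or row["time"] > local[child]["time"]:
            local[child] = row
    return local

n4_view = {"n4": {"load": 0.3, "time": 121}, "n6": {"load": 0.5, "time": 101}}
n6_view = {"n4": {"load": 0.4, "time": 110}, "n6": {"load": 0.2, "time": 130}}

print(merge_tables(dict(n4_view), n6_view))
# {'n4': {'load': 0.3, 'time': 121}, 'n6': {'load': 0.2, 'time': 130}}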
More about Astrolabe Gossip
How does /uiuc/cs/n4 know the MIB of /cornell? By gossiping with /cornell/n2 or /cornell/cs/n3.
(Figure: the zone tree, showing the gossiped MIBs arriving in /uiuc/cs.)
More about Astrolabe Gossip (2)
Each zone dynamically elects a set of representative agents to gossip on its behalf.
– The election can be based on the load of the agents or their longevity.
– The MIB contains the list of representative agents.
– An agent can represent multiple zones.
Each agent periodically gossips for each zone it represents (one round is sketched below):
– It randomly picks another sibling zone and one of that zone's representative agents.
– It gossips the MIBs of all of the sibling zones.
Gossip dissemination within a zone grows as O(log K), where K is the number of child zones.
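A hedged Python sketch of one gossip round for a zone an agent represents; the sibling-zone map, the representative lists, and the send callback are illustrative assumptions:

# Sketch: pick a random sibling zone and one of its representatives, then
# exchange the MIBs of all sibling zones with that agent.

import random

def gossip_round(zone, siblings, send):
    others = {name: reps for name, reps in siblings.items() if name != zone}
    if not others:
        return
    target_zone = random.choice(list(others))
    target_agent = random.choice(others[target_zone])
    send(target_agent, payload={"zone_tables_for": list(siblings)})

siblings = {"/uiuc": ["/uiuc/cs/n4", "/uiuc/ece/n1"],
            "/cornell": ["/cornell/cs/n3"],
            "/berkeley": ["/berkeley/eecs/n5"]}
gossip_round("/uiuc", siblings, send=lambda agent, payload: print(agent, payload))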
More about Astrolabe Gossip (3)
(Figure: /uiuc/cs/n4 and /uiuc/ece/n1 are the representative agents for /uiuc; a representative gossips with peers such as /cornell/cs/n3, /berkeley/eecs/n5, and /berkeley/eecs/n8 about the MIBs of /berkeley and /uiuc.)
Example: P2P Caching of Large Objects
Query to locate a copy of the file game.db:
– At the leaves: SELECT COUNT(*) AS file_count FROM files WHERE name = 'game.db' (each host also installs an attribute 'result' holding its host name in its leaf MIB).
– At internal zones: SELECT FIRST(1, result) AS result, SUM(file_count) AS file_count WHERE file_count > 0 (aggregates the 'result' of each zone and picks one host per zone).
The SQL query code is installed in an Astrolabe agent using an aggregation function certificate (AFC).
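A hedged Python rendering of what this two-level query computes; the field names follow the slide, while the evaluation order and data layout are assumptions:

# Sketch of the two queries from the slide, evaluated bottom-up
# (assumption: each leaf MIB carries a 'files' list and a 'result' host name).

def leaf_level(mib):
    """Leaf query: count local copies of game.db and expose the host name."""
    file_count = sum(1 for f in mib["files"] if f == "game.db")
    return {"file_count": file_count, "result": mib["host"]}

def zone_level(child_rows):
    """Internal query: SUM the counts, keep the FIRST host that has a copy."""
    hits = [r for r in child_rows if r["file_count"] > 0]
    return {"file_count": sum(r["file_count"] for r in hits),
            "result": hits[0]["result"] if hits else None}

leaves = [leaf_level({"host": "n4", "files": []}),
          leaf_level({"host": "n6", "files": ["game.db"]})]
print(zone_level(leaves))   # {'file_count': 1, 'result': 'n6'}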
Example: P2P Caching of Large Objects (2)
The querying application introduces this new AFC into the management table of some Astrolabe agent. Query propagation and aggregation of output:
1. An Astrolabe agent automatically evaluates the AFC for its leaf zone and recursively updates the tables of the ancestor zones. A copy of the AFC is included along with the result. The query is evaluated recursively up to the root zone, as long as the policy permits.
2. AFCs are also gossiped to other agents (as part of the MIB). A receiving agent scans the gossiped message and installs the new AFCs in its leaf MIBs.
Each AFC has an expiration time; until then, the query is re-evaluated frequently.
Membership
Removing failed or disconnected nodes:
– Astrolabe also gossips membership information.
– If a process (or host) fails, its MIB will eventually expire and be deleted (sketched below).
Integrating new members:
– Astrolabe relies on IP multicast to set up the initial contacts; the gossip message is also occasionally broadcast on the local LAN.
– Astrolabe agents also occasionally contact a set of their relatives.
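A minimal Python sketch of failure handling via expiration, assuming each row records when it was last refreshed by gossip; the expiry value is illustrative, not from the paper:

EXPIRY = 30  # seconds without a gossip refresh (illustrative value)

def expire(table, now):
    """Delete child-zone rows that have not been refreshed recently."""
    return {child: row for child, row in table.items()
            if now - row["last_heard"] <= EXPIRY}

table = {"n4": {"load": 0.3, "last_heard": 100}, "n6": {"load": 0.5, "last_heard": 60}}
print(expire(table, now=101))   # n6 has expired; only n4 remains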
Simulation Results
Setup: one representative agent per zone; no failures.
Finding: the smaller the branching factor of the zone hierarchy, the slower the gossip dissemination.
(Figure: effect of the branching factor of the zone hierarchy on the gossip dissemination time.)
Simulation Results (2)
Setup: branching factor = 25; no failures.
Finding: the more representative agents per zone, the lower the gossip dissemination time in the presence of failures.
(Figure: effect of the number of representative agents on the gossip dissemination time.)
Discussion
Astrolabe is not meant to provide routing features similar to DHTs.
– How is Astrolabe different from DHTs?
Astrolabe attributes are updated proactively and frequently.
– Do you think this proactive management is better than a reactive one?
Moara: Flexible and Scalable Group-Based Querying System
Steven Y. Ko¹, Praveen Yalagandula², Indranil Gupta¹, Vanish Talwar², Dejan Milojicic², Subu Iyer²
¹ University of Illinois at Urbana-Champaign, ² HP Labs, Palo Alto
ACM/IFIP/USENIX Middleware, 2008
Presented by: Ahmed Khurshid
Motivation
What is the average memory utilization of machines running MySQL?
Naïve approach: consumes extra bandwidth and adds delay.
(Figure: machines grouped by the software they run (Linux, Apache, MySQL), with the query sent to all of them.)
Motivation (cont.)
What is the average memory utilization of machines running MySQL?
Better approach: avoids unnecessary traffic.
(Figure: the same machines, with the query sent only to those running MySQL.)
Two Approaches
Single tree (no grouping): query cost is high.
Group-based trees: group maintenance cost is high.
Moara sits between these two extremes.
Moara Features
Moara maintains aggregation trees for different groups.
– Uses the FreePastry DHT for this purpose.
Supports a query language of the form (query-attribute, aggregation function, group-predicate), e.g. (Mem-Util, Average, MySQL = true).
Supports composite queries that target multiple groups using unions and intersections.
Reduces bandwidth usage and response time by:
– Adaptively pruning branches of the DHT tree.
– Bypassing intermediate nodes that do not satisfy a given query.
– Only querying those nodes that are able to answer quickly, without affecting the result of the query.
Common Queries
(Table: examples of common monitoring queries; not reproduced here.)
Group Size and Dynamism
(Figures: usage of PlanetLab nodes by different slices; usage of HP's utility computing environment by different jobs.)
Most slices have fewer than 10 nodes; the number of machines assigned to a job varies.
Moara: Data and Query Model
Each Moara agent holds a list of (attribute, value) pairs.
A query has three parts: a query attribute, an aggregation function, and a group-predicate.
Aggregation functions are partially aggregatable, so aggregation can be performed in-network.
Composite queries can be constructed using "and" and "or" operators, e.g. (Linux=true and Apache=true). A sketch of this model follows.
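A minimal Python sketch of the three-part query and local predicate evaluation; the attribute names and the representation of predicates as functions are assumptions made for illustration:

from typing import Any, Callable, Dict

Query = tuple  # (query_attribute, aggregation_function_name, group_predicate)

def satisfies(attrs: Dict[str, Any], predicate: Callable[[Dict[str, Any]], bool]) -> bool:
    """Does this agent belong to the group named by the predicate?"""
    return predicate(attrs)

agent_attrs = {"Linux": True, "Apache": True, "MySQL": True, "Mem-Util": 0.42}

query: Query = ("Mem-Util", "Average", lambda a: a.get("MySQL") is True)
composite = lambda a: a.get("Linux") is True and a.get("Apache") is True

print(satisfies(agent_attrs, query[2]))   # True: this agent is in the MySQL group
print(satisfies(agent_attrs, composite))  # True: and in (Linux and Apache) as well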
Scalable Aggregation
Moara employs a peer-to-peer, in-network aggregation approach.
It maintains a separate DHT tree for each group predicate; the hash of the group attribute designates a node as the root of that tree.
Queries are first sent to the root node, which then propagates the query down the tree.
Data coming from child nodes are aggregated before the results are sent to the parent node.
DHT Tree Steps
Take the hash of the group predicate, e.g. Hash(ServiceX) = 000.
Select the root based on the hash.
Use Pastry's routing mechanism to join the tree (similar to SCRIBE).
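A hedged Python sketch of the root-selection step. The SHA-1 hash, the small id space, and picking the numerically closest node id are simplifying assumptions; real Moara relies on FreePastry's prefix routing to build a Scribe-style tree:

import hashlib

def tree_root_id(predicate: str, bits: int = 16) -> int:
    """Hash the group predicate into a small Pastry-like id space."""
    digest = hashlib.sha1(predicate.encode()).hexdigest()
    return int(digest, 16) % (2 ** bits)

def root_node(predicate: str, node_ids: list) -> int:
    """Pick the live node whose id is closest to the predicate's hash."""
    target = tree_root_id(predicate)
    return min(node_ids, key=lambda nid: abs(nid - target))

nodes = [0x000A, 0x4F21, 0x9C3D, 0xE001]
print(hex(tree_root_id("ServiceX = true")))
print(hex(root_node("ServiceX = true", nodes)))   # root of the ServiceX tree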
Optimizations
Prune branches of the tree that do not contain any node satisfying a given query.
– The cost of maintaining the group tree must be balanced against the query resolution cost.
Bypass internal nodes that do not satisfy a given query.
When dealing with composite queries, select a minimal set of groups by rewriting the query into a more manageable form.
Dynamic Maintenance
prune (p) is a binary local state variable kept at every node, per attribute.
(Figure: a tree with root A, internal nodes B and C, and leaves D–H; initially p = false everywhere, and NO-PRUNE messages propagate p(D) = false, p(E) = false, etc. up the tree.)
Dynamic Maintenance (2)
(Figure: leaf F sets p = true and sends a PRUNE message to its parent, which records p(F) = true; all other nodes still have p = false.)
Dynamic Maintenance (3)
(Figure: G also prunes, so node C records p(F) = true and p(G) = true, sets its own p = true, and sends a PRUNE message to A, which records p(C) = true; the rest of the tree remains unpruned.)
A high group churn rate causes more PRUNE/NO-PRUNE messages and may be more expensive than simply forwarding the query to all nodes.
Adaptation Policy
Maintain two additional state variables at every node:
– sat: tracks whether the subtree rooted at this node should continue receiving queries for a given predicate.
– update: denotes whether the node updates its prune variable or not.
The following invariants are maintained (sketched below):
– update = 1 AND sat = 1 => prune = 0
– update = 1 AND sat = 0 => prune = 1
– update = 0 => prune = 0
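A minimal Python sketch of the prune/sat/update state and the three invariants listed above; the class and method names are illustrative, not from the paper:

# One state record per node, per group predicate (assumption).

class PredicateState:
    def __init__(self):
        self.sat = False     # does my subtree satisfy the predicate?
        self.update = True   # am I actively maintaining the prune state?
        self.prune = False   # may my parent skip me for this predicate?

    def refresh(self):
        """Re-derive prune so the invariants on the slide always hold."""
        if self.update:
            self.prune = not self.sat   # update=1, sat=1 -> prune=0; sat=0 -> prune=1
        else:
            self.prune = False          # update=0 -> prune=0 (always receive queries)

s = PredicateState()
s.sat, s.update = False, True
s.refresh()
print(s.prune)   # True: the subtree cannot satisfy the query, so prune it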
Adaptation Policy (cont.)
The policy adapts per predicate: intuitively, a node enables updates (and thus pruning) when the overhead due to unrelated queries is higher than that of group maintenance messages, and disables them when group maintenance messages consume more bandwidth than queries.
Separate Query Plane
Used to bypass intermediate nodes that do not satisfy a given query.
Reduces message complexity from O(m log N) to O(m), where N is the total number of nodes in the system and m is the number of nodes satisfying the query.
Uses two locally maintained sets at each node:
– updateSet: the list of nodes forwarded to the parent.
– qSet: the list of child nodes to which queries are forwarded; it is the union of all updateSets received from the children.
Based on the size of its qSet and its SAT value, a node decides whether to remain on the query path or not (see the sketch after the next slide).
Separate Query Plane (cont.)
(Figure: a tree with root A, internal node B, and leaves C, D, E, where C and D are SAT and E is NOSAT. B's qSet is {C, D}. With threshold = 2, B reports itself upward and receives the query; with threshold = 3, B reports {C, D} instead, so queries bypass B and reach C and D directly.)
A node remains in the tree if it is SAT or |qSet| ≥ threshold.
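A hedged Python sketch of this bookkeeping; the function name, set representation, and the "SAT or |qSet| ≥ threshold" rule are taken from the figure above and should be read as an illustration rather than the exact protocol:

def build_update_set(node, sat, child_update_sets, threshold):
    """qSet = union of the children's updateSets. A node reports itself upward
    (and stays on the query path) if it is SAT or its qSet has reached the
    threshold; otherwise it reports its qSet so queries can bypass it."""
    q_set = set().union(*child_update_sets) if child_update_sets else set()
    if sat or len(q_set) >= threshold:
        return {node}, q_set
    return set(q_set), q_set

# Node B from the figure: NOSAT, children C and D are SAT.
for threshold in (2, 3):
    update_set, q_set = build_update_set("B", sat=False,
                                         child_update_sets=[{"C"}, {"D"}],
                                         threshold=threshold)
    print(threshold, sorted(update_set))   # 2 ['B'] vs. 3 ['C', 'D']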
Composite Queries
Moara does not maintain trees for composite queries; it answers them by contacting one or more simple-predicate trees.
Example 1: (Free_Mem, Avg, ServiceX = true AND ServiceY = true)
– Two trees, one for each service.
– Can be answered using a single tree (whichever promises to respond earlier).
Example 2: (Free_Mem, Avg, ServiceX = true OR ServiceY = true)
– Two trees, one for each service.
– Both trees need to be queried.
Composite Queries (cont.)
Moara selects a small cover: a set of trees sufficient to answer the query. For example:
– cover(Q = "A") = {A}, if A is a predefined group.
– cover(Q = "A or B") = cover(A) ∪ cover(B).
– cover(Q = "A and B") = cover(A), cover(B), or cover(A) ∪ cover(B).
Bandwidth is saved by rewriting a nested query to select a low-cost cover, estimating query costs for individual trees, and using semantic information supplied by the user. A sketch of cover selection follows.
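A minimal Python sketch of cover selection for composite queries, following the rules listed above; the query representation as nested tuples and the per-tree cost table are illustrative assumptions:

def cover(query, cost):
    """Return a set of simple-predicate trees sufficient to answer `query`."""
    if isinstance(query, str):                       # a predefined group
        return {query}
    op, left, right = query
    left_c, right_c = cover(left, cost), cover(right, cost)
    if op == "or":                                   # must reach both groups
        return left_c | right_c
    # 'and': any of the three options is sufficient; pick the cheapest
    options = [left_c, right_c, left_c | right_c]
    return min(options, key=lambda c: sum(cost[t] for t in c))

cost = {"ServiceX": 120, "ServiceY": 40}
print(cover(("and", "ServiceX", "ServiceY"), cost))   # {'ServiceY'}: the cheaper tree
print(cover(("or", "ServiceX", "ServiceY"), cost))    # both trees needed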
Finding Low-Cost Covers
(Figure: the cover-selection algorithm; it rewrites the query into CNF (Conjunctive Normal Form) and gives a minimal-cost cover.)
Performance Evaluation: Dynamic Maintenance
Setup: FreePastry simulator; 10,000 nodes; a group of size 2,000 under churn; 500 churn events; queries on an attribute A with value ∈ {0, 1}.
(Figure: bandwidth usage at various query-to-churn ratios.)
Moara performs better than both extreme approaches.
Performance Evaluation: Separate Query Plane
(Figure: bandwidth usage for different (group size, threshold) pairs.)
For higher threshold values, the query cost does not depend on the total number of nodes.
Emulab Experiments
Setup: 50 machines, 10 instances of Moara per machine, fixed query rate.
(Figures: latency and bandwidth usage with static groups; average latency of dynamically changing groups compared with a static group of the same size.)
Latency and message cost increase with group size; Moara performs well even under high group churn.
PlanetLab Experiments
Setup: 200 PlanetLab nodes, one instance of Moara per node, 500 queries injected 5 seconds apart.
(Figure: Moara vs. a centralized aggregator.)
Moara responds faster than a centralized aggregator.
Discussion Points
Using a DAG or Synopsis Diffusion instead of trees.
Handling group churn in the middle of a query (Moara ensures eventual consistency).
Effect of nodes that are unreachable.
State maintenance overhead for different attributes.
Computation overhead of maintaining DHT trees for different attributes.
Using Moara in ad-hoc mobile wireless networks.
Network Imprecision: A New Consistency Metric for Scalable Monitoring
Navendu Jain (Microsoft Research), Prince Mahajan (The University of Texas at Austin), Dmitry Kit (The University of Texas at Austin), Praveen Yalagandula (HP Labs), Mike Dahlin (The University of Texas at Austin), and Yin Zhang (The University of Texas at Austin)
Motivation
Providing a consistency metric suitable for large-scale monitoring systems.
Safeguarding accuracy despite node and network failures.
Providing a level of confidence in the reported information.
Efficiently tracking the number of nodes that fail to report status or that report status multiple times.
Thanks. Questions and comments?