1
Efficient Incremental Optimal Chain Partition of Distributed Program Traces
Selma Ikiz and Vijay K. Garg, Parallel and Distributed Systems Laboratory
2
Outline
Introduction and motivation
Problem definition
Two previous algorithms
Offline vs. incremental, with experimental results
New incremental algorithm
Experimental results
Concluding remarks
This is the general outline of today's talk. I will start by giving some background information and our motivation. I will briefly go over the problem definition and the two previously known algorithms. I'll then describe how these algorithms can be used in both an offline and an incremental manner, and show some experimental results. Finally, I'll introduce a new incremental algorithm and show its experimental results.
3
Software testing and debugging
Commercial software has a large number of components.
Verification: a formal proof of correctness is not feasible; predicate detection (runtime verification) complements simulation and formal methods.
Debugging: a large number of states; abstraction or grouping of states/processes.
Today, commercial software has a large number of components, so in general it is not feasible to give a formal proof of correctness; extensive simulation is therefore still the most common practice in industry. Predicate detection on program traces (also called runtime verification) is a way to verify the correctness of these computations. In addition, the large number of states makes programs hard to debug; one way of easing this complexity is to group states or processes.
4
Distributed Computation as a Partially Ordered Set
Partial-order models. A poset (X, P): X is a set, and P is a reflexive, antisymmetric, and transitive binary relation on X. Lamport 1978: the "happened-before" relation; f1 → e3 implies c(f1) < c(e3). Fidge 1991 & Mattern 1989: vector clocks.
[Figure: an example computation. P1 executes e1 (1,0,0), e2 (2,2,0), e3 (3,2,0); P2 executes f1 (0,1,0), f2 (0,2,0), f3 (0,3,0); P3 executes g1 (0,2,1), g2 (3,2,2).]
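As a concrete illustration of the vector-clock order, here is a minimal sketch (in Python; the slides show no code, so the function names are my own) of the standard component-wise comparison: u happened before v exactly when u is component-wise at most v and the two differ.

```python
def happened_before(u, v):
    """Vector-clock order: u -> v iff u <= v componentwise and u != v."""
    return all(a <= b for a, b in zip(u, v)) and u != v

def concurrent(u, v):
    """Two events are concurrent iff neither happened before the other."""
    return not happened_before(u, v) and not happened_before(v, u)

# From the example computation: f1 -> e3, while e2 and f3 are concurrent.
assert happened_before((0, 1, 0), (3, 2, 0))   # f1 -> e3
assert concurrent((2, 2, 0), (0, 3, 0))        # e2 || f3
```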
5
Optimal chain partition of a poset
Width: the size of the largest antichain (a subset of the poset in which every distinct pair is mutually incomparable). A poset cannot be partitioned into k chains if k < width(P) [R. P. Dilworth]. Applications: debugging (visualization), testing and analyzing, bounded-sum predicates (x1 + x2 + x3 < k), mutual exclusion violation. Let's define what we mean by an optimal chain partition of a poset. The width of the poset is the size of the largest subset in which every distinct pair is incomparable, and by Dilworth's theorem a poset can be partitioned into k chains if and only if k is greater than or equal to the width. In short, the size of an optimal chain partition of a poset equals its width. Of what use is this to us? We can use it to visualize a distributed computation for debugging purposes, and we can use it in predicate detection; a pairwise antichain check is sketched below. Next I'll go over an example to demonstrate the relation between the mutual exclusion problem and optimal chain partition.
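Continuing the sketch above, an antichain is just a set of pairwise-concurrent events, so the width witness the talk refers to can be checked with a pairwise scan (again my own helper, not code from the talk):

```python
from itertools import combinations

def is_antichain(events):
    """True iff every distinct pair of events is mutually incomparable."""
    return all(concurrent(u, v) for u, v in combinations(events, 2))
```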
6
Mutual exclusion violation example
[Figure: the example computation again, with the critical-section events highlighted in green.]
Let's look at the previous computation and assume that the green events have executed the critical section. We can trivially partition this poset with one chain per process. If we examine the timestamps, it is easy to see that we can attach the third chain to the end of the first chain, which reduces the partition to two chains. To show that we cannot reduce this partition any further, we need to find an antichain of size two: here are two concurrent critical events, so we have detected a mutual exclusion violation. A sketch of this check follows.
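A minimal sketch of that check using the helpers above: a violation is witnessed by any two concurrent critical-section events. The timestamps below are a hypothetical concurrent pair, since the slide's green events are not recoverable from the transcript:

```python
from itertools import combinations

def find_mutex_violation(critical_events):
    """Return a pair of concurrent critical events, or None if none exist."""
    for u, v in combinations(critical_events, 2):
        if concurrent(u, v):
            return (u, v)
    return None

print(find_mutex_violation([(2, 2, 0), (0, 3, 0)]))  # -> ((2, 2, 0), (0, 3, 0))
```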
7
Problem definition & Previous algorithms
Given a chain partition C = {C1, …, CN} of P into N disjoint chains, rearrange these chains into a chain partition with the fewest number of chains. Previous algorithms answer the question: given k chains, is it possible to partition the poset into k-1 chains? Bogart and Magagnosc (BM); Tomlinson and Garg (TG). We define the chain reduction problem as follows: given a chain partition of P into N disjoint chains, rearrange the elements into an optimal chain partition of the poset. The previous algorithms are by Bogart and Magagnosc and by Tomlinson and Garg; they reduce the chain partition by one if possible, and otherwise return an antichain. From here on I'll refer to the Bogart and Magagnosc algorithm as BM and to Tomlinson and Garg's algorithm as TG. I am going to go over these algorithms using an example.
8
Bogart & Magagnosc. A sequence of elements a0, b0, a1, b1, …, as, bs is a reducing sequence if:
a0 is the least element of some chain,
bi is the immediate predecessor of ai+1 in some chain,
all bi's are distinct,
for all i: ai > bi in the partial order,
bs is the greatest element of its chain.
[Figure: an initial partition into four chains C1-C4 of vector-timestamped events, with the reducing sequence drawn across them.]
BM keeps an adjacency list for each event. It starts with an element and tries to find the pattern called a reducing sequence, which enables the reduction. As an example, if we start with the first tail, we find that it is less than (3,4,0), that (0,3,0) is less than (0,3,7), and that (0,0,3) is less than (0,0,4). By rearranging the chains along this sequence we reduce the chain partition by one; a search sketch follows.
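The slides give no code, so here is a minimal breadth-first search for a reducing sequence, a sketch of the BM idea rather than their exact data structures (`less_than` would be the vector-clock test above; chains are lists sorted least to greatest):

```python
from collections import deque

def find_reducing_sequence(chains, less_than):
    """Return a reducing sequence [a0, b0, ..., as, bs], or None."""
    for start in chains:
        a0 = start[0]                        # a0: least element of some chain
        queue = deque([[a0]])                # partial sequences ending at some ai
        used = set()                         # all bi must be distinct
        while queue:
            seq = queue.popleft()
            ai = seq[-1]
            for chain in chains:
                for j, b in enumerate(chain):
                    if b in used or not less_than(b, ai):
                        continue             # require ai > bi in the order
                    used.add(b)
                    if j == len(chain) - 1:  # bs: greatest element of its chain
                        return seq + [b]
                    # otherwise b's immediate chain successor becomes a(i+1)
                    queue.append(seq + [b, chain[j + 1]])
    return None
```

Applying the splice the sequence implies (attach the chain suffix starting at each ai after bi) then removes one chain, as on the slide.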
9
Tomlinson and Garg
[Figure: four input chains feeding output chains through a spanning tree of queues. Input chains: (1,0,0),(2,0,0),(3,0,0),(3,4,0),(3,5,0); (0,1,0),(0,2,0),(0,3,0),(0,3,7),(0,3,8); (0,0,1),(0,0,2),(0,0,3); (0,0,4),(0,0,5),(0,0,6).]
TG solves the same problem by merging k chains into k-1 chains. It introduces a spanning tree of queues between the input chains and the output chains. At each step it compares only the chain heads to decide which event is placed next. For example, in the first step it finds that (0,0,1) is less than (0,0,4) and places (0,0,1) in the queue between those two input chains.
10
Tomlinson and Garg
[Figure: the merge after a few steps; the smallest heads, (0,0,1) and then (0,0,2), have been moved into the queues.]
11
Tomlinson and Garg
[Figure: a later step of the merge; more heads have been compared and moved toward the output chains.]
12
Tomlinson and Garg
[Figure: the output chains now begin with (1,0,0), (2,0,0), (3,0,0).]
This process continues until it finds an empty input chain
13
Tomlinson and Garg
[Figure: the first input chain is empty; the remaining chain contents are appended to the output chains.]
...or until it finds that all the heads are incomparable to each other. In this example, it finds the first input chain empty and appends the rest to the output chains.
14
Tomlinson and Garg
[Figure: the final result, three output chains: (1,0,0),(2,0,0),(3,0,0),(3,4,0),(3,5,0); (0,1,0),(0,2,0),(0,3,0),(0,3,7),(0,3,8); (0,0,1),(0,0,2),(0,0,3),(0,0,4),(0,0,5),(0,0,6).]
15
Offline vs Incremental
Offline: call the BM or TG algorithm repeatedly until there is no further reduction in the number of chains.
Incremental (naïve): place the new element into a new chain, and call the BM or TG algorithm once.
We can use these algorithms in both an offline and an incremental manner. In the offline version we start with the trivial partition and call the algorithm over and over until it returns an antichain. In the naïve incremental version we create a new chain containing only the new element, call the algorithm, and use its output as the new chain partition; both drivers are sketched below. In order to operate on incremental posets we need to make some assumptions.
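A sketch of both drivers, treating a single reduction step (BM or TG) as a black box. The name `reduce_once` is hypothetical; it stands for either algorithm, returning a partition with one fewer chain or None when no reduction exists:

```python
def offline_partition(chains, reduce_once):
    """Offline: start from the trivial partition and reduce until stuck."""
    while True:
        smaller = reduce_once(chains)   # k chains -> k-1 chains, or None
        if smaller is None:             # no reduction left: size equals width
            return chains
        chains = smaller

def naive_incremental_insert(chains, event, reduce_once):
    """Naive incremental: add a singleton chain, then try one reduction."""
    chains = chains + [[event]]
    smaller = reduce_once(chains)
    return chains if smaller is None else smaller
```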
16
Linear extension hypothesis [Bouchitte & Rampon]
Let f be the new event; then we assume that all the events that happened before f have already arrived and been processed.
We assume that events arrive according to the linear extension hypothesis defined by Bouchitte and Rampon, which says that the order of event arrivals is a linearization of the poset. As an example, let f be the new event; we then assume that all the events that happened before f have already arrived and been processed. A sanity check is sketched below. We implemented these algorithms and measured their performance.
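Under this hypothesis no event may arrive before one of its predecessors, which can be checked directly on the vector clocks (using the `happened_before` helper defined earlier):

```python
def respects_linear_extension(arrival_order):
    """True iff the arrival sequence is a linearization of the poset."""
    seen = []
    for e in arrival_order:
        # A violation: some already-arrived event depends on e.
        if any(happened_before(e, earlier) for earlier in seen):
            return False
        seen.append(e)
    return True
```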
17
Memory Usage
We restricted the memory usage to 512 MB. In terms of memory usage, TG is better than BM. This is expected, since BM creates an adjacency list for every event.
18
Running times
In terms of running time, the incremental algorithms are far behind the offline versions. This is also expected, since they compare the same events over and over. Of the two offline algorithms, TG performed better than BM; I believe this is due to the setup cost of BM, since at every step the adjacency lists are modified. Now I move on to the new incremental algorithm.
19
Idea behind Chain Partitioner
Maximum antichain. A new element either: has no effect on the maximum antichain; updates the maximum antichain by replacing an event; or expands the maximum antichain. Given the maximum antichain, it is sufficient to compare the new element with the maximum antichain.
I want to give the idea behind the new algorithm, which we call Chain Partitioner, or CP for short. Assume that we have seen the events in the blue shaded area so far, and that the red dashed line shows the maximum antichain in that subposet. A new element may affect the chain partition in three ways: it has no effect on the maximum antichain, it updates the antichain by replacing an event, or it enlarges the maximum antichain. Therefore we don't need to start from the beginning: it is sufficient to compare the new element with the maximum antichain to determine the partition. A control-flow sketch follows.
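The following is a minimal control-flow sketch of this pruning idea, with my own class and method names; the real CP also runs a modified TG merge and maintains history chains, which are only stubbed here:

```python
class ChainPartitioner:
    """Sketch of CP: compare a new event only against the work-space
    chain tails; events pruned out of the frontier live in history."""

    def __init__(self):
        self.history = []             # chains pruned out of the work space
        self.work = []                # active chains; tails form the frontier

    def insert(self, event):
        for chain in self.work:       # try to extend an existing chain
            if happened_before(chain[-1], event):
                chain.append(event)
                return
        self.work.append([event])     # no tail below event: open a new chain
        self._merge()                 # only now is a merge call needed

    def _merge(self):
        # Stub for the modified TG merge: it would merge the work chains
        # back down to width-many chains and move prefixes that can no
        # longer belong to a maximum antichain into self.history.
        pass
```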
20
CP example
[Figure: the example computation with events (1,0,0), (0,1,0), (1,2,0), (2,0,0), (1,2,1), (2,3,0), (3,0,0), (1,2,3), (2,4,1), (4,3,0).]
To describe the CP algorithm, I'll go over an example, and we'll see the effect of these three cases. Assume that the following computation is given and that we have observed this linearization, meaning that the events arrive in this order.
Linearization: (1,0,0), (0,1,0), (1,2,0), (2,0,0), (1,2,1), (2,3,0), (3,0,0), (1,2,3), (2,4,1), (4,3,0)
21
CP example
CP has two sets of chains, called the history and the work space. Let's start our example. CP places the first element as a new chain in the work space, adds the second one as a second chain, and finds a tail to append the third one to. But it cannot find a chain to append (2,0,0) to, so it creates a new chain for it and then makes a merge call. We also modified the merge so that it keeps history chains: it moves elements to the history whenever it finds an antichain of size k or k-1 (for this example, k = 3). Merge takes CP's work chains as input chains and creates the output chains, the history chains, and the tree structure.
[Figure: the work space before the merge call, with k = 3.]
22
CP example
In the first step, merge finds that (1,0,0) is less than (2,0,0) and finds an antichain of size 2, so it places (1,0,0) in the history queues and updates the spanning tree.
[Figure: the merge state after the first step.]
23
CP example
And it continues until there is an empty chain.
[Figure: the merge state; (1,0,0) and (0,1,0) are now in the history queues.]
24
CP example
Then merge returns the result to CP. CP updates its work space with the output queues of the merge and appends the history chains to its history space. Let's continue introducing new events to CP.
[Figure: CP's state after the first merge call: history {(1,0,0)}, {(0,1,0)}; work space {(1,2,0)}, {(2,0,0)}.]
25
CP example
CP finds a place for the fifth and sixth events in the existing chains. However, it cannot place (3,0,0), so it makes a merge call.
[Figure: work space {(1,2,0),(1,2,1)}, {(2,0,0),(2,3,0)}, {(3,0,0)} at the second merge call.]
26
CP example
[Figure: the second merge call in progress; (2,0,0) has been moved into a queue between two input chains.]
27
CP example
At the third round, merge finds that all the heads are incomparable to each other, so it stops and returns the result to CP.
[Figure: the merge halted with heads (1,2,1), (2,3,0), (3,0,0) pairwise incomparable.]
28
CP example
CP updates its work space with the input chains. If we continue, we see that CP places the rest of the elements into the existing chains without another merge call. In conclusion, for 10 events we made 2 merge calls, while the naïve algorithm makes 10 calls. Also, by pruning the work space we eliminate redundant comparisons. We implemented this algorithm and compared it to TG, since TG had the best performance among the previous algorithms.
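For instance, driving the sketch class above with the slide's linearization (remember `_merge` is a stub, so this shows only the append behavior; on this particular input the greedy appends alone already end with three work chains, matching the width k = 3):

```python
cp = ChainPartitioner()
for e in [(1, 0, 0), (0, 1, 0), (1, 2, 0), (2, 0, 0), (1, 2, 1),
          (2, 3, 0), (3, 0, 0), (1, 2, 3), (2, 4, 1), (4, 3, 0)]:
    cp.insert(e)
print(cp.work)   # three chains on this input
```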
29
Running times
Looking at the simulation results, we find that pruning the work space has a big impact on the incremental algorithm. To better understand the behavior, we enlarged our set of test cases.
30
Test suites
Seven new test suites were created. Each test suite contains 18 test cases, with initial partitions varying from 10 to 450 chains and sizes varying from 100 to 70,000 events. We used a fixed vector-clock size of 10. Reducing factor = (N - w)/N, where N is the size of the initial partition and w is the width of the poset; for example, an initial partition of 100 chains over a poset of width 40 has reducing factor 0.6. Posets are randomly created according to a given width and size, and each is partitioned to achieve a given reducing factor; the test suites differ in their reducing factor.
31
Reducing factor effect
Here we show only 3 of the 18 test cases. Unfortunately, there is no distinct cut-off point that holds for all cases, but TG computes at best 8 times faster than CP (when the reducing factor is 0), while CP computes at best 30 times faster than TG (when the reducing factor is 0.6). To better interpret the results, we calculated the average time spent per event. (In the first test case, which has 5,000 elements, the incremental algorithm wins once the reducing factor exceeds 0.2; for the others the crossover is 0.4.)
32
Average run time per event
On average, CP performs better when the reducing factor is higher than 0.2.
33
Comparison with related work
34
Concluding Remarks
Partitioning a distributed computation: under the linear extension hypothesis, pruning the work space (without any significant extra cost) improves the performance of the incremental algorithm. Main limitation: for x1 + x2 + x3 < k, efficient only for small k. Future work: a decentralized algorithm; integration with computation slicing.
In conclusion, we presented a new incremental algorithm for finding the optimal chain partition of distributed program traces, and showed that exploiting the structure of the poset improves the algorithm's performance. The main limitation of CP is that for bounded-sum predicates the variables have to be positive integers, and if they are not binary, CP works efficiently only for small k. For future work, it would be nice to find a decentralized algorithm, and integrating it with the computation slicing technique would also be worthwhile.
35
Questions?
36
Problem definition & Previous algorithms (backup)
Given a chain partition C = {C1, …, CN} of P into N disjoint chains, rearrange these chains into a chain partition with the fewest number of chains. Previous algorithms answer the question: given k chains, is it possible to partition the poset into k-1 chains? Bogart and Magagnosc (BM); Tomlinson and Garg (TG). Let P be a partially ordered set with n elements; we define the chain reduction problem as follows: given a chain partition of P into N disjoint chains, rearrange the elements into an optimal chain partition of the poset. The previous algorithms, by Bogart and Magagnosc and by Tomlinson and Garg, reduce the chain partition by one if possible and otherwise return an antichain. I am going to go over these algorithms using an example.
37
Trace Model: Total Order vs Partial Order
Total order: interleaving of events in a trace; relevant tools: Temporal Rover [Drusinsky 00], Java-MaC [Kim, Kannan, Lee, Sokolsky, and Viswanathan 01], jPaX [Havelund and Rosu 01]. Partial order: Lamport's happened-before model, e.g., jMPaX [Sen, Rosu, and Agha 03]. Total order: + low computational complexity. Partial order: + suitable for concurrent and distributed programs; + encodes an exponential number of total orders, so it captures bugs that may not be found with a total order.