Published by Cory Joseph. Modified over 9 years ago.
1
An Optimal Cache-Oblivious Priority Queue and Its Applications in Graph Algorithms. By Arge, Bender, Demaine, Holland-Minkley, and Munro. Presented by Adam Sheffer.
2
Priority Queue – A Reminder. Maintains a set of elements, each with a priority. Supports insert and delete-min operations. Example sequence: Insert(A,2), Insert(B,4), Insert(D,3), Insert(C,4), Delete-min, Insert(E,7), Delete-min, Insert(F,4). The two delete-min operations return A and then D.
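The example sequence can be replayed with any priority queue; here is a minimal stand-in using Python's heapq (a plain binary heap, not the cache-oblivious structure of this talk):

```python
import heapq

# Replay the slide's sequence; (priority, key) tuples order by priority.
pq = []
for key, pri in [("A", 2), ("B", 4), ("D", 3), ("C", 4)]:
    heapq.heappush(pq, (pri, key))

deleted = [heapq.heappop(pq)[1]]      # first delete-min
heapq.heappush(pq, (7, "E"))
deleted.append(heapq.heappop(pq)[1])  # second delete-min
heapq.heappush(pq, (4, "F"))
print(deleted)                        # the two deleted keys, in order
```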
3
Cache-Oblivious Alg. – A Reminder. Standard notations: the block size B and the memory size M are defined as usual. The algorithm cannot use B and M. An optimal paging strategy is assumed. The “tall-cache” assumption: the analysis may assume M = Ω(B²).
4
An Optimal Cache-Oblivious Priority Queue. In order to discuss optimality, a lower bound is required. In [Aggarwal & Vitter ’88], it was proved that the number of memory transfers required for sorting N elements is Θ((N/B)·log_{M/B}(N/B)). If we could perform both insert and delete-min with only o((1/B)·log_{M/B}(N/B)) memory transfers, we could sort more efficiently than this bound allows.
5
An Optimal Solution? Why not just use a cache-oblivious B-tree? Inserting N elements into a B-tree takes O(log_B N) memory transfers per insert. Compared to the O((1/B)·log_{M/B}(N/B)) per-operation bound we aim for, we are lacking a factor of B·log_B N / log_{M/B}(N/B), which is roughly B for reasonable parameter values.
6
The Optimal Priority Queue. We want a queue which performs both insert and delete-min with O((1/B)·log_{M/B}(N/B)) memory transfers. For reasonable values of N, M, and B, this bound is smaller than 1, so a single operation cannot always afford even one memory transfer. We need an amortized analysis.
7
Similar Work. [Brodal and Fagerberg ’02] presents the funnel heap, a cache-oblivious priority queue with exactly the same bounds. [Brodal et al. ’04] presents a cache-oblivious priority queue which also supports the update operation. [Chowdhury and Ramachandran ’04] supports the decrease-key operation, with the same bounds.
8
The Main Data Structure. The DS consists of levels of doubly-exponentially decreasing sizes N, N^{2/3}, N^{4/9}, …; the level of size X holds Θ(X) cells. The last (smallest) level holds at most a constant number of cells.
9
The Inside of a Single Level. A level of size X contains one up buffer of up to X cells and at most X^{1/3} down buffers, each with Θ(X^{2/3}) elements. The elements inside a buffer are not ordered.
10
An Order Between Levels. Within a level, the down buffers are ordered: every element of a down buffer has a smaller priority than every element of the next down buffer. Elements in the last down buffer of a level have smaller priorities than the elements in the first down buffer of the next larger level. There is no such order involving the up buffers.
11
The Space Complexity of the DS. We verify that the size of the largest level stays Θ(N) by occasionally performing a global rebuilding (explained later on). We can store the buffers consecutively in a large array of size O(N), since the level sizes N, N^{2/3}, N^{4/9}, … sum to O(N).
12
The Push Operation. The operation pushes X^{2/3} input elements into a level of size X. Works as follows: –First, the input elements are sorted. –By scanning through the down buffers, each input element is appended to the end of the appropriate buffer. –Input elements with a larger priority than all the elements in the down buffers are appended to the end of the up buffer.
13
An Illustrated Push. (Figure: sorted input elements being distributed among the down buffers and the up buffer.)
14
The Push Operation (cont.). When a down buffer grows to a size of 2X^{2/3}, it is split into two buffers of size X^{2/3} each. If there are already X^{1/3} down buffers, the last down buffer is moved into the up buffer. When the up buffer is full, its elements are recursively pushed into the next larger level.
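A toy sketch of one level's push (the dictionary layout and the parameters split_size and max_down are my stand-ins for the 2X^{2/3} split threshold and the X^{1/3} buffer limit; this is a plain in-memory model, not a cache-oblivious implementation):

```python
def push_level(level, elements, split_size=8, max_down=4):
    """Push elements into one level.

    level: {"down": [list, ...],     # down buffers (unordered inside)
            "down_max": [int, ...],  # largest priority each down buffer may hold
            "up": []}                # the level's up buffer
    """
    for x in sorted(elements):
        for i, m in enumerate(level["down_max"]):
            if x <= m:                       # first down buffer whose range fits
                level["down"][i].append(x)
                break
        else:                                # larger than all down buffers
            level["up"].append(x)
    # split any down buffer that grew too large, around its median
    i = 0
    while i < len(level["down"]):
        buf = level["down"][i]
        if len(buf) >= split_size:
            buf.sort()
            mid = len(buf) // 2
            level["down"][i:i + 1] = [buf[:mid], buf[mid:]]
            level["down_max"][i:i + 1] = [buf[mid - 1], level["down_max"][i]]
        i += 1
    # too many down buffers: move the last one into the up buffer
    while len(level["down"]) > max_down:
        level["up"].extend(level["down"].pop())
        level["down_max"].pop()
```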
15
Another Illustrated Push. (Figure: a push overfills a down buffer, which is then split.)
16
Another Illustrated Push. (Figure: the up buffer overflows, triggering a recursive push into the next larger level.)
17
The Pull Operation. The operation pulls the X^{2/3} elements with the lowest priorities from a level of size X. –When there are not enough elements in the down buffers, a recursive pull from the next larger level is performed first.
18
Finally, The Algorithm. Two additional buffers, each with a small (constant-size) number of cells: –The insertion buffer holds the most recently inserted elements. –The deletion buffer holds the elements with the smallest priorities (sorted). Both buffers are permanently maintained in the memory.
19
The Insert Operation. An insert operation appends the new element to the end of the insertion buffer. When the buffer is full, its elements are pushed into the smallest level of the DS. (Figure: Insert(A,5) appends A to the insertion buffer; a full buffer is pushed into the smallest level.)
20
The Delete-Min Operation. A delete-min removes the first element of the deletion buffer. When the buffer is empty, we pull elements from the smallest level of the DS (and sort them). (Figure: delete-min() takes the first buffer element; an empty buffer is refilled by a pull.)
21
A Slight Correction. What if an inserted element has a lower priority than the last element in the deletion buffer? The last element of the deletion buffer is moved into the insertion buffer, and the new element is moved into the deletion buffer instead. (Figure: inserting priority 7 when the deletion buffer holds 2, 6, 10: the 10 moves to the insertion buffer and the 7 enters the deletion buffer.)
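The correction can be sketched as follows (toy in-memory buffers; the function and field names are mine):

```python
import bisect

def insert(pq, key, pri):
    """pq = {"deletion": sorted list of (pri, key),
             "insertion": unsorted list of (pri, key)}  -- toy stand-ins."""
    dele, ins = pq["deletion"], pq["insertion"]
    if dele and pri < dele[-1][0]:
        # new element beats the deletion buffer's largest element: swap them
        ins.append(dele.pop())
        bisect.insort(dele, (pri, key))
    else:
        ins.append((pri, key))
```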
22
Amortized Analysis of a Push. Inserting X^{2/3} elements into a level of size X involves: sorting the input elements; scanning through the down buffers and appending the appropriate elements to them; appending the rest of the elements to the end of the up buffer; splitting overfull down buffers; moving down buffers to the end of the up buffer; and recursive pushes (these are ignored here, since each recursive push is charged to its own level).
23
Amortized Analysis of a Push. Splitting one down buffer with 2X^{2/3} elements into two with X^{2/3} elements each: in [Frigo et al. ’99], it was shown how to compute a median in O(n/B) memory transfers; therefore, a split takes O(X^{2/3}/B) memory transfers. At least X^{2/3} elements are inserted into a buffer before it splits. Hence, for each element that a push operation inserts into a buffer, it pays O(1/B). This can also pay for the moving of a down buffer into the up buffer.
25
Amortized Analysis of a Push. Scanning through the down buffers and appending the appropriate elements to them: there are at most X^{1/3} down buffers. Tall-cache assumption – all levels of size at most M can be permanently kept in memory (their total size is O(M)). Therefore, we may assume that X > M ≥ B². If X ≥ B³ then X^{1/3} ≤ X^{2/3}/B, so visiting the buffers is dominated by the scanning cost. We are left with the case B² < X < B³, which applies to at most a single level (the next smaller level has size X^{2/3} < B²), with at most X^{1/3} < B down buffers. A block from each such buffer can be permanently kept in memory, using fewer than B·B ≤ M cells.
27
Amortized Analysis of a Push. A push of X^{2/3} elements into a level of size X can be performed with O((X^{2/3}/B)·log_{M/B}(X/B)) amortized memory transfers, not counting recursive push operations. A pull of X^{2/3} elements from a level of size X can be analyzed for the same result.
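As a sketch of how the components above combine (my notation, following the paper's bounds; the X^{1/3} term is absorbed using the tall-cache case analysis of the previous slide):

```latex
\underbrace{O\!\Big(\tfrac{X^{2/3}}{B}\log_{M/B}\tfrac{X^{2/3}}{B}\Big)}_{\text{sort the input}}
+ \underbrace{O\!\Big(\tfrac{X^{2/3}}{B} + X^{1/3}\Big)}_{\text{scan \& append}}
+ \underbrace{O\!\Big(\tfrac{X^{2/3}}{B}\Big)}_{\text{splits (amortized)}}
= O\!\Big(\tfrac{X^{2/3}}{B}\log_{M/B}\tfrac{X}{B}\Big)
```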
28
Analysis of Insert. An insert increases the size of the insertion buffer by one. It causes memory transfers only when the insertion buffer is full. We need the amortized number of memory transfers per insert to be O((1/B)·log_{M/B}(N/B)).
29
A Potential Function. We use two types of coins: a level-X push coin and a level-X pull coin, each worth Θ((1/B)·log_{M/B}(X/B)) memory transfers.
30
Spreading the Coins? Each element in the insertion buffer has a push coin and a pull coin for every level of the DS. On level X, each element in the first half of a down buffer has a pull coin for every level of size at most X. On level X, each element in the second half of a down buffer, or in the up buffer, has a push coin for every level larger than X, and a pull coin for all levels.
31
Amortized Cost of an Insert. An insert operation adds a single element to the insertion buffer. This element needs a push coin and a pull coin for every level in the DS. The cost of an insert is therefore the sum, over all levels X, of O((1/B)·log_{M/B}(X/B)), which is O((1/B)·log_{M/B}(N/B)).
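The sum over all levels collapses because the level sizes decrease doubly exponentially; a sketch of the calculation, using log_{M/B} X_i = (2/3)^i · log_{M/B} N for the level sizes X_i = N^{(2/3)^i}:

```latex
\sum_{i \ge 0} \frac{1}{B}\,\log_{M/B}\frac{X_i}{B}
\;\le\; \frac{1}{B}\sum_{i \ge 0}\Big(\frac{2}{3}\Big)^{i}\log_{M/B} N
\;=\; \frac{3}{B}\,\log_{M/B} N
\;=\; O\!\Big(\frac{1}{B}\,\log_{M/B}\frac{N}{B}\Big)
```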
32
Paying for a Push Operation. Pushing the elements of the up buffer of level X into the next larger level Y. Before the push, each element had a push coin for every level of size at least Y and a pull coin for every level. After the push, in the worst case, each element needs a push coin for every level larger than Y and a pull coin for every level. We thus get at least one spare coin per element, each worth Θ((1/B)·log_{M/B}(Y/B)). These pay for the memory transfers of the push.
33
Splitting a Buffer
34
Moving Up a Down Buffer. When moving a down buffer of level X into the up buffer, its elements need additional push coins for the levels above X, and its first-half elements need additional pull coins for the levels above X. This is exactly the set of coins that was released by splits: after a split, the elements that end up in the first half of a down buffer no longer need those coins.
35
Summing Up the Analysis. A similar analysis shows that the coins pay for the pull operations. An insert operation costs O((1/B)·log_{M/B}(N/B)) amortized memory transfers. A delete-min operation is free, since it does not add any coins: its memory transfers are paid for by existing pull coins.
36
Global Rebuilding. In order to maintain the size invariants, we rebuild the DS after every Θ(N) operations. At each rebuild, we set N to the current number of elements. After the rebuild, the largest level holds an empty up buffer and all the elements in its down buffers: at most N^{1/3} down buffers with Θ(N^{2/3}) elements each, and possibly a single buffer with fewer elements. The global rebuilding can be done by sorting and scanning, using O((N/B)·log_{M/B}(N/B)) memory transfers.
37
Global Rebuilding (cont.). After the rebuilding, every element is in the first half of a down buffer, so no push coins are needed. We bound the cost of all the pull coins by assuming that every element lies in the largest level, of size Θ(N).
38
Global Rebuilding (cont.). The cost of the global rebuilding is O((N/B)·log_{M/B}(N/B)). We split it between the Θ(N) operations that occurred since the last rebuilding. This means that both insert and delete-min take O((1/B)·log_{M/B}(N/B)) amortized memory transfers.
39
The Delete Operation. It is possible to support a delete operation, which takes O((1/B)·log_{M/B}(N/B)) amortized memory transfers. The input of the operation is the id of the element and its priority.
40
The Delete Operation (cont.) Elements with the same priority are ordered according to their id. When a delete operation occurs, a special delete element is inserted, with the priority and id of the input.
41
The Delete Operation (cont.). A delete-min operation checks if the two first elements in the deletion buffer have the same id. If so, it throws them away and starts over. (Figure: a deletion buffer containing {2,D}, {6,E}, {9,G}, where {2,D} is matched by a delete element with the same priority and id.)
42
The Delete Operation (cont.). (Figure: after the matching {2,D} pair is discarded, the delete-min returns E, leaving {9,G} in the deletion buffer.)
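The delete-element mechanism can be sketched with an ordinary heap (a plain Python stand-in, not the cache-oblivious structure; ordering by (priority, id) makes an element and its delete element adjacent):

```python
import heapq

def delete(pq, key, pri):
    # lazy deletion: insert a matching "delete element" (tombstone)
    heapq.heappush(pq, (pri, key, "tombstone"))

def delete_min(pq):
    while pq:
        pri, key, _ = heapq.heappop(pq)
        # a matching element/tombstone pair sorts adjacently: discard both
        if pq and pq[0][0] == pri and pq[0][1] == key:
            heapq.heappop(pq)
            continue
        return (pri, key)
    return None
```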
43
The Delete Operation (cont.). In a global rebuilding, every pair of elements with the same id is removed before the new N is chosen. (Figure: in the sequence 5,E 5,F 5,E 3,J 5,K 7,A 9,B, the matching {5,E} pair is removed.)
44
An Application of the Priority Queue.
45
List Ranking We are given a linked list with weights on the edges (or an array, with each of its cells containing the position of the next). We need to rank each node according to its weighted distance from the end of the list.
46
List Ranking (cont.). (Figure: an example list with edge weights 7, 1, 6, 0, 2, 5, 3, 4; each node is ranked by its weighted distance from the end of the list.)
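Sequentially, the ranking itself is easy; a toy Python sketch (the names list_rank, succ, and weight are mine), useful as a correctness reference for the cache-oblivious algorithm that follows:

```python
def list_rank(succ, weight):
    """Rank each node by its weighted distance to the end of the list.

    succ[i]: index of i's successor (None at the tail)
    weight[i]: weight of the edge from i to succ[i]
    """
    # the head is the only node nobody points to
    pointed_to = {s for s in succ if s is not None}
    head = next(i for i in range(len(succ)) if i not in pointed_to)
    order = []
    i = head
    while i is not None:          # walk the list to record the node order
        order.append(i)
        i = succ[i]
    rank = [0] * len(succ)
    for i in reversed(order):     # accumulate weights from the tail backwards
        rank[i] = 0 if succ[i] is None else rank[succ[i]] + weight[i]
    return rank
```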
47
High-Level Algorithm. Find an independent set of size at least N/3. Bridge-out the nodes of the set. Remove those nodes from the list and solve the smaller instance recursively. Reinsert the nodes and fix their ranks.
51
Bridging Out. Create a second copy of the list and sort it by successor position. (Figure: the list A B C D E F G H, and its copy sorted by successor position: E A B G C H F D.)
52
Bridging Out (cont.). Traverse both lists simultaneously. For each node which precedes a marked cell, replace its successor pointer with the position of the marked node’s successor (absorbing the marked node’s edge weight). (Figure: the two lists A B C D E F G H and E A B G C H F D scanned in parallel.)
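In memory, bridging out reduces to one pass; a hypothetical sketch (the function name is mine, and `marked` must be an independent set, so no two consecutive nodes are marked and a single hop suffices). A real cache-oblivious version would use the sort-and-scan procedure described above:

```python
def bridge_out(succ, weight, marked):
    """Bypass each marked node: its predecessor inherits its successor
    and absorbs its edge weight (in place)."""
    for i in range(len(succ)):
        s = succ[i]
        if s is not None and s in marked:
            weight[i] += weight[s]   # absorb the skipped edge's weight
            succ[i] = succ[s]        # skip over the marked node
```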
53
Removing Marked Elements. Scan the list again, and move the unmarked nodes into a new list. During the scan, maintain another list of old and new node positions. Sort the first list by successor position. Scan the two lists simultaneously to fix the successor positions.
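A toy in-memory version of this compaction step (the function and names are mine; it assumes bridge-out already ran, so no remaining successor points at a marked node):

```python
def remove_marked(succ, weight, marked):
    """Compact the unmarked nodes into a new list, remembering old->new positions."""
    new_pos = {}
    new_succ, new_weight = [], []
    for i in range(len(succ)):
        if i not in marked:
            new_pos[i] = len(new_succ)   # record where node i lands
            new_succ.append(succ[i])
            new_weight.append(weight[i])
    # fix successor pointers to the new positions
    for j, s in enumerate(new_succ):
        new_succ[j] = new_pos.get(s) if s is not None else None
    return new_succ, new_weight, new_pos
```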
54
Reinserting Marked Elements. Reinsert the nodes from the independent set, and update the ranks. This can be done using a few scans and sorts, as in the bridge-out phase.
55
List Ranking Analysis. Assuming that the independent set can be found in O(sort(N)) = O((N/B)·log_{M/B}(N/B)) memory transfers, we obtain the recurrence T(N) = T(2N/3) + O(sort(N)), which solves to T(N) = O(sort(N)).
56
Independent Set. We will show how to compute a 3-coloring of the list. Each color class is an independent set, so we can choose the most common color, which contains at least N/3 of the nodes.
57
3-Coloring. Split the list into forward-running sub-lists and backward-running sub-lists. Each node is a member of a single list, unless it is the head of one list and the tail of another.
58
Coloring a Forward List. In a forward list, we color the first node grey, and then alternate between red and grey.
59
Coloring a Backward List. In a backward list, we color the first node orange, and then alternate between red and orange.
60
3-Coloring (cont.). A node which gets one color as a head and a different color as a tail is colored with the color of the head.
61
Cache-Oblivious 3-Coloring. We show how to cache-obliviously color the forward lists: –Find all the head nodes (this can be achieved by making a duplicate list, sorting it by successor position, and performing a simultaneous scan of both lists). –Color the head nodes grey. –Create a priority queue. For each head node, insert a red element whose priority is the position of the node’s successor. –While the queue is not empty, remove the minimum element, color its node, and if the node’s successor is at a higher position in the list, enter it into the queue with the alternate color.
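A toy in-memory sketch of the forward-list pass (the function name and list representation are mine; a real cache-oblivious version would find the heads with the sorts and scans described above, and the backward lists are handled symmetrically):

```python
import heapq

def color_forward_lists(succ):
    """Color the forward-running sub-lists grey/red via a priority queue.

    succ[i] is the position of i's successor (None at a tail). Nodes that
    belong only to backward lists are left uncolored (None) here.
    """
    n = len(succ)
    pred = [None] * n
    for j, s in enumerate(succ):
        if s is not None:
            pred[s] = j
    color = [None] * n
    pq = []
    for i in range(n):
        # head of a forward list: forward successor, and no earlier predecessor
        if succ[i] is not None and succ[i] > i and (pred[i] is None or pred[i] > i):
            color[i] = "grey"
            heapq.heappush(pq, (succ[i], "red"))
    # sweep left to right: each pop colors a node and queues its forward successor
    while pq:
        pos, c = heapq.heappop(pq)
        color[pos] = c
        nxt = succ[pos]
        if nxt is not None and nxt > pos:
            heapq.heappush(pq, (nxt, "red" if c == "grey" else "grey"))
    return color
```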
62
Summing Up. We can find a 3-coloring with O((N/B)·log_{M/B}(N/B)) (amortized) memory transfers. We can therefore find an independent set of size at least N/3 with O((N/B)·log_{M/B}(N/B)) (amortized) memory transfers. We can perform list ranking with O((N/B)·log_{M/B}(N/B)) (amortized) memory transfers.