Data Structures: Binomial Heaps and Fibonacci Heaps. Haim Kaplan & Uri Zwick


Data Structures: Binomial Heaps and Fibonacci Heaps. Haim Kaplan & Uri Zwick, December 2013

Heaps / Priority queues

Operation     Binary Heaps   Binomial Heaps   Lazy Binomial Heaps   Fibonacci Heaps
Insert        O(log n)       O(log n)         O(1)                  O(1)
Find-min      O(1)           O(1)             O(1)                  O(1)
Delete-min    O(log n)       O(log n)         O(log n)              O(log n)
Decrease-key  O(log n)       O(log n)         O(log n)              O(1)
Meld          –              O(log n)         O(1)                  O(1)
              (worst case)   (worst case)     (amortized)           (amortized)

Delete can be implemented using Decrease-key + Delete-min.
Decrease-key in O(1) time is important for Dijkstra and Prim.

Binomial Heaps [Vuillemin (1978)]

Binomial Trees: B0, B1, B2, B3, B4, …

Recursive definition: Bk is obtained by linking two copies of Bk−1; equivalently, the root of Bk has children Bk−1, Bk−2, …, B1, B0.

Min-heap Ordered Binomial Trees: key of child ≥ key of parent.

Tournaments → Binomial Trees: the children of x are the items that lost matches with x, in the order in which the matches took place.

Binomial Heap: a list of binomial trees, at most one of each rank, with a pointer to the root with minimal key. Each number n can be written in a unique way as a sum of powers of 2, e.g., 11 = (1011)2 = 8+2+1, so a heap with n items has at most log2(n+1) trees.
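The binary-representation view can be sketched in a few lines of Python (an illustrative aside, not part of the original slides): the ranks of the trees in a binomial heap holding n items are exactly the positions of the 1-bits of n.

```python
def binomial_heap_ranks(n):
    """Ranks of the trees in a binomial heap holding n items:
    the positions of the 1-bits in the binary representation of n."""
    return [i for i in range(n.bit_length()) if (n >> i) & 1]

# 11 = (1011)_2 = 8 + 2 + 1, so trees of ranks 0, 1 and 3
print(binomial_heap_ranks(11))  # prints [0, 1, 3]
```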

Ordered forest → Binary tree: 2 pointers per node; child – leftmost child, next – next “sibling”. Each node stores info, key, child, next.

Heap order → “half ordered”; Forest → Binary tree.

Binomial heap representation (“Structure V” [Brown (1978)]): 2 pointers per node (child, next); the heap object Q stores n, first, and min. No explicit rank information; how do we determine ranks?

Alternative representation (“Structure R” [Brown (1978)]): reverse the sibling pointers and make the lists circular; this avoids reversals during meld.

Linking binomial trees: given two Bk−1 trees with roots x and y, where a = key(x) ≤ key(y) = b, make y a child of x to obtain a Bk. O(1) time.

Linking binomial trees Linking in first representation Linking in second representation
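A minimal sketch of the link operation, assuming trees are represented with a plain children list rather than either of the slide representations (the Node class and its field names are illustrative, not from the slides):

```python
class Node:
    def __init__(self, key):
        self.key = key
        self.rank = 0        # number of children = rank of the tree
        self.children = []   # most recently linked child first

def link(x, y):
    """Link two binomial trees of equal rank k into one tree of rank k+1.
    The root with the smaller key becomes the new root."""
    assert x.rank == y.rank
    if y.key < x.key:
        x, y = y, x
    x.children.insert(0, y)  # y becomes the leftmost (highest-rank) child
    x.rank += 1
    return x
```

Either way around, linking touches O(1) pointers, which is what the O(1) time bound on the slide relies on.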

Melding binomial heaps: link trees of the same degree, like adding binary numbers. Maintain a pointer to the minimum. O(log n) time.

Insert: the new item is a one-tree binomial heap; meld it into the original heap. O(log n) time.

Delete-min: when we delete the minimum, its children form a binomial heap; meld it into the original heap. (In the first representation, the list of roots needs to be reversed.) O(log n) time.

Decrease-key using “sift-up”, e.g., Decrease-key(Q,I,7): decrease the key, then repeatedly swap the item with its parent while heap order is violated. This needs parent pointers (not needed before), and the node pointed to by I must be updated at each step. How can we do it?

Adding parent pointers: each node also stores a pointer to its parent.

Adding a level of indirection (“endogenous vs. exogenous”): nodes and items are distinct entities; the item I points to its node, and each node stores item, info, key, child, next, parent.


Lazy Binomial Heaps

Binomial Heaps: a list of binomial trees, at most one of each rank, sorted by rank (at most O(log n) trees); pointer to the root with minimal key.
Lazy Binomial Heaps: an arbitrary list of binomial trees (possibly n trees of size 1); pointer to the root with minimal key.


Lazy Meld: concatenate the two lists of trees; update the pointer to the root with minimal key. O(1) worst-case time.
Lazy Insert: add the new item to the list of roots; update the pointer to the root with minimal key. O(1) worst-case time.
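Lazy meld and insert can be sketched as follows (a simplification: trees are reduced to their root keys, and Python lists stand in for the linked root lists, so the concatenation here is not literally O(1) as it would be with linked lists):

```python
class LazyHeap:
    def __init__(self):
        self.roots = []   # arbitrary list of binomial-tree roots (here: keys)
        self.min = None   # pointer to the root with minimal key

    def insert(self, key):
        # Lazy insert: append a rank-0 tree, update the minimum pointer.
        self.roots.append(key)
        if self.min is None or key < self.min:
            self.min = key

    def meld(self, other):
        # Lazy meld: concatenate root lists, update the minimum pointer.
        self.roots += other.roots
        if other.min is not None and (self.min is None or other.min < self.min):
            self.min = other.min
```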

Lazy Delete-min: remove the minimum root and meld? May need Θ(n) time to find the new minimum.

Consolidating / Successive Linking: process the trees one by one, keeping an array of buckets indexed by rank; whenever two trees have the same rank, link them and continue with the resulting tree of the next rank.

Consolidating / Successive Linking: at the end of the process, we obtain a non-lazy binomial heap containing at most log2(n+1) trees, at most one of each rank.
Worst case cost – O(n). Amortized cost – O(log n). Potential = Number of Trees.

Cost of Consolidating
T0 – number of trees before; T1 – number of trees after; L – number of links.
T1 = T0 − L (each link reduces the number of trees by 1).
Total number of trees processed – T0 + L (each link creates a new tree).
Cost: putting trees into buckets or finding trees to link with – O(T0 + L); linking – O(L); handling the buckets – O(log2 n).
Total cost = O((T0 + L) + L + log2 n) = O(T0 + log2 n), as L ≤ T0.
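The bucket process described above can be sketched as follows, with each tree reduced to a (rank, root key) pair (an illustration of the control flow only; real linking also moves child pointers):

```python
def consolidate(trees):
    """Repeatedly link trees of equal rank until all ranks are distinct."""
    buckets = {}                       # rank -> the single tree of that rank
    for rank, key in trees:
        while rank in buckets:         # collision: link, carry to next rank
            _, other_key = buckets.pop(rank)
            key = min(key, other_key)  # smaller key becomes the new root
            rank += 1
        buckets[rank] = (rank, key)
    return sorted(buckets.values())
```

Each collision is one link, and each link removes one tree, which is exactly why the amortized accounting with Potential = Number of Trees works out.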

Amortized Cost of Consolidating
(Scaled) actual cost = T0 + log2 n.
Change in potential: ΔΦ = T1 − T0.
Amortized cost = (T0 + log2 n) + (T1 − T0) = T1 + log2 n ≤ 2 log2 n, as T1 ≤ log2 n.
Another view: a link decreases the potential by 1, which pays for handling all the trees involved in the link. The only “unaccounted” trees are those that were neither the input nor the output of a link operation.

Lazy Binomial Heaps
Operation     Actual cost      Change in potential   Amortized cost
Insert        O(1)             1                     O(1)
Find-min      O(1)             0                     O(1)
Meld          O(1)             0                     O(1)
Delete-min    O(k+T0+log n)    k−1+T1−T0             O(log n)
(k – rank of the deleted root)


One-pass successive linking: a tree produced by a link is immediately put in the output list and not linked again. Worst case cost – O(n). Amortized cost – O(log n). Potential = Number of Trees. Exercise: prove it!

One-pass successive linking: scan the list of trees, linking pairs of equal rank; each linked tree goes directly to the output list.
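One-pass linking differs from full consolidation only in where a freshly linked tree goes; a sketch under the same (rank, root key) simplification:

```python
def one_pass_link(trees):
    """Link pairs of equal-rank trees; a linked result goes straight to the
    output list and is never linked again in this pass."""
    buckets = {}     # rank -> tree still waiting for a partner of that rank
    output = []
    for rank, key in trees:
        if rank in buckets:
            _, other_key = buckets.pop(rank)
            output.append((rank + 1, min(key, other_key)))
        else:
            buckets[rank] = (rank, key)
    return output + list(buckets.values())
```

Note that, unlike full consolidation, the output may still contain two trees of the same rank; the exercise on the slide is to show the amortized bound holds anyway.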

Fibonacci Heaps [Fredman-Tarjan (1987)]

Decrease-key in O(1) time? Decrease-key(Q,x,30) preserves heap order, but Decrease-key(Q,x,10) creates a heap order violation. Cut the edge to x's parent: heap order is restored, but the tree is no longer binomial.

Fibonacci Heaps: a list of heap-ordered trees; pointer to the root with minimal key. Not all trees may appear in Fibonacci heaps.

Fibonacci heap representation: explicit ranks = degrees; doubly linked lists of children, in arbitrary order; child points to an arbitrary child. 4 pointers + rank + mark bit per node.

Each node x stores info, key, rank, child, next, prev, parent, and a mark bit.

Are simple cuts enough? A binomial tree of rank k contains 2^k nodes, but with cuts we may get trees of rank k containing only k+1 nodes. Ranks are then not necessarily O(log n), and the analysis breaks down.

Cascading cuts. Invariant: each node loses at most one child after becoming a child itself. To maintain the invariant, use a mark bit. Each node is initially unmarked. When a non-root node loses its first child, it becomes marked. When a marked node loses a second child, it is cut from its parent.

Invariant: each node loses at most one child after becoming a child itself. When x is cut from its parent y: x becomes unmarked; if y is unmarked, it becomes marked; if y is marked, it is cut from its parent. Our convention: roots are unmarked.

Cascading cuts: cut x from its parent y, then perform a cascading-cut process starting at x.
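A sketch of the cut-plus-cascading-cut process, assuming nodes carry parent pointers and mark bits as above (the class and function names are illustrative, not from the slides):

```python
class FibNode:
    def __init__(self, key):
        self.key = key
        self.parent = None
        self.children = []
        self.marked = False

def cascading_cut(x, roots):
    """Cut x from its parent; keep cutting marked ancestors.
    Cut nodes are appended to roots (the heap's root list)."""
    while x.parent is not None:
        y = x.parent
        y.children.remove(x)
        x.parent = None
        x.marked = False              # roots are unmarked by convention
        roots.append(x)
        if not y.marked:
            if y.parent is not None:  # non-root losing its first child
                y.marked = True
            return
        x = y                         # y loses a second child: cut it too
```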


Number of cuts. A decrease-key operation may trigger many cuts. Lemma 1: the first d decrease-key operations trigger at most 2d cuts. Proof in a nutshell: the number of times a second child is lost is at most the number of times a first child is lost. Potential = number of marked nodes.

Number of cuts. Potential = number of marked nodes. A cascading-cut process performing c cuts unmarks c − 1 nodes and marks at most one, so the amortized number of cuts is ≤ c + (1 − (c − 1)) = 2.

Trees formed by cascading cuts. Lemma 2: let x be a node of rank k and let y1, y2, …, yk be the current children of x, in the order in which they were linked to x. Then the rank of yi is at least i − 2. Proof: when yi was linked to x, y1, …, yi−1 were already children of x. At that time, the rank of x, and hence of yi, was at least i − 1. As yi is still a child of x, it has lost at most one child.

Trees formed by cascading cuts. Lemma 3: a node of rank k in a Fibonacci heap has at least Fk+2 ≥ φ^k descendants, including itself (φ = (1+√5)/2, the golden ratio).
n:  1 2 3 4 5 6 7  8  9
Fn: 1 1 2 3 5 8 13 21 34

Proof: let Sk be the minimum number of descendants of a node of rank at least k. By Lemma 2, the i-th child of such a node has rank at least i − 2, so Sk ≥ 2 + S0 + S1 + … + Sk−2, which gives Sk ≥ Fk+2.

Corollary: in a Fibonacci heap containing n items, all ranks are at most logφ n ≤ 1.4404 log2 n. Ranks are again O(log n). Are we done?
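The recurrence behind Lemma 3 is easy to check numerically (a verification sketch, not from the slides):

```python
def min_subtree_size(k):
    """S_k: minimum number of descendants (incl. itself) of a rank-k node.
    By Lemma 2: S_0 = 1, S_1 = 2, S_k = 2 + S_0 + S_1 + ... + S_{k-2}."""
    S = [1, 2]
    for j in range(2, k + 1):
        S.append(2 + sum(S[:j - 1]))
    return S[k]

def fib(n):
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

phi = (1 + 5 ** 0.5) / 2
# S_k equals F_{k+2}, which is at least phi^k
assert all(min_subtree_size(k) == fib(k + 2) for k in range(15))
assert all(min_subtree_size(k) >= phi ** k for k in range(15))
```

For a heap with n items this gives ranks of at most logφ n ≈ 1.4404 log2 n, matching the corollary.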

Putting it all together. Are we done? A cut increases the number of trees… We need a potential function that gives good amortized bounds on both successive linking and cascading cuts: Potential = #trees + 2·#marked.

Cost of Consolidating (Fibonacci heaps)
T0 – number of trees before; T1 – number of trees after; L – number of links.
T1 = T0 − L (each link reduces the number of trees by 1).
Total number of trees processed – T0 + L (each link creates a new tree).
Total cost = O((T0 + L) + L + logφ n) = O(T0 + logφ n), as L ≤ T0.
Only change: logφ n instead of log2 n.

Fibonacci heaps
Operation     Actual cost      ΔTrees       ΔMarks   Amortized cost
Insert        O(1)             1            0        O(1)
Find-min      O(1)             0            0        O(1)
Meld          O(1)             0            0        O(1)
Delete-min    O(k+T0+log n)    k−1+T1−T0    ≤ 0      O(log n)
Decrease-key  O(c)             c            ≤ 2−c    O(1)
(k – rank of the deleted root; c – number of cuts performed)
