Pipelined van Emde Boas Tree: Algorithms, Analysis, and Applications
Hao Wang and Bill Lin, University of California, San Diego (IEEE Infocom)

Presentation transcript:

Pipelined van Emde Boas Tree: Algorithms, Analysis, and Applications Hao Wang and Bill Lin University of California, San Diego

Introduction
- Priority queues are used in many network applications:
  - Per-flow weighted fair queueing
  - Management of per-flow packet buffers using DRAM
  - Maintenance of exact statistics counters for real-time network measurements
- Items in a priority queue are kept sorted at all times (e.g. smallest key first)
- Challenge: need to operate at high speeds (e.g. 10+ Gb/s)

Introduction
- The binary heap is a common data structure for priority queues
  - O(lg n) time complexity, where n is the number of items
  - e.g. in fine-grained per-flow weighted fair queueing, n can be very large (e.g. 1 million)
  - O(lg n) may be too slow for high line rates
- Pipelined heaps [Bhagwan, Lin 2000] [Ioannou, Katevenis 2001]
  - Reduce the time complexity to O(1)
  - At the expense of O(lg n) pipeline stages

This Talk
- Instead of pipelining binary heaps, we present a new approach based on pipelining van Emde Boas trees
  - van Emde Boas (vEB) trees were introduced in 1975
  - Instead of maintaining a priority queue of sorted items, maintain a sorted dictionary of keys
  - In many applications, keys are represented by a w-bit integer, so the possible keys come from a fixed universe of u = 2^w values
  - Only O(lg lg u) complexity, vs. O(lg n) for heaps
- Main result: a pipelined vEB tree with O(1) time per operation and O(lg lg u) pipeline stages

van Emde Boas (vEB) Trees
- Goal: maintain a sorted subset S of a universe U = {0, 1, …, u - 1} of size u = 2^w, subject to INSERT, DELETE, EXTRACT-MIN, SUCCESSOR, and PREDECESSOR
  - e.g. INSERT inserts a new item into the queue
  - EXTRACT-MIN removes the item with the smallest key

vEB Trees
- How the universe size u compares to n depends on the application
- If u is only polynomial in n, i.e. u = O(n^c), then O(lg lg u) = O(lg lg n), an exponential speedup over O(lg n)
- Example applications from the slide's comparison table: per-flow fair queues (w = 24 bits, a few million flows, so u ≈ n) and statistics counters (10^6 counters in 512 groups with 8-bit keys, so u « n); O(lg lg u) is 5 and 3 respectively, versus O(lg n) of roughly 20 and 8

vEB Trees
- Conceptually, think of the universe of w-bit keys, U = {0, 1, …, 2^w - 1}, as a binary tree of height w
- Think of a top part H of w/2 bits, and a bottom part of 2^(w/2) sub-trees of w/2 bits each: L[0], L[1], …, L[2^(w/2) - 1]
- H and L[0], …, L[2^(w/2) - 1] are recursively defined as vEB trees over w/2 bits
  (figure: the top tree H over the high w/2 bits, with the w/2-bit sub-trees L[0], L[1], …, L[2^(w/2) - 1] hanging below it)

vEB Trees
- Suppose w = 8 bits; consider e.g. a key x = 31
- Split x into a high part x_h and a low part x_l of w/2 = 4 bits each: x = 0001 1111, so x_h = 0001 and x_l = 1111
- Consider INSERT(x, S):
  - If x_h ∈ H, recursively call INSERT(x_l, L[x_h])
  - Else, recursively call INSERT(x_h, H) and INSERT(x_l, L[x_h])
  - The 2nd recursion can be avoided by storing min[S]: inserting into an empty sub-tree just sets its minimum
  (a short C sketch of the split follows)
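To make the split concrete, here is a minimal C sketch (not taken from the paper); the variables xh and xl mirror the slide's x_h and x_l:

```c
#include <stdio.h>

/* Split an 8-bit key into its high and low 4-bit halves, as in the slide's
 * example of x = 31 (binary 0001 1111). */
int main(void)
{
    unsigned w = 8, x = 31;
    unsigned half = w / 2;
    unsigned xh = x >> half;               /* high w/2 bits: 0001 = 1  */
    unsigned xl = x & ((1u << half) - 1);  /* low  w/2 bits: 1111 = 15 */
    printf("x = %u -> xh = %u, xl = %u\n", x, xh, xl);
    return 0;
}
```

Running it prints "x = 31 -> xh = 1, xl = 15", so x lands in sub-tree L[1] with low part 15.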

Representing vEB Trees
- min[S]: minimum key in S
- n[S]: number of elements in S
- H, L[0], …, L[2^(w/2) - 1]: just pointers to the corresponding vEB sub-trees
  (figure: a node holding min[S] and n[S], plus pointers to H and L[0], L[1], …, L[2^(w/2) - 1]; a C struct sketch follows)
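A minimal C sketch of this representation, assuming a software model with explicit pointers and a width w that is a power of two (the paper itself targets a hardware pipeline with per-stage memories); the field names min, n, H, and L follow the slide, while veb and veb_create are illustrative names:

```c
#include <stdlib.h>

typedef struct veb {
    unsigned     w;   /* width in bits of the universe handled by this node */
    unsigned     min; /* min[S]: smallest key in S (valid only when n > 0)  */
    unsigned     n;   /* n[S]: number of elements currently in S            */
    struct veb  *H;   /* summary vEB tree over the high w/2 bits            */
    struct veb **L;   /* 2^(w/2) bottom vEB sub-trees over the low w/2 bits */
} veb;

/* Recursively allocate an empty vEB tree over a universe of u = 2^w keys.
 * Eager allocation is only practical for small w; a real implementation
 * would allocate lazily or, as in the paper, use fixed per-stage memories. */
veb *veb_create(unsigned w)
{
    veb *t = calloc(1, sizeof *t);
    t->w = w;
    if (w > 1) {
        unsigned half = w / 2, kids = 1u << half;
        t->H = veb_create(half);
        t->L = calloc(kids, sizeof *t->L);
        for (unsigned i = 0; i < kids; i++)
            t->L[i] = veb_create(half);
    }
    return t;
}
```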

INSERT Operation
(the slide shows the pseudocode; its annotations read:)
- if S is empty, set min to x
- if x is smaller than min, swap them
- increment the size of S
- make a recursive call to either L[x_h] or H
Only one recursive call is made (a C sketch of this logic follows)
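A C sketch of the INSERT these annotations describe, using the veb struct from the earlier sketch; it assumes w is a power of two and that the key x is not already in S:

```c
/* Insert key x into the vEB tree t; x must not already be present. */
void veb_insert(veb *t, unsigned x)
{
    if (t->n == 0) {                       /* if S is empty, set min to x    */
        t->min = x;
        t->n = 1;
        return;
    }
    if (x < t->min) {                      /* if x is smaller, swap them     */
        unsigned tmp = t->min;
        t->min = x;
        x = tmp;
    }
    t->n++;                                /* increment the size of S        */
    if (t->w == 1)                         /* base case: universe {0,1},     */
        return;                            /* min and n say it all           */

    unsigned half = t->w / 2;
    unsigned xh = x >> half;               /* high half of x                 */
    unsigned xl = x & ((1u << half) - 1);  /* low half of x                  */

    if (t->L[xh]->n == 0) {                /* x_h is not yet in H            */
        veb_insert(t->H, xh);              /* the one recursive call: into H */
        t->L[xh]->min = xl;                /* inserting into an empty        */
        t->L[xh]->n = 1;                   /* sub-tree is O(1)               */
    } else {
        veb_insert(t->L[xh], xl);          /* the one recursive call: L[x_h] */
    }
}
```

The single recursive call is the key property: whichever branch is taken, the operation descends into exactly one half-width sub-tree, which is what the pipelining later in the talk exploits.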

INSERT Operation (walk-through, shown over several animation frames)
- INSERT(x, S):
  - If x_h ∈ H, recursively call INSERT(x_l, L[x_h])
  - Else, recursively call INSERT(x_h, H) and set min(L[x_h]) ← x_l
- Example with w = 8 and x = 0001 1111: x_h = 0001 selects sub-tree L[1], so the call recurses into L[1] with x = x_l = 1111 and w = 4
- The 4-bit call splits 1111 again and recurses into a 2-bit sub-tree, and so on
- Overall O(lg w) = O(lg lg u) time

EXTRACT-MIN Operation
(the slide shows the pseudocode; its annotations read:)
- if S is empty, there is nothing to return
- set the return value to the current min
- decrement the size of S
- make a recursive call to either H or L[m_h], where m_h = min[H] is the high part of the new minimum
- set the new minimum and return the value
Again, only one recursive call is made (a C sketch follows)
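A matching C sketch of EXTRACT-MIN following these annotations, again using the veb struct from the earlier sketches; it assumes w is small enough that every key fits in a non-negative int, so -1 can signal an empty S:

```c
/* Remove and return the smallest key in S, or return -1 if S is empty. */
int veb_extract_min(veb *t)
{
    if (t->n == 0)                         /* S empty: nothing to return     */
        return -1;
    unsigned ret = t->min;                 /* return value is the current min */
    t->n--;                                /* decrement the size of S        */
    if (t->n == 0)                         /* S is now empty: min undefined  */
        return (int)ret;
    if (t->w == 1) {                       /* base case: remaining key is 1  */
        t->min = 1;
        return (int)ret;
    }

    unsigned half = t->w / 2;
    unsigned h = t->H->min;                /* high bits of the new minimum   */
    unsigned l;                            /* its low bits, found below      */
    if (t->L[h]->n == 1) {                 /* L[h] is about to become empty: */
        l = t->L[h]->min;                  /* take its only key in O(1) and  */
        t->L[h]->n = 0;
        veb_extract_min(t->H);             /* remove h from H (the one       */
    } else {                               /* recursive call)                */
        l = (unsigned)veb_extract_min(t->L[h]); /* the one recursive call: L[m_h] */
    }
    t->min = (h << half) | l;              /* set the new minimum            */
    return (int)ret;
}
```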

Basic Idea of Pipelining
- vEB operations are recursively defined
- Each operation makes only one recursive call at each level of recursion
- Each call goes down to a sub-tree whose universe is defined by w/2 bits (then w/4 bits, w/8 bits, etc.)
- Therefore, intuitively, the recursion can be unrolled into lg lg u steps, as the iterative sketch below illustrates
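To make the unrolling concrete, here is an iterative rewrite of the earlier INSERT sketch: the recursion is tail-recursive, so each loop iteration corresponds to one pipeline stage, and the loop runs O(lg lg u) times as the width halves from w down to 1:

```c
/* Iterative INSERT equivalent to the recursive sketch above; one loop
 * iteration per level of the structure, i.e. one per pipeline stage. */
void veb_insert_unrolled(veb *t, unsigned x)
{
    for (;;) {
        if (t->n == 0) { t->min = x; t->n = 1; return; }
        if (x < t->min) { unsigned tmp = t->min; t->min = x; x = tmp; }
        t->n++;
        if (t->w == 1)
            return;

        unsigned half = t->w / 2;
        unsigned xh = x >> half;
        unsigned xl = x & ((1u << half) - 1);
        if (t->L[xh]->n == 0) {
            t->L[xh]->min = xl;            /* O(1) insert into empty sub-tree */
            t->L[xh]->n = 1;
            x = xh;                        /* continue in H at the next stage */
            t = t->H;
        } else {
            x = xl;                        /* continue in L[xh] at next stage */
            t = t->L[xh];
        }
    }
}
```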

Pipeline Structure
(figure: operations (op1, arg1), (op2, arg2), … flow from the input through stages 1, 2, 3, 4, and the overall min is available at the output; stage i owns a memory M_i holding pointers to all sub-trees of width w/2^i, i.e. M1 for width w/2, M2 for w/4, M3 for w/8, M4 for w/16, with each entry storing n, H, and the L[0], L[1], … pointers; a sketch of the per-stage state follows)
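A hypothetical sketch of the per-stage state the figure suggests (op_t and stage_t are invented names, not from the paper): stage i holds the operation and argument currently in flight and owns the memory M_i of all sub-trees of width w/2^i:

```c
/* Illustrative per-stage pipeline registers; the veb type is from the
 * earlier sketch, standing in for an entry of this stage's memory M_i. */
typedef enum { OP_NONE, OP_INSERT, OP_EXTRACT_MIN } op_t;

typedef struct {
    op_t     op;    /* operation currently occupying this stage (op_i)   */
    unsigned arg;   /* remaining key bits being processed (arg_i)        */
    veb     *node;  /* sub-tree in this stage's memory M_i being visited */
} stage_t;
```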

Main Issues
- Want to initiate a new operation at every pipeline cycle (e.g. initiate an INSERT or EXTRACT-MIN)
- Even if the throughput is O(1), we don't want to wait lg lg u pipeline cycles to retrieve data (e.g. EXTRACT-MIN should return its data in the same pipeline cycle)
- But previous INSERT and EXTRACT-MIN operation(s) may still be in the pipeline
  - Two pipeline stages may need to access the same data
  - Need to resolve the memory access order

Basic Idea
- Operations proceed from top to bottom
- min[S] for the vEB tree rooted at each level is updated immediately as operations flow through the pipeline
  - e.g. an INSERT at stage 1 updates min[S] immediately, so the next EXTRACT-MIN at stage 1 retrieves the correct min[S]
  (figure: the same pipeline structure as on the previous slide)

Other Operations
- In addition to INSERT and EXTRACT-MIN, the paper describes operations that can be defined in a similar pipelined fashion:
  - SUCCESSOR(x, S): returns the next element in S larger than x in the universe U
  - PREDECESSOR(x, S): returns the next element in S smaller than x in the universe U
  - EXTRACT-MAX(S): removes the item with the largest key
- To facilitate these operations, we also
  - Store max[S] for the corresponding S at each pipeline stage
  - Provide another pipeline memory structure for tracking "in-flight" SUCCESSOR and PREDECESSOR operations

Conclusions
- Pipelined van Emde Boas trees achieve O(1) time per operation with O(lg lg u) pipeline stages
- Same O(1) time complexity as pipelined heaps, but exponentially fewer pipeline stages than the O(lg n) required for pipelined heaps when u is polynomial in n
- Can simultaneously support EXTRACT-MIN and EXTRACT-MAX, which is harder to do with heaps
- Can support other operations such as SUCCESSOR and PREDECESSOR, which have potential applications to various network problems

Thank You