Lower and Upper Bounds on Obtaining History Independence
Niv Buchbinder and Erez Petrank, Technion, Israel

History Independent Data-Structures

What is a History Independent Data Structure? Sometimes data structures keep unnecessary information:
– not accessible via the legitimate interface of the data structure,
– but recoverable from the data structure's memory layout.
The core problem: the history of operations applied to the data structure may be revealed.

History Independence – Motivation Privacy becomes an issue whenever an adversary gains access to the data structure's layout:
– a laptop is stolen,
– the data structure is simply sent over the web: Word documents, search indexes embedded in a data structure, lists of students and grades, etc.

Example A data structure with three operations: Insert(D, x), Remove(D, x), Print(D), used for a wedding invitee list. Naive implementation – an array:
– Insert adds the element at the last entry.
– Remove of entry i moves entries i+1 … n one position back (a wiser implementation: a linked list on an array).
The layout reveals the insertion order – for example, who was invited last!
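For concreteness, a minimal Python sketch of this naive implementation (hypothetical code, not from the slides); note how the array order records the history:

```python
class NaiveList:
    """Naive invitee list: the array order leaks the insertion history."""
    def __init__(self):
        self.a = []

    def insert(self, x):
        self.a.append(x)       # always appended at the last entry

    def remove(self, x):
        i = self.a.index(x)
        del self.a[i]          # entries i+1..n shift one place back

    def print_all(self):
        print(self.a)          # reveals, e.g., who was invited last
```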

Weak History Independence [Naor, Teague]: a data structure implementation is (weakly) history independent if any two sequences of operations S1 and S2 that yield the same content induce the same distribution on the memory layout.
Security guarantee: nothing can be learned from the layout beyond the content itself.

Example – cont. Making the previous data structure weakly history independent.
Insert(x) (say, n elements in the data structure):
– choose r ∈ {1, 2, …, n+1} uniformly at random,
– set A[n+1] ← A[r]; A[r] ← x.
Remove entry i: A[i] ← A[n].
Invariant: the array is always a uniformly random permutation of the elements.
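A minimal Python sketch of this randomized scheme (hypothetical, 0-indexed, not from the slides); each insert swaps the new element into a uniformly random slot, so the array stays a uniform permutation of its contents:

```python
import random

class WeakHIList:
    """Weakly history independent list: layout is a uniform permutation."""
    def __init__(self):
        self.a = []

    def insert(self, x):
        self.a.append(x)                      # x goes to slot n (0-indexed)
        r = random.randrange(len(self.a))     # r uniform in {0, ..., n}
        self.a[-1], self.a[r] = self.a[r], x  # A[n] <- A[r]; A[r] <- x

    def remove(self, i):
        self.a[i] = self.a[-1]                # A[i] <- A[n]
        self.a.pop()
```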

Weak History Independence – Problems No information leaks if the adversary sees the layout once (e.g., the stolen laptop). But what if the adversary may see the layout several times? Then information about content modifications leaks. We want: no such leakage.

Strong History Independence [Naor-Teague]: a data structure implementation is (strongly) history independent if: for every pair of sequences S1, S2 and every two lists of stop points in S1, S2, if the content is the same at each pair of corresponding stop points, then the joint distribution of the memory layouts at the stop points is identical in the two sequences.
Security guarantee: we cannot distinguish between any two such sequences.

Strong History Independence – Example
S1 = ins(1), ins(2) [first stop], ins(3), ins(4) [second stop]
S2 = ins(2), ins(1) [first stop], ins(5), ins(4), ins(3), del(5) [second stop]
The contents agree at the corresponding stop points ({1,2}, then {1,2,3,4}), so we should not be able to tell from the layouts which of the two sequences happened.

Example – cont. Recall the example. Insert(x) (say, n elements in the data structure):
– choose r ∈ {1, 2, …, n+1} uniformly at random,
– set A[n+1] ← A[r]; A[r] ← x.
Remove entry i: A[i] ← A[n].
Is this implementation strongly history independent? No!

Example – cont. Assume you get the layout of the array twice (the slide shows two concrete array layouts, omitted here). The pair of layouts rules out certain intermediate histories – e.g., the empty sequence, or Remove(4) followed by Insert(4) – and imposes lots of other constraints…

Example – last. Making the data structure strongly history independent: keep the array left-aligned and sorted, so each content has exactly one possible layout.
Problem: the time complexity of Insert and Remove is Ω(n) ("usually" Ω(n) elements must be shifted during an insert or delete).
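A sketch of the canonical variant (hypothetical Python using the standard bisect module): every content has a single sorted, left-aligned layout, at the price of linear-time updates:

```python
import bisect

class CanonicalList:
    """Strongly HI list: the layout is a function of the content alone."""
    def __init__(self):
        self.a = []

    def insert(self, x):
        bisect.insort(self.a, x)     # keep sorted: shifts O(n) elements

    def remove(self, x):
        i = bisect.bisect_left(self.a, x)
        if i < len(self.a) and self.a[i] == x:
            del self.a[i]            # shifts the tail back: O(n)
```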

History of History Independence
[Micciancio 97] Weakly history independent 2-3 trees (motivated by the problem of private incremental cryptography [BGG95]).
[Naor-Teague 01] History independent hash tables and union-find; weakly history independent memory allocation; history independent dynamic perfect hashing.

History of History Independence
[Hartline et al. 02] Strong history independence implies a canonical layout; a relaxation of strong history independence; history independent memory resize.
[Buchbinder, Petrank 03] Lower bounds on strongly history independent data structures; history independent heaps.

What's Next
1. 2-3 Trees
2. Hash Tables
3. Strong history independence means canonical representation.
4. Lower bounds on strong history independence.
5. Lower bounds on relaxed strong history independence.
6. Obtaining a weakly history independent heap.

2-3 Trees and Cryptography You sign a Word document. Now you make a small change – do you need to re-sign the whole document? Maybe you want to deliver the document to a few people with only small changes between copies – do you need to compute several signatures? You don't want to work hard!

2-3 Trees and Cryptography Solution: partition the document into small blocks and sign each block. Construct a 2-3 tree over the blocks, where each internal node is a signature of its children together with the size of its subtree. ⇒ When a block is deleted/added/edited, only O(log n) small signatures need to be recomputed.
Problem: the structure of the 2-3 tree reveals some edit information!

2-3 Trees Why is the standard implementation of 2-3 trees not history independent? Consider inserting 1, 2, 3, 4, 5: the resulting tree shape depends on the order of operations (figure omitted).

2-3 Trees
S1 = Insert 1, 2, 3, 4, 5
S2 = Insert 1, 2, 3, 4, 5, then Remove 1, then Insert 1
Both sequences yield the same content {1, 2, 3, 4, 5}, but the resulting tree shapes differ, so we may distinguish between the two sequences.

2-3 Trees – Solution Make the number of children of each internal node 2 or 3 with equal probability (except possibly the last node in each level).
CreateTree – O(n); Find – O(log n); Insert/Remove – O(log n) expected time.

2-3 Trees – CreateTree (the slide shows the leaves grouped left to right into parents of 2 or 3 children by coin tosses, with example groupings of probability 1/2 and 1/4; the 3 nodes now in level 2 are grouped in turn, and the process continues up the tree).

2-3 Trees – CreateTree cont. (the slide shows the resulting complete trees together with their probabilities).
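To make the grouping concrete, here is a hypothetical Python sketch of CreateTree: each level is grouped left to right into parents of 2 or 3 children by fair coin tosses, with the leftover at the end of a level grouped as forced so that no parent is left with a single child:

```python
import random

class Node:
    def __init__(self, children=None, key=None):
        self.children = children or []
        self.key = key

def group_level(nodes):
    """Group one level into parents of 2 or 3 children, each w.p. 1/2."""
    parents, i = [], 0
    while i < len(nodes):
        left = len(nodes) - i
        if left <= 3:
            size = left                   # final group is forced
        elif left == 4:
            size = 2                      # never strand a single node
        else:
            size = random.choice((2, 3))  # fair coin: 2 or 3 children
        parents.append(Node(children=nodes[i:i + size]))
        i += size
    return parents

def create_tree(keys):
    """Build a randomized 2-3 tree bottom-up over sorted, nonempty keys: O(n)."""
    level = [Node(key=k) for k in sorted(keys)]
    while len(level) > 1:
        level = group_level(level)
    return level[0]
```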

2-3 Trees – Solution We want Insert/Remove to generate the same distribution as CreateTree ⇒ history independence.
Idea: when inserting/removing a leaf:
– the preceding leaves/nodes are OK as they are;
– fix the groupings of the following leaves by new coin tosses.

2-3 Trees – inserting a new node (the slide shows an example: the groups from the insertion point onward are re-formed with fresh coin tosses, continuing to the end of the level and up the tree …).

2-3 Trees – Insert/Remove Complexity Proof ideas: in every two successive iterations, the new grouping re-synchronizes with the previous grouping with constant probability. Hence the expected number of nodes "touched" in each level is O(1), and the expected total number of nodes "touched" over all levels is O(log n).

What's Next
1. 2-3 Trees
2. Hash Tables
3. Strong history independence means canonical representation.
4. Lower bounds on strong history independence.
5. Lower bounds on relaxed strong history independence.
6. Obtaining a weakly history independent heap.

Hash Tables Standard implementation (open addressing): choose hash functions h1, h2, h3, …
Insert(x): if h_i(x) is occupied, try h_{i+1}(x), and so on.
Delete(x): mark the cell as deleted – no actual deletion.

Hash Tables – Problems The deleted items still appear! And if h1(x) = h1(y), we can tell whether x or y was inserted first – the later one is the one that was rehashed by h2.
Solution: no lazy deletions; and when h_i(x) = h_i(y), decide which of x and y to rehash by some predetermined order between them. ⇒ The hash table has a canonical form ⇒ strongly history independent.
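The predetermined-order rule can be realized, for instance, with Robin Hood linear probing plus a fixed tie-break; the final layout then depends only on the set of keys. This is a hypothetical sketch (no deletions, load factor assumed below 1), not the construction from the paper itself:

```python
class CanonicalHashTable:
    """Open addressing with linear probing and a Robin Hood rule plus a
    fixed tie-break: the layout is a function of the key set alone."""
    def __init__(self, size=16):
        self.m = size
        self.slots = [None] * size

    def _dist(self, key, pos):
        # probe distance of `key` if it occupied slot `pos`
        return (pos - hash(key)) % self.m

    def insert(self, key):
        pos = hash(key) % self.m
        while self.slots[pos] is not None:
            other = self.slots[pos]
            if other == key:
                return                    # already present
            # predetermined order: larger probe distance wins the slot,
            # ties broken by comparing the keys themselves
            if (self._dist(key, pos), key) > (self._dist(other, pos), other):
                self.slots[pos], key = key, other   # displace `other`
            pos = (pos + 1) % self.m
        self.slots[pos] = key
```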

What's Next
1. 2-3 Trees
2. Hash Tables
3. Strong history independence means canonical representation.
4. Lower bounds on strong history independence.
5. Lower bounds on relaxed strong history independence.
6. Obtaining a weakly history independent heap.

Strong History Independence = Canonical Representation
Definition [content graph]: the content graph of a data structure has the possible contents as vertices, with an edge C1 → C2 if there exist an operation OP and parameters σ such that OP(C1, σ) = C2.
Definition [well behaved]: an abstract data structure is well behaved if its content graph is strongly connected.

Strong History Independence = Canonical Representation
Lemma: for any strongly history independent implementation of a well-behaved data structure: for every layout L and every operation Op, Op(L) yields only one possible layout.
Corollary: any strongly history independent implementation of a well-behaved data structure is canonical.

Canonical Representation: Proof cont.
Corollary: any strongly history independent implementation of a well-behaved data structure is canonical.
Proof sketch (assuming the lemma): let S be a sequence of operations yielding content C. Each operation in S generates exactly one layout ⇒ by induction, S yields one possible layout. By strong history independence, any other sequence yielding C creates the same layout.

Canonical Representation: Proof of Lemma
Lemma: for any strongly history independent implementation of a well-behaved data structure: for every layout L and every operation Op, Op(L) yields only one possible layout.
Since the data structure is well behaved, any operation Op has a sequence Op⁻¹ that "reverses" it, i.e., returns to the previous content. By strong history independence we may compare any two sequences with stop points.

Canonical Representation: Proof of Lemma
Proof sketch: fix any layout L and any operation Op. We need to show that Op(L) yields a single specific layout L'. Let S be any sequence of operations yielding L with probability > 0. Consider the following sequences with the following 'stop' points:
S1 = S [stop] [stop]
S2 = S [stop] ◦ Op ◦ Op⁻¹ [stop]
The two stop points see the same layout in S1 ⇒ the same layout must also appear at both stop points of S2.

Canonical Representation: Proof of Lemma
S1 = S; S2 = S ◦ Op ◦ Op⁻¹. Suppose L appears after S. Then L must appear again at the end of S2; otherwise we could distinguish between the two sequences. Hence for every Li = Op(L), Op⁻¹ must transform Li back to L with probability 1. (Figure: L maps under Op to L1, L2, …, Lk, each mapped back to L by Op⁻¹.)

Canonical Representation: Proof
Now extend the sequence and move the stop points:
S3 = S ◦ Op [stop] [stop]
S4 = S ◦ Op [stop] ◦ Op⁻¹ ◦ Op [stop]
Suppose some Li = Op(L) appears after S ◦ Op ⇒ Li must appear also at the end of S4; otherwise we could distinguish between the two sequences. But after Op⁻¹ the layout is again L, and the behavior of Op depends only on L, so Op cannot "know" which Li to create ⇒ there is only one Li = Op(L).

What's Next
1. 2-3 Trees
2. Hash Tables
3. Strong history independence means canonical representation.
4. Lower bounds on strong history independence.
5. Lower bounds on relaxed strong history independence.
6. Obtaining a weakly history independent heap.

Lower Bounds: an Example
Lemma: let D be a data structure whose content is the set of keys stored inside it, and let I be an implementation of D that is comparison-based and canonical. Then the operation Insert(D, x) requires time Ω(n).
This lemma applies, for example, to heaps, dictionaries, and search trees.

Why is the comparison-based model important? It is "natural": standard implementations of most data structure operations are comparison-based, so we should know not to design this way when seeking strong history independence. It also makes library functions easy to use: one only implements the comparison operation on the data structure's elements.

Lower Bounds – cont.
Proof sketch: in a comparison-based implementation, keys are treated as 'black boxes' accessed only through the comparison order ⇒ the algorithm treats any n keys only according to their total order ⇒ the canonical layout of any n distinct keys is the same no matter what their real values are.
Let d1, d2, …, dn be the memory addresses of n keys in the layout, listed according to their total order, and let d'1, d'2, …, d'(n+1) be the memory addresses of n+1 keys in the layout, again by total order.

Lower Bounds – cont.
Let Δ be the number of indices i for which d_i ≠ d'_i. Consider the content C = {k2, k3, …, k(n+1)} with k2 < k3 < … < k(n+1).
Case 1, Δ > n/2: consider Insert(C, k(n+2)). It puts k(n+2) at address d'(n+1) and moves each k_i (2 ≤ i ≤ n+1) from d_(i-1) to d'_(i-1); each k_i with d_(i-1) ≠ d'_(i-1) really moves ⇒ the operation moves at least n/2 keys.
Case 2, Δ ≤ n/2: consider Insert(C, k1). It puts k1 at d'1 and moves each k_i (2 ≤ i ≤ n+1) from d_(i-1) to d'_i; whenever d_(i-1) = d'_(i-1), the key k_i really moves, since d'_(i-1) ≠ d'_i ⇒ the operation moves at least n/2 keys.

More Lower Bounds By similar methods we can show:
– Remove-key requires time Ω(n).
– For a heap: Increase-key requires time Ω(n), and Build-Heap requires time Ω(n log n).
– For a queue: either Insert-first or Remove-last requires time Ω(n).

What's Next
1. 2-3 Trees
2. Hash Tables
3. Strong history independence means canonical representation.
4. Lower bounds on strong history independence.
5. Lower bounds on relaxed strong history independence.
6. Obtaining a weakly history independent heap.

Relaxed Strong History Independence Strong history independence implies very strong lower bounds. How can we relax the definition to allow more efficient data structures? One possible way [HHMPR02]: allow the adversary to distinguish between the empty sequence and other sequences. Does this definition imply a canonical memory layout?

Relaxed Strong History Independence (cont.) The relaxed definition does not imply a canonical memory layout. A possible implementation of the previous data structure: in each operation, choose a new, independent, uniformly random permutation of the elements. This is:
1. not canonical …
2. relaxed strongly history independent,
3. but each operation costs O(n).

Relaxed Strong History Independence Is this relaxation enough for efficient implementations? No. We may prove almost the same lower bounds using a different property of these data structures.

What's Next
1. 2-3 Trees
2. Hash Tables
3. Strong history independence means canonical representation.
4. Lower bounds on strong history independence.
5. Lower bounds on relaxed strong history independence.
6. Obtaining a weakly history independent heap.

The Binary Heap A binary heap is a simple implementation of a priority queue. The keys are stored in an almost complete binary tree. Heap property: for each node i, V(parent(i)) ≥ V(i). We assume that all values in the heap are unique.

The Binary Heap: Heapify Heapify is used to restore the heap property. Input: a root and two proper sub-heaps of height ≤ h−1. Output: a proper heap of height h. The root value always sifts down in the direction of the larger child value.
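The standard array-based sift-down, as a minimal Python sketch for a max-heap (this helper is reused by the later sketches):

```python
def heapify(a, i, n=None):
    """Sift a[i] down in the max-heap a[0:n] until heap order holds."""
    n = len(a) if n is None else n
    while True:
        l, r, largest = 2 * i + 1, 2 * i + 2, i
        if l < n and a[l] > a[largest]:
            largest = l
        if r < n and a[r] > a[largest]:
            largest = r             # sift toward the larger child
        if largest == i:
            return
        a[i], a[largest] = a[largest], a[i]
        i = largest
```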

Heapify Operation (the slide animates the sift-down on an example heap).

Reversing Heapify Heapify⁻¹ "reverses" heapify: Heapify⁻¹(H: heap, i: position). The values on the path from the root down to node i are shifted one step down, and v_i is placed at the root. The parameter i is a position in the heap H.

Heapify⁻¹ Operation Property: if all the keys in the heap are unique, then for any i: Heapify(Heapify⁻¹(H, i)) = H.
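A sketch of Heapify⁻¹ on the array representation (hypothetical code, consistent with the heapify helper above): the values on the root-to-i path shift down one step and a[i] becomes the new root, so that with distinct keys a subsequent heapify sifts it straight back to position i:

```python
def heapify_inv(a, i, root=0):
    """Inverse of heapify within the subtree rooted at `root`:
    shift the path root..i down one step and put a[i] at the root."""
    path = [i]
    while path[-1] != root:
        path.append((path[-1] - 1) // 2)    # climb to the parent
    path.reverse()                          # now: root, ..., i
    vi = a[i]
    for k in range(len(path) - 1, 0, -1):
        a[path[k]] = a[path[k - 1]]         # parent value moves down
    a[root] = vi
```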

The Binary Heap: Build-heap in O(n) Building a heap: apply heapify to every subtree of the heap in a bottom-up manner. Time complexity: the standard summation over subtree heights, Σ_h ⌈n/2^(h+1)⌉ · O(h) = O(n).
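The corresponding bottom-up construction (standard code, matching the heapify sketch above):

```python
def build_heap(a):
    """Heapify every internal node bottom-up: O(n) total."""
    for i in range(len(a) // 2 - 1, -1, -1):
        heapify(a, i)
```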

Reversing Build-heap
Build-Heap⁻¹(H: heap): Tree
  If size(H) = 1 then return H;
  Choose a node i uniformly at random among the nodes of H;
  H ← Heapify⁻¹(H, i);
  Return TREE(root(H), Build-Heap⁻¹(H_L), Build-Heap⁻¹(H_R));
For any random choice: Build-heap(Build-Heap⁻¹(H)) = H. The procedure works top-down.
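An in-place array version of this procedure (a hypothetical, unoptimized sketch reusing heapify_inv from above; it undoes the heapify at each subtree root top-down, mirroring build_heap's bottom-up order):

```python
import random

def subtree_nodes(n, r):
    """Indices of the subtree rooted at r in a heap of size n."""
    out, stack = [], [r]
    while stack:
        j = stack.pop()
        if j < n:
            out.append(j)
            stack.extend((2 * j + 1, 2 * j + 2))
    return out

def build_heap_inv(a, r=0):
    """Undo build_heap top-down; afterwards `a` is a random
    permutation T of the keys with build_heap(T) == original heap."""
    nodes = subtree_nodes(len(a), r)
    if len(nodes) <= 1:
        return
    i = random.choice(nodes)        # uniform node in the subtree
    heapify_inv(a, i, root=r)       # undo the heapify applied at r
    build_heap_inv(a, 2 * r + 1)
    build_heap_inv(a, 2 * r + 2)
```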

Uniformly Chosen Heaps Build-heap is a many-to-one procedure; Build-heap⁻¹ is a one-to-many procedure, depending on the random choices. Let Support(H) be the set of permutations (trees) T such that Build-heap(T) = H.
Facts (without proof):
1. The size of Support(H) is the same for every heap H on the same keys.
2. Build-heap⁻¹ returns one of these permutations uniformly at random.

How to Obtain a Weakly History Independent Heap Main idea: keep a uniformly random heap at all times. We want:
1. Build-heap: return one of the possible heaps uniformly at random.
2. All other operations: preserve this property.

An Easy Implementation: Build-Heap Apply a uniformly random permutation to the input elements and then run the standard build-heap.
Analysis: each heap has a Support set of the same size ⇒ each heap is produced with the same probability. More intuition: applying a random permutation erases all information about the original order of the elements ⇒ no information about the history remains.
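In code, this is a two-line sketch on top of the build_heap helper above:

```python
import random

def hi_build_heap(keys):
    """Weakly HI build-heap: shuffle first, then standard build-heap."""
    a = list(keys)
    random.shuffle(a)   # erases the input order
    build_heap(a)
    return a
```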

Another Easy Implementation: Increase-key The standard Increase-key changes the value of an element and sifts it up until it reaches its correct place.

Increase-key – cont. The standard increase-key is good for us:
1. The operation is reversible: decreasing the key back to its old value returns it to its previous location.
2. The number of heaps on n distinct keys is the same regardless of the actual key values.
⇒ The increase-key function is 1-1 ⇒ if the heap was uniformly chosen, it remains uniformly chosen after increase-key.
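The standard sift-up, for completeness (same array conventions as the sketches above):

```python
def increase_key(a, i, new_value):
    """Raise a[i] to new_value and sift it up to its correct place."""
    assert new_value >= a[i]
    a[i] = new_value
    while i > 0 and a[(i - 1) // 2] < a[i]:
        a[i], a[(i - 1) // 2] = a[(i - 1) // 2], a[i]
        i = (i - 1) // 2
```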

Not So Easy: Extract-max and Insert The standard operation of Extract-max(H): replace the value at the root with the value of the last leaf, then let that value sift down to its correct position. Is this good for us? No!

Standard Extract-max is Not Good Example: start from a uniformly random heap on 4 elements (there are three possible heaps) and apply the standard extract-max. Of the two possible resulting heaps, one occurs with probability 1/3 while the other occurs with probability 2/3!

Naive Implementation: Extract-max
Extract-max(H):
1. T = Build-heap⁻¹(H).
2. Remove the last node v of the tree, giving T'.
3. H' = Build-heap(T').
4. If we have already removed the maximal value, return H'. Otherwise:
5. Replace the root with v and let v sift down to its correct position.
Build-heap⁻¹ and Build-heap take O(n) … but this implementation is history independent.
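A direct transcription of these five steps (hypothetical sketch reusing heapify, build_heap and build_heap_inv from the earlier sketches):

```python
def extract_max_naive(a):
    """History independent extract-max, O(n) version."""
    mx = a[0]
    build_heap_inv(a)        # 1. uniform random permutation of the keys
    v = a.pop()              # 2. remove the last node (a random key v)
    build_heap(a)            # 3. uniform heap on the remaining keys
    if v != mx:              # 4. did we just remove the maximum itself?
        a[0] = v             # 5. replace the max at the root with v ...
        heapify(a, 0)        #    ... and sift v down into place
    return mx
```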

Analysis: Extract-max
1. T = Build-heap⁻¹(H): T is a uniformly random permutation of the n+1 keys of the heap.
2. Remove the last node v of T, giving T': T' is a uniformly random permutation of n keys, excluding the random key v.
3. H' = Build-heap(T'): H' is a uniformly random heap on the original keys, excluding the random key v.

Analysis: Extract-max
4. If we have already removed the maximal value, return H' – we are done.
5. Otherwise, replace the root with v and let v sift down to its correct position: this is just applying an increase/decrease-key to the value at the root (a 1-1 function …).

Improving the Complexity: Extract-max The main problem is the first three steps of Extract-max(H) – Build-heap⁻¹, removing the last node, and Build-heap – which take O(n). A simple observation reduces the complexity of these steps to O(log²n).

Reducing the Complexity to O(log²n) Observation: most of the operations of Build-heap⁻¹ are redundant – they are always canceled by the subsequent Build-heap. Only the operations applied to nodes lying on the path from the root to the last leaf are really needed.

Reducing the Complexity to O(log²n) Complexity analysis: each heapify⁻¹ and heapify operation takes at most O(log n), and there are O(log n) such operations along the path.

Reducing the Complexity: O(log n) Expected Time Recall the five steps of Extract-max(H) above. In effect, we remove the last value of a uniformly chosen permutation and then build the heap back.

Reducing the Complexity: O(log n) Expected Time This is the most complex part. Main ideas: one can show that only O(1) of the heapify⁻¹ and heapify operations actually make a difference (on average over the random choices made by the algorithm in each step), and these operations can be detected and applied exclusively.

Reducing the Complexity: O(log n) Expected Time Main lemma: when applying Build-heap to a uniformly chosen permutation, the expected height of the last value of the permutation in the resulting heap is O(1).
Proof idea: backward analysis of Build-heap⁻¹ instead of Build-heap.

The Insert Operation The standard implementation of Insert is not good for us. A good implementation must use randomization in order to be efficient (otherwise it would have to be canonical …). Making Insert history independent is also not easy; the general method is similar to Extract-max.

Naive Implementation: Insert
Insert(H, v):
1. Choose a random number 1 ≤ i ≤ n+1 uniformly.
2. Let v_i be the value at position i of the heap.
3. If i = n+1, skip to step 5.
4. H' ← Increase-key(H, i, v). Now H' is a uniformly chosen heap without the value v_i, and the displaced value v_i is a uniformly random key.

Naive Implementation: Insert (cont.)
5. T = Build-heap⁻¹(H'): T is a uniformly chosen permutation without the value v_i.
6. T' ← T "+" v_i appended at position n+1: T' is a uniformly chosen permutation including v_i.
7. H = Build-heap(T'): H is a uniformly chosen heap including the value v.
8. Return H.
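Putting the steps together (hypothetical sketch reusing heapify, build_heap and build_heap_inv from the earlier sketches; since the replacement in step 4 may also decrease the key, the code sifts in both directions):

```python
import random

def insert_naive(a, v):
    """History independent insert, O(n) version (0-indexed positions)."""
    n = len(a)
    i = random.randrange(n + 1)  # 1. uniform position in {0, ..., n}
    if i < n:                    # 3. if i == n, skip straight to step 5
        v, a[i] = a[i], v        # 2+4. put v at slot i, displacing v_i
        while i > 0 and a[(i - 1) // 2] < a[i]:   # sift up if v grew ...
            a[i], a[(i - 1) // 2] = a[(i - 1) // 2], a[i]
            i = (i - 1) // 2
        heapify(a, i)            # ... or sift down if v shrank
    build_heap_inv(a)            # 5. uniform permutation without v_i
    a.append(v)                  # 6. append the displaced key at the end
    build_heap(a)                # 7. uniform heap over all the keys
```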

Insert Operation – Reducing the Complexity The general ideas are similar to Extract-max: reduce the complexity to O(log²n) by running heapify⁻¹ and heapify only on the path to the newly added node. The most difficult part is again reducing the complexity from O(log²n) to O(log n) expected time (notice that we insert a random key into a random heap).

Conclusions
1. Demanding strong history independence usually exacts a high efficiency penalty in the comparison-based model.
2. A weakly history independent heap is possible in the comparison-based model without penalty. Complexity: build-heap O(n) worst case; increase-key O(log n) worst case; extract-max and insert O(log n) expected time, O(log²n) worst case.

Bounds Summary

| Operation | Weak History Independence | Strong History Independence |
|---|---|---|
| heap: insert | O(log n) | Ω(n) |
| heap: increase-key | O(log n) | Ω(n) |
| heap: extract-max | O(log n) | no lower bound |
| heap: build-heap | O(n) | Ω(n log n) |
| queue: max{insert-first, remove-last} | O(1) | Ω(n) |

Memory Allocation Assume we allocate fixed-size records. We would like: after an arbitrary number of allocate/delete operations, a memory "dump" does not reveal information about the history of allocations.
Main idea: to allocate cell k:
– choose a random number 1 ≤ i ≤ k,
– put the new record in the i-th place,
– move the record previously in the i-th place to the k-th place.
This requires updating all incoming pointers to the two moved cells.
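A tiny sketch of the fixed-size scheme (hypothetical Python; real records would also need their incoming pointers updated, as the next slide notes):

```python
import random

class HIAllocator:
    """Fixed-size records in a contiguous array; the occupied prefix is a
    uniformly random permutation of the live records."""
    def __init__(self):
        self.cells = []

    def allocate(self, record):
        k = len(self.cells)              # the new cell is cell k
        self.cells.append(record)
        i = random.randrange(k + 1)      # choose a random slot i <= k
        # put the new record at slot i, moving the old occupant to slot k
        self.cells[i], self.cells[k] = self.cells[k], self.cells[i]
        # (all incoming pointers to the two moved records must be fixed)

    def free(self, i):
        self.cells[i] = self.cells[-1]   # move the last record into slot i
        self.cells.pop()                 # (and fix its incoming pointers)
```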

Memory Allocation Updating all incoming pointers to the two moved cells can be done using doubly linked (back) pointers. Thus we can make any pointer-based data structure with bounded in-degree and fixed-size records history independent, provided its "shape" is history independent. Example: the randomized 2-3 trees we saw.

Memory Allocation: Non-Fixed Size Main idea: partition the allocations into size classes [2^i, 2^(i+1)). The larger allocations are placed further to the left, ordered by class; within each class the records are uniformly ordered, as in the fixed-size case. To allocate: round the size up, then make room by moving records of the "smaller" classes. An allocation of size s takes time O(s log s).

Open Questions
1. Can we show a separation between weak and strong history independence in the non-comparison model?
2. History independent implementations of other, more complex data structures.
3. Strongly history independent implementations that do not require a canonical representation – e.g., union-find.
Thank you!