CSE 326: Hashing
David Kaplan, Dept. of Computer Science & Engineering, Autumn 2001
Reminder: Dictionary ADT
Dictionary operations: create, destroy, insert, find, delete.
- Stores values associated with user-specified keys
- Values may be any (homogeneous) type
- Keys may be any (homogeneous) comparable type
Example: a dictionary holding (Adrien, "roller-blade demon"), (Hannah, "C++ guru"), (Dave, "older than dirt"), ...; after insert(Donald, "l33t haxtor"), find(Adrien) returns "roller-blade demon".
Dictionary Implementations So Far

                    Insert     Find       Delete
    Unsorted list   O(1)       O(n)       O(n)
    Trees           O(log n)   O(log n)   O(log n)
    Sorted array    O(n)       O(log n)   O(n)
    Array (special case: keys known to be {1, ..., K})   O(1) for all three
ADT Legalities: A Digression on Keys
Methods are the contract between an ADT and the outside agent (client code):
- Ex: the Dictionary contract is {insert, find, delete}
- Ex: the Priority Queue contract is {insert, deleteMin}
Keys are the currency used in transactions between an outside agent and the ADT:
- Ex: insert(key), find(key), delete(key)
So... how about O(1) insert/find/delete for any key type?
Hash Table Goal: Key as Index
We can access a record as a[5]. We want to access a record as a["Hannah"].
(Figure: an integer-indexed array with a["5"] = Adrien, "roller-blade demon", alongside the desired string-indexed view with a["Hannah"] = "C++ guru".)
Hash Table Approach
Map each key through a function f(x) into the table.
But... is there a problem with this pipe-dream?
(Figure: keys Hannah, Dave, Adrien, Donald, Ed feeding through f(x) into table slots.)
Hash Table Dictionary Data Structure
Hash function: maps keys to integers.
- Result: we can quickly find the right spot for a given entry
The table is unordered and sparse.
- Result: we cannot efficiently list all entries, and cannot efficiently find min, max, or ordered ranges
Hash Table Taxonomy
(Figure: keys Hannah, Dave, Adrien, Donald, Ed feeding through the hash function f(x); two keys landing in the same slot is a collision.)
- hash function
- collision
- keys
- load factor λ = (# of entries in table) / tableSize
Agenda: Hash Table Design Decisions
- What should the hash function be?
- What should the table size be?
- How should we resolve collisions?
Hash Function
A hash function maps a key to a table index:

    Value & find(const Key & key) {
        int index = hash(key) % tableSize;
        return Table[index];
    }
What Makes a Good Hash Function?
- Fast runtime: O(1), and fast in practical terms
- Distributes the data evenly: hash(a) % size ≠ hash(b) % size for most distinct keys a, b
- Uses the whole hash table: for all 0 ≤ i < size, there exists some k such that hash(k) % size = i
Good Hash Function for Integer Keys
- Choose a prime tableSize
- hash(n) = n
Example: tableSize = 7; insert(4), insert(17), find(12), insert(9), delete(17).
(17 % 7 = 3, so 17 lands in slot 3; 4 goes to slot 4; 9 to slot 2.)
Good Hash Function for Strings?
Let s = s1 s2 s3 ... sn. Choose
    hash(s) = s1 + s2·128 + s3·128^2 + s4·128^3 + ... + sn·128^(n-1)
That is, think of the string as a base-128 (aka radix-128) number.
Problems:
- hash("really, really big") = well... something really, really big
- hash("one thing") % 128 = hash("other thing") % 128 whenever the strings share a first character, since every term except s1 is a multiple of 128
String Hashing: Issues and Techniques
Minimize collisions:
- Make tableSize and the radix relatively prime
- Typically, make tableSize not a multiple of 128
Simplify computation:
- Use Horner's Rule

    int hash(const string & s) {
        int h = 0;
        for (int i = s.length() - 1; i >= 0; i--)
            h = (s[i] + 128*h) % tableSize;
        return h;
    }
Good Hashing: The Multiplication Method
The hash function is defined by the table size plus a parameter A:
    h_A(k) = floor(size · (k·A mod 1)), where 0 < A < 1
(k·A mod 1 is the fractional part of k·A.)
Example: size = 10, A = 0.485
    h_A(50) = floor(10 · (50·0.485 mod 1)) = floor(10 · (24.25 mod 1)) = floor(10 · 0.25) = 2
- No restriction on size!
- When building a static table, we can try several values of A
- More computationally intensive than a single mod
Hashing Dilemma
Suppose your Worst Enemy 1) knows your hash function, and 2) gets to decide which keys to send you.
Faced with this enticing possibility, Worst Enemy decides to:
a) Send you keys which maximize collisions for your hash function.
b) Take a nap.
Moral: no single hash function can protect you!
Faced with this dilemma, you:
a) Give up and use a linked list for your Dictionary.
b) Drop out of software, and choose a career in fast foods.
c) Run and hide.
d) Proceed to the next slide, in hope of a better alternative.
Universal Hashing¹
Suppose we have a set K of possible keys, and a finite set H of hash functions that map keys to entries in a hashtable of size m.
Definition: H is a universal collection of hash functions if and only if, for any two distinct keys k1, k2 in K, there are at most |H|/m functions h in H for which h(k1) = h(k2).
So... if we randomly choose a hash function from H, our chances of collision are no more than if we got to choose hash table entries at random!
¹ Motivation: see previous slide (or visit http://www.burgerking.com/jobs)
Random Hashing - Not!
How can we "randomly choose a hash function"?
- Certainly we cannot randomly choose hash functions at runtime, interspersed amongst the inserts, finds, and deletes! (Why not? A key inserted under one function could never be found under another.)
- We can, however, randomly choose a hash function each time we initialize a new hashtable.
Conclusions:
- Worst Enemy never knows which hash function we will choose - neither do we!
- No single input (set of keys) can always evoke worst-case behavior
Good Hashing: Universal Hash Function A (UHFa)
Parameterized by a prime table size and a vector a = <a0, a1, ..., ar>, where 0 ≤ ai < size.
Represent each key as r + 1 integers, each ki < size:
- size = 11, key = 39752 ⇒ <3, 9, 7, 5, 2> (its decimal digits)
- size = 29, key = "hello world" ⇒ break the string into integers less than 29
Then
    h_a(k) = (Σ ai·ki) mod size
UHFa: Example
Context: hash strings of length 3 in a table of size 131. Let a = <35, 100, 21> and take "xyz" as <120, 121, 122>.
    h_a("xyz") = (35·120 + 100·121 + 21·122) % 131 = 129
Thinking about UHFa
Strengths:
- works on any key type, as long as you can form the ki's
- if we're building a static table, we can try many values of the hash vector
- a random a has guaranteed good properties no matter what we're hashing
Weaknesses:
- must choose a prime table size larger than any ki
Good Hashing: Universal Hash Function 2 (UHF2)
Parameterized by j, a, and b:
- j · size should fit into an int
- a and b must be less than size
    h_{j,a,b}(k) = ((a·k + b) mod (j·size)) / j
UHF2: Example
Context: hash integers in a table of size 16. Let j = 32, a = 100, b = 200.
    h_{j,a,b}(1000) = ((100·1000 + 200) % (32·16)) / 32
                    = (100200 % 512) / 32
                    = 360 / 32
                    = 11
Thinking about UHF2
Strengths:
- if we're building a static table, we can try many parameter values
- random a, b have guaranteed good properties no matter what we're hashing
- can choose any size table
- very efficient if j and size are powers of 2 (why? the mod and divide become bit masks and shifts)
Weaknesses:
- need to turn non-integer keys into integers
Hash Function Summary
Goals of a hash function:
- reproducible mapping from key to table index
- evenly distribute keys across the table
- separate commonly occurring keys (neighboring keys?)
- fast runtime
Some hash function candidates:
- h(n) = n % size
- h(s) = (string as base-128 number) % size
- Multiplication hash: compute percentage through the table
- Universal hash function A: dot product with a random vector
- Universal hash function 2: next pseudo-random number
Hash Function Design Considerations
- Know what your keys are
- Study how your keys are distributed
- Try to include all important information in a key in the construction of its hash
- Try to make "neighboring" keys hash to very different places
- Prune the features used to create the hash until it runs "fast enough" (very application dependent)
Handling Collisions
The pigeonhole principle says we can't avoid all collisions:
- try to hash n keys into m slots with n > m without collision
- try to put 6 pigeons into 5 holes
What do we do when two keys hash to the same entry?
- Separate Chaining: put a little dictionary in each entry
- Open Addressing: pick a next entry to try within the hashtable
Terminology madness :-(
- Separate Chaining is sometimes called Open Hashing
- Open Addressing is sometimes called Closed Hashing
Separate Chaining
Put a little dictionary at each entry:
- commonly, an unordered linked list (chain)
- or choose another Dictionary type as appropriate (search tree, hashtable, etc.)
Properties:
- λ can be greater than 1
- performance degrades with the length of the chains
- an alternate Dictionary type (e.g. search tree, hashtable) can speed up the secondary search
(Figure: a size-7 table where h(a) = h(d) and h(e) = h(b), so those pairs share chains.)
Separate Chaining Code

    Dictionary & findBucket(const Key & k) {   // private helper
        return table[hash(k) % table.size];
    }
    void insert(const Key & k, const Value & v) {
        findBucket(k).insert(k, v);
    }
    Value & find(const Key & k) {
        return findBucket(k).find(k);
    }
    void remove(const Key & k) {   // "delete" is a reserved word in C++
        findBucket(k).remove(k);
    }
Load Factor in Separate Chaining
Search cost (average):
- unsuccessful search: λ probes (the full average chain length)
- successful search: 1 + λ/2 probes (half the chain, plus the match)
Desired load factor: λ ≈ 1
Open Addressing
Allow one key at each table entry:
- two objects that hash to the same spot can't both go there
- the first one there gets the spot
- the next one must go in another spot
Properties:
- λ ≤ 1
- performance degrades with the difficulty of finding the right spot
(Figure: a size-7 table where h(a) = h(d) and h(e) = h(b); d and b are displaced into other slots.)
Probing
Requires a collision resolution function f(i). How to probe:
- First probe: given a key k, hash to h(k)
- Second probe: if h(k) is occupied, try h(k) + f(1)
- Third probe: if h(k) + f(1) is occupied, try h(k) + f(2)
- and so forth
Probing properties:
- we force f(0) = 0
- the i-th probe is to (h(k) + f(i)) mod size
- if i reaches size - 1, the probe has failed
- depending on f(), the probe may fail sooner
- long sequences of probes are costly!
Linear Probing
f(i) = i. The probe sequence is:
    h(k) mod size
    (h(k) + 1) mod size
    (h(k) + 2) mod size
    ...

    bool findEntry(const Key & k, Entry *& entry) {
        int probePoint = hash(k) % size;
        do {
            entry = &table[probePoint];
            probePoint = (probePoint + 1) % size;
        } while (!entry->isEmpty() && entry->key != k);
        return !entry->isEmpty();
    }

(As on the slide, this loop assumes the table is never completely full.)
Linear Probing Example
Insert into a table of size 7, with h(k) = k % 7:
- insert(76): 76 % 7 = 6 → slot 6 (1 probe)
- insert(93): 93 % 7 = 2 → slot 2 (1 probe)
- insert(40): 40 % 7 = 5 → slot 5 (1 probe)
- insert(47): 47 % 7 = 5, occupied; try 6, occupied; try 0 → slot 0 (3 probes)
- insert(10): 10 % 7 = 3 → slot 3 (1 probe)
- insert(55): 55 % 7 = 6, occupied; try 0, occupied; try 1 → slot 1 (3 probes)
Load Factor in Linear Probing
For any λ < 1, linear probing will find an empty slot.
Search cost (for large table sizes):
- successful search: (1/2)(1 + 1/(1 − λ)) probes on average
- unsuccessful search: (1/2)(1 + 1/(1 − λ)²) probes on average
Linear probing suffers from primary clustering; performance quickly degrades for λ > 1/2.
Quadratic Probing
f(i) = i². The probe sequence is:
    h(k) mod size
    (h(k) + 1) mod size
    (h(k) + 4) mod size
    (h(k) + 9) mod size
    ...

    bool findEntry(const Key & k, Entry *& entry) {
        int probePoint = hash(k) % size, i = 0;
        do {
            entry = &table[probePoint];
            i++;
            probePoint = (probePoint + (2*i - 1)) % size;   // i² - (i-1)² = 2i - 1
        } while (!entry->isEmpty() && entry->key != k);
        return !entry->isEmpty();
    }
Good Quadratic Probing Example
Insert into a table of size 7, with h(k) = k % 7:
- insert(76): 76 % 7 = 6 → slot 6 (1 probe)
- insert(40): 40 % 7 = 5 → slot 5 (1 probe)
- insert(48): 48 % 7 = 6, occupied; try (6 + 1) % 7 = 0 → slot 0 (2 probes)
- insert(5): 5 % 7 = 5, occupied; try 6, occupied; try (5 + 4) % 7 = 2 → slot 2 (3 probes)
- insert(55): 55 % 7 = 6, occupied; try 0, occupied; try (6 + 4) % 7 = 3 → slot 3 (3 probes)
Bad Quadratic Probing Example
Insert into a table of size 7, with h(k) = k % 7:
- insert(76): 76 % 7 = 6 → slot 6 (1 probe)
- insert(93): 93 % 7 = 2 → slot 2 (1 probe)
- insert(40): 40 % 7 = 5 → slot 5 (1 probe)
- insert(35): 35 % 7 = 0 → slot 0 (1 probe)
- insert(47): 47 % 7 = 5, occupied; the probe sequence visits only slots 6, 2, and 0 before cycling, so it never reaches the empty slots 1, 3, or 4 - the insert fails even though the table is not full
Quadratic Probing Succeeds for λ ≤ 1/2
If size is prime and λ ≤ 1/2, then quadratic probing will find an empty slot in size/2 probes or fewer.
Show: for all 0 ≤ i, j ≤ size/2 with i ≠ j,
    (h(x) + i²) mod size ≠ (h(x) + j²) mod size
By contradiction: suppose that for some such i, j,
    (h(x) + i²) mod size = (h(x) + j²) mod size
Then
    i² mod size = j² mod size
    (i² − j²) mod size = 0
    [(i + j)(i − j)] mod size = 0
Since size is prime, it must divide (i + j) or (i − j). But how can i + j = 0 or i + j = size when i ≠ j and i, j ≤ size/2? The same argument rules out (i − j) mod size = 0.
Quadratic Probing May Fail for λ > 1/2
For any i larger than size/2, there is some j smaller than i that adds with i to equal size (or a multiple of size), so the probe sequence revisits slots it has already tried. D'oh!
Load Factor in Quadratic Probing
- For any λ ≤ 1/2, quadratic probing will find an empty slot
- For λ > 1/2, quadratic probing may fail to find a slot
- Quadratic probing does not suffer from primary clustering
- Quadratic probing does suffer from secondary clustering
How could we possibly solve this?
Double Hashing
f(i) = i · hash2(k). The probe sequence is:
    h1(k) mod size
    (h1(k) + 1·h2(k)) mod size
    (h1(k) + 2·h2(k)) mod size
    ...

    bool findEntry(const Key & k, Entry *& entry) {
        int probePoint = hash1(k) % size, delta = hash2(k);
        do {
            entry = &table[probePoint];
            probePoint = (probePoint + delta) % size;
        } while (!entry->isEmpty() && entry->key != k);
        return !entry->isEmpty();
    }
A Good Double Hash Function...
- ... is quick to evaluate.
- ... differs from the original hash function.
- ... never evaluates to 0 (mod size).
One good choice: choose a prime p < size and let
    hash2(k) = p − (k mod p)
Double Hashing Example (p = 5)
Insert into a table of size 7, with h1(k) = k % 7 and h2(k) = 5 − (k % 5):
- insert(76): 76 % 7 = 6 → slot 6 (1 probe)
- insert(93): 93 % 7 = 2 → slot 2 (1 probe)
- insert(40): 40 % 7 = 5 → slot 5 (1 probe)
- insert(47): 47 % 7 = 5, occupied; step = 5 − (47 % 5) = 3; try (5 + 3) % 7 = 1 → slot 1 (2 probes)
- insert(10): 10 % 7 = 3 → slot 3 (1 probe)
- insert(55): 55 % 7 = 6, occupied; step = 5 − (55 % 5) = 5; try (6 + 5) % 7 = 4 → slot 4 (2 probes)
Load Factor in Double Hashing
For any λ < 1, double hashing will find an empty slot (given an appropriate table size and hash2).
Search cost appears to approach that of an optimal (random) hash:
- successful search: (1/λ) ln(1/(1 − λ)) probes on average
- unsuccessful search: 1/(1 − λ) probes on average
No primary clustering and no secondary clustering; the price is one extra hash calculation.
Deletion in Open Addressing
Must use lazy deletion! If delete(2) truly empties the slot that 2 occupied, a later find(7) that probes through that slot stops at the hole and never reaches 7 - "Where is it?!" Instead, mark the slot deleted; on insertion, treat a (lazily) deleted item as an empty slot.
The Squished Pigeon Principle
- An insert using Open Addressing cannot work with λ ≥ 1.
- An insert using Open Addressing with quadratic probing may not work with λ ≥ 1/2.
- With Separate Chaining or Open Addressing, large load factors lead to poor performance!
How can we relieve the pressure on the pigeons?
Hint: what happens when we overrun array storage in a {queue, stack, heap}? What else must happen with a hashtable?
Rehashing
When λ gets "too large" (over some constant threshold), rehash all elements into a new, larger table:
- takes O(n), but amortized O(1) as long as we (just about) double the table size on each resize
- spreads keys back out, and may drastically improve performance
- gives us a chance to retune parameterized hash functions
- avoids failure for Open Addressing techniques
- allows arbitrarily large tables starting from a small table
- clears out lazily deleted items
Case Study
Spelling dictionary:
- 30,000 words
- static
- arbitrary(ish) preprocessing time
Goals:
- fast spell checking
- minimal storage
Practical notes:
- almost all searches are successful - why?
- words average about 8 characters in length, so 30,000 words take about 0.25 MB at 8 bytes/word
- pointers are 4 bytes
- there are many regularities in the structure of English words
Case Study: Design Considerations
Possible solutions:
- sorted array + binary search
- Separate Chaining
- Open Addressing + linear probing
Issues:
- Which data structure should we use?
- Which type of hash function should we use?
Case Study: Storage
Assume words are strings and entries are pointers to strings. How many pointers does each structure use?
- Array + binary search
- Separate Chaining
- Open Addressing
Case Study: Analysis

                       storage                                    time
    Binary search      n pointers + words ≈ 360 KB                log2 n ≈ 15 probes per access, worst case
    Separate Chaining  n + n/λ pointers + words (λ = 1: 600 KB)   1 + λ/2 probes per access on average (λ = 1: 1.5 probes)
    Open Addressing    n/λ pointers + words (λ = 0.5: 480 KB)     (1 + 1/(1 − λ))/2 probes per access on average (λ = 0.5: 1.5 probes)

What to do, what to do? ...
Perfect Hashing
When we know the entire key set in advance...
- Examples: programming language keywords, CD-ROM file list, spelling dictionary, etc.
... then perfect hashing lets us achieve:
- Worst-case O(1) time complexity!
- Worst-case O(n) space complexity!
Perfect Hashing Technique
- Static set of n known keys
- Separate chaining, two-level hash
- Primary hash table of size n
- The j-th secondary hash table has size nj², where nj keys hash to slot j in the primary hash table
- Universal hash functions in all hash tables
- Conduct (a few!) random trials, until we get collision-free hash functions
Perfect Hashing Theorems¹
Theorem: If we store n keys in a hash table of size n² using a randomly chosen universal hash function, then the probability of any collision is < 1/2.
Theorem: If we store n keys in a hash table of size m = n using a randomly chosen universal hash function, then E[Σj nj²] < 2n, where nj is the number of keys hashing to slot j.
Corollary: If we store n keys in a hash table of size m = n using a randomly chosen universal hash function and we set the size of each secondary hash table to mj = nj², then:
a) The expected amount of storage required for all secondary hash tables is less than 2n.
b) The probability that the total storage used for all secondary hash tables exceeds 4n is less than 1/2.
¹ Intro to Algorithms, 2nd ed., Cormen, Leiserson, Rivest, Stein
Perfect Hashing Conclusions
The perfect hashing theorems set tight expected bounds on the sizes and collision behavior of all the hash tables (primary and all secondaries).
Conduct a few random trials of universal hash functions, simply varying the UHF parameters, until we get a set of UHFs and associated table sizes which deliver:
- Worst-case O(1) time complexity!
- Worst-case O(n) space complexity!
Extendible Hashing: Cost of a Database Query
The I/O-to-CPU cost ratio is 300-to-1!
Extendible Hashing
A hashing technique for huge data sets:
- optimizes to reduce disk accesses
- each hash bucket fits on one disk block
- better than B-Trees if order is not important - why?
The table contains:
- buckets, each fitting in one disk block, holding the data
- a directory that fits in one disk block, used to hash to the correct bucket
Extendible Hash Table
- A directory entry is a key prefix (the first k bits) and a pointer to the bucket holding all keys starting with that prefix
- Each bucket contains keys matching on their first j ≤ k bits, plus the data associated with each key
Example directory for k = 3 (entries 000 through 111):
- (2) 00001 00011 00100 00110   shared by entries 000 and 001
- (2) 01001 01011 01100         shared by entries 010 and 011
- (3) 10001 10011               entry 100
- (3) 10101 10110 10111         entry 101
- (2) 11001 11011 11100 11110   shared by entries 110 and 111
Inserting (Easy Case)
insert(11011): the directory entry 110 leads to the (2) bucket holding 11001, 11100, 11110, which has room. The key is simply added, giving 11001 11011 11100 11110; the directory and all other buckets are unchanged.
Splitting a Leaf
insert(11000): the (2) bucket shared by entries 110 and 111 already holds 11001 11011 11100 11110 and is full. Split it into two depth-3 buckets: (3) 11000 11001 11011 for prefix 110, and (3) 11100 11110 for prefix 111. Directory entries 110 and 111 now point to separate buckets; the directory itself does not grow.
Splitting the Directory
1. insert(10010): the target bucket (2) 10000 10001 10011 10111 is full, there is no room to insert, and no adoption is possible!
2. Solution: expand the directory (double it, so prefixes grow by one bit).
3. Then it's just a normal split.
If Extendible Hashing Doesn't Cut It
Store only pointers to the items:
+ (potentially) much smaller M
+ fewer items in the directory
− one extra disk access!
Rehash:
+ potentially better distribution over the buckets
+ fewer unnecessary items in the directory
− can't solve the problem if there's simply too much data
What if these don't work? Use a B-Tree to store the directory!
Hash Wrap
Collision resolution:
- Separate Chaining: expand beyond the hashtable via secondary Dictionaries; allows λ > 1
- Open Addressing: expand within the hashtable; secondary probing: {linear, quadratic, double hash}; λ ≤ 1 (by definition!), λ ≤ 1/2 (by preference!)
Rehashing: tunes up the hashtable when λ crosses the line.
Hash functions:
- Simple integer hash: prime table size
- Multiplication method
- Universal hashing: guarantees that no input is always bad
- Perfect hashing: requires a known, fixed keyset; achieves O(1) time and O(n) space - guaranteed!
Extendible hashing:
- for disk-based data
- combine with a B-tree directory if needed