Dictionaries and Hash Tables
[Figure: a bucket array of size 5 holding phone-number keys such as 451-229-0004, 981-101-0002, 025-612-0001]

© 2004 Goodrich, Tamassia Hash Tables


2 Hashtables
Which is better, an array or a linked list? An array is better if we know in advance how many objects will be stored, because we can access each element of an array in O(1) time. But a linked list is better if the number of stored objects changes significantly over time, or if there are complex "neighbor" relationships between objects. What to do? Combine the advantages of both: welcome hashtables!

6 Hashtables
Wouldn't it be great if arrays could be made of infinite size without penalty? Then every object would have its own place in the array. But this is impractical.

9 Hashtables
So we "fold" the array: the number of spaces for holding objects is small, but each space may hold many objects. How do we maintain all the objects belonging to the same space? Answer: a linked list.

12 Hashtables
How do we locate an object?
1. Use a hash function to locate an array element.
2. Follow the linked list from that element to find the object.

18 Hashtables
Now the problem is how to choose the hash function and the hash table size. Let's do the hash function first. The idea is to take some object identity and convert it to a number that looks random. Consider:

    // valfun is an externally supplied accessor that extracts a
    // numeric value from the object; size is the table size.
    long hashvalue(void *object) {
        long n = (long)valfun->value(object);
        return (n * 357) % size;
    }

21 Table Size?
What about the table size? If the table size is N and the number of objects stored is M, then the average length of a linked list is M/N. So if M is about 2N, the average length is 2, meaning that locating an object takes O(1) expected time, just like for arrays!

22 Hash ADT
The hash ADT models a searchable collection of key-element items. The main operations of a hash system (e.g., a dictionary) are searching, inserting, and deleting items. Multiple items with the same key are allowed.
Applications: address books, credit card authorization, mapping host names to Internet addresses.
Hash (dictionary) ADT operations:
find(k): if the dictionary has an item with key k, returns the position of this element; else returns a null position.
insertItem(k, o): inserts item (k, o) into the dictionary.
removeElement(k): if the dictionary has an item with key k, removes it from the dictionary and returns its element; an error occurs if there is no such element.
size(), isEmpty()
keys(), elements()

23 A Baseline: Dictionary as an Unsorted Sequence
Before hash tables, consider a dictionary implemented by means of an unsorted sequence: we store the items in a sequence (based on a doubly-linked list or a circular array), in arbitrary order.
Performance:
insertItem takes O(1) time, since we can insert the new item at the beginning or at the end of the sequence.
find and removeElement take O(n) time, since in the worst case (the item is not found) we traverse the entire sequence looking for an item with the given key.

24 Hash Functions and Hash Tables
A hash function h maps keys of a given type to integers in a fixed interval [0, N − 1]. Example: h(x) = x mod N is a hash function for integer keys. The integer h(x) is called the hash value of key x.
A hash table for a given key type consists of:
a hash function h
an array (called the table) of size N
When implementing a dictionary with a hash table, the goal is to store item (k, o) at index i = h(k).
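A minimal sketch of this idea, ignoring collisions for now (later slides handle them); the table size N and the item type are illustrative assumptions:

```cpp
#include <array>
#include <optional>
#include <string>
#include <utility>

// A toy hash table storing item (k, o) at index h(k) = k mod N.
// Collisions are ignored here: a later insert to the same cell overwrites.
constexpr int N = 13;                       // illustrative table size
std::array<std::optional<std::pair<int, std::string>>, N> table;

int h(int k) { return ((k % N) + N) % N; }  // hash function for integer keys

void insertItem(int k, const std::string& o) { table[h(k)] = {k, o}; }

const std::string* find(int k) {
    auto& cell = table[h(k)];
    if (cell && cell->first == k) return &cell->second;
    return nullptr;                         // empty cell or different key
}
```

The `((k % N) + N) % N` form keeps the index nonnegative even for negative keys, since `%` in C++ can return a negative remainder.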

25 Example
We design a hash table for a dictionary storing items (TC-no, Name), where TC-no (a national identity number, analogous to a social security number) is a nine-digit positive integer. Our hash table uses an array of size N = 10,000 and the hash function h(x) = the last four digits of x.

26 Hash Functions
A hash function is usually specified as the composition of two functions:
Hash code map: h1: keys → integers
Compression map: h2: integers → [0, N − 1]
The hash code map is applied first, and the compression map is applied next on the result, i.e., h(x) = h2(h1(x)). The goal of the hash function is to "disperse" the keys in an apparently random way.

27 Hash Code Maps
Memory address: we reinterpret the memory address of the key object as an integer. Good in general, except for numeric and string keys.
Integer cast: we reinterpret the bits of the key as an integer. Suitable for keys of length less than or equal to the number of bits of the integer type (e.g., char, short, int, and float on many machines).
Component sum: we partition the bits of the key into components of fixed length (e.g., 16 or 32 bits) and we sum the components, ignoring overflows. Suitable for numeric keys of fixed length greater than or equal to the number of bits of the integer type (e.g., long and double on many machines).
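A sketch of the component-sum hash code for a 64-bit key, split into two 32-bit components (the function name and component width are illustrative choices):

```cpp
#include <cstdint>

// Component-sum hash code: split a 64-bit key into two 32-bit
// components and add them, ignoring overflow (unsigned wrap-around).
uint32_t component_sum(uint64_t key) {
    uint32_t low  = static_cast<uint32_t>(key);        // low 32 bits
    uint32_t high = static_cast<uint32_t>(key >> 32);  // high 32 bits
    return low + high;                                 // overflow wraps silently
}
```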

28 Hash Code Maps (cont.)
Polynomial accumulation: we partition the bits of the key into a sequence of components of fixed length (e.g., 8, 16, or 32 bits): a0, a1, ..., a(n-1). We evaluate the polynomial
p(z) = a0 + a1 z + a2 z^2 + ... + a(n-1) z^(n-1)
at a fixed value z, ignoring overflows. Especially suitable for strings (e.g., the choice z = 33 gives at most 6 collisions on a set of 50,000 English words).
Polynomial p(z) can be evaluated in O(n) time using Horner's rule: the following polynomials are successively computed, each from the previous one in O(1) time:
p0(z) = a(n-1)
pi(z) = a(n-i-1) + z * p(i-1)(z)   (i = 1, 2, ..., n-1)
We have p(z) = p(n-1)(z).

29 Hash Functions Using Horner's Rule
[Figure: a polynomial string hash, evaluated by Horner's rule]
Izmir University of Economics

30 Horner's Method: Compute a Hash Value
A3 X^3 + A2 X^2 + A1 X + A0 can be evaluated as (((A3) X + A2) X + A1) X + A0.
Compute the hash value of the string "junk" (characters 'j', 'u', 'n', 'k') with X = 128, applying mod (%) after each multiplication to keep intermediate values small.
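A sketch of this computation; the table size 10007 and the function name are illustrative assumptions:

```cpp
#include <string>

// Horner's rule: treat the characters of the key as polynomial
// coefficients and evaluate at X = 128, taking mod after each
// multiplication so intermediate values stay small.
unsigned int horner_hash(const std::string& key, unsigned long long x,
                         unsigned long long tableSize) {
    unsigned long long h = 0;
    for (char c : key)
        h = (h * x + static_cast<unsigned char>(c)) % tableSize;
    return static_cast<unsigned int>(h);
}
```

For "junk" this computes ((('j'·128 + 'u')·128 + 'n')·128 + 'k') mod tableSize; taking the mod at every step gives the same result as taking it once at the end, by the rules of modular arithmetic.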

31 Compression Maps
Division: h2(y) = y mod N. The size N of the hash table is usually chosen to be a prime; the reason has to do with number theory and is beyond the scope of this course.
Multiply, Add and Divide (MAD): h2(y) = (a y + b) mod N, where a and b are nonnegative integers such that a mod N ≠ 0. Otherwise, every integer would map to the same value b.
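A sketch of the MAD compression map; N, a, and b below are illustrative choices satisfying a mod N ≠ 0:

```cpp
// MAD (Multiply, Add and Divide) compression map:
// h2(y) = (a*y + b) mod N, with a mod N != 0.
const long N = 13;   // table size (prime)
const long a = 7;    // 7 mod 13 != 0
const long b = 4;

// The extra "+ N) % N" keeps the result nonnegative for negative y.
long mad(long y) { return ((a * y + b) % N + N) % N; }
```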

32 Collision Handling
Collisions occur when different elements are mapped to the same cell.
Chaining: let each cell in the table point to a linked list of the elements that map there. Chaining is simple, but requires additional memory outside the table.

33 Handling Collisions
Separate chaining
Open addressing:
  Linear probing
  Quadratic probing
  Double hashing

34 Separate Chaining
Keep a list of the elements that hash to the same value; new elements can be inserted at the front of the list.
Example: insert the keys x = i^2 for i = 0, 1, 2, ..., with hash(x) = x % 10.
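A sketch of separate chaining for this example (table size 10, inserting the squares 0², 1², ..., 9²); the structure and member names are illustrative:

```cpp
#include <algorithm>
#include <list>
#include <vector>

// Separate chaining: each table cell holds a linked list (a "chain")
// of all keys that hash to that cell. New keys go to the front in O(1).
struct ChainedTable {
    std::vector<std::list<int>> buckets;
    explicit ChainedTable(int n) : buckets(n) {}
    int hash(int x) const { return x % (int)buckets.size(); }
    void insert(int x) { buckets[hash(x)].push_front(x); }
    bool find(int x) const {
        const auto& chain = buckets[hash(x)];
        return std::find(chain.begin(), chain.end(), x) != chain.end();
    }
};
```

Inserting the squares 0, 1, 4, 9, 16, 25, 36, 49, 64, 81 puts 1 and 81 in the same chain (both hash to cell 1), and likewise 9 and 49 (cell 9), 4 and 64 (cell 4), 16 and 36 (cell 6).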

35 Performance of Separate Chaining
Load factor of a hash table: λ = N/M (number of elements in the table / table size). So the average length of a list is λ.
Search time = time to evaluate the hash function + time to traverse the list.
Unsuccessful search: λ nodes are examined on average.
Successful search: 1 + ½ (N − 1)/M (the node searched plus half the expected number of other nodes in its list) ≈ 1 + ½ λ.
Observation: the table size by itself is not important, but the load factor is. For separate chaining, keep λ ≈ 1.

36 Separate Chaining: Disadvantages
Parts of the array might never be used.
As chains get longer, search time increases to O(N) in the worst case.
Constructing new chain nodes is relatively expensive (still constant time, but the constant is high).
Is there a way to use the "unused" space in the array instead of using chains to make more space?

37 Linear Probing
Open addressing: the colliding item is placed in a different cell of the table.
Linear probing handles collisions by placing the colliding item in the next (circularly) available table cell. Each table cell inspected is referred to as a "probe".
Colliding items lump together, causing future collisions to produce longer sequences of probes.
Example: h(x) = x mod 13; insert keys 18, 41, 22, 44, 59, 32, 31, 73, in this order.

38 Linear Probing: i = h(x) = key % tableSize
Insert 89, 18, 49, 58, 9 into a table of size 10:
Hash(89, 10) = 9; Hash(18, 10) = 8; Hash(49, 10) = 9; Hash(58, 10) = 8; Hash(9, 10) = 9
Strategy on collision: try A[(i+1) % N], A[(i+2) % N], A[(i+3) % N], ..., employing wrap-around.
Result: 49 collides at cell 9 and wraps to cell 0; 58 collides at cell 8, probes 9 and 0, and lands in cell 1; 9 collides at cell 9, probes 0 and 1, and lands in cell 2. Final table: A[0] = 49, A[1] = 58, A[2] = 9, A[8] = 18, A[9] = 89.
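A sketch reproducing this insertion sequence; using -1 as the "empty" marker is an illustrative choice:

```cpp
#include <vector>

// Linear probing: on a collision at cell i, try (i+1) % N,
// (i+2) % N, ... until an empty cell is found.
struct LinearProbeTable {
    std::vector<int> a;                       // -1 marks an empty cell
    explicit LinearProbeTable(int n) : a(n, -1) {}
    void insert(int x) {
        int i = x % (int)a.size();
        while (a[i] != -1)                    // probe with wrap-around
            i = (i + 1) % (int)a.size();
        a[i] = x;
    }
};
```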

39 Quadratic Probing: i = h(x) = key % tableSize
Same keys 89, 18, 49, 58, 9 and hash values as before.
Strategy on collision: try A[(i+1^2) % N], A[(i+2^2) % N], A[(i+3^2) % N], ..., employing wrap-around.
Result: 49 wraps to cell 0; 58 probes 9 (occupied), then (8 + 4) % 10 = 2; 9 probes 0 (occupied), then (9 + 4) % 10 = 3. Final table: A[0] = 49, A[2] = 58, A[3] = 9, A[8] = 18, A[9] = 89.
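The same sketch with a quadratic probe sequence; again, -1 as the "empty" marker is illustrative:

```cpp
#include <vector>

// Quadratic probing: on a collision at cell i, try (i + 1*1) % N,
// (i + 2*2) % N, (i + 3*3) % N, ...
struct QuadProbeTable {
    std::vector<int> a;                       // -1 marks an empty cell
    explicit QuadProbeTable(int n) : a(n, -1) {}
    void insert(int x) {
        int n = (int)a.size();
        int i = x % n;
        int k = 0;
        while (a[(i + k * k) % n] != -1)      // k-th probe offset is k^2
            ++k;
        a[(i + k * k) % n] = x;
    }
};
```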

40 Search with Linear Probing
Consider a hash table A that uses linear probing.
find(k): we start at cell h(k) and probe consecutive locations until one of the following occurs: an item with key k is found, or an empty cell is found, or N cells have been unsuccessfully probed.

Algorithm find(k)
  i <- h(k)
  p <- 0
  repeat
    c <- A[i]
    if c = ∅
      return Position(null)
    else if c.key() = k
      return Position(c)
    else
      i <- (i + 1) mod N
      p <- p + 1
  until p = N
  return Position(null)

41 Updates with Linear Probing
To handle insertions and deletions, we introduce a special object, called AVAILABLE, which replaces deleted elements.
removeElement(k): we search for an item with key k. If such an item (k, o) is found, we replace it with the special item AVAILABLE and return the position of this item; else, we return a null position.
insertItem(k, o): we throw an exception if the table is full. We start at cell h(k) and probe consecutive cells until one of the following occurs: a cell i is found that is either empty or stores AVAILABLE, or N cells have been unsuccessfully probed. We store item (k, o) in cell i.
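A sketch of deletion with an AVAILABLE marker (a "tombstone"); the sentinel values are illustrative. The point is that find must skip over AVAILABLE cells rather than stop there, so keys inserted past a later-deleted cell remain reachable:

```cpp
#include <vector>

// Open addressing with a deletion marker ("tombstone").
// EMPTY stops a search; AVAILABLE does not, but may be reused on insert.
const int EMPTY = -1, AVAILABLE = -2;

struct ProbeTable {
    std::vector<int> a;
    explicit ProbeTable(int n) : a(n, EMPTY) {}
    int n() const { return (int)a.size(); }
    void insert(int x) {
        int i = x % n();
        while (a[i] != EMPTY && a[i] != AVAILABLE)
            i = (i + 1) % n();
        a[i] = x;
    }
    bool find(int x) const {
        int i = x % n();
        for (int p = 0; p < n(); ++p) {
            if (a[i] == EMPTY) return false;  // truly empty: stop
            if (a[i] == x) return true;       // found
            i = (i + 1) % n();                // tombstone or other key: continue
        }
        return false;                         // probed all N cells
    }
    void remove(int x) {
        int i = x % n();
        for (int p = 0; p < n(); ++p) {
            if (a[i] == EMPTY) return;
            if (a[i] == x) { a[i] = AVAILABLE; return; }
            i = (i + 1) % n();
        }
    }
};
```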

42 Exercise: A hash function

// A simple additive hash for string objects:
// sum the character codes, then compress with mod.
#include <string>
using std::string;

unsigned int hash(const string &key, int tableSize) {
    unsigned int hashVal = 0;
    for (size_t i = 0; i < key.length(); i++)
        hashVal += key[i];
    return hashVal % tableSize;
}

43 Exercise: Another hash function

// A hash routine for string objects.
// key is the string to hash; tableSize is the size of the hash table.
#include <string>
using std::string;

unsigned int hash(const string &key, int tableSize) {
    unsigned int hashVal = 0;
    for (size_t i = 0; i < key.length(); i++)
        hashVal = hashVal * 37 + key[i];
    return hashVal % tableSize;
}

44 Performance of Hashing
In the worst case, searches, insertions and removals on a hash table take O(n) time. The worst case occurs when all the keys inserted into the dictionary collide.
The load factor α = n/N affects the performance of a hash table. Assuming that the hash values are like random numbers, it can be shown that the expected number of probes for an insertion with open addressing is 1/(1 − α).
The expected running time of all the dictionary ADT operations in a hash table is O(1). In practice, hashing is very fast provided the load factor is not close to 100%.
Applications of hash tables: small databases, compilers, browser caches.

45 Quadratic Probing With Prime Table Size
Theorem: if quadratic probing is used and the table size is prime, then a new element can always be inserted if the table is at least half empty, and during the insertion no cell is probed twice.

46 Exercise: Find a prime >= n

// Return the smallest prime number at least as large as n (n > 2).
bool isPrime(int n) {
    if (n < 2) return false;
    for (int d = 2; d * d <= n; d++)
        if (n % d == 0) return false;
    return true;
}

int nextPrime(int n) {
    if (n % 2 == 0)
        n++;                      // make n odd; even n > 2 cannot be prime
    for ( ; !isPrime(n); n += 2)
        ;                         // empty loop body
    return n;
}

47 Universal Hashing
A family of hash functions is universal if, for any two distinct keys 0 ≤ j, k ≤ M − 1, Pr(h(j) = h(k)) ≤ 1/N.
Choose p as a prime between M and 2M. Randomly select 0 < a < p and 0 ≤ b < p, and define h(k) = ((a k + b) mod p) mod N.
Theorem: the set of all functions h, as defined here, is universal.
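A sketch of one member of this family with concrete, illustrative parameters (p = 101, a = 3, b = 5, N = 10; in practice a and b are drawn at random):

```cpp
// One member of the universal family h(k) = ((a*k + b) mod p) mod N.
// p is a prime at least as large as the key range; a and b are fixed
// here for illustration, satisfying 0 < a < p and 0 <= b < p.
const long p = 101, a = 3, b = 5, N = 10;

long h(long k) { return ((a * k + b) % p) % N; }
```

Universality is a property of the whole family: for a single fixed (a, b) some key pair may well collide, but averaged over all random choices of a and b, any fixed pair of distinct keys collides with probability at most 1/N.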

48 Proof of Universality (Part 1)
Let f(k) = (a k + b) mod p and g(k) = k mod N, so h(k) = g(f(k)).
f causes no collisions: suppose f(k) = f(j) with k < j. Then a(j − k) ≡ 0 (mod p), i.e., a(j − k) is a multiple of p. But a and j − k are both positive and less than p, and p is prime, so p divides neither factor and a(j − k) cannot be a nonzero multiple of p. Hence j = k, a contradiction. Thus f causes no collisions.

49 Proof of Universality (Part 2)
Since f causes no collisions, only g can make h cause collisions.
Fix two distinct keys j and k, and fix a value x = f(j). Of the p − 1 possible values y = f(k) different from x, the number with g(y) = g(x) is at most ⌈p/N⌉ − 1 ≤ (p − 1)/N.
Since there are p choices for x, the number of pairs (f(j), f(k)) that cause a collision between j and k is at most p(p − 1)/N.
There are p(p − 1) functions h (p − 1 choices of a times p choices of b), each yielding a distinct pair (f(j), f(k)). So the probability of a collision is at most [p(p − 1)/N] / [p(p − 1)] = 1/N.
Therefore, the set of possible h functions is universal.