Lecture 11 (Oct 6)
Goals: hashing, hash functions, chaining, closed hashing, applications of hashing

Computing a hash function for a string
Horner's rule: (( … (a_0 x + a_1) x + a_2) x + … + a_{n-2}) x + a_{n-1}

int hash( const string & key )
{
    int hashVal = 0;
    for( int i = 0; i < key.length( ); i++ )
        hashVal = 37 * hashVal + key[ i ];
    return hashVal;
}

Computing a hash function for a string

int myhash( const HashedObj & x ) const
{
    int hashVal = hash( x );
    hashVal %= theLists.size( );
    return hashVal;
}

Alternatively, we can apply % theLists.size( ) after each iteration of the loop in the hash function:

int myHash( const string & key )
{
    int hashVal = 0;
    int s = theLists.size( );
    for( int i = 0; i < key.length( ); i++ )
        hashVal = (37 * hashVal + key[ i ]) % s;
    return hashVal % s;
}

Analysis of open hashing/chaining

Open hashing uses more memory than open addressing (because of pointers), but is generally more efficient in terms of time. If the arriving keys are random and the hash function is good, keys will be distributed evenly across the buckets, so each list will be roughly the same size.

Let n = the number of keys present in the hash table, and m = the number of buckets (lists) in the hash table. Then each bucket will hold roughly n/m elements. If we can estimate n and choose m to be ~ n, then the average bucket size will be O(1). (Most buckets will have a small number of items.)
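A minimal sketch of a chaining table, assuming string keys, a vector of std::list buckets, and the Horner-rule hash from the previous slides (the class and member names here are illustrative, not the textbook's):

#include <list>
#include <string>
#include <vector>
using namespace std;

class ChainedHashTable
{
  public:
    explicit ChainedHashTable( int buckets = 101 ) : theLists( buckets ) { }

    bool contains( const string & key ) const
    {
        const list<string> & whichList = theLists[ myHash( key ) ];
        for( const string & s : whichList )       // scan only the bucket key hashes to
            if( s == key )
                return true;
        return false;
    }

    bool insert( const string & key )
    {
        list<string> & whichList = theLists[ myHash( key ) ];
        for( const string & s : whichList )
            if( s == key )
                return false;                     // already present
        whichList.push_back( key );               // chain the new key onto its bucket
        return true;
    }

  private:
    vector< list<string> > theLists;              // one linked list per bucket

    int myHash( const string & key ) const
    {
        int hashVal = 0;
        for( int i = 0; i < (int) key.length( ); i++ )
            hashVal = ( 37 * hashVal + key[ i ] ) % (int) theLists.size( );
        return hashVal;
    }
};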

Analysis continued

Average time per dictionary operation: with m buckets and n elements in the dictionary, there are on average n/m elements per bucket. The ratio α = n/m is called the load factor.

insert, search, and remove each take O(1 + n/m) = O(1 + α) time (the 1 accounts for the hash function computation). If we can choose m ~ n, we get constant time per operation on average. (Assuming each element is equally likely to hash to any bucket, the running time is constant, independent of n.)

Closed Hashing

Associated with closed hashing is a rehash strategy: "If we try to place x in bucket h(x) and find it occupied, we find alternative locations h_1(x), h_2(x), etc. We try each in order; if none is empty, the table is full." h(x) is called the home bucket.

The simplest rehash strategy is called linear hashing (linear probing):
    h_i(x) = (h(x) + i) % D

In general, the collision resolution strategy generates a sequence of hash table slots (the probe sequence) that could hold the record; we test each slot until we find an empty one (probing).
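A small sketch of how the linear probe sequence can be generated, assuming D is the table size and homeBucket = h(x) (the function name is illustrative):

// i-th probe under linear probing: h_i(x) = (h(x) + i) % D
int linearProbe( int homeBucket, int i, int D )
{
    return ( homeBucket + i ) % D;
}

// e.g., with D = 8 and h(x) = 6, the probe sequence is 6, 7, 0, 1, 2, ...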

Closed Hashing (open addressing)

Example: m = 8; keys a, b, c, d have hash values h(a) = 3, h(b) = 0, h(c) = 4, h(d) = 3.

    bucket:   0   1   2   3   4   5   6   7
    contents: b           a   c

Where do we insert d? Bucket 3 is already filled. Probe sequence using linear hashing:
    h_1(d) = (h(d)+1) % 8 = 4 % 8 = 4   (occupied by c)
    h_2(d) = (h(d)+2) % 8 = 5 % 8 = 5   <- empty, d is placed here
    h_3(d) = (h(d)+3) % 8 = 6 % 8 = 6
    etc.
The probe sequence wraps around to the beginning of the table.

Operations Using Linear Hashing

Test for membership (search): examine h(k), h_1(k), h_2(k), …, until we find k, reach an empty bucket, or return to the home bucket.
    case 1: successful search -> return true
    case 2: unsuccessful search (empty bucket reached) -> return false
    case 3: unsuccessful search and the table is full (we come back to the home bucket) -> return false

If deletions are not allowed, this strategy works! What if deletions are allowed?

Operations Using Linear Hashing

What if deletions are allowed? If we reach an empty bucket, we cannot be sure that k is not somewhere further along the probe sequence: the now-empty bucket may have been occupied when k was inserted. We need a special placeholder, DELETED, to distinguish a bucket that was never used from one that once held a value.

Implementation of closed hashing
Code slightly modified from the text.

// CONSTRUCTION: an approximate initial size or default of 101
//
// ******************PUBLIC OPERATIONS*********************
// bool insert( x )       --> Insert x
// bool remove( x )       --> Remove x
// bool contains( x )     --> Return true if x is present
// void makeEmpty( )      --> Remove all items
// int hash( string str ) --> Global method to hash strings

There is no distinction between the hash functions used in closed hashing and open hashing (i.e., they can be used in either context interchangeably).

template <typename HashedObj>
class HashTable
{
  public:
    explicit HashTable( int size = 101 ) : array( nextPrime( size ) )
      { makeEmpty( ); }

    bool contains( const HashedObj & x ) const
      { return isActive( findPos( x ) ); }

    void makeEmpty( )
    {
        currentSize = 0;
        for( int i = 0; i < array.size( ); i++ )
            array[ i ].info = EMPTY;
    }

    bool insert( const HashedObj & x )
    {
        int currentPos = findPos( x );
        if( isActive( currentPos ) )
            return false;

        array[ currentPos ] = HashEntry( x, ACTIVE );

        if( ++currentSize > array.size( ) / 2 )
            rehash( );                 // rehash when load factor exceeds 0.5

        return true;
    }

    bool remove( const HashedObj & x )
    {
        int currentPos = findPos( x );
        if( !isActive( currentPos ) )
            return false;

        array[ currentPos ].info = DELETED;   // lazy deletion
        return true;
    }

    enum EntryType { ACTIVE, EMPTY, DELETED };

  private:
    struct HashEntry
    {
        HashedObj element;
        EntryType info;

        HashEntry( const HashedObj & e = HashedObj( ), EntryType i = EMPTY )
          : element( e ), info( i ) { }
    };

    vector<HashEntry> array;
    int currentSize;

    bool isActive( int currentPos ) const
      { return array[ currentPos ].info == ACTIVE; }

    int findPos( const HashedObj & x ) const
    {
        int offset = 1;
        // int offset = s_hash( x );          /* double hashing */
        int currentPos = myhash( x );

        while( array[ currentPos ].info != EMPTY &&
               array[ currentPos ].element != x )
        {
            currentPos += offset;             // compute the i-th probe
            // offset += 2;                   /* quadratic probing */
            if( currentPos >= array.size( ) )
                currentPos -= array.size( );
        }
        return currentPos;
    }

How should the code be modified if the table can be full?
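One possible answer, sketched here as an assumption rather than the textbook's solution: stop probing once every slot has been examined. (With the rehash-at-0.5 policy above this situation should not arise, since the table is grown well before it fills.)

    int findPos( const HashedObj & x ) const
    {
        int offset = 1;
        int currentPos = myhash( x );
        int probes = 0;                               // number of slots examined so far

        while( probes < (int) array.size( ) &&
               array[ currentPos ].info != EMPTY &&
               array[ currentPos ].element != x )
        {
            ++probes;
            currentPos += offset;
            if( currentPos >= (int) array.size( ) )
                currentPos -= array.size( );
        }
        // If the table was full and x absent, the returned slot does not match x;
        // the caller must still check isActive / EMPTY before using it.
        return currentPos;
    }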

Performance Analysis - Worst Case

Initialization: O(m), where m = number of buckets.
Insert and search: O(n), where n = number of elements currently in the table.
    Suppose close to n elements in the table form one long chain. Now we want to search for x, and say x is not in the table. It may happen that h(x) is the start address of that very long chain; then it will take O(c) time to conclude failure, with c ~ n.

No better than a linear list for maintaining a dictionary! THIS IS NOT A RARE OCCURRENCE WHEN THE TABLE IS NEARLY FULL. (This is why we rehash when α reaches some value like 0.5.)

Example: clustering under linear probing

h(k) = k % 11, table size 11. Situation I: buckets 0, 1, 2 and 7, 8, 9 are occupied; buckets 3, 4, 5, 6, 10 are empty.

What if the next element has home bucket 0? It goes to bucket 3. The same holds for elements with home bucket 1 or 2! Only a record with home position 3 would land there anyway. So p = 4/11 that the next record goes to bucket 3. Similarly, records hashing to 7, 8, or 9 will end up in bucket 10. Only records hashing to 4 will end up in 4 (p = 1/11); same for 5 and 6.

Situation II: insert 1052 (home bucket 7). It probes 7, 8, 9 and lands in bucket 10. The two clusters have now merged (wrapping around the table), and the next element ends up in bucket 3 with p = 8/11 (home buckets 7, 8, 9, 10, 0, 1, 2, 3 all lead there).

Performance Analysis - Average Case

Distinguish between successful and unsuccessful searches:
    Delete = successful search for the record to be deleted
    Insert = unsuccessful search along its probe sequence
The expected cost of hashing is a function of how full the table is: the load factor α = n/m.

Random probing model vs. linear probing model

It can be shown that the average costs under linear hashing (probing) are:
    Insertion:  (1/2) (1 + 1/(1 - α)^2)
    Deletion:   (1/2) (1 + 1/(1 - α))

Random probing: suppose we create a sequence of hash functions h_1, h_2, …, all independent of each other, and use them in turn as the probe sequence. Then:
    Insertion:  1/(1 - α)
    Deletion:   (1/α) log(1/(1 - α))
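To make the difference concrete, a small sketch (my own illustration, not from the lecture) that evaluates the insertion formulas for a few load factors:

#include <cstdio>

int main( )
{
    // Expected probes for insertion under each model, as a function of the load factor alpha.
    double alphas[ ] = { 0.25, 0.5, 0.75, 0.9 };
    for( double a : alphas )
    {
        double linear = 0.5 * ( 1.0 + 1.0 / ( ( 1.0 - a ) * ( 1.0 - a ) ) );  // linear probing
        double random = 1.0 / ( 1.0 - a );                                    // random probing
        printf( "alpha = %.2f   linear: %6.2f   random: %6.2f\n", a, linear, random );
    }
    // e.g., at alpha = 0.9 linear probing needs ~50.5 probes on average vs. 10 for random probing.
    return 0;
}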

Random probing – analysis of insertion (unsuccessful search)

What is the expected number of times one should roll a die until getting a 4? Answer: 6 (probability of success = 1/6). More generally, if the probability of success is p, the expected number of trials until the first success is 1/p. Probes are assumed to be independent. In the case of insertion, success means finding an empty slot to insert into.

Proof for the insertion cost 1/(1 - α)

Recall: the geometric distribution involves a sequence of independent random experiments, each with outcome success (with probability p) or failure (with probability 1 - p). We repeat the experiment until we get a success. The expected number of trials performed is 1/p.

In the case of insertion, success means finding an empty slot. Since a fraction α of the slots are occupied, the probability of success is 1 - α. Thus, the expected number of probes = 1/(1 - α).
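A quick simulation sketch (added for illustration, not part of the lecture) that checks the 1/(1 - α) figure by probing uniformly random slots of a partially filled table until an empty one is found:

#include <cstdio>
#include <random>
#include <vector>

int main( )
{
    const int m = 100000;                     // table size
    const double alpha = 0.75;                // load factor
    std::vector<bool> occupied( m, false );
    std::mt19937 gen( 42 );
    std::uniform_int_distribution<int> slot( 0, m - 1 );

    // Fill a fraction alpha of the table at random positions.
    int filled = 0;
    while( filled < (int)( alpha * m ) )
    {
        int s = slot( gen );
        if( !occupied[ s ] ) { occupied[ s ] = true; ++filled; }
    }

    // Random probing: count probes until an empty slot is hit, averaged over many trials.
    const int trials = 100000;
    long long totalProbes = 0;
    for( int t = 0; t < trials; t++ )
    {
        int probes = 0;
        while( true ) { ++probes; if( !occupied[ slot( gen ) ] ) break; }
        totalProbes += probes;
    }
    printf( "average probes: %.3f   predicted 1/(1-alpha): %.3f\n",
            (double) totalProbes / trials, 1.0 / ( 1.0 - alpha ) );
    return 0;
}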

Improved Collision Resolution

Linear probing: h_i(x) = (h(x) + i) % D
    All buckets in the table become candidates for a new record before the probe sequence returns to the home position; clustering of records leads to long probe sequences.

Linear probing with increment c > 1: h_i(x) = (h(x) + i*c) % D
    c is a constant other than 1; records with adjacent home buckets will not follow the same probe sequence.

Double hashing: h_i(x) = (h(x) + i*g(x)) % D
    g is another hash function, used as the increment amount. This avoids the clustering problems associated with linear probing.
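A sketch of double hashing in the style of the probe functions above, assuming the table size D is prime and the secondary hash g never evaluates to 0 (the names and the choice of g are illustrative):

// Secondary hash used as the probe increment; the "R - key % R" form,
// with R a prime smaller than the table size, is one common choice.
int g( int key, int R /* e.g. 7 */ )
{
    return R - ( key % R );          // always in 1..R, never 0
}

// i-th probe under double hashing: h_i(x) = (h(x) + i*g(x)) % D
int doubleHashProbe( int homeBucket, int i, int key, int D, int R )
{
    return ( homeBucket + i * g( key, R ) ) % D;
}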

Comparison with Closed Hashing

Worst-case performance is O(n) for both. Average case is a small constant in both cases when α is small.
Closed hashing: uses less space.
Open hashing (chaining): behavior is not sensitive to the load factor; also, there is no need to resize the table since memory is dynamically allocated.