Chapter 2.5: Dictionaries and Hash Tables


Chapter 2.5: Dictionaries and Hash Tables

[Title-slide figure: a bucket array indexed 0-4; buckets 1, 2, and 4 hold the keys 025-612-0001, 981-101-0002, and 451-229-0004, while buckets 0 and 3 are empty.]

Dictionary ADT (§2.5.1)

The dictionary ADT models a searchable collection of key-element items. The main operations of a dictionary are searching, inserting, and deleting items. Multiple items with the same key are allowed.

Applications:
- address book
- credit card authorization
- mapping host names (e.g., cs16.net) to internet addresses (e.g., 128.148.34.101)

Dictionary ADT methods:
- findElement(k): if the dictionary has an item with key k, returns its element; else returns the special element NO_SUCH_KEY
- insertItem(k, o): inserts item (k, o) into the dictionary
- removeElement(k): if the dictionary has an item with key k, removes it from the dictionary and returns its element; else returns the special element NO_SUCH_KEY
- size(), isEmpty()
- keys(), elements()
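To make the ADT concrete, here is a minimal sketch of what the interface might look like in C, with string keys and void* elements. The names (Dict, dict_find, and so on) are illustrative, not the textbook's API:

    #include <stdbool.h>
    #include <stddef.h>

    #define NO_SUCH_KEY NULL   /* special "not found" element */

    typedef struct Dict Dict;  /* opaque type; log file, lookup table,
                                  and hash table can all implement it */

    void  *dict_find(Dict *d, const char *k);              /* findElement(k)   */
    void   dict_insert(Dict *d, const char *k, void *o);   /* insertItem(k, o) */
    void  *dict_remove(Dict *d, const char *k);            /* removeElement(k) */
    size_t dict_size(const Dict *d);
    bool   dict_is_empty(const Dict *d);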

Log File (§2.5.1)

A log file is a dictionary implemented by means of an unsorted sequence: we store the items of the dictionary in a sequence (based on a doubly-linked list or a circular array), in arbitrary order.

Performance:
- insertItem takes O(1) time, since we can insert the new item at the beginning or at the end of the sequence.
- findElement and removeElement take O(n) time, since in the worst case (the item is not found) we traverse the entire sequence looking for an item with the given key.

The log file is effective only for dictionaries of small size, or for dictionaries on which insertions are the most common operations while searches and removals are rarely performed (e.g., the historical record of logins to a workstation).
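The O(1)-insert / O(n)-find trade-off is easy to see in code. A minimal sketch in C, using a singly linked list for brevity (the node layout and function names are illustrative):

    #include <stdlib.h>
    #include <string.h>

    typedef struct Node {
        const char *key;
        void *elem;
        struct Node *next;
    } Node;

    /* insertItem: O(1) -- just prepend, no searching required */
    Node *log_insert(Node *head, const char *k, void *elem) {
        Node *n = malloc(sizeof *n);
        n->key = k; n->elem = elem; n->next = head;
        return n;
    }

    /* findElement: O(n) -- may have to scan the whole sequence */
    void *log_find(Node *head, const char *k) {
        for (Node *p = head; p != NULL; p = p->next)
            if (strcmp(p->key, k) == 0)
                return p->elem;
        return NULL;  /* NO_SUCH_KEY */
    }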

Lookup Table

A lookup table is a dictionary implemented with a sorted sequence: we store the items of the dictionary in an array-based sequence, sorted by key, and we use an external comparator for the keys.

Performance:
- findElement takes O(log n) time, using binary search.
- insertItem takes O(n) time, since in the worst case we have to shift O(n) items to make room for the new item.
- removeElement takes O(n) time, since in the worst case we have to shift O(n) items to compact the sequence after the removal.

Effective for small dictionaries, or for dictionaries where searches are common but insertions and removals are rare (e.g., credit card authorizations).
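The O(log n) findElement is just binary search. A sketch in C over a sorted array of integer keys (illustrative, not the textbook's code):

    /* Returns the index of key k in the sorted array, or -1 (not found). */
    int lookup_find(const int *keys, int n, int k) {
        int lo = 0, hi = n - 1;
        while (lo <= hi) {
            int mid = lo + (hi - lo) / 2;  /* avoids overflow in (lo+hi)/2 */
            if (keys[mid] == k) return mid;
            if (keys[mid] < k)  lo = mid + 1;
            else                hi = mid - 1;
        }
        return -1;  /* NO_SUCH_KEY */
    }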

Hashing (§2.5.2)

Application: word occurrence statistics. Operations needed: insert, find. (A full dictionary needs insert, delete, find.) Are O(log n) comparisons necessary? (No.)

Hashing basic plan:
- create a big array for the items to be stored
- use a function to figure out the storage location from the key (the hash function)
- a collision resolution scheme is necessary

Hash Table Example

A simple hash function: treat the key K as a large integer and compute h(K) = K mod M, where M is the table size; let M be a prime number.

Example: suppose we have M = 101 buckets in the hash table.
- 'abcd' in hex is 0x61626364; converted to decimal it is 1633837924.
- 1633837924 mod 101 = 11, so h('abcd') = 11. Store the key at location 11.
- 'dcba' hashes to 57. 'abbc' also hashes to 57 -- a collision. What to do?

If you have billions of possible keys and only hundreds of buckets, lots of collisions are possible!
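A quick way to check these numbers in C: pack the four ASCII bytes into an unsigned integer and reduce mod M (illustrative code, assuming 4-character keys):

    #include <stdio.h>

    /* Pack a 4-character key into an integer, then hash it mod M. */
    unsigned hash4(const char *key, unsigned M) {
        unsigned k = 0;
        for (int i = 0; i < 4; i++)
            k = (k << 8) | (unsigned char)key[i];
        return k % M;
    }

    int main(void) {
        printf("%u\n", hash4("abcd", 101));  /* 11 */
        printf("%u\n", hash4("dcba", 101));  /* 57 */
        printf("%u\n", hash4("abbc", 101));  /* 57 -- collision */
        return 0;
    }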

Hash Functions (§2.5.3)

A hash function is usually specified as the composition of two functions:
- Hash code map: h1: keys -> integers
- Compression map: h2: integers -> [0, N - 1]

The hash code map is applied first, and the compression map is applied next on the result: h(x) = h2(h1(x)). The goal of the hash function is to "disperse" the keys in an apparently random way.

Hash Code Maps (§2.5.3)

- Memory address: interpret the memory address of the key as an integer.
- Integer cast: interpret the bits of the key as an integer (for short keys).
- Component sum: partition the bits of the key into chunks (e.g., 16 or 32 bits) and sum them, ignoring overflows (for long keys).
- Polynomial accumulation: like component sum, but multiply each term by 1, z, z^2, z^3, ...:
      p(z) = a_0 + a_1 z + a_2 z^2 + ... + a_{n-1} z^{n-1}
  evaluated at a fixed value z, ignoring overflows. p(z) can be computed in O(n) time using Horner's rule, where each term is computed from the previous one in O(1) time:
      p_0(z) = a_{n-1}
      p_i(z) = a_{n-i-1} + z p_{i-1}(z)    (i = 1, 2, ..., n - 1)

Compression Maps (§2.5.4)

- Division: h2(y) = y mod N. The size N of the hash table is usually chosen to be a prime; the reason has to do with number theory and is beyond the scope of this course.
- Multiply, Add and Divide (MAD): h2(y) = (ay + b) mod N, where a and b are nonnegative integers such that a mod N != 0. Otherwise, every integer would map to the same value b.
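Both compression maps are one-liners in C. A sketch (the wraparound of unsigned arithmetic plays the role of "ignoring overflows"):

    /* Division method: table size N should be prime. */
    unsigned compress_div(unsigned y, unsigned N) {
        return y % N;
    }

    /* MAD method: requires a % N != 0, or every y maps to b. */
    unsigned compress_mad(unsigned y, unsigned a, unsigned b, unsigned N) {
        return (a * y + b) % N;   /* unsigned overflow wraps, which is fine here */
    }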

Hashing Strings

How do we compute h('aVeryLongVariableName')? Use Horner's method, reducing mod M = 101 at each step, with base 256 ('a' = 97, 'V' = 86, 'e' = 101, 'r' = 114, ...):

    256 * 97 + 86  = 24918; 24918 % 101 = 72
    256 * 72 + 101 = 18533; 18533 % 101 = 50
    256 * 50 + 114 = 12914; 12914 % 101 = 87
    ...

Scramble by replacing 256 with 117:

    int hash(char *v, int M) {
        int h, a = 117;
        for (h = 0; *v; v++)
            h = (a * h + *v) % M;  /* one Horner step per character */
        return h;
    }

Collisions (§2.5.5)

How likely are collisions? The birthday paradox: with M buckets, expect the first collision after about sqrt(pi*M/2) (about 1.25 sqrt(M)) random insertions:

    M        sqrt(pi*M/2)
    100          12
    1000         40
    10000       125

[Compare the classic birthday paradox: 1.25 sqrt(365) is about 24.]

Experiment: generate random numbers in 0..100:
84 35 45 32 89 1 58 16 38 69 5 90 16 16 53 61 ...
Collision at the 13th number, as predicted.

What to do about collisions?
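A small C experiment along these lines (illustrative; it uses rand(), so the exact count varies with the seed, but the average comes out near 1.25 sqrt(M)):

    #include <stdio.h>
    #include <stdlib.h>

    /* Draw random values in 0..M-1 until one repeats; count the draws. */
    int draws_until_collision(int M) {
        char *seen = calloc(M, 1);
        int count = 0;
        for (;;) {
            int r = rand() % M;
            count++;
            if (seen[r]) break;   /* repeated value: collision */
            seen[r] = 1;
        }
        free(seen);
        return count;
    }

    int main(void) {
        srand(42);  /* arbitrary seed */
        printf("%d\n", draws_until_collision(101));  /* typically around 13 */
        return 0;
    }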

Collision Resolution: Chaining

- Build a linked list for each bucket.
- Linear search within each list.
- Simple, practical, widely used.
- Cuts search time by a factor of M over sequential search.
- But requires extra memory outside of the table.

[Figure: a bucket array indexed 0-4; one bucket holds 025-612-0001 alone, while the colliding keys 451-229-0004 and 981-101-0004 share another bucket's chain.]
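A minimal chaining sketch in C with nonnegative integer keys (table size and names are illustrative):

    #include <stdlib.h>

    #define M 101                      /* number of buckets (prime) */

    typedef struct CNode { int key; struct CNode *next; } CNode;
    CNode *buckets[M];                 /* globals start out NULL */

    void chain_insert(int key) {
        CNode *n = malloc(sizeof *n);
        n->key = key;
        n->next = buckets[key % M];    /* prepend: O(1) */
        buckets[key % M] = n;
    }

    int chain_search(int key) {
        for (CNode *p = buckets[key % M]; p; p = p->next)  /* scan one chain */
            if (p->key == key) return 1;
        return 0;
    }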

Chaining 2

- Insertion time? O(1): prepend to the bucket's list.
- Average search cost, successful search? O(N/2M).
- Average search cost, unsuccessful? O(N/M).
- For large M: constant average search time.
- Worst case: N ("probabilistically unlikely").
- Keep the lists sorted? Insert time becomes O(N/2M), and unsuccessful search time improves to O(N/2M).

Linear Probing (§2.5.5)

Or, we could keep everything in the same table:
- Insert: upon collision, scan forward for a free slot.
- Search: scan the same way (if you reach a free slot first, the search fails).

Runtime? Still O(1) if the table is sparse. But as the table fills, clustering occurs. Skipping c slots instead of 1 doesn't help...
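A linear-probing sketch in C for nonnegative integer keys (illustrative; assumes the table never completely fills):

    #define M 101
    #define EMPTY (-1)

    int slots[M];                      /* call probe_init() before use */

    void probe_init(void) {
        for (int i = 0; i < M; i++) slots[i] = EMPTY;
    }

    void probe_insert(int key) {
        int i = key % M;
        while (slots[i] != EMPTY)      /* collision: try the next slot */
            i = (i + 1) % M;
        slots[i] = key;
    }

    int probe_search(int key) {
        for (int i = key % M; slots[i] != EMPTY; i = (i + 1) % M)
            if (slots[i] == key) return 1;
        return 0;                      /* reached an empty slot: not present */
    }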

Clustering

- Long clusters tend to get longer.
- Precise analysis is difficult.
- Theorem (Knuth):
  Insert cost: approx. (1 + 1/(1 - N/M)^2)/2  (50% full: 2.5 probes; 80% full: 13 probes)
  Search (hit) cost: approx. (1 + 1/(1 - N/M))/2  (50% full: 1.5 probes; 80% full: 3 probes)
  Search (miss): same as insert.
- Too slow when the table gets 70-80% full.

How to reduce/avoid clustering?

Double Hashing

- Use a second hash function to compute the probe increment sequence.
- The analysis is extremely difficult, but the behavior is about like ideal (random probing).
- Theorem (Guibas-Szemeredi):
  Insert: approx. 1/(1 - N/M)
  Search (hit): approx. ln(1/(1 - N/M)) / (N/M)
  Search (miss): same as insert.
- Not too slow until the table is about 90% full.

Example of Double Hashing

Consider a hash table of size N = 13 storing integer keys, with collisions handled by double hashing:
- h(k) = k mod 13
- d(k) = 7 - (k mod 7)

Insert the keys 18, 41, 22, 44, 59, 32, 31, 73, in this order. On a collision at slot i, probe slots i + d(k), i + 2d(k), ... (mod 13). The resulting table:

    index:  0   1   2   3   4   5   6   7   8   9  10  11  12
    key:   31   -  41   -   -  18  32  59  73  22  44   -   -
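A sketch in C that reproduces this example (illustrative; assumes the table never fills; note d(k) is never 0, since k mod 7 < 7):

    #include <stdio.h>

    #define SIZE 13
    #define EMPTY (-1)

    int t[SIZE];

    void dh_insert(int k) {
        int i = k % SIZE;
        int d = 7 - (k % 7);        /* second hash: the probe increment */
        while (t[i] != EMPTY)
            i = (i + d) % SIZE;     /* jump by d, not by 1 */
        t[i] = k;
    }

    int main(void) {
        int keys[] = {18, 41, 22, 44, 59, 32, 31, 73};
        for (int i = 0; i < SIZE; i++) t[i] = EMPTY;
        for (int i = 0; i < 8; i++) dh_insert(keys[i]);
        for (int i = 0; i < SIZE; i++) printf("%d ", t[i]);
        printf("\n");               /* 31 -1 41 -1 -1 18 32 59 73 22 44 -1 -1 */
        return 0;
    }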

Dynamic Hash Tables

Suppose you are building a symbol table for a compiler. How big should you make the hash table? If you don't know in advance how big a table to make, what can you do? Grow the table when it "fills" (e.g., reaches 50% full):
- Make a new table of twice the size.
- Make a new hash function.
- Re-hash all of the items into the new table.
- Dispose of the old table.
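A growth sketch in C for an open-addressing table of integers (illustrative: error handling is omitted, and real code would move to the next prime rather than simply doubling):

    #include <stdlib.h>

    #define EMPTY (-1)

    int *tab;      /* assume: cap slots, all EMPTY, before the first insert */
    int cap, n;    /* capacity and number of stored items */

    static void insert_raw(int *t, int c, int k) {
        int i = k % c;
        while (t[i] != EMPTY) i = (i + 1) % c;  /* linear probing */
        t[i] = k;
    }

    static void grow(void) {
        int newcap = 2 * cap;                   /* twice the size */
        int *newtab = malloc(newcap * sizeof *newtab);
        for (int i = 0; i < newcap; i++) newtab[i] = EMPTY;
        for (int i = 0; i < cap; i++)           /* re-hash every item */
            if (tab[i] != EMPTY) insert_raw(newtab, newcap, tab[i]);
        free(tab);                              /* dispose of the old table */
        tab = newtab;
        cap = newcap;
    }

    void insert(int k) {
        if (2 * (n + 1) > cap) grow();          /* keep load factor <= 50% */
        insert_raw(tab, cap, k);
        n++;
    }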

Table Growing Analysis

- Worst-case insertion: Theta(n), to re-hash all the items.
- Can we make any better statement? Average case: O(1), since insertions n through 2n cost O(n) (on average) for the insertions and O(2n) (on average) for the rehashing, giving O(n) total -- constant per insertion, with about 3x the constant.
- Amortized analysis? The result above is really an amortized bound on the rehashing: any sequence of j insertions into an empty table has O(j) cost for the insertions and O(2j) for the rehashing. Or think of it as billing 3 time units for each insertion and storing 2 in the bank, to be withdrawn later to pay for rehashing.

Separate Chaining vs. Double Hashing

Assume the same amount of space for keys and links (use pointers for long or variable-length keys).

Separate chaining: 1M buckets, 4M keys
- 4M links in nodes
- 9M words total; average search time 2

Double hashing in the same space: 4M items, 9M buckets in the table
- average search time: 1/(1 - 4/9) = 1.8, i.e., 10% faster

Double hashing in the same time: 4M items, average search time 2
- space needed: 8M words, since 1/(1 - 4/8) = 2 (11% less space)

Deletion

How to implement delete() with separate chaining?
- Simply unlink the unwanted item. Runtime? Same as search().

How to implement delete() with linear probing?
- Can't just erase the item. (Why not? A later search would stop at the newly emptied slot and miss any keys stored further along the cluster.)
- Either re-hash the entire cluster, or mark the slot as deleted.

How to delete() with double hashing?
- Re-hashing a cluster doesn't work -- which "cluster"?
- Mark the slot as deleted.
- Every so often, re-hash the entire table to prune the "dead wood".
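A tombstone sketch for the linear-probing table above (the DELETED marker and names are illustrative):

    #define M 101
    #define EMPTY   (-1)
    #define DELETED (-2)  /* tombstone: reusable by insert, but does not stop a search */

    int slots[M];         /* as in the linear-probing sketch, initialized to EMPTY */

    void probe_delete(int key) {
        for (int i = key % M; slots[i] != EMPTY; i = (i + 1) % M)
            if (slots[i] == key) { slots[i] = DELETED; return; }
    }

    int probe_search(int key) {
        for (int i = key % M; slots[i] != EMPTY; i = (i + 1) % M)
            if (slots[i] == key) return 1;  /* DELETED slots are skipped over */
        return 0;
    }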

Comparisons

Separate chaining advantages:
- Idiot-proof (degrades gracefully).
- No large chunks of memory needed.

Why use hashing?
- Fastest dictionary implementation: constant-time search and insert, on average.
- Easy to implement; built into many environments.

Why not use hashing?
- No performance guarantees.
- Uses extra space.
- Doesn't support pred, succ, sort, etc. -- there is no notion of order.

Where did perl "hashes" get their name?

Hashing Summary

- Separate chaining: easiest to deploy.
- Linear probing: fastest (but takes more memory).
- Double hashing: least memory (but takes more time, to compute the second hash function).
- Dynamic (grow): handles any number of inserts at less than 3x the time.

Curious use of hashing: the early Unix spell checker (back in the days of 3M machines...).

Timings (construction and search-miss, by method and number of keys; at 200k the fixed-size probing tables overflow, so those entries are absent):

                 Construction            Search miss
    keys    Chain Probe   Dbl  Grow   Chain Probe   Dbl  Grow
    5k        1     4      4     3      1     0      1     0
    50k      18    11     12    22     15     8      8     8
    100k     35    21     23    47     45    23     21    15
    190k     79   106     59   155    144  2194    261    30
    200k     84     -      -   159    156     -      -    33

File tamper test Problem: you want to guarantee that a file hasn’t been tampered with. How?

Password verification Problem: you run a website. You need to verify people’s logins. How? Possible techniques? Ethics?

Cache filenames

Problem: caching FlexScores.

    $outfileName = $outDirectory . "/" . $cfg->hymn . "-" . $cfg->instrument;
    if ($custom)
        $outfileName .= "-" . substr(md5($cfg->asXML()), 0, 6);
    $outfileNameLy = $outfileName . ".ly";

Hash Tables in C#

The System.Collections namespaces provide the basic data structures, including Dictionary (in System.Collections.Generic) and ArrayList. (See handout.)

GUI Programming in C#

Microsoft has provided a number of different GUI programming environments over the years:
- MFC (Microsoft Foundation Classes): an old C++-based library for GUI programming; legacy Windows.
- Windows Forms: the .NET wrapper for access to the native Windows interface. Replaces MFC and resembles Java's AWT. Obsolete (not the most recent), but still used.
- WPF (Windows Presentation Foundation): managed .NET code, primarily C# and VB. Previously known as Avalon; released as part of .NET 3.0. An XML file defines the user interface: WPF uses DirectX and an XML-oriented object model, XAML, to define the user interface and link the various UI elements. Better support for media, animation, etc. Runtime libraries are included with all recent versions of Windows. Silverlight is mostly a subset of WPF for embedded Web controls.

Turing Test You want to generate random text that sounds like it makes sense. How?

Google ngrams

Bag-of-words context; word2vec.

Word2vec records how frequently each word appears in different contexts:
- One row per word, with hundreds of columns for different contexts; a context might be a bag of the words appearing before and after.
- The full matrix (rows for words, columns for contexts) would be enormous, millions by millions, but it is decomposed into much smaller dimensions.
- The result is a vector for each word with hundreds of entries, encoding something about its meaning.
- If two words have similar vectors, they have similar meanings.
- You can add and subtract vectors to add and subtract meanings: [king] - [man] + [woman] is very similar to [queen].
- Paris is to France as ?? is to England: how to figure that out? [Paris] - [France] + [England] should give a vector very similar to the one for "London".
- Run billions of words through; each time a pair of words co-occurs in the same context, move their vectors a hair closer. [This is effectively a stochastic singular value decomposition of the PMI matrix, but faster.]
- You could then ask questions like: Aaron is the brother of Moses. Who are the brothers of Jesus?

Babble You want to generate natural-sounding random text. How?