Data Structures and Algorithms for Information Processing


1 Data Structures and Algorithms for Information Processing
Lecture 10: Searching II

2 Outline
One more open-addressing scheme: ordered hashing (the tough-schoolboy problem)
Analysis of hashing algorithms
Consistent hashing
Radix searching

3 Open vs. Chained Hashing
How big should the table be? Open addressing can be inconvenient when the number of insertions and deletions is unpredictable, because the table can overflow. A simple solution to overflow: resize (double) the table, rehashing everything into the new table. Use Knuth's approach and double hashing to avoid clustering.
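As an illustrative sketch (not code from the lecture), here is what open addressing with double hashing and table doubling might look like in Python. The class name `OpenTable`, the power-of-two table size, and the odd second-hash step (which keeps the step coprime with the table size) are all assumptions of this sketch:

```python
# Open addressing with double hashing; the table doubles whenever the
# load factor alpha = n/m would exceed 1/2.
class OpenTable:
    def __init__(self, m=8):
        self.m = m                    # table size (power of two here)
        self.slots = [None] * m
        self.n = 0                    # number of stored keys

    def _probe(self, key):
        h1 = hash(key) % self.m
        h2 = (hash(key) // self.m) | 1   # odd step: coprime with power-of-two m
        for i in range(self.m):
            yield (h1 + i * h2) % self.m

    def insert(self, key):
        if self.n + 1 > self.m // 2:     # would push alpha past 1/2: resize
            old = [k for k in self.slots if k is not None]
            self.m *= 2
            self.slots = [None] * self.m
            self.n = 0
            for k in old:                # rehash everything into the new table
                self.insert(k)
        for idx in self._probe(key):
            if self.slots[idx] == key:   # already present
                return
            if self.slots[idx] is None:
                self.slots[idx] = key
                self.n += 1
                return

    def contains(self, key):
        for idx in self._probe(key):
            if self.slots[idx] is None:
                return False
            if self.slots[idx] == key:
                return True
        return False
```

Because the step is coprime with the table size, each probe sequence visits every slot, and keeping alpha at or below 1/2 bounds the expected probe counts discussed later in the lecture.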

4 Variant: Ordered Hashing
In linear probing, we stop searching when we find an empty cell or a record whose key equals the search key. In ordered hashing, keys along each probe path are kept in decreasing order, so we can also stop as soon as we see a key smaller than the search key: the search key cannot appear later on that path (tough-schoolboy hashing).

5 Tough Schoolboy hashing
There are 13 chairs in the classroom, numbered 0-12.
Each boy has a preferred seat.
Each boy has a jump value.
Boys later in the alphabet are bigger, and the bigger boy wins any fight over a chair.

6 Class in the morning: inserts
Don prefers 3, jumps 2
Bill prefers 5, jumps 4
Al prefers 3, jumps 6
Joe prefers 3, jumps 4

7 Don takes his preferred chair, 3.

8 Bill takes his preferred chair, 5. (Don in 3.)

9 Al also prefers chair 3, but Don is bigger, so Al can't sit there. (Don in 3, Bill in 5.)

10 Al jumps 6 from chair 3 to chair 9, which is empty, and sits there. (Don in 3, Bill in 5, Al in 9.)

11 Joe also prefers chair 3, and Joe is bigger than Don, so Joe kicks out Don. (Bill in 5, Al in 9.)

12 Joe takes chair 3. Don jumps 2 to chair 5, where he kicks out the smaller Bill. (Joe in 3, Al in 9.)

13 Bill jumps 4 from chair 5 to chair 9, where Al and Bill argue and Al gets kicked out. (Joe in 3, Don in 5, Bill in 9.)

14 Al jumps 6 from chair 9, wrapping around to chair 2 (9 + 6 = 15, mod 13). Final seating: Al in 2, Joe in 3, Don in 5, Bill in 9.

15 Searching the classroom
Search for Don, Bill, Al, and Joe: each starts at his preferred chair and follows his jumps until he is found. Then search for Ken, who prefers 3 and jumps 1: chair 3 holds Joe, who comes earlier in the alphabet than Ken, so Ken, being bigger, would have kicked Joe out if he were present; the search fails after a single probe.
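The schoolboy rules above can be sketched in Python (an illustrative translation, not the lecture's code): `seat` places a boy, letting bigger boys displace smaller ones, and `find` stops at an empty chair or at a boy earlier in the alphabet than the one being sought.

```python
# Tough-schoolboy (ordered) hashing: chairs is a list of 13 seats;
# jumps maps each boy's name to his jump value.
def seat(chairs, name, pos, jumps):
    """Seat a boy; bigger (alphabetically later) boys displace smaller ones."""
    while chairs[pos] is not None:
        if chairs[pos] == name:
            return                        # already seated
        if chairs[pos] < name:            # seated boy is smaller: kick him out
            name, chairs[pos] = chairs[pos], name
        pos = (pos + jumps[name]) % len(chairs)   # displaced boy moves on
    chairs[pos] = name

def find(chairs, name, pos, jump):
    """Return (found, probes); stop at an empty chair or at a smaller boy."""
    probes = 0
    while chairs[pos] is not None:
        probes += 1
        if chairs[pos] == name:
            return True, probes
        if chairs[pos] < name:            # name would have displaced him: absent
            return False, probes
        pos = (pos + jump) % len(chairs)
    return False, probes + 1              # count the probe of the empty chair
```

Seating Don (3, 2), Bill (5, 4), Al (3, 6), and Joe (3, 4) in that order reproduces the final arrangement on slide 14 (Al in 2, Joe in 3, Don in 5, Bill in 9), and a search for Ken (prefers 3, jumps 1) fails after a single probe.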

16 Variant: Ordered Hashing
This reduces the cost of an unsuccessful search to about the same as a successful search, which is useful for applications that expect a large number of unsuccessful searches.

17 Summary of Basic Searching
Hashing is generally preferred to binary-tree methods because it is faster. But binary search trees are truly dynamic (no advance estimate of the size is needed), they give worst-case guarantees (a hash function could be lousy), and they support more operations, such as sorted traversal.

18 Time Analysis Open-address hashing methods store N records in a table of size M, with M > N. The performance of the operations depends on the load factor alpha = N/M. For chained hashing, alpha may be greater than 1.

19 Linear Probing
Open-address hashing with linear probing requires, on average:
(1/2)(1 + 1/(1 - alpha)^2) probes for an unsuccessful search
(1/2)(1 + 1/(1 - alpha)) probes for a successful search
E.g., for alpha = 2/3 we make 5 probes for an average unsuccessful search, and 2 for a successful search.

20 Double Hashing
Open-address hashing with double hashing requires, on average:
1/(1 - alpha) probes for an unsuccessful search
-ln(1 - alpha)/alpha probes for a successful search (natural log)
E.g., for alpha = 2/3 we make 3 probes for an average unsuccessful search, and about 1.65 for a successful search.

21 Chained Hashing
Chained hashing requires, on average:
1 + alpha probes for an unsuccessful search
1 + alpha/2 probes for a successful search
E.g., for alpha = 2/3 we make about 1.67 probes for an average unsuccessful search, and 1.33 for a successful search.
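The probe-count formulas above are easy to check numerically. This small Python sketch (illustrative, not from the slides; the function names are invented) evaluates each pair at alpha = 2/3:

```python
# Average probe counts (unsuccessful, successful) for each scheme.
from math import log

def linear_probing(alpha):
    return 0.5 * (1 + 1 / (1 - alpha) ** 2), 0.5 * (1 + 1 / (1 - alpha))

def double_hashing(alpha):
    return 1 / (1 - alpha), -log(1 - alpha) / alpha   # natural log

def chaining(alpha):
    return 1 + alpha, 1 + alpha / 2

for name, f in [("linear", linear_probing),
                ("double", double_hashing),
                ("chained", chaining)]:
    unsucc, succ = f(2 / 3)
    print(f"{name}: unsuccessful {unsucc:.2f}, successful {succ:.2f}")
```

At alpha = 2/3 this prints 5.00/2.00 for linear probing, 3.00/1.65 for double hashing, and 1.67/1.33 for chaining, matching the numbers on the slides.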

22 Time Analysis These formulas require significant mathematical analysis, which we won’t go into.

23 Average Number of Probes: Successful Search (chart omitted)

24 Consistent Hashing
Not covered in older data-structures texts.
Developed in 1997 by Karger et al. at MIT; gave birth to Akamai.
At the heart of Chord (a peer-to-peer distributed hash table); solves placement problems in peer-to-peer networks.
Used by Amazon Dynamo and other distributed storage systems as a lightweight alternative to databases: data is stored in memory on many machines rather than on a disk controlled by a DBMS.

25 Consistent Hashing
Given a machine's IP address, we can hash that address with a cryptographic hash; there will likely be no collisions. Given an object to store, we can hash that object with the same cryptographic hash; again, collisions are very unlikely. SHA-1, e.g., generates values between 0 and 2^160 - 1, so we can imagine objects and computers arranged on a circle, all with unique SHA-1 values. Create a balanced BST organized by the SHA-1 hashes of the IPs, storing (SHA1(ip), IP address) pairs. To place an object, hash it and do a lookup in the tree: no exact match will occur, but we can find the successor node fast, and the machine at that IP is responsible for the object.

26 Consistent Hashing
Machines and keys share the same address space (SHA-1 would produce values between 0 and 2^160 - 1). The BST stores (hash(machine IP), IP) pairs and is accessible globally, perhaps from a central player or distributed.
Insert machine at IP: hash(ip); add the (hash(ip), ip) pair to the BST; take the appropriate keys from the successor.
Delete machine at IP: find the successor in the BST; move all of the machine's items to the successor; remove the machine's IP.
Insert object: hash(object); look up the successor's IP in the BST; send the object to the machine with that IP.
Lookup object: hash(object); look up the successor's IP in the BST; request the object from the machine with that IP.
Why a BST and not a hash table? The BST is ordered, so a successor is easy to find. The BST could be stored in each node, but that scales poorly; the next slide shows an approach that scales.
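A minimal single-process sketch of these operations, with a sorted list standing in for the BST (both give cheap successor lookup; here binary search via `bisect` does the job). The `Ring` class and its method names are invented for this illustration, and the data movement between machines is only noted in comments:

```python
# Illustrative consistent-hashing ring.
import hashlib
from bisect import bisect_right, insort

def sha1(s):
    return int(hashlib.sha1(s.encode()).hexdigest(), 16)

class Ring:
    def __init__(self):
        self.points = []    # sorted SHA-1 values of the machines' IPs
        self.ip_of = {}     # SHA-1 value -> IP

    def add_machine(self, ip):
        hv = sha1(ip)
        insort(self.points, hv)
        self.ip_of[hv] = ip
        # (a real system would now take this machine's keys from its successor)

    def remove_machine(self, ip):
        # (a real system would first move this machine's items to its successor)
        hv = sha1(ip)
        self.points.remove(hv)
        del self.ip_of[hv]

    def lookup(self, key):
        """IP responsible for key: the successor of sha1(key), wrapping around."""
        i = bisect_right(self.points, sha1(key)) % len(self.points)
        return self.ip_of[self.points[i]]
```

The payoff of the scheme is visible here: when a machine leaves the ring, only the keys it owned move to its successor; every other key's lookup result is unchanged.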

27 A 16-node Chord Network
Each machine maintains a table of log(n) entries, where n is the size of the ring (16 in this example), so A's table has four entries. If you were using SHA-1, you would need 160 entries. Scales well. Suppose we ask machine A to find a value stored on B. Diagram by Seth Terashima.

28 A 16-node Chord Network
Each hop at least halves the remaining distance around the ring, so in the worst case a lookup takes O(log n) hops. Diagram by Seth Terashima.

29 Radix Searching For many applications, keys can be thought of as numbers. Searching methods that take advantage of digital properties of these keys are called radix searches. Radix searches treat keys as numbers in base M (the radix) and work with individual digits.

30 Radix Searching Radix methods provide reasonable worst-case performance without the complication of balanced trees, provide a way to handle variable-length keys, and are competitive with BSTs and hash tables.

31 The Simplest Radix Search
Digital search trees are like BSTs but branch according to the key's bits: the key comparison is replaced by a function that accesses the key's next bit. They work for variable-length keys, but the data is not kept sorted by key.
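A digital search tree can be sketched as follows (illustrative Python, not the lecture's code; `DSTNode` and the function names are invented). Keys are b-bit integers; we branch on bits from most significant to least, and each key is stored at the first empty node on its path:

```python
# Digital search tree: full key comparison at every node, branching on bits.
class DSTNode:
    def __init__(self, key):
        self.key = key
        self.left = None      # next bit is 0
        self.right = None     # next bit is 1

def dst_insert(root, key, bits):
    """Insert key, branching on bits from most to least significant."""
    if root is None:
        return DSTNode(key)
    node = root
    for i in range(bits - 1, -1, -1):
        if node.key == key:               # full key comparison at each node
            return root
        if (key >> i) & 1:
            if node.right is None:
                node.right = DSTNode(key)
                return root
            node = node.right
        else:
            if node.left is None:
                node.left = DSTNode(key)
                return root
            node = node.left
    return root

def dst_search(root, key, bits):
    node, i = root, bits - 1
    while node is not None:
        if node.key == key:
            return True
        node = node.right if (key >> i) & 1 else node.left
        i -= 1
    return False
```

Note that lookup does a full key comparison at every node on the path, which motivates the radix trie on the following slides, where the key comparison happens only once, at a leaf.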

32 Digital Search Example (tree figure omitted)

33 Digital Search Trees Require O(log N) comparisons on average for lookup. Why? Require b comparisons in the worst case for a tree built from N random b-bit keys.

34 Digital Search Problem: at each node we make a full key comparison, which may be expensive, e.g., for very long keys. Solution: store keys only at the leaves and use radix expansion to do the intermediate comparisons.

35 Radix Tries
Used for retrieval (hence "trie"). Internal nodes are used for branching; external nodes are used for the final key comparison and to store the data.

36 Radix Trie Example: keys and their 5-bit codes are A 00001, C 00011, E 00101, R 10010, S 10011 (tree figure omitted)

37 Radix Tries The left subtree holds all keys whose leading bit is 0; the right subtree holds all keys whose leading bit is 1. An insert or search requires O(log N) bit comparisons in the average case, and b bit comparisons in the worst case. Note that the trie is in key order, which gives O(n) sorting.

38 Radix Tries Problem: lots of extra nodes for keys that differ only in their low-order bits (see the R and S nodes in the example above). This is addressed by Patricia trees, which allow a "lookahead" to the next relevant bit. Patricia: Practical Algorithm To Retrieve Information Coded In Alphanumeric. In the slides that follow, the entire alphabet is included in the indexes; we review two radix tries and then a Patricia tree.

39 Radix Trie: insert "ARA" into an empty trie; each internal node indexes on the alphabet (#, A, E, I, P, R) (figure omitted)

40 Radix Trie: insert "AREA" (figure omitted)

41 Radix Trie: insert "A" (figure omitted)
42 Radix Trie containing A, ARA, ARE, AREA, EERIE, EIRE, ERA, ERE, ERIE, IPA, IRE, PEAR, PEER, PER, PIER (figure omitted)

43 Radix Trie containing ADAM, LOGGED, LOGGERHEAD, LOGGIA, LOGGING. What's the problem? The shared prefix LOGG forces a chain of one-way branches (figure omitted)

44 Patricia Tree containing the same keys (ADAM, LOGGED, LOGGERHEAD, LOGGIA, LOGGING): skip counts collapse the one-way branches of the radix trie (figure omitted)

