
1 Hash-Based Indexes

2 Introduction
- As for any index, there are 3 alternatives for what a data entry k* can be:
  - the actual data record, with key value k
  - <k, rid of a record with key value k>
  - <k, list of rids of records with key value k>
- The choice is orthogonal to the indexing technique.
- Hash-based indexes are best for equality selections; they cannot support range searches.
- Static and dynamic hashing techniques exist; the trade-offs are similar to those for B+ trees.

3 Static Hashing
- The number of primary pages is fixed; they are allocated sequentially and never de-allocated; overflow pages are used if needed.
- h(key) mod M = the bucket to which the data entry with key k belongs (M = number of buckets).
[Figure: h(key) mod M maps a key to one of the primary bucket pages 0 .. M-1; each primary page may have a chain of overflow pages.]

4 Static Hashing (Contd.)
- Buckets contain data entries.
- The hash function works on the search key field of record r and must distribute values over the range 0 .. M-1.
  - h(key) = (a * key + b) usually works well.
  - a and b are constants; a lot is known about how to tune h.
- Long overflow chains can develop and degrade performance.
  - Extendible and Linear Hashing are dynamic techniques that fix this problem.
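To make the scheme concrete, here is a minimal Python sketch of a static hash file with overflow chains. The class name, the page capacity, and the constants a, b, M are illustrative assumptions, not values from the slides:

```python
# Minimal sketch of static hashing with overflow chains.
# PAGE_CAPACITY and the constants a, b, M are illustrative assumptions.
PAGE_CAPACITY = 4
a, b, M = 31, 7, 8            # h(key) = (a * key + b); M primary buckets

def h(key):
    return a * key + b

class StaticHashFile:
    def __init__(self):
        # buckets[i] is a list of pages: page 0 is the primary page,
        # later pages model the overflow chain (never de-allocated).
        self.buckets = [[[]] for _ in range(M)]

    def insert(self, key):
        pages = self.buckets[h(key) % M]
        if len(pages[-1]) == PAGE_CAPACITY:    # last page is full:
            pages.append([])                   # add an overflow page
        pages[-1].append(key)

    def search(self, key):
        # Equality search: scan the primary page and its overflow chain.
        return any(key in page for page in self.buckets[h(key) % M])
```

Note how a long chain directly lengthens every search on that bucket; this is exactly the degradation the dynamic schemes below avoid.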

5 Extendible Hashing
- Situation: a bucket (primary page) becomes full. Why not re-organize the file by doubling the number of buckets?
  - Reading and writing all pages is expensive!
  - Idea: use a directory of pointers to buckets; double the number of buckets by doubling the directory, splitting just the bucket that overflowed!
  - The directory is much smaller than the file, so doubling it is much cheaper. Only one page of data entries is split. No overflow page!
  - The trick lies in how the hash function is adjusted!

6 Example
- The directory is an array of size 4.
- To find the bucket for r, take the last `global depth' bits of h(r).
  - If h(5) = binary 101, it is in the bucket pointed to by 01.
- Insert: if the bucket is full, split it (allocate a new page, re-distribute).
  - If necessary, double the directory. (As we will see, splitting a bucket does not always require doubling; we can tell by comparing the global depth with the local depth of the split bucket.)
[Figure: directory of size 4 (global depth 2) pointing to data pages; buckets A-D each have local depth 2. Bucket A: 4*, 12*, 32*, 16*; Bucket B: 1*, 5*, 21*, 13*; Bucket C: 10*; Bucket D: 15*, 7*, 19*.]
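In code, "take the last global depth bits" is just a bit mask; a tiny sketch (the function name is an assumption):

```python
def directory_slot(hashed_key, global_depth):
    # Keep only the last `global_depth` bits, e.g. h(5) = 0b101
    # with global depth 2 gives slot 0b01.
    return hashed_key & ((1 << global_depth) - 1)
```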

7 Insert 20 = 10100: Causes Doubling
[Figure: before the insert, the directory and buckets are as on the previous slide. Key values in binary: 4 = 100, 12 = 1100, 20 = 10100, 16 = 10000, 32 = 100000. Inserting 20 overflows Bucket A, which splits into Bucket A (32*, 16*) and Bucket A2, its `split image' (4*, 12*, 20*).]

8 Insert h(r) = 20 (Causes Doubling)
[Figure: after the split, the directory doubles to 8 entries (000-111) and the global depth becomes 3. Bucket A (local depth 3): 32*, 16*; its `split image' Bucket A2 (local depth 3): 4*, 12*, 20*. Buckets B, C, D keep local depth 2 and their entries (B: 1*, 5*, 21*, 13*; C: 10*; D: 15*, 7*, 19*), with two directory entries each pointing to them.]

9 Points to Note
- 20 = binary 10100. The last 2 bits (00) tell us r belongs in A or A2; the last 3 bits are needed to tell which.
  - Global depth of the directory: the max number of bits needed to tell which bucket an entry belongs to.
  - Local depth of a bucket: the number of bits used to determine whether an entry belongs to this bucket.
- When does a bucket split cause directory doubling?
  - Before the insert, local depth of the bucket = global depth. The insert causes the local depth to become > global depth; the directory is doubled by copying it over and `fixing' the pointer to the split image page. (The use of least significant bits enables efficient doubling via copying of the directory!)
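The following sketch puts the pieces together: insert, bucket split, and directory doubling via copying, using the least-significant-bits scheme from the slides. All names and the bucket capacity are illustrative assumptions, and keys stand in directly for hashed values:

```python
BUCKET_CAPACITY = 4   # illustrative assumption

class Bucket:
    def __init__(self, local_depth):
        self.local_depth = local_depth
        self.keys = []

class ExtendibleHash:
    def __init__(self):
        self.global_depth = 1
        self.directory = [Bucket(1), Bucket(1)]

    def insert(self, key):
        bucket = self.directory[key & ((1 << self.global_depth) - 1)]
        if len(bucket.keys) < BUCKET_CAPACITY:
            bucket.keys.append(key)
            return
        if bucket.local_depth == self.global_depth:
            # The split needs one more bit than the directory has:
            # double the directory by copying it over; entry i and
            # entry i + old_size now share a bucket.
            self.directory += self.directory
            self.global_depth += 1
        # Split only the overflowing bucket, on its new high bit.
        bucket.local_depth += 1
        bit = 1 << (bucket.local_depth - 1)
        image = Bucket(bucket.local_depth)           # `split image'
        image.keys = [k for k in bucket.keys if k & bit]
        bucket.keys = [k for k in bucket.keys if not k & bit]
        # Fix the directory pointers whose new bit is set.
        for i, b in enumerate(self.directory):
            if b is bucket and i & bit:
                self.directory[i] = image
        self.insert(key)   # retry; may split again if all keys collide
```

Note the cheap case the slide describes: when local depth < global depth, the split merely re-points existing directory entries and no doubling happens.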

10 Comments on Extendible Hashing
- If the directory fits in memory, an equality search is answered with one disk access.
  - A 100MB file with 100 bytes/record contains 1,000,000 records (as data entries); with 4K pages, that is about 40 entries per page, hence 25,000 pages and 25,000 directory elements. Chances are high that the directory will fit in memory.
  - The directory grows in spurts and, if the distribution of hash values is skewed, can grow large.
  - Multiple entries with the same hash value cause problems!
- Delete: if removal of a data entry makes a bucket empty, the bucket can be merged with its `split image'. If each directory element points to the same bucket as its split image, the directory can be halved.

11 Linear Hashing (LH)
- This is another dynamic hashing scheme, an alternative to Extendible Hashing.
- LH handles the problem of long overflow chains without using a directory, and handles duplicates.
- Idea: use a family of hash functions h_0, h_1, h_2, ...
  - h_i(key) = h(key) mod (2^i * N), where N = initial number of buckets.
  - If N = 2^d0 for some d0, h_i consists of applying h and looking at the last d_i bits, where d_i = d0 + i.
  - h_{i+1} doubles the range of h_i (similar to directory doubling).
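The family can be written directly from the definition; a sketch, where h is any base hash function and N is assumed to be a power of 2:

```python
def h_i(key, i, N, h=hash):
    # h_i ranges over 2^i * N buckets. With N = 2^d0 this is the
    # same as keeping the last d0 + i bits of h(key).
    return h(key) % ((2 ** i) * N)
```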

12 Linear Hashing (Contd.)
- LH avoids a directory by using overflow pages and choosing the bucket to split round-robin.
  - Splitting proceeds in `rounds'. A round ends when all N_R buckets that were initial for round R have been split. Buckets 0 to Next-1 have been split; buckets Next to N_R have yet to be split.
  - The current round number is called Level.
  - Search: to find the bucket for data entry r, compute h_Level(r). If h_Level(r) is in the range `Next to N_R', r belongs there. Else, r could belong to bucket h_Level(r) or to bucket h_Level(r) + N_R; apply h_{Level+1}(r) to find out (see the sketch below).
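The search rule reads directly as code; a sketch, where level, next_split, and N are the file's current state (the names are assumptions):

```python
def lh_bucket(key, level, next_split, N, h=hash):
    n_r = N * (2 ** level)        # N_R: buckets at the start of this round
    b = h(key) % n_r              # h_Level
    if b < next_split:            # bucket b was already split this round:
        b = h(key) % (2 * n_r)    # h_{Level+1} picks b or b + N_R
    return b
```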

13 Overview of LH File
[Figure: snapshot in the middle of a round. h_Level(search key value) ranges over the buckets that existed at the beginning of this round. Buckets 0 to Next-1 have already been split in this round; if h_Level(key) falls in that range, h_{Level+1} must be used to decide whether the entry is in the bucket or in its `split image'. The bucket at Next is the next to be split; `split image' buckets are created (through splitting of other buckets) in this round.]

14 Linear Hashing (Contd.)
- Insert: find the bucket by applying h_Level / h_{Level+1}:
  - If the bucket to insert into is full: add an overflow page and insert the data entry; (maybe) split the Next bucket and increment Next.
- Any criterion can be chosen to `trigger' a split, e.g., split when num_records / num_pages > threshold.
- Since buckets are split round-robin, long overflow chains don't develop!
- The doubling of the directory in Extendible Hashing is similar; here, the switching of hash functions is implicit in how the number of bits examined is increased. (A sketch of insert and split follows.)
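A compact sketch of insert and round-robin split, using "split Next when an insert causes overflow" as the trigger (as in the example on the next slide). For simplicity the key itself serves as its hash value, each bucket is a single list covering primary page plus overflow chain, and all names and the capacity are assumptions:

```python
PAGE_CAPACITY = 4   # illustrative assumption

class LinearHashFile:
    def __init__(self, N=4):
        self.N, self.level, self.next = N, 0, 0
        self.buckets = [[] for _ in range(N)]

    def _addr(self, key):
        n_r = self.N * (2 ** self.level)
        b = key % n_r                       # h_Level
        return b if b >= self.next else key % (2 * n_r)  # h_{Level+1}

    def insert(self, key):
        b = self._addr(key)
        overflows = len(self.buckets[b]) >= PAGE_CAPACITY
        self.buckets[b].append(key)         # overflow page if needed
        if overflows:                       # trigger: insert caused overflow
            self._split_next()

    def _split_next(self):
        # Split bucket Next (round-robin), NOT the bucket that
        # overflowed; h_{Level+1} re-distributes its entries.
        n_r = self.N * (2 ** self.level)
        old = self.buckets[self.next]
        self.buckets.append([])             # new bucket Next + N_R
        self.buckets[self.next] = [k for k in old if k % (2 * n_r) == self.next]
        self.buckets[-1] = [k for k in old if k % (2 * n_r) != self.next]
        self.next += 1
        if self.next == n_r:                # all N_R buckets split:
            self.level += 1                 # the round ends
            self.next = 0
```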

15 Example of Linear Hashing: split Next if an insert causes overflow
- On a split, h_{Level+1} is used to re-distribute entries.
[Figure: Level = 0, N = 4, Next = 0. h_0 uses the last 2 bits (00, 01, 10, 11); h_1 uses the last 3 bits (shown for illustration only; the buckets are the actual contents of the linear hashed file). Primary pages: bucket 00: 32*, 44*, 36*; bucket 01: 9*, 25*, 5*; bucket 10: 14*, 18*, 10*, 30*; bucket 11: 31*, 35*, 11*, 7*. Inserting 43 = 101011 overflows bucket 11: 43* goes to an overflow page, bucket Next = 0 is split using h_1 into bucket 000 (32*) and new bucket 100 (44*, 36*), and Next becomes 1.]

16 Insert 37, 29, 22, 66, 34, 50
- 37 = 100101, 29 = 11101, 22 = 10110, 66 = 1000010, 34 = 100010, 50 = 110010.
[Figure: the starting state is the file after inserting 43 (Level = 0, Next = 1, 43* on an overflow page of bucket 11).]

17 Example: End of a Round
- Insert 37, 29, 22, 66, 34, 50: 37 = 100101, 29 = 11101, 22 = 10110, 66 = 1000010, 34 = 100010, 50 = 110010.
[Figure: left, the file after inserting 37, 29, and 22 (Level = 0, Next = 3, with overflow pages). Right, after inserting 66, 34, and 50 the round ends: bucket 11, the last of the N_R = 4 initial buckets, is split, Level becomes 1, Next resets to 0, and the file has 8 primary buckets (000-111), with 50* on an overflow page.]

18 LH Described as a Variant of EH
- The two schemes are actually quite similar:
  - Begin with an EH index whose directory has N elements.
  - Use overflow pages, and split buckets round-robin.
  - The first split is at bucket 0. (Imagine the directory being doubled at this point.) But directory elements 1 and N+1, 2 and N+2, etc., still point to the same buckets. So we need only create directory element N, which now differs from element 0. When bucket 1 splits, create directory element N+1, etc.
- So, the directory can double gradually. Also, primary bucket pages are created in order. If they are allocated in sequence too (so that finding the i'th is easy), we actually don't need a directory! Voila, LH.

19 Summary
- Hash-based indexes: best for equality searches; cannot support range searches.
- Static Hashing can lead to long overflow chains.
- Extendible Hashing avoids overflow pages by splitting a full bucket when a new data entry is to be added to it. (Duplicates may require overflow pages.)
  - A directory keeps track of the buckets and doubles periodically.
  - It can get large with skewed data; there is additional I/O if it does not fit in main memory.

20 Summary (Contd.)
- Linear Hashing avoids a directory by splitting buckets round-robin and using overflow pages.
  - Overflow chains are not likely to be long.
  - Duplicates are handled easily.
  - Space utilization could be lower than with Extendible Hashing, since splits are not concentrated on `dense' data areas. The criterion for triggering splits can be tuned to trade slightly longer chains for better space utilization.
- For hash-based indexes, a skewed data distribution is one in which the hash values of data entries are not uniformly distributed!

21 Multi-dimensional Indexes
- Motivation: find records where DEPT = "Toy" AND SAL > 50k.
- Geographic data: records with (x, y) coordinates.
  - Queries: What is within 5 miles of a given point? Which point is closest to a given point?

22 Multi-dimensional Indexes
- Multi-key Indexes
- Grid Files
- Partitioned Hash Indexes
- kd-Trees
- Quad Trees
- R Trees

23 Multi-Key Indexes: Strategy I
- Use one index, say the Dept index I1.
- Get all Dept = "Toy" records and check their salary.

24 Multi-Key Indexes: Strategy II
- Use 2 indexes; manipulate pointers.
- Intersect the rid sets: one from the Dept index for Dept = "Toy", one from the Salary index for Sal > 50k.
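A sketch of the pointer manipulation, assuming hypothetical index objects with lookup and range_lookup methods that return rids (none of these names come from the slides):

```python
# Strategy II as rid-set intersection (hypothetical index API):
toy_rids  = set(dept_index.lookup("Toy"))            # Dept = "Toy"
rich_rids = set(sal_index.range_lookup(low=50_000))  # Sal > 50k
for rid in toy_rids & rich_rids:                     # fetch only matches
    fetch_record(rid)                                # hypothetical helper
```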

25 Multi-Key Indexes: Strategy III
- One idea: a multiple-key index.
[Figure: a top-level index I1 on the first key; each of its leaves points to a second-level index (I2, I3, ...) on the second key.]

26 Example
[Figure: a Dept index with entries Art, Sales, Toy; each entry points to a Salary index. The Salary index under Sales contains 10k, 15k, 17k, 21k; the one under Toy contains 12k, 15k, 19k. The entry Sales/15k points to the record Name = Joe, DEPT = Sales, SAL = 15k.]

27 For which queries is this index good?
- Find records with Dept = "Sales" and SAL = 20k
- Find records with Dept = "Sales" and SAL > 20k
- Find records with Dept = "Sales"
- Find records with SAL = 20k

28 Grid-File Index
[Figure: a two-dimensional grid. Key 1 takes values V1, V2, ..., Vn along one axis; Key 2 takes values X1, X2, ..., Xn along the other. Each cell points to records, e.g., the cell (V3, X2) points to records with key1 = V3 and key2 = X2.]

29 CLAIM
- Can quickly find records with:
  - key1 = Vi AND key2 = Xj
  - key1 = Vi
  - key2 = Xj
- And also ranges...
  - E.g., key1 >= Vi AND key2 < Xj

30 But there is a catch with Grid Indexes!
- How is the grid index stored on disk? Like an array, row by row:
  - (V1: X1, X2, X3, X4), (V2: X1, X2, X3, X4), (V3: X1, X2, X3, X4), ...
- Problem: we need regularity so we can compute the position of an entry.

31 Solution: Use Indirection
- The grid contains only pointers to buckets; the data entries live in the buckets.
[Figure: grid cells for rows V1-V4 point into a set of buckets (columns X1, X2, X3); several cells may share a bucket.]

32 With indirection:
- The grid can be regular without wasting space.
- But we do pay the price of an extra indirection.

33 Can also index grid on value ranges
- Linear scales map key values to grid positions:
  - Salary scale: 0-20K -> 1, 20K-50K -> 2, 50K- -> 3.
  - Dept scale: Toy -> 1, Sales -> 2, Personnel -> 3.
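A sketch of converting key values to grid coordinates through the linear scales (bisect is the standard-library module; the scales match the slide, everything else is an illustrative assumption, including the 0-based positions):

```python
import bisect

dept_scale = ["Toy", "Sales", "Personnel"]   # Toy -> 0, Sales -> 1, ...
sal_bounds = [20_000, 50_000]                # 0-20K | 20K-50K | 50K-

def grid_cell(dept, salary):
    # Linear scales turn raw key values into a regular (row, column)
    # position, so the cell's bucket pointer can be computed directly.
    return dept_scale.index(dept), bisect.bisect_right(sal_bounds, salary)

# grid_cell("Sales", 35_000) -> (1, 1)
```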

34 Grid files
+ Good for multiple-key search
- Space, management overhead (nothing is free)
- Need partitioning ranges that evenly split keys

