Chapters 10 & 11: Indexes
Outline
10. Tree-structured Indexes
- Intuition for tree indexes
- ISAM: a static structure
- B+ Trees: operations (search, insert, delete), special cases, key compression, bulk loading, a note on order, updates may change rids
11. Hash-based Indexes
- Static Hashing
- Extendible Hashing: directory, insert, delete
- Linear Hashing: when to split?
- Extendible vs. Linear Hashing
Learning Objectives
- Describe the pros and cons of ISAM
- Perform inserts and deletes on B+ trees and on extendible and linear hash indexes
- For B+ trees: describe algorithms for bulk loading and key compression
- Explain why hash indexes are rarely used
Indexes in Real DBMSs
- SQL Server, Oracle, DB2: B-tree only
- Postgres: B-tree; Hash (discouraged); GiST, a generalized index type that models R-trees* and other indexes; GIN, primarily for text search
- MySQL: depends on the storage engine; mainly B-tree, some hash, R-tree*
Bottom line: hash indexes are rare.
*R-trees index rectangles and higher-dimensional structures.
10.1 Intuition for Tree Indexes
"Find all students with gpa > 3.0." If the data is in a file sorted on gpa, we can binary search to find the first such student, then scan to find the others. But binary search on disk is expensive: each probe is a page I/O.
Simple idea: create an 'index' file with one entry per page of the data file, and do binary search on the (much smaller) index file instead. Use a large fanout F, since each dereference costs a page I/O.
[Figure: index file with keys k1 ... kN, one per page of the data file (pages 1 ... N)]
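To make this concrete, here is a minimal Python sketch of the one-level index idea; the page lists and the `find_page` helper are toy stand-ins for disk pages, not anything from the slides:

```python
import bisect

# Toy "disk pages": a sorted data file of five 2-record pages.
data_pages = [[2, 3], [5, 7], [14, 16], [19, 20], [22, 27]]
index = [page[0] for page in data_pages]   # k1 ... kN, one key per page

def find_page(key):
    """Binary-search the (small) index instead of the (large) data file."""
    # right-most page whose first key is <= key
    return data_pages[max(bisect.bisect_right(index, key) - 1, 0)]

print(find_page(15))   # -> [14, 16]: scan right from here for a range query
```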
ISAM
Index entry format: ⟨P0, K1, P1, K2, P2, ..., Km, Pm⟩, where pointer Pi leads to the subtree with keys between Ki and Ki+1.
The index file may still be quite large, but we can apply the idea repeatedly, building an index over the index until the top level fits on one page.
[Figure: non-leaf pages above leaf pages; the leaf pages contain the data entries; overflow pages chain off the primary leaf pages]
Review of Indexes
As for any index, there are 3 alternatives for what a data entry k* can be:
1. The actual data record (with search key value k)
2. ⟨k, rid of data record with search key value k⟩
3. ⟨k, list of rids of data records with search key k⟩
The choice is orthogonal to the indexing technique used to locate data entries k*.
Tree-structured indexing techniques support both range searches and equality searches. ISAM is a static structure; the B+ tree is dynamic, adjusting gracefully under inserts and deletes.
Comments on ISAM
File creation: leaf (data) pages are allocated sequentially, sorted by search key; then index pages are allocated, then space for overflow pages.
Index entries: ⟨search key value, page id⟩; they 'direct' the search for data entries, which live in the leaf pages.
Search: start at the root and use key comparisons to descend to a leaf. Cost ≈ log_F N, where F = # entries per index page and N = # leaf pages. For example, with F = 1000 and N = 1,000,000 leaf pages, a search reads only log_1000 1,000,000 = 2 index pages.
Insert: find the leaf the data entry belongs to and put it there, possibly in an overflow page.
Delete: find and remove the entry from its leaf; if this empties an overflow page, de-allocate it. The primary data pages thus remain sequential/contiguous.
Static tree structure: inserts and deletes affect leaf pages only.
Example ISAM Tree
Each node can hold 2 entries; there is no need for 'next-leaf-page' pointers. (Why? Leaf pages are allocated sequentially, so the next leaf is simply the physically next page.)
[Figure: root 40; index pages ⟨20, 33⟩ and ⟨51, 63⟩; leaf pages 10* 15* | 20* 27* | 33* 37* | 40* 46* | 51* 55* | 63* 97*]
After Inserting 23*, 48*, 41*, 42* ...
[Figure: index pages unchanged (root 40; ⟨20, 33⟩; ⟨51, 63⟩); primary leaf pages as before; overflow pages now hold 23* (chained off the 20*/27* leaf) and 48*, 41*, 42* (chained off the 40*/46* leaf, with 42* on a second overflow page)]
... Then Deleting 42*, 51*, 97*
[Figure: root 40; index pages ⟨20, 33⟩ and ⟨51, 63⟩; leaves 10* 15* | 20* 27* | 33* 37* | 40* 46* | (empty) | 63*; overflow pages 23* and 48*, 41* remain]
Note that key 51 still appears at the index level even though 51* no longer appears in any leaf. This is harmless: index entries only direct the search.
Pros and Cons of ISAM
Pros:
- Inserts and deletes are fast, since there is no need to rebalance the tree
- No need to lock nodes of the original tree for concurrent access
- If the tree has had few updates, interval queries are fast
Cons:
- After many inserts and deletes, long overflow chains can develop, degrading performance
- Overflow records may not be sorted
B+ Tree: The Most Widely Used Index
- Insert/delete at log_F N cost, keeping the tree height-balanced (F = fanout, N = # leaf pages).
- Minimum 50% occupancy (except for the root): each node contains m entries with d <= m <= 2d. The parameter d is called the order of the tree. This ensures that the height stays small.
- Supports equality and range searches efficiently.
[Figure: index entries (direct search) above data entries (the "sequence set" of linked leaves)]
Example B+ Tree
Search begins at the root, and key comparisons direct it to a leaf (as in ISAM). Try: search for 5*, for 15*, and for all data entries >= 24*.
[Figure: root with keys 13, 17, 24, 30; leaves 2* 3* 5* 7* | 14* 16* | 19* 20* 22* | 24* 27* 29* | 33* 34* 38* 39*]
Based on the search for 15*, we know it is not in the tree!
Inserting a Data Entry into a B+ Tree
- Find the correct leaf L.
- Put the data entry onto L. If L has enough space, done! Otherwise, split L into L and a new node L2: redistribute the entries evenly and copy up the middle key, then insert an index entry pointing to L2 into the parent of L.
- This can happen recursively. To split an index node, redistribute entries evenly, but push up the middle key. (Contrast this with leaf splits; see the sketch below.)
- Splits 'grow' the tree; a root split increases its height. Tree growth: the tree gets wider, or one level taller at the top.
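A minimal Python sketch of the two split flavors, assuming plain lists as nodes; `split_leaf` and `split_index` are hypothetical helpers, not a full insert implementation:

```python
def split_leaf(entries):
    """Leaf split: the middle key is COPIED up -- it stays in the new leaf,
    because every data entry must remain at the leaf level."""
    mid = len(entries) // 2
    left, right = entries[:mid], entries[mid:]
    return left, right, right[0]                   # right[0] is copied up

def split_index(keys):
    """Index split: the middle key is PUSHED up -- it moves to the parent
    and appears only once in the tree."""
    mid = len(keys) // 2
    return keys[:mid], keys[mid + 1:], keys[mid]   # keys[mid] is pushed up

print(split_leaf([2, 3, 5, 7, 8]))       # ([2, 3], [5, 7, 8], 5): 5 copied up
print(split_index([5, 13, 17, 24, 30]))  # ([5, 13], [24, 30], 17): 17 pushed up
```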
Inserting 8* into the Example B+ Tree
Observe how minimum occupancy is guaranteed in both the leaf and the index page splits. Note the difference between copy-up and push-up, and be sure you understand the reasons for it:
- The leaf 2* 3* 5* 7* overflows and splits into 2* 3* and 5* 7* 8*. The middle key 5 is copied up into an entry for the parent: 5 continues to appear in the leaf, because every data entry must stay at the leaf level.
- The index page ⟨5, 13, 17, 24, 30⟩ then overflows and splits. The middle key 17 is pushed up: it appears only once, in the new root, and is not repeated in either half (⟨5, 13⟩ and ⟨24, 30⟩). Contrast this with the leaf split.
Example B+ Tree After Inserting 8*
[Figure: new root 17; left index page ⟨5, 13⟩, right index page ⟨24, 30⟩; leaves 2* 3* | 5* 7* 8* | 14* 16* | 19* 20* 22* | 24* 27* 29* | 33* 34* 38* 39*]
Notice that the root was split, increasing the height. In this example we could have avoided the split by re-distributing entries with a sibling; however, this is usually not done in practice.
Deleting a Data Entry from a B+ Tree
- Start at the root and find the leaf L where the entry belongs; remove the entry.
- If L is at least half-full, done! If L has only d-1 entries: first try to re-distribute, borrowing from a sibling (an adjacent node with the same parent as L); if re-distribution fails, merge L and the sibling.
- If a merge occurred, delete the entry (pointing to L or its sibling) from the parent of L.
- Merges can propagate to the root, decreasing the height. (A sketch of the per-leaf decision logic follows.)
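A rough Python sketch of the per-leaf decision logic, assuming a right sibling with the same parent; the parent-key maintenance is only noted in comments:

```python
def delete_from_leaf(leaf, sibling, d, entry):
    """Delete `entry` from `leaf`; d is the order (minimum occupancy)."""
    leaf.remove(entry)
    if len(leaf) >= d:                   # still at least half-full: done
        return "done"
    if len(sibling) > d:                 # sibling can spare an entry:
        leaf.append(sibling.pop(0))      # re-distribute (the parent's key
        return "re-distributed"          # must be updated to the new boundary)
    leaf.extend(sibling)                 # else merge; the parent must now
    return "merged"                      # delete the entry between them

print(delete_from_leaf([19, 20], [22, 24, 27], d=2, entry=20))  # re-distributed
```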
Example Tree After (Inserting 8*, Then) Deleting 19* and 20* ...
[Figure: root 17; index pages ⟨5, 13⟩ and ⟨27, 30⟩; leaves 2* 3* | 5* 7* 8* | 14* 16* | 22* 24* | 27* 29* | 33* 34* 38* 39*]
Deleting 19* is easy. Deleting 20* is done with re-distribution: 24* is borrowed from the right sibling, and notice how the middle key 27 is copied up into the parent.
... And Then Deleting 24*
The leaf falls below minimum occupancy and re-distribution is not possible, so we must merge. Observe the 'toss' of the index entry 27 (the two leaves merge into 22* 27* 29*), and then the 'pull down' of the index entry 17: the index page left with only ⟨30⟩ is below minimum occupancy, so it merges with its sibling ⟨5, 13⟩, pulling 17 down from the root.
[Figure: the final tree has root ⟨5, 13, 17, 30⟩ over leaves 2* 3* | 5* 7* 8* | 14* 16* | 22* 27* 29* | 33* 34* 38* 39*]
Example of Non-leaf Re-distribution
The tree below is shown during the deletion of 24*. (What could a possible initial tree have been?) In contrast to the previous example, the under-full internal node can be refilled by re-distributing an entry from the left child of the root to the right child.
[Figure: root 22; left index page ⟨5, 13, 17, 20⟩; right index page ⟨30⟩; leaves 2* 3* | 5* 7* 8* | 14* 16* | 17* 18* | 20* 21* | 22* 27* 29* | 33* 34* 38* 39*]
After Re-distribution
Intuitively, entries are re-distributed by 'pushing through' the splitting entry in the parent node. It would have sufficed to re-distribute only the index entry with key 20; we have re-distributed 17 as well, for illustration.
[Figure: root 17; left index page ⟨5, 13⟩; right index page ⟨20, 22, 30⟩; leaves 2* 3* | 5* 7* 8* | 14* 16* | 17* 18* | 20* 21* | 22* 27* 29* | 33* 34* 38* 39*]
B+ Trees in Practice
Typical values for B+ tree parameters:
- Page size: 8 KB
- Key: at most 8 bytes (compression; see later)
- Pointer: at most 4 bytes
- Thus an index entry is at most 12 bytes, and a page can hold at least 682 entries
- At a typical occupancy of 67%, a page holds about 455 entries; estimate this conservatively as 256 = 2^8
The top two levels are often in memory:
- Top level, the root of the tree: 1 page = 8 KB
- Next level: 2^8 pages = 2^8 * 2^3 KB = 2 MB
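A quick back-of-the-envelope check of these numbers in Python (assuming the 8 KB / 12-byte figures above):

```python
page_bytes, entry_bytes = 8 * 1024, 8 + 4   # 8 KB page, 8 B key + 4 B pointer
full = page_bytes // entry_bytes            # 682 entries at 100% occupancy
at_67 = full * 2 // 3                       # ~455 at typical 67% occupancy
fanout = 2 ** 8                             # round down to 256 to be safe

print(full, at_67, fanout)                  # 682 454 256
print(f"two cached levels cover {fanout} pages = {fanout * 8} KB = 2 MB")
```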
B-Trees vs. Hash Indexes
A typical B-tree height is 2-3. With a conservative fanout of 2^8:
- Height 0 supports 2^8 = 256 records
- Height 2 supports 2^24 = 16M records
- Height 3 supports 2^32 = 4G records
A B-tree of height 2-3 requires only 2-3 I/Os per lookup:
- including one I/O to access the data,
- assuming the top two levels are in memory,
- and assuming alternative 2 or 3.
This is why DBMSs either don't support or don't recommend hash indexes on base tables, even though hashing is widely used elsewhere.
Prefix Key Compression
Key compression is important for increasing fan-out. (Why? Greater fan-out means lower height, hence fewer I/Os.) Key values in index entries only 'direct traffic', so we can often compress them.
- E.g., if adjacent index entries have search key values Dannon Yogurt, David Smith, and Devarakonda Murthy, we can abbreviate David Smith to Dav. (The other keys can be compressed too.)
- Is this correct? Not quite! What if there is a data entry Davey Jones? Then we can only compress David Smith to Davi.
- In general, while compressing, we must leave each index entry greater than every key value (in any subtree) to its left.
- Insert and delete must be suitably modified. (A sketch of choosing the shortest such separator follows.)
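A small Python sketch of choosing the shortest usable separator, assuming plain string keys; `shortest_separator` is a hypothetical helper:

```python
def shortest_separator(left_max, key):
    """Shortest prefix of `key` that is still greater than `left_max`,
    the largest key in the subtree to the left."""
    for i in range(1, len(key) + 1):
        if key[:i] > left_max:
            return key[:i]
    return key

print(shortest_separator("Dannon Yogurt", "David Smith"))  # 'Dav'
print(shortest_separator("Davey Jones", "David Smith"))    # 'Davi'
```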
Bulk Loading of a B+ Tree
If we have a large collection of records and want to create a B+ tree on some field, doing so by repeatedly inserting records is very slow. Bulk loading can be done much more efficiently.
Initialization: sort all data entries, then insert a pointer to the first (leaf) page into a new (root) page.
[Figure: root pointing to the first of the sorted leaf pages 3* 4* | 6* 9* | 10* 11* | 12* 13* | 20* 22* | 23* 31* | 35* 36* | 38* 41* | 44*; the sorted pages are not yet part of the B+ tree]
Bulk Loading (Contd.)
Index entries for the leaf pages are always entered into the right-most index page just above the leaf level. When this page fills up, it splits, and the split may go up the right-most path to the root.
This is much faster than repeated inserts, especially when one considers locking!
[Figure: the right-most index page above the leaves receives keys 6, 10, 12, then splits as 20, 23, 35, 38 arrive, growing the right spine of the tree]
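A minimal Python sketch of the leaf-level pass, assuming the data entries are already sorted; the splitting of the right-most index path is elided for brevity:

```python
def build_leaf_level(sorted_entries, cap):
    """Chop sorted entries into leaves of `cap` entries each and emit
    the separator keys to feed, left to right, into the right-most
    index page just above the leaf level."""
    leaves = [sorted_entries[i:i + cap]
              for i in range(0, len(sorted_entries), cap)]
    separators = [leaf[0] for leaf in leaves[1:]]   # one per leaf after the first
    return leaves, separators

entries = [3, 4, 6, 9, 10, 11, 12, 13, 20, 22, 23, 31, 35, 36, 38, 41, 44]
leaves, seps = build_leaf_level(entries, cap=2)
print(seps)   # [6, 10, 12, 20, 23, 35, 38, 44]: fed up the right spine in order
```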
Summary of Bulk Loading
Option 1: multiple inserts. Slow, and does not give sequential storage of leaves.
Option 2: bulk loading. Has advantages for concurrency control; fewer I/Os during the build; leaves are stored sequentially (and linked, of course); and we can control the 'fill factor' on pages.
10. Trees A Note on `Order’ Order (d) concept replaced by physical space criterion in practice (`at least half-full’). Index pages can typically hold many more entries than leaf pages. Variable sized records and search keys mean different nodes will contain different numbers of entries. Even with fixed length fields, multiple records with the same search key value (duplicates) can lead to variable-sized data entries (if we use Alternative (3)). 22
29
10.8.4 Effect of Inserts and Deletes on RIDs
The text raises this problem: suppose there is an index using alternative 1, as happens in SQL Server and Oracle when a primary index is declared on a table. RIDs will then change under inserts and deletes. Why? Splits and merges move records between pages. Pointers in other, secondary, indexes then become wrong.
The text suggests that the index pointers can be updated, but this is impractical. What do SQL Server and Oracle actually do? They use logical RIDs in secondary indexes.
Logical Pointers in Data Entries
What is a logical pointer? A primary key value, for example an Employee ID. Thus a data entry for an age index might be ⟨42, C24⟩: 42 is the age, and C24 is the ID of an employee aged 42. To find that employee, we must then use the primary key index!
This approach makes primary key indexes faster (alternative 1 instead of 2) but secondary key indexes slower. (A toy illustration follows.)
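A toy Python illustration of the two-step lookup, with dicts standing in for the primary and secondary indexes (all names hypothetical):

```python
# Primary key index (alternative 1): the record lives with its key.
primary = {"C24": {"id": "C24", "name": "Ann", "age": 42}}
# Secondary age index: data entries <age, primary key>, not <age, rid>.
age_index = {42: ["C24"]}

def find_by_age(age):
    # one secondary-index lookup, then one primary-index lookup per match
    return [primary[pk] for pk in age_index.get(age, [])]

print(find_by_age(42))
```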
11. Hash-based Indexes: Review
As for any index, there are 3 alternatives for data entries k*:
1. The actual data record (with search key value k)
2. ⟨k, rid of data record with search key value k⟩
3. ⟨k, list of rids of data records with search key k⟩
The choice is orthogonal to the indexing technique.
Hash-based indexes are best for equality selections; they cannot support range searches. Static and dynamic hashing techniques exist, with trade-offs similar to ISAM vs. B+ trees.
11.1 Static Hashing
The # of primary pages is fixed; they are allocated sequentially and never de-allocated; overflow pages are used as needed.
h(k) mod M gives the bucket to which the data entry with key k belongs (M = # of buckets).
[Figure: h(key) mod M directs each key to one of the primary bucket pages 0 ... M-1, each with a possible chain of overflow pages]
Static Hashing (Contd.)
Buckets contain data entries. The hash function works on the search key field of record r and must distribute values over the range 0 ... M-1:
- h(key) = (a * key + b) usually works well; a and b are constants, and much is known about how to tune h.
- Long overflow chains can develop and degrade performance. Extendible and linear hashing are dynamic techniques that fix this problem. (A toy sketch follows.)
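A toy Python sketch of static hashing with lists standing in for primary pages; the constants a and b are arbitrary picks, and overflow chaining is elided:

```python
M, a, b = 4, 31, 17                      # M fixed buckets; a, b tunable constants
buckets = [[] for _ in range(M)]         # primary pages (overflow pages elided)

def h(key):
    return (a * key + b) % M             # h(key) mod M picks the bucket

for key in [5, 9, 14, 18, 25, 32]:
    buckets[h(key)].append(key)
print(buckets)                           # keys scattered over the M buckets
```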
11.2 Extendible Hashing
Situation: a bucket (primary page) becomes full. Why not re-organize the file by doubling the # of buckets? Because reading and writing all pages is expensive!
Idea: use a directory of pointers to buckets. Double the # of buckets by doubling the directory, splitting only the bucket that overflowed. The directory is much smaller than the file, so doubling it is much cheaper, and only one page of data entries is split. No overflow pages! The trick lies in how the hash function is adjusted.
Insert Example
The directory is an array of size 4. To find the bucket for r, take the last 'global depth' # of bits of h(r); we abbreviate h(r) as just r. E.g., if h(r) = 5 = binary 101, r belongs in the bucket pointed to by directory entry 01.
[Figure: global depth 2; directory entries 00, 01, 10, 11 point to buckets A = 4* 12* 32* 16*, B = 1* 5* 21* 13*, C = 10*, D = 15* 7* 19*, each with local depth 2]
Insert: if the bucket is full, split it (allocate a new page and re-distribute). If necessary, double the directory. (As we will see, splitting a bucket does not always require doubling; we can tell by comparing the global depth with the local depth of the split bucket.)
Insert h(r) = 20 (Causes Doubling)
[Figure: before the insert, bucket A = 4* 12* 32* 16* is full. After: global depth 3; A = 32* 16* and its 'split image' A2 = 4* 12* 20*, both with local depth 3; B = 1* 5* 21* 13*, C = 10*, D = 15* 7* 19* keep local depth 2; directory entries 000-111, with 000 pointing to A and 100 to A2]
Points to Note
20 = binary 10100. The last 2 bits (00) tell us r belongs in A or A2; the last 3 bits are needed to tell which.
- Global depth of the directory: max # of bits needed to tell which bucket an entry belongs to.
- Local depth of a bucket: # of bits used to determine whether an entry belongs to this bucket.
When does a bucket split cause directory doubling? When, before the insert, the local depth of the bucket equals the global depth. The insert causes the local depth to exceed the global depth, so the directory is doubled by copying it over and 'fixing' the pointer to the split image page. (Using the least significant bits enables efficient doubling via copying of the directory!) A sketch of the lookup and the doubling test follows.
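A minimal Python sketch of the lookup rule and the doubling test, assuming h(key) = key and buckets that record their own local depth:

```python
class Bucket:
    def __init__(self, depth):
        self.depth, self.items = depth, []   # local depth and data entries

GLOBAL_DEPTH = 2
directory = [Bucket(2) for _ in range(2 ** GLOBAL_DEPTH)]  # entries 00..11

def lookup(key):
    # use the last GLOBAL_DEPTH bits of h(key); here h(key) = key
    return directory[key & ((1 << GLOBAL_DEPTH) - 1)]

def split_needs_doubling(bucket):
    # doubling is needed only when the overflowing bucket already uses
    # all of the directory's bits (local depth == global depth)
    return bucket.depth == GLOBAL_DEPTH

lookup(5).items.append(5)                # 5 = 0b101 -> directory entry 01
print(split_needs_doubling(lookup(5)))   # True: a split here doubles the directory
```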
Performance, Deletions
If the directory fits in memory, an equality search is answered with one disk access; otherwise two.
- Example: a 100 MB file with 100-byte records contains 1,000,000 records (as data entries). With 4 KB pages (40 entries each), the file occupies 25,000 pages, so the directory has 25,000 elements; chances are high that it fits in memory.
- The directory grows in spurts, and if the distribution of hash values is skewed, the directory can grow large. Multiple entries with the same hash value cause problems!
Delete: if removing a data entry makes a bucket empty, the bucket can be merged with its 'split image'. If each directory element points to the same bucket as its split image, we can halve the directory.
11.3 Linear Hashing
This is another dynamic hashing scheme, an alternative to extendible hashing. LH handles the problem of long overflow chains without using a directory, and handles duplicates.
Idea: use a family of hash functions h_0, h_1, h_2, ..., where h_i(key) = h(key) mod (2^i * N); N = the initial # of buckets, and h is some hash function (whose range is not limited to 0 ... N-1).
If N = 2^d0 for some d0, then h_i consists of applying h and looking at the last d_i bits, where d_i = d0 + i. So h_{i+1} doubles the range of h_i (similar to directory doubling).
Linear Hashing (Contd.)
LH avoids the directory by using overflow pages and choosing the bucket to split round-robin:
- Splitting proceeds in 'rounds'. A round ends when all N_R initial buckets (for round R) have been split. Buckets 0 to Next-1 have already been split this round; buckets Next to N_R have yet to be split. The current round number is Level.
- Search: to find the bucket for data entry r, compute h_Level(r). If h_Level(r) is in the range Next to N_R, r belongs there. Otherwise, r could belong to bucket h_Level(r) or to bucket h_Level(r) + N_R; apply h_{Level+1}(r) to find out. (See the sketch below.)
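A minimal Python sketch of this search rule, assuming h(key) = key and N = 4 with one bucket already split this round (Next = 1):

```python
N, Level, Next = 4, 0, 1                 # bucket 0 already split this round

def h_i(key, i):
    return key % (2 ** i * N)            # h_i(key) = h(key) mod (2^i * N)

def bucket_for(key):
    b = h_i(key, Level)
    if b < Next:                         # this bucket was already split:
        b = h_i(key, Level + 1)          # one more bit decides between the
    return b                             # original bucket and its split image

print(bucket_for(44))  # 44 mod 4 = 0 < Next, so use 44 mod 8 = 4 (split image)
print(bucket_for(9))   # 9 mod 4 = 1, not yet split: bucket 1
```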
Overview of LH File
In the middle of a round:
[Figure: buckets 0 to Next-1 have been split in this round; Next marks the bucket to be split next; buckets from Next up to the end of the h_Level range existed at the beginning of the round; beyond them lie the 'split image' buckets created (through splitting of other buckets) in this round. If h_Level(search key value) lands in the already-split range, h_{Level+1}(search key value) decides whether the entry is in the original bucket or its 'split image'.]
When to Split?
Insert: find the bucket by applying h_Level / h_{Level+1} as above. If the bucket to insert into is full: add an overflow page and insert the data entry, and (maybe) split the Next bucket and increment Next.
Any criterion can be chosen to 'trigger' a split. Since buckets are split round-robin, long overflow chains don't develop!
Doubling of the directory in extendible hashing is similar; here the switching of hash functions is implicit in how the # of bits examined is increased. (A sketch of the split step follows.)
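A rough Python sketch of the round-robin split step, under the same h(key) = key assumption as above, with lists as buckets and the overflow-page bookkeeping elided:

```python
def split(buckets, Level, Next, N):
    """Split bucket Next using h_{Level+1}; advance Next, ending the round
    (Level += 1, Next = 0) once all original buckets have been split."""
    old = buckets[Next]
    buckets.append([])     # new split-image bucket at index Next + 2^Level * N
    buckets[Next] = [k for k in old if k % (2 ** (Level + 1) * N) == Next]
    buckets[-1] = [k for k in old if k % (2 ** (Level + 1) * N) != Next]
    Next += 1
    if Next == 2 ** Level * N:           # round over: all N_R buckets split
        Level, Next = Level + 1, 0
    return Level, Next

buckets = [[32, 44, 36], [9, 25, 5], [14, 18], [31, 35]]
Level, Next = split(buckets, 0, 0, 4)
print(buckets, Level, Next)   # 32 stays in bucket 0; 44, 36 move to bucket 4
```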
Example of Linear Hashing
[Figure: Level = 0, N = 4. Initially Next = 0 and the primary pages hold 32* 44* 36* | 9* 25* 5* | 14* 18* 10* 30* | 31* 35* 7* 11*. Inserting 43* (h(r) = 101011) fills bucket 11, so an overflow page is added and bucket Next = 0 is split: on the split, h_{Level+1} re-distributes its entries, leaving 32* in bucket 000 and moving 44* 36* to the new bucket 100; Next becomes 1.]
Example: End of a Round
[Figure: left, Level = 0 with Next = 3 after further inserts (buckets 000: 32*, 001: 9* 25*, 010: 66* 18* 10* 34*, 011: 43* 35* 11* with overflow, 100: 44* 36*, 101: 5* 37* 29*, 110: 14* 30* 22*). Inserting 50* (h(r) = 110010) overflows bucket 010 and triggers the split of the last original bucket (11), creating bucket 111: 31* 7*; the round ends, Level becomes 1, and Next resets to 0.]
LH Described as a Variant of EH
The two schemes are actually quite similar:
- Begin with an EH index where the directory has N elements. Use overflow pages, and split buckets round-robin.
- The first split is at bucket 0. (Imagine the directory being doubled at this point.) But directory elements ⟨1, N+1⟩, ⟨2, N+2⟩, ... point to the same buckets, so we need only create directory element N, which now differs from element 0. When bucket 1 splits, create directory element N+1, and so on.
So the directory can double gradually. Also, primary bucket pages are created in order; if they are allocated sequentially too (so that finding the i-th is easy), we don't actually need a directory at all. Voilà, LH!