CS232A: Database System Principles INDEXING


Indexing: given a condition on an attribute, find the qualified records. The simplest condition is equality (Attr = value), but the condition may also be a comparison (Attr > value, Attr >= value).

Indexing Data structures used for quickly locating tuples that meet a specific type of condition. Equality condition: find Movie tuples where Director=X. Other conditions are possible, e.g., range conditions: find Employee tuples where Salary>40 AND Salary<50. There are many types of indexes. Evaluate them on: access time, insertion time, deletion time, and disk space needed (especially as it affects access time).

Topics Conventional indexes B-trees Hashing schemes

Terms and Distinctions
Primary index: the index on the attribute (a.k.a. search key) that determines the sequencing of the table.
Secondary index: an index on any other attribute.
Dense index: every value of the indexed attribute appears in the index.
Sparse index: many values do not appear.
(Figure: a dense primary index over a sequential file.)

Dense and Sparse Primary Indexes To look up a value, find the index record with the largest value that is less than or equal to the value we are looking for. + can tell if a value exists without accessing the file (consider projection) + better access to overflow records + less index space (more + and - in a while)

Sparse vs. Dense Tradeoff Sparse: less index space per record, so more of the index can be kept in memory. Dense: can tell if any record exists without accessing the file. (Later: sparse is better for insertions; dense is needed for secondary indexes.)
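
The lookup rule ("largest index value less than or equal to the search key") can be sketched in Python. This is a toy in-memory sketch with made-up blocks and keys; real indexes of course live on disk.

```python
from bisect import bisect_right

# Hypothetical sequential file: blocks of sorted keys.
blocks = [[10, 20], [30, 40], [50, 60], [70, 80]]

# Dense index: one (key, block#) entry per record.
dense = [(k, b) for b, blk in enumerate(blocks) for k in blk]

# Sparse index: one (first key, block#) entry per block.
sparse = [(blk[0], b) for b, blk in enumerate(blocks)]

def sparse_lookup(key):
    """Find the entry with the largest key <= the search key,
    then scan that block (one file access)."""
    keys = [k for k, _ in sparse]
    i = bisect_right(keys, key) - 1
    if i < 0:
        return None
    _, b = sparse[i]
    return key if key in blocks[b] else None

def dense_lookup(key):
    """A dense index can decide existence without touching the file."""
    keys = [k for k, _ in dense]
    i = bisect_right(keys, key) - 1
    return key if i >= 0 and keys[i] == key else None
```

Note how `sparse_lookup` must read a data block to confirm existence, while `dense_lookup` answers from the index alone.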

Multi-Level Indexes Treat the index as a file and build an index on it. "Two levels are usually sufficient. More than three levels are rare." Q: Can we build a dense second-level index for a dense index?

A Note on Pointers Record pointers consist of a block pointer and the position of the record in the block. Using the block pointer only saves space at no extra cost in disk accesses.

Representation of Duplicate Values in Primary Indexes Index may point to first instance of each value only

Deletion from Dense Index Delete 40, 80. Deletion from a dense primary index file with no duplicate values is handled in the same way as deletion from a sequential file. Q: What about deletion from a dense primary index with duplicates? (Keep lists of available entries.)

Deletion from Sparse Index Delete 40: if the deleted entry does not appear in the index, do nothing.

Deletion from Sparse Index (cont'd) Delete 30: if the deleted entry does not appear in the index, do nothing; if the deleted entry appears in the index, replace it with the next search-key value. Comment: we could leave the deleted value in the index, assuming that no part of the system may assume it still exists without checking the block.

Deletion from Sparse Index (cont'd) Delete 40, then 30: if the deleted entry does not appear in the index, do nothing; if the deleted entry appears in the index, replace it with the next search-key value, unless the next search-key value has its own index entry, in which case delete the entry.

Insertion in Sparse Index If no new block is created, then do nothing.

Insertion in Sparse Index If no new block is created, then do nothing; else create an overflow record (e.g., insert overflow record 15) and reorganize periodically. Could we claim the space of the next block? How often do we reorganize, and how expensive is it? B-trees offer convincing answers.

Secondary indexes The file is not sorted on the secondary search key. (Figure: a sequential file ordered on its sequence field but unordered on the secondary key.)

Secondary indexes A sparse index does not make sense for a secondary search key! (Figure: a sparse index pointing into a file that is unordered on the indexed key.)

Secondary indexes Use a dense index with sparse higher levels: the first level has to be dense; the next levels are sparse (as usual).

Duplicate values & secondary indexes (Figure: an unordered file with duplicate key values 10, 20, 30, 40.)

Duplicate values & secondary indexes One option: repeat each value in the index, once per record. Problem: excess overhead! Both disk space and search time.

Duplicate values & secondary indexes Another option: one index entry per value, with a list of pointers. Problem: variable-size records in the index!

Duplicate values & secondary indexes Yet another idea: chain records with the same key? Problems: need to add fields to records, which messes up maintenance; need to follow the chain to find all records.

Duplicate values & secondary indexes Preferred option: the index points to buckets of record pointers, one bucket per value.

Why the "bucket" idea is useful It enables the processing of queries working with pointers only. This is a very common technique in Information Retrieval. Example: EMP(name, dept, year, ...) with a primary index on Name and secondary indexes on Dept and Year.

Advantage of Buckets: Process Queries Using Pointers Only Find employees of the Toys dept with 4 years in the company: SELECT Name FROM Employee WHERE Dept="Toys" AND Year=4. Intersect the Toys bucket (Dept index) and the Year-4 bucket (Year index) to get the set of matching EMPs.

This idea is used in text information retrieval. The buckets are known as inverted lists: e.g., the list for "cat" points to every document containing "cat" ("...the cat is fat...", "...my cat and my dog like each other..."), the list for "dog" to every document containing "dog".

Information Retrieval (IR) Queries
Find articles with "cat" and "dog": intersect inverted lists.
Find articles with "cat" or "dog": union inverted lists.
Find articles with "cat" and not "dog": subtract the list of dog pointers from the list of cat pointers.
Find articles with "cat" in the title.
Find articles with "cat" and "dog" within 5 words.
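
The first three queries are plain set operations on inverted lists. A minimal sketch with toy, assumed document ids:

```python
# Toy inverted lists: term -> sorted list of document ids (made-up data).
inverted = {
    "cat": [1, 2, 5, 7],
    "dog": [2, 3, 7, 9],
}

cat, dog = set(inverted["cat"]), set(inverted["dog"])

and_result = sorted(cat & dog)   # articles with "cat" AND "dog"
or_result  = sorted(cat | dog)   # articles with "cat" OR "dog"
not_result = sorted(cat - dog)   # articles with "cat" and NOT "dog"
```

The title and proximity queries need the richer postings of the next slide (occurrence type and position), not just document ids.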

Common technique: keep more info in the inverted list, namely the location and type of each occurrence. (Figure: the list for "cat" records, per document, the occurrence type, e.g., Title, Author, Abstract, and its position.)

Posting: an entry in an inverted list; it represents an occurrence of a term in an article. Size of a posting: 10-15 bits (compressed). Size of a list: from 1 posting (rare words or misspellings) up to about 10^6 postings (common words).

Vector space model Documents and queries are vectors over terms w1 w2 w3 w4 w5 w6 w7 ...
DOC = <1 0 0 1 1 0 0 ...>
Query = <0 0 1 1 0 0 0 ...>
PRODUCT = 1 + ... = score
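
The score is just the dot product of the two vectors; a minimal sketch with the binary vectors from the slide:

```python
# Binary term vectors over terms w1..w7 (from the slide).
doc   = [1, 0, 0, 1, 1, 0, 0]
query = [0, 0, 1, 1, 0, 0, 0]

# Dot product: counts the terms shared by document and query (only w4 here).
score = sum(d * q for d, q in zip(doc, query))
```

Real systems replace the 0/1 entries with weights (see the next slide on weighing and normalizing).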

Tricks to weigh and normalize scores, e.g.: a match on a common word is not as useful as a match on rare words...

Summary of Indexing So Far Basic topics in conventional indexes: multiple levels; sparse/dense; duplicate keys and buckets; deletion/insertion similar to sequential files. Advantages: simple algorithms; the index is a sequential file. Disadvantages: eventually sequentiality is lost because of overflows, and reorganizations are needed.

Example Index (sequential) (Figure: a sequential index with entries 10-90 and continuous free space, plus a non-sequential overflow area holding 31-39.)

Outline: Conventional indexes B-Trees  NEXT Hashing schemes

NEXT: Another type of index Give up on sequentiality of index Try to get “balance”

B+Tree Example, n=3. (Figure: root with key 100; second-level nodes [30] and [120 150 180]; leaves [3 5 11], [30 35], [100 101 110], [120 130], [150 156 179], [180 200].)

Sample non-leaf node: keys 57, 81, 95; pointers to keys < 57, to keys 57 <= k < 81, to keys 81 <= k < 95, and to keys >= 95.

Sample leaf node: keys 57, 81, 95; each key has a pointer to the record with that key, and one extra pointer links to the next leaf in sequence (the node is reached from its parent non-leaf node).

In the textbook's notation, n=3. (Figure: how leaf and non-leaf nodes holding keys 30 and 35 are drawn.)

Size of nodes: n+1 pointers n keys (fixed)

Non-root nodes have to be at least half-full. Use at least: non-leaf: ceil((n+1)/2) pointers; leaf: floor((n+1)/2) pointers to data.

n=3. (Figure: a full non-leaf node holds keys 120 150 180, a minimal one holds key 30; a full leaf holds 3 5 11, a minimal one holds 30 35.)

B+tree rules, tree of order n: (1) all leaves at the same lowest level (balanced tree); (2) pointers in leaves point to records, except for the "sequence pointer".

(3) Number of pointers/keys for a B+tree

                      Max ptrs   Max keys   Min ptrs (to data)   Min keys
Non-leaf (non-root)   n+1        n          ceil((n+1)/2)        ceil((n+1)/2) - 1
Leaf (non-root)       n+1        n          floor((n+1)/2)       floor((n+1)/2)
Root                  n+1        n          1                    1

Insert into B+tree
(a) simple case: space available in leaf
(b) leaf overflow
(c) non-leaf overflow
(d) new root

(a) Insert key = 32, n=3. (Figure: the leaf [30 31] has room and becomes [30 31 32].)

(b) Insert key = 7, n=3. (Figure: the leaf [3 5 11] overflows and splits into [3 5 7] and [11]; key 7 is inserted into the parent.)

(c) Insert key = 160, n=3. (Figure: the leaf [150 156 179] overflows and splits; the non-leaf node [120 150 180] also overflows and splits, pushing 160 up.)

(d) New root: insert 45, n=3. (Figure: the leaf [32 40] overflows; the non-leaf node [10 20 30 40] also overflows and splits, and a new root with key 30 is created.)

Deletion from B+tree
(a) Simple case, no example
(b) Coalesce with neighbor (sibling)
(c) Re-distribute keys
(d) Cases (b) or (c) at non-leaf

(b) Coalesce with sibling: delete 50, n=4. (Figure: the underfull leaf merges with its sibling into [10 20 30 40]; the parent separator is removed.)

(c) Redistribute keys: delete 50, n=4. (Figure: the underfull leaf borrows key 35 from its sibling; the parent separator becomes 35.)

(d) Non-leaf coalesce: delete 37, n=4. (Figure: coalescing propagates to the non-leaf level; the tree shrinks and a new root with key 25 is created.)

B+tree deletions in practice Often, coalescing is not implemented Too hard and not worth it!

Comparison: B-trees vs. static indexed sequential file Ref #1: Held & Stonebraker “B-Trees Re-examined” CACM, Feb. 1978

Ref #1 claims: concurrency control is harder in B-trees; the B-tree consumes more space. For their comparison: block = 512 bytes; key = pointer = 4 bytes; 4 data records per block.

Example: 1-block static index: 127 keys per block, since (127+1) x 4 = 512 bytes; pointers in the index are implicit (up to 127 contiguous data blocks).

Example: 1-block B-tree: 63 keys per block, since 63 x (4+4) + 8 = 512 bytes; explicit pointers are needed in the B-tree because index and data blocks are not contiguous (up to 63 blocks).

Size comparison (Ref. #1)

Static index                       B-tree
# data blocks         height       # data blocks           height
2 - 127               2            2 - 63                  2
128 - 16,129          3            64 - 3,968              3
16,130 - 2,048,383    4            3,969 - 250,047         4
                                   250,048 - 15,752,961    5

Ref. #1 analysis claims: for an 8,000-block file, after 32,000 inserts and 16,000 lookups, the static index saves enough accesses to allow for reorganization. Ref. #1 conclusion: static index better!!

Ref #2: M. Stonebraker, "Retrospective on a database system," TODS, June 1980. Ref. #2 conclusion: B-trees better!!

Ref. #2 conclusion: B-trees better!! Self-administration is an important target: the DBA does not know when to reorganize, and does not know how full to load the pages of a new index.

Ref. #2 conclusion: B-trees better!! Buffering: a B-tree has fixed buffer requirements; a static index needs large and variable-size buffers due to overflow.

Speaking of buffering… Is LRU a good policy for B+tree buffers?  Of course not!  Should try to keep root in memory at all times (and perhaps some nodes from second level)

Interesting problem: for a B+tree, how large should n be? (n is the number of keys per node)

Assumptions You have the right to set the disk page size for the disk where a B-tree will reside. Compute the optimum n assuming that: the items are 4 bytes long and the pointers are also 4 bytes long; the time to read a node from disk is 12 + .003n; the time to process a block in memory is unimportant; the B+tree is full (i.e., every page has the maximum number of items and pointers).

(Figure: f(n) = time to find a record, plotted against n; the minimum is at nopt.)

Find nopt by setting f'(n) = 0. The answer should be nopt = "a few hundred". What happens to nopt as the disk gets faster? As the CPU gets faster?
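
The minimization can also be done numerically. A sketch under stated assumptions: a hypothetical N = 10^7 records, read time 12 + .003n (ms) per node as above, and height approximated by log base n of N for a full tree.

```python
import math

N = 10**7  # assumed number of records (not from the slide)

def f(n):
    # time per node read, times the approximate number of levels
    return (12 + 0.003 * n) * math.log(N) / math.log(n)

# Brute-force search over candidate fan-outs.
n_opt = min(range(2, 5000), key=f)
```

For these parameters the minimum indeed lands in the "few hundred" range; a faster disk (smaller 12 ms seek term) pushes nopt down.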

Variation on B+tree: B-tree (no +) Idea: Avoid duplicate keys Have record pointers in non-leaf nodes

(Figure: a B-tree non-leaf node K1 P1 K2 P2 K3 P3, where each Pi points to the record with key Ki, interleaved with child pointers to keys < K1, K1 < x < K2, K2 < x < K3, and > K3.)

B-tree example, n=2. Sequence pointers are not useful now (but keep the space for simplicity). (Figure: a two-level B-tree over keys 10-180, with record pointers also in the non-leaf nodes 65 125 and 25 45 / 85 105 / 145 165.)

So, for B-trees:

                    MAX                          MIN
                    Tree ptrs  Rec ptrs  Keys    Tree ptrs  Rec ptrs          Keys
Non-leaf non-root   n+1        n         n       ceil((n+1)/2)  ceil((n+1)/2)-1  ceil((n+1)/2)-1
Leaf non-root       1          n         n       1          floor((n+1)/2)    floor((n+1)/2)
Root non-leaf       n+1        n         n       2          1                 1
Leaf root           1          n         n       1          1                 1

Tradeoffs:
+ B-trees have marginally faster average lookup than B+trees (assuming the height does not change)
- in a B-tree, non-leaf and leaf nodes have different sizes
- smaller fan-out
- in a B-tree, deletion is more complicated
=> B+trees preferred!

Example: pointers 4 bytes, keys 4 bytes, blocks 100 bytes (just an example). Look at a full 2-level tree.

B-tree: the root has 8 keys + 8 record pointers + 9 son pointers = 8x4 + 8x4 + 9x4 = 100 bytes. Each of the 9 sons: 12 record pointers (+12 keys) = 12x(4+4) + 4 = 100 bytes. 2-level B-tree, max # records = 12x9 + 8 = 116.

B+tree: the root has 12 keys + 13 son pointers = 12x4 + 13x4 = 100 bytes. Each of the 13 sons: 12 record pointers (+12 keys) = 12x(4+4) + 4 = 100 bytes. 2-level B+tree, max # records = 13x12 = 156.

So: the B+tree holds 156 records vs. 116 (108 + 8) for the B-tree. Conclusion: for fixed block size, the B+tree is better because it is bushier.
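
The arithmetic above can be checked mechanically, using the block, key, and pointer sizes from the example:

```python
BLOCK, KEY, PTR = 100, 4, 4   # sizes in bytes, from the example

# B-tree root: 8 keys + 8 record pointers + 9 son pointers fill the block.
assert 8 * KEY + 8 * PTR + 9 * PTR == BLOCK
btree_records = 9 * 12 + 8    # 9 sons x 12 record ptrs each, plus 8 in the root

# B+tree root: 12 keys + 13 son pointers fill the block.
assert 12 * KEY + 13 * PTR == BLOCK
bplus_records = 13 * 12       # 13 leaves x 12 record ptrs each
```

The B+tree root spends no space on record pointers, so it fans out to 13 sons instead of 9: that is the "bushier" advantage.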

Logistics Extension students: please check out www.db.ucsd.edu/CSE232W10 and send me your emails.

Outline/summary
Conventional indexes: sparse vs. dense; primary vs. secondary
B-trees: B+trees vs. B-trees; B+trees vs. indexed sequential
Hashing schemes --> Next

Hashing A hash function h(key) returns the address of a bucket. If the keys for a specific hash value do not fit into one page, the bucket is a linked list of pages.

Example hash function Key = 'x1 x2 ... xn', an n-byte character string; have b buckets. h: add x1 + x2 + ... + xn and compute the sum modulo b.
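
A sketch of this sum-mod-b function, treating the key as its byte values:

```python
def h(key: str, b: int) -> int:
    """Add the byte values x1 + x2 + ... + xn of the key,
    then take the sum modulo the number of buckets b."""
    return sum(key.encode()) % b
```

For example, h("abc", 10) sums 97 + 98 + 99 = 294 and returns 4.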

This may not be the best function... Read Knuth Vol. 3 if you really need to select a good function. Good hash function: the expected number of keys/bucket is the same for all buckets.

Within a bucket: do we keep keys sorted? Yes, if CPU time is critical and inserts/deletes are not too frequent.

Next: example to illustrate inserts, overflows, deletes h(K)

EXAMPLE: 2 records/bucket. INSERT: h(a) = 1, h(b) = 2, h(c) = 1, h(d) = 0, h(e) = 1, so e overflows bucket 1. (Figure: buckets 0-3 with d in 0, a and c in 1, b in 2, e in an overflow page of 1.)

EXAMPLE: deletion. Delete e, f, c. (Figure: after deleting from a primary page we can maybe move a key up from the overflow page, e.g., maybe move "g" up.)

Rule of thumb: try to keep space utilization between 50% and 80%. Utilization = (# keys used) / (total # keys that fit). If < 50%, we are wasting space; if > 80%, overflows become significant. This depends on how good the hash function is and on the # of keys/bucket.

How do we cope with growth? Overflows and reorganizations. Dynamic hashing: extensible, linear.

Extensible hashing: two ideas. (a) Use i of the b bits output by the hash function (the i high-order bits, e.g., of 00110101); i grows over time.

(b) Use a directory that maps h(K)[first i bits] to a bucket.

Example: h(k) is 4 bits, 2 keys/bucket. Insert 1010: the bucket holding 1001 and 1100 overflows; i grows from 1 to 2, the directory doubles to 00 01 10 11, and the bucket splits into {1001, 1010} and {1100}.

Example continued: insert 0111 and 0000. The bucket {0001} (local depth 1) fills to {0001, 0111}; then 0000 overflows it and it splits into {0000, 0001} and {0111}, both with local depth 2.

Example continued: insert 1001. The bucket {1001, 1010} overflows; its local depth equals i = 2, so the directory doubles to i = 3 (entries 000-111) and the bucket splits on its third bit.
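
The whole insert / split / directory-doubling mechanism can be sketched compactly. A toy in-memory version with the example's parameters (4-bit hash values that are the keys themselves, 2 keys per bucket, directory indexed by the i high-order bits); dictionaries stand in for disk blocks.

```python
class ExtHash:
    """Minimal extensible hashing sketch (assumptions as stated above)."""
    BITS, CAP = 4, 2

    def __init__(self):
        self.i = 1  # global depth: directory has 2**i entries
        self.dir = [{"depth": 1, "keys": []}, {"depth": 1, "keys": []}]

    def _slot(self, h):
        return h >> (self.BITS - self.i)  # the i high-order bits of h

    def insert(self, h):
        b = self.dir[self._slot(h)]
        b["keys"].append(h)
        # May loop if all keys share the next bit too (fine for this sketch).
        while len(b["keys"]) > self.CAP:
            if b["depth"] == self.i:  # bucket as deep as directory: double it
                self.i += 1
                self.dir = [self.dir[j >> 1] for j in range(2 ** self.i)]
            # Split the overflowing bucket on one more hash bit.
            b["depth"] += 1
            d = b["depth"]
            b0 = {"depth": d, "keys": []}
            b1 = {"depth": d, "keys": []}
            for k in b["keys"]:
                bit = (k >> (self.BITS - d)) & 1
                (b1 if bit else b0)["keys"].append(k)
            # Repoint every directory entry that referred to the old bucket.
            for j in range(2 ** self.i):
                if self.dir[j] is b:
                    self.dir[j] = b1 if (j >> (self.i - d)) & 1 else b0
            b = b1 if len(b1["keys"]) > self.CAP else b0
```

Running the slide's first step (insert 0001, 1001, 1100, then 1010) reproduces the doubling to i = 2, with {1001, 1010} under entry 10 and {1100} under entry 11.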

Extensible hashing: deletion. Options: no merging of blocks; or merge blocks and cut the directory if possible (reverse of the insert procedure).

Deletion example: Run thru insert example in reverse!

Extensible hashing Summary
+ Can handle growing files, with less wasted space and no full reorganizations
- Indirection (not bad if the directory fits in memory)
- Directory doubles in size (now it fits in memory, now it does not)

Linear hashing Another dynamic hashing scheme. Two ideas: (a) use the i low-order bits of the hash (e.g., of 01110101); i grows over time. (b) The file grows linearly, one bucket at a time.

Example: b=4 bits, i=2, 2 keys/bucket; buckets 00 and 01 exist, m = 01 (max used block); insert 0101. Rule: if h(k)[i low-order bits] <= m, then look at bucket h(k)[i]; else, look at bucket h(k)[i] - 2^(i-1). Buckets can have overflow chains! Future growth buckets are added at the end.

Example continued: b=4 bits, i=2, 2 keys/bucket, m = 01. The inserted 0101 joins bucket 01; as the file grows, keys 1010 and 1111 (which currently map back to buckets 00 and 01) will move into the future buckets 10 and 11.

Example continued: how do we grow beyond m = 11? Switch to i = 3: buckets are now addressed by the 3 low-order bits (000-111) and the file keeps growing one bucket at a time (m = 100, 101, ...).

When do we expand the file? Keep track of the utilization U = (# used slots) / (total # of slots). If U > threshold, then increase m (and maybe i).
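
The lookup rule and the growth criterion fit together as follows. A toy in-memory sketch with 4-bit hashes, 2 keys per primary bucket, and an assumed threshold of 80%; Python lists stand in for buckets with overflow chains.

```python
class LinearHash:
    """Minimal linear hashing sketch (assumptions as stated above)."""
    BITS, CAP, THRESHOLD = 4, 2, 0.8

    def __init__(self):
        self.i = 2                     # low-order bits currently in use
        self.m = 0b01                  # max used bucket number
        self.buckets = {0b00: [], 0b01: []}

    def _addr(self, h):
        a = h & ((1 << self.i) - 1)    # the i low-order bits
        if a > self.m:                 # bucket not allocated yet:
            a -= 1 << (self.i - 1)     # fall back to bucket a - 2**(i-1)
        return a

    def insert(self, h):
        self.buckets[self._addr(h)].append(h)  # lists model overflow chains
        used = sum(len(b) for b in self.buckets.values())
        if used / (self.CAP * len(self.buckets)) > self.THRESHOLD:
            self._grow()

    def _grow(self):
        self.m += 1
        if self.m == 1 << self.i:      # ran out of i-bit addresses
            self.i += 1
        old = self.m - (1 << (self.i - 1))  # bucket whose keys may split off
        moved = [h for h in self.buckets[old] if self._addr(h) == self.m]
        self.buckets[old] = [h for h in self.buckets[old] if h not in moved]
        self.buckets[self.m] = moved
```

Inserting 0000, 1010, 1111, 0101, 0101 grows the file to m = 11 and redistributes 1010 and 1111 into the new buckets 10 and 11, matching the slide's example.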

Linear Hashing Summary
+ Can handle growing files, with less wasted space and no full reorganizations
+ No indirection like extensible hashing
- Can still have overflow chains

Example: BAD CASE. The next bucket to split (at m) may be very empty while another bucket is very full: we would need to move m all the way to the full bucket, and splitting the empty ones along the way would waste space...

Summary Hashing - How it works - Dynamic hashing - Extensible - Linear

Next: Indexing vs Hashing Index definition in SQL Multiple key access

Indexing vs Hashing Hashing good for probes given key e.g., SELECT … FROM R WHERE R.A = 5

Indexing vs Hashing INDEXING (including B-trees) is good for range searches, e.g., SELECT ... FROM R WHERE R.A > 5

Index definition in SQL Create index name on rel (attr) Create unique index name on rel (attr) defines candidate key Drop INDEX name

Note: you CANNOT SPECIFY the TYPE OF INDEX (e.g., B-tree, hashing, ...) OR its PARAMETERS (e.g., load factor, size of hash, ...) ... at least in standard SQL ...

Note: an ATTRIBUTE LIST gives a MULTIKEY INDEX (next), e.g., CREATE INDEX foo ON R(A,B,C)

Multi-key Index Motivation: Find records where DEPT = “Toy” AND SAL > 50k

Strategy I: use one index, say the Dept index I1. Get all Dept = "Toy" records and check their salary.

Strategy II: use 2 indexes; manipulate pointers (intersect the Toy bucket with the Sal > 50k pointers).

Strategy III: Multiple Key Index. One idea: a first-level index I1 on one attribute whose entries point to second-level indexes I2, I3, ... on the other attribute.

Example (Figure): a Dept index with entries Art, Sales, Toy, each pointing to a Salary index over that department's records (10k, 15k for Art; 12k, 15k, 15k, 19k for Sales; 17k, 21k for Toy); the example record Name=Joe, DEPT=Sales, SAL=15k is reached through the Sales entry.

For which queries is this index good?
Find RECs with Dept = "Sales" AND SAL = 20k
Find RECs with Dept = "Sales" AND SAL > 20k
Find RECs with Dept = "Sales"
Find RECs with SAL = 20k

Interesting application: geographic data. DATA: <X1,Y1, attributes>, <X2,Y2, attributes>, ... : points in the x-y plane.

Queries: What city is at <Xi,Yi>? What is within 5 miles from <Xi,Yi>? Which is closest point to <Xi,Yi>?

Example (Figure): points a-o scattered in the plane; search for points near f, and for points near b.

Queries: Find points with Yi > 20. Find points with Xi < 5. Find points "close" to i = <12,38>. Find points "close" to b = <7,24>.

Many types of geographic index structures have been suggested: Quad trees, R-trees.

Two more types of multi-key indexes: grid and partitioned hash.

Grid Index (Figure): a two-dimensional array with Key 2 values X1 X2 ... Xn on one axis and Key 1 values V1 V2 ... Vn on the other; the cell <V3, X2> points to the records with key1=V3, key2=X2.

CLAIM: can quickly find records with key1 = Vi AND key2 = Xj; with key1 = Vi only; or with key2 = Xj only. And also ranges, e.g., key1 >= Vi AND key2 < Xj.

But there is a catch with grid indexes! How is the grid stored on disk? Like an array... Problem: we need regularity so we can compute the position of the <Vi, Xj> entry.

Solution: use indirection. The grid contains only pointers to buckets.

With indirection: the grid can be regular without wasting space; but we do pay the price of indirection.

Can also index the grid on value ranges via a linear scale, e.g.: Salary 0-20K -> row 1, 20K-50K -> row 2, 50K- -> row 3; Dept Toy -> column 1, Sales -> column 2, Personnel -> column 3.

Grid files
+ Good for multiple-key search
- Space, management overhead (nothing is free)
- Need partitioning ranges that evenly split keys

Partitioned hash function Idea: the bucket number is the concatenation of h1(Key1) and h2(Key2), e.g., h1(Key1) = 010110, h2(Key2) = 1110010.

EX: h1(toy) = 0, h1(sales) = 1, h1(art) = 1; h2(10k) = 01, h2(20k) = 11, h2(30k) = 01, h2(40k) = 00. Buckets are numbered 000-111. Insert: <Fred,toy,10k> goes to bucket 001; <Joe,sales,10k> and <Sally,art,30k> go to bucket 101.

Find Emp. with Dept. = Sales AND Sal = 40k: both hash parts are known (1 and 00), so look only in bucket 100.

Find Emp. with Sal = 30k: h2 gives 01 but the Dept part is unknown, so look in buckets 001 and 101.

Find Emp. with Dept. = Sales: h1 gives 1, so look in buckets 100, 101, 110, 111.
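
The bucket arithmetic of these three queries can be sketched with the example's hash tables (h1 gives 1 bit, h2 gives 2 bits; the bucket number is the h1 bit followed by the h2 bits):

```python
# Component hashes from the example above.
h1 = {"toy": 0, "sales": 1, "art": 1}          # 1 bit
h2 = {10: 0b01, 20: 0b11, 30: 0b01, 40: 0b00}  # 2 bits, salaries in $k

def bucket(dept, sal):
    """Concatenate h1(dept) and h2(sal) into a 3-bit bucket number."""
    return (h1[dept] << 2) | h2[sal]

# Dept = sales AND Sal = 40k: both parts known -> one bucket (100).
single = [bucket("sales", 40)]

# Sal = 30k only: h2 bits fixed at 01, h1 bit free -> 2 buckets (001, 101).
sal_only = [(b1 << 2) | h2[30] for b1 in (0, 1)]

# Dept = sales only: h1 bit fixed at 1 -> 4 buckets (100-111).
dept_only = [(h1["sales"] << 2) | b2 for b2 in range(4)]
```

In general, fixing k of the hash bits narrows the search to a 1/2^k fraction of the buckets.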

Summary (post-hashing discussion): Indexing vs. hashing; SQL index definition; multiple-key access; multi-key index variations: grid, geo data; partitioned hash.

Reading: Chapter 5. Skim sections 5.3.6, 5.3.7, 5.3.8, 5.4.2, 5.4.3, 5.4.4; read the rest.

Revisit: Processing queries without accessing records until last step Find employees of the Toys dept with 4 years in the company SELECT Name FROM Employee WHERE Dept=“Toys” AND Year=4 Year Index Dept Index

Bitmap indices: an alternate structure, heavily used in OLAP. Assume the tuples of the Employees table are ordered. + Intersections and unions (e.g., Dept="Toys" AND Year=4) are found even more quickly. ? Seems to need too much space -> we'll do compression. ? How do we deal with insertions and deletions? -> Easier than you think.

2 log m Compression A naive bitmap solution needs mn bits, where m is the # of distinct values and n is the # of tuples. But there are just n 1's in total, so let's exploit this: encode each bitmap as its sequence of runs, i.e., the # of 0's before each successive 1. E.g., Toys: 00011010 becomes [3, 0, 1]: the first run says the first 1 appears after 3 zeros; the second says the 2nd 1 appears immediately after the 1st; the third says the 3rd 1 appears after 1 zero after the 2nd. Each run length is then bit-encoded; for Toys this gives 1011 00 01. In the first group, the prefix 10 says the binary encoding of the first number needs 1+1 digits, and 11 says the first number is 3.

2 log m compression example Pens: 01000001; run sequence [1, 5]; encoding: 01110101.
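
The encoding can be sketched as follows. `encode_run` writes d-1 ones and a zero to announce that the binary form of the run length has d digits, then the digits themselves; `bitmap_to_runs` extracts the run sequence from a raw bitmap.

```python
def bitmap_to_runs(bitmap):
    """'00011010' -> [3, 0, 1]: number of 0's before each successive 1."""
    runs, count = [], 0
    for bit in bitmap:
        if bit == "1":
            runs.append(count)
            count = 0
        else:
            count += 1
    return runs

def encode_run(k):
    """Prefix: (d-1) ones and a zero, where d = # binary digits of k;
    then the d digits of k. E.g., 3 -> '10' + '11' = '1011'."""
    bits = format(k, "b")  # note: format(0, "b") == "0", so 0 -> "00"
    return "1" * (len(bits) - 1) + "0" + bits

def encode(runs):
    return "".join(encode_run(k) for k in runs)
```

On the slide's examples: Toys 00011010 yields runs [3, 0, 1] and encoding 10110001; Pens 01000001 yields runs [1, 5] and encoding 01110101. The self-delimiting prefix is what makes the concatenation decodable.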

Insertions, deletions & miscellaneous engineering Assume tuples are inserted in order. Deletions: do nothing. Insertions: if a tuple t with value v is inserted, add one more run to v's sequence (compact bitmap).

The BIG picture....
Chapters 2 & 3: Storage, records, blocks...
Chapters 4 & 5: Access mechanisms: indexes, B-trees, hashing, multi-key
Chapters 6 & 7: Query processing <- NEXT