1
Query Processing — course outline
Basic Steps in Query Processing – an overview
Measures of Query Cost
Query Processing – several algorithms
Selection Operation
Join Operation: different algorithms to implement joins (nested-loop join, block nested-loop join, indexed nested-loop join)
Other Operations – an overview
Query Optimization Using Heuristics: query tree, query graph
Transformation of Relational Expressions: equivalence rules, pushing selections, join ordering, etc.
Choice of Evaluation Plans
Structure of Query Optimizers
Evaluation of Expressions – materialization, pipelining
Statistics for Cost Estimation
A complement – Sorting
2
2 Basic Steps in Query Processing
[Diagram: SQL query → Parser & Translator → relational-algebra expression → Query Optimizer (consulting statistics about the data) → execution plan → Evaluation Engine, operating on the stored DATA → result]
1. Parsing and translation: translate the query into its internal form, which is then translated into relational algebra (query tree, query graph). The parser checks syntax and verifies that the relations exist.
2. Query optimization: the process of choosing a suitable execution strategy for processing a query (next slides).
3. Evaluation: the query-execution engine takes a query-evaluation plan, executes that plan, and returns the answers to the query.
3
3 Basic Steps in Query Processing: Optimization
A relational algebra expression may have many equivalent expressions, e.g. σ_balance<2500(π_balance(account)) is equivalent to π_balance(σ_balance<2500(account)).
Each relational algebra operation can be evaluated using one of several different algorithms; correspondingly, a relational-algebra expression can be evaluated in many ways. An annotated expression specifying a detailed evaluation strategy is called an evaluation plan. Examples: use an index on balance to find accounts with balance < 2500, or perform a complete relation scan and discard accounts with balance ≥ 2500.
Query optimization: among all equivalent evaluation plans, choose the one with lowest cost. Cost is estimated using statistical information from the database catalog, e.g. number of tuples in each relation, size of tuples, etc.
4
4 Translating SQL Queries into Relational Algebra (1)
Query block: the basic unit that can be translated into the algebraic operators and optimized. A query block contains a single SELECT-FROM-WHERE expression, as well as GROUP BY and HAVING clauses if these are part of the block. Nested queries within a query are identified as separate query blocks. Aggregate operators in SQL must be included in the extended algebra.
Example:
SELECT lname, fname FROM Employee WHERE Salary > (SELECT MAX(Salary) FROM Employee WHERE DNO = 5);
decomposes into an outer block
SELECT lname, fname FROM Employee WHERE Salary > C
and an inner block
SELECT MAX(Salary) FROM Employee WHERE DNO = 5
which translate into π_lname,fname(σ_Salary>C(Employee)) and ℱ_MAX Salary(σ_DNO=5(Employee)), where C represents the result of the inner block.
5
5 Measures of Query Cost
Cost is generally measured as total elapsed time for answering the query. Many factors contribute to time cost: disk accesses, CPU, or even network communication.
Disk access is typically the predominant cost, and is also relatively easy to estimate. It is measured by taking into account: number of seeks * average-seek-cost, number of blocks read * average-block-read-cost, number of blocks written * average-block-write-cost.
For simplicity, we just use the number of block transfers from disk and the number of seeks as the cost measure: with t_T the time to transfer one block and t_S the time for one seek, the cost for b block transfers plus S seeks is b * t_T + S * t_S.
We ignore CPU costs for simplicity; real systems take CPU cost into account, differentiate between sequential and random I/O, and take buffer size into account.
Several algorithms can reduce disk I/O by using extra buffer space. The amount of real memory available to the buffer depends on concurrent queries and OS processes, and is known only during execution. Required data may already be buffer resident, avoiding disk I/O, but this is hard to take into account for cost estimation.
The cost to write a block is greater than the cost to read a block, because data is read back after being written to ensure that the write was successful.
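A minimal sketch of this simplified cost model in Python. The timing constants are illustrative assumptions, not measured values; the function simply evaluates b * t_T + S * t_S.

```python
# Simplified I/O cost model: b block transfers plus S seeks.
# The timing constants below are illustrative assumptions, not measured values.

T_TRANSFER = 0.0001   # t_T: seconds to transfer one block (assumed)
T_SEEK = 0.004        # t_S: seconds for one seek (assumed)

def io_cost(block_transfers: int, seeks: int,
            t_transfer: float = T_TRANSFER, t_seek: float = T_SEEK) -> float:
    """Estimated I/O time: b * t_T + S * t_S (CPU cost ignored)."""
    return block_transfers * t_transfer + seeks * t_seek

# Example: a linear scan of 400 blocks needs one initial seek.
print(io_cost(block_transfers=400, seeks=1))
```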
6
6 Selection Operation (algorithms for individual operations)
File scan: search algorithms that locate and retrieve records that fulfill a selection condition, e.g. σ_balance<2500(account). Here b_r denotes the number of blocks containing records of relation r.
Algorithm A1 (linear search): scan each file block and test all records to see whether they satisfy the selection condition. Cost estimate = b_r block transfers + 1 seek. If the selection is on a key attribute, cost = (b_r / 2) block transfers + 1 seek on average (stop on finding the record). Linear search can be applied regardless of the selection condition, the ordering of records in the file, or the availability of indices.
Algorithm A2 (binary search): applicable if the selection is an equality comparison on the attribute on which the file is ordered, assuming the blocks of the relation are stored contiguously. Cost estimate (number of disk blocks to be scanned): the cost of locating the first tuple by a binary search on the blocks, ⌈log2(b_r)⌉ * (t_T + t_S), plus the number of blocks containing records that satisfy the selection condition.
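A short sketch of the A1 and A2 cost estimates above. The relation size and the number of matching blocks passed in the example are hypothetical values, used only to show how the formulas combine.

```python
import math

def linear_scan_cost(b_r: int, key_attribute: bool = False):
    """A1: (block transfers, seeks) for a linear scan over b_r blocks."""
    transfers = b_r / 2 if key_attribute else b_r   # can stop early on a key match
    return transfers, 1                              # one initial seek

def binary_search_cost(b_r: int, matching_blocks: int = 1):
    """A2: binary search on a file sorted and stored contiguously on the selection attribute."""
    probes = math.ceil(math.log2(b_r))               # blocks touched while locating the first match
    # each probe is a random access (one seek + one transfer); the remaining
    # blocks holding matching records are then read sequentially
    transfers = probes + (matching_blocks - 1)
    return transfers, probes

print(linear_scan_cost(400))          # (400, 1)
print(binary_search_cost(400, 3))     # (11, 9)
```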
7
7 Selections Using Indices
Index scan: search algorithms that use an index; the selection condition must be on the search-key of the index.
A3 (primary index on candidate key, equality): retrieve a single record that satisfies the corresponding equality condition. Cost = (h_i + 1) * (t_T + t_S), where h_i is the number of levels in index i, i.e. the height of the index.
A4 (primary index on nonkey, equality): retrieve multiple records; the records will be on consecutive blocks. Cost = h_i * (t_T + t_S) + t_S + t_T * b, where b = number of blocks containing matching records.
A5 (equality on search-key of secondary index): retrieve a single record if the search-key is a candidate key, cost = (h_i + 1) * (t_T + t_S). Retrieve multiple records if the search-key is not a candidate key; each of the n matching records may be on a different block, so cost = (h_i + n) * (t_T + t_S) — this can be very expensive!
8
8 Selections Involving Comparisons
Selections of the form σ_A≤V(r) or σ_A≥V(r) can be implemented by using a linear file scan or binary search, or by using indices in the following ways:
A6 (primary index, comparison), relation sorted on A: for σ_A≥V(r), use the index to find the first tuple ≥ v and scan the relation sequentially from there; for σ_A≤V(r), just scan the relation sequentially until the first tuple > v — do not use the index.
A7 (secondary index, comparison): for σ_A≥V(r), use the index to find the first index entry ≥ v and scan the index sequentially from there to find pointers to records; for σ_A≤V(r), just scan the leaf pages of the index, collecting pointers to records, until the first entry > v. In either case, retrieving the records pointed to requires an I/O per record, so a linear file scan may be cheaper.
9
9 Implementation of Complex Selections
Conjunction: σ_θ1∧θ2∧...∧θn(r).
A8 (conjunctive selection using one index): select a combination of θ_i and algorithms A1 through A7 that results in the least cost for σ_θi(r); test the other conditions on each tuple after fetching it into the memory buffer.
A9 (conjunctive selection using multiple-key index): use an appropriate composite (multiple-key) index if available.
A10 (conjunctive selection by intersection of identifiers): requires indices with record pointers. Use the corresponding index for each condition, take the intersection of all the obtained sets of record pointers, and then fetch the records from the file. If some conditions do not have appropriate indices, apply those tests in memory.
Disjunction: σ_θ1∨θ2∨...∨θn(r).
A11 (disjunctive selection by union of identifiers): applicable only if all conditions have available indices; otherwise use a linear scan. Use the corresponding index for each condition, take the union of all the obtained sets of record pointers, and then fetch the records from the file.
Negation: σ_¬θ(r). Use a linear scan on the file; if very few records satisfy ¬θ and an index is applicable to ¬θ, find the satisfying records using the index and fetch them from the file.
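A small sketch of A10 and A11. The plain dicts mapping an attribute value to a set of record identifiers are stand-ins for whatever secondary indexes are actually available (an assumption); the point is only the intersection/union of pointer sets before fetching records.

```python
def conjunctive_selection(indexes, conditions):
    """A10: intersect the pointer sets from one index per condition."""
    pointer_sets = [indexes[attr].get(value, set()) for attr, value in conditions]
    return set.intersection(*pointer_sets) if pointer_sets else set()

def disjunctive_selection(indexes, conditions):
    """A11: union of the pointer sets; applicable only if every condition has an index."""
    result = set()
    for attr, value in conditions:
        result |= indexes[attr].get(value, set())
    return result

# Hypothetical indexes: attribute value -> set of record ids
indexes = {"branch": {"Brooklyn": {1, 4, 7}}, "balance": {2500: {4, 9}}}
print(conjunctive_selection(indexes, [("branch", "Brooklyn"), ("balance", 2500)]))  # {4}
print(disjunctive_selection(indexes, [("branch", "Brooklyn"), ("balance", 2500)]))  # {1, 4, 7, 9}
```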
10
10 Join Operation
There are several different algorithms to implement joins: nested-loop join, block nested-loop join, indexed nested-loop join, merge join, and hash join. The choice among them is based on cost estimates. For more details about the merge-join and hash-join algorithms, refer to the course textbooks.
11
11 Join Operation: Nested-Loop Join
To compute the theta join r ⋈_θ s:
for each tuple t_r in r do begin
  for each tuple t_s in s do begin
    test the pair (t_r, t_s) to see if it satisfies the join condition θ
    if it does, add t_r · t_s to the result
  end
end
r is called the outer relation and s the inner relation of the join. The algorithm requires no indices and can be used with any kind of join condition, but it is expensive since it examines every pair of tuples in the two relations.
Analysis: if there is enough memory to hold only one block of each relation, the estimated cost is n_r * b_s + b_r block transfers + (n_r + b_r) seeks. If the smaller relation fits entirely in memory, use it as the inner relation; this reduces the cost to b_r + b_s block transfers.
Example (customer: n = 10,000 records, b = 400 blocks; depositor: n = 5,000 records, b = 100 blocks). Assuming worst-case memory availability, the cost estimate is:
- with depositor as the outer relation: 5000 * 400 + 100 = 2,000,100 block transfers + (5000 + 100) seeks
- with customer as the outer relation: 10000 * 100 + 400 = 1,000,400 block transfers + (10000 + 400) seeks
If the smaller relation (depositor) fits entirely in memory, the cost estimate drops to 500 block transfers.
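A direct transcription of the tuple-at-a-time nested-loop join, assuming each relation is a list of dicts and the join condition is an arbitrary predicate (both assumptions for illustration).

```python
def nested_loop_join(outer, inner, condition):
    """Pair every tuple of the outer relation with every tuple of the inner relation."""
    result = []
    for t_r in outer:            # outer relation r
        for t_s in inner:        # inner relation s, rescanned once per outer tuple
            if condition(t_r, t_s):
                result.append({**t_r, **t_s})
    return result

depositor = [{"customer_name": "Adams", "account_number": "A-101"}]
account   = [{"account_number": "A-101", "balance": 500}]
print(nested_loop_join(depositor, account,
                       lambda r, s: r["account_number"] == s["account_number"]))
```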
12
12 Join Operation: Block Nested-Loop Join
Variant of the nested-loop join in which every block of the inner relation is paired with every block of the outer relation:
for each block B_r of r do
  for each block B_s of s do
    for each tuple t_r in B_r do
      for each tuple t_s in B_s do
        check if (t_r, t_s) satisfies the join condition; if it does, add t_r · t_s to the result
      end
    end
  end
end
Analysis: the worst-case estimate is b_r * b_s + b_r block transfers + 2 * b_r seeks (compare with n_r * b_s + b_r transfers in the previous case). Each block of the inner relation s is read once for each block of the outer relation, instead of once for each tuple of the outer relation. Best case: b_r + b_s block transfers + 2 seeks.
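A sketch of the block nested-loop variant. The relations are again in-memory lists of dicts, and block_size is an assumed number of tuples per block; the structure mirrors the four nested loops above.

```python
def blocks(relation, block_size):
    """Yield the relation one block (list slice) at a time."""
    for i in range(0, len(relation), block_size):
        yield relation[i:i + block_size]

def block_nested_loop_join(outer, inner, condition, block_size=100):
    result = []
    for outer_block in blocks(outer, block_size):
        for inner_block in blocks(inner, block_size):   # inner scanned once per outer block
            for t_r in outer_block:
                for t_s in inner_block:
                    if condition(t_r, t_s):
                        result.append({**t_r, **t_s})
    return result
```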
13
13 Join Operation: Indexed Nested-Loop Join
Index lookups can replace file scans if the join is an equi-join or natural join and an index is available on the inner relation's join attribute (a temporary index can be constructed). For each tuple t_r in the outer relation r, use the index to look up tuples in s that satisfy the join condition with t_r.
Analysis — worst case: the buffer has space for only one page of r, and for each tuple in r we perform an index lookup on s. Cost of the join: b_r + n_r * c, where c is the cost of traversing the index and fetching all matching s tuples for one tuple of r; c can be estimated as the cost of a single selection on s using the join condition. If indices are available on the join attributes of both r and s, use the relation with fewer tuples as the outer relation.
Example of nested-loop join costs for depositor ⋈ customer, with depositor (5,000 records) as the outer relation and customer (10,000 records) having a primary B+-tree index on the join attribute customer-name with 20 entries per index node:
- cost of block nested-loop join: 400 * 100 + 100 = 40,100 block transfers + 2 * 100 = 200 seeks, assuming worst-case memory (may be significantly less with more memory);
- cost of indexed nested-loop join: 100 + 5000 * 5 = 25,100 block transfers and seeks (the tree height is 4, and one more access is needed to fetch the actual data).
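A sketch of the indexed nested-loop join for an equi-join. An in-memory dict stands in for the index on the inner relation's join attribute (a real system would use an existing B+-tree or a temporary index); one lookup replaces the inner scan for each outer tuple.

```python
from collections import defaultdict

def indexed_nested_loop_join(outer, inner, join_attr):
    index = defaultdict(list)
    for t_s in inner:                       # build (or reuse) an index on s.join_attr
        index[t_s[join_attr]].append(t_s)
    result = []
    for t_r in outer:                       # one index lookup per outer tuple
        for t_s in index.get(t_r[join_attr], []):
            result.append({**t_r, **t_s})
    return result
```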
14
14 Complex Joins
Join with a conjunctive condition, r ⋈_{θ1∧θ2∧...∧θn} s: either use nested loops / block nested loops, or compute the result of one of the simpler joins r ⋈_θi s; the final result comprises those tuples of the intermediate result that satisfy the remaining conditions θ1 ∧ ... ∧ θi−1 ∧ θi+1 ∧ ... ∧ θn.
Join with a disjunctive condition, r ⋈_{θ1∨θ2∨...∨θn} s: either use nested loops / block nested loops, or compute it as the union of the records in the individual joins r ⋈_θi s: (r ⋈_θ1 s) ∪ (r ⋈_θ2 s) ∪ ... ∪ (r ⋈_θn s).
15
15 Other Operations
Duplicate elimination can be implemented via hashing or sorting. After sorting, duplicates are adjacent to each other, and all but one copy of each set of duplicates can be deleted; as an optimization, duplicates can be deleted during run generation as well as at intermediate merge steps of the external sort-merge. Hashing is similar: duplicates end up in the same bucket.
Projection is implemented by performing the projection on each tuple, followed by duplicate elimination.
Aggregation can be implemented in a manner similar to duplicate elimination: sorting or hashing brings tuples of the same group together, and the aggregate functions are then applied to each group. Optimization: combine tuples of the same group during run generation and intermediate merges by computing partial aggregate values — for count, min, max and sum, keep the aggregate value over the tuples found so far in the group; for avg, keep the sum and the count, and divide the sum by the count at the end.
Set operations (∪, ∩ and −) can use either a variant of merge-join after sorting, or hash-join.
Outer join can be computed either as a join followed by the addition of null-padded non-participating tuples, or by modifying the join algorithms (merge join, hash join).
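A small sketch of the hash-based variants of duplicate elimination and aggregation described above, assuming the relations fit in memory; on-disk versions would partition by hash value first. The partial sum/count kept per group is exactly the avg optimization mentioned in the slide.

```python
from collections import defaultdict

def duplicate_elimination(tuples):
    """Hash each tuple on its full content and keep one representative per bucket."""
    return list({tuple(sorted(t.items())): t for t in tuples}.values())

def average_by_group(tuples, group_attr, value_attr):
    """Keep a partial (sum, count) per group; divide only at the end."""
    partial = defaultdict(lambda: [0, 0])            # group -> [sum, count]
    for t in tuples:
        acc = partial[t[group_attr]]
        acc[0] += t[value_attr]
        acc[1] += 1
    return {g: s / c for g, (s, c) in partial.items()}

accounts = [{"branch": "Brooklyn", "balance": 500}, {"branch": "Brooklyn", "balance": 700}]
print(average_by_group(accounts, "branch", "balance"))   # {'Brooklyn': 600.0}
```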
16
Query Optimization Using Heuristics
17
17 Using Heuristics in Query Optimization (1)
Process for heuristic optimization:
1. The parser of a high-level query generates an initial internal representation.
2. Heuristic rules are applied to optimize the internal representation.
3. A query execution plan is generated to execute groups of operations, based on the access paths available on the files involved in the query.
The main heuristic is to apply first the operations that reduce the size of intermediate results, e.g. apply SELECT and PROJECT operations before applying JOIN or other binary operations.
Query tree: a tree data structure that corresponds to a relational algebra expression. It represents the input relations of the query as leaf nodes and the relational algebra operations as internal nodes. An execution of the query tree consists of executing an internal-node operation whenever its operands are available, and then replacing that internal node by the relation resulting from the operation.
Query graph: a graph data structure that corresponds to a relational calculus expression. It does not indicate an order in which to perform the operations; there is only a single graph corresponding to each query.
18
18 Query Optimization Using Heuristics — an Overview
There are alternative ways of evaluating a given query: equivalent expressions, and different algorithms for each operation. An evaluation plan defines exactly what algorithm is used for each operation and how the execution of the operations is coordinated.
Steps in cost-based query optimization: (1) generate logically equivalent expressions using equivalence rules; (2) choose the cheapest plan based on estimated cost. Estimation of plan cost is based on statistical information about relations (number of tuples, number of distinct values for an attribute) and on statistics estimation for intermediate results.
19
19 Equivalence Rules (1/2) — Transformation of Relational Expressions, Generating Equivalent Expressions
1. Conjunctive selection operations can be deconstructed into a sequence of individual selections: σ_θ1∧θ2(E) = σ_θ1(σ_θ2(E)).
2. Selection operations are commutative: σ_θ1(σ_θ2(E)) = σ_θ2(σ_θ1(E)).
3. Only the last in a sequence of projection operations is needed, the others can be omitted: π_L1(π_L2(...(π_Ln(E))...)) = π_L1(E).
4. Selections can be combined with Cartesian products and theta joins: σ_θ(E1 × E2) = E1 ⋈_θ E2, and σ_θ1(E1 ⋈_θ2 E2) = E1 ⋈_θ1∧θ2 E2.
5. Theta-join operations (and natural joins) are commutative: E1 ⋈_θ E2 = E2 ⋈_θ E1.
6. (a) Natural join operations are associative: (E1 ⋈ E2) ⋈ E3 = E1 ⋈ (E2 ⋈ E3). (b) Theta joins are associative in the following manner: (E1 ⋈_θ1 E2) ⋈_θ2∧θ3 E3 = E1 ⋈_θ1∧θ3 (E2 ⋈_θ2 E3), where θ2 involves attributes from only E2 and E3.
7. The selection operation distributes over the theta-join operation under the following two conditions: (a) when all the attributes in θ0 involve only the attributes of one of the expressions (E1) being joined: σ_θ0(E1 ⋈_θ E2) = (σ_θ0(E1)) ⋈_θ E2; (b) when θ1 involves only the attributes of E1 and θ2 involves only the attributes of E2: σ_θ1∧θ2(E1 ⋈_θ E2) = (σ_θ1(E1)) ⋈_θ (σ_θ2(E2)).
20
20 Equivalence Rules (2/2)
8. The projection operation distributes over the theta-join operation as follows: (a) if θ involves only attributes from L1 ∪ L2: π_L1∪L2(E1 ⋈_θ E2) = (π_L1(E1)) ⋈_θ (π_L2(E2)); (b) consider a join E1 ⋈_θ E2; let L1 and L2 be sets of attributes from E1 and E2 respectively, let L3 be the attributes of E1 that are involved in the join condition θ but are not in L1 ∪ L2, and let L4 be the attributes of E2 that are involved in θ but are not in L1 ∪ L2; then π_L1∪L2(E1 ⋈_θ E2) = π_L1∪L2((π_L1∪L3(E1)) ⋈_θ (π_L2∪L4(E2))).
9. The set operations union and intersection are commutative: E1 ∪ E2 = E2 ∪ E1 and E1 ∩ E2 = E2 ∩ E1.
10. Set union and intersection are associative: (E1 ∪ E2) ∪ E3 = E1 ∪ (E2 ∪ E3) and (E1 ∩ E2) ∩ E3 = E1 ∩ (E2 ∩ E3).
11. The selection operation distributes over ∪, ∩ and −: σ_θ(E1 − E2) = σ_θ(E1) − σ_θ(E2), and similarly with ∪ and ∩ in place of −. Also: σ_θ(E1 − E2) = σ_θ(E1) − E2, and similarly with ∩ in place of −, but not for ∪.
12. The projection operation distributes over union: π_L(E1 ∪ E2) = (π_L(E1)) ∪ (π_L(E2)).
21
21 Example — Pushing Selections
Query: find the names of all customers who have an account at some branch located in Brooklyn.
π_customer_name(σ_branch_city="Brooklyn"(branch ⋈ (account ⋈ depositor)))
Transformation using rule 7a, σ_θ0(E1 ⋈_θ E2) = (σ_θ0(E1)) ⋈_θ E2:
π_customer_name((σ_branch_city="Brooklyn"(branch)) ⋈ (account ⋈ depositor))
Performing the selection as early as possible reduces the size of the relation to be joined.
22
22 Example — Multiple Transformations
Query: find the names of all customers with an account at a Brooklyn branch whose account balance is over $1000.
π_customer_name(σ_branch_city="Brooklyn" ∧ balance>1000(branch ⋈ (account ⋈ depositor)))
Transformation using join associativity (rule 6a):
π_customer_name((σ_branch_city="Brooklyn" ∧ balance>1000(branch ⋈ account)) ⋈ depositor)
The second form provides an opportunity to apply the "perform selections early" rule, resulting in the subexpression
σ_branch_city="Brooklyn"(branch) ⋈ σ_balance>1000(account)
23
23 Join Ordering
For all relations r1, r2 and r3, (r1 ⋈ r2) ⋈ r3 = r1 ⋈ (r2 ⋈ r3). If r2 ⋈ r3 is quite large and r1 ⋈ r2 is small, we choose (r1 ⋈ r2) ⋈ r3, so that a smaller intermediate result is computed and stored.
Example: consider the expression π_customer_name((σ_branch_city="Brooklyn"(branch)) ⋈ (account ⋈ depositor)). We could compute account ⋈ depositor first and join the result with σ_branch_city="Brooklyn"(branch), but account ⋈ depositor is likely to be a large relation. Since only a small fraction of the bank's customers are likely to have accounts in branches located in Brooklyn, it is better to compute σ_branch_city="Brooklyn"(branch) ⋈ account first.
24
24 Query Optimization- Using Heuristics Summary of Heuristics for Algebraic Optimization: 1.The main heuristic is to apply first the operations that reduce the size of intermediate results. 2.Perform select operations as early as possible to reduce the number of tuples and perform project operations as early as possible to reduce the number of attributes. (This is done by moving select and project operations as far down the tree as possible.) 3.The select and join operations that are most restrictive should be executed before other similar operations. (This is done by reordering the leaf nodes of the tree among themselves and adjusting the rest of the tree appropriately.) Query Execution Plans An execution plan for a relational algebra query consists of a combination of the relational algebra query tree and information about the access methods to be used for each relation as well as the methods to be used in computing the relational operators stored in the tree. Materialized evaluation: the result of an operation is stored as a temporary relation. Pipelined evaluation: as the result of an operator is produced, it is forwarded to the next operator in sequence.
25
25 Choice of Evaluation Plans Must consider the interaction of evaluation techniques when choosing evaluation plans choosing the cheapest algorithm for each operation independently may not yield best overall algorithm. Practical query optimizers incorporate elements of the following two broad approaches: 1.Search all the plans and choose the best plan in a cost-based fashion. 2.Uses heuristics to choose a plan. Cost-Based Optimization Consider finding the best join-order for r 1 r 2 ... r n. There are (2(n – 1))!/(n – 1)! different join orders for above expression. With n = 7, the number is 665280, with n = 10, the number is greater than 176 billion! No need to generate all the join orders. Using dynamic programming, the least-cost join order for any subset of { r 1, r 2,... r n } is computed only once and stored for future use.
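A quick check of the join-order count quoted above. The slide's formula, (2(n − 1))! / (n − 1)!, is evaluated directly for n = 7 and n = 10.

```python
from math import factorial

def join_orders(n: int) -> int:
    """Number of different join orders for r1 ⋈ r2 ⋈ ... ⋈ rn: (2(n-1))! / (n-1)!."""
    return factorial(2 * (n - 1)) // factorial(n - 1)

print(join_orders(7))    # 665280
print(join_orders(10))   # 17643225600000 — indeed greater than 176 billion
```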
26
26 Structure of Query Optimizers
Many optimizers consider only left-deep join orders, plus heuristics to push selections and projections down the query tree. This reduces optimization complexity and generates plans amenable to pipelined evaluation. Heuristic optimization is used in some versions of Oracle.
Intricacies of SQL complicate query optimization, e.g. nested subqueries.
Some query optimizers integrate heuristic selection and the generation of alternative access plans. A frequently used approach is heuristic rewriting of the nested block structure and aggregation, followed by cost-based join-order optimization for each block. Some optimizers (e.g. SQL Server) apply transformations to the entire query and do not depend on the block structure.
Even with the use of heuristics, cost-based query optimization imposes a substantial overhead, but it is worth it for expensive queries. Optimizers often use simple heuristics for very cheap queries, and perform exhaustive enumeration for more expensive queries.
27
27 Evaluation of Expressions Alternatives for evaluating an entire expression tree Materialization: generate results of an expression whose inputs are relations or are already computed, materialize (store) it on disk. Repeat. Pipelining: pass on tuples to parent operations even as an operation is being executed
28
28 Materialization
Materialized evaluation: evaluate one operation at a time, starting at the lowest level, and use intermediate results materialized into temporary relations to evaluate next-level operations. E.g., for π_customer-name(σ_balance<2500(account) ⋈ customer):
1. compute and store σ_balance<2500(account),
2. then compute and store its join with customer,
3. and finally compute the projection on customer-name.
Materialized evaluation is always applicable, but the cost of writing results to disk and reading them back can be quite high: overall cost = sum of the costs of the individual operations + cost of writing intermediate results to disk.
Double buffering: use two output buffers for each operation; when one is full, write it to disk while the other is being filled.
29
29 Pipelining (1/2)
Pipelined evaluation: evaluate several operations simultaneously, passing the results of one operation on to the next. E.g., in the expression tree above, don't store the result of σ_balance<2500(account) — instead, pass its tuples directly to the join; similarly, don't store the result of the join, pass its tuples directly to the projection.
Evaluation: much cheaper than materialization, since there is no need to store a temporary relation on disk. Pipelining may not always be possible, e.g. for sort or hash-join. For pipelining to be effective, use evaluation algorithms that generate output tuples even as tuples are received on their inputs. Pipelines can be executed in two ways: demand driven and producer driven.
30
30 Pipelining (2/2)
In demand-driven (lazy) evaluation, the system repeatedly requests the next tuple from the top-level operation; each operation in turn requests the next tuple from its children as required in order to produce its own next output tuple. In between calls, an operation has to maintain state so it knows what to return next; each operation is implemented as an iterator providing open/next/close operations.
In producer-driven (eager) pipelining, operators produce tuples eagerly and pass them up to their parents. A buffer is maintained between operators: the child puts tuples into the buffer and the parent removes them; if the buffer is full, the child waits until there is space and then generates more tuples. The system schedules operations that have space in their output buffer and can process more input tuples.
Evaluation: some algorithms are not able to output results even as they receive input tuples, e.g. merge join or hash join; these always result in intermediate results being written to disk and then read back. Algorithm variants are possible that generate (at least some) results on the fly, as input tuples are read in.
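A sketch of demand-driven pipelining, using Python generators to play the role of the open/next/close iterators (an implementation choice, not part of the slide): each operator pulls the next tuple from its child only when its own next tuple is requested, so no intermediate relation is materialized.

```python
def scan(relation):                 # leaf operator: table scan
    for t in relation:
        yield t

def select(child, predicate):       # σ: pass on only matching tuples
    for t in child:
        if predicate(t):
            yield t

def project(child, attrs):          # π (without duplicate elimination)
    for t in child:
        yield {a: t[a] for a in attrs}

account = [{"account_number": "A-101", "balance": 500},
           {"account_number": "A-102", "balance": 9000}]
plan = project(select(scan(account), lambda t: t["balance"] < 2500), ["account_number"])
for row in plan:                    # the top level "repeatedly requests the next tuple"
    print(row)
```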
31
31 Assignment — Complex Joins
Join involving three relations: loan ⋈ depositor ⋈ customer.
Strategy 1: compute depositor ⋈ customer, then use the result to compute loan ⋈ (depositor ⋈ customer).
Strategy 2: compute loan ⋈ depositor first, and then join the result with customer.
Strategy 3: perform the pair of joins at once. Build an index on loan for loan-number and on customer for customer-name. For each tuple t in depositor, look up the corresponding tuples in customer and the corresponding tuples in loan; each tuple of depositor is examined exactly once.
Evaluation: strategy 3 combines two operations into one special-purpose operation that can be more efficient than implementing two joins of two relations.
32
Statistics for Cost Estimation — information used for cost estimation:
n_r: number of tuples in relation r.
b_r: number of blocks containing tuples of r.
s_r: size of a tuple of r.
f_r: blocking factor of r, i.e. the number of tuples of r that fit into one block.
V(A, r): number of distinct values that appear in r for attribute A; the same as the size of π_A(r).
SC(A, r): selection cardinality of attribute A of relation r; the average number of records that satisfy equality on A.
If the tuples of r are stored together physically in a file, then b_r = ⌈n_r / f_r⌉.
33
33 Selection Size Estimation
σ_A=v(r): n_r / V(A, r) is the estimated number of records that will satisfy the selection; for an equality condition on a key attribute, the size estimate is 1.
σ_A≤V(r) (the case of σ_A≥V(r) is symmetric): let c denote the estimated number of tuples satisfying the condition. If min(A, r) and max(A, r) are available in the catalog: c = 0 if v < min(A, r); c = n_r if v ≥ max(A, r); otherwise c = n_r · (v − min(A, r)) / (max(A, r) − min(A, r)). In the absence of statistical information, c is assumed to be n_r / 2.
Size estimation of complex selections: the selectivity of a condition θ_i is the probability that a tuple in the relation r satisfies θ_i; if s_i is the number of satisfying tuples in r, the selectivity of θ_i is s_i / n_r.
Conjunction σ_θ1∧θ2∧...∧θn(r): assuming independence, the estimated number of tuples in the result is n_r · (s_1 · s_2 · ... · s_n) / n_r^n.
Disjunction σ_θ1∨θ2∨...∨θn(r): estimated number of tuples: n_r · (1 − (1 − s_1/n_r) · (1 − s_2/n_r) · ... · (1 − s_n/n_r)).
Negation σ_¬θ(r): estimated number of tuples: n_r − size(σ_θ(r)).
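The estimates above, written out as small functions. They assume the relevant catalog statistics (n_r, V(A, r), min and max of A) are supplied by the caller; the independence assumption for conjunctions is the same one the slide makes.

```python
def equality_estimate(n_r, distinct_values):
    """sigma_{A=v}(r): n_r / V(A, r)."""
    return n_r / distinct_values

def range_estimate(n_r, v, min_a, max_a):
    """sigma_{A<=v}(r) using min/max from the catalog."""
    if v < min_a:
        return 0
    if v >= max_a:
        return n_r
    return n_r * (v - min_a) / (max_a - min_a)

def conjunction_estimate(n_r, satisfying_counts):
    """n_r * (s_1 * ... * s_n) / n_r**n, assuming independent conditions."""
    est = n_r
    for s in satisfying_counts:
        est *= s / n_r
    return est

def disjunction_estimate(n_r, satisfying_counts):
    """n_r * (1 - prod(1 - s_i / n_r))."""
    miss = 1.0
    for s in satisfying_counts:
        miss *= 1 - s / n_r
    return n_r * (1 - miss)
```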
34
34 Join Operation: Running Example Running example: depositor customer Catalog information for join examples: n customer = 10,000. f customer = 25, which implies that b customer =10000/25 = 400. n depositor = 5000. f depositor = 50, which implies that b depositor = 5000/50 = 100. V(customer-name, depositor) = 2500, which implies that, on average, each customer has two accounts. Also assume that customer-name in depositor is a foreign key on customer. V(customer_name, customer) = 10000 (primary key!)
35
35 Estimation of the Size of Joins
The Cartesian product r × s contains n_r · n_s tuples; each tuple occupies s_r + s_s bytes.
If R ∩ S = ∅, then r ⋈ s is the same as r × s.
If R ∩ S is a key for R, then a tuple of s will join with at most one tuple from r; therefore the number of tuples in r ⋈ s is no greater than the number of tuples in s.
If R ∩ S in S is a foreign key in S referencing R, then the number of tuples in r ⋈ s is exactly the same as the number of tuples in s (the case of R ∩ S being a foreign key referencing S is symmetric). In the example query depositor ⋈ customer, customer_name in depositor is a foreign key of customer; hence the result has exactly n_depositor = 5000 tuples.
If R ∩ S = {A} is not a key for R or S: if we assume that every tuple t in R produces tuples in R ⋈ S, the number of tuples in R ⋈ S is estimated to be n_r · n_s / V(A, s); if the reverse is true, the estimate is n_r · n_s / V(A, r). The lower of these two estimates is probably the more accurate one.
Computing the size estimate for depositor ⋈ customer without using the foreign-key information: V(customer_name, depositor) = 2500 and V(customer_name, customer) = 10000, so the two estimates are 5000 * 10000 / 2500 = 20,000 and 5000 * 10000 / 10000 = 5000. We choose the lower estimate, which in this case is the same as our earlier computation using foreign keys.
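A one-function sketch of the non-key case above, taking the lower of the two estimates as the slide suggests; the numbers in the example are the catalog figures from the running depositor ⋈ customer example.

```python
def join_size_estimate(n_r, n_s, v_a_r, v_a_s):
    """Estimate |R ⋈ S| on common attribute A that is a key of neither relation."""
    return min(n_r * n_s // v_a_s, n_r * n_s // v_a_r)

# depositor ⋈ customer without using the foreign-key information:
print(join_size_estimate(n_r=5000, n_s=10000, v_a_r=2500, v_a_s=10000))   # 5000
```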
36
A complement - Sorting We may build an index on the relation, and then use the index to read the relation in sorted order. May lead to one disk block access for each tuple. For relations that fit in memory, techniques like quicksort can be used. For relations that don’t fit in memory, external sort- merge is a good choice.
37
37 External Sort-Merge (a complement — sorting)
Let M denote the memory size (in pages).
1. Create sorted runs. Let i be 0 initially. Repeatedly do the following until the end of the relation: (a) read M blocks of the relation into memory; (b) sort the in-memory blocks; (c) write the sorted data to run R_i and increment i. Let the final value of i be N.
2. Merge the runs (N-way merge). We assume (for now) that N < M. Use N blocks of memory to buffer the input runs and 1 block to buffer the output. Read the first block of each run into its buffer page, then repeat:
(1) select the first record (in sort order) among all buffer pages;
(2) write the record to the output buffer — if the output buffer is full, write it to disk;
(3) delete the record from its input buffer page — if the buffer page becomes empty, read the next block (if any) of that run into the buffer;
until all input buffer pages are empty.
If N ≥ M, several merge passes are required. In each pass, contiguous groups of M − 1 runs are merged; a pass reduces the number of runs by a factor of M − 1 and creates runs longer by the same factor. E.g., if M = 11 and there are 90 runs, one pass reduces the number of runs to 9, each 10 times the size of the initial runs. Repeated passes are performed until all runs have been merged into one.
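A compact sketch of external sort-merge. To stay self-contained it keeps the sorted runs as in-memory lists rather than files on disk, and "M blocks of memory" is simplified to a run_size of tuples per run — assumptions made only so the example runs as-is; the two phases mirror the algorithm above.

```python
import heapq

def create_runs(relation, run_size):
    """Pass 0: cut the input into chunks of run_size tuples and sort each chunk."""
    return [sorted(relation[i:i + run_size])
            for i in range(0, len(relation), run_size)]

def merge_runs(runs):
    """N-way merge: repeatedly pick the smallest record among the run fronts."""
    return list(heapq.merge(*runs))

def external_sort(relation, run_size):
    runs = create_runs(relation, run_size)    # sorted run generation
    return merge_runs(runs)                   # single merge pass (assumes N < M)

print(external_sort([24, 19, 31, 33, 14, 16, 21, 3, 2, 7, 13], run_size=3))
```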
38
38 Sort-Merge: Cost Analysis
Total number of merge passes required: ⌈log_{M−1}(b_r / M)⌉. Disk accesses for the initial run creation, as well as in each pass, amount to 2·b_r block transfers, so the total number of block transfers for external sorting is b_r · (2·⌈log_{M−1}(b_r / M)⌉ + 1) (the final write is not counted, since the result may be passed directly to the next operation).
Cost of seeks: during run generation, one seek to read each run and one seek to write each run, i.e. 2·⌈b_r / M⌉ seeks. During the merge phase, with a buffer of b_b blocks (reading/writing b_b blocks at a time), 2·⌈b_r / b_b⌉ seeks are needed for each merge pass, except the final one, which does not require a write. Total number of seeks: 2·⌈b_r / M⌉ + ⌈b_r / b_b⌉ · (2·⌈log_{M−1}(b_r / M)⌉ − 1).