Distributed DBMS – Distributed Database Design (presentation transcript) © 1998 M. Tamer Özsu & Patrick Valduriez

1 Outline
- Introduction
- Background
- Distributed DBMS Architecture
- Distributed Database Design
  - Fragmentation
  - Data Location
- Semantic Data Control
- Distributed Query Processing
- Distributed Transaction Management
- Parallel Database Systems
- Distributed Object DBMS
- Database Interoperability
- Current Issues

2 Design Problem
In the general setting: making decisions about the placement of data and programs across the sites of a computer network, as well as possibly designing the network itself.
In a distributed DBMS, the placement of applications entails
- placement of the distributed DBMS software; and
- placement of the applications that run on the database.

3 Dimensions of the Problem
The design problem can be characterized along three dimensions (shown as a cube in the original slide):
- Level of sharing: data; data + program
- Access pattern behavior: static; dynamic
- Level of knowledge: partial information; complete information

4 Distribution Design
Top-down
- mostly in designing systems from scratch
- mostly in homogeneous systems
Bottom-up
- when the databases already exist at a number of sites

5 Top-Down Design
Flow of the top-down design process (figure): requirements analysis produces the system objectives; view design (with user input) and conceptual design are reconciled through view integration, yielding the external schemas (ES's) and the global conceptual schema (GCS); distribution design uses access information to produce the local conceptual schemas (LCS's); physical design then produces the local internal schemas (LIS's).

6 Distribution Design Issues
- Why fragment at all?
- How to fragment?
- How much to fragment?
- How to test correctness?
- How to allocate?
- Information requirements?

7 Fragmentation
Can't we just distribute relations? What is a reasonable unit of distribution?
- relation
  - views are subsets of relations ⇒ locality
  - extra communication
- fragments of relations (sub-relations)
  - concurrent execution of a number of transactions that access different portions of a relation
  - views that cannot be defined on a single fragment will require extra processing
  - semantic data control (especially integrity enforcement) is more difficult

8 Fragmentation Alternatives – Horizontal
PROJ
PNO  PNAME              BUDGET   LOC
P1   Instrumentation    150000   Montreal
P2   Database Develop.  135000   New York
P3   CAD/CAM            250000   New York
P4   Maintenance        310000   Paris
P5   CAD/CAM            500000   Boston

PROJ1: projects with budgets less than $200,000
PNO  PNAME              BUDGET   LOC
P1   Instrumentation    150000   Montreal
P2   Database Develop.  135000   New York

PROJ2: projects with budgets greater than or equal to $200,000
PNO  PNAME              BUDGET   LOC
P3   CAD/CAM            250000   New York
P4   Maintenance        310000   Paris
P5   CAD/CAM            500000   Boston

9 Fragmentation Alternatives – Vertical
PROJ
PNO  PNAME              BUDGET   LOC
P1   Instrumentation    150000   Montreal
P2   Database Develop.  135000   New York
P3   CAD/CAM            250000   New York
P4   Maintenance        310000   Paris
P5   CAD/CAM            500000   Boston

PROJ1: information about project budgets
PNO  BUDGET
P1   150000
P2   135000
P3   250000
P4   310000
P5   500000

PROJ2: information about project names and locations
PNO  PNAME              LOC
P1   Instrumentation    Montreal
P2   Database Develop.  New York
P3   CAD/CAM            New York
P4   Maintenance        Paris
P5   CAD/CAM            Boston

10 Degree of Fragmentation
The degree of fragmentation ranges from individual tuples or attributes at one extreme to entire relations at the other, giving a finite number of alternatives. The design problem is finding the suitable level of partitioning within this range.

11 Correctness of Fragmentation
Completeness
- The decomposition of relation R into fragments R1, R2, ..., Rn is complete if and only if each data item in R can also be found in some Ri.
Reconstruction
- If relation R is decomposed into fragments R1, R2, ..., Rn, then there should exist some relational operator ∇ such that R = ∇ Ri, 1 ≤ i ≤ n.
Disjointness
- If relation R is decomposed into fragments R1, R2, ..., Rn, and data item di is in Rj, then di should not be in any other fragment Rk (k ≠ j).
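
For a concrete horizontal fragmentation, these three rules can be checked mechanically. The following is a minimal Python sketch (not from the slides), assuming relations and fragments are represented as lists of tuples and that the reconstruction operator for horizontal fragments is set union:

```python
# Sketch: checking the three correctness rules for a horizontal fragmentation.
# Relations and fragments are modelled as lists of hashable tuples.

def is_complete(relation, fragments):
    """Every tuple of R appears in at least one fragment."""
    return set(relation) <= set().union(*map(set, fragments))

def reconstructs(relation, fragments):
    """For horizontal fragments the reconstruction operator is union."""
    return set().union(*map(set, fragments)) == set(relation)

def is_disjoint(fragments):
    """No tuple appears in more than one fragment."""
    seen = set()
    for frag in fragments:
        for t in frag:
            if t in seen:
                return False
            seen.add(t)
    return True

PROJ = [("P1", "Instrumentation", 150000, "Montreal"),
        ("P2", "Database Develop.", 135000, "New York"),
        ("P3", "CAD/CAM", 250000, "New York")]
PROJ1 = [t for t in PROJ if t[2] < 200000]
PROJ2 = [t for t in PROJ if t[2] >= 200000]
print(is_complete(PROJ, [PROJ1, PROJ2]),
      reconstructs(PROJ, [PROJ1, PROJ2]),
      is_disjoint([PROJ1, PROJ2]))   # True True True
```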

12 Allocation Alternatives
Non-replicated
- partitioned: each fragment resides at only one site
Replicated
- fully replicated: each fragment at each site
- partially replicated: each fragment at some of the sites
Rule of thumb: if (read-only queries / update queries) ≥ 1, replication is advantageous; otherwise replication may cause problems.

13 Comparison of Replication Alternatives
                        Full replication       Partial replication    Partitioning
Query processing        Easy                   Same difficulty        Same difficulty
Directory management    Easy or nonexistent    Same difficulty        Same difficulty
Concurrency control     Moderate               Difficult              Easy
Reliability             Very high              High                   Low
Reality                 Possible application   Realistic              Possible application

14 Information Requirements
Four categories:
- Database information
- Application information
- Communication network information
- Computer system information

15 Fragmentation
Horizontal Fragmentation (HF)
- Primary Horizontal Fragmentation (PHF)
- Derived Horizontal Fragmentation (DHF)
Vertical Fragmentation (VF)
Hybrid Fragmentation (HF)

16 PHF – Information Requirements
Database information
- the relationships (links) among relations; in the example schema, link L1 connects SKILL(TITLE, SAL) to EMP(ENO, ENAME, TITLE), L2 connects EMP to ASG(ENO, PNO, RESP, DUR), and L3 connects PROJ(PNO, PNAME, BUDGET, LOC) to ASG
- cardinality of each relation: card(R)

17 PHF – Information Requirements
Application information
- simple predicates: given R[A1, A2, ..., An], a simple predicate pj is
      pj : Ai θ Value
  where θ ∈ {=, <, ≤, >, ≥, ≠}, Value ∈ Di, and Di is the domain of Ai. For relation R we define Pr = {p1, p2, ..., pm}.
  Example:
      PNAME = "Maintenance"
      BUDGET ≤ 200000
- minterm predicates: given R and Pr = {p1, p2, ..., pm}, define M = {m1, m2, ..., mz} as
      M = {mi | mi = ∧ pj*, pj ∈ Pr}, 1 ≤ j ≤ m, 1 ≤ i ≤ z
  where pj* = pj or pj* = ¬(pj).
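
Minterm predicates are just every conjunction that keeps either each simple predicate or its negation. A small Python sketch of this enumeration (illustrative representation, not the book's notation), with each simple predicate modelled as a label plus a test function:

```python
from itertools import product

# Sketch: enumerating the minterm predicates of Pr.
# A minterm keeps either p or NOT p for every simple predicate p in Pr.

Pr = [
    ("PNAME = 'Maintenance'", lambda t: t["PNAME"] == "Maintenance"),
    ("BUDGET <= 200000",      lambda t: t["BUDGET"] <= 200000),
]

def minterms(simple_predicates):
    for signs in product([True, False], repeat=len(simple_predicates)):
        label = " AND ".join(lbl if keep else f"NOT ({lbl})"
                             for (lbl, _), keep in zip(simple_predicates, signs))
        def test(t, signs=signs, preds=simple_predicates):
            return all(p(t) if keep else not p(t)
                       for (_, p), keep in zip(preds, signs))
        yield label, test

for label, _ in minterms(Pr):
    print(label)
# PNAME = 'Maintenance' AND BUDGET <= 200000
# PNAME = 'Maintenance' AND NOT (BUDGET <= 200000)
# NOT (PNAME = 'Maintenance') AND BUDGET <= 200000
# NOT (PNAME = 'Maintenance') AND NOT (BUDGET <= 200000)
```

The four conjunctions printed above are the minterms m1 through m4 shown on the next slide (listed in a different order).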

18 PHF – Information Requirements
Example:
    m1: PNAME = "Maintenance" ∧ BUDGET ≤ 200000
    m2: NOT (PNAME = "Maintenance") ∧ BUDGET ≤ 200000
    m3: PNAME = "Maintenance" ∧ NOT (BUDGET ≤ 200000)
    m4: NOT (PNAME = "Maintenance") ∧ NOT (BUDGET ≤ 200000)

19 PHF – Information Requirements
Application information
- minterm selectivities: sel(mi)
  - the number of tuples of the relation that would be accessed by a user query specified according to a given minterm predicate mi
- access frequencies: acc(qi)
  - the frequency with which a user application qi accesses data
  - access frequency for a minterm predicate can also be defined

20 Primary Horizontal Fragmentation
Definition:
    Rj = σ_Fj(R), 1 ≤ j ≤ w
where Fj is a selection formula, which is (preferably) a minterm predicate.
Therefore, a horizontal fragment Ri of relation R consists of all the tuples of R which satisfy a minterm predicate mi.
Given a set of minterm predicates M, there are as many horizontal fragments of relation R as there are minterm predicates. This set of horizontal fragments is also referred to as the set of minterm fragments.
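
The definition amounts to one selection per fragment. A short Python sketch, assuming tuples are represented as dictionaries and each selection formula Fj is a predicate function (the PROJ data is taken from the earlier slide):

```python
# Sketch: primary horizontal fragmentation R_j = sigma_{F_j}(R), with each
# selection formula F_j modelled as a Python predicate over a row dictionary.

PROJ = [
    {"PNO": "P1", "PNAME": "Instrumentation",   "BUDGET": 150000, "LOC": "Montreal"},
    {"PNO": "P2", "PNAME": "Database Develop.", "BUDGET": 135000, "LOC": "New York"},
    {"PNO": "P3", "PNAME": "CAD/CAM",           "BUDGET": 250000, "LOC": "New York"},
    {"PNO": "P4", "PNAME": "Maintenance",       "BUDGET": 310000, "LOC": "Paris"},
    {"PNO": "P5", "PNAME": "CAD/CAM",           "BUDGET": 500000, "LOC": "Boston"},
]

def select(relation, formula):
    """sigma_formula(relation): keep the tuples satisfying the formula."""
    return [t for t in relation if formula(t)]

PROJ1 = select(PROJ, lambda t: t["BUDGET"] < 200000)    # budgets under $200,000
PROJ2 = select(PROJ, lambda t: t["BUDGET"] >= 200000)   # the remaining projects
print([t["PNO"] for t in PROJ1])   # ['P1', 'P2']
print([t["PNO"] for t in PROJ2])   # ['P3', 'P4', 'P5']
```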

21 PHF – Algorithm
Given: a relation R and the set of simple predicates Pr
Output: the set of fragments of R, {R1, R2, ..., Rw}, which obey the fragmentation rules
Preliminaries:
- Pr should be complete
- Pr should be minimal

22 Completeness of Simple Predicates
A set of simple predicates Pr is said to be complete if and only if any two tuples belonging to the same minterm fragment defined on Pr have the same probability of being accessed by any application.
Example:
- Assume PROJ[PNO, PNAME, BUDGET, LOC] has two applications defined on it:
  (1) Find the budgets of projects at each location.
  (2) Find projects with budgets less than $200,000.

23 Completeness of Simple Predicates
According to (1), Pr = {LOC = "Montreal", LOC = "New York", LOC = "Paris"}, which is not complete with respect to (2). Modify it to
    Pr = {LOC = "Montreal", LOC = "New York", LOC = "Paris", BUDGET ≤ 200000, BUDGET > 200000}
which is complete.

24 Minimality of Simple Predicates
If a predicate influences how fragmentation is performed (i.e., causes a fragment f to be further fragmented into, say, fi and fj), then there should be at least one application that accesses fi and fj differently, that is,
    acc(mi) / card(fi) ≠ acc(mj) / card(fj)
In other words, the simple predicate should be relevant in determining a fragmentation. If all the predicates of a set Pr are relevant, then Pr is minimal.

25 Minimality of Simple Predicates
Example:
    Pr = {LOC = "Montreal", LOC = "New York", LOC = "Paris", BUDGET ≤ 200000, BUDGET > 200000}
is minimal (in addition to being complete). However, if we add PNAME = "Instrumentation", then Pr is no longer minimal.

26 COM_MIN Algorithm
Given: a relation R and a set of simple predicates Pr
Output: a complete and minimal set of simple predicates Pr' for Pr
Rule 1: a relation or fragment is partitioned into at least two parts which are accessed differently by at least one application.

27 COM_MIN Algorithm
- Initialization:
  - find a pi ∈ Pr such that pi partitions R according to Rule 1
  - set Pr' = {pi}; Pr ← Pr – {pi}; F ← {fi}
- Iteratively add predicates to Pr' until it is complete:
  - find a pj ∈ Pr such that pj partitions some fragment fk, defined according to a minterm predicate over Pr', according to Rule 1
  - set Pr' = Pr' ∪ {pj}; Pr ← Pr – {pj}; F ← F ∪ {fj}
  - if there is a pk ∈ Pr' which is nonrelevant, then
      Pr' ← Pr' – {pk}; F ← F – {fk}

28 PHORIZONTAL Algorithm
Makes use of COM_MIN to perform fragmentation.
Input: a relation R and a set of simple predicates Pr
Output: a set of minterm predicates M according to which relation R is to be fragmented
Steps:
- Pr' ← COM_MIN(R, Pr)
- determine the set M of minterm predicates
- determine the set I of implications among pi ∈ Pr
- eliminate the contradictory minterms from M
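
The last two steps can be illustrated with a small sketch. This is not the textbook pseudo-code: implications are modelled here simply as boolean conditions that every consistent minterm must satisfy, using the PROJ predicates from the example that follows.

```python
from itertools import product

# Sketch: enumerate all minterms over the simple predicates of the PROJ
# example and drop the contradictory ones.

predicates = ["LOC=Montreal", "LOC=NewYork", "LOC=Paris",
              "BUDGET<=200000", "BUDGET>200000"]

implications = [
    lambda m: not (m["LOC=Montreal"] and m["LOC=NewYork"]),   # at most one location
    lambda m: not (m["LOC=Montreal"] and m["LOC=Paris"]),
    lambda m: not (m["LOC=NewYork"] and m["LOC=Paris"]),
    lambda m: m["LOC=Montreal"] or m["LOC=NewYork"] or m["LOC=Paris"],
    lambda m: m["BUDGET<=200000"] != m["BUDGET>200000"],      # budget predicates negate each other
]

minterms = [dict(zip(predicates, values))
            for values in product([True, False], repeat=len(predicates))]
surviving = [m for m in minterms if all(rule(m) for rule in implications)]
print(len(minterms), "->", len(surviving))   # 32 -> 6 non-contradictory minterms
```

Under these (assumed) implications, the six surviving minterms are exactly the minterm fragments m1 through m6 of the PROJ example shown a few slides later.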

29 PHF – Example
Two candidate relations: PAY and PROJ.
Fragmentation of relation PAY
- Application: check the salary information and determine raises.
  - Employee records kept at two sites ⇒ application run at two sites
- Simple predicates
      p1: SAL ≤ 30000
      p2: SAL > 30000
      Pr = {p1, p2}, which is complete and minimal; Pr' = Pr
- Minterm predicates
      m1: (SAL ≤ 30000)
      m2: NOT (SAL ≤ 30000) = (SAL > 30000)

30 PHF – Example
PAY1
TITLE       SAL
Mech. Eng.  27000
Programmer  24000

PAY2
TITLE        SAL
Elect. Eng.  40000
Syst. Anal.  34000

31 PHF – Example
Fragmentation of relation PROJ
- Applications:
  (1) Find the name and budget of projects given their project number; issued at three sites.
  (2) Access project information according to budget; one site accesses BUDGET ≤ 200000, the other accesses BUDGET > 200000.
- Simple predicates
  - for application (1):
      p1: LOC = "Montreal"
      p2: LOC = "New York"
      p3: LOC = "Paris"
  - for application (2):
      p4: BUDGET ≤ 200000
      p5: BUDGET > 200000
- Pr = Pr' = {p1, p2, p3, p4, p5}

32 PHF – Example
Fragmentation of relation PROJ (continued)
- Minterm fragments left after elimination:
      m1: (LOC = "Montreal") ∧ (BUDGET ≤ 200000)
      m2: (LOC = "Montreal") ∧ (BUDGET > 200000)
      m3: (LOC = "New York") ∧ (BUDGET ≤ 200000)
      m4: (LOC = "New York") ∧ (BUDGET > 200000)
      m5: (LOC = "Paris") ∧ (BUDGET ≤ 200000)
      m6: (LOC = "Paris") ∧ (BUDGET > 200000)

33 PHF – Example
PROJ1
PNO  PNAME            BUDGET   LOC
P1   Instrumentation  150000   Montreal

PROJ2
PNO  PNAME              BUDGET   LOC
P2   Database Develop.  135000   New York

PROJ4
PNO  PNAME    BUDGET   LOC
P3   CAD/CAM  250000   New York

PROJ6
PNO  PNAME        BUDGET   LOC
P4   Maintenance  310000   Paris

34 PHF – Correctness
Completeness
- Since Pr' is complete and minimal, the selection predicates are complete.
Reconstruction
- If relation R is fragmented into F_R = {R1, R2, ..., Rr}, then
      R = ∪ Ri, for all Ri ∈ F_R
Disjointness
- Minterm predicates that form the basis of fragmentation should be mutually exclusive.

35 Derived Horizontal Fragmentation
Defined on a member relation of a link according to a selection operation specified on its owner.
- Each link is an equijoin.
- An equijoin can be implemented by means of semijoins.
(The schema figure shows the links again: L1 from SKILL(TITLE, SAL) to EMP(ENO, ENAME, TITLE), L2 from EMP to ASG(ENO, PNO, RESP, DUR), L3 from PROJ(PNO, PNAME, BUDGET, LOC) to ASG.)

36 DHF – Definition
Given a link L where owner(L) = S and member(L) = R, the derived horizontal fragments of R are defined as
    Ri = R ⋉_F Si, 1 ≤ i ≤ w
where w is the maximum number of fragments that will be defined on R, and
    Si = σ_Fi(S)
where Fi is the formula according to which the primary horizontal fragment Si is defined.

37 DHF – Example
Given link L1 where owner(L1) = SKILL and member(L1) = EMP:
    EMP1 = EMP ⋉ SKILL1
    EMP2 = EMP ⋉ SKILL2
where
    SKILL1 = σ_(SAL ≤ 30000)(SKILL)
    SKILL2 = σ_(SAL > 30000)(SKILL)

EMP1
ENO  ENAME      TITLE
E3   A. Lee     Mech. Eng.
E4   J. Miller  Programmer
E7   R. Davis   Mech. Eng.

EMP2
ENO  ENAME     TITLE
E1   J. Doe    Elect. Eng.
E2   M. Smith  Syst. Anal.
E5   B. Casey  Syst. Anal.
E6   L. Chu    Elect. Eng.
E8   J. Jones  Syst. Anal.
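
The semijoin that produces these derived fragments reduces, in a hash-based sketch, to keeping the member tuples whose join-attribute value appears in the owner fragment. A minimal Python illustration using the slide's data, with TITLE assumed as the join attribute between EMP and SKILL:

```python
# Sketch: derived horizontal fragmentation EMP_i = EMP semijoin SKILL_i,
# implemented as "keep EMP tuples whose TITLE appears in SKILL_i".

SKILL = [
    {"TITLE": "Elect. Eng.", "SAL": 40000},
    {"TITLE": "Syst. Anal.", "SAL": 34000},
    {"TITLE": "Mech. Eng.",  "SAL": 27000},
    {"TITLE": "Programmer",  "SAL": 24000},
]
EMP = [
    {"ENO": "E1", "ENAME": "J. Doe",    "TITLE": "Elect. Eng."},
    {"ENO": "E2", "ENAME": "M. Smith",  "TITLE": "Syst. Anal."},
    {"ENO": "E3", "ENAME": "A. Lee",    "TITLE": "Mech. Eng."},
    {"ENO": "E4", "ENAME": "J. Miller", "TITLE": "Programmer"},
    {"ENO": "E5", "ENAME": "B. Casey",  "TITLE": "Syst. Anal."},
    {"ENO": "E6", "ENAME": "L. Chu",    "TITLE": "Elect. Eng."},
    {"ENO": "E7", "ENAME": "R. Davis",  "TITLE": "Mech. Eng."},
    {"ENO": "E8", "ENAME": "J. Jones",  "TITLE": "Syst. Anal."},
]

def semijoin(member, owner_fragment, attr):
    keys = {t[attr] for t in owner_fragment}
    return [t for t in member if t[attr] in keys]

SKILL1 = [t for t in SKILL if t["SAL"] <= 30000]   # sigma_{SAL<=30000}(SKILL)
SKILL2 = [t for t in SKILL if t["SAL"] > 30000]    # sigma_{SAL>30000}(SKILL)
EMP1 = semijoin(EMP, SKILL1, "TITLE")
EMP2 = semijoin(EMP, SKILL2, "TITLE")
print([t["ENO"] for t in EMP1])   # ['E3', 'E4', 'E7']
print([t["ENO"] for t in EMP2])   # ['E1', 'E2', 'E5', 'E6', 'E8']
```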

38 DHF – Correctness
Completeness
- Referential integrity
- Let R be the member relation of a link whose owner is relation S, which is fragmented as F_S = {S1, S2, ..., Sn}. Furthermore, let A be the join attribute between R and S. Then, for each tuple t of R, there should be a tuple t' of S such that t[A] = t'[A].
Reconstruction
- Same as primary horizontal fragmentation.
Disjointness
- Simple join graphs between the owner and the member fragments.

39 Vertical Fragmentation
Has been studied within the centralized context:
- design methodology
- physical clustering
More difficult than horizontal fragmentation, because more alternatives exist.
Two approaches:
- grouping: attributes to fragments
- splitting: relation to fragments

40 Vertical Fragmentation
Overlapping fragments
- grouping
Non-overlapping fragments
- splitting
We do not consider the replicated key attributes to be overlapping.
Advantage: easier to enforce functional dependencies (for integrity checking, etc.)

41 VF – Information Requirements
Application information
- Attribute affinities
  - a measure that indicates how closely related the attributes are
  - obtained from more primitive usage data
- Attribute usage values
  - given a set of queries Q = {q1, q2, ..., qq} that will run on the relation R[A1, A2, ..., An],
        use(qi, Aj) = 1 if attribute Aj is referenced by query qi, 0 otherwise

42 VF – Definition of use(qi, Aj)
Consider the following 4 queries for relation PROJ:
    q1: SELECT BUDGET FROM PROJ WHERE PNO = Value
    q2: SELECT PNAME, BUDGET FROM PROJ
    q3: SELECT PNAME FROM PROJ WHERE LOC = Value
    q4: SELECT SUM(BUDGET) FROM PROJ WHERE LOC = Value
Let A1 = PNO, A2 = PNAME, A3 = BUDGET, A4 = LOC. The attribute usage matrix is:
         A1  A2  A3  A4
    q1    1   0   1   0
    q2    0   1   1   0
    q3    0   1   0   1
    q4    0   0   1   1
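
The usage matrix follows directly from the attributes each query references. A small Python sketch that derives it (the attribute sets per query are read off the SQL above):

```python
# Sketch: building the attribute usage matrix use(q_i, A_j) from the set of
# attributes each query references.

attributes = ["PNO", "PNAME", "BUDGET", "LOC"]   # A1..A4
query_attrs = {
    "q1": {"PNO", "BUDGET"},        # SELECT BUDGET ... WHERE PNO = Value
    "q2": {"PNAME", "BUDGET"},      # SELECT PNAME, BUDGET FROM PROJ
    "q3": {"PNAME", "LOC"},         # SELECT PNAME ... WHERE LOC = Value
    "q4": {"BUDGET", "LOC"},        # SELECT SUM(BUDGET) ... WHERE LOC = Value
}

use = {q: [1 if a in attrs else 0 for a in attributes]
       for q, attrs in query_attrs.items()}
for q in ("q1", "q2", "q3", "q4"):
    print(q, use[q])
# q1 [1, 0, 1, 0]
# q2 [0, 1, 1, 0]
# q3 [0, 1, 0, 1]
# q4 [0, 0, 1, 1]
```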

43 VF – Affinity Measure aff(Ai, Aj)
The attribute affinity measure between two attributes Ai and Aj of a relation R[A1, A2, ..., An], with respect to the set of applications Q = {q1, q2, ..., qq}, is defined as
    aff(Ai, Aj) = Σ over all queries that access both Ai and Aj of (query access)
where
    query access = Σ over all sites of (access frequency of the query × number of accesses per execution)

44 VF – Calculation of aff(Ai, Aj)
Assume each query in the previous example accesses the attributes once during each execution. Also assume the access frequencies:
         S1  S2  S3
    q1   15  20  10
    q2    5   0   0
    q3   25  25  25
    q4    3   0   0
Then aff(A1, A3) = 15*1 + 20*1 + 10*1 = 45, and the attribute affinity matrix AA is:
         A1  A2  A3  A4
    A1   45   0  45   0
    A2    0  80   5  75
    A3   45   5  53   3
    A4    0  75   3  78
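
Computing AA is a double loop over attribute pairs, accumulating the total access frequency of every query that references both. A Python sketch reproducing the matrix above (one access per execution assumed, as on the slide):

```python
# Sketch: attribute affinity matrix AA from the usage matrix and the per-site
# access frequencies given above.

use = {                       # rows q1..q4, columns A1..A4 (PNO, PNAME, BUDGET, LOC)
    "q1": [1, 0, 1, 0],
    "q2": [0, 1, 1, 0],
    "q3": [0, 1, 0, 1],
    "q4": [0, 0, 1, 1],
}
acc = {                       # access frequencies at sites S1, S2, S3
    "q1": [15, 20, 10],
    "q2": [5, 0, 0],
    "q3": [25, 25, 25],
    "q4": [3, 0, 0],
}

n = 4
AA = [[0] * n for _ in range(n)]
for q, row in use.items():
    freq = sum(acc[q])        # total accesses of q over all sites
    for i in range(n):
        for j in range(n):
            if row[i] and row[j]:
                AA[i][j] += freq

print(AA[0][2])               # aff(A1, A3) = 45
for row in AA:
    print(row)
# [45, 0, 45, 0]
# [0, 80, 5, 75]
# [45, 5, 53, 3]
# [0, 75, 3, 78]
```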

45 VF – Clustering Algorithm
Take the attribute affinity matrix AA and reorganize the attribute orders to form clusters where the attributes in each cluster demonstrate high affinity to one another.
The Bond Energy Algorithm (BEA) has been used for clustering of entities. BEA finds an ordering of entities (in our case attributes) such that the global affinity measure
    AM = Σ over i Σ over j (affinity of Ai and Aj with their neighbors)
is maximized.

46 Bond Energy Algorithm
Input: the AA matrix
Output: the clustered affinity matrix CA, which is a perturbation of AA
- Initialization: place and fix one of the columns of AA in CA.
- Iteration: place the remaining n - i columns in the remaining i + 1 positions in the CA matrix. For each column, choose the placement that makes the most contribution to the global affinity measure.
- Row order: order the rows according to the column ordering.

47 Bond Energy Algorithm
"Best" placement? Define the contribution of a placement:
    cont(Ai, Ak, Aj) = 2 bond(Ai, Ak) + 2 bond(Ak, Aj) - 2 bond(Ai, Aj)
where
    bond(Ax, Ay) = Σ (z = 1 to n) aff(Az, Ax) * aff(Az, Ay)

48 BEA – Example
Consider the AA matrix computed earlier and the corresponding CA matrix where A1 and A2 have been placed. Place A3:
Ordering (0-3-1):
    cont(A0, A3, A1) = 2 bond(A0, A3) + 2 bond(A3, A1) - 2 bond(A0, A1)
                     = 2*0 + 2*4410 - 2*0 = 8820
Ordering (1-3-2):
    cont(A1, A3, A2) = 2 bond(A1, A3) + 2 bond(A3, A2) - 2 bond(A1, A2)
                     = 2*4410 + 2*890 - 2*225 = 10150
Ordering (2-3-4):
    cont(A2, A3, A4) = 1780
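
These numbers can be reproduced with a few lines of Python. In this sketch a boundary column (A0) or a not-yet-placed column (A4 at this point) is treated as contributing zero bond, which is how the 1780 value for ordering (2-3-4) arises:

```python
# Sketch: the bond and cont measures of BEA over the AA matrix above.

AA = {
    "A1": {"A1": 45, "A2": 0,  "A3": 45, "A4": 0},
    "A2": {"A1": 0,  "A2": 80, "A3": 5,  "A4": 75},
    "A3": {"A1": 45, "A2": 5,  "A3": 53, "A4": 3},
    "A4": {"A1": 0,  "A2": 75, "A3": 3,  "A4": 78},
}

def bond(x, y, placed):
    if x not in placed or y not in placed:
        return 0                                   # A0 or an unplaced column
    return sum(AA[z][x] * AA[z][y] for z in AA)

def cont(ai, ak, aj, placed):
    """Contribution of placing column ak between columns ai and aj."""
    return (2 * bond(ai, ak, placed) + 2 * bond(ak, aj, placed)
            - 2 * bond(ai, aj, placed))

placed = {"A1", "A2", "A3"}                 # A3 is the column being placed
print(cont("A0", "A3", "A1", placed))       # 8820  -> ordering (0-3-1)
print(cont("A1", "A3", "A2", placed))       # 10150 -> ordering (1-3-2), the winner
print(cont("A2", "A3", "A4", placed))       # 1780  -> ordering (2-3-4)
```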

49 BEA – Example
Therefore, the CA matrix takes the form (columns ordered A1, A3, A2):
         A1  A3  A2
    A1   45  45   0
    A2    0   5  80
    A3   45  53   5
    A4    0   3  75

50 BEA – Example
When A4 is placed, the final form of the CA matrix (after row reorganization) is:
         A1  A3  A2  A4
    A1   45  45   0   0
    A3   45  53   5   3
    A2    0   5  80  75
    A4    0   3  75  78

51 VF – Algorithm
How can you divide a set of clustered attributes {A1, A2, ..., An} into two (or more) sets {A1, A2, ..., Ai} and {Ai+1, ..., An} such that there are no (or a minimal number of) applications that access both (or more than one) of the sets?
(The figure shows the clustered matrix split along the diagonal into a top-left block TA = {A1, ..., Ai} and a bottom-right block BA = {Ai+1, ..., An}.)

52 VF – Algorithm
Define
    TQ = set of applications that access only TA
    BQ = set of applications that access only BA
    OQ = set of applications that access both TA and BA
and
    CTQ = total number of accesses to attributes by applications that access only TA
    CBQ = total number of accesses to attributes by applications that access only BA
    COQ = total number of accesses to attributes by applications that access both TA and BA
Then find the point along the diagonal that maximizes
    CTQ × CBQ − COQ²
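
A sketch of the split-point scan, using the clustered order from the BEA example. For simplicity, CTQ, CBQ and COQ are approximated here by each query's total access frequency (a simplification of the slide's cost terms, consistent with the one-access-per-execution assumption made earlier):

```python
# Sketch: scanning the split points of the clustered attribute order and
# evaluating CTQ * CBQ - COQ^2 for each.

order = ["A1", "A3", "A2", "A4"]               # column order produced by BEA
use = {"q1": {"A1", "A3"}, "q2": {"A2", "A3"},
       "q3": {"A2", "A4"}, "q4": {"A3", "A4"}}
acc = {"q1": 45, "q2": 5, "q3": 75, "q4": 3}   # total access frequencies

def z_value(split):
    top, bottom = set(order[:split]), set(order[split:])
    ctq = sum(acc[q] for q, attrs in use.items() if attrs <= top)
    cbq = sum(acc[q] for q, attrs in use.items() if attrs <= bottom)
    coq = sum(acc[q] for q, attrs in use.items()
              if attrs & top and attrs & bottom)
    return ctq * cbq - coq ** 2

best = max(range(1, len(order)), key=z_value)
print(best, order[:best], order[best:], z_value(best))
# 2 ['A1', 'A3'] ['A2', 'A4'] 3311
```

Under these assumptions the best split groups A1 and A3 (PNO, BUDGET) against A2 and A4 (PNAME, LOC), which matches the vertical fragmentation of PROJ shown on the earlier "Fragmentation Alternatives – Vertical" slide (with the key PNO replicated into both fragments).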

53 VF – Algorithm
Two problems:
- Cluster forming in the middle of the CA matrix
  - shift a row up and a column left and apply the algorithm to find the "best" partitioning point
  - do this for all possible shifts
  - cost: O(m²)
- More than two clusters
  - m-way partitioning
  - try 1, 2, ..., m - 1 split points along the diagonal and try to find the best point for each of these
  - cost: O(2^m)

54 VF – Correctness
A relation R, defined over attribute set A and key K, generates the vertical partitioning F_R = {R1, R2, ..., Rr}.
Completeness
- The following should be true for A: A = ∪ A_Ri
Reconstruction
- Reconstruction can be achieved by
      R = ⋈_K Ri, for all Ri ∈ F_R
Disjointness
- TIDs are not considered to be overlapping since they are maintained by the system
- Duplicated keys are not considered to be overlapping
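
Reconstruction via a join on the key can be sketched in a few lines of Python, using the vertical PROJ fragments from the earlier slide (only two tuples shown to keep the example short):

```python
# Sketch: reconstructing R from vertical fragments by a join on the key K.

PROJ1 = [{"PNO": "P1", "BUDGET": 150000},
         {"PNO": "P2", "BUDGET": 135000}]
PROJ2 = [{"PNO": "P1", "PNAME": "Instrumentation",   "LOC": "Montreal"},
         {"PNO": "P2", "PNAME": "Database Develop.", "LOC": "New York"}]

def key_join(r1, r2, key):
    index = {t[key]: t for t in r2}
    return [{**t, **index[t[key]]} for t in r1 if t[key] in index]

PROJ = key_join(PROJ1, PROJ2, "PNO")
print(PROJ[0])
# {'PNO': 'P1', 'BUDGET': 150000, 'PNAME': 'Instrumentation', 'LOC': 'Montreal'}
```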

55 Hybrid Fragmentation
(The figure shows a fragmentation tree: relation R is first horizontally fragmented (HF) into R1 and R2; each of these is then vertically fragmented (VF), R1 into R11 and R12, and R2 into R21, R22 and R23.)

56 Fragment Allocation
Problem statement:
Given
    F = {F1, F2, ..., Fn}    fragments
    S = {S1, S2, ..., Sm}    network sites
    Q = {q1, q2, ..., qq}    applications
find the "optimal" distribution of F to S.
Optimality
- Minimal cost
  - communication + storage + processing (read and update)
  - cost in terms of time (usually)
- Performance
  - response time and/or throughput
- Constraints
  - per site constraints (storage and processing)

57 Information Requirements
Database information
- selectivity of fragments
- size of a fragment
Application information
- access types and numbers
- access localities
Communication network information
- bandwidth
- latency
- communication overhead
Computer system information
- unit cost of storing data at a site
- unit cost of processing at a site

58 Allocation
File allocation (FAP) vs database allocation (DAP):
- Fragments are not individual files
  - relationships have to be maintained
- Access to databases is more complicated
  - the remote file access model is not applicable
  - there is a relationship between allocation and query processing
- The cost of integrity enforcement should be considered
- The cost of concurrency control should be considered

59 Allocation – Information Requirements
Database information
- selectivity of fragments
- size of a fragment
Application information
- number of read accesses of a query to a fragment
- number of update accesses of a query to a fragment
- a matrix indicating which queries update which fragments
- a similar matrix for retrievals
- originating site of each query
Site information
- unit cost of storing data at a site
- unit cost of processing at a site
Network information
- communication cost per frame between two sites
- frame size

60 Allocation Model
General form:
    min (Total Cost)
    subject to
        response time constraint
        storage constraint
        processing constraint
Decision variable:
    x_ij = 1 if fragment F_i is stored at site S_j, 0 otherwise

61 Allocation Model
Total cost:
    Total cost = Σ over all queries (query processing cost)
               + Σ over all sites Σ over all fragments (cost of storing a fragment at a site)
Storage cost (of fragment F_j at site S_k):
    (unit storage cost at S_k) × (size of F_j) × x_jk
Query processing cost (for one query):
    processing component + transmission component

62 Allocation Model
Query processing cost, processing component:
    access cost + integrity enforcement cost + concurrency control cost
- Access cost:
    Σ over all sites Σ over all fragments (no. of update accesses + no. of read accesses) × x_ij × (local processing cost at a site)
- Integrity enforcement and concurrency control costs can be calculated similarly.

63 Allocation Model
Query processing cost, transmission component:
    cost of processing updates + cost of processing retrievals
- Cost of updates:
    Σ over all sites Σ over all fragments (update message cost) + Σ over all sites Σ over all fragments (acknowledgment cost)
- Retrieval cost:
    Σ over all fragments min over all sites (cost of retrieval command + cost of sending back the result)

64 Allocation Model
Constraints:
- Response time:
    execution time of a query ≤ maximum allowable response time for that query
- Storage constraint (for a site):
    Σ over all fragments (storage requirement of a fragment at that site) ≤ storage capacity of that site
- Processing constraint (for a site):
    Σ over all queries (processing load of a query at that site) ≤ processing capacity of that site
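
The cost terms and constraints can be evaluated directly for any candidate 0/1 allocation matrix. A minimal Python sketch for the storage-cost term and the per-site storage constraint only; the fragment sizes, unit costs and capacities below are illustrative numbers, not from the slides, and the query processing and transmission terms are omitted:

```python
# Sketch: evaluating the storage-cost term and the storage constraints for a
# candidate allocation x[f][s] (1 if fragment f is stored at site s).

size = {"F1": 100, "F2": 300}              # fragment sizes (illustrative, e.g. in KB)
unit_storage = {"S1": 1.0, "S2": 0.8}      # unit storage cost per site (illustrative)
capacity = {"S1": 250, "S2": 500}          # storage capacity per site (illustrative)

x = {"F1": {"S1": 1, "S2": 0},             # candidate allocation
     "F2": {"S1": 0, "S2": 1}}

storage_cost = sum(unit_storage[s] * size[f] * x[f][s]
                   for f in size for s in unit_storage)

def storage_ok(site):
    return sum(size[f] * x[f][site] for f in size) <= capacity[site]

print(storage_cost)                         # 1.0*100 + 0.8*300 = 340.0
print(all(storage_ok(s) for s in capacity)) # True: both capacity constraints hold
```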

65 Allocation Model
Solution methods:
- FAP is NP-complete
- DAP is also NP-complete
Heuristics are based on
- single commodity warehouse location (for FAP)
- the knapsack problem
- branch and bound techniques
- network flow

66 Allocation Model
Attempts to reduce the solution space:
- assume all candidate partitionings are known; select the "best" partitioning
- ignore replication at first
- use a sliding window on fragments

