III. Current Trends: Distributed DBMSs: Advanced Concepts (3C13/D6)
13.0 Contents

13.1 Objectives
13.2 Distributed Transaction Management
13.3 Distributed Concurrency Control
     - Objectives
     - Locking Protocols
     - Timestamp Protocols
13.4 Distributed Deadlock Management
13.5 Distributed Database Recovery
     - Failures in a distributed environment
     - How failure affects recovery
     - Two-Phase Commit (2PC)
     - Three-Phase Commit (3PC)
13.6 Replication Servers
     - Data Replication Concepts
13.7 Mobile Databases
13.8 Summary
13.1 Objectives

In this lecture you will learn about:
- distributed transaction management;
- distributed concurrency control;
- distributed deadlock detection;
- distributed recovery control;
- distributed integrity control;
- the X/OPEN DTP standard;
- replication servers as an alternative;
- how DBMSs can support the mobile worker.
13.2 Distributed Transaction Management

The objectives of distributed transaction (T) processing are the same as in
the centralized case, but more complex: we must ensure the atomicity of the
global T and of each of its subtransactions.

A centralized DBMS has four key modules:
1. Transaction manager
2. Scheduler (or lock manager)
3. Recovery manager
4. Buffer manager

A DDBMS has these modules in each local DBMS. In addition, there is a Global
Transaction Manager, or transaction coordinator (TC), at each site:
- the TC coordinates the local and global Ts initiated at that site (a minimal
  coordinator sketch follows);
- inter-site communication is still handled through the data communications
  component.
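To make the coordinator's role concrete, here is a minimal sketch of a TC
splitting a global transaction into per-site subtransactions. The operation
format and the `dispatch` callable are illustrative assumptions, not part of
any particular DDBMS.

```python
# Hypothetical sketch: a transaction coordinator groups a global transaction's
# operations by the site that holds the data, then hands each group to the
# local transaction manager at that site.

def coordinate(global_txn: list[tuple[str, str]], dispatch) -> dict[str, list[str]]:
    """global_txn is a list of (site, operation) pairs; returns the split."""
    subtxns: dict[str, list[str]] = {}
    for site, operation in global_txn:
        subtxns.setdefault(site, []).append(operation)   # one subT per site
    for site, ops in subtxns.items():
        dispatch(site, ops)    # hand each subtransaction to the local TM
    return subtxns             # atomicity of the whole is settled later by 2PC
```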
13.3 Distributed Concurrency Control: Objectives

Given no failures, all concurrency control (CC) mechanisms must ensure that:
- the consistency of data items is preserved;
- each atomic action completes in finite time.

In addition, a good CC mechanism should:
- be resilient to site and communication failures;
- permit parallelism;
- have modest computational and storage overheads;
- perform acceptably in a network with communication delay;
- place few constraints on the structure of atomic actions.

Multiple-copy consistency problem: if the original is updated but the copies
of the data stored at other locations are not, the database becomes
inconsistent. Assume for now that updates of copies are synchronous.
13.3 Distributed Concurrency Control: Locking Protocols

One of the following four protocols (all based on 2PL) can be employed to
ensure serializability in a DDBMS.

1. Centralized 2PL: a single site maintains all locking information.
   - Local transaction managers involved in a global T request and release
     locks from the central lock manager.
   - Alternatively, the transaction coordinator can make all locking requests
     on behalf of the local transaction managers.
   - Advantage: easy to implement.
   - Disadvantages: bottlenecks and lower reliability.

2. Primary Copy 2PL: lock managers are distributed to a number of sites.
   - Each lock manager is responsible for managing the locks for a set of
     data items.
   - For replicated data, one copy is chosen as the primary copy; the others
     are slave copies.
   - Only the primary copy of a data item that is to be updated need be
     write-locked; changes can then be propagated to the slave copies.
   - Advantages: lower communication costs and faster than centralized 2PL.
   - Disadvantages: deadlock handling is more complex; still rather centralized.
3. Distributed 2PL: lock managers are distributed to every site.
   - Each lock manager is responsible for the locks on the data at that site.
   - If data is not replicated, this is equivalent to primary copy 2PL.
   - Otherwise, it implements a Read-One-Write-All (ROWA) replica control
     protocol: any copy of a replicated item can be used for a read, but all
     copies must be write-locked before the item can be updated.
   - Disadvantages: deadlock handling is more complex; communication costs
     are higher.

4. Majority Locking: an extension of distributed 2PL.
   - To read or write a data item replicated at n sites, a transaction sends
     lock requests to more than half of the n sites (a minimal sketch follows
     this list).
   - The transaction cannot proceed until it obtains locks on a majority of
     the copies.
   - Overly strong in the case of read locks.
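A minimal sketch of the majority-locking rule, assuming a per-site
`request_lock` callable; every name here is illustrative rather than a real
DBMS API.

```python
# Try to lock an item at more than half of the sites holding replicas.

def acquire_majority_lock(item, mode, sites, request_lock):
    """Lock `item` (mode 'read' or 'write') at more than half of `sites`."""
    granted = []
    for site in sites:
        if request_lock(site, item, mode):     # ask each replica site for a lock
            granted.append(site)
        if len(granted) > len(sites) // 2:     # majority reached: T may proceed
            return granted
    # Majority not obtained: a full implementation would release the locks in
    # `granted` before aborting the transaction.
    raise RuntimeError(f"no majority for {mode} lock on {item!r}")
```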
13.3 Distributed Concurrency Control: Timestamp Protocols

Objective: order Ts globally so that older Ts (those with smaller timestamps)
get priority in the event of conflict.

In a distributed setting, timestamps must be unique both locally and globally.
A system clock or event counter at each site alone is unsuitable. Instead,
concatenate the local timestamp with a unique site identifier, placing the
site identifier in the least significant position; this ensures that events
are ordered according to their time of occurrence rather than their location.

To prevent a busy site from generating larger timestamps than slower sites:
- each site includes its timestamp in the messages it sends;
- on receiving a message, a site compares its own timestamp with the message
  timestamp and, if its own is smaller, sets it to some value greater than
  the message timestamp.
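A minimal sketch of this timestamp scheme (essentially a Lamport-style clock);
the class and method names are illustrative.

```python
# Globally unique timestamps: (counter, site_id), site id least significant.

class SiteClock:
    def __init__(self, site_id: int):
        self.site_id = site_id   # unique per site
        self.counter = 0         # local event counter

    def next_timestamp(self) -> tuple[int, int]:
        """Issue a timestamp that is unique across all sites."""
        self.counter += 1
        return (self.counter, self.site_id)

    def on_message(self, msg_counter: int) -> None:
        """Advance our counter past a faster site's, per the rule above."""
        if self.counter < msg_counter:
            self.counter = msg_counter + 1

# Tuples compare lexicographically, so (counter, site_id) orders events by
# occurrence first and breaks ties by site: (5, 1) < (5, 2) < (6, 1).
```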
13.4 Distributed Deadlock Management

Any locking-based or timestamp-based concurrency control mechanism may result
in deadlock, and deadlock is more complicated to detect when the lock manager
is not centralized. Three detection schemes:

1. Centralized deadlock detection: a single site is appointed deadlock
   detection coordinator (DDC).
   - The DDC is responsible for constructing and maintaining the global
     wait-for graph (WFG).
   - If the global WFG contains cycles, the DDC breaks each cycle by selecting
     Ts to be rolled back and restarted.

2. Hierarchical deadlock detection: sites are organized into a hierarchy.
   - Each site sends its local WFG to the detection site above it in the
     hierarchy.
   - Reduces dependence on a centralized detection site.

3. Distributed deadlock detection: the best-known method was developed by
   Obermarck (1982).
   - An external node, T_ext, is added to each local WFG to indicate a remote
     agent.
   - If a local WFG contains a cycle that does not involve T_ext, the site and
     the DDBMS are in deadlock. If the cycle does involve T_ext, there is only
     potentially a deadlock (a cycle-detection sketch follows this list).
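A minimal sketch of cycle detection in a local WFG, as used by all three
schemes. The graph representation (a dict from each transaction to the set of
transactions it waits for, with "T_ext" marking remote agents) is an
assumption for illustration.

```python
# Depth-first search for a cycle in a wait-for graph.

def find_cycle(wfg: dict[str, set[str]]) -> list[str] | None:
    """Return one cycle in the WFG as a list of transactions, or None."""
    visited: set[str] = set()
    on_path: list[str] = []

    def dfs(t: str) -> list[str] | None:
        if t in on_path:                        # back edge: cycle found
            return on_path[on_path.index(t):]
        if t in visited:
            return None
        visited.add(t)
        on_path.append(t)
        for waited_on in wfg.get(t, ()):        # follow each wait-for edge
            cycle = dfs(waited_on)
            if cycle:
                return cycle
        on_path.pop()
        return None

    for t in wfg:
        cycle = dfs(t)
        if cycle:
            return cycle
    return None

# A cycle not involving "T_ext" is a definite local deadlock; a cycle through
# "T_ext" is only a potential global deadlock and must be checked across sites.
```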
13.5 Distributed Database Recovery

Four types of failure are particular to distributed DBMSs:
1. Loss of a message
2. Failure of a communication link
3. Failure of a site
4. Network partitioning

A DDBMS is highly dependent on the ability of all sites to communicate
reliably with one another. Communication failures can result in the network
becoming split into two or more partitions, and it may be difficult to
distinguish whether a communication link or a site has failed.
13.5 Distributed Database Recovery: Two-Phase Commit (2PC)

Two phases: a voting phase and a decision phase. The coordinator asks all
participants whether they are prepared to commit the transaction:
- If any participant votes abort, or fails to respond within a timeout period,
  the coordinator instructs all participants to abort the transaction.
- If all vote commit, the coordinator instructs all participants to commit.
All participants must adopt the global decision.

2PC assumes that each site has its own local log and can roll back or commit
a transaction reliably. If a participant fails to vote, abort is assumed; if
a participant receives no decision from the coordinator, it can abort.

[Figure: state transitions in 2PC. (a) Coordinator; (b) Participant.]
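A minimal sketch of the coordinator's two phases; `send_prepare` and
`send_decision` are assumed stand-ins for real inter-site messaging.

```python
# The 2PC coordinator: collect votes, then broadcast the global decision.

def two_phase_commit(participants, send_prepare, send_decision, timeout=5.0):
    """Return the global decision: 'commit' or 'abort'."""
    # Phase 1 (voting): ask every participant to prepare.
    votes = []
    for p in participants:
        try:
            votes.append(send_prepare(p, timeout=timeout))  # 'commit' or 'abort'
        except TimeoutError:
            votes.append("abort")       # no vote within the timeout means abort

    # Phase 2 (decision): commit only if every vote was commit.
    decision = "commit" if all(v == "commit" for v in votes) else "abort"
    for p in participants:
        send_decision(p, decision)      # all participants adopt the decision
    return decision
```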
13.5 Distributed Database Recovery: 2PC Termination Protocols

A termination protocol is invoked whenever a coordinator or participant fails
to receive an expected message and times out.

Coordinator:
- Timeout in the WAITING state: globally abort the transaction.
- Timeout in the DECIDED state: resend the global decision to the sites that
  have not acknowledged it.

Participant:
- The simplest termination protocol leaves the participant blocked until
  communication with the coordinator is re-established. Alternatively:
- Timeout in the INITIAL state: unilaterally abort the transaction.
- Timeout in the PREPARED state: without more information the participant is
  blocked, but it could obtain the decision from another participant.
13.5 Distributed Database Recovery: 2PC Recovery Protocols

Recovery protocols define the action a failed site takes on restart, which
depends on the stage the coordinator or participant had reached.

Coordinator failure:
- Failure in the INITIAL state: recovery starts the commit procedure.
- Failure in the WAITING state: recovery restarts the commit procedure.
- Failure in the DECIDED state: on restart, if the coordinator had received
  all acknowledgements, it completes successfully; otherwise it initiates the
  termination protocol.

Participant failure: the objective is to ensure that a participant on restart
performs the same action as all other participants, and that the restart can
be performed independently.
- Failure in the INITIAL state: unilaterally abort the transaction.
- Failure in the PREPARED state: recover via the termination protocol.
- Failure in the ABORTED or COMMITTED states: on restart, no further action.
(These rules are encoded in the sketch below.)
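The recovery rules above can be summarized as a simple lookup; the role and
state names mirror the slide text, and the sketch is illustrative only.

```python
# Recovery action per (role, state at failure), per the rules above.

RECOVERY_ACTION = {
    ("coordinator", "INITIAL"):   "start commit procedure",
    ("coordinator", "WAITING"):   "restart commit procedure",
    ("coordinator", "DECIDED"):   "complete if all acks received, else run termination protocol",
    ("participant", "INITIAL"):   "unilaterally abort",
    ("participant", "PREPARED"):  "recover via termination protocol",
    ("participant", "ABORTED"):   "no further action",
    ("participant", "COMMITTED"): "no further action",
}

def on_restart(role: str, state: str) -> str:
    """Decide the recovery action for a restarting site."""
    return RECOVERY_ACTION[(role, state)]
```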
13.5 Distributed Database Recovery: Three-Phase Commit (3PC)

2PC is not a non-blocking protocol. For example, a participant that times out
after voting commit, but before receiving the global decision, is blocked if
it can communicate only with sites that do not know the decision. In practice
the probability of blocking is sufficiently rare that most existing systems
use 2PC.

3PC introduces a third phase, pre-commit, between voting and the decision:
- On receiving commit votes from all participants, the coordinator sends a
  global pre-commit message.
- A participant that receives the global pre-commit knows that all other
  participants have voted commit and that, in time, it will itself definitely
  commit (a coordinator sketch follows this list).

[Figure: state transitions in 3PC. (a) Coordinator; (b) Participant.]
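A minimal sketch of a 3PC coordinator, extending the 2PC sketch with the
pre-commit phase; the messaging callables are again assumptions.

```python
# The 3PC coordinator: voting, pre-commit, then the global decision.

def three_phase_commit(participants, send_prepare, send_precommit,
                       send_decision, timeout=5.0):
    # Phase 1 (voting): collect votes, treating a timeout as an abort vote.
    try:
        votes = [send_prepare(p, timeout=timeout) for p in participants]
    except TimeoutError:
        votes = ["abort"]

    if not all(v == "commit" for v in votes):
        for p in participants:
            send_decision(p, "abort")
        return "abort"

    # Phase 2 (pre-commit): every participant learns that all voted commit,
    # so none can later be blocked not knowing the eventual outcome.
    for p in participants:
        send_precommit(p)

    # Phase 3 (decision): global commit.
    for p in participants:
        send_decision(p, "commit")
    return "commit"
```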
13.6 Replication Servers

General-purpose DDBMSs have not been widely accepted. Instead, database
replication (the copying and maintenance of data on multiple servers) may be
the preferred solution.

Synchronous replication: updates to replicated data are part of the enclosing
transaction.
- If any site holding a replica is unavailable, the transaction cannot
  complete.
- A large number of messages is required to coordinate the synchronization.

Asynchronous replication: the target database is updated after the source
database has been modified (sketched below).
- The delay in regaining consistency may range from a few seconds to several
  days.

Functionality: a replication server has to be able to copy data from one
database to another, and should provide:
- scalability;
- mapping and transformation;
- object replication;
- specification of a replication schema;
- a subscription mechanism;
- an initialization mechanism.
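A minimal sketch of asynchronous replication via change capture: committed
changes at the source are queued and applied to the target later. The
queue-based design and function names are assumptions, not a real
replication-server API.

```python
import queue

# Captured changes waiting to be propagated to the target database.
change_log: "queue.Queue[tuple[str, dict]]" = queue.Queue()

def on_source_commit(table: str, row: dict) -> None:
    """Capture a committed change at the source; the transaction does not wait."""
    change_log.put((table, row))

def replicate_to_target(apply_change) -> None:
    """Drain captured changes and apply them at the target (run periodically)."""
    while not change_log.empty():
        table, row = change_log.get()
        apply_change(table, row)   # target lags the source until the queue drains
```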
13.6 Replication Servers: Data Replication Concepts

Data ownership relates to which site has the privilege to update the data.
There are three main types of ownership.

1. Master/slave (asymmetric) replication:
   - Asynchronously replicated data is owned by one (master) site and can be
     updated only by that site.
   - In the 'publish-and-subscribe' metaphor, the master site is the
     'publisher'; other sites 'subscribe' to the data and receive read-only
     copies.
   - Potentially, each site can be the master for a non-overlapping data set,
     but update conflicts must then be avoided.
   - Example: mobile computing. Replication is one method of providing data
     to a mobile workforce, which downloads and uploads data on demand from a
     local workgroup server. (A publish-and-subscribe sketch follows.)
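A minimal sketch of master/slave ownership in the publish-and-subscribe style
described above; class names are illustrative.

```python
# Only the master applies updates; subscribers hold read-only copies.

class Master:
    def __init__(self):
        self.data: dict[str, str] = {}
        self.subscribers: list["Subscriber"] = []

    def update(self, key: str, value: str) -> None:
        """Only the master may update; changes are pushed to subscribers."""
        self.data[key] = value
        for s in self.subscribers:
            s.receive(key, value)

class Subscriber:
    def __init__(self, master: Master):
        self.copy: dict[str, str] = {}
        master.subscribers.append(self)

    def receive(self, key: str, value: str) -> None:
        self.copy[key] = value      # read-only replica: no local update path
```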
2. Workflow ownership:
   - Avoids update conflicts while providing a more dynamic ownership model.
   - Allows the right to update replicated data to move from site to site.
   - However, at any one moment there is only ever one site that may update a
     particular data set.
   - Example: an order processing system that follows a series of steps, such
     as order entry, credit approval, invoicing, shipping, and so on.

3. Update anywhere (symmetric replication) ownership:
   - Creates a peer-to-peer environment in which multiple sites have equal
     rights to update replicated data.
   - Allows local sites to function autonomously.
   - Shared ownership can lead to conflicts, so a methodology for conflict
     detection and resolution has to be employed (one common policy is
     sketched after this list).
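One common resolution policy is last-writer-wins; below is a minimal sketch,
reusing the (counter, site_id) timestamps from Section 13.3. The choice of
policy is an illustrative assumption, not the only option.

```python
# Last-writer-wins conflict resolution for update-anywhere replication.

def resolve(local: tuple[tuple[int, int], str],
            remote: tuple[tuple[int, int], str]) -> str:
    """Each version is ((counter, site_id), value); keep the later write.
    The site id breaks ties, so the outcome is the same at every site."""
    (local_ts, local_val), (remote_ts, remote_val) = local, remote
    return remote_val if remote_ts > local_ts else local_val
```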
13.7 Mobile Databases

Mobile workers operate as if in the office while in reality working from
remote locations; the 'office' may accompany the remote worker in the form of
a laptop or PDA.

A mobile database is portable and physically separate from a centralized
database server, but is capable of communicating with that server from remote
sites, allowing the sharing of corporate data.

Functionality of mobile DBMSs:
- communicate with the centralized database server;
- wireless or Internet access;
- replicate data on the centralized database server and the mobile device;
- synchronize data on the centralized database server and the mobile device
  (a synchronization sketch follows this list);
- capture data from various sources;
- create customized mobile applications.
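A minimal sketch of the replicate/synchronize functionality: the device queues
changes made while disconnected and exchanges them with the server when
connectivity returns. All names are illustrative.

```python
# A mobile replica that syncs with a central server when connected.

class MobileDB:
    def __init__(self):
        self.local: dict[str, str] = {}            # replica carried on the device
        self.pending: list[tuple[str, str]] = []   # offline changes to upload

    def write(self, key: str, value: str) -> None:
        """Record a change locally; it is uploaded at the next sync."""
        self.local[key] = value
        self.pending.append((key, value))

    def synchronize(self, server: dict[str, str]) -> None:
        """Upload queued changes, then refresh the local replica."""
        for key, value in self.pending:
            server[key] = value           # push offline updates to the server
        self.pending.clear()
        self.local.update(server)         # pull the server's current state
```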