Distributed Locking

Distributed Locking (No Replication)

Assumptions
– Lock tables are managed by individual sites.
– The component of a transaction at a site can request locks only on the data elements at that site.

Locking X
It might seem that no messages between sites are needed: the local component simply negotiates with the lock manager at its site for the lock on X. However, if the distributed transaction needs locks on several elements, say X, Y, and Z, then the transaction cannot complete its computation until it holds locks on all three elements (2PL).

Solution
Have a lock coordinator that gathers all the required locks. Locking an element X at any other site requires three messages:
1. A message to the site of X requesting the lock.
2. A reply message granting the lock.
3. A message to the site of X releasing the lock.
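The per-site lock table and the three-message exchange can be sketched as follows. This is a minimal illustration with hypothetical names (`SiteLockManager`, `messages_for_lock`), not code from the lecture; locks are simplified to exclusive-only.

```python
# Sketch of a per-site lock table (hypothetical names, exclusive locks only).
# A remote lock costs three messages: request -> grant/deny -> release.

class SiteLockManager:
    def __init__(self):
        self.locks = {}  # element -> holder transaction (or absent)

    def request(self, txn, element):
        """Message 1 arrives here; the return value is message 2 (grant/deny)."""
        holder = self.locks.get(element)
        if holder is not None and holder != txn:
            return False                 # denied: another transaction holds it
        self.locks[element] = txn
        return True                      # granted

    def release(self, txn, element):
        """Message 3: the transaction gives the lock back."""
        if self.locks.get(element) == txn:
            del self.locks[element]

def messages_for_lock(txn_site, element_site):
    """Message cost of one lock: 0 if the element is local, else 3."""
    return 0 if txn_site == element_site else 3
```

A transaction touching X, Y, and Z at three remote sites would thus pay 3 messages per element under this scheme.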

Data Replication

An advantage of a distributed system is the ability to replicate data, that is, to make copies of the data at different sites.

Motivations
– Resiliency against failures.
– Speeding up query answering.

Examples
1. A bank may make copies of the current interest-rate policy available at each branch, so a query about rates does not have to be sent to the central office.
2. A chain store may keep copies of information about suppliers at each store.
3. A digital library may cache a copy of a popular document at a school where students have been assigned to read it.

Problems
a) How do we keep the copies identical?
b) How do we decide where and how many copies to keep? The more copies, the more effort is required to update them, but the easier queries become.

Problems with Replicated Elements

Example
Suppose there are two copies, X1 and X2, of X. Suppose also that transaction T gets a shared lock on X1 at the site of that copy, while transaction U gets an exclusive lock on the copy X2 at its site. Now U can change X2 but not X1, and the two copies of the element X become different.

Avoiding the problem
We must distinguish between getting a shared or exclusive lock on the logical element X and getting a local lock on a copy of X. Transactions must take global locks on the logical elements: a logical element X can have either one exclusive lock and no shared locks, or any number of shared locks and no exclusive locks.
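The anomaly above can be replayed concretely. This sketch (all names hypothetical) shows that when each copy keeps only a local lock table, T's shared lock on X1 and U's exclusive lock on X2 are both granted, and U's update makes the copies of X diverge.

```python
# Two copies of logical element X, each with only a LOCAL lock table.
local_locks = {"X1": set(), "X2": set()}   # copy -> set of (txn, mode)
copies = {"X1": 10, "X2": 10}              # both copies start identical

def local_lock(copy, txn, mode):
    """A copy grants a lock using only its own table (no global view)."""
    held = local_locks[copy]
    if mode == "X" and held:
        return False                       # XL conflicts with any local lock
    if mode == "S" and any(m == "X" for _, m in held):
        return False                       # SL conflicts with a local XL
    held.add((txn, mode))
    return True

granted_t = local_lock("X1", "T", "S")     # T reads copy X1: granted locally
granted_u = local_lock("X2", "U", "X")     # U writes copy X2: ALSO granted!
copies["X2"] += 5                          # U updates only its copy
```

Both requests succeed because neither site sees the other's lock table, so `copies["X1"]` and `copies["X2"]` now disagree about the value of X.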

Locking Replicated Elements

But the logical elements don't exist physically - only their copies do - and there is no global lock table. Thus, the only way a transaction can obtain a global lock on X is to obtain local locks on one or more copies of X at the site(s) of those copies.

Approaches
– Primary-copy locking.
– Synthesizing global locks from collections of local locks.

Primary-Copy Locking

Each logical element X has one of its copies designated the "primary copy." To get a lock on logical element X, a transaction sends a request to the site of the primary copy of X, which grants or denies the request as appropriate. Global (logical) locks are administered correctly as long as each site administers the locks for its primary copies correctly. Most lock requests require three messages, except those where the transaction and the primary copy are at the same site.
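Primary-copy routing can be sketched as below (names such as `primary_site` and `request_lock` are hypothetical, and locks are simplified to exclusive-only): every request for a logical element goes to the one site holding that element's primary copy, which keeps the only authoritative lock table for it.

```python
# Sketch of primary-copy locking: one authoritative lock table per element,
# located at the primary copy's site.

primary_site = {"X": 1, "Y": 2}       # logical element -> primary-copy site
lock_table = {1: {}, 2: {}}           # site -> (element -> holding txn)

def request_lock(txn, txn_site, element):
    """Returns (granted, messages). Three messages (request, grant, release)
    are needed unless the transaction runs at the primary copy's site."""
    site = primary_site[element]
    messages = 0 if txn_site == site else 3
    table = lock_table[site]
    if element in table and table[element] != txn:
        return False, messages        # primary site denies the request
    table[element] = txn              # primary site grants the lock
    return True, messages
```

Because all requests for X funnel through one table, conflicting locks on different copies of X can never both be granted.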

Global Locks From Local Locks

No copy of a database element X is "primary"; the copies are symmetric, and local shared or exclusive locks can be requested on any of them. The key is to require transactions to obtain a certain number of local locks on copies of X before the transaction can assume it has a global lock on X. Suppose database element X has n copies. We pick two numbers:
– s = the number of copies that must be locked to obtain a global shared lock, and
– x = the number of copies that must be locked to obtain a global exclusive lock.

s and x numbers

As long as 2x > n and s + x > n, we have the desired properties: at most one global exclusive lock on X, and never a global shared lock together with a global exclusive lock on X.
– Note that locally there can be only one XL, but many SLs when there is no XL.

Why it works
– Since 2x > n, if two transactions held XLs on X, at least one copy would have granted an XL to both, which a local lock manager never does.
– Since s + x > n, if one transaction held an SL and another an XL on X, some copy would have granted both an SL and an XL at the same time.

The number of messages needed to obtain a global shared lock is 3s, and the number to obtain a global exclusive lock is 3x.
– This seems excessive compared with centralized methods, which require 3 or fewer messages per lock on average.
– However, there are compensating arguments…
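The pigeonhole argument above can be checked exhaustively for small n. This sketch (the helper `quorums_safe` is hypothetical) enumerates every possible set of s copies a reader might lock and every set of x copies a writer might lock, and verifies that the conditions 2x > n and s + x > n force the conflicting sets to overlap.

```python
from itertools import combinations

def quorums_safe(n, s, x):
    """True iff every pair of write quorums intersects and every read
    quorum intersects every write quorum, for n copies."""
    copies = range(n)
    write_quorums = [set(q) for q in combinations(copies, x)]
    read_quorums = [set(q) for q in combinations(copies, s)]
    two_writers_meet = all(a & b for a in write_quorums for b in write_quorums)
    reader_writer_meet = all(r & w for r in read_quorums for w in write_quorums)
    return two_writers_meet and reader_writer_meet

# n = 5, s = 2, x = 4 satisfies 2x > n and s + x > n: safe.
safe_choice = quorums_safe(n=5, s=2, x=4)
# n = 5, s = 1, x = 4 violates s + x > n: a reader and a writer can
# lock disjoint sets of copies and miss each other's locks.
unsafe_choice = quorums_safe(n=5, s=1, x=4)
```

The overlapping copy is exactly the local lock manager that would refuse the second incompatible lock, which is why the global scheme is correct.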

Read-Locks-One; Write-Locks-All

Here s = 1 and x = n. Obtaining a global exclusive lock is very expensive, but a global shared lock requires three messages at most.

Advantage over the primary-copy method:
– Primary-copy locking avoids messages only when we read the primary copy,
– Read-locks-one avoids messages whenever the transaction is at the site of any copy of the database element it wants to read.

This scheme is superior when most transactions are read-only but the transactions reading an element X initiate at different sites.
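Plugging s = 1 and x = n into the conditions and message counts from the previous slide gives the trade-off directly. A small sketch (the function name is hypothetical):

```python
def rlo_wla(n):
    """Read-locks-one / write-locks-all: s = 1, x = n for n copies.
    Returns worst-case message counts (shared, exclusive)."""
    s, x = 1, n
    # The global-lock conditions hold for any n >= 1:
    assert 2 * x > n and s + x > n
    return 3 * s, 3 * x          # 3s messages for SL, 3x for XL

shared_msgs, exclusive_msgs = rlo_wla(n=4)
```

With four copies, a global shared lock costs at most 3 messages (and 0 if a copy is local), while a global exclusive lock costs 3 messages per copy, which is why the scheme favors read-heavy workloads.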

Majority Locking

Here s = x = (n + 1)/2, rounded up, i.e., a majority of the n copies. It seems this system requires many messages no matter where the transaction is; however, several other factors may make the scheme acceptable:
– Broadcast, provided by many network systems, can carry the lock requests cheaply.
– The scheme allows partial operation even when the network is disconnected:
– As long as one component of the network contains a majority of the sites with copies of X, a transaction in that component can obtain a lock on X.
– Even if other sites are active while disconnected, they cannot obtain even a shared lock on X, so there is no risk that transactions running in different components of the network will engage in behavior that is not serializable.
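The partition-tolerance claim follows from the quorum sizes alone: two disjoint network components cannot both hold a majority of the copies. A brief sketch (names hypothetical):

```python
def majority(n):
    """s = x = ceil((n + 1) / 2), a strict majority of n copies."""
    return n // 2 + 1

def can_lock(component_copies, n):
    """A disconnected component can lock X only if it holds a majority
    of X's copies."""
    return component_copies >= majority(n)

# n = 5 copies, network partitions into components holding 3 and 2 copies:
big_component = can_lock(3, 5)     # can still lock X
small_component = can_lock(2, 5)   # cannot even get a shared lock on X
```

Since the component sizes sum to n and each quorum exceeds n/2, at most one component can ever satisfy `can_lock`, so transactions in different components can never conflict on X.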