1
Client-Server Caching
James Wann
April 4, 2000
2
Client-Server Architecture
A client requests data or locks from a particular server
The server in turn responds with the requested items
Otherwise known as a data-shipping architecture
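To make the data-shipping idea concrete, here is a minimal Python sketch (all class and method names are hypothetical, not from the paper): the client asks the server for a page, the server ships the item back, and the client keeps a local copy.

```python
# Hypothetical sketch of a data-shipping exchange: the client requests a page
# (or a lock) from the server, and the server replies with the item itself.
class Server:
    def __init__(self, pages):
        self.pages = pages              # page_id -> contents

    def fetch(self, page_id):
        return self.pages[page_id]      # ship the data item back to the client

class Client:
    def __init__(self, server):
        self.server = server
        self.cache = {}                 # local copies: the subject of these slides

    def read(self, page_id):
        if page_id not in self.cache:   # cache miss: go to the server
            self.cache[page_id] = self.server.fetch(page_id)
        return self.cache[page_id]

server = Server({1: "page-1 bytes", 2: "page-2 bytes"})
client = Client(server)
print(client.read(1))                   # fetched from the server
print(client.read(1))                   # served from the client cache
```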
3
Why Caching?
Better utilizes the CPU and memory resources of clients
Reduces reliance on the server
Increases the scalability of the system
4
Disadvantages of Caching
Increased network utilization
Extra load on the system
Increased transaction abort rates, depending on the algorithm
5
Test Workloads for Caching Algorithms
HOTCOLD – There is a high probability that pages in the “hot set” will be read. However, pages in the “hot set” and “cold set” are equally likely to be written
FEED – One client writes to pages in the “hot set”. The other clients have a high probability of reading from the “hot set”
6
Test Workloads for Caching Algorithms (cont’d)
UNIFORM – All pages have an equal probability of being either read or written
HICON – There is a high probability of read/write conflicts
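The sketch below illustrates how such workloads might be generated. The hot-set size, the access probabilities, and the HICON interpretation are placeholder assumptions for illustration, not the parameter settings used in the paper.

```python
import random

# Illustrative page-access generators for the workloads; sizes and probabilities
# below are placeholders, not the values from the paper.
DB_PAGES = list(range(1000))
HOT_SET = DB_PAGES[:50]
COLD_SET = DB_PAGES[50:]

def hotcold_access():
    # Reads favour the hot set; writes fall on hot and cold pages alike.
    page = random.choice(HOT_SET) if random.random() < 0.8 else random.choice(COLD_SET)
    op = "write" if random.random() < 0.2 else "read"
    return op, page

def feed_access(client_id, feeder_id=0):
    if client_id == feeder_id:
        return "write", random.choice(HOT_SET)   # one client updates the hot set
    # the other clients mostly read the hot set
    page = random.choice(HOT_SET) if random.random() < 0.9 else random.choice(COLD_SET)
    return "read", page

def uniform_access():
    # Every page is equally likely to be read or written.
    return random.choice(["read", "write"]), random.choice(DB_PAGES)

def hicon_access():
    # Concentrate reads and writes on a small shared region to force conflicts.
    return random.choice(["read", "write"]), random.choice(HOT_SET)

print(hotcold_access(), feed_access(1), uniform_access(), hicon_access())
```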
7
Server-Based Two-Phase Locking
Client transactions must obtain locks from the server before accessing a data item
Easiest algorithm to implement
Heavy messaging overhead
Best for workloads with high data contention
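A minimal sketch, assuming a single server-side lock table, of how lock requests could be handled under server-based 2PL. Every request() call here stands for a message to the server, which is where the messaging overhead comes from; the names are hypothetical.

```python
# Hypothetical server-side lock table for server-based 2PL.
class LockServer:
    def __init__(self):
        self.locks = {}                       # page_id -> (mode, set of holders)

    def request(self, txn, page_id, mode):
        held = self.locks.get(page_id)
        if held is None:
            self.locks[page_id] = (mode, {txn})
            return True
        held_mode, holders = held
        if mode == "read" and held_mode == "read":
            holders.add(txn)                  # shared read locks are compatible
            return True
        return holders == {txn}               # otherwise only the current holder may proceed

    def release_all(self, txn):
        for page_id in list(self.locks):
            mode, holders = self.locks[page_id]
            holders.discard(txn)
            if not holders:
                del self.locks[page_id]

server = LockServer()
print(server.request("T1", 7, "read"))    # True
print(server.request("T2", 7, "write"))   # False: conflict, T2 must wait
server.release_all("T1")
print(server.request("T2", 7, "write"))   # True after T1 releases
```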
8
Optimistic Two-Phase Locking
Each client has its own lock manager
Upon commit, the client sends a message to the server stating which pages were updated
The server sends an update message to all clients with copies of those pages
The next action depends on the algorithm
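A rough sketch of the commit-time flow, with hypothetical names: locking stays local to each client, and only at commit does the server learn which pages changed and notify the other clients caching copies.

```python
# Rough sketch of the O2PL commit path: the server is contacted only at commit
# time with the list of updated pages, then notifies the other caching clients.
class O2PLServer:
    def __init__(self):
        self.copies = {}                       # page_id -> set of clients caching it

    def register_copy(self, client, page_id):
        self.copies.setdefault(page_id, set()).add(client)

    def commit(self, committer, updated_pages):
        for page_id in updated_pages:
            for client in self.copies.get(page_id, set()):
                if client is not committer:
                    # what happens next (invalidate vs. propagate) depends on
                    # the O2PL variant, as described on the following slides
                    client.on_update(page_id)

class O2PLClient:
    def __init__(self, name):
        self.name = name
    def on_update(self, page_id):
        print(f"{self.name}: notified that page {page_id} was updated")

server = O2PLServer()
a, b = O2PLClient("client A"), O2PLClient("client B")
server.register_copy(a, 3); server.register_copy(b, 3)
server.commit(a, updated_pages=[3])            # only client B receives the message
```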
9
Diagram: Commit Phase (clients and server)
10
Diagram: Update Message Phase (clients and server)
11
Non-Dynamic O2PL Algorithms
O2PL-Invalidate (O2PL-I) – invalidates the updated pages at the clients receiving the message
O2PL-Propagate (O2PL-P) – propagates the changed pages to the clients that already have copies of them
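A sketch of the two non-dynamic reactions to the server's update message, using hypothetical client-side handlers: O2PL-I drops the stale copy, while O2PL-P installs the new contents.

```python
# Hypothetical client-side handlers for the two non-dynamic O2PL variants.
class CachingClient:
    def __init__(self):
        self.cache = {}                              # page_id -> contents

    def on_update_invalidate(self, page_id, new_contents):
        # O2PL-I: throw the stale copy away; a later access must re-fetch it.
        self.cache.pop(page_id, None)

    def on_update_propagate(self, page_id, new_contents):
        # O2PL-P: install the new contents so the cached copy stays usable.
        if page_id in self.cache:
            self.cache[page_id] = new_contents

c = CachingClient()
c.cache[5] = "old"
c.on_update_propagate(5, "new")
print(c.cache)        # {5: 'new'}
c.on_update_invalidate(5, "newer")
print(c.cache)        # {} -- the page must be re-fetched if accessed again
```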
12
Dynamic O2PL Algorithms
O2PL-Dynamic (O2PL-D) – chooses between propagation and invalidation based on certain criteria
Criterion 1 – The page is resident at the client to which the message is being sent
Criterion 2 – The page was previously propagated to the client and has since been re-accessed
13
Dynamic O2PL Algorithms (cont’d)
O2PL-New Dynamic (O2PL-ND) – uses the same criteria as O2PL-D with one additional mechanism:
A structure called the invalidate window is used to hold the last n invalidated pages
The most recently invalidated pages are placed at the front of the window
14
Dynamic O2PL Algorithms (cont’d)
If a page is accessed and its number is found in the invalidate window, the entry is marked as a mistaken invalidation
Criterion 3 – The page was found to have been previously invalidated by mistake
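A sketch of how the invalidate window might be kept, assuming a deque holding the last n invalidated pages (the window size shown is an arbitrary placeholder): re-accessing a page that is still in the window marks it as a mistaken invalidation, which is what Criterion 3 tests.

```python
from collections import deque

# Sketch of the O2PL-ND invalidate window: a bounded list of the most recently
# invalidated pages, used to detect invalidations that turn out to be mistakes.
# The window size n is an assumed placeholder, not a value from the paper.
class InvalidateWindow:
    def __init__(self, n=10):
        self.window = deque(maxlen=n)   # most recent invalidations at the front
        self.mistaken = set()

    def record_invalidation(self, page_id):
        self.window.appendleft(page_id)

    def on_access(self, page_id):
        if page_id in self.window:
            # The page was invalidated and then needed again: mark the entry as
            # a mistaken invalidation (Criterion 3 for choosing propagation).
            self.mistaken.add(page_id)

    def was_mistaken(self, page_id):
        return page_id in self.mistaken

w = InvalidateWindow(n=4)
w.record_invalidation(7)
w.on_access(7)                 # page 7 was re-accessed after being invalidated
print(w.was_mistaken(7))       # True: next time, O2PL-ND would propagate it
```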
15
Evaluation of O2PL Algorithms (HOTCOLD)
O2PL-I and O2PL-ND have higher throughput than O2PL-P and O2PL-D
This is because propagated updates are not necessarily accessed again (wasted propagations)
O2PL-I, O2PL-D, and O2PL-ND have similar performance on a faster network
16
Evaluation of O2PL Algorithms (FEED)
O2PL-P, O2PL-D, and O2PL-ND have better throughput than O2PL-I
This scenario benefits propagation (it keeps the “hot pages” in the buffer)
However, performance is comparable when buffers are small
17
Evaluation of O2PL Algorithms (UNIFORM)
O2PL-P and O2PL-D have far lower throughput than the other algorithms
Higher probability of wasted propagations
18
Figures 1 through 6 in paper
19
Callback Locking
Allows caching of data pages and locks
Clients obtain locks by making a request to the server
If there is a lock conflict, the clients holding the locks are asked to release them
The lock request is granted only when all of those locks have been released
20
CB-Read
Only read locks are cached
When a write lock request is made, the server asks all clients holding the specified page to release it
If all clients comply, the write lock is granted
All subsequent lock requests are blocked until the write lock is released
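A minimal sketch of the CB-Read callback path, with hypothetical names: a write-lock request makes the server call back every client caching the page, and the write lock is granted only once all of them comply.

```python
# Hypothetical sketch of CB-Read: only read locks are cached, so a write
# request triggers callbacks to every client holding a copy of the page.
class CallbackServer:
    def __init__(self):
        self.cached_at = {}                       # page_id -> set of caching clients

    def register_read(self, client, page_id):
        self.cached_at.setdefault(page_id, set()).add(client)

    def request_write(self, writer, page_id):
        holders = self.cached_at.get(page_id, set()) - {writer}
        replies = [c.release_page(page_id) for c in holders]   # callback messages
        if all(replies):                          # every client dropped its copy
            self.cached_at[page_id] = {writer}
            return True                           # write lock granted
        return False                              # writer blocks until releases arrive

class CallbackClient:
    def __init__(self, name):
        self.name = name
        self.pages = set()
    def release_page(self, page_id):
        self.pages.discard(page_id)
        return True                               # comply with the callback

server = CallbackServer()
reader, writer = CallbackClient("reader"), CallbackClient("writer")
reader.pages.add(9); server.register_read(reader, 9)
print(server.request_write(writer, 9))            # True once the reader has complied
```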
21
CB-All
Both read and write locks are cached, and write locks are not released at the end of a transaction
The page copy at a particular client is designated as the exclusive copy
Upon a read request from another client, the exclusive copy is retrieved and the original client no longer holds the exclusive copy
This is called a downgrade request
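A sketch of the CB-All downgrade, again with hypothetical names: when another client asks to read, the owner of the exclusive copy keeps the page but gives up its exclusive status.

```python
# Hypothetical sketch of a CB-All downgrade: the page's exclusive copy is
# demoted to a shared copy when some other client asks to read the page.
class CBAllServer:
    def __init__(self):
        self.exclusive_owner = {}                 # page_id -> owning client (or None)

    def request_read(self, reader, page_id):
        owner = self.exclusive_owner.get(page_id)
        if owner is not None and owner is not reader:
            owner.downgrade(page_id)              # downgrade request to the owner
            self.exclusive_owner[page_id] = None  # no client holds it exclusively now
        return True                               # read lock granted

class CBAllClient:
    def __init__(self, name):
        self.name = name
        self.exclusive_pages = set()
    def downgrade(self, page_id):
        self.exclusive_pages.discard(page_id)     # keep the copy, lose exclusivity
        print(f"{self.name}: page {page_id} downgraded to a shared copy")

server = CBAllServer()
a, b = CBAllClient("client A"), CBAllClient("client B")
a.exclusive_pages.add(4)
server.exclusive_owner[4] = a
server.request_read(b, 4)                         # triggers the downgrade at client A
```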
22
Evaluation of Callback Algorithms (HOTCOLD)
O2PL-ND has better throughput than the Callback algorithms
The Callback algorithms require more messages per transaction
However, the throughput difference is not significant
23
Evaluation of Callback Algorithms (FEED)
O2PL-ND has better throughput than either Callback algorithm
This is because in O2PL-ND the pages are usually already present at the clients, so no extra messages are needed
Again, the performance difference is not significant
24
Evaluation of Callback Algorithms (UNIFORM)
All three algorithms have similar throughput
25
Evaluation of Callback Algorithms (HICON)
O2PL-ND performance suffers because of frequent aborts due to late deadlock detection
CB-Read has higher throughput than CB-All because of its smaller messaging requirements
26
Figures 8 through 13 in paper
27
Figures 14 through 15 in paper
28
Conclusion
O2PL-ND proves to be a more flexible algorithm than O2PL-D
Invalidation is the default, rather than propagation
Ideal for a small number of clients
29
Conclusion (cont’d)
CB-Read is a more adaptable algorithm than O2PL-ND and CB-All
Detects deadlock earlier than O2PL-ND and avoids aborts for long transactions
Has lower messaging overhead than CB-All
Server-based 2PL works best with a large number of clients in a high-contention situation
Perhaps further research should be done in light of faster LANs (e.g. Fast Ethernet)?