Scalable Reader-Writer Synchronization
John M. Mellor-Crummey, Michael L. Scott
Outline
- Abstract
- Introduction
- Simple Reader-Writer Spin Locks
  - Reader Preference Lock
  - Fair Lock
- Locks with Local-Only Spinning
  - Fair Lock
  - Reader Preference Lock
  - Writer Preference Lock
- Empirical Results & Conclusions
- Summary
Abstract – readers & writers
All processes request mutually exclusive access to the same memory section. Multiple readers can access the memory section at the same time. Only one writer can access the memory section at a time.
Abstract (continued)
Mutual exclusion locks are implemented using busy waiting. Busy-wait locks cause memory and network contention, which degrades performance. The problem: the busy wait is implemented globally (everyone busy-waits on the same variable/memory location), creating a global bottleneck instead of a local one. This global bottleneck prevents an efficient, larger-scale (scalable) implementation of mutex synchronization.
The purpose of the paper
Presenting reader/writer locks that exploit local-spin busy waiting in order to reduce memory and network contention.
Global – everyone busy-waits (spins) on the same location.
Local – each process busy-waits (spins) on a different memory location.
Definitions
Fair lock:
- readers wait for earlier writers
- writers wait for any earlier process (reader or writer)
- no starvation
Reader preference lock:
- writers wait as long as there are reader requests
- possible writer starvation
- minimizes the delay for readers, maximizes throughput
Writer preference lock:
- readers wait as long as there is a writer waiting
- possible reader starvation
- prevents the system from using outdated information
The MCS lock
The MCS (Mellor-Crummey and Scott) lock is a queue-based local-spin lock.
The MCS lock – acquire lock
(diagram: the arriving process appends its new_node to the queue and swings the lock's tail pointer to it; if it has a predecessor, it waits on its own node)

The MCS lock – release lock
(diagram: the releasing process unblocks its successor, or swings the tail pointer back to nil if it has no successor)

The spin is local, since each process spins (busy-waits) on its own node.
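The acquire/release steps above can be sketched in runnable form. This is an illustrative Python translation, not the paper's code: the hardware fetch_and_store and compare_and_swap instructions the MCS lock relies on are simulated here with a small guard lock, and the class names (_AtomicRef, MCSNode, MCSLock) are mine.

```python
import sys
import threading

sys.setswitchinterval(1e-4)  # make busy-wait handoffs quick under the GIL

class _AtomicRef:
    """Simulates fetch_and_store / compare_and_swap on one pointer;
    a real MCS lock uses the hardware instructions directly."""
    def __init__(self, value=None):
        self._value = value
        self._guard = threading.Lock()

    def fetch_and_store(self, new):
        with self._guard:
            old, self._value = self._value, new
            return old

    def compare_and_swap(self, expected, new):
        with self._guard:
            if self._value is expected:
                self._value = new
                return True
            return False

class MCSNode:
    def __init__(self):
        self.next = None
        self.locked = False

class MCSLock:
    def __init__(self):
        self.tail = _AtomicRef(None)   # last node in the queue, or None

    def acquire(self, node):
        node.next = None
        node.locked = True
        pred = self.tail.fetch_and_store(node)  # join the end of the queue
        if pred is None:
            node.locked = False                 # queue was empty: lock is ours
        else:
            pred.next = node                    # predecessor will wake us
            while node.locked:                  # local spin: only on our own node
                pass

    def release(self, node):
        if node.next is None:
            # no visible successor: try to swing the tail back to empty
            if self.tail.compare_and_swap(node, None):
                return
            while node.next is None:            # a successor is mid-enqueue
                pass
        node.next.locked = False                # hand the lock over

# demo: the lock must serialize increments of a shared counter
counter = 0
lock = MCSLock()

def worker():
    global counter
    node = MCSNode()
    for _ in range(100):
        lock.acquire(node)
        counter += 1
        lock.release(node)

threads = [threading.Thread(target=worker) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)  # 400 if mutual exclusion held
```

Note that each waiter spins only on its own node's locked flag; the only shared write point is the tail pointer.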
Simple Reader-Writer Locks
This section presents centralized (not local) algorithms for busy-wait reader-writer locks.

WRITER:
start_write(lock)
writing_critical_section
end_write(lock)

READER:
start_read(lock)
reading_critical_section
end_read(lock)
Reader Preference Lock
A reader preference lock is used in several cases:
- when there are many writer requests, and readers must be given preference to prevent their starvation
- when the throughput of the system is more important than how up to date the information is
Reader Preference Lock
The lowest bit indicates whether a writer is writing; the upper bits count the interested and currently reading processes. When a reader arrives, it increments the counter and waits until the writer bit is cleared. A writer waits until the whole word is 0.

(diagram: start/end writing) A writer can write only when no reader is interested or reading, and no writer is writing.

(diagram: start/end reading) Readers always get to the front of the line, before any writer other than the one already writing.

Notice that everything is done on the same 32-bit memory location.
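The single-word scheme above can be sketched as follows. The flag layout matches the description (bit 0 for the writer, the remaining bits as a reader counter), but the AtomicWord class and names are mine; a real implementation would use atomic fetch_and_add/compare_and_swap on one machine word.

```python
import sys
import threading

sys.setswitchinterval(1e-4)  # quick thread switches while busy-waiting

WAFLAG = 0x1    # bit 0: a writer is writing
RC_INCR = 0x2   # readers are counted in the upper bits

class AtomicWord:
    """Stands in for fetch_and_add / compare_and_swap on one memory word."""
    def __init__(self):
        self.value = 0
        self._guard = threading.Lock()

    def fetch_and_add(self, delta):
        with self._guard:
            old = self.value
            self.value = old + delta
            return old

    def compare_and_swap(self, expected, new):
        with self._guard:
            if self.value == expected:
                self.value = new
                return True
            return False

class ReaderPrefLock:
    def __init__(self):
        self.word = AtomicWord()

    def start_read(self):
        self.word.fetch_and_add(RC_INCR)   # announce this reader first
        while self.word.value & WAFLAG:    # then wait out any active writer
            pass

    def end_read(self):
        self.word.fetch_and_add(-RC_INCR)

    def start_write(self):
        # a writer gets in only when the whole word is 0:
        # no reader interested or reading, no writer writing
        while not self.word.compare_and_swap(0, WAFLAG):
            pass

    def end_write(self):
        self.word.fetch_and_add(-WAFLAG)

# demo: the writer keeps data[0] == data[1]; readers must never see them differ
data = [0, 0]
violations = []
lock = ReaderPrefLock()

def reader():
    for _ in range(200):
        lock.start_read()
        if data[0] != data[1]:
            violations.append((data[0], data[1]))
        lock.end_read()

def writer():
    for _ in range(100):
        lock.start_write()
        data[0] += 1
        data[1] += 1
        lock.end_write()

threads = [threading.Thread(target=f) for f in (reader, reader, writer)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(violations == [] and data == [100, 100])  # True
```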
Fair Lock
A fair lock is used when the system must maintain a balance between keeping the information up to date and remaining responsive (responding to data requests within a reasonable amount of time).
Fair Lock
The lock keeps two counters per class:
- completed readers/writers: those who have finished reading/writing
- total readers/writers: those who have finished, plus outstanding requests
Each arriving process takes a ticket (its "prev" values) and waits in line:
- a writer's ticket = the current total readers + total writers (it must wait for all of them to complete)
- a reader's ticket = the current total writers only (because it can read together with the other readers)

(diagram sequence: processes take tickets and proceed as the completion counters catch up to them)

Again, notice that everything is done on the same centralized memory locations – the shared counters.
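The ticket scheme above can be sketched like this. A hedged sketch: the paper packs these counters into shared words updated with atomic instructions, which the guard lock below merely simulates; the class and field names are mine.

```python
import sys
import threading

sys.setswitchinterval(1e-4)

class FairTicketRWLock:
    """Sketch of the centralized fair lock: each arrival takes a ticket
    recording how many earlier readers/writers it must wait for."""
    def __init__(self):
        self._guard = threading.Lock()  # simulates atomic ticket dispensing
        self.total_readers = 0          # read requests so far
        self.total_writers = 0          # write requests so far
        self.done_readers = 0           # completed reads
        self.done_writers = 0           # completed writes

    def start_read(self):
        with self._guard:
            self.total_readers += 1
            ticket = self.total_writers        # wait only for earlier writers
        while self.done_writers != ticket:
            pass

    def end_read(self):
        with self._guard:
            self.done_readers += 1

    def start_write(self):
        with self._guard:
            r_ticket = self.total_readers      # wait for every earlier reader
            w_ticket = self.total_writers      # and every earlier writer
            self.total_writers += 1
        while self.done_readers != r_ticket or self.done_writers != w_ticket:
            pass

    def end_write(self):
        with self._guard:
            self.done_writers += 1

# demo: writers keep data[0] == data[1]; readers must never see them differ
data = [0, 0]
violations = []
lock = FairTicketRWLock()

def reader():
    for _ in range(200):
        lock.start_read()
        if data[0] != data[1]:
            violations.append((data[0], data[1]))
        lock.end_read()

def writer():
    for _ in range(50):
        lock.start_write()
        data[0] += 1
        data[1] += 1
        lock.end_write()

threads = [threading.Thread(target=f) for f in (reader, reader, writer, writer)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(violations == [] and data == [100, 100])  # True
```

Readers arriving behind a waiting writer hold a ticket that includes that writer, so they wait for it: this is what makes the lock fair in both directions.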
Spin On A Global Location
The last two algorithms busy-wait by spinning on the same memory location. When many processes spin on the same location, it becomes a hot spot in the system. Interference from still-spinning processes increases the time needed to release the lock by those that have finished waiting. It also degrades the performance of processes trying to access the same memory area (not just the exact same location).
Locks with Local-Only Spinning
This is the main section of the paper. It contains implementations of reader/writer locks that busy-wait on local locations (not all on the same location). Why not just use the previously mentioned MCS algorithm?
- too much serialization for the readers, who could be reading at the same time
- too long a code path for this purpose; it can be done more efficiently
Fair Lock (local spinning only)
Writing can be done when all previous read and write requests have been met. Reading can be done when all previous write requests have been met. Like the MCS algorithm, it uses a queue: a reader can begin reading when its predecessor is an active reader or when the previous writer has finished; a writer can write when its predecessor is done and there are no active readers.

Each queue node holds:
- type : reader/writer
- next : pointer
- blocked : boolean
- successor_type : reader/writer
The lock holds:
- tail : pointer to a node (nil)
- reader_count : counter (0)
- next_writer : pointer to a node (nil)
(diagram sequence – writer arrival: the writer appends its node by swinging the lock's tail pointer; if the queue was empty and there are no active readers it proceeds, otherwise it records itself as its predecessor's writer successor and spins on its own node's blocked flag; the lock's next_writer pointer records the writer waiting for the readers to drain)

(diagram sequence – reader arrival: the reader appends its node the same way and increments reader_count; if its predecessor is an active reader it proceeds immediately, and an unblocked reader in turn unblocks a reader successor; otherwise it spins on its own node)

(diagram sequence – release: a finishing writer unblocks its successor; a finishing reader decrements reader_count, and the last reader out unblocks the writer recorded in next_writer)
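As a rough illustration of the local-only spinning idea in this section (each waiter spins solely on its own blocked flag, and readers are woken in batches), here is a deliberately simplified sketch. It is not the paper's lock-free algorithm: an ordinary guard lock and deque stand in for the paper's atomic queue manipulation, and all names are mine.

```python
import sys
import threading
from collections import deque

sys.setswitchinterval(1e-4)

class Node:
    def __init__(self, kind):
        self.kind = kind       # 'R' or 'W'
        self.blocked = True    # each waiter spins only on this flag

class FairLocalSpinRWLock:
    def __init__(self):
        self._guard = threading.Lock()  # stands in for atomic queue updates
        self.queue = deque()            # FIFO of blocked waiters
        self.active_readers = 0
        self.writer_active = False

    def start_read(self):
        with self._guard:
            if not self.writer_active and not self.queue:
                self.active_readers += 1       # join the current reader batch
                return
            node = Node('R')
            self.queue.append(node)
        while node.blocked:                    # local spin on our own node
            pass

    def end_read(self):
        with self._guard:
            self.active_readers -= 1
            # the last reader out hands the lock to a waiting writer
            if (self.active_readers == 0 and self.queue
                    and self.queue[0].kind == 'W'):
                w = self.queue.popleft()
                self.writer_active = True
                w.blocked = False

    def start_write(self):
        with self._guard:
            if (not self.writer_active and self.active_readers == 0
                    and not self.queue):
                self.writer_active = True
                return
            node = Node('W')
            self.queue.append(node)
        while node.blocked:
            pass

    def end_write(self):
        with self._guard:
            self.writer_active = False
            if self.queue and self.queue[0].kind == 'W':
                w = self.queue.popleft()
                self.writer_active = True
                w.blocked = False
            else:
                # wake the whole batch of readers at the head of the queue
                while self.queue and self.queue[0].kind == 'R':
                    r = self.queue.popleft()
                    self.active_readers += 1
                    r.blocked = False

# demo: writers keep data[0] == data[1]; readers must never see them differ
data = [0, 0]
violations = []
lock = FairLocalSpinRWLock()

def reader():
    for _ in range(200):
        lock.start_read()
        if data[0] != data[1]:
            violations.append((data[0], data[1]))
        lock.end_read()

def writer():
    for _ in range(50):
        lock.start_write()
        data[0] += 1
        data[1] += 1
        lock.end_write()

threads = [threading.Thread(target=f) for f in (reader, reader, writer, writer)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(violations == [] and data == [100, 100])
```

Readers behind any waiter must queue, which is what keeps the lock fair; each blocked process still spins only on its own node, as in the paper's version.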
Reader Preference Lock (local spinning only)
In this algorithm the writers wait in a queue, while the readers wait in a list, since any present readers get preference. Bit 0 of the lock word flags an interested writer; bit 1 flags an active writer; the remaining bits hold the reader counter. A reader cannot begin while there is an active writer. A writer cannot begin while another writer is active or the reader counter is non-zero (there are waiting/active readers). To avoid a race between modifying the flag and adding itself to the list, a reader must double-check the flag.

Each node holds:
- next : pointer
- blocked : boolean
The lock holds three pointers (reader_head, writer_head, writer_tail – all nil) and the counter word.
(diagram sequence – writers: a writer appends its node to the writer queue via writer_tail; the first writer sets the interested-writer flag and, once the reader counter reaches 0, the active-writer flag; later writers spin on their own nodes until their predecessor hands over)

(diagram sequence – readers: a reader increments the counter and, if a writer is active, links its node onto the readers list via reader_head and spins on its own node; when the active writer finishes, it unblocks the entire readers list; the last reader to finish hands the lock to the writer at writer_head)
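The structure just described (writers in a queue, readers in a list, each spinning on its own flag, writers waiting for the reader counter to drain) can be sketched in simplified form. Again, this is not the paper's algorithm: a guard lock simulates the atomic flag/counter word, and the names are mine.

```python
import sys
import threading
from collections import deque

sys.setswitchinterval(1e-4)

class Node:
    def __init__(self):
        self.blocked = True   # each waiter spins only on its own flag

class ReaderPrefLocalSpinLock:
    def __init__(self):
        self._guard = threading.Lock()   # stands in for the atomic flag word
        self.reader_count = 0            # interested + active readers
        self.writer_active = False       # the 'active writer' flag
        self.reader_list = []            # readers blocked behind a writer
        self.writer_queue = deque()      # writers waiting in FIFO order

    def start_read(self):
        with self._guard:
            self.reader_count += 1
            if not self.writer_active:   # readers yield only to an ACTIVE writer
                return
            node = Node()
            self.reader_list.append(node)
        while node.blocked:              # local spin on our own node
            pass

    def end_read(self):
        with self._guard:
            self.reader_count -= 1
            # the last reader hands the lock to the next queued writer
            if (self.reader_count == 0 and self.writer_queue
                    and not self.writer_active):
                w = self.writer_queue.popleft()
                self.writer_active = True
                w.blocked = False

    def start_write(self):
        with self._guard:
            if (self.reader_count == 0 and not self.writer_active
                    and not self.writer_queue):
                self.writer_active = True
                return
            node = Node()
            self.writer_queue.append(node)
        while node.blocked:
            pass

    def end_write(self):
        with self._guard:
            self.writer_active = False
            if self.reader_list:
                # reader preference: all waiting readers go before the next writer
                for r in self.reader_list:
                    r.blocked = False
                self.reader_list.clear()
            elif self.reader_count == 0 and self.writer_queue:
                w = self.writer_queue.popleft()
                self.writer_active = True
                w.blocked = False

# demo: writers keep data[0] == data[1]; readers must never see them differ
data = [0, 0]
violations = []
lock = ReaderPrefLocalSpinLock()

def reader():
    for _ in range(200):
        lock.start_read()
        if data[0] != data[1]:
            violations.append((data[0], data[1]))
        lock.end_read()

def writer():
    for _ in range(50):
        lock.start_write()
        data[0] += 1
        data[1] += 1
        lock.end_write()

threads = [threading.Thread(target=f) for f in (reader, reader, writer, writer)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(violations == [] and data == [100, 100])
```

Note the preference: an arriving reader proceeds whenever no writer is active, even if writers are queued, so writers can starve under a steady stream of readers (the demo terminates because the readers are finite).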
Writer Preference Lock (local spinning only)
The code is very long and will not be shown here.
Performance Results & Conclusions
This table shows the single-processor latency of each lock operation in the absence of competition. Notice that latency is higher for the local-spin algorithms; this comes from their extra bookkeeping, which has a noticeable effect when there is no competition.
Each point in the graph is the average time for a process to acquire and release the lock.
The two upper lines are the centralized algorithms: the more competition (processes), the worse their time gets. But notice the local-spin algorithms: they start slower, yet as they handle more processes their time improves until it levels off.
The same data, with focus on the local-spin algorithms.
The local-spin algorithms provide better results: they are faster, and there is no contention due to the busy waiting. These results indicate that contention due to busy-wait synchronization is much less of a problem than has generally been thought.
Summary
- The MCS lock – a simple queue-based mutex lock with local spinning
- A reader preference lock with centralized busy waiting
- A fair lock with centralized busy waiting
- A reader preference lock with local-spin busy waiting
- A fair lock with local-spin busy waiting
The local-spin versions get better results.
Any Questions?