Advanced Operating Systems (CS 202)
Transactional Memory
Jan 27, 2016
Slide credit: slides adapted from several presentations, including the Stanford TCC group and the MIT SuperTech group
Motivation

Uniprocessor systems:
– Frequency and power consumption limits
– Wire delay limits scalability
– Design complexity vs. verification effort
– Where is the remaining ILP?

Support for multiprocessor or multicore systems:
– Replicate small, simple cores; the design is scalable
– Faster design turnaround time; shorter time to market
– Exploit TLP, in addition to ILP within each core
– But now we have new problems
Parallel Software Problems

Parallel systems are often programmed with:
– Synchronization through barriers
– Access control for shared objects through locks

Lock granularity and organization must balance performance and correctness:
– Coarse-grain locking: lock contention
– Fine-grain locking: extra overhead
– Must be careful to avoid deadlocks or data races
– Must be careful not to leave anything unprotected

Performance tuning is not intuitive:
– Performance bottlenecks are related to low-level events, e.g. false sharing and coherence misses
– Feedback is often indirect (cache lines, rather than variables)
Parallel Hardware Complexity (TCC's view)

Cache coherence protocols are complex:
– Must track ownership of cache lines
– Difficult to implement and verify all corner cases

Consistency protocols are complex:
– Must provide rules to correctly order individual loads/stores
– Difficult for both hardware and software

Current protocols rely on low latency, not bandwidth:
– Critical short control messages on ownership transfers
– Latency of short messages is unlikely to scale well in the future
– Bandwidth is likely to scale much better: high-speed interchip connections; multicore (CMP) = on-chip bandwidth
What do we want?

A shared memory system with:
– A simple, easy programming model (unlike message passing)
– A simple, low-complexity hardware implementation (unlike conventional cache-coherent shared memory)
– Good performance
Why are locks bad?

Common problems with conventional locking mechanisms in concurrent systems:
– Priority inversion/inefficiency: a low-priority process is preempted while holding a lock needed by a high-priority process
– Convoying: when a process holding a lock is de-scheduled (e.g. page fault, exhausted quantum), there is no forward progress for the other processes capable of running
– Deadlock (or livelock): processes attempt to lock the same set of objects in different orders (can be a programmer bug)
– Error-prone
7
Lock-free Shared data structure is lock-free if its operations do not require mutual exclusion - Will not prevent multiple processes operating on the same object + avoid lock problems - Existing lock-free techniques use software and do not perform well against lock counterparts
Transactional Memory

– Use transaction-style operations to operate on lock-free data
– Allows the user to customize read-modify-write operations on multiple, independent words
– Easy to support in hardware: straightforward extensions to a conventional multiprocessor cache
Transaction Style

A finite sequence of machine instructions consisting of:
– a sequence of reads,
– computation,
– a sequence of writes, and
– a commit

Formal properties: atomicity, serializability (~ACID)
Access Instructions

Load-transactional (LT)
– Reads from shared memory into a private register
Load-transactional-exclusive (LTX)
– LT, plus a hint that a write is coming up
Store-transactional (ST)
– Tentatively writes from a private register to shared memory; the new value is not visible to other processors until commit
State Instructions

Commit
– Tries to make the tentative writes permanent
– Succeeds if no other processor has read its read set or written its write set
– On failure, discards all updates to the write set
– Returns whether it succeeded
Abort
– Discards all updates to the write set
Validate
– Returns the current transaction status
– If the current status is false, discards all updates to the write set
Transactional memory API

Programmer specifies atomic code blocks.

Lock version:
  Lock(X[a]);
  Lock(X[b]);
  Lock(X[c]);
  X[c] = X[a] + X[b];
  Unlock(X[c]);
  Unlock(X[b]);
  Unlock(X[a]);

TM version:
  atomic {
    X[c] = X[a] + X[b];
  }
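The lock version above only works if every thread acquires the three locks in a consistent global order; the TM version makes that discipline unnecessary. A minimal, runnable sketch of the lock version in Python (the names `X`, `locks`, and `add_locked` are illustrative, not from the slides):

```python
import threading

X = [1, 2, 0]                             # X[a], X[b], X[c]
locks = [threading.Lock() for _ in X]     # one lock per element
a, b, c = 0, 1, 2

def add_locked():
    # Acquire in a fixed (index) order so concurrent callers cannot
    # deadlock; with TM the atomic block replaces all of this.
    for i in sorted((a, b, c)):
        locks[i].acquire()
    try:
        X[c] = X[a] + X[b]
    finally:
        for i in sorted((a, b, c), reverse=True):
            locks[i].release()

add_locked()
print(X[c])  # 3
```

If two callers took the locks in different orders, each could hold one lock while waiting for the other's, which is exactly the deadlock hazard the slides warn about.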
Typical Transaction

/* keep trying */
while (true) {
    /* read variables */
    v1 = LT(V1); ...; vn = LT(Vn);

    /* check consistency */
    if (!VALIDATE())
        continue;

    /* compute new values */
    compute(v1, ..., vn);

    /* write tentative values */
    ST(v1, V1); ... ST(vn, Vn);

    /* try to commit */
    if (COMMIT())
        return result;
    else
        backoff;
}
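The loop above can be made executable with a toy, single-threaded Python stand-in for the hardware primitives. The `Txn` class, the per-word version counters, and all names below are illustrative assumptions, not the paper's implementation; they only mirror the LT / VALIDATE / ST / COMMIT structure:

```python
# Toy model: LT records what was read, ST buffers a tentative write,
# VALIDATE checks that nothing read has since been overwritten, and
# COMMIT atomically publishes the write set if validation passes.
shared = {"V1": 1, "V2": 2}
version = {"V1": 0, "V2": 0}      # bumped on every committed write

class Txn:
    def __init__(self):
        self.reads = {}            # var -> version observed
        self.writes = {}           # var -> tentative value

    def LT(self, var):
        self.reads[var] = version[var]
        return shared[var]

    def ST(self, val, var):
        self.writes[var] = val     # invisible to others until COMMIT

    def VALIDATE(self):
        return all(version[v] == seen for v, seen in self.reads.items())

    def COMMIT(self):
        if not self.VALIDATE():
            self.writes.clear()    # discard updates to the write set
            return False
        for var, val in self.writes.items():
            shared[var] = val
            version[var] += 1
        return True

# The "typical transaction": read, validate, compute, write, commit.
while True:
    t = Txn()
    v1, v2 = t.LT("V1"), t.LT("V2")
    if not t.VALIDATE():
        continue                   # inconsistent read set: retry
    t.ST(v1 + v2, "V1")            # compute and tentatively write
    if t.COMMIT():
        break                      # else: back off and retry

print(shared["V1"])  # 3
```

In hardware the read/write sets live in the transactional cache and validation rides on the coherence protocol; the retry-until-commit shape is the same.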
Example

[Figure: timeline of three concurrent transactions. Transaction A loads 0xdddd... and stores 0xbeef, then arbitrates and commits. Transaction B loads 0xdddd... and 0xbbbb, then arbitrates and commits. Transaction C loads 0xbeef; A's commit to 0xbeef causes a violation in C, which re-executes with the new data.]
Warning...

Not the same as database transactions, and not intended for database use:
– Transactions are short in time
– Transactions are small in dataset
But similar in intent and semantics.
Idea Behind the Implementation

– Existing cache protocols already detect accessibility conflicts
– Accessibility conflicts ~ transaction conflicts
– Can be extended to cache coherence protocols, including bus snooping and directory-based protocols
Bus Snooping Example

– Each processor has a regular cache (2048 8-byte lines, direct mapped) and a transactional cache (64 8-byte lines, fully associative), both connected to the bus
– The two caches are exclusive: a line resides in at most one of them
– The transactional cache holds tentative writes without propagating them to other processors
TM support for transactions

Buffering: transactional cache
Conflict detection: cache coherence protocol
Abort/recovery: invalidate transactional cache lines
Commit: validate transactional cache lines
Transaction Cache

Each cache line contains a separate transactional tag in addition to the coherence protocol tag:
– Transactional tag states: EMPTY, NORMAL, XCOMMIT, XABORT
Two entries per transactional write:
– Modifications are written to the XABORT entry, which is set to EMPTY on abort
– The XCOMMIT entry holds the original value and is set to EMPTY on commit
Allocation policy, in decreasing order of preference: EMPTY entries, NORMAL entries, XCOMMIT entries
Must guarantee a minimum transaction size
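A minimal sketch of this two-entry bookkeeping, assuming a toy Python model (`TxCache`, the tuple-based lines, and all names are illustrative stand-ins, not the paper's structures):

```python
EMPTY, NORMAL, XCOMMIT, XABORT = "EMPTY", "NORMAL", "XCOMMIT", "XABORT"

class TxCache:
    def __init__(self):
        self.lines = []  # (transactional tag, address, value)

    def tx_write(self, addr, old_value, new_value):
        # Two entries per transactional write:
        self.lines.append((XCOMMIT, addr, old_value))  # original value
        self.lines.append((XABORT, addr, new_value))   # tentative value

    def commit(self):
        # Drop the originals; tentative values become normal lines.
        self.lines = [(NORMAL, a, v) for (t, a, v) in self.lines
                      if t == XABORT]

    def abort(self):
        # Drop the tentative values; originals become normal lines.
        self.lines = [(NORMAL, a, v) for (t, a, v) in self.lines
                      if t == XCOMMIT]

committed = TxCache()
committed.tx_write(0xbeef, old_value=1, new_value=2)
committed.commit()                 # keeps the new value, 2

aborted = TxCache()
aborted.tx_write(0xbeef, old_value=1, new_value=2)
aborted.abort()                    # restores the old value, 1
```

Either outcome is a cheap local filter over the transactional tags, which is why abort and commit are fast in this design.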
Transactional Cache

– Fully associative cache
– Each cache line can reside in only one of the transactional or regular cache
– Holds transactional writes; these are hidden from other processors and from memory
– Makes updated lines available for snooping on COMMIT
– Invalidates updated lines on ABORT
Herlihy and Moss, ISCA '93

[Figure: a CPU connected to both a regular cache (with coherence states such as M and S) and a transactional cache (with XCOMMIT/XABORT entries); both caches sit between the CPU and memory.]
Sample Counter code
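The slide's code itself is not in this transcript; the following is a plausible sketch of the counter transaction in Python, assuming a trivial single-threaded stand-in for the LTX/ST/COMMIT primitives (in this toy model COMMIT always succeeds; in hardware it can fail and force a retry):

```python
# Counter incremented via the transactional retry pattern. All helper
# names are illustrative; the real LTX/ST/COMMIT are hardware
# instructions, not Python functions.
shared_counter = {"val": 0}

def LTX(var):                        # load with intent to write
    return shared_counter[var]

def ST(val, var, writes):            # buffer a tentative write
    writes[var] = val

def COMMIT(writes):                  # single-threaded: always succeeds
    shared_counter.update(writes)
    return True

def increment():
    while True:                      # keep trying until commit succeeds
        writes = {}
        v = LTX("val")               # read the current value
        ST(v + 1, "val", writes)     # tentatively write v + 1
        if COMMIT(writes):
            return                   # else: back off and retry

for _ in range(10):
    increment()
print(shared_counter["val"])  # 10
```

Using LTX rather than LT tells the hardware up front that the line will be written, avoiding an extra ownership upgrade between the load and the store.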
Exposing more concurrency

Doubly linked list implementation of a queue, with head and tail pointers.
If the queue is not empty:
– Only the head pointer is used for dequeuing
– Only the tail pointer is used for enqueuing
Concurrent enqueuing/dequeuing:
– Possible in TM
– Not possible with locks
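Why this works under TM but not with a single lock: on a non-empty queue the two operations have disjoint read/write sets, so neither invalidates the other and both can commit. A sketch that makes the footprints and the conflict test explicit (the footprint functions and names are illustrative assumptions, not from the paper):

```python
from collections import deque

def enqueue_footprint(queue):
    # Enqueue reads and writes only the tail end.
    return {"reads": {"tail"}, "writes": {"tail"}}

def dequeue_footprint(queue):
    if queue:
        # Non-empty queue: dequeue touches only the head end.
        return {"reads": {"head"}, "writes": {"head"}}
    # Empty queue: dequeue must also examine the tail, so it conflicts.
    return {"reads": {"head", "tail"}, "writes": set()}

def conflict(a, b):
    # Two transactions conflict if either one writes a location the
    # other reads or writes.
    return bool(a["writes"] & (b["reads"] | b["writes"]) or
                b["writes"] & (a["reads"] | a["writes"]))

q = deque([1, 2, 3])
print(conflict(enqueue_footprint(q), dequeue_footprint(q)))  # False
q.clear()
print(conflict(enqueue_footprint(q), dequeue_footprint(q)))  # True
```

A coarse lock serializes both cases unconditionally; TM pays the serialization cost only in the empty-queue case, where the footprints genuinely overlap.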
Challenges of TM

– Long transactions
– I/O
– Nested transactions
– Interrupts
Other TM Ideas

– Speculative lock elision
– Software transactional memory: requires no hardware changes; allows composition of transactions
– Multiple improvements to both hardware and software TM
– Hybrid TMs
Speculative Lock Elision

Rajwar and Goodman, MICRO '01
– Speculatively removes lock acquire and release instructions
– Microarchitectural changes only
– No changes to the cache system
– No changes to the ISA, so it works with existing lock-based code
SLE example
Comparing TM and TLS

– TM is optimistic synchronization
– TLS is optimistic parallelization
– Any other similarities or differences?
Simulation

– Proteus simulator, 32 processors
– Regular cache: direct mapped, 2048 8-byte lines
– Transactional cache: fully associative, 64 8-byte lines
– Single-cycle cache access, 4-cycle memory access
– Both a snoopy bus and a directory are simulated
– 2-stage network with a switch delay of 1 cycle per stage
Benchmarks

Counter
– n processors, each incrementing a shared counter (2^16)/n times
Producer/consumer buffer
– n/2 processors produce and n/2 processors consume through a shared FIFO
– Ends when 2^16 items have been consumed
Doubly linked list
– n processors try to rotate the contents from tail to head
– Ends when 2^16 items have been moved
– Which variables are shared is conditional (it depends on the queue state)
– Traditional locking methods can introduce deadlock
Comparisons

Competitors:
– Transactional memory
– Load-locked/store-conditional (Alpha)
– Spin lock with backoff
– Software queue
– Hardware queue
Counter Result
Producer/Consumer Result
Doubly Linked List Result
Conclusion

– Avoids extra lock variables and lock problems
– Trades deadlock for possible livelock/starvation
– Comparable performance to lock-based techniques when the shared data structure is small
– Relatively easy to implement
What has happened since this paper?

Many other transactional memory proposals:
– Software TM (slower, but needs no hardware support and places no limit on the size of the data)
– Hardware TM (many proposals with various degrees of improvement)
Products!
– Sun Rock in the mid-2000s; TxLinux used it
– Intel/AMD announced support in 2013, shipped in 2014; supports SLE as well