(Software) Distributed Shared Memory (aka Shared Virtual Memory)


(Software) Distributed Shared Memory (aka Shared Virtual Memory). CS 519 (Operating System Theory), Lecture 10.

Communication Models. Message Passing (example: UNIX sockets): disjoint address spaces, good between untrusted peers (client-server model); communication is explicit (send/receive messages); synchronization is implicit (data is available after the receive); sender-based communication ("push"); programming is difficult: there are no shared pointers, and data to be communicated must be packed into and unpacked from messages. RPC alleviates the problem, but RPC does not allow pointers or global variables.

Communication Models (cont'd). Shared Memory (example: threads): the same address space, good between trusted peers (a parallel application); communication is implicit (memory accesses); synchronization is explicit (locks/mutexes, condition variables, monitors, barriers); receiver-based communication ("pull"); programming is easy: pointers and global variables keep their meaning because the address space is shared.

Implementation. Message Passing is easy to build: a network with TCP/IP is enough; it scales well and is an inexpensive solution. Shared Memory is difficult to build: it requires sophisticated memory controllers, scalability is harder to achieve, and it is an expensive solution.

Shared Memory: Software Approach. Emulate shared memory in software on top of message passing: simple programming on inexpensive hardware. Kai Li's idea ('86): SHARED VIRTUAL MEMORY (SVM). Implement shared memory in software as a virtual address space; emulate the functionality of a cache-coherent multiprocessor at page granularity using virtual memory; use a simple network of computers (with message passing for communication).

Shared Virtual Memory (SVM). [Figure: a shared virtual address space spanning the nodes of a network; on each node, the page table maps pages of this address space into local physical memory, so the physical memory on each node acts like a cache of the virtual address space.]

How is virtual memory used in SVM? Virtual memory provides page-protection support. If a page of the virtual address space is cached in the local memory, the page-table entry is valid (the page is not protected). If a page is not available in the local memory, the corresponding entry is invalid (the page is protected against both reads and writes). An attempt to access a protected page causes a segmentation fault (page fault); the protocol handles the fault (by having installed a fault handler). In the fault handler, the protocol brings the faulting page from another node using message passing and unprotects the page. Everything is transparent to the programmer: it looks like real shared memory. (A sketch of this mechanism follows.)
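The following is a minimal sketch, assuming a POSIX system, of how an SVM runtime might implement this fault-handling path with mmap/mprotect and a SIGSEGV handler. The shared region's base address and size, and the fetch_page_from_owner routine standing in for the message-passing layer, are hypothetical.

```c
/* Minimal SVM fault-handling sketch (POSIX). fetch_page_from_owner()
 * is a hypothetical placeholder for the message-passing layer that
 * copies the page's contents from a remote node into local memory. */
#include <signal.h>
#include <stdint.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

#define SHARED_BASE  ((void *)0x600000000000)   /* assumed base of the shared region */
#define SHARED_PAGES 1024

extern void fetch_page_from_owner(void *page_addr);    /* assumed: remote page fetch */

static void svm_fault_handler(int sig, siginfo_t *info, void *ctx)
{
    (void)sig; (void)ctx;
    long psz = sysconf(_SC_PAGESIZE);
    void *page = (void *)((uintptr_t)info->si_addr & ~((uintptr_t)psz - 1));

    fetch_page_from_owner(page);                          /* bring the page over the network */
    mprotect(page, (size_t)psz, PROT_READ | PROT_WRITE);  /* unprotect: page is now cached locally */
}

void svm_init(void)
{
    long psz = sysconf(_SC_PAGESIZE);

    /* Reserve the shared virtual address range; all pages start out
     * invalid (PROT_NONE), so the first access to any page faults. */
    mmap(SHARED_BASE, (size_t)(SHARED_PAGES * psz), PROT_NONE,
         MAP_PRIVATE | MAP_ANONYMOUS | MAP_FIXED, -1, 0);

    struct sigaction sa;
    memset(&sa, 0, sizeof sa);
    sa.sa_sigaction = svm_fault_handler;
    sa.sa_flags = SA_SIGINFO;
    sigaction(SIGSEGV, &sa, NULL);
}
```

In a real protocol the handler would also distinguish read from write faults (for example, first mapping the page read-only and upgrading on a write fault) to drive the page state machine described in the later slides.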

The Coherence Problem. We have seen this problem before! To speed up memory accesses we want to keep (cache) a virtual page in the local memory of the processors that access that page; there can be more than one such processor, so the page is replicated. If we have more than one copy of the page, then when a write is performed we have a coherence problem: how to make the write visible to the other copies, and how to combine writes performed on different copies of the same page (by different processors).

Simplest approach: no caches. [Figure: CPU, memory, and network interface on a memory bus.] With no caches, an address is stored in only one place: in memory. The processor writes to memory, and the network interface reads from memory to send out messages. Memory accesses are slow because the memory bus must be traversed for every access, since there are no caches.

With caches. [Figure: CPU, cache, memory, and network interface (NI) on a memory bus.] With caches, an address can be stored in two places, memory and the cache, which means replication. The CPU and the NI must have a coherent view of memory: on DMA out, the data comes from the cache if it is dirty there; on DMA in, the cache block is invalidated.

The multiprocessor case. [Figure: several CPUs and one memory on a memory bus, no caches.] There is no coherence problem since there is no replication, but the bus can become overloaded by the memory traffic generated by the processors, and memory accesses are slow.

Cache-coherent multiprocessor. [Figure: several CPUs, each with its own cache, sharing a memory bus.] Adding caches introduces the coherence problem on writes. Snooping caches: the caches observe the bus. Write-through solution: memory is updated on each write; the other caches observe the write on the bus, and if they cache the word they either invalidate the block or update the word. If they invalidate, the next access incurs a miss and the block is re-fetched.

Ownership-based caching. A write-through snooping cache still generates high traffic on the memory bus: it absorbs reads but not writes. The solution is to absorb writes as well: make the cache write-back and give the processor exclusive access to the cache block on the first write (grant it ownership of the block). Further writes to an owned block do not show up on the bus. When another processor wants to read, the owner (instead of memory) provides the cache block and loses exclusivity; when another processor wants to write, ownership is transferred.

State diagram (per cache block). [Figure: three states, INVALID, SHARED, and EXCLUSIVE (at the owner); a read fault moves a block toward SHARED, a write fault (which invalidates the other copies) moves it to EXCLUSIVE, and an invalidation (ownership transfer) moves it back to INVALID.]
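A minimal sketch of this per-block state machine in C; the event names are illustrative rather than taken from any particular implementation, and the protocol actions (fetching a copy, invalidating the other copies) are assumed to happen elsewhere.

```c
/* Per-block state machine sketch for the ownership protocol above.
 * Event names are illustrative; the messaging actions are assumed
 * to be performed by the caller. */
typedef enum { INVALID, SHARED, EXCLUSIVE } block_state;
typedef enum { READ_FAULT, WRITE_FAULT, REMOTE_INVALIDATE } block_event;

block_state next_state(block_state s, block_event e)
{
    switch (e) {
    case READ_FAULT:
        return (s == INVALID) ? SHARED : s;   /* fetch a read-only copy */
    case WRITE_FAULT:
        return EXCLUSIVE;                     /* invalidate others, become the owner */
    case REMOTE_INVALIDATE:
        return INVALID;                       /* another processor is writing: drop our copy */
    }
    return s;
}
```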

Switched multiprocessors. The memory bus limits the scalability of bus-based multiprocessors. The alternative is to use a switched interconnect, with memory physically distributed among the nodes: a NUMA architecture (non-uniform memory access), or CC-NUMA (cache-coherent NUMA), also called hardware DSM. Recall that the bus made coherence easy to maintain: as a broadcast medium it allows snooping, and as a serialization medium it allows a global ordering of events. Without it, we need a different mechanism.

Directory-based coherence. A directory keeps the current state and location of each block: the state of the cache block (invalid, shared, exclusive) and the copyset (the processors that have valid copies of the block). Common approach: a distributed directory. Each node keeps a portion of the directory and is the home for the cache blocks whose directory information it keeps; the home knows who the current owner of the block is. The states are the same as before, but the coherence protocol is more complicated.

DSM protocol. On a read miss (INVALID -> SHARED): send a request to the home; if the block is SHARED, the home sends the block and updates the copyset; if the block is EXCLUSIVE, the home forwards the request to the owner, the owner sends the block to the requester and to the home, and the home makes the block SHARED. On a write miss (INVALID/SHARED -> EXCLUSIVE): send a request to the home; if the block is SHARED, the home sends invalidations to the processors in the copyset, waits for acks, and makes the block EXCLUSIVE; if the block is EXCLUSIVE, the home forwards the request to the owner, the owner sends the ownership (and the block, if requested), and the home updates the owner. (A sketch of the home's actions follows.)
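A sketch of how the home node might act on these two misses, assuming a small fixed number of nodes; the directory layout and the messaging helpers (send_block, forward_to_owner, send_invalidate, wait_for_acks) are hypothetical placeholders, not a specific system's API.

```c
/* Home-node directory actions for the read/write misses above.
 * All messaging helpers are hypothetical placeholders. */
#include <stdbool.h>

#define NODES 64

typedef enum { DIR_INVALID, DIR_SHARED, DIR_EXCLUSIVE } dir_state;

typedef struct {
    dir_state state;
    int       owner;             /* meaningful when state == DIR_EXCLUSIVE */
    bool      copyset[NODES];    /* nodes holding a valid copy */
} dir_entry;

extern void send_block(int to, int block);
extern void forward_to_owner(int owner, int block, int requester);
extern void send_invalidate(int to, int block);
extern void wait_for_acks(void);

void handle_read_miss(dir_entry *d, int block, int requester)
{
    if (d->state == DIR_EXCLUSIVE) {
        forward_to_owner(d->owner, block, requester);  /* owner sends block to requester and home */
        d->copyset[d->owner] = true;
    } else {
        send_block(requester, block);                  /* home has a valid copy */
    }
    d->copyset[requester] = true;
    d->state = DIR_SHARED;
}

void handle_write_miss(dir_entry *d, int block, int requester)
{
    if (d->state == DIR_SHARED) {
        for (int i = 0; i < NODES; i++)
            if (d->copyset[i] && i != requester)
                send_invalidate(i, block);
        wait_for_acks();
        send_block(requester, block);
    } else if (d->state == DIR_EXCLUSIVE) {
        forward_to_owner(d->owner, block, requester);  /* ownership (and block) passed to requester */
    } else {
        send_block(requester, block);
    }
    for (int i = 0; i < NODES; i++) d->copyset[i] = false;
    d->copyset[requester] = true;
    d->owner = requester;
    d->state = DIR_EXCLUSIVE;
}
```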

Back to SVM. An SVM protocol is similar to a hardware directory-based DSM protocol, but with pages instead of cache blocks, virtual memory instead of a cache controller to maintain the page state, and communication performed in software using message passing.

False sharing. In both hardware and software DSM, coherence and communication are performed in coherence units: blocks or pages. The protocol observes sharing at coherence-unit granularity, not at the word/byte granularity at which it occurs in the program. Consequently, the protocol cannot distinguish whether two processors "really" share variables in the same coherence unit, or whether they access different variables that merely happen to be collocated in the same coherence unit. The latter is called "false sharing" and generates unnecessary communication. It occurs more often in SVM because pages (e.g., 4 KB) are larger than the cache blocks of hardware DSM.
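A small illustration, not tied to any particular SVM system, of how two unrelated variables can end up in the same 4 KB coherence unit:

```c
/* Two logically independent counters that happen to live on the same
 * 4 KB page. Under a single-writer page protocol, every write by one
 * process invalidates the other's copy of the page, even though their
 * data never overlaps: false sharing at page granularity. */
#include <stdint.h>

struct stats_page {
    int64_t requests_p0;   /* only ever written by process 0 */
    int64_t requests_p1;   /* only ever written by process 1 */
    char    pad[4096 - 2 * sizeof(int64_t)];
};

/* Placing the two counters on separate pages (or tolerating multiple
 * writers per page, as the later slides do) removes the ping-ponging. */
```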

Memory Consistency. Consistency has to do with the order of memory accesses. In a bus-based multiprocessor, the memory bus is a serialization medium that guarantees a global ordering of events; as a result, each processor sees memory accesses in the same order, which gives sequential consistency. If we don't have a bus (as in hardware and software DSM systems), it is more difficult to implement sequential consistency. But does the programmer really expect sequential consistency? The consistency model is a contract between the protocol and the programmer defining the restrictions on memory-access ordering.

Sequential consistency (SC). Assume a = b = 0 initially; P0 executes {a=1; print b} and P1 executes {b=1; print a}. Any valid interleaving is acceptable, as long as all processes see memory writes in the same order. Legal outputs under SC: 01, 10, 11. SC rules: a read is not allowed to perform until previous writes are performed; a write is not allowed to perform until previous reads are performed. (A threads version of this litmus test is sketched below.)
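For concreteness, here is the slide's litmus test written as a small POSIX-threads program. It is a sketch only: it uses plain (volatile) variables with no fences or atomics, so on a machine with a memory model weaker than SC the output 00 can appear, which is exactly the outcome SC forbids.

```c
/* Store-buffering litmus test from the slide (a = b = 0 initially).
 * Under SC only 01, 10, or 11 can be printed; 00 indicates that a
 * write was reordered after the following read (non-SC behavior).
 * Deliberately uses no fences or atomics. */
#include <pthread.h>
#include <stdio.h>

volatile int a = 0, b = 0;

static void *p0(void *arg) { a = 1; printf("%d", b); return arg; }
static void *p1(void *arg) { b = 1; printf("%d", a); return arg; }

int main(void)
{
    pthread_t t0, t1;
    pthread_create(&t0, NULL, p0, NULL);
    pthread_create(&t1, NULL, p1, NULL);
    pthread_join(t0, NULL);
    pthread_join(t1, NULL);
    printf("\n");
    return 0;
}
```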

What is wrong with SC? To implement SC in hardware or software DSM we have to wait for the writes to be performed (wait for acks), so write latency is high. Read-write and write-write false sharing cause the protocol to generate more communication than intended by the program (true sharing); in SVM the effect of false sharing can be dramatic: page thrashing. We would like to relax the consistency model to reduce the pressure on the protocol while still providing SC behavior; a relaxed consistency model allows protocol optimizations.

Processor consistency (PC). Assume a = b = 0 initially; P0 executes {a=1; print b} and P1 executes {b=1; print a}. Relaxation: writes done by the same processor must be seen by all other processors in the order they were issued, but writes from different processors may be seen in different orders by different processors. In simple terms, reads can bypass writes. Legal outputs under PC: 01, 10, 11, 00. PC rule: a write is not allowed to perform before all previous reads are performed.

What is wrong with PC? PC is interesting because most processors can support PC much more easily than SC, and the performance gap between PC and SC is significant: PC hides write latency. Even PC, though, does not quite match what programmers actually expect. Shared-memory programs use synchronization to avoid race conditions, and since they only access shared variables after synchronization (acquiring a lock, for instance), they don't care about the order in which writes are seen.

Weak consistency (WC). Relaxation: all writes performed before a synchronization operation must be seen after the synchronization; this is weak consistency. To implement weak consistency, the program and the protocol must distinguish ordinary variables from synchronization variables (in hardware DSM); in SVM, synchronization is performed by explicit operations.

Release consistency (RC). A refinement of weak consistency that distinguishes two kinds of synchronization operations: acquire (get the lock, enter the critical section) and release (free the lock, exit the critical section). A barrier is a global synchronization: a process entering a barrier has to wait for all other processes to reach the barrier. It combines a release with an acquire: arrival at the barrier is a release, departure from the barrier is an acquire.

Release consistency (cont'd). Main idea: if ordinary accesses are ordered with respect to synchronization accesses, and if the program is race free, then the programmer cannot distinguish RC from SC. RC rules: (1) before a release is allowed to perform, all previous writes done by the processor must be completed; (2) before an ordinary access is allowed to perform, all previous acquires done by the processor must be completed; (3) acquire and release operations must be sequentially/processor consistent. RC allows substantial protocol optimization: writes can be either pipelined (hardware DSM) or buffered until the release (SVM). (A sketch of such buffering follows.)
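The following sketch shows one way an SVM runtime could buffer coherence actions until the release, as rule (1) allows; the write-notice buffer and the lock/messaging helpers are hypothetical placeholders rather than any particular system's interface.

```c
/* Buffering writes (as invalidation notices) until the release, per
 * RC rule (1). The messaging and lock helpers are hypothetical. */
#include <stddef.h>

#define MAX_DIRTY 256

static void  *dirty_pages[MAX_DIRTY];   /* pages written since the last release */
static size_t n_dirty;

extern void send_invalidations_and_wait(void *pages[], size_t n);  /* assumed messaging layer */
extern void lock_acquire_remote(int lock_id);
extern void lock_release_remote(int lock_id);

void note_write(void *page)             /* called from the write-fault handler */
{
    if (n_dirty < MAX_DIRTY)
        dirty_pages[n_dirty++] = page;
}

void rc_release(int lock_id)
{
    /* Rule (1): all writes performed before the release are made
     * visible (invalidations sent and acknowledged) before the lock
     * becomes available to the next acquirer. */
    send_invalidations_and_wait(dirty_pages, n_dirty);
    n_dirty = 0;
    lock_release_remote(lock_id);
}

void rc_acquire(int lock_id)
{
    /* Rule (2): once the acquire completes, subsequent ordinary
     * accesses see the writes that "happened before" the matching
     * release (their pages have been invalidated). */
    lock_acquire_remote(lock_id);
}
```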

Under rules 1-3, RC == SC. [Figure: P0 executes acq(L); a=1; b=1; rel(L); P1 then executes acq(L); print a; print b; rel(L); P2 observes the invalidations. Under SC an invalidation is sent for each write as it happens; under RC the invalidations are sent at the release.] Use < for "happened before". The following proves RC == SC for this example: {a=1, b=1} < rel(L) [rule 1], rel(L) < acq(L) [rule 3], acq(L) < {print a, print b} [rule 2], hence {a=1, b=1} < {print a, print b}.

Advantages of RC over SC. [Figure: timeline for P0, P1, P2 comparing when invalidations are sent under SC and under RC for writes inside a critical section (acq(L); b=1; rel(L)).] Writes can be pipelined (hardware DSM) or buffered (SVM), which reduces write latency and, in hardware DSM, the number of messages. RC also reduces unnecessary communication due to false sharing (for instance, at [print c] if page(c) == page(a)).

What is wrong with (eager) RC? [Figure: timeline for P0, P1, P2 showing the invalidations sent to all processes at the release under eager RC.] Invalidations are propagated at the release to all processes, so eager RC still exposes false sharing. This can be alleviated by postponing the invalidations from release time to acquire time.

Lazy Release Consistency (LRC). [Figure: timeline for P0, P1, P2 comparing eager RC (ERC) and LRC: under ERC, Inv page(a) and Inv page(b) are sent at the releases, whereas under LRC a processor receives Inv page(a) and page(b) only when it performs its acquire.] Invalidations are propagated lazily, following the "happened before" relation, and only to one processor at a time. LRC is difficult to implement: it must propagate the transitive closure of invalidations with respect to the "happened before" relation.

Happened-Before Relation. It is sometimes important to determine an ordering of events in a distributed system. The happened-before relation (->) provides a partial ordering of events: if A and B are events in the same process and A was executed before B, then A -> B; if A is the event of sending a message by one process and B is the event of receiving that message by another process, then A -> B; if A -> B and B -> C, then A -> C. If events A and B are not related by the -> relation, they executed "concurrently".

Achieving Global Ordering. A common or synchronized clock is not available, so use a "timestamp" for each event to achieve a global ordering. Global ordering requirement: if A -> B, then the timestamp of A is less than the timestamp of B. The timestamp can take the value of a logical clock, i.e., a simple counter that is incremented between any two successive events executed within a process. If event A was executed before B in a process, then LC(A) < LC(B). If A is the event of receiving a message with timestamp t and LC(A) < t, then set LC(A) = t + 1. If LC(A) in one process i is the same as LC(B) in another process j, use process ids to break ties and create a total ordering. (A small sketch of these rules follows.)
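A small C sketch of these logical-clock (Lamport clock) rules; the struct and function names are illustrative.

```c
/* Lamport logical clock rules from the slide. Names are illustrative. */
typedef struct {
    long clock;   /* logical clock value */
    int  pid;     /* process id, used only to break ties */
} lclock;

/* Increment between successive local events. */
void lc_local_event(lclock *lc) { lc->clock++; }

/* On send: advance the clock and return the timestamp carried in the message. */
long lc_send(lclock *lc) { lc->clock++; return lc->clock; }

/* On receive of a message with timestamp t: if our clock is behind,
 * jump past t; otherwise just count the receive as another event. */
void lc_receive(lclock *lc, long t)
{
    lc->clock = (lc->clock < t) ? t + 1 : lc->clock + 1;
}

/* Total order: compare clocks, break ties with process ids. */
int lc_earlier(const lclock *a, const lclock *b)
{
    if (a->clock != b->clock) return a->clock < b->clock;
    return a->pid < b->pid;
}
```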

Vector Timestamps. In distributed shared memory, scalar timestamps are not enough; for Lazy Release Consistency, for instance, we need vector timestamps. Logical time is represented as a vector of P entries (P = number of processes). We say TS(X) < TS(Y) if TS(X)[i] < TS(Y)[i] for some i and TS(X)[i] <= TS(Y)[i] for all i. This captures "happened before" exactly: X -> Y if and only if TS(X) < TS(Y). TS(X)[i] is incremented on every local event at process i; a message is timestamped with the sender's vector timestamp; on receive, the receiver sets the jth element of its vector to the maximum of the jth element of the message timestamp and the jth element of its current timestamp. (See the sketch below.)
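A C sketch of these vector-timestamp operations, assuming a fixed number of processes P; the names are illustrative.

```c
/* Vector timestamps as described above; P is the number of processes. */
#include <stdbool.h>

#define P 8

typedef struct { int v[P]; } vts;

/* Increment the local component on every local event at process i. */
void vts_tick(vts *ts, int i) { ts->v[i]++; }

/* TS(X) < TS(Y): every component <= and at least one strictly <.
 * This captures "happened before" exactly. */
bool vts_less(const vts *x, const vts *y)
{
    bool strict = false;
    for (int k = 0; k < P; k++) {
        if (x->v[k] > y->v[k]) return false;
        if (x->v[k] < y->v[k]) strict = true;
    }
    return strict;
}

/* On receive: take the componentwise maximum of the local timestamp
 * and the timestamp carried by the message. */
void vts_merge(vts *mine, const vts *msg)
{
    for (int k = 0; k < P; k++)
        if (msg->v[k] > mine->v[k]) mine->v[k] = msg->v[k];
}
```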

Using Vector Timestamps in LRC. Logical time is divided into intervals: an interval is delimited by consecutive local synchronization events. Each processor keeps its vector timestamp and records the invalidations received for each interval and processor (write notices). At an acquire, timestamps are used to implement RC according to the "happened before" relation: the acquirer sends its vector timestamp to the last releaser; the releaser sends back (with the lock) the write notices corresponding to the intervals not yet seen by the acquirer; the acquirer applies them and updates its vector timestamp.

Lazy Release Consistency (LRC) example. [Figure: P0 writes a and releases L in an interval with vector timestamp [1,0,0]; P1 acquires L, writes b, and releases with timestamp [1,1,0]; P2, still at [0,0,0], requests L before its acquire and, with the lock, receives the write notices {P0, 1, p(a)} and {P1, 1, p(b)}, so page(a) and page(b) are invalidated before it prints a and b.]

Two Key Observations. Observation 1: no binding between locks and data is assumed. At an acquire, all the writes that "happened before" the corresponding release must be propagated (as invalidations), even if they were performed in a critical section protected by a different lock than the one being acquired, or even outside critical sections. If we propagated only the writes performed in critical sections protected by the same lock, we would assume a binding between data and locks, which gives a different consistency model: entry consistency. Observation 2: RC and LRC alleviate false sharing but do not eliminate it.

Multiple Writers. To further alleviate false sharing we must tolerate multiple writable copies of a page between two synchronization events; the problem is how to merge, later on, the writes performed by multiple writers. Example (assume a and b fall in the same page, i.e., page(a) == page(b)): P0 executes acq(L1); a=1; rel(L1); P1 executes acq(L2); b=1; rel(L2); P2 executes acq(L1); acq(L2); print a; print b; rel(L2); rel(L1). How are P0's and P1's writes to the same page merged at P2?

Multiple Writers Using Twins and Diffs. Before the first write to a page, save a clean copy of the page: the twin. At the release, compare the (dirty) page with the (clean) twin and record the differences: the diff. A faulting processor has to collect the diffs from all writers of a page and apply them to its local copy to bring it up to date. (Same example as before: P0 executes acq(L1); a=1; rel(L1); P1 executes acq(L2); b=1; rel(L2); P2 executes acq(L1); acq(L2); print a; print b; rel(L2); rel(L1).) A sketch of twin creation and diffing follows.
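A sketch of the twin/diff mechanism at byte granularity; the page size and diff representation are assumptions (real systems typically diff at word granularity and run-length encode the result).

```c
/* Twin/diff sketch. Byte-level diffs for simplicity; real systems
 * usually diff at word granularity and run-length encode the result. */
#include <stdlib.h>
#include <string.h>

#define PAGE_SIZE 4096

typedef struct { int offset; unsigned char value; } diff_entry;

/* Before the first write in an interval: save a clean copy (the twin). */
unsigned char *make_twin(const unsigned char *page)
{
    unsigned char *twin = malloc(PAGE_SIZE);
    memcpy(twin, page, PAGE_SIZE);
    return twin;
}

/* At the release: compare the dirty page with its twin and record each
 * changed byte as (offset, new value). Returns the number of entries. */
size_t make_diff(const unsigned char *page, const unsigned char *twin,
                 diff_entry *out)
{
    size_t n = 0;
    for (int i = 0; i < PAGE_SIZE; i++)
        if (page[i] != twin[i]) {
            out[n].offset = i;
            out[n].value  = page[i];
            n++;
        }
    return n;
}

/* At a reader (or at the home node in HLRC): apply a diff to the local copy. */
void apply_diff(unsigned char *page, const diff_entry *d, size_t n)
{
    for (size_t i = 0; i < n; i++)
        page[d[i].offset] = d[i].value;
}
```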

Potential Problem With This Scheme? In the worst case (all processors write and read the same page): O(n^2) messages and O(n) applications of each diff. It can also lead to huge memory consumption for the protocol: diffs have to be kept around and eventually garbage collected. Alternative scheme: select a node as the home of the page (usually the first writer) and propagate diffs eagerly to the home. (Same example as before, with an additional node P3 as the home of the page containing a and b; P0 and P1 send their diffs to P3.)

Home-based LRC (HLRC). In the worst case (all writers and all readers on the same page): O(n) messages and a single application of each diff (at the home). Diffs have a short lifetime: they are used as vehicles to send writes to the home and are then immediately discarded. One remaining problem: we must guarantee that the writes sent as diffs have been performed at the home by the time the page is fetched. (In the example, P1 executes acq(L2); b=1; rel(L2) and sends its diff to P3, the home of page(b); P2 then acquires L2 and prints b, but the diff may take a long time to arrive at P3.)

Guaranteeing Coherence in HLRC. Several solutions: (1) wait for acks from the home when the diff is sent at release time; this makes release latency high because the processor must wait for acks for all pages updated in the current interval. A better solution: (2) use vector timestamps to "version" the page. Timestamp the diff with the logical clock when sending it to the home; the home updates its page version after applying the diff. Use vector timestamps when requesting a page from the home: the home provides the page only when it has the required version. This guarantees memory coherence, i.e., the page incorporates the necessary updates according to the happened-before order. (A sketch of option (2) follows.)
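A sketch of option (2) at the home node, reusing a vts type like the one in the vector-timestamp sketch above; the wait_for_next_diff helper is a hypothetical placeholder for blocking until another diff arrives and is applied.

```c
/* Versioned pages at the home (option 2). wait_for_next_diff() is a
 * hypothetical placeholder that blocks until another diff arrives and
 * is applied to the home copy. */
#include <stdbool.h>

#define P 8
typedef struct { int v[P]; } vts;

typedef struct {
    unsigned char page[4096];
    vts           version;     /* advanced each time a diff is applied at the home */
} home_page;

extern void wait_for_next_diff(home_page *hp);

/* Componentwise <=: the home copy incorporates everything the
 * requester needs when required <= current. */
static bool vts_leq(const vts *required, const vts *current)
{
    for (int k = 0; k < P; k++)
        if (required->v[k] > current->v[k]) return false;
    return true;
}

/* Serve a page fetch only once the home copy's version dominates the
 * version required by the requester's acquire (happened-before order). */
const unsigned char *serve_page_fetch(home_page *hp, const vts *required)
{
    while (!vts_leq(required, &hp->version))
        wait_for_next_diff(hp);
    return hp->page;
}
```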

Optimizations to Page-Based DSM Systems: lock acquirer prediction; pre-diffing (for LRC); prefetching; hardware support (automatic updates, protocol controllers); adaptation to sharing patterns (single vs. multiple writers, invalidates vs. updates); home migration (for HLRC).

Other Software DSM Systems. Fine grained (cache-block size): Shasta and BlizzardS. Medium grained (object size): CRL, Shared Regions, and Orca. Coarse grained (page size): IVY, Munin, TreadMarks, HLRC, Cashmere, CVM, ADSM, AEC*.