
1 Shared Memory Consistency Models

2 Quiz (1)  Let’s define shared memory

3 We often use figures like this…  But perhaps shared memory is not about how CPUs/memory are wired…  [figure: CPUs connected to a single shared memory]

4 Is this shared memory?  [figure: CPUs connected to multiple memory modules through a crossbar switch]

5 And we have caches, too  Is this still a “shared” memory?  [figure: each CPU with its own cache ($) in front of the memory]

6 Observation  Defining shared memory in terms of how CPUs and memory are physically organized does not seem feasible  Moreover, it is not necessary either, at least from the programs’ point of view

7 From the programs’ point of view  What matters is the behavior of the memory as observed by programs  Vaguely: if a value written by one process is seen by another process, they share memory, no matter how this behavior is implemented  We try to define shared memory along this idea

8 Defining shared memory by its behavior  We try to define the possible behaviors (i.e., the outcomes of read operations) of a memory system in the presence of processes concurrently accessing it  We call such a specification the “consistency model” of the shared memory

9 But why are we bothered? (1)  Otherwise we can never (formally) reason about the correctness of shared-memory programs  An implementation of a shared memory (whether in HW or SW) needs such a definition, too, to draw the boundary between legitimate optimizations and illegal ones

10 But why are we bothered? (2)  What we (most of us) consider “the natural definition” of shared memory turns out to be very difficult to implement efficiently: caches (replicas) make the implementation far from trivial, and many optimizations violate the natural behavior  Most parts of most shared-memory programs can work with more relaxed behaviors

11 But why are we bothered? (3)  Therefore many definitions of consistency models have been invented and implemented  They are called relaxed consistency models, relaxed memory models, etc.

12 Sequential consistency  The first “formally defined” behavior of shared memory, due to Lamport: Lamport, "How to Make a Multiprocessor Computer That Correctly Executes Multiprocess Programs," IEEE Trans. Computers, Vol. C-28, No. 9, Sept. 1979, pp. 690-691.  Presumably most of us consider it natural  Before defining it, let’s see how natural it is

13 Quiz (2)  What are the possible outputs? List all.
Initially: x = 0; y = 0;
P: x = 1; printf(“y = %d\n”, y);
Q: y = 1; printf(“x = %d\n”, x);
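As a concrete (hedged) illustration, here is a minimal POSIX-threads version of this quiz program; the function names thread_p and thread_q are mine, not from the slides. It deliberately uses plain, unsynchronized variables, so on real hardware it may print outcomes that the following slides show are forbidden under sequential consistency.

#include <pthread.h>
#include <stdio.h>

int x = 0, y = 0;                 /* shared, intentionally unsynchronized */

void *thread_p(void *arg) {       /* P: x = 1; print y */
    (void)arg;
    x = 1;
    printf("y = %d\n", y);
    return NULL;
}

void *thread_q(void *arg) {       /* Q: y = 1; print x */
    (void)arg;
    y = 1;
    printf("x = %d\n", x);
    return NULL;
}

int main(void) {
    pthread_t p, q;
    pthread_create(&p, NULL, thread_p, NULL);
    pthread_create(&q, NULL, thread_q, NULL);
    pthread_join(p, NULL);
    pthread_join(q, NULL);
    return 0;
}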

14 Which of the following four outcomes are possible?  (x, y) = (0, 0), (0, 1), (1, 0), or (1, 1)

15 (0, 0) seems impossible…
P: x = 1; read y;
Q: y = 1; read x;
Possible orderings (outcome written as (x read by Q, y read by P)):
x = 1; read y; y = 1; read x;  → (1, 0)
x = 1; y = 1; read y; read x;  → (1, 1)
x = 1; y = 1; read x; read y;  → (1, 1)
y = 1; x = 1; read y; read x;  → (1, 1)
y = 1; x = 1; read x; read y;  → (1, 1)
y = 1; read x; x = 1; read y;  → (0, 1)

16 Or more concisely,
P: x = 1; read y;
Q: y = 1; read x;
If P reads 0 from y, then Q’s y = 1 comes after P’s read of y. The only possible sequence in this case is: x = 1; read y; y = 1; read x, so Q reads x = 1
Thus (x, y) = (0, 0) cannot happen

17 By the way,  This is the basis of a classical mutual exclusion algorithm found in OS textbooks
/* Entry section for P1 */  Q1 := True; TURN := 1; wait while Q2 and TURN = 1;
/* Exit section for P1 */   Q1 := False;
/* Entry section for P2 */  Q2 := True; TURN := 2; wait while Q1 and TURN = 2;
/* Exit section for P2 */   Q2 := False;

18  Somewhat outdated material: it no longer works under relaxed models, and today’s CPUs support more straightforward ways to implement mutual exclusion (compare-and-swap, LL/SC, etc.), as sketched below
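As a hedged sketch of those “more straightforward ways”, here is a minimal spinlock built on compare-and-swap using C11 <stdatomic.h>; the names lock_word, spin_lock, and spin_unlock are mine, and a production lock would also add backoff, fairness, etc.

#include <stdatomic.h>

atomic_int lock_word = 0;            /* 0 = free, 1 = held */

void spin_lock(void) {
    int expected = 0;
    /* atomically: if lock_word == 0, set it to 1; otherwise retry */
    while (!atomic_compare_exchange_weak(&lock_word, &expected, 1))
        expected = 0;                /* a failed CAS overwrites 'expected' */
}

void spin_unlock(void) {
    atomic_store(&lock_word, 0);     /* default ordering is seq_cst */
}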

19 Back to the subject  The assumption underlying the above discussion is the very definition of “sequential consistency”

20 Definition of sequential consistency (preliminary)  Processes access memory by issuing:
a = x /* write x to variable a */
a /* read from variable a */
An execution of a program generates events of the following two kinds:
WRITE_P(a, x) /* P writes x to variable a */
READ_P(a, x) /* P reads x from variable a */
We use “processes” and “processors” interchangeably

21 Definition  A shared memory is sequentially consistent (SC) iff, for any execution, there is a total order < among all READ/WRITE events such that:
if a process P performs e1 before e2, then e1 < e2 (the order preserves the program order)
for each READ_P(a, x), if WRITE_Q(a, y) is the last write to a preceding it in the total order, then x = y (a read returns the value of the last write)

22 Informally, it says:  to reason about the possible outcomes of the program, interleave all reads/writes in all possible ways and assume each read gets the value of the last write to the location it reads (see the sketch below)  [figure: P’s accesses and Q’s accesses interleaved into one sequence]
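To make the “interleave in all possible ways” reading concrete, the following small C program (my own illustration, not from the slides) enumerates every interleaving of P’s and Q’s two operations that preserves program order, simulates it, and prints the resulting (x read by Q, y read by P) pair; (0, 0) never appears, matching slide 15.

#include <stdio.h>

/* ops: 0 = P writes x, 1 = P reads y, 2 = Q writes y, 3 = Q reads x */
static void simulate(const int order[4]) {
    int x = 0, y = 0, p_read_y = -1, q_read_x = -1;
    for (int i = 0; i < 4; i++) {
        switch (order[i]) {
        case 0: x = 1;        break;
        case 1: p_read_y = y; break;
        case 2: y = 1;        break;
        case 3: q_read_x = x; break;
        }
    }
    printf("(x, y) = (%d, %d)\n", q_read_x, p_read_y);
}

/* enumerate interleavings that keep 0 before 1 and 2 before 3 (program order) */
static void interleave(int order[4], int pos, int p_done, int q_done) {
    if (pos == 4) { simulate(order); return; }
    if (p_done < 2) { order[pos] = p_done;     interleave(order, pos + 1, p_done + 1, q_done); }
    if (q_done < 2) { order[pos] = 2 + q_done; interleave(order, pos + 1, p_done, q_done + 1); }
}

int main(void) {
    int order[4];
    interleave(order, 0, 0, 0);   /* prints the six sequentially consistent outcomes */
    return 0;
}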

23 So far so good  We will see that a reasonable optimization easily breaks SC  Let’s assume we are implementing a shared-memory multiprocessor with two CPUs, each with a cache  [figure: two CPUs, each with a cache ($), connected to memory]

24  Recall the previous program and assume both CPUs cache x and y (main memory is not important in this example)  [figure: both caches hold x=0, y=0]

25  P writes 1 to x. It will need to update (or invalidate) the other cache  [figure: P’s cache: x=1, y=0; Q’s cache: x=0, y=0]

26  A processor does not want to block while the update/invalidation is in progress (a reasonable optimization for an architect)  P may then read 0 from y in its cache  [figure: P’s cache: x=1, y=0; Q’s cache: x=0, y=0]

27  Q may experience a similar sequence and read 0 from x in its cache  [figure: P’s cache: x=1, y=0; Q’s cache: x=0, y=1]

28  We ended up with both processors reading zeros  This violates SC  [figure: both caches eventually hold x=1, y=1, yet P read 0 and Q read 0]

29 Looking back (1)
P writes 1 to x_P (its cached copy)
P sends an update msg to Q
P reads 0 from y_P
Q writes 1 to y_Q
Q sends an update msg to P
Q reads 0 from x_Q
P receives the update msg and writes 1 to y_P
Q receives the update msg and writes 1 to x_Q

30 Looking back (2)  In intuitive terms, “a write is not atomic”, because a single write must update multiple locations (caches)  The definition of SC (a total order among R/W events) can be interpreted as saying “a write is atomic”

31 What if we do not have caches?  Assume there are no caches, but there are multiple memory modules  Assume there is no single bus that serializes every access  [figure: P and Q accessing separate memory modules holding x=0 and y=0]

32 How to fix it (one possibility)
A write by processor P first gets exclusive access to the bus
P sends an update/invalidate msg to the other cache
The other cache replies with an acknowledgement after updating the variable
P blocks (does not issue further memory accesses) until it receives the acknowledgement
P updates its cache and releases the bus
Essentially, really serialize all accesses

33 Illustrated  During (1) and (2), P blocks (stalls), and the bus is not granted to other accesses  [figure: P sends (1) update/invalidate to Q; Q replies with (2) ack]  A pseudocode sketch of this write path follows
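A minimal pseudocode sketch of the write path of slide 32, assuming hypothetical primitives acquire_bus, send_update, wait_for_ack, cache_write, and release_bus (all names are mine and purely illustrative, not a real API):

/* Blocking write on processor P under the fix of slide 32 (pseudocode). */
void sc_write(int var, int value) {
    acquire_bus();                        /* exclusive access to the bus              */
    send_update(OTHER_CPU, var, value);   /* (1): update/invalidate the other cache   */
    wait_for_ack();                       /* (2): block, issue no further accesses    */
    cache_write(var, value);              /* now update the local cache               */
    release_bus();                        /* all accesses are effectively serialized  */
}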

34 Can you prove this implements SC?  For simplicity, assume:
no main memory (caches only)
data are always in both caches
an update protocol: a write sends the new value to the other cache
reads never miss: a read immediately returns the value currently in the cache

35 Outline  Model the protocol as a distributed-memory (asynchronous message passing) program
define the relevant events (acquire_bus, recv_update, recv_ack, release_bus, read); call them micro-events
an execution of the protocol generates a total order of such micro-events
from that execution, construct a total order of READs/WRITEs satisfying the definition of SC

36 Relaxed Memory Consistency Models  Many “weaker” consistency models have been proposed, for multiprocessors, software shared memory, programming languages, file systems, ...  They are generically called “relaxed memory consistency”

37 Models in the literature  processor consistency  total store order, partial store order, relaxed memory ordering  weak consistency  release consistency  lazy release consistency ...

38 How they are generally defined
Which memory accesses may be reordered: a processor Q may observe another processor P’s writes in an order different from the one in which P issued them
Whether writes are atomic: processors Q and R may observe another processor P’s writes differently from each other

39 Memory barrier  Processors not supporting SC usually have separate “memory barrier” (or “fence”) instructions to enforce ordering/completion of memory accesses
sfence, lfence, mfence (Pentium)
membar (SPARC)
wmb, mb (Alpha)
etc.

40 Variants  Different instructions enforce ordering between different kinds (load/store) of memory accesses  e.g., SPARC “membar #StoreLoad” ensures following loads do not bypass previous stores  e.g., Pentium “lfence” ensures following loads do not bypass previous loads

41 Semantics of memory barrier  [figure: a stream of R/W accesses … membar … R/W accesses]
if processor P issues “a ; membar ; b” in this order, another processor Q will observe a before b
all membar events are totally ordered and the order preserves the program order

42 In implementation terms  membar stalls the processor until all previous accesses have completed, e.g., until in-transit load instructions have returned values and in-transit cache invalidations have been acknowledged
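As a minimal sketch (my own, using portable C11 atomics rather than a specific ISA), inserting a full barrier between the write and the read in each thread of the earlier quiz program rules out the (0, 0) outcome; on x86, compilers typically emit mfence for the seq_cst fence.

#include <stdatomic.h>
#include <stdio.h>

atomic_int x = 0, y = 0;

void p_code(void) {                                   /* P: x = 1; membar; read y */
    atomic_store_explicit(&x, 1, memory_order_relaxed);
    atomic_thread_fence(memory_order_seq_cst);        /* the "membar" */
    printf("y = %d\n", atomic_load_explicit(&y, memory_order_relaxed));
}

void q_code(void) {                                   /* Q: y = 1; membar; read x */
    atomic_store_explicit(&y, 1, memory_order_relaxed);
    atomic_thread_fence(memory_order_seq_cst);
    printf("x = %d\n", atomic_load_explicit(&x, memory_order_relaxed));
}

Run p_code and q_code on two threads (as in the earlier pthreads sketch): every execution now prints at least one 1.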

43 Memory consistency for programming languages  So far we have been dealing with the semantics of “processors” (or machine languages)  Ideally, all programming languages should define precise consistency models too, but they rarely do

44 Today’s common practice (1): C/C++  “you know which expressions access memory”: *p, p->x, p[0], ...  but they are not actually trivial at all: global variables, non-pointer structs, optimizations eliminating memory accesses  Programmers somehow control/predict them by inserting volatile etc. (see the sketch below)
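A tiny example (mine, not from the slides) of an optimization eliminating memory accesses, and of volatile as the blunt tool used to suppress it; note that volatile forces the reload but is not a synchronization primitive and does not order other accesses.

int done = 0;                       /* set to 1 by another thread */

void wait_plain(void) {
    /* the compiler may read 'done' once, keep it in a register,
       and turn this into an infinite loop */
    while (!done) { }
}

volatile int done_v = 0;

void wait_volatile(void) {
    while (!done_v) { }             /* re-reads memory on every iteration */
}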

45 Today’s common practice (2): most high-level languages  Do not write programs for which subtle consistency semantics matter; use only the supported idioms (mutex, cond_var, ...) for synchronization, to guarantee “there are no races”  What if there are races?  undefined (rarely stated explicitly)

46 High-level languages  What are races? conflicting accesses to the same data  What are conflicting accesses? accesses not separated by the supported synchronization idioms (unlock -> lock, cond_signal -> cond_wait), where at least one of them is a write (see the sketch below)
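A small C/pthreads illustration (my own) of the distinction: the first increment races when called from two threads, because the conflicting accesses are not separated by an unlock -> lock pair; the second does not.

#include <pthread.h>

long counter = 0;
pthread_mutex_t m = PTHREAD_MUTEX_INITIALIZER;

void racy_increment(void) {
    counter++;                      /* write not separated by synchronization: a race */
}

void synchronized_increment(void) {
    pthread_mutex_lock(&m);         /* unlock -> lock separates conflicting accesses */
    counter++;
    pthread_mutex_unlock(&m);
}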

47 The third way: Java  We will see this in the last week’s presentation  Java has “synchronized” (lock) and wait/notify (condition variables), used for most synchronization operations  At the same time, Java also defines the behavior under races (its memory consistency model); discussion in the community revealed how intricate it is

