Presentation transcript:

Revisiting “Multiprocessors Should Support Simple Memory Consistency Models”
Mark D. Hill, Multifacet Project (www.cs.wisc.edu/multifacet)
Computer Sciences Department, University of Wisconsin—Madison
October 2003

High- vs. Low-Level Memory Model Interface
High-level software interfaces (C w/ Posix, Java w/ Threads, HPF) sit above multithreaded hardware. Most of this workshop concerns the relaxed high-level (HL) interface on the software side; this talk concerns the relaxed hardware (HW) interface below it.

Outline
  Subroutine Call: Value Prediction & Memory Model Subtleties
  Review Original Paper [Computer, Dec. 1998]
    Commercial Memory Model Classes
    Performance Similarities & Differences
    Predictions & Recommendation
  Revisit in 2003
    Revisiting 1998 Predictions
    “SC + ILP = RC?” Paper
    Revisiting Commercial Memory Model Classes
    Analysis, Predictions, & Recommendation

Correctly Implementing Value Prediction in Microprocessors that Support Multithreading or Multiprocessing
Milo M.K. Martin, Daniel J. Sorin, Harold W. Cain, Mark D. Hill, and Mikko H. Lipasti
Computer Sciences Department and Department of Electrical and Computer Engineering, University of Wisconsin—Madison

Value Prediction
Predict the value of an instruction, speculatively execute with this value, and later verify that the prediction was correct.
Example: value-predict a load that misses in the cache, execute the instructions dependent on the value-predicted load, and verify the predicted value when the load data arrives.
Without concurrency, simple verification is OK: compare the actual value to the predicted value.
The value prediction literature has ignored concurrency.
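As a rough software analogy of this predict / speculate / verify loop (not the hardware mechanism itself), the C++ sketch below uses a hypothetical predict_value() and a stand-in slow_load_from_memory(); both names are illustrative.

  #include <cstdio>

  // Toy model of value prediction for one load: guess the value, run the
  // dependent work speculatively, then verify when the real data arrives.
  int predict_value()         { return 60; }  // hypothetical predictor (e.g., last value seen)
  int slow_load_from_memory() { return 80; }  // stand-in for the cache-missing load

  int main() {
      int predicted = predict_value();
      int speculative_result = predicted * 2;  // dependent work, done early

      int actual = slow_load_from_memory();    // load data finally arrives
      if (actual == predicted) {
          std::printf("prediction correct: keep %d\n", speculative_result);
      } else {
          std::printf("mispredicted (%d vs %d): squash and replay\n", predicted, actual);
          speculative_result = actual * 2;     // re-execute the dependent work
      }
      return 0;
  }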

Informal Example of Problem, part 1
Student #2 predicts that grades are on bulletin board B and, based on that prediction, assumes a score of 60.
Bulletin Board B (Grades for Class):
  Student ID 1: score 75
  Student ID 2: score 60
  Student ID 3: score 85

Informal Example of Problem, part 2
The professor now posts the actual grades for this class (Student #2 actually got a score of 80) and announces to the students that grades are on board B.
Bulletin Board B (Grades for Class), old score replaced by actual score:
  Student ID 1: 75 -> 50
  Student ID 2: 60 -> 80
  Student ID 3: 85 -> 70

Informal Example of Problem, part 3
Student #2 sees the professor’s announcement and says, “I made the right prediction (bulletin board B), and my score is 60”!
Actually, Student #2’s score is 80. What went wrong here?
Intuition: the predicted value came from the future.
The problem is concurrency: the interaction between student and professor, just like multiple threads, processors, or devices (e.g., SMT, SMP, CMP).

Linked List Example of Problem (initial state)
A linked list with a single writer and a single reader; no synchronization (e.g., locks) is needed.
Initial state of the list: head points to node A, with A.data = 42 and A.next = null. Node B is not yet initialized (stale B.data = 60, B.next = null).

Linked List Example of Problem (Writer)
The writer sets up node B and then inserts it into the list.
Code for writer thread:
  W1: store mem[B.data] <- 80    (setup node)
  W2: load  reg0 <- mem[Head]    (setup node)
  W3: store mem[B.next] <- reg0  (setup node)
  W4: store mem[Head] <- B       (insert)
After the insert, head points to B, with B.data = 80 and B.next pointing to A (A.data = 42, A.next = null).

Linked List Example of Problem (Reader)
The reader cache misses on head and value predicts head = B, cache hits on B.data and reads the stale value 60, and later “verifies” the prediction of B. Is this execution legal?
Code for reader thread (predict head = B):
  R1: load reg1 <- mem[Head]  = B
  R2: load reg2 <- mem[reg1]  = 60
(In the state the reader observes, B.data is still 60 and B.next is null; A.data = 42, A.next = null.)
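The slide’s pseudo-code translates roughly into the C++ sketch below. It mirrors the slide’s assumption of an SC machine and deliberately uses no synchronization, so under the ISO C++ memory model this unsynchronized sharing would itself be a data race; the names (Node, writer, reader) simply mirror the slide.

  struct Node {
      int   data;
      Node* next;
  };

  Node  A{42, nullptr};   // node already in the list
  Node  B{60, nullptr};   // "uninitialized" node: stale data value 60
  Node* head = &A;        // single shared head pointer (no synchronization, as on the slide)

  // Writer thread: set up node B, then publish it (W1-W4).
  void writer() {
      B.data = 80;          // W1: store mem[B.data] <- 80
      Node* r0 = head;      // W2: load  reg0 <- mem[Head]
      B.next = r0;          // W3: store mem[B.next] <- reg0
      head   = &B;          // W4: store mem[Head] <- B
  }

  // Reader thread: read the head pointer, then the data it points to (R1-R2).
  int reader() {
      Node* r1 = head;      // R1: load reg1 <- mem[Head]
      return r1->data;      // R2: load reg2 <- mem[reg1]
  }

  int main() {
      writer();                        // run sequentially here only to show intent;
      return reader() == 80 ? 0 : 1;   // the slides are about concurrent interleavings
  }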

Why This Execution Violates SC
Sequential consistency is the simplest memory consistency model: there must exist a total order of all memory operations, and that total order must respect program order at each processor.
Our example execution has a cycle, so no such total order exists: R2 returns the stale value 60, so R2 must precede W1; W1 precedes W4 in program order; W4 produces the value B that R1 reads, so W4 must precede R1; and R1 precedes R2 in program order.

Trying to Find a Total Order
What orderings are enforced in this example?
Code for writer thread:
  W1: store mem[B.data] <- 80    (setup node)
  W2: load  reg0 <- mem[Head]    (setup node)
  W3: store mem[B.next] <- reg0  (setup node)
  W4: store mem[Head] <- B       (insert)
Code for reader thread:
  R1: load reg1 <- mem[Head]
  R2: load reg2 <- mem[reg1]

Program Order
Program order must be enforced within each thread: W1, W2, W3 (setup node) before W4 (insert) in the writer, and R1 before R2 in the reader.
Code for writer thread:
  W1: store mem[B.data] <- 80
  W2: load  reg0 <- mem[Head]
  W3: store mem[B.next] <- reg0
  W4: store mem[Head] <- B
Code for reader thread:
  R1: load reg1 <- mem[Head]
  R2: load reg2 <- mem[reg1]

Data Order
If we predict that R1 returns the value B, we can violate SC.
Code for reader thread (with observed values):
  R1: load reg1 <- mem[Head]  = B
  R2: load reg2 <- mem[reg1]  = 60
Code for writer thread:
  W1: store mem[B.data] <- 80    (setup node)
  W2: load  reg0 <- mem[Head]    (setup node)
  W3: store mem[B.next] <- reg0  (setup node)
  W4: store mem[Head] <- B       (insert)

Value Prediction and Sequential Consistency
Key: value prediction reorders dependent operations, specifically read-to-read data dependence order, executing dependent operations out of program order.
This applies to almost all consistency models, namely the models that enforce data dependence order.
Such implementations must detect when this happens and recover, similar to other optimizations that complicate SC.

How to Fix SC Implementations
Address-based detection of violations: the student watches board B between prediction and verification. This is like existing techniques for out-of-order SC processors: track stores from other threads, and if an address matches a speculative load, there is a possible violation.
Value-based detection of violations: the student checks the grade again at verification. Also an existing idea: replay all speculative instructions at commit, which can be done with dynamic verification (e.g., DIVA).
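A conceptual sketch of the value-based check, not a model of any particular pipeline: each speculatively used load value is re-read at commit and compared, and a mismatch triggers recovery. SpeculativeLoad and squash_and_replay() are illustrative names.

  #include <cstdio>
  #include <vector>

  // One speculatively executed load: the address it read and the value the
  // speculative work actually consumed (predicted or read early).
  struct SpeculativeLoad {
      const int* address;
      int        value_used;
  };

  // Illustrative recovery hook: flush speculative state and re-execute.
  void squash_and_replay() { std::printf("squash and replay\n"); }

  // Value-based verification at commit: re-read every address and compare.
  // If any value changed (e.g., another thread stored to it), the speculation
  // may not match any legal execution, so recover.
  void verify_at_commit(const std::vector<SpeculativeLoad>& loads) {
      for (const SpeculativeLoad& l : loads) {
          if (*l.address != l.value_used) {
              squash_and_replay();
              return;
          }
      }
      // All values still match: the speculative results may commit.
  }

  int main() {
      int shared = 60;
      std::vector<SpeculativeLoad> log{{&shared, 60}};
      shared = 80;             // models a store by another thread before commit
      verify_at_commit(log);   // mismatch detected -> squash_and_replay()
      return 0;
  }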

Relaxed Consistency Models
Relax some orderings between reads and writes, which allows HW/SW optimizations; software must add memory barriers to get ordering.
Intuition: this should make value prediction easier. Our intuition is wrong …
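For a concrete picture of "software must add memory barriers", here is a small ISO C++ sketch of a flag handoff using explicit fences; payload, flag, producer, and consumer are made-up names for the example.

  #include <atomic>

  int payload = 0;              // ordinary data
  std::atomic<int> flag{0};     // publication flag

  void producer() {
      payload = 42;
      // Barrier: order the data write before the flag write.
      std::atomic_thread_fence(std::memory_order_release);
      flag.store(1, std::memory_order_relaxed);
  }

  int consumer() {
      while (flag.load(std::memory_order_relaxed) == 0) { /* spin */ }
      // Barrier: order the flag read before the data read.
      std::atomic_thread_fence(std::memory_order_acquire);
      return payload;           // guaranteed to see 42
  }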

Weakly Ordered Consistency Models
Relax orderings unless a memory barrier lies between the operations.
Examples: SPARC RMO, IA-64, PowerPC, Alpha.
A subtle point that affects value prediction: does the model enforce data dependence order?

Relaxed Models that Enforce Data Dependence
Examples: SPARC RMO, PowerPC, and IA-64.
Code for writer thread:
  W1:  store mem[B.data] <- 80    (setup node)
  W2:  load  reg0 <- mem[Head]    (setup node)
  W3:  store mem[B.next] <- reg0  (setup node)
  W3b: Memory Barrier
  W4:  store mem[Head] <- B       (insert)
The memory barrier orders W4 after W1, W2, and W3.
Code for reader thread:
  R1: load reg1 <- mem[Head]
  R2: load reg2 <- mem[reg1]
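In ISO C++ terms this writer/reader pair corresponds roughly to a release store of Head (playing the role of W3b) and a dependency-ordered load on the reader side; memory_order_consume was designed to capture the kind of data-dependence ordering RMO, PowerPC, and IA-64 provide, although compilers in practice strengthen it to acquire. A sketch, not the ISA code itself:

  #include <atomic>

  struct Node {
      int   data;
      Node* next;
  };

  Node A{42, nullptr};
  Node B{60, nullptr};
  std::atomic<Node*> head{&A};

  void writer() {
      B.data = 80;                                       // W1
      B.next = head.load(std::memory_order_relaxed);     // W2 + W3
      head.store(&B, std::memory_order_release);         // W3b + W4: setup ordered before publish
  }

  int reader() {
      // Dependency-ordered load: the address of the next load depends on this
      // value, which is the ordering RMO/PowerPC/IA-64 enforce in hardware.
      Node* n = head.load(std::memory_order_consume);    // R1
      return n->data;                                    // R2: sees 80 if n == &B
  }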

Violating Consistency Model
Simple value prediction can break RMO, PPC, and IA-64.
How? By relaxing the dependence order between reads.
The same issues arise as for SC and PC.

Solutions to Problem
Don’t enforce dependence order (add memory barriers): this changes the architecture and breaks backward compatibility, so it is not practical.
Enforce SC or PC: potential performance loss, but more efficient solutions are possible.

Models that Don’t Enforce Data Dependence
Example: Alpha. Requires an extra memory barrier (between R1 & R2).
Code for writer thread:
  W1:  store mem[B.data] <- 80    (setup node)
  W2:  load  reg0 <- mem[Head]    (setup node)
  W3:  store mem[B.next] <- reg0  (setup node)
  W3b: Memory Barrier
  W4:  store mem[Head] <- B       (insert)
Code for reader thread:
  R1:  load reg1 <- mem[Head]
  R1b: Memory Barrier
  R2:  load reg2 <- mem[reg1]
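The C++ analogue of the Alpha case: dependence ordering cannot be relied upon, so the reader's first load needs acquire semantics (the R1b barrier). Again a sketch under the same illustrative names as above:

  #include <atomic>

  struct Node {
      int   data;
      Node* next;
  };

  Node A{42, nullptr};
  Node B{60, nullptr};
  std::atomic<Node*> head{&A};

  void writer() {
      B.data = 80;
      B.next = head.load(std::memory_order_relaxed);
      head.store(&B, std::memory_order_release);         // W3b + W4
  }

  int reader() {
      // Acquire plays the role of the extra R1b barrier: even though R2 is
      // data-dependent on R1, an Alpha-like model does not order them for us.
      Node* n = head.load(std::memory_order_acquire);    // R1 + R1b
      return n->data;                                    // R2
  }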

Issues in Not Enforcing Data Dependence
Upside: value prediction works correctly as-is; no detection mechanism is necessary, and no additional memory barriers need to be added for VP.
Downside: the additional memory barriers (such as R1b) sit in non-intuitive locations, an added burden on the programmer.

Summary of Memory Model Issues
  SC
  Relaxed Models
    PC: IA-32, SPARC TSO
    Weakly Ordered Models
      Enforce data dependence: IA-64, SPARC RMO
      Do NOT enforce data dependence: Alpha

Conclusions
Naïve value prediction can violate the consistency model, with subtle issues for each class of memory model.
Solutions for SC & PC require a detection mechanism; use existing mechanisms for enhancing SC performance.
Solutions for more relaxed memory models: enforce a stronger model.

Outline
  Subroutine Call: Value Prediction & Memory Model Subtleties
  Review Original Paper [Computer, Dec. 1998]
    Commercial Memory Model Classes
    Performance Similarities & Differences
    Predictions & Recommendation
  Revisit in 2003
    Revisiting 1998 Predictions
    “SC + ILP = RC?” Paper
    Revisiting Commercial Memory Model Classes
    Analysis, Predictions, & Recommendation

1998 Commercial Memory Model Classes
Sequential Consistency (SC): MIPS/SGI, HP PA-RISC
Processor Consistency (PC), which relaxes write → read order: Intel x86 (a.k.a. IA-32), Sun TSO
Relaxed Consistency (RC), which relaxes all orderings but adds fences: DEC Alpha, IBM PowerPC, Sun RMO (no implementations)
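The write → read relaxation that separates PC from SC is the classic store-buffering litmus test; a hedged C++ sketch follows. With release/acquire, which like PC/TSO lets each load bypass the earlier store to a different location, both loads may return 0; making all four accesses memory_order_seq_cst forbids that outcome, which is the SC behavior.

  #include <atomic>
  #include <cstdio>
  #include <thread>

  std::atomic<int> x{0}, y{0};
  int r1 = -1, r2 = -1;

  void thread1() {
      x.store(1, std::memory_order_release);    // store to x ...
      r1 = y.load(std::memory_order_acquire);   // ... later load of y may bypass it
  }

  void thread2() {
      y.store(1, std::memory_order_release);
      r2 = x.load(std::memory_order_acquire);
  }

  int main() {
      std::thread t1(thread1), t2(thread2);
      t1.join(); t2.join();
      // With release/acquire (PC/TSO-like), r1 == 0 && r2 == 0 is a legal outcome;
      // with memory_order_seq_cst on all four accesses it is forbidden (SC-like).
      std::printf("r1=%d r2=%d\n", r1, r2);
      return 0;
  }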

With All Models, Hardware Can Use
Coherent caches, non-binding prefetches, and simultaneous & vertical multithreading.
With speculative execution: allow “expected” misses to prefetch, and speculatively perform all reads & writes.
So what’s different?

Performance Difference
RC/PC/SC can do the same optimizations, but RC/PC can sometimes commit early, while SC can lose performance from undoing execution on a (suspected) model violation and from stalls due to full instruction windows, etc.
Performance over SC [Ranganathan et al. 1997]: 11% for PC, 20% for RC; closer if SC uses their Speculative Retirement.

1998 Predictions & Recommendation
My performance gap predictions:
  + Longer (relative) memory latency
  Larger caches, bigger windows, etc.
  New inventions
My recommendation:
  Implement SC (or PC)
  Keep the interface simple
  Innovate in the implementation

Outline
  Subroutine Call: Value Prediction & Memory Model Subtleties
  Review Original Paper [Computer, Dec. 1998]
    Commercial Memory Model Classes
    Performance Similarities & Differences
    Predictions & Recommendation
  Revisit in 2003
    Revisiting 1998 Predictions
    “SC + ILP = RC?” Paper
    Revisiting Commercial Memory Model Classes
    Analysis, Predictions, & Recommendation

Revisiting Predictions
Evolutionary predictions (+ longer relative memory latency, larger caches, larger instruction windows): happened, but on balance made the gap bigger.
New inventions:
  Run-ahead & helper threads: wonderful prefetching.
  SMT commercialized: many threads per processor.
  Chip Multiprocessors (CMPs): many threads per chip.
  “SC + ILP = RC?”: can close the gap.
A relaxed HW memory model offers little more performance.

2003 Commercial Memory Model Classes
Sequential Consistency (SC): MIPS/SGI, HP PA-RISC
Processor Consistency (PC), which relaxes write → read order: Intel x86 (a.k.a. IA-32), Sun TSO
Relaxed Consistency (RC), which relaxes all orderings but adds fences: DEC Alpha, IBM PowerPC, Sun RMO (no implementations), + Intel IPF (IA-64)

Current Analysis
Architectures changed mostly for business reasons; no one substantially changed model class.
Clearly, all three classes work; e.g., generating fences is not too bad.

Current Options
Assume a relaxed HLL model; that leaves three HW model options.
Expose SC/PC & implement SC/PC: add SC/PC mechanisms & speculate! Somewhat complex, but HW implementers & verifiers know what correct is.
Expose relaxed & implement relaxed: many HW implementers & verifiers don’t understand relaxed. More performance? Deep speculation requires HW to pass fences; run-ahead & throw it all away? Speculative execution with SC/PC-like mechanisms?
Expose relaxed & implement SC/PC: implement fences as no-ops; use SC/PC mechanisms and speculate!

Predictions & Recommendation
Predictions: longer (relative) memory latency, only partially compensated by caches, etc.; hardware will speculate further without larger windows (run-ahead); it will need to speculate past synchronization & fences; use CMPs to get many outstanding misses per chip.
Recommendations (unrepentant): implement SC (or PC), keep the interface simple, and innovate in the implementation.

Outline
  Subroutine Call: Value Prediction & Memory Model Subtleties
  Review Original Paper [Computer, Dec. 1998]
    High- vs. Low-Level Memory Models
    Commercial Memory Model Classes
    Performance Similarities & Differences
    Predictions & Recommendation
  Revisit in 2003
    Revisiting 1998 Predictions
    “SC + ILP = RC?” Paper
    Revisiting Commercial Memory Model Classes
    Analysis, Predictions, & Recommendation