Dataflow: Passing the Token


Dataflow: Passing the Token
Arvind, Computer Science and Artificial Intelligence Lab, M.I.T.
ISCA 2005, Madison, WI, June 6, 2005

Inspiration: Jack Dennis
General-purpose parallel machines based on a dataflow graph model of computation.
Inspired all the major players in dataflow during the seventies and eighties, including Kim Gostelow and me @ UC Irvine.

Dataflow Machines
Static (mostly for signal processing):
  NEC - NEDIP and IPP
  Hughes, Hitachi, AT&T, Loral, TI, Sanyo
  M.I.T. Engineering model
  ...
Dynamic:
  Manchester ('81) - John Gurd
  M.I.T. - TTDA; Monsoon ('88)
  M.I.T./Motorola - Monsoon ('91) (8 PEs, 8 IS) - Greg Papadopoulos, Andy Boughton, Chris Joerg, Jack Costanza; shown at Supercomputing 91
  ETL - SIGMA-1 ('88) (128 PEs, 128 IS) - K. Hiraki, T. Shimada; the largest dataflow machine, ETL, Japan
  ETL - EM4 ('90) (80 PEs); EM-X ('96) (80 PEs) - S. Sakai, Y. Kodama; EM4 was a single-chip dataflow micro, 80-PE multiprocessor, ETL, Japan; shown at Supercomputing 96
  Sandia - EPS88, EPS-2
  IBM - Empire
Related machines: Burton Smith's Denelcor HEP, Horizon, Tera

Software Influences
  Parallel compilers - intermediate representations: DFG, CDFG (SSA, φ, ...); software pipelining (Keshav Pingali, G. Gao, Bob Rau, ...)
  Functional languages and their compilers
  Active Messages (David Culler)
  Compiling for FPGAs, ... (Wim Bohm, Seth Goldstein, ...)
  Synchronous dataflow - Lustre, Signal (Ed Lee @ Berkeley)

This talk is mostly about MIT work
  Dataflow graphs - a clean model of parallel computation
  Static Dataflow Machines - not general-purpose enough
  Dynamic Dataflow Machines - as easy to build as a simple pipelined processor
  The software view - the memory model: I-structures
  Monsoon and its performance
  Musings

Dataflow Graphs
Example: {x = a + b; y = b * 7 in (x - y) * (x + y)}
The graph has five numbered operators: 1: + (x = a + b), 2: *7 (y = b * 7), 3: - (x - y), 4: + (x + y), 5: * (the final product).
Values in dataflow graphs are represented as tokens <ip, p, v>: instruction pointer, port, and data (e.g., ip = 3, p = L).
An operator executes when all its input tokens are present; copies of the result token are distributed to the destination operators.
There is no separate control flow.
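The token-passing rule above can be sketched in a few lines of Python (a hypothetical encoding, not anything from the talk): instructions are numbered 1-5 as on the slide, and a token is a tuple (ip, p, v).

```python
# Minimal sketch of token-driven execution for the example graph
# {x = a + b; y = b * 7 in (x - y) * (x + y)}.
# A token is <ip, p, v>: instruction pointer, input port, value.

program = {
    # ip: (number of inputs, operation, destination <ip, port> list)
    1: (2, lambda a, b: a + b, [(3, 'L'), (4, 'L')]),  # x = a + b
    2: (1, lambda b: b * 7,    [(3, 'R'), (4, 'R')]),  # y = b * 7
    3: (2, lambda x, y: x - y, [(5, 'L')]),
    4: (2, lambda x, y: x + y, [(5, 'R')]),
    5: (2, lambda l, r: l * r, [('out', 'L')]),        # (x - y) * (x + y)
}

def run(initial_tokens):
    tokens = list(initial_tokens)   # no separate control flow: only tokens
    waiting = {}                    # ip -> {port: value} of tokens seen so far
    while tokens:
        ip, p, v = tokens.pop()
        if ip == 'out':
            return v
        arity, fn, dests = program[ip]
        slot = waiting.setdefault(ip, {})
        slot[p] = v
        if len(slot) == arity:      # all input tokens present: the operator fires
            result = fn(*[slot[q] for q in ('L', 'R') if q in slot])
            del waiting[ip]
            for dest in dests:      # copies of the result token to destinations
                tokens.append(dest + (result,))
    return None

# a = 3, b = 4: x = 7, y = 28, (x - y) * (x + y) = -735
print(run([(1, 'L', 3), (1, 'R', 4), (2, 'L', 4)]))  # prints -735
```

Note that the order in which tokens are popped does not affect the result; that determinacy is one reason the model is "clean".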

Dataflow Operators
A small set of dataflow operators can be used to define a general programming language:
  Fork - copies its input token to several destinations
  Primitive ops (e.g., +)
  Switch - a boolean control token steers the data token to the T or F output
  Merge - the control token selects which input's token is forwarded
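One common way to express these firing rules in code (my own Python encoding; the operator names follow the slide, everything else is an assumption):

```python
# switch: a boolean control token steers the data token to the T or F output.
def switch(control, data):
    return ('T', data) if control else ('F', data)

# merge: the control token selects which input's token is forwarded.
def merge(control, t_input=None, f_input=None):
    return t_input if control else f_input

# fork: copies of one token go to two destinations.
def fork(data):
    return (data, data)

assert switch(True, 42) == ('T', 42)
assert switch(False, 42) == ('F', 42)
assert merge(False, f_input=7) == 7
assert fork(1) == (1, 1)
```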

Well Behaved Schemas
One-in-one-out and self-cleaning: before execution the schema holds no tokens; one input token produces exactly one output token, leaving no tokens behind.
  Conditional schema: built from switch and merge around the branch graphs f and g, under predicate p
  Loop schema: predicate p and loop body f
  Bounded Loop
Well-behavedness is needed for resource management.
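As an illustration of one-in-one-out behavior, here is a sketch (hypothetical Python; predicate p and branch graphs f and g are plain functions) of the conditional schema built from switch and merge:

```python
def conditional(p, f, g, x):
    c = p(x)                                    # predicate produces the control token
    t_in, f_in = (x, None) if c else (None, x)  # switch: route x to the T or F side
    result = f(t_in) if c else g(f_in)          # only the taken branch fires
    return result                               # merge: exactly one token comes out

# One token in, one token out, nothing left behind (self-cleaning):
assert conditional(lambda v: v > 0, lambda v: v * 2, lambda v: -v, 5) == 10
assert conditional(lambda v: v > 0, lambda v: v * 2, lambda v: -v, -3) == 3
```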

Outline
  ✓ Dataflow graphs - a clean model of parallel computation
  Static Dataflow Machines - not general-purpose enough
  Dynamic Dataflow Machines - as easy to build as a simple pipelined processor
  The software view - the memory model: I-structures
  Monsoon and its performance
  Musings

Static Dataflow Machine: Instruction Templates

     Opcode   Destination 1   Destination 2
  1:   +          3L              4L
  2:   *7         3R              4R
  3:   -          5L
  4:   +          5R
  5:   *          out

Each template also has Operand 1 and Operand 2 slots guarded by presence bits: each arc in the graph has an operand slot in the program.

Static Dataflow Machine (Jack Dennis, 1973)
Instruction templates <op, dest1, dest2, p1, src1, p2, src2> are held in template memory; the machine receives tokens, updates templates, and sends result tokens <s1, p1, v1>, <s2, p2, v2> through a bank of functional units (FUs).
Many such processors can be connected together, and programs can be statically divided among the processors.
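A sketch of the static execution model (assumed Python encoding, not Dennis's notation): each template stores its operands in place, with a presence bit per slot, and fires when every slot is full. Node 2's second operand is the literal 7, pre-loaded into its slot.

```python
# Instruction templates for the example graph; None marks an empty operand
# slot (an unset presence bit), and literals are pre-loaded into their slots.
templates = {
    1: {'op': lambda a, b: a + b, 'slots': [None, None], 'dests': [(3, 0), (4, 0)]},
    2: {'op': lambda b, k: b * k, 'slots': [None, 7],    'dests': [(3, 1), (4, 1)]},
    3: {'op': lambda a, b: a - b, 'slots': [None, None], 'dests': [(5, 0)]},
    4: {'op': lambda a, b: a + b, 'slots': [None, None], 'dests': [(5, 1)]},
    5: {'op': lambda a, b: a * b, 'slots': [None, None], 'dests': []},
}

def send(dest, value, ready):
    """Write a value into an operand slot; enqueue the template when full."""
    t, port = dest
    templates[t]['slots'][port] = value
    if all(s is not None for s in templates[t]['slots']):
        ready.append(t)

def run(inputs):
    ready, results = [], {}
    for dest, v in inputs:
        send(dest, v, ready)
    while ready:
        t = ready.pop()
        tmpl = templates[t]
        v = tmpl['op'](*tmpl['slots'])      # fire: all presence bits are set
        if tmpl['dests']:
            for d in tmpl['dests']:
                send(d, v, ready)
        else:
            results[t] = v
    return results
```

run([((1, 0), 3), ((1, 1), 4), ((2, 0), 4)]) evaluates the graph for a = 3, b = 4 and returns {5: -735}. Because each arc has exactly one slot, a second wave of tokens could overwrite operands before they are consumed - precisely the model/implementation mismatch the talk turns to next.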

Static Dataflow: Problems/Limitations
Mismatch between the model and the implementation:
  The model requires unbounded FIFO token queues per arc, but the architecture provides storage for only one token per arc.
  The architecture does not ensure FIFO order in the reuse of an operand slot.
  The merge operator has a unique firing rule.
The static model does not support:
  Function calls
  Data structures
There is no easy solution in the static framework; dynamic dataflow provided a framework for solutions.

Outline
  ✓ Dataflow graphs - a clean model of parallel computation
  ✓ Static Dataflow Machines - not general-purpose enough
  Dynamic Dataflow Machines - as easy to build as a simple pipelined processor
  The software view - the memory model: I-structures
  Monsoon and its performance
  Musings

Dynamic Dataflow Architectures
Allocate instruction templates - i.e., a frame - dynamically to support each loop iteration and procedure call; termination detection is needed to deallocate frames.
The code can be shared if we separate the code and the operand storage.
A token becomes <fp, ip, port, data>, where fp is the frame pointer and ip the instruction pointer.

A Frame in Dynamic Dataflow
The program holds the instruction templates for the example graph (1: + with destinations 3L, 4L; 2: *7 with 3R, 4R; 3: - with 5L; 4: + with 5R; 5: * with out); the frame holds the operand values.
A token is <fp, ip, p, v>. Storage is needed for only one operand per operator.
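The waiting-matching step can be sketched like this (assumed Python encoding): the frame pointer in the tag keeps concurrent activations of the same shared code apart, and each two-input operator needs only one waiting slot per frame.

```python
# A token is <fp, ip, p, v>; the operand slot for a two-input instruction
# lives at frames[fp][ip], so the same code can run in many frames at once.
frames = {}   # fp -> {ip: (port, value)}: one waiting operand per operator

def match(token):
    """Return the (left, right) operand pair when this token completes one,
    else park the token in its frame slot and return None."""
    fp, ip, p, v = token
    slot = frames.setdefault(fp, {})
    if ip not in slot:             # first operand to arrive: wait in the frame
        slot[ip] = (p, v)
        return None
    q, w = slot.pop(ip)            # partner already present: fire
    return (v, w) if p == 'L' else (w, v)

# Two activations of the same instruction (ip = 1) in different frames:
assert match(('frame_A', 1, 'L', 3)) is None
assert match(('frame_B', 1, 'L', 10)) is None        # no collision with frame_A
assert match(('frame_A', 1, 'R', 4)) == (3, 4)
assert match(('frame_B', 1, 'R', 20)) == (10, 20)
```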

Monsoon Processor (Greg Papadopoulos)
Pipeline: Instruction Fetch (indexed by ip into the code) → Operand Fetch (at fp+r in the frames) → ALU → Form Token (destinations d1, d2) → network; tokens recirculate through the Token Queue.

Temporary Registers & Threads (Robert Iannucci)
The pipeline (Instruction Fetch, Operand Fetch, ALU, Form Token) is augmented with n sets of registers, where n is the pipeline depth.
Registers evaporate when an instruction thread is broken.
Registers are also used for exceptions & interrupts.

Actual Monsoon Pipeline: Eight Stages
Instruction Fetch (32-bit instruction memory) → Effective Address → Presence Bit Operation (3 presence bits) → Frame Operation (72-bit frame memory, 2R, 2W) → ALU (two 72-bit operands, 144 bits) → Form Token → network and registers; tokens recirculate through the User and System queues.

Instructions directly control the pipeline
The opcode specifies an operation for each pipeline stage (EA, WM, RegOp, ALU, FormToken): opcode r dest1 [dest2]
  EA - effective address: FP + r (frame-relative); r (absolute); IP + r (code-relative, not supported)
  WM - waiting-matching: Unary, Normal, Sticky, Exchange, Imperative; PBs × port → PBs × Frame op × ALU inhibit
  Register ops - ALU: VL × VR → V'L × V'R, CC
  Form token: VL × VR × Tag1 × Tag2 × CC → Token1 × Token2
Easy to implement; no hazard detection.

Procedure Linkage Operators
A call uses get frame, extract tag, and change-tag operators: change Tag 1 ... change Tag n send the arguments a1 ... an into the graph for f (tokens in the callee's frame), and change Tag 0 returns the result to the caller (a token in the caller's frame).
Like standard call/return, but caller & callee can be active simultaneously.

Outline
  ✓ Dataflow graphs - a clean model of parallel computation
  ✓ Static Dataflow Machines - not general-purpose enough
  ✓ Dynamic Dataflow Machines - as easy to build as a simple pipelined processor
  The software view - the memory model: I-structures
  Monsoon and its performance
  Musings

Parallel Language Model
A tree of activation frames (f, g, h, including loops) plus a global heap of shared objects; active threads run asynchronously and in parallel at all levels.

Id World
Implicit parallelism: dataflow graphs + I-structures + ...
Machines: TTDA, Monsoon, *T, *T-Voyager

Id World people
  R.S. Nikhil, Keshav Pingali, Vinod Kathail, David Culler, Ken Traub
  Steve Heller, Richard Soley, Dinart Mores
  Jamey Hicks, Alex Caro, Andy Shaw, Boon S. Ang
  Shail Aditya, R Paul Johnson, Paul Barth, Jan Maessen, Christine Flood, Jonathan Young, Derek Chiou, Arun Iyengar, Zena Ariola, Mike Bekerle
  K. Ekanadham (IBM), Wim Bohm (Colorado), Joe Stoy (Oxford), ...

Data Structures in Dataflow
Data structures reside in a structure store ⇒ tokens carry pointers.
I-structures: write-once, read multiple times; or: allocate, write, read, ..., read, deallocate.
⇒ No problem if a reader arrives before the writer at the memory location.
A processor P issues I-fetch a and I-store a, v operations against the structure memory.

I-Structure Storage: Split-phase operations & Presence bits
An I-Fetch of address a sends <a, Read, (t, fp)> to the structure memory s; the read is split-phase: (t, fp) is a forwarding address, so the processor does not block waiting for the reply.
Each location carries presence bits: it may hold a value (v1, v2) or, when a reader arrived before the writer, a deferred-read record (fp.ip). Multiple deferred reads on the same location must be handled.
Other operations: fetch/store, take/put, clear.
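A sketch of one I-structure location (hypothetical Python; the presence-bit states follow the slide: empty, deferred, present, and a reader's forwarding address is modeled as a callback):

```python
class IStructureCell:
    def __init__(self):
        self.state = 'empty'        # presence bits: empty / deferred / present
        self.value = None
        self.deferred = []          # forwarding addresses of waiting readers

    def i_fetch(self, reply_to):
        """Split-phase read: reply now if present, otherwise defer."""
        if self.state == 'present':
            reply_to(self.value)
        else:
            self.deferred.append(reply_to)   # multiple deferred reads allowed
            self.state = 'deferred'

    def i_store(self, v):
        """Write-once store: answer every deferred reader."""
        assert self.state != 'present', 'I-structures are write-once'
        self.value, self.state = v, 'present'
        for reply_to in self.deferred:
            reply_to(v)
        self.deferred = []

cell = IStructureCell()
got = []
cell.i_fetch(got.append)     # reader arrives before the writer: deferred
cell.i_fetch(got.append)     # multiple deferred reads hang off the location
cell.i_store(42)
cell.i_fetch(got.append)     # later reads are satisfied immediately
assert got == [42, 42, 42]
```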

Outline
  ✓ Dataflow graphs - a clean model of parallel computation
  ✓ Static Dataflow Machines - not general-purpose enough
  ✓ Dynamic Dataflow Machines - as easy to build as a simple pipelined processor
  ✓ The software view - the memory model: I-structures
  Monsoon and its performance
  Musings

The Monsoon Project
An MIT-Motorola collaboration, 1988-91 (Motorola Cambridge Research Center + MIT; Tony Dahbura).
Research prototypes: Monsoon processor (64-bit, 10M tokens/sec); 64-bit I-structure store (4M 64-bit words); 16-node fat tree (100 MB/sec); Unix box front end.
Sixteen 2-node systems (MIT, LANL, Motorola, Colorado, Oregon, McGill, USC, ...); two 16-node systems (MIT, LANL).
Id World software.

Id Applications on Monsoon @ MIT
Numerical: Hydrodynamics - SIMPLE; Global Circulation Model - GCM; Photon-Neutron Transport code - GAMTEB; N-body problem
Symbolic: Combinatorics - free tree matching, Paraffins; Id-in-Id compiler
System: I/O library; heap storage allocator on Monsoon
Fun and games: Breakout; Life; Spreadsheet

Id Run Time System (RTS) on Monsoon
Frame Manager (Derek Chiou): allocates frame memory on processors for procedure and loop activations.
Heap Manager (Arun Iyengar): allocates storage in I-structure memory or in processor memory for heap objects.

Single Processor Monsoon Performance Evolution
One 64-bit processor (10 MHz) + 4M 64-bit I-structure store; times in hours:minutes:seconds.

                                  Feb. 91    Aug. 91    Mar. 92    Sep. 92
  Matrix Multiply 500x500            4:04       3:58       3:55       1:46
  Wavefront 500x500, 144 iters.      5:00       5:00       3:48
  Paraffins n = 19                    :50        :31       :02.4
  Paraffins n = 22                                                    :32
  GAMTEB-9C 40K particles           17:20      10:42       5:36       5:36
  GAMTEB-9C 1M particles          7:13:20    4:17:14    2:36:00    2:22:00
  SIMPLE-100 1 iteration              :19        :15        :10        :06
  SIMPLE-100 1K iterations        4:48:00                          1:19:49

Need a real machine to do this.

Monsoon Speed Up Results (Boon Ang, Derek Chiou, Jamey Hicks; September 1992)

Critical path (millions of cycles):
                 Matrix Multiply   Paraffins   GAMTEB-2C       SIMPLE-100
                 500 x 500         n = 22      40K particles   100 iters
  1pe            1057              322         590             4681
  2pe             531              162         303             2518
  4pe             271               82         155             1355
  8pe             137               44          80              747

Speedup:
  1pe            1.00              1.00        1.00            1.00
  2pe            1.99              1.99        1.95            1.86
  4pe            3.90              3.92        3.81            3.45
  8pe            7.74              7.25        7.35            6.27

Could not have asked for more.

Base Performance? Id on Monsoon vs. C / F77 on R3000

                               MIPS (R3000)       Monsoon (1pe)
                               (x 10^6 cycles)    (x 10^6 cycles)
  Matrix Multiply 500 x 500       954 +              1058
  Paraffins n = 22                102 +               322
  GAMTEB-9C 40K particles         265 *               590
  SIMPLE-100 100 iters           1787 *              4682

R3000 cycles collected via Pixie (* Fortran 77, fully optimized; + MIPS C, -O3).
64-bit floating point used in Matrix Multiply, GAMTEB, and SIMPLE.
MIPS codes won't run on a parallel machine without recompilation/recoding. An 8-way superscalar? Unlikely to give a 7-fold speedup.

The Monsoon Experience
Performance of implicitly parallel Id programs scaled effortlessly.
Id programs on a single-processor Monsoon took 2 to 3 times as many cycles as Fortran/C on a modern workstation; this can certainly be improved.
The effort to develop the invisible software (loaders, simulators, I/O libraries, ...) dominated the effort to develop the visible software (compilers, ...).

Outline
  ✓ Dataflow graphs - a clean model of parallel computation
  ✓ Static Dataflow Machines - not general-purpose enough
  ✓ Dynamic Dataflow Machines - as easy to build as a simple pipelined processor
  ✓ The software view - the memory model: I-structures
  ✓ Monsoon and its performance
  Musings

What would we have done differently - 1
Technically: very little.
  Simple, high-performance design; easily exploits fine-grain parallelism; tolerates latencies efficiently.
  Id preserves fine-grain parallelism, which is abundant.
  Robust compilation schemes; DFGs provide an easy compilation target.
Of course, there is room for improvement:
  Functionally different types of memories (frames, queues, heap); they are not all full at the same time.
  Software has no direct control over large parts of the memory, e.g., the token queue.
  Poor single-thread performance, which hurts when single-thread latency is on a critical path.

What would we have done differently - 2
Non-technical, but perhaps even more important:
  It is difficult enough to cause one revolution, but two? Market forces cannot be ignored for too long - that may affect acceptance even by the research community.
  Should the machine have been built a few years earlier (in lieu of simulation and compiler work)? Perhaps it would have had more impact (had it worked).
  The follow-on project should have been about (1) running conventional software on dataflow machines, or (2) making minimum modifications to commercial microprocessors. (We chose 2, but perhaps 1 would have been better.)

Imperative Programs and Multi-Cores
Deep pointer analysis is required to extract parallelism from sequential codes; otherwise, extreme speculation is the only solution.
A multithreaded/dataflow model is needed to present the found parallelism to the underlying hardware.
Exploiting fine-grain parallelism is necessary in many situations, e.g., producer-consumer parallelism.

Locality and Parallelism: Dual problems?
Good performance requires exploiting both.
The dataflow model gives you parallelism for free but requires analysis to get locality.
C (mostly) provides locality for free, but one must do analysis to get parallelism.
Tough problems are tough, independent of representation.

Parting thoughts
Dataflow research as conceived by most researchers achieved its goals. The model of computation is beautiful and will be resurrected whenever people want to exploit fine-grain parallelism.
But the installed software base has a different model of computation, which poses different challenges for parallel computing. It may be possible to implement this model effectively on dataflow machines; we did not investigate this, but it is absolutely worth investigating further. Current efforts on more standard hardware are having lots of problems of their own.
It is still an open question what will work in the end.

Thank You! And thanks to R.S. Nikhil, Dan Rosenband, James Hoe, Derek Chiou, Larry Rudolph, Martin Rinard, and Keshav Pingali for helping with this talk.

DFGs vs CDFGs
Both Dataflow Graphs and Control DFGs had the goal of structured, well-formed, compositional, executable graphs.
CDFG research (70s, 80s) approached this goal starting with ordinary sequential control-flow graphs ("flowcharts") and data-dependency arcs, and gradually added structure (e.g., φ-functions).
Dataflow graphs approached this goal directly, by construction: schemata for basic blocks, conditionals, loops, and procedures.
CDFGs are an intermediate representation for compilers and, unlike DFGs, not a language.