Memory Buffering Techniques
Greg Stitt
ECE Department, University of Florida

Buffers

Purposes:
- Metastability issues: the memory clock likely differs from the circuit clock, so the buffer stores data at one speed and loads it at another.
- Storing "windows" of data and delivering them to the datapath. A window is the set of inputs needed each cycle by the pipelined circuit.
- Generally more efficient than having the datapath request the data it needs, i.e., data is pushed into the datapath as opposed to pulled from memory.

FIFOs

- FIFOs are a common buffer.
- A FIFO outputs data in the order it was read from memory (a software sketch follows below).

for (i=0; i < 100; i++)
    a[i] = b[i] + b[i+1] + b[i+2];

[Figure: RAM feeding a FIFO, which feeds a pipelined datapath of two adders.]
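To make the in-order behavior concrete, here is a minimal software sketch (my own illustrative C model, not from the slides; the names fifo_push and fifo_pop are hypothetical) of a ring buffer that the memory side pushes into and the datapath side pops from:

#include <stdio.h>

#define FIFO_DEPTH 8

/* Minimal ring-buffer FIFO model: memory pushes one element per
   cycle, the datapath pops the oldest element per cycle. */
typedef struct {
    int data[FIFO_DEPTH];
    int head, tail, count;
} fifo_t;

static int fifo_push(fifo_t *f, int x) {
    if (f->count == FIFO_DEPTH) return 0;   /* full: memory must wait */
    f->data[f->tail] = x;
    f->tail = (f->tail + 1) % FIFO_DEPTH;
    f->count++;
    return 1;
}

static int fifo_pop(fifo_t *f, int *x) {
    if (f->count == 0) return 0;            /* empty: datapath stalls */
    *x = f->data[f->head];
    f->head = (f->head + 1) % FIFO_DEPTH;
    f->count--;
    return 1;
}

int main(void) {
    fifo_t f = {0};
    int b[6] = {10, 20, 30, 40, 50, 60};
    int x;
    /* Memory side pushes b[] in order; datapath side pops in order. */
    for (int i = 0; i < 6; i++) fifo_push(&f, b[i]);
    while (fifo_pop(&f, &x)) printf("%d ", x);  /* prints 10 20 ... 60 */
    printf("\n");
    return 0;
}

In hardware the two sides would run on different clocks; the full/empty checks here stand in for the FIFO's full and empty flags that throttle each side.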

FIFOs (cont.)

[Animation: (1) first window b[0]-b[2] read from memory into the FIFO; (2) first window pushed to the datapath while the second window b[1]-b[3] is read from RAM; (3) third window b[2]-b[4] read while the partial sum b[0]+b[1] and b[2] advance through the adder pipeline.]

FIFOs: timing issues

- Memory bandwidth too small: the circuit stalls or wastes cycles while waiting for data (see the toy model after this list).
- Memory bandwidth larger than the circuit's data consumption rate: may happen if area is exhausted.
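As a rough feel for the first case, here is a hedged toy cycle model (my own assumption-laden sketch, not from the slides): memory delivers one element per cycle while the datapath needs a fresh 3-element window per iteration, so two of every three cycles are wasted.

#include <stdio.h>

/* Toy cycle model: memory delivers MEM_RATE elements/cycle, the
   datapath needs WINDOW fresh elements to fire one iteration.
   Count the cycles wasted waiting for data. */
int main(void) {
    const int MEM_RATE = 1;   /* elements arriving per cycle (assumed) */
    const int WINDOW   = 3;   /* elements consumed per iteration */
    const int ITERS    = 100;
    int buffered = 0, wasted = 0, done = 0, cycles = 0;
    while (done < ITERS) {
        buffered += MEM_RATE;          /* memory side fills the FIFO */
        if (buffered >= WINDOW) {      /* datapath can fire */
            buffered -= WINDOW;
            done++;
        } else {
            wasted++;                  /* no data ready: wasted cycle */
        }
        cycles++;
    }
    printf("cycles=%d wasted=%d\n", cycles, wasted);  /* 300, 200 */
    return 0;
}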

FIFOs: memory bandwidth too small

[Animation: (1) first window b[0]-b[2] read from memory into the FIFO; (2) first window pushed to the datapath; second window b[1]-b[3] requested from memory but not yet transferred; (3) the partial sum b[0]+b[1] and b[2] advance, but no new data is ready, producing wasted cycles.]
- Alternatively, the first window could have been prevented from proceeding, producing stall cycles instead; this is necessary if the pipeline contains feedback.

FIFOs: memory bandwidth larger than the consumption rate

[Animation: (1) first window b[0]-b[2] read from memory into the FIFO; (2) second window b[1]-b[3] read while the first window is pushed to the datapath; (3) b[3]-b[4] arrive faster than the circuit can process them and the FIFO begins to fill.]
- If the FIFO becomes full, memory reads stop until it is no longer full.

Improvements

- Do we need to fetch the entire window from memory for each iteration? Only if the windows of consecutive iterations are mutually exclusive.
- Commonly, consecutive iterations have overlapping windows. The overlap represents "reused" data, which only has to be fetched from memory once.
- Smart Buffer [Guo, Buyukkurt, Najjar, LCTES 2004]: part of the ROCCC compiler. It analyzes memory access patterns, detects the "sliding window," and stores the reused data (a software sketch follows).

for (i=0; i < 100; i++)
    a[i] = b[i] + b[i+1] + b[i+2];

[Figure: array elements b[0]-b[5], showing the overlap between consecutive 3-element windows.]
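A minimal software sketch of the sliding-window reuse (an illustrative model in the spirit of the smart buffer, not ROCCC's actual implementation): keep the overlapping elements and fetch only one new element per iteration.

#include <stdio.h>

#define W 3   /* window size: b[i], b[i+1], b[i+2] */

int main(void) {
    int b[102], a[100];
    int win[W];
    int fetches = 0;
    for (int i = 0; i < 102; i++) b[i] = i;

    /* Prime the buffer: the only time a full window is fetched. */
    for (int j = 0; j < W; j++) { win[j] = b[j]; fetches++; }

    for (int i = 0; i < 100; i++) {
        a[i] = win[0] + win[1] + win[2];   /* push window to datapath */
        /* Slide the window: drop win[0], keep the overlap, and
           fetch exactly one new element from memory. */
        if (i + 1 < 100) {
            win[0] = win[1];
            win[1] = win[2];
            win[2] = b[i + W];
            fetches++;
        }
    }
    printf("a[0]=%d a[99]=%d fetches=%d\n", a[0], a[99], fetches);
    return 0;
}

The fetch count comes out to 102 for the 100-iteration example, matching the comparison on the next slide.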

Smart Buffer

[Animation: (1) first window b[0]-b[2] read from memory into the smart buffer; (2) the buffer continues reading needed data (b[3]-b[5]) but does not reread b[1]-b[2]; first window pushed to the datapath; (3) b[0] is no longer needed and is deleted from the buffer; second window b[1]-b[3] pushed to the datapath while fetching continues (b[6] and beyond) -- note the buffer fetches needed data, as opposed to whole windows; (4) b[1] deleted; third window b[2]-b[4] pushed as completed sums emerge from the pipeline; and so on.]

Comparison with FIFO

- FIFO: fetches a full window every iteration, i.e., 3 elements per iteration. 100 iterations * 3 accesses/iteration = 300 memory accesses.
- Smart buffer: fetches as much data as possible each cycle, and the buffer assembles the data into windows. Each element is fetched only once, so the number of fetches equals the array size: 102 memory accesses. This essentially improves memory bandwidth by 3x (a generalized count follows).
- Circuit performance equals the pipeline latency plus the array size: no matter how much computation there is, execution time is approximately the time to stream in the data. (Note: this is only true for streaming examples.)
- How does this help? Smart buffers enable more unrolling.
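Generalizing the slide's numbers (my own generalization, offered as a hedged sketch): for N iterations and a window of W elements, the FIFO performs N*W memory accesses while the smart buffer performs N + W - 1, roughly a W-fold bandwidth improvement for large N.

#include <stdio.h>

int main(void) {
    /* Slide's example: N = 100 iterations, window W = 3. */
    long N = 100, W = 3;
    printf("FIFO accesses:         %ld\n", N * W);      /* 300 */
    printf("Smart buffer accesses: %ld\n", N + W - 1);  /* 102 */
    return 0;
}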

Unrolling with Smart Buffers

- Assume memory bandwidth = 128 bits/cycle. With 32-bit elements, we can fetch 128/32 = 4 array elements each cycle.
- The first access doesn't save time: there is no data in the buffer yet, so the smart buffer behaves the same as a FIFO, and we can only unroll once.

long b[102];
for (i=0; i < 100; i++)
    a[i] = b[i] + b[i+1] + b[i+2];

[Figure: the first memory access brings in b[0]-b[3], enough for only 2 parallel iterations.]

Unrolling with Smart Buffers (cont.)

- For every subsequent access: smart buffer bandwidth = memory bandwidth + reused data = 4 fetched elements + 2 reused elements (b[2], b[3]).
- This effectively provides a bandwidth of 6 elements per cycle, so we can perform 4 iterations in parallel: a 2x speedup compared to the FIFO (see the sketch below).

[Figure: elements b[0]-b[7] in the buffer, supplying the windows for 4 parallel iterations.]
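A hedged C view of what the 4-way unrolled datapath consumes each cycle (illustrative only, not generated ROCCC output): four iterations share the 6-element window b[i]..b[i+5], of which 4 elements are newly fetched and 2 are reused from the previous window.

#include <stdio.h>

int main(void) {
    long b[102], a[100];
    for (int i = 0; i < 102; i++) b[i] = i;

    /* One loop body = one "cycle" of the unrolled datapath: the 4
       iterations below share the window b[i]..b[i+5]. Elements b[i]
       and b[i+1] overlap the previous window (reused data); the
       other 4 are the new elements fetched that cycle. */
    for (int i = 0; i < 100; i += 4) {
        a[i]   = b[i]   + b[i+1] + b[i+2];
        a[i+1] = b[i+1] + b[i+2] + b[i+3];
        a[i+2] = b[i+2] + b[i+3] + b[i+4];
        a[i+3] = b[i+3] + b[i+4] + b[i+5];
    }
    printf("a[0]=%ld a[99]=%ld\n", a[0], a[99]);  /* 3 and 300 */
    return 0;
}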

Datapath Design with Smart Buffers

- Base the datapath's unrolling on the bandwidth the smart buffer provides, not the raw memory bandwidth. Don't be confused by the first memory access.
- The smart buffer waits until the initial windows are in the buffer before passing any data to the datapath. This adds a little latency, but avoids the first iteration requiring different control due to less unrolling. (A sketch of this priming behavior follows.)
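A minimal sketch of the priming behavior (my own toy model with assumed rates, not from the slides): the datapath is enabled only once the buffer holds a full unrolled window, so the controller never has to special-case a partially filled buffer.

#include <stdio.h>

int main(void) {
    const int UNROLLED_WINDOW = 6;  /* elements per firing, from the
                                       4-way unrolled example above */
    const int MEM_RATE = 4;         /* elements arriving per cycle */
    int buffered = 0, cycle = 0;
    while (buffered < UNROLLED_WINDOW) {   /* priming: datapath idle */
        buffered += MEM_RATE;
        cycle++;
    }
    printf("datapath enabled after %d cycle(s)\n", cycle);  /* 2 */
    return 0;
}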

Another Example

Your turn. Analyze the memory access patterns:
- Determine the window overlap.
- Determine the smart buffer bandwidth, assuming memory bandwidth = 128 bits/cycle.
- Determine the maximum unrolling with the smart buffer.
- Determine the total cycles, using the previous systolic array analysis (latency, bandwidth, etc.).

short b[1004], a[1000];
for (i=0; i < 1000; i++)
    a[i] = avg( b[i], b[i+1], b[i+2], b[i+3], b[i+4] );
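For reference, one way to work the exercise following the slides' method (my own analysis, a hedged sketch rather than an answer key): with 16-bit shorts, 128 bits/cycle fetches 8 new elements; consecutive 5-element windows overlap in 4 elements, so the smart buffer supplies 8 + 4 = 12 elements per cycle; k parallel iterations need k + 4 elements, giving k = 8; steady-state execution then takes about 1000/8 = 125 cycles plus pipeline latency and the initial buffer fill.

#include <stdio.h>

int main(void) {
    /* Assumed parameters; only the 128-bit bandwidth is given. */
    const int MEM_BITS  = 128;             /* memory bandwidth/cycle */
    const int ELEM_BITS = 16;              /* sizeof(short) * 8 */
    const int WINDOW    = 5;               /* b[i]..b[i+4] */
    const int N         = 1000;            /* iterations */

    int fetched = MEM_BITS / ELEM_BITS;    /* 8 new elements/cycle */
    int reused  = WINDOW - 1;              /* 4 overlapping elements */
    int sb_bw   = fetched + reused;        /* 12 elements/cycle */
    int unroll  = sb_bw - (WINDOW - 1);    /* k + 4 <= 12 => k = 8 */
    int cycles  = (N + unroll - 1) / unroll;   /* 125 steady-state */

    printf("unroll=%d, ~%d cycles + latency + initial fill\n",
           unroll, cycles);
    return 0;
}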