
Integration for Heterogeneous SoC Modeling
Yakun Sophia Shao, Sam Xi, Gu-Yeon Wei, David Brooks
Harvard University

Today's Accelerator-CPU Integration
Simple interface to accelerators: DMA.
Easy to integrate lots of IP.
Hard to program and share data.
[Figure: CPU cores with private L1 caches and a shared L2, alongside accelerators (Acc #1 … Acc #n) with private scratchpads, all connected to DRAM through a DMA engine on the on-chip system bus.]

Typical DMA Flow
1. Flush and invalidate input data from the CPU caches.
2. Invalidate the region of memory that will receive accelerator output.
3. Program a buffer descriptor describing the transfer (start, length, source, destination); when the data is large, program multiple descriptors.
4. Initiate the accelerator.
5. Initiate the data transfer.
6. Wait for the accelerator to complete.
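A minimal C sketch of this flow from the CPU's point of view. Every helper here (cache_flush_range, cache_invalidate_range, dma_program_descriptor, dma_start, accel_start, accel_wait) is a hypothetical placeholder for whatever the real platform provides, not an API from gem5-Aladdin or any specific driver.

#include <stddef.h>
#include <stdint.h>

/* Hypothetical platform hooks; real SoCs expose equivalents through their drivers. */
extern void cache_flush_range(const void *addr, size_t len);
extern void cache_invalidate_range(void *addr, size_t len);
extern void dma_program_descriptor(uintptr_t src, uintptr_t dst, size_t len);
extern void dma_start(void);
extern void accel_start(void);
extern void accel_wait(void);

/* One pass through the DMA flow above for a single input and output buffer. */
void offload_once(const void *in, size_t in_len, void *out, size_t out_len,
                  uintptr_t accel_spad_base)
{
    cache_flush_range(in, in_len);         /* 1. flush input out of the CPU caches */
    cache_invalidate_range(out, out_len);  /* 2. invalidate the output region */
    dma_program_descriptor((uintptr_t)in, accel_spad_base, in_len);
                                           /* 3. one descriptor; large data needs several */
    accel_start();                         /* 4. initiate the accelerator */
    dma_start();                           /* 5. initiate the data transfer */
    accel_wait();                          /* 6. wait for the accelerator to complete */
}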

DMA can be very expensive
For a 16-way parallel md-knn accelerator, compute accounts for only 20% of total execution time; the rest goes to the DMA flow.

Co-Design vs. Isolated Design
No need to build such an aggressively parallel design!

gem5-Aladdin: An SoC Simulator

Features
End-to-end simulation of accelerated workloads.
Models both hardware-managed caches and DMA + scratchpad memory systems.
Supports multiple accelerators.
Enables system-level studies of accelerator-centric platforms.
Xenon: a powerful design sweep system.
Highly configurable and extensible.

DMA Engine
Extends the existing DMA engine in gem5 to accelerators.
Special dmaLoad and dmaStore functions: insert them into the accelerated kernel; the trace will capture them, and gem5-Aladdin will handle them.
Currently a timing model only.
Analytical model for cache flush and invalidation latency:
Flush throughput is 1 cache line per 56 cycles (84 ns), plus 150 cycles of overhead.
Invalidate throughput is 1 cache line per 47 cycles (70 ns), plus 400 cycles of overhead.
All cycles are CPU clock cycles at 666 MHz.
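As a worked example of this model (assuming 64-byte cache lines, which the slide does not state): flushing a 4 KB input buffer covers 64 lines, so its modeled cost is 150 + 64 × 56 = 3,734 CPU cycles, roughly 5.6 µs at 666 MHz; invalidating a 4 KB output region costs 400 + 64 × 47 = 3,408 cycles.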

DMA Engine
/* Code representing the accelerator */
void fft1D_512(TYPE work_x[512], TYPE work_y[512]) {
  int tid, hi, lo, stride;
  /* more setup */
  dmaLoad(&work_x[0], 0, 512 * sizeof(TYPE));
  dmaLoad(&work_y[0], 0, 512 * sizeof(TYPE));
  /* Run FFT here ... */
  dmaStore(&work_x[0], 0, 512 * sizeof(TYPE));
  dmaStore(&work_y[0], 0, 512 * sizeof(TYPE));
}
(Speaker note) Maybe note that this is a bit different from how some platforms do DMA. In our case, the accelerator initiates the transfer, while on most platforms the CPU initiates the transfer. But it doesn't really matter for us: one way or another, the cost of DMA will be paid, and it's just a matter of a small bit of power on the CPU side.

Caches and Virtual Memory
Coherent, cache-based accelerator interfaces are gaining traction on multiple platforms:
Intel QuickAssist QPI-Based FPGA Accelerator Platform (QAP).
IBM POWER8's Coherent Accelerator Processor Interface (CAPI).
System vendors provide a host service layer with virtual memory and cache coherence support; it communicates with the CPUs through an agent.
The host service layer might contain a cache and TLB; the accelerator agent would snoop the system buses on behalf of the accelerator and service cache misses and TLB misses.
[Figure: processor cores with L1 caches and a shared L2, connected over QPI/PCIe to an FPGA whose accelerator sits behind an agent and the host service layer.]

Caches and Virtual Memory
Accelerator caches are connected directly to the system bus.
Support for multi-level cache hierarchies.
Hybrid memory system: an accelerator can use both caches and scratchpads.
Basic MOESI coherence protocol.
Special Aladdin TLB model maps the trace address space to the simulated address space.

Demo: DMA
Exercise: change the system bus width and see the effect on accelerator performance.
Open up your VM and go to: ~/gem5-aladdin/sweeps/tutorial/dma/stencil-stencil2d/0
Examine these files:
stencil-stencil2d.cfg
../inputs/dynamic_trace.gz
gem5.cfg
run.sh

Demo: DMA
Run the accelerator-with-DMA simulation.
Change the system bus width to 32 bits: set xbar_width=4 (bytes) in run.sh.
Run again and compare results.
At 64 bits the run takes 37,058 cycles; at 32 bits it takes 45,246 cycles, about 22% more.

Demo: Caches
Exercise: see the effect of cache size on accelerator performance.
Go to: ~/gem5-aladdin/sweeps/tutorial/cache/stencil-stencil2d/0
Examine these files:
../inputs/dynamic_trace.gz
stencil-stencil2d.cfg
gem5.cfg

Demo: Caches
Run the accelerator-with-caches simulation.
Change the cache size to 1 kB: set cache_size = 1kB in gem5.cfg.
Run again and compare results.
With a 1 kB cache the run takes 92,225 cycles; with a 4 kB cache it takes 77,738 cycles, so the smaller cache costs about 19% more cycles.
Play with some other parameters (associativity, line size, etc.).

CPU-Accelerator Cosimulation
The CPU can invoke an attached accelerator; we use the ioctl system call.
Status is communicated through shared memory.
The CPU can spin-wait for the accelerator, or do something else (e.g., start another accelerator).

Code example
/* Code running on the CPU. */
void run_benchmark(TYPE work_x[512], TYPE work_y[512]) {
}

Code example
/* Code running on the CPU. */
void run_benchmark(TYPE work_x[512], TYPE work_y[512]) {
  /* Establish a mapping from simulated to trace address space. */
  mapArrayToAccelerator(
      MACHSUITE_FFT_TRANSPOSE,   /* ioctl request code */
      "work_x",                  /* associate this array name with the addresses of memory accesses in the trace */
      work_x, sizeof(work_x));   /* starting address and length of one memory region the accelerator can access */
}

Code example
/* Code running on the CPU. */
void run_benchmark(TYPE work_x[512], TYPE work_y[512]) {
  /* Establish a mapping from simulated to trace address space. */
  mapArrayToAccelerator(MACHSUITE_FFT_TRANSPOSE, "work_x",
                        work_x, sizeof(work_x));
  mapArrayToAccelerator(MACHSUITE_FFT_TRANSPOSE, "work_y",
                        work_y, sizeof(work_y));
}

Code example
/* Code running on the CPU. */
void run_benchmark(TYPE work_x[512], TYPE work_y[512]) {
  /* Establish a mapping from simulated to trace address space. */
  mapArrayToAccelerator(MACHSUITE_FFT_TRANSPOSE, "work_x",
                        work_x, sizeof(work_x));
  mapArrayToAccelerator(MACHSUITE_FFT_TRANSPOSE, "work_y",
                        work_y, sizeof(work_y));
  // Start the accelerator and spin until it finishes.
  invokeAcceleratorAndBlock(MACHSUITE_FFT_TRANSPOSE);
}

Demo: disparity
You can just watch for this one. If you want to follow along, go to: ~/gem5-aladdin/sweeps/tutorial/cortexsuite_sweep/0
This is a multi-kernel, CPU + accelerator cosimulation.

How can I use gem5-Aladdin?
Investigate optimizations to the DMA flow.
Study cache-based accelerators.
Study the impact of system-level effects on accelerator design.
Multi-accelerator systems.
Near-data processing.
All of these will require design sweeps!

Xenon: Design Sweep System
A small declarative command language for generating design sweep configurations.
Implemented as a Python embedded DSL.
Highly extensible: not gem5-Aladdin specific, and not limited to sweeping parameters on benchmarks.
Why "Xenon"?

Xenon: Generation Procedure
1. Read the sweep configuration file.
2. Execute the sweep commands.
3. Generate all configurations.
4. Export the configurations to JSON.
5. Backend: read the JSON and rewrite it into the desired format.
6. Backend: generate any additional outputs.

Xenon: Data Structures
Example of a Python data structure that Xenon operates on; a benchmark suite would contain many of these.
[Figure: a tree for the md-knn benchmark. The md_kernel node contains loops (loop_i, loop_j) and arrays (force_x, force_y, force_z); attributes such as cycle_time, pipelining, unrolling, partition_type, partition_factor, and memory_type attach to these nodes.]

Xenon: Commands
set unrolling 4
set partition_type "cyclic"
set unrolling for md_knn.* 8
set partition_type for md_knn.force_x "block"
sweep cycle_time from 1 to 5
sweep partition_factor from 1 to 8 expstep 2
set partition_factor for md_knn.force_x 8
generate configs
generate trace
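Read literally, these commands sweep five cycle_time values (1 through 5) against four partition_factor values (1, 2, 4, 8, stepping by powers of 2), giving 20 configurations per benchmark; the surrounding set commands fix md_knn's unrolling at 8 and, presumably overriding the sweep, pin md_knn.force_x to block partitioning with a factor of 8. The final generate commands then emit the config files and the dynamic trace.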

Xenon: Execute
Every configuration is written to a JSON file:
"Benchmark(\"md-knn\")": {
  "Array(\"NL\")": {
    "memory_type": "cache",
    "name": "NL",
    "partition_factor": 1,
    "partition_type": "cyclic",
    "size": 4096,
    "type": "Array",
    "word_length": 8
  },
  "Array(\"force_x\")": {
    "name": "force_x",
    "size": 256,
    ...
  },
  "Array(\"force_y\")": {
    "name": "force_y",
    ...
  },
  ...
}
A backend is then invoked to load this JSON object and write application-specific config files.

Enough talking…let’s do a demo

Demo: Design Sweeps with Xenon
Exercise: sweep some parameters.
Go to: ~/gem5-aladdin/sweeps/tutorial/
Examine these files:
../inputs/dynamic_trace.gz
stencil-stencil2d.cfg
gem5.cfg

Tutorial References
Y.S. Shao, S. Xi, V. Srinivasan, G.-Y. Wei, D. Brooks, "Co-Designing Accelerators and SoC Interfaces using gem5-Aladdin," MICRO, 2016.
Y.S. Shao, S. Xi, V. Srinivasan, G.-Y. Wei, D. Brooks, "Toward Cache-Friendly Hardware Accelerators," SCAW, 2015.
Y.S. Shao and D. Brooks, "ISA-Independent Workload Characterization and its Implications for Specialized Architectures," ISPASS, 2013.
B. Reagen, Y.S. Shao, G.-Y. Wei, D. Brooks, "Quantifying Acceleration: Power/Performance Trade-Offs of Application Kernels in Hardware," ISLPED, 2013.
Y.S. Shao, B. Reagen, G.-Y. Wei, D. Brooks, "Aladdin: A Pre-RTL, Power-Performance Accelerator Simulator Enabling Large Design Space Exploration of Customized Architectures," ISCA, 2014.
B. Reagen, B. Adolf, Y.S. Shao, G.-Y. Wei, D. Brooks, "MachSuite: Benchmarks for Accelerator Design and Customized Architectures," IISWC, 2014.

Backup slides

Validation
Implemented the accelerators in Vivado HLS.
Designed the complete system in Vivado Design Suite 2015.1.

Reducing DMA Overhead

DMA Optimization Results Overlap of flush and data transfer

DMA Optimization Results Overlap of data transfer and compute

DMA Optimization Results md-knn is able to completely overlap compute with data transfer.

DMA vs Caches
DMA:
Push-based data access.
Bulk transfer efficiency.
Simple hardware, lower power.
Manual coherence management.
Manual optimizations.
Coarse-grained communication.
Cache:
On-demand data access.
Automatic coherence handling.
Fine-grained communication.
Automatic eviction.
Larger, higher power cost.
Less efficient at bulk data movement.

DMA vs Caches
DMA is faster and lower power than caches when access patterns are regular and input data sizes are small; caches suffer cold misses.

DMA vs Caches
DMA and caches are approximately equal; power is dominated by the floating-point units.
