Warp Processor: A Dynamically Reconfigurable Coprocessor


1 Warp Processor: A Dynamically Reconfigurable Coprocessor
Frank Vahid Professor Department of Computer Science and Engineering University of California, Riverside Associate Director, Center for Embedded Computer Systems, UC Irvine Work supported by the National Science Foundation, the Semiconductor Research Corporation, Xilinx, Intel, Motorola/Freescale Contributing Ph.D. Students: Roman Lysecky (2005, now asst. prof. at U. Arizona), Greg Stitt (Ph.D. 2006), Kris Miller (MS 2007), David Sheldon (3rd yr PhD), Scott Sirowy (1st yr PhD)

2 Outline Intro and Background: Warp Processors
Work in progress under SRC 3-yr grant Parallelized-computation memory access Deriving high-level constructs from binaries Case studies Using commercial FPGA fabrics Application-specific FPGA Other ongoing related work Configurable cache tuning Frank Vahid, UC Riverside

3 Intro: Partitioning to FPGA
Custom ASIC coprocessors are known to speed up software kernels, with energy advantages too (e.g., Henkel’98, Rabaey’98, Stitt/Vahid’04). Power savings hold even on FPGA (Stitt/Vahid IEEE D&T’02, IEEE TECS’04). FPGA con: more silicon (~10x), less power savings. Pro: the platform is fully programmable and mass-produced.

4 Intro: FPGA vs. ASIC Coprocessor – FPGA Surprisingly Competitive
FPGA achieves 34% energy savings versus the ASIC’s 48% (Stitt/Vahid IEEE D&T’02, IEEE TECS’04). 70% energy savings and 5.4x speedup vs. a 200 MHz MIPS (Stitt/Vahid DATE’05).

5 FPGA – Why (Sometimes) Better than Software
C Code for Bit Reversal:
x = (x >> 16) | (x << 16);
x = ((x >> 8) & 0x00ff00ff) | ((x << 8) & 0xff00ff00);
x = ((x >> 4) & 0x0f0f0f0f) | ((x << 4) & 0xf0f0f0f0);
x = ((x >> 2) & 0x33333333) | ((x << 2) & 0xcccccccc);
x = ((x >> 1) & 0x55555555) | ((x << 1) & 0xaaaaaaaa);
Compiled binary (excerpt):
sll $v1, $v0, 0x10
srl $v0, $v0, 0x10
or $v0, $v1, $v0
srl $v1, $v0, 0x8
and $v1, $v1, $t5
sll $v0, $v0, 0x8
and $v0, $v0, $t4
srl $v1, $v0, 0x4
and $v1, $v1, $t3
sll $v0, $v0, 0x4
and $v0, $v0, $t2
...
On a processor this requires between 32 and 128 cycles. In hardware, bit reversal is simply wiring from the original X value to the bit-reversed X value, so the FPGA requires only 1 cycle (a speedup of 32x to 128x). The other big reason FPGAs win: concurrency.
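The shift-and-mask sequence above can be checked as a self-contained C function; this is a sketch, and the function name and test values are ours, not from the slides:

```c
#include <stdint.h>

/* Bit reversal via shift-and-mask, as in the slide's C code:
   swap halves, then bytes, nibbles, pairs, and finally single bits. */
uint32_t bit_reverse32(uint32_t x) {
    x = (x >> 16) | (x << 16);
    x = ((x >> 8) & 0x00ff00ffu) | ((x << 8) & 0xff00ff00u);
    x = ((x >> 4) & 0x0f0f0f0fu) | ((x << 4) & 0xf0f0f0f0u);
    x = ((x >> 2) & 0x33333333u) | ((x << 2) & 0xccccccccu);
    x = ((x >> 1) & 0x55555555u) | ((x << 1) & 0xaaaaaaaau);
    return x;
}
```

Note that each of the five statements maps to pure wiring and gates in hardware, which is why the whole computation collapses to one cycle on the FPGA.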

6 Warp Processing – Dynamic Partitioning of Sw Kernels to FPGA
1) Initially execute the application in software only. 2) Profile the application to determine critical regions. 3) Partition critical regions to hardware (Dynamic Partitioning Module, DPM). 4) Program the configurable logic (FPGA) and update the software binary. 5) The partitioned application executes faster with lower energy consumption. (Architecture: µP with I$/D$, on-chip profiler, FPGA, and DPM.)

7 Warp Processors – Dynamic Partitioning
Advantages vs. compile-time partitioning: no special compilers; completely transparent; separates function from architecture for architectures having FPGAs; avoids the complexity of supporting different FPGAs; potentially brings FPGA advantages to ALL software. (In the traditional flow, partitioning is done at the compiler, before the binary; warp processing instead applies profiling and CAD tools to the standard compiler's binary, on chip next to the processor and FPGA.)

8 Warp Processing Steps (On-Chip CAD)
Flow: Binary → Decompilation → Partitioning → RT Synthesis → JIT FPGA Compilation (Logic Synthesis → Tech. Mapping/Packing → Placement → Routing) → HW bitstream; a Binary Updater combines the standard HW binary into the updated binary. All steps run in the DPM (CAD) alongside the µP, I$, D$, WCLA (FPGA), and profiler.

9 Warp Processing – Partitioning
Applications spend much time in a small amount of code: the 90-10 rule; we observed a 75-4 rule for MediaBench and NetBench. There are potentially large performance/energy benefits from implementing critical regions in hardware. Use profiling results to identify the critical regions.

10 Warp Processing – Decompilation
Synthesis from binaries is challenging: high-level information (e.g., loops, arrays) is lost during compilation. Solution – recover the high-level information through decompilation.

Original C Code:
long f( short a[10] ) {
  long accum = 0;
  for (int i = 0; i < 10; i++) { accum += a[i]; }
  return accum;
}

Corresponding Assembly:
Mov reg3, 0
Mov reg4, 0
loop: Shl reg1, reg3, 1
Add reg5, reg2, reg1
Ld reg6, 0(reg5)
Add reg4, reg4, reg6
Add reg3, reg3, 1
Beq reg3, 10, -5
Ret reg4

Control/Data Flow Graph Creation:
reg3 := 0
reg4 := 0
loop: reg1 := reg3 << 1
reg5 := reg2 + reg1
reg6 := mem[reg5 + 0]
reg4 := reg4 + reg6
reg3 := reg3 + 1
if (reg3 < 10) goto loop
ret reg4

Data Flow Analysis:
reg3 := 0
reg4 := 0
loop: reg4 := reg4 + mem[reg2 + (reg3 << 1)]
reg3 := reg3 + 1
if (reg3 < 10) goto loop
ret reg4

Function Recovery:
long f( long reg2 ) {
  int reg3 = 0; int reg4 = 0;
  loop: reg4 = reg4 + mem[reg2 + (reg3 << 1)];
  reg3 = reg3 + 1;
  if (reg3 < 10) goto loop;
  return reg4;
}

Control Structure Recovery:
long f( long reg2 ) {
  long reg4 = 0;
  for (long reg3 = 0; reg3 < 10; reg3++) { reg4 += mem[reg2 + (reg3 << 1)]; }
  return reg4;
}

Array Recovery:
long f( short array[10] ) {
  long reg4 = 0;
  for (long reg3 = 0; reg3 < 10; reg3++) { reg4 += array[reg3]; }
  return reg4;
}

The recovered code and the original C are almost identical representations.

11 Warp Processing – Decompilation
An earlier study showed that synthesis after decompilation is often quite similar to synthesis from the original source: almost identical performance, with only a small area overhead (FPGA 2005).

12 Warp Processing – RT Synthesis
Maps decompiled DFG operations (e.g., a 32-bit adder for r4 := r1 + r2, a 32-bit comparator for r5 := r3 < 8) to hardware library components: adders, comparators, multiplexors, shifters. Creates a Boolean expression for each output bit in the dataflow graph: r4[0] = r1[0] xor r2[0], carry[0] = r1[0] and r2[0]; r4[1] = (r1[1] xor r2[1]) xor carry[0], carry[1] = …
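The per-bit expressions above are those of a ripple-carry adder. As an illustrative sketch (the function name is ours), evaluating those Boolean expressions bit by bit reproduces ordinary 32-bit addition, which is what lets RT synthesis substitute a library adder for the DFG's add node:

```c
#include <stdint.h>

/* Evaluate the slide's per-bit expressions for an add node:
     r4[i]    = r1[i] xor r2[i] xor carry[i-1]
     carry[i] = (r1[i] and r2[i]) or (carry[i-1] and (r1[i] xor r2[i]))
   With carry-in 0, bit 0 reduces to the slide's r4[0] and carry[0]. */
uint32_t ripple_add(uint32_t r1, uint32_t r2) {
    uint32_t r4 = 0;
    unsigned carry = 0;
    for (int i = 0; i < 32; i++) {
        unsigned a = (r1 >> i) & 1u, b = (r2 >> i) & 1u;
        unsigned sum = a ^ b ^ carry;               /* r4[i]    */
        carry = (a & b) | (carry & (a ^ b));        /* carry[i] */
        r4 |= (uint32_t)sum << i;
    }
    return r4;
}
```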

13 Warp Processing – JIT FPGA Compilation
Existing FPGAs require complex CAD tools: FPGAs are designed to handle large arbitrary circuits, ASIC prototyping, etc., so the tools require long execution times (roughly 1 minute to 30 minutes across logic synthesis, technology mapping, placement, and routing) and large memory usage (10-60 MB per step) — not suitable for dynamic on-chip execution. Solution: develop a custom CAD-oriented FPGA (WCLA – Warp Configurable Logic Architecture) through careful simultaneous design of the FPGA and CAD: FPGA features are evaluated for their impact on CAD, and architecture features are added for SW kernels. This enables fast, lean JIT FPGA compilation tools (under 1 s to about 10 s per step, in 0.5-3.6 MB of memory).

14 Warp Configurable Logic Architecture (WCLA)
Data address generators (DADG) and loop control hardware (LCH) provide fast loop execution and support memory accesses with regular access patterns. An integrated 32-bit multiplier-accumulator (MAC) covers an operation frequently found within critical SW kernels. (The WCLA, with its registers and configurable logic fabric, sits alongside the ARM, I$, D$, profiler, and DPM. DATE’04)

15 Warp Configurable Logic Architecture (WCLA)
CAD-specialized configurable logic fabric. Simplified switch matrices: directly connected to the adjacent CLB, with all nets routed using only a single pair of channels, allowing efficient routing. Simplified CLBs: two 3-input, 2-output LUTs, with each CLB connected to the adjacent CLB to simplify routing of carry chains. Currently being prototyped by Intel (scheduled for 2006 Q3 shuttle). (DATE’04)

16 Warp Processing – Logic Synthesis
ROCM – Riverside On-Chip Minimizer: a two-level minimization tool combining approaches from Espresso-II [Brayton et al., 1984][Hassoun & Sasao, 2002] and Presto [Svoboda & White, 1979]. A single expand phase over the on-set and dc-set, instead of multiple expand/reduce iterations, eliminates the need to compute the off-set and thus reduces memory usage. Results are on average only 2% larger than the optimal solution. (Irredundant On-Chip Logic Minimization, DAC’03; A Codesigned On-Chip Logic Minimizer, CODES+ISSS’03)

17 Warp Processing – Technology Mapping
ROCTM – Technology Mapping/Packing: decompose the hardware circuit into a DAG whose nodes correspond to basic 2-input logic gates (AND, OR, XOR, etc.). A hierarchical bottom-up graph clustering algorithm then performs a breadth-first traversal combining nodes to form single-output LUTs, combines LUTs with common inputs to form the final 2-output LUTs, and packs pairs of LUTs in which the output of one LUT is an input to the second. (Dynamic Hardware/Software Partitioning: A First Approach, DAC’03; A Configurable Logic Fabric for Dynamic Hardware/Software Partitioning, DATE’04)

18 Warp Processing – Placement
ROCPLACE – Placement: a dependency-based positional placement algorithm. Identify the critical path and place its nodes in the center of the configurable logic fabric, then use the dependencies between the remaining CLBs to determine their placement, attempting to use adjacent-CLB routing whenever possible. (Dynamic Hardware/Software Partitioning: A First Approach, DAC’03; A Configurable Logic Fabric for Dynamic Hardware/Software Partitioning, DATE’04)

19 Warp Processing – Routing
ROCR – Riverside On-Chip Router: requires much less memory than VPR because its resource graph is smaller; 10x faster execution time than VPR (timing driven); produces circuits with a critical path 10% shorter than VPR (routability driven). (Dynamic FPGA Routing for Just-in-Time FPGA Compilation, DAC’04)

20 Experiments with Warp Processing
Warp processor: ARM/MIPS plus our fabric, with the Riverside on-chip CAD tools mapping the critical region to the configurable fabric; synthesis and JIT FPGA compilation require less than 2 seconds on a lean embedded processor. Traditional HW/SW partitioning baseline: ARM/MIPS plus a Xilinx Virtex-E FPGA, with the software manually partitioned into VHDL and synthesized using Xilinx ISE 4.1.

21 Warp Processors Performance Speedup (Most Frequent Kernel Only)
Average kernel speedup of 41x, vs. 21x for the Virtex-E: the WCLA's simplicity results in faster hardware circuits.

22 Warp Processors Performance Speedup (Overall, Multiple Kernels)
Assuming a 100 MHz ARM, with the fabric clocked at the rate determined by synthesis: average speedup of 7.4x, energy reduction of 38% - 94%.

23 Warp Processors - Results Execution Time and Memory Requirements
Xilinx ISE: 60 MB memory. DPM (CAD): 3.6 MB, 0.2 s; DPM (CAD) on a 75 MHz ARM7: 3.6 MB, 1.4 s.

24 Outline Intro and Background: Warp Processors
Work in progress under SRC 3-yr grant Parallelized-computation memory access Deriving high-level constructs from binaries Case studies Using commercial FPGA fabrics Application-specific FPGA Other ongoing related work Configurable cache tuning

25 1. Parallelized-Computation Memory Access
Problem: kernel computation can be parallelized, but may then hit a memory bottleneck — parallelism can’t be exploited if data isn’t available (e.g., computing B[i] = A[i] + c[i] and B[i+1] = A[i+1] + c[i+1] concurrently requires multiple reads per cycle). Solution: use more advanced memory/compilation methods.

26 1. Parallelized-Computation Memory Access
Method 1: distribute data from main memory among the FPGA block RAMs, which are concurrently accessible, so memory accesses are parallelized (e.g., A[i] and A[i+1] fetched from different block RAMs in the same cycle, feeding parallel adders for B[i] = A[i] + c[i] and B[i+1] = A[i+1] + c[i+1]).
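A minimal software model of this banking idea, assuming (for illustration only) that bank selection is by index parity; the names and the two-bank choice are ours:

```c
#define N 8

/* Models two concurrently accessible block RAMs. */
static int bank0[N / 2], bank1[N / 2];

/* Distribute array elements across the two banks: even indices to
   bank0, odd indices to bank1. */
void distribute(const int *a, int n) {
    for (int i = 0; i < n; i++) {
        if (i % 2 == 0) bank0[i / 2] = a[i];
        else            bank1[i / 2] = a[i];
    }
}

/* Each loop iteration models one hardware "cycle": both banks are read
   in parallel, so two additions b[i] = A[i] + c[i] proceed at once. */
void add_pairs(const int *c, int *b, int n) {
    for (int i = 0; i + 1 < n; i += 2) {
        b[i]     = bank0[i / 2] + c[i];      /* A[i]   + c[i]   */
        b[i + 1] = bank1[i / 2] + c[i + 1];  /* A[i+1] + c[i+1] */
    }
}
```

In actual hardware the two reads happen in the same clock cycle because each block RAM has its own port; the C loop merely models which elements must live in which bank for that to work.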

27 1. Parallelized-Computation Memory Access
Method 2: Smart Buffers (Najjar’2004) — a memory structure optimized for the application’s access patterns that takes advantage of data reuse; speedups of 2x to 10x compared to hardware without smart buffers. (Diagram: a controller fills the smart buffer from RAM; successive iteration windows over A[0..8] overlap, so buffered elements are reused across windows and killed once no longer needed, while the datapath consumes each window.)
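A sketch of the data-reuse idea for a 3-element sliding window (the function, the fetch counter, and the window size are our illustrative assumptions, not the actual smart-buffer design):

```c
static int fetches = 0;  /* count RAM reads to make the reuse visible */

static int ram_read(const int *ram, int i) { fetches++; return ram[i]; }

/* Slide a 3-element window over ram[0..n-1]; each element is fetched
   from RAM exactly once and reused across the overlapping windows,
   while the "datapath" consumes a full window per iteration. */
void smart_buffer_sum3(const int *ram, int n, int *out) {
    int w0 = ram_read(ram, 0);
    int w1 = ram_read(ram, 1);
    int w2 = ram_read(ram, 2);
    for (int i = 0; i + 2 < n; i++) {
        out[i] = w0 + w1 + w2;      /* datapath consumes the window   */
        w0 = w1; w1 = w2;           /* shift: older elements reused   */
        if (i + 3 < n) w2 = ram_read(ram, i + 3);  /* fetch one new   */
    }
}
```

Without the buffer, each of the three window positions over a 5-element array would issue 3 reads (9 total); with it, only 5 reads occur — one per element — which is the reuse a smart buffer exploits.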

28 2. Deriving high-level constructs from binaries
Problem: some binary features are unsuitable for synthesis — e.g., loops unrolled by an optimizing compiler, or pointers. Previous decompilation techniques didn’t consider them: such features are fine for sw-to-sw translation, but not for synthesis. Solution – new decompilation techniques: convert pointers to arrays, reroll loops, and others. For example, loop unrolling turns
for (int i = 0; i < 3; i++) accum += a[i];
into a straight-line sequence of loads and adds:
Ld reg2, 100(0)
Add reg1, reg1, reg2
Ld reg2, 100(1)
Add reg1, reg1, reg2
Ld reg2, 100(2)
Add reg1, reg1, reg2
and loop rerolling recovers
for (int i = 0; i < 3; i++) reg1 += array[i];
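The equivalence that makes rerolling safe can be stated directly in C (an illustrative sketch; the function names are ours): the unrolled straight-line form and the recovered loop compute the same value, so the decompiler may substitute one for the other before synthesis.

```c
/* Straight-line form, as emitted by an unrolling compiler. */
int sum_unrolled(const int *a) {
    int accum = 0;
    accum += a[0];
    accum += a[1];
    accum += a[2];
    return accum;
}

/* Loop form, as recovered by rerolling: same loads and adds,
   expressed with an induction variable the synthesis tool can use. */
int sum_rerolled(const int *a) {
    int accum = 0;
    for (int i = 0; i < 3; i++) accum += a[i];
    return accum;
}
```

The loop form matters for synthesis because it exposes the induction variable and a regular access pattern, which hardware such as the WCLA's data address generators can exploit.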

29 2. Deriving high-level constructs from binaries
A recent study examined decompilation robustness in the presence of compiler optimizations and across instruction sets: energy savings of 77%/76%/87% for MIPS/ARM/Microblaze. (ICCAD’05, DATE’04)

30 3. Case Studies Compare warp processing (binary level) versus compiler-based (C level) partitioning for real examples. H.264 study (with Freescale): highly-optimized proprietary C code; the results of the 2-month study were competitive. We also learned that simple C-coding guidelines improve synthesis, whether done from binary or source; we are presently developing such guidelines. More examples: IBM (server), others...

31 4. Using Commercial FPGA Fabrics
Can warp processing utilize commercial FPGAs? Approach 1: a “Virtual FPGA” — map our warp fabric onto a commercial fabric (collaboration with Xilinx). Initial results: 6x performance overhead, 100x area overhead; the main problem is routing. Investigating better methods (one-to-one mapping).

32 5. Application-Specific FPGA
Commercial FPGAs are intended for ASIC prototyping; supporting a huge range of possible designs, that generality causes a loss of efficiency. We propose to investigate application-specific FPGAs: placed on an ASIC next to custom circuits and a microprocessor, tuned to a particular circuit yet still general and reprogrammable, supporting late changes, modifications to standards, etc. Customize CLB size, number of inputs, and routing resources; add coarse-grained components — multiply-accumulate, RAM, etc., as in a DSP-tuned fabric of MACs, RAM, and CLBs versus a general all-CLB fabric; use a retargetable CAD tool. Expected results: smaller, faster FPGAs.

33 5. Application-Specific FPGA
Initial results: Performance improvements up to 300%, area reductions up to 90%.

34 Outline Intro and Background: Warp Processors
Work in progress under SRC 3-yr grant Parallelized-computation memory access Deriving high-level constructs from binaries Case studies Using commercial FPGA fabrics Application-specific FPGA Other ongoing related work Configurable cache tuning

35 Configurable Cache Tuning
Developed a runtime configurable cache (ISCA 2003) and configuration heuristics (DATE 2004, ISLPED 2005), achieving 60% memory-access energy savings; the number of ways, the line size, and the total size are all configurable. Present focus: dynamic tuning. (ISLPED 2005)

36 Summary Basic warp technology Ongoing work (SRC)
Basic warp technology, developed under NSF and 1-year CSR grants from SRC, uses binary synthesis and FPGAs. Conclusion: a feasible technology with much potential. Ongoing work (SRC): improve and validate the effectiveness of binary synthesis, and examine FPGA implementation issues. Extensive future work remains to develop robust warp technology. Frank Vahid, UC Riverside

