Automatic Transformation and Optimization of Applications on GPUs and GPU Clusters
PhD Oral Defense: Wenjing Ma
Advisor: Dr. Gagan Agrawal
The Ohio State University
March 16, 2011
Outline of Contents
Motivation
– Accelerators, GPGPU, and GPU clusters
– Difficulty of GPU programming
Framework and Approaches
Code generation for data mining applications
– Translation system for enabling data mining applications on GPUs
– Automatic translation of data mining applications from MATLAB to GPUs
– Automatic code generation for data mining on clusters with GPU support
– Arranging data on shared memory with an ILP solver
Code optimization for tensor contractions
– Auto-tuning approach for tensor contractions on GPUs
– Loop transformation for tensor contraction sequences on multi-level memory architectures
Introduction
Accelerators, GPGPU, and GPU clusters
– Multi-core and many-core architectures are increasingly popular in high-performance computing: GPUs, the Cell processor, FPGAs
– GPUs offer a good performance/price ratio
Difficulty of programming
– How do we program a cluster with accelerators on each node?
Our Approach
Provide high-level support for programming emerging high-end configurations
Use effective and simple optimization strategies
Focus on specific application classes:
– Data mining applications
– Tensor contraction expressions
Shared Memory on GPUs
Features of shared memory on a GPU:
– Small in size
– Software-controlled
– Much faster than device memory
A strategy is needed to arrange data in shared memory:
– Arranging by hand is time-consuming and rarely optimal
– Previous work relied on intuitive solutions
An Example of Shared Memory Usage

    __global__ void Kernel_function(float *A, float *C, ...) {
        __shared__ float s_C[r * NUM_THREADS];
        __shared__ float s_A[r * NUM_THREADS];
        for (int i = 0; i < n; i += NUM_THREADS) {
            for (int j = 0; j < r; j++)
                /* load A from device memory into s_A */
            for (int j = 0; j < m; j++)
                for (int k = 0; k < r; k++)
                    /* load C from device memory into s_C */
            ......
        }
        /* write C in s_C back to device memory */
    }
Problem Formulation for Shared Memory Arrangement
What to consider:
– A kernel function (with a number of basic blocks)
– Candidate data units: an array, a section of an array, or an element of an array
– The live range of each variable
Goal: determine the basic block in which each variable is allocated to shared memory
– assign_point[i][k]: variable i is assigned in basic block k
Integer Linear Programming
Linear programming:
– Objective function: maximize $z = c^T x$
– Constraints: $Ax \leq b$
– Solution: values of $x$
Integer linear programming is the special case in which all unknown variables are integers (restricted to {0, 1} in our case). It is solvable for problems of reasonable size.
Integer Programming for Shared Memory Arrangement
Objective function: maximize shared memory usage while minimizing data transfer between memory hierarchy levels:

$$\max\; z = \sum_{i \in \{1..nVar\}} \; \sum_{k \in \{1..nLive[i]\}} Agg\_SMref_i^k \;-\; \sum_{i \in \{1..nVar\}} \; \sum_{k \in \{1..nLive[i]\}} Total\_memcopy_i^k$$
Integer Programming for Shared Memory Arrangement (cont'd)
The terms of the objective function are defined as:

$$Agg\_SMref_i^k = \sum_{j \in live\_blocks[i][k]} Is\_assigned_i^j \times Refs_i^j \times iters_j$$

$$Total\_memcopy_i^k = \sum_{j \in live\_blocks[i][k]} Data\_trans_i^j \times iters_j$$

$$Data\_trans_i^j = \begin{cases} 2 \times size\_alloc_i^j & \text{if } Access_i^k = \text{readwrite} \\ 0 & \text{if } Access_i^k = \text{temp} \\ size\_alloc_i^j & \text{otherwise} \end{cases}$$
An Example Showing size_alloc

    for (int i = 0; i < n; i++)
        for (int j = 0; j < m; j++)
            for (int k = 0; k < r; k++)
                C[k] += A[i][k] - B[j][k];
    ......

Here size_alloc depends on the granularity of the allocated unit: a single array element gives size_alloc = 1, a row of length r (such as C, or one row of A) gives size_alloc = r, and the whole array B gives size_alloc = r*m.
Integer Programming for Shared Memory Arrangement
Constraints:
– The total allocation must not exceed the shared memory limit at any time:

$$\sum_{i \in live\_list[j]} Is\_assigned_i^j \times size\_alloc_i^j \leq limit$$

– At most one assign_point is 1 within each live range:

$$\sum_{j \in live\_blocks[i][k]} assign\_point_i^j \leq 1$$
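To make the formulation concrete, here is a minimal sketch of how such a 0/1 program could be handed to an off-the-shelf ILP solver. GLPK's C API is used purely as an illustration; the thesis does not prescribe a particular solver, and the objective coefficients, size_alloc values, and memory limit below are hypothetical:

    #include <stdio.h>
    #include <glpk.h>

    /* Hypothetical instance: three binary assign_point variables, objective
       coefficients derived from Agg_SMref - Total_memcopy, and one
       shared-memory capacity constraint. */
    int main(void) {
        glp_prob *lp = glp_create_prob();
        glp_set_obj_dir(lp, GLP_MAX);

        glp_add_cols(lp, 3);
        double obj[] = {0.0, 5.0, 3.0, 4.0};        /* 1-indexed, assumed values */
        for (int j = 1; j <= 3; j++) {
            glp_set_col_kind(lp, j, GLP_BV);         /* binary (0/1) variable */
            glp_set_obj_coef(lp, j, obj[j]);
        }

        /* One row: sum of size_alloc * assign_point <= shared memory limit */
        glp_add_rows(lp, 1);
        glp_set_row_bnds(lp, 1, GLP_UP, 0.0, 16384.0);
        int ia[] = {0, 1, 1, 1}, ja[] = {0, 1, 2, 3};
        double ar[] = {0.0, 9.0, 2048.0, 768.0};     /* assumed size_alloc values */
        glp_load_matrix(lp, 3, ia, ja, ar);

        glp_iocp parm;
        glp_init_iocp(&parm);
        parm.presolve = GLP_ON;                      /* presolve the LP relaxation */
        glp_intopt(lp, &parm);

        for (int j = 1; j <= 3; j++)
            printf("assign_point %d = %g\n", j, glp_mip_col_val(lp, j));
        glp_delete_prob(lp);
        return 0;
    }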
An Example

    for (int i = 0; i < n; i++)
        for (int j = 0; j < m; j++)
            for (int k = 0; k < r; k++)
                C[k] += A[i][k] - B[j][k];
    ......

Problem parameters: A is n*r, B is m*r, C has r elements, with n = 2048, m = 3, r = 3, and NUM_THREADS = 256. In assign_point[i][j], i denotes variable i and j denotes basic block j; variables 0, 1, 2 correspond to A, B, C in the code. The integer programming solver returns:

    assign_point[0][1] = 1;
    assign_point[1][0] = 1;
    assign_point[2][0] = 1;
    /* all other elements of assign_point are 0 */
An Example (cont'd)
Generated code:

    __shared__ float s_B[m][r];
    __shared__ float s_C[r * NUM_THREADS];
    __shared__ float s_A[r * NUM_THREADS];
    /* load B into s_B */
    for (int i = 0; i < n; i += NUM_THREADS) {
        for (int j = 0; j < r; j++)
            s_A[tid * r + j] = A[tid + i][j];
        for (int j = 0; j < m; j++)
            for (int k = 0; k < r; k++)
                s_C[tid * r + k] += s_A[tid * r + k] - s_B[j][k];
        ......
    }
    /* synchronize and combine the per-thread copies of C */

For reference, the original sequential loop:

    for (int i = 0; i < n; i++)
        for (int j = 0; j < m; j++)
            for (int k = 0; k < r; k++)
                C[k] += A[i][k] - B[j][k];
    ......
Suggesting Loop Transformation
Before transformation:

    for (int rc = 0; rc < nRowCl; rc++) {
        tempDis = 0;
        for (int c = 0; c < numCol; c++)
            tempDis = tempDis + data[r][c] * Acomp[rc][colCL[c]];
    }

After transformation (the c loop is moved outward so a column can be staged in shared memory):

    for (int rc = 0; rc < nRowCl; rc++)
        tempDis[rc] = 0;
    for (int c = 0; c < numCol; c++) {
        /* load into shared memory */
        for (int rc = 0; rc < nRowCl; rc++)
            tempDis[rc] += data[r][c] * Acomp[rc][colCL[c]];
    }
Experiment Results (figures: k-means and EM)
Experiment Results (figures: PCA and co-clustering)
Effect of Loop Transformation (figures: PCA and co-clustering)
Tensor Contraction on GPUs and Auto-tuning
Tensor contraction expressions:
– Motivated by the CCSD(T) part of NWChem
– Take the form of high-dimensional matrix multiplication
– Example: r[h1 h2 p3 p4] += t[h6 h7 h1 h2] * v[p3 p4 h6 h7]
Auto-tuning:
– Compile-time and run-time optimization
– Selects the best implementation for a given input problem
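As a point of reference, the example contraction corresponds to the following naive loop nest (a sketch in plain C over flattened row-major arrays; the extent variables H1, H2, P3, P4, H6, H7 are hypothetical names, and the optimized GPU kernels in this work replace this structure):

    /* r[h1,h2,p3,p4] += t[h6,h7,h1,h2] * v[p3,p4,h6,h7] */
    void contract(float *r, const float *t, const float *v,
                  int H1, int H2, int P3, int P4, int H6, int H7) {
        for (int h1 = 0; h1 < H1; h1++)
         for (int h2 = 0; h2 < H2; h2++)
          for (int p3 = 0; p3 < P3; p3++)
           for (int p4 = 0; p4 < P4; p4++) {
            float sum = 0.0f;
            /* contract over the common indices h6, h7 */
            for (int h6 = 0; h6 < H6; h6++)
             for (int h7 = 0; h7 < H7; h7++)
              sum += t[((h6*H7 + h7)*H1 + h1)*H2 + h2]
                   * v[((p3*P4 + p4)*H6 + h6)*H7 + h7];
            r[((h1*H2 + h2)*P3 + p3)*P4 + p4] += sum;
           }
    }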
Original Algorithm and Optimization
Original algorithm on the T10 GPU:
– Load input matrices into shared memory
– Index calculation: flattening and index combination
Optimizations for Fermi:
– Register tiling, exploiting the larger shared memory and register file on Fermi
– Modified index calculation order, since each thread has a different output/input access ratio
– Example: r[h1 h2 p4 p3] += t[h6 h7 h1 h2] * v[p3 p4 h6 h7]
Motivation for Auto-tuning Tensor Contractions on GPUs
Algorithms must be modified for different architectures, and different inputs favor different algorithm choices. Running time of two functions on Fermi with different index orders:

                Favor input   Favor output
    Ex 1 (a)    0.425         0.504
         (b)    0.487         0.584
         (c)    0.51          0.671
         (d)    0.681         0.881
    Ex 2 (A)    13.6          11
         (B)    105.5         41.5
         (C)    199.7         149.9
         (D)    27.1          22.6
Approaches to Auto-tuning
Existing approaches:
– Analytical cost models: hard to capture complex architectural features
– Empirical search: not practical when the search space is large
Our approach:
– Parametrizable micro-benchmarks
– Focus on the main features that affect performance
Auto-tuning Approach for Tensor Contractions on Different GPUs
Auto-tuning tool:
– Parametrizable micro-benchmarks
Auto-tuning parameters:
– Memory access pattern
– Kernel consolidation
Auto-tuning with Parametrizable Micro-benchmarks
(Framework diagram.) Target expressions and architecture features feed a micro-benchmark that explores a parameter space, producing execution models and thresholds; combined with the expression and problem size in the application, these select among the different implementations.
Micro-benchmark Evaluation for Memory Access
The access stride on device memory makes a big difference:
– Coalesced accesses: adjacent threads access contiguous words in device memory
– The L1 and L2 caches also matter
Mapping to tensor contractions:
– Index calculation order
– For an uncommon index: favor the input or the output
– For a common index: favor either of the inputs
Mapping to Tensor Contractions
r[h1 h2 p4 p3] += t[h6 h7 h1 h2] * v[p3 p4 h6 h7], calculated in input order with p3 as the inner loop:
– Accessing v: load v from device memory into shared memory; the stride between two threads with adjacent x index is 1
– Accessing r: update r in device memory; the stride between two threads with adjacent x index is h1*h2*p4
Micro-benchmark Evaluation for Memory Access
A simple micro-benchmark with three stride parameters (stride_x, stride_y, stride_iter), run on both Fermi and the T10:

    A[tid.x * stride_x + tid.y * stride_y + i * stride_iter]
    /* i is the iteration index */
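A complete version of this micro-benchmark kernel might look as follows (a minimal sketch; the iteration count, grid configuration, and the accumulation that keeps the compiler from eliminating the loads are assumptions, not the exact benchmark from the thesis):

    __global__ void stride_bench(const float *A, float *out,
                                 int stride_x, int stride_y,
                                 int stride_iter, int iters) {
        float acc = 0.0f;
        /* Each thread walks device memory with the parameterized strides. */
        for (int i = 0; i < iters; i++)
            acc += A[threadIdx.x * stride_x +
                     threadIdx.y * stride_y +
                     i * stride_iter];
        /* Write the accumulator so the loads are not optimized away. */
        out[threadIdx.y * blockDim.x + threadIdx.x] = acc;
    }

Timing this kernel across settings of (stride_x, stride_y, stride_iter) exposes the cost of uncoalesced access on each architecture.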
Micro-benchmark Evaluation for Kernel Consolidation
Launching multiple kernels at the same time:
– With data copy: overlaps computation and data transfer
– Without data copy: better utilization of the computing resources
A matrix-matrix multiplication kernel is used as the micro-benchmark.
Choice of Kernel Consolidation
Tightly coupled consolidation, for functions with large data movement cost:

    foreach (task i): copy data (host to device)
    foreach (task i): launch the kernel
    foreach (task i): copy data (device to host)

Loosely coupled consolidation, for functions with comparable computation and data movement:

    foreach (task i):
        copy data for task i (host to device)
        launch kernel(i)
        copy data for task i (device to host)
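The loosely coupled pattern maps naturally onto CUDA streams, which let one task's copies overlap another task's kernel. A minimal sketch under assumed names (the kernel mm_kernel, the buffer arrays, and the grid shape are placeholders, not the benchmark code; host buffers must be allocated with cudaMallocHost for the async copies to actually overlap):

    __global__ void mm_kernel(const float *in, float *out);  /* placeholder */

    void run_tasks(float **h_in, float **h_out, float **d_in, float **d_out,
                   size_t bytes, int ntasks, dim3 grid, dim3 block) {
        cudaStream_t *streams = new cudaStream_t[ntasks];
        for (int i = 0; i < ntasks; i++)
            cudaStreamCreate(&streams[i]);
        /* Each task's copy-in, kernel, and copy-out go into its own stream,
           so work from different tasks can overlap on the copy/compute engines. */
        for (int i = 0; i < ntasks; i++) {
            cudaMemcpyAsync(d_in[i], h_in[i], bytes,
                            cudaMemcpyHostToDevice, streams[i]);
            mm_kernel<<<grid, block, 0, streams[i]>>>(d_in[i], d_out[i]);
            cudaMemcpyAsync(h_out[i], d_out[i], bytes,
                            cudaMemcpyDeviceToHost, streams[i]);
        }
        cudaDeviceSynchronize();   /* wait for all tasks to finish */
        for (int i = 0; i < ntasks; i++)
            cudaStreamDestroy(streams[i]);
        delete[] streams;
    }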
Experiments: Memory Access for a Single Expression
Predicted versus actual running times (in ms) for the two index orders:

    Tile size   Predicted choice   Actual (in order)   Actual (out order)
    12          in order           0.241               0.295
    13          in order           0.312               0.302
    14          in order           0.425               0.504
    15          in order           0.487               0.584
    16          in order           0.51                0.671
    17          in order           0.681               0.881
    18          in order           1.078               1.471

    Tile size   Predicted choice   Actual (in order)   Actual (out order)
    12          out order          0.222               0.214
    13          out order          0.28                0.27
    14          out order          0.364               0.354
    15          out order          0.511               0.482
    16          out order          0.854               0.644
    17          equal              0.943               0.92
    18          equal              1.193               1.124
Experiments: Kernel Consolidation for a Single Expression (figures: micro-benchmark and real contraction)
Experiments: Running on Collections of Tensor Contractions (figures: T10 without data copy; Fermi without data copy; Fermi with data copy)
Motivation for Loop Fusion over Sequences of Tensor Contractions
A typical tensor contraction sequence:

$$T3(a,q,r,s) = \sum_p C4(p,a) \times A(p,q,r,s)$$
$$T2(a,b,r,s) = \sum_q C3(q,b) \times T3(a,q,r,s)$$
$$T1(a,b,c,s) = \sum_r C2(r,c) \times T2(a,b,r,s)$$
$$B(a,b,c,d) = \sum_s C1(s,d) \times T1(a,b,c,s)$$

We need to find the "fusion chains", subject to the memory limits at different levels; with a GPU, the memory limitation is stricter.
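To illustrate what fusion buys, here is a sketch (plain C, two contractions only, with hypothetical extents) of fusing the producer of T3 with its consumer, so that only a one-dimensional slice of T3 is ever materialized instead of the full intermediate:

    enum { NA = 8, NB = 8, NP = 8, NQ = 8, NR = 8, NS = 8 };  /* illustrative extents */
    float A[NP][NQ][NR][NS], C4[NP][NA], C3[NQ][NB];
    float T2[NA][NB][NR][NS], t3[NQ];

    void fused(void) {
        for (int a = 0; a < NA; a++)
          for (int r = 0; r < NR; r++)
            for (int s = 0; s < NS; s++) {
              /* produce one q-slice of T3 for this (a, r, s) */
              for (int q = 0; q < NQ; q++) {
                float sum = 0.0f;
                for (int p = 0; p < NP; p++)
                  sum += C4[p][a] * A[p][q][r][s];
                t3[q] = sum;
              }
              /* consume the slice immediately; T3 is never stored whole */
              for (int b = 0; b < NB; b++) {
                float sum = 0.0f;
                for (int q = 0; q < NQ; q++)
                  sum += C3[q][b] * t3[q];
                T2[a][b][r][s] = sum;
              }
            }
    }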
Tensor Contractions in a Multi-level Memory Hierarchy
Memory hierarchy in GPU clusters:
– α: disk
– β: global memory
– γ: local memory / GPU memory
None of the levels can be bypassed, and a higher level is smaller and faster than a lower level.
Loop Transformation for Tensor Contraction Sequences on a Multi-level Memory Architecture
Single tensor contraction:
– Memory and data movement cost on multi-level memory
Loop fusion for a sequence of tensor contractions:
– Conditions for fusion
– Fusion on a multi-level memory hierarchy
Single Tensor Contraction on a Multi-level Memory Hierarchy
Case 1: one array fits in memory
– X[x; y], Y[y; z], Z[x; z]; assume X fits in memory
– Memory cost: $N_x \times N_y + \min(N_x, N_y) + 1 \leq M_\beta$
– No redundant data movement
Case 2: no array fits in memory
– To minimize data movement, a preferred solution uses equal tile sizes, $T_i = T_j = T$
Multi-level memory hierarchy:
– Tile sizes are determined by the particular system parameters and problem sizes
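For the case where no array fits, the tiled schedule can be sketched as follows (plain C pseudocode; the tile size T is chosen to satisfy the memory bound at the given level and is assumed to divide the extents, and the staging comments stand in for explicit copies between levels):

    /* Z[x;z] += X[x;y] * Y[y;z], with only O(T*T) of each array
       resident at the fast memory level at any time. */
    void tiled_contract(int Nx, int Ny, int Nz, int T,
                        const float *X, const float *Y, float *Z) {
        for (int xt = 0; xt < Nx; xt += T)
          for (int zt = 0; zt < Nz; zt += T)
            for (int yt = 0; yt < Ny; yt += T)
              /* stage the X, Y, and Z tiles at the fast level here */
              for (int x = xt; x < xt + T; x++)
                for (int z = zt; z < zt + T; z++)
                  for (int y = yt; y < yt + T; y++)
                    Z[x * Nz + z] += X[x * Ny + y] * Y[y * Nz + z];
              /* write the Z tile back to the slower level */
    }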
Fusion Conditions
Consider a sequence:

$$I_1(d, c_2, \ldots, c_n) = I_0(d, c_1, \ldots, c_n) \times B_0(d, c_1, \ldots, c_n)$$
$$I_2(d, c_3, \ldots, c_n) = I_1(d, c_2, \ldots, c_n) \times B_1(d, c_2, \ldots, c_n)$$
$$\ldots$$
$$I_n(d) = I_{n-1}(d, c_n) \times B_{n-1}(d, c_n)$$

We only consider the case where communication dominates. The fusion conditions constrain:
– the common index of the first contraction, and
– the uncommon index of the smaller matrix in the second contraction,
so that each intermediate slice $|I_i(c_{i+1})|$ fits within the memory limit.
Fusion Conditions (cont'd)
The size of the matrix that is not eliminated also matters: each $|B_i|$ must fit within the memory bound, although the first B and the last B may be large. Tile sizes are then determined as in the single-contraction case.
Algorithm to Determine Fusion Chains
For a "fusable" contraction list, with one matrix fitting in memory in each contraction, the memory cost is accumulated by a recurrence f(i, j) over candidate chains, with f(i, j) = 0 if j < i. When the accumulated memory cost exceeds the memory limit, a split is made to break the fusion chain.
Fusion in a Multi-level Memory Hierarchy
Given the chains chosen at the lower level, determine subchains at the higher level:
– Reduced memory requirement for the β level
– The same procedure selects fusion chains, using f(i, j) = 0 if j < i and accepting a chain only if memory_γ(i, j) ≤ M_γ
Evaluation (figures: fusion at the global memory level and fusion at the disk level)
GREENRIDE: A Translation System for Enabling Data Mining Applications on GPUs
User input
Code analyzer:
– Analysis of variables (variable type and size)
– Analysis of reduction functions (sequential code from the user)
Code generator (generates the CUDA code and the C++ code that invokes the kernel function):
– Optimization
GREENRIDE Architecture
(System diagram.) The user supplies variable information, reduction functions, and optional functions. The code analyzer (built in LLVM) contains a variable analyzer and a code generator, and extracts variable access patterns and combination operations. The output is a host program (data copy and thread grid configuration) plus kernel functions, compiled into the executable.
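The reduction functions the user supplies are plain sequential C. As a hypothetical illustration (not taken from the thesis), a k-means-style reduction in the shape such a system accepts might look like:

    /* Sketch of a user-written sequential reduction: for each point,
       find the nearest center and accumulate into the reduction object
       (sums and counts), which the system later combines across threads. */
    void kmeans_reduce(const float *points, int npoints, int dim,
                       const float *centers, int k,
                       float *sums, int *counts) {
        for (int i = 0; i < npoints; i++) {
            int best = 0;
            float best_dist = 1e30f;
            for (int c = 0; c < k; c++) {
                float d = 0.0f;
                for (int j = 0; j < dim; j++) {
                    float diff = points[i * dim + j] - centers[c * dim + j];
                    d += diff * diff;
                }
                if (d < best_dist) { best_dist = d; best = c; }
            }
            for (int j = 0; j < dim; j++)   /* update the reduction object */
                sums[best * dim + j] += points[i * dim + j];
            counts[best]++;
        }
    }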
GMAT-DM: Automatic Transformation from MATLAB for GPUs
Pipeline: MATLAB code → (Octave parser) → C code → (GREENRIDE) → CUDA code
Transforms MATLAB code for the GPU:
– Converts MATLAB code to C
– Uses GREENRIDE to convert the C code to CUDA
Handles matrix manipulation:
– A modified metric for matrix multiplication chain ordering
– Function combination
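The matrix multiplication chain ordering that GMAT-DM modifies is, in its textbook form, a dynamic program over split points. The sketch below shows that classical baseline with the scalar-multiplication-count metric; the modified metric itself is not spelled out on the slide, so the cost expression here is only the classical placeholder:

    #include <limits.h>

    /* Classical matrix-chain ordering: matrix i has shape
       dims[i-1] x dims[i]; m[i][j] is the cheapest cost of multiplying
       the subchain i..j. */
    #define N 4                       /* number of matrices, illustrative */
    void chain_order(const long dims[N + 1], long m[N + 1][N + 1]) {
        for (int i = 1; i <= N; i++) m[i][i] = 0;
        for (int len = 2; len <= N; len++)
            for (int i = 1; i + len - 1 <= N; i++) {
                int j = i + len - 1;
                m[i][j] = LONG_MAX;
                for (int k = i; k < j; k++) {
                    /* classical metric: scalar multiplications */
                    long cost = m[i][k] + m[k + 1][j]
                              + dims[i - 1] * dims[k] * dims[j];
                    if (cost < m[i][j]) m[i][j] = cost;
                }
            }
    }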
AUTO-GC: Automatic Code Generation for FREERIDE with GPU Support
Adds support for GPU clusters. (System diagram.) The user supplies variable information, reduction functions, and optional functions. The code analyzer (variable analyzer and code generator) extracts variable information, parallel loops, access patterns, reduction objects, and combination operations, and emits FREERIDE code for the cluster of CPUs plus CUDA code for the GPU on each node.
Future Work
– Extend the code generation system for data mining applications to more structures
– Improve the ILP approach for shared memory arrangement and apply it to other architectures
– Include more parameters in the auto-tuning framework
– Extend the loop transformations to heterogeneous structures
– …
Conclusion
Code generation for data mining applications:
– A translation system for enabling data mining applications on GPUs
– Automatic translation of data mining applications from MATLAB to GPUs
– Automatic code generation for data mining on clusters with GPU support
– Arranging data on shared memory with an ILP solver
Code optimization for tensor contractions:
– An auto-tuning approach for tensor contractions on GPUs
– Loop transformation for tensor contraction sequences on multi-level memory architectures
Thank you!