Data Locality
CS 524 – High-Performance Computing

Motivating Example: Vector Multiplication

- 1 GHz processor (1 ns clock), 4 instructions per cycle, 2 multiply-add units (4 FPUs)
- 100 ns memory latency, no cache, 1-word transfer size
- Peak performance: 4 FLOPs / 1 ns = 4 GFLOPS
- Performance on vector multiply (the loop being modeled is sketched below):
  - Fetching one word from each vector takes 2 * 100 ns = 200 ns, after which 2 floating-point operations occur
  - Performance = 2 FLOPs / 200 ns = 10 MFLOPS
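For concreteness, the computation being modeled is an element-wise multiply-accumulate over two vectors. A minimal C sketch (the function name and length parameter are illustrative, not from the slides):

    /* Vector multiply (dot product): 2 FLOPs per iteration, but each
       iteration must fetch a[i] and b[i] from memory (100 ns each here). */
    double dot(const double *a, const double *b, int n) {
        double sum = 0.0;
        for (int i = 0; i < n; i++)
            sum += a[i] * b[i];   /* 1 multiply + 1 add = 2 FLOPs */
        return sum;
    }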

Motivating Example: Vector Multiplication (with cache)

- Assume a 32 K cache with 1 ns latency, 1-word cache lines, and an optimal replacement policy
- Multiply two 1 K vectors (the arithmetic is spelled out below):
  - No. of floating-point operations = 2 K (2n, where n = vector length)
  - Latency to fill cache = 2 K words * 100 ns = 200 microseconds
  - Time to execute = 2 K FLOPs / 4 FLOPs per ns = 0.5 microseconds
  - Performance = 2 K FLOPs / (200 + 0.5) microseconds = about 10 MFLOPS
- No increase in performance! Why? Each word is used only once, so the latency of bringing data into cache still dominates.
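Spelled out (with 1 K = 1024, and assuming the fill and compute phases do not overlap, as the slide implies):

    time = 2048 words * 100 ns (cache fill) + (2048 / 4) * 1 ns (compute)
         = 204.8 us + 0.5 us
    rate = 2048 FLOPs / 205.3 us = about 10 MFLOPS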

Motivating Example: Matrix-Matrix Multiply

- Assume the same memory system characteristics as on the previous two slides
- Multiply two 32 x 32 matrices:
  - No. of floating-point operations = 64 K (2n³, where n = matrix dimension)
  - Latency to fill cache = 2 K words * 100 ns = 200 microseconds
  - Time to execute = 64 K FLOPs / 4 FLOPs per ns = 16 microseconds
  - Performance = 64 K FLOPs / 216 microseconds = about 300 MFLOPS
- Repeated use of data in cache helps improve performance: temporal locality (the reuse factor is quantified below)
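The gain comes from data reuse (arithmetic intensity): each word loaded now supports many operations. A quick check:

    FLOPs per word loaded = 2n³ / 2n² = n = 32   (matrix-matrix multiply)
    versus 2n / 2n = 1                           (vector multiply)

The same memory traffic sustains roughly 32x more computation, which matches the jump from 10 to about 300 MFLOPS.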

Motivating Example: Vector Multiply (4-word cache lines)

- Assume the same memory system characteristics as on the first slide (no cache)
- Assume a 4-word transfer size (cache line size)
- Multiply two vectors:
  - Each access brings in four words of a vector
  - In 200 ns, four words of each vector are brought in
  - 8 floating-point operations can be performed in 200 ns
  - Performance: 8 FLOPs / 200 ns = 40 MFLOPS
- Performance increased by the effective increase in memory bandwidth: spatial locality

Locality of Reference

- Application data space is often too large to fit entirely in cache
- The key is locality of reference
- Types of locality:
  - Register locality
  - Cache-temporal locality
  - Cache-spatial locality
- Attempt (in order of preference) to:
  - Keep all data in registers
  - Order algorithms to do all operations on a data element before it is overwritten in cache
  - Order algorithms so that successive operations are on physically contiguous data
- A small example of these preferences appears below
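A minimal C sketch of the first and third preferences, applied to computing row sums of a row-major matrix (the names and size N are illustrative); temporal locality in cache is the subject of the blocking slides later:

    #define N 1024
    /* Row sums of a row-major N x N matrix. */
    void row_sums(const double a[N][N], double s[N]) {
        for (int i = 0; i < N; i++) {
            double acc = 0.0;            /* register locality: accumulate in a scalar */
            for (int j = 0; j < N; j++)  /* spatial locality: unit-stride sweep of row i */
                acc += a[i][j];
            s[i] = acc;                  /* write each result exactly once */
        }
    }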

Single Processor Performance Enhancement

- Two fundamental issues:
  - Adequate operation-level parallelism (to keep superscalar, pipelined processor units busy)
  - Minimizing memory access costs (locality in cache)
- Three useful loop transformations to improve locality of reference:
  - Loop permutation
  - Loop blocking (tiling)
  - Loop unrolling

Access Stride and Spatial Locality

- Access stride: separation between successively accessed memory locations
  - Unit access stride maximizes spatial locality (only one miss per cache line)
- 2-D arrays have different linearized representations in C and FORTRAN:
  - FORTRAN's column-major representation favors column-wise access of data
  - C's row-major representation favors row-wise access of data
- Example: a 3 x 3 array with rows (a b c), (d e f), (g h i) is laid out in memory as a b c d e f g h i in row-major order (C) and as a d g b e h c f i in column-major order (FORTRAN); a sketch of the two traversal orders follows below
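A minimal C sketch of the two traversal orders over the same row-major array (the array name and size are illustrative). The row-wise loop has unit access stride; the column-wise loop has stride N and misses on nearly every access once a column no longer fits in cache:

    #define N 2048
    double a[N][N];                /* row-major in C: a[i][j] lives at offset i*N + j */

    double sum_row_wise(void) {    /* stride 1: one miss per cache line */
        double s = 0.0;
        for (int i = 0; i < N; i++)
            for (int j = 0; j < N; j++)
                s += a[i][j];
        return s;
    }

    double sum_column_wise(void) { /* stride N: jumps N doubles between accesses */
        double s = 0.0;
        for (int j = 0; j < N; j++)
            for (int i = 0; i < N; i++)
                s += a[i][j];
        return s;
    }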

Matrix-Vector Multiply (MVM): Dot-Product Form

do i = 1, N
  do j = 1, N
    y(i) = y(i) + A(i,j)*x(j)
  end do
end do

- Total no. of references = 4N²
- Two questions:
  - Can we predict the miss ratio of different variations of this program for different cache models?
  - What transformations can we do to improve performance?
- (Figure: y = A x; i indexes rows of A, j indexes columns.)

Cache Model

- Fully associative cache (so no conflict misses)
- LRU replacement algorithm (ensures temporal locality)
- Cache size:
  - Large cache model: no capacity misses
  - Small cache model: misses depend on problem size
- Cache block (line) size:
  - 1 floating-point number, or
  - b floating-point numbers

MVM: Dot-Product Form (Scenario 1)

- Loop order: i-j
- Cache model:
  - Small cache: assume the cache holds fewer than (2N + 1) numbers
  - Cache block size: 1 floating-point number
- Cache misses:
  - Matrix A: N² misses (cold misses)
  - Vector x: N cold misses + N(N-1) capacity misses
  - Vector y: N cold misses
- Miss ratio = (2N² + N)/4N² → 0.5

MVM: Dot-Product Form (Scenario 2)

- Cache model:
  - Large cache: the cache holds more than (2N + 1) numbers
  - Cache block size: 1 floating-point number
- Cache misses:
  - Matrix A: N² cold misses
  - Vector x: N cold misses
  - Vector y: N cold misses
- Miss ratio = (N² + 2N)/4N² → 0.25

MVM: Dot-Product Form (Scenario 3)

- Cache block size: b floating-point numbers
- Matrix access order: row-major access (C)
- Cache misses (small cache):
  - Matrix A: N²/b cold misses
  - Vector x: N/b cold misses + N(N-1)/b capacity misses
  - Vector y: N cold misses
  - Miss ratio = 1/(2b) + 1/(4N) → 1/(2b)
- Cache misses (large cache):
  - Matrix A: N²/b cold misses
  - Vector x: N/b cold misses
  - Vector y: N/b cold misses
  - Miss ratio = (1/4 + 1/(2N)) * (1/b) → 1/(4b)

MVM: SAXPY Form

do j = 1, N
  do i = 1, N
    y(i) = y(i) + A(i,j)*x(j)
  end do
end do

- Total no. of references = 4N²
- (Figure: y = A x; the column index j is now the outer loop.)

MVM: SAXPY Form (Scenario 4)

- Cache block size: b floating-point numbers
- Matrix access order: row-major access (C)
- Cache misses (small cache):
  - Matrix A: N² cold misses
  - Vector x: N cold misses
  - Vector y: N/b cold misses + N(N-1)/b capacity misses
  - Miss ratio = 0.25(1 + 1/b) + 1/(4N) → 0.25(1 + 1/b)
- Cache misses (large cache):
  - Matrix A: N² cold misses
  - Vector x: N/b cold misses
  - Vector y: N/b cold misses
  - Miss ratio = 1/4 + 1/(2Nb) → 1/4

Loop Blocking (Tiling)

- Loop blocking enhances temporal locality by ensuring that data remain in cache until all operations are performed on them
- Concept:
  - Divide (tile) the loop computation into chunks or blocks that fit in cache
  - Perform all operations on a block before moving to the next block
- Procedure:
  - Add outer loop(s) to tile the inner loop computation
  - Select the block size so that you have a large cache model (no capacity misses, temporal locality maximized)

MVM: Blocked

do bi = 1, N/B
  do bj = 1, N/B
    do i = (bi-1)*B+1, bi*B
      do j = (bj-1)*B+1, bj*B
        y(i) = y(i) + A(i,j)*x(j)
      end do
    end do
  end do
end do

- (Figure: A tiled into B x B blocks; x and y into B-length segments.)

MVM: Blocked (Scenario 5)

- Cache size greater than 2B + 1 floating-point numbers
- Cache block size: b floating-point numbers
- Matrix access order: row-major access (C)
- Cache misses per block:
  - Matrix A: B²/b cold misses
  - Vector x: B/b
  - Vector y: B/b
  - Miss ratio = (1/4 + 1/(2B)) * (1/b) → 1/(4b)
- Lesson: proper loop blocking eliminates capacity misses. Performance becomes essentially independent of cache size (as long as the cache holds more than 2B + 1 floating-point numbers). A C version of the blocked loop is sketched below.
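For reference, a C (row-major, 0-based) sketch of the same blocked loop nest; B is the tile size and, as on the slide, is assumed to divide n evenly:

    /* Blocked matrix-vector multiply, y = y + A*x, row-major A. */
    void mvm_blocked(int n, int B, const double *A, const double *x, double *y) {
        for (int bi = 0; bi < n; bi += B)              /* block of y entries   */
            for (int bj = 0; bj < n; bj += B)          /* block of x entries   */
                for (int i = bi; i < bi + B; i++)      /* y[bi..bi+B-1] cached */
                    for (int j = bj; j < bj + B; j++)  /* x[bj..bj+B-1] cached */
                        y[i] += A[i * n + j] * x[j];
    }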

Access Stride Analysis of MVM

c Dot-product form
do i = 1, N
  do j = 1, N
    y(i) = y(i) + A(i,j)*x(j)
  end do
end do

c SAXPY form
do j = 1, N
  do i = 1, N
    y(i) = y(i) + A(i,j)*x(j)
  end do
end do

Access strides for the arrays:

                      C    Fortran
  Dot-product form
    A                 1    N
    x                 1    1
    y                 0    0
  SAXPY form
    A                 N    1
    x                 0    0
    y                 1    1

Loop Permutation

- Changing the order of nested loops can improve spatial locality:
  - Choose the loop ordering that minimizes the miss ratio, or
  - Choose the loop ordering that minimizes array access strides
- Loop permutation issues:
  - Validity: the reordering must not change the meaning of the code
  - Performance: does the reordering improve performance?
  - Array bounds: what should the new array bounds be?
  - Procedure: is there a mechanical way to perform loop permutation?
- These are of primary concern in compiler design. We will discuss some of these issues when we cover dependences, from an application software developer's perspective. A sketch of a valid interchange follows below.
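As a concrete illustration of validity (my example, not from the slides): the MVM loops may be interchanged because each update y(i) = y(i) + A(i,j)*x(j) is independent of the others, and each y(i) still accumulates its terms in the same j order, so both orderings compute identical results:

    void mvm_ij(int n, const double *A, const double *x, double *y) {
        for (int i = 0; i < n; i++)       /* dot-product form: unit stride on A in C */
            for (int j = 0; j < n; j++)
                y[i] += A[i * n + j] * x[j];
    }

    void mvm_ji(int n, const double *A, const double *x, double *y) {
        for (int j = 0; j < n; j++)       /* SAXPY form: stride-n access to A in C */
            for (int i = 0; i < n; i++)
                y[i] += A[i * n + j] * x[j];
    }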

Matrix-Matrix Multiply (MMM)

for (i = 0; i < N; i++) {
  for (j = 0; j < N; j++) {
    for (k = 0; k < N; k++)
      C[i][j] = C[i][j] + A[k][i]*B[k][j];
  }
}

Access strides (C compiler: row-major) for the six loop orderings; a sketch of the best ordering follows below:

            ijk  jik  ikj  kij  jki  kji
  C(i,j)     0    0    1    1    N    N
  A(k,i)     N    N    0    0    1    1
  B(k,j)     N    N    1    1    0    0

(The slide also tabulated measured MFLOPS for each ordering.)
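By the table, the ikj and kij orderings make every access stride 0 or 1. A sketch of the ikj variant (the hoisted scalar temporary is my addition, a common companion optimization, not shown on the slide):

    /* ikj ordering for C[i][j] += A[k][i]*B[k][j]:
       A[k][i] is invariant in the inner loop (stride 0), while C[i][j]
       and B[k][j] are swept with unit stride. */
    void mmm_ikj(int n, const double *A, const double *B, double *C) {
        for (int i = 0; i < n; i++)
            for (int k = 0; k < n; k++) {
                double a = A[k * n + i];   /* held in a register */
                for (int j = 0; j < n; j++)
                    C[i * n + j] += a * B[k * n + j];
            }
    }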

Loop Unrolling

- Reduce the number of iterations of a loop, but add statement(s) to the loop body to do the work of the missing iterations
- Increases the amount of operation-level parallelism in the loop body
- Outer-loop unrolling changes the order of access of memory elements and can reduce the number of memory accesses and cache misses

Original:
do i = 1, 2n
  do j = 1, m
    loop-body(i,j)
  end do
end do

Outer loop unrolled by 2:
do i = 1, 2n, 2
  do j = 1, m
    loop-body(i,j)
    loop-body(i+1,j)
  end do
end do

MVM: Dot-Product 4-Outer Unrolled Form

/* Assume n is a multiple of 4 */
for (i = 0; i < n; i += 4) {
  for (j = 0; j < n; j++) {
    y[i]   = y[i]   + A[i][j]*x[j];
    y[i+1] = y[i+1] + A[i+1][j]*x[j];
    y[i+2] = y[i+2] + A[i+2][j]*x[j];
    y[i+3] = y[i+3] + A[i+3][j]*x[j];
  }
}

(Performance table: MFLOPS for 32 x 32 and 1024 x 1024 problem sizes.)

MVM: SAXPY 4-Outer Unrolled Form

/* Assume n is a multiple of 4 */
for (j = 0; j < n; j += 4) {
  for (i = 0; i < n; i++) {
    y[i] = y[i] + A[i][j]*x[j] + A[i][j+1]*x[j+1]
                + A[i][j+2]*x[j+2] + A[i][j+3]*x[j+3];
  }
}

(Performance table: MFLOPS for 32 x 32 and 1024 x 1024 problem sizes; on the machine "suraj": 3.5 and 8.6.)

Summary

- Loop permutation
  - Enhances spatial locality
  - C and FORTRAN compilers linearize matrices differently, so the same loop order yields different access patterns and different performance
  - Performance increases when the inner loop index has the minimum access stride
- Loop blocking
  - Enhances temporal locality
  - Ensure that 2B is less than the cache size
- Loop unrolling
  - Enhances superscalar, pipelined operation