
Introduction to CUDA Programming Profiler, Assembly, and Floating-Point Andreas Moshovos Winter 2009 Some material from: Wen-Mei Hwu and David Kirk NVIDIA Robert Strzodka, Dominik Göddeke, NVISION08 presentation

The CUDA Profiler
Both GUI and command-line versions exist
Non-GUI control through environment variables:
– CUDA_PROFILE: set to 1 or 0 to enable or disable the profiler
– CUDA_PROFILE_LOG: set to the name of the log file (defaults to ./cuda_profile.log)
– CUDA_PROFILE_CSV: set to 1 or 0 to enable or disable a comma-separated version of the log
– CUDA_PROFILE_CONFIG: specifies a config file with up to 4 signals (an example setup follows)
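For example, a command-line profiling run can be configured entirely through the environment (a sketch; the counter names below, such as gld_incoherent, are examples from the legacy profiler and vary with profiler version and GPU):

export CUDA_PROFILE=1
export CUDA_PROFILE_LOG=profile.log
export CUDA_PROFILE_CSV=1
export CUDA_PROFILE_CONFIG=profile_config.txt

where profile_config.txt lists up to four signals, one per line:

gld_incoherent
gld_coherent
gst_incoherent
gst_coherent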

Profiler Signals

Profiler Counters
– Grid size X, Y
– Block size X, Y, Z
– Dyn smem per block: dynamic shared memory
– Sta smem per block: static shared memory
– Reg per thread
– Mem transfer dir: direction, 0 = host to device, 1 = device to host
– Mem transfer size: in bytes

Interpreting Profiler Counters
Values represent events within a thread warp
The counters target only one multiprocessor:
– Values will not correspond to the total number of warps launched for a particular kernel
– Launch enough thread blocks to ensure that the target multiprocessor is given a consistent percentage of the total work
Values are best used to identify relative performance differences between non-optimized and optimized code
– e.g., make the number of non-coalesced loads go from some non-zero value to zero

CUDA Visual Profiler
Helps measure and find potential performance problems:
– GPU and CPU timing for all kernel invocations and memcpys
– Time stamps
– Access to hardware performance counters

Assembly

PTX: Assembly for NVIDIA GPUs Parallel Thread eXecution Virtual Assembly –Translated to actual machine code at runtime –Allows for different hardware implementations Might enable additional optimizations –E.g., %clock register to time blocks of code

Code Generation Flow
Parallel Thread eXecution (PTX):
– Virtual machine and ISA
– Programming model
– Execution resources and state
ISA – Instruction Set Architecture:
– Variable declarations
– Instructions and operands
Translator is an optimizing compiler:
– Translates PTX to target code
– Runs at program install time
Driver implements the VM runtime:
– Coupled with the translator
[Diagram: the programmer's C/C++ application goes through the C/C++ compiler to PTX code; ASM-level libraries also produce PTX; the PTX-to-target translator then generates target code for G80, …, future GPUs]

How to See the PTX Code
nvcc -keep
– Produces .ptx and .cubin files
nvcc --opencc-options -LIST:source=on

PTX Example
CUDA:
float4 me = gx[gtid];
me.x += me.y * me.z;
PTX:
ld.global.v4.f32 {$f1,$f3,$f5,$f7}, [$r9+0];
# 174  me.x += me.y * me.z;
mad.f32 $f1, $f5, $f3, $f1;
Registers are virtual – the actual hardware registers are hidden from PTX

PTX Syntax Example

Another Example: CUDA Function
CUDA:
__device__ void interaction(float4 b0, float4 b1, float3 *accel)
{
    float3 r;
    r.x = b1.x - b0.x;
    r.y = b1.y - b0.y;
    r.z = b1.z - b0.z;
    float distSqr = r.x * r.x + r.y * r.y + r.z * r.z;
    float s = 1.0f / sqrt(distSqr);
    accel->x += r.x * s;
    accel->y += r.y * s;
    accel->z += r.z * s;
}
PTX:
sub.f32 $f18, $f1, $f15;
sub.f32 $f19, $f3, $f16;
sub.f32 $f20, $f5, $f17;
mul.f32 $f21, $f18, $f18;
mul.f32 $f22, $f19, $f19;
mul.f32 $f23, $f20, $f20;
add.f32 $f24, $f21, $f22;
add.f32 $f25, $f23, $f24;
rsqrt.f32 $f26, $f25;
mad.f32 $f13, $f18, $f26, $f13;
mov.f32 $f14, $f13;
mad.f32 $f11, $f19, $f26, $f11;
mov.f32 $f12, $f11;
mad.f32 $f9, $f20, $f26, $f9;
mov.f32 $f10, $f9;

PTX Data types

Predicated Execution
Source:
– if (cond) Then Code
– After Code
With branches:
– p = evaluate cond
– branch to After if not true
– Then Code
– After: After Code
With predication:
– p = evaluate cond
– (p) Then Code
– After Code

PTX Predicated Execution
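A minimal PTX sketch of predication (illustrative registers, not taken from the original slide): the comparison sets a predicate, and each guarded instruction executes only in threads where its guard is true.

setp.lt.s32 %p1, %r1, %r2;    // %p1 = (%r1 < %r2)
@%p1  add.s32 %r3, %r3, 1;    // executed only where %p1 is true
@!%p1 mov.s32 %r3, 0;         // executed only where %p1 is false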

Variable Declaration
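Representative PTX declarations (a sketch modeled on the examples in the PTX manual; names and initializers are illustrative):

.reg    .f32  dist;                  // per-thread virtual register
.reg    .pred p;                     // predicate register
.shared .u32  counter;               // shared-memory variable
.global .f32  results[64];           // global-memory array
.const  .f32  bias[] = {-1.0, 1.0};  // constant memory with an initializer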

Parameterized Variable Names
How to create 100 register “variables”:
.reg .b32 %r<100>;
– Declares %r0 through %r99

Addresses as Operands
An instruction operand can refer to the value stored in a variable (e.g., the value of x, or the value of tbl[12]) or to the variable's address itself (e.g., the base address of tbl)
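A sketch based on the addressing example in the PTX ISA manual (declarations abbreviated; offsets are in bytes):

.shared .u16 x;
.const  .s32 tbl[256];
.reg    .u16 r0;
.reg    .s32 q;
.reg    .u32 p;

ld.shared.u16 r0, [x];        // the value of x
ld.const.s32  q,  [tbl+12];   // a value loaded from tbl at byte offset 12
mov.u32       p,  tbl;        // the base address of tbl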

Compiling a loop that calls a function – again
CUDA (sx is shared; mx, accel are local):
for (i = 0; i < K; i++) {
    if (i != threadIdx.x) {
        interaction(sx[i], mx, &accel);
    }
}
PTX (the operands of the setp/bra pairs are reconstructed; $r2 is assumed to hold threadIdx.x):
mov.s32 $r12, 0;
$Lt_0_26:
setp.eq.u32 $p1, $r12, $r2;      // skip the body when i == threadIdx.x
@$p1 bra $Lt_0_27;
mul.lo.u32 $r13, $r12, 16;
add.u32 $r14, $r13, $r1;
ld.shared.f32 $f15, [$r14+0];
ld.shared.f32 $f16, [$r14+4];
ld.shared.f32 $f17, [$r14+8];
[func body from previous slide inlined here]
$Lt_0_27:
add.s32 $r12, $r12, 1;
mov.s32 $r15, 128;
setp.ne.s32 $p2, $r12, $r15;     // loop while i != 128 (K = 128)
@$p2 bra $Lt_0_26;

Yet Another Example: SAXPY code
cvt.u32.u16 $blockid, %ctaid.x;                // Calculate i from thread/block IDs
cvt.u32.u16 $blocksize, %ntid.x;
cvt.u32.u16 $tid, %tid.x;
mad24.lo.u32 $i, $blockid, $blocksize, $tid;
ld.param.u32 $n, [N];                          // Nothing to do if n ≤ i
setp.le.u32 $p1, $n, $i;
@$p1 bra $L_finish;
mul.lo.u32 $offset, $i, 4;                     // Load y[i]
ld.param.u32 $yaddr, [Y];
add.u32 $yaddr, $yaddr, $offset;
ld.global.f32 $y_i, [$yaddr+0];
ld.param.u32 $xaddr, [X];                      // Load x[i]
add.u32 $xaddr, $xaddr, $offset;
ld.global.f32 $x_i, [$xaddr+0];
ld.param.f32 $alpha, [ALPHA];                  // Compute and store alpha*x[i] + y[i]
mad.f32 $y_i, $alpha, $x_i, $y_i;
st.global.f32 [$yaddr+0], $y_i;
$L_finish: exit;
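For reference, the CUDA kernel that such PTX corresponds to looks roughly like this (a sketch; parameter names match the PTX above):

__global__ void saxpy(int n, float alpha, float *x, float *y)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;   // one element per thread
    if (i < n)
        y[i] = alpha * x[i] + y[i];
}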

The %clock register
Real-time clock cycle counter
How to read it:
– mov.u32 $r1, %clock;
Can be used to time code
It measures real time, not just time spent executing this thread:
– If a thread is blocked, time still elapses
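From CUDA C the same counter is exposed through the clock() intrinsic; a minimal timing sketch (the body being timed is a hypothetical placeholder):

__global__ void timed_kernel(int *out, unsigned int *cycles)
{
    clock_t start = clock();                     // reads %clock
    out[threadIdx.x] = threadIdx.x * 2;          // ... code being timed ...
    clock_t stop = clock();                      // reads %clock again
    if (threadIdx.x == 0)
        *cycles = (unsigned int)(stop - start);  // elapsed cycles, including any stalls
}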

PTX Reference Please Read the PTX ISA specification –Posted under the handouts section

Occupancy Calculator
http://developer.download.nvidia.com/compute/cuda/CUDA_Occupancy_calculator.xls
GPU occupancy = active warps / max warps; it depends on:
– Threads/block
– Registers/thread
– Shared memory/block
nvcc -cubin reports the per-kernel resource usage, e.g.:
code {
  name = my_kernel
  lmem = 0
  smem = 24
  reg = 5
  bar = 0
  bincode { … }
  const { … }
}

Occupancy Calculator Example

Floating Point Considerations

Comparison of FP Capabilities
Capability                         | G80                                | SSE                                          | IBM Altivec                 | Cell SPE
Precision                          | IEEE 754                           | IEEE 754                                     | IEEE 754                    | IEEE 754
Rounding modes for FADD and FMUL   | Round to nearest and round to zero | All 4 IEEE modes (nearest, zero, +inf, -inf) | Round to nearest only       | Round to zero/truncate only
Denormal handling                  | Flush to zero                      | Supported, 1000's of cycles                  | Supported, 1000's of cycles | Flush to zero
NaN support                        | Yes                                | Yes                                          | Yes                         | No
Overflow and infinity support      | Yes, only clamps to max norm       | Yes                                          | Yes                         | No, infinity
Flags                              | No                                 | Yes                                          | Yes                         | Some
Square root                        | Software only                      | Hardware                                     | Software only               | Software only
Division                           | Software only                      | Hardware                                     | Software only               | Software only
Reciprocal estimate accuracy       | 24 bit                             | 12 bit                                       | 12 bit                      | 12 bit
Reciprocal sqrt estimate accuracy  | 23 bit                             | 12 bit                                       | 12 bit                      | 12 bit
log2(x) and 2^x estimates accuracy | 23 bit                             | No                                           | 12 bit                      | No

IEEE Floating Point Representation
A floating point binary number consists of three parts:
– sign (S), exponent (E), and mantissa (M)
– Each (S, E, M) pattern uniquely identifies a floating point number
For each bit pattern, its IEEE floating-point value is derived as:
– value = (-1)^S * M * 2^E, where 1.0B ≤ M < 10.0B
The interpretation of S is simple: S=0 gives a positive number and S=1 a negative number.

Normalized Representation
Specifying that 1.0B ≤ M < 10.0B makes the mantissa value for each floating point number unique.
– For example, the only mantissa value allowed for 0.5D is M = 1.0: 0.5D = 1.0B * 2^-1
– Neither 10.0B * 2^-2 nor 0.1B * 2^0 qualifies
Because all mantissa values are of the form 1.XX…, one can omit the “1.” part in the representation.
– The mantissa value of 0.5D in a 2-bit mantissa field is 00, derived by omitting “1.” from 1.00.

Exponent Representation
In an n-bit exponent representation, 2^(n-1) - 1 is added to the exponent's 2's complement representation to form its excess representation.
– Example for a 3-bit exponent (excess-3):
  2's complement | actual decimal     | excess-3
  000            | 0                  | 011
  001            | 1                  | 100
  010            | 2                  | 101
  011            | 3                  | 110
  100            | (reserved pattern) | 111
  101            | -3                 | 000
  110            | -2                 | 001
  111            | -1                 | 010
A simple unsigned integer comparator can then be used to compare the magnitude of two FP numbers
Symmetric range for +/- exponents (111 reserved)
E = represented E - BIAS

A Hypothetical 5-bit Floating Point Representation
Assume 1-bit S, 2-bit E, and 2-bit M:
– 0.5D = 1.00B * 2^-1
– 0.5D = 0 00 00, where S = 0, E = 00, M = (1.)00
With an excess-1 exponent:
  2's complement | actual decimal     | excess-1
  00             | 0                  | 01
  01             | 1                  | 10
  10             | (reserved pattern) | 11
  11             | -1                 | 00

Representable Numbers
The representable numbers of a given format are the set of all numbers that can be exactly represented in the format.
– For example, an unsigned 3-bit integer format can represent exactly the integers 0 through 7.

Hypothetical 5-bit FP: Representable Numbers
E  M  | No-zero (S=0 / S=1)          | Abrupt underflow (S=0 / S=1) | Gradual underflow (S=0 / S=1)
00 00 | 2^-1 / -(2^-1)               | 0 / 0                        | 0 / 0
00 01 | 2^-1+1*2^-3 / -(2^-1+1*2^-3) | 0 / 0                        | 1*2^-2 / -(1*2^-2)
00 10 | 2^-1+2*2^-3 / -(2^-1+2*2^-3) | 0 / 0                        | 2*2^-2 / -(2*2^-2)
00 11 | 2^-1+3*2^-3 / -(2^-1+3*2^-3) | 0 / 0                        | 3*2^-2 / -(3*2^-2)
01 00 | 2^0 / -(2^0)                   (same in all three columns)
01 01 | 2^0+1*2^-2 / -(2^0+1*2^-2)     (same in all three columns)
01 10 | 2^0+2*2^-2 / -(2^0+2*2^-2)     (same in all three columns)
01 11 | 2^0+3*2^-2 / -(2^0+3*2^-2)     (same in all three columns)
10 00 | 2^1 / -(2^1)                   (same in all three columns)
10 01 | 2^1+1*2^-1 / -(2^1+1*2^-1)     (same in all three columns)
10 10 | 2^1+2*2^-1 / -(2^1+2*2^-1)     (same in all three columns)
10 11 | 2^1+3*2^-1 / -(2^1+3*2^-1)     (same in all three columns)
11 -- | Reserved pattern

Flush To Zero
Treat all bit patterns with E=0 as 0.0
– This takes away several representable numbers near zero and lumps them all into 0.0
– For a representation with a large M field, a large number of representable numbers is removed

Hypothetical 5-bit FP: Representable Numbers (same table as above; under abrupt underflow, every E=00 pattern becomes 0)

Denormalized Numbers
The actual method adopted by the IEEE standard is called denormalized numbers or gradual underflow.
– The method relaxes the normalization requirement for numbers very close to 0.
– Whenever E=0, the mantissa is no longer assumed to be of the form 1.XX; rather, it is assumed to be 0.XX.
In general, if the n-bit exponent is 0, the value is 0.M * 2^(-2^(n-1)+2)

Hypothetical 5-bit FP: Representable Numbers (same table as above; under gradual underflow, the E=00 patterns become the evenly spaced denormals 0, 1*2^-2, 2*2^-2, 3*2^-2)

Floating Point Numbers
As the exponent gets larger, the distance between two consecutive representable numbers increases.

Arithmetic Instruction Throughput
int and float add, shift, min, max and float mul, mad: 4 cycles per warp
– int multiply (*) is by default 32-bit and requires multiple cycles per warp
– Use the __mul24() / __umul24() intrinsics for 4-cycle 24-bit int multiply
– Needed on G80; on GT200 32-bit multiply should be OK
Integer divide and modulo are expensive
– The compiler will convert literal power-of-2 divides to shifts
– Be explicit in cases where the compiler can’t tell that the divisor is a power of 2
– Useful trick: foo % n == foo & (n-1) if n is a power of 2 (see the sketch below)
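A small illustrative kernel (hypothetical names, not from the slides) using both tricks – the 24-bit multiply intrinsic for index arithmetic and the bitwise AND in place of a modulo by a power of two:

__global__ void wrap_index(int *out, int n)    // n must be a power of 2
{
    // 24-bit multiply is sufficient for block/thread index arithmetic
    int i = __mul24(blockIdx.x, blockDim.x) + threadIdx.x;
    out[i] = i & (n - 1);                      // same result as i % n for power-of-2 n
}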

Arithmetic Instruction Throughput
Reciprocal, reciprocal square root, sin/cos, log, exp: 16 cycles per warp
– These are the versions prefixed with “__”
– Examples: __sinf(), __expf(), __logf()
Other functions are combinations of the above
– y / x == rcp(x) * y == 20 cycles per warp
– sqrt(x) == rcp(rsqrt(x)) == 32 cycles per warp

Runtime Math Library
There are two types of runtime math operations:
– __func(): direct mapping to hardware ISA
  Fast but lower accuracy (see the programming guide for details)
  Examples: __sinf(x), __expf(x), __powf(x,y)
– func(): compiles to multiple instructions
  Slower but higher accuracy (5 ulp, units in the last place, or less)
  Examples: sinf(x), expf(x), powf(x,y)
The -use_fast_math compiler option forces every func() to compile to __func()

Make your program float-safe!
GT200 has double precision support
– G80 is single precision only
– Double precision has additional performance cost: only one DP unit per multiprocessor
– Careless use of double or undeclared types may run more slowly on G80+
Important to be float-safe (be explicit whenever you want single precision) to avoid using double precision where it is not needed
– Add the ‘f’ suffix to float literals:
  foo = bar * 0.123;   // double assumed
  foo = bar * 0.123f;  // float explicit
– Use the float version of standard library functions:
  foo = sin(bar);   // double assumed
  foo = sinf(bar);  // single precision explicit
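Putting these rules together, a minimal float-safe kernel might look like this (illustrative kernel, not from the original slides):

__global__ void scale_and_sin(float *data, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        data[i] = sinf(data[i]) * 0.123f;   // sinf and the 'f' suffix keep everything single precision
}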

Deviations from IEEE-754
Addition and multiplication are IEEE 754 compliant
– Maximum 0.5 ulp (units in the last place) error
However, they are often combined into a multiply-add (FMAD)
– The intermediate result is truncated
Division is non-compliant (2 ulp)
Not all rounding modes are supported
Denormalized numbers are not supported
No mechanism to detect floating-point exceptions

Units in the Last Place (ulp) Error
If the result of an FP computation is 3.12 x 10^-2 but the answer computed to infinite precision falls between representable values, the error is expressed in units of the last retained digit (here, multiples of 10^-4).
For binary representations with rounding to the nearest number, the maximum error is 0.5 ulp.

Mixed Precision Methods From slides by Robert Strzodka and Dominik Göddeke

What is a Mixed Precision Method?

Mixed Precision Performance Gains

Single vs Double Precision FP
Float (s23e8): 1 sign bit, 8 exponent bits, 23 stored mantissa bits (24-bit significand with the hidden bit)
Double (s53e11): 1 sign bit, 11 exponent bits, 52 stored mantissa bits (53-bit significand with the hidden bit)

Round off and Cancellation

Double Precision != Better Accuracy

The Dominant Data Error

Understanding Floating Point Operations

Commutative Summation

Commutative Summation Example Say we want to calculate –
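A minimal single-precision sketch of the effect (the values are chosen here for illustration, not taken from the slide): adding a small term directly to a large one loses it, while a different grouping preserves it.

#include <stdio.h>

int main(void)
{
    float big = 1.0e8f, small = 1.0f;
    float a = (big + small) - big;   // 0.0f: 'small' is below half an ulp of 'big' and is rounded away
    float b = (big - big) + small;   // 1.0f: summing in a different order keeps it
    printf("%f %f\n", a, b);
    return 0;
}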

High Precision Emulation

Example: Addition c = a + b
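A common building block for emulating higher precision with pairs of floats is Knuth's two-sum, sketched below (a standard formulation, not necessarily the exact one on the original slide): it returns the rounded sum hi plus its rounding error lo, so hi + lo equals a + b exactly.

__device__ void two_sum(float a, float b, float *hi, float *lo)
{
    float s = a + b;                      // rounded single-precision sum
    float v = s - a;
    float e = (a - (s - v)) + (b - v);    // rounding error of the addition
    *hi = s;                              // requires strict IEEE addition:
    *lo = e;                              // compile without -use_fast_math
}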

Please read the following: