Heterogeneous Programming (presentation transcript)

1 Heterogeneous Programming
Martin Kruliš (v1.0)

2 Heterogeneous Programming
GPU
- An “independent” device, controlled by the host and used for “offloading” work
Host code
- Needs to be designed in a way that
  - utilizes the GPU(s) efficiently
  - utilizes the CPU while the GPU is working (so the CPU and GPU do not wait for each other)
- Remember Amdahl’s Law

3 Heterogeneous Programming
Bad example – the CPU blocks while the device is working:

    cudaMemcpy(..., cudaMemcpyHostToDevice);
    Kernel1<<<...>>>(...);
    cudaDeviceSynchronize();
    cudaMemcpy(..., cudaMemcpyDeviceToHost);
    ...
    Kernel2<<<...>>>(...);

[Timeline figure: CPU vs. GPU activity; the CPU idles whenever the device is working]

4 Overlapping Work
Overlapping CPU and GPU work
- Kernels
  - Started asynchronously
  - Can be waited for (cudaDeviceSynchronize())
  - A little more can be done with streams
- Memory transfers
  - cudaMemcpy() is synchronous and blocking
  - Alternatively, cudaMemcpyAsync() starts the transfer and returns immediately (see the sketch below)
  - Can be synchronized the same way as a kernel
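A minimal sketch of this pattern (the addOne kernel and buffer sizes are illustrative placeholders; note that cudaMemcpyAsync overlaps with host code reliably only when the host buffer is page-locked, as discussed on slide 17):

    #include <cuda_runtime.h>

    __global__ void addOne(float *data, int n) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) data[i] += 1.0f;              // trivial placeholder kernel
    }

    int main() {
        const int n = 1 << 20;
        float *host, *dev;
        cudaHostAlloc((void **)&host, n * sizeof(float), cudaHostAllocDefault);  // pinned
        cudaMalloc((void **)&dev, n * sizeof(float));

        cudaMemcpyAsync(dev, host, n * sizeof(float), cudaMemcpyHostToDevice);
        addOne<<<(n + 255) / 256, 256>>>(dev, n);         // queued after the copy
        cudaMemcpyAsync(host, dev, n * sizeof(float), cudaMemcpyDeviceToHost);

        // ... the CPU is free to do useful work here ...

        cudaDeviceSynchronize();   // wait until the copies and the kernel finish
        cudaFreeHost(host);
        cudaFree(dev);
        return 0;
    }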

5 Overlapping Work
Using asynchronous transfers:

    cudaMemcpyAsync(..., cudaMemcpyHostToDevice);
    Kernel1<<<...>>>(...);
    cudaMemcpyAsync(..., cudaMemcpyDeviceToHost);
    ...
    do_something_on_cpu();
    cudaDeviceSynchronize();

[Timeline figure: CPU and GPU now work concurrently; workload balance becomes an issue]

6 Overlapping Work
CPU threads
- Multiple CPU threads may use the GPU
GPU overlapping capabilities (see the query sketch below)
- Multiple kernels may run simultaneously
  - Available since the Fermi architecture
  - cudaDeviceProp.concurrentKernels
- Kernel execution may overlap with data transfers, or even with multiple data transfers
  - cudaDeviceProp.asyncEngineCount
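A minimal sketch of querying these capabilities at runtime (device 0 is assumed):

    #include <cuda_runtime.h>
    #include <stdio.h>

    int main() {
        cudaDeviceProp prop;
        cudaGetDeviceProperties(&prop, 0);       // capabilities of device 0
        printf("concurrent kernels: %s\n", prop.concurrentKernels ? "yes" : "no");
        printf("async copy engines: %d\n", prop.asyncEngineCount);
        return 0;
    }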

7 Streams
Stream
- An in-order GPU command queue (like in OpenCL)
- Asynchronous GPU operations are registered in the queue
  - Kernel execution
  - Memory data transfers
- Commands in different streams may overlap
- Streams provide means for explicit and implicit synchronization
Default stream (stream 0)
- Always present, does not have to be created
- Has global synchronization capabilities

8 Streams
Stream creation

    cudaStream_t stream;
    cudaStreamCreate(&stream);

Stream usage

    cudaMemcpyAsync(dst, src, size, kind, stream);
    kernel<<<grid, block, sharedMem, stream>>>(...);

Stream destruction (a combined sketch follows below)

    cudaStreamDestroy(stream);
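Putting the three steps together, a minimal sketch (the scale kernel and sizes are illustrative):

    #include <cuda_runtime.h>

    __global__ void scale(float *d, int n) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) d[i] *= 2.0f;                 // trivial placeholder kernel
    }

    int main() {
        const int n = 1 << 20;
        float *host, *dev;
        cudaHostAlloc((void **)&host, n * sizeof(float), cudaHostAllocDefault);
        cudaMalloc((void **)&dev, n * sizeof(float));

        cudaStream_t stream;
        cudaStreamCreate(&stream);

        // All three commands are queued, in order, into the same stream.
        cudaMemcpyAsync(dev, host, n * sizeof(float), cudaMemcpyHostToDevice, stream);
        scale<<<(n + 255) / 256, 256, 0, stream>>>(dev, n);
        cudaMemcpyAsync(host, dev, n * sizeof(float), cudaMemcpyDeviceToHost, stream);

        cudaStreamSynchronize(stream);           // wait for this stream only
        cudaStreamDestroy(stream);
        cudaFreeHost(host);
        cudaFree(dev);
        return 0;
    }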

9 Streams
Synchronization
Explicit
- cudaStreamSynchronize(stream) – waits until all commands issued to the stream have completed
- cudaStreamQuery(stream) – a non-blocking test whether the stream has finished (see the polling sketch below)
Implicit
- Operations in different streams cannot overlap if a special operation is issued between them, such as
  - a memory allocation
  - a CUDA command issued to the default stream
  - a switch of the L1/shared memory configuration
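A minimal sketch of non-blocking polling with cudaStreamQuery (do_some_cpu_work is a hypothetical placeholder; the stream is assumed to have asynchronous work already queued):

    #include <cuda_runtime.h>

    // Hypothetical host-side work that we overlap with the GPU.
    static void do_some_cpu_work(void) { /* ... */ }

    // Busy-polls the stream without ever blocking the calling thread.
    static void wait_while_working(cudaStream_t stream) {
        while (cudaStreamQuery(stream) == cudaErrorNotReady) {
            do_some_cpu_work();
        }
        // Anything other than cudaErrorNotReady means the stream has
        // drained (cudaSuccess) or failed (an error code).
    }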

10 Streams
Overlapping behavior
- Commands in different streams overlap if the hardware is capable of running them concurrently
- Unless implicit/explicit synchronization prohibits it

    for (int i = 0; i < 2; ++i) {
        cudaMemcpyAsync(…HostToDevice, stream[i]);
        MyKernel<<<g, b, 0, stream[i]>>>(...);
        cudaMemcpyAsync(…DeviceToHost, stream[i]);
    }

- If the device does not support concurrent data transfers, the streams will not overlap at all (the HostToDevice copy in stream[1] must wait until the DeviceToHost transfer in stream[0] finishes)
- For CC < 3.0, the kernel executions cannot overlap, since the second kernel is issued after the DeviceToHost copy in stream[0]
- This issue order may cause many implicit synchronizations, depending on the CC and the hardware's overlapping capabilities

11 Streams
Overlapping behavior
- Commands in different streams overlap if the hardware is capable of running them concurrently
- Unless implicit/explicit synchronization prohibits it
- Issuing the commands breadth-first (all copies of one kind first, then all kernels) leaves much less opportunity for implicit synchronization:

    for (int i = 0; i < 2; ++i)
        cudaMemcpyAsync(…HostToDevice, stream[i]);
    for (int i = 0; i < 2; ++i)
        MyKernel<<<g, b, 0, stream[i]>>>(...);
    for (int i = 0; i < 2; ++i)
        cudaMemcpyAsync(…DeviceToHost, stream[i]);

12 Streams
Callbacks
- Callbacks are registered in streams by
    cudaStreamAddCallback(stream, fnc, data, 0);
- The callback function is invoked asynchronously after all preceding commands terminate
- A callback registered to the default stream is invoked after the previous commands in all streams terminate
- Operations issued after the callback registration start only after the callback returns
- The callback looks like (a full registration sketch follows below)
    void CUDART_CB MyCallback(cudaStream_t stream, cudaError_t status, void *userData) { ... }
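A minimal sketch of registering a callback (onBatchDone and the batch-id payload are illustrative):

    #include <cuda_runtime.h>
    #include <stdio.h>

    // Invoked on a CUDA runtime thread once all preceding commands in the
    // stream have terminated. CUDA calls must not be made inside a callback.
    void CUDART_CB onBatchDone(cudaStream_t stream, cudaError_t status, void *userData) {
        printf("batch %d finished: %s\n", *(int *)userData, cudaGetErrorString(status));
    }

    int main() {
        cudaStream_t stream;
        cudaStreamCreate(&stream);
        int batchId = 1;
        // ... queue copies and kernels into the stream here ...
        cudaStreamAddCallback(stream, onBatchDone, &batchId, 0);  // flags must be 0
        cudaStreamSynchronize(stream);
        cudaStreamDestroy(stream);
        return 0;
    }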

13 Streams
Events
- Special markers that can be used for synchronization and performance monitoring
- Typical usage
  - Waiting until all commands issued before the marker have finished
  - Explicit synchronization between selected streams
  - Measuring the time between two events
- Example (a timing sketch follows below)

    cudaEvent_t event;
    cudaEventCreate(&event);
    cudaEventRecord(event, stream);
    cudaEventSynchronize(event);
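A minimal sketch of measuring kernel time with a pair of events (the busy kernel is an illustrative placeholder):

    #include <cuda_runtime.h>
    #include <stdio.h>

    __global__ void busy(float *d, int n) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) d[i] = d[i] * d[i] + 1.0f;    // placeholder workload
    }

    int main() {
        const int n = 1 << 20;
        float *dev;
        cudaMalloc((void **)&dev, n * sizeof(float));

        cudaEvent_t start, stop;
        cudaEventCreate(&start);
        cudaEventCreate(&stop);

        cudaEventRecord(start, 0);               // marker before the kernel
        busy<<<(n + 255) / 256, 256>>>(dev, n);
        cudaEventRecord(stop, 0);                // marker after the kernel
        cudaEventSynchronize(stop);              // wait until `stop` is reached

        float ms = 0.0f;
        cudaEventElapsedTime(&ms, start, stop);  // elapsed time in milliseconds
        printf("kernel took %.3f ms\n", ms);

        cudaEventDestroy(start);
        cudaEventDestroy(stop);
        cudaFree(dev);
        return 0;
    }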

14 Pipelining
Making good use of overlapping
- Split the work into smaller fragments
- Create a pipeline effect (load, process, store), as sketched below
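A minimal sketch of such a pipeline, assuming the input splits into independent chunks; two streams are used so that the load/process/store phases of consecutive chunks overlap (all names and sizes are illustrative):

    #include <cuda_runtime.h>

    __global__ void process(float *d, int n) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) d[i] += 1.0f;                 // placeholder processing step
    }

    int main() {
        const int chunks = 8, chunkSize = 1 << 20;
        float *host, *dev[2];
        cudaStream_t stream[2];
        cudaHostAlloc((void **)&host, (size_t)chunks * chunkSize * sizeof(float),
                      cudaHostAllocDefault);     // pinned, so copies run async
        for (int s = 0; s < 2; ++s) {
            cudaMalloc((void **)&dev[s], chunkSize * sizeof(float));
            cudaStreamCreate(&stream[s]);
        }

        for (int c = 0; c < chunks; ++c) {
            int s = c % 2;                       // alternate between the streams
            float *chunk = host + (size_t)c * chunkSize;
            cudaMemcpyAsync(dev[s], chunk, chunkSize * sizeof(float),
                            cudaMemcpyHostToDevice, stream[s]);          // load
            process<<<(chunkSize + 255) / 256, 256, 0, stream[s]>>>(dev[s], chunkSize);
            cudaMemcpyAsync(chunk, dev[s], chunkSize * sizeof(float),
                            cudaMemcpyDeviceToHost, stream[s]);          // store
        }
        cudaDeviceSynchronize();                 // drain both pipelines

        for (int s = 0; s < 2; ++s) {
            cudaFree(dev[s]);
            cudaStreamDestroy(stream[s]);
        }
        cudaFreeHost(host);
        return 0;
    }

Because each stream executes its commands in order, the device buffer dev[s] is never overwritten before the previous chunk in the same stream has finished its store phase.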

15 Feeding Threads
Data gather and scatter problem
[Diagram: input data is gathered in host memory, copied to GPU memory, processed by the kernel, and the results are scattered back to host memory]
- Multiple cudaMemcpy() calls may be quite inefficient

16 Feeding Threads
Gather and scatter – reducing the overhead
- The gather/scatter steps are performed by the CPU before/after each cudaMemcpy
[Diagram: the main thread alternates gather and scatter work while HtD copies, kernels, and DtH copies proceed concurrently in stream 0 and stream 1]
- The number of threads per GPU and the number of streams per thread depend on the workload structure

17 Page-locked Memory
Page-locked (pinned) host memory
- Host memory that is prevented from being swapped out
- Created/dismissed by (see the allocation sketch below)
  - cudaHostAlloc(), cudaFreeHost()
  - cudaHostRegister(), cudaHostUnregister()
- Optional flags
  - cudaHostAllocWriteCombined – optimized for writing, not cached on the CPU
  - cudaHostAllocMapped
  - cudaHostAllocPortable
- Copies between pinned host memory and the device can be performed genuinely asynchronously
- Pinned memory is a scarce resource
On systems with a front-side bus, bandwidth between host memory and device memory is higher if the host memory is allocated as page-locked, and even higher if it is in addition allocated as write-combining. Write-combining memory frees up the host's L1 and L2 cache resources, making more cache available to the rest of the application. In addition, write-combining memory is not snooped during transfers across the PCI Express bus, which can improve transfer performance by up to 40%.
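A minimal sketch of obtaining pinned memory both ways (sizes and flags are chosen for illustration):

    #include <cuda_runtime.h>
    #include <stdlib.h>

    int main() {
        const size_t size = 1 << 24;

        // Allocate fresh pinned memory, write-combined for fast HtD transfers;
        // fill it sequentially, since CPU reads from WC memory are slow.
        float *pinned;
        cudaHostAlloc((void **)&pinned, size, cudaHostAllocWriteCombined);
        // ... fill the buffer and copy it asynchronously ...
        cudaFreeHost(pinned);

        // Alternatively, pin an existing buffer in place.
        float *plain = (float *)malloc(size);
        cudaHostRegister(plain, size, cudaHostRegisterDefault);
        // ... the buffer now behaves as pinned memory for async copies ...
        cudaHostUnregister(plain);
        free(plain);
        return 0;
    }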

18 Memory Mapping
Device memory mapping
- Allows the GPU to access portions of host memory directly (i.e., without explicit copy operations), for both reading and writing
- The memory must be allocated/registered with the cudaHostAllocMapped flag
- The context must have the cudaDeviceMapHost flag (set by cudaSetDeviceFlags())
- cudaHostGetDevicePointer() takes the host pointer and returns the corresponding device pointer (see the sketch below)
- Note: atomic operations on mapped memory are not atomic from the perspective of the host (or other devices)
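A minimal sketch of mapping host memory into the device address space (the increment kernel is an illustrative placeholder):

    #include <cuda_runtime.h>

    __global__ void increment(int *data, int n) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) data[i] += 1;     // accesses travel over the bus to host RAM
    }

    int main() {
        cudaSetDeviceFlags(cudaDeviceMapHost);   // must precede context creation

        const int n = 1024;
        int *hostPtr, *devPtr;
        cudaHostAlloc((void **)&hostPtr, n * sizeof(int), cudaHostAllocMapped);
        cudaHostGetDevicePointer((void **)&devPtr, hostPtr, 0);  // device view

        increment<<<(n + 255) / 256, 256>>>(devPtr, n);
        cudaDeviceSynchronize();     // make the kernel's writes visible to host

        cudaFreeHost(hostPtr);
        return 0;
    }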

19 Asynchronous Error Handling
Asynchronous errors
- An error may occur outside of a CUDA call, e.g., during an asynchronous memory transfer or kernel execution
- Such an error is reported by the next CUDA call
- To make sure all errors have been reported, the device must synchronize (cudaDeviceSynchronize())
- Error handling functions (see the sketch below)
  - cudaGetLastError()
  - cudaPeekAtLastError()
  - cudaGetErrorString(error)
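A minimal sketch of checking for asynchronous errors around a kernel launch (the work kernel is an illustrative placeholder):

    #include <cuda_runtime.h>
    #include <stdio.h>

    __global__ void work(float *d) { d[threadIdx.x] = 1.0f; }

    int main() {
        float *dev;
        cudaMalloc((void **)&dev, 256 * sizeof(float));

        work<<<1, 256>>>(dev);

        // Catches launch-time errors (bad configuration, ...) without syncing.
        cudaError_t err = cudaPeekAtLastError();
        if (err != cudaSuccess)
            fprintf(stderr, "launch failed: %s\n", cudaGetErrorString(err));

        // Errors from the asynchronous execution itself surface only after
        // a synchronization point.
        err = cudaDeviceSynchronize();
        if (err != cudaSuccess)
            fprintf(stderr, "kernel failed: %s\n", cudaGetErrorString(err));

        cudaFree(dev);
        return 0;
    }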

20 Discussion


Download ppt "Heterogeneous Programming"

Similar presentations


Ads by Google