1 CUDA Streams These notes introduce the use of multiple CUDA streams to overlap memory transfers with kernel computations. Also introduced is page-locked memory

2 Page-locked host memory
(also called pinned host memory) Page-locked memory is not paged in and out of main memory by the OS; it remains resident. Allows: Concurrent host/device memory transfers with kernel execution (compute capability 2.x) – see next slides. Host memory can be mapped into the device address space (compute capability > 1.0). Memory bandwidth is higher: transfers use physical addresses rather than virtual addresses and do not need intermediate copy buffering.

3 Note on using page-locked memory
Using page-locked memory reduces the memory available to the OS for paging, so care is needed when allocating it.

4 Allocating page locked memory
cudaMallocHost ( void ** ptr, size_t size ) Allocates page-locked host memory that is accessible to the device. cudaHostAlloc ( void ** ptr, size_t size, unsigned int flags ) Allocates page-locked host memory that is accessible to the device – offers more options through its flags argument (e.g. cudaHostAllocMapped, cudaHostAllocWriteCombined).
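The two allocators above can be sketched as follows (a minimal example; the array size N and the variable names are illustrative, and error checking is omitted):

```cuda
#include <cuda_runtime.h>

#define N 1024

int main(void) {
    int *a, *b;

    // Page-locked (pinned) allocation, simple form
    cudaMallocHost((void**)&a, N * sizeof(int));

    // Page-locked allocation with flags; cudaHostAllocDefault
    // behaves like cudaMallocHost
    cudaHostAlloc((void**)&b, N * sizeof(int), cudaHostAllocDefault);

    // ... use a and b as ordinary host arrays ...

    // Pinned memory must be freed with cudaFreeHost, not free()
    cudaFreeHost(a);
    cudaFreeHost(b);
    return 0;
}
```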

5 CUDA Streams A CUDA stream is a sequence of operations (commands) that are executed in order. Multiple CUDA streams can be created and executed together, interleaved with one another, although "program order" is always maintained within each stream. Streams provide a mechanism to overlap memory transfer and computation operations in different streams for increased performance, if sufficient resources are available.

6 Creating a stream Done by creating a stream object and associating it with a series of CUDA commands, which then become the stream. Asynchronous CUDA commands take a stream as an argument:

cudaStream_t stream1;
cudaStreamCreate(&stream1);
cudaMemcpyAsync(…, stream1);
MyKernel<<<grid, block, 0, stream1>>>(…);  // 3rd launch argument is shared memory size; 4th is the stream
cudaMemcpyAsync(…, stream1);

Cannot use regular cudaMemcpy with streams; asynchronous commands are needed for concurrent operation – see next slide

7 cudaMemcpyAsync( …, stream)
Asynchronous version of cudaMemcpy that copies data to/from the host and the device. May return before the copy is complete. A stream argument is specified. Needs page-locked host memory.
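A minimal sketch of an asynchronous copy in a stream (variable names are hypothetical; note that the host buffer must be page-locked, and that the host must synchronize before relying on the result):

```cuda
#include <cuda_runtime.h>

#define N 1024

int main(void) {
    int *h_a;        // pinned host buffer
    int *d_a;        // device buffer
    cudaStream_t s;

    cudaMallocHost((void**)&h_a, N * sizeof(int));  // page-locked
    cudaMalloc((void**)&d_a, N * sizeof(int));
    cudaStreamCreate(&s);

    // Enqueue the copy in stream s; the call may return immediately,
    // before the transfer has actually completed
    cudaMemcpyAsync(d_a, h_a, N * sizeof(int), cudaMemcpyHostToDevice, s);

    // Host must wait before reusing buffers that the copy touches
    cudaStreamSynchronize(s);

    cudaStreamDestroy(s);
    cudaFree(d_a);
    cudaFreeHost(h_a);
    return 0;
}
```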

8 Simply concatenating statements does not work well because of the way the GPU schedules work
Page 206 CUDA by Example,

9 Page 207 CUDA by Example,

10 Page 208 CUDA by Example

11 Interleave statements of each stream

for (int i = 0; i < SIZE; i += N*2) {  // loop over data in chunks
    // interleave stream1 and stream2
    cudaMemcpyAsync(dev_a1, a+i,   N*sizeof(int), cudaMemcpyHostToDevice, stream1);
    cudaMemcpyAsync(dev_a2, a+i+N, N*sizeof(int), cudaMemcpyHostToDevice, stream2);
    cudaMemcpyAsync(dev_b1, b+i,   N*sizeof(int), cudaMemcpyHostToDevice, stream1);
    cudaMemcpyAsync(dev_b2, b+i+N, N*sizeof(int), cudaMemcpyHostToDevice, stream2);
    kernel<<<N/256, 256, 0, stream1>>>(dev_a1, dev_b1, dev_c1);
    kernel<<<N/256, 256, 0, stream2>>>(dev_a2, dev_b2, dev_c2);
    cudaMemcpyAsync(c+i,   dev_c1, N*sizeof(int), cudaMemcpyDeviceToHost, stream1);
    cudaMemcpyAsync(c+i+N, dev_c2, N*sizeof(int), cudaMemcpyDeviceToHost, stream2);
}
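The loop only enqueues work; the surrounding setup and final synchronization might look like the following (a hedged reconstruction, not the book's exact code: the kernel body is a placeholder, buffer names follow the slide, and cleanup/error checking are omitted):

```cuda
#include <cuda_runtime.h>

#define N    1024
#define SIZE (N * 20)

// Placeholder kernel; the real computation depends on the application
__global__ void kernel(int *a, int *b, int *c) {
    int idx = threadIdx.x + blockIdx.x * blockDim.x;
    if (idx < N) c[idx] = a[idx] + b[idx];
}

int main(void) {
    cudaStream_t stream1, stream2;
    int *a, *b, *c;                      // pinned host arrays
    int *dev_a1, *dev_b1, *dev_c1;       // stream1 device buffers
    int *dev_a2, *dev_b2, *dev_c2;       // stream2 device buffers

    cudaStreamCreate(&stream1);
    cudaStreamCreate(&stream2);

    // cudaMemcpyAsync requires page-locked host memory
    cudaHostAlloc((void**)&a, SIZE * sizeof(int), cudaHostAllocDefault);
    cudaHostAlloc((void**)&b, SIZE * sizeof(int), cudaHostAllocDefault);
    cudaHostAlloc((void**)&c, SIZE * sizeof(int), cudaHostAllocDefault);

    cudaMalloc((void**)&dev_a1, N * sizeof(int));
    cudaMalloc((void**)&dev_b1, N * sizeof(int));
    cudaMalloc((void**)&dev_c1, N * sizeof(int));
    cudaMalloc((void**)&dev_a2, N * sizeof(int));
    cudaMalloc((void**)&dev_b2, N * sizeof(int));
    cudaMalloc((void**)&dev_c2, N * sizeof(int));

    for (int i = 0; i < SIZE; i += N*2) {   // interleaved loop from the slide
        cudaMemcpyAsync(dev_a1, a+i,   N*sizeof(int), cudaMemcpyHostToDevice, stream1);
        cudaMemcpyAsync(dev_a2, a+i+N, N*sizeof(int), cudaMemcpyHostToDevice, stream2);
        cudaMemcpyAsync(dev_b1, b+i,   N*sizeof(int), cudaMemcpyHostToDevice, stream1);
        cudaMemcpyAsync(dev_b2, b+i+N, N*sizeof(int), cudaMemcpyHostToDevice, stream2);
        kernel<<<N/256, 256, 0, stream1>>>(dev_a1, dev_b1, dev_c1);
        kernel<<<N/256, 256, 0, stream2>>>(dev_a2, dev_b2, dev_c2);
        cudaMemcpyAsync(c+i,   dev_c1, N*sizeof(int), cudaMemcpyDeviceToHost, stream1);
        cudaMemcpyAsync(c+i+N, dev_c2, N*sizeof(int), cudaMemcpyDeviceToHost, stream2);
    }

    // All calls above only enqueue work; wait for both streams to drain
    // before the host reads the results in c
    cudaStreamSynchronize(stream1);
    cudaStreamSynchronize(stream2);

    return 0;   // cleanup (cudaFree/cudaFreeHost/cudaStreamDestroy) omitted
}
```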

12 Page 210 CUDA by Example

13 Questions
