Performance Modeling in GPGPU Computing
Wenjing Xu
Professor: Dr. Box
What is GPGPU?
GPU-accelerated computing is the use of a graphics processing unit (GPU) together with a CPU to accelerate scientific, engineering, and enterprise applications.
What is modeling?
A model is a simplified representation of a system or phenomenon; it is the most explicit way to describe such a system or phenomenon. We use the parameters we set to build formulas that analyze the system.
Related work
Hong and Kim [3] introduce two metrics, Memory Warp Parallelism (MWP) and Computation Warp Parallelism (CWP), to describe the GPU parallel architecture. Zhang and Owens [4] develop a performance model based on their microbenchmarks so that they can identify bottlenecks in a program. Supada's [5] performance model considers memory latencies that vary depending on the data type and the type of memory.
1. Introduction and background
Different applications and devices cannot use the same settings. We find the relationship between the parameters in this model and choose the best block size for each application on each device to reach peak performance.
Different data sizes with different block sizes yield different performance.
How the GPU works
Memory latency hiding
The structure of threads
Specification of GeForce GTX 650
Parameters
Block size setting under the thread limitation
N_MB >= N_TB = N * N_TW >= N_RT / N_RB   (N is an integer)
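The thread-limitation inequality above can be sketched as a small check. The parameter names follow the slide (N_MB: max threads per block, N_TW: threads per warp, N_RT: max resident threads per SM, N_RB: max resident blocks per SM); the numeric limits are assumptions taken from compute capability 3.0 (the GeForce GTX 650), not from the slides:

```python
# Assumed device limits for compute capability 3.0 (GeForce GTX 650).
N_MB = 1024   # max threads per block
N_TW = 32     # threads per warp
N_RT = 2048   # max resident threads per SM
N_RB = 16     # max resident blocks per SM

def block_size_ok(n_tb):
    """True if block size n_tb satisfies N_MB >= N_TB = N * N_TW >= N_RT / N_RB."""
    is_warp_multiple = n_tb > 0 and n_tb % N_TW == 0   # N_TB = N * N_TW, N integer
    return is_warp_multiple and n_tb <= N_MB and n_tb >= N_RT / N_RB
```

For these limits the inequality says a block must be a warp multiple no larger than 1024 threads, and no smaller than 2048 / 16 = 128 threads, or the block-count cap prevents full thread occupancy.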
Memory resource
Block size setting under streaming multiprocessor resources
M_R / M_TR >= N * N_TB   (N is an integer)
N * N_TB <= N_RT
N <= M_SM / M_SB
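The three resource inequalities above bound how many blocks N can reside on one streaming multiprocessor at once: registers (M_R per SM, M_TR per thread), resident-thread slots (N_RT), and shared memory (M_SM per SM, M_SB per block). A minimal sketch, with per-SM limits assumed from compute capability 3.0:

```python
M_R  = 65536   # registers per SM (assumed, CC 3.0)
N_RT = 2048    # max resident threads per SM (assumed, CC 3.0)
M_SM = 49152   # shared memory per SM in bytes (assumed, CC 3.0)

def max_resident_blocks(n_tb, regs_per_thread, smem_per_block):
    """Largest integer N satisfying all three SM-resource constraints."""
    by_regs    = (M_R // regs_per_thread) // n_tb            # M_R / M_TR >= N * N_TB
    by_threads = N_RT // n_tb                                # N * N_TB <= N_RT
    by_smem    = (M_SM // smem_per_block                     # N <= M_SM / M_SB
                  if smem_per_block else by_threads)
    return min(by_regs, by_threads, by_smem)
```

For example, 256-thread blocks using 32 registers per thread and 4 KB of shared memory per block are register- and thread-limited to 8 resident blocks, even though shared memory would allow 12.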
Conclusion
Though more threads can hide memory access latency, the more threads used, the more resources are needed. Finding the balance point between the resource limitation and memory latency hiding is a shortcut to peak performance. Across different applications and devices this performance model shows its advantage: it is adaptable and, without any rework or redesign, lets an application run with its best tuning.
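The balance point described above can be sketched as a small search: enumerate warp-multiple block sizes, apply the thread and resource limits from the earlier slides, and keep the block size that maximizes resident threads per SM. The device limits are assumptions for compute capability 3.0; `regs_per_thread` and `smem_per_block` would come from the compiled kernel:

```python
def best_block_size(regs_per_thread, smem_per_block,
                    warp=32, max_tb=1024, max_rt=2048, max_rb=16,
                    regs_per_sm=65536, smem_per_sm=49152):
    """Return (block size, resident threads per SM) maximizing occupancy."""
    best, best_occ = warp, 0
    for n_tb in range(warp, max_tb + 1, warp):        # warp multiples only
        n = min((regs_per_sm // regs_per_thread) // n_tb,   # register limit
                max_rt // n_tb,                             # thread-slot limit
                max_rb,                                     # block-count limit
                (smem_per_sm // smem_per_block              # shared-memory limit
                 if smem_per_block else max_rb))
        occ = n * n_tb                                # resident threads per SM
        if occ > best_occ:
            best, best_occ = n_tb, occ
    return best, best_occ
```

With 32 registers per thread and no shared memory, the search settles on 128-thread blocks, the smallest size that fills all 2048 thread slots under the 16-block cap; this mirrors the "balance point" the model is meant to find.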