GPUs and Accelerators Jonathan Coens Lawrence Tan Yanlin Li.

1 GPUs and Accelerators
Jonathan Coens, Lawrence Tan, Yanlin Li

2 Outline
- Graphics Processing Units
  - Features
  - Motivation
  - Challenges
- Accelerator
  - Methodology
  - Performance Evaluation
  - Discussion
- Rigel
  - Methodology
  - Performance Evaluation
  - Discussion
- Conclusion

3 Graphics Processing Units (GPUs)
- GPU: a special-purpose processor designed to render 3D scenes
  - Found in almost every desktop today
- Features
  - Highly parallel processors
  - Better floating-point performance than CPUs (e.g., ATI Radeon X1900: 250 Gflops)
- Motivation
  - Use GPUs for general-purpose programming
- Challenges
  - Difficult to program
  - Trade-off between programmability and performance
[Figure: GeForce 6600GT (NV43) GPU]

4 Accelerator: Using Data Parallelism to Program GPUs for General Purpose Uses
- Methodology
  - Uses data parallelism (SIMD) to program the GPU
  - Provides a ParallelArray object in C#
  - No aspects of the GPU are exposed to the programmer, who only needs to know how to use ParallelArray
  - Accelerator takes care of the conversion to pixel-shader code
  - Parallel programs are represented as DAGs
[Figures: simplified block diagram of a GPU; expression DAG with shader breaks marked]
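The deferred-evaluation idea behind ParallelArray can be sketched in a few lines: operations on an array do not compute anything immediately, they build an expression DAG, and evaluation walks the DAG once. This is an illustrative Python sketch of the technique only (Accelerator itself is a C# library that compiles the DAG to pixel-shader code); the class name, `op` strings, and `eval` method are invented for the example.

```python
# Deferred-evaluation data-parallel array: element-wise operators build
# an expression DAG instead of executing; eval() runs the whole DAG.
class ParallelArray:
    def __init__(self, data=None, op=None, children=()):
        self.data = data          # leaf payload (a plain list here)
        self.op = op              # interior-node operation: 'add' or 'mul'
        self.children = children  # operand sub-DAGs

    def __add__(self, other):
        return ParallelArray(op='add', children=(self, other))

    def __mul__(self, other):
        return ParallelArray(op='mul', children=(self, other))

    def eval(self):
        if self.op is None:       # leaf: return stored data directly
            return self.data
        a, b = (c.eval() for c in self.children)
        if self.op == 'add':
            return [x + y for x, y in zip(a, b)]
        return [x * y for x, y in zip(a, b)]

x = ParallelArray([1, 2, 3])
y = ParallelArray([4, 5, 6])
z = (x + y) * x        # builds a three-node DAG; nothing computed yet
print(z.eval())        # [5, 14, 27]
```

Because the whole expression is visible as a DAG before anything runs, the runtime can fuse operations and split the graph at shader boundaries, which is exactly why Accelerator marks "shader breaks" on its DAGs.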

5 Accelerator: Using Data Parallelism to Program GPUs for General Purpose Uses
- Performance Evaluation
  - Accelerator versus hand-coded pixel-shader programs on a GeForce 7800 GTX and an ATI X1800
  - Performance is shown as speedup relative to the C++ versions of the programs
[Figure: speedup of Accelerator programs on various GPUs compared to C++ programs running on a CPU]

6 Rigel: A 1024-Core Accelerator
- Specific architecture
  - SPMD programming model
  - Global address space
  - RISC instruction set
  - Write-back caches
  - Cores laid out in clusters of 8, each cluster with a local cache
  - Custom cores (optimized for space / power)
- Hierarchical task queueing
  - Single queue from the programmer's perspective
  - Architecture handles distributing tasks
  - Customizable via API: task granularity, static vs. dynamic scheduling
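The hierarchical queueing idea can be sketched as a two-level structure: the programmer enqueues into one global queue, and each cluster refills a local queue from it in chunks. This is a minimal illustrative sketch, not Rigel's actual API; the class name, `chunk` parameter, and method names are invented, with `chunk` standing in for the configurable task granularity.

```python
from collections import deque

# Two-level task queue: one global queue (the programmer's view) plus
# per-cluster local queues that refill from it in chunks.
class HierarchicalQueue:
    def __init__(self, chunk=4):
        self.global_q = deque()
        self.chunk = chunk          # tasks moved per refill ("granularity")

    def enqueue(self, task):
        self.global_q.append(task)  # single queue from the programmer's view

    def dequeue(self, local_q):
        if not local_q:             # local queue empty: refill from global
            for _ in range(min(self.chunk, len(self.global_q))):
                local_q.append(self.global_q.popleft())
        return local_q.popleft() if local_q else None

hq = HierarchicalQueue(chunk=4)
for i in range(10):
    hq.enqueue(i)

cluster0 = deque()                  # one cluster's local queue
print(hq.dequeue(cluster0))         # 0 (pulled in via a chunk refill)
```

The point of the chunked refill is that most dequeues hit only cluster-local state, so contention on the global queue grows with the number of clusters rather than the number of cores.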

7 Rigel's Performance
- Fairly successful
  - Achieved speedup utilizing all 1024 cores
  - Hierarchical task structure effectively scaled to 1024 cores
- Issues: cache coherence!
  - Memory-invalidate broadcasts slow the system down (barrier flags, task enqueue/dequeue variables)
  - Not done in hardware: lazy-evaluation write-through barriers at the cluster level
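The contention problem with flat barrier flags can be illustrated with a two-level barrier: the 8 cores of a cluster synchronize on cluster-local state first, and only one arrival per cluster touches the global barrier, so global traffic scales with clusters rather than cores. This is a software sketch of the idea only, not Rigel's hardware mechanism; the class and its parameters are invented for illustration.

```python
import threading

# Two-level barrier: cores synchronize within their cluster first, then
# one representative per cluster crosses a global barrier, and a second
# local wait releases the rest of the cluster.
class HierarchicalBarrier:
    def __init__(self, clusters, cores_per_cluster):
        self.local = [threading.Barrier(cores_per_cluster)
                      for _ in range(clusters)]
        self.global_b = threading.Barrier(clusters)

    def wait(self, cluster_id):
        # wait() returns a unique arrival index; index 0 becomes the
        # cluster's representative at the global barrier.
        if self.local[cluster_id].wait() == 0:
            self.global_b.wait()
        # No core may pass until every cluster has reached the global
        # barrier, which the representative's second wait guarantees.
        self.local[cluster_id].wait()
```

With this shape, the shared location that every core hammers in a flat barrier is replaced by one cluster-local counter per cluster, which is the same locality argument behind Rigel's cluster-level barrier handling.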

8 Improving Rigel
1. Will the hierarchical task structure continue to scale? If not, where is the boundary? (Think multiple cache levels, but with processor tasks.)
2. How could we implement barriers or queues that avoid contention but still scale? (Is memory-managed cache coherence appropriate?)
3. Is specialized hardware the way to go (clusters of 8 custom cores), or can it be replaced by general-purpose cores?

9 Generic and Custom Accelerators
- It is difficult to make a sufficiently generic programming interface between the programmer and a multi-core system
  - GPUs are limited by the SIMD programming model
  - Specific hardware platforms still have issues with SPMD
- Efficiently scaling to more cores is still an issue
- How do we solve these issues?
