Parallel Processing and GPUs
Daniel L. Silver, Ph.D., and Christian Frey, BBA. April 11-12, 2017
Benefits of GPUs
Hundreds or thousands of cores allow data to be processed in parallel, such as computing the forward pass through the network for every example in a batch simultaneously. Memory on a GPU is faster and closer to the cores, further improving processing speed. GPUs are designed specifically for vector addition and multiplication, operations that arise constantly in deep learning.
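The batched forward pass described above can be sketched as a single matrix multiply. This is a minimal NumPy version that runs on the CPU; GPU libraries such as CuPy or PyTorch dispatch the same expression across thousands of cores, which is where the speedup comes from.

```python
import numpy as np

# Forward pass for a whole batch expressed as one matrix multiply.
# NumPy runs this on the CPU; on a GPU the same expression is
# parallelized across many cores, one reason batching matters.
rng = np.random.default_rng(0)
X = rng.standard_normal((64, 100))   # 64 examples, 100 features each
W = rng.standard_normal((100, 10))   # weights for a 10-unit layer
b = np.zeros(10)                     # biases

Z = X @ W + b                        # all 64 forward passes at once
A = np.maximum(Z, 0.0)               # ReLU activation
print(A.shape)                       # (64, 10)
```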
Drawbacks of GPUs
Moving data between the CPU and GPU is costly, so unnecessary transfers slow down processing. It is difficult to add memory to a GPU if the network exceeds GPU memory: CPUs can use up to 1 TB of RAM, whereas GPUs max out at tens of gigabytes. Each GPU core is less powerful than a CPU core, so operations that cannot be parallelized are slower than on a CPU.
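The transfer cost above is why practitioners batch host-to-device copies. This sketch only simulates the idea: `to_device` is a hypothetical stand-in for a real transfer call (such as `cudaMemcpy` or PyTorch's `.cuda()`) that merely counts copies, so the contrast between per-example and batched transfers is visible without a GPU.

```python
import numpy as np

# `to_device` is a hypothetical stand-in for a real host-to-device
# transfer; here it only counts how many copies were issued.
transfer_count = 0

def to_device(array):
    global transfer_count
    transfer_count += 1
    return array  # pretend the data now lives in GPU memory

examples = [np.ones(100) for _ in range(64)]

# Naive: one transfer per example, i.e. 64 slow round trips.
transfer_count = 0
for x in examples:
    _ = to_device(x)
naive_transfers = transfer_count

# Better: stack on the host, then transfer the whole batch once.
transfer_count = 0
batch = to_device(np.stack(examples))
batched_transfers = transfer_count

print(naive_transfers, batched_transfers)  # 64 1
```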
Field Programmable Gate Array (FPGA)
An integrated circuit that can be programmed in the field to handle a specific function. FPGAs tend to be faster than GPUs for specific tasks but are less versatile. They can be combined with embedded microprocessors to form a complete system. They are more power efficient than GPUs because they give up general-purpose computing ability.
Application Specific Integrated Circuit (ASIC)
An integrated circuit designed for a specific purpose, such as multiplying matrices. ASICs can get away with lower precision (even 8-bit) when calculating weight updates or node outputs, allowing faster training. Google's TPU (Tensor Processing Unit) is an ASIC that powered AlphaGo in its matches against Lee Sedol. ASICs are faster and use less power than traditional GPUs, but the circuitry can cost millions of dollars to create, so they are practical only for large-scale operations such as Google's.
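The low-precision idea above can be illustrated with simple symmetric 8-bit quantization: store weights as `int8` plus one scale factor, then dequantize for use. This is a generic sketch of the technique, not the TPU's actual quantization scheme.

```python
import numpy as np

# Symmetric per-tensor quantization: map the float range onto int8
# with a single scale factor. ASICs exploit such reduced precision
# for speed and power savings; this is a generic illustration only.
rng = np.random.default_rng(0)
w = rng.uniform(-1.0, 1.0, size=1000).astype(np.float32)

scale = np.abs(w).max() / 127.0              # one scale for the tensor
w_int8 = np.round(w / scale).astype(np.int8) # 8-bit storage
w_restored = w_int8.astype(np.float32) * scale

max_error = np.abs(w - w_restored).max()
print(max_error < scale)                     # rounding error is bounded
```

Rounding to the nearest integer bounds the per-weight error by half the scale, which is why networks often tolerate 8-bit inference with little accuracy loss.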