Past Practices of Conventional Core Microarchitecture Are Dead
ROADMAP

- Backdrop
  - Brief history of graphics hardware
  - Why GPU Computing?
- Progression
  - GPU Computing 1.0 – compute pretending to be graphics
  - GPU Computing 2.0 – direct computing, CUDA
  - GPU Computing 3.0 – an emerging ecosystem
- Future
  - Driving workloads
  - GPU Computing 4.0?

Steve Keckler, Architecture Research Group
Old Equations
New Equation
Where are we now?

- Today's high-end CPUs: 1-2 nJ/Flop
- Today's high-end GPUs: ~200 pJ/Flop
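A quick back-of-the-envelope check of what those numbers imply (a minimal sketch; the 150 W and 250 W power budgets are illustrative assumptions, not figures from the slides): under a fixed power budget, sustained throughput is bounded by power divided by energy per flop.

```python
# Sketch: sustained FLOP/s is bounded by power_budget / energy_per_flop.
# Power budgets are assumed for illustration; energy/flop comes from the slide.

def flops_bound(power_watts, energy_per_flop_joules):
    """Upper bound on sustained FLOP/s under a fixed power budget."""
    return power_watts / energy_per_flop_joules

cpu = flops_bound(150, 1.5e-9)    # ~1-2 nJ/Flop CPU, assumed 150 W budget
gpu = flops_bound(250, 200e-12)   # ~200 pJ/Flop GPU, assumed 250 W budget

print(f"CPU bound: {cpu / 1e9:.0f} GFLOP/s")    # ~100 GFLOP/s
print(f"GPU bound: {gpu / 1e12:.2f} TFLOP/s")   # ~1.25 TFLOP/s
```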
What do things cost?

  Operation                          Energy
  ---------------------------------  ----------------
  64-bit FP operation                10.5 pJ
  Regfile access (2 read/1 write)    5.5 pJ
  Instruction RAM access             3.6 pJ
  Data RAM access
  On-chip wire                       18-110 fJ/bit-mm
  64-bit on-chip bus                 1.2-7 pJ/mm
  Standard off-chip link             30 pJ/bit
  TSV (not including wire)           1-11 fJ/bit

(30 nm with aggressive voltage scaling, from the DARPA Exascale Report, 2008)
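To see why these numbers matter, compare moving a 64-bit operand across the die against operating on it (a minimal sketch; the 10 mm distance and the 60 fJ/bit-mm midpoint are assumptions within the table's 18-110 fJ/bit-mm range):

```python
# Sketch: energy to move a 64-bit operand across the chip vs. the FP op itself,
# using values from the table above. The 10 mm distance and 60 fJ/bit-mm
# midpoint are illustrative assumptions.

FP_OP_PJ = 10.5            # 64-bit FP operation (table)
WIRE_FJ_PER_BIT_MM = 60.0  # assumed midpoint of 18-110 fJ/bit-mm
DISTANCE_MM = 10.0         # assumed cross-die distance
BITS = 64

move_pj = BITS * DISTANCE_MM * WIRE_FJ_PER_BIT_MM / 1000.0  # fJ -> pJ
print(f"Move 64 bits over {DISTANCE_MM:.0f} mm: {move_pj:.1f} pJ")  # ~38.4 pJ
print(f"64-bit FP operation:          {FP_OP_PJ} pJ")
print(f"Move / compute ratio:         {move_pj / FP_OP_PJ:.1f}x")   # ~3.7x
```

Even a modest cross-die transfer costs several times the arithmetic it feeds, which is the point the table is making.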
Core microarchitecture is not dead

- But we now need to focus on core perf/W
- Limit communication and storage-access overheads
- All multicore proposals must focus on perf/W
- Drive down the overheads of data movement and tracking
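Perf/W is just the reciprocal of energy per operation, so the overhead terms dominate the metric. A minimal sketch using the cost-table values, plus the assumed ~38 pJ cross-die transfer from the earlier example:

```python
# Sketch: perf/W = 1 / (energy per operation). Shrinking the overhead terms
# (register file, instruction fetch, data movement) raises perf/W directly.
# FP_OP, REGFILE, and INSTR_RAM come from the cost table; DATA_MOVE reuses the
# assumed ~38 pJ cross-die transfer from the previous example.

FP_OP = 10.5      # pJ, 64-bit FP operation
REGFILE = 5.5     # pJ, regfile access (2 read/1 write)
INSTR_RAM = 3.6   # pJ, instruction RAM access
DATA_MOVE = 38.4  # pJ, assumed cross-die operand transfer

def gflops_per_watt(energy_pj_per_op):
    # ops per joule, expressed as GFLOP/s per watt
    return (1.0 / (energy_pj_per_op * 1e-12)) / 1e9

total = FP_OP + REGFILE + INSTR_RAM + DATA_MOVE
print(f"FP op alone:    {gflops_per_watt(FP_OP):6.1f} GFLOPS/W")  # ~95
print(f"With overheads: {gflops_per_watt(total):6.1f} GFLOPS/W")  # ~17
```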
Research poster submission deadline: August 15