CS 52500, Parallel Computing, Spring 2011
Alex Pothen
Lectures: Tues, Thurs, 3:00-4:15 PM, BRNG 2275
Office Hours: Wed 3:00-4:00 PM; Thurs 4:30-5:30 PM; and by appointment
Course webpage:
Motivational background

Parallel computing is now everywhere, even at the desktop.

Many important applications demand the solution of challenging problems via parallel computing:
- Discovering new sources of energy
- Discovering new drugs for diseases
- Nano-materials
- Information science
- National security

Funding agencies continue to invest massively in parallel computing research:
- Department of Energy (SciDAC)
- The National Science Foundation
- Department of Defense

Industry stakeholders (software as well as hardware vendors) are increasingly supporting research in parallel computing, both in-house and in collaboration with universities and labs:
- Parallel computing has permeated the commercial enterprise
CS 525 will cover

Parallel and distributed computing architectures
- Shared memory processors
- Distributed memory processors
- Multi-core processors

Parallel programming
- Shared memory programming (OpenMP)
- Distributed memory programming (MPI)
- Thread-based programming

Parallel algorithms
- Sorting
- Matrix-vector multiplication
- Graph algorithms

Applications
- Computational science and engineering
- High-performance computing
Textbooks

Main:
- Ananth Grama, Anshul Gupta, George Karypis, and Vipin Kumar, Introduction to Parallel Computing, Second edition, Addison-Wesley, 2003

Supplementary:
- Michael Quinn, Parallel Programming in C with MPI and OpenMP, McGraw-Hill, 2003
- Jack Dongarra, Ian Foster, Geoffrey C. Fox, and William Gropp, The Sourcebook of Parallel Computing, Morgan Kaufmann, 2002
- Thomas Rauber and Gudula Rünger, Parallel Programming for Multicore and Cluster Systems, Springer-Verlag, 2010
- David B. Kirk and Wen-mei W. Hwu, Programming Massively Parallel Processors: A Hands-on Approach, Morgan Kaufmann, 2010
Grading

- Regular homework problems and programming assignments
- Final exam