CS-726 Parallel Computing
By Rajeev Wankar
University of Hyderabad
For whom
Elective for M.Tech. and MCA students.
Objective
By the end of the semester, students should be able to:
- Understand parallel algorithm paradigms and design efficient parallel algorithms.
- Given a practical application, identify the design issues and write algorithms for the target machine.
- Write and modify parallel libraries.
Prerequisites
Knowledge of introductory algorithms, networks, Java/C/C++, and Unix/Linux as an operating system; familiarity with socket programming is helpful.
Course Outline
Here is a preliminary, non-exhaustive list of topics we will or might cover. It is subject to change with advance notice, partly based on the students' understanding.
Unit 1
Introduction to Parallel Computing: why parallel computing and its scope, control and data approaches, models of parallel computation, design paradigms of parallel algorithms.
Unit 2
Classification and taxonomies: MPP, SMP, CC-NUMA; clusters: dedicated high-performance (HP), high-throughput (HT), and data-intensive computing; interconnection networks; Flynn's taxonomy.
Unit 3
An overview of practical parallel programming paradigms: programmability issues; programming models: message passing, client-server, peer-to-peer, map and reduce.
Unit 4
Clustering of computers, the Beowulf supercomputer, use of MPI in cluster computing; debugging, evaluating, and tuning of cluster programs.
Unit 5
Message passing standards: PVM (Parallel Virtual Machine) and MPI (Message Passing Interface), with MPI and its routines (a minimal sketch follows below).
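As a hedged illustration of the routines this unit refers to (my own sketch, not part of the original slides), the following C program uses only standard MPI calls (MPI_Init, MPI_Comm_rank, MPI_Comm_size, MPI_Send, MPI_Recv, MPI_Finalize): every non-root rank sends its rank number to rank 0, which prints what it receives.

    #include <mpi.h>
    #include <stdio.h>

    /* Minimal MPI sketch: non-root ranks send their rank number to
       rank 0, which receives and prints each message. */
    int main(int argc, char *argv[])
    {
        int rank, size;
        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        if (rank == 0) {
            int msg, i;
            for (i = 1; i < size; i++) {
                MPI_Recv(&msg, 1, MPI_INT, MPI_ANY_SOURCE, 0,
                         MPI_COMM_WORLD, MPI_STATUS_IGNORE);
                printf("rank 0 received %d\n", msg);
            }
        } else {
            MPI_Send(&rank, 1, MPI_INT, 0, 0, MPI_COMM_WORLD);
        }

        MPI_Finalize();
        return 0;
    }

Such a program is typically built and launched with the MPI wrapper tools, e.g. mpicc and mpirun -np 4; the exact commands depend on the MPI distribution installed on the cluster.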
Unit 6
Performance Metrics and Speedup: types of performance requirements; basic performance metrics; workload and speed metrics; performance of parallel computers, parallelism and interaction overheads (the standard definitions are recalled below).
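As a brief reminder of the standard definitions behind these metrics (added here for orientation, not taken from the original slides): with T(1) the best serial time, T(p) the time on p processors, and f the serial fraction of the workload,

    \[
    S(p) = \frac{T(1)}{T(p)}, \qquad
    E(p) = \frac{S(p)}{p}, \qquad
    S(p) \le \frac{1}{f + (1 - f)/p} \quad \text{(Amdahl's law)}.
    \]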
Unit 6
Overview of Programming with Shared Memory: OpenMP (history, overview, programming model, OpenMP constructs, performance issues and examples, explicit parallelism and advanced features of OpenMP); distributed shared memory programming using Jackal; introduction to multi-core programming through software multi-threading (a small OpenMP sketch follows below).
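A minimal, hedged OpenMP sketch (my own illustration, not from the slides) showing the parallel-for and reduction constructs the unit refers to:

    #include <omp.h>
    #include <stdio.h>

    /* Sum an array in parallel: loop iterations are divided among
       threads, and the reduction clause combines the partial sums. */
    int main(void)
    {
        enum { N = 1000000 };
        static double a[N];
        double sum = 0.0;
        int i;

        for (i = 0; i < N; i++)
            a[i] = 1.0;

        #pragma omp parallel for reduction(+:sum)
        for (i = 0; i < N; i++)
            sum += a[i];

        printf("sum = %f (max threads: %d)\n", sum, omp_get_max_threads());
        return 0;
    }

Built with an OpenMP-capable compiler, e.g. gcc -fopenmp.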
Unit 7
Reconfigurable Computing: what is it, why, how and where to do it; algorithms for reconfigurable systems.
Unit 8 (Applications)
- Building a cluster using Rocks
- Cluster-based algorithms and applications
- Shared memory programming
- Writing a subset of a parallel library using socket programming in C or Java (a minimal sketch follows below)
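As a hedged illustration of the kind of message passing primitive such a library could be built on (my own sketch, not a prescribed solution): a parent and child process exchange a message over a Unix socket pair; a real library would replace this with TCP sockets between cluster nodes.

    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <sys/wait.h>
    #include <unistd.h>

    /* Sketch of a send/receive primitive over a socket pair:
       the child "sends" a message, the parent "receives" and prints it. */
    int main(void)
    {
        int sv[2];
        if (socketpair(AF_UNIX, SOCK_STREAM, 0, sv) == -1) {
            perror("socketpair");
            return 1;
        }

        if (fork() == 0) {                    /* child: sender */
            const char *msg = "hello from the child process";
            close(sv[0]);
            write(sv[1], msg, strlen(msg) + 1);
            close(sv[1]);
            return 0;
        }

        /* parent: receiver */
        char buf[128];
        close(sv[1]);
        ssize_t n = read(sv[0], buf, sizeof buf);
        if (n > 0)
            printf("received: %s\n", buf);
        close(sv[0]);
        wait(NULL);
        return 0;
    }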
Assessment
Internal: 40 marks
- Three internal tests of 10 marks each (best two of three counted): 20 marks
- Lab assignments: 10 marks
- One group assignment: 5 marks
- Seminar: 5 marks
External: end-semester examination, 60 marks.
References
- Quinn, M. J., Parallel Computing: Theory and Practice, McGraw-Hill.
- Barry Wilkinson and Michael Allen, Parallel Programming: Techniques and Applications Using Networked Workstations and Parallel Computers, Prentice Hall.
- R. Buyya (ed.), High Performance Cluster Computing: Programming and Applications, Prentice Hall.
- William Gropp and Rusty Lusk, Tuning MPI Applications for Peak Performance, Pittsburgh, 1996.
- Kai Hwang and Zhiwei Xu, Scalable Parallel Computing: Technology, Architecture, Programming, McGraw-Hill, New York, 2004.
- W. Gropp, E. Lusk, N. Doss, and A. Skjellum, "A high-performance, portable implementation of the Message Passing Interface (MPI) standard", Parallel Computing 22(6), Sept. 1996.
- Gibbons, A. and W. Rytter, Efficient Parallel Algorithms, Cambridge University Press.
- Kumar, V., et al., Introduction to Parallel Computing: Design and Analysis of Parallel Algorithms, Benjamin/Cummings.
- Shameem Akhter and Jason Roberts, Multi-Core Programming, Intel Press, 2006.
Questions