1 University of Hyderabad CS-726 Parallel Computing By Rajeev Wankar

2 For whom  Elective for M.Tech. and MCA students

3 Objective By the end of the semester, students should be able to:  Understand parallel algorithm paradigms and design efficient parallel algorithms.  Given a practical application, identify the design issues and write algorithms for the target machine.  Write and modify parallel libraries.

4 Prerequisites Knowledge of introductory algorithms, networks, Java/C/C++, and Unix/Linux as an operating system; familiarity with socket programming is a plus.

5 Course Outline Here is a preliminary, non-exhaustive list of the topics we will or might cover. It is subject to change with advance notice, based partly on the pace and understanding of the students.

6 Unit 1  Introduction to Parallel Computing:  Why parallel computing and its scope, control and data approaches, models of parallel computation, design paradigms of parallel algorithms.

7 Unit 2  Classification and Taxonomies:  MPP, SMP, CC-NUMA, and clusters: dedicated high-performance (HP), high-throughput (HT), and data-intensive computing; interconnection networks; Flynn's taxonomy.

8 Unit 3  An Overview of Practical Parallel Programming Paradigms:  Programmability issues; programming models: message passing, client-server, peer-to-peer, and Map & Reduce.

9 Unit 4  Clustering of computers, the Beowulf supercomputer, use of MPI in cluster computing; debugging, evaluating, and tuning cluster programs.

10 Unit 5  Message passing standards:  PVM (Parallel Virtual Machine) and MPI (Message Passing Interface), including the main MPI routines.

11 Unit 6  Performance Metrics & Speedup:  Types of performance requirements; basic performance metrics; workload and speed metrics; performance of parallel computers: parallelism and interaction overheads.

12 Unit 6  Overview of Programming with Shared Memory:  OpenMP (history, overview, programming model, OpenMP constructs, performance issues and examples; explicit parallelism: advanced features of OpenMP)  Distributed shared memory programming using Jackal  Introduction to multi-core programming through software multi-threading

13 Unit 7  Reconfigurable Computing:  What is it? Why, how, and where to do it?  Algorithms for reconfigurable systems.

14 Unit 8 (Applications)  Building a cluster using Rocks  Cluster-based algorithms and applications  Shared memory programming  Writing a subset of parallel libraries using socket programming in C or Java.

15 Assessment  Internal: 40 marks  Three internal tests of 10 marks each (the best two of three are counted)  Lab assignments: 10 marks  One group assignment: 5 marks  Seminar: 5 marks  External: end-semester examination, 60 marks.

16 References
 Quinn, M. J., Parallel Computing: Theory and Practice, McGraw-Hill.
 Barry Wilkinson and Michael Allen, Parallel Programming: Techniques and Applications Using Networked Workstations and Parallel Computers, Prentice Hall.
 R. Buyya (ed.), High Performance Cluster Computing: Programming and Applications, Prentice Hall.
 William Gropp and Rusty Lusk, Tuning MPI Applications for Peak Performance, Pittsburgh, 1996.
 Kai Hwang and Zhiwei Xu, Scalable Parallel Computing (Technology, Architecture, Programming), McGraw-Hill, New York, 2004.
 W. Gropp, E. Lusk, N. Doss, and A. Skjellum, A high-performance, portable implementation of the Message Passing Interface (MPI) standard, Parallel Computing 22(6), Sep.
 Gibbons, A. and W. Rytter, Efficient Parallel Algorithms, Cambridge University Press.
 Kumar, V., et al., Introduction to Parallel Computing: Design and Analysis of Parallel Algorithms, Benjamin/Cummings.
 Shameem A. and Jason, Multi-Core Programming, Intel Press, 2006.

17 Questions