

Chapter 1 Introduction and General Concepts

References

- Selim Akl, Parallel Computation: Models and Methods, Prentice Hall, 1997. Updated online version available through the author's website.
- Selim Akl, "The Design of Efficient Parallel Algorithms," Chapter 2 in Handbook on Parallel and Distributed Processing, edited by J. Blazewicz, K. Ecker, B. Plateau, and D. Trystram, Springer Verlag.
- Ananth Grama, Anshul Gupta, George Karypis, and Vipin Kumar, Introduction to Parallel Computing, 2nd Edition, Addison Wesley.
- Harry Jordan and Gita Alaghband, Fundamentals of Parallel Processing: Algorithms, Architectures, Languages, Prentice Hall.
- Michael Quinn, Parallel Programming in C with MPI and OpenMP, McGraw Hill.
- Michael Quinn, Parallel Computing: Theory and Practice, McGraw Hill, 1994.
- Barry Wilkinson and Michael Allen, Parallel Programming, 2nd Ed., Prentice Hall, 2005.

Outline

- Need for Parallel & Distributed Computing
- Flynn's Taxonomy of Parallel Computers
  – Two Main Types of MIMD Computers
- Examples of Computational Models
- Data Parallel & Functional/Control/Job Parallel
  – Granularity
- Analysis of Parallel Algorithms
  – Elementary Steps: computational and routing steps
  – Running Time & Time Optimal
  – Parallel Speedup
  – Speedup
  – Cost and Work
  – Efficiency
- Linear and Superlinear Speedup
- Speedup and Slowdown Folklore Theorems
- Amdahl's and Gustafson's Laws
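The performance-analysis topics listed in the outline (speedup, efficiency, Amdahl's and Gustafson's laws) reduce to short formulas that are covered in detail later in the course. As a preview, here is a minimal sketch in Python; the function names are my own, not from the references above:

```python
def amdahl_speedup(f, p):
    """Amdahl's law: best-case speedup on p processors when a
    fraction f of the work is inherently sequential."""
    return 1.0 / (f + (1.0 - f) / p)

def gustafson_speedup(f, p):
    """Gustafson's law: scaled speedup when the sequential
    fraction f is measured on the p-processor run."""
    return p - f * (p - 1)

def efficiency(speedup, p):
    """Efficiency: speedup per processor (1.0 is ideal)."""
    return speedup / p

# With a 5% sequential fraction, Amdahl's law caps speedup near
# 1/0.05 = 20 no matter how many processors are added.
print(amdahl_speedup(0.05, 16))     # ~9.14
print(gustafson_speedup(0.05, 16))  # 15.25
print(efficiency(amdahl_speedup(0.05, 16), 16))
```

Note the contrast: Amdahl's law fixes the problem size and bounds the achievable speedup, while Gustafson's law scales the problem with the machine, which is why its predicted speedup keeps growing with p.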

Reasons to Study Parallel & Distributed Computing

- Sequential computers have severe limits on memory size.
  – Significant slowdowns occur when accessing data stored on external devices.
- Sequential computation times for most large problems are unacceptable.
- Sequential computers cannot meet the deadlines of many real-time problems.
- Many problems are distributed in nature and are natural candidates for distributed computation.