MAP33 Introdução à Computação Paralela (Introduction to Parallel Computing)
Dagoberto A. R. Justo, PPGMAp, UFRGS, 1/11/2019
Contact
Prof. Dagoberto A. R. Justo, www.mat.ufrgs.br/~dago
Room B121. Office hours: Wed/Fri, 10:30-12:00
Textbook and Reference Books
- Textbook: Introduction to Parallel Computing, 2nd Edition, Kumar, Grama, Gupta, and Karypis, Benjamin/Cummings, 2002. An errata file for the 1st Edition (1st and 2nd printings) is posted on the web site.
- Reference and related books:
  - Fundamentals of Parallel Processing, Jordan and Alaghband, Prentice-Hall, 2003
  - Sourcebook of Parallel Computing, Dongarra et al., Morgan-Kaufmann, 2003
  - MPI -- The Complete Reference, Volume 1: The MPI Core, 2nd Edition, Snir et al., MIT Press, 1999
Lecture Topics (From the 2nd Edition)
Chapter 1, Introduction to Parallel Computing
- What is parallel computing, and why use it?
- Scope and issues in parallel computing
Chapter 2, Parallel Programming Platforms
- implicit parallelism in microprocessors
- the impact of memory limitations
- the dichotomy of control versus data parallelism
- taxonomies for parallel architectures and programs
- physical interconnection and communication networks
  - dynamic versus static connection topologies (hypercube, mesh, grid, torus, tree)
  - communication costs
- processor-to-processor mappings and mapping techniques
Topics Continued
Chapter 3, Parallel Algorithm Design
- decompositions, tasks, and dependency graphs
- characteristics of tasks and their interactions
- containing interaction overheads
- parallel algorithm models
Chapter 4, Basic Communication Mechanisms
- one-to-all and all-to-all broadcasts, reductions, and prefix sums
- one-to-all and all-to-all personalized communication
- circular shifts
Chapter 5, Analytical Modeling of Parallel Programs
- performance metrics, granularity, scalability, efficiency measures, overhead, cost-optimal execution time
Topics Continued
Chapter 6, The Message Passing Paradigm
- basic approach; send/receive operations
- the MPI interface
- topologies and embedding; overlapping communication and computation; collective communications; groups and communicators
Chapter 7, Shared Address Space Platforms and Programming
- threads, Pthreads, OpenMP
- synchronization, control, and cancellation of threads
Chapter 8, Dense Matrix Algorithms
- basic matrix operations and partitioning
- matrix transposition, matrix-vector and matrix-matrix operations
- solving systems of linear equations
Topics Continued
Chapter 9, Sorting -- Selected Topics
- issues with sorting in parallel
- sorting networks and bubble sort
Chapter 10, Graph Algorithms
- minimum spanning tree and shortest path algorithms
Chapter 11, Searching -- Selected Topics
- review of sequential search algorithms
- parallel depth-first search
Topics Continued
Chapter 12, Dynamic Programming
- search problems solved by creating dynamic tasks that solve subproblems and then terminate
Chapter 13, Fast Fourier Transforms (FFT)
- the serial FFT algorithm
- the binary exchange algorithm
- the transpose algorithm
Objective of This Course
- Introduce you to the ideas and concepts of parallel computing
- Help you identify when a parallel approach is useful and when it is a waste of time
- Prepare you to cope with the parallel tools and environments that will be your future
What Is Parallel Computing?
- Performing a computation by using more than one computational element to complete the task
- Like having several people work on different parts of the same task and complete it together
- It is the problem of organizing work to be performed in parallel:
  - it is primarily a coordination and synchronization problem
  - it increases the complexity of the solution
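The idea of decomposing one task across several workers and combining their results can be made concrete with a small sketch. The course's real tools are MPI, Pthreads, and OpenMP; the snippet below is only an illustrative Python sketch (the names `partial_sum` and `parallel_sum` are invented for this example) of splitting a large sum across worker processes and coordinating the combination of their partial results.

```python
from multiprocessing import Pool

def partial_sum(chunk):
    # Each worker computes the sum of its own slice of the data.
    return sum(chunk)

def parallel_sum(data, n_workers=4):
    # Decompose: split the data into one chunk per worker.
    size = (len(data) + n_workers - 1) // n_workers
    chunks = [data[i:i + size] for i in range(0, len(data), size)]
    # Coordinate: farm the chunks out, then combine the partial results.
    with Pool(n_workers) as pool:
        return sum(pool.map(partial_sum, chunks))

if __name__ == "__main__":
    # Same answer as a serial sum of 1..100, i.e. 5050.
    print(parallel_sum(list(range(1, 101))))
```

Note that even this toy example shows the extra complexity the slide mentions: the serial version is one line, while the parallel version must also decide how to partition the data and how to merge the results.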
Why Parallel Computing?
- Primarily, to complete the computation faster
- To solve problems that otherwise could not be solved:
  - they would take too long (days, months, even years)
  - they would take too many resources (not available on a single processor): more processors bring more memory, more disk, …
- The hardware speed of a single processor is approaching a limit (a claim made in the late 1980s and again in the late 2000s); see Moore's Law
- It represents a computational challenge and presents interesting and difficult computational problems
Moore's Law: the number of transistors that can be placed on an integrated circuit increases exponentially, doubling approximately every two years.
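The doubling claim is easy to quantify: after t years with a doubling period of d years, the transistor count grows by a factor of 2^(t/d). A minimal sketch (the function name `moore_growth` is invented for this illustration):

```python
def moore_growth(years, doubling_period=2.0):
    # Growth factor in transistor count after `years`,
    # assuming one doubling every `doubling_period` years.
    return 2.0 ** (years / doubling_period)

# One decade at a two-year doubling period gives 2**5 = 32x the transistors.
print(moore_growth(10))  # 32.0
```

This is why the trend, while it held, compounded so dramatically: two decades at the same rate would mean a factor of 1024.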
Moore's Law: IPS, Instructions Per Second
[chart: instructions-per-second growth across successive processors, up to a Pentium 4 at 3 GHz]
Areas and Examples
- weather prediction, modeling fluid flows
- information retrieval and discovery
- drug analysis, drug discovery
- rapid response to threats
- manufacturing processes
- system modeling, avoiding expensive prototypes
- …
Change/Growth in the Need for Parallel Capabilities
- Traditional application areas using parallel computation: complex designs, engineering and scientific applications
- Commercial applications are increasing:
  - web and database servers
  - economics and stock modeling
  - data mining and analysis
- Scientific applications are becoming massive problems:
  - sequencing the human genome
  - functional and structural characterization of genes and proteins -- development of new drugs
  - integrated analysis of new materials, requiring tremendous changes in scale -- quantum to macro-molecular
But There Are Serious Drawbacks
- Parallel computing has not become ubiquitous
- Thus software support is poor:
  - poor and ineffective debugging tools
  - poor and ineffective evaluation tools
  - early and immature compilers (a very difficult task)
  - poor administrative systems
  - high-performance networks are specialized and thus not well supported
- Yet the payoff is massive and very significant -- hence the challenge of parallel programming
Stacking Books in a Library: A Simple Illustration of Parallelism
- Problem: one librarian takes far too long to stack the books returned overnight
- Solution: many librarians stacking together -- but how?
- First approach: partition the books to be stacked, and have a different librarian stack each partition
  - several librarians compete for the same shelves (congestion and contention)
- Second approach: have the librarians first sort the books by stack in a production line -- communication and cooperation: a pipeline
  - once sorted, the books are placed in the stacks without contention
  - this is task parallelism
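The second approach can be sketched as a two-stage pipeline: a sorting stage routes each book to a per-shelf queue, and one stacker per shelf then works without contention. This is only an illustrative Python sketch (the names `pipeline_stack`, `sorter`, and `stacker` are invented; Python threads here illustrate coordination, not actual speedup, and real implementations in this course would use MPI or Pthreads):

```python
import queue
import threading

def pipeline_stack(books, n_shelves=4):
    # Stage 1 feeds one queue per shelf, so that in stage 2
    # no two librarians ever contend for the same shelf.
    shelf_queues = [queue.Queue() for _ in range(n_shelves)]
    shelves = [[] for _ in range(n_shelves)]

    def sorter():
        # The "sorting librarian": route each book to its shelf's queue.
        for book in books:
            shelf_queues[hash(book) % n_shelves].put(book)
        for q in shelf_queues:
            q.put(None)  # sentinel: no more books for this shelf

    def stacker(i):
        # One "stacking librarian" per shelf drains its own queue.
        while True:
            book = shelf_queues[i].get()
            if book is None:
                break
            shelves[i].append(book)

    workers = [threading.Thread(target=sorter)]
    workers += [threading.Thread(target=stacker, args=(i,)) for i in range(n_shelves)]
    for t in workers:
        t.start()
    for t in workers:
        t.join()
    return shelves
```

The queues are exactly the "communication and cooperation" of the slide: the sorter and the stackers run concurrently, and the sentinel values tell each stacker when its share of the work is done.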