Parallel Algorithm Models
Introduction
- Two main implementations: the shared address space and the message passing paradigms (contrasted in the sketch below)
- Fit the model to the problem
- Avoid unnecessary performance costs: increase locality and reduce communication costs
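To make the two paradigms concrete, here is a minimal Python sketch, not from the slides: threads share one address space and can update the same result object directly, while processes exchange data through explicit messages. The toy summing workload and all names are illustrative assumptions.

```python
import threading
import multiprocessing as mp

def shared_sum(data, out, lock):
    # Shared address space: every thread sees the same 'data' and 'out'.
    s = sum(data)
    with lock:
        out[0] += s

def message_sum(chunk, queue):
    # Message passing: the child works on its own copy of 'chunk' and
    # returns its partial result through an explicit channel.
    queue.put(sum(chunk))

if __name__ == "__main__":
    data = list(range(8))
    # Shared-memory version (threads).
    out, lock = [0], threading.Lock()
    threads = [threading.Thread(target=shared_sum, args=(data[i::2], out, lock))
               for i in range(2)]
    for t in threads: t.start()
    for t in threads: t.join()
    # Message-passing version (processes).
    q = mp.Queue()
    procs = [mp.Process(target=message_sum, args=(data[i::2], q)) for i in range(2)]
    for p in procs: p.start()
    total = sum(q.get() for _ in procs)   # drain results before joining
    for p in procs: p.join()
    print(out[0], total)                  # both print 28
```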
The Data Parallelism Model
[Figure: the same operation, e.g. +4, applied to every element of a vector in parallel]
- Shared memory architecture: data resides in global memory; easier to program
- Distributed memory architecture: data chunks are sent to the workers' local memory; better locality
- Prominent example: vector/matrix addition (see the sketch below)
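A minimal data-parallel sketch, with the chunking scheme and worker count as assumptions: the vector is split into chunks and every worker applies the same +4 operation to its own chunk.

```python
import multiprocessing as mp

def add4(chunk):
    # Data parallelism: every worker runs the same operation (+4)
    # on its own chunk of the vector.
    return [x + 4 for x in chunk]

if __name__ == "__main__":
    vec = list(range(12))
    n = 4                          # number of workers (assumed)
    size = len(vec) // n
    chunks = [vec[i * size:(i + 1) * size] for i in range(n)]
    with mp.Pool(processes=n) as pool:
        parts = pool.map(add4, chunks)      # one chunk per worker
    result = [x for part in parts for x in part]
    print(result)                  # [4, 5, ..., 15]
```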
The Work Pool Model
[Figure: a pool of tasks 1..N dynamically mapped onto processes P1..PM]
- Dynamic mapping of tasks onto processes (see the sketch below)
- Static and dynamic processes
- Centralized and decentralized pools
- Granularity vs. overheads
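A minimal work-pool sketch under assumed details: a centralized task queue from which each worker dynamically pulls the next task as soon as it finishes the previous one, so faster workers naturally take more work.

```python
import multiprocessing as mp

def worker(tasks, results):
    # Each process repeatedly grabs the next available task (dynamic
    # mapping) until a sentinel signals that the pool is empty.
    while True:
        task = tasks.get()
        if task is None:
            break
        results.put((task, task * task))   # toy work: square the number

if __name__ == "__main__":
    tasks, results = mp.Queue(), mp.Queue()
    n_workers = 3
    for t in range(10):
        tasks.put(t)               # fill the centralized work pool
    for _ in range(n_workers):
        tasks.put(None)            # one sentinel per worker
    procs = [mp.Process(target=worker, args=(tasks, results))
             for _ in range(n_workers)]
    for p in procs: p.start()
    out = sorted(results.get() for _ in range(10))  # drain before joining
    for p in procs: p.join()
    print(out)
```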
The Work Graph Model
- Break the workload into tasks
- Tasks form a directed acyclic graph (DAG) of dependencies
- Task size is important
- Static vs. dynamic graph
- Number of tasks vs. cost of communication (see the sketch below)
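A minimal sketch of executing a task DAG, where the toy graph and all names are assumptions: a task becomes runnable only once all of its predecessors have completed, and independent tasks in the same wave run in parallel.

```python
from concurrent.futures import ThreadPoolExecutor

# Toy DAG: each task lists the tasks it depends on.
deps = {"a": [], "b": ["a"], "c": ["a"], "d": ["b", "c"]}

def run_task(name, done):
    # Placeholder for real work; records completion.
    print(f"running {name} after {deps[name]}")
    done.add(name)

def execute_dag(deps):
    done = set()
    pending = set(deps)
    with ThreadPoolExecutor(max_workers=2) as pool:
        while pending:
            # A task is ready when all of its dependencies are done.
            ready = [t for t in pending if all(d in done for d in deps[t])]
            futures = [pool.submit(run_task, t, done) for t in ready]
            for f in futures:
                f.result()          # wait for this wave to finish
            pending -= set(ready)

execute_dag(deps)   # runs a, then b and c in parallel, then d
```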
The Master-Slave Model
- Task division: pre-assigned, random mapping, or on the fly
- Master(s) may become bottleneck(s) (see the sketch below)
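A minimal master-slave sketch, with round-robin pre-assignment as an assumed policy: the master generates the work, hands it to slaves over per-worker pipes, and gathers all results, which is exactly why it can become a bottleneck.

```python
import multiprocessing as mp

def slave(conn):
    # Each slave serves requests from the master until told to stop.
    while True:
        task = conn.recv()
        if task is None:
            break
        conn.send(task * 2)        # toy work: double the input

if __name__ == "__main__":
    n = 3
    pipes = [mp.Pipe() for _ in range(n)]
    slaves = [mp.Process(target=slave, args=(child,))
              for _, child in pipes]
    for s in slaves: s.start()
    # The master pre-assigns tasks round-robin and collects every result;
    # all traffic flows through it.
    for i in range(9):
        pipes[i % n][0].send(i)
    results = [pipes[i % n][0].recv() for i in range(9)]
    for parent, _ in pipes:
        parent.send(None)          # shut the slaves down
    for s in slaves: s.join()
    print(results)                 # [0, 2, 4, ..., 16]
```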
The Pipeline Model
[Figure: a stream of data items passing through successive pipeline stages Func X, Func Y, Func Z; see the sketch below]
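A minimal pipeline sketch, with the three stage functions as assumptions: stages are connected by queues, and each stage consumes its predecessor's output while earlier stages already work on later items of the stream.

```python
import multiprocessing as mp

def stage(func, inq, outq):
    # A pipeline stage: consume items from the input queue, apply this
    # stage's function, and pass results downstream until the stream ends.
    while True:
        item = inq.get()
        if item is None:
            outq.put(None)         # propagate end-of-stream
            break
        outq.put(func(item))

def func_x(v): return v + 1
def func_y(v): return v * 2
def func_z(v): return v - 3

if __name__ == "__main__":
    q0, q1, q2, q3 = (mp.Queue() for _ in range(4))
    stages = [mp.Process(target=stage, args=(f, qin, qout))
              for f, qin, qout in
              [(func_x, q0, q1), (func_y, q1, q2), (func_z, q2, q3)]]
    for s in stages: s.start()
    for v in range(5):
        q0.put(v)                  # feed the data stream
    q0.put(None)
    out = []
    while (item := q3.get()) is not None:
        out.append(item)
    for s in stages: s.join()
    print(out)                     # [-1, 1, 3, 5, 7]
```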
Hybrid Model
- Different models applied hierarchically
- Different models applied sequentially
- Different models used at different stages (see the sketch below)
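A minimal hybrid sketch, with the three stages as assumptions: the computation is a sequential pipeline of stages, and one of those stages is itself data-parallel over a worker pool.

```python
import multiprocessing as mp

def square(x):
    # Data-parallel inner stage: the same function over every element.
    return x * x

if __name__ == "__main__":
    data = list(range(8))
    # Hybrid: sequential stages, with a data-parallel stage in the middle.
    shifted = [x + 1 for x in data]           # stage 1: sequential map
    with mp.Pool(processes=4) as pool:
        squared = pool.map(square, shifted)   # stage 2: data parallel
    total = sum(squared)                      # stage 3: reduction
    print(total)                              # 204
```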