1
A Parallel Computational Model for Heterogeneous Clusters Jose Luis Bosque, Luis Pastor, IEEE Transactions on Parallel and Distributed Systems, Vol. 17, No. 12, December 2006 Presented by 張肇烜
2
Outline: Introduction; Heterogeneous LogGP; HLogGP Validation; Experimental Results; Conclusions
3
Introduction During the last decade, Beowulf clusters have seen tremendous dissemination and acceptance. However, the design and implementation of efficient parallel algorithms for clusters is still a problematic issue.
4
Introduction (cont.) In this paper, a new heterogeneous parallel computational model based on the LogGP model is proposed.
5
Heterogeneous LogGP Reasons for selecting the LogGP model: The architecture it assumes is very similar to that of a cluster. LogGP removes the synchronization points needed in other models. LogGP allows considering both short and long messages.
6
Heterogeneous LogGP (cont.) LogGP assumes finite network capacity, avoiding situations where the network becomes a bottleneck. This model encourages techniques that yield good results in practice, such as designing algorithms with balanced communication patterns.
7
Heterogeneous LogGP (cont.) HLogGP Definition: Latency, L: Communication latency depends on both network technology and topology. The latency matrix of a heterogeneous cluster can be defined as a square matrix L = {l_{1,1}, …, l_{m,m}}.
8
Heterogeneous LogGP (cont.) Overhead, o: the time needed by a processor to send or receive a message is referred to as overhead. Sender overhead vector, O_s = {os_1, …, os_m}. Receiver overhead vector, O_r = {or_1, …, or_m}. Gap between messages, g: this parameter reflects each node's proficiency at sending consecutive short messages. Gap vector g = {g_1, …, g_m}.
9
Heterogeneous LogGP (cont.) Gap per byte, G: The gap per byte depends on network technology. In a heterogeneous network, a message can cross different switches with different bandwidths. Gap matrix G = {G_{1,1}, …, G_{m,m}}.
10
Heterogeneous LogGP (cont.) Computational power, P_i: The number of nodes cannot be used in a heterogeneous model for measuring the system's computational power. Computational power vector P = {P_1, …, P_m}.
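Taken together, the five parameters can be combined in the usual LogGP fashion, but drawing each value from the corresponding vector or matrix. As a sketch consistent with the definitions above (the paper's exact formulation may differ), the time for node i to deliver a k-byte message to node j would be T_{i,j}(k) ≈ os_i + (k − 1)·G_{i,j} + l_{i,j} + or_j.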
11
HLogGP Validation Cluster description: the test cluster combines fast (F) and slow (S) nodes and both 100 Mbps and 10 Mbps network links.
12
HLogGP Validation (cont.) Benchmark 1:
13
HLogGP Validation (cont.) Benchmark 2: Source code of the benchmark for measuring the gap between messages.
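The source code itself is not reproduced in this transcript. As an illustration only, a minimal MPI sketch of this kind of measurement (message count, payload size, and all names are assumed here, not taken from the paper) could look like the following C program:

/* Sketch of a gap-between-messages benchmark: rank 0 sends many
 * back-to-back short messages to rank 1 and reports the average
 * per-message time, which approximates the sender's gap g.
 * Run with at least two MPI processes, e.g. mpirun -np 2 ./gap */
#include <mpi.h>
#include <stdio.h>

#define NUM_MSGS 10000   /* number of consecutive short messages (assumed value) */

int main(int argc, char **argv)
{
    int rank, i;
    char buf[1] = {0};   /* one-byte payload so the per-byte gap G is negligible */

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank == 0) {
        double start = MPI_Wtime();
        for (i = 0; i < NUM_MSGS; i++)
            MPI_Send(buf, 1, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
        double elapsed = MPI_Wtime() - start;
        printf("average time per short message: %g s\n", elapsed / NUM_MSGS);
    } else if (rank == 1) {
        for (i = 0; i < NUM_MSGS; i++)
            MPI_Recv(buf, 1, MPI_CHAR, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
    }

    MPI_Finalize();
    return 0;
}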
14
HLogGP Validation (cont.) Overhead:
15
HLogGP Validation (cont.) Overhead:
16
HLogGP Validation (cont.) Latency: measured for switch-switch, hub-hub, and switch-hub connections.
17
HLogGP Validation (cont.) Gap between messages:
18
HLogGP Validation (cont.) Gap per Byte:
19
HLogGP Validation (cont.) Computational power:
20
Experimental Results Three objectives were pursued in the tests presented here. To verify that HLogGP is accurate enough to predict the response time of a parallel program. To verify that heterogeneity has a strong impact on system performance. To show how the cluster parametrization may be used for determining the performance of a parallel program in a real application environment.
21
Experimental Results (cont.) A volumetric magnetic resonance image compression application was selected. The sequential process may be divided into the following stages. Data acquisition. Data read and memory allocation. Computation of the 3D Haar wavelet transform.
22
Experimental Results (cont.) Thresholding. Encoding of the subbands using the run-length encoding compression algorithm. Write back of the compressed image.
23
Experimental Results (cont.) A theoretical analysis of the application's response time is presented. First stage: The master distributes the raw data among the slave processors in proportion to their computational power. Let N be the total number of slices and P_T = P_1 + … + P_m the cluster's total computational power; each slave i will then receive N_i = N·P_i / P_T slices.
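As a purely illustrative example (the numbers are hypothetical): with N = 120 slices and computational powers P = {1, 1, 2}, P_T = 4, so the slaves receive N_1 = 120·1/4 = 30, N_2 = 30, and N_3 = 120·2/4 = 60 slices.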
24
Experimental Results (cont.) The total time for this stage combines the model parameters: the sender spends o_s cycles of sending overhead to get the first byte into the network, each subsequent byte takes G cycles to be sent, each byte travels through the network for L cycles, and the receiving processor spends o_r cycles in receiving overhead.
25
Experimental Results (cont.) Second stage: In this case, the response time is the time spent by the last slave to finish its work. The total response time for the second stage is estimated as the response time of a generic slave processor:
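A sketch of this estimate, assuming each slice costs w cycles of work (w is an assumed symbol, not from the slide): slave i needs T_2(i) ≈ N_i·w / P_i = N·w / P_T, which is the same for every slave; the proportional slice distribution is what makes a single generic slave representative of the whole stage.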
26
Experimental Results (cont.) Third stage: The master process has to first gather the partial results produced by all of the slave processes. The total response time of the third phase is calculated as a summation:
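One plausible form of that summation (an assumption; the terms actually used in the paper may differ), with k_i the size in bytes of slave i's partial result and index 0 denoting the master: T_3 ≈ Σ_{i=1..m} (os_i + (k_i − 1)·G_{i,0} + l_{i,0} + or_0).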
27
Experimental Results (cont.) Fourth stage: The master process has to send an image subband to each of the slave processes. The total time for this stage is:
28
Experimental Results (cont.) Fifth stage: This stage is similar to the second, except that the amount of work is not distributed according to the nodes' computational power. This time can be approximated by the following expression:
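A sketch of that approximation, with S_i the size of the subband assigned to slave i and w' an assumed per-unit encoding cost: since S_i is fixed by the decomposition rather than by P_i, the stage is governed by the slowest slave, T_5 ≈ max_i (S_i·w' / P_i).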
29
Experimental Results (cont.) Sixth stage: This stage is similar to the third, but the message sizes cannot be determined a priori; k is determined by the subband size.
30
Experimental Results (cont.) Execution Results
31
Experimental Results (cont.) Execution Results
32
Conclusion In this paper, the HLogGP model for heterogeneous clusters has been proposed and validated. The model can be applied to heterogeneous clusters where either the nodes, the interconnection network, or both are heterogeneous.