MPI Labs: Simulation of an Ant Colony
Camille Coti coti@lri.fr
QosCosGrid Barcelona meeting, 10/25/06
Introduction to ants
● How ants find food and how they remember the path
– Random walk around the ant-hill
– Once they find some food: go back to the ant-hill
– Drop pheromones along this path
● When they find some pheromones:
– Follow them
– Pheromones evaporate, thereby limiting their influence over time (a common model multiplies each cell's pheromone level by a constant factor between 0 and 1 at every step)
Modelling an ant colony
● Cellular automaton
– Grid of cells, represented as a matrix
– State of a cell:
● It can have an ant on it (or several)
● Some pheromones may have been dropped on it
● It can also be empty
– We define a transition rule from time t to time t+1
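One way to picture this in C: a cell could be stored as a small struct and the grid as a matrix of them. The names below (cell_t, nb_ants, pheromone) are illustrative, not the ones used in the lab code:

    /* Hypothetical layout of one cell of the automaton. */
    typedef struct {
        int   nb_ants;    /* number of ants on the cell (0 = no ant)    */
        float pheromone;  /* pheromone level, decays at every time step */
    } cell_t;

    cell_t grid[HEIGHT][WIDTH];  /* the whole space, HEIGHT x WIDTH cells */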
A picture can make things easier
[Diagram: the ant-hill (where the ants live) and the food; ants spread around the ant-hill, and ants that have found the food drop pheromones on the way back]
Update algorithm
● Every ant looks around itself
● If it finds pheromones:
– Follow them
● If it finds some food:
– Take some
– Go back to the ant-hill, dropping pheromones along the path
● Otherwise:
– Choose a random direction
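In C, this rule could be sketched as below. Every helper here (food_at(), pheromone_near(), and so on) is a hypothetical placeholder standing in for the functions provided with the labs, not their real names:

    /* Hypothetical sketch of the update rule for one ant. */
    void update_ant(ant_t *ant, grid_t *grid) {
        if (ant->carrying_food) {                  /* on the way home  */
            drop_pheromone(grid, ant->x, ant->y);  /* mark the path    */
            step_towards_hill(ant);
        } else if (food_at(grid, ant->x, ant->y)) {
            ant->carrying_food = 1;                /* take some food   */
        } else if (pheromone_near(grid, ant->x, ant->y)) {
            step_towards_pheromone(ant, grid);     /* follow the trail */
        } else {
            step_randomly(ant);                    /* random walk      */
        }
    }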
Parallelisation
● Share the grid among the processors
– Each processor computes a part of the calculation
– Use MPI communication between the processes
– This is parallel computing ☺
[Diagram: the grid split into four blocks, one per process: Proc #0, Proc #1, Proc #2, Proc #3]
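In the simplest (1-D) variant of this split, each rank can derive the strip of rows it owns from its rank and the communicator size; the variable names here are illustrative:

    /* Hypothetical 1-D decomposition: each rank owns a strip of rows. */
    int rows_per_rank = HEIGHT / size;         /* assumes HEIGHT % size == 0   */
    int first_row = rank * rows_per_rank;      /* first row owned by this rank */
    int last_row  = first_row + rows_per_rank; /* one past the last owned row  */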
Parallelisation
● Each processor can compute the transition rule for almost all the space it is assigned
– BUT there is a problem near the boundaries: the processor needs to know the state of the cells just beyond them
– SO each processor has to send the state of its frontiers to its neighbours
● Overlap computation and communication:
– Start non-blocking communications
– Compute
– Wait for the communications to finish (by then they usually already have, so the wait costs almost nothing)
Algorithm of the parallelisation
● Initialisation
● For n iterations do:
– Send/receive the frontiers
– Compute the transition rule (except near the frontiers)
– Finish the communications
– Compute the transition rule near the frontiers
– Send the result
– Update the bounds (ants may have walked across the frontiers)
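This loop could look as follows in MPI C, assuming a 1-D decomposition with one ghost row per side. The buffer names, the cell_type datatype and the compute_*() functions are hypothetical stand-ins for the lab code:

    /* Hypothetical main loop: overlap frontier exchange with computation. */
    MPI_Request reqs[4];  /* two sends + two receives per iteration */
    for (int it = 0; it < n_iterations; it++) {
        MPI_Irecv(ghost_north, WIDTH, cell_type, north, 0, comm, &reqs[0]);
        MPI_Irecv(ghost_south, WIDTH, cell_type, south, 1, comm, &reqs[1]);
        MPI_Isend(row_north,   WIDTH, cell_type, north, 1, comm, &reqs[2]);
        MPI_Isend(row_south,   WIDTH, cell_type, south, 0, comm, &reqs[3]);

        compute_interior();                         /* needs no frontier data  */
        MPI_Waitall(4, reqs, MPI_STATUSES_IGNORE);  /* ghost rows have arrived */
        compute_frontiers();                        /* now uses the ghost rows */
    }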
What you have to do
● We provide you with:
– The basic functions
– The update rule
● You have to write:
– The MPI communications
– The creation and declaration of an MPI data type
Some “good” practice rules
● Initialise your communications:
– MPI_Init(&argc, &argv);
– MPI_Comm_size(MPI_COMM_WORLD, &size);
– MPI_Comm_rank(MPI_COMM_WORLD, &rank);
● Finalise them:
– MPI_Finalize();
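Put together, these calls form the skeleton of every MPI program:

    /* Minimal MPI program: initialise, query rank and size, finalise. */
    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char *argv[]) {
        int rank, size;
        MPI_Init(&argc, &argv);                /* must come before any MPI call */
        MPI_Comm_size(MPI_COMM_WORLD, &size);  /* total number of processes     */
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);  /* rank of this process          */
        printf("Process %d of %d\n", rank, size);
        MPI_Finalize();                        /* last MPI call before exit     */
        return 0;
    }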
Some “good” practice rules
● Use non-blocking communications rather than blocking ones:
– MPI_Isend() / MPI_Irecv()
– Wait for completion with MPI_Waitall()
● This lets you overlap communication with computation
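A minimal example of the pattern, exchanging one int with a partner process (partner and the buffers are illustrative):

    /* Post both operations, compute, then wait for completion. */
    MPI_Request reqs[2];
    int send_val = rank, recv_val;
    MPI_Isend(&send_val, 1, MPI_INT, partner, 0, MPI_COMM_WORLD, &reqs[0]);
    MPI_Irecv(&recv_val, 1, MPI_INT, partner, 0, MPI_COMM_WORLD, &reqs[1]);
    /* ... useful computation here, while the messages are in flight ... */
    MPI_Waitall(2, reqs, MPI_STATUSES_IGNORE);  /* both are now complete */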
Creating a new MPI data type
● Declare the types it will contain:
– MPI_Datatype types[2] = {MPI_INT, MPI_CHAR};
● Declare the displacements (byte offsets of the fields):
– MPI_Aint displ[2] = {0, 4};
● Create your structure type:
– MPI_Type_create_struct(...)
● And commit it:
– MPI_Type_commit(...)
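Assembled into a complete sequence, this gives the code below. The struct item_t is illustrative, and offsetof() is used instead of the hard-coded displacement 4, which silently assumes a 4-byte int and no padding:

    /* Build an MPI datatype matching a hypothetical C struct. */
    #include <mpi.h>
    #include <stddef.h>  /* offsetof */

    typedef struct { int value; char flag; } item_t;

    MPI_Datatype item_type;
    int          blocklens[2] = {1, 1};
    MPI_Datatype types[2]     = {MPI_INT, MPI_CHAR};
    MPI_Aint     displ[2]     = {offsetof(item_t, value),
                                 offsetof(item_t, flag)};

    MPI_Type_create_struct(2, blocklens, displ, types, &item_type);
    MPI_Type_commit(&item_type);  /* item_type is now usable in sends/receives */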
Create a topology
● For example, create a torus:

    /* The globals comm_size, comm_torus_1D, my_west_rank and my_east_rank */
    /* are declared elsewhere in the provided code.                        */
    void create_comm_torus_1D() {
        int mpierrno, period, reorder;
        period = 1;  /* periodic: the ends wrap around, making it a torus */
        reorder = 0;
        mpierrno = MPI_Cart_create(MPI_COMM_WORLD, 1, &comm_size, &period,
                                   reorder, &comm_torus_1D);
        MPI_Cart_shift(comm_torus_1D, 0, 1, &my_west_rank, &my_east_rank);
    }

● (You won't have to do this for the labs, since this function is provided, but it is useful general knowledge.)
Some collective communications
● Reductions: sum, min, max...
– Useful for time measurements or to compute a global sum of local results, for example
– MPI_Reduce(...)
● Barriers
– All the processes synchronise
– MPI_Barrier(communicator)
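For example, a global sum of a per-process count, gathered on rank 0 (the variables and the count_local_food() helper are illustrative):

    /* Sum each rank's local count on rank 0. */
    int local_food = count_local_food();  /* hypothetical helper */
    int total_food = 0;
    MPI_Reduce(&local_food, &total_food, 1, MPI_INT, MPI_SUM,
               0, MPI_COMM_WORLD);  /* total_food is valid on rank 0 only */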
Misc
● Time measurement:
– t1 = MPI_Wtime();
– /* ... computation to be timed ... */
– t2 = MPI_Wtime();
– time_elapsed = t2 - t1;
● MPI_Wtime() returns the wall-clock time elapsed since some arbitrary point in the past, on the calling processor
If you need more
● www.lri.fr/~coti/QosCosGrid
● Feel free to ask questions ☺