
1 MPI On Grids
September 3rd, GridKA School 2009
Kiril Dichev, HLRS, University of Stuttgart

2 Overview of using MPI on Grids
Advantages: a great set of computing and storage resources is available for Grid computing:
"The EGEE Grid consists of over 36,000 CPUs available to users 24 hours a day, 7 days a week, in addition to about 5 PB disk (5 million Gigabytes) + tape MSS of storage, and maintains 30,000 concurrent jobs on average"
Source: http://knowledge.eu-egi.eu/index.php/EGEE
Disadvantages:
– (for admins) MPI configuration is needed at the site level: MPI libraries and supporting software on the clusters, and publishing of the information to the Information System
– (for users) the Grid middleware is not designed around MPI jobs, so an advanced setup is more difficult

3 Different philosophies on using computing resources
There is a tension between high-performance computing and Grid computing:
– If you do high-performance computing, you want to configure things yourself:
  Resource reservation
  The runtime environment
– If you do Grid computing, you (normally) use the Grid simply as a set of abstract available resources:
  Additional tuning is possible through scripts, but it is more difficult to get through the middleware
  Certain aspects (such as resource manager options) are not possible at all

4 Issues with using MPI on Grids
MPI status in EGEE (report from 2008):
– Of the 331 EGEE sites tested, only 36 sites accept parallel jobs
– Only 22 of those 36 sites can actually run MPI jobs
– The main problems: no MPI installation, misconfiguration of the Grid middleware for advertising MPI, broken MPI installations, wrong startup of MPI jobs, etc.
Euforia infrastructure:
– High reliability of MPI jobs
– Still, maintenance of a reliable Grid for MPI is time-consuming

5 Running MPI applications on HPC resources
In an HPC environment, the user talks to a job scheduler:
– Log into a frontend computer
– Specify the batch job configuration file
  Very flexible control of node reservation
  Very flexible control of runtime options
– Submit the job
– Check its status
– Retrieve the output
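To make the contrast with the Grid workflow concrete, the batch job configuration file mentioned above could look as follows. This is a minimal sketch assuming a PBS/Torque scheduler; the job name, queue, and resource numbers are illustrative and not taken from the slides.

```shell
#!/bin/bash
# Sketch of a PBS/Torque batch script for an MPI job.
#PBS -N ring_c                 # job name
#PBS -l nodes=4:ppn=2          # fine-grained node reservation
#PBS -l walltime=00:10:00      # runtime limit
#PBS -q parallel               # target queue (site-specific)

cd "$PBS_O_WORKDIR"
# The runtime options are fully under the user's control here
mpirun -np 8 ./ring_c
```

Every reservation and runtime detail (nodes, processes per node, wall time, mpirun flags) is directly in the user's hands, which is exactly the flexibility the Grid middleware abstracts away.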

6 Running MPI applications on Grid infrastructures
In a Grid infrastructure like EGEE, the user talks to the Grid middleware:
– Log into a UI computer
– Specify the resource requirements in a JDL file
  Limited configurability of reservation and runtime
– Submit the job
– Check its status
– Retrieve the output
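The "limited configurability" shows up in the JDL itself: instead of naming nodes and queues, the user states abstract requirements and lets the middleware match them. A sketch of such a gLite-style JDL fragment; the "MPI-START" tag is an example of how a site might advertise MPI support in the Information System, not a value taken from the slides:

```
// Sketch of an EGEE JDL fragment for an MPI-capable site
Type = "Job";
NodeNumber = 4;
Executable = "ring_c";
Requirements = Member("MPI-START",
    other.GlueHostApplicationSoftwareRunTimeEnvironment);
```

Note that there is no equivalent of "ppn" or scheduler flags here; the middleware decides how the four nodes are provided.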

7 Running MPI applications on EGEE-like infrastructures
A diagram of the components involved:
UI -> WMS -> CE -> WN
On the WN: MPI-Start (shell scripts) and the MPI library
Grid middleware: LCG, gLite, etc.

8 MPI-Start
MPI-Start is a set of shell scripts to support MPI applications
The goal is to "Do The Right Thing" for running MPI-parallel jobs on different sites with different configurations
Detection mechanisms for:
– MPI implementation
– Scheduler
– File systems
Allows for basic and advanced usage
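The advanced usage typically goes through environment variables and user-supplied hook functions that MPI-Start calls around the MPI run. A minimal sketch of such a hook file, assuming the I2G_MPI_* variable names and pre_run_hook/post_run_hook conventions used by MPI-Start deployments; treat the exact names as assumptions to be checked against the installed version:

```shell
# mpi-hooks.sh: user hook file sourced by MPI-Start (sketch)
pre_run_hook () {
  # e.g. compile the application or stage input files on the WN
  echo "about to run $I2G_MPI_APPLICATION via $I2G_MPI_TYPE"
  return 0
}

post_run_hook () {
  # e.g. collect or archive the output after the MPI run
  return 0
}
```

The hooks return 0 on success; a non-zero return aborts the job, which lets site- or user-specific preparation fail cleanly before mpirun is ever invoked.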

9 Open MPI
Open MPI is deployed in the frame of Euforia
Some of its nice features include:
– High configurability (by both admins and users)
– Flexibility (a component-based architecture supporting all modern HPC architectures and interconnects)
– A responsive mailing list

10 MPI Job Submission in Euforia
For EGEE:
– edg-job-* tools (LCG-based)
– glite-wms-job-* tools (gLite-based)
The typical command-line tools for jobs on Euforia are modified edg-job-* tools:
– i2g-job-* tools

11 MPI Job Submission in Euforia
Log into the Euforia UI: iwrui2.fzk.de
Compile your MPI application:
  mpicc -o ring_c ring_c.c
Create a temporary proxy which is associated with your certificate:
  voms-proxy-init -voms itut

12 MPI Job Submission in Euforia (2)
Submit with i2g-job-submit. An example JDL:
  Executable = "ring_c";
  JobType = "openmpi";
  NodeNumber = 4;
  InputSandbox = {"ring_c"};
  StdOutput = "StdOutput";
  StdError = "StdError";
  OutputSandbox = {"StdOutput", "StdError"};

13 Status and Output Retrieval
Check the job status:
  i2g-job-status
Retrieve the output:
  i2g-job-get-output

14 Advanced MPI Support on Grids
– File distribution for different types of clusters
– Handling large input/output files (transfer from/to an SE)
– Forwarding MPI runtime options to mpirun/mpiexec
MPI tools support can easily be added:
– MPI performance measurement tools
– MPI correctness checking tools
The key to most advanced features is to use shell scripts (e.g. MPI-Start extensions) before/during/after the program execution. The scripts allow interaction with:
– The Grid middleware
– Any Linux application, tool or library
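As an illustration of the large-file handling, a pre-run hook could stage input from a Storage Element onto the worker node before the MPI run starts. A sketch using the lcg-cp data management command; the logical file name and VO path are hypothetical:

```shell
# Sketch: stage a large input file from a Storage Element (SE)
# to the worker node before mpirun starts. The LFN is hypothetical.
pre_run_hook () {
  lcg-cp lfn:/grid/itut/input/big_input.dat \
         "file://$PWD/big_input.dat" || return 1
  return 0
}
```

A matching post-run hook could push result files back to the SE the same way, which keeps large data out of the input/output sandboxes entirely.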

15 Thank you!

