The EPIKH Project (Exchange Programme to advance e-Infrastructure Know-How) MPI Applications with the Grid Engine Riccardo Rotondo


1 www.epikh.eu
The EPIKH Project (Exchange Programme to advance e-Infrastructure Know-How)
MPI Applications with the Grid Engine
Riccardo Rotondo (riccardo.rotondo@garr.it)
Joint CHAIN/EUMEDGRID-Support/EPIKH School to Science Gateways
Amman, Jordan, 29.11.2011

2 Outline
What is MPI?
MPI challenges on the Grid
The MPI interface
MPI embedded in your Grid Engine
MPI portlet development

3 Message Passing Interface
What is MPI?
– A standard defining the syntax and semantics for writing message-passing applications.
Why MPI?
– Heavy usage of CPU power in HPC.
– Development of portable and scalable large parallel applications.
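
For illustration, compiling and launching an MPI program on a single cluster typically looks like this (hello.c is an assumed example source file, not one from the deck):

mpicc -o hello hello.c    # compile with the wrapper compiler shipped by the MPI implementation
mpirun -np 4 ./hello      # launch 4 parallel processes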

4 MPI in Grid
There is no standard way of starting an MPI application (see the sketch below):
– There is no common syntax for mpirun, and mpiexec support is optional.
The cluster where the MPI job is supposed to run may not have a shared file system:
– How to distribute the binary and input files? How to gather the output?
Different clusters over the Grid are managed by different Local Resource Management Systems (PBS, LSF, SGE, …):
– Where is the list of machines that the job can use? What is the correct format for this list?
How to compile an MPI program?
– How can a physicist working on a Windows workstation compile his code for/with an Itanium MPI implementation?
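
To make the first point concrete, the same 4-process run is launched differently by each MPI implementation (a sketch; exact flags vary between versions):

mpirun -np 4 -machinefile hosts.txt ./app    # Open MPI
mpiexec -n 4 -f hosts.txt ./app              # MPICH2 (Hydra process manager)
lamboot hosts.txt && mpirun -np 4 ./app      # LAM/MPI: boot the LAM daemons first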

5 MPI-Start
Specifies a unique interface to the upper layer to run an MPI job.
Allows support for new MPI implementations without modifications to the Grid middleware.
Supports "simple" file distribution.
Provides some support to help the user manage his data.
(Diagram: MPI-Start sits between the Grid middleware and the MPI resources.)

6 MPI-Start
mpi-start is a recommended solution for hiding the implementation details of job submission. The design of mpi-start focused on making MPI job submission as independent of the cluster details as possible. Using mpi-start requires the user to define a wrapper script that sets the environment variables, plus a set of hooks.

7 MPI-Start design goals
Portable:
– The program must be able to run under any supported operating system.
Modular and extensible architecture:
– Plugin/component architecture.
Relocatable:
– Must be independent of absolute paths, to adapt to different site configurations.
– Remote "injection" of mpi-start along with the job.
"Remote" debugging features.

8 MPI-Start Architecture
(Diagram: a CORE component with three plugin families. Execution plugins: Open MPI, MPICH2, LAM, PACX. Scheduler plugins: PBS/Torque, SGE, LSF. Hooks plugins: local user hooks, compiler support, file distribution.)

9 MPI-Start Flow
(Flow diagram, as a sequence of steps:)
1. START: dump the environment.
2. Scheduler plugin: do we have a scheduler plugin for the current environment? If not, EXIT.
3. Ask the scheduler plugin for a machinefile in the default format.
4. Execution plugin: do we have a plugin for the selected MPI? If not, EXIT.
5. Activate the MPI plugin and prepare mpirun.
6. Hooks plugins: trigger the pre-run hooks.
7. Start mpirun.
8. Trigger the post-run hooks, then EXIT.

10 Using MPI-Start
Interface with environment variables:
– I2G_MPI_APPLICATION: the executable.
– I2G_MPI_APPLICATION_ARGS: the parameters to be passed to the executable.
– I2G_MPI_TYPE: the MPI implementation to use (e.g. openmpi, …).
– I2G_MPI_VERSION: which version of the MPI implementation to use. If not defined, the default version will be used.

11 Using MPI-Start
More variables:
– I2G_MPI_PRECOMMAND: specifies a command that is prepended to mpirun (e.g. time).
– I2G_MPI_PRE_RUN_HOOK: points to a shell script that must contain a "pre_run_hook" function. This function will be called before the parallel application is started (typical usage: compilation of the executable). A sketch of such a hooks script follows.
– I2G_MPI_POST_RUN_HOOK: like I2G_MPI_PRE_RUN_HOOK, but the script must define a "post_run_hook" function that is called after the parallel application has finished (typical usage: upload of results).
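
The deck references an mpi-hooks.sh script (slides 13 and 20) without showing its contents. Below is a minimal sketch of one, assuming the application binary is built from a C source of the same name; the mpicc call, file names and tar step are illustrative, only the hook function names come from the slide above:

#!/bin/bash
# Called by mpi-start before the parallel application starts.
pre_run_hook () {
  echo "Compiling ${I2G_MPI_APPLICATION}"
  # Illustrative: build the binary from its C source on the execution cluster.
  mpicc -o ${I2G_MPI_APPLICATION} ${I2G_MPI_APPLICATION}.c || return 1
  return 0
}
# Called by mpi-start after the parallel application has finished.
post_run_hook () {
  echo "Collecting results"
  # Illustrative: archive whatever output the application produced.
  tar czf results.tar.gz *.out 2>/dev/null
  return 0
}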

12 Using MPI-Start
MPI commands are transparent to the user:
– No explicit mpiexec/mpirun instruction.
– Start the script via normal LRMS submission.

The script:

[imain179@i2g-ce01 ~]$ cat test2mpistart.sh
#!/bin/sh
# This is a script to show how mpi-start is called
# Set environment variables needed by mpi-start
export I2G_MPI_APPLICATION=/bin/hostname
export I2G_MPI_APPLICATION_ARGS=
export I2G_MPI_NP=2
export I2G_MPI_TYPE=openmpi
export I2G_MPI_FLAVOUR=openmpi
export I2G_MPI_JOB_NUMBER=0
export I2G_MPI_STARTUP_INFO=/home/imain179
export I2G_MPI_PRECOMMAND=time
export I2G_MPI_RELAY=
export I2G_MPI_START=/opt/i2g/bin/mpi-start
# Execute mpi-start
$I2G_MPI_START

The submission (in SGE):

[imain179@i2g-ce01 ~]$ qsub -S /bin/bash -pe openmpi 2 -l allow_slots_egee=0 ./test2mpistart.sh

The StdOut:

[imain179@i2g-ce01 ~]$ cat test2mpistart.sh.o114486
Scientific Linux CERN SLC release 4.5 (Beryllium)
lflip30.lip.pt
lflip31.lip.pt

The StdErr:

[lflip31] /home/imain179 > cat test2mpistart.sh.e114486
Scientific Linux CERN SLC release 4.5 (Beryllium)
real 0m0.731s
user 0m0.021s
sys 0m0.013s

13 Wrapper Script for MPI-Start
The script takes two arguments: the executable name and the MPI flavour. It refers to the hooks script (see slide 11).

#!/bin/bash
# Pull in the arguments.
MY_EXECUTABLE=`pwd`/$1
MPI_FLAVOR=$2
# Convert flavor to lowercase for passing to mpi-start.
MPI_FLAVOR_LOWER=`echo $MPI_FLAVOR | tr '[:upper:]' '[:lower:]'`
# Pull out the correct paths for the requested flavor.
eval MPI_PATH=`printenv MPI_${MPI_FLAVOR}_PATH`
# Ensure the prefix is correctly set. Don't rely on the defaults.
eval I2G_${MPI_FLAVOR}_PREFIX=$MPI_PATH
export I2G_${MPI_FLAVOR}_PREFIX
# Touch the executable. It must exist for the shared file system check.
# If it does not, then mpi-start may try to distribute the executable
# when it shouldn't.
touch $MY_EXECUTABLE
# Setup for mpi-start.
export I2G_MPI_APPLICATION=$MY_EXECUTABLE
export I2G_MPI_APPLICATION_ARGS=
export I2G_MPI_TYPE=$MPI_FLAVOR_LOWER
export I2G_MPI_PRE_RUN_HOOK=mpi-hooks.sh
export I2G_MPI_POST_RUN_HOOK=mpi-hooks.sh
# If these are set then you will get more debugging information.
export I2G_MPI_START_VERBOSE=1
#export I2G_MPI_START_DEBUG=1
# Invoke mpi-start.
$I2G_MPI_START
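
Whether invoked by hand or through the job's Executable/Arguments fields as on slide 20, the call takes the form below (cpi and mpich2 are the example program and flavour used later in the deck):

/bin/sh mpi-start-wrapper.sh cpi mpich2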

15 Outline
(Diagram: worker node and Resource Manager.)

16 MPI Portlet Development

17 cpunumber in mpi_portlet.java
Reading the CPU number from the user request. A global variable represents the number of CPUs that will execute the MPI script in parallel; it can be set by the user.

String cpunumber; // Number of CPUs that will execute the MPI script in parallel

public void getInputForm(ActionRequest request) {
  ...
  List items = upload.parseRequest(request);
  ...
  cpunumber = item.getString();
  // or: cpunumber = (String) request.getParameter("cpunumber");
  ...
}

18 cpunumber in mpi_portlet.java
Using the cpunumber when submitting the job:

public void submitJob() {
  ...
  String arguments = "mpi-start-wrapper.sh cpi mpich2";
  ...
  JSagaJobSubmission tmpJSaga = new JSagaJobSubmission();
  ...
  tmpJSaga.setTotalCPUCount(cpunumber);
  ...
}

19 cpunumber in input.jsp
Reading the cpunumber in the input form (a minimal reconstruction; only the text of the original markup survives in this transcript):

<form action="..." method="post">
  ...
  Insert cpu number: <input type="text" name="cpunumber" value="4"/>
  ...
</form>

20 mpi-start, mpi-hooks, mpi-app
Setting the location of the mpi-start scripts and of the MPI application:

public void submitJob() {
  ...
  String executable = "/bin/sh";
  String arguments = "mpi-start-wrapper.sh cpi mpich2";
  ...
  String inputSandbox = appServerPath + "/WEB-INF/job/pilot_script.sh"
    + "," + appServerPath + "/WEB-INF/job/cpi.c"
    + "," + appServerPath + "/WEB-INF/job/mpi-hooks.sh"
    + "," + appServerPath + "/WEB-INF/job/mpi-start-wrapper.sh"
    + "," + inputSandbox_inputFile;
  ...
}

21 Grid Option in portlet.xml
In portlet.xml you can set default preferences that can be recalled in the Java code (a minimal reconstruction; the XML tags were stripped in this transcript):

<portlet>
  <portlet-name>mpi-portlet</portlet-name>
  <portlet-class>it.infn.ct.mpi_portlet</portlet-class>
  ...
  <portlet-preferences>
    <preference>
      <name>init_JobRequirements</name>
      <value>Member("MPI-START", other.GlueHostApplicationSoftwareRunTimeEnvironment);
        Member("MPICH", other.GlueHostApplicationSoftwareRunTimeEnvironment)</value>
    </preference>
  </portlet-preferences>
  ...
</portlet>

22 Grid Option in mpi-portlet.java
First we need to get the value from the portlet preferences:

private void getPreferences(
    ActionRequest actionRequest,
    RenderRequest renderRequest) {
  ...
  pref_JobRequirements = prefs.getValue("pref_JobRequirements", init_JobRequirements);
  ...
}

23 Grid Option in mpi-portlet.java
Now set the job requirements in the Java code to submit the job:

public void submitJob() {
  ...
  String jdlRequirements[] = pref_JobRequirements.split(";");
  int numRequirements = 0;
  for (int i = 0; i < jdlRequirements.length; i++) {
    if (!jdlRequirements[i].equals("")) {
      jdlRequirements[numRequirements] =
          "JDLRequirements=(" + jdlRequirements[i] + ")";
      numRequirements++;
    }
  }
  ...
  tmpJSaga.setJDLRequirements(jdlRequirements);
  ...
}
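
For context, the strings built above end up as JDL Requirements of the submitted job. A sketch of an equivalent hand-written JDL and its command-line submission, assuming standard gLite syntax (the sandbox list mirrors slide 20; the file name mpi-test.jdl is illustrative):

# Write a JDL roughly equivalent to what the portlet builds
cat > mpi-test.jdl <<'EOF'
Type = "Job";
Executable = "/bin/sh";
Arguments = "mpi-start-wrapper.sh cpi mpich2";
CpuNumber = 4;
InputSandbox = {"mpi-start-wrapper.sh", "mpi-hooks.sh", "cpi.c"};
Requirements = Member("MPI-START", other.GlueHostApplicationSoftwareRunTimeEnvironment)
            && Member("MPICH", other.GlueHostApplicationSoftwareRunTimeEnvironment);
EOF
# Submit with automatic proxy delegation
glite-wms-job-submit -a mpi-test.jdl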

24 References
GILDA training material:
– http://gilda.ct.infn.it/wikimain
Science Gateway developer pages:
– http://gilda.ct.infn.it/wikimain/-/wiki/Main/Science%20Gateway%20Developer%20Pages
MPI standalone code wiki:
– http://gilda.ct.infn.it/wikimain/-/wiki/Main/GridEngineMPIStandaloneCode

25 Thank you for your attention. Special thanks to Mario Reale.

