
MPI

The Message Passing Interface (MPI) implementation is an important part of an HPC Vega cluster. The vast majority of software that needs to communicate across nodes uses MPI. Although it is not the only means of communication between processes, it is the only one that we provide a standard implementation for that can be used both for compiling and running software. Other more specialized communication implementations are typically built with the software that utilizes them.

More information about the MPI standard is available from the MPI Forum.

There are several implementations of the MPI standard. The one that we use on the HPC Vega is Open MPI.

Open MPI on HPC Vega is available as a module. Search for and display the desired module:

$ module keyword openmpi/4.1.2.1

# Check all options with `tab`
$ module load openmpi/<tab>

Load the selected module (the default Open MPI version is 4.1.2.1):

  • Compile C/C++ code with openmpi (e.g. mpicc, mpic++, mpiCC and mpicxx)
  • Compile Fortran code with openmpi/gnu (e.g. mpifort, mpif77 and mpif90)
  • Compile with the latest oneAPI compilers (latest available version: 2022.0.2): oneapi/compiler/latest

$ module load openmpi/4.1.2.1
$ module load openmpi/gnu/4.1.2.1

$ module load oneapi/compiler/latest

Find older versions:

$ module spider openmpi

Once you have loaded the Open MPI module, all commands for compiling and running your MPI programs will be available.

$ mpirun ./program
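As a quick sanity check of the toolchain, a minimal MPI program can be compiled with mpicc and launched with mpirun. This is an illustrative sketch (the file name mpi_hello.c is arbitrary, not part of the cluster documentation); it requires a loaded Open MPI module:

```c
/* mpi_hello.c — minimal sanity check for the MPI toolchain.
 * Compile:  mpicc mpi_hello.c -o mpi_hello
 * Run:      mpirun -np 4 ./mpi_hello
 */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, size;

    MPI_Init(&argc, &argv);                 /* start the MPI runtime */
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);   /* this process's rank */
    MPI_Comm_size(MPI_COMM_WORLD, &size);   /* total number of ranks */

    printf("Hello from rank %d of %d\n", rank, size);

    MPI_Finalize();
    return 0;
}
```

Each rank prints one line, so launching with -np 4 should produce four lines of output.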

Single node jobs

For single-node jobs you can use any Open MPI version available on HPC Vega (e.g. the CVMFS modules mounted at /cvmfs, or the system Open MPI module).

Multi node jobs

HPC Vega uses InfiniBand, so the correct environment must be set up for jobs to run successfully.

The system Open MPI module is already configured appropriately: btl is disabled by default, so UCX selects the correct interface (ib0). If you use any other build that is not properly compiled and configured, you must select the correct interface (ib0) yourself. An example is given below.

Usage of srun:

Check the supported PMI interfaces with the command below. We recommend using pmix.

srun --mpi=list

Example of SBATCH script:

#!/bin/bash
#SBATCH --job-name=multinode-srun-test
#SBATCH --nodes=2
#SBATCH --ntasks=4
#SBATCH --ntasks-per-node=2
#SBATCH --time=00:10:00
#SBATCH --output=%j.output
#SBATCH --partition=cpu

srun --mpi=pmix ./mpibench 

For Intel MPI, export the environment variable I_MPI_PMI_LIBRARY and load the oneAPI MPI module.

Example of SBATCH script:

#!/bin/bash
#SBATCH --job-name=multinode-srun-test
#SBATCH --nodes=2
#SBATCH --ntasks=4
#SBATCH --ntasks-per-node=2
#SBATCH --time=00:10:00
#SBATCH --output=%j.output
#SBATCH --partition=cpu

export I_MPI_PMI_LIBRARY=/usr/lib64/libpmi2.so
module load oneapi/mpi/2021.5.1

srun --mpi=pmix ./mpiBench-Intel

Usage of mpirun:

Open MPI, version 3.x

$ mpirun -mca btl_tcp_if_include ib0 ./mpibench

Open MPI, version prior to 4.0.3

Enable UCX and disable btl. Example of an SBATCH script:

#!/bin/bash
#SBATCH --job-name=multinode-mpirun-test
#SBATCH --nodes=2
#SBATCH --ntasks=4
#SBATCH --ntasks-per-node=2
#SBATCH --time=00:15:00
#SBATCH --output=%j.output
#SBATCH --partition=cpu

module load OpenMPI/4.0.3-GCC-9.3.0

mpirun -mca pml ucx -mca btl ^uct,tcp,openib,vader --bind-to core:overload-allowed ./mpibench

mpiBench can be found at:

  • Source code: mpiBench
  • Compiled code: /ceph/hpc/software/mpibench
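For reference, the core of a benchmark like mpiBench is essentially a timed collective loop. The following sketch is not the actual mpiBench source (the file name and iteration count are illustrative); it times MPI_Allreduce with MPI_Wtime and requires a loaded Open MPI module:

```c
/* allreduce_timing.c — illustrative sketch of a timed MPI collective,
 * in the spirit of mpiBench (not its actual source).
 * Compile:  mpicc allreduce_timing.c -o allreduce_timing
 * Run:      srun --mpi=pmix ./allreduce_timing   (or mpirun)
 */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    const int iters = 1000;        /* number of timed iterations */
    int rank;
    double in = 1.0, out = 0.0;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    MPI_Barrier(MPI_COMM_WORLD);   /* synchronize ranks before timing */
    double t0 = MPI_Wtime();
    for (int i = 0; i < iters; i++)
        MPI_Allreduce(&in, &out, 1, MPI_DOUBLE, MPI_SUM, MPI_COMM_WORLD);
    double elapsed = MPI_Wtime() - t0;

    if (rank == 0)
        printf("avg MPI_Allreduce latency: %.3f us\n",
               1e6 * elapsed / iters);

    MPI_Finalize();
    return 0;
}
```

Run across two nodes (as in the scripts above), this makes the effect of a wrong interface selection immediately visible in the reported latency.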

More information about multithreaded, multicore, and hybrid (MPI + OpenMP) jobs with Open MPI can be found at: LINK.