GROMACS

GROMACS is free and open-source software for high-performance molecular dynamics simulations and output analysis.

License

GROMACS is free software released under the GNU Lesser General Public License (LGPL), version 2.1.

Installation and usage

GROMACS is available on the cluster either as a module or as a Singularity container.

Modules:

module av GROMACS

Output:

-------------------------------------------------------------------------------------------------- /ceph/hpc/software/modulefiles --------------------------------------------------------------------------------------------------
   GROMACS/GROMACS-2021.3-gcc-GPU    GROMACS/GROMACS-2021.3-gcc

--------------------------------------------------------------------------------------------- /cvmfs/sling.si/modules/el7/modules/all ----------------------------------------------------------------------------------------------
   GROMACS/2019-foss-2018b                   GROMACS/2021.3-foss-2021a-CUDA-11.3.1-PLUMED-2.7.2    GROMACS/2021.5-foss-2021b-PLUMED-2.8.0
   GROMACS/2020.4-foss-2020a-Python-3.8.2    GROMACS/2021.5-foss-2021b-CUDA-11.4.1                 GROMACS/2021.5-foss-2021b              (D)
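
To use one of the listed builds, load it with module load and check which binary it provides; for example, with the default module from the listing above (depending on the build, the binary is gmx, gmx_mpi, or both):

module load GROMACS/2021.5-foss-2021b
gmx --version        # use gmx_mpi --version if the build is MPI-only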

Singularity containers:

$ ls -al /ceph/hpc/software/containers/singularity/images/gromacs-*

Output:

 ... /ceph/hpc/software/containers/singularity/images/gromacs-2021.3-gpu.sif
 ... /ceph/hpc/software/containers/singularity/images/gromacs-2021-gpu.sif
 ... /ceph/hpc/software/containers/singularity/images/gromacs-2022.1-gpu.sif
 ... /ceph/hpc/software/containers/singularity/images/gromacs-2022.3-gpu.sif

Build a container for GROMACS:

GPU

The NVIDIA NGC Catalog provides an optimized GROMACS container with GPU support, which you can build with the commands below:

singularity build gromacs-gpu-<tag>.sif docker://nvcr.io/hpc/gromacs:<tag>
singularity build --fakeroot gromacs-gpu-2022.3.sif docker://nvcr.io/hpc/gromacs:2022.3

If you don't have administrative rights on the system, use the --fakeroot switch. If fakeroot is not enabled for your user, write to support@sling.si.
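
After the build finishes, a quick sanity check is to query the GROMACS version inside the freshly built image (file name taken from the example above):

singularity exec gromacs-gpu-2022.3.sif gmx --version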

SBATCH

Examples of running GROMACS on CPU (module) and GPU (Singularity container)

GROMACS Singularity containers are already available on the cluster at the following path:

/ceph/hpc/software/containers/singularity/images/
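
Before writing a batch script, you can test an image interactively; a minimal sketch, assuming an interactive GPU allocation (the partition name and resource sizes are only examples):

srun --partition=gpu --gres=gpu:1 --cpus-per-task=8 --time=00:30:00 --pty bash
singularity exec --nv /ceph/hpc/software/containers/singularity/images/gromacs-2021.3-gpu.sif gmx --version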

Example SBATCH scripts

Module example:

#!/bin/bash

#SBATCH --job-name=my_job
#SBATCH --partition=cpu
#SBATCH --nodes=1
#SBATCH --ntasks=256
#SBATCH --ntasks-per-node=256
#SBATCH --cpus-per-task=1
#SBATCH --mem-per-cpu=1GB
#SBATCH --output=%j-sling.out
#SBATCH --error=%j-sling.err
#SBATCH --time=1:00:00

module purge
module load GROMACS/GROMACS-2021.3-gcc

export OMP_NUM_THREADS=1

export UCX_TLS="self,shm,rc,dc"
export OMPI_MCA_pml="ucx"
export OMPI_MCA_osc="ucx"

gmx_mpi grompp -f bench.mdp -c bench.gro -r bench.gro -p bench.top -n bench.ndx -o bench_$SLURM_NTASKS.tpr
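
The grompp step above only pre-processes the input files into a portable run file (bench_$SLURM_NTASKS.tpr); the simulation itself is started with mdrun. A minimal sketch of that step, appended to the same script (the -maxh value is only an example, and mpirun can be used instead of srun depending on the MPI setup):

srun gmx_mpi mdrun -s bench_$SLURM_NTASKS.tpr -ntomp $OMP_NUM_THREADS -maxh 0.95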

Singularity container example:

#!/bin/bash

#SBATCH --nodes=1                 # number of nodes
#SBATCH --gres=gpu:4              # request 4 GPUs per node
#SBATCH --ntasks-per-node=16      # 16 MPI tasks per node
#SBATCH --cpus-per-task=8         # 8 OpenMP threads per MPI task
#SBATCH --mem=0                   # request all available memory on the node
#SBATCH --time=1:00:00            # time limit (D-HH:MM:SS)

export OMP_NUM_THREADS=8

srun --hint=nomultithread singularity exec --nv /ceph/hpc/software/containers/singularity/images/gromacs-2021.3-gpu.sif gmx mdrun -nb gpu -pin on -v -noconfout ....
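
Save either script to a file and submit it with sbatch; the job can then be monitored with squeue (the script name is only an example):

sbatch gromacs-gpu.sh
squeue -u $USER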

Documentation