submit_gmx_mdrun

Start or continue a molecular dynamics (MD) simulation with Gromacs on a computing cluster that uses the Slurm Workload Manager.

This script is designed to be used on the Palma2 HPC cluster of the University of Münster or on the Bagheera HPC cluster of the research group of Professor Heuer.

Options

Required Arguments

--system

The name of the system to simulate, e.g. 'LiTFSI_PEO_20-1' for an LiTFSI/PEO electrolyte with an ether-oxygen-to-lithium ratio of 20:1. You can give any string here. See notes below.

--settings

The simulation settings to use, e.g. 'pr_npt298_pr_nh' for a production run in an NPT ensemble at 298 K utilizing a Parrinello-Rahman barostat and a Nosé-Hoover thermostat. You can give any string here. See notes below.

--structure

Name of the file that contains the starting structure in a format that is readable by Gromacs. The starting structure is ignored if you continue a previous simulation. Default: None.

Resubmission

--continue

{0, 1, 2, 3}

Continue a previous simulation?

  • 0 = No.

  • 1 = Yes.

  • 2 = Start a new simulation and resubmit it to the Slurm Workload Manager as many times as specified with --nresubmits or until it has reached the number of simulation steps given in the .mdp file (whichever happens first).

  • 3 = Continue a previous simulation and resubmit it to the Slurm Workload Manager as many times as specified with --nresubmits or until it has reached the number of simulation steps given in the .mdp file (whichever happens first).

Default: 0.

--nresubmits

Number of job resubmissions. The resubmitted jobs depend on each other and each starts only after the preceding job has finished successfully, as sketched below. This is useful if your simulation needs longer than the maximum job run time allowed on your computing cluster. This option is ignored if --continue is set to 0 or 1. Default: 10.
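
Slurm implements such chains with job dependencies. A minimal sketch of how a chain of dependent jobs can be built, assuming a hypothetical job script name and using the --nresubmits value (the script's actual internals may differ):

# Submit the first job and capture its job ID.
jobid=$(sbatch --parsable job_script.sh)
# Chain the resubmissions: each job starts only if its
# predecessor finished successfully (afterok).
for ((i = 0; i < nresubmits; i++)); do
    jobid=$(sbatch --parsable --dependency=afterok:${jobid} job_script.sh)
done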

--backup

Backup old simulation files into a subdirectory using rsync before continuing a previous simulation. This might take up to a few hours depending on the size of the files.
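
Such a backup could, for instance, be done along the following lines (the subdirectory name is illustrative; the script's actual rsync invocation may differ):

# Copy all current simulation files into a backup subdirectory,
# excluding earlier backups.
mkdir -p backup_1
rsync -a --exclude='backup_*' ./ backup_1/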

Gromacs-Specific Options

--gmx-lmod

If running on a cluster that uses the Lmod module system, specify here which file to source (relative to the lmod subdirectory of this project) to load Gromacs. Default: 'palma/2019a/gmx2018-8_foss.sh'.
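
Such a file typically contains nothing more than the module commands needed to make Gromacs available, e.g. (the module names are hypothetical):

# Load the toolchain and Gromacs modules via Lmod.
module purge
module load palma/2019a
module load GROMACS/2018.8-foss-2019a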

--gmx-exe

Name of the Gromacs executable. Default: 'gmx'.

--gmx-mpi-exe

Name of the MPI version of the Gromacs executable. If provided, the simulation will be run using this executable instead of 'gmx mdrun'. Must be provided if the (maximum) number of nodes set with --nodes is greater than one. If given, --ntasks-per-node must be provided to sbatch. Default: None.

--no-guess-threads

Don't let Gromacs guess the number of thread-MPI ranks and OpenMP threads, but set the number of thread-MPI ranks to ${SLURM_NTASKS_PER_NODE} and the number of OpenMP threads to ${CPUS_PER_TASK}, which is derived from ${SLURM_CPUS_PER_TASK} (see Notes below). Note that if --gmx-mpi-exe is provided, the number of MPI ranks is always set to ${SLURM_NTASKS_PER_NODE} and guessing only affects the number of OpenMP threads. If --no-guess-threads is given, --ntasks-per-node must be provided to sbatch.

--mdrun-flags

Additional options to pass to the Gromacs 'mdrun' engine, provided as one long, quoted string, e.g. '-npme 12'. Default: '-cpt 60'.

--grompp-flags

Additional options to pass to the Gromacs preprocessor 'grompp', provided as one long, quoted string, e.g. '-maxwarn 1'. Ignored if --continue is 1 or 3. Default: ''.

Sbatch Options

You can provide arbitrary other options to this script. All of these are passed directly to the Slurm sbatch command without further introspection or validation. This means you can pass any option to sbatch, but you are responsible for ensuring that the options are correct.
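
For example, a call that combines script options with ordinary sbatch options might look like this (all values are illustrative):

submit_gmx_mdrun \
    --system LiTFSI_PEO_20-1 \
    --settings pr_npt298_pr_nh \
    --structure LiTFSI_PEO_20-1.gro \
    --continue 2 \
    --partition normal \
    --time 2-00:00:00

Here, --partition and --time are not interpreted by the script itself but are handed over to sbatch unchanged.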

sbatch options with additional meaning in the context of this submit script:

--cpus-per-task

Number of CPUs per task (see the sbatch documentation for details). This specifies the number of OpenMP threads used to run Gromacs if --no-guess-threads is given.

--ntasks-per-node

Number of tasks per node (see the sbatch documentation for details). This specifies the number of thread-MPI ranks used to run Gromacs if --no-guess-threads is given. If --gmx-mpi-exe is given, this specifies the number of MPI ranks. Must be provided if --no-guess-threads and/or --gmx-mpi-exe is given.

--signal

You cannot pass --signal to sbatch, because this option is used internally to allow for cleanup steps after the simulation has finished.
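
Internally, a common pattern for this is to let sbatch send a signal shortly before the wall-time limit and to trap it in the job script. A minimal sketch (the signal name and lead time are illustrative; the script's actual choice may differ):

#SBATCH --signal=B:USR1@300

# Run cleanup steps when Slurm sends SIGUSR1 to the batch shell
# 300 s before the wall-time limit is reached.
cleanup() {
    echo "Wall-time limit approaching, running cleanup steps..."
}
trap cleanup USR1

For the trap to fire while the simulation is still running, the main command is typically started in the background and wait-ed on.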

Config File

This script reads options from the following sections of a Configuration File:

  • [submit]

  • [submit.simulation]

  • [submit.simulation.gmx]

  • [sbatch]

  • [sbatch.simulation]

  • [sbatch.simulation.gmx]
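
The configuration file format is not shown here; assuming an INI/TOML-style file, the relevant sections might look like this (the option keys are hypothetical and merely mirror the command-line options):

[submit.simulation.gmx]
gmx-lmod    = "palma/2019a/gmx2018-8_foss.sh"
mdrun-flags = "-cpt 60"

[sbatch.simulation.gmx]
cpus-per-task   = 12
ntasks-per-node = 2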

Notes

The --system and --settings options allow (or rather enforce) you to choose systematic names for the input and output files of your simulations. Besides giving you a better overview, this enables easy automation of preparation and analysis tasks.
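
For example, with --system LiTFSI_PEO_20-1 and --settings pr_npt298_pr_nh, the file names resolve to:

LiTFSI_PEO_20-1.top                    # topology
pr_npt298_pr_nh_LiTFSI_PEO_20-1.mdp    # simulation settings
pr_npt298_pr_nh_LiTFSI_PEO_20-1.tpr    # run input written by grompp
pr_npt298_pr_nh_out_LiTFSI_PEO_20-1.*  # output written by mdrun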

When starting a new simulation, the following commands will be launched:

${gmx_exe} grompp \
    -f ${settings}_${system}.mdp \
    -c ${structure} \
    -p ${system}.top \
    -o ${settings}_${system}.tpr \
    ${grompp_flags[@]} \
    -n ${system}.ndx  # Only if present

${gmx_exe} mdrun \
    -s ${settings}_${system}.tpr \
    -deffnm ${settings}_out_${system} \
    ${mdrun_flags[@]} \
    -ntmpi ${SLURM_NTASKS_PER_NODE} \  # Only if not guessed
    -ntomp ${CPUS_PER_TASK}  # Only if not guessed

Therefore, the following files must exist in your working directory:

  • ${settings}_${system}.mdp

  • ${structure}

  • ${system}.top

The bash variable ${CPUS_PER_TASK} is set to ${SLURM_CPUS_PER_TASK} or, if ${SLURM_CPUS_PER_TASK} is not set, to $((SLURM_CPUS_ON_NODE / SLURM_NTASKS_PER_NODE)).
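
In bash, this fallback can be written as a single parameter expansion with a default value:

# Use SLURM_CPUS_PER_TASK if set, otherwise divide the CPUs on the
# node evenly among the tasks.
CPUS_PER_TASK=${SLURM_CPUS_PER_TASK:-$((SLURM_CPUS_ON_NODE / SLURM_NTASKS_PER_NODE))}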

When continuing a previous simulation, the following command will be launched:

${gmx_exe} mdrun \
    -s ${settings}_${system}.tpr \
    -deffnm ${settings}_out_${system} \
    ${mdrun_flags[@]} \
    -ntmpi ${SLURM_NTASKS_PER_NODE} \  # Only if not guessed
    -ntomp ${CPUS_PER_TASK} \  # Only if not guessed
    -cpi ${settings}_out_${system}.cpt \
    -append

Therefore, the following files must exist in your working directory:

  • ${settings}_${system}.mdp

  • ${settings}_${system}.tpr

  • ${settings}_out_${system}.cpt

${settings}_${system}.mdp is also required when continuing a previous simulation, because the maximum number of simulation steps is read from this file.

If these files cannot be found, the submission script terminates with an error message before submitting the job to the Slurm Workload Manager.
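
A check equivalent to this behavior might look like the following bash sketch (the --system and --settings values are hypothetical):

system="LiTFSI_PEO_20-1"
settings="pr_npt298_pr_nh"
# Abort before job submission if any required input file is missing.
for file in \
    "${settings}_${system}.mdp" \
    "${settings}_${system}.tpr" \
    "${settings}_out_${system}.cpt"
do
    if [[ ! -f ${file} ]]; then
        echo "ERROR: Missing required file: ${file}" >&2
        exit 1
    fi
done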