LAMMPS on Jean Zay

Introduction

LAMMPS is a classical molecular dynamics code specialised in materials modeling.

Useful sites

  • Web site: https://www.lammps.org
  • Documentation: https://docs.lammps.org

Available versions

Version      Modules to load                                                Comments
2020.07.21   lammps/20200721-mpi intel-mkl/2020.1                           Version for CPU production
2020.06.30   lammps/20200630-mpi-cuda-kokkos intel-mkl/2020.1 cuda/10.2     Version for GPU production
2019.06.05   lammps/20190605-mpi intel-mkl/2019.4                           Version for CPU production
2019.06.05   lammps/20190605-mpi-cuda-kokkos intel-mkl/2019.4 cuda/10.1.1   Version for GPU production
2018.08.31   lammps/20180831-mpi intel-mkl/2019.4                           Version for CPU production
2017.09.22   lammps/20170922-mpi intel-mkl/2019.4                           Version for CPU production
2017.08.11   lammps/20170811-mpi                                            Version for CPU production
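
For example, to list the LAMMPS builds installed on the machine and load one of the CPU versions from the table (an illustrative interactive session; the module names are those given above):

module avail lammps                                # list the available LAMMPS modules
module purge                                       # start from a clean environment
module load lammps/20200721-mpi intel-mkl/2020.1   # load a CPU build and its MKL dependency
lmp -h                                             # print the LAMMPS help to check the executable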

Submission script on the CPU partition

lammps.slurm
#!/bin/bash
#SBATCH --nodes=1               # Number of Nodes
#SBATCH --ntasks-per-node=40    # Number of MPI tasks per node
#SBATCH --cpus-per-task=1       # Number of OpenMP threads
#SBATCH --hint=nomultithread    # Disable hyperthreading
#SBATCH --job-name=rhodo        # Jobname
#SBATCH --output=%x.o%j         # Output file %x is the jobname, %j the jobid
#SBATCH --error=%x.o%j          # Error file
#SBATCH --time=10:00:00         # Expected runtime HH:MM:SS (max 100h)
##
## Please refer to the comments below for more
## information about the last 4 options.
##SBATCH --account=<account>@cpu       # To specify cpu accounting: <account> = echo $IDRPROJ
##SBATCH --partition=<partition>       # To specify partition (see IDRIS web site for more info)
##SBATCH --qos=qos_cpu-dev      # Uncomment for job requiring less than 2 hours
##SBATCH --qos=qos_cpu-t4      # Uncomment for job requiring more than 20h (only one node)
 
# Clean out the modules loaded in interactive mode and inherited by default
module purge
 
# Load the module
module load lammps/20200721-mpi intel-mkl/2020.1
 
# Execute commands
srun lmp -i rhodo.in
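
Assuming the script above is saved as lammps.slurm in the same directory as the rhodo.in input file, submission and monitoring look like this (illustrative commands; the job name and output file follow the directives in the script):

sbatch lammps.slurm          # submit the job
squeue -u $USER              # follow it in the queue
less rhodo.o<jobid>          # output and errors go to %x.o%j, i.e. rhodo.o<jobid>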

Submission script on GPU partition

lammps_gpu.slurm
#!/bin/bash
#SBATCH --nodes=1               # Number of Nodes
#SBATCH --ntasks-per-node=4     # Number of MPI tasks per node (one task per GPU)
#SBATCH --cpus-per-task=10      # Number of OpenMP threads (the node's 40 cores split over 4 tasks)
#SBATCH --gres=gpu:4            # Allocate 4 GPUs/node
#SBATCH --hint=nomultithread    # Disable hyperthreading
#SBATCH --job-name=rhodo        # Jobname
#SBATCH --output=%x.o%j         # Output file %x is the jobname, %j the jobid
#SBATCH --error=%x.o%j          # Error file
#SBATCH --time=10:00:00         # Expected runtime HH:MM:SS (max 100h)
##
## Please refer to the comments below for more
## information about the last 4 options.
##SBATCH --account=<account>@v100      # To specify GPU accounting: <account> = echo $IDRPROJ
##SBATCH --partition=<partition>       # To specify partition (see IDRIS web site for more info)
##SBATCH --qos=qos_gpu-dev      # Uncomment for job requiring less than 2 hours
##SBATCH --qos=qos_gpu-t4      # Uncomment for job requiring more than 20h (only one node)
 
# Clean out the modules loaded in interactive mode and inherited by default
module purge
 
# Load the module
module load lammps/20200630-mpi-cuda-kokkos intel-mkl/2020.1 cuda/10.2
 
# Execute command, enabling the Kokkos package on the 4 GPUs
srun lmp -k on g 4 -sf kk -i rhodo.in
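
The Kokkos switches on the srun line are standard LAMMPS command-line options: -k on g 4 activates the KOKKOS package on 4 GPUs per node, and -sf kk applies the kk suffix so that the GPU-accelerated styles are used. For a quick single-GPU test, a reduced variant of the relevant lines could be (a sketch only, with the same input file assumed):

#SBATCH --ntasks-per-node=1     # One MPI task for the single GPU
#SBATCH --cpus-per-task=10      # Number of OpenMP threads
#SBATCH --gres=gpu:1            # Allocate 1 GPU

srun lmp -k on g 1 -sf kk -i rhodo.in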

Comments:

  • All jobs have resources defined in Slurm by default, per partition and per QoS (Quality of Service). You can modify the limits by specifying another partition and/or QoS, as shown in our documentation detailing the partitions and QoS.
  • For multi-project users, and for those with both CPU and GPU hours, it is necessary to specify on which project allocation (hours attributed to the project) the job's computing hours should be counted, as indicated in our documentation detailing the project hours management. An example is given below.
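
For example, to find your default project and charge a job to a specific allocation (the project name abc123 is purely illustrative):

echo $IDRPROJ                          # default project associated with your login
# then, in the submission script:
#SBATCH --account=abc123@cpu           # charge the job to the CPU hours of project abc123
#SBATCH --account=abc123@v100          # or to its GPU (V100) hours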