Siesta on Jean Zay

Introduction

Siesta is a software package for ab initio electronic structure calculations and molecular dynamics simulations.

Useful Sites

  • Siesta website: https://siesta-project.org

Available versions

Version    Modules to load          Comments
4.0.2      siesta/4.0.2-mpi-cuda    CPU version
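
Before loading a module, you can check which Siesta builds are installed; the commands below are standard Environment Modules commands available on Jean Zay:

# List the Siesta modules installed on the machine
module avail siesta

# Inspect what loading this module changes in the environment
module show siesta/4.0.2-mpi-cuda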

Comments

Warning: some memory leak problems during molecular dynamics are currently under investigation.

Submission script on the CPU partition

siesta.slurm
#!/bin/bash
#SBATCH --nodes=1                 # Allocate 1 node
#SBATCH --ntasks-per-node=40      # 40 MPI tasks per node
#SBATCH --cpus-per-task=1         # 1 OpenMP thread per MPI task
#SBATCH --hint=nomultithread      # Disable hyperthreading
#SBATCH --job-name=t-3000K        # Job name
#SBATCH --output=%x.o%j           # Output file (%x = job name, %j = job ID)
#SBATCH --error=%x.o%j            # Error file
#SBATCH --time=10:00:00           # Expected runtime HH:MM:SS (max 100h)
##
## Please refer to the comments below for more
## information about the last 4 options.
##SBATCH --account=<account>@cpu       # To specify CPU accounting: <account> = echo $IDRPROJ
##SBATCH --partition=<partition>       # To specify a partition (see the IDRIS website for more info)
##SBATCH --qos=qos_cpu-dev             # Uncomment for jobs requiring less than 2 hours
##SBATCH --qos=qos_cpu-t4              # Uncomment for jobs requiring more than 20 hours (only one node)
 
# Cleans out the modules loaded in interactive and inherited by default
module purge
 
# Load the module
module load siesta/4.0.2-mpi-cuda
 
srun siesta < b2o3.fdf
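
Siesta reads its calculation parameters from an FDF file passed on standard input (b2o3.fdf in the script above). As a purely illustrative sketch, here is a minimal FDF input for a hypothetical water molecule; the keywords are standard Siesta FDF options, but the file name, system, and all values are examples only, not the contents of b2o3.fdf:

input.fdf (hypothetical example)
SystemName          Water molecule     # Descriptive title of the run
SystemLabel         h2o                # Prefix used for output files
NumberOfAtoms       3
NumberOfSpecies     2

%block ChemicalSpeciesLabel
  1  8  O                              # species index, atomic number, label
  2  1  H
%endblock ChemicalSpeciesLabel

AtomicCoordinatesFormat  Ang
%block AtomicCoordinatesAndAtomicSpecies
  0.000   0.000   0.000   1            # O
  0.757   0.586   0.000   2            # H
 -0.757   0.586   0.000   2            # H
%endblock AtomicCoordinatesAndAtomicSpecies

MeshCutoff          200 Ry             # Real-space grid cutoff
XC.functional       GGA                # Exchange-correlation family
XC.authors          PBE                # Exchange-correlation flavour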

Comments:

  • All jobs have resources defined in Slurm per partition and per QoS (Quality of Service) by default. You can modify these limits by specifying another partition and/or QoS as shown in our documentation detailing the partitions and QoS.
  • For multi-project users and those having both CPU and GPU hours, it is necessary to specify the project accounting (hours allocation of the project) on which to charge the job's computing hours, as indicated in our documentation detailing project hours management.
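
To submit the script and follow the job, the standard Slurm commands apply; the project to use with --account can be checked through the $IDRPROJ variable mentioned above:

# Check the default project used for accounting
echo $IDRPROJ

# Submit the job and monitor it in the queue
sbatch siesta.slurm
squeue -u $USER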