QMCPACK on Jean Zay

Introduction

QMCPACK is a program for modeling the electronic structure of molecules and solids using a Monte Carlo method.

Available versions

Version                       Modules to load                                                   Comments
3.7.0                         qmcpack/3.7.0-mpi gcc/8.3.0                                       CPU production version
3.7.0 CUDA                    qmcpack/3.7.0-mpi-cuda gcc/8.3.0 cuda/10.1.1                      GPU production version
3.7.0 CUDA (CUDA-Aware MPI)   qmcpack/3.7.0-mpi-cuda gcc/8.3.0 cuda/10.1.1 openmpi/3.1.4-cuda   GPU production version with CUDA-Aware MPI
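
For example, to set up the environment for the CPU production version in an interactive session (the same module commands appear in the submission scripts below):

module purge                    # start from a clean environment
module load gcc/8.3.0           # compiler used to build QMCPACK
module load qmcpack/3.7.0-mpi   # CPU production version
module list                     # check what is loaded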

Submission script for the CPU partition

qmcpack_mpi.slurm
#!/bin/bash
#SBATCH --nodes=1                   # Number of nodes
#SBATCH --ntasks-per-node=4         # Number of tasks per node
#SBATCH --cpus-per-task=10          # Number of cores for each task (important to get all memory)
#SBATCH --hint=nomultithread        # Disable hyperthreading
#SBATCH --job-name=qmcpack_mpi      # Jobname 
#SBATCH --output=%x.o%j             # Output file %x is the jobname, %j the jobid
#SBATCH --error=%x.o%j              # Error file
#SBATCH --time=10:00:00             # Expected runtime HH:MM:SS (max 100h)
##
## Please refer to the comments below for
## more information about the last 4 options.
##SBATCH --account=<account>@cpu           # To specify cpu accounting: <account> = echo $IDRPROJ
##SBATCH --partition=<partition>             # To specify partition (see IDRIS web site for more info)
##SBATCH --qos=qos_cpu-dev          # Uncomment for job requiring less than 2 hours
##SBATCH --qos=qos_cpu-t4           # Uncomment for job requiring more than 20 hours (only one node)
 
# Cleans out the modules loaded in interactive and inherited by default
module purge
 
# Load needed modules
module load gcc/8.3.0
module load qmcpack/3.7.0-mpi
 
# echo commands
set -x
 
cd ${SLURM_SUBMIT_DIR}/dmc
 
# Execute code
srun qmcpack C2CP250_dmc_x2.in.xml
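
Once the C2CP250_dmc_x2.in.xml input file is present in the dmc subdirectory, the job is submitted and monitored with the standard Slurm commands, for example:

sbatch qmcpack_mpi.slurm   # submit the job
squeue -u $USER            # follow it in the queue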

Submission script for the GPU partition

qmcpack_multi_gpus.slurm
#!/bin/bash
#SBATCH --nodes=1                    # Number of nodes
#SBATCH --ntasks-per-node=4          # Number of tasks per node
#SBATCH --gres=gpu:4                 # Allocate GPUs
#SBATCH --cpus-per-task=10           # Number of cores for each task (important to get all memory)
#SBATCH --hint=nomultithread         # Disable hyperthreading
#SBATCH --job-name=qmcpack_multi_gpu # Jobname 
#SBATCH --output=%x.o%j              # Output file %x is the jobname, %j the jobid
#SBATCH --error=%x.o%j               # Error file
#SBATCH --time=10:00:00              # Expected runtime HH:MM:SS (max 100h)
##
## Please refer to the comments below for
## more information about the last 4 options.
##SBATCH --account=<account>@gpu    # To specify gpu accounting: <account> = echo $IDRPROJ
##SBATCH --partition=<partition>    # To specify partition (see IDRIS web site for more info)
##SBATCH --qos=qos_gpu-dev          # Uncomment for job requiring less than 2 hours
##SBATCH --qos=qos_gpu-t4           # Uncomment for job requiring more than 20 hours (only one node)
 
 
# Cleans out the modules loaded in interactive and inherited by default
module purge
 
# Load needed modules
module load gcc/8.3.0
module load cuda/10.1.1
module load qmcpack/3.7.0-mpi-cuda
 
# echo commands
set -x
 
cd ${SLURM_SUBMIT_DIR}/dmc
 
# Execute code with the right GPU binding: 1 GPU per task
srun /gpfslocalsup/pub/idrtools/bind_gpu.sh qmcpack C2CP250_dmc_x2.in.xml
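
The bind_gpu.sh wrapper ensures that each MPI task is bound to a single GPU. As a rough sketch of what such a wrapper typically does (a hypothetical illustration, not the content of the actual IDRIS script), it restricts each task to the GPU matching its node-local rank before launching the binary:

#!/bin/bash
# Hypothetical binding wrapper (NOT the actual
# /gpfslocalsup/pub/idrtools/bind_gpu.sh): each Slurm task only
# sees the GPU whose index equals its node-local rank, so that
# "srun <wrapper> <binary>" yields 1 GPU per task.
export CUDA_VISIBLE_DEVICES=${SLURM_LOCALID}
exec "$@"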

Submission script for the GPU partition with CUDA-Aware MPI

qmcpack_cuda_aware.slurm
#!/bin/bash
#SBATCH --nodes=1                     # Number of nodes
#SBATCH --ntasks-per-node=4           # Number of tasks per node
#SBATCH --gres=gpu:4                  # Allocate GPUs
#SBATCH --cpus-per-task=10            # Number of cores for each task (important to get all memory)
#SBATCH --hint=nomultithread          # Disable hyperthreading
#SBATCH --job-name=qmcpack_cuda_aware # Jobname 
#SBATCH --output=%x.o%j              # Output file %x is the jobname, %j the jobid
#SBATCH --error=%x.o%j               # Error file
#SBATCH --time=10:00:00              # Expected runtime HH:MM:SS (max 100h)
##
## Please refer to the comments below for
## more information about the last 4 options.
##SBATCH --account=<account>@gpu    # To specify gpu accounting: <account> = echo $IDRPROJ
##SBATCH --partition=<partition>    # To specify partition (see IDRIS web site for more info)
##SBATCH --qos=qos_gpu-dev          # Uncomment for job requiring less than 2 hours
##SBATCH --qos=qos_gpu-t4           # Uncomment for job requiring more than 20 hours (only one node)
 
# Cleans out the modules loaded in interactive and inherited by default
module purge
 
# Load needed modules
module load gcc/8.3.0
module load cuda/10.1.1
module load openmpi/3.1.4-cuda
module load qmcpack/3.7.0-mpi-cuda
 
# echo commands
set -x
 
cd ${SLURM_SUBMIT_DIR}/dmc
 
# Execute code with the right GPU binding: 1 GPU per task
srun /gpfslocalsup/pub/idrtools/bind_gpu.sh qmcpack C2CP250_dmc_x2.in.xml
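
To check that the loaded Open MPI build is indeed CUDA-aware, ompi_info can be queried (a standard Open MPI check; the exact output formatting may vary between versions):

module load openmpi/3.1.4-cuda
ompi_info --parsable --all | grep mpi_built_with_cuda_support:value
# a CUDA-aware build reports: ...:mpi_built_with_cuda_support:value:true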

Comments

  • Jobs all have resources defined in Slurm by a default partition and "Quality of Service" (QoS). You can modify these limits by specifying another partition and/or QoS, as indicated in our documentation detailing the partitions and QoS.
  • For multi-project accounts, and accounts with both CPU and GPU hours, it is essential to specify the hour allocation on which to charge the computing hours of the job, as indicated in our documentation detailing the management of computing hours (see the example below).
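
For instance, a project with GPU hours can specify the accounting and a QoS explicitly, either through the commented #SBATCH lines shown in the scripts above or directly on the command line (<account> stands for your project identifier, as returned by echo $IDRPROJ):

sbatch --account=<account>@gpu --qos=qos_gpu-dev qmcpack_multi_gpus.slurm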