1. Introduction to IDRIS

The Missions and Objectives of IDRIS

The Institute for Development and Resources in Intensive Scientific Computing (IDRIS), founded in 1993, is a centre of excellence in intensive numerical computation which serves the research communities that rely on extreme computing. This concerns the application aspects (large-scale simulations) as much as the research inherent to high-performance computing itself (computing infrastructures, resolution methods and associated algorithms, processing of large data volumes, etc.).

IDRIS is the major centre of very high performance intensive numerical computation for the French National Centre for Scientific Research (CNRS). Together with the two other national centres, CINES (the computing centre of the French Ministry of Higher Education and Research) and the Very Large Computing Centre (TGCC) of the French Alternative Energies and Atomic Energy Commission (CEA), and under the coordination of GENCI (Grand Équipement National de Calcul Intensif), IDRIS participates in providing national computing resources for government-funded research which requires extreme computing means.

Scientific management of Resources

Coordinated by GENCI (Grand Équipement National de Calcul Intensif), a call for proposals, common to the three national computing centres (CINES, IDRIS and TGCC), is organised twice a year for the purpose of allocating computing hours. These hours are valid for one year. During each call, requests may be submitted for computing hours for a new project, to renew an existing project, or to request supplementary hours for an allocation received during the preceding call.

Requests for resources are made with the DARI form (Demande d'Attribution de Ressources Informatiques) through a common web site for the three computing centres (see Requesting resource hours on IDRIS machine).

The proposals are examined from a scientific perspective by the Thematic Committees, who draw on the technical expertise of the centres' application assistance teams as needed. Subsequently, an evaluation committee meets to review the resource requests and makes approval recommendations to the Attribution Committee which, under the authority of GENCI, distributes computing hours among the three national centres.

Between the two project calls, IDRIS management studies the requests for supplementary resources as needed (“demandes au fil de l'eau”) and allocates a limited number of hours in order to avoid blocking ongoing projects.

A procedure called Preparatory Access is available to new projects for the purpose of evaluating and improving the performance of the concerned applications on the IDRIS supercomputer. Through this Preparatory Access, projects may benefit from:

  • 50 000 CPU hours on the CPU partition of Jean Zay
  • 1000 GPU hours on the GPU partition of Jean Zay

The IDRIS User Committee

The role of the User Committee is to be a liaison for dialogue with IDRIS so that all the projects which received allocations of computer resources can be successfully conducted in the best possible conditions. The committee transmits the observations of all the users regarding the functioning of the centre, and the issues are discussed with IDRIS in order to determine the appropriate changes to be made.

The User Committee consists of two members elected from each scientific discipline (24 members in 2014), who can be contacted at the following address:

The User Committee pages are available to IDRIS users by connecting to the IDRIS Extranet, section: Comité des utilisateurs.

In this space are found the reports on IDRIS machine exploitation as well as the latest meeting minutes.

IDRIS personnel

IDRIS organisation chart (organigramme)

Return to Table of Contents

2. The IDRIS machine

Jean Zay: HPE SGI 8600 supercomputer

Jean Zay is an HPE SGI 8600 computer composed of two partitions: a partition containing scalar nodes, and a partition containing accelerated nodes, which are hybrid nodes equipped with both CPUs and GPUs. All the compute nodes are interconnected by an Intel Omni-Path network (OPA) and access a parallel file system with very high bandwidth.

Scalar partition (or CPU partition)

  • 1528 scalar compute nodes with:
    • 2 Intel Cascade Lake 6248 processors (20 cores at 2.5 GHz), namely 40 cores per node
    • 192 GB of memory per node

Accelerated partition (or GPU partition)

  • 261 four-GPU accelerated compute nodes with:
    • 2 Intel Cascade Lake 6248 processors (20 cores at 2.5 GHz), namely 40 cores per node
    • 192 GB of memory per node
    • 4 Nvidia Tesla V100 SXM2 GPUs (32 GB)
  • 31 eight-GPU accelerated compute nodes, currently dedicated to the AI community with:
    • 2 Intel Cascade Lake 6226 processors (12 cores at 2.7 GHz), namely 24 cores per node
    • 20 nodes with 384 GB of memory and 11 nodes with 768 GB of memory
    • 8 Nvidia Tesla V100 SXM2 GPUs (32 GB)
  • Extension in the summer of 2020, 351 four-GPU accelerated compute nodes with:
    • 2 Intel Cascade Lake 6248 processors (20 cores at 2.5 GHz), namely 40 cores per node
    • 192 GB of memory per node
    • 4 Nvidia Tesla V100 SXM2 GPUs (16 GB)
  • Extension in the summer of 2021, 3 eight-GPU accelerated compute nodes with:
    • 2 Intel Cascade Lake 6240R processors (24 cores at 2.4 GHz), namely 48 cores per node
    • 768 GB of memory per node
    • 8 Nvidia A100 PCIE GPUs (40 GB)


  • 4 pre- and post-processing large memory nodes with:
    • 4 Intel Skylake 6132 processors (12 cores at 3.2 GHz), namely 48 cores per node
    • 3 TB of memory per node
    • 1 Nvidia Tesla V100 GPU
    • A 1.5 TB internal NVMe disk


  • 5 scalar-type visualization nodes with:
    • 2 Intel Cascade Lake 6248 processors (20 cores at 2.5 GHz), namely 40 cores per node
    • 192 GB of memory per node
    • 1 Nvidia Quadro P6000 GPU

Additional characteristics

  • Cumulative peak performance of 28 Pflop/s since the summer 2020 extension, with a total of 2696 Nvidia V100 GPUs
  • Omni-Path interconnection network at 100 Gb/s: 1 link per scalar node and 4 links per converged node
  • IBM's Spectrum Scale parallel file system (formerly GPFS)
  • Parallel storage system with a capacity of 2.2 PB of SSD disks (GridScaler GS18K SSD) since the summer 2020 extension
  • Parallel storage system with a total capacity greater than 30 PB
  • 5 front-end nodes with:
    • 2 Intel Cascade Lake 6248 processors (20 cores at 2.5 GHz), namely 40 cores per node
    • 192 GB of memory per node

Return to Table of Contents

3. Requesting allocations of hours on the IDRIS machine

Requesting resource hours at IDRIS

Requesting resource hours on Jean Zay is done via the DARI site, common to the three national computing centres: CINES, IDRIS and TGCC.

The procedure to follow differs depending on your usage objective:

  • To develop algorithms in Artificial Intelligence (AI) on the Jean Zay GPU partition, or
  • To perform high performance computations (HPC) on the CPU and GPU partitions of Jean Zay

Before requesting any hours, we recommend that you consult the GENCI communication (in French) detailing the conditions and eligibility criteria for obtaining computing hours.

What is your usage objective?

To develop algorithms in Artificial Intelligence (AI) on the Jean Zay GPU partition

  • This usage is intended for researchers wishing to develop algorithms, methodologies or tools (machine learning and deep learning) in Artificial Intelligence.
  • To request computing hours for this type of activity, consult the section “Développer des algorithmes en Intelligence Artificielle (IA)” on the DARI Web site. The link “Mon espace” allows you to create or access your private space in order to:
    • Complete a dynamic access file which will be examined by the IDRIS director, or by experts of the related scientific field if the request is for a large number of hours.
    • Renew a dynamic access file when it expires after one year if you wish.
    • Complete the on-line computing account form if you do not have an account on Jean Zay.
    • Request the linking of your account to a new dynamic access file.
  • You can find IDRIS documentation to help you complete each of these formalities via the DARI portal.
  • The hours allocation is valid for a period of one year after the opening of a computing account on the Jean Zay GPU partition. Requests for hours may be made throughout the year.

High Performance Computing (HPC) on the Jean Zay CPU and GPU partitions:

  • This usage is intended for researchers working on modelling and numerical simulation to solve complex computational problems.
  • Two project calls for regular access are launched each year:
    • In January-February for an hours allocation from 1 May of the same year until 30 April of the following year,
    • In June-July for an hours allocation from 1 November of the current year until 31 October of the following year.
  • To request computing hours during these project calls, consult the DARI site, section “Calcul haute performance (HPC) et utilisation IA”. Follow the link, “Création d’un compte ou connexion sur eDARI” to create/access your private space in order to:
    • Provide information for a regular access file which will be examined by experts, or
    • Renew a regular access file which will be examined by experts, or
    • Migrate from preparatory access (see section below) to regular access, which will be examined by experts.
  • To request the opening of an account, consult the IDRIS document on Account management.

Other resource requests:

Throughout the entire year, you may request the following resources on the DARI site:

  • Preparatory access (“accès préparatoire”): This type of request was introduced to assist in the porting, optimisation and parallelization of HPC compute codes on CPU and/or GPU for the preparation of a regular access file. The requests will be examined by IDRIS which, if needed, will seek advice from the president of the Thematic Committee related to the project. The quota of hours which could be allocated for a period of 6 months is:
    • 50 000 core hours on the Jean Zay CPU partition
    • 1000 GPU hours on the Jean Zay GPU partition
  • Supplementary resources as needed (“demandes au fil de l'eau”) for all existing projects (dynamic access, HPC regular access, and preparatory access) which have used up their hours quotas during the year.

Return to Table of Contents

4. How to obtain an account at IDRIS

Account management: Account opening and closure

User accounts

Each user has a unique account which can be associated with all the projects in which the user participates.

For more information, you may consult our web page regarding multi-project management.

Managing your account is done through completing an IDRIS form which must be sent to .

Of particular note, the FGC form is used to make modifications for an existing account: Add a project, add/delete machine access, change a postal address, telephone number, employer, etc.

An account can only exist as “open” or “closed”:

  • Open. In this case, it is possible to:
    • Submit jobs on the compute machine if the project's current hours allocation has not been exhausted (cf. idracct command output).
    • Submit pre- and post-processing jobs.
  • Closed. In this case, the user can no longer connect to the machine. An e-mail notification is sent to the project manager and the user at the time of account closure.

Opening a user account

For a new project

There is no automatic or implicit account opening. Each user of a project must request one of the following:

  • If the user does not yet have an IDRIS account, the opening of an account respects the access procedures (modalités d'accès) defined by GENCI :
    • For regular access or preparatory access, use the GENCI account creation request form (DCC) after the concerned project has obtained computing hours.
    • For dynamic access, you may create/access your private space throughout the year via the DARI homepage, section “Développer des algorithmes en Intelligence Artificielle (IA)”, and the link “Création d’un compte ou connexion sur eDARI” (in French).
  • If the user already has an account open at IDRIS, he/she must complete the first section of the IDRIS FGC form, “Multiproject logins: Add a project”. This request must be signed by both the user and the manager of the added project.

IMPORTANT INFORMATION: By decision of the IDRIS director or the CNRS Defence and Security Officer (FSD), the creation of a new account may be subject to ministerial authorisation in application of the French regulations for the Protection du Potentiel Scientifique et Technique de la Nation (PPST). In this event, a personal communication will be transmitted to the user so that the required procedure may be started, knowing that this authorisation procedure may take up to two months.

Comment: The opening of a new account on the machine will not be effective until (1) the access request (regular, dynamic, or preparatory) is validated (with ministerial authorisation if requested) and (2) the concerned project has obtained computing hours.

For a project renewal

Existing accounts are automatically carried over from one project call to another if the eligibility conditions of the project members have not changed (cf. GENCI explanatory document for project calls Modalités d'accès). If your account is open and already associated with the project which has obtained hours for the following call, no action on your part is necessary.

Closure of a user account

Account closure of an unrenewed project

When a GENCI project is not renewed, the following procedure is applied:

  • On the date of project expiry:
    • DARI hours are no longer available and the project accounts can no longer submit jobs on the compute machine for this project.
    • The project accounts remain open so that data can be accessed for a period of six months.
  • Six months after the date of project expiry:
    • All the project data (SCRATCH, STORE, WORK, ALL_CCFRSCRATCH, ALL_CCFRSTORE and ALL_CCFRWORK) will be deleted at the initiative of IDRIS within an undefined time period.
    • All the project accounts which are still linked to another project will remain open but the default project may need to be changed via the idrproj command.
    • All the project accounts which are no longer linked to any project can be closed at any time.

File recovery is the responsibility of each user during the six months following the end of an unrenewed project, by transferring files to a user's local laboratory machine or to the Jean Zay disk spaces of another DARI project for multi-project accounts.

This six-month grace period avoids the premature closing of project accounts: for example, a project with allocation Ai which was not renewed for the following year (no computing hours requested for allocation Ai+2) but which was renewed for allocation Ai+3 (beginning six months after allocation Ai+2).

Account closure after expiry of ministerial authorisation for accessing IDRIS computer resources

Ministerial authorisation is only valid for a certain period of time. When the ministerial authorisation reaches its expiry date, we are obligated to close your account.

In this situation, the procedure is as follows:

  • A first notification is sent to the user 90 days before the expiry date.
  • A second notification is sent to the user 70 days before the expiry date.
  • The account is closed on the expiry date.

To avoid this account closure, the user is advised to submit a new GENCI Account creation request form (DCC) to the address as soon as the first notification is received, so that IDRIS may begin processing a request for access prolongation. This process can take up to two months.

Account closure for security reasons

An account may be closed at any moment and without notice by decision of the IDRIS management.

Declaring the machines from which a user connects to IDRIS

Each machine from which a user wishes to access an IDRIS computer must be registered at IDRIS.

The user must provide, for each of his/her accounts, a list of machines which will be used to connect to the IDRIS computers (the machine's name and IP address). This is done at the creation of each account via the GENCI account creation request form available at this portal.

The user must update the list of machines associated with a login account (adding/deleting) by using the FGC form (account administration form). After completing this form, it must be signed by both the user and the security manager of the laboratory.

Important note: Personal IP addresses are not authorised for connection to IDRIS machines.

Security manager of the laboratory

The laboratory security manager is the network/security intermediary for IDRIS. This person must guarantee that the machine from which the user connects to IDRIS conforms to the most recent rules and practices concerning information security and must be able to immediately close the user access to IDRIS in case of a security alert.

The security manager's name and contact information are transmitted to IDRIS by the laboratory director on the FGC (account administration) form. This form is also used for informing IDRIS of any change in the security manager.

How to access IDRIS while teleworking or on mission

For security reasons, we cannot authorise access to IDRIS machines from non-institutional IP addresses. For example, you cannot have direct access from your personal connection.

Using a VPN

The recommended solution for accessing IDRIS resources when you are away from your registered address (teleworking, on mission, etc.) is to use the VPN (Virtual Private Network) of your laboratory/institute/university. A VPN allows you to access distant resources as if you were directly connected to the local network of your laboratory. Nevertheless, you still need to register the VPN-attributed IP address of your machine with IDRIS by following the procedure described above. This solution has the advantage of allowing the usage of IDRIS services which are accessible via a web browser (for example, the Extranet or products such as Jupyter Notebook, JupyterLab and TensorBoard).

Using a proxy machine

If using a VPN is impossible, it is still possible to connect via SSH to a proxy machine of your laboratory from which Jean Zay is accessible (which implies having registered the IP address of this proxy machine with IDRIS).

you@portable_computer:~$ ssh proxy_login@proxy_machine
proxy_login@proxy_machine:~$ ssh idris_login@idris_machine

Note that it is possible to automate the proxy via the SSH options ProxyJump or ProxyCommand to be able to connect by using only one command (for example, ssh -J proxy_login@proxy_machine idris_login@idris_machine).
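The jump can also be made permanent in your SSH client configuration. Below is a sketch of an ~/.ssh/config fragment; the host aliases lab-proxy and jean-zay, like the machine and login names, are illustrative placeholders to replace with your own:

```
# ~/.ssh/config -- illustrative fragment; replace names with your own
Host lab-proxy
    HostName proxy_machine
    User proxy_login

Host jean-zay
    HostName idris_machine
    User idris_login
    ProxyJump lab-proxy
```

With this in place, a single `ssh jean-zay` performs both hops.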

Obtaining temporary access to IDRIS machines from a foreign country

The user on mission must request machine authorisation by completing the corresponding box on page 3 of the FGC form. Temporary SSH access to all the IDRIS machines is then granted.

Return to Table of Contents

5. How to connect to an IDRIS machine

How do I access Jean Zay?

You can only connect to Jean Zay from a machine whose IP address is registered in our filters. If this is not the case, consult the procedure for declaring machines which is available on our Web site. Interactive access to Jean Zay is only possible on the front-end nodes of the machine via the SSH protocol.
For more detailed information, you may consult the description of the hardware and software of the cluster. Each IDRIS user holds a unique login for all the projects in which he/she participates. This login is associated with a password which is subject to certain security rules. Before connecting, we advise you to consult the page on password management and problems.

Jean Zay: Access and shells

Access to the machines

Jean Zay:

Connection to the Jean Zay front end is done via ssh from a machine registered at IDRIS:

$ ssh

Then, enter your password if you have not configured an SSH key.

Jean Zay pre- and post-processing:

Interactive connection to the pre-/post-processing front end is done by ssh from a machine registered at IDRIS:

$ ssh

Then, enter your password if you have not configured an SSH key.

SSH key authentication

SSH key authentication is allowed on Jean Zay. You have to generate a pair of keys following the IDRIS recommendations, then type:

$ ssh-copy-id

This copies your public key into the file $HOME/.ssh/authorized_keys on Jean Zay.
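As a sketch of the two steps (the key file name is an illustrative choice, and the login/machine names are placeholders, as elsewhere in this document):

```shell
# Generate an Ed25519 key pair; in practice, protect it with a strong
# passphrase (-N "" creates an unprotected key, used here only so the
# example runs non-interactively)
mkdir -p "$HOME/.ssh"
chmod 700 "$HOME/.ssh"
ssh-keygen -t ed25519 -f "$HOME/.ssh/id_ed25519_demo" -N "" -q

# Then copy the public key to Jean Zay (run interactively; placeholders):
#   ssh-copy-id -i "$HOME/.ssh/id_ed25519_demo.pub" idris_login@idris_machine
ls "$HOME/.ssh/id_ed25519_demo.pub"
```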

Managing the environment

Your $HOME space is common to all the Jean Zay front ends. Consequently, every modification of your personal environment files is automatically applied on all the machines.

What shells are available on the IDRIS machines?

The Bourne Again shell (bash) is the only login shell available on the IDRIS machines: IDRIS does not guarantee that the default user environment will be correctly defined with other shells. Bash is a major evolution of the Bourne shell (sh) with advanced functionalities. However, other shells (ksh, tcsh, csh) are also installed on the machines to allow the execution of scripts which use them.

Which environment files are invoked during the launching of a login session in bash?

The .bash_profile file, if it exists in your HOME, is executed only once per session, at login. Otherwise, the .profile file is executed, if it exists. Environment variables and programs to be launched at connection are placed in one of these files. Aliases, personal functions and the loading of modules are to be placed in the .bashrc file which, in contrast, is run at the launch of each sub-shell.

It is preferable to use only one environment file: .bash_profile or .profile.
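As an illustration of this setup (the PATH line is just an example of an environment variable setting, not an IDRIS requirement), a minimal .bash_profile that delegates to .bashrc could look like:

```shell
# ~/.bash_profile -- executed once at login; delegate to ~/.bashrc so that
# aliases, functions and module loads are also available in login shells
if [ -f "$HOME/.bashrc" ]; then
    . "$HOME/.bashrc"
fi

# Environment variables belong here (example only):
export PATH="$HOME/bin:$PATH"
```

With this arrangement, interactive settings live in .bashrc and are picked up by both login shells and sub-shells.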


Connecting to Jean Zay is done with the user login and the associated password.

During the first connection, the user must enter the “initial password” and then immediately change it to an “actual password”.

The initial password

What is the initial password?

The initial password is the result of the concatenation of two passwords, respecting this order:

  1. The first part consists of a randomly generated IDRIS password which is sent to you by e-mail during the account opening and during a reinitialisation of your password. It remains valid for 20 days.
  2. The second part consists of the user-chosen password (8 alphanumeric characters) which you provided on the “Account creation request form (GENCI)” during your first account opening request (if you are a new user) or when requesting a change in your initial password (using the FGC form).
    Note: For a user with a previously opened login account created in 2014 or before, the password indicated in the last postal letter from IDRIS should be used.
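To make the ordering concrete, here is a toy illustration in shell (both values are invented for the example; real passwords obviously never belong in scripts):

```shell
# Part 1: randomly generated by IDRIS, sent by e-mail (invented example)
RANDOM_PART="Xq7visK2"
# Part 2: the 8 alphanumeric characters you chose on the request form
USER_PART="mychoic1"
# The initial password is the concatenation, in this order:
INITIAL_PASSWORD="${RANDOM_PART}${USER_PART}"
echo "$INITIAL_PASSWORD"   # prints: Xq7visK2mychoic1
```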

The initial password must be changed within 20 days following transmission of the randomly generated password (see the section "Using an initial password at the first connection" below).
If this first connection is not made within the 20-day timeframe, the initial password is invalidated and an e-mail is sent to inform you. In this case, you just have to send an e-mail to to request a new randomly generated password, which is then sent to you by e-mail.

An initial password is generated (or re-generated) in the following cases:

  • Account opening (or reopening): an initial password is formed at the creation of each account and also for the reopening of a closed account.
  • Loss of the actual password:
    • If you have lost your actual password, you must contact to request the re-generation of a randomly generated password which is then sent to you by e-mail. You will also need to have the user-chosen part of the password you previously provided in the FGC form.
    • If you have also lost the user-chosen part of the password which you previously provided in the FGC form (or was contained in the postal letter from IDRIS in the former procedure of 2014 or before), you must complete the “Request to change the user part of initial password” section of the FGC form, print and sign it, then scan and e-mail it to or send it to IDRIS by postal mail. You will then receive an e-mail containing a new randomly generated password.

Using an initial password at the first connection

Below is an example of the first connection (without using SSH keys) for which the “initial password” is required for the login_idris account on an IDRIS machine.

Important: At the first connection, the initial password is requested twice: a first time to establish the connection to the machine, and a second time by the password change procedure which is then executed automatically.

Recommendation: As you have to change the initial password the first time you log in, carefully prepare your new password before beginning the procedure (see Creation rules for "actual passwords" in the section below).

$ ssh
login_idris@machine_idris password:  ## Enter INITIAL PASSWORD first time ##
Last login: Fri Nov 28 10:20:22 2014 from
WARNING: Your password has expired.
You must change your password now and login again!
Changing password for user login_idris.
Enter login(    ) password:          ## Enter INITIAL PASSWORD second time ##
Enter new password:                      ## Enter new chosen password   ##
Retype new password:                     ## Confirm new chosen password ##
     password information changed for login_idris
passwd: all authentication tokens updated successfully.
Connection to machine_idris closed.

Remark: You will be immediately disconnected after entering a new correct chosen password (“all authentication tokens updated successfully”).

Now, you may re-connect using your new actual password that you have just registered.

The actual password

Once your actual password has been created and entered correctly, it will remain valid for one year (365 days).

How to change your actual password

You can change your password at any time by using the UNIX command passwd directly on a front end. The change takes effect immediately on all the machines. This new actual password will remain valid for one year (365 days) following its creation.

Creation rules for "actual passwords"

  • It must contain a minimum of 12 characters.
  • The characters must belong to at least 3 of the 4 following groups:
    • Uppercase letters
    • Lowercase letters
    • Numbers
    • Special characters
  • The same character may not be repeated more than 2 times consecutively.
  • A password must not be composed of words from dictionaries or from trivial combinations (1234, azerty, …).


  • Your actual password cannot be modified on the day of its creation, nor during the 5 days which follow. Nevertheless, if necessary, you may contact the User Support Team to request a new randomly generated password for the re-creation of an initial password.
  • A record is kept of the last 6 passwords used; reusing one of them will be rejected.
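These rules can be checked locally before running passwd. The helper below is a hypothetical sketch, not an IDRIS tool, and only covers the mechanical rules (length, character groups, consecutive repetition), not the dictionary-word rule:

```shell
# Hypothetical local pre-check of a candidate password (not an IDRIS tool)
check_password() {
    pw=$1
    # Rule 1: minimum of 12 characters
    [ "${#pw}" -ge 12 ] || { echo "too short (min 12)"; return 1; }
    # Rule 2: characters from at least 3 of the 4 groups
    groups=0
    case $pw in *[A-Z]*) groups=$((groups + 1)) ;; esac
    case $pw in *[a-z]*) groups=$((groups + 1)) ;; esac
    case $pw in *[0-9]*) groups=$((groups + 1)) ;; esac
    case $pw in *[!A-Za-z0-9]*) groups=$((groups + 1)) ;; esac
    [ "$groups" -ge 3 ] || { echo "needs 3 of 4 character groups"; return 1; }
    # Rule 3: no character repeated more than twice consecutively
    if printf '%s' "$pw" | grep -q '\(.\)\1\1'; then
        echo "character repeated more than twice"; return 1
    fi
    echo "looks OK"
}

check_password 'Abbbcdefgh12'   # prints: character repeated more than twice
check_password 'CorrectHorse42' # prints: looks OK
```

This is purely a convenience: the authoritative checks remain those applied by the passwd command on the IDRIS machines.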

Forgotten or expired password

If you have forgotten your password or, despite the warning e-mails sent to you, you have not changed your actual password before its expiry date (i.e. one year after its last creation), your password will be invalidated.

In this case, you must contact to request the re-generation of the randomly generated password which is then sent to you by e-mail.

Note: You will also need the user-chosen part of the initial password you initially provided in order to connect to the host after this re-generation. In fact, you will have to follow the same procedure as for using an initial password at the first connection.

Account blockage following 15 unsuccessful connection attempts

If your account has been blocked as a result of 15 unsuccessful connection attempts, you must contact the IDRIS User Support Team.

Account security reminder

You must never write out your password in an e-mail, even one sent to IDRIS (User Support, Gestutil, etc.), no matter what the reason: We would be obligated to immediately generate a new initial password, the objective being to invalidate the actual password which you disclosed and to ensure that you define a new one during your next connection.

Each account is strictly personal. Discovery of account access by an unauthorised person will cause immediate protective measures to be taken by IDRIS, possibly including blocking the account.
The user must take certain basic common-sense precautions:

  • Inform IDRIS immediately of any attempted intrusion into your account.
  • Respect the recommendations for using SSH keys.
  • Protect your files by limiting UNIX access rights.
  • Do not use a password which is too simple.
  • Protect your personal work station.
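To illustrate the point about limiting UNIX access rights (the directory name below is hypothetical), restricting a directory to its owner can be done as follows:

```shell
# Deny group/other access to newly created files by default
umask 077

# Create a private directory and restrict it to the owner only
mkdir -p /tmp/demo_private_data
chmod 700 /tmp/demo_private_data   # rwx for owner, nothing for group/others

# Inspect the result
ls -ld /tmp/demo_private_data
```

Mode 700 on a directory means only its owner can list, traverse or modify it; for individual files, 600 (read/write for the owner only) is the analogous setting.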

Return to Table of Contents

6. Management of your account and your environment variables

How do I modify my personal data?

Modification of your personal data is done via the Web interface Extranet.

  • For those who do not have a password for Extranet or who have lost it, the access modalities are described on this page.
  • For those who have a password, click on Extranet, connect with your identifiers, then ⇒ Your account ⇒ Your data ⇒ Your contact details.

The only data modifiable on line are:

  • e-mail address
  • telephone number
  • fax number

Modification of your postal address is done by completing the section « Modification of the user's postal address » on the Administration Form for Login Accounts (FGC) and sending it to from an institutional address. Note that this procedure requires the signatures of the user and of the laboratory director.

What disk spaces are available on Jean Zay?

For each project, there are 5 distinct disk spaces available on Jean Zay: HOME, WORK, SCRATCH/JOBSCRATCH, STORE and DSDIR.

You will find the explanations concerning these spaces on the disk spaces page of our Web site.
Important: HOME, WORK and STORE are subject to quotas!

If your login is attached to more than one project, the IDRIS command idrenv will display all the environment variables referencing all the disk spaces of your projects. These variables allow you to access the data of your different projects from any of your other projects.

Choose your storage space according to your needs (permanent, semi-permanent or temporary data, large or small files, etc.).

How do I request an extension of a disk space or inodes?

If your use of the disk space is in accordance with its usage recommendations and if you cannot delete or move the data contents from this space, your project manager can make a justified request for a quota increase (space and/or inodes) via the extranet.

How can I know the number of computing hours consumed per project?

You simply need to use the IDRIS command idracct to know the hours consumed by each collaborator of the project as well as the total number of hours consumed and the percentage of the allocation.

Note that the information returned by this command is updated once per day (the date and time of the update are indicated in the first line of the command output).

If you have more than one project at IDRIS, this command will display the CPU and/or GPU consumptions of all the projects that your login is attached to.

What should I do when I will soon have no computing hours remaining?

There are two possible ways to request supplementary hours:

  • Either by requesting more resources as needed («au fil de l'eau»); this request may be made as soon as your HPC or AI project has an hours allocation in progress.
  • Or by requesting a complement of hours for a period of six months; this can be requested midway through your Regular Access (HPC) allocation.

These requests must be justified and should be made via the DARI homepage as indicated on our Web page requesting resource hours.

How can I know when the machine is unavailable?

The machine can be unavailable because of a planned maintenance event or due to a technical problem which occurred unexpectedly. In both cases, the information is available on the homepage of the IDRIS Web site via the drop-down menu entitled, « For users », then the heading "Machine availability".

IDRIS users may also subscribe to the “info-machines” mailing list through the Extranet.

How do I recover files which I unintentionally deleted?

You can only recover files which were deleted from your HOME and from the WORK spaces of your different projects. Indeed, only the HOME and WORK spaces are backed up, via “snapshots”, as explained on the disk spaces page of our Web site.

Because they are too large, neither the SCRATCH (semi-permanent space) nor the STORE (archiving space) is backed up.

Can I ask IDRIS to transfer files from one account to another account?

IDRIS considers data to be linked to a project. Consequently, for the transfer to be possible, the following are necessary:

  • Both accounts (the owner and the recipient) must be attached to the same project.
  • The project manager makes the request by signed fax or by e-mail to the support team () specifying:
    • The concerned machine.
    • The source account and the recipient account.
    • The list of files and/or directories to transfer.

Can I recover files on an external storage medium?

It is no longer possible to request the transfer of files to an external storage medium.

Return to Table of Contents

7. The disk spaces

Jean Zay: The disk spaces

There are four distinct disk spaces accessible for each project: HOME, WORK, SCRATCH/JOBSCRATCH and the STORE.

Each space has specific characteristics suited to its usage, as described below. The paths to access these spaces are stored in five shell environment variables: $HOME, $WORK, $SCRATCH, $JOBSCRATCH and $STORE.

To find out how much of the disk spaces your project occupies, you can use the idrquota command.


$HOME : This is the home directory during an interactive connection. This space is intended for frequently-used small-sized files such as the shell environment files, the tools, and potentially the sources and libraries if they have a reasonable size. The size of this space is limited (in space and in number of files).
The HOME characteristics are:

  • It is a permanent space.
  • It is backed up via snapshots: See the section entitled The snapshots below.
  • It is intended to receive small-sized files.
  • In the case of a multi-project login, the HOME is unique.
  • It is subject to per-user quotas which are intentionally rather low (3 GiB by default).
  • It is accessible in interactive mode or in a batch job via the $HOME variable:
    $ cd $HOME
  • It is the home directory during an interactive connection.


$WORK: This is a permanent work and storage space which is usable in batch. In this space, we generally store large-sized files for use during batch executions: very large source files, libraries, data files, executable files, result files and submission scripts.
The characteristics of WORK are:

  • It is a permanent space.
  • It is backed up via snapshots: See the section entitled The snapshots below.
  • It is intended to receive large-sized files.
  • In the case of a multi-project login, a WORK is created for each project.
  • It is subject to per-project quotas.
  • It is accessible in interactive mode or in a batch job.
  • It is composed of 2 sections:
    • A section in which each user has an individual part, accessed by the command:
      $ cd $WORK
    • A section common to the project to which the user belongs and into which files to be shared can be placed, accessed by the command:
      $ cd $ALL_CCFRWORK
  • The WORK is a GPFS disk space with a bandwidth of about 100 GB/s in read and in write. This bandwidth can be temporarily saturated in case of exceptionally intensive usage.

Usage recommendations

  • Batch jobs can run in the WORK. Nevertheless, because several of your jobs can run at the same time, you must ensure that your execution directories or file names are unique.
  • Moreover, this disk space is subject to per-project quotas which can abruptly stop your execution if they are reached. In the WORK, you must therefore be aware not only of your own activity but also of that of your project colleagues. For these reasons, you may prefer using the SCRATCH or the JOBSCRATCH for the execution of batch jobs.
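For example, the uniqueness of the execution directory can be ensured with the Slurm job identifier. A minimal sketch (the run_ prefix is hypothetical; SLURM_JOB_ID is set by Slurm inside a job, and the shell PID serves as a fallback here so that the sketch is self-contained):

```shell
# Create a per-job execution directory in the WORK so that jobs running at the
# same time never overwrite each other's files (run_ prefix is illustrative).
RUNDIR="${WORK:-/tmp}/run_${SLURM_JOB_ID:-$$}"
mkdir -p "$RUNDIR"
cd "$RUNDIR"
```

Output file names can be made unique in the same way, e.g. result_${SLURM_JOB_ID}.dat.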


$SCRATCH : This is a semi-permanent work and storage space which is usable in batch; the lifespan of the files is limited to 30 days. The large-sized files used during batch executions are generally stored here: the data files, result files or the computation restarts. Once the post-processing has been done to reduce the data volume, you must remember to copy the significant files into the WORK so that they are not lost after 30 days, or into the STORE for long-term archiving.
The characteristics of the SCRATCH are:

  • The SCRATCH is a semi-permanent space with a 30-day file lifespan.
  • It is not backed up.
  • It is intended to receive large-sized files.
  • It is subject to very large per-project quotas, about 1/10th of the total disk space for each group.
  • It is accessible in interactive mode or in a batch job.
  • It is composed of 2 sections:
    • A section in which each user has an individual part; accessed by the command:
      $ cd $SCRATCH
    • A section common to the project to which the user belongs, into which files to be shared can be placed. It is accessed by the command:
      $ cd $ALL_CCFRSCRATCH
  • In the case of a multi-project login, a SCRATCH is created for each project.
  • The SCRATCH is a GPFS disk space with a bandwidth of about 500 GB/s in write and in read.

$JOBSCRATCH: This is the temporary execution directory specific to batch jobs.
Its characteristics are:

  • It is a temporary directory whose file lifespan is equal to that of the batch job.
  • It is not backed up.
  • It is intended to receive large-sized files.
  • It is subject to the same per-project quotas as the SCRATCH.
  • It is created automatically when a batch job starts and, therefore, is unique to each job.
  • It is destroyed automatically at the end of the job. Therefore, it is necessary to manually copy the important files onto another disk space (the WORK or the SCRATCH) before the end of the job.
  • The JOBSCRATCH is a GPFS disk space with a bandwidth of about 500 GB/s in write and in read.
  • During the execution of a batch job, the corresponding JOBSCRATCH is accessible from the Jean Zay front end via its JOBID job number (see the output of the squeue command) and the following command:
    $ cd /gpfsssd/jobscratch/JOBID

Usage recommendations:

  • The JOBSCRATCH can be seen as the former TMPDIR.
  • The SCRATCH can be seen as a semi-permanent WORK which offers the maximum input/output performance available at IDRIS but limited by a 30-day file lifespan.
  • The semi-permanent characteristics of the SCRATCH allow storing large volumes of data there between two or more jobs which run successively, one right after another, but within a limited period of a few weeks: This disk space is not purged after each job.
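The copy-out step described above can be sketched as follows (a minimal example; on Jean Zay, $JOBSCRATCH and $WORK are set by the environment, and /tmp directories stand in here so that the sketch is self-contained):

```shell
# Sketch of a job body: compute in the fast, temporary JOBSCRATCH, then copy
# the important result files to a permanent space BEFORE the job ends.
JOBSCRATCH="${JOBSCRATCH:-/tmp/jobscratch_demo_$$}"   # stand-in if not under Slurm
WORK="${WORK:-/tmp/work_demo_$$}"                     # stand-in if not on Jean Zay
mkdir -p "$JOBSCRATCH" "$WORK"

cd "$JOBSCRATCH"
echo "simulation output" > result.dat   # stands in for the real computation
cp result.dat "$WORK/"                  # without this copy, the file is lost
```

The final copy is the essential step: once the job ends, the JOBSCRATCH is destroyed automatically.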


$STORE: This is the IDRIS archiving space for long-term storage. Very large files are generally stored there, typically tar archives of tree hierarchies of computation result files after post-processing. This space is not meant to be accessed or modified on a daily basis but to preserve very large volumes of data over time, with only occasional consultation.
Its characteristics are:

  • The STORE is a permanent space.
  • It is not backed up.
  • We advise against systematically accessing it in write mode during a batch job.
  • It is intended to receive very large files: The maximum size is 10 TiB per file and the minimum recommended size is 250 MiB (the ratio of disk size to number of inodes).
  • In the case of a multi-project login, a STORE is created per project.
  • It is subject to per-project quotas with a small number of inodes but a very large space.
  • It is composed of 2 sections:
    • A section in which each user has an individual part, accessed by the command:
      $ cd $STORE
    • A section common to the project to which the user belongs and into which files to be shared can be placed. It is accessed by the command:
      $ cd $ALL_CCFRSTORE

Usage recommendations:

  • The STORE can be seen as replacing the former Ergon archive server.
  • However, there is no longer a limitation on file lifespan.
  • As this is an archive space, it is not intended for frequent access.
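Since the STORE has few inodes but a very large space, a tree of many small result files should be bundled into a single archive before being moved there. A minimal sketch with hypothetical names (on Jean Zay, the archive would then be copied to the $STORE):

```shell
# Bundle a results directory into a single compressed tar archive, so that the
# STORE receives one large file instead of many small ones (names illustrative).
RESULTS="/tmp/results_demo_$$"
mkdir -p "$RESULTS"
echo "data 1" > "$RESULTS/out1.dat"
echo "data 2" > "$RESULTS/out2.dat"
tar -czf "/tmp/results_demo_$$.tar.gz" -C /tmp "results_demo_$$"
# On Jean Zay, one would then run e.g.: cp "/tmp/results_demo_$$.tar.gz" $STORE/
```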


$DSDIR: This is a storage space dedicated to voluminous public databases (in size or number of files) which are needed for using AI tools. These datasets are visible to all Jean Zay users.

If you use large public databases which are not found in the $DSDIR space, IDRIS will download and install them in this disk space at your request.

The list of currently accessible data bases is found on this page: Jean Zay: Datasets available in $DSDIR storage space.

Summary table of the main disk spaces

  • $HOME (3 GB and 150k inodes per user)
    Features: home directory at connection; backed up space
    Usage: storage of configuration files and small files
  • $WORK (5 TB and 500k inodes per project (*))
    Features: storage on rotating disks (100 GB/s read/write); backed up space
    Usage: storage of source codes and input/output data; execution in batch or interactive
  • $SCRATCH (no quota; 2.5 PB shared by all users)
    Features: SSD storage (500 GB/s read/write); lifespan of unused files (not read or modified): 30 days; space not backed up
    Usage: storage of voluminous input/output data; execution in batch or interactive; optimal read/write performance
  • $STORE (50 TB and 100k inodes per project (*))
    Features: space not backed up
    Usage: long-term archive storage (for the lifespan of the project)

(*) Quotas per project can be increased at the request of the project manager or deputy manager via the Extranet interface, or per request to the user support team.

The snapshots

The $HOME and $WORK are backed up regularly via a snapshot mechanism: These are images of the tree hierarchies which allow you to recover a file or a directory that you corrupted or deleted by mistake.

All the available snapshots SNAP_YYYYMMDD, where YYYYMMDD corresponds to the backup date, are visible from all the directories of your HOME and your WORK via the following command:

$ ls .snapshots
SNAP_20191022  SNAP_20191220  SNAP_20200303  SNAP_20200511
SNAP_20191112  SNAP_20200127  SNAP_20200406  SNAP_20200609 

Comment: In this example, you can see 8 backups. To recover a file from 9 June 2020, you simply need to select the directory SNAP_20200609.

Important: The .snapshots directory is not visible with the ls -a command so don't be surprised when you don't see it. Only its contents can be consulted.

For example, if you wish to recover a file which was in the $WORK/MY_DIR subdirectory, you just need to follow the procedure below:

  1. Go into the directory of the initial file:
    $ cd $WORK/MY_DIR
  2. You will find the backup which interests you via the ls command:
    $ ls .snapshots
    SNAP_20191022  SNAP_20191220  SNAP_20200303  SNAP_20200511
    SNAP_20191112  SNAP_20200127  SNAP_20200406  SNAP_20200609 
  3. You can then see the contents of your $WORK/MY_DIR directory as it was on 9 June 2020, for example, with the command:
    $ ls -al .snapshots/SNAP_20200609 
    total 2
    drwx--S--- 2 login  prj  4096 oct.  24  2019 .
    dr-xr-xr-x 2 root  root 16384 janv.  1  1970 ..
    -rw------- 1 login  prj 20480 oct.  24  2019 my_file 
  4. Finally, you can recover the file as it was on the date of 9 June 2020 by using the cp command:
    1. By overwriting the initial file, $WORK/MY_DIR/my_file (note the “.” at the end of the command):
      $ cp .snapshots/SNAP_20200609/my_file . 
    2. Or, by renaming the copy as $WORK/MY_DIR/my_file_20200609 in order to not overwrite the initial file $WORK/MY_DIR/my_file:
      $ cp .snapshots/SNAP_20200609/my_file  my_file_20200609 


  • The ls -l .snapshots/SNAP_YYYYMMDD command always shows the contents of the directory where you are, but as it was on the given date YYYYMMDD.
  • You can add the -p option to the cp command in order to keep the date and the Unix access rights of the recovered file:
    $ cp -p .snapshots/SNAP_20200609/my_file . 
    $ cp -p .snapshots/SNAP_20200609/my_file  my_file_20200609 
  • Files are recovered from your HOME by using the same procedure.

Jean Zay: Disk quotas and idrquota command


Quotas guarantee equitable access to the disk resources. They prevent the situation where one group of users consumes all the space and prevents other groups from working. At IDRIS, quotas limit both the quantity of disk space and the number of files (inodes). These limits are applied per user for the $HOME (one HOME per user even if your login is attached to more than one project) and applied per project for the $WORK (as many WORK spaces as there are projects for the same user).

You may consult the disk quotas of your project by using the idrquota command (see below).

Exceeding the quotas

When a group has exceeded the quota, no warning e-mails are sent. Nevertheless, you are informed by error messages such as disk quota exceeded when you manipulate files in the concerned disk space.

When one of the quotas is reached, you can no longer create files in the concerned disk space: This could disturb the jobs being executed at that time, if they were submitted from that disk space.

Caution: Editing a file when you are at the limit of your disk quota can bring the file size back to zero, thereby deleting its contents.

When you are blocked or in the process of being blocked:

  • Try to clean out the concerned disk space by deleting files which are no longer useful.
  • Move your files into another space. Note that the $STORE is intended for receiving large archives and not a multitude of files.
  • The Project Manager or his/her deputy can ask for a quota increase using our Extranet Web site.
  • You can also request a quota increase by contacting the IDRIS Support Team (01 69 35 85 55 or ).

The idrquota command

The idrquota command provides information about the consumption and quotas for the spaces subject to quotas:

  • -m option: retrieves information for the $HOME (quotas per user).
  • -s option: retrieves information for the $STORE (quotas per project).
  • -w option: retrieves information for the $WORK (quotas per project).
  • -p PROJET option: allows specifying the concerned project if your login is linked to more than one project. This can be combined with the -w option but not with the -m option. The value of PROJET corresponds to the UNIX group to which the $WORK space belongs.
  • -h option: displays help about the command.

Comment: The information displayed by this command is updated every 30 minutes.

The idrquota command currently displays a short summary but it is possible that this will evolve.
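For example, to display the WORK quotas of a given project when your login is attached to several of them (the project name abc is hypothetical here):

```
$ idrquota -w -p abc
```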

Return to Table of Contents

8. Commands for file transfers

File transfers using the bbftp command

To transfer large files from IDRIS to your laboratory, we advise you to use BBFTP, a software tool optimised for file transfers.

All the information for using the bbftp command is found on our website.

File transfers via CCFR network

How do I transfer data between the 3 national centres (CCFR network)?


The CCFR (Centres de Calcul Français) network is dedicated to very high speed and interconnects the three French national computing centres: CINES, IDRIS and TGCC. This network is made available to users to facilitate data transfers between the national centres. The machines currently connected on this network are Joliot-Curie at TGCC, Jean Zay at IDRIS, and Occigen at CINES.


  • Your IDRIS login must be authorised to access the CCFR network. If this is not the case, you must complete the section entitled “Access the CCFR network” in the Administration Form for Login Accounts (FGC) and send it to  from an institutional address. Note that this procedure requires the signatures of the user and of the security manager of your laboratory.
  • Moreover, not all of the Jean Zay nodes are connected to this network. To use the network from IDRIS, you must connect to the machine referenced by the alias, which is only accessible from jean-zay:
    $ ssh

For more information, please contact the User Support Team ().


Complete documentation (prerequisites, authentication, certification) is available here (in French).

Return to Table of Contents

9. The module command

For more information, consult our web page about instructions to use the module command on Jean Zay.

Return to Table of Contents

10. Compilation

Jean Zay: The Fortran and C/C++ compilation system (Intel)

$ module avail intel-compilers
----------------------- /gpfslocalsup/pub/module-rh/modulefiles  --------------------------
intel-compilers/16.0.4 intel-compilers/18.0.5 intel-compilers/19.0.2 intel-compilers/19.0.4
$ module load intel-compilers/19.0.4
$ module list
Currently Loaded Modulefiles:
 1) intel-compilers/19.0.4

$ ifort prog.f90 -o prog
$ icc  prog.c -o prog
$ icpc prog.C -o prog

Jean Zay: Compilation of an MPI parallel code in Fortran, C/C++

  • Intel MPI:
$ module avail intel-mpi
-------------------------------------------------------------------------- /gpfslocalsup/pub/module-rh/modulefiles --------------------------------------------------------------------------
intel-mpi/5.1.3(16.0.4)   intel-mpi/2018.5(18.0.5)  intel-mpi/2019.4(19.0.4)  intel-mpi/2019.6  intel-mpi/2019.8  
intel-mpi/2018.1(18.0.1)  intel-mpi/2019.2(19.0.2)  intel-mpi/2019.5(19.0.5)  intel-mpi/2019.7  intel-mpi/2019.9
$ module load intel-compilers/19.0.4 intel-mpi/19.0.4
  • Open MPI (if you do not need CUDA-aware MPI, choose one of the modules without the -cuda extension):
$ module avail openmpi
-------------------------------------------------------- /gpfslocalsup/pub/modules-idris-env4/modulefiles/linux-rhel8-skylake_avx512 --------------------------------------------------------
openmpi/3.1.4       openmpi/3.1.5  openmpi/3.1.6-cuda  openmpi/4.0.2       openmpi/4.0.4       openmpi/4.0.5       openmpi/4.1.0       openmpi/4.1.1       
openmpi/3.1.4-cuda  openmpi/3.1.6  openmpi/4.0.1-cuda  openmpi/4.0.2-cuda  openmpi/4.0.4-cuda  openmpi/4.0.5-cuda  openmpi/4.1.0-cuda  openmpi/4.1.1-cuda     
$ module load pgi/20.4 openmpi/4.0.4

  • Intel MPI:
$ mpiifort source.f90
$ mpiicc source.c
$ mpiicpc source.C
  • Open MPI:
$ mpifort source.f90
$ mpicc source.c
$ mpic++ source.C

Jean Zay: Compilation of an OpenMP parallel code in Fortran, C/C++

$ ifort -qopenmp source.f90
$ icc -qopenmp source.c
$ icpc -qopenmp source.C

$ ifort -c -qopenmp source1.f
$ ifort -c source2.f
$ icc -c source3.c
$ ifort -qopenmp source1.o source2.o source3.o

Jean Zay: Using the PGI compilation system for C/C++ and Fortran

$ module avail pgi 
---------------- /gpfslocalsup/pub/module-rh/modulefiles ----------------
pgi/19.10  pgi/20.1  pgi/20.4
$ module load pgi/19.10
$ module list
Currently Loaded Modulefiles:
  1) pgi/19.10

$ pgcc prog.c -o prog
$ pgc++ prog.cpp -o prog
$ pgfortran prog.f90 -o prog

Jean Zay: Compiling an OpenACC code

The PGI compiling options for activating OpenACC are the following:

  • -acc: This option activates the OpenACC support. You can specify some suboptions:
    • [no]autopar: Activate automatic parallelization for the ACC PARALLEL directive. The default is to activate it.
    • [no]routineseq: Compile all the routines for the accelerator. The default is to not compile each routine as a sequential directive.
    • strict: Display warning messages if using non-OpenACC directives for the accelerator.
    • verystrict: Stops the compilation if using any non-OpenACC directives for the accelerator.
    • sync: Ignore async clauses.
    • [no]wait: Wait for the completion of each compute kernel on the accelerator. By default, kernel launches are blocking unless async is used.
    • Example :
      $ pgfortran -acc=noautopar,sync -o prog_ACC prog_ACC.f90
  • -ta: This option activates offloading on the accelerator. It automatically activates the -acc option.
    • It will be useful for choosing the architecture for compiling the code.
    • To use the V100 GPUs of Jean Zay, it is necessary to use the tesla suboption of -ta and the cc70 compute capability. For example:
      $ pgfortran -ta=tesla:cc70 -o prog_gpu prog_gpu.f90
    • Some useful tesla suboptions:
      • managed: Creates a shared view of the GPU and CPU memory.
      • pinned: Activates CPU memory pinning. This can improve data transfer performance.
      • autocompare: Activates comparison of CPU/GPU results.

Jean Zay: CUDA-aware MPI and GPUDirect

For optimal performance, CUDA-aware OpenMPI libraries supporting GPUDirect are available on Jean Zay.

$ module avail openmpi/*-cuda
-------------- /gpfslocalsup/pub/modules-idris-env4/modulefiles/linux-rhel8-skylake_avx512 --------------
openmpi/3.1.4-cuda openmpi/3.1.6-cuda openmpi/4.0.2-cuda openmpi/4.0.4-cuda
$ module load openmpi/4.0.4-cuda

$ mpifort source.f90
$ mpicc source.c
$ mpic++ source.C

No particular option is necessary for the compilation. You may refer to the GPU compilation section of the index for more information on compiling code for the GPUs.

Adaptation of the code

Using the CUDA-aware MPI and GPUDirect functionalities on Jean Zay requires a precise initialisation order in the code for CUDA or OpenACC and MPI:

  1. Initialisation of CUDA or OpenACC
  2. Choice of the GPU which each MPI process should use (binding step)
  3. Initialisation of MPI.

Caution: if this initialisation order is not respected, your code execution might crash with the following error:

CUDA failure: cuCtxGetDevice()
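For an OpenACC code in Fortran, the required order can be sketched as follows (pseudocode only, with illustrative names; it is not compilable as-is):

```fortran
! Sketch of the required initialisation order (illustrative names)
! 1. Initialise OpenACC first
call acc_init(acc_device_nvidia)
! 2. Bind this MPI process to a GPU BEFORE MPI is initialised
!    (local_rank must be obtained from the batch environment)
local_rank = ...
call acc_set_device_num(local_rank, acc_device_nvidia)
! 3. Only then initialise MPI
call MPI_Init(ierror)
```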

Return to Table of Contents

11. Code execution

Interactive and batch

There are two possible job modes: Interactive and batch.

In both cases, you must respect the maximum limits in elapsed (or clock) time, memory, and number of processors and/or number of GPUs which are set by IDRIS with the goal of better managing the computing resources. You will find more complete information concerning these limits by consulting the following pages on our Web server: CPU Slurm partitions, GPU Slurm partitions and the pages detailing how to reserve memory for CPU jobs or for GPU jobs.

Interactive jobs

From the machines declared in the IDRIS filters, you have SSH access to the front ends, on which you can work interactively.

Comment: Any code requiring GPUs cannot be executed on the front ends as they are not equipped with them.

Batch jobs

There are several reasons to work in batch mode:

  • Having the possibility of closing the interactive session after submitting a batch job.
  • Having the possibility of going beyond the limits set for interactive mode on elapsed (or clock) time, memory, and number of processors or GPUs.
  • Doing the computations with dedicated resources (these resources are reserved for you alone).
  • Allowing better resource management by distributing jobs on the machine according to the resources requested.
  • Launching your pre-/post-processing jobs on nodes dedicated to large memory (jean-zay-pp).

At IDRIS, we use Slurm software for batch job management on the compute nodes, the pre-/post-processing nodes (jean-zay-pp) and the visualisation nodes (jean-zay-visu).

This batch manager controls the scheduling of jobs according to resources requested (memory, elapsed (or clock) time, number of CPUs, number of GPUs, …), the number of active jobs at a given moment (number in total and number per user) and the number of hours consumed per project.

There are 2 essential steps in order to work in batch: job creation and job submission.

Job creation

This step consists of writing all the commands that you want executed into a file and then adding, at the beginning of the file, the Slurm submission directives for defining certain parameters such as:

  • Job name (directive #SBATCH --job-name=...)
  • Elapsed time limit for the entire job (directive #SBATCH --time=HH:MM:SS)
  • Number of compute nodes (directive #SBATCH --nodes=...)
  • Number of (MPI) processes per compute node (directive #SBATCH --ntasks-per-node=...)
  • Total number of (MPI) processes (directive #SBATCH --ntasks=...)
  • Number of OpenMP threads per process (directive #SBATCH --cpus-per-task=...)
  • Number of GPUs for jobs using GPUs (directive #SBATCH --gres=gpu:...)

Once the submission directives have been defined, it is recommended to enter the commands in the following order:

  • Go into the execution directory under WORK, SCRATCH or JOBSCRATCH (for more information, see our documentation about the disk spaces).
  • Copy the input files necessary for the execution into this directory.
  • Launch the execution (via the srun command for the MPI, hybrid or multi-GPU codes).
  • If you have used the SCRATCH or the JOBSCRATCH, you should copy the result files which you wish to save.
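Putting the directives and commands together, a minimal submission script might look like the following (a sketch only; the job name, paths and resource values are hypothetical and must be adapted to your case). Here the script is written to a file named mon_job so that the example is self-contained:

```shell
# Write a minimal (illustrative) Slurm submission script to the file mon_job
cat > mon_job <<'EOF'
#!/bin/bash
#SBATCH --job-name=my_job          # job name (illustrative)
#SBATCH --nodes=1                  # number of compute nodes
#SBATCH --ntasks-per-node=4        # MPI processes per node
#SBATCH --time=01:00:00            # elapsed time limit HH:MM:SS

cd $WORK/my_run_dir                # go into the execution directory
srun ./my_prog                     # launch the MPI execution
EOF
bash -n mon_job                    # check the shell syntax of the script
```

Such a file is then submitted with sbatch, as shown in the following subsection.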

Comments :

  • With the Slurm directive #SBATCH --cpus-per-task=..., which sets the number of threads per process, you may also set the quantity of memory available per process. For more information, please consult our documentation on the memory allocation of a CPU job and/or of a GPU job.
  • Detailed examples of jobs are available on our Web site in the sections entitled “Execution/commands of a CPU code” and “Execution/commands of a GPU code”.

Job submission

To submit a batch job (here, the Slurm script mon_job), you must use the following command:

$ sbatch mon_job

Your job will be placed in a partition according to the values requested in the Slurm directives. We advise you to set the parameters concerning the number of CPUs/GPUs, and the elapsed time, as accurately as possible in order to have a job return as rapidly as possible.

Comments :

  • For monitoring and managing your batch jobs, you should use the Slurm commands.
  • In batch mode, the user cannot intervene during the execution of commands except to stop/kill the job. Consequently, file transfers must be done without using/requiring a password.
  • The compute nodes have no access to the Internet which prevents all downloading (Git repositories, Python/Conda installation, …) from these nodes. If needed, these downloads can be done from the front ends or from the pre-/post-processing nodes, either before the code execution or via the submission of cascade jobs.
  • If you want to execute a job from the pre-/post-processing machine (jean-zay-pp), you must use the Slurm directive shown below in the submission script:
    #SBATCH --partition=prepost

    If this submission directive is absent, the job will execute in the default partition, thus on the compute nodes.

For any problem, please contact the IDRIS User Support Team.

Return to Table of Contents

12. Training courses offered at IDRIS

IDRIS training courses

IDRIS provides training courses for its own users as well as for others who use scientific computing. Most of these courses are included in the CNRS continuing education catalogue, CNRS Formation Entreprises, which makes them accessible to all users of scientific computing in both the academic and industrial sectors.

These courses are principally oriented towards the methods of parallel programming: MPI, OpenMP and hybrid MPI/OpenMP, the keystones for using the supercomputers of today. Courses are also given on the Fortran and C general scientific programming languages.

A catalogue of scheduled IDRIS training courses, regularly updated, is available on our web server.

These courses are free of charge if you are employed by either the CNRS (France's National Centre for Scientific Research) or a French university. In these cases, enrollment is done directly on the web site IDRIS courses. All other users should enroll through CNRS Formation Entreprises.

IDRIS training course materials are available online here.

Return to Table of Contents

13. IDRIS documentation

  • The Web site: IDRIS maintains a regularly updated website grouping together all of our documentation (IDRIS news, machine functioning, etc.).
  • Manufacturer documentation: Access to complete manufacturer documentation concerning the compilers (f90, C and C++), the scientific libraries, message-passing libraries (MPI), etc.
  • The manuals: Access to all the Unix manual pages with your user login on any of the IDRIS machines by using the man command.

Return to Table of Contents

14. User support

Contacting the User Support Team

Please contact the User Support Team for any questions, problems or other information regarding the usage of the IDRIS machines. This assistance is jointly provided by the members of the HPC support team and AI support team.

The User Support Team may be contacted directly:

  • By telephone at +33 (0)1 69 35 85 55 or by e-mail at
  • Monday through Thursday, 9:00 a.m. - 6:00 p.m. and on Friday, 9:00 a.m. - 5:30 p.m., without interruption.

Note: During certain holiday periods (e.g. Christmas and summer breaks), the support team staffing hours may be reduced as follows:
Monday through Friday, 9:00 a.m. - 12:00 (noon) and 1:30 p.m. - 5:30 p.m.
Outside of regularly scheduled hours, the answering machine of the User Support Team phone will indicate the currently relevant staffing hours (normal or holiday).

Administrative management for IDRIS users

For any problems regarding passwords, account opening, access authorisation, or in sending us the forms for account opening or management, please send an e-mail to: