Using the MESOIPSL cluster
This page provides practical information for users of the MESOIPSL clusters (also known as "Spirit", which replace the former "Ciclad" cluster).
Note that there are two distinct MESOIPSL Spirit clusters: one at Sorbonne Université (SU) and one at Ecole Polytechnique (X). If you log on to "spirit1" or "spirit2" (as shown below) then you are on the SU-Spirit cluster, whereas if you log on to "spiritx1" or "spiritx2" then you are on the X-Spirit cluster.
If you need to run on GPUs, that is possible using the third MESOIPSL cluster, HAL.
How to access the cluster
If you had an account on Ciclad, then you already have one on Spirit. If you need to open an account (accounts are reserved for IPSL users), follow the instructions on this page: https://documentations.ipsl.fr/spirit/getting_started/account.html
Once you have an account you can ssh to the cluster via either of the spirit1 or spirit2 login nodes:
ssh yourMESOIPSLlogin@spirit1.ipsl.fr
or equivalently
ssh yourMESOIPSLlogin@spirit2.ipsl.fr
IMPORTANT: ssh authentication to these machines requires ED25519 or RSA (4096 bits) key types. If your ssh connection fails, the first thing to check is that you are indeed using one of these key types.
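For illustration, here is a minimal sketch of how to generate a suitable key and declare a host alias in your ssh configuration (the key file name and the "spirit" alias are only examples; the public key itself is typically registered when the account is opened, see the account page above):
# generate an ED25519 key (or an RSA 4096-bit key with: ssh-keygen -t rsa -b 4096)
ssh-keygen -t ed25519 -f ~/.ssh/id_ed25519_spirit
# optional entry in ~/.ssh/config, so that "ssh spirit" just works
Host spirit
    HostName spirit1.ipsl.fr
    User yourMESOIPSLlogin
    IdentityFile ~/.ssh/id_ed25519_spirit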
This is probably also the right place to point to the MESOIPSL cluster's main documentation page: https://documentations.ipsl.fr/spirit/
OS and disk space
As the welcome message will remind you:
Welcome to Ubuntu 20.04.5 LTS (GNU/Linux 5.4.0-125-generic x86_64)
CPU AMD EPYC 7402P 24-Core Processor 2.8GHz
=========================================================================
*           Mesocentre ESPRI IPSL (Cluster Spirit) JUSSIEU              *
=========================================================================
** Disk Space :
 - /home/login     (32Go and 300000 files max per user) : Backup every day.
 - /data/login     ( 1To and 300000 files max per user) : NO BACKUP
 - /scratchu/login ( 2To and 300000 files max per user) : NO BACKUP
 - /bdd/           : Datasets
 - /climserv-home/, /homedata, /scratchx : SpiritX workspace (READ-ONLY)
------------------------------------------------------------------------------
Migration Documentation ( Temporary URL )
https://documentations.ipsl.fr/spirit/spirit_clusters/migration_from_ciclad_climserv.html
** Support Contact mailto:meso-support@ipsl.fr
------------------------------------------------------------------------------
The cluster runs Ubuntu Linux and the HOME directory is quite limited in size (32 GB). Most work should therefore be done on the /data and/or /scratchu disks.
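For instance, a typical first step is to create working directories on the larger disks (the subdirectory names below are only illustrative; the top-level paths follow the welcome message above):
mkdir -p /data/$USER/simulations   # large storage for model inputs/outputs (no backup)
mkdir -p /scratchu/$USER/runs      # temporary scratch space for runs in progress (no backup)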
It is up to you to tailor your environment; by default it is quite bare, and you must load the modules you need to get access to specific software, compilers or libraries (and specific versions thereof). To know what modules are available:
module avail
To load a given module, here the Panoply software:
module load panoply
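A few other module commands are handy; note that the compiler module name below is purely illustrative, pick the actual names from the output of module avail:
module list                 # show the modules currently loaded
module purge                # unload everything and start from a clean environment
module load intel/2021.4.0  # example only: load a specific compiler version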
Compiling the PCMs on Spirit
Dedicated arch files are available in the LMDZ.COMMON/arch subdirectory! They are labeled ifort_MESOIPSL (XIOS and DYNAMICO also have similarly named arch files).
Likewise there is a dedicated install_ioipsl_ifort_MESOIPSL.bash install script for IOIPSL available in LMDZ.COMMON/ioipsl
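As an illustration, and assuming the usual makelmdz_fpp workflow (the physics, grid and parallelism options below correspond to the Mars GCM executable used in the job example further down; adapt them to your case), compilation could look like:
cd LMDZ.COMMON/ioipsl
./install_ioipsl_ifort_MESOIPSL.bash   # build IOIPSL once with the MESOIPSL settings
cd ..
./makelmdz_fpp -arch ifort_MESOIPSL -p mars -d 64x48x54 -parallel mpi_omp gcm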
Example of a job to run a GCM simulation
Here is an example job script to run the GCM using 24 MPI tasks with 2 OpenMP threads each:
#!/bin/bash
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=24
#SBATCH --cpus-per-task=2
#SBATCH --partition=zen4 # zen4: 64 cores/node and 240GB of memory
##SBATCH --partition=zen16 # zen16: 32 cores/node and 496GB of memory
#SBATCH -J job_mpi_omp
#SBATCH --time=0:55:00
#SBATCH --output %x.%j.out
source ../trunk/LMDZ.COMMON/arch/arch-ifort_MESOIPSL.env # load the same environment as used at compile time
export OMP_NUM_THREADS=2  # must match --cpus-per-task
export OMP_STACKSIZE=400M # per-thread stack size, to avoid OpenMP stack overflows
mpirun ./gcm_64x48x54_phymars_para.e > gcm.out 2>&1
Note that there is a per-user limit of 96 cores (maximum) for a given job.
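Assuming the script above is saved as, e.g., job_mpi_omp.slurm (the file name is arbitrary), it is submitted and monitored with the standard SLURM commands:
sbatch job_mpi_omp.slurm   # submit the job; prints the job ID
squeue -u $USER            # check the state of your jobs (pending/running)
scancel <jobid>            # cancel a job using the ID reported by sbatch/squeue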
Sending data on Spirit
The following shell function can be used to send data (via rsync over ssh) to your scratch directory on Spirit.
function rsend {
    a=${1:-.}      # source folder (default: current directory)
    b=${2:-$a}     # destination folder name (default: same as source)
    c=${3:-spirit} # target machine, as named in your ssh config (default: spirit)
    rsync -avzl "$a"/ "$c":/scratchx/"$USER"/"$b"/
}
To be used as such:
rsend folder # simply send folder to your scratch dir
rsend folder1 folder2 # send folder1 into folder2
rsend folder1 folder2 machine # send folder1 into folder2 on machine (depends on your ssh config)
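Conversely, a minimal sketch of the reverse operation (fetching results back from the cluster to the local machine; the remote path mirrors the one used in the function above):
rsync -avzl spirit:/scratchx/$USER/folder/ ./folder/   # retrieve a results folder from the cluster scratch space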