Using the MESOIPSL cluster

This page provides some information for those who use the MESOIPSL cluster (also known as "spirit", replacing "ciclad").

How to access the cluster

If you had an account on Ciclad, then you already have one on Spirit. If you need to open an account (access is of course restricted to IPSL users), follow the instructions on this page: https://documentations.ipsl.fr/spirit/getting_started/account.html

Once you have an account you can ssh to the cluster via either of the spirit1 or spirit2 login nodes:

ssh yourMESOIPSLlogin@spirit1.ipsl.fr
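
To avoid typing the full login and host name every time, you may add an entry to the ~/.ssh/config file on your local machine; a minimal sketch, assuming your login is yourMESOIPSLlogin:

Host spirit1
    HostName spirit1.ipsl.fr
    User yourMESOIPSLlogin

You can then simply connect with "ssh spirit1".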

Here is probably also the right place to point to the MESOIPSL cluster's main documentation page: https://documentations.ipsl.fr/spirit/

OS and disk space

As the welcome message will remind you:

Welcome to Ubuntu 20.04.5 LTS (GNU/Linux 5.4.0-125-generic x86_64)
CPU AMD EPYC 7402P 24-Core Processor 2.8GHz
=========================================================================
*        Mesocentre ESPRI IPSL (Cluster Spirit) JUSSIEU                 *
=========================================================================
** Disk Space :
- /home/login     (32Go and 300000 files max per user) : Backup every day.
- /data/login     ( 1To and 300000 files max per user) : NO BACKUP
- /scratchu/login ( 2To and 300000 files max per user) : NO BACKUP
- /bdd/ : Datasets
- /climserv-home/, /homedata ,/scratchx : SpiritX workspace ( READ-ONLY)
------------------------------------------------------------------------------
Migration Documentation ( Temporary URL )
https://documentations.ipsl.fr/spirit/spirit_clusters/migration_from_ciclad_climserv.html
**  Support Contact  mailto:meso-support@ipsl.fr
------------------------------------------------------------------------------

The cluster runs Ubuntu Linux and the HOME directory is quite limited in size (32 GB and 300000 files per user); most work should therefore be done on the /data and/or /scratchu disks.
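
To keep an eye on how close you are to the HOME limits, and to set up a working directory on the larger disks, commands along these lines can be used (a minimal sketch; replace yourMESOIPSLlogin with your actual login):

# check current size and file count of your HOME
du -sh $HOME
find $HOME | wc -l
# create and move to a working directory on the /data disk
mkdir -p /data/yourMESOIPSLlogin/my_simulations
cd /data/yourMESOIPSLlogin/my_simulations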

It is up to you to tailor your environment. By default it is quite bare: you need to load the modules giving access to the specific software, compilers and libraries (and versions thereof) that you require. To see which modules are available:

module avail

To load a given module, for instance the Panoply software:

module load panoply
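
A typical session thus starts by loading the modules you need; for example (the module names and versions below are only illustrative, check "module avail" for what is actually installed):

# list the modules currently loaded
module list
# load a compiler and a NetCDF library (illustrative names and versions)
module load intel/2021.4.0
module load netcdf-fortran/4.5.3
# unload everything to start again from a clean environment
module purge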

Example of a job to run a GCM simulation

Here is an example to run using 24 MPI tasks with 2 OpenMP threads each:

#!/bin/bash
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=24
#SBATCH --cpus-per-task=2
#SBATCH --partition=zen4 # zen4: 64 cores/node and 240GB of memory
##SBATCH --partition=zen16 # zen16: 32 cores/node and 496GB of memory
#SBATCH -J job_mpi_omp
#SBATCH --time=0:55:00
#SBATCH --output %x.%j.out

source ../trunk/LMDZ.COMMON/arch/arch-ifort_MESOIPSL.env

export OMP_NUM_THREADS=2
export OMP_STACKSIZE=400M

mpirun gcm_64x48x54_phymars_para.e > gcm.out 2>&1
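
Assuming the script above is saved as run_gcm.job (the name is arbitrary), it can be submitted and monitored with the usual Slurm commands:

# submit the job
sbatch run_gcm.job
# check the state of your jobs in the queue
squeue -u $USER
# cancel a job if needed (replace JOBID with the actual job number)
scancel JOBID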