Using Adastra
This page provides a summary of examples and tools designed to help you get familiar with the Adastra environment.
Contents
Getting access to the cluster
For people on the "Atmosphères Planétaires" GENCI project who need to open an account on Adastra, here is the procedure:
- Log on to https://www-dcc.extra.cea.fr/CCFR/ and provide various information about yourself
A few tips:
- pick CINES in the list of computing centers you are requesting access to
- give your PROFESSIONAL phone number (not your personal cell phone number)
- name of the project: Atmosphères Planétaires; Numéro du Dossier: A0140110391
- Responsable scientifique du projet: M. Ehouarn MILLOUR, ehouarn.millour@lmd.ipsl.fr, 0144275286, Nationalité: Fr
- Responsable sécurité: M. Franck Guyon, franck.guyon@lmd.ipsl.fr, 0144275277, Nationalité: Fr
Note that this assumes you have an account on LMD computers and/or the MESOIPSL cluster. Otherwise you must provide the information for the relevant person from your lab.
- IPs & machine names used to connect to Adastra: 134.157.47.46 (ssh-out.lmd.jussieu.fr) and 134.157.176.129 (ciclad.ipsl.upmc.fr). Note that this assumes you have an account on LMD computers and/or the MESOIPSL cluster. Otherwise you must provide the name & IP of your institute's gateway machine.
- Choose anything you want for the 8-character password
- And then get Ehouarn to sign the form and forward it to Franck for him to sign as well.
- Send the signed form to svp@cines.fr
A couple of pointers
- Connecting to Adastra: for those who had an account on Occigen, group and login credentials have been retained from then. To connect to Adastra you first need to go through the LMD gateway (hakim) or the IPSL (Ciclad/Spirit) gateway, and then
ssh your_cines_login@adastra.cines.fr
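To avoid typing the two-hop connection every time, you can add a ProxyJump entry on your local machine. A minimal sketch of an ~/.ssh/config fragment (the Host aliases and the your_lmd_login / your_cines_login placeholders are assumptions, adjust them to your own accounts):

```
# Hypothetical ~/.ssh/config fragment; adjust logins and aliases to your setup
Host lmd-gateway
    HostName ssh-out.lmd.jussieu.fr
    User your_lmd_login

Host adastra
    HostName adastra.cines.fr
    User your_cines_login
    ProxyJump lmd-gateway
```

With this in place, a plain "ssh adastra" tunnels through the gateway automatically.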
Once connected, you will probably want to switch projects using the myproject command, e.g. to switch to "lmd1167" (the old "Atmosphères Planétaires" GENCI project)
myproject -a lmd1167
and to switch to "cin0391" (the 2023-2024 "Atmosphères Planétaires" GENCI project)
myproject -a cin0391
WARNING: when you switch projects, you also switch HOME directory etc.
To get all the info about dedicated environment variables (e.g. paths to SCRATCH, STORE, etc.) you can use
myproject -c
- Changing the password of your CINES account
When your password is close to expiring, CINES asks you to change it on this website: https://rosetta.cines.fr
Please note that you can access this website only from a machine that you have declared as a gateway for Adastra. At LMD, we have generally declared hakim.lmd.jussieu.fr (aka ssh-out) and ciclad.ipsl.jussieu.fr as gateway machines. Hakim doesn't have any browser installed, but you can launch Firefox on Ciclad and connect to the Rosetta website.
If that doesn't work, you will have to mail svp@cines.fr
- Link to the Adastra technical documentation: https://dci.dci-gitlab.cines.fr/webextranet/
Submitting jobs
Job submission is handled by SLURM: you need to write a job script and submit it using sbatch
sbatch myjob
You must specify in the header of the job which project's resources you are using ("cin0391" in our case):
#SBATCH --account=cin0391
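Since a job script missing this directive will be rejected at submission, a quick pre-flight check can save a round trip. The check_account helper below is a hypothetical convenience, not part of SLURM; only sbatch itself is a real command (the demo writes a throwaway header to /tmp rather than submitting anything):

```shell
#!/bin/bash
# Hypothetical pre-submission check: verify the job script header carries
# the mandatory --account directive before handing it to sbatch.
check_account() {
    local script=$1
    if grep -q '^#SBATCH --account=cin0391' "$script"; then
        echo "account line OK"
    else
        echo "missing #SBATCH --account=cin0391" >&2
        return 1
    fi
}

# Demo on a throwaway header; the real call would be:
#   check_account myjob && sbatch myjob
cat > /tmp/myjob.demo <<'EOF'
#!/bin/bash
#SBATCH --job-name=job_mpi
#SBATCH --account=cin0391
EOF
check_account /tmp/myjob.demo
```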
Example of an MPI job to launch a simulation
#!/bin/bash
#SBATCH --job-name=job_mpi
#SBATCH --account=cin0391
### GENOA nodes accommodate 96 cores
#SBATCH --constraint=GENOA
### Number of Nodes to use
#SBATCH --nodes=1
### Number of MPI tasks per node
#SBATCH --ntasks-per-node=48
### Number of OpenMP threads per MPI task
#SBATCH --cpus-per-task=1
#SBATCH --threads-per-core=1
###SBATCH --exclusive
#SBATCH --output=job_mpi_%A.out
#SBATCH --time=00:45:00
### source env modules:
source ../trunk/LMDZ.COMMON/arch.env
ulimit -s unlimited
srun --cpu-bind=threads --label gcm_96x96x78_phyvenus_para.e > gcm.out 2>&1
Example of a mixed MPI/OpenMP job to launch a simulation
#!/bin/bash
#SBATCH --job-name=job_mpi_omp
#SBATCH --account=cin0391
### GENOA nodes accommodate 96 cores
#SBATCH --constraint=GENOA
### Number of Nodes to use
#SBATCH --nodes=1
### Number of MPI tasks per node
#SBATCH --ntasks-per-node=24
### Number of OpenMP threads per MPI task
#SBATCH --cpus-per-task=4
#SBATCH --threads-per-core=1
###SBATCH --exclusive
#SBATCH --output=job_mpi_omp_%A.out
#SBATCH --time=00:30:00
### source env modules:
source ../trunk/LMDZ.COMMON/arch.env
ulimit -s unlimited
### OMP_NUM_THREADS value must match "#SBATCH --cpus-per-task"
export OMP_NUM_THREADS=4
export OMP_STACKSIZE=400M
srun --cpu-bind=threads --label gcm_64x48x54_phymars_para.e > gcm.out 2>&1
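A common pitfall with such headers is requesting a geometry that oversubscribes the 96 cores of a GENOA node (ntasks-per-node × cpus-per-task must not exceed 96, and OMP_NUM_THREADS must match cpus-per-task). A minimal bash sketch of this arithmetic, using the values from the hybrid header above:

```shell
#!/bin/bash
# Sanity-check that a requested geometry fits on a GENOA node (96 cores).
# Values mirror the hybrid MPI/OpenMP header above.
NODES=1
NTASKS_PER_NODE=24
CPUS_PER_TASK=4
CORES_PER_NODE=96

USED=$((NTASKS_PER_NODE * CPUS_PER_TASK))
echo "cores used per node: $USED / $CORES_PER_NODE"
if [ "$USED" -le "$CORES_PER_NODE" ]; then
    echo "geometry fits"
else
    echo "geometry oversubscribes the node" >&2
    exit 1
fi
echo "total MPI tasks: $((NODES * NTASKS_PER_NODE)), OpenMP threads per task: $CPUS_PER_TASK"
```

The same check applied to the MPI-only header (48 tasks × 1 cpu) leaves half the node idle, which is fine but worth doing deliberately (e.g. together with --exclusive).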