Using Adastra


This page provides a summary of examples and tips designed to help you get started with the Adastra environment.

Getting access to the cluster

For people on the "Atmosphères Planétaires" GENCI project who need to open an account on Adastra, here is the procedure:

  1. Go to https://www.edari.fr/utilisateur and log in via Janus, or create an account if you don't have a Janus login. If this doesn't work, you can create a new eDARI account directly.
  2. Click on "se rattacher à un dossier ayant obtenu des ressources" (attach yourself to a project that was granted resources)
  3. "Atmosphères Planétaires" project number to provide: A0140110391
  4. Ehouarn then receives an email asking him to let you join the project. Once he has validated it, you receive a confirmation email.
  5. Once approved, you have to request an account: click on "CINES: créer une demande d'ouverture de compte" (create an account-opening request)
  6. Fill in the forms: name, contract end date, CINES, your lab information (LMD is the default)
  7. Access IP address: 134.157.47.46, FQDN (Fully Qualified Domain Name): ssh-out.lmd.jussieu.fr
  8. Add a second address: 134.157.176.129, FQDN: ciclad.ipsl.upmc.fr
  9. Click on the option to get access to CCFR (only important if you also have access to other GENCI machines)
  10. The security officer for LMD is Julien Lenseigne (all his information is pre-filled except the phone number: +33169335172)
  11. YOU MUST THEN VALIDATE THE REQUEST: click on "Valider la saisie des informations" (validate the entered information)
  12. You then receive an automatic email, but it only tells you to go to the next step: download the pre-filled form from eDARI ("télécharger la demande"), sign it, and upload it back via "déposer la demande de création de compte" (submit the account-creation request).
  13. Wait for your application to be preprocessed by the system...

A couple of pointers

  • Connecting to Adastra: for those who had an account on Occigen, group and login credentials have been retained from then. To connect to Adastra you first need to go through the LMD gateway (hakim) or the IPSL (Ciclad/Spirit) gateway, and then:
ssh your_cines_login@adastra.cines.fr
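
If you find the two-step login tedious, you can let ssh jump through the gateway automatically. Below is a minimal ~/.ssh/config sketch, assuming your local machine has OpenSSH with ProxyJump support; the logins are placeholders to adapt:

# in ~/.ssh/config on your local machine (logins are placeholders)
Host adastra
    HostName adastra.cines.fr
    User your_cines_login
    ProxyJump your_lmd_login@ssh-out.lmd.jussieu.fr

With this entry, "ssh adastra" then connects in a single step.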

You will then probably want to switch projects using the myproject command, e.g. to switch to "lmd1167" (the old "Atmosphères Planétaires" GENCI project):

myproject -a lmd1167

and to switch to "cin0391" (the 2023-2024 "Atmosphères Planétaires" GENCI project):

myproject -a cin0391

WARNING: when you switch projects, you also switch HOME directory etc.

To get all the info about the dedicated environment variables (e.g. paths to SCRATCH, STORE, etc.), you can use:

myproject -c

  • Changing the password of your CINES account

When your password is close to expiring, CINES asks you to change it on this website: https://rosetta.cines.fr

Please note that you can access this website only from a machine that you have declared as a gateway for Adastra. At LMD, we have generally declared hakim.lmd.jussieu.fr (aka ssh-out) and ciclad.ipsl.jussieu.fr as gateway machines. Hakim doesn't have any browser installed, but you can launch firefox on Ciclad and connect to the rosetta website. If that doesn't work, you will have to email svp@cines.fr
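
For instance, a minimal sketch of how to do this from your local machine, assuming it runs an X server and using a placeholder login:

ssh -X your_ipsl_login@ciclad.ipsl.jussieu.fr
firefox https://rosetta.cines.fr &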

Submitting jobs

It's done using SLURM; you need to write a job script and submit it using sbatch:

sbatch myjob

You must specify in the header of the job which project resources you are using ("cin0391" in our case):

#SBATCH --account=cin0391
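
Once the job is submitted, you can follow and manage it with the standard SLURM commands, for instance:

# list your queued and running jobs
squeue -u $USER
# cancel a job, using the job ID reported by sbatch
scancel <job_id>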

Example of an MPI job to launch a simulation

#!/bin/bash
#SBATCH --job-name=job_mpi
#SBATCH --account=cin0391
### GENOA nodes accommodate 96 cores 
#SBATCH --constraint=GENOA
### Number of Nodes to use
#SBATCH --nodes=1
### Number of MPI tasks per node
#SBATCH --ntasks-per-node=48 
### Number of OpenMP threads per MPI task
#SBATCH --cpus-per-task=1
#SBATCH --threads-per-core=1
###SBATCH --exclusive
#SBATCH --output=job_mpi_%A.out
#SBATCH --time=00:45:00 

### source the environment modules used to compile the model:
source ../trunk/LMDZ.COMMON/arch.env 
ulimit -s unlimited

srun --cpu-bind=threads --label gcm_96x96x78_phyvenus_para.e > gcm.out 2>&1
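
Note that with 48 MPI tasks of 1 thread each, this example uses only half of a 96-core GENOA node; you can raise --ntasks-per-node (up to 96) to fill the node, provided the model's domain decomposition allows it.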

Example of a mixed MPI/OpenMP job to launch a simulation

#!/bin/bash
#SBATCH --job-name=job_mpi_omp
#SBATCH --account=cin0391
### GENOA nodes accommodate 96 cores 
#SBATCH --constraint=GENOA
### Number of Nodes to use
#SBATCH --nodes=1
### Number of MPI tasks per node
#SBATCH --ntasks-per-node=24 
### Number of OpenMP threads per MPI task
#SBATCH --cpus-per-task=4
#SBATCH --threads-per-core=1
###SBATCH --exclusive
#SBATCH --output=job_mpi_omp_%A.out
#SBATCH --time=00:30:00 

### source the environment modules used to compile the model:
source ../trunk/LMDZ.COMMON/arch.env 
ulimit -s unlimited

### OMP_NUM_THREADS value must match "#SBATCH --cpus-per-task"
export OMP_NUM_THREADS=4
export OMP_STACKSIZE=400M

srun --cpu-bind=threads --label gcm_64x48x54_phymars_para.e > gcm.out 2>&1
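
Here 24 MPI tasks × 4 OpenMP threads = 96 cores, i.e. one full GENOA node. If you change the decomposition, remember that OMP_NUM_THREADS must remain equal to the --cpus-per-task value.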