Using Adastra

This page provides a summary of examples and tools designed to help you get acquainted with the Adastra environment.

Warning

For now (at least until 01/05/2023), access to Adastra is only for computations on the GPU (MI200) partition!

Nevertheless, it is also a way to access the "old" files that we stored at CINES when using Occigen within the framework of the "Atmosphères Planétaires" GENCI project.

A couple of pointers

  • Connecting to Adastra: We have retained the group and login credentials from our previous setup on Occigen. To connect to Adastra, you first need to go through the LMD gateway (hakim) or the IPSL (Ciclad/Spirit) gateway (an SSH configuration shortcut is sketched just after this list) and then
ssh your_cines_login@adastra.cines.fr

You will then probably want to switch projects using the myproject command, e.g. to switch to "lmd1167" (the old "Atmosphères Planétaires" GENCI project)

myproject -a lmd1167

and to switch to "cin0391" (the 2023-2024 "Atmosphères Planétaires" GENCI project)

myproject -a cin0391

WARNING: when you switch projects, you also switch your HOME directory, etc.

To get all the info about dedicated environment variables (e.g. paths to SCRATCH, STORE, etc.) you can use

myproject -c
  • Changing the password of your CINES account

When your password is close to expiring, CINES asks you to change it on this website: https://rosetta.cines.fr

Please note that you can access this website only if you are on a machine that you have declared as a gateway for Adastra. At LMD, we have generally declared hakim.lmd.jussieu.fr (aka ssh-out) and ciclad.ipsl.jussieu.fr as gateway machines. Hakim doesn't have any browser installed, but you can launch Firefox on Ciclad and connect to the Rosetta website. If that doesn't work, you will have to email svp@cines.fr

  • Link to the Adastra technical documentation: https://dci.dci-gitlab.cines.fr/webextranet/
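
If you connect often, a convenient shortcut is to let SSH do the gateway hop for you with the ProxyJump option (available in reasonably recent OpenSSH clients). Below is a minimal sketch of a ~/.ssh/config entry, assuming hakim.lmd.jussieu.fr as the gateway and placeholder logins that you should replace with your own:

# ~/.ssh/config (sketch; adapt host names and logins to your own setup)
Host adastra
    HostName adastra.cines.fr
    User your_cines_login
    ProxyJump your_lmd_login@hakim.lmd.jussieu.fr

With this in place, "ssh adastra" goes through the gateway automatically.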

Submitting jobs

This is done using SLURM; you need to write a job script and submit it using sbatch

sbatch myjob

You must specify in the job header which project's resources you are using ("cin0391" in our case):

#SBATCH --account=cin0391
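
Once a job has been submitted, the usual SLURM commands are available to keep an eye on it (nothing Adastra-specific here; the job ID below is just a placeholder):

squeue -u $USER      # list your pending and running jobs
scancel 123456       # cancel a job, given its job ID
sacct -j 123456      # accounting information once the job has finished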

Example of an MPI job to launch a simulation

#!/bin/bash
#SBATCH --job-name=job_mpi
#SBATCH --account=cin0391
### GENOA nodes accommodate 96 cores 
#SBATCH --constraint=GENOA
### Number of Nodes to use
#SBATCH --nodes=1
### Number of MPI tasks per node
#SBATCH --ntasks-per-node=48 
### Number of OpenMP threads per MPI task
#SBATCH --cpus-per-task=1
#SBATCH --threads-per-core=1
###SBATCH --exclusive
#SBATCH --output=job_mpi_%A.out
#SBATCH --time=00:45:00 

#source env modules:
source ../trunk/LMDZ.COMMON/arch.env 
ulimit -s unlimited

srun --cpu-bind=threads --label gcm_96x96x78_phyvenus_para.e > gcm.out 2>&1
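
To use it, save the script under a name of your choice (here "job_mpi.slurm", a purely illustrative name) and submit it; the model output is redirected to gcm.out by the srun line, so you can follow it while the job runs:

sbatch job_mpi.slurm
tail -f gcm.out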

Example of a mixed MPI/OpenMP job to launch a simulation

#!/bin/bash
#SBATCH --job-name=job_mpi_omp
#SBATCH --account=cin0391
### GENOA nodes accommodate 96 cores 
#SBATCH --constraint=GENOA
### Number of Nodes to use
#SBATCH --nodes=1
### Number of MPI tasks per node
#SBATCH --ntasks-per-node=24 
### Number of OpenMP threads per MPI task
#SBATCH --cpus-per-task=4
#SBATCH --threads-per-core=1
###SBATCH --exclusive
#SBATCH --output=job_mpi_omp_%A.out
#SBATCH --time=00:30:00 

#source env modules:
source ../trunk/LMDZ.COMMON/arch.env 
ulimit -s unlimited

### OMP_NUM_THREADS value must match "#SBATCH --cpus-per-task"
export OMP_NUM_THREADS=4
export OMP_STACKSIZE=400M

srun --cpu-bind=threads --label gcm_64x48x54_phymars_para.e > gcm.out 2>&1
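
Note that SLURM normally exports the value of --cpus-per-task to the job environment as SLURM_CPUS_PER_TASK, so instead of hard-coding the thread count twice you can derive it from the allocation. A sketch (check that the variable is indeed set in your jobs before relying on it):

### Derive the OpenMP thread count from the SLURM allocation
export OMP_NUM_THREADS=${SLURM_CPUS_PER_TASK}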