Using MeSU

Revision as of 14:43, 13 December 2023 by Mlefevre (creation of the page about the usage of the MeSU computer)


This page provides information for users of the Sorbonne Université MeSU cluster.

How to access the cluster

You first need to open an account (this is, of course, reserved to people in labs affiliated with Sorbonne Université); to do so, follow the instructions on this page: https://sacado.sorbonne-universite.fr/mesu/index.php/accounts-and-access/

Once you have an account, you can ssh to the cluster:

ssh username@mesu.dsi.upmc.fr

For people not on Linux: https://sacado.sorbonne-universite.fr/mesu/index.php/usage/howtos/how-to-connect-to-mesu/
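To avoid typing the full hostname every time, you can declare a host alias in your SSH configuration. This is a sketch: the alias name mesu is a hypothetical choice, and username must be replaced by your MeSU login.

```
# ~/.ssh/config -- "mesu" is a hypothetical alias, "username" your MeSU login
Host mesu
    HostName mesu.dsi.upmc.fr
    User username
```

With this in place, `ssh mesu` is enough to connect.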

OS and disk space

As the welcome message will remind you:

               __  __      ____  _   _ 
              |  \/  | ___/ ___|| | | |
              | |\/| |/ _ \___ \| | | |
              | |  | |  __/___) | |_| |
              |_|  |_|\___|____/ \___/ 

  ⢎⡑ ⢀⡀ ⡀⣀ ⣇⡀ ⢀⡀ ⣀⡀ ⣀⡀ ⢀⡀   ⡇⢸ ⣀⡀ ⠄ ⡀⢀ ⢀⡀ ⡀⣀ ⢀⣀ ⠄ ⣰⡀ ⢠⡂
  ⠢⠜ ⠣⠜ ⠏  ⠧⠜ ⠣⠜ ⠇⠸ ⠇⠸ ⠣⠭   ⠣⠜ ⠇⠸ ⠇ ⠱⠃ ⠣⠭ ⠏  ⠭⠕ ⠇ ⠘⠤ ⠣⠭
 
      https://sacado.sorbonne-universite.fr/mesu
             mesu@sorbonne-universite.fr

 beta: 144x 24 Intel Haswell CPUs, 128 GB RAM
 gamma:  2x 12 Intel Haswell CPUs, 2 NVidia K5200, 256 GB RAM

The cluster runs Ubuntu Linux, and the HOME directory is quite limited in size. Most work should be done on the /scratchbeta disks.
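For example, a common pattern is to keep your working directory on the scratch filesystem and a symlink to it from the small HOME. This is a sketch: the project name myproject is a placeholder, and the SCRATCH_ROOT variable is only there to make the scratch location easy to override.

```shell
# Create a per-user working directory on the scratch filesystem.
# SCRATCH_ROOT defaults to the /scratchbeta disks mentioned above;
# "myproject" is a placeholder project name.
scratch_root="${SCRATCH_ROOT:-/scratchbeta}"
workdir="$scratch_root/$USER/myproject"
mkdir -p "$workdir"                      # make the directory on scratch
ln -sfn "$workdir" "$HOME/myproject"     # shortcut from HOME, refreshed if it already exists
echo "Working directory: $workdir"
```

Large model outputs then live on scratch while remaining one `cd ~/myproject` away.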

It is up to you to tailor your environment. By default it is quite bare: you must load the modules that give access to specific software, compilers, or libraries (and specific versions thereof). To see which modules are available:

module avail

To load a given module, here the Panoply software:

module load panoply

The list of available software is documented at https://sacado.sorbonne-universite.fr/mesu/index.php/usage/available-software/
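A few other standard module commands are useful day to day (shown here as a sketch; panoply is just the example module from above):

```
module list              # show the modules currently loaded
module show panoply      # display what loading this module would change
module unload panoply    # unload a single module
module purge             # unload everything and start from a clean environment
```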

Compiling the PCMs on MeSU

To do

Compiling the WRF on MeSU

To do

Environment:

# Intel compilers, MPI, and MKL
module load intel/intel-compilers-18.2/18.2
module load intel/intel-mpi/2018.2
module load intel/intel-cmkl-18.0/18.0
# NetCDF installed in a user's home directory (adjust to your own NetCDF build)
export NETCDF="/home/lefevrema/NETCDF/netcdf-4.0.1"
export NETCDF_INCDIR="/home/lefevrema/NETCDF/netcdf-4.0.1/include/"
export NETCDF_LIBDIR="/home/lefevrema/NETCDF/netcdf-4.0.1/lib/"
# Location of the Intel MPI binaries
export WHERE_MPI="/opt/dev/intel/2018_Update2/impi/2018.2.199/bin64"

Submit your job

The MeSU supercomputer uses a queuing system to match users' jobs with available computing resources.

Users submit their programs to the job scheduler (PBS), which maintains a queue of jobs and distributes them over the compute nodes according to server status, scheduling policies, and job parameters (number of compute nodes/cores, estimated execution time, required memory, etc.).

The interface with PBS is a text file (the PBS script) created by the user, which defines your job requirements and execution steps. This file is mainly composed of two sections:

  • the header in which you specify the job requirements (execution time, number of CPU cores to use, memory requirements…) in the form of PBS directives
  • the body in which you will write the commands to load specific software, define environment variables, and run your job.

More information: https://sacado.sorbonne-universite.fr/mesu/index.php/usage/howtos/job-submission/

Here is an example of a job script running on 24 CPUs:

#!/bin/bash
#PBS -q beta
#PBS -l select=1:ncpus=24
#PBS -l place=free:exclhost
#PBS -l walltime=23:59:00
#PBS -N RUN_GCM
#PBS -j oe

# Load your environment
source ../trunk/LMDZ.COMMON/arch/arch-ifort_MESOIPSL.env

# Go to job directory
cd $PBS_O_WORKDIR
cd ./RUN_DIR

# Remove the stack size limit (large model runs may otherwise crash with a stack overflow)
ulimit -s unlimited

mpirun gcm.e > gcm.out 2>&1

Once you have authored the corresponding PBS script file, submitting your job to the scheduler is done with the qsub command:

qsub ./myScript
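Once submitted, the job can be monitored with the standard PBS commands; qsub prints the job ID to use (1234567 below is only a placeholder):

```
qstat -u $USER      # list your jobs and their state (Q = queued, R = running)
qstat -f 1234567    # full details of one job (placeholder job ID)
qdel 1234567        # cancel a job
```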