[[Category:FAQ]]
 

Revision as of 13:26, 27 July 2022

== What is DYNAMICO? ==

DYNAMICO is a recent dynamical core for atmospheric general circulation models (GCM). It is based on an icosahedral hexagonal grid projected on the sphere, a hybrid pressure-based terrain-following vertical coordinate, second-order enstrophy-conserving finite-difference discretization and positive-definite advection.

DYNAMICO is written in Fortran and designed for massively parallel environments (using MPI and OpenMP). It has been coupled to a number of physics packages, such as the Earth LMDZ6 physics package (see https://lmdz-forge.lmd.jussieu.fr/mediawiki/LMDZPedia/index.php/Accueil and search there for the keyword DYNAMICO for related documentation), but also to the Mars, Venus and Generic versions of the Planetary Climate Models (PCM).

The DYNAMICO source code is freely available and can be downloaded using git:

<syntaxhighlight lang="bash">
git clone https://gitlab.in2p3.fr/ipsl/projets/dynamico/dynamico.git
</syntaxhighlight>

The DYNAMICO project page can be found at http://forge.ipsl.jussieu.fr/dynamico (be warned that the information there is somewhat out of date and relates to the earlier, now obsolete, svn version of the code).

== Installing and running DYNAMICO ==

Here we just describe how to compile and run DYNAMICO by itself, i.e. without coupling it to any physics package. This is essentially an exercise to check that the installation is correct, before moving on to the more complex (and complete!) case of compiling and running with a given physics package.

=== Prerequisites ===

There are a few prerequisites to installing and using DYNAMICO:

# An MPI library must be available (i.e. installed and ready to use).
# The BLAS and LAPACK libraries must be available.
# The XIOS library, compiled with that same MPI library, must also be available. Check out the [[The_XIOS_Library|XIOS library]] page for information on installing it.
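Before going further, it can be worth a quick sanity check that these prerequisites are visible from your shell. The sketch below is illustrative only: the tool names (your MPI wrapper may be ''mpiifort'' rather than ''mpif90'') and the XIOS location are assumptions to adapt to your machine.

<syntaxhighlight lang="bash">
# Illustrative prerequisite check; adapt tool names and paths to your machine
missing=0
for tool in mpif90 mpirun; do
    if command -v "$tool" >/dev/null 2>&1; then
        echo "$tool: found"
    else
        echo "$tool: NOT found"
        missing=$((missing+1))
    fi
done
# Assumes XIOS was built alongside, in ../XIOS (adjust if not)
if [ -f ../XIOS/lib/libxios.a ]; then
    echo "libxios.a: found"
else
    echo "libxios.a: NOT found"
    missing=$((missing+1))
fi
echo "$missing prerequisite(s) missing"
</syntaxhighlight>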

=== Downloading and compiling DYNAMICO ===

Using git

<syntaxhighlight lang="bash">
git clone https://gitlab.in2p3.fr/ipsl/projets/dynamico/dynamico.git
</syntaxhighlight>

will create a '''dynamico''' directory containing all the necessary source code. Note that it is advisable to place this directory alongside the '''XIOS''' library directory, because some relative paths in the dynamico arch*.env files assume this layout; if it is elsewhere, you will need to modify these files accordingly.

In the '''dynamico''' directory is the ''make_icosa'' compilation script, which is based on FCM and thus requires that adequate architecture ASCII files be available. The '''arch''' subdirectory contains examples for a few machines. Assuming you want to compile using the ''somemachine'' architecture files (i.e. the files ''arch/arch-somemachine.fcm'', ''arch/arch-somemachine.env'' and ''arch/arch-somemachine.path'' are available and contain the adequate information), you would run:

<syntaxhighlight lang="bash">
./make_icosa -parallel mpi_omp -with_xios -arch somemachine -job 8
</syntaxhighlight>

If compilation went well, you will find the executable '''icosa_gcm.exe''' in the '''bin''' subdirectory.

==== For the experts: more about the arch files and their content ====

TO BE WRITTEN
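In the meantime, here is a purely illustrative sketch; the module names and paths below are assumptions, not actual DYNAMICO requirements. The ''arch-somemachine.env'' file is simply sourced before compiling and running, and typically loads the required modules and sets up the environment:

<syntaxhighlight lang="bash">
# arch-somemachine.env -- hypothetical example, adapt to your machine
module purge
module load intel intelmpi hdf5 netcdf
# Assumes XIOS sits alongside the dynamico directory
export XIOS_DIR=$PWD/../XIOS
</syntaxhighlight>

Broadly speaking, the ''.fcm'' file holds the compiler names and flags and the ''.path'' file the include and library search paths; check the examples provided in '''arch''' for the exact syntax.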

==== For the experts: more about the make_icosa script ====

TO BE WRITTEN

=== Running a simple Held and Suarez test case ===

DYNAMICO comes with a simple test case corresponding to a Held and Suarez simulation. In the Held and Suarez benchmark (Held and Suarez, 1994, ''Bulletin of the American Meteorological Society'', 75, 1825-1830), the model physics is replaced by simple analytic forcings: Newtonian relaxation of temperature towards a prescribed zonally symmetric equilibrium profile, and Rayleigh friction damping the winds near the surface, which makes it a standard benchmark for dry dynamical cores. All the input files necessary to run this case can be found in the ''TEST_CASE/HELD_SUAREZ'' subdirectory.
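For reference, the equilibrium temperature profile prescribed by Held and Suarez (1994) is (values taken from their paper, not read from the DYNAMICO input files):

<math>
T_{eq}(\varphi,p) = \max\left\{200\,\mathrm{K},\;
\left[315\,\mathrm{K} - \Delta T_y \sin^2\varphi
- \Delta\theta_z \log\!\left(\frac{p}{p_0}\right)\cos^2\varphi\right]
\left(\frac{p}{p_0}\right)^{\kappa}\right\}
</math>

with <math>\Delta T_y = 60\,\mathrm{K}</math>, <math>\Delta\theta_z = 10\,\mathrm{K}</math>, <math>p_0 = 1000</math> hPa and <math>\kappa = 2/7</math>; temperature is relaxed towards this profile at a latitude- and height-dependent rate, while low-level winds are damped by Rayleigh friction.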

Assuming you want to run this configuration in a ''test__HELD_SUAREZ'' directory placed alongside the ''dynamico'' and ''XIOS'' directories, simply copy over all the XML and def files:

<syntaxhighlight lang="bash">
cp ../dynamico/TEST_CASE/HELD_SUAREZ/*def .
cp ../dynamico/TEST_CASE/HELD_SUAREZ/*xml .
</syntaxhighlight>

Along with the executable:

<syntaxhighlight lang="bash">
cp ../dynamico/bin/icosa_gcm.exe .
</syntaxhighlight>

The next step is to write a script (or job) to run on your computer. It is difficult to provide ready-made material here, since the details depend on your machine and batch scheduler. Note that the test case is designed assuming you will be using 10 MPI processes. As an illustrative example, here is a script that should work on the Occigen supercomputer:

<syntaxhighlight lang="bash">
#!/bin/bash
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=10
### set --threads-per-core=2 for hyperthreading
#SBATCH --threads-per-core=1
#SBATCH -J ICOSAGCM
#SBATCH --time=00:55:00
#SBATCH --output job_mpi.%j.out
#SBATCH --constraint=BDW28

module purge
source ../dynamico/arch.env

srun --resv-ports --kill-on-bad-exit=1 --mpi=pmi2 --label -n $SLURM_NTASKS icosa_gcm.exe > icosa_gcm.out 2>&1
</syntaxhighlight>

And here is another one that should work on Irene-Rome (assuming you are in project "gen10391"):

<syntaxhighlight lang="bash">
#!/bin/bash
# Partition to run on:
#MSUB -q rome
# Project to run on:
#MSUB -A gen10391
# Disks to use:
#MSUB -m scratch,work,store
# Job name:
#MSUB -r job_mpi
# Job standard output:
#MSUB -o job_mpi.%I
# Job standard error:
#MSUB -e job_mpi.%I
# Number of OpenMP threads (-c):
#MSUB -c 1
# Number of MPI tasks (-n):
#MSUB -n 40
# Number of nodes to use (-N):
#MSUB -N 1
# Max job run time (-T, in seconds):
#MSUB -T 10800

source ../dynamico/arch.env
ccc_mprun -l icosa_gcm.exe > icosa_gcm.out 2>&1
</syntaxhighlight>

If the run has successfully completed, the last lines of ''icosa_gcm.out'' should look something like:

<syntaxhighlight lang="bash">
 0:  GETIN restart_file_name = restart
 0:           masse     advec mass     rmsdpdt      energie   enstrophie     entropie     rmsv     mt.ang
 0: GLOB  -0.533E-14 0.000E+00  0.85973E+00    0.134E-01    0.653E-01    0.121E-01    0.539E+01    0.129E-01
 0: 
 0:  Time elapsed :    1390.77057500000
</syntaxhighlight>
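From a script, a quick way to check for completion is to look for that final timing line. The snippet below is illustrative only: it builds a sample log similar to the excerpt above rather than relying on an actual run; in a real post-run check you would grep ''icosa_gcm.out'' itself.

<syntaxhighlight lang="bash">
# Build a sample log like the excerpt above (for illustration only),
# then test for the final timing line that marks a completed run
printf '0:  Time elapsed :    1390.77057500000\n' > sample_icosa_gcm.out
if grep -q "Time elapsed" sample_icosa_gcm.out; then
    echo "run completed"
else
    echo "run did not complete"
fi
</syntaxhighlight>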