The DYNAMICO dynamical core
What is DYNAMICO?
DYNAMICO is a recent dynamical core for atmospheric general circulation models (GCM). It is based on an icosahedral hexagonal grid projected on the sphere, a hybrid pressure-based terrain-following vertical coordinate, second-order enstrophy-conserving finite-difference discretization and positive-definite advection.
DYNAMICO is written in Fortran and is meant to be used in a massively parallel environment (using MPI and OpenMP). It has been coupled to a number of physics packages, such as the Earth LMDZ6 physics package (see https://lmdz-forge.lmd.jussieu.fr/mediawiki/LMDZPedia/index.php/Accueil and search there for the keyword DYNAMICO to find related documentation), as well as the Mars, Venus and Generic Planetary Climate Models (PCM).
The DYNAMICO source code is freely available and can be downloaded using git:
git clone https://gitlab.in2p3.fr/ipsl/projets/dynamico/dynamico.git
The DYNAMICO project page can be found at http://forge.ipsl.jussieu.fr/dynamico (be warned that the information there is somewhat outdated and relates to earlier versions of the code, which were hosted on svn).
Installing and running DYNAMICO
Here we just describe how to compile and run DYNAMICO by itself, i.e. without coupling it to any physics. This is essentially an exercise to check that everything has been correctly installed, before moving on to the more complex (and complete!) case of compiling and running with a given physics package.
Prerequisites
There are a few prerequisites to installing and using DYNAMICO:
- An MPI library must be available (i.e. installed and ready to use)
- BLAS and LAPACK libraries must be available.
- The XIOS library, compiled with that same MPI library, must also be available. Check out the XIOS library page for some information on installing it. A quick way to check these prerequisites is sketched right after this list.
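As an illustration, the following commands give a rough sanity check of the prerequisites (the XIOS paths below are only an assumption, corresponding to a side-by-side installation as described in the next section; adapt them to your setup):
# check that the MPI Fortran compiler wrapper is available
which mpif90
# check that the compiled XIOS library and Fortran module can be found (illustrative paths)
ls ../XIOS/lib/libxios.a ../XIOS/inc/xios.mod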
Downloading and compiling DYNAMICO
Using git
git clone https://gitlab.in2p3.fr/ipsl/projets/dynamico/dynamico.git
will create a dynamico directory containing all the necessary source code. Note that it is advised to place this directory alongside the XIOS library directory, because some relative paths in the dynamico arch*.env files assume this is the case (if not, you will need to modify these files accordingly).
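For instance, an illustrative layout (the exact parent directory name is up to you, as long as the arch*.env and arch*.path files are consistent with it) would be:
somewhere/
  XIOS/        <- compiled XIOS library (with its lib/ and inc/ subdirectories)
  dynamico/    <- DYNAMICO sources obtained with git clone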
In the dynamico directory is the make_icosa compilation script, which is based on FCM and thus requires that adequate architecture ASCII files be available. The arch subdirectory contains examples for a few machines. Assuming you want to compile using somemachine architecture files (i.e. files arch/arch-somemachine.fcm, arch/arch-somemachine.env and arch/arch-somemachine.path are available and contain the adequate information), you would run:
./make_icosa -parallel mpi_omp -with_xios -arch somemachine -job 8
If compilation went well, you will find the executable icosa_gcm.exe in the bin subdirectory.
For the experts: more about the arch files and their content
TO BE WRITTEN
For the experts: more about the make_icosa script
To know more about possible options to the make_icosa script:
./make_icosa -h
Running a simple Held and Suarez test case
DYNAMICO comes with a simple test case corresponding to a Held and Suarez simulation. In a nutshell, the Held and Suarez benchmark replaces the full physics package with simple analytic forcings: Newtonian relaxation of temperature towards a prescribed zonally symmetric equilibrium state and Rayleigh friction of the low-level winds (Held, I. M. and Suarez, M. J., 1994: A proposal for the intercomparison of the dynamical cores of atmospheric general circulation models, Bull. Amer. Meteor. Soc., 75, 1825-1830). All the input files necessary to run that case can be found in the TEST_CASE/HELD_SUAREZ subdirectory.
Assuming you want to test this configuration in a test_HELD_SUAREZ directory placed alongside the dynamico and XIOS directories, simply copy over all the XML and def files:
cp ../dynamico/TEST_CASE/HELD_SUAREZ/*def .
cp ../dynamico/TEST_CASE/HELD_SUAREZ/*xml .
Along with the executable:
cp ../dynamico/bin/icosa_gcm.exe .
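At this point the test directory should contain the def files, the XML files and the executable; a quick illustrative check:
ls *.def *.xml icosa_gcm.exe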
The next step is to write a script (or job) to run on your computer. It is difficult to provide ready-made material here, since it depends on the machine and batch system you are using. Note that the test case is designed under the assumption that you will be using 10 MPI processes. As an illustrative example, here is a script that should work on the Occigen supercomputer:
#!/bin/bash
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=10
### set --threads-per-core=2 for hyperthreading
#SBATCH --threads-per-core=1
#SBATCH -J ICOSAGCM
#SBATCH --time=02:00:00
#SBATCH --output job_mpi.%j.out
#SBATCH --constraint=BDW28
module purge
source ../dynamico/arch.env
srun --resv-ports --kill-on-bad-exit=1 --mpi=pmi2 --label -n $SLURM_NTASKS icosa_gcm.exe > icosa_gcm.out 2>&1
And here is another one that should work on Irene-Rome (assuming you are in project "gen10391"):
#!/bin/bash
# Partition to run on:
#MSUB -q rome
# project to run on
#MSUB -A gen10391
# disks to use
#MSUB -m scratch,work,store
# Job name
#MSUB -r job_mpi
# Job standard output:
#MSUB -o job_mpi.%I
# Job standard error:
#MSUB -e job_mpi.%I
# number of OpenMP threads (option -c)
#MSUB -c 1
# number of MPI tasks (option -n)
#MSUB -n 40
# number of nodes to use (option -N)
#MSUB -N 1
# max job run time (option -T, in seconds)
#MSUB -T 7200
source ../dynamico/arch.env
ccc_mprun -l icosa_gcm.exe > icosa_gcm.out 2>&1
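If you are not running on one of these machines, the general idea is simply to launch icosa_gcm.exe with 10 MPI processes using your local batch system or MPI launcher. For instance, on a plain workstation (no batch system), something along the following lines may work, assuming your MPI installation provides the usual mpirun wrapper:
mpirun -np 10 ./icosa_gcm.exe > icosa_gcm.out 2>&1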
If the run has successfully completed then the last lines of icosa_gcm.out should be something like
0: GETIN restart_file_name = restart
0: masse advec mass rmsdpdt energie enstrophie entropie rmsv mt.ang
0: GLOB 0.111E-14 0.000E+00 0.79892E+00 0.128E-01 0.692E-01 0.116E-01 0.542E+01 0.128E-01
0:
0: Time elapsed : 4613.89145300000
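A quick way to check that the run completed (assuming the standard output was redirected to icosa_gcm.out, as in the scripts above) is to look for that final timing line:
grep "Time elapsed" icosa_gcm.out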
Moreover the following NetCDF output files should have been produced:
Ai.nc daily_output_native.nc restart.nc time_counter.nc
apbp.nc daily_output.nc start0.nc
where:
- Ai.nc, apbp.nc and time_counter.nc are unimportant
- start0.nc is a file containing the initial conditions of the simulation
- restart.nc is the output file containing the final state of the simulation
- daily_output_native.nc is the output file containing a time series of a selection of variables on the native (icosahedral) grid
- daily_output.nc is the output file containing a time series of a selection of variables re-interpolated on a regular longitude-latitude grid
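If the standard NetCDF utilities are installed, the content of the interpolated output file can be inspected with ncdump (the exact list of variables depends on the XML configuration):
ncdump -h daily_output.nc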
Mixed bag of comments about the run's setup and outputs
For those interested in more details about the key aspects and main parameters:
- The run control parameters are set in the run.def ASCII file, which is read at run time. This is where one specifies, for instance, the model resolution (parameter nbp: number of subdivisions of the main triangles, and llm: number of vertical levels), the time step (parameter day_step: number of steps per day) and the length of the run (parameter ndays: number of days to run). An illustrative excerpt is given after this list.
- The sub-splitting of the main rhombi (parameters nsplit_i and nsplit_j) controls the overall number of tiles (sub-domains). As a rule of thumb, when running in parallel (MPI) one wants as many sub-domains as available MPI processes. Since the icosahedron is made up of 10 rhombi, this implies that one should target using a total of 10*nsplit_i*nsplit_j processes.
- The outputs generated by XIOS are controlled via the XML files, in particular file_def_dynamico.xml
- and so much more...
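To illustrate the run.def parameters mentioned above, here is a minimal, purely illustrative excerpt (the values shown are only examples and not necessarily those of the distributed HELD_SUAREZ case; check the run.def provided with the test case for the actual settings):
# illustrative run.def excerpt (values are only examples)
nbp = 40          # number of subdivisions of the main triangles (horizontal resolution)
llm = 19          # number of vertical levels
day_step = 480    # number of time steps per day
ndays = 1         # length of the run, in days
nsplit_i = 1      # sub-splitting of each rhombus along the first direction
nsplit_j = 1      # sub-splitting of each rhombus along the second direction
                  # => 10*nsplit_i*nsplit_j = 10 tiles, matching the 10 MPI processes of the test case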