The DYNAMICO dynamical core
What is DYNAMICO ?
DYNAMICO is a recent dynamical core for atmospheric general circulation models (GCMs). It is based on an icosahedral-hexagonal grid projected onto the sphere, a hybrid pressure-based terrain-following vertical coordinate, a second-order enstrophy-conserving finite-difference discretization and positive-definite advection.
DYNAMICO is coded in Fortran and meant to be used in a massively parallel environment (using MPI and OpenMP). It has been coupled to a number of physics packages, such as the Earth LMDZ6 physics package (see https://lmdz-forge.lmd.jussieu.fr/mediawiki/LMDZPedia/index.php/Accueil and search there for the keyword DYNAMICO to find related documentation), but also to the planetary physics of the Mars, Venus and Generic Planetary Climate Models (PCMs).
The DYNAMICO source code is freely available and can be downloaded with git:
git clone https://gitlab.in2p3.fr/ipsl/projets/dynamico/dynamico.git
The DYNAMICO project page can be found at http://forge.ipsl.jussieu.fr/dynamico (be warned that the information there is somewhat outdated and refers to an earlier, now obsolete, svn version of the code).
Installing and running DYNAMICO
Here we just describe how to compile and run DYNAMICO by itself, i.e. without coupling it to any physics. This is essentially an exercise to check that it has been correctly installed, before moving on to the more complex (and complete!) case of compiling and running with a given physics package.
Prerequisites
There are a few prerequisites to installing and using DYNAMICO (a quick way to check that they are met is sketched after this list):
- An MPI library must be available (i.e. installed and ready to use)
- BLAS and LAPACK libraries must be available.
- The XIOS library, compiled with that same MPI library, must also be available. Check out the XIOS library page for some information on installing it.
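As a minimal sketch, the following commands can help verify these prerequisites on a typical Linux machine; the exact commands and paths (in particular the assumed location of the XIOS build, alongside the DYNAMICO sources) depend on your system and module environment, so treat them as assumptions to adapt.

# Hypothetical sanity checks -- adapt to your machine/modules
which mpif90 && mpif90 --version        # is an MPI Fortran compiler wrapper available?
ldconfig -p | grep -iE "blas|lapack"    # are BLAS/LAPACK visible to the linker?
ls ../XIOS/lib/libxios.a                # assumed XIOS build location, alongside the DYNAMICO sources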
Downloading and compiling DYNAMICO
Using git
git clone https://gitlab.in2p3.fr/ipsl/projets/dynamico/dynamico.git ICOSAGCM
will create an ICOSAGCM directory containing all the necessary source code. It is advised that this directory be placed alongside the XIOS library directory, because some relative paths in the DYNAMICO arch*.env files assume this layout; if it is not the case, you will need to modify these files accordingly.
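For illustration, assuming XIOS has already been installed under /your/path/trunk/ (the placeholder path used in the examples below), the recommended layout would be obtained with:

cd /your/path/trunk
ls    # should already contain: XIOS
git clone https://gitlab.in2p3.fr/ipsl/projets/dynamico/dynamico.git ICOSAGCM
ls    # should now contain: ICOSAGCM  XIOS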
In the ICOSAGCM directory is the make_icosa compilation script, which is based on FCM and thus requires that adequate architecture ASCII files be available. The arch subdirectory contains examples for a few machines. Assuming you want to compile using the somemachine architecture files (i.e. the files arch/arch-somemachine.fcm, arch/arch-somemachine.env and arch/arch-somemachine.path are available and contain the adequate information), you would run:
./make_icosa -parallel mpi_omp -with_xios -arch somemachine -job 8
If compilation goes well, you will find the executable icosa_gcm.exe in the bin subdirectory. If it fails, check whether OpenMP is available on your machine and, if not, try to compile with -parallel mpi only.
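For instance, a hypothetical MPI-only build (no OpenMP) would simply drop the _omp part of the -parallel option:

./make_icosa -parallel mpi -with_xios -arch somemachine -job 8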
Before running any example, be sure to increase the stack size limit to avoid segmentation faults at run time. Add this to your ~/.bashrc:
# This option removes the stack size limit
ulimit -s unlimited
Then reload your ~/.bashrc with:
source ~/.bashrc
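You can then check that the new limit is in effect (the command below should print "unlimited"):

ulimit -s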
For the experts: more about the arch files and their content
TO BE WRITTEN
For the experts: more about the make_icosa script
To learn more about the possible options of the make_icosa script:
./make_icosa -h
Running a simple Held and Suarez test case
DYNAMICO comes with a simple test case corresponding to a Held and Suarez simulation, i.e. the idealized dry benchmark proposed by Held and Suarez (1994), in which the full physics is replaced by a Newtonian relaxation of temperature towards a prescribed zonally symmetric equilibrium state and a Rayleigh damping of the low-level winds.
We will run this test case "without the physics", to verify that the dynamical core works on its own.
To do this, make a new folder "test_HELD_SUAREZ" alongside ICOSAGCM and XIOS:
cd /your/path/trunk/
mkdir test_HELD_SUAREZ
Then copy the .def files specific to this test case (they are under ICOSAGCM):
# Go where the .def files are
cd /your/path/trunk/ICOSAGCM/TEST_CASE/HELD_SUAREZ

# Copy the .def files into the test_HELD_SUAREZ directory
cp *def ../../../test_HELD_SUAREZ
The run.def file is specific to DYNAMICO. See this page for its description.
Do the same for the .xml files:
cd /your/path/trunk/ICOSAGCM/xml/DYNAMICO_XML
cp *xml ../../../test_HELD_SUAREZ

cd ..
cp iodef.xml ../../test_HELD_SUAREZ
Then, from the ICOSAGCM directory, copy the executable "icosa_gcm.exe" into the test directory test_HELD_SUAREZ (it is located in ICOSAGCM/bin):
cp bin/icosa_gcm.exe ../test_HELD_SUAREZ
or (simpler in the long term) add the directory containing the executable to your PATH:
export PATH=$PATH:~/lmdz/ICOSAGCM/bin:~/lmdz/ICOSA_LMDZ/bin # be sure to replace the path with your own lmdz installation folder
If, when running the model, you want a NetCDF file (.nc) containing all the data, you should edit the .xml file "file_def_dynamico.xml" at line 70 and change "false" to "true" for the "enabled" attribute. This will cause the "dynamico.nc" file to be created; its content is already re-interpolated from the DYNAMICO grid onto a regular longitude-latitude grid, which makes it directly usable with Ferret/Panoply.
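As a convenience, and assuming line 70 is indeed the line carrying that "enabled" attribute in your version of the file (check it first, as line numbers may shift between versions), the change could hypothetically be done in one command:

# Flip "false" to "true" on line 70 only
sed -i '70s/false/true/' file_def_dynamico.xml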
Everything is now ready to run the model. Go to test_HELD_SUAREZ, then use the Slurm command "sbatch" to submit a job to the cluster:
cd /your/path/trunk/test_HELD_SUAREZ
sbatch script_d_execution.slurm
Slurm script (example for spirit1):
#!/bin/bash
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=8
#SBATCH --cpus-per-task=1
#SBATCH --partition=zen4 # zen4: 64 cores/node and 240GB of memory
##SBATCH --partition=zen16 # zen16: 32 cores/node and 496GB of memory
#SBATCH -J job_mpi_omp
#SBATCH --time=0:20:00
#SBATCH --output %x.%j.out
source /your/path/trunk/ICOSAGCM/arch/arch-YOUR_ARCH.env
export OMP_NUM_THREADS=1
export OMP_STACKSIZE=400M
mpirun icosa_gcm.exe > icosa_gcm.out 2>&1
In this script, you should modify the path and "YOUR_ARCH" to match your architecture (for the source command). Note that the test case is designed assuming you will be using 10 MPI processes. Also note that OpenMP is not used here, as it is not functional for now (TO UPDATE).
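For reference, a hypothetical way to match that 10-process assumption in the Slurm header above would be to request 10 tasks instead of 8; the exact directives to use depend on your cluster, so adapt as needed:

#SBATCH --nodes=1
#SBATCH --ntasks-per-node=10
#SBATCH --cpus-per-task=1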
And here is another script that should work on Irene-Rome (assuming you are in project "gen10391"):
#!/bin/bash
# Partition to run on:
#MSUB -q rome
# project to run on
#MSUB -A gen10391
# disks to use
#MSUB -m scratch,work,store
# Job name
#MSUB -r job_mpi
# Job standard output:
#MSUB -o job_mpi.%I
# Job standard error:
#MSUB -e job_mpi.%I
# number of OpenMP threads (-c)
#MSUB -c 1
# number of MPI tasks (-n)
#MSUB -n 40
# number of nodes to use (-N)
#MSUB -N 1
# max job run time (-T, in seconds)
#MSUB -T 7200
source ../dynamico/arch.env
ccc_mprun -l icosa_gcm.exe > icosa_gcm.out 2>&1
To verify that the code is running properly, you can follow the "icosa_gcm.out" file directly:
tail -f icosa_gcm.out
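You can also scan the log for obvious problems; a minimal, hypothetical check (the exact keywords depend on the messages produced on your machine) is:

grep -iE "error|abort" icosa_gcm.out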
Once the code has finished running, something like this should appear at the end of icosa_gcm.out:
GETIN restart_file_name = restart
masse   advec mass   rmsdpdt   energie   enstrophie   entropie   rmsv   mt.ang
GLOB -0.999E-15 0.000E+00 0.79047E+01 0.110E-02 0.261E+00 0.155E-02 0.743E+01 0.206E-01

Time elapsed : 601.628763000000
Moreover the following NetCDF output files should have been produced:
Ai.nc daily_output_native.nc restart.nc time_counter.nc
apbp.nc daily_output.nc start0.nc dynamico.nc
where:
- Ai.nc, apbp.nc and time_counter.nc are unimportant
- start0.nc is a file containing the initial conditions of the simulation
- restart.nc is the output file containing the final state of the simulation
- daily_output_native.nc is the output file containing a time series of a selection of variables on the native (icosahedral) grid
- daily_output.nc is the output file containing a time series of a selection of variables re-interpolated on a regular longitude-latitude grid
- dynamico.nc is the output file containing all your simulation data, which can easily be plotted with Ferret, for example (a quick way to inspect these output files is sketched just after this list)
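As a minimal sketch, assuming the standard netCDF command-line tools are installed on your machine, the header (dimensions, variables, attributes) of any of these files can be inspected with ncdump:

ncdump -h dynamico.nc        # list dimensions, variables and attributes
ncdump -h daily_output.nc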
Using the restart.nc file to continue your simulation
If you want to use restart.nc to avoid restarting the simulation from scratch, here is the procedure to follow:
In the run.def file, for the first launch, set the variable "nqtot" to "1" instead of 0.
Then, run the model as usual.
Once the execution is complete, a restart.nc file will be created (in addition to all the other .nc files); rename it to "start.nc":
mv restart.nc start.nc
Next, modify the run.def file a second time: change "etat0 = held_suarez" to "etat0 = start_file" and add an additional line. You should then have these lines:
# etat0 = held_suarez (old line commented out)
etat0 = start_file
start_file_name = start
Note that even though your file is named "start.nc", it is "start" that needs to be specified in run.def (the .nc extension is already taken into account).
Then, run the model as usual with the same script.slurm and sbatch.
sbatch script.slurm
You should see that the iterations start where those of the previous simulation stopped, which you can check with this command:
tail -f icosa_gcm.out
If you want to do this again, remember to always use the newest restart.nc file as the starting point (renaming it to start.nc, and so on).
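Putting it all together, one hypothetical "continue the run" cycle (with the placeholder paths and script names used above, to be adapted to your setup) looks like:

cd /your/path/trunk/test_HELD_SUAREZ
mv restart.nc start.nc       # the final state of the previous run becomes the new start file
# run.def already contains: etat0 = start_file and start_file_name = start
sbatch script.slurm
tail -f icosa_gcm.out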
Mixed bag of comments about the run's setup and outputs
For those interested in more details about the key aspects and main parameters:
- The run control parameters are set in the run.def ASCII file, which is read at run time. This is where, for instance, one specifies the model resolution (parameters nbp: number of subdivisions of the main triangles, and llm: number of vertical levels), the time step (parameter day_step: number of steps per day) and the length of the run (parameter ndays: number of days to run); an illustrative excerpt is sketched after this list
- The sub-splitting of the main rhombi (parameters nsplit_i and nsplit_j) controls the overall number of tiles (sub-domains). As a rule of thumb, when running in parallel (MPI) one wants as many sub-domains as available MPI processes. Since the icosahedron is made up of 10 rhombi, this implies that one should target a total of 10*nsplit_i*nsplit_j processes.
- The outputs produced by XIOS are controlled via the XML files, namely file_def_dynamico.xml
- and so much more...
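For illustration only, a hypothetical run.def excerpt tying these parameters together might read as follows; the values shown are placeholders, not recommended settings, except that nsplit_i = nsplit_j = 1 is consistent with the 10 MPI processes mentioned above.

# resolution: nbp subdivisions of the main triangles, llm vertical levels (placeholder values)
nbp = 40
llm = 19
# time stepping and run length (placeholder values)
day_step = 480
ndays = 10
# domain splitting: 10*nsplit_i*nsplit_j sub-domains, i.e. 10 MPI processes here
nsplit_i = 1
nsplit_j = 1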
Running DYNAMICO with a physics package (Generic, Mars, Venus, Pluto)
Please see DYNAMICO with LMDZ physics for compiling DYNAMICO with any LMDZ physics, and refer to each physics package for a tutorial:
- Venus-DYNAMICO
- Mars-DYNAMICO
- Dynamico-giant