Quick Install and Run

In this page we give a hopefully exhaustive overview of the necessary prerequisites and steps to download, compile and run a simple simulation with the GCM in an "Early Mars" setup (i.e. a desert planet with a CO2 atmosphere) on a Linux computer.

Note that there is a dedicated install script that attempts to do all these steps (up to and including running the simulation), which you can obtain here: https://web.lmd.jussieu.fr/~lmdz/planets/install_lmdz_generic_earlymars.bash Automating the process is not trivial, as there are many subtle variants of Linux flavors and user account setups, so the script may fail in your case, but hopefully the information given in this page will help you solve any problems encountered.
== Prerequisites: Tools and Libraries ==

In order to use (i.e. compile and run) the GCM, one needs some tools and libraries installed. We list below a (minimal) set that you should check is available, and/or that you will first need to install on your machine. Note that this tutorial assumes you are on a native Linux OS or cluster.
  
 
=== Fortran compiler ===

The GCM source code is in Fortran. One thus needs a Fortran compiler to build (compile) the executable. The most easily available one (on Linux) is gfortran, and the examples discussed here assume it is the one used. You can check that you indeed have a gfortran compiler at hand with the following Bash command:

<syntaxhighlight lang="bash">
which gfortran
</syntaxhighlight>

which should return something like

<syntaxhighlight lang="bash">
/usr/bin/gfortran
</syntaxhighlight>

=== Subversion ===
 
The source code is managed using subversion (svn), which you'll need to download or update. Leaving aside the subtleties of svn and code organization for now, downloading the code amounts to the following:

<syntaxhighlight lang="bash">
svn checkout https://svn.lmd.jussieu.fr/Planeto/trunk --depth empty
cd trunk
svn update LMDZ.COMMON LMDZ.GENERIC
</syntaxhighlight>
  
 
As a side note: the source code that will be fetched by svn can also be browsed online here: https://trac.lmd.jussieu.fr/Planeto

Note: if the checkout command above doesn't work, you may also try replacing 'https' with 'http' in the URL.
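If you want to check that the download went through, svn can report on what was fetched; for instance (run from inside the trunk directory; `svn info` prints the repository URL and the revision of the working copy):

```bash
# Report the repository URL and checked-out revision of the LMDZ.COMMON directory
svn info LMDZ.COMMON
```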
  
 
=== FCM ===

The FCM (Flexible Configuration Management) tool is a suite of Perl scripts that help build and manage codes. We use a slightly modified version, which can be obtained using subversion (svn). Ideally you'll want to download it somewhere on your computer once and for all. To do this:

<syntaxhighlight lang="bash">
svn checkout https://forge.ipsl.jussieu.fr/fcm/svn/PATCHED/FCM_V1.2
</syntaxhighlight>

You'll then need to add the resulting FCM_V1.2/bin to your PATH environment variable so that the command "fcm" may be used from anywhere on your machine, e.g. by adding the following line to your .bashrc:

<syntaxhighlight lang="bash">
export PATH=$PATH:$HOME/FCM_V1.2/bin
</syntaxhighlight>

The assumption here is that the downloaded FCM_V1.2 directory is in your home ($HOME) directory. Adapt accordingly if not.
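The effect of such a PATH addition can be seen in any shell session; here is a minimal, self-contained illustration using a throwaway directory and a fake executable rather than the real FCM_V1.2/bin:

```bash
# Create a toy directory with a fake executable, append it to PATH,
# and check that the shell now finds the command by name alone.
demo_dir=$(mktemp -d)
printf '#!/bin/bash\necho hello from fcm-demo\n' > "$demo_dir/fcm-demo"
chmod +x "$demo_dir/fcm-demo"
export PATH=$PATH:$demo_dir
fcm-demo
```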
  
 
=== the NetCDF library ===

The GCM reads and writes its input and output files in NetCDF format, so a NetCDF library must be available. As this library is not quite standard, you'll probably have to install it yourself on your system (check out [[the netCDF library]] page for more). You can use the following home-made "install_netcdf4_hdf5_seq.bash" script to do so. For this, ensure that you are in your home directory:

<syntaxhighlight lang="bash">
mkdir netcdf
cd netcdf
wget -nv --no-check-certificate http://www.lmd.jussieu.fr/~lmdz/pub/script_install/install_netcdf4_hdf5_seq.bash
chmod u=rwx install_netcdf4_hdf5_seq.bash
./install_netcdf4_hdf5_seq.bash > netcdf.log 2>&1
</syntaxhighlight>

Compiling the library and its dependencies can take a while (well over 15 minutes; be patient).

Once this is done, check the file netcdf.log to verify that all went well.

You may also want to add its "bin" directory to your PATH environment variable by adding to your .bashrc a line of:

<syntaxhighlight lang="bash">
export PATH=$PATH:$HOME/netcdf/bin
</syntaxhighlight>

The assumption here is that you have run the "install_netcdf4_hdf5_seq.bash" script in a "netcdf" subdirectory of your home directory. Adapt accordingly if not.

As a side note: the NetCDF library provides a very simple command line tool (ncdump) to inspect the contents of NetCDF files, but you'll need more advanced visualization tools (e.g., Panoply, Python scripts, etc.; see the "Checking the Results" section further down this page) for more advanced post-processing of the outputs.
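Beyond reading netcdf.log, a quick sanity check that the library is in place (assuming the default install location used above) is to query the nc-config utility shipped with NetCDF:

```bash
# nc-config is installed alongside the library in netcdf/bin;
# --version reports the version of the NetCDF C library that was built
$HOME/netcdf/bin/nc-config --version
```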
 
  
 
=== the IOIPSL library ===

The IOIPSL (Input/Output IPSL) library is a library designed to handle both the reading of some input files used by the GCM (the *.def files, described further below) and the writing of some NetCDF output files.

==== Prior to a first compilation: ksh to bash conversion ====

Some of the IOIPSL install scripts are written in ksh (Korn shell). Given that most systems currently use Bash (Bourne Again Shell) as their command-line interpreter rather than ksh, you might need to install ksh on your system (assuming you have super-user privileges), e.g. on Linux-Ubuntu:

<syntaxhighlight lang="bash">
sudo apt install ksh
</syntaxhighlight>

Or, if that is not an option, change the occurrences in the package's scripts (ins_m_prec) from:

<syntaxhighlight lang="bash">
#!/bin/ksh
</syntaxhighlight>

to

<syntaxhighlight lang="bash">
#!/bin/bash
</syntaxhighlight>

==== Automated IOIPSL install script ====

Scripts to download and install the IOIPSL library can be found in the "ioipsl" subdirectory of the "LMDZ.COMMON" directory. Since we assume here that we're working with gfortran, the relevant one is "install_ioipsl_gfortran.bash". If your PATH environment variable already includes the path to your NetCDF distribution's bin directory (see previous section), then all you need to do is execute the script:

<syntaxhighlight lang="bash">
./install_ioipsl_gfortran.bash
</syntaxhighlight>

If all went well, the script should end with:

<pre>
OK: ioipsl library is in ...
</pre>

(for further details about the IOIPSL library and how to install it, use the Search Box at the top of this page)

== GCM Input Datafiles and Datasets ==

In order to run, the GCM needs some inputs, such as initial conditions (values of state variables), external inputs (e.g. optical properties of aerosols) and a simulation setup (e.g. specifications of how long to run, which parametrizations should be activated, etc.)

In the spirit of the illustrative example considered here (an "Early Mars" simulation), a set of the necessary input data may be downloaded with:

<syntaxhighlight lang="bash">
wget -nv --no-check-certificate http://www.lmd.jussieu.fr/~lmdz/planets/generic/bench_earlymars_32x32x15_b32x36.tar.gz
</syntaxhighlight>

Once unpacked ("tar xvzf bench_earlymars_32x32x15_b32x36.tar.gz"), the resulting "bench_earlymars_32x32x15_b32x36" directory will contain all that is needed, namely:

<pre>
callphys.def  gases.def  startfi.nc  traceur.def
datadir/      run.def    start.nc    z2sig.def
</pre>

* Initial condition NetCDF files '''start.nc''' and '''startfi.nc'''; the first containing initial condition values for the dynamics and the second initial condition values for the physics.
 
* A '''datadir''' directory containing external inputs (aerosol properties, stellar spectra, etc.)
* Some ASCII *.def files containing run parameters, namely:
# [[The_run.def_Input_File | run.def]] : "master def file" containing main run parameters
# [[The_callphys.def_Input_File | callphys.def]] : file containing flags and keys for the various physics parametrizations
# [[The_z2sig.def_Input_File | z2sig.def]] : file describing the sought vertical discretization
# [[The_traceur.def_Input_File | traceur.def]] : file specifying the tracer number and names
# [[The_gases.def_Input_File | gases.def]] : file specifying the list of gases (main and trace) in the atmosphere
  
 
== Compiling the GCM ==

Now that all the prerequisites are fulfilled, it is (almost!) time to compile the GCM.
  
 
=== Prior to a first compilation: setting up the target architecture files ===

Compiling the model is done using a dedicated Bash script, ''makelmdz_fcm'', located in the '''LMDZ.COMMON''' directory. This script relies on ''architecture files'', which contain information on which compiler to use, which compilation options to apply, where the relevant libraries are located, etc. In practice, one must thus create these ASCII text files in the '''arch/''' subdirectory of '''LMDZ.COMMON'''. The naming convention is rather straightforward: when the script ''makelmdz_fcm'' is run with the option '''-arch somename''', it will look for the files ''arch/arch-somename.env'', ''arch/arch-somename.path'' and ''arch/arch-somename.fcm''. Leaving aside a detailed description for later (see [[The_Target_Architecture_("arch")_Files|this page]]), here we mention that:
 
* the ''arch*.env'' is an optional file containing ''environment'' information, such as setting up environment variables or loading modules on some machines, e.g.

<syntaxhighlight lang="bash">
export NETCDF_HOME=/path/to/the/netcdf/distribution
</syntaxhighlight>

A more realistic (but more machine-specific) example of an '''arch*.env''' file using "recent" module commands, adapted for compilation and visualisation on a given supercomputer, would look more like the following:
 
<syntaxhighlight lang="bash">
module purge
module load GCC/10.3.0  OpenMPI/4.1.1
module load netCDF-Fortran/4.5.3
export NETCDF_INCDIR="/opt/ebsofts/netCDF-Fortran/4.5.3-gompi-2021a/include"
export NETCDFF_LIBDIR="/opt/ebsofts/netCDF-Fortran/4.5.3-gompi-2021a/lib"
</syntaxhighlight>

Note that the last two lines above specify the paths to the '''include''' and '''lib''' directories used on this particular system, and will certainly vary from system to system. Likewise, the exact module versions will most likely need to be adapted in your specific '''arch*.env''' file.
* the '''arch*.path''' is a mandatory file containing information relative to external libraries such as NetCDF and IOIPSL, e.g.

<syntaxhighlight lang="bash">
ROOT=$PWD

NETCDF_LIBDIR="-L${NETCDF_HOME}/lib"
NETCDF_LIB="-lnetcdf -lnetcdff"
NETCDF_INCDIR="-I${NETCDF_HOME}/include"

IOIPSL_INCDIR="-I$ROOT/../IOIPSL/inc"
IOIPSL_LIBDIR="-L$ROOT/../IOIPSL/lib"
IOIPSL_LIB="-lioipsl"
</syntaxhighlight>

Each library is referenced by a fixed identifier (NETCDF, IOIPSL, XIOS, ...) and 3 trailing strings: _LIBDIR, for the path to the library, _LIB, for the library name(s), and _INCDIR, for the path to the library's include directory.

* the '''arch*.fcm''' is a mandatory file containing information relative to the compiler and compilation options, e.g.

<syntaxhighlight lang="bash">
%COMPILER            gfortran
%LINK                gfortran
%AR                  ar
%MAKE                make
%FPP_FLAGS           -P -traditional
%FPP_DEF             NC_DOUBLE
%BASE_FFLAGS         -c -fdefault-real-8 -fdefault-double-8 -ffree-line-length-none -fno-align-commons
%PROD_FFLAGS         -O3
%DEV_FFLAGS          -O
%DEBUG_FFLAGS        -ffpe-trap=invalid,zero,overflow -fbounds-check -g3 -O0 -fstack-protector-all -finit-real=snan -fbacktrace
%MPI_FFLAGS
%OMP_FFLAGS
%BASE_LD
%MPI_LD
%OMP_LD
</syntaxhighlight>

Again, not going into a detailed description (follow [[The_Target_Architecture_("arch")_Files|this link]] for that), just note that each line corresponds to a keyword (starting with "%") followed by the relevant options. Here are a few of the main ones:
* %COMPILER: the compiler to use (here, gfortran)
* %BASE_FFLAGS: compiler options (always included)
* %PROD_FFLAGS: compilation flags included when makelmdz_fcm is run with the "-prod" option
* %DEBUG_FFLAGS: compilation flags included when makelmdz_fcm is run with the "-debug" option
* %BASE_LD: flags added at the linking step of the compilation
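Given the naming convention described above, you can also quickly list which target architectures are already configured in your checkout (a small sketch; run from LMDZ.COMMON, and assuming the arch/ layout just described):

```bash
# Each architecture "somename" corresponds to an arch/arch-somename.fcm file;
# strip the prefix and suffix to recover the usable -arch names
ls arch/arch-*.fcm | sed 's|.*/arch-||; s|\.fcm$||'
```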
  
=== Compiling a test case (early Mars) ===

To compile the GCM at the sought resolution for the Early Mars test case (in LMDZ.COMMON):

<syntaxhighlight lang="bash">
./makelmdz_fcm -arch local -p std -d 32x32x15 -b 32x36 gcm
</syntaxhighlight>
 
Here, we assume that you have generated the '''arch-local.*''' files as suggested in the previous section.

The options for ''makelmdz_fcm'' used here imply:

* '''-p std''': the GCM will use the "std" physics package (i.e. the generic physics)
* '''-d 32x32x15''': the GCM grid will be 32x32 in longitude x latitude, with 15 vertical levels
* '''-b 32x36''': the physics radiative transfer will be done using 32 bands in the IR and 36 in the visible
 
For a glimpse at all the possible ''makelmdz_fcm'' options and their meanings, run:

<syntaxhighlight lang="bash">
./makelmdz_fcm -h
</syntaxhighlight>

and/or check the dedicated [[The_makelmdz_fcm_GCM_Compilation_Script|makelmdz_fcm page]].

Upon successful compilation, the executable '''gcm_32x32x15_phystd_b32x36_seq.e''' should be generated in the '''bin''' subdirectory.
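A quick way to confirm that compilation indeed produced the executable (run from LMDZ.COMMON):

```bash
# The compiled binaries are placed in the bin/ subdirectory
ls -l bin/gcm_32x32x15_phystd_b32x36_seq.e
```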
  
 
== Running the GCM ==

To run your first simulation, you first need to copy (or move) the executable '''gcm_32x32x15_phystd_b32x36_seq.e''' to the directory containing the initial conditions and parameter files, e.g. '''bench_earlymars_32x32x15_b32x36''', and run it.

This is usually a two-step process. The (optional) first step is to source the environment architecture file (the very same that was used to compile the model), e.g. the ''arch-local.env'' file set up in the "Compiling the GCM" section.

The second step is to execute the model, e.g.:

<syntaxhighlight lang="bash">
./gcm_32x32x15_phystd_b32x36_seq.e > gcm.out 2>&1
</syntaxhighlight>

With this command line, the (text) output messages are redirected to a text file, '''gcm.out'''. It is convenient to keep this file for later inspection (e.g., to track down a bug). Without the redirection (running only '''./gcm_32x32x15_phystd_b32x36_seq.e'''), the outputs go directly to the screen.
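Putting the two steps together, a minimal sketch (assuming the bench directory was unpacked next to LMDZ.COMMON and that the environment file created earlier is ''arch-local.env''; adapt names and paths to your own setup):

```bash
cd bench_earlymars_32x32x15_b32x36
# optional: load the very same environment that was used at compile time
# (this path is an assumption based on the trunk/ layout described above)
source ../LMDZ.COMMON/arch/arch-local.env
# run the model, redirecting all text output to gcm.out
./gcm_32x32x15_phystd_b32x36_seq.e > gcm.out 2>&1
```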
  
 
== Checking the Results of a Simulation ==

Once the simulation is finished, you'll know that all went well ("everything is cool") if the last few lines of the standard text output read:

[[File:tsurf_benchmark_early_Mars.png|300px|thumb|Final surface temperature map of the benchmark simulation (plotted using Panoply).]]
[[File:water_ice_cloud_column_benchmark_early_Mars.png|300px|thumb|Final water ice cloud column map of the benchmark simulation (plotted using Panoply).]]

<pre>
  in abort_gcm
  ...
</pre>

If not, start looking for an error message and a way to fix the problem...

Apart from the standard text output messages from the GCM, which are mostly for monitoring and checking the simulation progress, the user will more likely be interested in the contents of the ''diagfi.nc'' file produced by the GCM, as it contains instantaneous values of the main model variables (atmospheric temperature, winds, etc.).

To help you check that you have successfully run the simulation described in this tutorial (early Mars benchmark, 32x32x15 resolution), we provide some graphs against which to evaluate your results.

In the plots shown here, we present maps of the surface temperature ('tsurf' variable) and of the water ice cloud column ('h2o_ice_col' variable), both plotted using Panoply.

Side note: there is a variety of freely available software that can be used to visualise the NetCDF ''diagfi.nc'' file, such as Panoply, Ferret, Ncview, Grads, Python, etc. (see more details in the [[Tool_Box | Tool Box section]]).
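For a first look from the command line (assuming the NetCDF tools installed earlier are on your PATH), you can list the dimensions and variables stored in the output file; the fields plotted here appear, for instance, as 'tsurf' and 'h2o_ice_col':

```bash
# Print only the header (dimensions, variables, attributes) of the model output
ncdump -h diagfi.nc
```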
  
 
== Taking Things to the Next Level ==

Natural next steps include, among others:

* post-processing and analysis of model outputs.

All these points and much more are detailed in the many pages of this site (do check out the menu on the left, and make intensive use of the site's search engine)!
[[Category:Generic-Model]]
[[Category:Generic-LMDZ]]

Latest revision as of 13:55, 23 July 2023

In this page we give a hopefully exhaustive enough overview of the necessary prerequisites and steps to download, compile and run a simple simulation with the GCM in an "Early Mars" setup (i.e. a desert planet with a CO2 atmosphere) on a Linux computer.

Note that there is a dedicated, install script, that attempts to do all these steps (up to and including running the simulation) that you can obtain here: https://web.lmd.jussieu.fr/~lmdz/planets/install_lmdz_generic_earlymars.bash Automating the process is not trivial as there are many subtle variants of Linux flavors and user account setups, so the script may fail in your case, but hopefully the information given in this page should help you solve the encountered problems.

Prerequisites: Tools and Libraries

In order to use (i.e. compile and run) the GCM, one needs to have some tools and installed libraries at hand. We list below a (minimal) set that you should check that is available and/or that you'll need to first install on your machine. Note that we assume in this tutorial that you are on a Linux native-OS/cluster.

Fortran compiler

The GCM source code is in Fortran. One thus needs a Fortran compiler to build (compile) the executable. The most easily available one (on Linux) is gfortran and examples discussed here will assume it is the one used. You can check that you indeed have a gfortran compiler at hand with the following Bash command:

which gfortran

which should return something like

/usr/bin/gfortran

Subversion

The source code is managed using subversion (svn), which you'll need to download or update. Leaving aside the subtleties of svn and code organization for now, downloading the code amounts to doing the following:

svn checkout https://svn.lmd.jussieu.fr/Planeto/trunk --depth empty
cd trunk
svn update LMDZ.COMMON LMDZ.GENERIC

As a side note: the source code that will be fetched by svn can also be browsed online here: https://trac.lmd.jussieu.fr/Planeto

Note: if the command line above doesn't work, you may also try to replace 'http' by 'https'.

FCM

The FCM (Flexible Configuration Management) tool is a suite of perl scripts to help building and managing codes. We use a slightly modified version which can be obtained using subversion (svn). Ideally you'll want to download it somewhere on your computer once in for all. To do this:

svn checkout https://forge.ipsl.jussieu.fr/fcm/svn/PATCHED/FCM_V1.2

You'll then need to add the resulting FCM_V1.2/bin to your PATH environment variable so that the command "fcm" may be used from anywhere on your machine. e.g. by adding the following line in your .bashrc:

export PATH=$PATH:$HOME/FCM_V1.2/bin

The assumption here is that the downloaded FCM_V1.2 directory is in your home ($HOME) directory. Adapt accordingly if not.

the NetCDF library

The GCM reads and writes input and output files in NetCDF format. Therefore a NetCDF library must be available. As this library is not quite standard you'll probably have to install it yourself on your system (check out the netCDF library page for more). You can use the following home-made "install_netcdf4_hdf5_seq.bash" script to do so. For this, ensure that you are in your home directory:

mkdir netcdf
cd netcdf
wget -nv --no-check-certificate http://www.lmd.jussieu.fr/~lmdz/pub/script_install/install_netcdf4_hdf5_seq.bash
chmod u=rwx install_netcdf4_hdf5_seq.bash
./install_netcdf4_hdf5_seq.bash > netcdf.log 2>&1

Compiling the library and dependencies can take a while (>>15 minutes; be patient). Once this is done, check file netcdf.log to verify that all went well. You may want to also add its "bin" directory to your PATH environment variable by adding in your .bashrc a line of:

export PATH=$PATH:$HOME/netcdf/bin

The assumption here is that you have run the "install_netcdf4_hdf5_seq.bash" script in a "netcdf" subdirectory of your home directory. Adapt accordingly if not.

As a side note: The NetCDF library provides a very simple command line tool (ncdump) to inspect the contents of NetCDF files, but you'll need more advanced visualization tools (e.g., Panoply, Python scripts, etc. - see further down this page in the "Checking the Results" section) for more advanced post-processing of the outputs.

the IOIPSL library

The IOIPSL (Input/Output IPSL) library is a library designed to handle both the reading of some input files used by the GCM (the *.def files which are described further below) and the writing of some NetCDF output files.

Prior to a first compilation: ksh to bash conversion

Some of the IOIPSL install scripts are written in ksh (Korn shell). Given that most systems currently use Bash (Bourne Again Shell) as their command-line interpreter and not ksh (Korn Shell), you might need to install ksh on your system (assuming you have super-user privileges), for e.g., on Linux-Ubuntu:

sudo apt install ksh

Or, if that is not an option, change the occurrences in the package's scripts (ins_m_prec) from:

#!/bin/ksh

to

#!/bin/bash

Automated IOIPSL install script

Scripts to download and install the IOIPSL library can be found in the "ioipsl" subdirectory of the "LMDZ.COMMON" library. Since here we assume we're working with gfortran, the relevant one is "install_ioipsl_gfortran.bash". If your PATH environment variable is already such that it includes the path to your NetCDF library distribution's bin directory (see previous section) then all you need to do is execute the script:

./install_ioipsl_gfortran.bash

If all went well the script should end with:

OK: ioipsl library is in ...

(for further details about the IOIPSL library and installing it, follow the link and/or use the Search Box at the top of this page)

GCM Input Datafiles and Datasets

In order to run, the GCM needs some inputs, such as initial conditions (values of state variables), external inputs (e.g. optical properties of aerosols) and simulation setup (e.g. specifications on how long to run, which parametrizations should be activated, etc.)

In the spirit of the illustrative example considered here (an "Early Mars" simulation), a set of necessary input data may be downloaded with:

wget -nv --no-check-certificate http://www.lmd.jussieu.fr/~lmdz/planets/generic/bench_earlymars_32x32x15_b32x36.tar.gz

Once unpacked ("tar xvzf bench_earlymars_32x32x15_b32x36.tar.gz") the resulting "bench_earlymars_32x32x15_b32x36" will contain all that is needed, namely:

callphys.def  gases.def  startfi.nc  traceur.def
datadir/      run.def    start.nc    z2sig.def
  • Initial condition NetCDF files start.nc and startfi.nc; the first containing initial condition values for the dynamics and the second initial condition values for the physics.
  • A datadir directory containing external inputs (aerosol properties, stellar spectra, etc.)
  • Some ASCII *.def files containing run parameters, namely:
  1. run.def : "master def file" containing main run parameters
  2. callphys.def : file containing flags and keys for the various physics parametrizations
  3. z2sig.def : file describing the sought vertical discretization
  4. traceur.def : file specifying the tracer number and names
  5. gases.def : file specifying the list of gases (main and trace) in the atmosphere

Compiling the GCM

Now that all the prerequisites are fulfilled, it is (almost!) time to compile the GCM

Prior to a first compilation: setting up the target architecture files

Compiling the model is done using a dedicated Bash script makelmdz_fcm located in the LMDZ.COMMON directory. This script however relies on architecture files. These files contain information on which compiler to use, what compilation options to use, where relevant libraries are located etc. In practice, one must thus create these ASCII text files in the arch/ subdirectory of LMDZ.COMMON. The naming convention is rather straightforward, when the script makelmdz_fcm is run with the option -arch somename, it will look for files arch/arch-somename.env, arch/arch-somename.path and arch/arch-somename.fcm. Leaving aside a detailed description for later (see this page), here we mention that:

  • the arch*.env is an optional file containing environment information, such as setting up environment variables or loading modules on some machines, e.g.
export NETCDF_HOME=/path/to/the/netcdf/distribution

A more realistic (but more specific) example of a arch*.env file using "recent" module commands, adapted for compilation and visualisation on a given supercomputer, would look more like the following:

module purge
module load GCC/10.3.0  OpenMPI/4.1.1
module load netCDF-Fortran/4.5.3
export NETCDF_INCDIR="/opt/ebsofts/netCDF-Fortran/4.5.3-gompi-2021a/include"
export NETCDFF_LIBDIR="/opt/ebsofts/netCDF-Fortran/4.5.3-gompi-2021a/lib"

Note that the last two lines above specify paths to the include and lib directories used on this system and will certainly vary from system to system. Likewise, the exact module versions will most likely need to be adapted in your specific arch*.env file.

  • the arch*.path is a mandatory file containing information relative to external libraries such as NetCDF and IOIPSL, e.g.
ROOT=$PWD

NETCDF_LIBDIR="-L${NETCDF_HOME}/lib"
NETCDF_LIB="-lnetcdf -lnetcdff"
NETCDF_INCDIR="-I${NETCDF_HOME}/include"

IOIPSL_INCDIR="-I$ROOT/../IOIPSL/inc"
IOIPSL_LIBDIR="-L$ROOT/../IOIPSL/lib"
IOIPSL_LIB="-lioipsl"

Each library is referenced by a fixed identifier (NETCDF, IOIPSL, XIOS, ...) and 3 trailing strings: _LIBDIR, for the path to the library, _LIB, for the library name(s), and _INCDIR for the path to the library's include directory.

  • the arch*.fcm is a mandatory file containing information relative to the compiler and compilation options, e.g.
%COMPILER            gfortran
%LINK                gfortran
%AR                  ar
%MAKE                make
%FPP_FLAGS           -P -traditional
%FPP_DEF             NC_DOUBLE
%BASE_FFLAGS         -c -fdefault-real-8 -fdefault-double-8 -ffree-line-length-none -fno-align-commons
%PROD_FFLAGS         -O3
%DEV_FFLAGS          -O
%DEBUG_FFLAGS        -ffpe-trap=invalid,zero,overflow -fbounds-check -g3 -O0 -fstack-protector-all -finit-real=snan -fbacktrace
%MPI_FFLAGS
%OMP_FFLAGS         
%BASE_LD     
%MPI_LD
%OMP_LD

Again, without going into a detailed description (follow this link for that), note that each line consists of a keyword (starting with "%") followed by the relevant options. A few of the main ones:

  •  %COMPILER: The compiler to use (here, gfortran)
  •  %BASE_FFLAGS: compiler options (always included)
  •  %PROD_FFLAGS: compilation flags to include if makelmdz_fcm is run with the "-prod" option
  •  %DEBUG_FFLAGS: compilation flags to include if makelmdz_fcm is run with the "-debug" option
  •  %BASE_LD: flags to add at the linking step of the compilation
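To make the role of these keywords concrete, here is a sketch (our illustration, not what makelmdz_fcm literally executes) of how the effective compile command is assembled from the .fcm entries above when the -prod option is used:

```shell
# Illustrative only: the keywords combine into a compile command as
# %COMPILER + %BASE_FFLAGS (always) + %PROD_FFLAGS (because "-prod"
# is assumed to have been passed to makelmdz_fcm).
COMPILER="gfortran"
BASE_FFLAGS="-c -fdefault-real-8 -fdefault-double-8 -ffree-line-length-none -fno-align-commons"
PROD_FFLAGS="-O3"
echo "${COMPILER} ${BASE_FFLAGS} ${PROD_FFLAGS} some_source_file.f90"
```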

Compiling a test case (early Mars)

To compile the GCM at the desired resolution for the Early Mars test case run (in LMDZ.COMMON):

./makelmdz_fcm -arch local -p std -d 32x32x15 -b 32x36 gcm

Here, we assume that you have generated the arch-local.* files as suggested in the previous section. The makelmdz_fcm options used here mean:

  • -p std: the GCM will use the "std" physics package (i.e. the generic physics)
  • -d 32x32x15: the GCM grid will be 32x32 in longitude x latitude, with 15 vertical levels.
  • -b 32x36: the physics radiative transfer will be done using 32 bands in the IR and 36 in the visible.

For a glimpse at all the possible makelmdz_fcm options and their meanings, run:

./makelmdz_fcm -h

and/or check the dedicated makelmdz_fcm page.

Upon successful compilation, the executable gcm_32x32x15_phystd_b32x36_seq.e should be generated in the bin subdirectory.

Running the GCM

To run your first simulation, you first need to copy (or move) the executable gcm_32x32x15_phystd_b32x36_seq.e to the directory containing the initial conditions and parameter files (e.g. bench_earlymars_32x32x15_b32x36) and run it there. This is usually a two-step process: the (optional) first step is to source the architecture environment file (the very same one that was used to compile the model), e.g.:

source ../LMDZ.COMMON/arch.env

The second step is to execute the model, e.g.,:

./gcm_32x32x15_phystd_b32x36_seq.e > gcm.out 2>&1

With this command line, the text output messages are redirected to a file, gcm.out. It is convenient to keep this file for later inspection (e.g., to track down a bug). Without the redirection (just ./gcm_32x32x15_phystd_b32x36_seq.e), the output goes directly to the screen.
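The two steps above can be bundled in a small launch script (a sketch; the file names match this tutorial and should be adapted to your own setup):

```shell
# Write a minimal launch script bundling the two run steps:
# sourcing the environment file used at compilation time, then
# running the executable with its output kept in gcm.out.
cat > run_gcm.sh <<'EOF'
#!/bin/bash
set -e
# step 1 (optional): load the compilation environment
source ../LMDZ.COMMON/arch.env
# step 2: run the model, keeping all text output in gcm.out
./gcm_32x32x15_phystd_b32x36_seq.e > gcm.out 2>&1
EOF
chmod +x run_gcm.sh
```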

Checking the Results of a Simulation

[Figure: Final surface temperature map of the benchmark simulation (plotted using Panoply).]
[Figure: Final water ice cloud column map of the benchmark simulation (plotted using Panoply).]

Once the simulation is finished, you'll know that all went well ("everything is cool") if the last few lines of the standard text output read:

 in abort_gcm
 Stopping in leapfrog
 Reason = Simulation finished
 Everything is cool

If not, start looking for an error message and a way to fix the problem...
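A quick way to automate this check is to grep the log for the final message; the helper below is a sketch (the function name is ours, and the sample log is fabricated to mimic a successful run):

```shell
# Hypothetical helper: succeed if a GCM log contains the
# "Everything is cool" message printed on normal termination.
gcm_run_succeeded() {
    grep -q "Everything is cool" "$1"
}

# Example with a fabricated log excerpt mimicking a successful run:
printf ' in abort_gcm\n Stopping in leapfrog\n Reason = Simulation finished\n Everything is cool\n' > sample_gcm.out
if gcm_run_succeeded sample_gcm.out; then
    echo "run OK"
else
    echo "run FAILED - inspect the log"
fi
```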

Apart from the standard text output messages from the GCM, which are mostly for monitoring and checking the simulation progress, the user will more likely be interested in checking the contents of the diagfi.nc file produced by the GCM, as it contains instantaneous values of the main model variables (atmospheric temperature, winds, etc.).


To check that you have run the simulation successfully, we provide some graphs against which to evaluate your results for the simulation described in this tutorial (early Mars benchmark, 32x32x15 resolution).

In the plots shown here, we present maps of the surface temperatures ('tsurf' variable) and the water ice cloud column ('h2o_ice_col' variable), both plotted using Panoply.

Side note: a variety of freely available software tools can be used to visualise the NetCDF diagfi.nc file, such as Panoply, Ferret, Ncview, GrADS, Python, etc. (see more details in the Tool Box section)
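For a quick first look without any graphical tool, the ncdump utility shipped with the NetCDF distribution can list the dimensions and variables stored in the file:

```shell
# Print the header (dimensions and variable names) of diagfi.nc using
# ncdump; print a hint if the file is not in the current directory.
if [ -f diagfi.nc ]; then
    ncdump -h diagfi.nc
else
    echo "diagfi.nc not found: run the GCM first, or cd to the run directory"
fi
```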

Taking Things to the Next Level

The short tutorial presented on this page is meant to give an overview of what is required to install and run the GCM, and of how to check the results of a simulation. Moving on to more intensive and problem-specific usage will require diving into additional topics, such as:

  • Selecting the appropriate inputs and run parameters for a given study.
  • Compiling and running in parallel (MPI and/or OpenMP) to obtain results in a reasonable time frame.
  • Post-processing and analysis of model outputs.

All these points and much more are detailed in the many pages of this site (do check out the menu on the left and make intensive use of the site's search engine)!