<?xml version="1.0"?>
<feed xmlns="http://www.w3.org/2005/Atom" xml:lang="en">
		<id>http://lmdz-forge.lmd.jussieu.fr/mediawiki/Planets/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Emillour</id>
		<title>Planets - User contributions [en]</title>
		<link rel="self" type="application/atom+xml" href="http://lmdz-forge.lmd.jussieu.fr/mediawiki/Planets/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Emillour"/>
		<link rel="alternate" type="text/html" href="http://lmdz-forge.lmd.jussieu.fr/mediawiki/Planets/index.php/Special:Contributions/Emillour"/>
		<updated>2026-05-15T16:11:31Z</updated>
		<subtitle>User contributions</subtitle>
		<generator>MediaWiki 1.27.7</generator>

	<entry>
		<id>http://lmdz-forge.lmd.jussieu.fr/mediawiki/Planets/index.php?title=Quick_Install_and_Run&amp;diff=3268</id>
		<title>Quick Install and Run</title>
		<link rel="alternate" type="text/html" href="http://lmdz-forge.lmd.jussieu.fr/mediawiki/Planets/index.php?title=Quick_Install_and_Run&amp;diff=3268"/>
				<updated>2026-05-15T14:33:02Z</updated>
		
		<summary type="html">&lt;p&gt;Emillour: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;In this page we give a hopefully exhaustive overview of the prerequisites and steps needed to download, compile and run a simple simulation with the GCM in an &amp;quot;Early Mars&amp;quot; setup (i.e. a desert planet with a CO2 atmosphere) on a Linux computer.&lt;br /&gt;
&lt;br /&gt;
Note that there is a dedicated install script that attempts to do all these steps (up to and including running the simulation), which you can obtain here: https://web.lmd.jussieu.fr/~lmdz/planets/generic/install_scripts/install_lmdz_generic_earlymars.bash&lt;br /&gt;
Automating the process is not trivial, as there are many subtle variants of Linux flavors and user account setups, so the script may fail in your case; hopefully the information given in this page will then help you solve the problems you encounter.&lt;br /&gt;
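For reference, fetching and running this install script could look like the following (assuming wget is available):&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
# download the install script (URL given above)&lt;br /&gt;
wget https://web.lmd.jussieu.fr/~lmdz/planets/generic/install_scripts/install_lmdz_generic_earlymars.bash&lt;br /&gt;
# run it; it attempts all steps up to and including the test simulation&lt;br /&gt;
bash install_lmdz_generic_earlymars.bash&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;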
&lt;br /&gt;
Note also that on some clusters (at least the ones we know of and extensively use, e.g. [[Using Adastra|Adastra]], [[Using the MESOIPSL cluster|MESOIPSL]], [[Using MeSU|MeSU]] or [[Using Irene Rome|Irene]]) some of the steps below may be skipped because the needed compilers and libraries are known and at hand. &lt;br /&gt;
&lt;br /&gt;
== Prerequisites: Tools and Libraries ==&lt;br /&gt;
In order to use (i.e. compile and run) the GCM, one needs to have some tools and libraries at hand. We list below a (minimal) set; check that each item is available on your machine, and install the ones that are missing. Note that we assume in this tutorial that you are on a native Linux OS or cluster.&lt;br /&gt;
&lt;br /&gt;
===  Fortran compiler ===&lt;br /&gt;
The GCM source code is in Fortran. One thus needs a Fortran compiler to build (compile) the executable.&lt;br /&gt;
The most easily available one (on Linux) is gfortran, and the examples discussed here will assume it is the one used.&lt;br /&gt;
You can check that you indeed have a gfortran compiler at hand with the following Bash command:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
which gfortran&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
which should return something like&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
/usr/bin/gfortran&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
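&lt;br /&gt;
Since some compilation options discussed further below depend on the gfortran version (see the &amp;quot;Known issues&amp;quot; section), it may be worth checking it right away:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
gfortran --version&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;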
&lt;br /&gt;
=== Subversion ===&lt;br /&gt;
The source code is managed using subversion (svn), which you'll therefore need in order to download or update the code. Leaving aside the subtleties of svn and code organization for now, downloading the code amounts to the following:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
svn checkout https://svn.lmd.jussieu.fr/Planeto/trunk --depth empty&lt;br /&gt;
cd trunk&lt;br /&gt;
svn update LMDZ.COMMON LMDZ.GENERIC&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
As a side note: the source code that will be fetched by svn can also be browsed online here: https://trac.lmd.jussieu.fr/Planeto&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--Note: if the command line above doesn't work, you may also try to replace 'http' by 'https'.--&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Git === &lt;br /&gt;
&lt;br /&gt;
Alternatively to svn, you can use [[Git usage|git to download the source code]]. &lt;br /&gt;
&lt;br /&gt;
=== FCM ===&lt;br /&gt;
The FCM (Flexible Configuration Management) tool is a suite of Perl scripts that help build and manage codes. We use a slightly modified version which can be obtained using subversion (svn). Ideally you'll want to download it somewhere on your computer once and for all. To do this:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
svn checkout https://forge.ipsl.jussieu.fr/fcm/svn/PATCHED/FCM_V1.2&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
As of April 2026 the svn server on which this is distributed is swamped by thousands of requests from AI bots, causing issues for regular users, whose svn checkout is interrupted with error messages such as:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
svn: E170013: Unable to connect to repository at URL 'https://forge.ipsl.fr/fcm/svn/PATCHED/FCM_V1.2'&lt;br /&gt;
svn: E120108: Error running context: The server unexpectedly closed the connection.&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
As an alternative we have put a frozen version of the FCM source code online here:&lt;br /&gt;
https://web.lmd.jussieu.fr/~lmdz/planets/alternatives/FCM_V1.2.r12.tar.gz&lt;br /&gt;
&lt;br /&gt;
You'll then need to add the resulting FCM_V1.2/bin directory to your PATH environment variable so that the command &amp;quot;fcm&amp;quot; may be used from anywhere on your machine, e.g. by adding the following line to your .bashrc (the .bashrc file is a hidden configuration script in your home directory, ~/.bashrc, that runs whenever you start a new Bash shell):&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
export PATH=$PATH:$HOME/FCM_V1.2/bin&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
The assumption here is that the downloaded FCM_V1.2 directory is in your home ($HOME) directory. Adapt accordingly if not.&lt;br /&gt;
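After opening a new shell (or sourcing your ~/.bashrc in the current one), you can check that fcm is indeed found:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
source ~/.bashrc&lt;br /&gt;
# should print something like /home/yourlogin/FCM_V1.2/bin/fcm&lt;br /&gt;
which fcm&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;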
&lt;br /&gt;
=== the NetCDF library ===&lt;br /&gt;
The GCM reads and writes input and output files in NetCDF format, therefore a NetCDF library is required. Most of the clusters propose a NetCDF library that you can load before using the model. &lt;br /&gt;
&lt;br /&gt;
If this library is not available, you can install it yourself on your system (check out [[the netCDF library]] page for more details). You can use the home-made &amp;quot;install_netcdf4_hdf5_seq.bash&amp;quot; script to do so; for this, first ensure that you are in your home directory:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
mkdir netcdf&lt;br /&gt;
cd netcdf&lt;br /&gt;
wget -nv --no-check-certificate http://www.lmd.jussieu.fr/~lmdz/pub/script_install/install_netcdf4_hdf5_seq.bash&lt;br /&gt;
chmod u=rwx install_netcdf4_hdf5_seq.bash&lt;br /&gt;
./install_netcdf4_hdf5_seq.bash &amp;gt; netcdf.log 2&amp;gt;&amp;amp;1&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
Compiling the library and dependencies can take a while (&amp;gt;&amp;gt;15 minutes; be patient).&lt;br /&gt;
Once this is done, check the netcdf.log file to verify that all went well.&lt;br /&gt;
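For instance, a quick (if crude) way to spot problems in the log is:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
# look for error messages and check how the build ended&lt;br /&gt;
grep -i error netcdf.log&lt;br /&gt;
tail netcdf.log&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;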
You may want to also add its &amp;quot;bin&amp;quot; directory to your PATH environment variable by adding in your .bashrc a line of:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
export PATH=$PATH:$HOME/netcdf/bin&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
The assumption here is that you have run the &amp;quot;install_netcdf4_hdf5_seq.bash&amp;quot; script in a &amp;quot;netcdf&amp;quot; subdirectory of your home directory. Adapt accordingly if not.&lt;br /&gt;
&lt;br /&gt;
As a side note: the NetCDF library provides a very simple command line tool (ncdump) to inspect the contents of NetCDF files, but you'll need dedicated visualization tools (e.g., Panoply, Python scripts, etc.; see the &amp;quot;Checking the Results&amp;quot; section further down this page) for more advanced post-processing of the outputs.&lt;br /&gt;
&lt;br /&gt;
=== the IOIPSL library ===&lt;br /&gt;
&lt;br /&gt;
The IOIPSL (Input/Output IPSL) library is designed to handle both the reading of some input files used by the GCM (the *.def files which are described further below) and the writing of some NetCDF output files.&lt;br /&gt;
&lt;br /&gt;
==== Automated IOIPSL install script ====&lt;br /&gt;
Scripts to download and install the IOIPSL library can be found in the &amp;quot;ioipsl&amp;quot; subdirectory of the &amp;quot;LMDZ.COMMON&amp;quot; directory. Since we assume here that we're working with gfortran, the relevant one is &amp;quot;install_ioipsl_gfortran.bash&amp;quot;. If your PATH environment variable already includes the path to your NetCDF distribution's bin directory (see previous section), then all you need to do is execute the script:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
./install_ioipsl_gfortran.bash&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
If all went well the script should end with:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
OK: ioipsl library is in ...&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
''(for further details about [[The_IOIPSL_Library|the IOIPSL library]] and installing it, follow the link and/or use the Search Box at the top of this page)''&lt;br /&gt;
&lt;br /&gt;
== GCM Input Datafiles and Datasets ==&lt;br /&gt;
In order to run, the GCM needs some inputs, such as initial conditions (values of state variables), external inputs (e.g. optical properties of aerosols) and simulation setup (e.g. specifications on how long to run, which parametrizations should be activated, etc.)&lt;br /&gt;
&lt;br /&gt;
In the spirit of the illustrative example considered here (an &amp;quot;Early Mars&amp;quot; simulation), a set of necessary input data may be downloaded with:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
wget -nv --no-check-certificate https://web.lmd.jussieu.fr/~lmdz/planets/generic/reference_setups/reference_earlymars_64x48x26_b40x38.tar.gz&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
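To unpack the archive:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
tar xvzf reference_earlymars_64x48x26_b40x38.tar.gz&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;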
Once unpacked, the resulting &amp;quot;reference_earlymars_64x48x26_b40x38&amp;quot; directory will contain all that is needed, namely:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
callphys.def  gases.def  startfi.nc  traceur.def&lt;br /&gt;
datadir/      run.def    start.nc    z2sig.def&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
* Initial condition NetCDF files ''start.nc'' and ''startfi.nc''; the first containing initial condition values for the dynamics and the second initial condition values for the physics.&lt;br /&gt;
* A '''datadir''' directory containing external inputs (aerosol properties, stellar spectra, etc.)&lt;br /&gt;
* Some ASCII *.def files containing run parameters, namely:&lt;br /&gt;
# [[The_run.def_Input_File | run.def]] : &amp;quot;master def file&amp;quot; containing main run parameters&lt;br /&gt;
# [[The_callphys.def_Input_File | callphys.def]] : file containing flags and keys for the various physics parametrizations&lt;br /&gt;
# [[The_z2sig.def_Input_File | z2sig.def]] : file describing the sought vertical discretization&lt;br /&gt;
# [[The_traceur.def_Input_File | traceur.def]] : file specifying the tracer number and names&lt;br /&gt;
# [[The_gases.def_Input_File | gases.def]] : file specifying the list of gases (main and trace) in the atmosphere&lt;br /&gt;
&lt;br /&gt;
== Compiling the GCM ==&lt;br /&gt;
Now that all the prerequisites are fulfilled, it is (almost!) time to compile the GCM.&lt;br /&gt;
&lt;br /&gt;
=== Prior to a first compilation: setting up the target architecture files ===&lt;br /&gt;
Compiling the model is done using a dedicated Bash script, ''makelmdz_fcm'', located in the '''LMDZ.COMMON''' directory. This script, however, relies on ''architecture files''. These files contain information on which compiler to use, what compilation options to use, where relevant libraries are located, etc. In practice, one must thus create these ASCII text files in the '''arch/''' subdirectory of '''LMDZ.COMMON'''. The naming convention is rather straightforward: when the script ''makelmdz_fcm'' is run with the option '''-arch somename''', it will look for files ''arch/arch-somename.env'', ''arch/arch-somename.path'' and ''arch/arch-somename.fcm''. Leaving aside a detailed description for later (see [[The_Target_Architecture_(&amp;quot;arch&amp;quot;)_Files|this page]]), here we mention that:&lt;br /&gt;
* the ''arch*.env'' is an optional file containing ''environment'' information, such as setting up environment variables or loading modules on some machines, e.g.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
export NETCDF_HOME=/path/to/the/netcdf/distribution&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
A more realistic (but more specific) example of an '''arch*.env''' file using &amp;quot;recent&amp;quot; module commands, adapted for compilation and visualisation on a given supercomputer, would look more like the following:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
module purge&lt;br /&gt;
module load GCC/10.3.0  OpenMPI/4.1.1&lt;br /&gt;
module load netCDF-Fortran/4.5.3&lt;br /&gt;
export NETCDF_INCDIR=&amp;quot;/opt/ebsofts/netCDF-Fortran/4.5.3-gompi-2021a/include&amp;quot;&lt;br /&gt;
export NETCDFF_LIBDIR=&amp;quot;/opt/ebsofts/netCDF-Fortran/4.5.3-gompi-2021a/lib&amp;quot;&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
Note that the last two lines above specify paths to the '''include''' and '''lib''' directories used on this system and will certainly vary from system to system. Likewise, the exact module versions will most likely need to be adapted in your specific '''arch*.env''' file.&lt;br /&gt;
* the '''arch*.path''' is a mandatory file containing information relative to external libraries such as NetCDF and IOIPSL, e.g.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
ROOT=$PWD&lt;br /&gt;
&lt;br /&gt;
NETCDF_LIBDIR=&amp;quot;-L${NETCDF_HOME}/lib&amp;quot;&lt;br /&gt;
NETCDF_LIB=&amp;quot;-lnetcdf -lnetcdff&amp;quot;&lt;br /&gt;
NETCDF_INCDIR=&amp;quot;-I${NETCDF_HOME}/include&amp;quot;&lt;br /&gt;
&lt;br /&gt;
IOIPSL_INCDIR=&amp;quot;-I$ROOT/../IOIPSL/inc&amp;quot;&lt;br /&gt;
IOIPSL_LIBDIR=&amp;quot;-L$ROOT/../IOIPSL/lib&amp;quot;&lt;br /&gt;
IOIPSL_LIB=&amp;quot;-lioipsl&amp;quot;&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
Each library is referenced by a fixed identifier (NETCDF, IOIPSL, XIOS, ...) and 3 trailing strings: '''_LIBDIR''', for the path to the library, '''_LIB''', for the library name(s), and '''_INCDIR''' for the path to the library's ''include'' directory.&lt;br /&gt;
&lt;br /&gt;
* the '''arch*.fcm''' is a mandatory file containing information relative to the compiler and compilation options, e.g.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
%COMPILER            gfortran&lt;br /&gt;
%LINK                gfortran&lt;br /&gt;
%AR                  ar&lt;br /&gt;
%MAKE                make&lt;br /&gt;
%FPP_FLAGS           -P -traditional&lt;br /&gt;
%FPP_DEF             NC_DOUBLE&lt;br /&gt;
%BASE_FFLAGS         -c -fdefault-real-8 -fdefault-double-8 -ffree-line-length-none -fno-align-commons&lt;br /&gt;
%PROD_FFLAGS         -O3&lt;br /&gt;
%DEV_FFLAGS          -O&lt;br /&gt;
%DEBUG_FFLAGS        -ffpe-trap=invalid,zero,overflow -fbounds-check -g3 -O0 -fstack-protector-all -finit-real=snan -fbacktrace&lt;br /&gt;
%MPI_FFLAGS&lt;br /&gt;
%OMP_FFLAGS         &lt;br /&gt;
%BASE_LD     &lt;br /&gt;
%MPI_LD&lt;br /&gt;
%OMP_LD              &lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
Again, without going into a detailed description (follow [[The_Target_Architecture_(&amp;quot;arch&amp;quot;)_Files|this link]] for that), just note that each line corresponds to a keyword (starting with &amp;quot;%&amp;quot;) followed by the relevant options. A few of the main ones:&lt;br /&gt;
* %COMPILER: The compiler to use (here, gfortran)&lt;br /&gt;
* %BASE_FFLAGS: compiler options (always included)&lt;br /&gt;
* %PROD_FFLAGS: compilation flags to include if makelmdz_fcm is run with the &amp;quot;-prod&amp;quot; option&lt;br /&gt;
* %DEBUG_FFLAGS: compilation flags to include if makelmdz_fcm is run with the &amp;quot;-debug&amp;quot; option&lt;br /&gt;
* %BASE_LD: flags to add at the linking step of the compilation&lt;br /&gt;
&lt;br /&gt;
Note that if you are using a recent version of gfortran (version 10 or above), you have to add an extra option to %BASE_FFLAGS, namely '''-fallow-argument-mismatch'''.&lt;br /&gt;
&lt;br /&gt;
Also note that the '''LMDZ.COMMON/arch/''' directory contains many examples of arch files that you can re-use as is if you compile the model on one of our usual computing clusters (e.g. Spirit, Adastra, etc.). Just check the contents of the directory to see if your favorite computing cluster already has arch files.&lt;br /&gt;
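For example, to create the '''arch-local.*''' files used in the next section starting from an existing set (the ''somename'' below is a placeholder; pick whichever provided set is closest to your machine):&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
cd arch&lt;br /&gt;
cp arch-somename.env  arch-local.env&lt;br /&gt;
cp arch-somename.path arch-local.path&lt;br /&gt;
cp arch-somename.fcm  arch-local.fcm&lt;br /&gt;
# then edit these three files to match your compiler and library paths&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;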
&lt;br /&gt;
=== Compiling a test case (early Mars) ===&lt;br /&gt;
To compile the GCM at the sought resolution for the Early Mars test case run (in LMDZ.COMMON):&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
./makelmdz_fcm -arch local -p generic -d 64x48x26 -b 40x38 gcm &lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&amp;lt;!-- -s option is no more needed ; * '''-s 2''': the physics parametrizations will handle 2 radiatively active tracers (water ice and dust for the Early Mars setup) --&amp;gt;&lt;br /&gt;
Here, we assume that you have generated the '''arch-local.*''' files as per what is suggested in the previous section.&lt;br /&gt;
The options for ''makelmdz_fcm'' used here imply:&lt;br /&gt;
* '''-p generic''': the GCM will use the &amp;quot;generic&amp;quot; physics package&lt;br /&gt;
* '''-d 64x48x26''': the GCM grid will be 64x48 in longitude x latitude, with 26 vertical levels.&lt;br /&gt;
* '''-b 40x38''': the physics radiative transfer will be done using 40 bands in the IR and 38 in the visible.&lt;br /&gt;
For a glimpse at all the possible ''makelmdz_fcm'' options and their meanings, run:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
./makelmdz_fcm -h&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
and/or check the dedicated [[The_makelmdz_fcm_GCM_Compilation_Script|makelmdz_fcm page]].&lt;br /&gt;
&lt;br /&gt;
Upon successful compilation, the executable '''gcm_64x48x26_phygeneric_b40x38_seq.e''' should be generated in the '''bin''' subdirectory.&lt;br /&gt;
&lt;br /&gt;
=== Known issues ===&lt;br /&gt;
&lt;br /&gt;
If the compilation fails, it might be due to the options used in the arch file. &lt;br /&gt;
For example, if you are using gfortran prior to 10, you could get an error such as:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
gfortran: error: unrecognized command line option ‘-fallow-argument-mismatch’; did you mean ‘-Wno-argument-mismatch’?&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
This can be solved by removing the option '''-fallow-argument-mismatch''' from the arch.fcm file.&lt;br /&gt;
&lt;br /&gt;
If you are using a recent version of gfortran (10 or beyond) without the option '''-fallow-argument-mismatch''', the compilation will probably fail with an error such as:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
  136 |      .       idim_index,nvarid)&lt;br /&gt;
      |             2                                       &lt;br /&gt;
......&lt;br /&gt;
  211 |       ierr = NF_DEF_VAR (nid, &amp;quot;aire&amp;quot;, NF_DOUBLE, 2, id,nvarid)&lt;br /&gt;
      |                                                    1&lt;br /&gt;
Error: Rank mismatch between actual argument at (1) and actual argument at (2) (scalar and rank-1)&lt;br /&gt;
fcm_internal compile failed (256)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Adding the '''-fallow-argument-mismatch''' compilation option to the arch.fcm file solves the issue.&lt;br /&gt;
&lt;br /&gt;
== Running the GCM ==&lt;br /&gt;
To run your first simulation, you first need to copy (or move) the executable '''gcm_64x48x26_phygeneric_b40x38_seq.e''' to the directory containing the initial conditions and parameter files, e.g. '''reference_earlymars_64x48x26_b40x38''', and run it from there.&lt;br /&gt;
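For instance, assuming the GCM was compiled in '''LMDZ.COMMON''' and the reference setup was unpacked next to it (adapt the paths to your own layout):&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
cp LMDZ.COMMON/bin/gcm_64x48x26_phygeneric_b40x38_seq.e reference_earlymars_64x48x26_b40x38/&lt;br /&gt;
cd reference_earlymars_64x48x26_b40x38&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;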
This is usually a two-step process: the (optional) first step is to source the environment architecture file (the very same that was used to compile the model), e.g.,:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
source ../LMDZ.COMMON/arch.env&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
The second step is to execute the model, e.g.,:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
./gcm_64x48x26_phygeneric_b40x38_seq.e &amp;gt; gcm.out 2&amp;gt;&amp;amp;1&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
With this command line, the (text) output messages are redirected to a text file, '''gcm.out'''. It is convenient to keep this file for later inspection (e.g., to track a bug). If there is no redirection (only '''./gcm_64x48x26_phygeneric_b40x38_seq.e'''), then the outputs are written directly to the screen.&lt;br /&gt;
&lt;br /&gt;
== Checking the Results of a Simulation ==&lt;br /&gt;
Once the simulation is finished, you'll know that all went well (&amp;quot;everything is cool&amp;quot;) if the last few lines of the standard text output read:&lt;br /&gt;
[[File:tsurf_benchmark_early_Mars.png|300px|thumb|Final surface temperature map of the reference simulation (plotted using Panoply).]]&lt;br /&gt;
[[File:water_ice_cloud_column_benchmark_early_Mars.png|300px|thumb|Final water ice cloud column map of the reference simulation (plotted using Panoply).]]&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
 in abort_gcm&lt;br /&gt;
 Stopping in leapfrog&lt;br /&gt;
 Reason = Simulation finished &lt;br /&gt;
 Everything is cool&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
If not, start looking for an error message and a way to fix the problem...&lt;br /&gt;
&lt;br /&gt;
Apart from the standard text output messages from the GCM, which are mostly for monitoring and checking the simulation progress, the user will more likely be interested in checking the contents of the ''diagfi.nc'' file produced by the GCM, as it contains instantaneous values of the main model variables (atmospheric temperature, winds, etc.).&lt;br /&gt;
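&lt;br /&gt;
For a first quick look at the contents of this file, the ncdump tool mentioned earlier can be used, e.g.:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
# display only the header (dimensions, variables, attributes)&lt;br /&gt;
ncdump -h diagfi.nc&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;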
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
To help you check that you have successfully run the simulation, we provide some graphs to evaluate the results, for a simulation similar to the one described in this tutorial (early Mars, 32x32x15 resolution). TODO: update plots to current example&lt;br /&gt;
&lt;br /&gt;
In the plots shown here, we present maps of the surface temperatures ('tsurf' variable) and the water ice cloud column ('h2o_ice_col' variable), both plotted using Panoply.&lt;br /&gt;
&lt;br /&gt;
Side note: a variety of freely available software can be used to visualise the NetCDF ''diagfi.nc'' file, such as Panoply, Ferret, Ncview, GrADS, Python, etc. (see more details in the [[Tool_Box | Tool Box section]])&lt;br /&gt;
&lt;br /&gt;
== Taking Things to the Next Level ==&lt;br /&gt;
The short tutorial presented in this page is meant to give an overview of what is required to install and run the GCM, as well as how to check the results of a simulation. Moving on to more intensive and problem-specific usage will require diving into additional topics and aspects such as:&lt;br /&gt;
* Selecting the appropriate inputs and run parameters for a given study.&lt;br /&gt;
* Compiling and running in parallel (MPI and/or OpenMP) to obtain results in a reasonable time frame.&lt;br /&gt;
* Post-processing and analysis of model outputs.&lt;br /&gt;
All these points and much more are detailed in the many pages of this site (do check out the menu on the left, and make intensive use of the site's search engine)!&lt;br /&gt;
&lt;br /&gt;
[[Category:Generic-Model]]&lt;br /&gt;
[[Category:Generic-LMDZ]]&lt;/div&gt;</summary>
		<author><name>Emillour</name></author>	</entry>

	<entry>
		<id>http://lmdz-forge.lmd.jussieu.fr/mediawiki/Planets/index.php?title=The_IOIPSL_Library&amp;diff=3267</id>
		<title>The IOIPSL Library</title>
		<link rel="alternate" type="text/html" href="http://lmdz-forge.lmd.jussieu.fr/mediawiki/Planets/index.php?title=The_IOIPSL_Library&amp;diff=3267"/>
				<updated>2026-05-15T14:08:20Z</updated>
		
		<summary type="html">&lt;p&gt;Emillour: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;The IOIPSL (for Input Output IPSL) library is used by the GCM to read input parameters from the run.def text file and related *.def files. It can in fact do more, such as writing output NetCDF files, a feature not used by the Generic model's physics package.&lt;br /&gt;
&lt;br /&gt;
Just like any library, one only needs to install it once and can then use it by linking to it and pointing to its modules, in practice via the compilation options:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
-L/path/to/the/ioipsl/library/lib -lioipsl&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
-I/path/to/the/ioipsl/library/inc&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Installing the IOIPSL library ==&lt;br /&gt;
There are some dedicated scripts in '''LMDZ.COMMON/ioipsl''' that one can adapt and use, hopefully without too much trouble, using the pointers and information given here.&lt;br /&gt;
&lt;br /&gt;
=== Prerequisites ===&lt;br /&gt;
IOIPSL is written in Fortran, so you need to have a Fortran compiler at hand (gfortran in the following examples), as well as an available NetCDF library compiled using that same compiler.&lt;br /&gt;
&lt;br /&gt;
One of the IOIPSL install scripts uses ksh (Korn Shell), which is not always available (Bash, the Bourne Again Shell, is now the standard). So you might want to first install it, e.g. on Linux-Ubuntu:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
sudo apt install ksh&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
If that is not an option (e.g. you do not have super-user privileges to install ksh) then you will need to manually modify a file (see the section &amp;quot;Known problems and issues worth knowing about&amp;quot; below).&lt;br /&gt;
&lt;br /&gt;
=== Downloading the IOIPSL library sources ===&lt;br /&gt;
One can do this using svn (subversion):&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
svn checkout --username icmc_users --password icmc2022 https://forge.ipsl.fr/igcmg/svn/IOIPSL/trunk IOIPSL&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
where the trailing argument ''IOIPSL'' is customizable to whatever (non-existing) subdirectory name one wants the library put in.&lt;br /&gt;
&lt;br /&gt;
=== The makeioipsl_fcm compilation script and related FCM architecture files ===&lt;br /&gt;
The main script to compile the library is '''makeioipsl_fcm''', located in the distribution's top directory. It uses FCM (Flexible Configuration Management) to know about specific configuration options and set-up, and thus requires (just like the GCM) appropriate ''architecture files''. These files contain information on which compiler to use, what compilation options to use, where relevant libraries are located, etc. In practice one must thus create these ASCII text files in the '''arch/''' subdirectory. The naming convention is rather straightforward: when the script '''makeioipsl_fcm''' is run with the option '''-arch somename''', it will look for files '''arch/arch-somename.env''', '''arch/arch-somename.path''' and '''arch/arch-somename.fcm'''.&lt;br /&gt;
&lt;br /&gt;
TODO: DETAIL ARCH FILE CONTENTS (Or give link to dedicated page?)&lt;br /&gt;
&lt;br /&gt;
Note that one can (and should! At least that is what we recommend) use the same set of '''architecture files''' for the GCM and IOIPSL.&lt;br /&gt;
&lt;br /&gt;
Building the IOIPSL library then merely requires running '''makeioipsl_fcm''' with the adequate, mandatory '''-arch''' option (note that one can learn about all the possible options by running '''./makeioipsl_fcm -h'''), e.g.:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
./makeioipsl_fcm -arch gfortran&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Checking that the install was successful ===&lt;br /&gt;
If the previous step went well the '''IOIPSL/lib''' directory should contain the library:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
libioipsl.a&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
and the '''IOIPSL/inc''' directory should contain the following module files:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
calendar.mod   flincom.mod   histcom.mod  restcom.mod&lt;br /&gt;
defprec.mod    fliocom.mod   ioipsl.mod   stringop.mod&lt;br /&gt;
errioipsl.mod  getincom.mod  mathelp.mod&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Congratulations! You can now use the IOIPSL library.&lt;br /&gt;
&lt;br /&gt;
== Known problems and issues worth knowing about ==&lt;br /&gt;
* ksh is needed to compile &amp;quot;out of the box&amp;quot;; if ksh is not available then the workaround is to replace the line&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
#!/bin/ksh&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt; &lt;br /&gt;
with &lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
#!/bin/bash &lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
in the '''IOIPSL/ins_m_prec''' file, e.g. using the one-liner below (GNU sed assumed):&lt;br /&gt;
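&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
# swap the ksh shebang for a bash one, in place&lt;br /&gt;
sed -i 's|#!/bin/ksh|#!/bin/bash|' ins_m_prec&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;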
* Recent versions of IOIPSL fail to work properly if compiled with gfortran version 4.8.5 (see e.g. https://trac.lmd.jussieu.fr/Planeto/ticket/62 ) but work fine with more recent versions of the compiler (tested with versions 7.2+)&lt;br /&gt;
* As of April 2022 the IOIPSL distribution requires username/password authentication.&lt;br /&gt;
* As of April 2026 the svn server on which IOIPSL is distributed is swamped by thousands of requests from AI bots, causing issues for regular users, whose svn checkout is interrupted with error messages such as:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
svn: E170013: Unable to connect to repository at URL 'http://forge.ipsl.fr/igcmg/svn/IOIPSL/trunk'&lt;br /&gt;
svn: E120108: Error running context: The server unexpectedly closed the connection.&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
As an alternative we have put a frozen version (revision 7589) of the IOIPSL source code online here:&lt;br /&gt;
https://web.lmd.jussieu.fr/~lmdz/planets/alternatives/IOIPSL.r7589.tar.gz&lt;br /&gt;
&lt;br /&gt;
[[Category:WhatIs]]&lt;br /&gt;
[[Category:Generic-Model]]&lt;br /&gt;
[[Category:Mars-Model]]&lt;br /&gt;
[[Category:Venus-Model]]&lt;br /&gt;
[[Category:Titan-Model]]&lt;/div&gt;</summary>
		<author><name>Emillour</name></author>	</entry>

	<entry>
		<id>http://lmdz-forge.lmd.jussieu.fr/mediawiki/Planets/index.php?title=Other_GCM_Configurations_worth_knowing_about&amp;diff=3266</id>
		<title>Other GCM Configurations worth knowing about</title>
		<link rel="alternate" type="text/html" href="http://lmdz-forge.lmd.jussieu.fr/mediawiki/Planets/index.php?title=Other_GCM_Configurations_worth_knowing_about&amp;diff=3266"/>
				<updated>2026-05-15T09:58:43Z</updated>
		
		<summary type="html">&lt;p&gt;Emillour: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
= 3D lon-lat LMDZ setup =&lt;br /&gt;
&lt;br /&gt;
== early Mars ==&lt;br /&gt;
&lt;br /&gt;
It is already described in the [https://lmdz-forge.lmd.jussieu.fr/mediawiki/Planets/index.php/Quick_Install_and_Run ''Quick Install and Run''] section.&lt;br /&gt;
&lt;br /&gt;
== Earth with slab ocean ==&lt;br /&gt;
&lt;br /&gt;
TBD by Siddharth, once all changes have been committed (also need a validation of the model on Earth to be sure)&lt;br /&gt;
&lt;br /&gt;
== TRAPPIST-1e with photochemistry ==&lt;br /&gt;
&lt;br /&gt;
A temperate rocky planet in synchronous rotation around a low mass star.&lt;br /&gt;
&lt;br /&gt;
Here is an example to simulate the planet TRAPPIST-1e with an Earth atmosphere using the photochemical module of the GCM.&lt;br /&gt;
&lt;br /&gt;
To install the model and run it, follow [https://lmdz-forge.lmd.jussieu.fr/mediawiki/Planets/index.php/Quick_Install_and_Run ''Quick Install and Run''] but with the following changes:&lt;br /&gt;
&lt;br /&gt;
=== GCM Input Datafiles and Datasets ===&lt;br /&gt;
In section [https://lmdz-forge.lmd.jussieu.fr/mediawiki/Planets/index.php?title=Quick_Install_and_Run&amp;amp;action=edit&amp;amp;section=9 ''GCM Input Datafiles and Datasets''], download the TRAPPIST-1e files (instead of the early Mars files):&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
wget -nv --no-check-certificate https://web.lmd.jussieu.fr/~lmdz/planets/generic/reference_setups/bench_trappist1e_photochemistry_64x48x30_b38x36.tar.gz&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
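To unpack the archive:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
tar xvzf bench_trappist1e_photochemistry_64x48x30_b38x36.tar.gz&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;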
Once unpacked, you will find the same type of files as before, plus an additional folder containing the chemical network file:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
callphys.def  gases.def  startfi.nc  traceur.def&lt;br /&gt;
datadir/      run.def    start.nc    z2sig.def&lt;br /&gt;
chemnetwork/&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Compiling the GCM ===&lt;br /&gt;
==== Prior to a first compilation: setting up the target architecture files ====&lt;br /&gt;
The chemical solver requires the BLAS and LAPACK libraries, which need to be specified in the '''arch*.fcm''' file:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
%COMPILER            gfortran&lt;br /&gt;
%LINK                gfortran&lt;br /&gt;
%AR                  ar&lt;br /&gt;
%MAKE                make&lt;br /&gt;
%FPP_FLAGS           -P -traditional&lt;br /&gt;
%FPP_DEF             NC_DOUBLE LAPACK BLAS SGEMV=DGEMV SGEMM=DGEMM&lt;br /&gt;
%BASE_FFLAGS         -c -fdefault-real-8 -fdefault-double-8 -ffree-line-length-none -fno-align-commons&lt;br /&gt;
%PROD_FFLAGS         -O3&lt;br /&gt;
%DEV_FFLAGS          -O&lt;br /&gt;
%DEBUG_FFLAGS        -ffpe-trap=invalid,zero,overflow -fbounds-check -g3 -O0 -fstack-protector-all -finit-real=snan -fbacktrace&lt;br /&gt;
%MPI_FFLAGS&lt;br /&gt;
%OMP_FFLAGS         &lt;br /&gt;
%BASE_LD             -llapack -lblas&lt;br /&gt;
%MPI_LD&lt;br /&gt;
%OMP_LD              &lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Specific to photochemistry: set hard coded reactions ====&lt;br /&gt;
In '''/LMDZ.GENERIC/libf/aeronogeneric/chimiedata_h.F90''' you can hard-code reactions if needed, for instance because a reaction rate is very specific and falls outside the generic formula, or because a photochemical reaction does not use a regular cross-section.&lt;br /&gt;
&lt;br /&gt;
The TRAPPIST-1e test case uses 3 hard-coded reactions.&lt;br /&gt;
&lt;br /&gt;
*Uncomment the following lines to fill reaction species indexes:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;fortran&amp;quot;&amp;gt;&lt;br /&gt;
!===========================================================&lt;br /&gt;
!      r001 : HNO3 + rain -&amp;gt; H2O&lt;br /&gt;
!===========================================================&lt;br /&gt;
nb_phot = nb_phot + 1&lt;br /&gt;
indice_phot(nb_phot) = z3spec(1.0, indexchim('hno3'), 1.0, indexchim('h2o_vap'), 0.0, 1)&lt;br /&gt;
&lt;br /&gt;
!===========================================================&lt;br /&gt;
!      e001 : CO + OH -&amp;gt; CO2 + H &lt;br /&gt;
!===========================================================&lt;br /&gt;
nb_reaction_4 = nb_reaction_4 + 1&lt;br /&gt;
indice_4(nb_reaction_4) = z4spec(1.0, indexchim('co'), 1.0, indexchim('oh'), 1.0, indexchim('co2'), 1.0, indexchim('h'))&lt;br /&gt;
&lt;br /&gt;
!ccccccccccccccccccccccccccccccccccccccccccccccccccccccccccccccccccc&lt;br /&gt;
!     photodissociation of NO : NO + hv -&amp;gt; N + O&lt;br /&gt;
!ccccccccccccccccccccccccccccccccccccccccccccccccccccccccccccccccccc&lt;br /&gt;
nb_phot = nb_phot + 1&lt;br /&gt;
indice_phot(nb_phot) = z3spec(1.0, indexchim('no'), 1.0, indexchim('n'), 1.0, indexchim('o'))&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
*Uncomment the following lines to fill reaction rates:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;fortran&amp;quot;&amp;gt;&lt;br /&gt;
!----------------------------------------------------------------------&lt;br /&gt;
!     carbon reactions&lt;br /&gt;
!----------------------------------------------------------------------&lt;br /&gt;
     &lt;br /&gt;
!---  e001: oh + co -&amp;gt; co2 + h&lt;br /&gt;
&lt;br /&gt;
      nb_reaction_4 = nb_reaction_4 + 1&lt;br /&gt;
&lt;br /&gt;
!     joshi et al., 2006&lt;br /&gt;
&lt;br /&gt;
      do ilev = 1,nlayer&lt;br /&gt;
         k1a0 = 1.34*2.5*dens(ilev)                                  &amp;amp;&lt;br /&gt;
               *1/(1/(3.62e-26*t(ilev)**(-2.739)*exp(-20./t(ilev)))  &amp;amp;&lt;br /&gt;
               + 1/(6.48e-33*t(ilev)**(0.14)*exp(-57./t(ilev))))     ! typo in paper corrected&lt;br /&gt;
         k1b0 = 1.17e-19*t(ilev)**(2.053)*exp(139./t(ilev))          &amp;amp;&lt;br /&gt;
              + 9.56e-12*t(ilev)**(-0.664)*exp(-167./t(ilev))&lt;br /&gt;
         k1ainf = 1.52e-17*t(ilev)**(1.858)*exp(28.8/t(ilev))        &amp;amp;&lt;br /&gt;
                + 4.78e-8*t(ilev)**(-1.851)*exp(-318./t(ilev))&lt;br /&gt;
         x = k1a0/(k1ainf - k1b0)&lt;br /&gt;
         y = k1b0/(k1ainf - k1b0)&lt;br /&gt;
         fc = 0.628*exp(-1223./t(ilev)) + (1. - 0.628)*exp(-39./t(ilev))  &amp;amp;&lt;br /&gt;
            + exp(-t(ilev)/255.)&lt;br /&gt;
         fx = fc**(1./(1. + (alog(x))**2))                           ! typo in paper corrected&lt;br /&gt;
         k1a = k1a0*((1. + y)/(1. + x))*fx&lt;br /&gt;
         k1b = k1b0*(1./(1.+x))*fx&lt;br /&gt;
            &lt;br /&gt;
         v_4(ilev,nb_reaction_4) = k1a + k1b&lt;br /&gt;
      end do&lt;br /&gt;
&lt;br /&gt;
!----------------------------------------------------------------------&lt;br /&gt;
!     washout r001 : HNO3 + rain -&amp;gt; H2O&lt;br /&gt;
!----------------------------------------------------------------------&lt;br /&gt;
&lt;br /&gt;
nb_phot = nb_phot + 1&lt;br /&gt;
      &lt;br /&gt;
rain_h2o  = 100.e-6&lt;br /&gt;
!rain_rate = 1.e-6  ! 10 days&lt;br /&gt;
rain_rate = 1.e-8&lt;br /&gt;
      &lt;br /&gt;
do ilev = 1,nlayer&lt;br /&gt;
   if (c(ilev,indexchim('h2o_vap'))/dens(ilev) &amp;gt;= rain_h2o) then&lt;br /&gt;
      v_phot(ilev,nb_phot) = rain_rate&lt;br /&gt;
   else&lt;br /&gt;
      v_phot(ilev,nb_phot) = 0.&lt;br /&gt;
   end if&lt;br /&gt;
end do&lt;br /&gt;
&lt;br /&gt;
!ccccccccccccccccccccccccccccccccccccccccccccccccccccccccccccccccccc&lt;br /&gt;
!     photodissociation of NO&lt;br /&gt;
!ccccccccccccccccccccccccccccccccccccccccccccccccccccccccccccccccccc&lt;br /&gt;
      &lt;br /&gt;
nb_phot = nb_phot + 1&lt;br /&gt;
      &lt;br /&gt;
colo3(nlayer) = 0.&lt;br /&gt;
!     ozone columns for other levels (molecule.cm-2)&lt;br /&gt;
do ilev = nlayer-1,1,-1&lt;br /&gt;
   colo3(ilev) = colo3(ilev+1) + (c(ilev+1,indexchim('o3')) + c(ilev,indexchim('o3')))*0.5*avocado*1e-4*((press(ilev) - press(ilev+1))*100.)/(1.e-3*zmmean(ilev)*g*dens(ilev))&lt;br /&gt;
end do&lt;br /&gt;
call jno(nlayer, c(nlayer:1:-1,indexchim('no')), c(nlayer:1:-1,indexchim('o2')), colo3(nlayer:1:-1), dens(nlayer:1:-1), press(nlayer:1:-1), sza, v_phot(nlayer:1:-1,nb_phot))&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
*Change the following lines to set the number of hard coded reactions:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;fortran&amp;quot;&amp;gt;&lt;br /&gt;
integer, parameter :: nphot_hard_coding = 2&lt;br /&gt;
integer, parameter :: n4_hard_coding    = 1&lt;br /&gt;
integer, parameter :: n3_hard_coding    = 0&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Compiling a test case (TRAPPIST-1e) ===&lt;br /&gt;
Change the following compiling option:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
-d 64x48x30 -b 38x36&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
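That is, assuming the same '''arch-local.*''' files as in the early Mars example, the full compile command (run in LMDZ.COMMON) becomes:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
./makelmdz_fcm -arch local -p generic -d 64x48x30 -b 38x36 gcm&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;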
&lt;br /&gt;
NB: changing option -b is mandatory, while with option -d the model will still run at lower or higher resolution (provided '''z2sig.def''' remains coherent with the number of altitude levels, i.e. with at least as many altitude levels defined as the number of levels requested).&lt;br /&gt;
&lt;br /&gt;
== TRAPPIST-1c in Venus-like conditions ==&lt;br /&gt;
&lt;br /&gt;
A warm rocky planet in synchronous rotation around a low mass star. Here we provide an '''example''' to simulate the atmosphere of Trappist-1c, assuming it evolved to a modern Venus-like atmosphere.&lt;br /&gt;
&lt;br /&gt;
The planetary parameters are taken from [https://arxiv.org/abs/2010.01074 Agol et al. 2021] and can be found in this table: [[Media:Planetary_parameters_Trappist1c.png]]&lt;br /&gt;
&lt;br /&gt;
First, install the model and run it following [[Quick Install and Run]], but instead of the ''Early Mars'' files, please download ''bench_trappist1c_64x48x50_b32x36'' using this command:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
wget -nv --no-check-certificate https://web.lmd.jussieu.fr/~lmdz/planets/generic/reference_setups/bench_trappist1c_64x48x50_b32x36.tar.gz&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Compiling a test case (TRAPPIST-1c) ===&lt;br /&gt;
Change the following compiling option:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
-d 64x48x50 -b 32x36&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
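As for the previous test cases, assuming '''arch-local.*''' files, the full compile command (run in LMDZ.COMMON) becomes:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
./makelmdz_fcm -arch local -p generic -d 64x48x50 -b 32x36 gcm&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;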
&lt;br /&gt;
&lt;br /&gt;
You can find the same type of ASCII *.def files as in the ''Early Mars'' case, but adapted to the planetary characteristics and orbital parameters of Trappist-1c.&lt;br /&gt;
In particular ''callphys.def'' contains the following changes:&lt;br /&gt;
&lt;br /&gt;
* The planet is assumed to be in 1:1 spin-orbit resonance, therefore&lt;br /&gt;
   diurnal = .false. &lt;br /&gt;
   tlocked = .true.&lt;br /&gt;
* The planet equilibrium temperature is about 342 K&lt;br /&gt;
   tplanet    = 341.9&lt;br /&gt;
* The host star is TRAPPIST1, with a stellar flux at 1 AU of 0.7527 [W m-2]&lt;br /&gt;
   stelspec_file = spectrum_TRAPPIST1_2022.dat&lt;br /&gt;
   tstellar = 2600.&lt;br /&gt;
   Fat1AU = 0.7527&lt;br /&gt;
* Fixed aerosol distribution, no radiatively active tracers (no evaporation/condensation of H2O and CO2):&lt;br /&gt;
   aerofixed     = .true.&lt;br /&gt;
   aeroco2       = .false.&lt;br /&gt;
   aeroh2o       = .false.&lt;br /&gt;
* No water cycle model, no water cloud formation or water precipitation, no CO2 condensation:&lt;br /&gt;
   water         = .false.&lt;br /&gt;
   watercond     = .false.&lt;br /&gt;
   waterrain     = .false.&lt;br /&gt;
   hydrology     = .false.&lt;br /&gt;
   nonideal      = .true.&lt;br /&gt;
   co2cond       = .false.&lt;br /&gt;
* Following [https://www.sciencedirect.com/science/article/pii/S0032063313002596?via%3Dihub Haus et al. 2015] a prescribed radiatively active cloud model is included. &lt;br /&gt;
It can be activated/deactivated with the flag ''aerovenus''.&lt;br /&gt;
   aerovenus = .true.&lt;br /&gt;
* Modes 1, 2, 2p, 3 and the &amp;quot;unknown&amp;quot; UV absorber can be included/excluded by setting the following keywords to true/false. The characteristics of each mode (e.g. effective radius, effective variance) are based on Venus Express/ESA observations and can be found in this table: [[Media:Table1 aerosolVenus trappist1c.png]]&lt;br /&gt;
   aerovenus1    = .true.&lt;br /&gt;
   aerovenus2    = .true.&lt;br /&gt;
   aerovenus2p   = .true.&lt;br /&gt;
   aerovenus3    = .true.&lt;br /&gt;
   aerovenusUV   = .true.&lt;br /&gt;
&lt;br /&gt;
The cloud model is prescribed between the 1 ''bar'' and 0.037 ''bar'' pressure layers. For each mode, the top/bottom pressure can be modified by hard-coding the model routine ''aerosol_opacity.F90''.&lt;br /&gt;
Below is an example for mode 1 particles, where the top and bottom pressure layers are prescribed at 0.1 bar and 1 bar, respectively:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;fortran&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
!       1. Initialization&lt;br /&gt;
          aerosol(1:ngrid,1:nlayer,iaer)=0.0&lt;br /&gt;
          p_bot = 1.e5 ! bottom pressure [Pa]&lt;br /&gt;
          p_top = 1.e4 ! top pressure [Pa]&lt;br /&gt;
          h_bot = 1.0e3 ! bottom scale height [m]&lt;br /&gt;
          h_top = 5.0e3 ! top scale height [m]&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
 &lt;br /&gt;
'''TO BE COMPLETED BY GABRIELLA'''&lt;br /&gt;
&lt;br /&gt;
== mini-Neptune GJ1214b ==&lt;br /&gt;
&lt;br /&gt;
A warm mini-Neptune&lt;br /&gt;
&lt;br /&gt;
'''TO BE COMPLETED BY BENJAMIN'''&lt;br /&gt;
&lt;br /&gt;
= 3D DYNAMICO setup =&lt;br /&gt;
&lt;br /&gt;
Due to the rich dynamical activity in their atmospheres (banded zonal jets, eddies, vortices, storms, equatorial oscillations, ...) resulting from multi-scale dynamical interactions, global climate modelling of the giant planets requires resolving the eddies arising from hydrodynamical instabilities, in order to correctly establish the planetary-scale jet regime. To this end, the Rossby deformation radius $$L_D$$, which is the length scale at which rotational effects become as important as buoyancy or gravity wave effects in the evolution of the flow around some disturbance, is calculated to determine the most suitable horizontal grid resolution. At mid-latitudes, for the giant planets, $$L_D$$ is of the same order of magnitude as for the Earth. As the giant planets are roughly 10 times the size of the Earth (i.e., Jupiter and Saturn), the modelling grid must have a horizontal resolution of 0.5$$^{\circ}$$ in longitude and latitude (vs 5$$^{\circ}$$ for the Earth), considering 3 grid points to resolve $$L_D$$. &lt;br /&gt;
Moreover, to have a chance to model the equatorial oscillation, meridional cell circulations and/or a seasonal inter-hemispheric circulation, a giant planet GCM must also include a high vertical resolution. Indeed, these climate phenomena have been studied for decades for the Earth's atmosphere, and result from small- and large-scale interactions between the troposphere and the stratosphere. This implies that the propagation of dynamical instabilities, waves and turbulence should be resolved as far as possible along the vertical. Contrary to the horizontal resolution, there is no real criterion (similar to $$L_D$$) to determine the most suitable vertical grid resolution; it remains an adjustable parameter to be set according to the processes to be represented. However, we advise the user to set, as a first stage, a vertical resolution of at least 5 grid points per scale height.&lt;br /&gt;
Finally, these atmospheres are cold, with long radiative response times, which requires radiative transfer computations over decades-long simulations, depending on the chosen planet: a Jupiter year $$\approx$$ 12 Earth years, a Saturn year $$\approx$$ 30 Earth years, a Uranus year $$\approx$$ 84 Earth years and a Neptune year $$\approx$$ 169 Earth years.&lt;br /&gt;
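As a reminder (this is the standard textbook definition, not something specific to the model), the first baroclinic deformation radius can be estimated as $$L_D = NH/f$$, where $$N$$ is the Brunt-Väisälä frequency, $$H$$ the relevant vertical scale (e.g. the scale height) and $$f$$ the Coriolis parameter; dividing $$L_D$$ by the number of grid points used to resolve it (3 here) then gives the target horizontal grid spacing.&lt;br /&gt;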
&lt;br /&gt;
&lt;br /&gt;
To be able to deal with these three (non-exhaustive) requirements in building a giant planet GCM, we need massive computational resources. For this, we use a dynamical core suitable and numerically stable for massively parallel computations: [[The_DYNAMICO_dynamical_core | DYNAMICO]] [Dubos et al., 2015].&lt;br /&gt;
 &lt;br /&gt;
&lt;br /&gt;
In the two following subsections, we propose an example of installation for Jupiter and for a Hot Jupiter. All the install, compilation, setting and parameter files for each giant planet can be found at: https://gitlab.in2p3.fr/aymeric.spiga/dynamico-giant (the old repo is archived as read-only: https://github.com/aymeric-spiga/dynamico-giant)&lt;br /&gt;
&lt;br /&gt;
The [[Dynamico-giant | DYNAMICO-giant wiki is here]]&lt;br /&gt;
&lt;br /&gt;
If you have already downloaded '''LMDZ.COMMON''', '''LMDZ.GENERIC''', '''IOIPSL''', '''ARCH''', you only have to download:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''ICOSAGCM''': the DYNAMICO dynamical core&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
git clone https://gitlab.in2p3.fr/ipsl/projets/dynamico/dynamico.git ICOSAGCM&lt;br /&gt;
cd ICOSAGCM&lt;br /&gt;
git checkout 110016896ae9e85e614af43223b18fe38f211020   # Version of 6 Nov. 2024&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
'''ICOSA_LMDZ''': the interface used to link the LMDZ.GENERIC physical packages and ICOSAGCM&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
svn update -r 3729 -q ICOSA_LMDZ   # Version of 18 Apr. 2025&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''XIOS (XML Input Output Server)''': the library used to interpolate input/output fields between the icosahedral grid and the regular longitude/latitude grid on the fly&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
svn co -r 2626 -q http://forge.ipsl.jussieu.fr/ioserver/svn/XIOS/trunk XIOS   # Version of 22 Mar. 2024&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
If you haven't already downloaded '''LMDZ.COMMON''', '''LMDZ.GENERIC''', '''IOIPSL''' and '''ARCH''', you can use the '''install.sh''' script provided in the GitLab repository.&lt;br /&gt;
  &lt;br /&gt;
&lt;br /&gt;
Once each part of the GCM is downloaded, you are able to compile it. &lt;br /&gt;
Firstly, you have to define your [[The_Target_Architecture_(&amp;quot;arch&amp;quot;)_Files | target architecture file ]] (hereafter named YOUR_ARCH_FILE), in which you will fill in all the necessary information about the local environment: where libraries are located, which compiler and compiler options will be used, etc.&lt;br /&gt;
Some architecture files related to specific machines are provided in the '''ARCH''' directory; they are referenced in the following lines without the 'arch-' prefix (i.e., arch-X64_IRENE-AMD.env will be referenced as X64_IRENE-AMD).&lt;br /&gt;
&lt;br /&gt;
The main specificity of DYNAMICO-giant is that all the main parts of the model ('''ICOSAGCM''', '''LMDZ.COMMON''' and '''LMDZ.GENERIC''') are compiled as libraries, and the settings and running configuration are managed by the '''ICOSA_LMDZ''' interface.&lt;br /&gt;
&lt;br /&gt;
First, you have to compile '''IOIPSL''',&lt;br /&gt;
 &amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
cd LMDZ.COMMON/ioipsl/                                                                                                             &lt;br /&gt;
    ./install_ioipsl_YOUR-MACHINE.bash&lt;br /&gt;
cd ../../&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt; &lt;br /&gt;
then '''XIOS''' library, &lt;br /&gt;
 &amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
cd XIOS/                                                                                                               &lt;br /&gt;
    ./make_xios --prod --arch YOUR_ARCH_FILE --arch_path ../ARCH --job 8 --full&lt;br /&gt;
cd -&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt; &lt;br /&gt;
then the physics package,&lt;br /&gt;
 &amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
cd LMDZ.COMMON/                                                                                                        &lt;br /&gt;
    ./makelmdz_fcm -p generic -p_opt &amp;quot;-b 20x25&amp;quot; -prod -parallel mpi -libphy -io xios -arch YOUR_ARCH_FILE -arch_path ../ARCH -j 8 -full&lt;br /&gt;
cd -&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt; &lt;br /&gt;
the dynamical core '''DYNAMICO''' (located in '''ICOSAGCM''' directory, named from the icosahedral shape of the horizontal mesh),&lt;br /&gt;
 &amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
cd ICOSAGCM/&lt;br /&gt;
    ./make_icosa -prod -parallel mpi -external_ioipsl -with_xios -arch YOUR_ARCH_FILE -arch_path ../ARCH -job 8 -full&lt;br /&gt;
cd -&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt; &lt;br /&gt;
and finally the '''ICOSA_LMDZ''' interface&lt;br /&gt;
 &amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
cd ICOSA_LMDZ/&lt;br /&gt;
    ./make_icosa_lmdz -p generic -p_opt &amp;quot;-b 20x25&amp;quot; -parallel mpi -arch YOUR_ARCH_FILE -arch_path ../ARCH -job 8 -nodeps&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
This last step is a bit redundant with the two previous ones, since ''make_icosa_lmdz'' will execute ''./make_icosa'' (in the '''ICOSAGCM''' directory) and ''./makelmdz_fcm'' (in the '''LMDZ.COMMON''' directory) to create and source the architecture files shared between all parts of the model, as well as create the intermediate file ''config.fcm''. As you have already compiled these two elements, ''make_icosa_lmdz'' should only create the linked architecture files and ''config.fcm'', and compile the interface. Here, the ''-nodeps'' option prevents the checking of the XIOS and IOIPSL compilation, which saves you from recompiling these two elements.&lt;br /&gt;
      &lt;br /&gt;
Finally, your executable programs should appear in the '''ICOSA_LMDZ/bin''' subdirectory, as '''icosa_lmdz.exe''', and in the '''XIOS/bin''' subdirectory, as '''xios_server.exe'''.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
All these compiling steps are summed up in the ''make_icosa_lmdz'' program, which should be adapted to your own computational settings (i.e., through your target architecture file).&lt;br /&gt;
 &amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
./make_icosa_lmdz -p generic -p_opt &amp;quot;-b 20x25&amp;quot; -parallel mpi -arch YOUR_ARCH_FILE -arch_path ../ARCH -job 8 -full&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt; &lt;br /&gt;
Here, the ''-full'' option ensures the compilation of each part ('''IOIPSL''', '''XIOS''', '''LMDZ.COMMON''', '''ICOSAGCM''' and '''ICOSA_LMDZ''') of the model.&lt;br /&gt;
&lt;br /&gt;
Now you can move your two executable files to your working directory and start running your own simulation of Jupiter or of a Hot Jupiter, as described in what follows.&lt;br /&gt;
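&lt;br /&gt;
As an illustration only (the launcher syntax and the number of processes are machine-dependent, and the working directory below is a placeholder; check your cluster's documentation), launching the model together with the XIOS server in MPMD mode typically looks like:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
# copy the executables to a (placeholder) working directory&lt;br /&gt;
cp ICOSA_LMDZ/bin/icosa_lmdz.exe XIOS/bin/xios_server.exe /path/to/your/workdir/&lt;br /&gt;
cd /path/to/your/workdir&lt;br /&gt;
# illustrative process counts; adapt to your mesh and machine&lt;br /&gt;
mpirun -np 40 ./icosa_lmdz.exe : -np 2 ./xios_server.exe&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;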
&lt;br /&gt;
&lt;br /&gt;
Note: if you are using the GitLab file architecture (https://gitlab.in2p3.fr/aymeric.spiga/dynamico-giant), you should be able to compile the model directly from your working directory (for instance ''dynamico-giant/jupiter/'') by using the ''compile_occigen.sh'' program, which has to be adapted to your machine/cluster.&lt;br /&gt;
&lt;br /&gt;
''Note 2: Depending on the compiler module you use, especially with gfortran, you may need to modify the tracers_icosa.F90 file located in the src directory in order to successfully compile ICOSAGCM. For example, if you are using GCC/11.3.0 and OpenMPI/4.1.4, you must update the insert_tracer_output subroutine as follows:''&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;fortran&amp;quot;&amp;gt;&lt;br /&gt;
SUBROUTINE insert_tracer_output&lt;br /&gt;
      USE xios_mod&lt;br /&gt;
      USE grid_param&lt;br /&gt;
      IMPLICIT NONE&lt;br /&gt;
      TYPE(xios_fieldgroup) :: fieldgroup_hdl&lt;br /&gt;
      TYPE(xios_field) :: field_hdl&lt;br /&gt;
      INTEGER :: iq&lt;br /&gt;
      CHARACTER(len=1000) :: tracername1&lt;br /&gt;
      CHARACTER(len=1000) :: tracername2&lt;br /&gt;
      CHARACTER(len=1000) :: tracername3 &lt;br /&gt;
&lt;br /&gt;
      CALL xios_get_handle(&amp;quot;standard_output_tracers&amp;quot;, fieldgroup_hdl)&lt;br /&gt;
      DO iq = 1, nqtot&lt;br /&gt;
         tracername1 = &amp;quot;tracer_&amp;quot;//TRIM(tracers(iq)%name)&lt;br /&gt;
         CALL xios_add_child(fieldgroup_hdl, field_hdl, tracername1)&lt;br /&gt;
         CALL xios_set_attr(field_hdl, name=TRIM(tracers(iq)%name))&lt;br /&gt;
      END DO&lt;br /&gt;
&lt;br /&gt;
      CALL xios_get_handle(&amp;quot;standard_output_tracers_init&amp;quot;, fieldgroup_hdl)&lt;br /&gt;
      DO iq = 1, nqtot&lt;br /&gt;
         tracername2 = &amp;quot;tracer_&amp;quot;//TRIM(tracers(iq)%name)//&amp;quot;_init&amp;quot;&lt;br /&gt;
         tracername3 = TRIM(tracers(iq)%name)//&amp;quot;_init&amp;quot;&lt;br /&gt;
         CALL xios_add_child(fieldgroup_hdl, field_hdl, tracername2)&lt;br /&gt;
         CALL xios_set_attr(field_hdl, name=tracername3)&lt;br /&gt;
      END DO&lt;br /&gt;
&lt;br /&gt;
   END SUBROUTINE insert_tracer_output&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Jupiter with DYNAMICO ==&lt;br /&gt;
Using a new dynamical core implies new setting files, in addition to or as a replacement of those used with the '''LMDZ.COMMON''' dynamical core. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
There are two kinds of setting files:&lt;br /&gt;
&lt;br /&gt;
'''A first group relevant to DYNAMICO:'''&lt;br /&gt;
&lt;br /&gt;
- [[The ''context_dynamico.xml'' Input File|''context_dynamico.xml'']]: Configuration file for '''DYNAMICO''' for reading and writing files using '''XIOS''', mainly used when you want to check the installation of '''ICOSAGCM''' with [[The_DYNAMICO_dynamical_core | a ''Held and Suarez'' test case]]. When your installation, compilation and run environment is fully functional, the dynamical core output files will not (necessarily) be useful and you can disable their writing. &lt;br /&gt;
&lt;br /&gt;
- [[The context_input_dynamico.xml Input File|''context_input_dynamico.xml'']]:&lt;br /&gt;
&lt;br /&gt;
- [[The file_def_dynamico.xml Input File|''file_def_dynamico.xml'']]: Definition of the diagnostic variables which will be written into the output files related only to '''ICOSAGCM'''. &lt;br /&gt;
&lt;br /&gt;
- [[The field_def_dynamico.xml Input File|''field_def_dynamico.xml'']]: Definition of all existing variables that can be output from DYNAMICO.&lt;br /&gt;
&lt;br /&gt;
- [[The tracer.def Input File|''tracer.def'']]: Definition of the name and physico-chemical properties of the tracers which will be advected by the dynamical core. For now, there are two files related to tracers; we are working to harmonise them.  &lt;br /&gt;
&lt;br /&gt;
''' A second group relevant to LMDZ.GENERIC physical packages: '''&lt;br /&gt;
&lt;br /&gt;
- [[The context_lmdz_physics.xml Input File|''context_lmdz_physics.xml'']]: File in which the horizontal grid, the vertical coordinate and the output file(s) are defined, with the settings of output writing frequency, time unit, geophysical variables to be written, etc. Each new geophysical variable added here has to be defined in the ''field_def_physics.xml'' file.&lt;br /&gt;
&lt;br /&gt;
- [[The field_def_physics.xml Input File|''field_def_physics.xml'']]: Definition of all existing variables that can be output from the physical packages interfaced with '''DYNAMICO'''. This is where you will add each geophysical field that you want to appear in the ''Xhistins.nc'' output files. For instance, for the ''thermal plume scheme'' used for Jupiter's tropospheric dynamics, we have added the following variables: &lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;xml&amp;quot; line&amp;gt;&lt;br /&gt;
             &amp;lt;field id=&amp;quot;h2o_vap&amp;quot; &lt;br /&gt;
                   long_name=&amp;quot;Vapor mass mixing ratio&amp;quot; &lt;br /&gt;
                   unit=&amp;quot;kg/kg&amp;quot; /&amp;gt;&lt;br /&gt;
            &amp;lt;field id=&amp;quot;h2o_ice&amp;quot; &lt;br /&gt;
                   long_name=&amp;quot;Ice mass mixing ratio&amp;quot; &lt;br /&gt;
                   unit=&amp;quot;kg/kg&amp;quot; /&amp;gt;&lt;br /&gt;
            &amp;lt;field id=&amp;quot;detr&amp;quot; &lt;br /&gt;
                   long_name=&amp;quot;Detrainment&amp;quot; &lt;br /&gt;
                   unit=&amp;quot;kg/m2/s&amp;quot; /&amp;gt;&lt;br /&gt;
            &amp;lt;field id=&amp;quot;entr&amp;quot; &lt;br /&gt;
                   long_name=&amp;quot;Entrainment&amp;quot; &lt;br /&gt;
                   unit=&amp;quot;kg/m2/s&amp;quot; /&amp;gt;&lt;br /&gt;
            &amp;lt;field id=&amp;quot;w_plm&amp;quot; &lt;br /&gt;
                   long_name=&amp;quot;Plume vertical velocity&amp;quot; &lt;br /&gt;
                   unit=&amp;quot;m/s&amp;quot; /&amp;gt;&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
- [[The_callphys.def_Input_File|''callphys.def'']]: This setting file is used with either '''DYNAMICO''' or '''LMDZ.COMMON''' and allows the user to choose the physical parametrisation schemes and their appropriate main parameter values relevant to the planet being simulated. In our Jupiter case, some specific parametrisations should be added to or modified from the example linked at the beginning of this paragraph: &lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot; line&amp;gt;&lt;br /&gt;
# Diurnal cycle ?  if diurnal=false, diurnally averaged solar heating&lt;br /&gt;
diurnal      = .false. #.true.&lt;br /&gt;
# Seasonal cycle ? if season=false, Ls stays constant, to value set in &amp;quot;start&amp;quot;&lt;br /&gt;
season       = .true. &lt;br /&gt;
# Tidally resonant orbit ? must have diurnal=false, correct rotation rate in newstart&lt;br /&gt;
tlocked      = .false.&lt;br /&gt;
# Tidal resonance ratio ? ratio T_orbit to T_rotation&lt;br /&gt;
nres         = 1&lt;br /&gt;
# Planet with rings?&lt;br /&gt;
rings_shadow = .false.&lt;br /&gt;
# Compute latitude-dependent gravity field?&lt;br /&gt;
oblate       = .true.&lt;br /&gt;
# Include non-zero flattening (a-b)/a?&lt;br /&gt;
flatten      = 0.06487&lt;br /&gt;
# Needed if oblate=.true.: J2&lt;br /&gt;
J2           = 0.01470&lt;br /&gt;
# Needed if oblate=.true.: Planet mean radius (m)&lt;br /&gt;
Rmean        = 69911000.&lt;br /&gt;
# Needed if oblate=.true.: Mass of the planet (*1e24 kg)&lt;br /&gt;
MassPlanet   = 1898.3&lt;br /&gt;
# use (read/write) a startfi.nc file? (default=.true.)&lt;br /&gt;
startphy_file = .false.&lt;br /&gt;
# constant value for surface albedo (if startphy_file = .false.)&lt;br /&gt;
surfalbedo   = 0.0&lt;br /&gt;
# constant value for surface emissivity (if startphy_file = .false.)&lt;br /&gt;
surfemis     = 1.0&lt;br /&gt;
&lt;br /&gt;
# the rad. transfer is computed every &amp;quot;iradia&amp;quot; physical timestep&lt;br /&gt;
iradia           = 160&lt;br /&gt;
# folder in which correlated-k data is stored ?&lt;br /&gt;
corrkdir         = Jupiter_HITRAN2012_REY_ISO_NoKarko_T460K_article2019_gauss8p8_095&lt;br /&gt;
# Uniform absorption coefficient in radiative transfer?&lt;br /&gt;
graybody         = .false.&lt;br /&gt;
# Characteristic planetary equilibrium (black body) temperature&lt;br /&gt;
# This is used only in the aerosol radiative transfer setup. (see aerave.F)&lt;br /&gt;
tplanet          = 100.&lt;br /&gt;
# Output global radiative balance in file 'rad_bal.out' - slow for 1D!!&lt;br /&gt;
meanOLR          = .false.&lt;br /&gt;
# Variable gas species: Radiatively active ?&lt;br /&gt;
varactive        = .false.&lt;br /&gt;
# Atmospheric specific heat capacity and molecular mass can either be&lt;br /&gt;
# computed by the dynamics, forced in callphys.def, or computed from gases.def.&lt;br /&gt;
# You have to choose: 0 for computed by the dynamics (3d), 1 for forced in callphys.def (1d) or 2 for computed from gases.def (1d)&lt;br /&gt;
# Force_cpp and check_cpp_match are now deprecated.  &lt;br /&gt;
cpp_mugaz_mode = 0&lt;br /&gt;
# Specific heat capacity in J K-1 kg-1 [only used if cpp_mugaz_mode = 1]&lt;br /&gt;
cpp              = 11500.&lt;br /&gt;
# Molecular mass in g mol-1 [only used if cpp_mugaz_mode = 1]&lt;br /&gt;
mugaz            = 2.30&lt;br /&gt;
### DEBUG&lt;br /&gt;
# To not call abort when temperature is outside boundaries:&lt;br /&gt;
strictboundcorrk = .false.&lt;br /&gt;
# To not stop run when temperature is greater than 400 K for H2-H2 CIA dataset:   &lt;br /&gt;
strictboundcia = .false.&lt;br /&gt;
# Add temperature sponge effect after radiative transfer?&lt;br /&gt;
callradsponge    = .false.&lt;br /&gt;
&lt;br /&gt;
Fat1AU = 1366.0&lt;br /&gt;
&lt;br /&gt;
## Other physics options&lt;br /&gt;
## ~~~~~~~~~~~~~~~~~~~~~&lt;br /&gt;
# call turbulent vertical diffusion ?&lt;br /&gt;
calldifv    = .false.&lt;br /&gt;
# use turbdiff instead of vdifc ?&lt;br /&gt;
UseTurbDiff = .true.&lt;br /&gt;
# call convective adjustment ?&lt;br /&gt;
calladj     = .true.&lt;br /&gt;
# call thermal plume model ?&lt;br /&gt;
calltherm   = .true.&lt;br /&gt;
# call thermal conduction in the soil ?&lt;br /&gt;
callsoil    = .false.&lt;br /&gt;
# Internal heat flux (matters only if callsoil=F)&lt;br /&gt;
intheat     = 7.48&lt;br /&gt;
# Remove lower boundary (e.g. for gas giant sims)&lt;br /&gt;
nosurf      = .true.&lt;br /&gt;
#########################################################################&lt;br /&gt;
## extra non-standard definitions for Earth&lt;br /&gt;
#########################################################################&lt;br /&gt;
&lt;br /&gt;
## Thermal plume model options&lt;br /&gt;
## ~~~~~~~~~~~~~~~~~~~~~~~~~~~&lt;br /&gt;
dvimpl               = .true.&lt;br /&gt;
r_aspect_thermals    = 2.0&lt;br /&gt;
tau_thermals         = 0.0&lt;br /&gt;
betalpha             = 0.9&lt;br /&gt;
afact                = 0.7&lt;br /&gt;
fact_epsilon         = 2.e-4&lt;br /&gt;
alpha_max            = 0.7&lt;br /&gt;
fomass_max           = 0.5&lt;br /&gt;
pres_limit           = 2.e5&lt;br /&gt;
&lt;br /&gt;
## Tracer and aerosol options&lt;br /&gt;
## ~~~~~~~~~~~~~~~~~~~~~~~~~~&lt;br /&gt;
# Ammonia cloud (Saturn/Jupiter)?&lt;br /&gt;
aeronh3             = .true.&lt;br /&gt;
size_nh3_cloud      = 10.D-6&lt;br /&gt;
pres_nh3_cloud      = 1.1D5                        # old: 9.D4&lt;br /&gt;
tau_nh3_cloud       = 10.                          # old: 15.&lt;br /&gt;
# Radiatively active aerosol (Saturn/Jupiter)?&lt;br /&gt;
aeroback2lay         = .true.&lt;br /&gt;
optprop_back2lay_vis = optprop_jupiter_vis_n20.dat&lt;br /&gt;
optprop_back2lay_ir  = optprop_jupiter_ir_n20.dat&lt;br /&gt;
obs_tau_col_tropo    = 4.0&lt;br /&gt;
size_tropo           = 5.e-7&lt;br /&gt;
pres_bottom_tropo    = 8.0D4&lt;br /&gt;
pres_top_tropo       = 1.8D4&lt;br /&gt;
obs_tau_col_strato   = 0.1D0&lt;br /&gt;
# Auroral aerosols (Saturn/Jupiter)?&lt;br /&gt;
aeroaurora         = .false.&lt;br /&gt;
size_aurora        = 3.e-7&lt;br /&gt;
obs_tau_col_aurora = 2.0&lt;br /&gt;
&lt;br /&gt;
# Radiatively active CO2 aerosol?&lt;br /&gt;
aeroco2            = .false.&lt;br /&gt;
# Fixed CO2 aerosol distribution?&lt;br /&gt;
aerofixco2     = .false.&lt;br /&gt;
# Radiatively active water aerosol?&lt;br /&gt;
aeroh2o        = .false.&lt;br /&gt;
# Fixed water aerosol distribution?&lt;br /&gt;
aerofixh2o     = .false.&lt;br /&gt;
# basic dust opacity&lt;br /&gt;
dusttau        = 0.0&lt;br /&gt;
# Varying H2O cloud fraction?&lt;br /&gt;
CLFvarying     = .false.&lt;br /&gt;
# H2O cloud fraction if fixed?&lt;br /&gt;
CLFfixval      = 0.0&lt;br /&gt;
# fixed radii for cloud particles?&lt;br /&gt;
radfixed       = .false.&lt;br /&gt;
# number mixing ratio of CO2 ice particles&lt;br /&gt;
Nmix_co2       = 100000.&lt;br /&gt;
# number mixing ratio of water particles (for radfixed=.false.)&lt;br /&gt;
Nmix_h2o       = 1.e7&lt;br /&gt;
# number mixing ratio of water ice particles (for radfixed=.false.)&lt;br /&gt;
Nmix_h2o_ice   = 5.e5&lt;br /&gt;
# radius of H2O water particles (for radfixed=.true.):&lt;br /&gt;
rad_h2o        = 10.e-6&lt;br /&gt;
# radius of H2O ice particles (for radfixed=.true.):&lt;br /&gt;
rad_h2o_ice    = 35.e-6&lt;br /&gt;
# atm mass update due to tracer evaporation/condensation?&lt;br /&gt;
mass_redistrib = .false.&lt;br /&gt;
&lt;br /&gt;
## Water options &lt;br /&gt;
## ~~~~~~~~~~~~~&lt;br /&gt;
# Model water cycle&lt;br /&gt;
water         = .true.&lt;br /&gt;
# Model water cloud formation&lt;br /&gt;
watercond     = .true.&lt;br /&gt;
# Model water precipitation (including coagulation etc.)&lt;br /&gt;
waterrain     = .true.&lt;br /&gt;
# Use simple precipitation scheme?&lt;br /&gt;
precip_scheme = 1&lt;br /&gt;
# Evaporate precipitation?&lt;br /&gt;
evap_prec     = .true.&lt;br /&gt;
# multiplicative constant in Boucher 95 precip scheme&lt;br /&gt;
Cboucher      = 1.&lt;br /&gt;
# Include hydrology ?&lt;br /&gt;
hydrology     = .false.&lt;br /&gt;
# H2O snow (and ice) albedo ?&lt;br /&gt;
albedosnow    = 0.6&lt;br /&gt;
# Maximum sea ice thickness ?&lt;br /&gt;
maxicethick   = 10.&lt;br /&gt;
# Freezing point of seawater (degrees C) ?&lt;br /&gt;
Tsaldiff      = 0.0&lt;br /&gt;
# Evolve surface water sources ?&lt;br /&gt;
sourceevol    = .false.&lt;br /&gt;
&lt;br /&gt;
## CO2 options &lt;br /&gt;
## ~~~~~~~~~~~&lt;br /&gt;
# call CO2 condensation ?&lt;br /&gt;
co2cond       = .false.&lt;br /&gt;
# Set initial temperature profile to 1 K above CO2 condensation everywhere?&lt;br /&gt;
nearco2cond   = .false.&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
- [[The_gases.def_Input_file|''gases.def'']]: File containing the gas composition of the atmosphere you want to model, with their molar mixing ratios. &lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot; line&amp;gt;&lt;br /&gt;
# gases&lt;br /&gt;
5&lt;br /&gt;
H2_&lt;br /&gt;
He_&lt;br /&gt;
CH4&lt;br /&gt;
C2H2&lt;br /&gt;
C2H6&lt;br /&gt;
0.863&lt;br /&gt;
0.134&lt;br /&gt;
0.0018&lt;br /&gt;
1.e-7&lt;br /&gt;
1.e-5&lt;br /&gt;
# First line is number of gases&lt;br /&gt;
# Followed by gas names (always 3 characters)&lt;br /&gt;
# and then molar mixing ratios.&lt;br /&gt;
# mixing ratio -1 means the gas is variable.&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
- [[The jupiter_const.def Input File|''jupiter_const.def'']]: File that gathers all orbital and physical parameters of Jupiter.&lt;br /&gt;
&lt;br /&gt;
- [[The_traceur.def_Input_File|''traceur.def'']]: At this time, only two tracers are used for modelling Jupiter's atmosphere, so the ''traceur.def'' file is summed up as follows:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot; line&amp;gt;&lt;br /&gt;
2&lt;br /&gt;
h2o_vap&lt;br /&gt;
h2o_ice&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
''' Two additional files are used to set the running parameters of the simulation itself:'''&lt;br /&gt;
&lt;br /&gt;
- [[The run_icosa.def Input File | ''run_icosa.def'']]: file containing parameters for '''ICOSAGCM''' to execute the simulation, used to set the [[Advanced Use of the GCM | horizontal and vertical resolutions]], the number of processors, the number of subdivisions, the duration of the simulation, etc. (a hypothetical excerpt is sketched after the ''run.def'' example below).&lt;br /&gt;
&lt;br /&gt;
- ''run.def'': file which brings together all the setting files and will be read by the '''ICOSA_LMDZ''' interface to link each part of the model ('''ICOSAGCM''', '''LMDZ.GENERIC''') with its particular setting file(s) when the '''XIOS''' library does not take action (through the ''.xml'' files).&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot; line&amp;gt;&lt;br /&gt;
###########################################################################&lt;br /&gt;
### INCLUDE OTHER DEF FILES (physics, specific settings, etc...)&lt;br /&gt;
###########################################################################&lt;br /&gt;
INCLUDEDEF=run_icosa.def&lt;br /&gt;
&lt;br /&gt;
INCLUDEDEF=jupiter_const.def&lt;br /&gt;
&lt;br /&gt;
INCLUDEDEF=callphys.def&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
prt_level=0&lt;br /&gt;
&lt;br /&gt;
## iphysiq must be same as itau_physics&lt;br /&gt;
iphysiq=40&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
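As an illustration, here is a minimal, purely hypothetical excerpt of a ''run_icosa.def''; the parameter names and values below are indicative only and should be checked against the example files distributed with the model:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
# horizontal resolution: number of subdivisions along a main icosahedron triangle edge&lt;br /&gt;
nbp = 20&lt;br /&gt;
# MPI domain decomposition (number of sub-domain splits)&lt;br /&gt;
nsplit_i = 2&lt;br /&gt;
nsplit_j = 2&lt;br /&gt;
# number of vertical layers&lt;br /&gt;
llm = 32&lt;br /&gt;
# call the physics every itau_physics dynamical time steps (must match iphysiq)&lt;br /&gt;
itau_physics = 40&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;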
&lt;br /&gt;
== Hot Jupiter with DYNAMICO ==&lt;br /&gt;
&lt;br /&gt;
Modelling the atmosphere of a Hot Jupiter is challenging because of the extreme temperature conditions, and because these planets are gas giants. Therefore, using a dynamical core such as Dynamico is strongly recommended. Here, we discuss how to perform a cloudless simulation of the Hot Jupiter WASP-43 b, using Dynamico.&lt;br /&gt;
&lt;br /&gt;
'''1st step''': You need to go to the GitHub repository mentioned previously for Dynamico: https://github.com/aymeric-spiga/dynamico-giant. ''Git clone'' this repo on your favorite cluster, and ''checkout'' the &amp;quot;hot_jupiter&amp;quot; branch.&lt;br /&gt;
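For instance, with standard git commands:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
git clone https://github.com/aymeric-spiga/dynamico-giant.git&lt;br /&gt;
cd dynamico-giant&lt;br /&gt;
git checkout hot_jupiter&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;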
&lt;br /&gt;
'''2nd step''': Now, run the ''install.sh'' script. This script will install '''all''' the required models ('''LMDZ.COMMON''', '''LMDZ.GENERIC''', '''ICOSA_LMDZ''', '''XIOS''', '''FCM''', '''ICOSAGCM'''). At this point, you are only missing '''IOIPSL'''. To install it, go to &lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt; &lt;br /&gt;
dynamico-giant/code/LMDZ.COMMON/ioipsl/ &lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
There, you will find some example installation scripts. You need to create one that will work on your cluster, with your own arch files.&lt;br /&gt;
During the installation of '''IOIPSL''', you might be asked for a login/password. Contact the TGCC computing center to get access.&lt;br /&gt;
&lt;br /&gt;
'''3rd step''': Great, now we have all we need to get started. Navigate to the ''hot_jupiter'' folder. You will find a ''compile_mesopsl.sh'' and a ''compile_occigen.sh'' script. Use them as examples to create the compile script adapted to your own cluster, then run it. &lt;br /&gt;
While it is running, I suggest that you take a look at the ''log_compile'' file. The compilation can take a while (~10 minutes, especially because of XIOS). One quick trick to make sure that everything went right is to check the number of ''Build command finished'' messages in ''log_compile''. If everything worked out, there should be 6 of them.&lt;br /&gt;
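For instance, you can count them with grep:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
# should print 6 if all components were built successfully&lt;br /&gt;
grep -c &amp;quot;Build command finished&amp;quot; log_compile&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;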
&lt;br /&gt;
'''4th step''': Okay, the model compiled, good job! Now we need to create the initial conditions for our run. In the ''hot_jupiter1d'' folder, you already have a ''temp_profile.txt'' computed with the 1D version of LMDZ.GENERIC (see rcm1d on this page). Thus, there is no need to recompute a 1D model, but it will be needed if you want to model another Hot Jupiter.&lt;br /&gt;
Navigate to the 'makestart' folder, located at &lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt; &lt;br /&gt;
dynamico-giant/hot_jupiter/makestart/&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
To generate the initial conditions for the 3D run, we're gonna start the model using the temperature profile from the 1D run. To do that, you will find a &amp;quot;job_mpi&amp;quot; script. Open it, adapt it to your cluster and launch the job. This job uses 20 procs, and it runs 5 days of simulation. &lt;br /&gt;
If everything goes well, you should see few netcdf files appear. The important ones are '''start_icosa0.nc''', '''startfi0.nc''' and '''Xhistins.nc'''. &lt;br /&gt;
If you see these files, you're all set to launch a real simulation !&lt;br /&gt;
&lt;br /&gt;
'''5th step''': Go back to the ''hot_jupiter'' folder. There are a bunch of scripts to launch your simulation. Take a look at the ''astro_fat_mpi'' script, and adapt it to your cluster. Then you can launch your simulation by doing &lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt; &lt;br /&gt;
./run_astro_fat&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
This will start the simulation, using 90 procs. In the same folder, check that the ''icosa_lmdz.out'' file is created. This is the logfile of the simulation while it is running; you can check there that everything is going well.&lt;br /&gt;
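For instance, to follow the logfile live:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
tail -f icosa_lmdz.out&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;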
&lt;br /&gt;
'''Important side note''': When using the ''run_astro_fat'' script to run a simulation, it will run a chained simulation, restarting from the previous state after each 100-day segment and generating ''Xhistins.nc'' files. These are your results files, where you will find all the variables that describe your atmosphere (temperature field, wind fields, etc.). &lt;br /&gt;
&lt;br /&gt;
Good luck and enjoy the generic PCM Dynamico for Hot Jupiter !&lt;br /&gt;
&lt;br /&gt;
'''2nd important side note''': These 5 steps are the basic steps needed to run a simulation. If you want to tune simulations for another planet, or change other settings, you need to take a look at the '''*.def''' and '''*.xml''' files. If you're lost in all of this, take a look at the different pages of this website and/or contact us!&lt;br /&gt;
Also, you might want to check the wiki on the [https://github.com/aymeric-spiga/dynamico-giant ''Github''], which explains a lot of settings for Dynamico.&lt;br /&gt;
&lt;br /&gt;
= 3D LES setup =&lt;br /&gt;
&lt;br /&gt;
== Proxima b with LES ==&lt;br /&gt;
&lt;br /&gt;
To model the subgrid atmospheric turbulence, the [[WRF dynamical core for LES/mesoscale simulations|'''WRF''']] dynamical core coupled with the LMD Generic physics package is used. The first study conducted was to resolve the convective activity at the substellar point of Proxima b (Lefevre et al. 2021). The impacts of the stellar insolation and of the rotation period were studied. The files for the reference case, with a stellar flux of 880 W/m2 and an 11-day rotation period, are presented below.&lt;br /&gt;
&lt;br /&gt;
The ''input_*'' files are used to initialize the temperature, pressure, winds and moisture of the domain: &lt;br /&gt;
* ''input_sounding'': altitude (km), potential temperature, water vapour (kg/kg), u, v&lt;br /&gt;
* ''input_therm'': normalized gas constant, isobaric heat capacity, pressure, density, temperature&lt;br /&gt;
* ''input_hr'': SW heating, LW heating, large-scale heating extracted from the GCM. Only the last one is used in this configuration.&lt;br /&gt;
&lt;br /&gt;
The file namelist.input is used to set up the domain parameters (resolution, grid points, etc). The file levels specifies the eta-levels of the vertical domain.&lt;br /&gt;
&lt;br /&gt;
The file ''planet'' is used to set up the atmospheric parameters, in order: gravity (m/s2), isobaric heat capacity (J/kg/K), molecular mass (g/mol), reference temperature (K), surface pressure (Pa), planet radius (m) and planet rotation rate (s-1).&lt;br /&gt;
&lt;br /&gt;
The ''*.def'' files are the parameter files for the physics. Compared to GCM runs, the convective adjustment in ''callphys.def'' is turned off.&lt;br /&gt;
&lt;br /&gt;
The file ''controle.txt'', equivalent to the ''controle'' field in the GCM ''start.nc'', is needed to initialize some physics constants.&lt;br /&gt;
&lt;br /&gt;
TBC ML&lt;br /&gt;
&lt;br /&gt;
= 1D setups =&lt;br /&gt;
&lt;br /&gt;
== rcm1d program ==&lt;br /&gt;
&lt;br /&gt;
Running the model in 1D (i.e. considering simply a column of atmosphere) is a common first step to test a new setup. To do so, you first have to compile the 1D version of the model. The command line is very similar to [[Quick_Install_and_Run#Compiling a test case (early Mars)|the one for the 3D]], except for 2 changes:&lt;br /&gt;
# put just the vertical resolution after the -d option (&amp;quot;VERT&amp;quot; instead of ''LON''x''LAT''x''VERT'' for the 3D case)&lt;br /&gt;
# at the end of the line, replace &amp;quot;gcm&amp;quot; with &amp;quot;rcm1d&amp;quot;&lt;br /&gt;
It will generate a file called '''rcm1d_XX_phyxxx_seq.e''', where ''XX'' and ''phyxxx'' are the vertical resolution and the physics package, respectively.&lt;br /&gt;
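For instance, a minimal sketch (assuming 25 vertical levels and the architecture file used in the previous sections):&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
makelmdz_fcm -arch YOUR_ARCH_FILE -p generic -d 25 rcm1d&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;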
&lt;br /&gt;
Check out the [[Generic unicolumn rcm1d program| dedicated page about rcm1d]] for more details.&lt;br /&gt;
&lt;br /&gt;
Note that the '''.def''' files differ a bit from the 3D case: [[The_run.def_Input_File|'''run.def''']] is replaced with [[The_rcm1d.def_Input_File|'''rcm1d.def''']], which contains more general information. Indeed, the 1D model generally does not use [[The_start.nc_and_startfi.nc_input_files|'''start.nc''']] or [[The_start.nc_and_startfi.nc_input_files|'''startfi.nc''']] files to initialize. You can find examples of 1D configuration in ''LMDZ.GENERIC/deftank'' (e.g. '''rcm1d.def.earlymars''', '''rcm1d.def.earth'''), the best thing is to have a look at them.&lt;br /&gt;
&lt;br /&gt;
== kcm1d program ==&lt;br /&gt;
&lt;br /&gt;
Our 1-D inverse model&lt;br /&gt;
&lt;br /&gt;
TBD by Guillaume or Martin&lt;br /&gt;
&lt;br /&gt;
[[Category:Generic-Model]]&lt;br /&gt;
[[Category:Generic-LMDZ]]&lt;br /&gt;
[[Category:Generic-DYNAMICO]]&lt;br /&gt;
[[Category:Generic-WRF]]&lt;br /&gt;
[[Category:Generic-1D]]&lt;/div&gt;</summary>
		<author><name>Emillour</name></author>	</entry>

	<entry>
		<id>http://lmdz-forge.lmd.jussieu.fr/mediawiki/Planets/index.php?title=Generic_unicolumn_rcm1d_program&amp;diff=3265</id>
		<title>Generic unicolumn rcm1d program</title>
		<link rel="alternate" type="text/html" href="http://lmdz-forge.lmd.jussieu.fr/mediawiki/Planets/index.php?title=Generic_unicolumn_rcm1d_program&amp;diff=3265"/>
				<updated>2026-05-15T09:52:32Z</updated>
		
		<summary type="html">&lt;p&gt;Emillour: Created page with &amp;quot;It is possible to run the Gneric PCM in a uni-column (aka &amp;quot;single-column&amp;quot;) configuration: this can be done via the '''rcm1d''' program; quite useful form some first studies bu...&amp;quot;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;It is possible to run the Generic PCM in a uni-column (aka &amp;quot;single-column&amp;quot;) configuration: this can be done via the '''rcm1d''' program; quite useful for some first studies but also when developing and testing parametrizations.&lt;br /&gt;
&lt;br /&gt;
== Compilation ==&lt;br /&gt;
The main program '''rcm1d''' is compiled using the same compilation script, [[The makelmdz fcm GCM Compilation Script|makelmdz_fcm]], as for the 3D Generic lon-lat PCM. Nevertheless, there are a few modifications:&lt;br /&gt;
* the ''-d'' option requires only one argument, the number of vertical levels;&lt;br /&gt;
* the main program to compile is ''rcm1d'' rather than ''gcm''.&lt;br /&gt;
So for instance to compile a case for 26 vertical levels one would run something like:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
makelmdz_fcm -arch somearch -d 26 -p generic rcm1d&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
Note that it is possible to compile and run with the XIOS library; this requires compiling in &amp;quot;mpi&amp;quot; mode (a XIOS requirement), even though the run will be serial:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
makelmdz_fcm -arch somearch -d 26 -p generic -parallel mpi -io xios rcm1d&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Inputs ==&lt;br /&gt;
Just like the 3D GCM, the ''rcm1d'' program needs some inputs to run. The needed files are:&lt;br /&gt;
* &amp;lt;code&amp;gt;z2sig.def&amp;lt;/code&amp;gt; for the definition of vertical levels;&lt;br /&gt;
* &amp;lt;code&amp;gt;traceur.def&amp;lt;/code&amp;gt; for the definition of tracers that the user wants the model to run with;&lt;br /&gt;
* &amp;lt;code&amp;gt;callphys.def&amp;lt;/code&amp;gt; for the definition of parametrizations that the user wants the model to run with;&lt;br /&gt;
* &amp;lt;code&amp;gt;rcm1d.def&amp;lt;/code&amp;gt; for the run configuration, which is similar to the 3D [[The run.def Input File|run.def]]; see [[The rcm1d.def Input File]]. In practice the &amp;lt;code&amp;gt;rcm1d.def&amp;lt;/code&amp;gt; file is in fact copied as &amp;lt;code&amp;gt;run.def&amp;lt;/code&amp;gt; by the rcm1d program when it runs. &lt;br /&gt;
&lt;br /&gt;
Unlike the 3D GCMs, the ''rcm1d'' program can run without any start files, which is its default behaviour (&amp;lt;code&amp;gt;restart=.false.&amp;lt;/code&amp;gt;). In this setup, one can (and often needs to) provide initial profiles of each tracer. These consist of files called ''profile_sometracername'' containing, column-wise, the initial values of the considered tracer: the first line corresponds to the surface value and the following lines correspond to the layers. At the end of a 1D simulation, ''rcm1d'' outputs a restart file '''restart1D.nc''' which can be used as an initial condition for a following run.&lt;br /&gt;
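For instance, a purely hypothetical ''profile_h2o_vap'' file for a model with 4 layers (first line: surface value, then one value per layer) could look like:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
1.0e-6&lt;br /&gt;
1.0e-6&lt;br /&gt;
8.0e-7&lt;br /&gt;
5.0e-7&lt;br /&gt;
2.0e-7&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;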
&lt;br /&gt;
If &amp;lt;code&amp;gt;restart=.true.&amp;lt;/code&amp;gt; in &amp;lt;code&amp;gt;rcm1d.def&amp;lt;/code&amp;gt; then the program will look for start files '''start1D.nc''' and '''startfi.nc''' and use these as initial conditions.&lt;br /&gt;
&lt;br /&gt;
== Outputs ==&lt;br /&gt;
If compiled without XIOS, ''rcm1d'' will output &amp;lt;code&amp;gt;diagfi.nc&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;stats.nc&amp;lt;/code&amp;gt; files, just like the 3D GCM, and the optional &amp;lt;code&amp;gt;diagfi.def&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;stats.def&amp;lt;/code&amp;gt; files can respectively be added to specify which variables need to be output.&lt;br /&gt;
Likewise, if compiled with XIOS, ''rcm1d'' will output all files as specified in the relevant xml files, just like the 3D GCM.&lt;br /&gt;
&lt;br /&gt;
As mentioned above, at the end of a simulation &amp;lt;code&amp;gt;rcm1d&amp;lt;/code&amp;gt; also outputs a '''restart1D.nc''' file containing the final computed state.&lt;br /&gt;
&lt;br /&gt;
[[Category:Generic-Model]]&lt;br /&gt;
[[Category:Generic-1D]]&lt;/div&gt;</summary>
		<author><name>Emillour</name></author>	</entry>

	<entry>
		<id>http://lmdz-forge.lmd.jussieu.fr/mediawiki/Planets/index.php?title=Dust_Cycle_in_Mars_PCM5&amp;diff=3260</id>
		<title>Dust Cycle in Mars PCM5</title>
		<link rel="alternate" type="text/html" href="http://lmdz-forge.lmd.jussieu.fr/mediawiki/Planets/index.php?title=Dust_Cycle_in_Mars_PCM5&amp;diff=3260"/>
				<updated>2026-04-21T12:55:51Z</updated>
		
		<summary type="html">&lt;p&gt;Emillour: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;This page details at different levels how to use and what is featured in the dust cycle in Mars PCM5.&lt;br /&gt;
&lt;br /&gt;
== Brief Overview ==&lt;br /&gt;
The reference work leading to the implementation in the PCM is the PhD work of J.-B. Madeleine; check out his PhD manuscript and 2011 JGR article&lt;br /&gt;
&amp;quot;Revisiting the radiative impact of dust on Mars using the LMD Global Climate Model&amp;quot; https://doi.org/10.1029/2011JE003855&lt;br /&gt;
&lt;br /&gt;
The main features and concepts on how the dust is handled are:&lt;br /&gt;
* Dust is modeled as a log-normal population (varying in size), which in the end requires managing only two ''tracers'', the first two moments of the distribution, which are the dust mass mixing ratio and number (tracers ''dust_mass'' and ''dust_number'').&lt;br /&gt;
* All the physical processes like large scale advection, mixing by the turbulence in the planetary boundary layer, sedimentation, etc. are modeled.&lt;br /&gt;
* How the dust gets injected in the atmosphere (i.e. the details of dust lifting from the surface) is not modeled; instead we use a simple assumption that there is always some injection of dust from the surface (most of it simply falling back down; but some of it, when the conditions are right, propagates)&lt;br /&gt;
* In addition, the dust in each column is rescaled so that its column opacity then matches that of a driving dust scenario (typically derived from observations).&lt;br /&gt;
&lt;br /&gt;
== Flags concerning the dust cycle in a PCM version 5 setup ==&lt;br /&gt;
A setup including adequate flags and parameters for a PCM5 dust cycle can be found in the [https://trac.lmd.jussieu.fr/Planeto/browser/trunk/LMDZ.MARS/deftank/callphys.def.GCM5 callphys.def.GCM5] reference file provided in deftank.&lt;br /&gt;
&lt;br /&gt;
As mentioned above, there should be the 2 dedicated tracers (dust moments) at hand (i.e. in the traceur.def file):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
dust_mass&lt;br /&gt;
dust_number&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
In practice the relevant callphys.def parameters are:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
#Directory where external input files are:&lt;br /&gt;
datadir=/users/lmdz/WWW/planets/mars/datadir&lt;br /&gt;
&lt;br /&gt;
## Dust scenario. Used if the dust is prescribed (i.e. if active=F)&lt;br /&gt;
## ~~~~~~~~~~~~~&lt;br /&gt;
#  =1 Dust opt.deph read in startfi; =2 Viking scenario; =3 MGS scenario,&lt;br /&gt;
#  =4 Mars Year 24 from TES assimilation (old version of MY24; dust_tes.nc file)&lt;br /&gt;
#  =6 &amp;quot;cold&amp;quot; (low dust) scenario ; =7 &amp;quot;warm&amp;quot; (high dust) scenario&lt;br /&gt;
#  =8 &amp;quot;climatology&amp;quot; (our best guess of a typical Mars year) scenario&lt;br /&gt;
#  =24 Mars Year 24  ; =25 Mars Year 25 (year with a global dust storm) ; ...&lt;br /&gt;
#  =30 Mars Year 30 &lt;br /&gt;
iaervar = 26&lt;br /&gt;
# Dust opacity at 610 Pa (when constant, i.e. for the iaervar=1 case)&lt;br /&gt;
tauvis=0.2&lt;br /&gt;
# Dust vertical distribution: &lt;br /&gt;
# (=0: old distrib. (Pollack90), =1: top set by &amp;quot;topdustref&amp;quot;,&lt;br /&gt;
#  =2: Viking scenario; =3 MGS scenario)&lt;br /&gt;
iddist  = 3&lt;br /&gt;
# Dust top altitude (km). (Matters only if iddist=1)&lt;br /&gt;
topdustref = 55.&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
* '''iaervar''' points to the driving dust scenario (daily maps; netcdf files located under '''datadir''') to use: e.g. ''dust_clim.nc'' if &amp;lt;code&amp;gt;iaervar=8&amp;lt;/code&amp;gt;, ''dust_MY30.nc'' if &amp;lt;code&amp;gt;iaervar=30&amp;lt;/code&amp;gt;, etc. In the special case where &amp;lt;code&amp;gt;iaervar=1&amp;lt;/code&amp;gt; then the driving dust column opacity is constant (over space and time) to the value specified by flag '''tauvis'''.&lt;br /&gt;
* '''tauvis''' is only used when &amp;lt;code&amp;gt;iaervar=1&amp;lt;/code&amp;gt; in which case it is the imposed value of the column dust opacity (at reference pressure of 610Pa).&lt;br /&gt;
* '''iddist''' and '''topdustref''' are not used (they date back to an even simpler setup)&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
## Tracer (dust water, ice and/or chemical species) options :&lt;br /&gt;
## ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~&lt;br /&gt;
# DUST: Transported dust ? (if &amp;gt;0, use 'dustbin' dust bins)&lt;br /&gt;
dustbin    = 2&lt;br /&gt;
# DUST: Radiatively active dust ? (matters if dustbin&amp;gt;0)&lt;br /&gt;
active  = .true.&lt;br /&gt;
# DUST: use mass and number mixing ratios to predict dust size ?&lt;br /&gt;
doubleq   = .true.&lt;br /&gt;
# DUST: use a small population of dust particules (submicron dust)?&lt;br /&gt;
submicron = .false.&lt;br /&gt;
# DUST: lifted by GCM surface winds ?&lt;br /&gt;
lifting = .true.&lt;br /&gt;
# DUST: lifted by dust devils ?&lt;br /&gt;
callddevil = .false.&lt;br /&gt;
# DUST: Scavenging by H2O/CO2 snowfall ?&lt;br /&gt;
scavenging = .true.&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
* '''dustbin''' should be 2, because using the 2-moment scheme (with &amp;lt;code&amp;gt;doubleq=.true.&amp;lt;/code&amp;gt;).&lt;br /&gt;
* '''active''' should definitely be set to &amp;lt;code&amp;gt;.true.&amp;lt;/code&amp;gt;, to account for the radiative effect of dust.&lt;br /&gt;
* '''doubleq'''  should be &amp;lt;code&amp;gt;.true.&amp;lt;/code&amp;gt; (with &amp;lt;code&amp;gt;dustbin=2&amp;lt;/code&amp;gt;), to use the two-moment scheme&lt;br /&gt;
* '''submicron''' should be &amp;lt;code&amp;gt;.false.&amp;lt;/code&amp;gt;; was put there to eventually have a second distribution of small particles around. Not used nor validated.&lt;br /&gt;
* '''lifting''' should be &amp;lt;code&amp;gt;.true.&amp;lt;/code&amp;gt;, to impose continuous dust injection from the surface into the first atmospheric layer.&lt;br /&gt;
* '''callddevil''' should be &amp;lt;code&amp;gt;.false.&amp;lt;/code&amp;gt;; was put there to eventually account for dust injection via dust devils. Not used nor validated.&lt;br /&gt;
* '''scavenging ''' should be &amp;lt;code&amp;gt;.true.&amp;lt;/code&amp;gt; to include scavenging of dust by H2O and or CO2 snowfall (assuming CO2 and/or H2O cycles are also computed). Significant effect in the polar night.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
# SCATTERERS: set number of scatterers. must be compliant with preceding options.&lt;br /&gt;
naerkind = 2&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
* '''naerkind''' is the number of radiatively active scatterers. Dust is one of them (if &amp;lt;code&amp;gt;active=.true.&amp;lt;/code&amp;gt;) so naerkind should be at least 1; and if the water cycle is also computed with radiatively active clouds (&amp;lt;code&amp;gt;water=.true.&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;activice=.true.&amp;lt;/code&amp;gt;) then &amp;lt;code&amp;gt;naerkind=2&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== More technical stuff, and how/where it is managed in the Mars PCM code ==&lt;br /&gt;
....TODO....&lt;br /&gt;
&lt;br /&gt;
=== Routines ===&lt;br /&gt;
* initracer : Initialize some dust properties stored in the '''tracer_mod''' module: dedicated tracer indexes &amp;lt;code&amp;gt;igcm_dust_mass&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;igcm_dust_number&amp;lt;/code&amp;gt;, reference dust density &amp;lt;code&amp;gt;rho_q(igcm_dust_mass)&amp;lt;/code&amp;gt;, variance of the lifted/injected dust distribution &amp;lt;code&amp;gt;varian&amp;lt;/code&amp;gt;, reference effective radius of the lifted dust &amp;lt;code&amp;gt;reff_lift&amp;lt;/code&amp;gt; and injection/lifting coefficient &amp;lt;code&amp;gt;alpha_lift(igcm_dust_mass)&amp;lt;/code&amp;gt;&lt;br /&gt;
* aeropacity : where the optical depth of the aerosols is computed (see e.g. the &amp;quot;dust_doubleq&amp;quot; case for dust) and the call to compute_dustscaling is done&lt;br /&gt;
* compute_dustscaling (in dust_scaling_mod) : where &amp;quot;tauscaling&amp;quot;, the dust rescaling coefficient, is computed&lt;br /&gt;
* vdifc : where the dust is lifted/injected from the surface into the atmosphere&lt;br /&gt;
&lt;br /&gt;
=== Parameters and variables in the code ===&lt;br /&gt;
* '''tauscaling''' : dust rescaling coefficient (one value per column) &lt;br /&gt;
* '''tau_pref_gcm''' : dust column opacity at 610 Pa (should be equal to &amp;quot;tau_pref_scenario&amp;quot;, the dust column opacity from the driving scenario)&lt;br /&gt;
* '''freedust''' : this parameter should be &amp;lt;code&amp;gt;freedust=.false.&amp;lt;/code&amp;gt;, so that there is rescaling of the dust using the &amp;quot;tauscaling&amp;quot; coefficient&lt;br /&gt;
* '''dustscaling_mode''' : this parameter should be &amp;lt;code&amp;gt;dustscaling_mode=1&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Using an old but simple dust scheme ==&lt;br /&gt;
&lt;br /&gt;
If one wants to use a very simple scheme for dust instead of the usual 'doubleq' (and 'active') dust, it is possible to use an old parametrization based on '''&amp;quot;Conrath dust&amp;quot;'''. In this case, the dust follows the &amp;quot;Conrath profile&amp;quot; and the radiative effect of dust is still included. This simple parametrization can be useful for some tests under idealized conditions like for the 1D model.&lt;br /&gt;
&lt;br /&gt;
In &amp;quot;callphys.def&amp;quot;, the necessary flags are &amp;lt;code&amp;gt;doubleq = .false.&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;dustbin = 0&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;active  = .false.&amp;lt;/code&amp;gt;. So there is no dust-related tracer in &amp;quot;traceur.def&amp;quot; and &amp;lt;code&amp;gt;naerkind = 1&amp;lt;/code&amp;gt;.&lt;br /&gt;
This is actually the default choice for the main scatterer. The &amp;quot;Conrath dust&amp;quot; can be used with different dust scenarios (&amp;lt;code&amp;gt;iaervar&amp;lt;/code&amp;gt;) and dust vertical distributions (&amp;lt;code&amp;gt;iddist&amp;lt;/code&amp;gt;). In line with the aim to get the simplest case, one can choose &amp;lt;code&amp;gt;iddist = 0&amp;lt;/code&amp;gt; to set the old dust vertical distribution function (pollack90) depending only on the dust scenario (for example &amp;lt;code&amp;gt;iaervar = 1&amp;lt;/code&amp;gt; to define a simple case).&lt;br /&gt;
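Summing up, the corresponding ''callphys.def'' flags for this simple setup would be (the ''tauvis'' value below is only indicative):&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
# simple &amp;quot;Conrath dust&amp;quot; setup: no dust tracers, prescribed dust&lt;br /&gt;
dustbin  = 0&lt;br /&gt;
doubleq  = .false.&lt;br /&gt;
active   = .false.&lt;br /&gt;
# one radiatively active scatterer (the Conrath dust)&lt;br /&gt;
naerkind = 1&lt;br /&gt;
# constant dust column opacity (iaervar=1) set by tauvis (at 610 Pa)&lt;br /&gt;
iaervar  = 1&lt;br /&gt;
tauvis   = 0.2&lt;br /&gt;
# old (Pollack90) dust vertical distribution&lt;br /&gt;
iddist   = 0&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;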
&lt;br /&gt;
[[Category:Mars-Model]]&lt;/div&gt;</summary>
		<author><name>Emillour</name></author>	</entry>

	<entry>
		<id>http://lmdz-forge.lmd.jussieu.fr/mediawiki/Planets/index.php?title=The_XIOS_Library&amp;diff=3259</id>
		<title>The XIOS Library</title>
		<link rel="alternate" type="text/html" href="http://lmdz-forge.lmd.jussieu.fr/mediawiki/Planets/index.php?title=The_XIOS_Library&amp;diff=3259"/>
				<updated>2026-04-14T16:08:00Z</updated>
		
		<summary type="html">&lt;p&gt;Emillour: /* axes, domains and grids */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;The [https://forge.ipsl.jussieu.fr/ioserver/wiki XIOS] (Xml I/O Server) library is based on client-server principles where the server manages the outputs asynchronously from the client (the climate model) so that the bottleneck of writing data is alleviated.&lt;br /&gt;
&lt;br /&gt;
== Installing the XIOS library ==&lt;br /&gt;
&lt;br /&gt;
=== Prerequisites ===&lt;br /&gt;
There are a couple of prerequisites to installing and using the XIOS library:&lt;br /&gt;
# An MPI library must be available&lt;br /&gt;
# A NetCDF4-HDF5 library, preferably compiled with MPI enabled, must be available (see, e.g. dedicated section on  [[The_netCDF_library]])&lt;br /&gt;
The rest of this page assumes all prerequisites are met. People interested in building an appropriate NetCDF library on their Linux machine might be interested in the following installation script: https://web.lmd.jussieu.fr/~lmdz/pub/script_install/install_netcdf4_hdf5.bash (which might need some adaptations to work in your specific case).&lt;br /&gt;
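A quick way to check these prerequisites on your machine (a minimal sketch, assuming the usual ''mpif90'' and ''nc-config'' wrappers are installed):&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
# MPI Fortran compiler wrapper available?&lt;br /&gt;
which mpif90&lt;br /&gt;
# should return &amp;quot;yes&amp;quot; for a NetCDF4-HDF5 build&lt;br /&gt;
nc-config --has-nc4&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;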
&lt;br /&gt;
=== Downloading and compiling the XIOS library ===&lt;br /&gt;
The XIOS source code is available for download using svn (subversion). To download it, go to the directory where you want it installed and run e.g.:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
svn co http://forge.ipsl.fr/ioserver/svn/XIOS/trunk XIOS&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
* To compile the library, one must first have adequate architecture &amp;quot;arch&amp;quot; files at hand, just like for the GCM (see [[The_Target Architecture_(&amp;quot;arch&amp;quot;)_Files]]). In principle both ''arch.env'' and ''arch.path'' files could be the same as for the GCM; ''arch.fcm'' will of course differ, as XIOS source code is in C++ (along with a Fortran interface). If using a &amp;quot;known&amp;quot; machine (e.g. Occigen, Irene-Rome, Ciclad) then ready-to-use up-to-date arch files for that machine should be present in the ''arch'' directory. If not you will have to create your own (it is advised to use the existing ones as templates!)&lt;br /&gt;
* Assuming ''some_machine'' arch files (i.e. files ''arch-some_machine.env'', ''arch-some_machine.path'', ''arch-some_machine.fcm'') are present in the '''arch''' subdirectory, compiling the XIOS is done using the dedicated ''make_xios'' script, e.g.:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
./make_xios --prod --arch some_machine --job 8 &lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
If the compilation steps went well then the '''lib''' directory should contain file ''libxios.a'' and the '''bin''' directory should contain&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
fcm_env.ksh  generic_testcase.exe  xios_server.exe&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== XIOS documentation ===&lt;br /&gt;
Note that the downloaded XIOS distribution includes some documentation in the '''doc''' subdirectory:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
reference_xml.pdf  XIOS_reference_guide.pdf  XIOS_user_guide.pdf&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Definitely worth checking out!&lt;br /&gt;
&lt;br /&gt;
== Compiling the GCM with the XIOS library ==&lt;br /&gt;
&lt;br /&gt;
To compile with XIOS enabled, one must specify the option&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
 -io xios&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
to the [[The makelmdz fcm GCM Compilation Script|makelmdz_fcm]] script.&lt;br /&gt;
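For instance, a hedged example of a full compilation command with XIOS enabled, following the same conventions as in the other sections of this wiki:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
makelmdz_fcm -arch some_machine -p generic -d 64x48x26 -parallel mpi -io xios gcm&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;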
&lt;br /&gt;
&lt;br /&gt;
== XIOS output controls ==&lt;br /&gt;
&lt;br /&gt;
All aspects of the outputs (name, units, file, post-processing operations, etc.) are controlled by dedicated XML files which are read at run-time. Samples of xml files are provided in the &amp;quot;deftank&amp;quot; directory.&lt;br /&gt;
&lt;br /&gt;
=== In a nutshell ===&lt;br /&gt;
* the master file read by XIOS is ''iodef.xml'', which contains specific XIOS parameters such as ''using_server'' to dictate whether XIOS is run in client-server mode (true) or attached (false) mode, ''info_level'' to set the verbosity of XIOS messages (0: none, 100: very verbose), ''print_file'' to set whether XIOS messages will be sent to standard output (false) or dedicated xios_*.out and xios_*.err files (true).&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;xml&amp;quot;&amp;gt;&lt;br /&gt;
                        &amp;lt;variable id=&amp;quot;using_server&amp;quot; type=&amp;quot;bool&amp;quot;&amp;gt;false&amp;lt;/variable&amp;gt;&lt;br /&gt;
                        &amp;lt;variable id=&amp;quot;info_level&amp;quot; type=&amp;quot;int&amp;quot;&amp;gt;0&amp;lt;/variable&amp;gt;&lt;br /&gt;
                        &amp;lt;variable id=&amp;quot;print_file&amp;quot; type=&amp;quot;bool&amp;quot;&amp;gt; false &amp;lt;/variable&amp;gt;&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
* It is common practice to have LMDZ-related definitions and outputs in separate XML files, e.g. ''context_lmdz.xml'' which are included in ''iodef.xml'' via the ''src'' attribute, e.g.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;xml&amp;quot;&amp;gt;&lt;br /&gt;
       &amp;lt;context id=&amp;quot;LMDZ&amp;quot; src=&amp;quot;./context_lmdz_physics.xml&amp;quot;/&amp;gt;&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
The ''context_lmdz_physics.xml'' file must then contain all fields/grid/file output definitions, which may be split into multiple XML files; for instance the definition of model variables (i.e. all fields that may be output) is often put in a separate file ''field_def_physics.xml'' which is referenced within ''context_lmdz_physics.xml'' as:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;xml&amp;quot;&amp;gt;&lt;br /&gt;
       &amp;lt;field_definition src=&amp;quot;./field_def_physics.xml&amp;quot; /&amp;gt;&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
Concerning output files, the current recommended practice is to use separate ''file_def_histsomething_lmdz.xml'' files, one for each ''histsomething.nc'' file to generate, and include these in ''context_lmdz.xml'' using the ''file_definition'' key. e.g.:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;xml&amp;quot;&amp;gt;&lt;br /&gt;
  &amp;lt;!-- Define output files&lt;br /&gt;
              Each file contains the list of variables and their output levels --&amp;gt;&lt;br /&gt;
  &amp;lt;file_definition src=&amp;quot;./file_def_histins.xml&amp;quot;/&amp;gt;&lt;br /&gt;
  &amp;lt;file_definition src=&amp;quot;./file_def_specIR.xml&amp;quot;/&amp;gt;&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Some XIOS key concepts ===&lt;br /&gt;
==== calendars ====&lt;br /&gt;
The calendar is set via the Fortran source code (see '''xios_output_mod.F90''' in the physics). Without going into details here, note that it is flexible enough so that day length, year length, etc. may be defined by the user. However a strong limitation is that the calendar time step should be an integer number of seconds.&lt;br /&gt;
&lt;br /&gt;
TODO: refer to specific stuff/settings for Mars, Generic, Venus cases...&lt;br /&gt;
&lt;br /&gt;
==== axes, domains and grids ====&lt;br /&gt;
First a bit of XIOS nomenclature:&lt;br /&gt;
* an '''axis''' is 1D; e.g. pseudo-altitude or pseudo-pressure or sub-surface depth or wavelength or ...&lt;br /&gt;
* a '''domain''' is a horizontal 2D surface; e.g. the globe or some portion of it&lt;br /&gt;
* a '''grid''' is the combination of a domain and one axis (or more); e.g. the atmosphere or the sub-surface of a planet&lt;br /&gt;
Most of the '''axis''' and '''domain''' definitions are done in the code (since all the information is known there) and only referred to in the XML via dedicated '''id''' values, e.g.:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;xml&amp;quot;&amp;gt;&lt;br /&gt;
    &amp;lt;axis_definition&amp;gt;&lt;br /&gt;
        &amp;lt;axis id=&amp;quot;presnivs&amp;quot; &lt;br /&gt;
              standard_name=&amp;quot;Pseudo-pressure of model vertical levels&amp;quot; &lt;br /&gt;
              unit=&amp;quot;Pa&amp;quot;&amp;gt;&lt;br /&gt;
        &amp;lt;/axis&amp;gt;&lt;br /&gt;
        &amp;lt;axis id=&amp;quot;altitude&amp;quot; &lt;br /&gt;
              standard_name=&amp;quot;Pseudo-altitude of model vertical levels&amp;quot; &lt;br /&gt;
              unit=&amp;quot;km&amp;quot;&amp;gt;&lt;br /&gt;
        &amp;lt;/axis&amp;gt;&lt;br /&gt;
    &amp;lt;/axis_definition&amp;gt;&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
Likewise the global computational domain is defined in the code and known in the XML via its '''id'''(=&amp;quot;dom_glo&amp;quot;):&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;xml&amp;quot;&amp;gt;&lt;br /&gt;
    &amp;lt;domain_definition&amp;gt;&lt;br /&gt;
        &amp;lt;domain id=&amp;quot;dom_glo&amp;quot; data_dim=&amp;quot;2&amp;quot; /&amp;gt;&lt;br /&gt;
    &amp;lt;/domain_definition&amp;gt;&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
From there one may generate a grid, e.g.:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;xml&amp;quot;&amp;gt;&lt;br /&gt;
    &amp;lt;grid_definition&amp;gt;&lt;br /&gt;
        &amp;lt;!-- toggle axis id below to change output vertical axis --&amp;gt;&lt;br /&gt;
        &amp;lt;grid id=&amp;quot;grid_3d&amp;quot;&amp;gt;&lt;br /&gt;
            &amp;lt;domain id=&amp;quot;dom_glo&amp;quot; /&amp;gt;&lt;br /&gt;
            &amp;lt;!-- &amp;lt;axis id=&amp;quot;presnivs&amp;quot; /&amp;gt; --&amp;gt;&lt;br /&gt;
            &amp;lt;axis id=&amp;quot;altitude&amp;quot; /&amp;gt;&lt;br /&gt;
        &amp;lt;/grid&amp;gt;&lt;br /&gt;
    &amp;lt;/grid_definition&amp;gt;&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
Note that '''grid_3d''' is defined in the XML file and thus may be changed by the user without having to modify the PCM source code. For instance by simply adding the following definitions:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;xml&amp;quot;&amp;gt;&lt;br /&gt;
    &amp;lt;domain_definition&amp;gt;&lt;br /&gt;
...&lt;br /&gt;
        &amp;lt;domain id=&amp;quot;dom_128_96&amp;quot; ni_glo=&amp;quot;128&amp;quot; nj_glo=&amp;quot;96&amp;quot; type=&amp;quot;rectilinear&amp;quot;  &amp;gt;&lt;br /&gt;
          &amp;lt;generate_rectilinear_domain/&amp;gt;&lt;br /&gt;
          &amp;lt;interpolate_domain order=&amp;quot;1&amp;quot;/&amp;gt;&lt;br /&gt;
        &amp;lt;/domain&amp;gt;&lt;br /&gt;
    &amp;lt;/domain_definition&amp;gt;&lt;br /&gt;
    &amp;lt;grid_definition&amp;gt;&lt;br /&gt;
...&lt;br /&gt;
        &amp;lt;grid id=&amp;quot;my_grid_3d&amp;quot;&amp;gt;&lt;br /&gt;
            &amp;lt;domain id=&amp;quot;dom_128_96&amp;quot; /&amp;gt;&lt;br /&gt;
            &amp;lt;axis id=&amp;quot;altitude&amp;quot; /&amp;gt;&lt;br /&gt;
        &amp;lt;/grid&amp;gt;&lt;br /&gt;
    &amp;lt;/grid_definition&amp;gt;&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
Specifying to output variables on grid '''my_grid_3d''' will trigger XIOS interpolations so that the output fields are on a regular 128x96 longitude-latitude grid.&lt;br /&gt;
&lt;br /&gt;
One can also add some specifications on the longitude and latitude bounds of the domain to generate:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;xml&amp;quot;&amp;gt;&lt;br /&gt;
    &amp;lt;domain_definition&amp;gt;&lt;br /&gt;
...&lt;br /&gt;
        &amp;lt;domain id=&amp;quot;dom_64_49&amp;quot; ni_glo=&amp;quot;64&amp;quot; nj_glo=&amp;quot;49&amp;quot; type=&amp;quot;rectilinear&amp;quot;  &amp;gt;&lt;br /&gt;
          &amp;lt;generate_rectilinear_domain lat_start=&amp;quot;90&amp;quot; lat_end=&amp;quot;-90&amp;quot; lon_start=&amp;quot;-180&amp;quot; lon_end=&amp;quot;174.375&amp;quot; /&amp;gt;&lt;br /&gt;
          &amp;lt;interpolate_domain order=&amp;quot;1&amp;quot;/&amp;gt;&lt;br /&gt;
        &amp;lt;/domain&amp;gt;&lt;br /&gt;
    &amp;lt;/domain_definition&amp;gt;&lt;br /&gt;
...&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== field definitions ====&lt;br /&gt;
For XIOS a field is defined with an '''id''' and must be assigned to a reference '''grid''' (this is how XIOS knows whether a field is a simple scalar, a surface or a volume, and thus to which computational grid it is related), e.g.:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;xml&amp;quot;&amp;gt;&lt;br /&gt;
    &amp;lt;field_definition prec=&amp;quot;4&amp;quot;&amp;gt;&lt;br /&gt;
       &amp;lt;field_group id=&amp;quot;fields_2D&amp;quot; domain_ref=&amp;quot;dom_glo&amp;quot;&amp;gt;&lt;br /&gt;
           &amp;lt;field id=&amp;quot;aire&amp;quot;&lt;br /&gt;
                   long_name=&amp;quot;Mesh area&amp;quot;&lt;br /&gt;
                   unit=&amp;quot;m2&amp;quot; /&amp;gt;&lt;br /&gt;
           &amp;lt;field id=&amp;quot;phis&amp;quot;&lt;br /&gt;
                   long_name=&amp;quot;Surface geopotential (gz)&amp;quot; &lt;br /&gt;
                   unit=&amp;quot;m2/s2&amp;quot; /&amp;gt;&lt;br /&gt;
            &amp;lt;field id=&amp;quot;tsol&amp;quot;&lt;br /&gt;
                   long_name=&amp;quot;Surface Temperature&amp;quot; &lt;br /&gt;
                   unit=&amp;quot;K&amp;quot; /&amp;gt;&lt;br /&gt;
...&lt;br /&gt;
&lt;br /&gt;
       &amp;lt;/field_group&amp;gt;&lt;br /&gt;
&lt;br /&gt;
        &amp;lt;field_group id=&amp;quot;fields_3D&amp;quot; grid_ref=&amp;quot;grid_3d&amp;quot;&amp;gt;&lt;br /&gt;
            &amp;lt;field id=&amp;quot;temp&amp;quot;&lt;br /&gt;
                   long_name=&amp;quot;Atmospheric temperature&amp;quot;&lt;br /&gt;
                   unit=&amp;quot;K&amp;quot; /&amp;gt;&lt;br /&gt;
            &amp;lt;field id=&amp;quot;pres&amp;quot;&lt;br /&gt;
                   long_name=&amp;quot;Atmospheric pressure&amp;quot; &lt;br /&gt;
                   unit=&amp;quot;Pa&amp;quot; /&amp;gt;&lt;br /&gt;
...&lt;br /&gt;
&lt;br /&gt;
       &amp;lt;/field_group&amp;gt;&lt;br /&gt;
&lt;br /&gt;
    &amp;lt;/field_definition&amp;gt;&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
It is vital that all the fields which are sent to XIOS via the code are declared in the XML file, otherwise there will be a run-time error message of the likes of:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
In file &amp;quot;object_factory_impl.hpp&amp;quot;, function &amp;quot;static std::shared_ptr&amp;lt;U&amp;gt; xios::CObjectFactory::GetObject(const std::__cxx11::basic_string&amp;lt;char, std::char_traits&amp;lt;char&amp;gt;, std::allocator&amp;lt;char&amp;gt;&amp;gt; &amp;amp;) [with U = xios::CAxis]&amp;quot;,  line 78 -&amp;gt; [ id = weirdvar, U = field ] object was not found.&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
In the message above XIOS received from the code a variable called &amp;quot;weirdvar&amp;quot; which is not defined in the XML... One must update the XML file with the proper definition (&amp;lt;field id=&amp;quot;weirdvar&amp;quot; ... /&amp;gt;).&lt;br /&gt;
&lt;br /&gt;
==== output file definitions ====&lt;br /&gt;
It is by defining a '''file''' that the user specifies what the output file will be, which variables it will contain, etc. as illustrated with this simple Venusian example:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;xml&amp;quot;&amp;gt;&lt;br /&gt;
    &amp;lt;file_definition&amp;gt;&lt;br /&gt;
        &amp;lt;!-- Instantaneous outputs; every physics time steps --&amp;gt;&lt;br /&gt;
        &amp;lt;file id=&amp;quot;Xins&amp;quot;&lt;br /&gt;
              output_freq=&amp;quot;1ts&amp;quot; &lt;br /&gt;
              type=&amp;quot;one_file&amp;quot;&lt;br /&gt;
              enabled=&amp;quot;.true.&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
            &amp;lt;!-- VARS 2D --&amp;gt;&lt;br /&gt;
            &amp;lt;field_group operation=&amp;quot;instant&amp;quot;&lt;br /&gt;
                         freq_op=&amp;quot;1ts&amp;quot;&amp;gt;&lt;br /&gt;
                &amp;lt;field field_ref=&amp;quot;phis&amp;quot; operation=&amp;quot;once&amp;quot; /&amp;gt;&lt;br /&gt;
                &amp;lt;field field_ref=&amp;quot;aire&amp;quot; operation=&amp;quot;once&amp;quot; /&amp;gt;&lt;br /&gt;
                &amp;lt;field field_ref=&amp;quot;tsol&amp;quot; /&amp;gt;&lt;br /&gt;
            &amp;lt;/field_group&amp;gt;&lt;br /&gt;
&lt;br /&gt;
            &amp;lt;!-- VARS 3D --&amp;gt;&lt;br /&gt;
            &amp;lt;field_group operation=&amp;quot;instant&amp;quot;&lt;br /&gt;
                         freq_op=&amp;quot;1ts&amp;quot;&amp;gt;&lt;br /&gt;
                &amp;lt;field field_ref=&amp;quot;temp&amp;quot; /&amp;gt;&lt;br /&gt;
                &amp;lt;field field_ref=&amp;quot;pres&amp;quot; /&amp;gt;&lt;br /&gt;
            &amp;lt;/field_group&amp;gt;&lt;br /&gt;
        &amp;lt;/file&amp;gt;&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
It is mandatory to have an '''operation''' attribute defined (this can be done either at the level of the variable definition or, as above, at the level of the output file definition); there is no default. If this attribute is missing you will get an error message along the lines of:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
In file &amp;quot;attribute_template_impl.hpp&amp;quot;, function &amp;quot;virtual void xios::CAttributeTemplate&amp;lt;std::basic_string&amp;lt;char&amp;gt;&amp;gt;::checkEmpty() const [T = std::basic_string&amp;lt;char&amp;gt;]&amp;quot;,  line 78 -&amp;gt; On checking attribute with id=operation : data is not initialized &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== More thorough description with illustrative examples ===&lt;br /&gt;
TODO: PUT SOME SIMPLE ILLUSTRATIVE EXAMPLES HERE&lt;br /&gt;
&lt;br /&gt;
See for example the following page: [[controling outputs in the dynamics with DYNAMICO]]&lt;br /&gt;
&lt;br /&gt;
==== Specifying that the time axis should be labeled in days rather than seconds ====&lt;br /&gt;
The default for XIOS is to label temporal axes (&amp;quot;time_instant&amp;quot; and &amp;quot;time_counter&amp;quot;) in seconds, but one may ask that they be labelled in days by setting the optional '''time_units''' attribute of a file to '''days''', e.g.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;xml&amp;quot;&amp;gt;&lt;br /&gt;
 &amp;lt;file id=&amp;quot;my_output_file&amp;quot; &lt;br /&gt;
       output_freq=&amp;quot;1ts&amp;quot;&lt;br /&gt;
       time_units=&amp;quot;days&amp;quot;&amp;gt;&lt;br /&gt;
...&lt;br /&gt;
&lt;br /&gt;
 &amp;lt;/file&amp;gt;&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Force flushing and writing files every ### time steps ====&lt;br /&gt;
XIOS handles its buffers and only writes to output files when needed. This is quite efficient and worthwhile, except for instance when the model crashes, as some buffered data might then not make it to the output files. One may use the optional '''sync_freq''' attribute of a file to force XIOS to write to the file at some predefined frequency, e.g.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;xml&amp;quot;&amp;gt;&lt;br /&gt;
 &amp;lt;file id=&amp;quot;my_output_file&amp;quot; &lt;br /&gt;
       output_freq=&amp;quot;1ts&amp;quot;&lt;br /&gt;
       sync_freq=&amp;quot;1ts&amp;quot;&amp;gt;&lt;br /&gt;
...&lt;br /&gt;
&lt;br /&gt;
 &amp;lt;/file&amp;gt;&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
Very useful when debugging.&lt;br /&gt;
&lt;br /&gt;
==== Specifying an offset (in time) for the outputs ====&lt;br /&gt;
One may use the '''record_offset''' attribute of a file to impose that the outputs in the file begin only after a certain number of time steps of the simulation (useful for instance when debugging). For instance, if there are 192 time steps per day and the run is 10 days long, but one only wants outputs for the last day at every time step of that day, then one should set a '''record_offset''' of -9*192=-1728 (note the ''-''; the value to specify is negative), e.g.:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;xml&amp;quot;&amp;gt;&lt;br /&gt;
 &amp;lt;file id=&amp;quot;my_output_file&amp;quot; &lt;br /&gt;
       output_freq=&amp;quot;1ts&amp;quot;&lt;br /&gt;
       record_offset=&amp;quot;-1728ts&amp;quot;&lt;br /&gt;
       time_units=&amp;quot;days&amp;quot;&amp;gt;&lt;br /&gt;
            &amp;lt;field_group operation=&amp;quot;instant&amp;quot;&lt;br /&gt;
                         freq_op=&amp;quot;1ts&amp;quot;&amp;gt;&lt;br /&gt;
                &amp;lt;field field_ref=&amp;quot;my_variable&amp;quot; /&amp;gt;&lt;br /&gt;
            &amp;lt;/field_group&amp;gt;&lt;br /&gt;
 &amp;lt;/file&amp;gt;&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
The ''time_counter'' values in the file will run from 9.0052 (=9.+1./192.) to 10. (since here the time axis unit is requested to be in days).&lt;br /&gt;
&lt;br /&gt;
An alternative way to exclude the first n time steps of a time series from the output is to specify a ''freq_offset'' attribute for the field. For instance, following up on the example above, to extract every time step of the final (10th) day of a simulation with 192 time steps per day, one should specify a '''freq_offset''' of 9*192=1728, e.g.:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;xml&amp;quot;&amp;gt;&lt;br /&gt;
 &amp;lt;file id=&amp;quot;my_output_file&amp;quot; &lt;br /&gt;
       output_freq=&amp;quot;1ts&amp;quot;&lt;br /&gt;
       time_units=&amp;quot;days&amp;quot;&amp;gt;&lt;br /&gt;
            &amp;lt;field_group operation=&amp;quot;instant&amp;quot;&lt;br /&gt;
                         freq_offset=&amp;quot;1728ts&amp;quot;&lt;br /&gt;
                         freq_op=&amp;quot;1ts&amp;quot;&amp;gt;&lt;br /&gt;
                &amp;lt;field field_ref=&amp;quot;my_variable&amp;quot; /&amp;gt;&lt;br /&gt;
            &amp;lt;/field_group&amp;gt;&lt;br /&gt;
 &amp;lt;/file&amp;gt;&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
The main difference, compared to the previous example using the '''record_offset''' file attribute, is that the ''time_counter'' values in the file will this time be from 0.0052 (=1./192) to 1.0.&lt;br /&gt;
&lt;br /&gt;
==== Saving or loading interpolation weights ====&lt;br /&gt;
With the XIOS library one can define output domains (grids) which are different from the input domains (grids), and XIOS performs the necessary interpolation.&lt;br /&gt;
&lt;br /&gt;
This requires, once source and destination grids are known, computing some interpolation weights (during the initialization step). For large grids this can take some time. One can however tell XIOS to save the interpolation weights to a file and use that file (if it is present) rather than recompute the weights when a new simulation is run.&lt;br /&gt;
&lt;br /&gt;
In practice one must add extra keys to the &amp;quot;interpolate_domain&amp;quot; tag, e.g.:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;xml&amp;quot;&amp;gt;&lt;br /&gt;
    &amp;lt;domain id=&amp;quot;dom_256_192&amp;quot; type=&amp;quot;rectilinear&amp;quot; ni_glo=&amp;quot;256&amp;quot; nj_glo=&amp;quot;192&amp;quot; &amp;gt;&lt;br /&gt;
      &amp;lt;generate_rectilinear_domain/&amp;gt;&lt;br /&gt;
      &amp;lt;interpolate_domain order=&amp;quot;1&amp;quot; write_weight=&amp;quot;true&amp;quot; mode=&amp;quot;read_or_compute&amp;quot; /&amp;gt;&lt;br /&gt;
    &amp;lt;/domain&amp;gt;&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
This will automatically generate a NetCDF file containing the weights. The default file name will be something like xios_interpolation_weights_CONTEXT_INPUTDOMAIN_OUTPUTDOMAIN.nc, where CONTEXT, INPUTDOMAIN and OUTPUTDOMAIN are inherited from the context (i.e. from their definitions in the XML files).&lt;br /&gt;
&lt;br /&gt;
One can specify the name of the file with the key &amp;quot;weight_filename&amp;quot;, e.g.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;xml&amp;quot;&amp;gt;&lt;br /&gt;
    &amp;lt;domain id=&amp;quot;dom_256_192&amp;quot; type=&amp;quot;rectilinear&amp;quot; ni_glo=&amp;quot;256&amp;quot; nj_glo=&amp;quot;192&amp;quot; &amp;gt;&lt;br /&gt;
      &amp;lt;generate_rectilinear_domain/&amp;gt;&lt;br /&gt;
      &amp;lt;interpolate_domain order=&amp;quot;1&amp;quot; write_weight=&amp;quot;true&amp;quot; mode=&amp;quot;read_or_compute&amp;quot; weight_filename=&amp;quot;xios_weights&amp;quot; /&amp;gt;&lt;br /&gt;
    &amp;lt;/domain&amp;gt;&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
It can also happen that for a given variable we want the interpolation not to be conservative. For example, a variable like the mesh-cell area should not be interpolated between different domains. Since the interpolation is specific to a domain (and defined in the &amp;quot;domain id&amp;quot;), we have to create a new domain for all the variables that should be interpolated in a different way. For the variable &amp;quot;Area&amp;quot; for example, the syntax is as follows:&lt;br /&gt;
&lt;br /&gt;
* Create the new domain:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;xml&amp;quot;&amp;gt;&lt;br /&gt;
&amp;lt;domain id=&amp;quot;dom_64_48_quantity_T&amp;quot; type=&amp;quot;rectilinear&amp;quot; ni_glo=&amp;quot;64&amp;quot; nj_glo=&amp;quot;48&amp;quot;   &amp;gt;&lt;br /&gt;
   &amp;lt;generate_rectilinear_domain/&amp;gt;&lt;br /&gt;
   &amp;lt;interpolate_domain quantity=&amp;quot;true&amp;quot;/&amp;gt;&lt;br /&gt;
&amp;lt;/domain&amp;gt;&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* Assign the variable to this domain:&lt;br /&gt;
Later in the context file, the variable should be output using this new domain (note that it can still be written to the same file as the other variables):&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;xml&amp;quot;&amp;gt;&lt;br /&gt;
&amp;lt;field_group domain_ref=&amp;quot;dom_64_48_quantity_T&amp;quot;&amp;gt;&lt;br /&gt;
  &amp;lt;field_group operation=&amp;quot;instant&amp;quot;&lt;br /&gt;
     freq_op=&amp;quot;1ts&amp;quot;&amp;gt;&lt;br /&gt;
    &amp;lt;field field_ref=&amp;quot;area&amp;quot; operation=&amp;quot;once&amp;quot; /&amp;gt;&lt;br /&gt;
  &amp;lt;/field_group&amp;gt;&lt;br /&gt;
&amp;lt;/field_group&amp;gt;&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Examples &amp;amp; adding outputs ===&lt;br /&gt;
&lt;br /&gt;
See [[LMDZ XIOS outputs]] for more details on the outputs generated via XIOS for the LMDZ dynamics/physics.&lt;br /&gt;
&lt;br /&gt;
If you use [[DYNAMICO]], check out the [[Controling outputs in the dynamics with DYNAMICO|DYNAMICO outputs]] page.&lt;br /&gt;
&lt;br /&gt;
== Using the XIOS library in client-server mode ==&lt;br /&gt;
Running with XIOS in client-server mode requires the following:&lt;br /&gt;
* The client-server mode should be activated (in file ''iodef.xml''):&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;xml&amp;quot;&amp;gt;&lt;br /&gt;
                        &amp;lt;variable id=&amp;quot;using_server&amp;quot; type=&amp;quot;bool&amp;quot;&amp;gt;true&amp;lt;/variable&amp;gt;&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
* The '''xios_server.exe''' executable should be present alongside the GCM executable '''gcm_***.e''' and the two should be run together in MPMD (Multiple Programs, Multiple Data) mode: some of the MPI processes are allocated to the GCM and the others to XIOS. In practice XIOS needs far fewer processes than the GCM, although this also depends on the amount of outputs and postprocessing computations, e.g. temporal averaging and grid interpolations, that XIOS will have to do. For example, if the MPI execution wrapper is ''mpirun'' and 26 processes are to be used by the GCM ''gcm_64x52x20_phygeneric_para.e'' and 2 by XIOS (i.e. 28 processes overall):&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
mpirun -np 26 gcm_64x52x20_phygeneric_para.e &amp;gt; gcm.out 2&amp;gt;&amp;amp;1 : -np 2 xios_server.exe&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
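On Slurm-based clusters, the same MPMD layout can be obtained with ''srun --multi-prog''; here is a minimal sketch (assuming the same executables and process counts as in the ''mpirun'' example above; adapt them to your case):&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
# map MPI ranks 0-25 to the GCM and ranks 26-27 to the XIOS server&lt;br /&gt;
cat &amp;gt; mpmd.conf &amp;lt;&amp;lt; EOF&lt;br /&gt;
0-25  ./gcm_64x52x20_phygeneric_para.e&lt;br /&gt;
26-27 ./xios_server.exe&lt;br /&gt;
EOF&lt;br /&gt;
srun -n 28 --multi-prog mpmd.conf&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;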
&lt;br /&gt;
[[Category:Generic-Model]]&lt;br /&gt;
[[Category:Mars-Model]]&lt;br /&gt;
[[Category:Venus-Model]]&lt;br /&gt;
[[Category:Titan-Model]]&lt;/div&gt;</summary>
		<author><name>Emillour</name></author>	</entry>

	<entry>
		<id>http://lmdz-forge.lmd.jussieu.fr/mediawiki/Planets/index.php?title=The_XIOS_Library&amp;diff=3258</id>
		<title>The XIOS Library</title>
		<link rel="alternate" type="text/html" href="http://lmdz-forge.lmd.jussieu.fr/mediawiki/Planets/index.php?title=The_XIOS_Library&amp;diff=3258"/>
				<updated>2026-04-14T16:06:51Z</updated>
		
		<summary type="html">&lt;p&gt;Emillour: /* axes, domains and grids */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;The [https://forge.ipsl.jussieu.fr/ioserver/wiki XIOS] (Xml I/O Server) library is based on client-server principles where the server manages the outputs asynchronously from the client (the climate model) so that the bottleneck of writing data is alleviated.&lt;br /&gt;
&lt;br /&gt;
== Installing the XIOS library ==&lt;br /&gt;
&lt;br /&gt;
=== Prerequisites ===&lt;br /&gt;
There are a couple of prerequisites to installing and using the XIOS library:&lt;br /&gt;
# An MPI library must be available&lt;br /&gt;
# A NetCDF4-HDF5 library, preferably compiled with MPI enabled, must be available (see, e.g., the dedicated section of [[The_netCDF_library]])&lt;br /&gt;
The rest of this page assumes all prerequisites are met. Those wishing to build an appropriate NetCDF library on their Linux machine may find the following installation script useful: https://web.lmd.jussieu.fr/~lmdz/pub/script_install/install_netcdf4_hdf5.bash (it might need some adaptation to your specific case).&lt;br /&gt;
&lt;br /&gt;
=== Downloading and compiling the XIOS library ===&lt;br /&gt;
The XIOS source code is available for download using svn (subversion). To download it, go to your trunk repository and run, e.g.:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
svn co http://forge.ipsl.fr/ioserver/svn/XIOS/trunk XIOS&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
* To compile the library, one must first have adequate architecture &amp;quot;arch&amp;quot; files at hand, just like for the GCM (see [[The_Target Architecture_(&amp;quot;arch&amp;quot;)_Files]]). In principle both the ''arch.env'' and ''arch.path'' files could be the same as for the GCM; ''arch.fcm'' will of course differ, as the XIOS source code is in C++ (along with a Fortran interface). If using a &amp;quot;known&amp;quot; machine (e.g. Occigen, Irene-Rome, Ciclad), ready-to-use up-to-date arch files for that machine should be present in the ''arch'' directory. If not, you will have to create your own (it is advised to use the existing ones as templates!).&lt;br /&gt;
* Assuming ''some_machine'' arch files (i.e. files ''arch-some_machine.env'', ''arch-some_machine.path'', ''arch-some_machine.fcm'') are present in the '''arch''' subdirectory, compiling XIOS is done using the dedicated ''make_xios'' script, e.g.:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
./make_xios --prod --arch some_machine --job 8 &lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
If the compilation steps went well, the '''lib''' directory should contain the file ''libxios.a'' and the '''bin''' directory should contain&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
fcm_env.ksh  generic_testcase.exe  xios_server.exe&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== XIOS documentation ===&lt;br /&gt;
Note that the downloaded XIOS distribution includes some documentation in the '''doc''' subdirectory:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
reference_xml.pdf  XIOS_reference_guide.pdf  XIOS_user_guide.pdf&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Definitely worth checking out!&lt;br /&gt;
&lt;br /&gt;
== Compiling the GCM with the XIOS library ==&lt;br /&gt;
&lt;br /&gt;
To compile with XIOS enabled, one must specify the option&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
 -io xios&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
to the [[The makelmdz fcm GCM Compilation Script|makelmdz_fcm]] script.&lt;br /&gt;
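For instance, a full compilation command might look like the following (a hedged illustration: the architecture, parallelism mode, resolution and physics package below are placeholders to adapt to your own setup):&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
./makelmdz_fcm -arch some_machine -parallel mpi -io xios -d 64x48x54 -p mars gcm&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;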
&lt;br /&gt;
&lt;br /&gt;
== XIOS output controls ==&lt;br /&gt;
&lt;br /&gt;
All aspects of the outputs (name, units, file, post-processing operations, etc.) are controlled by dedicated XML files which are read at run-time. Samples of XML files are provided in the &amp;quot;deftank&amp;quot; directory.&lt;br /&gt;
&lt;br /&gt;
=== In a nutshell ===&lt;br /&gt;
* The master file read by XIOS is ''iodef.xml'', which contains specific XIOS parameters such as ''using_server'' to dictate whether XIOS is run in client-server (true) or attached (false) mode, ''info_level'' to set the verbosity of XIOS messages (0: none, 100: very verbose), and ''print_file'' to set whether XIOS messages are sent to standard output (false) or to dedicated xios_*.out and xios_*.err files (true).&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;xml&amp;quot;&amp;gt;&lt;br /&gt;
                        &amp;lt;variable id=&amp;quot;using_server&amp;quot; type=&amp;quot;bool&amp;quot;&amp;gt;false&amp;lt;/variable&amp;gt;&lt;br /&gt;
                        &amp;lt;variable id=&amp;quot;info_level&amp;quot; type=&amp;quot;int&amp;quot;&amp;gt;0&amp;lt;/variable&amp;gt;&lt;br /&gt;
                        &amp;lt;variable id=&amp;quot;print_file&amp;quot; type=&amp;quot;bool&amp;quot;&amp;gt; false &amp;lt;/variable&amp;gt;&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
* It is common practice to have the LMDZ-related definitions and outputs in separate XML files, e.g. ''context_lmdz.xml'', which are included in ''iodef.xml'' via the ''src'' attribute, e.g.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;xml&amp;quot;&amp;gt;&lt;br /&gt;
       &amp;lt;context id=&amp;quot;LMDZ&amp;quot; src=&amp;quot;./context_lmdz_physics.xml&amp;quot;/&amp;gt;&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
The ''context_lmdz_physics.xml'' file must then contain all field/grid/file output definitions, which may be split into multiple XML files; for instance the definition of model variables (i.e. all fields that may be output) is often put in a separate file ''field_def_physics.xml'' which is referenced within ''context_lmdz_physics.xml'' as:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;xml&amp;quot;&amp;gt;&lt;br /&gt;
       &amp;lt;field_definition src=&amp;quot;./field_def_physics.xml&amp;quot; /&amp;gt;&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
Concerning output files, the current recommended practice is to use separate ''file_def_histsomething_lmdz.xml'' files, one for each ''histsomething.nc'' file to generate, and to include these in ''context_lmdz.xml'' using the ''file_definition'' key, e.g.:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;xml&amp;quot;&amp;gt;&lt;br /&gt;
  &amp;lt;!-- Define output files&lt;br /&gt;
              Each file contains the list of variables and their output levels --&amp;gt;&lt;br /&gt;
  &amp;lt;file_definition src=&amp;quot;./file_def_histins.xml&amp;quot;/&amp;gt;&lt;br /&gt;
  &amp;lt;file_definition src=&amp;quot;./file_def_specIR.xml&amp;quot;/&amp;gt;&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Some XIOS key concepts ===&lt;br /&gt;
==== calendars ====&lt;br /&gt;
The calendar is set via the Fortran source code (see '''xios_output_mod.F90''' in the physics). Without going into details here, note that it is flexible enough that day length, year length, etc. may be defined by the user. However, a strong limitation is that the calendar time step must be an integer number of seconds.&lt;br /&gt;
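As an illustration, here is a minimal (and hedged) Fortran sketch of what a user-defined calendar declaration can look like; the '''xios_define_calendar''' routine is part of the XIOS Fortran interface, but the exact attribute names and the Mars-like values below are assumptions to check against the XIOS user guide and your XIOS version:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;fortran&amp;quot;&amp;gt;&lt;br /&gt;
USE xios&lt;br /&gt;
TYPE(xios_duration) :: dt&lt;br /&gt;
! the calendar time step must be an integer number of seconds (see above)&lt;br /&gt;
dt%second = 924&lt;br /&gt;
! day_length (in seconds) and year_length (in days) are illustrative assumptions&lt;br /&gt;
CALL xios_define_calendar(type=&amp;quot;user_defined&amp;quot;, timestep=dt, &amp;amp;&lt;br /&gt;
                          day_length=88775, year_length=669)&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;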
&lt;br /&gt;
TODO: refer to specific stuff/settings for Mars, Generic, Venus cases...&lt;br /&gt;
&lt;br /&gt;
==== axes, domains and grids ====&lt;br /&gt;
First a bit of XIOS nomenclature:&lt;br /&gt;
* an '''axis''' is 1D; e.g. pseudo-altitude or pseudo-pressure or sub-surface depth or wavelength or ...&lt;br /&gt;
* a '''domain''' is a horizontal 2D surface; e.g. the globe or some portion of it&lt;br /&gt;
* a '''grid''' is the combination of a domain and one axis (or more); e.g. the atmosphere or the sub-surface of a planet&lt;br /&gt;
Most of the '''axis''' and '''domain''' definitions are made in the code (since all the information is known there) and are only referred to in the XML via dedicated '''id''' values, e.g.:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;xml&amp;quot;&amp;gt;&lt;br /&gt;
    &amp;lt;axis_definition&amp;gt;&lt;br /&gt;
        &amp;lt;axis id=&amp;quot;presnivs&amp;quot; &lt;br /&gt;
              standard_name=&amp;quot;Pseudo-pressure of model vertical levels&amp;quot; &lt;br /&gt;
              unit=&amp;quot;Pa&amp;quot;&amp;gt;&lt;br /&gt;
        &amp;lt;/axis&amp;gt;&lt;br /&gt;
        &amp;lt;axis id=&amp;quot;altitude&amp;quot; &lt;br /&gt;
              standard_name=&amp;quot;Pseudo-altitude of model vertical levels&amp;quot; &lt;br /&gt;
              unit=&amp;quot;km&amp;quot;&amp;gt;&lt;br /&gt;
        &amp;lt;/axis&amp;gt;&lt;br /&gt;
    &amp;lt;/axis_definition&amp;gt;&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
Likewise the global computational domain is defined in the code and known in the XML via its '''id'''(=&amp;quot;dom_glo&amp;quot;):&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;xml&amp;quot;&amp;gt;&lt;br /&gt;
    &amp;lt;domain_definition&amp;gt;&lt;br /&gt;
        &amp;lt;domain id=&amp;quot;dom_glo&amp;quot; data_dim=&amp;quot;2&amp;quot; /&amp;gt;&lt;br /&gt;
    &amp;lt;/domain_definition&amp;gt;&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
From there one may generate a grid, e.g.:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;xml&amp;quot;&amp;gt;&lt;br /&gt;
    &amp;lt;grid_definition&amp;gt;&lt;br /&gt;
        &amp;lt;!-- toggle axis id below to change output vertical axis --&amp;gt;&lt;br /&gt;
        &amp;lt;grid id=&amp;quot;grid_3d&amp;quot;&amp;gt;&lt;br /&gt;
            &amp;lt;domain id=&amp;quot;dom_glo&amp;quot; /&amp;gt;&lt;br /&gt;
            &amp;lt;!-- &amp;lt;axis id=&amp;quot;presnivs&amp;quot; /&amp;gt; --&amp;gt;&lt;br /&gt;
            &amp;lt;axis id=&amp;quot;altitude&amp;quot; /&amp;gt;&lt;br /&gt;
        &amp;lt;/grid&amp;gt;&lt;br /&gt;
    &amp;lt;/grid_definition&amp;gt;&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
Note that '''grid_3d''' is defined in the XML file and thus may be changed by the user without having to modify the PCM source code, for instance by simply adding the following definitions:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;xml&amp;quot;&amp;gt;&lt;br /&gt;
    &amp;lt;domain_definition&amp;gt;&lt;br /&gt;
...&lt;br /&gt;
        &amp;lt;domain id=&amp;quot;dom_128_96&amp;quot; ni_glo=&amp;quot;128&amp;quot; nj_glo=&amp;quot;96&amp;quot; type=&amp;quot;rectilinear&amp;quot;  &amp;gt;&lt;br /&gt;
          &amp;lt;generate_rectilinear_domain/&amp;gt;&lt;br /&gt;
          &amp;lt;interpolate_domain order=&amp;quot;1&amp;quot;/&amp;gt;&lt;br /&gt;
        &amp;lt;/domain&amp;gt;&lt;br /&gt;
    &amp;lt;/domain_definition&amp;gt;&lt;br /&gt;
    &amp;lt;grid_definition&amp;gt;&lt;br /&gt;
...&lt;br /&gt;
        &amp;lt;grid id=&amp;quot;my_grid_3d&amp;quot;&amp;gt;&lt;br /&gt;
            &amp;lt;domain id=&amp;quot;dom_128_96&amp;quot; /&amp;gt;&lt;br /&gt;
            &amp;lt;axis id=&amp;quot;altitude&amp;quot; /&amp;gt;&lt;br /&gt;
        &amp;lt;/grid&amp;gt;&lt;br /&gt;
    &amp;lt;/grid_definition&amp;gt;&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
Requesting the output of variables on grid '''my_grid_3d''' will trigger XIOS interpolation so that the output fields are on a regular 128x96 longitude-latitude grid.&lt;br /&gt;
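For instance (a minimal sketch reusing the field and grid definitions above), one may override the grid of a given field directly in the output file definition via its ''grid_ref'' attribute:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;xml&amp;quot;&amp;gt;&lt;br /&gt;
            &amp;lt;field field_ref=&amp;quot;temp&amp;quot; grid_ref=&amp;quot;my_grid_3d&amp;quot; /&amp;gt;&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;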
&lt;br /&gt;
One can also add some specifications on the longitude and latitude bounds of the domain to generate:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;xml&amp;quot;&amp;gt;&lt;br /&gt;
    &amp;lt;domain_definition&amp;gt;&lt;br /&gt;
...&lt;br /&gt;
        &amp;lt;domain id=&amp;quot;dom_64_48&amp;quot; ni_glo=&amp;quot;64&amp;quot; nj_glo=&amp;quot;48&amp;quot; type=&amp;quot;rectilinear&amp;quot; &amp;gt;&lt;br /&gt;
          &amp;lt;generate_rectilinear_domain lat_start=&amp;quot;90&amp;quot; lat_end=&amp;quot;-90&amp;quot; lon_start=&amp;quot;-180&amp;quot; lon_end=&amp;quot;174.375&amp;quot; /&amp;gt;&lt;br /&gt;
          &amp;lt;interpolate_domain order=&amp;quot;1&amp;quot;/&amp;gt;&lt;br /&gt;
        &amp;lt;/domain&amp;gt;&lt;br /&gt;
    &amp;lt;/domain_definition&amp;gt;&lt;br /&gt;
...&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== field definitions ====&lt;br /&gt;
For XIOS a field is defined with an '''id''' and must be assigned a reference '''grid''' (this is how XIOS knows whether a field is a simple scalar, a surface or a volume, and thus to which computational grid it is related), e.g.:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;xml&amp;quot;&amp;gt;&lt;br /&gt;
    &amp;lt;field_definition prec=&amp;quot;4&amp;quot;&amp;gt;&lt;br /&gt;
       &amp;lt;field_group id=&amp;quot;fields_2D&amp;quot; domain_ref=&amp;quot;dom_glo&amp;quot;&amp;gt;&lt;br /&gt;
           &amp;lt;field id=&amp;quot;aire&amp;quot;&lt;br /&gt;
                   long_name=&amp;quot;Mesh area&amp;quot;&lt;br /&gt;
                   unit=&amp;quot;m2&amp;quot; /&amp;gt;&lt;br /&gt;
           &amp;lt;field id=&amp;quot;phis&amp;quot;&lt;br /&gt;
                   long_name=&amp;quot;Surface geopotential (gz)&amp;quot; &lt;br /&gt;
                   unit=&amp;quot;m2/s2&amp;quot; /&amp;gt;&lt;br /&gt;
            &amp;lt;field id=&amp;quot;tsol&amp;quot;&lt;br /&gt;
                   long_name=&amp;quot;Surface Temperature&amp;quot; &lt;br /&gt;
                   unit=&amp;quot;K&amp;quot; /&amp;gt;&lt;br /&gt;
...&lt;br /&gt;
&lt;br /&gt;
       &amp;lt;/field_group&amp;gt;&lt;br /&gt;
&lt;br /&gt;
        &amp;lt;field_group id=&amp;quot;fields_3D&amp;quot; grid_ref=&amp;quot;grid_3d&amp;quot;&amp;gt;&lt;br /&gt;
            &amp;lt;field id=&amp;quot;temp&amp;quot;&lt;br /&gt;
                   long_name=&amp;quot;Atmospheric temperature&amp;quot;&lt;br /&gt;
                   unit=&amp;quot;K&amp;quot; /&amp;gt;&lt;br /&gt;
            &amp;lt;field id=&amp;quot;pres&amp;quot;&lt;br /&gt;
                   long_name=&amp;quot;Atmospheric pressure&amp;quot; &lt;br /&gt;
                   unit=&amp;quot;Pa&amp;quot; /&amp;gt;&lt;br /&gt;
...&lt;br /&gt;
&lt;br /&gt;
       &amp;lt;/field_group&amp;gt;&lt;br /&gt;
&lt;br /&gt;
    &amp;lt;/field_definition&amp;gt;&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
It is vital that all the fields sent to XIOS by the code are declared in the XML file; otherwise there will be a run-time error message along the lines of:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
In file &amp;quot;object_factory_impl.hpp&amp;quot;, function &amp;quot;static std::shared_ptr&amp;lt;U&amp;gt; xios::CObjectFactory::GetObject(const std::__cxx11::basic_string&amp;lt;char, std::char_traits&amp;lt;char&amp;gt;, std::allocator&amp;lt;char&amp;gt;&amp;gt; &amp;amp;) [with U = xios::CAxis]&amp;quot;,  line 78 -&amp;gt; [ id = weirdvar, U = field ] object was not found.&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
In the message above, XIOS received from the code a variable called &amp;quot;weirdvar&amp;quot; which is not defined in the XML. One must update the XML file with the proper definition (&amp;lt;field id=&amp;quot;weirdvar&amp;quot; ... /&amp;gt;).&lt;br /&gt;
&lt;br /&gt;
==== output file definitions ====&lt;br /&gt;
It is by defining a '''file''' that the user specifies what the output file will be, which variables it will contain, etc., as illustrated with this simple Venusian example:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;xml&amp;quot;&amp;gt;&lt;br /&gt;
    &amp;lt;file_definition&amp;gt;&lt;br /&gt;
        &amp;lt;!-- Instantaneous outputs; every physics time step --&amp;gt;&lt;br /&gt;
        &amp;lt;file id=&amp;quot;Xins&amp;quot;&lt;br /&gt;
              output_freq=&amp;quot;1ts&amp;quot; &lt;br /&gt;
              type=&amp;quot;one_file&amp;quot;&lt;br /&gt;
              enabled=&amp;quot;.true.&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
            &amp;lt;!-- VARS 2D --&amp;gt;&lt;br /&gt;
            &amp;lt;field_group operation=&amp;quot;instant&amp;quot;&lt;br /&gt;
                         freq_op=&amp;quot;1ts&amp;quot;&amp;gt;&lt;br /&gt;
                &amp;lt;field field_ref=&amp;quot;phis&amp;quot; operation=&amp;quot;once&amp;quot; /&amp;gt;&lt;br /&gt;
                &amp;lt;field field_ref=&amp;quot;aire&amp;quot; operation=&amp;quot;once&amp;quot; /&amp;gt;&lt;br /&gt;
                &amp;lt;field field_ref=&amp;quot;tsol&amp;quot; /&amp;gt;&lt;br /&gt;
            &amp;lt;/field_group&amp;gt;&lt;br /&gt;
&lt;br /&gt;
            &amp;lt;!-- VARS 3D --&amp;gt;&lt;br /&gt;
            &amp;lt;field_group operation=&amp;quot;instant&amp;quot;&lt;br /&gt;
                         freq_op=&amp;quot;1ts&amp;quot;&amp;gt;&lt;br /&gt;
                &amp;lt;field field_ref=&amp;quot;temp&amp;quot; /&amp;gt;&lt;br /&gt;
                &amp;lt;field field_ref=&amp;quot;pres&amp;quot; /&amp;gt;&lt;br /&gt;
            &amp;lt;/field_group&amp;gt;&lt;br /&gt;
        &amp;lt;/file&amp;gt;&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
It is mandatory to have an '''operation''' attribute defined (this can be done either at the level of the variable definition or, as above, at the level of the output file definition); there is no default. If this attribute is missing you will get an error message along the lines of:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
In file &amp;quot;attribute_template_impl.hpp&amp;quot;, function &amp;quot;virtual void xios::CAttributeTemplate&amp;lt;std::basic_string&amp;lt;char&amp;gt;&amp;gt;::checkEmpty() const [T = std::basic_string&amp;lt;char&amp;gt;]&amp;quot;,  line 78 -&amp;gt; On checking attribute with id=operation : data is not initialized &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== More thorough description with illustrative examples ===&lt;br /&gt;
TODO: PUT SOME SIMPLE ILLUSTRATIVE EXAMPLES HERE&lt;br /&gt;
&lt;br /&gt;
See for example the following page: [[controling outputs in the dynamics with DYNAMICO]]&lt;br /&gt;
&lt;br /&gt;
==== Specifying that the time axis should be labeled in days rather than seconds ====&lt;br /&gt;
The default for XIOS is to label temporal axes (&amp;quot;time_instant&amp;quot; and &amp;quot;time_counter&amp;quot;) in seconds, but one may ask that they be labelled in days by setting the optional '''time_units''' attribute of a file to '''days''', e.g.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;xml&amp;quot;&amp;gt;&lt;br /&gt;
 &amp;lt;file id=&amp;quot;my_output_file&amp;quot; &lt;br /&gt;
       output_freq=&amp;quot;1ts&amp;quot;&lt;br /&gt;
       time_units=&amp;quot;days&amp;quot;&amp;gt;&lt;br /&gt;
...&lt;br /&gt;
&lt;br /&gt;
 &amp;lt;/file&amp;gt;&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Force flushing and writing files every ### time steps ====&lt;br /&gt;
XIOS handles its buffers and only writes to output files when needed. This is quite efficient and worthwhile, except for instance when the model crashes, as some buffered data might then not make it to the output files. One may use the optional '''sync_freq''' attribute of a file to force XIOS to write to the file at some predefined frequency, e.g.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;xml&amp;quot;&amp;gt;&lt;br /&gt;
 &amp;lt;file id=&amp;quot;my_output_file&amp;quot; &lt;br /&gt;
       output_freq=&amp;quot;1ts&amp;quot;&lt;br /&gt;
       sync_freq=&amp;quot;1ts&amp;quot;&amp;gt;&lt;br /&gt;
...&lt;br /&gt;
&lt;br /&gt;
 &amp;lt;/file&amp;gt;&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
Very useful when debugging.&lt;br /&gt;
&lt;br /&gt;
==== Specifying an offset (in time) for the outputs ====&lt;br /&gt;
One may use the '''record_offset''' attribute of a file to impose that the outputs in the file begin only after a certain number of time steps of the simulation (useful for instance when debugging). For instance, if there are 192 time steps per day and the run is 10 days long, but one only wants outputs for the last day at every time step of that day, then one should set a '''record_offset''' of -9*192=-1728 (note the ''-''; the value to specify is negative), e.g.:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;xml&amp;quot;&amp;gt;&lt;br /&gt;
 &amp;lt;file id=&amp;quot;my_output_file&amp;quot; &lt;br /&gt;
       output_freq=&amp;quot;1ts&amp;quot;&lt;br /&gt;
       record_offset=&amp;quot;-1728ts&amp;quot;&lt;br /&gt;
       time_units=&amp;quot;days&amp;quot;&amp;gt;&lt;br /&gt;
            &amp;lt;field_group operation=&amp;quot;instant&amp;quot;&lt;br /&gt;
                         freq_op=&amp;quot;1ts&amp;quot;&amp;gt;&lt;br /&gt;
                &amp;lt;field field_ref=&amp;quot;my_variable&amp;quot; /&amp;gt;&lt;br /&gt;
            &amp;lt;/field_group&amp;gt;&lt;br /&gt;
 &amp;lt;/file&amp;gt;&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
The ''time_counter'' values in the file will run from 9.0052 (=9.+1./192.) to 10. (since here the time axis unit is requested to be in days).&lt;br /&gt;
&lt;br /&gt;
An alternative way to exclude the first n time steps of a time series from the output is to specify a ''freq_offset'' attribute for the field. For instance, following up on the example above, to extract every time step of the final (10th) day of a simulation with 192 time steps per day, one should specify a '''freq_offset''' of 9*192=1728, e.g.:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;xml&amp;quot;&amp;gt;&lt;br /&gt;
 &amp;lt;file id=&amp;quot;my_output_file&amp;quot; &lt;br /&gt;
       output_freq=&amp;quot;1ts&amp;quot;&lt;br /&gt;
       time_units=&amp;quot;days&amp;quot;&amp;gt;&lt;br /&gt;
            &amp;lt;field_group operation=&amp;quot;instant&amp;quot;&lt;br /&gt;
                         freq_offset=&amp;quot;1728ts&amp;quot;&lt;br /&gt;
                         freq_op=&amp;quot;1ts&amp;quot;&amp;gt;&lt;br /&gt;
                &amp;lt;field field_ref=&amp;quot;my_variable&amp;quot; /&amp;gt;&lt;br /&gt;
            &amp;lt;/field_group&amp;gt;&lt;br /&gt;
 &amp;lt;/file&amp;gt;&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
The main difference, compared to the previous example using the '''record_offset''' file attribute, is that the ''time_counter'' values in the file will this time be from 0.0052 (=1./192) to 1.0.&lt;br /&gt;
&lt;br /&gt;
==== Saving or loading interpolation weights ====&lt;br /&gt;
With the XIOS library one can define output domains (grids) which are different from the input domains (grids), and XIOS performs the necessary interpolation.&lt;br /&gt;
&lt;br /&gt;
This requires, once source and destination grids are known, computing some interpolation weights (during the initialization step). For large grids this can take some time. One can however tell XIOS to save the interpolation weights to a file and use that file (if it is present) rather than recompute the weights when a new simulation is run.&lt;br /&gt;
&lt;br /&gt;
In practice one must add extra keys to the &amp;quot;interpolate_domain&amp;quot; tag, e.g.:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;xml&amp;quot;&amp;gt;&lt;br /&gt;
    &amp;lt;domain id=&amp;quot;dom_256_192&amp;quot; type=&amp;quot;rectilinear&amp;quot; ni_glo=&amp;quot;256&amp;quot; nj_glo=&amp;quot;192&amp;quot; &amp;gt;&lt;br /&gt;
      &amp;lt;generate_rectilinear_domain/&amp;gt;&lt;br /&gt;
      &amp;lt;interpolate_domain order=&amp;quot;1&amp;quot; write_weight=&amp;quot;true&amp;quot; mode=&amp;quot;read_or_compute&amp;quot; /&amp;gt;&lt;br /&gt;
    &amp;lt;/domain&amp;gt;&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
This will automatically generate a NetCDF file containing the weights. The default file name will be something like xios_interpolation_weights_CONTEXT_INPUTDOMAIN_OUTPUTDOMAIN.nc, where CONTEXT, INPUTDOMAIN and OUTPUTDOMAIN are inherited from the context (i.e. from their definitions in the XML files).&lt;br /&gt;
&lt;br /&gt;
One can specify the name of the file with the key &amp;quot;weight_filename&amp;quot;, e.g.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;xml&amp;quot;&amp;gt;&lt;br /&gt;
    &amp;lt;domain id=&amp;quot;dom_256_192&amp;quot; type=&amp;quot;rectilinear&amp;quot; ni_glo=&amp;quot;256&amp;quot; nj_glo=&amp;quot;192&amp;quot; &amp;gt;&lt;br /&gt;
      &amp;lt;generate_rectilinear_domain/&amp;gt;&lt;br /&gt;
      &amp;lt;interpolate_domain order=&amp;quot;1&amp;quot; write_weight=&amp;quot;true&amp;quot; mode=&amp;quot;read_or_compute&amp;quot; weight_filename=&amp;quot;xios_weights&amp;quot; /&amp;gt;&lt;br /&gt;
    &amp;lt;/domain&amp;gt;&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
It can also happen that for a given variable we want the interpolation not to be conservative. For example, a variable like the mesh-cell area should not be interpolated between different domains. Since the interpolation is specific to a domain (and defined in the &amp;quot;domain id&amp;quot;), we have to create a new domain for all the variables that should be interpolated in a different way. For the variable &amp;quot;Area&amp;quot; for example, the syntax is as follows:&lt;br /&gt;
&lt;br /&gt;
* Create the new domain:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;xml&amp;quot;&amp;gt;&lt;br /&gt;
&amp;lt;domain id=&amp;quot;dom_64_48_quantity_T&amp;quot; type=&amp;quot;rectilinear&amp;quot; ni_glo=&amp;quot;64&amp;quot; nj_glo=&amp;quot;48&amp;quot;   &amp;gt;&lt;br /&gt;
   &amp;lt;generate_rectilinear_domain/&amp;gt;&lt;br /&gt;
   &amp;lt;interpolate_domain quantity=&amp;quot;true&amp;quot;/&amp;gt;&lt;br /&gt;
&amp;lt;/domain&amp;gt;&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* Assign the variable to this domain:&lt;br /&gt;
Later in the context file, the variable should be output using this new domain (note that it can still be written to the same file as the other variables):&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;xml&amp;quot;&amp;gt;&lt;br /&gt;
&amp;lt;field_group domain_ref=&amp;quot;dom_64_48_quantity_T&amp;quot;&amp;gt;&lt;br /&gt;
  &amp;lt;field_group operation=&amp;quot;instant&amp;quot;&lt;br /&gt;
     freq_op=&amp;quot;1ts&amp;quot;&amp;gt;&lt;br /&gt;
    &amp;lt;field field_ref=&amp;quot;area&amp;quot; operation=&amp;quot;once&amp;quot; /&amp;gt;&lt;br /&gt;
  &amp;lt;/field_group&amp;gt;&lt;br /&gt;
&amp;lt;/field_group&amp;gt;&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Examples &amp;amp; adding outputs ===&lt;br /&gt;
&lt;br /&gt;
See [[LMDZ XIOS outputs]] for more details on the outputs generated via XIOS for the LMDZ dynamics/physics.&lt;br /&gt;
&lt;br /&gt;
If you use [[DYNAMICO]], check out the [[Controling outputs in the dynamics with DYNAMICO|DYNAMICO outputs]] page.&lt;br /&gt;
&lt;br /&gt;
== Using the XIOS library in client-server mode ==&lt;br /&gt;
Running with XIOS in client-server mode requires the following:&lt;br /&gt;
* The client-server mode should be activated (in file ''iodef.xml''):&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;xml&amp;quot;&amp;gt;&lt;br /&gt;
                        &amp;lt;variable id=&amp;quot;using_server&amp;quot; type=&amp;quot;bool&amp;quot;&amp;gt;true&amp;lt;/variable&amp;gt;&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
* The '''xios_server.exe''' executable should be present alongside the GCM executable '''gcm_***.e''' and the two should be run together in MPMD (Multiple Programs, Multiple Data) mode: some of the MPI processes are allocated to the GCM and the others to XIOS. In practice XIOS needs far fewer processes than the GCM, although this also depends on the amount of outputs and postprocessing computations, e.g. temporal averaging and grid interpolations, that XIOS will have to do. For example, if the MPI execution wrapper is ''mpirun'' and 26 processes are to be used by the GCM ''gcm_64x52x20_phygeneric_para.e'' and 2 by XIOS (i.e. 28 processes overall):&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
mpirun -np 26 gcm_64x52x20_phygeneric_para.e &amp;gt; gcm.out 2&amp;gt;&amp;amp;1 : -np 2 xios_server.exe&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
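On Slurm-based clusters, the same MPMD layout can be obtained with ''srun --multi-prog''; here is a minimal sketch (assuming the same executables and process counts as in the ''mpirun'' example above; adapt them to your case):&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
# map MPI ranks 0-25 to the GCM and ranks 26-27 to the XIOS server&lt;br /&gt;
cat &amp;gt; mpmd.conf &amp;lt;&amp;lt; EOF&lt;br /&gt;
0-25  ./gcm_64x52x20_phygeneric_para.e&lt;br /&gt;
26-27 ./xios_server.exe&lt;br /&gt;
EOF&lt;br /&gt;
srun -n 28 --multi-prog mpmd.conf&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;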
&lt;br /&gt;
[[Category:Generic-Model]]&lt;br /&gt;
[[Category:Mars-Model]]&lt;br /&gt;
[[Category:Venus-Model]]&lt;br /&gt;
[[Category:Titan-Model]]&lt;/div&gt;</summary>
		<author><name>Emillour</name></author>	</entry>

	<entry>
		<id>http://lmdz-forge.lmd.jussieu.fr/mediawiki/Planets/index.php?title=Using_the_MESOIPSL_cluster&amp;diff=3251</id>
		<title>Using the MESOIPSL cluster</title>
		<link rel="alternate" type="text/html" href="http://lmdz-forge.lmd.jussieu.fr/mediawiki/Planets/index.php?title=Using_the_MESOIPSL_cluster&amp;diff=3251"/>
				<updated>2026-04-10T16:22:58Z</updated>
		
		<summary type="html">&lt;p&gt;Emillour: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;This page provides some information for those who use the MESOIPSL clusters (also known as &amp;quot;spirit&amp;quot;, replacing &amp;quot;ciclad&amp;quot;).&lt;br /&gt;
&lt;br /&gt;
Note that there are two distinct MESOIPSL Spirit clusters, one in Sorbonne Université (SU), and one in Ecole Polytechnique (X). If you log on to &amp;quot;spirit1&amp;quot; or &amp;quot;spirit2&amp;quot; (as shown below) then you are on the SU-Spirit cluster whereas if you log on to &amp;quot;spiritx1&amp;quot; or &amp;quot;spiritx2&amp;quot; then you are on the X-Spirit cluster.&lt;br /&gt;
&lt;br /&gt;
If you need to run on GPUs, that is possible using the 3rd MESOIPSL cluster, HAL.&lt;br /&gt;
&lt;br /&gt;
== How to access the cluster ==&lt;br /&gt;
If you had an account on Ciclad, then you have one on Spirit. If you need to open an account (this is of course reserved for IPSL users), then proceed to this page: https://documentations.ipsl.fr/spirit/getting_started/account.html&lt;br /&gt;
&lt;br /&gt;
Once you have an account you can ssh to the cluster via either of the spirit1 or spirit2 login nodes:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
ssh yourMESOIPSLlogin@spirit1.ipsl.fr&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
or equivalently&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
ssh yourMESOIPSLlogin@spirit2.ipsl.fr&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
IMPORTANT: ssh authentication on these machines requires ED25519 or RSA (4096 bits) key types. If your ssh connection to the machines fails, the first thing to check is that you are indeed using one of these key types.&lt;br /&gt;
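If needed, such a key can be generated (here for the ED25519 type; for the RSA variant use ''-t rsa -b 4096'' instead) with:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
ssh-keygen -t ed25519&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;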
&lt;br /&gt;
This is probably also the right place to point to the MESOIPSL cluster's main page: https://documentations.ipsl.fr/spirit/&lt;br /&gt;
&lt;br /&gt;
== OS and disk space ==&lt;br /&gt;
As the welcome message will remind you:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
Welcome to Ubuntu 20.04.5 LTS (GNU/Linux 5.4.0-125-generic x86_64)&lt;br /&gt;
CPU AMD EPYC 7402P 24-Core Processor 2.8GHz&lt;br /&gt;
=========================================================================&lt;br /&gt;
*        Mesocentre ESPRI IPSL (Cluster Spirit) JUSSIEU                 *&lt;br /&gt;
=========================================================================&lt;br /&gt;
** Disk Space :&lt;br /&gt;
- /home/login     (32Go and 300000 files max per user) : Backup every day.&lt;br /&gt;
- /data/login     ( 1To and 300000 files max per user) : NO BACKUP&lt;br /&gt;
- /scratchu/login ( 2To and 300000 files max per user) : NO BACKUP&lt;br /&gt;
- /bdd/ : Datasets&lt;br /&gt;
- /climserv-home/, /homedata ,/scratchx : SpiritX workspace ( READ-ONLY)&lt;br /&gt;
------------------------------------------------------------------------------&lt;br /&gt;
Migration Documentation ( Temporary URL )&lt;br /&gt;
https://documentations.ipsl.fr/spirit/spirit_clusters/migration_from_ciclad_climserv.html&lt;br /&gt;
**  Support Contact  mailto:meso-support@ipsl.fr&lt;br /&gt;
------------------------------------------------------------------------------&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
This is Ubuntu Linux and the &amp;quot;HOME&amp;quot; directory is quite limited in size. Most work should be done on the '''data''' and/or '''scratchu''' disks.&lt;br /&gt;
&lt;br /&gt;
It is up to you to tailor your environment. By default it is quite bare; you will need to load the modules giving access to the specific software, compilers or libraries (and versions thereof) that you require.&lt;br /&gt;
To know what modules are available:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
module avail&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
To load a given module (here the Panoply software):&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
module load panoply&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Compiling the PCMs on Spirit ==&lt;br /&gt;
&lt;br /&gt;
Many compilers and compiler versions, along with precompiled NetCDF libraries, are available on Spirit, which can simplify installing and using the model there.&lt;br /&gt;
In practice one should first decide which compiler suite to use: Intel ifort/icc (deprecated?) or GNU gfortran/gcc.&lt;br /&gt;
&lt;br /&gt;
=== Intel compilers ===&lt;br /&gt;
Dedicated arch files are available in the '''LMDZ.COMMON/arch''' subdirectory! They are labeled '''ifort_MESOIPSL''' (XIOS and DYNAMICO also have similarly named arch files). To use them when compiling the PCM, one simply needs to pass the ''-arch ifort_MESOIPSL'' argument to the [[The makelmdz fcm GCM Compilation Script|makelmdz_fcm]] script.&lt;br /&gt;
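For example (a sketch where the parallelism mode, physics package and resolution are only illustrative placeholders to adapt to your case):&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
./makelmdz_fcm -arch ifort_MESOIPSL -parallel mpi -p mars -d 64x48x54 gcm&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;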
 &lt;br /&gt;
Likewise there is a dedicated '''install_ioipsl_ifort_MESOIPSL.bash''' install script for IOIPSL available in '''LMDZ.COMMON/ioipsl'''; you simply need to execute it:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
./install_ioipsl_ifort_MESOIPSL.bash&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
to download and compile the IOIPSL library.&lt;br /&gt;
&lt;br /&gt;
=== GNU compilers ===&lt;br /&gt;
Dedicated arch files are available in the '''LMDZ.COMMON/arch''' subdirectory! They are labeled '''MESOIPSL-gnu''' (XIOS and DYNAMICO also have similarly named arch files). To use them when compiling the PCM, one simply needs to pass the ''-arch MESOIPSL-gnu'' argument to the [[The makelmdz fcm GCM Compilation Script|makelmdz_fcm]] script.&lt;br /&gt;
&lt;br /&gt;
Likewise there is a dedicated '''install_ioipsl_MESOIPSL-gnu.bash''' install script for IOIPSL available in '''LMDZ.COMMON/ioipsl'''; you simply need to execute it:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
./install_ioipsl_MESOIPSL-gnu.bash&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
to download and compile the IOIPSL library.&lt;br /&gt;
&lt;br /&gt;
== Example of a job to run a GCM simulation ==&lt;br /&gt;
Here is an example to run using 24 MPI tasks with 2 OpenMP threads each:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
#SBATCH --nodes=1&lt;br /&gt;
#SBATCH --ntasks-per-node=24&lt;br /&gt;
#SBATCH --cpus-per-task=2&lt;br /&gt;
#SBATCH --partition=zen4 # zen4: 64 cores/node and 240GB of memory&lt;br /&gt;
##SBATCH --partition=zen16 # zen16: 32 cores/node and 496GB of memory&lt;br /&gt;
#SBATCH -J job_mpi_omp&lt;br /&gt;
#SBATCH --time=0:55:00&lt;br /&gt;
#SBATCH --output %x.%j.out&lt;br /&gt;
&lt;br /&gt;
source ../trunk/LMDZ.COMMON/arch.env&lt;br /&gt;
&lt;br /&gt;
export OMP_NUM_THREADS=2&lt;br /&gt;
export OMP_STACKSIZE=400M&lt;br /&gt;
&lt;br /&gt;
mpirun gcm_64x48x54_phymars_para.e &amp;gt; gcm.out 2&amp;gt;&amp;amp;1&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Note that there is a per-user limit of at most 96 cores for a given job.&lt;br /&gt;
&lt;br /&gt;
== Sending data on Spirit ==&lt;br /&gt;
The following Bash function can be used to send data to your scratch directory on spirit.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
function rsend {&lt;br /&gt;
    a=${1:-.}&lt;br /&gt;
    b=${2:-$a}&lt;br /&gt;
    c=${3:-spirit}&lt;br /&gt;
    rsync -avzl $a/ $c:/scratchx/$USER/$b/&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
To be used as follows:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
rsend folder # simply send folder to your scratch dir&lt;br /&gt;
rsend folder1 folder2 # send folder1 into folder2&lt;br /&gt;
rsend folder1 folder2 machine #send folder1 into folder2 on machine (depends on your ssh config)&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Category:FAQ]]&lt;/div&gt;</summary>
		<author><name>Emillour</name></author>	</entry>

	<entry>
		<id>http://lmdz-forge.lmd.jussieu.fr/mediawiki/Planets/index.php?title=Quick_Install_and_Run_Mars_PCM&amp;diff=3250</id>
		<title>Quick Install and Run Mars PCM</title>
		<link rel="alternate" type="text/html" href="http://lmdz-forge.lmd.jussieu.fr/mediawiki/Planets/index.php?title=Quick_Install_and_Run_Mars_PCM&amp;diff=3250"/>
				<updated>2026-04-10T16:14:38Z</updated>
		
		<summary type="html">&lt;p&gt;Emillour: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
In this page we give a hopefully exhaustive overview of the necessary prerequisites and steps to download, compile and run a simple simulation with the Mars PCM on a Linux computer.&lt;br /&gt;
&lt;br /&gt;
Note that there is a dedicated install script that attempts to do all these steps (up to and including running the simulation), which you can obtain here: https://web.lmd.jussieu.fr/~lmdz/planets/install_lmdz_mars.bash&lt;br /&gt;
Automating the process is not trivial as there are many subtle variants of Linux flavors and user account setups, so the script may fail in your case, but the information given in this page should hopefully help you solve any problems encountered.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Note also that on some clusters (at least the ones we know of and extensively use, e.g. [[Using Adastra|Adastra]], [[Using the MESOIPSL cluster|MESOIPSL]], [[Using MeSU|MeSU]] or [[Using Irene Rome|Irene]]) some of the steps below may be skipped because the needed compilers and libraries are known and at hand. &lt;br /&gt;
&lt;br /&gt;
== Prerequisites: Tools and Libraries ==&lt;br /&gt;
In order to use (i.e. compile and run) the GCM, one needs to have some tools and installed libraries at hand. We list below a (minimal) set that you should check is available, and/or that you'll need to install first on your machine. Note that we assume in this tutorial that you are on a Linux native-OS/cluster.&lt;br /&gt;
&lt;br /&gt;
===  Fortran compiler ===&lt;br /&gt;
The GCM source code is in Fortran. One thus needs a Fortran compiler to build (compile) the executable.&lt;br /&gt;
The most easily available one (on Linux) is gfortran and examples discussed here will assume it is the one used.&lt;br /&gt;
You can check that you indeed have a gfortran compiler at hand with the following Bash command:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
which gfortran&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
which should return something like&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
/usr/bin/gfortran&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Subversion ===&lt;br /&gt;
The source code is managed using subversion (svn), which you'll need in order to download or update it. Leaving aside the subtleties of svn and code organization for now, downloading the code amounts to doing the following:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
svn checkout https://svn.lmd.jussieu.fr/Planeto/trunk --depth empty&lt;br /&gt;
cd trunk&lt;br /&gt;
svn update LMDZ.COMMON LMDZ.MARS&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
As a side note: the source code that will be fetched by svn can also be browsed online here: https://trac.lmd.jussieu.fr/Planeto&lt;br /&gt;
&lt;br /&gt;
=== FCM ===&lt;br /&gt;
The FCM (Flexible Configuration Management) tool is a suite of Perl scripts to help build and manage codes. We use a slightly modified version which can be obtained using subversion (svn). Ideally you'll want to download it somewhere on your computer once and for all. To do this:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
svn checkout https://forge.ipsl.jussieu.fr/fcm/svn/PATCHED/FCM_V1.2&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
You'll then need to add the resulting FCM_V1.2/bin to your PATH environment variable so that the command &amp;quot;fcm&amp;quot; may be used from anywhere on your machine, e.g. by adding the following line to your .bashrc:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
export PATH=$PATH:$HOME/FCM_V1.2/bin&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
The assumption here is that the downloaded FCM_V1.2 directory is in your home ($HOME) directory. Adapt accordingly if not.&lt;br /&gt;
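You can then check that the fcm command is correctly found with:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
source ~/.bashrc&lt;br /&gt;
which fcm&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
which should return something like $HOME/FCM_V1.2/bin/fcm.&lt;br /&gt;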
&lt;br /&gt;
=== the NetCDF library ===&lt;br /&gt;
The GCM reads and writes input and output files in NetCDF format. Therefore a NetCDF library must be available. As this library is not quite standard, you'll probably have to install it yourself on your system (check out [[the netCDF library]] page for more). You can use the following home-made &amp;quot;install_netcdf4_hdf5_seq.bash&amp;quot; script to do so. For this, ensure that you are in your home directory:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
mkdir netcdf&lt;br /&gt;
cd netcdf&lt;br /&gt;
wget -nv --no-check-certificate http://www.lmd.jussieu.fr/~lmdz/pub/script_install/install_netcdf4_hdf5_seq.bash&lt;br /&gt;
chmod u=rwx install_netcdf4_hdf5_seq.bash&lt;br /&gt;
./install_netcdf4_hdf5_seq.bash &amp;gt; netcdf.log 2&amp;gt;&amp;amp;1&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
Compiling the library and dependencies can take a while (&amp;gt;&amp;gt;15 minutes; be patient).&lt;br /&gt;
Once this is done, check file netcdf.log to verify that all went well.&lt;br /&gt;
You may want to also add its &amp;quot;bin&amp;quot; directory to your PATH environment variable by adding in your .bashrc a line of:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
export PATH=$PATH:$HOME/netcdf/bin&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
The assumption here is that you have run the &amp;quot;install_netcdf4_hdf5_seq.bash&amp;quot; script in a &amp;quot;netcdf&amp;quot; subdirectory of your home directory. Adapt accordingly if not.&lt;br /&gt;
&lt;br /&gt;
As a side note: The NetCDF library provides a very simple command line tool (ncdump) to inspect the contents of NetCDF files, but you'll need more advanced visualization tools (e.g. Panoply, Python scripts, etc.; see the &amp;quot;Checking the Results&amp;quot; section further down this page) for post-processing of the outputs.&lt;br /&gt;
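For instance, to quickly inspect the header (dimensions, variables and attributes) of a NetCDF file such as the ''start.nc'' file mentioned below:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
ncdump -h start.nc&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;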
&lt;br /&gt;
=== the IOIPSL library ===&lt;br /&gt;
&lt;br /&gt;
The IOIPSL (Input/Output IPSL) library is designed to handle both the reading of some input files used by the GCM (the *.def files which are described further below) and the writing of some NetCDF output files.&lt;br /&gt;
&lt;br /&gt;
==== Prior to a first compilation: ksh to bash conversion ====&lt;br /&gt;
Some of the IOIPSL install scripts are written in ksh (Korn shell).&lt;br /&gt;
Given that most systems currently use Bash (Bourne Again Shell) as their command-line interpreter rather than ksh, you might need to install ksh on your system (assuming you have super-user privileges), e.g. on Linux-Ubuntu:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
sudo apt install ksh&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
Or, if that is not an option, change the occurrences in the package's scripts ('''ins_m_prec''') from:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
#!/bin/ksh&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt; &lt;br /&gt;
to &lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
#!/bin/bash &lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
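For instance, a sed one-liner along these lines can do the substitution in place (shown here on '''ins_m_prec'''; adapt the file list to your case):&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
# replace the ksh shebang by a bash one, editing the file in place&lt;br /&gt;
sed -i 's|^#!/bin/ksh|#!/bin/bash|' ins_m_prec&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;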
&lt;br /&gt;
==== Automated IOIPSL install script ====&lt;br /&gt;
Scripts to download and install the IOIPSL library can be found in the &amp;quot;ioipsl&amp;quot; subdirectory of the &amp;quot;LMDZ.COMMON&amp;quot; directory. Since here we assume we're working with gfortran, the relevant one is &amp;quot;install_ioipsl_gfortran.bash&amp;quot;. If your PATH environment variable already includes the path to your NetCDF library distribution's bin directory (see previous section) then all you need to do is execute the script:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
./install_ioipsl_gfortran.bash&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
If all went well the script should end with:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
OK: ioipsl library is in ...&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
''(for further details about [[The_IOIPSL_Library|the IOIPSL library]] and installing it, follow the link and/or use the Search Box at the top of this page)''&lt;br /&gt;
&lt;br /&gt;
== GCM Input Datafiles and Datasets ==&lt;br /&gt;
In order to run, the GCM needs some inputs, such as initial conditions (values of state variables), external inputs (e.g. optical properties of aerosols) and simulation setup (e.g. specifications on how long to run, which parametrizations should be activated, etc.)&lt;br /&gt;
&lt;br /&gt;
In the spirit of the illustrative example considered here (a &amp;quot;GCM6&amp;quot; simulation, i.e. a simulation including the CO2, dust and water cycles with a model top around 120 km), a set of necessary input data may be downloaded (note that a few reference cases are available on https://web.lmd.jussieu.fr/~lmdz/planets/mars/ ) with:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
wget -nv --no-check-certificate http://www.lmd.jussieu.fr/~lmdz/planets/mars/reference_GCM6_64x48x54.tar.gz&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Once unpacked (&amp;quot;tar xvzf reference_GCM6_64x48x54.tar.gz&amp;quot;) the resulting &amp;quot;reference_GCM6_64x48x54&amp;quot; directory will contain all that is needed, namely:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
callphys.def diagfi.def run.def startfi.nc start.nc stats.def traceur.def z2sig.def&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
* Initial condition NetCDF files ''start.nc'' and ''startfi.nc''; the first containing initial condition values for the dynamics and the second initial condition values for the physics. These also define the model resolution and number of atmospheric layers (64x48x54 - the compiled GCM binary must match this!) &lt;br /&gt;
* Some mandatory ASCII *.def files containing run parameters, namely:&lt;br /&gt;
# [[The_run.def_Input_File | run.def]] : &amp;quot;master def file&amp;quot; containing main run parameters&lt;br /&gt;
# [[The_callphys.def_Input_File | callphys.def]] : file containing flags and keys for the various physics parametrizations&lt;br /&gt;
# [[The_z2sig.def_Input_File | z2sig.def]] : file describing the sought vertical discretization&lt;br /&gt;
# [[The_traceur.def_Input_File | traceur.def]] : file specifying the tracer number and names&lt;br /&gt;
* Some optional ASCII *def files containing names of variables to output:&lt;br /&gt;
# [[The_diagfi.def_Input_File | diagfi.def]] : list of variables that will be included in output file diagfi.nc (if there is no diagfi.def file then ALL variables will be included; you probably don't want that, as the resulting file will rapidly grow to be HUGE); a minimal example is sketched after this list&lt;br /&gt;
# [[The_stats.def_Input_File | stats.def]] : list of variables that will be included in output file stats.nc (if there is no stats.def file then ALL variables will be included and the resulting file will be large)&lt;br /&gt;
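As an illustration, a ''diagfi.def'' file is simply a plain text list of the requested output variables, one name per line. A minimal, hypothetical example (the variable names below are purely illustrative and must be adapted to your needs) could look like:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
tsurf&lt;br /&gt;
temp&lt;br /&gt;
u&lt;br /&gt;
v&lt;br /&gt;
ps&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;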
&lt;br /&gt;
== Compiling the GCM ==&lt;br /&gt;
Now that all the prerequisites are fulfilled, it is (almost!) time to compile the GCM.&lt;br /&gt;
&lt;br /&gt;
=== Prior to a first compilation: setting up the target architecture files ===&lt;br /&gt;
Compiling the model is done using a dedicated Bash script ''makelmdz_fcm'' located in the '''LMDZ.COMMON''' directory. This script however relies on ''architecture files''. These files contain information on which compiler to use, what compilation options to use, where relevant libraries are located, etc. In practice, one must thus create these ASCII text files in the '''arch/''' subdirectory of '''LMDZ.COMMON'''. The naming convention is rather straightforward: when the script ''makelmdz_fcm'' is run with the option '''-arch somename''', it will look for files ''arch/arch-somename.env'', ''arch/arch-somename.path'' and ''arch/arch-somename.fcm''. Leaving aside a detailed description for later (see [[The_Target_Architecture_(&amp;quot;arch&amp;quot;)_Files|this page]]), here we mention that:&lt;br /&gt;
* the ''arch*.env'' is an optional file containing ''environment'' information, such as setting up environment variables or loading modules on some machines, e.g.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
export NETCDF_HOME=/path/to/the/netcdf/distribution&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
* the '''arch*.path''' is a mandatory file containing information relative to external libraries such as NetCDF and IOIPSL, e.g.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
ROOT=$PWD&lt;br /&gt;
&lt;br /&gt;
NETCDF_LIBDIR=&amp;quot;-L${NETCDF_HOME}/lib&amp;quot;&lt;br /&gt;
NETCDF_LIB=&amp;quot;-lnetcdf -lnetcdff&amp;quot;&lt;br /&gt;
NETCDF_INCDIR=&amp;quot;-I${NETCDF_HOME}/include&amp;quot;&lt;br /&gt;
&lt;br /&gt;
IOIPSL_INCDIR=&amp;quot;-I$ROOT/../IOIPSL/inc&amp;quot;&lt;br /&gt;
IOIPSL_LIBDIR=&amp;quot;-L$ROOT/../IOIPSL/lib&amp;quot;&lt;br /&gt;
IOIPSL_LIB=&amp;quot;-lioipsl&amp;quot;&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
Each library is referenced by a fixed identifier (NETCDF, IOIPSL, XIOS, ...) and 3 trailing strings: '''_LIBDIR''', for the path to the library, '''_LIB''', for the library name(s), and '''_INCDIR''' for the path to the library's ''include'' directory.&lt;br /&gt;
&lt;br /&gt;
* the '''arch*.fcm''' is a mandatory file containing information relative to the compiler and compilation options, e.g.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
%COMPILER            gfortran&lt;br /&gt;
%LINK                gfortran&lt;br /&gt;
%AR                  ar&lt;br /&gt;
%MAKE                make&lt;br /&gt;
%FPP_FLAGS           -P -traditional&lt;br /&gt;
%FPP_DEF             NC_DOUBLE&lt;br /&gt;
%BASE_FFLAGS         -c -fdefault-real-8 -fdefault-double-8 -ffree-line-length-none -fno-align-commons&lt;br /&gt;
%PROD_FFLAGS         -O3&lt;br /&gt;
%DEV_FFLAGS          -O&lt;br /&gt;
%DEBUG_FFLAGS        -ffpe-trap=invalid,zero,overflow -fbounds-check -g3 -O0 -fstack-protector-all -finit-real=snan -fbacktrace&lt;br /&gt;
%MPI_FFLAGS&lt;br /&gt;
%OMP_FFLAGS         &lt;br /&gt;
%BASE_LD     &lt;br /&gt;
%MPI_LD&lt;br /&gt;
%OMP_LD              &lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
Again, not going into a detailed description (follow [[The_Target_Architecture_(&amp;quot;arch&amp;quot;)_Files|this link]] for that), just note here that each line corresponds to a keyword (starting with &amp;quot;%&amp;quot;) followed by the relevant options. Here, we mention a few of the main ones:&lt;br /&gt;
* %COMPILER: The compiler to use (here, gfortran)&lt;br /&gt;
* %BASE_FFLAGS: compiler options (always included)&lt;br /&gt;
* %PROD_FFLAGS: compilation flags to include if makelmdz_fcm is run with the &amp;quot;-prod&amp;quot; option&lt;br /&gt;
* %DEBUG_FFLAGS: compilation flags to include if makelmdz_fcm is run with the &amp;quot;-debug&amp;quot; option&lt;br /&gt;
* %BASE_LD: flags to add at the linking step of the compilation&lt;br /&gt;
&lt;br /&gt;
=== Compiling the test case  ===&lt;br /&gt;
To compile the GCM at the sought resolution, run (in LMDZ.COMMON):&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
./makelmdz_fcm -arch local -p mars -d 64x48x54 -j 8 gcm &lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
Here, we assume that you have generated the '''arch-local.*''' files as suggested in the previous section.&lt;br /&gt;
The options for ''makelmdz_fcm'' used here imply:&lt;br /&gt;
* '''-p mars''': the GCM will use the &amp;quot;mars&amp;quot; physics package&lt;br /&gt;
* '''-d 64x48x54''': the GCM grid will be 64x48 in longitude x latitude, with 54 vertical levels.&lt;br /&gt;
* '''-j 8''': up to 8 compilation tasks will be run in parallel (to speed up the build).&lt;br /&gt;
For a glimpse at all the possible ''makelmdz_fcm'' options and their meanings, run:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
./makelmdz_fcm -h&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
and/or check the dedicated [[The_makelmdz_fcm_GCM_Compilation_Script|makelmdz_fcm page]].&lt;br /&gt;
&lt;br /&gt;
Upon successful compilation, the executable '''gcm_64x48x54_phymars_seq.e''' should be generated in the '''bin''' subdirectory.&lt;br /&gt;
&lt;br /&gt;
== Running the GCM ==&lt;br /&gt;
To run your first simulation, you first need to have a ''datadir'' at hand, which is a directory containing some input datafiles (e.g. driving dust scenarios, aerosol properties, etc.) that the GCM will need at run time. It can be downloaded with:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
wget -nv --no-check-certificate http://www.lmd.jussieu.fr/~lmdz/planets/mars/datadir.tar.gz&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
Then simply un-tar it:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
tar xvzf datadir.tar.gz&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
Next, link the resulting '''datadir''' directory into the directory where you will run (e.g. the '''reference_GCM6_64x48x54''' directory). For instance, if '''datadir''' is alongside '''reference_GCM6_64x48x54''', then in the latter do:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
ln -s ../datadir .&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
Secondly, you need to copy (or move) the executable '''gcm_64x48x54_phymars_seq.e''' from '''LMDZ.COMMON/bin''' to the directory containing the initial conditions and parameter files, e.g. '''reference_GCM6_64x48x54'''.&lt;br /&gt;
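For instance, assuming the '''LMDZ.COMMON''' and '''reference_GCM6_64x48x54''' directories sit side by side (adapt the path otherwise), from within the latter:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
cp ../LMDZ.COMMON/bin/gcm_64x48x54_phymars_seq.e .&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;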
&lt;br /&gt;
You can now run the GCM.&lt;br /&gt;
This is usually a two-step process: the (optional) first step is to source the environment architecture file (the very same that was used to compile the model), e.g.,:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
source ../LMDZ.COMMON/arch.env&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
The second step is to execute the model, e.g.,:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
./gcm_64x48x54_phymars_seq.e &amp;gt; gcm.out 2&amp;gt;&amp;amp;1&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
With this command line, the (text) output messages are redirected into a text file, '''gcm.out'''. It is convenient to keep this file for later inspection (e.g., to track a bug). If there is no redirection (only '''./gcm_64x48x54_phymars_seq.e'''), then the outputs will be printed directly to the screen.&lt;br /&gt;
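Since a simulation can take quite some time, a convenient way to monitor its progress is to follow the output file as it is being written:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
tail -f gcm.out&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;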
&lt;br /&gt;
== Checking the Results of a Simulation ==&lt;br /&gt;
Once the simulation is finished, you'll know that all went well (&amp;quot;everything is cool&amp;quot;) if the last few lines of the standard text output are:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
 in abort_gcm&lt;br /&gt;
 Stopping in leapfrog&lt;br /&gt;
 Reason = Simulation finished &lt;br /&gt;
 Everything is cool&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
If not, start looking for an error message and a way to fix the problem...&lt;br /&gt;
&lt;br /&gt;
Apart from the standard text output messages from the GCM, which are mostly for monitoring and checking the simulation progress, the user will more likely be interested in checking the contents of the ''diagfi.nc'' file produced by the GCM, as it contains instantaneous values of the main model variables (atmospheric temperature, winds, etc.).&lt;br /&gt;
&lt;br /&gt;
...TODO...ADD HERE SOME ILLUSTRATIVE PLOTS OF THE EXPECTED BENCH OUTPUTS...&lt;br /&gt;
&lt;br /&gt;
== Taking Things to the Next Level ==&lt;br /&gt;
See [[Advanced_Topics_Mars_PCM|Advanced use of the Mars PCM]]&lt;br /&gt;
&lt;br /&gt;
The short tutorial presented in this page is meant to be useful to get an overview of what is required to install and run the GCM, in addition to checking the results of a simulation. Moving on to a more intensive and problem-specific usage will require diving into additional topics and aspects such as:&lt;br /&gt;
* Selecting the appropriate inputs and run parameters for a given study.&lt;br /&gt;
* Compiling and running in parallel (MPI and/or OpenMP) to obtain results in a reasonable time frame.&lt;br /&gt;
* Post-processing and analysis of model outputs.&lt;br /&gt;
All these points and much more are detailed in the many pages of this site (do check out the menu on the left and don't hesitate to make intensive use of the site's search engine)!&lt;br /&gt;
&lt;br /&gt;
[[Category:Mars-Model]]&lt;br /&gt;
[[Category:Mars-LMDZ]]&lt;/div&gt;</summary>
		<author><name>Emillour</name></author>	</entry>

	<entry>
		<id>http://lmdz-forge.lmd.jussieu.fr/mediawiki/Planets/index.php?title=Quick_Install_and_Run&amp;diff=3249</id>
		<title>Quick Install and Run</title>
		<link rel="alternate" type="text/html" href="http://lmdz-forge.lmd.jussieu.fr/mediawiki/Planets/index.php?title=Quick_Install_and_Run&amp;diff=3249"/>
				<updated>2026-04-10T16:12:01Z</updated>
		
		<summary type="html">&lt;p&gt;Emillour: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;In this page we give a hopefully exhaustive enough overview of the necessary prerequisites and steps to download, compile and run a simple simulation with the GCM in an &amp;quot;Early Mars&amp;quot; setup (i.e. a desert planet with a CO2 atmosphere) on a Linux computer.&lt;br /&gt;
&lt;br /&gt;
Note that there is a dedicated install script that attempts to do all these steps (up to and including running the simulation), which you can obtain here: https://web.lmd.jussieu.fr/~lmdz/planets/generic/install_scripts/install_lmdz_generic_earlymars.bash&lt;br /&gt;
Automating the process is not trivial as there are many subtle variants of Linux flavors and user account setups, so the script may fail in your case, but hopefully the information given in this page should help you solve any problems you encounter.&lt;br /&gt;
&lt;br /&gt;
Note also that on some clusters (at least the ones we know of and extensively use, e.g. [[Using Adastra|Adastra]], [[Using the MESOIPSL cluster|MESOIPSL]], [[Using MeSU|MeSU]] or [[Using Irene Rome|Irene]]) some of the steps below may be skipped because the needed compilers and libraries are known and at hand. &lt;br /&gt;
&lt;br /&gt;
== Prerequisites: Tools and Libraries ==&lt;br /&gt;
In order to use (i.e. compile and run) the GCM, one needs to have some tools and installed libraries at hand. We list below a (minimal) set that you should check that is available and/or that you'll need to first install on your machine. Note that we assume in this tutorial that you are on a Linux native-OS/cluster.&lt;br /&gt;
&lt;br /&gt;
===  Fortran compiler ===&lt;br /&gt;
The GCM source code is in Fortran. One thus needs a Fortran compiler to build (compile) the executable.&lt;br /&gt;
The most easily available one (on Linux) is gfortran and examples discussed here will assume it is the one used.&lt;br /&gt;
You can check that you indeed have a gfortran compiler at hand with the following Bash command:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
which gfortran&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
which should return something like&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
/usr/bin/gfortran&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Subversion ===&lt;br /&gt;
The source code is managed using subversion (svn), which you'll need to download or update. Leaving aside the subtleties of svn and code organization for now, downloading the code amounts to doing the following:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
svn checkout https://svn.lmd.jussieu.fr/Planeto/trunk --depth empty&lt;br /&gt;
cd trunk&lt;br /&gt;
svn update LMDZ.COMMON LMDZ.GENERIC&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
As a side note: the source code that will be fetched by svn can also be browsed online here: https://trac.lmd.jussieu.fr/Planeto&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--Note: if the command line above doesn't work, you may also try to replace 'http' by 'https'.--&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Git === &lt;br /&gt;
&lt;br /&gt;
As an alternative to svn, you can use [[Git usage|git to download the source code]]. &lt;br /&gt;
&lt;br /&gt;
=== FCM ===&lt;br /&gt;
The FCM (Flexible Configuration Management) tool is a suite of Perl scripts that help build and manage codes. We use a slightly modified version, which can be obtained using subversion (svn). Ideally you'll want to download it somewhere on your computer once and for all. To do this:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
svn checkout https://forge.ipsl.jussieu.fr/fcm/svn/PATCHED/FCM_V1.2&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
You'll then need to add the resulting FCM_V1.2/bin to your PATH environment variable so that the command &amp;quot;fcm&amp;quot; may be used from anywhere on your machine, e.g. by adding the following line to your .bashrc (the .bashrc file is a hidden configuration script in your home directory, ~/.bashrc, that runs whenever you start a new Bash shell):&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
export PATH=$PATH:$HOME/FCM_V1.2/bin&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
The assumption here is that the downloaded FCM_V1.2 directory is in your home ($HOME) directory. Adapt accordingly if not.&lt;br /&gt;
&lt;br /&gt;
=== the NetCDF library ===&lt;br /&gt;
The GCM reads and writes input and output files in NetCDF format; therefore a NetCDF library is required. Most clusters provide a NetCDF library (typically as a module) that you can load before using the model. &lt;br /&gt;
&lt;br /&gt;
If this library is not available, you can install it yourself on your system (check out [[the netCDF library]] page for more). You can use the following home-made &amp;quot;install_netcdf4_hdf5_seq.bash&amp;quot; script to do so. For this, ensure that you are in your home directory:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
mkdir netcdf&lt;br /&gt;
cd netcdf&lt;br /&gt;
wget -nv --no-check-certificate http://www.lmd.jussieu.fr/~lmdz/pub/script_install/install_netcdf4_hdf5_seq.bash&lt;br /&gt;
chmod u=rwx install_netcdf4_hdf5_seq.bash&lt;br /&gt;
./install_netcdf4_hdf5_seq.bash &amp;gt; netcdf.log 2&amp;gt;&amp;amp;1&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
Compiling the library and dependencies can take a while (&amp;gt;&amp;gt;15 minutes; be patient).&lt;br /&gt;
Once this is done, check file netcdf.log to verify that all went well.&lt;br /&gt;
You may also want to add its &amp;quot;bin&amp;quot; directory to your PATH environment variable by adding to your .bashrc a line such as:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
export PATH=$PATH:$HOME/netcdf/bin&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
The assumption here is that you have run the &amp;quot;install_netcdf4_hdf5_seq.bash&amp;quot; script in a &amp;quot;netcdf&amp;quot; subdirectory of your home directory. Adapt accordingly if not.&lt;br /&gt;
&lt;br /&gt;
As a side note: The NetCDF library provides a very simple command line tool (ncdump) to inspect the contents of NetCDF files, but you'll need dedicated visualization tools (e.g., Panoply, Python scripts, etc.; see further down this page in the &amp;quot;Checking the Results&amp;quot; section) for more advanced post-processing of the outputs.&lt;br /&gt;
&lt;br /&gt;
=== the IOIPSL library ===&lt;br /&gt;
&lt;br /&gt;
The IOIPSL (Input/Output IPSL) library is designed to handle both the reading of some input files used by the GCM (the *.def files which are described further below) and the writing of some NetCDF output files.&lt;br /&gt;
&lt;br /&gt;
==== Automated IOIPSL install script ====&lt;br /&gt;
Scripts to download and install the IOIPSL library can be found in the &amp;quot;ioipsl&amp;quot; subdirectory of the &amp;quot;LMDZ.COMMON&amp;quot; library. Since here we assume we're working with gfortran, the relevant one is &amp;quot;install_ioipsl_gfortran.bash&amp;quot;. If your PATH environment variable is already such that it includes the path to your NetCDF library distribution's bin directory (see previous section) then all you need to do is execute the script:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
./install_ioipsl_gfortran.bash&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
If all went well the script should end with:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
OK: ioipsl library is in ...&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
''(for further details about [[The_IOIPSL_Library|the IOIPSL library]] and installing it, follow the link and/or use the Search Box at the top of this page)''&lt;br /&gt;
&lt;br /&gt;
== GCM Input Datafiles and Datasets ==&lt;br /&gt;
In order to run, the GCM needs some inputs, such as initial conditions (values of state variables), external inputs (e.g. optical properties of aerosols) and simulation setup (e.g. specifications on how long to run, which parametrizations should be activated, etc.)&lt;br /&gt;
&lt;br /&gt;
In the spirit of the illustrative example considered here (an &amp;quot;Early Mars&amp;quot; simulation), a set of necessary input data may be downloaded with:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
wget -nv --no-check-certificate https://web.lmd.jussieu.fr/~lmdz/planets/generic/reference_setups/reference_earlymars_64x48x26_b32x36.tar.gz&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
Once unpacked (to do that, you can execute the command &amp;quot;tar xvzf reference_earlymars_64x48x26_b32x36.tar.gz&amp;quot;) the resulting &amp;quot;reference_earlymars_64x48x26_b32x36&amp;quot; directory will contain all that is needed, namely:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
callphys.def  gases.def  startfi.nc  traceur.def&lt;br /&gt;
datadir/      run.def    start.nc    z2sig.def&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
* Initial condition NetCDF files ''start.nc'' and ''startfi.nc''; the first containing initial condition values for the dynamics and the second initial condition values for the physics.&lt;br /&gt;
* A '''datadir''' directory containing external inputs (aerosol properties, stellar spectra, etc.)&lt;br /&gt;
* Some ASCII *.def files containing run parameters, namely:&lt;br /&gt;
# [[The_run.def_Input_File | run.def]] : &amp;quot;master def file&amp;quot; containing main run parameters&lt;br /&gt;
# [[The_callphys.def_Input_File | callphys.def]] : file containing flags and keys for the various physics parametrizations&lt;br /&gt;
# [[The_z2sig.def_Input_File | z2sig.def]] : file describing the sought vertical discretization&lt;br /&gt;
# [[The_traceur.def_Input_File | traceur.def]] : file specifying the tracer number and names&lt;br /&gt;
# [[The_gases.def_Input_File | gases.def]] : file specifying the list of gases (main and trace) in the atmosphere&lt;br /&gt;
&lt;br /&gt;
== Compiling the GCM ==&lt;br /&gt;
Now that all the prerequisites are fulfilled, it is (almost!) time to compile the GCM.&lt;br /&gt;
&lt;br /&gt;
=== Prior to a first compilation: setting up the target architecture files ===&lt;br /&gt;
Compiling the model is done using a dedicated Bash script ''makelmdz_fcm'' located in the '''LMDZ.COMMON''' directory. This script however relies on ''architecture files''. These files contain information on which compiler to use, what compilation options to use, where relevant libraries are located, etc. In practice, one must thus create these ASCII text files in the '''arch/''' subdirectory of '''LMDZ.COMMON'''. The naming convention is rather straightforward: when the script ''makelmdz_fcm'' is run with the option '''-arch somename''', it will look for files ''arch/arch-somename.env'', ''arch/arch-somename.path'' and ''arch/arch-somename.fcm''. Leaving aside a detailed description for later (see [[The_Target_Architecture_(&amp;quot;arch&amp;quot;)_Files|this page]]), here we mention that:&lt;br /&gt;
* the ''arch*.env'' is an optional file containing ''environment'' information, such as setting up environment variables or loading modules on some machines, e.g.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
export NETCDF_HOME=/path/to/the/netcdf/distribution&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
A more realistic (but more specific) example of a '''arch*.env''' file using &amp;quot;recent&amp;quot; module commands, adapted for compilation and visualisation on a given supercomputer, would look more like the following:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
module purge&lt;br /&gt;
module load GCC/10.3.0  OpenMPI/4.1.1&lt;br /&gt;
module load netCDF-Fortran/4.5.3&lt;br /&gt;
export NETCDF_INCDIR=&amp;quot;/opt/ebsofts/netCDF-Fortran/4.5.3-gompi-2021a/include&amp;quot;&lt;br /&gt;
export NETCDFF_LIBDIR=&amp;quot;/opt/ebsofts/netCDF-Fortran/4.5.3-gompi-2021a/lib&amp;quot;&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
Note that the last two lines above specify paths to the '''include''' and '''lib''' directories used on this system and will certainly vary from system to system. Likewise, the exact module versions will most likely need to be adapted in your specific '''arch*.env''' file.&lt;br /&gt;
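If you are on a cluster that uses environment modules and are unsure which module names and versions are available there, you can query the module system; a minimal sketch (the &amp;quot;netcdf&amp;quot; search pattern is just an example):&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
# list available modules whose name mentions netcdf&lt;br /&gt;
# (module avail typically writes to stderr, hence the redirection)&lt;br /&gt;
module avail 2&amp;gt;&amp;amp;1 | grep -i netcdf&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;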
* the '''arch*.path''' is a mandatory file containing information relative to external libraries such as NetCDF and IOIPSL, e.g.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
ROOT=$PWD&lt;br /&gt;
&lt;br /&gt;
NETCDF_LIBDIR=&amp;quot;-L${NETCDF_HOME}/lib&amp;quot;&lt;br /&gt;
NETCDF_LIB=&amp;quot;-lnetcdf -lnetcdff&amp;quot;&lt;br /&gt;
NETCDF_INCDIR=&amp;quot;-I${NETCDF_HOME}/include&amp;quot;&lt;br /&gt;
&lt;br /&gt;
IOIPSL_INCDIR=&amp;quot;-I$ROOT/../IOIPSL/inc&amp;quot;&lt;br /&gt;
IOIPSL_LIBDIR=&amp;quot;-L$ROOT/../IOIPSL/lib&amp;quot;&lt;br /&gt;
IOIPSL_LIB=&amp;quot;-lioipsl&amp;quot;&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
Each library is referenced by a fixed identifier (NETCDF, IOIPSL, XIOS, ...) and 3 trailing strings: '''_LIBDIR''', for the path to the library, '''_LIB''', for the library name(s), and '''_INCDIR''' for the path to the library's ''include'' directory.&lt;br /&gt;
&lt;br /&gt;
* the '''arch*.fcm''' is a mandatory file containing information relative to the compiler and compilation options, e.g.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
%COMPILER            gfortran&lt;br /&gt;
%LINK                gfortran&lt;br /&gt;
%AR                  ar&lt;br /&gt;
%MAKE                make&lt;br /&gt;
%FPP_FLAGS           -P -traditional&lt;br /&gt;
%FPP_DEF             NC_DOUBLE&lt;br /&gt;
%BASE_FFLAGS         -c -fdefault-real-8 -fdefault-double-8 -ffree-line-length-none -fno-align-commons&lt;br /&gt;
%PROD_FFLAGS         -O3&lt;br /&gt;
%DEV_FFLAGS          -O&lt;br /&gt;
%DEBUG_FFLAGS        -ffpe-trap=invalid,zero,overflow -fbounds-check -g3 -O0 -fstack-protector-all -finit-real=snan -fbacktrace&lt;br /&gt;
%MPI_FFLAGS&lt;br /&gt;
%OMP_FFLAGS         &lt;br /&gt;
%BASE_LD     &lt;br /&gt;
%MPI_LD&lt;br /&gt;
%OMP_LD              &lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
Again, not going into a detailed description (follow [[The_Target_Architecture_(&amp;quot;arch&amp;quot;)_Files|this link]] for that), just note here that each line corresponds to a keyword (starting with &amp;quot;%&amp;quot;) followed by the relevant options. Here, we mention a few of the main ones:&lt;br /&gt;
* %COMPILER: The compiler to use (here, gfortran)&lt;br /&gt;
* %BASE_FFLAGS: compiler options (always included)&lt;br /&gt;
* %PROD_FFLAGS: compilation flags to include if makelmdz_fcm is run with the &amp;quot;-prod&amp;quot; option&lt;br /&gt;
* %DEBUG_FFLAGS: compilation flags to include if makelmdz_fcm is run with the &amp;quot;-debug&amp;quot; option&lt;br /&gt;
* %BASE_LD: flags to add at the linking step of the compilation&lt;br /&gt;
&lt;br /&gt;
Note that if you are using a recent version of gfortran (version 10 or later), you have to add an extra option to the %BASE_FFLAGS, namely '''-fallow-argument-mismatch'''.&lt;br /&gt;
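If unsure about which gfortran version you are using, you can check it with:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
gfortran --version&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;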
&lt;br /&gt;
Also note that you can find in '''LMDZ.COMMON/arch/''' many examples of arch files that you can re-use as-is if you compile the model on one of our usual computing clusters (e.g. Spirit, Adastra, etc.). Just check the contents of the directory to see if your favorite computing cluster already has arch files.&lt;br /&gt;
&lt;br /&gt;
=== Compiling a test case (early Mars) ===&lt;br /&gt;
To compile the GCM at the sought resolution for the Early Mars test case, run (in LMDZ.COMMON):&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
./makelmdz_fcm -arch local -p generic -d 64x48x26 -b 32x36 gcm &lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&amp;lt;!-- -s option is no more needed ; * '''-s 2''': the physics parametrizations will handle 2 radiatively active tracers (water ice and dust for the Early Mars setup) --&amp;gt;&lt;br /&gt;
Here, we assume that you have generated the '''arch-local.*''' files as suggested in the previous section.&lt;br /&gt;
The options for ''makelmdz_fcm'' used here imply:&lt;br /&gt;
* '''-p generic''': the GCM will use the &amp;quot;generic&amp;quot; physics package&lt;br /&gt;
* '''-d 64x48x26''': the GCM grid will be 64x48 in longitude x latitude, with 26 vertical levels.&lt;br /&gt;
* '''-b 32x36''': the physics radiative transfer will be done using 32 bands in the IR and 36 in the visible.&lt;br /&gt;
For a glimpse at all the possible ''makelmdz_fcm'' options and their meanings, run:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
./makelmdz_fcm -h&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
and/or check the dedicated [[The_makelmdz_fcm_GCM_Compilation_Script|makelmdz_fcm page]].&lt;br /&gt;
&lt;br /&gt;
Upon successful compilation, the executable '''gcm_64x48x26_phygeneric_b32x36_seq.e''' should be generated in the '''bin''' subdirectory.&lt;br /&gt;
&lt;br /&gt;
=== Known issues ===&lt;br /&gt;
&lt;br /&gt;
If the compilation fails, it might be due to the options used in the arch file. &lt;br /&gt;
For example, if you are using a gfortran version prior to 10, you could get an error such as:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
gfortran: error: unrecognized command line option ‘-fallow-argument-mismatch’; did you mean ‘-Wno-argument-mismatch’?&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
This can be solved by removing the option '''-fallow-argument-mismatch''' from the arch.fcm file.&lt;br /&gt;
&lt;br /&gt;
If you are using a recent version of gfortran (10 or beyond) without the option '''-fallow-argument-mismatch''', the compilation will probably fail with an error such as:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
  136 |      .       idim_index,nvarid)&lt;br /&gt;
      |             2                                       &lt;br /&gt;
......&lt;br /&gt;
  211 |       ierr = NF_DEF_VAR (nid, &amp;quot;aire&amp;quot;, NF_DOUBLE, 2, id,nvarid)&lt;br /&gt;
      |                                                    1&lt;br /&gt;
Error: Rank mismatch between actual argument at (1) and actual argument at (2) (scalar and rank-1)&lt;br /&gt;
fcm_internal compile failed (256)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Add the '''-fallow-argument-mismatch''' compilation option to the arch*.fcm file to solve the issue.&lt;br /&gt;
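Concretely, with the example arch*.fcm file shown earlier, the %BASE_FFLAGS line would become:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
%BASE_FFLAGS         -c -fdefault-real-8 -fdefault-double-8 -ffree-line-length-none -fno-align-commons -fallow-argument-mismatch&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;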
&lt;br /&gt;
== Running the GCM ==&lt;br /&gt;
To run your first simulation, you first need to copy (or move) the executable '''gcm_64x48x26_phygeneric_b32x36_seq.e''' to the directory containing the initial conditions and parameter files, e.g. '''reference_earlymars_64x48x26_b32x36''', and run it from there.&lt;br /&gt;
This is usually a two-step process: the (optional) first step is to source the environment architecture file (the very same that was used to compile the model), e.g.,:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
source ../LMDZ.COMMON/arch.env&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
The second step is to execute the model, e.g.,:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
./gcm_64x48x26_phygeneric_b32x36_seq.e &amp;gt; gcm.out 2&amp;gt;&amp;amp;1&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
With this command line, the (text) output messages are redirected into a text file, '''gcm.out'''. It is convenient to keep this file for later inspection (e.g., to track a bug). If there is no redirection (only '''./gcm_64x48x26_phygeneric_b32x36_seq.e'''), then the outputs will be printed directly to the screen.&lt;br /&gt;
&lt;br /&gt;
== Checking the Results of a Simulation ==&lt;br /&gt;
Once the simulation is finished, you'll know that all went well (&amp;quot;everything is cool&amp;quot;) if the last few lines of the standard text output read:&lt;br /&gt;
[[File:tsurf_benchmark_early_Mars.png|300px|thumb|Final surface temperature map of the reference simulation (plotted using Panoply).]]&lt;br /&gt;
[[File:water_ice_cloud_column_benchmark_early_Mars.png|300px|thumb|Final water ice cloud column map of the reference simulation (plotted using Panoply).]]&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
 in abort_gcm&lt;br /&gt;
 Stopping in leapfrog&lt;br /&gt;
 Reason = Simulation finished &lt;br /&gt;
 Everything is cool&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
If not, start looking for an error message and a way to fix the problem...&lt;br /&gt;
&lt;br /&gt;
Apart from the standard text output messages from the GCM, which are mostly for monitoring and checking the simulation progress, the user will more likely be interested in checking the contents of the ''diagfi.nc'' file produced by the GCM, as it contains instantaneous values of the main model variables (atmospheric temperature, winds, etc.).&lt;br /&gt;
&lt;br /&gt;
To check that you have successfully run the simulation, we provide some graphs to help evaluate your results, obtained from a simulation similar to the one described in this tutorial (early Mars, 32x32x15 resolution). TODO : update plots to current example&lt;br /&gt;
&lt;br /&gt;
In the plots shown here, we present maps of the surface temperatures ('tsurf' variable) and the water ice cloud column ('h2o_ice_col' variable), both plotted using Panoply.&lt;br /&gt;
&lt;br /&gt;
Side note: A variety of freely available software tools can be used to visualise the NetCDF ''diagfi.nc'' file, such as Panoply, Ferret, Ncview, Grads, Python, etc. (see more details in the [[Tool_Box | Tool Box section]])&lt;br /&gt;
&lt;br /&gt;
== Taking Things to the Next Level ==&lt;br /&gt;
The short tutorial presented in this page is meant to be useful to get an overview of what is required to install and run the GCM, in addition to checking the results of a simulation. Moving on to a more intensive and problem-specific usage will require diving into additional topics and aspects such as:&lt;br /&gt;
* Selecting the appropriate inputs and run parameters for a given study.&lt;br /&gt;
* Compiling and running in parallel (MPI and/or OpenMP) to obtain results in a reasonable time frame.&lt;br /&gt;
* Post-processing and analysis of model outputs.&lt;br /&gt;
All these points and much more are detailed in the many pages of this site (do check out the menu on the left and don't hesitate to make intensive use of the site's search engine)!&lt;br /&gt;
&lt;br /&gt;
[[Category:Generic-Model]]&lt;br /&gt;
[[Category:Generic-LMDZ]]&lt;/div&gt;</summary>
		<author><name>Emillour</name></author>	</entry>

	<entry>
		<id>http://lmdz-forge.lmd.jussieu.fr/mediawiki/Planets/index.php?title=Quick_Install_and_Run_Venus_PCM&amp;diff=3248</id>
		<title>Quick Install and Run Venus PCM</title>
		<link rel="alternate" type="text/html" href="http://lmdz-forge.lmd.jussieu.fr/mediawiki/Planets/index.php?title=Quick_Install_and_Run_Venus_PCM&amp;diff=3248"/>
				<updated>2026-04-10T16:11:22Z</updated>
		
		<summary type="html">&lt;p&gt;Emillour: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
In this page we give a hopefully exhaustive enough overview of the necessary prerequisites and steps to download, compile and run a simple simulation with the Venus PCM (with the LMDZ dynamical core), set up on a Linux computer.&lt;br /&gt;
&lt;br /&gt;
Note that there is a dedicated install script that attempts to do all these steps (up to and including running the simulation), which you can obtain here: https://web.lmd.jussieu.fr/~lmdz/planets/install_lmdz_venus.bash&lt;br /&gt;
Automating the process is not trivial as there are many subtle variants of Linux flavors and user account setups, so the script may fail in your case, but hopefully the information given in this page should help you solve any problems you encounter.&lt;br /&gt;
&lt;br /&gt;
Note also that on some clusters (at least the ones we know of and extensively use, e.g. [[Using Adastra|Adastra]], [[Using the MESOIPSL cluster|MESOIPSL]], [[Using MeSU|MeSU]] or [[Using Irene Rome|Irene]]) some of the steps below may be skipped because the needed compilers and libraries are known and at hand. &lt;br /&gt;
&lt;br /&gt;
== Prerequisites: Tools and Libraries ==&lt;br /&gt;
In order to use (i.e. compile and run) the GCM, one needs to have some tools and installed libraries at hand. We list below a (minimal) set that you should check that is available and/or that you'll need to first install on your machine. Note that we assume in this tutorial that you are on a Linux native-OS/cluster.&lt;br /&gt;
&lt;br /&gt;
===  Fortran compiler ===&lt;br /&gt;
The GCM source code is in Fortran. One thus needs a Fortran compiler to build (compile) the executable.&lt;br /&gt;
The most easily available one (on Linux) is gfortran and examples discussed here will assume it is the one used.&lt;br /&gt;
You can check that you indeed have a gfortran compiler at hand with the following Bash command:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
which gfortran&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
which should return something like&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
/usr/bin/gfortran&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Subversion ===&lt;br /&gt;
The source code is managed using subversion (svn), which you'll need to download or update. Leaving aside the subtleties of svn and code organization for now, downloading the code amounts to doing the following:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
svn checkout https://svn.lmd.jussieu.fr/Planeto/trunk --depth empty&lt;br /&gt;
cd trunk&lt;br /&gt;
svn update LMDZ.COMMON LMDZ.VENUS&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
As a side note: the source code that will be fetched by svn can also be browsed online here: https://trac.lmd.jussieu.fr/Planeto&lt;br /&gt;
&lt;br /&gt;
=== FCM ===&lt;br /&gt;
The FCM (Flexible Configuration Management) tool is a suite of Perl scripts that help build and manage codes. We use a slightly modified version, which can be obtained using subversion (svn). Ideally you'll want to download it somewhere on your computer once and for all. To do this:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
svn checkout https://forge.ipsl.fr/fcm/svn/PATCHED/FCM_V1.2&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
You'll then need to add the resulting FCM_V1.2/bin to your PATH environment variable so that the command &amp;quot;fcm&amp;quot; may be used from anywhere on your machine, e.g. by adding the following line to your .bashrc:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
export PATH=$PATH:$HOME/FCM_V1.2/bin&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
The assumption here is that the downloaded FCM_V1.2 directory is in your home ($HOME) directory. Adapt accordingly if not.&lt;br /&gt;
&lt;br /&gt;
=== the NetCDF library ===&lt;br /&gt;
The GCM reads and writes input and output files in NetCDF format. Therefore a NetCDF library must be available. As this library is not quite standard, you'll probably have to install it yourself on your system (check out [[the netCDF library]] page for more). You can use the following home-made &amp;quot;install_netcdf4_hdf5_seq.bash&amp;quot; script to do so. For this, ensure that you are in your home directory:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
mkdir netcdf&lt;br /&gt;
cd netcdf&lt;br /&gt;
wget -nv --no-check-certificate http://www.lmd.jussieu.fr/~lmdz/pub/script_install/install_netcdf4_hdf5_seq.bash&lt;br /&gt;
chmod u=rwx install_netcdf4_hdf5_seq.bash&lt;br /&gt;
./install_netcdf4_hdf5_seq.bash &amp;gt; netcdf.log 2&amp;gt;&amp;amp;1&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
Compiling the library and dependencies can take a while (&amp;gt;&amp;gt;15 minutes; be patient).&lt;br /&gt;
Once this is done, check file netcdf.log to verify that all went well.&lt;br /&gt;
You may also want to add its &amp;quot;bin&amp;quot; directory to your PATH environment variable by adding to your .bashrc a line such as:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
export PATH=$PATH:$HOME/netcdf/bin&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
The assumption here is that you have run the &amp;quot;install_netcdf4_hdf5_seq.bash&amp;quot; script in a &amp;quot;netcdf&amp;quot; subdirectory of your home directory. Adapt accordingly if not.&lt;br /&gt;
&lt;br /&gt;
As a side note: The NetCDF library provides a very simple command line tool (ncdump) to inspect the contents of NetCDF files, but you'll need dedicated visualization tools (e.g., Panoply, Python scripts, etc.; see further down this page in the &amp;quot;Checking the Results&amp;quot; section) for more advanced post-processing of the outputs.&lt;br /&gt;
&lt;br /&gt;
=== the IOIPSL library ===&lt;br /&gt;
&lt;br /&gt;
The IOIPSL (Input/Output IPSL) library is designed to handle both the reading of some input files used by the GCM (the *.def files which are described further below) and the writing of some NetCDF output files.&lt;br /&gt;
&lt;br /&gt;
==== Prior to a first compilation: ksh to bash conversion ====&lt;br /&gt;
Some of the IOIPSL install scripts are written in ksh (Korn shell).&lt;br /&gt;
Given that most systems currently use Bash (Bourne Again Shell) as their command-line interpreter and not ksh (Korn Shell), you might need to install ksh on your system (assuming you have super-user privileges), e.g., on Ubuntu Linux:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
sudo apt install ksh&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
Or, if that is not an option, change the shebang occurrences in the package's scripts (such as '''ins_m_prec''') from:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
#!/bin/ksh&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt; &lt;br /&gt;
to &lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
#!/bin/bash &lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Automated IOIPSL install script ====&lt;br /&gt;
Scripts to download and install the IOIPSL library can be found in the &amp;quot;ioipsl&amp;quot; subdirectory of the &amp;quot;LMDZ.COMMON&amp;quot; directory. Since here we assume we're working with gfortran, the relevant one is &amp;quot;install_ioipsl_gfortran.bash&amp;quot;. If your PATH environment variable already includes the path to your NetCDF library distribution's bin directory (see previous section) then all you need to do is execute the script:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
./install_ioipsl_gfortran.bash&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
If all went well the script should end with:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
OK: ioipsl library is in ...&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
''(for further details about [[The_IOIPSL_Library|the IOIPSL library]] and installing it, follow the link and/or use the Search Box at the top of this page)''&lt;br /&gt;
&lt;br /&gt;
== GCM Input Datafiles and Datasets ==&lt;br /&gt;
In order to run, the GCM needs some inputs, such as initial conditions (values of state variables), external inputs (e.g. optical properties) and simulation setup (e.g. specifications on how long to run, which parametrizations should be activated, etc.)&lt;br /&gt;
&lt;br /&gt;
In the spirit of the illustrative example considered here, a set of necessary input data may be downloaded with:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
wget -nv --no-check-certificate http://www.lmd.jussieu.fr/~lmdz/planets/venus/bench_48x32x50.tar.gz&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
Note that this is quite a low resolution case, mostly for simple tests or for checking that the model was correctly installed. For scientific work the model is typically run at higher resolution (e.g. 96x96x50; in fact a bench_96x96x50.tar.gz file can be found in the same directory).&lt;br /&gt;
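Once downloaded, unpack the archive to get the ''bench_48x32x50'' directory:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
tar xvzf bench_48x32x50.tar.gz&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;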
&lt;br /&gt;
Nonetheless this ''bench_48x32x50'' example already gives a good overview of the required input files:&lt;br /&gt;
* a ''run.def'' file, along with companion ''gcm.def'' and ''physiq.def'' ASCII files&lt;br /&gt;
* a ''z2sig.def'' ASCII file, which is read at runtime and contains information about the vertical levels of the PCM&lt;br /&gt;
* a ''traceur.def'' ASCII file, which contains the list of tracers the PCM will use&lt;br /&gt;
* ''start.nc'' and ''startphy.nc'' NetCDF files, which respectively contain the initial conditions for the dynamics and the physics&lt;br /&gt;
* Input datasets (read at run-time by the PCM) ''ksi_global.txt'' and ''SolarNetFlux_RH.dat''&lt;br /&gt;
&lt;br /&gt;
== Compiling the GCM ==&lt;br /&gt;
Now that all the prerequisites are fulfilled, it is (almost!) time to compile the GCM.&lt;br /&gt;
&lt;br /&gt;
=== Prior to a first compilation: setting up the target architecture files ===&lt;br /&gt;
Compiling the model is done using a dedicated Bash script ''makelmdz_fcm'' located in the '''LMDZ.COMMON''' directory. This script however relies on ''architecture files''. These files contain information on which compiler to use, what compilation options to use, where relevant libraries are located, etc. In practice, one must thus create these ASCII text files in the '''arch/''' subdirectory of '''LMDZ.COMMON'''. The naming convention is rather straightforward: when the script ''makelmdz_fcm'' is run with the option '''-arch somename''', it will look for files ''arch/arch-somename.env'', ''arch/arch-somename.path'' and ''arch/arch-somename.fcm''. Leaving aside a detailed description for later (see [[The_Target_Architecture_(&amp;quot;arch&amp;quot;)_Files|this page]]), here we mention that:&lt;br /&gt;
* the ''arch*.env'' is an optional file containing ''environment'' information, such as setting up environment variables or loading modules on some machines, e.g.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
export NETCDF_HOME=/path/to/the/netcdf/distribution&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
* the '''arch*.path''' is a mandatory file containing information relative to external libraries such as NetCDF and IOIPSL, e.g.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
ROOT=$PWD&lt;br /&gt;
&lt;br /&gt;
NETCDF_LIBDIR=&amp;quot;-L${NETCDF_HOME}/lib&amp;quot;&lt;br /&gt;
NETCDF_LIB=&amp;quot;-lnetcdf -lnetcdff&amp;quot;&lt;br /&gt;
NETCDF_INCDIR=&amp;quot;-I${NETCDF_HOME}/include&amp;quot;&lt;br /&gt;
&lt;br /&gt;
IOIPSL_INCDIR=&amp;quot;-I$ROOT/../IOIPSL/inc&amp;quot;&lt;br /&gt;
IOIPSL_LIBDIR=&amp;quot;-L$ROOT/../IOIPSL/lib&amp;quot;&lt;br /&gt;
IOIPSL_LIB=&amp;quot;-lioipsl&amp;quot;&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
Each library is referenced by a fixed identifier (NETCDF, IOIPSL, XIOS, ...) and 3 trailing strings: '''_LIBDIR''', for the path to the library, '''_LIB''', for the library name(s), and '''_INCDIR''' for the path to the library's ''include'' directory.&lt;br /&gt;
&lt;br /&gt;
* the '''arch*.fcm''' is a mandatory file containing information relative to the compiler and compilation options, e.g.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
%COMPILER            gfortran&lt;br /&gt;
%LINK                gfortran&lt;br /&gt;
%AR                  ar&lt;br /&gt;
%MAKE                make&lt;br /&gt;
%FPP_FLAGS           -P -traditional&lt;br /&gt;
%FPP_DEF             NC_DOUBLE&lt;br /&gt;
%BASE_FFLAGS         -c -fdefault-real-8 -fdefault-double-8 -ffree-line-length-none -fno-align-commons&lt;br /&gt;
%PROD_FFLAGS         -O3&lt;br /&gt;
%DEV_FFLAGS          -O&lt;br /&gt;
%DEBUG_FFLAGS        -ffpe-trap=invalid,zero,overflow -fbounds-check -g3 -O0 -fstack-protector-all -finit-real=snan -fbacktrace&lt;br /&gt;
%MPI_FFLAGS&lt;br /&gt;
%OMP_FFLAGS         &lt;br /&gt;
%BASE_LD     &lt;br /&gt;
%MPI_LD&lt;br /&gt;
%OMP_LD              &lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
Again, not going into a detailed description (follow [[The_Target_Architecture_(&amp;quot;arch&amp;quot;)_Files|this link]] for that), just note here that each line corresponds to a keyword (starting with &amp;quot;%&amp;quot;) followed by the relevant options. Here, we mention a few of the main ones:&lt;br /&gt;
* %COMPILER: The compiler to use (here, gfortran)&lt;br /&gt;
* %BASE_FFLAGS: compiler options (always included)&lt;br /&gt;
* %PROD_FFLAGS: compilation flags to include if makelmdz_fcm is run with the &amp;quot;-prod&amp;quot; option&lt;br /&gt;
* %DEBUG_FFLAGS: compilation flags to include if makelmdz_fcm is run with the &amp;quot;-debug&amp;quot; option&lt;br /&gt;
* %BASE_LD: flags to add at the linking step of the compilation&lt;br /&gt;
&lt;br /&gt;
=== Compiling the test case  ===&lt;br /&gt;
To compile the GCM at the sought resolution, run (in LMDZ.COMMON):&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
./makelmdz_fcm -arch local -p venus -d 48x32x50 -j 8 gcm &lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
Here, we assume that you have generated the '''arch-local.*''' files as suggested in the previous section.&lt;br /&gt;
The options for ''makelmdz_fcm'' used here imply:&lt;br /&gt;
* '''-p venus''': the GCM will use the &amp;quot;venus&amp;quot; physics package&lt;br /&gt;
* '''-d 48x32x50''': the GCM grid will be 48x32 in longitude x latitude, with 50 vertical levels.&lt;br /&gt;
* '''-j 8''': up to 8 compilation tasks will be run in parallel (to speed up the build).&lt;br /&gt;
For a glimpse at all the possible ''makelmdz_fcm'' options and their meanings, run:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
./makelmdz_fcm -h&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
and/or check the dedicated [[The_makelmdz_fcm_GCM_Compilation_Script|makelmdz_fcm page]].&lt;br /&gt;
&lt;br /&gt;
Upon successful compilation, the executable '''gcm_48x32x50_phyvenus_seq.e''' should be generated in the '''bin''' subdirectory.&lt;br /&gt;
&lt;br /&gt;
== Running the GCM ==&lt;br /&gt;
You need to copy (or move) the executable '''gcm_48x32x50_phyvenus_seq.e''' from '''LMDZ.COMMON/bin''' to the directory containing the initial conditions and parameter files, e.g. '''bench_48x32x50'''.&lt;br /&gt;
&lt;br /&gt;
You can now run the GCM.&lt;br /&gt;
This is usually a two-step process: the (optional) first step is to source the environment architecture file (the very same that was used to compile the model), e.g.,:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
source ../LMDZ.COMMON/arch.env&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
The second step is to execute the model, e.g.,:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
./gcm_48x32x50_phyvenus_seq.e &amp;gt; gcm.out 2&amp;gt;&amp;amp;1&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
With this command line, the (text) output messages are redirected into a text file, '''gcm.out'''. It is convenient to keep this file for later inspection (e.g., to track a bug). If there is no redirection (only '''./gcm_48x32x50_phyvenus_seq.e'''), then the outputs will be printed directly to the screen.&lt;br /&gt;
&lt;br /&gt;
== Checking the Results of a Simulation ==&lt;br /&gt;
Once the simulation is finished, you'll know that all went well (&amp;quot;everything is cool&amp;quot;) if the last few lines of the standard text output are:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
 in abort_gcm&lt;br /&gt;
 Stopping in leapfrog&lt;br /&gt;
 Reason = Simulation finished &lt;br /&gt;
 Everything is cool&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
If not, start looking for an error message and a way to fix the problem...&lt;br /&gt;
&lt;br /&gt;
Apart from the standard text output messages from the GCM, which are mostly for monitoring and checking the simulation progress, the user will more likely be interested in checking the contents of the ''histmth.nc'' file produced by the GCM, as it contains instantaneous values of the main model variables (atmospheric temperature, winds, etc.).&lt;br /&gt;
&lt;br /&gt;
...TODO...ADD HERE SOME ILLUSTRATIVE PLOTS OF THE EXPECTED BENCH OUTPUTS...&lt;br /&gt;
&lt;br /&gt;
== Taking Things to the Next Level ==&lt;br /&gt;
The short tutorial presented in this page is meant to be useful to get an overview of what is required to install and run the GCM, in addition to checking the results of a simulation. Moving on to a more intensive and problem-specific usage will require diving into additional topics and aspects.&lt;br /&gt;
&lt;br /&gt;
To run our reference simulation at 96x96x50 resolution (no chemistry, limited to 95 km altitude), you will need the following files (a sketch of how to fetch them is given after this list):&lt;br /&gt;
* initial states: [https://web.lmd.jussieu.fr/~lmdz/planets/venus/start-96x96x50.nc?ref_type=heads start-96x96x50.nc] and [https://web.lmd.jussieu.fr/~lmdz/planets/venus/startphy-96x96x50.nc?ref_type=heads startphy-96x96x50.nc]&lt;br /&gt;
* needed inputs: [https://web.lmd.jussieu.fr/~lmdz/planets/venus/venus-inputs-Avr2026.tgz?ref_type=heads venus-inputs-Avr2026.tgz]&lt;br /&gt;
* readme for inputs: [https://web.lmd.jussieu.fr/~lmdz/planets/venus/readme-inputs.txt?ref_type=heads readme-inputs.txt]&lt;br /&gt;
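A minimal way to fetch them from the command line (here using wget's -O option so that the &amp;quot;?ref_type=heads&amp;quot; query suffix does not end up in the saved filenames):&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
wget -nv &amp;quot;https://web.lmd.jussieu.fr/~lmdz/planets/venus/start-96x96x50.nc?ref_type=heads&amp;quot; -O start-96x96x50.nc&lt;br /&gt;
wget -nv &amp;quot;https://web.lmd.jussieu.fr/~lmdz/planets/venus/startphy-96x96x50.nc?ref_type=heads&amp;quot; -O startphy-96x96x50.nc&lt;br /&gt;
wget -nv &amp;quot;https://web.lmd.jussieu.fr/~lmdz/planets/venus/venus-inputs-Avr2026.tgz?ref_type=heads&amp;quot; -O venus-inputs-Avr2026.tgz&lt;br /&gt;
wget -nv &amp;quot;https://web.lmd.jussieu.fr/~lmdz/planets/venus/readme-inputs.txt?ref_type=heads&amp;quot; -O readme-inputs.txt&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;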
&lt;br /&gt;
For such a run, these features will be essential:&lt;br /&gt;
* Compiling and running in parallel (MPI) to run faster: [[Running the Venus PCM in parallel]]&lt;br /&gt;
* Using [[The XIOS Library|the XIOS library]] (instead of IOIPSL) to handle PCM outputs: [[Managing the Venus PCM outputs]]&lt;br /&gt;
&lt;br /&gt;
To dive deeper into more advanced studies, you may consider also:&lt;br /&gt;
* Running with advanced configurations of the physics packages, e.g. adding chemistry, thermospheric processes, etc. &lt;br /&gt;
* Selecting the appropriate inputs and run parameters for a given study.&lt;br /&gt;
* Compiling and running with the other dynamical cores (DYNAMICO and WRF)&lt;br /&gt;
* post-processing and analysis of model outputs&lt;br /&gt;
&lt;br /&gt;
All these points and much more are detailed in the many pages of this site (do check out the menu on the left and make heavy use of the site's search engine)!&lt;br /&gt;
&lt;br /&gt;
[[Category:Venus-Model]]&lt;br /&gt;
[[Category:Venus-LMDZ]]&lt;/div&gt;</summary>
		<author><name>Emillour</name></author>	</entry>

	<entry>
		<id>http://lmdz-forge.lmd.jussieu.fr/mediawiki/Planets/index.php?title=Using_the_MESOIPSL_cluster&amp;diff=3247</id>
		<title>Using the MESOIPSL cluster</title>
		<link rel="alternate" type="text/html" href="http://lmdz-forge.lmd.jussieu.fr/mediawiki/Planets/index.php?title=Using_the_MESOIPSL_cluster&amp;diff=3247"/>
				<updated>2026-04-09T14:18:51Z</updated>
		
		<summary type="html">&lt;p&gt;Emillour: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;This page provides some information for those who use the MESOIPSL clusters (also known as &amp;quot;spirit&amp;quot;, replacing &amp;quot;ciclad&amp;quot;).&lt;br /&gt;
&lt;br /&gt;
Note that there are two distinct MESOIPSL Spirit clusters, one in Sorbonne Université (SU), and one in Ecole Polytechnique (X). If you log on to &amp;quot;spirit1&amp;quot; or &amp;quot;spirit2&amp;quot; (as shown below) then you are on the SU-Spirit cluster whereas if you log on to &amp;quot;spiritx1&amp;quot; or &amp;quot;spiritx2&amp;quot; then you are on the X-Spirit cluster.&lt;br /&gt;
&lt;br /&gt;
If you need to run on GPUs, that is possible using the 3rd MESOIPSL cluster, HAL.&lt;br /&gt;
&lt;br /&gt;
== How to access the cluster ==&lt;br /&gt;
If you had an account on Ciclad, then you have one on Spirit. If you need to open an account (this is of course reserved for IPSL users) then proceed to this page: https://documentations.ipsl.fr/spirit/getting_started/account.html&lt;br /&gt;
&lt;br /&gt;
Once you have an account you can ssh to the cluster via either of the spirit1 or spirit2 login nodes:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
ssh yourMESOIPSLlogin@spirit1.ipsl.fr&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
or equivalently&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
ssh yourMESOIPSLlogin@spirit2.ipsl.fr&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
IMPORTANT: ssh authentication requires ED25519 or RSA (4096 bits) key types. If your ssh connection to the machines fails, the first thing to check is that you are indeed using one of these key types.&lt;br /&gt;
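&lt;br /&gt;
If needed, a suitable key pair can be generated on your local machine with, e.g.:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
ssh-keygen -t ed25519 # or, for an RSA key: ssh-keygen -t rsa -b 4096&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;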
&lt;br /&gt;
This is probably also the right place to point to the MESOIPSL cluster's main page: https://documentations.ipsl.fr/spirit/&lt;br /&gt;
&lt;br /&gt;
== OS and disk space ==&lt;br /&gt;
As the welcome message will remind you:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
Welcome to Ubuntu 20.04.5 LTS (GNU/Linux 5.4.0-125-generic x86_64)&lt;br /&gt;
CPU AMD EPYC 7402P 24-Core Processor 2.8GHz&lt;br /&gt;
=========================================================================&lt;br /&gt;
*        Mesocentre ESPRI IPSL (Cluster Spirit) JUSSIEU                 *&lt;br /&gt;
=========================================================================&lt;br /&gt;
** Disk Space :&lt;br /&gt;
- /home/login     (32Go and 300000 files max per user) : Backup every day.&lt;br /&gt;
- /data/login     ( 1To and 300000 files max per user) : NO BACKUP&lt;br /&gt;
- /scratchu/login ( 2To and 300000 files max per user) : NO BACKUP&lt;br /&gt;
- /bdd/ : Datasets&lt;br /&gt;
- /climserv-home/, /homedata ,/scratchx : SpiritX workspace ( READ-ONLY)&lt;br /&gt;
------------------------------------------------------------------------------&lt;br /&gt;
Migration Documentation ( Temporary URL )&lt;br /&gt;
https://documentations.ipsl.fr/spirit/spirit_clusters/migration_from_ciclad_climserv.html&lt;br /&gt;
**  Support Contact  mailto:meso-support@ipsl.fr&lt;br /&gt;
------------------------------------------------------------------------------&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
This is Ubuntu Linux and the &amp;quot;HOME&amp;quot; directory is quite limited in size. Most work should be done on the '''data''' and/or '''scratchu''' disks.&lt;br /&gt;
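&lt;br /&gt;
For instance, to set up a working directory on the '''data''' disk (the directory name is of course arbitrary):&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
mkdir -p /data/$USER/mysimus&lt;br /&gt;
cd /data/$USER/mysimus&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;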
&lt;br /&gt;
It is up to you to tailor your environment: by default it is quite bare, and you need to load the modules giving access to specific software, compilers or libraries (and versions thereof).&lt;br /&gt;
To know what modules are available:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
module avail&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
To load a given module, here the Panoply software:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
module load panoply&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Compiling the PCMs on Spirit ==&lt;br /&gt;
&lt;br /&gt;
Many compilers and compiler versions, along with precompiled NetCDF libraries, are available on Spirit, which can simplify installing and using the model there.&lt;br /&gt;
In practice one should first decide which compiler suite to use: Intel ifort (deprecated?) or GNU gfortran.&lt;br /&gt;
&lt;br /&gt;
=== Intel compilers ===&lt;br /&gt;
Dedicated arch files are available in the '''LMDZ.COMMON/arch''' subdirectory! They are labeled '''ifort_MESOIPSL''' (XIOS and DYNAMICO also have similarly named arch files). To use them when compiling the PCM, one simply needs to pass the ''-arch ifort_MESOIPSL'' argument to the ''makelmdz_fcm'' script.&lt;br /&gt;
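&lt;br /&gt;
For illustration, a typical compilation command could look like the following (the physics package, grid resolution and parallelism flag shown here are only an example, to be adapted to your case):&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
cd LMDZ.COMMON&lt;br /&gt;
./makelmdz_fcm -arch ifort_MESOIPSL -p mars -d 64x48x54 -parallel mpi gcm&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;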
 &lt;br /&gt;
Likewise there is a dedicated '''install_ioipsl_ifort_MESOIPSL.bash''' install script for IOIPSL available in '''LMDZ.COMMON/ioipsl'''; you simply need to execute it:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
./install_ioipsl_ifort_MESOIPSL.bash&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
To compile the utilities, you should define the three variables below as indicated (the first one being the likely culprit if a previous attempt failed):&lt;br /&gt;
&amp;lt;br&amp;gt; NETCDF_HOME=$NETCDF_FORTRAN_ROOT&lt;br /&gt;
&amp;lt;br&amp;gt;COMPILER=&amp;quot;ifort&amp;quot;&lt;br /&gt;
&amp;lt;br&amp;gt;COMPILER_OPTIONS=&amp;quot;-O2 -ip&amp;quot;&lt;br /&gt;
&lt;br /&gt;
=== GNU compilers ===&lt;br /&gt;
Dedicated arch files are available in the '''LMDZ.COMMON/arch''' subdirectory! They are labeled '''MESOIPSL-gnu''' (XIOS and DYNAMICO also have similarly named arch files). To use them when compiling the PCM, one simply needs to specify ''-arch MESOIPSL-gnu'' argument to the ''makelmdz_fcm'' script.&lt;br /&gt;
&lt;br /&gt;
Likewise there is a dedicated '''install_ioipsl_MESOIPSL-gnu.bash''' install script for IOIPSL available in '''LMDZ.COMMON/ioipsl'''; you simply need to execute it:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
./install_ioipsl_MESOIPSL-gnu.bash&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
To compile the utilities, you should define the three variables below as indicated (the first one being the likely culprit if a previous attempt failed):&lt;br /&gt;
&amp;lt;br&amp;gt; NETCDF_HOME=$NETCDF_FORTRAN_ROOT&lt;br /&gt;
&amp;lt;br&amp;gt;COMPILER=&amp;quot;gfortran&amp;quot;&lt;br /&gt;
&amp;lt;br&amp;gt;COMPILER_OPTIONS=&amp;quot;-O2&amp;quot;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Example of a job to run a GCM simulation ==&lt;br /&gt;
Here to run using 24 MPI tasks with 2 OpenMP threads each:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
#SBATCH --nodes=1&lt;br /&gt;
#SBATCH --ntasks-per-node=24&lt;br /&gt;
#SBATCH --cpus-per-task=2&lt;br /&gt;
#SBATCH --partition=zen4 # zen4: 64 cores/node and 240GB of memory&lt;br /&gt;
##SBATCH --partition=zen16 # zen16: 32 cores/node and 496GB of memory&lt;br /&gt;
#SBATCH -J job_mpi_omp&lt;br /&gt;
#SBATCH --time=0:55:00&lt;br /&gt;
#SBATCH --output %x.%j.out&lt;br /&gt;
&lt;br /&gt;
source ../trunk/LMDZ.COMMON/arch.env&lt;br /&gt;
&lt;br /&gt;
export OMP_NUM_THREADS=2&lt;br /&gt;
export OMP_STACKSIZE=400M&lt;br /&gt;
&lt;br /&gt;
mpirun gcm_64x48x54_phymars_para.e &amp;gt; gcm.out 2&amp;gt;&amp;amp;1&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Note that there is a per-user limit of (at most) 96 cores for a given job.&lt;br /&gt;
&lt;br /&gt;
== Sending data to Spirit ==&lt;br /&gt;
The following Bash function can be used to send data to your scratch directory on Spirit.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
function rsend {&lt;br /&gt;
    a=${1:-.}&lt;br /&gt;
    b=${2:-$a}&lt;br /&gt;
    c=${3:-spirit}&lt;br /&gt;
    rsync -avzl &amp;quot;$a/&amp;quot; &amp;quot;$c:/scratchx/$USER/$b/&amp;quot;&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
To be used as follows:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
rsend folder # simply send folder to your scratch dir&lt;br /&gt;
rsend folder1 folder2 # send folder1 into folder2&lt;br /&gt;
rsend folder1 folder2 machine #send folder1 into folder2 on machine (depends on your ssh config)&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Category:FAQ]]&lt;/div&gt;</summary>
		<author><name>Emillour</name></author>	</entry>

	<entry>
		<id>http://lmdz-forge.lmd.jussieu.fr/mediawiki/Planets/index.php?title=Quick_Install_and_Run&amp;diff=3245</id>
		<title>Quick Install and Run</title>
		<link rel="alternate" type="text/html" href="http://lmdz-forge.lmd.jussieu.fr/mediawiki/Planets/index.php?title=Quick_Install_and_Run&amp;diff=3245"/>
				<updated>2026-04-08T12:52:23Z</updated>
		
		<summary type="html">&lt;p&gt;Emillour: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;In this page we give a hopefully exhaustive enough overview of the necessary prerequisites and steps to download, compile and run a simple simulation with the GCM in an &amp;quot;Early Mars&amp;quot; setup (i.e. a desert planet with a CO2 atmosphere) on a Linux computer.&lt;br /&gt;
&lt;br /&gt;
Note that there is a dedicated install script that attempts to do all these steps (up to and including running the simulation), which you can obtain here: https://web.lmd.jussieu.fr/~lmdz/planets/generic/install_scripts/install_lmdz_generic_earlymars.bash&lt;br /&gt;
Automating the process is not trivial as there are many subtle variants of Linux flavors and user account setups, so the script may fail in your case, but hopefully the information given in this page should help you solve the encountered problems.&lt;br /&gt;
&lt;br /&gt;
== Prerequisites: Tools and Libraries ==&lt;br /&gt;
In order to use (i.e. compile and run) the GCM, one needs to have some tools and installed libraries at hand. We list below a (minimal) set that you should check that is available and/or that you'll need to first install on your machine. Note that we assume in this tutorial that you are on a Linux native-OS/cluster.&lt;br /&gt;
&lt;br /&gt;
===  Fortran compiler ===&lt;br /&gt;
The GCM source code is in Fortran. One thus needs a Fortran compiler to build (compile) the executable.&lt;br /&gt;
The most easily available one (on Linux) is gfortran and examples discussed here will assume it is the one used.&lt;br /&gt;
You can check that you indeed have a gfortran compiler at hand with the following Bash command:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
which gfortran&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
which should return something like&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
/usr/bin/gfortran&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Subversion ===&lt;br /&gt;
The source code is managed using subversion (svn), which you'll need to download or update. Leaving aside the subtleties of svn and code organization for now, downloading the code amounts to doing the following:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
svn checkout https://svn.lmd.jussieu.fr/Planeto/trunk --depth empty&lt;br /&gt;
cd trunk&lt;br /&gt;
svn update LMDZ.COMMON LMDZ.GENERIC&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
As a side note: the source code that will be fetched by svn can also be browsed online here: https://trac.lmd.jussieu.fr/Planeto&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--Note: if the command line above doesn't work, you may also try to replace 'http' by 'https'.--&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Git === &lt;br /&gt;
&lt;br /&gt;
Alternatively to svn, you can use [[Git usage|git to download the source code]]. &lt;br /&gt;
&lt;br /&gt;
=== FCM ===&lt;br /&gt;
The FCM (Flexible Configuration Management) tool is a suite of Perl scripts to help build and manage codes. We use a slightly modified version which can be obtained using subversion (svn). Ideally you'll want to download it somewhere on your computer once and for all. To do this:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
svn checkout https://forge.ipsl.jussieu.fr/fcm/svn/PATCHED/FCM_V1.2&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
You'll then need to add the resulting FCM_V1.2/bin to your PATH environment variable so that the command &amp;quot;fcm&amp;quot; may be used from anywhere on your machine, e.g. by adding the following line to your .bashrc (a hidden configuration script in your home directory, ~/.bashrc, that runs whenever you start a new Bash shell):&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
export PATH=$PATH:$HOME/FCM_V1.2/bin&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
The assumption here is that the downloaded FCM_V1.2 directory is in your home ($HOME) directory. Adapt accordingly if not.&lt;br /&gt;
&lt;br /&gt;
=== the NetCDF library ===&lt;br /&gt;
The GCM reads and writes input and output files in NetCDF format, therefore a NetCDF library is required. Most clusters provide a NetCDF library that you can load before using the model.&lt;br /&gt;
&lt;br /&gt;
If this library is not available, you can install it by yourself on your system (check out [[the netCDF library]] page for more). You can use the following home-made &amp;quot;install_netcdf4_hdf5_seq.bash&amp;quot; script to do so. For this, ensure that you are in your home directory:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
mkdir netcdf&lt;br /&gt;
cd netcdf&lt;br /&gt;
wget -nv --no-check-certificate http://www.lmd.jussieu.fr/~lmdz/pub/script_install/install_netcdf4_hdf5_seq.bash&lt;br /&gt;
chmod u=rwx install_netcdf4_hdf5_seq.bash&lt;br /&gt;
./install_netcdf4_hdf5_seq.bash &amp;gt; netcdf.log 2&amp;gt;&amp;amp;1&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
Compiling the library and dependencies can take a while (&amp;gt;&amp;gt;15 minutes; be patient).&lt;br /&gt;
Once this is done, check file netcdf.log to verify that all went well.&lt;br /&gt;
You may want to also add its &amp;quot;bin&amp;quot; directory to your PATH environment variable by adding in your .bashrc a line of:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
export PATH=$PATH:$HOME/netcdf/bin&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
The assumption here is that you have run the &amp;quot;install_netcdf4_hdf5_seq.bash&amp;quot; script in a &amp;quot;netcdf&amp;quot; subdirectory of your home directory. Adapt accordingly if not.&lt;br /&gt;
&lt;br /&gt;
As a side note: The NetCDF library provides a very simple command line tool (ncdump) to inspect the contents of NetCDF files, but you'll need more advanced visualization tools (e.g., Panoply, Python scripts, etc. - see further down this page in the &amp;quot;Checking the Results&amp;quot; section) for more advanced post-processing of the outputs.&lt;br /&gt;
&lt;br /&gt;
=== the IOIPSL library ===&lt;br /&gt;
&lt;br /&gt;
The IOIPSL (Input/Output IPSL) library is designed to handle both the reading of some input files used by the GCM (the *.def files which are described further below) and the writing of some NetCDF output files.&lt;br /&gt;
&lt;br /&gt;
==== Automated IOIPSL install script ====&lt;br /&gt;
Scripts to download and install the IOIPSL library can be found in the &amp;quot;ioipsl&amp;quot; subdirectory of the &amp;quot;LMDZ.COMMON&amp;quot; directory. Since here we assume we're working with gfortran, the relevant one is &amp;quot;install_ioipsl_gfortran.bash&amp;quot;. If your PATH environment variable already includes the path to your NetCDF library distribution's bin directory (see previous section), then all you need to do is execute the script:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
./install_ioipsl_gfortran.bash&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
If all went well the script should end with:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
OK: ioipsl library is in ...&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
''(for further details about [[The_IOIPSL_Library|the IOIPSL library]] and installing it, follow the link and/or use the Search Box at the top of this page)''&lt;br /&gt;
&lt;br /&gt;
== GCM Input Datafiles and Datasets ==&lt;br /&gt;
In order to run, the GCM needs some inputs, such as initial conditions (values of state variables), external inputs (e.g. optical properties of aerosols) and simulation setup (e.g. specifications on how long to run, which parametrizations should be activated, etc.)&lt;br /&gt;
&lt;br /&gt;
In the spirit of the illustrative example considered here (an &amp;quot;Early Mars&amp;quot; simulation), a set of necessary input data may be downloaded with:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
wget -nv --no-check-certificate https://web.lmd.jussieu.fr/~lmdz/planets/generic/reference_setups/reference_earlymars_64x48x26_b32x36.tar.gz&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
Once unpacked (to do that, you can execute the command &amp;quot;tar xvzf reference_earlymars_64x48x26_b32x36.tar.gz&amp;quot;), the resulting &amp;quot;reference_earlymars_64x48x26_b32x36&amp;quot; directory will contain all that is needed, namely:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
callphys.def  gases.def  startfi.nc  traceur.def&lt;br /&gt;
datadir/      run.def    start.nc    z2sig.def&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
* Initial condition NetCDF files ''start.nc'' and ''startfi.nc''; the first containing initial condition values for the dynamics and the second initial condition values for the physics.&lt;br /&gt;
* A '''datadir''' directory containing external inputs (aerosol properties, stellar spectra, etc.)&lt;br /&gt;
* Some ASCII *.def files containing run parameters (see the quick inspection example after this list), namely:&lt;br /&gt;
# [[The_run.def_Input_File | run.def]] : &amp;quot;master def file&amp;quot; containing main run parameters&lt;br /&gt;
# [[The_callphys.def_Input_File | callphys.def]] : file containing flags and keys for the various physics parametrizations&lt;br /&gt;
# [[The_z2sig.def_Input_File | z2sig.def]] : file describing the sought vertical discretization&lt;br /&gt;
# [[The_traceur.def_Input_File | traceur.def]] : file specifying the tracer number and names&lt;br /&gt;
# [[The_gases.def_Input_File | gases.def]] : file specifying the list of gases (main and trace) in the atmosphere&lt;br /&gt;
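For a quick first look at these files, you can for instance check the run length set in ''run.def'' (assuming here that, as in the standard setups, it is controlled by the ''nday'' parameter):&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
grep nday reference_earlymars_64x48x26_b32x36/run.def&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;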
&lt;br /&gt;
== Compiling the GCM ==&lt;br /&gt;
Now that all the prerequisites are fulfilled, it is (almost!) time to compile the GCM.&lt;br /&gt;
&lt;br /&gt;
=== Prior to a first compilation: setting up the target architecture files ===&lt;br /&gt;
Compiling the model is done using a dedicated Bash script ''makelmdz_fcm'' located in the '''LMDZ.COMMON''' directory. This script however relies on ''architecture files''. These files contain information on which compiler to use, what compilation options to use, where relevant libraries are located, etc. In practice, one must thus create these ASCII text files in the '''arch/''' subdirectory of '''LMDZ.COMMON'''. The naming convention is rather straightforward: when the script ''makelmdz_fcm'' is run with the option '''-arch somename''', it will look for the files ''arch/arch-somename.env'', ''arch/arch-somename.path'' and ''arch/arch-somename.fcm''. Leaving aside a detailed description for later (see [[The_Target_Architecture_(&amp;quot;arch&amp;quot;)_Files|this page]]), here we mention that:&lt;br /&gt;
* the ''arch*.env'' is an optional file containing ''environment'' information, such as setting up environment variables or loading modules on some machines, e.g.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
export NETCDF_HOME=/path/to/the/netcdf/distribution&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
A more realistic (but more specific) example of an '''arch*.env''' file using &amp;quot;recent&amp;quot; module commands, adapted for compilation and visualisation on a given supercomputer, would look more like the following:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
module purge&lt;br /&gt;
module load GCC/10.3.0  OpenMPI/4.1.1&lt;br /&gt;
module load netCDF-Fortran/4.5.3&lt;br /&gt;
export NETCDF_INCDIR=&amp;quot;/opt/ebsofts/netCDF-Fortran/4.5.3-gompi-2021a/include&amp;quot;&lt;br /&gt;
export NETCDFF_LIBDIR=&amp;quot;/opt/ebsofts/netCDF-Fortran/4.5.3-gompi-2021a/lib&amp;quot;&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
Note that the last two lines above specify paths to the '''include''' and '''lib''' directories used on this system and will certainly vary from system to system. Likewise, the exact module versions will most likely need to be adapted in your specific '''arch*.env''' file.&lt;br /&gt;
* the '''arch*.path''' is a mandatory file containing information relative to external libraries such as NetCDF and IOIPSL, e.g.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
ROOT=$PWD&lt;br /&gt;
&lt;br /&gt;
NETCDF_LIBDIR=&amp;quot;-L${NETCDF_HOME}/lib&amp;quot;&lt;br /&gt;
NETCDF_LIB=&amp;quot;-lnetcdf -lnetcdff&amp;quot;&lt;br /&gt;
NETCDF_INCDIR=&amp;quot;-I${NETCDF_HOME}/include&amp;quot;&lt;br /&gt;
&lt;br /&gt;
IOIPSL_INCDIR=&amp;quot;-I$ROOT/../IOIPSL/inc&amp;quot;&lt;br /&gt;
IOIPSL_LIBDIR=&amp;quot;-L$ROOT/../IOIPSL/lib&amp;quot;&lt;br /&gt;
IOIPSL_LIB=&amp;quot;-lioipsl&amp;quot;&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
Each library is referenced by a fixed identifier (NETCDF, IOIPSL, XIOS, ...) and 3 trailing strings: '''_LIBDIR''', for the path to the library, '''_LIB''', for the library name(s), and '''_INCDIR''' for the path to the library's ''include'' directory.&lt;br /&gt;
&lt;br /&gt;
* the '''arch*.fcm''' is a mandatory file containing information relative to the compiler and compilation options, e.g.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
%COMPILER            gfortran&lt;br /&gt;
%LINK                gfortran&lt;br /&gt;
%AR                  ar&lt;br /&gt;
%MAKE                make&lt;br /&gt;
%FPP_FLAGS           -P -traditional&lt;br /&gt;
%FPP_DEF             NC_DOUBLE&lt;br /&gt;
%BASE_FFLAGS         -c -fdefault-real-8 -fdefault-double-8 -ffree-line-length-none -fno-align-commons&lt;br /&gt;
%PROD_FFLAGS         -O3&lt;br /&gt;
%DEV_FFLAGS          -O&lt;br /&gt;
%DEBUG_FFLAGS        -ffpe-trap=invalid,zero,overflow -fbounds-check -g3 -O0 -fstack-protector-all -finit-real=snan -fbacktrace&lt;br /&gt;
%MPI_FFLAGS&lt;br /&gt;
%OMP_FFLAGS         &lt;br /&gt;
%BASE_LD     &lt;br /&gt;
%MPI_LD&lt;br /&gt;
%OMP_LD              &lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
Again, not going into a detailed description (follow [[The_Target_Architecture_(&amp;quot;arch&amp;quot;)_Files|this link]] for that), just note here that each line corresponds to a keyword (starting with &amp;quot;%&amp;quot;) followed by the relevant options. Here, we mention a few of the main ones:&lt;br /&gt;
* %COMPILER: The compiler to use (here, gfortran)&lt;br /&gt;
* %BASE_FFLAGS: compiler options (always included)&lt;br /&gt;
* %PROD_FFLAGS: compilation flags to include if makelmdz_fcm is run with the &amp;quot;-prod&amp;quot; option&lt;br /&gt;
* %DEBUG_FFLAGS: compilation flags to include if makelmdz_fcm is run with the &amp;quot;-debug&amp;quot; option&lt;br /&gt;
* %BASE_LD: flags to add at the linking step of the compilation&lt;br /&gt;
&lt;br /&gt;
Note that if you are using a recent version of gfortran (version 10 or later), you have to add an extra option to the %BASE_FFLAGS, namely '''-fallow-argument-mismatch''' (see the example below).&lt;br /&gt;
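With this option added, the %BASE_FFLAGS line of the example above would become:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
%BASE_FFLAGS         -c -fdefault-real-8 -fdefault-double-8 -ffree-line-length-none -fno-align-commons -fallow-argument-mismatch&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;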
&lt;br /&gt;
Also note that the '''LMDZ.COMMON/arch/''' directory contains many examples of arch files that you can re-use as-is if you compile the model on one of our usual computing clusters (e.g. Spirit, Adastra, etc.). Just check the content of the directory to see if your favorite computing cluster already has arch files.&lt;br /&gt;
&lt;br /&gt;
=== Compiling a test case (early Mars) ===&lt;br /&gt;
To compile the GCM at the sought resolution for the Early Mars test case run (in LMDZ.COMMON):&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
./makelmdz_fcm -arch local -p generic -d 64x48x26 -b 32x36 gcm &lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&amp;lt;!-- -s option is no more needed ; * '''-s 2''': the physics parametrizations will handle 2 radiatively active tracers (water ice and dust for the Early Mars setup) --&amp;gt;&lt;br /&gt;
Here, we assume that you have generated the '''arch-local.*''' files as suggested in the previous section.&lt;br /&gt;
The options for ''makelmdz_fcm'' used here imply:&lt;br /&gt;
* '''-p generic''': the GCM will use the &amp;quot;generic&amp;quot; physics package&lt;br /&gt;
* '''-d 64x48x26''': the GCM grid will be 64x48 in longitude x latitude, with 26 vertical levels.&lt;br /&gt;
* '''-b 32x36''': the physics radiative transfer will be done using 32 bands in the IR and 36 in the visible.&lt;br /&gt;
For a glimpse at all the possible ''makelmdz_fcm'' options and their meanings, run:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
./makelmdz_fcm -h&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
and/or check the dedicated [[The_makelmdz_fcm_GCM_Compilation_Script|makelmdz_fcm page]].&lt;br /&gt;
&lt;br /&gt;
Upon successful compilation, the executable '''gcm_64x48x26_phygeneric_b32x36_seq.e''' should be generated in the '''bin''' subdirectory.&lt;br /&gt;
&lt;br /&gt;
=== Known issues ===&lt;br /&gt;
&lt;br /&gt;
If the compilation fails, it might be due to the options used in the arch file. &lt;br /&gt;
For example, if you are using a gfortran version prior to 10, you could get an error such as:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
gfortran: error: unrecognized command line option ‘-fallow-argument-mismatch’; did you mean ‘-Wno-argument-mismatch’?&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
This can be solved by removing the option '''-fallow-argument-mismatch''' from the arch.fcm file.&lt;br /&gt;
&lt;br /&gt;
If you are using a recent version of gfortran (10 or beyond) without the option '''-fallow-argument-mismatch''', the compilation will probably fail with the error:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
  136 |      .       idim_index,nvarid)&lt;br /&gt;
      |             2                                       &lt;br /&gt;
......&lt;br /&gt;
  211 |       ierr = NF_DEF_VAR (nid, &amp;quot;aire&amp;quot;, NF_DOUBLE, 2, id,nvarid)&lt;br /&gt;
      |                                                    1&lt;br /&gt;
Error: Rank mismatch between actual argument at (1) and actual argument at (2) (scalar and rank-1)&lt;br /&gt;
fcm_internal compile failed (256)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Add the '''-fallow-argument-mismatch''' compilation option to the arch.fcm file to solve the issue.&lt;br /&gt;
&lt;br /&gt;
== Running the GCM ==&lt;br /&gt;
To run your first simulation, you first need to copy (or move) the executable '''gcm_64x48x26_phygeneric_b32x36_seq.e''' from '''LMDZ.COMMON/bin''' to the directory containing the initial conditions and parameter files, e.g. '''reference_earlymars_64x48x26_b32x36''', and then run it, as sketched below.&lt;br /&gt;
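A minimal sketch, assuming you are in the directory containing both '''LMDZ.COMMON''' and the unpacked '''reference_earlymars_64x48x26_b32x36''' directory (adapt the paths otherwise):&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
cp LMDZ.COMMON/bin/gcm_64x48x26_phygeneric_b32x36_seq.e reference_earlymars_64x48x26_b32x36/&lt;br /&gt;
cd reference_earlymars_64x48x26_b32x36&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;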
This is usually a two-step process: the (optional) first step is to source the environment architecture file (the very same that was used to compile the model), e.g.,:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
source ../LMDZ.COMMON/arch.env&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
The second step is to execute the model, e.g.,:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
./gcm_64x48x26_phygeneric_b32x36_seq.e &amp;gt; gcm.out 2&amp;gt;&amp;amp;1&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
With this command line, the (text) output messages are redirected into a text file, '''gcm.out'''. It is convenient to keep this file for later inspection (e.g., to track down a bug). If there is no redirection (only '''./gcm_64x48x26_phygeneric_b32x36_seq.e'''), then the outputs will be displayed directly on the screen.&lt;br /&gt;
&lt;br /&gt;
== Checking the Results of a Simulation ==&lt;br /&gt;
Once the simulation is finished, you'll know that all went well (&amp;quot;everything is cool&amp;quot;) if the last few lines of the standard text output read:&lt;br /&gt;
[[File:tsurf_benchmark_early_Mars.png|300px|thumb|Final surface temperature map of the reference simulation (plotted using Panoply).]]&lt;br /&gt;
[[File:water_ice_cloud_column_benchmark_early_Mars.png|300px|thumb|Final water ice cloud column map of the reference simulation (plotted using Panoply).]]&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
 in abort_gcm&lt;br /&gt;
 Stopping in leapfrog&lt;br /&gt;
 Reason = Simulation finished &lt;br /&gt;
 Everything is cool&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
If not, start looking for an error message and a way to fix the problem...&lt;br /&gt;
&lt;br /&gt;
Apart from the standard text output messages from the GCM, which are mostly for monitoring and checking the simulation progress, the user will more likely be interested in checking the contents of the ''diagfi.nc'' file produced by the GCM, as it contains instantaneous values of the main model variables (atmospheric temperature, winds, etc.).&lt;br /&gt;
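For a quick overview of what this file contains, you can list its variables with the ''ncdump'' tool mentioned earlier:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
ncdump -h diagfi.nc # -h : only dump the header (dimensions, variables, attributes)&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;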
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
To check that you have successfully run the simulation, we provide some graphs to evaluate your results, obtained from a simulation similar to the one described in this tutorial (early Mars, 32x32x15 resolution). TODO: update plots to current example&lt;br /&gt;
&lt;br /&gt;
In the plots shown here, we present maps of the surface temperatures ('tsurf' variable) and the water ice cloud column ('h2o_ice_col' variable), both plotted using Panoply.&lt;br /&gt;
&lt;br /&gt;
Side note: a variety of freely available software can be used to visualise the NetCDF ''diagfi.nc'' file, such as Panoply, Ferret, Ncview, Grads, Python, etc. (see more details in the [[Tool_Box | Tool Box section]])&lt;br /&gt;
&lt;br /&gt;
== Taking Things to the Next Level ==&lt;br /&gt;
The short tutorial presented in this page is meant to be useful to get an overview of what is required to install and run the GCM, in addition to checking the results of a simulation. Moving on to a more intensive and problem-specific usage will require diving into additional topics and aspects such as:&lt;br /&gt;
* Selecting the appropriate inputs and run parameters for a given study.&lt;br /&gt;
* Compiling and running in parallel (MPI and/or OpenMP) to obtain results in a reasonable time frame.&lt;br /&gt;
* post-processing and analysis of model outputs.&lt;br /&gt;
All these points and much more are detailed in the many pages of this site (do check out the menu on the left and make heavy use of the site's search engine)!&lt;br /&gt;
&lt;br /&gt;
[[Category:Generic-Model]]&lt;br /&gt;
[[Category:Generic-LMDZ]]&lt;/div&gt;</summary>
		<author><name>Emillour</name></author>	</entry>

	<entry>
		<id>http://lmdz-forge.lmd.jussieu.fr/mediawiki/Planets/index.php?title=Overview_of_the_Mars_PCM&amp;diff=3239</id>
		<title>Overview of the Mars PCM</title>
		<link rel="alternate" type="text/html" href="http://lmdz-forge.lmd.jussieu.fr/mediawiki/Planets/index.php?title=Overview_of_the_Mars_PCM&amp;diff=3239"/>
				<updated>2026-03-25T07:44:15Z</updated>
		
		<summary type="html">&lt;p&gt;Emillour: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== The Mars PCM ==&lt;br /&gt;
In a nutshell, the Mars Planetary Climate Model is a General Circulation Model (GCM) of the Martian atmosphere. It is in fact a suite of models which may be run in various configurations: with the historical lon-lat (LMDZ) dynamics, or the modern DYNAMICO (icosahedral grid) dynamics, or as a regional model using [[WRF dynamical core for LES/mesoscale simulations|'''WRF''']] (a limited area dynamical core), or even as a 1D (single column) model.&lt;br /&gt;
&lt;br /&gt;
== Reference Versions of the Mars PCM ==&lt;br /&gt;
In practice the model is most often run using the legacy LMDZ lon-lat global dynamics for 3D simulations. However within this setup there are various configurations or versions of the model that can be used.&lt;br /&gt;
Note that some bundles containing reference datasets for the versions mentioned hereafter can be found here: https://web.lmd.jussieu.fr/~lmdz/planets/mars/&lt;br /&gt;
 &lt;br /&gt;
=== GCM version 5 ===&lt;br /&gt;
This is the version of the model that was used to generate Mars Climate Database version 5.3. In practice, the main features of these simulations are:&lt;br /&gt;
* a horizontal grid (longitude x latitude) resolution of 64x48, with 49 vertical layers (when including the thermosphere)&lt;br /&gt;
* a dust cycle where column dust opacity is imposed via input dust scenarios.&lt;br /&gt;
&lt;br /&gt;
=== GCM version 6 ===&lt;br /&gt;
This is the latest version of the model that was used to generate Mars Climate Database version 6.1. In practice, the main features of these simulations are:&lt;br /&gt;
* a horizontal grid (longitude x latitude) resolution of 64x48, with 73 vertical layers (when including the thermosphere)&lt;br /&gt;
* a dust cycle where the column dust opacity is not rescaled to the dust scenarios but driven by them, via the injection of rocket dust storms and the use of a mountain-slope dust re-injection parametrization. A dedicated page on the dust cycle in version 6 can be found [[Dust_Cycle_in_Mars_PCM6|here]].&lt;br /&gt;
&lt;br /&gt;
== Getting started ==&lt;br /&gt;
Most likely you will want to install and run the model; in that case you should start [[Quick_Install_and_Run_Mars_PCM|with this page]]. You can also check the [[Help_Mars_PCM|&amp;quot;Getting Help&amp;quot;]] page on this wiki, or browse through all pages tagged as belonging to the &amp;quot;Mars Model Category&amp;quot; under the [[Special:Categories|&amp;quot;All Categories&amp;quot;]] section.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Category:Mars-Model]]&lt;/div&gt;</summary>
		<author><name>Emillour</name></author>	</entry>

	<entry>
		<id>http://lmdz-forge.lmd.jussieu.fr/mediawiki/Planets/index.php?title=Other_GCM_Configurations_worth_knowing_about&amp;diff=3212</id>
		<title>Other GCM Configurations worth knowing about</title>
		<link rel="alternate" type="text/html" href="http://lmdz-forge.lmd.jussieu.fr/mediawiki/Planets/index.php?title=Other_GCM_Configurations_worth_knowing_about&amp;diff=3212"/>
				<updated>2026-02-23T10:51:03Z</updated>
		
		<summary type="html">&lt;p&gt;Emillour: /* Compiling a test case (TRAPPIST-1c) */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
= 3D lon-lat LMDZ setup =&lt;br /&gt;
&lt;br /&gt;
== early Mars ==&lt;br /&gt;
&lt;br /&gt;
It is already described in the [https://lmdz-forge.lmd.jussieu.fr/mediawiki/Planets/index.php/Quick_Install_and_Run ''Quick Install and Run''] section.&lt;br /&gt;
&lt;br /&gt;
== Earth with slab ocean ==&lt;br /&gt;
&lt;br /&gt;
TBD by Siddharth, once all changes have been committed (also need a validation of the model on Earth to be sure)&lt;br /&gt;
&lt;br /&gt;
== TRAPPIST-1e with photochemistry ==&lt;br /&gt;
&lt;br /&gt;
A temperate rocky planet in synchronous rotation around a low mass star.&lt;br /&gt;
&lt;br /&gt;
Here is an example to simulate the planet TRAPPIST-1e with an Earth atmosphere using the photochemical module of the GCM.&lt;br /&gt;
&lt;br /&gt;
To install the model and run it, follow [https://lmdz-forge.lmd.jussieu.fr/mediawiki/Planets/index.php/Quick_Install_and_Run ''Quick Install and Run''] but with the following changes:&lt;br /&gt;
&lt;br /&gt;
=== GCM Input Datafiles and Datasets ===&lt;br /&gt;
In section [https://lmdz-forge.lmd.jussieu.fr/mediawiki/Planets/index.php?title=Quick_Install_and_Run&amp;amp;action=edit&amp;amp;section=9 ''GCM Input Datafiles and Datasets''], download the TRAPPIST-1e files (instead of the early Mars files):&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
wget -nv --no-check-certificate https://web.lmd.jussieu.fr/~lmdz/planets/generic/reference_setups/bench_trappist1e_photochemistry_64x48x30_b38x36.tar.gz&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Once unpacked, you will find the same type of files as in the early Mars case, plus an additional folder containing the chemical network file:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
callphys.def  gases.def  startfi.nc  traceur.def&lt;br /&gt;
datadir/      run.def    start.nc    z2sig.def&lt;br /&gt;
chemnetwork/&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Compiling the GCM ===&lt;br /&gt;
==== Prior to a first compilation: setting up the target architecture files ====&lt;br /&gt;
The chemical solver requires the BLAS and LAPACK libraries, which need to be specified in the '''arch*.fcm''' file:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
%COMPILER            gfortran&lt;br /&gt;
%LINK                gfortran&lt;br /&gt;
%AR                  ar&lt;br /&gt;
%MAKE                make&lt;br /&gt;
%FPP_FLAGS           -P -traditional&lt;br /&gt;
%FPP_DEF             NC_DOUBLE LAPACK BLAS SGEMV=DGEMV SGEMM=DGEMM&lt;br /&gt;
%BASE_FFLAGS         -c -fdefault-real-8 -fdefault-double-8 -ffree-line-length-none -fno-align-commons&lt;br /&gt;
%PROD_FFLAGS         -O3&lt;br /&gt;
%DEV_FFLAGS          -O&lt;br /&gt;
%DEBUG_FFLAGS        -ffpe-trap=invalid,zero,overflow -fbounds-check -g3 -O0 -fstack-protector-all -finit-real=snan -fbacktrace&lt;br /&gt;
%MPI_FFLAGS&lt;br /&gt;
%OMP_FFLAGS         &lt;br /&gt;
%BASE_LD             -llapack -lblas&lt;br /&gt;
%MPI_LD&lt;br /&gt;
%OMP_LD              &lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
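&lt;br /&gt;
On a personal Linux machine, the BLAS and LAPACK development packages may first need to be installed; e.g. on a Debian/Ubuntu system (adapt to your distribution's package manager):&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
sudo apt-get install libblas-dev liblapack-dev&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;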
&lt;br /&gt;
==== Specific to photochemistry: set hard coded reactions ====&lt;br /&gt;
In '''/LMDZ.GENERIC/libf/aeronogeneric/chimiedata_h.F90''' you can hard-code reactions if needed, for instance because a reaction rate is very specific and falls outside the generic formula, or because a photochemical reaction does not use a regular cross section.&lt;br /&gt;
&lt;br /&gt;
The TRAPPIST-1e test case uses 3 hard-coded reactions.&lt;br /&gt;
&lt;br /&gt;
*Uncomment the following lines to fill the reaction species indices:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;fortran&amp;quot;&amp;gt;&lt;br /&gt;
!===========================================================&lt;br /&gt;
!      r001 : HNO3 + rain -&amp;gt; H2O&lt;br /&gt;
!===========================================================&lt;br /&gt;
nb_phot = nb_phot + 1&lt;br /&gt;
indice_phot(nb_phot) = z3spec(1.0, indexchim('hno3'), 1.0, indexchim('h2o_vap'), 0.0, 1)&lt;br /&gt;
&lt;br /&gt;
!===========================================================&lt;br /&gt;
!      e001 : CO + OH -&amp;gt; CO2 + H &lt;br /&gt;
!===========================================================&lt;br /&gt;
nb_reaction_4 = nb_reaction_4 + 1&lt;br /&gt;
indice_4(nb_reaction_4) = z4spec(1.0, indexchim('co'), 1.0, indexchim('oh'), 1.0, indexchim('co2'), 1.0, indexchim('h'))&lt;br /&gt;
&lt;br /&gt;
!ccccccccccccccccccccccccccccccccccccccccccccccccccccccccccccccccccc&lt;br /&gt;
!     photodissociation of NO : NO + hv -&amp;gt; N + O&lt;br /&gt;
!ccccccccccccccccccccccccccccccccccccccccccccccccccccccccccccccccccc&lt;br /&gt;
nb_phot = nb_phot + 1&lt;br /&gt;
indice_phot(nb_phot) = z3spec(1.0, indexchim('no'), 1.0, indexchim('n'), 1.0, indexchim('o'))&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
*Uncomment the following lines to fill reaction rates:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;fortran&amp;quot;&amp;gt;&lt;br /&gt;
!----------------------------------------------------------------------&lt;br /&gt;
!     carbon reactions&lt;br /&gt;
!----------------------------------------------------------------------&lt;br /&gt;
     &lt;br /&gt;
!---  e001: oh + co -&amp;gt; co2 + h&lt;br /&gt;
&lt;br /&gt;
      nb_reaction_4 = nb_reaction_4 + 1&lt;br /&gt;
&lt;br /&gt;
!     joshi et al., 2006&lt;br /&gt;
&lt;br /&gt;
      do ilev = 1,nlayer&lt;br /&gt;
         k1a0 = 1.34*2.5*dens(ilev)                                  &amp;amp;&lt;br /&gt;
               *1/(1/(3.62e-26*t(ilev)**(-2.739)*exp(-20./t(ilev)))  &amp;amp;&lt;br /&gt;
               + 1/(6.48e-33*t(ilev)**(0.14)*exp(-57./t(ilev))))     ! typo in paper corrected&lt;br /&gt;
         k1b0 = 1.17e-19*t(ilev)**(2.053)*exp(139./t(ilev))          &amp;amp;&lt;br /&gt;
              + 9.56e-12*t(ilev)**(-0.664)*exp(-167./t(ilev))&lt;br /&gt;
         k1ainf = 1.52e-17*t(ilev)**(1.858)*exp(28.8/t(ilev))        &amp;amp;&lt;br /&gt;
                + 4.78e-8*t(ilev)**(-1.851)*exp(-318./t(ilev))&lt;br /&gt;
         x = k1a0/(k1ainf - k1b0)&lt;br /&gt;
         y = k1b0/(k1ainf - k1b0)&lt;br /&gt;
         fc = 0.628*exp(-1223./t(ilev)) + (1. - 0.628)*exp(-39./t(ilev))  &amp;amp;&lt;br /&gt;
            + exp(-t(ilev)/255.)&lt;br /&gt;
         fx = fc**(1./(1. + (alog(x))**2))                           ! typo in paper corrected&lt;br /&gt;
         k1a = k1a0*((1. + y)/(1. + x))*fx&lt;br /&gt;
         k1b = k1b0*(1./(1.+x))*fx&lt;br /&gt;
            &lt;br /&gt;
         v_4(ilev,nb_reaction_4) = k1a + k1b&lt;br /&gt;
      end do&lt;br /&gt;
&lt;br /&gt;
!----------------------------------------------------------------------&lt;br /&gt;
!     washout r001 : HNO3 + rain -&amp;gt; H2O&lt;br /&gt;
!----------------------------------------------------------------------&lt;br /&gt;
&lt;br /&gt;
nb_phot = nb_phot + 1&lt;br /&gt;
      &lt;br /&gt;
rain_h2o  = 100.e-6&lt;br /&gt;
!rain_rate = 1.e-6  ! 10 days&lt;br /&gt;
rain_rate = 1.e-8&lt;br /&gt;
      &lt;br /&gt;
do ilev = 1,nlayer&lt;br /&gt;
   if (c(ilev,indexchim('h2o_vap'))/dens(ilev) &amp;gt;= rain_h2o) then&lt;br /&gt;
      v_phot(ilev,nb_phot) = rain_rate&lt;br /&gt;
   else&lt;br /&gt;
      v_phot(ilev,nb_phot) = 0.&lt;br /&gt;
   end if&lt;br /&gt;
end do&lt;br /&gt;
&lt;br /&gt;
!ccccccccccccccccccccccccccccccccccccccccccccccccccccccccccccccccccc&lt;br /&gt;
!     photodissociation of NO&lt;br /&gt;
!ccccccccccccccccccccccccccccccccccccccccccccccccccccccccccccccccccc&lt;br /&gt;
      &lt;br /&gt;
nb_phot = nb_phot + 1&lt;br /&gt;
      &lt;br /&gt;
colo3(nlayer) = 0.&lt;br /&gt;
!     ozone columns for other levels (molecule.cm-2)&lt;br /&gt;
do ilev = nlayer-1,1,-1&lt;br /&gt;
   colo3(ilev) = colo3(ilev+1) + (c(ilev+1,indexchim('o3')) + c(ilev,indexchim('o3')))*0.5*avocado*1e-4*((press(ilev) - press(ilev+1))*100.)/(1.e-3*zmmean(ilev)*g*dens(ilev))&lt;br /&gt;
end do&lt;br /&gt;
call jno(nlayer, c(nlayer:1:-1,indexchim('no')), c(nlayer:1:-1,indexchim('o2')), colo3(nlayer:1:-1), dens(nlayer:1:-1), press(nlayer:1:-1), sza, v_phot(nlayer:1:-1,nb_phot))&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
*Change the following lines to set the number of hard coded reactions:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;fortran&amp;quot;&amp;gt;&lt;br /&gt;
integer, parameter :: nphot_hard_coding = 2&lt;br /&gt;
integer, parameter :: n4_hard_coding    = 1&lt;br /&gt;
integer, parameter :: n3_hard_coding    = 0&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Compiling a test case (TRAPPIST-1e) ===&lt;br /&gt;
Change the following compiling option:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
-d 64x48x30 -b 38x36&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
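&lt;br /&gt;
That is, following the compilation example of the ''Quick Install and Run'' page (here again assuming the ''local'' arch files), the full command would read:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
./makelmdz_fcm -arch local -p generic -d 64x48x30 -b 38x36 gcm&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;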
&lt;br /&gt;
NB: changing option -b is mandatory, whereas option -d may be set to a lower or higher resolution (as long as '''z2sig.def''' remains consistent with the number of vertical levels, i.e. it defines at least as many altitude levels as the number of levels requested).&lt;br /&gt;
&lt;br /&gt;
== TRAPPIST-1c in Venus-like conditions ==&lt;br /&gt;
&lt;br /&gt;
A warm rocky planet in synchronous rotation around a low mass star. Here we provide an '''example''' to simulate the atmosphere of Trappist-1c, assuming it evolved to a modern Venus-like atmosphere.&lt;br /&gt;
&lt;br /&gt;
The planetary parameters are taken from [https://arxiv.org/abs/2010.01074 Agol et al. 2021] and can be found in this table: [[Media:Planetary_parameters_Trappist1c.png]]&lt;br /&gt;
&lt;br /&gt;
First, install and run the model following [[Quick Install and Run]], but instead of the ''Early Mars'' files, download ''bench_trappist1c_64x48x50_b32x36'' using this command:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
wget -nv --no-check-certificate https://web.lmd.jussieu.fr/~lmdz/planets/generic/reference_setups/bench_trappist1c_64x48x50_b32x36.tar.gz&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Compiling a test case (TRAPPIST-1c) ===&lt;br /&gt;
Change the following compiling option:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
-d 64x48x50 -b 32x36&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
You can find the same type of ASCII *.def files as in the ''Early Mars'' case, but adapted to the characteristics and orbital parameters of Trappist-1c.&lt;br /&gt;
In particular ''callphys.def'' contains the following changes:&lt;br /&gt;
&lt;br /&gt;
* The planet is assumed to be in 1:1 spin-orbit resonance, therefore&lt;br /&gt;
   diurnal = .false. &lt;br /&gt;
   tlocked = .true.&lt;br /&gt;
* The planet equilibrium temperature is about 342 K&lt;br /&gt;
   tplanet    = 341.9&lt;br /&gt;
* The host star is TRAPPIST1, with a stellar flux at 1 AU of 0.7527 [W m-2]&lt;br /&gt;
   stelspec_file = spectrum_TRAPPIST1_2022.dat&lt;br /&gt;
   tstellar = 2600.&lt;br /&gt;
   Fat1AU = 0.7527&lt;br /&gt;
* Fixed aerosol distribution, no radiatively active tracers (no evaporation/condensation of H2O and CO2):&lt;br /&gt;
   aerofixed     = .true.&lt;br /&gt;
   aeroco2       = .false.&lt;br /&gt;
   aeroh2o       = .false.&lt;br /&gt;
* No water cycle model, no water cloud formation or water precipitation, no CO2 condensation:&lt;br /&gt;
   water         = .false.&lt;br /&gt;
   watercond     = .false.&lt;br /&gt;
   waterrain     = .false.&lt;br /&gt;
   hydrology     = .false.&lt;br /&gt;
   nonideal      = .true.&lt;br /&gt;
   co2cond       = .false.&lt;br /&gt;
* Following [https://www.sciencedirect.com/science/article/pii/S0032063313002596?via%3Dihub Haus et al. 2015] a prescribed radiatively active cloud model is included. &lt;br /&gt;
It can be activated/deactivated with the flag ''aerovenus''.&lt;br /&gt;
   aerovenus = .true.&lt;br /&gt;
* Modes 1, 2, 2p, 3 and the &amp;quot;unknown&amp;quot; UV absorber can be included/excluded by setting the following keywords to true/false. The characteristics of each mode (e.g. effective radius, effective variance) are based on Venus Express/ESA observations and can be found in this table: [[Media:Table1 aerosolVenus trappist1c.png]]&lt;br /&gt;
   aerovenus1    = .true.&lt;br /&gt;
   aerovenus2    = .true.&lt;br /&gt;
   aerovenus2p   = .true.&lt;br /&gt;
   aerovenus3    = .true.&lt;br /&gt;
   aerovenusUV   = .true.&lt;br /&gt;
&lt;br /&gt;
The cloud model is prescribed between the 1 bar and 0.037 bar pressure levels. For each mode, the top/bottom pressures can be modified by hard-coding the model routine ''aerosol_opacity.F90''.&lt;br /&gt;
Below is an example for mode 1 particles, where the top and bottom pressure layers are prescribed at 0.1 bar and 1 bar, respectively:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;fortran&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
!       1. Initialization&lt;br /&gt;
          aerosol(1:ngrid,1:nlayer,iaer)=0.0&lt;br /&gt;
          p_bot = 1.e5 ! bottom pressure [Pa]&lt;br /&gt;
          p_top = 1.e4 ! top pressure [Pa]&lt;br /&gt;
          h_bot = 1.0e3 ! bottom scale height [m]&lt;br /&gt;
          h_top = 5.0e3 ! top scale height [m]&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
 &lt;br /&gt;
'''TO BE COMPLETED BY GABRIELLA'''&lt;br /&gt;
&lt;br /&gt;
== mini-Neptune GJ1214b ==&lt;br /&gt;
&lt;br /&gt;
A warm mini-Neptune&lt;br /&gt;
&lt;br /&gt;
'''TO BE COMPLETED BY BENJAMIN'''&lt;br /&gt;
&lt;br /&gt;
= 3D DYNAMICO setup =&lt;br /&gt;
&lt;br /&gt;
Due to the rich dynamical activity in their atmospheres (banded zonal jets, eddies, vortices, storms, equatorial oscillations, ...) resulting from multi-scale dynamical interactions, global climate modelling of the giant planets requires resolving the eddies arising from hydrodynamical instabilities in order to correctly establish the planetary-scale jet regime. To this purpose, the Rossby deformation radius $$L_D$$, which is the length scale at which rotational effects become as important as buoyancy or gravity wave effects in the evolution of the flow about some disturbance, is calculated to determine the most suitable horizontal grid resolution. At mid-latitudes, $$L_D$$ for the giant planets is of the same order of magnitude as that of the Earth. Since the giant planets (i.e., Jupiter and Saturn) are roughly 10 times the size of the Earth, the model grid must have a horizontal resolution of 0.5$$^{\circ}$$ in longitude and latitude (vs 5$$^{\circ}$$ for the Earth), considering that 3 grid points are needed to resolve $$L_D$$. &lt;br /&gt;
Moreover, to have a chance of modelling the equatorial oscillation, meridional cell circulations and/or a seasonal inter-hemispheric circulation, a giant-planet GCM must also have a high vertical resolution. Indeed, these climate phenomena have been studied for decades in the Earth's atmosphere, and result from small- and large-scale interactions between the troposphere and the stratosphere. This implies that the propagation of dynamical instabilities, waves and turbulence should be resolved as far as possible along the vertical. Contrary to the horizontal resolution, there is no real criterion (similar to $$L_D$$) to determine the most suitable vertical grid resolution; it remains an adjustable parameter depending on the processes to be represented. However, we advise the user to set a vertical resolution of at least 5 grid points per scale height as a first stage.&lt;br /&gt;
Finally, these atmospheres are cold, with long radiative response times, which requires radiative transfer computations over decades-long simulations: a Jupiter year $$\approx$$ 12 Earth years, a Saturn year $$\approx$$ 30 Earth years, a Uranus year $$\approx$$ 84 Earth years and a Neptune year $$\approx$$ 169 Earth years, depending on the chosen planet.&lt;br /&gt;
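&lt;br /&gt;
As a rough order-of-magnitude check (indicative numbers, for illustration only): taking $$L_D \approx 2000$$ km at Jovian mid-latitudes and requiring 3 grid points across it implies a grid spacing of $$\approx 670$$ km. On Jupiter ($$R \approx 69911$$ km), one degree of latitude spans $$2\pi R/360 \approx 1220$$ km, so this spacing corresponds to about 0.5$$^{\circ}$$; on Earth ($$R \approx 6371$$ km, 1$$^{\circ} \approx 111$$ km), the same spacing corresponds to about 6$$^{\circ}$$, of the order of the 5$$^{\circ}$$ quoted above.&lt;br /&gt;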
&lt;br /&gt;
&lt;br /&gt;
To be able to deal with these three (non-exhaustive) requirements when building a giant-planet GCM, we need massive computational resources. For this, we use a dynamical core that is suitable and numerically stable for massively parallel computations: [[The_DYNAMICO_dynamical_core | DYNAMICO]] [Dubos et al., 2015].  &lt;br /&gt;
 &lt;br /&gt;
&lt;br /&gt;
In the two following subsections, we propose an example of installation for Jupiter and for a Hot Jupiter. All the install, compilation, setting and parameter files for each giant planet can be found at: https://gitlab.in2p3.fr/aymeric.spiga/dynamico-giant (the old repo is archived as read-only: https://github.com/aymeric-spiga/dynamico-giant)&lt;br /&gt;
&lt;br /&gt;
The [[Dynamico-giant | DYNAMICO-giant wiki is here]]&lt;br /&gt;
&lt;br /&gt;
If you have already downloaded '''LMDZ.COMMON''', '''LMDZ.GENERIC''', '''IOIPSL''', '''ARCH''', you only have to download:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''ICOSAGCM''': the DYNAMICO dynamical core&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
git clone https://gitlab.in2p3.fr/ipsl/projets/dynamico/dynamico.git ICOSAGCM&lt;br /&gt;
cd ICOSAGCM&lt;br /&gt;
git checkout 110016896ae9e85e614af43223b18fe38f211020   # Version du 6 nov. 2024&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
'''ICOSA_LMDZ''': the interface used to link the LMDZ.GENERIC physics packages and ICOSAGCM&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
svn update -r 3729 -q ICOSA_LMDZ   # Version du 18 avr. 2025&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''XIOS (XML Input Output Server)''': the library used to interpolate input/output fields between the icosahedral grid and a regular longitude/latitude grid on the fly&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
svn co -r 2626 -q http://forge.ipsl.jussieu.fr/ioserver/svn/XIOS/trunk XIOS   # Version du 22 mar. 2024&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
If you haven't already downloaded '''LMDZ.COMMON''', '''LMDZ.GENERIC''', '''IOIPSL''' and '''ARCH''', you can use the '''install.sh''' script provided in the GitLab repository. &lt;br /&gt;
  &lt;br /&gt;
&lt;br /&gt;
Once each part of the GCM is downloaded, you can compile it. &lt;br /&gt;
First, you have to define your [[The_Target_Architecture_(&amp;quot;arch&amp;quot;)_Files | target architecture file ]] (hereafter named YOUR_ARCH_FILE), in which you fill in all the necessary information about the local environment: where the libraries are located, which compiler and which compiler options will be used, etc.&lt;br /&gt;
Some architecture files for specific machines are provided in the '''ARCH''' directory; they are referenced in the following lines without the 'arch-' prefix (e.g., arch-X64_IRENE-AMD.env is referenced as X64_IRENE-AMD).  &lt;br /&gt;
&lt;br /&gt;
The main specificity of DYNAMICO-giant is that all the main parts of the model ('''ICOSAGCM''', '''LMDZ.COMMON''' and '''LMDZ.GENERIC''') are compiled as libraries, while the settings and running configuration are managed by the '''ICOSA_LMDZ''' interface.&lt;br /&gt;
&lt;br /&gt;
First, you have to compile '''IOIPSL''',&lt;br /&gt;
 &amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
cd LMDZ.COMMON/ioipsl/                                                                                                             &lt;br /&gt;
    ./install_ioipsl_YOUR-MACHINE.bash&lt;br /&gt;
cd ../../&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt; &lt;br /&gt;
then '''XIOS''' library, &lt;br /&gt;
 &amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
cd XIOS/                                                                                                               &lt;br /&gt;
    ./make_xios --prod --arch YOUR_ARCH_FILE --arch_path ../ARCH --job 8 --full&lt;br /&gt;
cd -&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt; &lt;br /&gt;
the physics package,&lt;br /&gt;
 &amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
cd LMDZ.COMMON/                                                                                                        &lt;br /&gt;
    ./makelmdz_fcm -p generic -p_opt &amp;quot;-b 20x25&amp;quot; -prod -parallel mpi -libphy -io xios -arch YOUR_ARCH_FILE -arch_path ../ARCH -j 8 -full&lt;br /&gt;
cd -&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt; &lt;br /&gt;
the dynamical core '''DYNAMICO''' (located in the '''ICOSAGCM''' directory and named after the icosahedral shape of the horizontal mesh),&lt;br /&gt;
 &amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
cd ICOSAGCM/&lt;br /&gt;
    ./make_icosa -prod -parallel mpi -external_ioipsl -with_xios -arch YOUR_ARCH_FILE -arch_path ../ARCH -job 8 -full&lt;br /&gt;
cd -&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt; &lt;br /&gt;
and finally the '''ICOSA_LMDZ''' interface&lt;br /&gt;
 &amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
cd ICOSA_LMDZ/&lt;br /&gt;
    ./make_icosa_lmdz -p generic -p_opt &amp;quot;-b 20x25&amp;quot; -parallel mpi -arch YOUR_ARCH_FILE -arch_path ../ARCH -job 8 -nodeps&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
This last step is somewhat redundant with the two previous ones, since ''make_icosa_lmdz'' executes ''./make_icosa'' (in the '''ICOSAGCM''' directory) and ''./makelmdz_fcm'' (in the '''LMDZ.COMMON''' directory) to create and source the architecture files shared between all parts of the model, and to create the intermediate file ''config.fcm''. As you have already compiled these two elements, ''make_icosa_lmdz'' should only create the linked architecture files and ''config.fcm'', then compile the interface. Here, the ''-nodeps'' option skips the check of the XIOS and IOIPSL compilations, which saves you from recompiling these two elements again.&lt;br /&gt;
      &lt;br /&gt;
Finally, your executables should appear in the '''ICOSA_LMDZ/bin''' subdirectory as '''icosa_lmdz.exe''', and in the '''XIOS/bin''' subdirectory as '''xios_server.exe'''. &lt;br /&gt;
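A quick sanity check (a sketch; adapt the paths if your tree differs):&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
ls -l ICOSA_LMDZ/bin/icosa_lmdz.exe XIOS/bin/xios_server.exe&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;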
&lt;br /&gt;
&lt;br /&gt;
All these compilation steps are summed up in the ''make_icosa_lmdz'' program, which should be adapted to your own computational settings (i.e., through your target architecture file).&lt;br /&gt;
 &amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
./make_icosa_lmdz -p generic -p_opt &amp;quot;-b 20x25&amp;quot; -parallel mpi -arch YOUR_ARCH_FILE -arch_path ../ARCH -job 8 -full&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt; &lt;br /&gt;
Here, the ''-full'' option ensures the compilation of every part ('''IOIPSL''', '''XIOS''', '''LMDZ.COMMON''', '''ICOSAGCM''' and '''ICOSA_LMDZ''') of the model.  &lt;br /&gt;
&lt;br /&gt;
Now you can move your two executables to your working directory and start running your own simulation of Jupiter or a Hot Jupiter, as described in what follows.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Note: If you are using the GitLab file architecture (https://gitlab.in2p3.fr/aymeric.spiga/dynamico-giant), you should be able to compile the model directly from your working directory (for instance ''dynamico-giant/jupiter/'') by using the ''compile_occigen.sh'' program, which has to be adapted to your machine/cluster.&lt;br /&gt;
&lt;br /&gt;
''Note 2 : Depending on the compiler module you use, especially with gfortran, you may need to modify the tracers_icosa.F90 file located in the src directory in order to successfully compile ICOSAGCM. For example, if you are using GCC/11.3.0 and OpenMPI/4.1.4, you must update the insert_tracer_output subroutine as follows:''&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;fortran&amp;quot;&amp;gt;&lt;br /&gt;
SUBROUTINE insert_tracer_output&lt;br /&gt;
      USE xios_mod&lt;br /&gt;
      USE grid_param&lt;br /&gt;
      IMPLICIT NONE&lt;br /&gt;
      TYPE(xios_fieldgroup) :: fieldgroup_hdl&lt;br /&gt;
      TYPE(xios_field) :: field_hdl&lt;br /&gt;
      INTEGER :: iq&lt;br /&gt;
      CHARACTER(len=1000) :: tracername1&lt;br /&gt;
      CHARACTER(len=1000) :: tracername2&lt;br /&gt;
      CHARACTER(len=1000) :: tracername3 &lt;br /&gt;
&lt;br /&gt;
      CALL xios_get_handle(&amp;quot;standard_output_tracers&amp;quot;, fieldgroup_hdl)&lt;br /&gt;
      DO iq = 1, nqtot&lt;br /&gt;
         tracername1 = &amp;quot;tracer_&amp;quot;//TRIM(tracers(iq)%name)&lt;br /&gt;
         CALL xios_add_child(fieldgroup_hdl, field_hdl, tracername1)&lt;br /&gt;
         CALL xios_set_attr(field_hdl, name=TRIM(tracers(iq)%name))&lt;br /&gt;
      END DO&lt;br /&gt;
&lt;br /&gt;
      CALL xios_get_handle(&amp;quot;standard_output_tracers_init&amp;quot;, fieldgroup_hdl)&lt;br /&gt;
      DO iq = 1, nqtot&lt;br /&gt;
         tracername2 = &amp;quot;tracer_&amp;quot;//TRIM(tracers(iq)%name)//&amp;quot;_init&amp;quot;&lt;br /&gt;
         tracername3 = TRIM(tracers(iq)%name)//&amp;quot;_init&amp;quot;&lt;br /&gt;
         CALL xios_add_child(fieldgroup_hdl, field_hdl, tracername2)&lt;br /&gt;
         CALL xios_set_attr(field_hdl, name=tracername3)&lt;br /&gt;
      END DO&lt;br /&gt;
&lt;br /&gt;
   END SUBROUTINE insert_tracer_output&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Jupiter with DYNAMICO ==&lt;br /&gt;
Using a new dynamical core implies new setting files, in addition to or as a replacement of those relevant to the '''LMDZ.COMMON''' dynamical core. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
There are two kinds of setting files:&lt;br /&gt;
&lt;br /&gt;
'''A first group relevant to DYNAMICO:'''&lt;br /&gt;
&lt;br /&gt;
- [[The ''context_dynamico.xml'' Input File|''context_dynamico.xml'']]: Configuration file for '''DYNAMICO''' for reading and writing files using '''XIOS''', mainly used when you want to check the installation of '''ICOSAGCM''' with [[The_DYNAMICO_dynamical_core | a ''Held and Suarez'' test case]]. Once your installation, compilation and run environment is fully functional, the dynamical core output files will not (necessarily) be useful and you can disable their writing. &lt;br /&gt;
&lt;br /&gt;
- [[The context_input_dynamico.xml Input File|''context_input_dynamico.xml'']]:&lt;br /&gt;
&lt;br /&gt;
- [[The file_def_dynamico.xml Input File|''file_def_dynamico.xml'']]: Definition of the output diagnostic files related to '''ICOSAGCM''' only. &lt;br /&gt;
&lt;br /&gt;
- [[The field_def_dynamico.xml Input File|''field_def_dynamico.xml'']]: Definition of all existing variables that can be output from DYNAMICO.&lt;br /&gt;
&lt;br /&gt;
- [[The tracer.def Input File|''tracer.def'']]: Definition of the names and physico-chemical properties of the tracers to be advected by the dynamical core. For now, there are two files related to tracers; we are working to harmonise them.  &lt;br /&gt;
&lt;br /&gt;
''' A second group relevant to LMDZ.GENERIC physical packages: '''&lt;br /&gt;
&lt;br /&gt;
- [[The context_lmdz_physics.xml Input File|''context_lmdz_physics.xml'']]: File in which the horizontal grid, vertical coordinate and output file(s) are defined, together with the output writing frequency, time unit, geophysical variables to be written, etc. Each new geophysical variable added here has to be defined in the ''field_def_physics.xml'' file.&lt;br /&gt;
&lt;br /&gt;
- [[The field_def_physics.xml Input File|''field_def_physics.xml'']]: Definition of all existing variables that can be output from the physics packages interfaced with '''DYNAMICO'''. This is where you add each geophysical field that you want to appear in the ''Xhistins.nc'' output files. For instance, for the ''thermal plume scheme'' used for Jupiter's tropospheric dynamics, we added the following variables: &lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;xml&amp;quot; line&amp;gt;&lt;br /&gt;
             &amp;lt;field id=&amp;quot;h2o_vap&amp;quot; &lt;br /&gt;
                   long_name=&amp;quot;Vapor mass mixing ratio&amp;quot; &lt;br /&gt;
                   unit=&amp;quot;kg/kg&amp;quot; /&amp;gt;&lt;br /&gt;
            &amp;lt;field id=&amp;quot;h2o_ice&amp;quot; &lt;br /&gt;
                   long_name=&amp;quot;Ice mass mixing ratio&amp;quot; &lt;br /&gt;
                   unit=&amp;quot;kg/kg&amp;quot; /&amp;gt;&lt;br /&gt;
            &amp;lt;field id=&amp;quot;detr&amp;quot; &lt;br /&gt;
                   long_name=&amp;quot;Detrainment&amp;quot; &lt;br /&gt;
                   unit=&amp;quot;kg/m2/s&amp;quot; /&amp;gt;&lt;br /&gt;
            &amp;lt;field id=&amp;quot;entr&amp;quot; &lt;br /&gt;
                   long_name=&amp;quot;Entrainment&amp;quot; &lt;br /&gt;
                   unit=&amp;quot;kg/m2/s&amp;quot; /&amp;gt;&lt;br /&gt;
            &amp;lt;field id=&amp;quot;w_plm&amp;quot; &lt;br /&gt;
                   long_name=&amp;quot;Plume vertical velocity&amp;quot; &lt;br /&gt;
                   unit=&amp;quot;m/s&amp;quot; /&amp;gt;&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
- [[The_callphys.def_Input_File|''callphys.def'']]: This setting file is used with either '''DYNAMICO''' or '''LMDZ.COMMON''' and allows the user to choose the physical parametrisation schemes and their main parameter values relevant to the planet being simulated. In the case of Jupiter, some specific parametrisations should be added or modified with respect to the example linked at the beginning of this line: &lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot; line&amp;gt;&lt;br /&gt;
# Diurnal cycle ?  if diurnal=false, diurnally averaged solar heating&lt;br /&gt;
diurnal      = .false. #.true.&lt;br /&gt;
# Seasonal cycle ? if season=false, Ls stays constant, to value set in &amp;quot;start&amp;quot;&lt;br /&gt;
season       = .true. &lt;br /&gt;
# Tidally resonant orbit ? must have diurnal=false, correct rotation rate in newstart&lt;br /&gt;
tlocked      = .false.&lt;br /&gt;
# Tidal resonance ratio ? ratio T_orbit to T_rotation&lt;br /&gt;
nres         = 1&lt;br /&gt;
# Planet with rings?&lt;br /&gt;
rings_shadow = .false.&lt;br /&gt;
# Compute latitude-dependent gravity field??&lt;br /&gt;
oblate       = .true.&lt;br /&gt;
# Include non-zero flattening (a-b)/a?&lt;br /&gt;
flatten      = 0.06487&lt;br /&gt;
# Needed if oblate=.true.: J2&lt;br /&gt;
J2           = 0.01470&lt;br /&gt;
# Needed if oblate=.true.: Planet mean radius (m)&lt;br /&gt;
Rmean        = 69911000.&lt;br /&gt;
# Needed if oblate=.true.: Mass of the planet (*1e24 kg)&lt;br /&gt;
MassPlanet   = 1898.3&lt;br /&gt;
# use (read/write) a startfi.nc file? (default=.true.)&lt;br /&gt;
startphy_file = .false.&lt;br /&gt;
# constant value for surface albedo (if startphy_file = .false.)&lt;br /&gt;
surfalbedo   = 0.0&lt;br /&gt;
# constant value for surface emissivity (if startphy_file = .false.)&lt;br /&gt;
surfemis     = 1.0&lt;br /&gt;
&lt;br /&gt;
# the rad. transfer is computed every &amp;quot;iradia&amp;quot; physical timestep&lt;br /&gt;
iradia           = 160&lt;br /&gt;
# folder in which correlated-k data is stored ?&lt;br /&gt;
corrkdir         = Jupiter_HITRAN2012_REY_ISO_NoKarko_T460K_article2019_gauss8p8_095&lt;br /&gt;
# Uniform absorption coefficient in radiative transfer?&lt;br /&gt;
graybody         = .false.&lt;br /&gt;
# Characteristic planetary equilibrium (black body) temperature&lt;br /&gt;
# This is used only in the aerosol radiative transfer setup. (see aerave.F)&lt;br /&gt;
tplanet          = 100.&lt;br /&gt;
# Output global radiative balance in file 'rad_bal.out' - slow for 1D!!&lt;br /&gt;
meanOLR          = .false.&lt;br /&gt;
# Variable gas species: Radiatively active ?&lt;br /&gt;
varactive        = .false.&lt;br /&gt;
# Atmospheric specific heat capacity and molecular mass can be&lt;br /&gt;
# provided by the dynamics, forced in callphys.def, or computed from gases.def.&lt;br /&gt;
# You have to choose: 0 for dynamics (3d), 1 for forced in callphys.def (1d) or 2 for computed from gases.def (1d)&lt;br /&gt;
# Force_cpp and check_cpp_match are now deprecated.  &lt;br /&gt;
cpp_mugaz_mode = 0&lt;br /&gt;
# Specific heat capacity in J K-1 kg-1 [only used if cpp_mugaz_mode = 1]&lt;br /&gt;
cpp              = 11500.&lt;br /&gt;
# Molecular mass in g mol-1 [only used if cpp_mugaz_mode = 1]&lt;br /&gt;
mugaz            = 2.30&lt;br /&gt;
### DEBUG&lt;br /&gt;
# To not call abort when temperature is outside boundaries:&lt;br /&gt;
strictboundcorrk = .false.&lt;br /&gt;
# To not stop run when temperature is greater than 400 K for H2-H2 CIA dataset:   &lt;br /&gt;
strictboundcia = .false.&lt;br /&gt;
# Add temperature sponge effect after radiative transfer?&lt;br /&gt;
callradsponge    = .false.&lt;br /&gt;
&lt;br /&gt;
Fat1AU = 1366.0&lt;br /&gt;
&lt;br /&gt;
## Other physics options&lt;br /&gt;
## ~~~~~~~~~~~~~~~~~~~~~&lt;br /&gt;
# call turbulent vertical diffusion ?&lt;br /&gt;
calldifv    = .false.&lt;br /&gt;
# use turbdiff instead of vdifc ?&lt;br /&gt;
UseTurbDiff = .true.&lt;br /&gt;
# call convective adjustment ?&lt;br /&gt;
calladj     = .true.&lt;br /&gt;
# call thermal plume model ?&lt;br /&gt;
calltherm   = .true.&lt;br /&gt;
# call thermal conduction in the soil ?&lt;br /&gt;
callsoil    = .false.&lt;br /&gt;
# Internal heat flux (matters only if callsoil=F)&lt;br /&gt;
intheat     = 7.48&lt;br /&gt;
# Remove lower boundary (e.g. for gas giant sims)&lt;br /&gt;
nosurf      = .true.&lt;br /&gt;
#########################################################################&lt;br /&gt;
## extra non-standard definitions for Earth&lt;br /&gt;
#########################################################################&lt;br /&gt;
&lt;br /&gt;
## Thermal plume model options&lt;br /&gt;
## ~~~~~~~~~~~~~~~~~~~~~~~~~~~&lt;br /&gt;
dvimpl               = .true.&lt;br /&gt;
r_aspect_thermals    = 2.0&lt;br /&gt;
tau_thermals         = 0.0&lt;br /&gt;
betalpha             = 0.9&lt;br /&gt;
afact                = 0.7&lt;br /&gt;
fact_epsilon         = 2.e-4&lt;br /&gt;
alpha_max            = 0.7&lt;br /&gt;
fomass_max           = 0.5&lt;br /&gt;
pres_limit           = 2.e5&lt;br /&gt;
&lt;br /&gt;
## Tracer and aerosol options&lt;br /&gt;
## ~~~~~~~~~~~~~~~~~~~~~~~~~~&lt;br /&gt;
# Ammonia cloud (Saturn/Jupiter)?&lt;br /&gt;
aeronh3             = .true.&lt;br /&gt;
size_nh3_cloud      = 10.D-6&lt;br /&gt;
pres_nh3_cloud      = 1.1D5                        # old: 9.D4&lt;br /&gt;
tau_nh3_cloud       = 10.                          # old: 15.&lt;br /&gt;
# Radiatively active aerosol (Saturn/Jupiter)?&lt;br /&gt;
aeroback2lay         = .true.&lt;br /&gt;
optprop_back2lay_vis = optprop_jupiter_vis_n20.dat&lt;br /&gt;
optprop_back2lay_ir  = optprop_jupiter_ir_n20.dat&lt;br /&gt;
obs_tau_col_tropo    = 4.0&lt;br /&gt;
size_tropo           = 5.e-7&lt;br /&gt;
pres_bottom_tropo    = 8.0D4&lt;br /&gt;
pres_top_tropo       = 1.8D4&lt;br /&gt;
obs_tau_col_strato   = 0.1D0&lt;br /&gt;
# Auroral aerosols (Saturn/Jupiter)?&lt;br /&gt;
aeroaurora         = .false.&lt;br /&gt;
size_aurora        = 3.e-7&lt;br /&gt;
obs_tau_col_aurora = 2.0&lt;br /&gt;
&lt;br /&gt;
# Radiatively active CO2 aerosol?&lt;br /&gt;
aeroco2            = .false.&lt;br /&gt;
# Fixed CO2 aerosol distribution?&lt;br /&gt;
aerofixco2     = .false.&lt;br /&gt;
# Radiatively active water aerosol?&lt;br /&gt;
aeroh2o        = .false.&lt;br /&gt;
# Fixed water aerosol distribution?&lt;br /&gt;
aerofixh2o     = .false.&lt;br /&gt;
# basic dust opacity&lt;br /&gt;
dusttau        = 0.0&lt;br /&gt;
# Varying H2O cloud fraction?&lt;br /&gt;
CLFvarying     = .false.&lt;br /&gt;
# H2O cloud fraction if fixed?&lt;br /&gt;
CLFfixval      = 0.0&lt;br /&gt;
# fixed radii for cloud particles?&lt;br /&gt;
radfixed       = .false.&lt;br /&gt;
# number mixing ratio of CO2 ice particles&lt;br /&gt;
Nmix_co2       = 100000.&lt;br /&gt;
# number mixing ratio of water particles (for rafixed=.false.)&lt;br /&gt;
Nmix_h2o       = 1.e7&lt;br /&gt;
# number mixing ratio of water ice particles (for rafixed=.false.)&lt;br /&gt;
Nmix_h2o_ice   = 5.e5&lt;br /&gt;
# radius of H2O water particles (for rafixed=.true.):&lt;br /&gt;
rad_h2o        = 10.e-6&lt;br /&gt;
# radius of H2O ice particles (for rafixed=.true.):&lt;br /&gt;
rad_h2o_ice    = 35.e-6&lt;br /&gt;
# atm mass update due to tracer evaporation/condensation?&lt;br /&gt;
mass_redistrib = .false.&lt;br /&gt;
&lt;br /&gt;
## Water options &lt;br /&gt;
## ~~~~~~~~~~~~~&lt;br /&gt;
# Model water cycle&lt;br /&gt;
water         = .true.&lt;br /&gt;
# Model water cloud formation&lt;br /&gt;
watercond     = .true.&lt;br /&gt;
# Model water precipitation (including coagulation etc.)&lt;br /&gt;
waterrain     = .true.&lt;br /&gt;
# Use simple precipitation scheme?&lt;br /&gt;
precip_scheme = 1&lt;br /&gt;
# Evaporate precipitation?&lt;br /&gt;
evap_prec     = .true.&lt;br /&gt;
# multiplicative constant in Boucher 95 precip scheme&lt;br /&gt;
Cboucher      = 1.&lt;br /&gt;
# Include hydrology ?&lt;br /&gt;
hydrology     = .false.&lt;br /&gt;
# H2O snow (and ice) albedo ?&lt;br /&gt;
albedosnow    = 0.6&lt;br /&gt;
# Maximum sea ice thickness ?&lt;br /&gt;
maxicethick   = 10.&lt;br /&gt;
# Freezing point of seawater (degrees C) ?&lt;br /&gt;
Tsaldiff      = 0.0&lt;br /&gt;
# Evolve surface water sources ?&lt;br /&gt;
sourceevol    = .false.&lt;br /&gt;
&lt;br /&gt;
## CO2 options &lt;br /&gt;
## ~~~~~~~~~~~&lt;br /&gt;
# call CO2 condensation ?&lt;br /&gt;
co2cond       = .false.&lt;br /&gt;
# Set initial temperature profile to 1 K above CO2 condensation everywhere?&lt;br /&gt;
nearco2cond   = .false.&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
- [[The_gases.def_Input_file|''gases.def'']]: File containing the gas composition of the atmosphere you want to model, with their molar mixing ratios. &lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot; line&amp;gt;&lt;br /&gt;
# gases&lt;br /&gt;
5&lt;br /&gt;
H2_&lt;br /&gt;
He_&lt;br /&gt;
CH4&lt;br /&gt;
C2H2&lt;br /&gt;
C2H6&lt;br /&gt;
0.863&lt;br /&gt;
0.134&lt;br /&gt;
0.0018&lt;br /&gt;
1.e-7&lt;br /&gt;
1.e-5&lt;br /&gt;
# First line is number of gases&lt;br /&gt;
# Followed by gas names (always 3 characters)&lt;br /&gt;
# and then molar mixing ratios.&lt;br /&gt;
# mixing ratio -1 means the gas is variable.&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
- [[The jupiter_const.def Input File|''jupiter_const.def'']]: File that gathers all the orbital and physical parameters of Jupiter.&lt;br /&gt;
&lt;br /&gt;
- [[The_traceur.def_Input_File|''traceur.def'']]: At this time, only two tracers are used for modelling Jupiter's atmosphere, so the ''traceur.def'' file simply reads as follows:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot; line&amp;gt;&lt;br /&gt;
2&lt;br /&gt;
h2o_vap&lt;br /&gt;
h2o_ice&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
''' Two additional files are used to set the running parameter of the simulation itself:'''&lt;br /&gt;
&lt;br /&gt;
- [[The run_icosa.def Input File | ''run_icosa.def'']]: file containing the parameters for '''ICOSAGCM''' to execute the simulation, used to set the [[Advanced Use of the GCM | horizontal and vertical resolutions]], the number of processors, the number of subdivisions, the duration of the simulation, etc.&lt;br /&gt;
&lt;br /&gt;
- ''run.def'': file which brings together all the setting files; it is read by the '''ICOSA_LMDZ''' interface to link each part of the model ('''ICOSAGCM''', '''LMDZ.GENERIC''') with its particular setting file(s) where the '''XIOS''' library does not take over (through the ''.xml'' files).&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot; line&amp;gt;&lt;br /&gt;
###########################################################################&lt;br /&gt;
### INCLUDE OTHER DEF FILES (physics, specific settings, etc...)&lt;br /&gt;
###########################################################################&lt;br /&gt;
INCLUDEDEF=run_icosa.def&lt;br /&gt;
&lt;br /&gt;
INCLUDEDEF=jupiter_const.def&lt;br /&gt;
&lt;br /&gt;
INCLUDEDEF=callphys.def&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
prt_level=0&lt;br /&gt;
&lt;br /&gt;
## iphysiq must be same as itau_physics&lt;br /&gt;
iphysiq=40&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Hot Jupiter with DYNAMICO ==&lt;br /&gt;
&lt;br /&gt;
Modelling the atmosphere of a Hot Jupiter is challenging because of the extreme temperature conditions, and because these planets are gas giants. Therefore, using a dynamical core such as Dynamico is strongly recommended. Here, we discuss how to perform a cloudless simulation of the Hot Jupiter WASP-43 b using Dynamico.&lt;br /&gt;
&lt;br /&gt;
'''1st step''': You need to go to the GitHub repository mentioned previously for Dynamico: https://github.com/aymeric-spiga/dynamico-giant. ''Git clone'' this repo on your favorite cluster, and ''checkout'' the &amp;quot;hot_jupiter&amp;quot; branch.&lt;br /&gt;
&lt;br /&gt;
'''2nd step''': Now, run the install.sh script. This script will install '''all''' the required models ('''LMDZ.COMMON''', '''LMDZ.GENERIC''', '''ICOSA_LMDZ''', '''XIOS''', '''FCM''', '''ICOSAGCM'''). At this point, the only missing part is '''IOIPSL'''. To install it, go to &lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt; &lt;br /&gt;
dynamico-giant/code/LMDZ.COMMON/ioipsl/ &lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
There, you will find some example installation scripts. You need to create one that works on your cluster, with your own arch files.&lt;br /&gt;
During the installation of '''IOIPSL''', you might be asked for a login/password. Contact the TGCC computing center to get access.&lt;br /&gt;
&lt;br /&gt;
'''3rd step''': Great, now we have all we need to get started. Navigate to the ''hot_jupiter'' folder. You will find a ''compile_mesopsl.sh'' and a ''compile_occigen.sh'' script. Use them as examples to create a compile script adapted to your own cluster, then run it. &lt;br /&gt;
While it is running, I suggest you take a look at the ''log_compile'' file. The compilation can take a while (~10 minutes, especially because of XIOS). One quick trick to make sure that everything went right is to check the number of ''Build command finished'' messages in ''log_compile''. If everything worked out, there should be 6 of them.&lt;br /&gt;
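For instance (a sketch, to be run from the folder containing ''log_compile''):&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
# count the successful builds; 6 are expected&lt;br /&gt;
grep -c &amp;quot;Build command finished&amp;quot; log_compile&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;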
&lt;br /&gt;
'''4th step''': Okay, the model compiled, good job! Now we need to create the initial conditions for our run. In the hot_jupiter1d folder, you already have a ''temp_profile.txt'' computed with the 1D version of LMDZ.GENERIC (see rcm1d on this page). Thus, there is no need to recompute a 1D model here, but it will be needed if you want to model another Hot Jupiter.&lt;br /&gt;
Navigate to the 'makestart' folder, located at &lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt; &lt;br /&gt;
dynamico-giant/hot_jupiter/makestart/&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
To generate the initial conditions for the 3D run, we're going to start the model using the temperature profile from the 1D run. To do that, you will find a &amp;quot;job_mpi&amp;quot; script. Open it, adapt it to your cluster and launch the job. This job uses 20 procs and runs 5 days of simulation. &lt;br /&gt;
If everything goes well, you should see a few NetCDF files appear. The important ones are '''start_icosa0.nc''', '''startfi0.nc''' and '''Xhistins.nc'''. &lt;br /&gt;
If you see these files, you're all set to launch a real simulation!&lt;br /&gt;
&lt;br /&gt;
'''5th step''': Go back to the ''hot_jupiter'' folder. There are a bunch of scripts to launch your simulation. Take a look at the ''astro_fat_mpi'' script and adapt it to your cluster. Then you can launch your simulation by doing &lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt; &lt;br /&gt;
./run_astro_fat&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
This will start the simulation, using 90 procs. In the same folder, check that the ''icosa_lmdz.out'' file is created. This is the logfile of the simulation while it is running; you can check there that everything is going well.&lt;br /&gt;
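For example, you can follow the log live with:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
tail -f icosa_lmdz.out&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;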
&lt;br /&gt;
'''Important side note''': When using the ''run_astro_fat'' script to run a simulation, it will run a chained simulation, restarting from the previous state every 100 simulated days and generating ''Xhistins.nc'' files. These are your results files, where you will find all the variables that control your atmosphere (temperature field, wind fields, etc.). &lt;br /&gt;
&lt;br /&gt;
Good luck and enjoy the generic PCM Dynamico for Hot Jupiter!&lt;br /&gt;
&lt;br /&gt;
'''2nd important side note''': These 5 steps are the basic steps needed to run a simulation. If you want to tune simulations for another planet, or change other settings, you need to take a look at the '''*.def''' and '''*.xml''' files. If you're lost in all of this, take a look at the different pages of this website and/or contact us!&lt;br /&gt;
Also, you might want to check the wiki on [https://github.com/aymeric-spiga/dynamico-giant ''Github''], which explains a lot of the settings for Dynamico.&lt;br /&gt;
&lt;br /&gt;
= 3D LES setup =&lt;br /&gt;
&lt;br /&gt;
== Proxima b with LES ==&lt;br /&gt;
&lt;br /&gt;
To model subgrid-scale atmospheric turbulence, the [[WRF dynamical core for LES/mesoscale simulations|'''WRF''']] dynamical core coupled with the LMD Generic physics package is used. The first study conducted was to resolve the convective activity at the substellar point of Proxima b (Lefevre et al. 2021). The impact of the stellar insolation and of the rotation period was studied. The files for the reference case, with a stellar flux of 880 W/m2 and an 11-day rotation period, are presented below.&lt;br /&gt;
&lt;br /&gt;
The input_* files are used to initialize the temperature, pressure, winds and moisture of the domain: &lt;br /&gt;
* input_sounding: altitude (km), potential temperature, water vapour (kg/kg), u, v&lt;br /&gt;
* input_therm: normalized gas constant, isobaric heat capacity, pressure, density, temperature&lt;br /&gt;
* input_hr: SW heating, LW heating, large-scale heating extracted from the GCM. Only the last one is used in this configuration.&lt;br /&gt;
&lt;br /&gt;
The file ''namelist.input'' is used to set up the domain parameters (resolution, grid points, etc.). The file ''levels'' specifies the eta-levels of the vertical domain.&lt;br /&gt;
&lt;br /&gt;
The file ''planet'' is used to set up the atmospheric parameters, in order: gravity (m/s2), isobaric heat capacity (J/kg/K), molecular mass (g/mol), reference temperature (K), surface pressure (Pa), planet radius (m) and planet rotation rate (s-1).&lt;br /&gt;
&lt;br /&gt;
The *.def files contain the parameters for the physics. Compared to GCM runs, the convective adjustment in callphys.def is turned off.&lt;br /&gt;
&lt;br /&gt;
The file controle.txt, the equivalent of the ''controle'' field in the GCM's start.nc, is needed to initialize some physics constants.&lt;br /&gt;
&lt;br /&gt;
TBC ML&lt;br /&gt;
&lt;br /&gt;
= 1D setup =&lt;br /&gt;
&lt;br /&gt;
== rcm1d test case ==&lt;br /&gt;
&lt;br /&gt;
Running the model in 1D (i.e. considering simply a column of atmosphere) is a common first step to test a new setup. To do so, you first have to compile the 1D version of the model. The command line is very similar to [[Quick_Install_and_Run#Compiling a test case (early Mars)|the one for the 3D]], except for 2 changes:&lt;br /&gt;
# put just the vertical resolution after the -d option (&amp;quot;VERT&amp;quot; instead of ''LON''x''LAT''x''VERT'' for the 3D case)&lt;br /&gt;
# at the end of the line, replace &amp;quot;gcm&amp;quot; with &amp;quot;rcm1d&amp;quot;&lt;br /&gt;
It will generate a file called '''rcm1d_XX_phyxxx_seq.e''', where ''XX'' and ''phyxxx'' are the vertical resolution and the physics package, respectively.&lt;br /&gt;
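&lt;br /&gt;
For instance, a minimal sketch with 30 vertical levels (assuming the same gfortran-based ''local'' architecture file as for the 3D case; adapt to your setup):&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
cd LMDZ.COMMON&lt;br /&gt;
./makelmdz_fcm -arch local -p generic -d 30 rcm1d&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;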
&lt;br /&gt;
Then, copy the executable in your working directory. &lt;br /&gt;
&lt;br /&gt;
Note that the '''.def''' files differ a bit from the 3D case: [[The_run.def_Input_File|'''run.def''']] is replaced with [[The_rcm1d.def_Input_File|'''rcm1d.def''']], which contains more general information. Indeed, the 1D model does not use [[The_start.nc_and_startfi.nc_input_files|'''start.nc''']] or [[The_start.nc_and_startfi.nc_input_files|'''startfi.nc''']] files for initialization; instead, it reads everything from the '''.def''' files. You can find examples of 1D configurations in ''LMDZ.GENERIC/deftank'' (e.g. '''rcm1d.def.earlymars''', '''rcm1d.def.earth'''); the best thing is to have a look at them, for instance as sketched below.&lt;br /&gt;
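&lt;br /&gt;
A minimal sketch of setting up and launching a 1D run (the directory and executable names below are indicative, not prescribed; also copy the other *.def files and data that the chosen deftank example refers to):&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
mkdir sim1d&lt;br /&gt;
cd sim1d&lt;br /&gt;
cp ../LMDZ.GENERIC/deftank/rcm1d.def.earlymars rcm1d.def&lt;br /&gt;
# copy your compiled executable here, then run it:&lt;br /&gt;
./rcm1d_30_phyxxx_seq.e&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;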
&lt;br /&gt;
== kcm1d test case ==&lt;br /&gt;
&lt;br /&gt;
Our 1-D inverse model&lt;br /&gt;
&lt;br /&gt;
TBD by Guillaume or Martin&lt;br /&gt;
&lt;br /&gt;
[[Category:Generic-Model]]&lt;br /&gt;
[[Category:Generic-LMDZ]]&lt;br /&gt;
[[Category:Generic-DYNAMICO]]&lt;br /&gt;
[[Category:Generic-WRF]]&lt;br /&gt;
[[Category:Generic-1D]]&lt;/div&gt;</summary>
		<author><name>Emillour</name></author>	</entry>

	<entry>
		<id>http://lmdz-forge.lmd.jussieu.fr/mediawiki/Planets/index.php?title=Other_GCM_Configurations_worth_knowing_about&amp;diff=3211</id>
		<title>Other GCM Configurations worth knowing about</title>
		<link rel="alternate" type="text/html" href="http://lmdz-forge.lmd.jussieu.fr/mediawiki/Planets/index.php?title=Other_GCM_Configurations_worth_knowing_about&amp;diff=3211"/>
				<updated>2026-02-23T08:15:40Z</updated>
		
		<summary type="html">&lt;p&gt;Emillour: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
= 3D lon-lat LMDZ setup =&lt;br /&gt;
&lt;br /&gt;
== early Mars ==&lt;br /&gt;
&lt;br /&gt;
It is already described in the [https://lmdz-forge.lmd.jussieu.fr/mediawiki/Planets/index.php/Quick_Install_and_Run ''Quick Install and Run''] section.&lt;br /&gt;
&lt;br /&gt;
== Earth with slab ocean ==&lt;br /&gt;
&lt;br /&gt;
TBD by Siddharth, once all changes have been committed (also need a validation of the model on Earth to be sure)&lt;br /&gt;
&lt;br /&gt;
== TRAPPIST-1e with photochemistry ==&lt;br /&gt;
&lt;br /&gt;
A temperate rocky planet in synchronous rotation around a low mass star.&lt;br /&gt;
&lt;br /&gt;
Here is an example to simulate the planet TRAPPIST-1e with an Earth atmosphere using the photochemical module of the GCM.&lt;br /&gt;
&lt;br /&gt;
To install the model and run it, follow [https://lmdz-forge.lmd.jussieu.fr/mediawiki/Planets/index.php/Quick_Install_and_Run ''Quick Install and Run''] but with the following changes:&lt;br /&gt;
&lt;br /&gt;
=== GCM Input Datafiles and Datasets ===&lt;br /&gt;
In section [https://lmdz-forge.lmd.jussieu.fr/mediawiki/Planets/index.php?title=Quick_Install_and_Run&amp;amp;action=edit&amp;amp;section=9 ''GCM Input Datafiles and Datasets''], download the TRAPPIST-1e files (instead of the early Mars files):&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
wget -nv --no-check-certificate https://web.lmd.jussieu.fr/~lmdz/planets/generic/reference_setups/bench_trappist1e_photochemistry_64x48x30_b38x36.tar.gz&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
You will find the same type of files as before, with an additional folder containing the chemical network files:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
callphys.def  gases.def  startfi.nc  traceur.def&lt;br /&gt;
datadir/      run.def    start.nc    z2sig.def&lt;br /&gt;
chemnetwork/&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Compiling the GCM ===&lt;br /&gt;
==== Prior to a first compilation: setting up the target architecture files ====&lt;br /&gt;
The chemical solver requires the BLAS and LAPACK libraries, which need to be specified in the '''arch*.fcm''' file:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
%COMPILER            gfortran&lt;br /&gt;
%LINK                gfortran&lt;br /&gt;
%AR                  ar&lt;br /&gt;
%MAKE                make&lt;br /&gt;
%FPP_FLAGS           -P -traditional&lt;br /&gt;
%FPP_DEF             NC_DOUBLE LAPACK BLAS SGEMV=DGEMV SGEMM=DGEMM&lt;br /&gt;
%BASE_FFLAGS         -c -fdefault-real-8 -fdefault-double-8 -ffree-line-length-none -fno-align-commons&lt;br /&gt;
%PROD_FFLAGS         -O3&lt;br /&gt;
%DEV_FFLAGS          -O&lt;br /&gt;
%DEBUG_FFLAGS        -ffpe-trap=invalid,zero,overflow -fbounds-check -g3 -O0 -fstack-protector-all -finit-real=snan -fbacktrace&lt;br /&gt;
%MPI_FFLAGS&lt;br /&gt;
%OMP_FFLAGS         &lt;br /&gt;
%BASE_LD             -llapack -lblas&lt;br /&gt;
%MPI_LD&lt;br /&gt;
%OMP_LD              &lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
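&lt;br /&gt;
If BLAS and LAPACK are not yet available on your machine, they can usually be obtained from your package manager (a sketch for Debian/Ubuntu-like systems; package names differ between distributions):&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
sudo apt-get install libblas-dev liblapack-dev&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;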
&lt;br /&gt;
==== Specific to photochemistry: set hard coded reactions ====&lt;br /&gt;
In '''/LMDZ.GENERIC/libf/aeronogeneric/chimiedata_h.F90''' you can hard-code reactions if needed, for instance because a reaction rate is very specific and falls outside the generic formula, or because your photochemical reaction does not use a regular cross section.&lt;br /&gt;
&lt;br /&gt;
The TRAPPIST-1e test case uses 3 hard-coded reactions.&lt;br /&gt;
&lt;br /&gt;
*Uncomment the following lines to fill the reaction species indices:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
!===========================================================&lt;br /&gt;
!      r001 : HNO3 + rain -&amp;gt; H2O&lt;br /&gt;
!===========================================================&lt;br /&gt;
nb_phot = nb_phot + 1&lt;br /&gt;
indice_phot(nb_phot) = z3spec(1.0, indexchim('hno3'), 1.0, indexchim('h2o_vap'), 0.0, 1)&lt;br /&gt;
&lt;br /&gt;
!===========================================================&lt;br /&gt;
!      e001 : CO + OH -&amp;gt; CO2 + H &lt;br /&gt;
!===========================================================&lt;br /&gt;
nb_reaction_4 = nb_reaction_4 + 1&lt;br /&gt;
indice_4(nb_reaction_4) = z4spec(1.0, indexchim('co'), 1.0, indexchim('oh'), 1.0, indexchim('co2'), 1.0, indexchim('h'))&lt;br /&gt;
&lt;br /&gt;
!ccccccccccccccccccccccccccccccccccccccccccccccccccccccccccccccccccc&lt;br /&gt;
!     photodissociation of NO : NO + hv -&amp;gt; N + O&lt;br /&gt;
!ccccccccccccccccccccccccccccccccccccccccccccccccccccccccccccccccccc&lt;br /&gt;
nb_phot = nb_phot + 1&lt;br /&gt;
indice_phot(nb_phot) = z3spec(1.0, indexchim('no'), 1.0, indexchim('n'), 1.0, indexchim('o'))&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
*Uncomment the following lines to fill reaction rates:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
!----------------------------------------------------------------------&lt;br /&gt;
!     carbon reactions&lt;br /&gt;
!----------------------------------------------------------------------&lt;br /&gt;
     &lt;br /&gt;
!---  e001: oh + co -&amp;gt; co2 + h&lt;br /&gt;
&lt;br /&gt;
      nb_reaction_4 = nb_reaction_4 + 1&lt;br /&gt;
&lt;br /&gt;
!     joshi et al., 2006&lt;br /&gt;
&lt;br /&gt;
      do ilev = 1,nlayer&lt;br /&gt;
         k1a0 = 1.34*2.5*dens(ilev)                                  &amp;amp;&lt;br /&gt;
               *1/(1/(3.62e-26*t(ilev)**(-2.739)*exp(-20./t(ilev)))  &amp;amp;&lt;br /&gt;
               + 1/(6.48e-33*t(ilev)**(0.14)*exp(-57./t(ilev))))     ! typo in paper corrected&lt;br /&gt;
         k1b0 = 1.17e-19*t(ilev)**(2.053)*exp(139./t(ilev))          &amp;amp;&lt;br /&gt;
              + 9.56e-12*t(ilev)**(-0.664)*exp(-167./t(ilev))&lt;br /&gt;
         k1ainf = 1.52e-17*t(ilev)**(1.858)*exp(28.8/t(ilev))        &amp;amp;&lt;br /&gt;
                + 4.78e-8*t(ilev)**(-1.851)*exp(-318./t(ilev))&lt;br /&gt;
         x = k1a0/(k1ainf - k1b0)&lt;br /&gt;
         y = k1b0/(k1ainf - k1b0)&lt;br /&gt;
         fc = 0.628*exp(-1223./t(ilev)) + (1. - 0.628)*exp(-39./t(ilev))  &amp;amp;&lt;br /&gt;
            + exp(-t(ilev)/255.)&lt;br /&gt;
         fx = fc**(1./(1. + (alog(x))**2))                           ! typo in paper corrected&lt;br /&gt;
         k1a = k1a0*((1. + y)/(1. + x))*fx&lt;br /&gt;
         k1b = k1b0*(1./(1.+x))*fx&lt;br /&gt;
            &lt;br /&gt;
         v_4(ilev,nb_reaction_4) = k1a + k1b&lt;br /&gt;
      end do&lt;br /&gt;
&lt;br /&gt;
!----------------------------------------------------------------------&lt;br /&gt;
!     washout r001 : HNO3 + rain -&amp;gt; H2O&lt;br /&gt;
!----------------------------------------------------------------------&lt;br /&gt;
&lt;br /&gt;
nb_phot = nb_phot + 1&lt;br /&gt;
      &lt;br /&gt;
rain_h2o  = 100.e-6&lt;br /&gt;
!rain_rate = 1.e-6  ! 10 days&lt;br /&gt;
rain_rate = 1.e-8&lt;br /&gt;
      &lt;br /&gt;
do ilev = 1,nlayer&lt;br /&gt;
   if (c(ilev,indexchim('h2o_vap'))/dens(ilev) &amp;gt;= rain_h2o) then&lt;br /&gt;
      v_phot(ilev,nb_phot) = rain_rate&lt;br /&gt;
   else&lt;br /&gt;
      v_phot(ilev,nb_phot) = 0.&lt;br /&gt;
   end if&lt;br /&gt;
end do&lt;br /&gt;
&lt;br /&gt;
!ccccccccccccccccccccccccccccccccccccccccccccccccccccccccccccccccccc&lt;br /&gt;
!     photodissociation of NO&lt;br /&gt;
!ccccccccccccccccccccccccccccccccccccccccccccccccccccccccccccccccccc&lt;br /&gt;
      &lt;br /&gt;
nb_phot = nb_phot + 1&lt;br /&gt;
      &lt;br /&gt;
colo3(nlayer) = 0.&lt;br /&gt;
!     ozone columns for other levels (molecule.cm-2)&lt;br /&gt;
do ilev = nlayer-1,1,-1&lt;br /&gt;
   colo3(ilev) = colo3(ilev+1) + (c(ilev+1,indexchim('o3')) + c(ilev,indexchim('o3')))*0.5*avocado*1e-4*((press(ilev) - press(ilev+1))*100.)/(1.e-3*zmmean(ilev)*g*dens(ilev))&lt;br /&gt;
end do&lt;br /&gt;
call jno(nlayer, c(nlayer:1:-1,indexchim('no')), c(nlayer:1:-1,indexchim('o2')), colo3(nlayer:1:-1), dens(nlayer:1:-1), press(nlayer:1:-1), sza, v_phot(nlayer:1:-1,nb_phot))&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
*Change the following lines to set the number of hard-coded reactions (here, 2 'phot-type' entries, i.e. the HNO3 washout and the NO photodissociation, and 1 four-species reaction, i.e. CO + OH):&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
integer, parameter :: nphot_hard_coding = 2&lt;br /&gt;
integer, parameter :: n4_hard_coding    = 1&lt;br /&gt;
integer, parameter :: n3_hard_coding    = 0&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Compiling a test case (TRAPPIST-1e) ===&lt;br /&gt;
Change the following compiling option:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
-d 64x48x30 -b 38x36&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
NB: changing option -b is mandatory, while option -d may be set to a lower or higher resolution and the model will still run (provided '''z2sig.def''' remains coherent with the number of altitude levels, i.e. it defines at least as many altitude levels as requested).&lt;br /&gt;
&lt;br /&gt;
== TRAPPIST-1c in Venus-like conditions ==&lt;br /&gt;
&lt;br /&gt;
A warm rocky planet in synchronous rotation around a low mass star. Here we provide an '''example''' to simulate the atmosphere of Trappist-1c, assuming it evolved to a modern Venus-like atmosphere.&lt;br /&gt;
&lt;br /&gt;
The planetary parameters are taken from [https://arxiv.org/abs/2010.01074 Agol et al. 2021] and can be found in this table: [[Media:Planetary_parameters_Trappist1c.png]]&lt;br /&gt;
&lt;br /&gt;
First, install the model and run it following [[Quick Install and Run]], but instead of the ''Early Mars'' files, download ''bench_trappist1c_64x48x50_b32x36'' using this command:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
wget -nv --no-check-certificate https://web.lmd.jussieu.fr/~lmdz/planets/generic/reference_setups/bench_trappist1c_64x48x50_b32x36.tar.gz&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Compiling a test case (TRAPPIST-1c) ===&lt;br /&gt;
Change the following compiling option:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
-d 64x48x50 -b 32x36&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
You will find the same type of ASCII *.def files as in the ''Early Mars'' case, but adapted to the characteristics and orbital parameters of Trappist-1c.&lt;br /&gt;
In particular ''callphys.def'' contains the following changes:&lt;br /&gt;
&lt;br /&gt;
* The planet is assumed to be in 1:1 spin-orbit resonance, therefore&lt;br /&gt;
   Diurnal = .false. &lt;br /&gt;
   Tlocked = .true.&lt;br /&gt;
* The planet equilibrium temperature is about 342 K&lt;br /&gt;
   tplanet    = 341.9&lt;br /&gt;
* The host star is a late spectral type M8V, with a stellar flux at 1 AU of 0.7527 [W m-2]&lt;br /&gt;
   startype = 9&lt;br /&gt;
   Fat1AU = 0.7527&lt;br /&gt;
* Fixed aerosol distribution, no radiatively active tracers (no evaporation/condensation of H2O and CO2):&lt;br /&gt;
   aerofixed     = .true.&lt;br /&gt;
   aeroco2       = .false.&lt;br /&gt;
   aeroh2o       = .false.&lt;br /&gt;
* No water cycle model, no water cloud formation or water precipitation, no CO2 condensation:&lt;br /&gt;
   water         = .false.&lt;br /&gt;
   watercond     = .false.&lt;br /&gt;
   waterrain     = .false.&lt;br /&gt;
   hydrology     = .false.&lt;br /&gt;
   nonideal      = .true.&lt;br /&gt;
   co2cond       = .false.&lt;br /&gt;
* Following [https://www.sciencedirect.com/science/article/pii/S0032063313002596?via%3Dihub Haus et al. 2015] a prescribed radiatively active cloud model is included. &lt;br /&gt;
It can be activated/deactivated with the flag ''aerovenus''.&lt;br /&gt;
   aerovenus = .true.&lt;br /&gt;
* Modes 1, 2, 2p, 3 and the &amp;quot;unknown&amp;quot; UV absorber can be included/excluded by setting the following keywords to true/false. The characteristics of each mode (e.g. effective radius, effective variance) are based on Venus Express/ESA observations and can be found in this table: [[Media:Table1 aerosolVenus trappist1c.png]]&lt;br /&gt;
   aerovenus1    = .true.&lt;br /&gt;
   aerovenus2    = .true.&lt;br /&gt;
   aerovenus2p   = .true.&lt;br /&gt;
   aerovenus3    = .true.&lt;br /&gt;
   aerovenusUV   = .true.&lt;br /&gt;
&lt;br /&gt;
The cloud model is prescribed between the 1 bar and 0.037 bar pressure levels. For each mode, the top/bottom pressures can be modified by hard-coding the model routine ''aerosol_opacity.F90''.&lt;br /&gt;
Below is an example for mode 1 particles, where the top and bottom pressure layers are prescribed at 0.1 bar and 1 bar, respectively:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;fortran&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
!       1. Initialization&lt;br /&gt;
          aerosol(1:ngrid,1:nlayer,iaer)=0.0&lt;br /&gt;
          p_bot = 1.e5 ! bottom pressure [Pa]&lt;br /&gt;
          p_top = 1.e4 ! top pressure [Pa]&lt;br /&gt;
          h_bot = 1.0e3 ! bottom scale height [m]&lt;br /&gt;
          h_top = 5.0e3 ! top scale height [m]&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
 &lt;br /&gt;
'''TO BE COMPLETED BY GABRIELLA'''&lt;br /&gt;
&lt;br /&gt;
== mini-Neptune GJ1214b ==&lt;br /&gt;
&lt;br /&gt;
A warm mini-Neptune&lt;br /&gt;
&lt;br /&gt;
'''TO BE COMPLETED BY BENJAMIN'''&lt;br /&gt;
&lt;br /&gt;
= 3D DYNAMICO setup =&lt;br /&gt;
&lt;br /&gt;
Due to the rich dynamical activity in their atmospheres (banded zonal jets, eddies, vortices, storms, equatorial oscillations, ...) resulting from multi-scale dynamical interactions, global climate modelling of the giant planets requires resolving the eddies arising from hydrodynamical instabilities in order to correctly establish the planetary-scale jet regime. To this purpose, the Rossby deformation radius $$L_D$$, which is the length scale at which rotational effects become as important as buoyancy or gravity wave effects in the evolution of the flow about some disturbance, is calculated to determine the most suitable horizontal grid resolution. At mid-latitudes, $$L_D$$ for the giant planets is of the same order of magnitude as that of the Earth. Since the giant planets (i.e., Jupiter and Saturn) are roughly 10 times the size of the Earth, the model grid must have a horizontal resolution of 0.5$$^{\circ}$$ in longitude and latitude (vs 5$$^{\circ}$$ for the Earth), considering that 3 grid points are needed to resolve $$L_D$$. &lt;br /&gt;
Moreover, to have a chance of modelling the equatorial oscillation, meridional cell circulations and/or a seasonal inter-hemispheric circulation, a giant-planet GCM must also have a high vertical resolution. Indeed, these climate phenomena have been studied for decades in the Earth's atmosphere, and result from small- and large-scale interactions between the troposphere and the stratosphere. This implies that the propagation of dynamical instabilities, waves and turbulence should be resolved as far as possible along the vertical. Contrary to the horizontal resolution, there is no real criterion (similar to $$L_D$$) to determine the most suitable vertical grid resolution; it remains an adjustable parameter depending on the processes to be represented. However, we advise the user to set a vertical resolution of at least 5 grid points per scale height as a first stage.&lt;br /&gt;
Finally, these atmospheres are cold, with long radiative response times, which requires radiative transfer computations over decades-long simulations: a Jupiter year $$\approx$$ 12 Earth years, a Saturn year $$\approx$$ 30 Earth years, a Uranus year $$\approx$$ 84 Earth years and a Neptune year $$\approx$$ 169 Earth years, depending on the chosen planet.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
To be able to deal with these three (non-exhaustive) requirements when building a giant-planet GCM, we need massive computational resources. For this, we use a dynamical core that is suitable and numerically stable for massively parallel computations: [[The_DYNAMICO_dynamical_core | DYNAMICO]] [Dubos et al., 2015].  &lt;br /&gt;
 &lt;br /&gt;
&lt;br /&gt;
In the two following subsections, we propose an example of installation for Jupiter and for a Hot Jupiter. All the install, compilation, setting and parameter files for each giant planet can be found at: https://gitlab.in2p3.fr/aymeric.spiga/dynamico-giant (the old repo is archived as read-only: https://github.com/aymeric-spiga/dynamico-giant)&lt;br /&gt;
&lt;br /&gt;
The [[Dynamico-giant | DYNAMICO-giant wiki is here]]&lt;br /&gt;
&lt;br /&gt;
If you have already downloaded '''LMDZ.COMMON''', '''LMDZ.GENERIC''', '''IOIPSL''', '''ARCH''', you only have to download:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''ICOSAGCM''': the DYNAMICO dynamical core&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
git clone https://gitlab.in2p3.fr/ipsl/projets/dynamico/dynamico.git ICOSAGCM&lt;br /&gt;
cd ICOSAGCM&lt;br /&gt;
git checkout 110016896ae9e85e614af43223b18fe38f211020   # Version du 6 nov. 2024&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
'''ICOSA_LMDZ''': the interface used to link the LMDZ.GENERIC physics packages and ICOSAGCM&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
svn update -r 3729 -q ICOSA_LMDZ   # Version du 18 avr. 2025&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''XIOS (XML Input Output Server)''': the library used to interpolate input/output fields between the icosahedral grid and a regular longitude/latitude grid on the fly&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
svn co -r 2626 -q http://forge.ipsl.jussieu.fr/ioserver/svn/XIOS/trunk XIOS   # Version du 22 mar. 2024&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
If you haven't already downloaded '''LMDZ.COMMON''', '''LMDZ.GENERIC''', '''IOIPSL''' and '''ARCH''', you can use the '''install.sh''' script provided in the GitLab repository. &lt;br /&gt;
  &lt;br /&gt;
&lt;br /&gt;
Once each part of the GCM is downloaded, you can compile it. &lt;br /&gt;
First, you have to define your [[The_Target_Architecture_(&amp;quot;arch&amp;quot;)_Files | target architecture file ]] (hereafter named YOUR_ARCH_FILE), in which you fill in all the necessary information about the local environment: where the libraries are located, which compiler and which compiler options will be used, etc.&lt;br /&gt;
Some architecture files for specific machines are provided in the '''ARCH''' directory; they are referenced in the following lines without the 'arch-' prefix (e.g., arch-X64_IRENE-AMD.env is referenced as X64_IRENE-AMD).  &lt;br /&gt;
&lt;br /&gt;
The main specificity of DYNAMICO-giant is that all the main parts of the model ('''ICOSAGCM''', '''LMDZ.COMMON''' and '''LMDZ.GENERIC''') are compiled as libraries, while the settings and running configuration are managed by the '''ICOSA_LMDZ''' interface.&lt;br /&gt;
&lt;br /&gt;
First, you have to compile '''IOIPSL''',&lt;br /&gt;
 &amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
cd LMDZ.COMMON/ioipsl/                                                                                                             &lt;br /&gt;
    ./install_ioipsl_YOUR-MACHINE.bash&lt;br /&gt;
cd ../../&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt; &lt;br /&gt;
then '''XIOS''' library, &lt;br /&gt;
 &amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
cd XIOS/                                                                                                               &lt;br /&gt;
    ./make_xios --prod --arch YOUR_ARCH_FILE --arch_path ../ARCH --job 8 --full&lt;br /&gt;
cd -&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt; &lt;br /&gt;
the physics package,&lt;br /&gt;
 &amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
cd LMDZ.COMMON/                                                                                                        &lt;br /&gt;
    ./makelmdz_fcm -p generic -p_opt &amp;quot;-b 20x25&amp;quot; -prod -parallel mpi -libphy -io xios -arch YOUR_ARCH_FILE -arch_path ../ARCH -j 8 -full&lt;br /&gt;
cd -&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt; &lt;br /&gt;
the dynamical core '''DYNAMICO''' (located in '''ICOSAGCM''' directory, named from the icosahedral shape of the horizontal mesh),&lt;br /&gt;
 &amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
cd ICOSAGCM/&lt;br /&gt;
    ./make_icosa -prod -parallel mpi -external_ioipsl -with_xios -arch YOUR_ARCH_FILE -arch_path ../ARCH -job 8 -full&lt;br /&gt;
cd -&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt; &lt;br /&gt;
and finally the '''ICOSA_LMDZ''' interface&lt;br /&gt;
 &amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
cd ICOSA_LMDZ/&lt;br /&gt;
    ./make_icosa_lmdz -p generic -p_opt &amp;quot;-b 20x25&amp;quot; -parallel mpi -arch YOUR_ARCH_FILE -arch_path ../ARCH -job 8 -nodeps&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
This last step is a bit redundant with the two previous ones, since ''make_icosa_lmdz'' will execute ''./make_icosa'' (in the '''ICOSAGCM''' directory) and ''./makelmdz_fcm'' (in the '''LMDZ.COMMON''' directory) to create and source the architecture files shared between all parts of the model, as well as create the intermediate file ''config.fcm''. As you have already compiled these two elements, ''make_icosa_lmdz'' should only create the linked architecture files and ''config.fcm'', and compile the interface. Here, the ''-nodeps'' option skips the check of the XIOS and IOIPSL compilation, which saves you from recompiling these two elements again.&lt;br /&gt;
      &lt;br /&gt;
Finally, your executable programs should appear in the '''ICOSA_LMDZ/bin''' subdirectory, as '''icosa_lmdz.exe''', and in the '''XIOS/bin''' subdirectory, as '''xios_server.exe'''. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
All these compiling steps are summed up in the ''make_icosa_lmdz'' program, which should be adapted to your own computational settings (i.e., through your target architecture file).&lt;br /&gt;
 &amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
./make_icosa_lmdz -p generic -p_opt &amp;quot;-b 20x25&amp;quot; -parallel mpi -arch YOUR_ARCH_FILE -arch_path ../ARCH -job 8 -full&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt; &lt;br /&gt;
Here, the ''-full'' option ensures the compilation of each part ('''IOIPSL''', '''XIOS''', '''LMDZ.COMMON''', '''ICOSAGCM''' and '''ICOSA_LMDZ''') of the model.  &lt;br /&gt;
&lt;br /&gt;
Now you can move your two executable files to your working directory and start to run your own simulation of Jupiter or a Hot Jupiter, as described in what follows.&lt;br /&gt;
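For instance, a minimal sketch of this step (the destination path is a placeholder to adapt to your setup):&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
# copy the two executables built above to a (hypothetical) working directory&lt;br /&gt;
cp ICOSA_LMDZ/bin/icosa_lmdz.exe XIOS/bin/xios_server.exe /path/to/your/workdir/&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;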
&lt;br /&gt;
&lt;br /&gt;
Note: If you are using the GitLab file architecture (https://gitlab.in2p3.fr/aymeric.spiga/dynamico-giant), you should be able to compile the model directly from your working directory (for instance ''dynamico-giant/jupiter/'') by using the ''compile_occigen.sh'' program, which has to be adapted to your machine/cluster.&lt;br /&gt;
&lt;br /&gt;
''Note 2: Depending on the compiler module you use, especially with gfortran, you may need to modify the tracers_icosa.F90 file located in the src directory in order to successfully compile ICOSAGCM. For example, if you are using GCC/11.3.0 and OpenMPI/4.1.4, you must update the insert_tracer_output subroutine as follows:''&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;fortran&amp;quot;&amp;gt;&lt;br /&gt;
SUBROUTINE insert_tracer_output&lt;br /&gt;
      USE xios_mod&lt;br /&gt;
      USE grid_param&lt;br /&gt;
      IMPLICIT NONE&lt;br /&gt;
      TYPE(xios_fieldgroup) :: fieldgroup_hdl&lt;br /&gt;
      TYPE(xios_field) :: field_hdl&lt;br /&gt;
      INTEGER :: iq&lt;br /&gt;
      CHARACTER(len=1000) :: tracername1&lt;br /&gt;
      CHARACTER(len=1000) :: tracername2&lt;br /&gt;
      CHARACTER(len=1000) :: tracername3 &lt;br /&gt;
&lt;br /&gt;
      CALL xios_get_handle(&amp;quot;standard_output_tracers&amp;quot;, fieldgroup_hdl)&lt;br /&gt;
      DO iq = 1, nqtot&lt;br /&gt;
         tracername1 = &amp;quot;tracer_&amp;quot;//TRIM(tracers(iq)%name)&lt;br /&gt;
         CALL xios_add_child(fieldgroup_hdl, field_hdl, tracername1)&lt;br /&gt;
         CALL xios_set_attr(field_hdl, name=TRIM(tracers(iq)%name))&lt;br /&gt;
      END DO&lt;br /&gt;
&lt;br /&gt;
      CALL xios_get_handle(&amp;quot;standard_output_tracers_init&amp;quot;, fieldgroup_hdl)&lt;br /&gt;
      DO iq = 1, nqtot&lt;br /&gt;
         tracername2 = &amp;quot;tracer_&amp;quot;//TRIM(tracers(iq)%name)//&amp;quot;_init&amp;quot;&lt;br /&gt;
         tracername3 = TRIM(tracers(iq)%name)//&amp;quot;_init&amp;quot;&lt;br /&gt;
         CALL xios_add_child(fieldgroup_hdl, field_hdl, tracername2)&lt;br /&gt;
         CALL xios_set_attr(field_hdl, name=tracername3)&lt;br /&gt;
      END DO&lt;br /&gt;
&lt;br /&gt;
   END SUBROUTINE insert_tracer_output&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Jupiter with DYNAMICO ==&lt;br /&gt;
Using a new dynamical core implies new setting files, in addition to or as a replacement of those relevant to the '''LMDZ.COMMON''' dynamical core. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
There are two kinds of setting files:&lt;br /&gt;
&lt;br /&gt;
'''A first group relevant to DYNAMICO:'''&lt;br /&gt;
&lt;br /&gt;
- [[The ''context_dynamico.xml'' Input File|''context_dynamico.xml'']]: Configuration file for '''DYNAMICO''' for reading and writing files using '''XIOS''', mainly used when you want to check the installation of '''ICOSAGCM''' with [[The_DYNAMICO_dynamical_core | an ''Held and Suarez'' test case]]. When your installation, compilation and run environment is fully functional, the dynamical core output files will not (necessarily) be useful and you can disable their writing. &lt;br /&gt;
&lt;br /&gt;
- [[The context_input_dynamico.xml Input File|''context_input_dynamico.xml'']]:&lt;br /&gt;
&lt;br /&gt;
- [[The file_def_dynamico.xml Input File|''file_def_dynamico.xml'']]: Definition of the diagnostic output files related only to '''ICOSAGCM'''. &lt;br /&gt;
&lt;br /&gt;
- [[The field_def_dynamico.xml Input File|''field_def_dynamico.xml'']]: Definition of all existing variables that can be output from DYNAMICO.&lt;br /&gt;
&lt;br /&gt;
- [[The tracer.def Input File|''tracer.def'']]: Definition of the names and physico-chemical properties of the tracers which will be advected by the dynamical core. For now, there are two files related to tracers; we are working to harmonise them.  &lt;br /&gt;
&lt;br /&gt;
''' A second group relevant to LMDZ.GENERIC physical packages: '''&lt;br /&gt;
&lt;br /&gt;
- [[The context_lmdz_physics.xml Input File|''context_lmdz_physics.xml'']]: File in which the horizontal grid, the vertical coordinate and the output file(s) are defined, together with the output writing frequency, the time unit, the geophysical variables to be written, etc. Each new geophysical variable added here has to be defined in the ''field_def_physics.xml'' file.&lt;br /&gt;
&lt;br /&gt;
- [[The field_def_physics.xml Input File|''field_def_physics.xml'']]: Definition of all existing variables that can be output from the physical packages interfaced with '''DYNAMICO'''. This is where you will add each geophysical field that you want to appear in the ''Xhistins.nc'' output files. For instance, for the ''thermal plume scheme'' used for Jupiter's tropospheric dynamics, we have added the following variables: &lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;xml&amp;quot; line&amp;gt;&lt;br /&gt;
             &amp;lt;field id=&amp;quot;h2o_vap&amp;quot; &lt;br /&gt;
                   long_name=&amp;quot;Vapor mass mixing ratio&amp;quot; &lt;br /&gt;
                   unit=&amp;quot;kg/kg&amp;quot; /&amp;gt;&lt;br /&gt;
            &amp;lt;field id=&amp;quot;h2o_ice&amp;quot; &lt;br /&gt;
                   long_name=&amp;quot;Ice mass mixing ratio&amp;quot; &lt;br /&gt;
                   unit=&amp;quot;kg/kg&amp;quot; /&amp;gt;&lt;br /&gt;
            &amp;lt;field id=&amp;quot;detr&amp;quot; &lt;br /&gt;
                   long_name=&amp;quot;Detrainment&amp;quot; &lt;br /&gt;
                   unit=&amp;quot;kg/m2/s&amp;quot; /&amp;gt;&lt;br /&gt;
            &amp;lt;field id=&amp;quot;entr&amp;quot; &lt;br /&gt;
                   long_name=&amp;quot;Entrainment&amp;quot; &lt;br /&gt;
                   unit=&amp;quot;kg/m2/s&amp;quot; /&amp;gt;&lt;br /&gt;
            &amp;lt;field id=&amp;quot;w_plm&amp;quot; &lt;br /&gt;
                   long_name=&amp;quot;Plume vertical velocity&amp;quot; &lt;br /&gt;
                   unit=&amp;quot;m/s&amp;quot; /&amp;gt;&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
- [[The_callphys.def_Input_File|''callphys.def'']]: This setting file is used with either '''DYNAMICO''' or '''LMDZ.COMMON''' and allows the user to choose the physical parametrisation schemes and their appropriate main parameter values relevant to the planet being simulated. In our Jupiter case, some specific parametrisations should be added or modified with respect to the example linked at the beginning of this item: &lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot; line&amp;gt;&lt;br /&gt;
# Diurnal cycle ?  if diurnal=false, diurnally averaged solar heating&lt;br /&gt;
diurnal      = .false. #.true.&lt;br /&gt;
# Seasonal cycle ? if season=false, Ls stays constant, to value set in &amp;quot;start&amp;quot;&lt;br /&gt;
season       = .true. &lt;br /&gt;
# Tidally resonant orbit ? must have diurnal=false, correct rotation rate in newstart&lt;br /&gt;
tlocked      = .false.&lt;br /&gt;
# Tidal resonance ratio ? ratio T_orbit to T_rotation&lt;br /&gt;
nres         = 1&lt;br /&gt;
# Planet with rings?&lt;br /&gt;
rings_shadow = .false.&lt;br /&gt;
# Compute latitude-dependent gravity field?&lt;br /&gt;
oblate       = .true.&lt;br /&gt;
# Include non-zero flattening (a-b)/a?&lt;br /&gt;
flatten      = 0.06487&lt;br /&gt;
# Needed if oblate=.true.: J2&lt;br /&gt;
J2           = 0.01470&lt;br /&gt;
# Needed if oblate=.true.: Planet mean radius (m)&lt;br /&gt;
Rmean        = 69911000.&lt;br /&gt;
# Needed if oblate=.true.: Mass of the planet (*1e24 kg)&lt;br /&gt;
MassPlanet   = 1898.3&lt;br /&gt;
# use (read/write) a startfi.nc file? (default=.true.)&lt;br /&gt;
startphy_file = .false.&lt;br /&gt;
# constant value for surface albedo (if startphy_file = .false.)&lt;br /&gt;
surfalbedo   = 0.0&lt;br /&gt;
# constant value for surface emissivity (if startphy_file = .false.)&lt;br /&gt;
surfemis     = 1.0&lt;br /&gt;
&lt;br /&gt;
# the rad. transfer is computed every &amp;quot;iradia&amp;quot; physical timestep&lt;br /&gt;
iradia           = 160&lt;br /&gt;
# folder in which correlated-k data is stored ?&lt;br /&gt;
corrkdir         = Jupiter_HITRAN2012_REY_ISO_NoKarko_T460K_article2019_gauss8p8_095&lt;br /&gt;
# Uniform absorption coefficient in radiative transfer?&lt;br /&gt;
graybody         = .false.&lt;br /&gt;
# Characteristic planetary equilibrium (black body) temperature&lt;br /&gt;
# This is used only in the aerosol radiative transfer setup. (see aerave.F)&lt;br /&gt;
tplanet          = 100.&lt;br /&gt;
# Output global radiative balance in file 'rad_bal.out' - slow for 1D!!&lt;br /&gt;
meanOLR          = .false.&lt;br /&gt;
# Variable gas species: Radiatively active ?&lt;br /&gt;
varactive        = .false.&lt;br /&gt;
# Atmospheric specific heat capacity and molecular mass can either be&lt;br /&gt;
# computed by the dynamics, forced in callphys.def, or computed from gases.def.&lt;br /&gt;
# You have to choose: 0 for dynamics (3D), 1 for forced in callphys.def (1D) or 2 for computed from gases.def (1D)&lt;br /&gt;
# Force_cpp and check_cpp_match are now deprecated.  &lt;br /&gt;
cpp_mugaz_mode = 0&lt;br /&gt;
# Specific heat capacity in J K-1 kg-1 [only used if cpp_mugaz_mode = 1]&lt;br /&gt;
cpp              = 11500.&lt;br /&gt;
# Molecular mass in g mol-1 [only used if cpp_mugaz_mode = 1]&lt;br /&gt;
mugaz            = 2.30&lt;br /&gt;
### DEBUG&lt;br /&gt;
# To not call abort when temperature is outside boundaries:&lt;br /&gt;
strictboundcorrk = .false.&lt;br /&gt;
# To not stop run when temperature is greater than 400 K for H2-H2 CIA dataset:   &lt;br /&gt;
strictboundcia = .false.&lt;br /&gt;
# Add temperature sponge effect after radiative transfer?&lt;br /&gt;
callradsponge    = .false.&lt;br /&gt;
&lt;br /&gt;
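# solar flux at 1 AU in W/m2 (assumed meaning of Fat1AU)&lt;br /&gt;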
Fat1AU = 1366.0&lt;br /&gt;
&lt;br /&gt;
## Other physics options&lt;br /&gt;
## ~~~~~~~~~~~~~~~~~~~~~&lt;br /&gt;
# call turbulent vertical diffusion ?&lt;br /&gt;
calldifv    = .false.&lt;br /&gt;
# use turbdiff instead of vdifc ?&lt;br /&gt;
UseTurbDiff = .true.&lt;br /&gt;
# call convective adjustment ?&lt;br /&gt;
calladj     = .true.&lt;br /&gt;
# call thermal plume model ?&lt;br /&gt;
calltherm   = .true.&lt;br /&gt;
# call thermal conduction in the soil ?&lt;br /&gt;
callsoil    = .false.&lt;br /&gt;
# Internal heat flux (matters only if callsoil=F)&lt;br /&gt;
intheat     = 7.48&lt;br /&gt;
# Remove lower boundary (e.g. for gas giant sims)&lt;br /&gt;
nosurf      = .true.&lt;br /&gt;
#########################################################################&lt;br /&gt;
## extra non-standard definitions for Earth&lt;br /&gt;
#########################################################################&lt;br /&gt;
&lt;br /&gt;
## Thermal plume model options&lt;br /&gt;
## ~~~~~~~~~~~~~~~~~~~~~~~~~~~&lt;br /&gt;
dvimpl               = .true.&lt;br /&gt;
r_aspect_thermals    = 2.0&lt;br /&gt;
tau_thermals         = 0.0&lt;br /&gt;
betalpha             = 0.9&lt;br /&gt;
afact                = 0.7&lt;br /&gt;
fact_epsilon         = 2.e-4&lt;br /&gt;
alpha_max            = 0.7&lt;br /&gt;
fomass_max           = 0.5&lt;br /&gt;
pres_limit           = 2.e5&lt;br /&gt;
&lt;br /&gt;
## Tracer and aerosol options&lt;br /&gt;
## ~~~~~~~~~~~~~~~~~~~~~~~~~~&lt;br /&gt;
# Ammonia cloud (Saturn/Jupiter)?&lt;br /&gt;
aeronh3             = .true.&lt;br /&gt;
size_nh3_cloud      = 10.D-6&lt;br /&gt;
pres_nh3_cloud      = 1.1D5                        # old: 9.D4&lt;br /&gt;
tau_nh3_cloud       = 10.                          # old: 15.&lt;br /&gt;
# Radiatively active aerosol (Saturn/Jupiter)?&lt;br /&gt;
aeroback2lay         = .true.&lt;br /&gt;
optprop_back2lay_vis = optprop_jupiter_vis_n20.dat&lt;br /&gt;
optprop_back2lay_ir  = optprop_jupiter_ir_n20.dat&lt;br /&gt;
obs_tau_col_tropo    = 4.0&lt;br /&gt;
size_tropo           = 5.e-7&lt;br /&gt;
pres_bottom_tropo    = 8.0D4&lt;br /&gt;
pres_top_tropo       = 1.8D4&lt;br /&gt;
obs_tau_col_strato   = 0.1D0&lt;br /&gt;
# Auroral aerosols (Saturn/Jupiter)?&lt;br /&gt;
aeroaurora         = .false.&lt;br /&gt;
size_aurora        = 3.e-7&lt;br /&gt;
obs_tau_col_aurora = 2.0&lt;br /&gt;
&lt;br /&gt;
# Radiatively active CO2 aerosol?&lt;br /&gt;
aeroco2            = .false.&lt;br /&gt;
# Fixed CO2 aerosol distribution?&lt;br /&gt;
aerofixco2     = .false.&lt;br /&gt;
# Radiatively active water aerosol?&lt;br /&gt;
aeroh2o        = .false.&lt;br /&gt;
# Fixed water aerosol distribution?&lt;br /&gt;
aerofixh2o     = .false.&lt;br /&gt;
# basic dust opacity&lt;br /&gt;
dusttau        = 0.0&lt;br /&gt;
# Varying H2O cloud fraction?&lt;br /&gt;
CLFvarying     = .false.&lt;br /&gt;
# H2O cloud fraction if fixed?&lt;br /&gt;
CLFfixval      = 0.0&lt;br /&gt;
# fixed radii for cloud particles?&lt;br /&gt;
radfixed       = .false.&lt;br /&gt;
# number mixing ratio of CO2 ice particles&lt;br /&gt;
Nmix_co2       = 100000.&lt;br /&gt;
# number mixing ratio of water particles (for radfixed=.false.)&lt;br /&gt;
Nmix_h2o       = 1.e7&lt;br /&gt;
# number mixing ratio of water ice particles (for radfixed=.false.)&lt;br /&gt;
Nmix_h2o_ice   = 5.e5&lt;br /&gt;
# radius of H2O water particles (for radfixed=.true.):&lt;br /&gt;
rad_h2o        = 10.e-6&lt;br /&gt;
# radius of H2O ice particles (for radfixed=.true.):&lt;br /&gt;
rad_h2o_ice    = 35.e-6&lt;br /&gt;
# atm mass update due to tracer evaporation/condensation?&lt;br /&gt;
mass_redistrib = .false.&lt;br /&gt;
&lt;br /&gt;
## Water options &lt;br /&gt;
## ~~~~~~~~~~~~~&lt;br /&gt;
# Model water cycle&lt;br /&gt;
water         = .true.&lt;br /&gt;
# Model water cloud formation&lt;br /&gt;
watercond     = .true.&lt;br /&gt;
# Model water precipitation (including coagulation etc.)&lt;br /&gt;
waterrain     = .true.&lt;br /&gt;
# Use simple precipitation scheme?&lt;br /&gt;
precip_scheme = 1&lt;br /&gt;
# Evaporate precipitation?&lt;br /&gt;
evap_prec     = .true.&lt;br /&gt;
# multiplicative constant in Boucher 95 precip scheme&lt;br /&gt;
Cboucher      = 1.&lt;br /&gt;
# Include hydrology ?&lt;br /&gt;
hydrology     = .false.&lt;br /&gt;
# H2O snow (and ice) albedo ?&lt;br /&gt;
albedosnow    = 0.6&lt;br /&gt;
# Maximum sea ice thickness ?&lt;br /&gt;
maxicethick   = 10.&lt;br /&gt;
# Freezing point of seawater (degrees C) ?&lt;br /&gt;
Tsaldiff      = 0.0&lt;br /&gt;
# Evolve surface water sources ?&lt;br /&gt;
sourceevol    = .false.&lt;br /&gt;
&lt;br /&gt;
## CO2 options &lt;br /&gt;
## ~~~~~~~~~~~&lt;br /&gt;
# call CO2 condensation ?&lt;br /&gt;
co2cond       = .false.&lt;br /&gt;
# Set initial temperature profile to 1 K above CO2 condensation everywhere?&lt;br /&gt;
nearco2cond   = .false.&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
- [[The_gases.def_Input_file|''gases.def'']]: File containing the gas composition of the atmosphere you want to model, with their molar mixing ratios. &lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot; line&amp;gt;&lt;br /&gt;
# gases&lt;br /&gt;
5&lt;br /&gt;
H2_&lt;br /&gt;
He_&lt;br /&gt;
CH4&lt;br /&gt;
C2H2&lt;br /&gt;
C2H6&lt;br /&gt;
0.863&lt;br /&gt;
0.134&lt;br /&gt;
0.0018&lt;br /&gt;
1.e-7&lt;br /&gt;
1.e-5&lt;br /&gt;
# First line is number of gases&lt;br /&gt;
# Followed by gas names (always 3 characters)&lt;br /&gt;
# and then molar mixing ratios.&lt;br /&gt;
# mixing ratio -1 means the gas is variable.&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
- [[The jupiter_const.def Input File|''jupiter_const.def'']]: File that gathers all orbital and physical parameters of Jupiter.&lt;br /&gt;
&lt;br /&gt;
- [[The_traceur.def_Input_File|''traceur.def'']]: At this time, only two tracers are used for modelling Jupiter's atmosphere, so the ''traceur.def'' file is summed up as follows:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot; line&amp;gt;&lt;br /&gt;
2&lt;br /&gt;
h2o_vap&lt;br /&gt;
h2o_ice&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
''' Two additional files are used to set the running parameters of the simulation itself:'''&lt;br /&gt;
&lt;br /&gt;
- [[The run_icosa.def Input File | ''run_icosa.def'']]: file containing parameters for '''ICOSAGCM''' to execute the simulation, used to determine the [[Advanced Use of the GCM | horizontal and vertical resolutions]], the number of processors, the number of subdivisions, the duration of the simulation, etc.&lt;br /&gt;
&lt;br /&gt;
- ''run.def'': file which brings together all the setting files and will be read by the '''ICOSA_LMDZ''' interface to link each part of the model ('''ICOSAGCM''', '''LMDZ.GENERIC''') with its particular setting file(s) when the '''XIOS''' library does not take action (through the ''.xml'' files).&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot; line&amp;gt;&lt;br /&gt;
###########################################################################&lt;br /&gt;
### INCLUDE OTHER DEF FILES (physics, specific settings, etc...)&lt;br /&gt;
###########################################################################&lt;br /&gt;
INCLUDEDEF=run_icosa.def&lt;br /&gt;
&lt;br /&gt;
INCLUDEDEF=jupiter_const.def&lt;br /&gt;
&lt;br /&gt;
INCLUDEDEF=callphys.def&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
prt_level=0&lt;br /&gt;
&lt;br /&gt;
## iphysiq must be same as itau_physics&lt;br /&gt;
iphysiq=40&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Hot Jupiter with DYNAMICO ==&lt;br /&gt;
&lt;br /&gt;
Modelling the atmosphere of a Hot Jupiter is challenging because of the extreme temperature conditions and the fact that these planets are gas giants. Therefore, using a dynamical core such as Dynamico is strongly recommended. Here, we discuss how to perform a cloudless simulation of the Hot Jupiter WASP-43 b, using Dynamico.&lt;br /&gt;
&lt;br /&gt;
'''1st step''': You need to go to the GitHub repository mentioned previously for Dynamico: https://github.com/aymeric-spiga/dynamico-giant. ''Git clone'' this repo on your favorite cluster, and ''checkout'' the &amp;quot;hot_jupiter&amp;quot; branch.&lt;br /&gt;
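A minimal sketch of this step (assuming the GitHub URL above):&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
git clone https://github.com/aymeric-spiga/dynamico-giant.git&lt;br /&gt;
cd dynamico-giant&lt;br /&gt;
git checkout hot_jupiter   # switch to the Hot Jupiter branch&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;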
&lt;br /&gt;
'''2nd step''': Now, run the install.sh script. This script will install '''all''' the required models ('''LMDZ.COMMON''', '''LMDZ.GENERIC''', '''ICOSA_LMDZ''', '''XIOS''', '''FCM''', '''ICOSAGCM'''). At this point, the only missing piece is '''IOIPSL'''. To install it, go to &lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt; &lt;br /&gt;
dynamico-giant/code/LMDZ.COMMON/ioipsl/ &lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
There, you will find some example installation scripts. You need to create one that will work on your cluster, with your own arch files.&lt;br /&gt;
During the installation of '''IOIPSL''', you might be asked for a login/password. Contact the TGCC computing center to get access.&lt;br /&gt;
&lt;br /&gt;
'''3rd step''': Great, now we have all we need to get started. Navigate to the ''hot_jupiter'' folder. You will find a ''compile_mesopsl.sh'' and a ''compile_occigen.sh'' script. Use them as examples to create the compile script adapted to your own cluster, then run it. &lt;br /&gt;
While it is running, we suggest that you take a look at the ''log_compile'' file. The compilation can take a while (~10 minutes, especially because of XIOS). One quick trick to make sure that everything went right is to check the number of ''Build command finished'' messages in ''log_compile''. If everything worked out, there should be 6 of them.&lt;br /&gt;
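For instance, assuming the ''log_compile'' file name above, the count can be checked with:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
# count the number of successful build steps (should print 6)&lt;br /&gt;
grep -c &amp;quot;Build command finished&amp;quot; log_compile&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;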
&lt;br /&gt;
'''4th step''': Okay, the model compiled, good job! Now we need to create the initial conditions for our run. In the hot_jupiter1d folder, you already have a ''temp_profile.txt'' computed with the 1D version of LMDZ.GENERIC (see rcm1d on this page). Thus, there is no need to recompute a 1D model, but this will be needed if you want to model another Hot Jupiter.&lt;br /&gt;
Navigate to the 'makestart' folder, located at &lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt; &lt;br /&gt;
dynamico-giant/hot_jupiter/makestart/&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
To generate the initial conditions for the 3D run, we will start the model using the temperature profile from the 1D run. To do that, you will find a &amp;quot;job_mpi&amp;quot; script. Open it, adapt it to your cluster and launch the job. This job uses 20 processors and runs 5 days of simulation. &lt;br /&gt;
If everything goes well, you should see a few NetCDF files appear. The important ones are '''start_icosa0.nc''', '''startfi0.nc''' and '''Xhistins.nc'''. &lt;br /&gt;
If you see these files, you're all set to launch a real simulation !&lt;br /&gt;
&lt;br /&gt;
'''5th step''': Go back to the ''hot_jupiter'' folder. There are a bunch of scripts to launch your simulation. Take a look at the ''astro_fat_mpi'' script, and adapt it to your cluster. Then you can launch your simulation by doing &lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt; &lt;br /&gt;
./run_astro_fat&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
This will start the simulation using 90 processors. In the same folder, check that the icosa_lmdz.out file is created. This is the logfile of the simulation while it is running; you can check there that everything is going well.&lt;br /&gt;
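For instance, to follow the logfile live (a standard Unix command, not specific to the model):&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
# follow the simulation log as it is written&lt;br /&gt;
tail -f icosa_lmdz.out&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;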
&lt;br /&gt;
'''Important side note''': When using the ''run_astro_fat'' script to run a simulation, it will run a chained simulation, restarting from the previous state after each 100-day segment and generating ''Xhistins.nc'' files. These are your result files, where you will find all the variables that characterise your atmosphere (temperature field, wind fields, etc.). &lt;br /&gt;
&lt;br /&gt;
Good luck and enjoy the generic PCM Dynamico for Hot Jupiter!&lt;br /&gt;
&lt;br /&gt;
'''2nd important side note''': These 5 steps are the basic steps needed to run a simulation. If you want to tune simulations for another planet, or change other settings, you need to take a look at the '''*.def''' and '''*.xml''' files. If you're lost in all of this, take a look at the different pages of this website and/or contact us!&lt;br /&gt;
Also, you might want to check the wiki on the [https://github.com/aymeric-spiga/dynamico-giant ''GitHub''], which explains a lot of settings for Dynamico.&lt;br /&gt;
&lt;br /&gt;
= 3D LES setup =&lt;br /&gt;
&lt;br /&gt;
== Proxima b with LES ==&lt;br /&gt;
&lt;br /&gt;
To model the subgrid atmospheric turbulence, the [[WRF dynamical core for LES/mesoscale simulations|'''WRF''']] dynamical core coupled with the LMD Generic physics package is used. The first study conducted was to resolve the convective activity at the substellar point of Proxima b (Lefevre et al. 2021). The impacts of the stellar insolation and of the rotation period were studied. The files for the reference case, with a stellar flux of 880 W/m2 and an 11-day rotation period, are presented below.&lt;br /&gt;
&lt;br /&gt;
The input_* files are used to initialize the temperature, pressure, winds and moisture of the domain (an illustrative excerpt is given after this list): &lt;br /&gt;
- input_sounding: altitude (km), potential temperature, water vapour (kg/kg), u, v&lt;br /&gt;
- input_therm: normalized gas constant, isobaric heat capacity, pressure, density, temperature&lt;br /&gt;
- input_hr: SW heating, LW heating, large-scale heating extracted from the GCM. Only the last one is used in this configuration.&lt;br /&gt;
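As an illustration only, a hypothetical ''input_sounding'' excerpt following the column order above (the values are placeholders, not those of the reference case, and the comment line is added here for readability):&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
# altitude(km)  theta(K)  qv(kg/kg)  u(m/s)  v(m/s)&lt;br /&gt;
0.0   285.0   1.0e-3   5.0   0.0&lt;br /&gt;
1.0   287.5   8.0e-4   5.0   0.0&lt;br /&gt;
2.0   290.0   6.0e-4   5.0   0.0&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;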
&lt;br /&gt;
The file namelist.input is used to set up the domain parameters (resolution, grid points, etc.). The file ''levels'' specifies the eta-levels of the vertical domain.&lt;br /&gt;
&lt;br /&gt;
The file ''planet'' is used to set up the atmospheric parameters, in order: gravity (m/s2), isobaric heat capacity (J/kg/K), molecular mass (g/mol), reference temperature (K), surface pressure (Pa), planet radius (m) and planet rotation rate (s-1).&lt;br /&gt;
&lt;br /&gt;
The *.def files are the parameter files for the physics. Compared to GCM runs, the convective adjustment in callphys.def is turned off.&lt;br /&gt;
&lt;br /&gt;
The file controle.txt, equivalent to the 'controle' field in the GCM start.nc, is needed to initialize some physics constants.&lt;br /&gt;
&lt;br /&gt;
TBC ML&lt;br /&gt;
&lt;br /&gt;
= 1D setup =&lt;br /&gt;
&lt;br /&gt;
== rcm1d test case ==&lt;br /&gt;
&lt;br /&gt;
Running the model in 1D (i.e. considering simply a column of atmosphere) is a common first step to test a new setup. To do so, you first have to compile the 1D version of the model. The command line is very similar to [[Quick_Install_and_Run#Compiling a test case (early Mars)|the one for the 3D case]], except for 2 changes:&lt;br /&gt;
# put just the vertical resolution after the -d option (&amp;quot;VERT&amp;quot; instead of ''LON''x''LAT''x''VERT'' for the 3D case)&lt;br /&gt;
# at the end of the line, replace &amp;quot;gcm&amp;quot; with &amp;quot;rcm1d&amp;quot;&lt;br /&gt;
It will generate a file called '''rcm1d_XX_phyxxx_seq.e''', where ''XX'' and ''phyxxx'' are the vertical resolution and the physics package, respectively.&lt;br /&gt;
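For instance, a hypothetical compilation command for a 1D case with 25 layers (the architecture name is a placeholder) could look like:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
# 1D version: only the vertical resolution after -d, and rcm1d instead of gcm&lt;br /&gt;
./makelmdz_fcm -arch YOUR_ARCH_FILE -p generic -d 25 rcm1d&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;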
&lt;br /&gt;
Then, copy the executable in your working directory. &lt;br /&gt;
&lt;br /&gt;
Note that the '''.def''' files differ a bit from the 3D case: [[The_run.def_Input_File|'''run.def''']] is replaced with [[The_rcm1d.def_Input_File|'''rcm1d.def''']], which contains more general information. Indeed, the 1D model does not use [[The_start.nc_and_startfi.nc_input_files|'''start.nc''']] or [[The_start.nc_and_startfi.nc_input_files|'''startfi.nc''']] files to initialize; instead, it reads everything from the '''.def''' files. You can find examples of 1D configurations in ''LMDZ.GENERIC/deftank'' (e.g. '''rcm1d.def.earlymars''', '''rcm1d.def.earth'''); the best thing is to have a look at them.&lt;br /&gt;
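For instance, to start from the early Mars example (assuming your working directory sits next to LMDZ.GENERIC):&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
# use the early Mars 1D example as a template for your own setup&lt;br /&gt;
cp LMDZ.GENERIC/deftank/rcm1d.def.earlymars rcm1d.def&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;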
&lt;br /&gt;
== kcm1d test case ==&lt;br /&gt;
&lt;br /&gt;
Our 1-D inverse model&lt;br /&gt;
&lt;br /&gt;
TBD by Guillaume or Martin&lt;br /&gt;
&lt;br /&gt;
[[Category:Generic-Model]]&lt;br /&gt;
[[Category:Generic-LMDZ]]&lt;br /&gt;
[[Category:Generic-DYNAMICO]]&lt;br /&gt;
[[Category:Generic-WRF]]&lt;br /&gt;
[[Category:Generic-1D]]&lt;/div&gt;</summary>
		<author><name>Emillour</name></author>	</entry>

	<entry>
		<id>http://lmdz-forge.lmd.jussieu.fr/mediawiki/Planets/index.php?title=Mars_1D_testphys1d_program&amp;diff=3210</id>
		<title>Mars 1D testphys1d program</title>
		<link rel="alternate" type="text/html" href="http://lmdz-forge.lmd.jussieu.fr/mediawiki/Planets/index.php?title=Mars_1D_testphys1d_program&amp;diff=3210"/>
				<updated>2026-02-20T11:23:33Z</updated>
		
		<summary type="html">&lt;p&gt;Emillour: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;It is possible to run the Mars PCM in a &amp;quot;single-column&amp;quot; configuration: this is the so-called 1D Mars PCM whose program is '''testphys1d'''. It is quite useful for some studies and/or when developing and testing parametrizations.&lt;br /&gt;
&lt;br /&gt;
== Compilation ==&lt;br /&gt;
The main program '''testphys1d''' is compiled using the same compilation script, [[The makelmdz fcm GCM Compilation Script|makelmdz_fcm]], as for the 3D Mars PCM. Nevertheless, there are a few modifications:&lt;br /&gt;
* the ''-d'' option requires only one argument, the number of vertical levels;&lt;br /&gt;
* the main program to compile is ''testphys1d'' rather than ''gcm''.&lt;br /&gt;
So for instance to compile a case for 54 vertical levels one would run something like:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
makelmdz_fcm -arch somearch -d 54 -p mars testphys1d&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Inputs ==&lt;br /&gt;
Like the ''gcm'' program, the ''testphys1d'' program needs some inputs to be able to run. The needed files are:&lt;br /&gt;
* &amp;lt;code&amp;gt;z2sig.def&amp;lt;/code&amp;gt; for the definition of vertical levels;&lt;br /&gt;
* &amp;lt;code&amp;gt;traceur.def&amp;lt;/code&amp;gt; for the definition of tracers that the user wants the model to run with;&lt;br /&gt;
* &amp;lt;code&amp;gt;callphys.def&amp;lt;/code&amp;gt; for the definition of parametrizations that the user wants the model to run with;&lt;br /&gt;
* &amp;lt;code&amp;gt;run.def&amp;lt;/code&amp;gt; for the run configuration, which is similar to the one for the 3D PCM described in [[The run.def Input File]]. It has to be adapted to the 1D case. An example file, called &amp;lt;code&amp;gt;run.def.1d&amp;lt;/code&amp;gt;, is available in &amp;lt;code&amp;gt;LMDZ.MARS/deftank&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Unlike the ''gcm'' program, the ''testphys1d'' program can run without start files, that is without &amp;lt;code&amp;gt;startfi.nc&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;start.nc&amp;lt;/code&amp;gt;.&lt;br /&gt;
This is the case by default (&amp;lt;code&amp;gt;startfiles_1D=.false.&amp;lt;/code&amp;gt;). In this setup, one can (and often needs to) provide initial profiles of each tracer. These consist of files called ''profile_sometracername'' containing, column-wise, the initial values of the considered tracer. The first line corresponds to the surface tracer value and the following lines correspond to the layers. In addition, one can also provide a similar ''profile_temp'' file containing an initial temperature profile (the first line should then contain the surface temperature value).&lt;br /&gt;
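As an illustration, a minimal hypothetical ''profile_temp'' for a run with 4 layers might contain (first line: surface temperature in K, then one value per layer; the values and the bottom-to-top ordering are assumptions):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
220.&lt;br /&gt;
210.&lt;br /&gt;
205.&lt;br /&gt;
200.&lt;br /&gt;
195.&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;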
When the program ends, it will produce on its own a &amp;lt;code&amp;gt;startfi.nc&amp;lt;/code&amp;gt; file initialized with the default settings.&lt;br /&gt;
&lt;br /&gt;
In the case of &amp;lt;code&amp;gt;startfiles_1D=.true.&amp;lt;/code&amp;gt; set in the &amp;lt;code&amp;gt;run.def&amp;lt;/code&amp;gt;, the program will look for starting files, that is a &amp;lt;code&amp;gt;startfi.nc&amp;lt;/code&amp;gt; file and a &amp;lt;code&amp;gt;start1D.txt&amp;lt;/code&amp;gt; file. The latter is specific to the 1D model and an example can be found in &amp;lt;code&amp;gt;LMDZ.MARS/deftank&amp;lt;/code&amp;gt;. If the starting files are present, it will read them to initialize the run accordingly and it will create a &amp;lt;code&amp;gt;restartfi.nc&amp;lt;/code&amp;gt; file and a &amp;lt;code&amp;gt;restart1D.txt&amp;lt;/code&amp;gt; file at the end of the run. If these starting files are not present, it will start as in the previous (default) case but it will still create the restarting files at the end.&lt;br /&gt;
&lt;br /&gt;
... TODO... add detailed description of ordering and contents of start1D.txt file...&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Outputs ==&lt;br /&gt;
The program ''testphys1d'' can output &amp;lt;code&amp;gt;diagfi.nc&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;stats.nc&amp;lt;/code&amp;gt; files. Just like for the 3D PCM, the optional &amp;lt;code&amp;gt;diagfi.def&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;stats.def&amp;lt;/code&amp;gt; files can respectively be added to specify which variables need to be output.&lt;br /&gt;
&lt;br /&gt;
As described in the previous section, if &amp;lt;code&amp;gt;startfiles_1D=.true.&amp;lt;/code&amp;gt; in the &amp;lt;code&amp;gt;run.def&amp;lt;/code&amp;gt;, then ''testphys1d'' will create a &amp;lt;code&amp;gt;restartfi.nc&amp;lt;/code&amp;gt; file and a &amp;lt;code&amp;gt;restart1D.txt&amp;lt;/code&amp;gt; file at the end of the run. &lt;br /&gt;
This option is particularly useful when one wants to run chained simulations with the 1D model, as sketched below.&lt;br /&gt;
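A minimal sketch of such a chain (the executable name is hypothetical, following the usual naming convention, and we assume &amp;lt;code&amp;gt;startfiles_1D=.true.&amp;lt;/code&amp;gt; is set in run.def):&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
# run 3 chained 1D segments, feeding each restart back as the next start&lt;br /&gt;
for i in 1 2 3; do&lt;br /&gt;
  ./testphys1d_54_phymars_seq.e&lt;br /&gt;
  mv restartfi.nc startfi.nc&lt;br /&gt;
  mv restart1D.txt start1D.txt&lt;br /&gt;
done&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;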
&lt;br /&gt;
[[Category:Mars-Model]]&lt;br /&gt;
[[Category:Mars-1D]]&lt;/div&gt;</summary>
		<author><name>Emillour</name></author>	</entry>

	<entry>
		<id>http://lmdz-forge.lmd.jussieu.fr/mediawiki/Planets/index.php?title=DYNAMICO_with_LMDZ_physics&amp;diff=3205</id>
		<title>DYNAMICO with LMDZ physics</title>
		<link rel="alternate" type="text/html" href="http://lmdz-forge.lmd.jussieu.fr/mediawiki/Planets/index.php?title=DYNAMICO_with_LMDZ_physics&amp;diff=3205"/>
				<updated>2026-02-04T13:58:50Z</updated>
		
		<summary type="html">&lt;p&gt;Emillour: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;This page describes the compilation procedure for the coupling between [[the DYNAMICO dynamical core]] and the PCM LMDZ.* physics packages.&lt;br /&gt;
It describes the directory structure of the ICOSA_LMDZ directory which contains the interface.&lt;br /&gt;
Please read the [[The DYNAMICO dynamical core|DYNAMICO installation process]] first.&lt;br /&gt;
&lt;br /&gt;
== The ''ICOSA_LMDZ'' directory ==&lt;br /&gt;
&lt;br /&gt;
Once downloaded from the svn server, the ''ICOSA_LMDZ'' directory contents should be:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
arch  bld.cfg  build  clean  compile_adastra-gnu  compile_irene-amd  make_icosa_lmdz  src  xml&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Where the noteworthy elements are:&lt;br /&gt;
* The &amp;lt;font color=&amp;quot;green&amp;quot;&amp;gt;make_icosa_lmdz&amp;lt;/font&amp;gt; script, which is the master script to use to compile all components (DYNAMICO, physics package, the interface between the two and the IOIPSL and XIOS libraries).&lt;br /&gt;
To list available options, run &amp;quot;make_icosa_lmdz -h&amp;quot;, which should return something like: &lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
Usage :&lt;br /&gt;
make_icosa_lmdz [options] -arch arch_name -p phys&lt;br /&gt;
[-h]                       : help&lt;br /&gt;
[-prod / -dev / -debug]    : compilation mode: production (default) / developpement / debug .&lt;br /&gt;
[-full]                    : recompile all code from scratch&lt;br /&gt;
[-nodeps]                  : do not build dependencies (XIOS and IOIPSL libraries)&lt;br /&gt;
 -arch arch_name           : target architecture&lt;br /&gt;
[-arch_path path]          : relative PATH to directory containing multi-model&lt;br /&gt;
                             path and environment arch files&lt;br /&gt;
 -p phys                   : physics package (e.g. generic , venus , mars, ...)&lt;br /&gt;
[-p_opt &amp;quot;options&amp;quot;]         : additional options for physics package&lt;br /&gt;
[-parallel type]           : parallelism (none|mpi|omp|mpi_omp)&lt;br /&gt;
[-with_xios]               : compile and link with XIOS (default)&lt;br /&gt;
[-job num]                 : speed up compilation by using num simulateneous &lt;br /&gt;
                             compilation steps (when possible)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Upon successful completion of the compilation step, the &amp;lt;font color=&amp;quot;blue&amp;quot;&amp;gt;bin&amp;lt;/font&amp;gt; subdirectory will contain the executable &amp;lt;font color=&amp;quot;green&amp;quot;&amp;gt;icosa_lmdz.exe&amp;lt;/font&amp;gt;.&lt;br /&gt;
* Example scripts &amp;lt;font color=&amp;quot;green&amp;quot;&amp;gt;compile_adastra-gnu&amp;lt;/font&amp;gt; and &amp;lt;font color=&amp;quot;green&amp;quot;&amp;gt;compile_irene-amd&amp;lt;/font&amp;gt; which are wrappers to the &amp;lt;font color=&amp;quot;green&amp;quot;&amp;gt;make_icosa_lmdz&amp;lt;/font&amp;gt; script with a set of given options. The general idea here is that a user would likewise write his/her own wrapper script.&lt;br /&gt;
* The ''arch'' directory which contains architecture files used by the &amp;lt;font color=&amp;quot;green&amp;quot;&amp;gt;make_icosa_lmdz&amp;lt;/font&amp;gt; script&lt;br /&gt;
* The ''src'' directory which contains the source code for the interface&lt;br /&gt;
* The ''xml'' directory which contains an example of the &amp;lt;font color=&amp;quot;gray&amp;quot;&amp;gt;iodef.xml&amp;lt;/font&amp;gt; file to use, along with related instructions for combining it with other xml files gathered from DYNAMICO and the physics packages&lt;br /&gt;
&lt;br /&gt;
== The ''src'' subdirectory ==&lt;br /&gt;
This directory contains the source code for the interface, along with subdirectories related to each of the physics packages (currently Generic, Mars, Pluto and Venus). More general code, i.e. code which applies to all the physics packages, such as plugins for the vertical discretization or dissipation factors, is located at this level. In practice that directory currently contains:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
distrib_icosa_lmdz.f90  phymars   phyvenus&lt;br /&gt;
disvert_icosa_lmdz.f90  phypluto  vert_prof_dissip_icosa_lmdz.f90&lt;br /&gt;
icosa_lmdz.f90          phygeneric    wrapper.f90&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
where&lt;br /&gt;
* icosa_lmdz.f90 is the main program, which is in fact quite minimal as it only needs to set up pointers for plugin routines and call DYNAMICO's ''icosa_init'' routine&lt;br /&gt;
* distrib_icosa_lmdz.f90 and wrapper.f90 are quite technical routines handling the parallelism and correspondences between physics and dynamics grids&lt;br /&gt;
* disvert_icosa_lmdz.f90 is the plugin (inherited from LMDZ.COMMON) that follows the vertical coordinate and associated discretization choices made there&lt;br /&gt;
* vert_prof_dissip_icosa_lmdz.f90 is the plugin (inherited from LMDZ.COMMON) to set the multiplicative coefficient along the vertical for lateral dissipation&lt;br /&gt;
&lt;br /&gt;
=== The ''src/phy*'' subdirectories ===&lt;br /&gt;
These contain the interface to call the main physics driver from the Mars (''phymars''), Pluto (''phypluto''), Generic (''phygeneric'') or Venus (''phyvenus'') physics packages&lt;br /&gt;
&lt;br /&gt;
== Compiling DYNAMICO with a PCM LMDZ.* physics package ==&lt;br /&gt;
&lt;br /&gt;
Once you have downloaded DYNAMICO into the '''ICOSAGCM/''' folder, go to the [[ICOSA LMDZ directory layout and contents|ICOSA_LMDZ folder]] and compile there using one of the compile example files:&lt;br /&gt;
*compile_adastra-gnu&lt;br /&gt;
*compile_irene-amd&lt;br /&gt;
&lt;br /&gt;
It might be wise to make a copy, e.g.:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
cp compile_adastra-gnu compile&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
You can then edit the file to match [[The Target Architecture (&amp;quot;arch&amp;quot;) Files|your architecture (the '''arch''' parameter)]].&lt;br /&gt;
The file should look like this:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
arch=ifort_MESOIPSL # change according to your arch file&lt;br /&gt;
&lt;br /&gt;
# Gas Giants/generic:&lt;br /&gt;
make_icosa_lmdz -p generic -p_opt &amp;quot;-b 17x23&amp;quot; -parallel mpi -arch ${arch} -arch_path ../ARCH -job 8  -full -nodeps&lt;br /&gt;
&lt;br /&gt;
# Venus:&lt;br /&gt;
#make_icosa_lmdz -p venus -parallel mpi -arch ${arch} -arch_path ../ARCH -job 8 -full -nodeps&lt;br /&gt;
&lt;br /&gt;
# Mars:&lt;br /&gt;
# make_icosa_lmdz -p mars -parallel mpi -arch ${arch} -arch_path ../ARCH -job 8 -full -nodeps&lt;br /&gt;
&lt;br /&gt;
# Pluto:&lt;br /&gt;
# make_icosa_lmdz -p pluto -p_opt &amp;quot;-b 17x23&amp;quot; -parallel mpi -arch ${arch} -arch_path ../ARCH -job 8 -full -nodeps&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
You can change options such as the parallel version you will use (-parallel '''mpi''' or '''mpi_omp''') and the parameters for the physics package (with the -p_opt option; for example '''-b 17x23''' to specify the number of IR and visible bands for the radiative transfer). These are the options of [[The makelmdz fcm GCM Compilation Script]].&lt;br /&gt;
&lt;br /&gt;
Using the '''-full''' option, the script '''make_icosa_lmdz''' will recompile from scratch:&lt;br /&gt;
* [[IOIPSL]]&lt;br /&gt;
* [[XIOS]]&lt;br /&gt;
* [[Quick_Install_and_Run|LMDZ.COMMON (your physics)]]&lt;br /&gt;
* [[DYNAMICO|ICOSAGCM (DYNAMICO)]]&lt;br /&gt;
&lt;br /&gt;
and then link everything in the '''ICOSA_LMDZ/bin''' folder, under the name of the executable '''icosa_lmdz.exe'''.&lt;br /&gt;
&lt;br /&gt;
If you use the ''' -nodeps''' option, XIOS and IOIPSL will ''not'' be compiled (you should therefore follow the corresponding pages for their installation).&lt;br /&gt;
&lt;br /&gt;
The script should stop at each step of the compilation if it fails (for example the compilation of the physics package).&lt;br /&gt;
&lt;br /&gt;
You can also compile each step on its own and remove the '''-full''' option (you can keep the '''-nodeps''' option) (please click on each package name for explanations on how to install it).&lt;br /&gt;
&lt;br /&gt;
=== Known issues ===&lt;br /&gt;
&lt;br /&gt;
If you obtain an error of the type:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
XIOS/src/io/netcdf.hpp:20:12: fatal error: netcdf_par.h: Aucun fichier ou dossier de ce nom&lt;br /&gt;
   20 | #  include &amp;lt;netcdf_par.h&amp;gt;&lt;br /&gt;
      |            ^~~~~~~~~~~~~~&lt;br /&gt;
compilation terminated.&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This is due to the parallel version of NetCDF not being recognized by the compilation setup, typically for XIOS.&lt;br /&gt;
In that case, please open the '''arch.path''' file used for the build and change the path pointing to '''netcdf_par.h'''. The default path should look like this:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
NETCDF_INCDIR=&amp;quot;$(nc-config --cflags) $(nf-config --fflags) -I/usr/lib/x86_64-linux-gnu/netcdf/mpi/include&amp;quot;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
You can find the path where '''netcdf_par.h''' is located via:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
locate netcdf_par.h&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
And you can set the '''new_path''' found (without '''netcdf_par.h''') like this:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
NETCDF_INCDIR=&amp;quot;-Inew_path&amp;quot;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
You should probably do this for the library too (note that '''-L''' expects the directory containing the library, e.g. ''libnetcdf_mpi.so'', not the library file itself):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
NETCDF_LIBDIR=&amp;quot;-L/path/to/netcdf/mpi/lib&amp;quot;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Running DYNAMICO with a physics package ==&lt;br /&gt;
&lt;br /&gt;
You will now need to find the &amp;lt;font color=&amp;quot;green&amp;quot;&amp;gt;.xml&amp;lt;/font&amp;gt; files specific to your physics package, hopefully found in the &amp;lt;font color=&amp;quot;green&amp;quot;&amp;gt;deftank/&amp;lt;/font&amp;gt; folder of each package (for example LMDZ.PLUTO/deftank/dynamico).&lt;br /&gt;
&lt;br /&gt;
To transform an initial state ([[The start.nc and startfi.nc input files|start.nc and startfi.nc]]) from LMDZ.* physics (lat x lon grid) to a [[dynamico]] initial state (icosahedral grid), please follow the tutorial here: [[LMDZ to DYNAMICO start files]].&lt;br /&gt;
&lt;br /&gt;
Please refer to&lt;br /&gt;
* [[Venus_-_DYNAMICO#Running_Venus_-_DYNAMICO|Venus - DYNAMICO]]&lt;br /&gt;
* [[Mars Dynamico Installation manual|Mars - DYNAMICO]]&lt;br /&gt;
* [[Pluto Dynamico|Pluto - DYNAMICO]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
for the specific details over each physics package.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Category:FAQ]]&lt;br /&gt;
[[Category:DYNAMICO]]&lt;br /&gt;
[[Category:Generic-DYNAMICO]]&lt;br /&gt;
[[Category:Mars-DYNAMICO]]&lt;br /&gt;
[[Category:Venus-DYNAMICO]]&lt;br /&gt;
[[Category:Pluto-DYNAMICO]]&lt;/div&gt;</summary>
		<author><name>Emillour</name></author>	</entry>

	<entry>
		<id>http://lmdz-forge.lmd.jussieu.fr/mediawiki/Planets/index.php?title=Early_Mars_benchmark_-_DYNAMICO&amp;diff=3204</id>
		<title>Early Mars benchmark - DYNAMICO</title>
		<link rel="alternate" type="text/html" href="http://lmdz-forge.lmd.jussieu.fr/mediawiki/Planets/index.php?title=Early_Mars_benchmark_-_DYNAMICO&amp;diff=3204"/>
				<updated>2026-02-04T13:57:40Z</updated>
		
		<summary type="html">&lt;p&gt;Emillour: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Prerequisites ==&lt;br /&gt;
&lt;br /&gt;
First make sure you can run the early Mars benchmark with the LMDZ dynamical core.&lt;br /&gt;
Then install dynamico and XIOS (preferably in the same trunk directory).&lt;br /&gt;
Now that everything is working separately, let's put everything together.&lt;br /&gt;
&lt;br /&gt;
== Compiling dynamico with the generic PCM physics package ==&lt;br /&gt;
&lt;br /&gt;
First we need to compile dynamico with the physics package of the generic PCM. Go to the ICOSA_LMDZ folder. To compile, we will use the '''make_icosa_lmdz''' file, which is a wrapper for the compilation scripts of IOIPSL, XIOS, the generic PCM's physics (called by '''makelmdz_fcm -libphy''') and dynamico ('''make_icosa'''). Some compilation command line examples are provided in the ascii files '''make_irene_lmdz''' and '''compile_adastra-gnu'''. For our purpose, let's go with the following command:&lt;br /&gt;
&lt;br /&gt;
 make_icosa_lmdz -p generic -p_opt &amp;quot;-b 32x36&amp;quot; -parallel mpi_omp -arch {your_arch} -arch_path ../ARCH -job 8&lt;br /&gt;
&lt;br /&gt;
Make sure that you have been using the same architecture for installing all the pieces. If the compilation worked, the last lines of the prompt look something like:&lt;br /&gt;
&lt;br /&gt;
 -&amp;gt;Make               : 6 seconds&lt;br /&gt;
 -&amp;gt;TOTAL              : 7 seconds&lt;br /&gt;
 Build command finished on Mon Oct 21 15:05:13 2024.&lt;br /&gt;
&lt;br /&gt;
Notice that it will probably take a bit longer, in particular if XIOS needs to be compiled. The bin folder now contains an executable called '''icosa_lmdz.exe'''.&lt;br /&gt;
&lt;br /&gt;
== Adapting the early Mars benchmark ==&lt;br /&gt;
&lt;br /&gt;
Now go to the folder of the early Mars benchmark sources, and create a subfolder called &amp;quot;dynamico&amp;quot;. Copy and paste all the .def files from the benchmark into this folder, as well as your executable '''icosa_lmdz.exe'''.&lt;br /&gt;
&lt;br /&gt;
=== planetary_const.def ===&lt;br /&gt;
&lt;br /&gt;
We'll need additional .def files for the planetary constants and run information. You can simply copy and paste the '''const_earth.def''' for Earth used in the Held and Suarez test of dynamico into the local folder as '''mars_const.def''' (the name included by run.def below), and adapt it appropriately.&lt;br /&gt;
&lt;br /&gt;
=== run_icosa.def ===&lt;br /&gt;
&lt;br /&gt;
We'll also use a '''run_icosa.def''' file, for which you can find information here. You can modify it to have 40 triangle subdivisions, n_spliti and n_splitj equal to 4, and llm=15 (15 vertical levels, as in the benchmark). You also need to set disvert=plugin, meaning that the vertical discretization will be handled by the appropriate plugin from the generic model. Set day_step=480 and nqtot=3 (given by traceur.def). Deactivate Rayleigh friction (rayleigh_friction_type=none). Finally, make sure you have physics = phys_external. Now that all the run info is in the '''run_icosa.def''' file, '''run.def''' simply calls this file:&lt;br /&gt;
 ###########################################################################&lt;br /&gt;
 ### INCLUDE OTHER DEF FILES (physics, specific settings, etc...)&lt;br /&gt;
 ###########################################################################&lt;br /&gt;
 INCLUDEDEF=run_icosa.def&lt;br /&gt;
 &lt;br /&gt;
 INCLUDEDEF=mars_const.def&lt;br /&gt;
 &lt;br /&gt;
 INCLUDEDEF=callphys.def&lt;br /&gt;
 &lt;br /&gt;
 &lt;br /&gt;
 prt_level=0&lt;br /&gt;
 &lt;br /&gt;
 ## iphysiq must be same as itau_physics&lt;br /&gt;
 iphysiq=5&lt;br /&gt;
 #iphysiq=10&lt;br /&gt;
&lt;br /&gt;
Make sure the '''tracer.def''' (the tracer file for dynamico) is consistent with traceur.def (the tracer file for the physics).&lt;br /&gt;
&lt;br /&gt;
Contrary to the native benchmark, we'll need to use XIOS to run with dynamico. That involves a few more input files. You will find out which xml files you need and where to look for them in the trunk in the README.xml file, which is located in the xml subfolder of the ICOSA_LMDZ folder of the trunk.&lt;br /&gt;
&lt;br /&gt;
The '''context_lmdz_physics.xml''' file contains information relative to the physics grid. The original file creates a grid larger than we need. For the benchmark we will reduce the resolution to 32x32. To do so, add the following block, taking example on the already existing ones:&lt;br /&gt;
&lt;br /&gt;
 &amp;lt;domain id=&amp;quot;dom_32_32&amp;quot; type=&amp;quot;rectilinear&amp;quot; ni_glo=&amp;quot;32&amp;quot; nj_glo=&amp;quot;32&amp;quot;   &amp;gt;&lt;br /&gt;
   &amp;lt;generate_rectilinear_domain/&amp;gt;&lt;br /&gt;
   &amp;lt;interpolate_domain order=&amp;quot;1&amp;quot;/&amp;gt;&lt;br /&gt;
 &amp;lt;/domain&amp;gt;&lt;br /&gt;
&lt;br /&gt;
and indicate that we want to use this new grid as the output grid. To do so, modify the existing line defining variable '''dom_out''' as:&lt;br /&gt;
&lt;br /&gt;
 &amp;lt;domain id=&amp;quot;dom_out&amp;quot; domain_ref=&amp;quot;dom_32_32&amp;quot;/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Contrary to the LMDZ dynamical core, a dynamico run usually does not start from a start.nc initial state. Instead, an initial state is generated, from which the simulation really starts. Let's thus make two subfolders: one called &amp;quot;init&amp;quot; where we will generate the initial state, and one called &amp;quot;run&amp;quot; where the simulation will run.&lt;br /&gt;
&lt;br /&gt;
=== iodef.xml ===&lt;br /&gt;
Here simply change the line:&lt;br /&gt;
 &amp;lt;context id=&amp;quot;LMDZ&amp;quot; src=&amp;quot;./context_lmdz_physics.xml&amp;quot; /&amp;gt;&lt;br /&gt;
to:&lt;br /&gt;
 &amp;lt;context id=&amp;quot;LMDZ&amp;quot; src=&amp;quot;./context_pcm_physics.xml&amp;quot; /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Generating the initial state ==&lt;br /&gt;
In the run_icosa.def, comment out the following lines:&lt;br /&gt;
 #etat0=start_file&lt;br /&gt;
 #etat0_start_file_colocated=true&lt;br /&gt;
and uncomment instead:&lt;br /&gt;
 etat0=isothermal&lt;br /&gt;
 etat0_isothermal_temp=200&lt;br /&gt;
and&lt;br /&gt;
 etat0_ps_white_noise=0.01&lt;br /&gt;
 etat0_theta_rhodz_white_noise=0.01&lt;br /&gt;
&lt;br /&gt;
== Running the simulation ==&lt;/div&gt;</summary>
		<author><name>Emillour</name></author>	</entry>

	<entry>
		<id>http://lmdz-forge.lmd.jussieu.fr/mediawiki/Planets/index.php?title=The_makelmdz_fcm_GCM_Compilation_Script&amp;diff=3203</id>
		<title>The makelmdz fcm GCM Compilation Script</title>
		<link rel="alternate" type="text/html" href="http://lmdz-forge.lmd.jussieu.fr/mediawiki/Planets/index.php?title=The_makelmdz_fcm_GCM_Compilation_Script&amp;diff=3203"/>
				<updated>2026-02-04T13:56:45Z</updated>
		
		<summary type="html">&lt;p&gt;Emillour: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Overview ==&lt;br /&gt;
The '''makelmdz_fcm''' script is the (bash) script (located in the '''LMDZ.COMMON''' directory) to use to [[Quick_Install_and_Run#Compiling_the_GCM|compile the GCM]]. It is based on FCM and should be run with various options (e.g. which physics package to compile the model with, what grid resolution to use, etc.) to generate the sought executable.&lt;br /&gt;
&lt;br /&gt;
== makelmdz_fcm options ==&lt;br /&gt;
To list available options, run &amp;quot;makelmdz_fcm -h&amp;quot;, which should return something like:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
Usage :&lt;br /&gt;
makelmdz_fcm [options] -arch arch_name exec&lt;br /&gt;
[-h]                       : brief help&lt;br /&gt;
[-d [[IMx]JMx]LM]          : IM, JM, LM are the dimensions in x, y, z (default: 96x72x19)&lt;br /&gt;
[-s nscat]                 : (Generic) Number of radiatively active scatterers&lt;br /&gt;
[-b IRxVIS]                : (Generic) Number of infrared (IR) and visible (VIS) bands for radiative transfer&lt;br /&gt;
[-p PHYS]                  : set of physical parametrizations (in libf/phyPHYS), (default: lmd)&lt;br /&gt;
[-prod / -dev / -debug]    : compilation mode production (default) / developement / debug .&lt;br /&gt;
[-c false/MPI1/OMCT]       : (Earth) coupling with ocean model : MPI1/OMCT/false (default: false)&lt;br /&gt;
[-v false/orchidee2.0/orchidee1.9/true] : (Earth) version of the vegetation model to include (default: false)&lt;br /&gt;
          false       : no vegetation model&lt;br /&gt;
          orchidee2.0 : compile using ORCHIDEE 2.0 (or more recent version)&lt;br /&gt;
          orchidee1.9 : compile using ORCHIDEE up to the version including OpenMP in ORCHIDEE : tag 1.9-1.9.5(version AR5)-1.9.6&lt;br /&gt;
          true        : (obsolete; for backward compatibility) use ORCHIDEE tag 1.9-1.9.6&lt;br /&gt;
[-chimie INCA/false]       : (Earth) with INCA chemistry model or without (default: false)&lt;br /&gt;
[-cosp true/false]         : (Earth) add the cosp model (default: false)&lt;br /&gt;
[-sisvat true/false]  : (Earth) compile with/without sisvat package (default: false)&lt;br /&gt;
[-rrtm true/false]    : (Earth) compile with/without rrtm package (default: false)&lt;br /&gt;
[-dust true/false]    : (Earth) compile with/without the dust package by Boucher and co (default: false)&lt;br /&gt;
[-strataer true/false]    : (Earth) compile with/without the strat aer package by Boucher and co (default: false)&lt;br /&gt;
[-parallel none/mpi/omp/mpi_omp] : parallelism (default: none) : mpi, openmp or mixted mpi_openmp&lt;br /&gt;
[-g GRI]                   : grid configuration in dyn3d/GRI_xy.h  (default: reg, inclues a zoom)&lt;br /&gt;
[-io ioipsl/mix/xios]                   : Input/Output library (default: ioipsl)&lt;br /&gt;
[-include INCLUDES]        : extra include path to add&lt;br /&gt;
[-cpp CPP_KEY]             : additional preprocessing definitions&lt;br /&gt;
[-adjnt]                   : adjoint model, not operational ...&lt;br /&gt;
[-mem]                     : reduced memory dynamics (if in parallel mode)&lt;br /&gt;
[-filtre NOMFILTRE]        : use filtre from libf/NOMFILTRE (default: filtrez)&lt;br /&gt;
[-link LINKS]              : additional links with other libraries&lt;br /&gt;
[-j n]                     : active parallel compiling on ntask&lt;br /&gt;
[-full]                    : full (re-)compilation (from scratch)&lt;br /&gt;
[-libphy]                  : only compile physics package (no dynamics or main program)&lt;br /&gt;
[-fcm_path path]           : path to the fcm tool (default: tools/fcm/bin)&lt;br /&gt;
[-ext_src path]            : path to an additional set of routines to compile with the model&lt;br /&gt;
[-arch_path path]          : path to architecture files (default: arch)&lt;br /&gt;
 -arch arch                : target architecture &lt;br /&gt;
 exec                      : executable to build&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Note that some options are meaningless for the Generic GCM. This is because we try to coordinate tools like '''makelmdz_fcm''' between the different physics packages.&lt;br /&gt;
&lt;br /&gt;
The only mandatory arguments are '''-arch''' and '''exec''', but in practice you'll need to specify many more (as defaults rarely suit all needs).&lt;br /&gt;
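&lt;br /&gt;
For illustration, a typical invocation combining the options detailed below could look like the following sketch (here &amp;lt;code&amp;gt;local&amp;lt;/code&amp;gt; is assumed to be the name of one of your arch files):&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
# generic physics, 64x48x30 grid, 38 IR / 36 VIS bands,&lt;br /&gt;
# mixed MPI/OpenMP, 8 parallel compile tasks, full recompilation&lt;br /&gt;
./makelmdz_fcm -d 64x48x30 -b 38x36 -p generic -parallel mpi_omp -j 8 -full -arch local gcm&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;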
 &lt;br /&gt;
== Details of main makelmdz_fcm options ==&lt;br /&gt;
&amp;lt;pre&amp;gt;-d IMxJMxLM&amp;lt;/pre&amp;gt;&lt;br /&gt;
As the default is to run on a fixed longitude-latitude grid (set at compile time), option &amp;quot;-d&amp;quot; is necessary to specify the number of grid points IIMxJJMxLLM (strictly speaking, iim is the number of intervals along longitude, jjm the number of intervals along latitude and llm the number of atmospheric layers). For example &amp;lt;code&amp;gt;-d 64x48x30&amp;lt;/code&amp;gt; requests 64 intervals in longitude, 48 in latitude and 30 atmospheric layers.&lt;br /&gt;
&lt;br /&gt;
If compiling one of the 1D models, then only the number of layers (llm) needs be specified, e.g. &amp;lt;code&amp;gt;-d 78&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;-prod / -dev / -debug&amp;lt;/pre&amp;gt;&lt;br /&gt;
Compilation mode. Default is &amp;lt;code&amp;gt;-prod&amp;lt;/code&amp;gt;, i.e. &amp;quot;production&amp;quot; mode, where compiler optimizations are on (as defined in the arch files; see [[The_Target_Architecture_(&amp;quot;arch&amp;quot;)_Files]] for details). When checking/debugging you definitely want to set &amp;lt;code&amp;gt;-debug&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;-j n&amp;lt;/pre&amp;gt;&lt;br /&gt;
With this option one can speed up compilation by letting &amp;lt;code&amp;gt;make&amp;lt;/code&amp;gt; compile, when possible, up to &amp;lt;code&amp;gt;n&amp;lt;/code&amp;gt; routines simultaneously (note that this &amp;quot;parallel compilation&amp;quot; has nothing to do with the code being compiled for serial or parallel use as specified via option &amp;lt;code&amp;gt;-parallel ...&amp;lt;/code&amp;gt;). In practice using &amp;lt;code&amp;gt;-j 8&amp;lt;/code&amp;gt; or &amp;lt;code&amp;gt;-j 4&amp;lt;/code&amp;gt; works well. &lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;-parallel none/mpi/omp/mpi_omp&amp;lt;/pre&amp;gt;&lt;br /&gt;
This option is to specify whether the model should be compiled in serial mode (default) or in parallel using MPI (&amp;lt;code&amp;gt;mpi&amp;lt;/code&amp;gt;) only, or using OpenMP (&amp;lt;code&amp;gt;omp&amp;lt;/code&amp;gt;) only, or both MPI and OpenMP (&amp;lt;code&amp;gt;mpi_omp&amp;lt;/code&amp;gt;).&lt;br /&gt;
&lt;br /&gt;
In practice, one most often needs to run in parallel and using both MPI and OpenMP, so &amp;lt;code&amp;gt;-parallel mpi_omp&amp;lt;/code&amp;gt; is advised.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;-io ioipsl/noioipsl/mix/xios&amp;lt;/pre&amp;gt;&lt;br /&gt;
This option specifies which IO (Input/Output) library is going to be used by the model. Default is &amp;lt;code&amp;gt;-io ioipsl&amp;lt;/code&amp;gt;, which is becoming deprecated for the GCM but remains mandatory for the 1D model. Note that when compiling with XIOS (&amp;lt;code&amp;gt;-io xios&amp;lt;/code&amp;gt;) one still needs the IOIPSL library (which handles the reading of the run.def and companion files). It is also possible to compile without the IOIPSL or XIOS libraries by specifying &amp;lt;code&amp;gt;-io noioipsl&amp;lt;/code&amp;gt; (using an internal version of the getin() function for reading the *.def files; not recommended, but might help if building IOIPSL is a problem).&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;-libphy&amp;lt;/pre&amp;gt;&lt;br /&gt;
With this option, only the physics package is compiled (the longitude-latitude dynamics routines are excluded) as a library, and no main program is generated. Building this library is a mandatory step to run with other dynamical cores like DYNAMICO or WRF.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;-full&amp;lt;/pre&amp;gt;&lt;br /&gt;
Imposes a full cleanup (i.e. removal of previously built object and library files) to ensure the model is recompiled from scratch.&lt;br /&gt;
&lt;br /&gt;
== Details of some specific makelmdz_fcm options ==&lt;br /&gt;
One chooses which physics package will be used using the &lt;br /&gt;
&amp;lt;pre&amp;gt;-p arg&amp;lt;/pre&amp;gt;&lt;br /&gt;
option, where &amp;lt;code&amp;gt;arg&amp;lt;/code&amp;gt; implies that the corresponding code will be found in '''LMDZ.COMMON/libf/phyarg''' and optionally also in '''LMDZ.COMMON/libf/aeronoarg'''. In practice these are just links to the package '''LMDZ.ARG/libf/phyarg''' and '''LMDZ.ARG/libf/aeronoarg''' (see [[LMDZ.COMMON directory layout and contents]], [[LMDZ.GENERIC directory layout and contents]], etc.).&lt;br /&gt;
&lt;br /&gt;
=== Generic model specific options ===&lt;br /&gt;
To compile with the Generic physics package, the adequate argument to &amp;lt;code&amp;gt;-p&amp;lt;/code&amp;gt; is &amp;lt;code&amp;gt;generic&amp;lt;/code&amp;gt;, i.e. corresponding code will be found in '''LMDZ.COMMON/libf/phygeneric''' and '''LMDZ.COMMON/libf/aeronogeneric''', which are simply links to '''LMDZ.GENERIC/libf/phygeneric''' and '''LMDZ.GENERIC/libf/aeronogeneric''':&lt;br /&gt;
&amp;lt;pre&amp;gt;-p generic&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Additional options one must provide when compiling with the Generic physics include: &lt;br /&gt;
&amp;lt;pre&amp;gt;-b IRxVIS&amp;lt;/pre&amp;gt;&lt;br /&gt;
Number of bands in the InfraRed and Visible for the radiative transfer. Note that this requires that the corresponding input files are available (at run time).&lt;br /&gt;
&lt;br /&gt;
=== Mars model specific options ===&lt;br /&gt;
To compile with the Mars physics package, the adequate argument to &amp;lt;code&amp;gt;-p&amp;lt;/code&amp;gt; is &amp;lt;code&amp;gt;mars&amp;lt;/code&amp;gt;, i.e. corresponding code will be found in '''LMDZ.COMMON/libf/phymars''' and '''LMDZ.COMMON/libf/aeronomars''', which are simply links to '''LMDZ.MARS/libf/phymars''' and '''LMDZ.MARS/libf/aeronomars''':&lt;br /&gt;
&amp;lt;pre&amp;gt;-p mars&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Venus model specific options ===&lt;br /&gt;
To compile with the Venus physics package, the adequate argument to &amp;lt;code&amp;gt;-p&amp;lt;/code&amp;gt; is &amp;lt;code&amp;gt;venus&amp;lt;/code&amp;gt;, i.e. corresponding code will be found in '''LMDZ.COMMON/libf/phyvenus''', which is simply a link to '''LMDZ.VENUS/libf/phyvenus''':&lt;br /&gt;
&amp;lt;pre&amp;gt;-p venus&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Titan model specific options ===&lt;br /&gt;
To compile with the Titan physics package, the adequate argument to &amp;lt;code&amp;gt;-p&amp;lt;/code&amp;gt; is &amp;lt;code&amp;gt;titan&amp;lt;/code&amp;gt;, i.e. corresponding code will be found in '''LMDZ.COMMON/libf/phytitan''', '''LMDZ.COMMON/libf/muphytitan''' and '''LMDZ.COMMON/libf/chimtitan''', which are simply links to '''LMDZ.TITAN/libf/phytitan''', '''LMDZ.TITAN/libf/muphytitan''' and '''LMDZ.TITAN/libf/chimtitan''':&lt;br /&gt;
&amp;lt;pre&amp;gt;-p titan&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Troubleshooting ==&lt;br /&gt;
If, when you run &amp;lt;code&amp;gt;makelmdz_fcm&amp;lt;/code&amp;gt;, you get the following error message:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
dirname: missing operand&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
This is because the &amp;lt;code&amp;gt;fcm&amp;lt;/code&amp;gt; command is not available, either because you have not installed &amp;quot;fcm&amp;quot; (see the relevant &amp;quot;Overview&amp;quot; page for the PCM you are using) or because you have not added the &amp;lt;code&amp;gt;FCM_V1.2/bin&amp;lt;/code&amp;gt; directory to your &amp;lt;code&amp;gt;PATH&amp;lt;/code&amp;gt; environment variable.&lt;br /&gt;
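&lt;br /&gt;
In the latter case, assuming for illustration that the &amp;lt;code&amp;gt;FCM_V1.2&amp;lt;/code&amp;gt; directory sits in your home directory, you could add it to your &amp;lt;code&amp;gt;PATH&amp;lt;/code&amp;gt; with:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
# adapt the path to wherever FCM_V1.2 is actually installed&lt;br /&gt;
export PATH=$PATH:$HOME/FCM_V1.2/bin&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;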
&lt;br /&gt;
[[Category:Generic-Model]]&lt;br /&gt;
[[Category:Mars-Model]]&lt;br /&gt;
[[Category:Venus-Model]]&lt;br /&gt;
[[Category:Titan-Model]]&lt;/div&gt;</summary>
		<author><name>Emillour</name></author>	</entry>

	<entry>
		<id>http://lmdz-forge.lmd.jussieu.fr/mediawiki/Planets/index.php?title=Other_GCM_Configurations_worth_knowing_about&amp;diff=3202</id>
		<title>Other GCM Configurations worth knowing about</title>
		<link rel="alternate" type="text/html" href="http://lmdz-forge.lmd.jussieu.fr/mediawiki/Planets/index.php?title=Other_GCM_Configurations_worth_knowing_about&amp;diff=3202"/>
				<updated>2026-02-04T13:54:51Z</updated>
		
		<summary type="html">&lt;p&gt;Emillour: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
= 3D lon-lat LMDZ setup =&lt;br /&gt;
&lt;br /&gt;
== early Mars ==&lt;br /&gt;
&lt;br /&gt;
It is already described in the [https://lmdz-forge.lmd.jussieu.fr/mediawiki/Planets/index.php/Quick_Install_and_Run ''Quick Install and Run''] section.&lt;br /&gt;
&lt;br /&gt;
== Earth with slab ocean ==&lt;br /&gt;
&lt;br /&gt;
TBD by Siddharth, once all changes have been committed (also need a validation of the model on Earth to be sure)&lt;br /&gt;
&lt;br /&gt;
== TRAPPIST-1e with photochemistry ==&lt;br /&gt;
&lt;br /&gt;
A temperate rocky planet in synchronous rotation around a low mass star.&lt;br /&gt;
&lt;br /&gt;
Here is an example to simulate the planet TRAPPIST-1e with an Earth atmosphere using the photochemical module of the GCM.&lt;br /&gt;
&lt;br /&gt;
To install the model and run it, follow [https://lmdz-forge.lmd.jussieu.fr/mediawiki/Planets/index.php/Quick_Install_and_Run ''Quick Install and Run''] but with the following changes:&lt;br /&gt;
&lt;br /&gt;
=== GCM Input Datafiles and Datasets ===&lt;br /&gt;
In section [https://lmdz-forge.lmd.jussieu.fr/mediawiki/Planets/index.php?title=Quick_Install_and_Run&amp;amp;action=edit&amp;amp;section=9 ''GCM Input Datafiles and Datasets''], download the TRAPPIST-1e files (instead of the early Mars files):&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
wget -nv --no-check-certificate https://web.lmd.jussieu.fr/~lmdz/planets/generic/reference_setups/bench_trappist1e_photochemistry_64x48x30_b38x36.tar.gz&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
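&lt;br /&gt;
The archive can then be unpacked the usual way:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
tar xvzf bench_trappist1e_photochemistry_64x48x30_b38x36.tar.gz&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;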
&lt;br /&gt;
You will find the same type of files as in the early Mars case, plus an additional folder containing the chemical network files:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
callphys.def  gases.def  startfi.nc  traceur.def&lt;br /&gt;
datadir/      run.def    start.nc    z2sig.def&lt;br /&gt;
chemnetwork/&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Compiling the GCM ===&lt;br /&gt;
==== Prior to a first compilation: setting up the target architecture files ====&lt;br /&gt;
The chemical solver requires the BLAS and LAPACK libraries, which need to be specified in the '''arch*.fcm''' file:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
%COMPILER            gfortran&lt;br /&gt;
%LINK                gfortran&lt;br /&gt;
%AR                  ar&lt;br /&gt;
%MAKE                make&lt;br /&gt;
%FPP_FLAGS           -P -traditional&lt;br /&gt;
%FPP_DEF             NC_DOUBLE LAPACK BLAS SGEMV=DGEMV SGEMM=DGEMM&lt;br /&gt;
%BASE_FFLAGS         -c -fdefault-real-8 -fdefault-double-8 -ffree-line-length-none -fno-align-commons&lt;br /&gt;
%PROD_FFLAGS         -O3&lt;br /&gt;
%DEV_FFLAGS          -O&lt;br /&gt;
%DEBUG_FFLAGS        -ffpe-trap=invalid,zero,overflow -fbounds-check -g3 -O0 -fstack-protector-all -finit-real=snan -fbacktrace&lt;br /&gt;
%MPI_FFLAGS&lt;br /&gt;
%OMP_FFLAGS         &lt;br /&gt;
%BASE_LD             -llapack -lblas&lt;br /&gt;
%MPI_LD&lt;br /&gt;
%OMP_LD              &lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
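&lt;br /&gt;
On most Linux systems you can quickly check that the BLAS and LAPACK shared libraries are indeed installed with, e.g.:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
# list the cached shared libraries and look for BLAS/LAPACK&lt;br /&gt;
ldconfig -p | grep -E &amp;quot;liblapack|libblas&amp;quot;&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;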
&lt;br /&gt;
==== Specific to photochemistry: set hard coded reactions ====&lt;br /&gt;
In '''/LMDZ.GENERIC/libf/aeronogeneric/chimiedata_h.F90''' you can hard-code reactions if needed, for instance because a reaction rate is too specific to fit the generic formula, or because a photochemical reaction does not use a regular cross section.&lt;br /&gt;
&lt;br /&gt;
The TRAPPIST-1e test case uses 3 hard-coded reactions.&lt;br /&gt;
&lt;br /&gt;
*Uncomment the following lines to fill reaction species indexes:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;fortran&amp;quot;&amp;gt;&lt;br /&gt;
!===========================================================&lt;br /&gt;
!      r001 : HNO3 + rain -&amp;gt; H2O&lt;br /&gt;
!===========================================================&lt;br /&gt;
nb_phot = nb_phot + 1&lt;br /&gt;
indice_phot(nb_phot) = z3spec(1.0, indexchim('hno3'), 1.0, indexchim('h2o_vap'), 0.0, 1)&lt;br /&gt;
&lt;br /&gt;
!===========================================================&lt;br /&gt;
!      e001 : CO + OH -&amp;gt; CO2 + H &lt;br /&gt;
!===========================================================&lt;br /&gt;
nb_reaction_4 = nb_reaction_4 + 1&lt;br /&gt;
indice_4(nb_reaction_4) = z4spec(1.0, indexchim('co'), 1.0, indexchim('oh'), 1.0, indexchim('co2'), 1.0, indexchim('h'))&lt;br /&gt;
&lt;br /&gt;
!ccccccccccccccccccccccccccccccccccccccccccccccccccccccccccccccccccc&lt;br /&gt;
!     photodissociation of NO : NO + hv -&amp;gt; N + O&lt;br /&gt;
!ccccccccccccccccccccccccccccccccccccccccccccccccccccccccccccccccccc&lt;br /&gt;
nb_phot = nb_phot + 1&lt;br /&gt;
indice_phot(nb_phot) = z3spec(1.0, indexchim('no'), 1.0, indexchim('n'), 1.0, indexchim('o'))&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
*Uncomment the following lines to fill reaction rates:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;fortran&amp;quot;&amp;gt;&lt;br /&gt;
!----------------------------------------------------------------------&lt;br /&gt;
!     carbon reactions&lt;br /&gt;
!----------------------------------------------------------------------&lt;br /&gt;
     &lt;br /&gt;
!---  e001: oh + co -&amp;gt; co2 + h&lt;br /&gt;
&lt;br /&gt;
      nb_reaction_4 = nb_reaction_4 + 1&lt;br /&gt;
&lt;br /&gt;
!     joshi et al., 2006&lt;br /&gt;
&lt;br /&gt;
      do ilev = 1,nlayer&lt;br /&gt;
         k1a0 = 1.34*2.5*dens(ilev)                                  &amp;amp;&lt;br /&gt;
               *1/(1/(3.62e-26*t(ilev)**(-2.739)*exp(-20./t(ilev)))  &amp;amp;&lt;br /&gt;
               + 1/(6.48e-33*t(ilev)**(0.14)*exp(-57./t(ilev))))     ! typo in paper corrected&lt;br /&gt;
         k1b0 = 1.17e-19*t(ilev)**(2.053)*exp(139./t(ilev))          &amp;amp;&lt;br /&gt;
              + 9.56e-12*t(ilev)**(-0.664)*exp(-167./t(ilev))&lt;br /&gt;
         k1ainf = 1.52e-17*t(ilev)**(1.858)*exp(28.8/t(ilev))        &amp;amp;&lt;br /&gt;
                + 4.78e-8*t(ilev)**(-1.851)*exp(-318./t(ilev))&lt;br /&gt;
         x = k1a0/(k1ainf - k1b0)&lt;br /&gt;
         y = k1b0/(k1ainf - k1b0)&lt;br /&gt;
         fc = 0.628*exp(-1223./t(ilev)) + (1. - 0.628)*exp(-39./t(ilev))  &amp;amp;&lt;br /&gt;
            + exp(-t(ilev)/255.)&lt;br /&gt;
         fx = fc**(1./(1. + (alog(x))**2))                           ! typo in paper corrected&lt;br /&gt;
         k1a = k1a0*((1. + y)/(1. + x))*fx&lt;br /&gt;
         k1b = k1b0*(1./(1.+x))*fx&lt;br /&gt;
            &lt;br /&gt;
         v_4(ilev,nb_reaction_4) = k1a + k1b&lt;br /&gt;
      end do&lt;br /&gt;
&lt;br /&gt;
!----------------------------------------------------------------------&lt;br /&gt;
!     washout r001 : HNO3 + rain -&amp;gt; H2O&lt;br /&gt;
!----------------------------------------------------------------------&lt;br /&gt;
&lt;br /&gt;
nb_phot = nb_phot + 1&lt;br /&gt;
      &lt;br /&gt;
rain_h2o  = 100.e-6&lt;br /&gt;
!rain_rate = 1.e-6  ! 10 days&lt;br /&gt;
rain_rate = 1.e-8&lt;br /&gt;
      &lt;br /&gt;
do ilev = 1,nlayer&lt;br /&gt;
   if (c(ilev,indexchim('h2o_vap'))/dens(ilev) &amp;gt;= rain_h2o) then&lt;br /&gt;
      v_phot(ilev,nb_phot) = rain_rate&lt;br /&gt;
   else&lt;br /&gt;
      v_phot(ilev,nb_phot) = 0.&lt;br /&gt;
   end if&lt;br /&gt;
end do&lt;br /&gt;
&lt;br /&gt;
!ccccccccccccccccccccccccccccccccccccccccccccccccccccccccccccccccccc&lt;br /&gt;
!     photodissociation of NO&lt;br /&gt;
!ccccccccccccccccccccccccccccccccccccccccccccccccccccccccccccccccccc&lt;br /&gt;
      &lt;br /&gt;
nb_phot = nb_phot + 1&lt;br /&gt;
      &lt;br /&gt;
colo3(nlayer) = 0.&lt;br /&gt;
!     ozone columns for other levels (molecule.cm-2)&lt;br /&gt;
do ilev = nlayer-1,1,-1&lt;br /&gt;
   colo3(ilev) = colo3(ilev+1) + (c(ilev+1,indexchim('o3')) + c(ilev,indexchim('o3')))*0.5*avocado*1e-4*((press(ilev) - press(ilev+1))*100.)/(1.e-3*zmmean(ilev)*g*dens(ilev))&lt;br /&gt;
end do&lt;br /&gt;
call jno(nlayer, c(nlayer:1:-1,indexchim('no')), c(nlayer:1:-1,indexchim('o2')), colo3(nlayer:1:-1), dens(nlayer:1:-1), press(nlayer:1:-1), sza, v_phot(nlayer:1:-1,nb_phot))&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
*Change the following lines to set the number of hard-coded reactions (here 2 &amp;quot;photolysis-type&amp;quot; processes, namely the HNO3 washout and the NO photodissociation, plus 1 four-species reaction, matching the three reactions above):&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;fortran&amp;quot;&amp;gt;&lt;br /&gt;
integer, parameter :: nphot_hard_coding = 2&lt;br /&gt;
integer, parameter :: n4_hard_coding    = 1&lt;br /&gt;
integer, parameter :: n3_hard_coding    = 0&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Compiling a test case (TRAPPIST-1e) ===&lt;br /&gt;
Change the following compiling option:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
-d 64x48x30 -b 38x36&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
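&lt;br /&gt;
In other words, the full compilation command could look like the following sketch (assuming an arch file named &amp;lt;code&amp;gt;local&amp;lt;/code&amp;gt;):&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
./makelmdz_fcm -d 64x48x30 -b 38x36 -p generic -j 8 -arch local gcm&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;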
&lt;br /&gt;
NB: changing option -b is mandatory, whereas option -d may be set to a lower or higher resolution (provided '''z2sig.def''' remains consistent with the number of altitude levels, i.e. it defines at least as many levels as requested).&lt;br /&gt;
&lt;br /&gt;
== TRAPPIST-1c in Venus-like conditions ==&lt;br /&gt;
&lt;br /&gt;
A warm rocky planet in synchronous rotation around a low mass star. Here we provide an '''example''' to simulate the atmosphere of Trappist-1c, assuming it evolved to a modern Venus-like atmosphere.&lt;br /&gt;
&lt;br /&gt;
The planetary parameters are taken from [https://arxiv.org/abs/2010.01074 Agol et al. 2021] and can be found in this table: [[Media:Planetary_parameters_Trappist1c.png]]&lt;br /&gt;
&lt;br /&gt;
First, install and run the model following [[Quick Install and Run]], but instead of the ''Early Mars files'', download ''bench_trappist1c_64x48x50_b32x36'' using this command:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
wget -nv --no-check-certificate https://web.lmd.jussieu.fr/~lmdz/planets/generic/reference_setups/bench_trappist1c_64x48x50_b32x36.tar.gz&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Compiling a test case (TRAPPIST-1c) ===&lt;br /&gt;
Change the following compiling option:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
-d 64x48x50 -b 32x36&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
You will find the same type of ASCII *.def files as in the ''Early Mars'' case, but adapted to the characteristics and orbital parameters of TRAPPIST-1c.&lt;br /&gt;
In particular ''callphys.def'' contains the following changes:&lt;br /&gt;
&lt;br /&gt;
* The planet is assumed to be in 1:1 spin-orbit resonance, therefore&lt;br /&gt;
   Diurnal = .false. &lt;br /&gt;
   Tlocked = .true.&lt;br /&gt;
* The planet equilibrium temperature is about 342 K&lt;br /&gt;
   tplanet    = 341.9&lt;br /&gt;
* The host star is a late spectral type M8V, with a stellar flux at 1 AU of 0.7527 [W m-2]&lt;br /&gt;
   startype = 9&lt;br /&gt;
   Fat1AU = 0.7527&lt;br /&gt;
* Fixed aerosol distribution, no radiatively active tracers (no evaporation/condensation of H2O and CO2):&lt;br /&gt;
   aerofixed     = .true.&lt;br /&gt;
   aeroco2       = .false.&lt;br /&gt;
   aeroh2o       = .false.&lt;br /&gt;
* No water cycle model, no water cloud formation or water precipitation, no CO2 condensation:&lt;br /&gt;
   water         = .false.&lt;br /&gt;
   watercond     = .false.&lt;br /&gt;
   waterrain     = .false.&lt;br /&gt;
   hydrology     = .false.&lt;br /&gt;
   nonideal      = .true.&lt;br /&gt;
   co2cond       = .false.&lt;br /&gt;
* Following [https://www.sciencedirect.com/science/article/pii/S0032063313002596?via%3Dihub Haus et al. 2015] a prescribed radiatively active cloud model is included. &lt;br /&gt;
It can be activated/deactivated with the flag ''aerovenus''.&lt;br /&gt;
   aerovenus = .true.&lt;br /&gt;
* Modes 1, 2, 2p, 3 and the &amp;quot;unknown&amp;quot; UV absorber can be included/excluded by setting the following keywords to true/false. The characteristics of each mode (e.g. effective radius, effective variance) are based on Venus Express/ESA observations and can be found in this table: [[Media:Table1 aerosolVenus trappist1c.png]]&lt;br /&gt;
   aerovenus1    = .true.&lt;br /&gt;
   aerovenus2    = .true.&lt;br /&gt;
   aerovenus2p   = .true.&lt;br /&gt;
   aerovenus3    = .true.&lt;br /&gt;
   aerovenusUV   = .true.&lt;br /&gt;
&lt;br /&gt;
The cloud model is prescribed between the 1 bar and 0.037 bar pressure layers. For each mode, the top/bottom pressures can be modified by hard-coding them in the routine ''aeropacity.F90''.&lt;br /&gt;
Below is an example for mode 1 particles, where the top and bottom pressure layers are prescribed at 0.1 bar and 1 bar, respectively:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;fortran&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
!       1. Initialization&lt;br /&gt;
          aerosol(1:ngrid,1:nlayer,iaer)=0.0&lt;br /&gt;
          p_bot = 1.e5 ! bottom pressure [Pa]&lt;br /&gt;
          p_top = 1.e4 ! top pressure [Pa]&lt;br /&gt;
          h_bot = 1.0e3 ! bottom scale height [m]&lt;br /&gt;
          h_top = 5.0e3 ! top scale height [m]&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
 &lt;br /&gt;
'''TO BE COMPLETED BY GABRIELLA'''&lt;br /&gt;
&lt;br /&gt;
== mini-Neptune GJ1214b ==&lt;br /&gt;
&lt;br /&gt;
A warm mini-Neptune&lt;br /&gt;
&lt;br /&gt;
'''TO BE COMPLETED BY BENJAMIN'''&lt;br /&gt;
&lt;br /&gt;
= 3D DYNAMICO setup =&lt;br /&gt;
&lt;br /&gt;
Due to the rich dynamical activity in their atmospheres (banded zonal jets, eddies, vortices, storms, equatorial oscillations, ...) resulting from multi-scale dynamical interactions, global climate modelling of the giant planets requires resolving the eddies arising from hydrodynamical instabilities in order to correctly establish the planetary-scale jet regime. To this purpose, the Rossby deformation radius $$L_D$$, which is the length scale at which rotational effects become as important as buoyancy or gravity wave effects in the evolution of the flow about some disturbance, is calculated to determine the most suitable horizontal grid resolution. At mid-latitudes, $$L_D$$ for the giant planets is of the same order of magnitude as for the Earth. As the giant planets (i.e., Jupiter and Saturn) are roughly 10 times the size of the Earth, the model grid must have a horizontal resolution of 0.5$$^{\circ}$$ in longitude and latitude (vs 5$$^{\circ}$$ for the Earth), considering 3 grid points to resolve $$L_D$$. &lt;br /&gt;
Moreover, to have a chance to model the equatorial oscillation, meridional cell circulations and/or a seasonal inter-hemispheric circulation, a giant-planet GCM must also include a high vertical resolution. Indeed, these climate phenomena have been studied for decades in the Earth's atmosphere, and result from small- and large-scale interactions between the troposphere and the stratosphere. This implies that the propagation of dynamical instabilities, waves and turbulence should be resolved as far as possible along the vertical. Contrary to the horizontal resolution, there is no real criterion (similar to $$L_D$$) to determine the most suitable vertical grid resolution; it remains an adjustable parameter depending on the processes to be represented. However, we advise the user to set a vertical resolution of at least 5 grid points per scale height as a first step.    &lt;br /&gt;
Finally, these atmospheres are cold, with long radiative response times, which requires radiative transfer computations over decades-long simulated years, depending on the chosen planet: a Jupiter year $$\approx$$ 12 Earth years, a Saturn year $$\approx$$ 30 Earth years, a Uranus year $$\approx$$ 84 Earth years and a Neptune year $$\approx$$ 165 Earth years.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
To deal with these three (non-exhaustive) requirements when building a giant-planet GCM, we need massive computational resources. For this, we use a dynamical core suitable for, and numerically stable on, massively parallel computing resources: [[The_DYNAMICO_dynamical_core | DYNAMICO]] [Dubos et al., 2015].  &lt;br /&gt;
 &lt;br /&gt;
&lt;br /&gt;
In the two following subsections, we propose example installations for Jupiter and a Hot Jupiter. All the installation, compilation, setting and parameter files for each giant planet can be found at: https://gitlab.in2p3.fr/aymeric.spiga/dynamico-giant (the old repo is archived as read-only at https://github.com/aymeric-spiga/dynamico-giant)&lt;br /&gt;
&lt;br /&gt;
The [[Dynamico-giant | DYNAMICO-giant wiki is here]]&lt;br /&gt;
&lt;br /&gt;
If you have already downloaded '''LMDZ.COMMON''', '''LMDZ.GENERIC''', '''IOIPSL''', '''ARCH''', you only have to download:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''ICOSAGCM''': the DYNAMICO dynamical core&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
git clone https://gitlab.in2p3.fr/ipsl/projets/dynamico/dynamico.git ICOSAGCM&lt;br /&gt;
cd ICOSAGCM&lt;br /&gt;
git checkout 110016896ae9e85e614af43223b18fe38f211020   # version of 6 Nov. 2024&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
'''ICOSA_LMDZ''': the interface used to link the LMDZ.GENERIC physics packages and ICOSAGCM&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
svn update -r 3729 -q ICOSA_LMDZ   # version of 18 Apr. 2025&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''XIOS (XML Input Output Server)''': the library used to interpolate input/output fields between the icosahedral and regular longitude/latitude grids on the fly&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
svn co -r 2626 -q http://forge.ipsl.jussieu.fr/ioserver/svn/XIOS/trunk XIOS   # version of 22 Mar. 2024&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
If you haven't already downloaded '''LMDZ.COMMON''', '''LMDZ.GENERIC''', '''IOIPSL''' and '''ARCH''', you can use the '''install.sh''' script provided in the GitLab repository. &lt;br /&gt;
  &lt;br /&gt;
&lt;br /&gt;
Once each part of the GCM is downloaded, you can compile it. &lt;br /&gt;
Firstly, you have to define your [[The_Target_Architecture_(&amp;quot;arch&amp;quot;)_Files | target architecture file ]] (hereafter named YOUR_ARCH_FILE), where you will fill in all the necessary information about the local environment: where libraries are located, which compiler and compiler options will be used, etc.&lt;br /&gt;
Some architecture files related to specific machines are provided in the '''ARCH''' directory, which are referenced in the following lines without the prefix 'arch-' (i.e., arch-X64_IRENE-AMD.env will be referenced as X64_IRENE-AMD).  &lt;br /&gt;
&lt;br /&gt;
The main specificity of DYNAMICO-giant is that every main part of the model ('''ICOSAGCM''', '''LMDZ.COMMON''' and '''LMDZ.GENERIC''') is compiled as a library, and the settings and running configuration are managed by the '''ICOSA_LMDZ''' interface.&lt;br /&gt;
&lt;br /&gt;
First, you have to compile '''IOIPSL''',&lt;br /&gt;
 &amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
cd LMDZ.COMMON/ioipsl/                                                                                                             &lt;br /&gt;
    ./install_ioipsl_YOUR-MACHINE.bash&lt;br /&gt;
cd ../../&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt; &lt;br /&gt;
then '''XIOS''' library, &lt;br /&gt;
 &amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
cd XIOS/                                                                                                               &lt;br /&gt;
    ./make_xios --prod --arch YOUR_ARCH_FILE --arch_path ../ARCH --job 8 --full&lt;br /&gt;
cd -&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt; &lt;br /&gt;
the physics package,&lt;br /&gt;
 &amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
cd LMDZ.COMMON/                                                                                                        &lt;br /&gt;
    ./makelmdz_fcm -p generic -p_opt &amp;quot;-b 20x25&amp;quot; -prod -parallel mpi -libphy -io xios -arch YOUR_ARCH_FILE -arch_path ../ARCH -j 8 -full&lt;br /&gt;
cd -&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt; &lt;br /&gt;
the dynamical core '''DYNAMICO''' (located in '''ICOSAGCM''' directory, named from the icosahedral shape of the horizontal mesh),&lt;br /&gt;
 &amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
cd ICOSAGCM/&lt;br /&gt;
    ./make_icosa -prod -parallel mpi -external_ioipsl -with_xios -arch YOUR_ARCH_FILE -arch_path ../ARCH -job 8 -full&lt;br /&gt;
cd -&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt; &lt;br /&gt;
and finally the '''ICOSA_LMDZ''' interface&lt;br /&gt;
 &amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
cd ICOSA_LMDZ/&lt;br /&gt;
    ./make_icosa_lmdz -p generic -p_opt &amp;quot;-b 20x25&amp;quot; -parallel mpi -arch YOUR_ARCH_FILE -arch_path ../ARCH -job 8 -nodeps&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
This last step is a bit redundant with the two previous ones, since ''make_icosa_lmdz'' executes ''./make_icosa'' (in the '''ICOSAGCM''' directory) and ''./makelmdz_fcm'' (in the '''LMDZ.COMMON''' directory) to create and source the architecture files shared between all parts of the model, as well as to create the intermediate file ''config.fcm''. As you have already compiled these two elements, ''make_icosa_lmdz'' should only create the linked architecture files and ''config.fcm'', and compile the interface. Here, the ''-nodeps'' option skips the checking of the XIOS and IOIPSL compilations, which saves you from recompiling these two elements again.&lt;br /&gt;
      &lt;br /&gt;
Finally, your executable programs should appear as '''icosa_lmdz.exe''' in the '''ICOSA_LMDZ/bin''' subdirectory, and as '''xios_server.exe''' in the '''XIOS/bin''' subdirectory. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
All these compilation steps are summed up in the ''make_icosa_lmdz'' program, which should be adapted to your own computational settings (i.e., through your target architecture file).&lt;br /&gt;
 &amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
./make_icosa_lmdz -p generic -p_opt &amp;quot;-b 20x25&amp;quot; -parallel mpi -arch YOUR_ARCH_FILE -arch_path ../ARCH -job 8 -full&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt; &lt;br /&gt;
Here, the ''-full'' option ensures the compilation of each part ('''IOIPSL''', '''XIOS''', '''LMDZ.COMMON''', '''ICOSAGCM''' and '''ICOSA_LMDZ''') of the model.  &lt;br /&gt;
&lt;br /&gt;
Now you can move your two executable files to your working directory and start running your own simulation of Jupiter or a Hot Jupiter, as described in what follows.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Note: If you are using the GitLab file architecture (https://gitlab.in2p3.fr/aymeric.spiga/dynamico-giant), you should be able to compile the model directly from your working directory (for instance ''dynamico-giant/jupiter/'') by using the ''compile_occigen.sh'' program, which has to be adapted to your machine/cluster.&lt;br /&gt;
&lt;br /&gt;
''Note 2: Depending on the compiler module you use, especially with gfortran, you may need to modify the tracers_icosa.F90 file located in the src directory in order to successfully compile ICOSAGCM. For example, if you are using GCC/11.3.0 and OpenMPI/4.1.4, you must update the insert_tracer_output subroutine as follows:''&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;fortran&amp;quot;&amp;gt;&lt;br /&gt;
SUBROUTINE insert_tracer_output&lt;br /&gt;
      USE xios_mod&lt;br /&gt;
      USE grid_param&lt;br /&gt;
      IMPLICIT NONE&lt;br /&gt;
      TYPE(xios_fieldgroup) :: fieldgroup_hdl&lt;br /&gt;
      TYPE(xios_field) :: field_hdl&lt;br /&gt;
      INTEGER :: iq&lt;br /&gt;
      CHARACTER(len=1000) :: tracername1&lt;br /&gt;
      CHARACTER(len=1000) :: tracername2&lt;br /&gt;
      CHARACTER(len=1000) :: tracername3 &lt;br /&gt;
&lt;br /&gt;
      CALL xios_get_handle(&amp;quot;standard_output_tracers&amp;quot;, fieldgroup_hdl)&lt;br /&gt;
      DO iq = 1, nqtot&lt;br /&gt;
         tracername1 = &amp;quot;tracer_&amp;quot;//TRIM(tracers(iq)%name)&lt;br /&gt;
         CALL xios_add_child(fieldgroup_hdl, field_hdl, tracername1)&lt;br /&gt;
         CALL xios_set_attr(field_hdl, name=TRIM(tracers(iq)%name))&lt;br /&gt;
      END DO&lt;br /&gt;
&lt;br /&gt;
      CALL xios_get_handle(&amp;quot;standard_output_tracers_init&amp;quot;, fieldgroup_hdl)&lt;br /&gt;
      DO iq = 1, nqtot&lt;br /&gt;
         tracername2 = &amp;quot;tracer_&amp;quot;//TRIM(tracers(iq)%name)//&amp;quot;_init&amp;quot;&lt;br /&gt;
         tracername3 = TRIM(tracers(iq)%name)//&amp;quot;_init&amp;quot;&lt;br /&gt;
         CALL xios_add_child(fieldgroup_hdl, field_hdl, tracername2)&lt;br /&gt;
         CALL xios_set_attr(field_hdl, name=tracername3)&lt;br /&gt;
      END DO&lt;br /&gt;
&lt;br /&gt;
   END SUBROUTINE insert_tracer_output&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Jupiter with DYNAMICO ==&lt;br /&gt;
Using a new dynamical core implies new setting files, in addition to, or as replacements for, those relevant to the '''LMDZ.COMMON''' dynamical core. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
There are two kinds of setting files:&lt;br /&gt;
&lt;br /&gt;
'''A first group relevant to DYNAMICO:'''&lt;br /&gt;
&lt;br /&gt;
- [[The ''context_dynamico.xml'' Input File|''context_dynamico.xml'']]: Configuration file for '''DYNAMICO''' for reading and writing files using '''XIOS''', mainly used when you want to check the installation of '''ICOSAGCM''' with [[The_DYNAMICO_dynamical_core | a ''Held and Suarez'' test case]]. When your installation, compilation and run environment is fully functional, the dynamical core output files will not (necessarily) be useful and you can disable their writing. &lt;br /&gt;
&lt;br /&gt;
- [[The context_input_dynamico.xml Input File|''context_input_dynamico.xml'']]:&lt;br /&gt;
&lt;br /&gt;
- [[The file_def_dynamico.xml Input File|''file_def_dynamico.xml'']]: Definition of the output diagnostic files (related to '''ICOSAGCM''' only) and of what will be written into them. &lt;br /&gt;
&lt;br /&gt;
- [[The field_def_dynamico.xml Input File|''field_def_dynamico.xml'']]: Definition of all existing variables that can be output from DYNAMICO.&lt;br /&gt;
&lt;br /&gt;
- [[The tracer.def Input File|''tracer.def'']]: Definition of the names and physico-chemical properties of the tracers which will be advected by the dynamical core. For now, there are two files related to tracers; we are working on harmonising them.  &lt;br /&gt;
&lt;br /&gt;
''' A second group relevant to LMDZ.GENERIC physical packages: '''&lt;br /&gt;
&lt;br /&gt;
- [[The context_lmdz_physics.xml Input File|''context_lmdz_physics.xml'']]: File in which are defined the horizontal grid, the vertical coordinate and the output file(s), with the settings of output writing frequency, time unit, geophysical variables to be written, etc. Each new geophysical variable added here has to be defined in the ''field_def_physics.xml'' file.&lt;br /&gt;
&lt;br /&gt;
- [[The field_def_physics.xml Input File|''field_def_physics.xml'']]: Definition of all existing variables that can be output from the physics packages interfaced with '''DYNAMICO'''. This is where you add each geophysical field that you want to appear in the ''Xhistins.nc'' output files. For instance, related to the ''thermal plume scheme'' used for Jupiter's tropospheric dynamics, we have added the following variables: &lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;xml&amp;quot; line&amp;gt;&lt;br /&gt;
             &amp;lt;field id=&amp;quot;h2o_vap&amp;quot; &lt;br /&gt;
                   long_name=&amp;quot;Vapor mass mixing ratio&amp;quot; &lt;br /&gt;
                   unit=&amp;quot;kg/kg&amp;quot; /&amp;gt;&lt;br /&gt;
            &amp;lt;field id=&amp;quot;h2o_ice&amp;quot; &lt;br /&gt;
                   long_name=&amp;quot;Ice mass mixing ratio&amp;quot; &lt;br /&gt;
                   unit=&amp;quot;kg/kg&amp;quot; /&amp;gt;&lt;br /&gt;
            &amp;lt;field id=&amp;quot;detr&amp;quot; &lt;br /&gt;
                   long_name=&amp;quot;Detrainment&amp;quot; &lt;br /&gt;
                   unit=&amp;quot;kg/m2/s&amp;quot; /&amp;gt;&lt;br /&gt;
            &amp;lt;field id=&amp;quot;entr&amp;quot; &lt;br /&gt;
                   long_name=&amp;quot;Entrainment&amp;quot; &lt;br /&gt;
                   unit=&amp;quot;kg/m2/s&amp;quot; /&amp;gt;&lt;br /&gt;
            &amp;lt;field id=&amp;quot;w_plm&amp;quot; &lt;br /&gt;
                   long_name=&amp;quot;Plume vertical velocity&amp;quot; &lt;br /&gt;
                   unit=&amp;quot;m/s&amp;quot; /&amp;gt;&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
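&lt;br /&gt;
For completeness, here is a minimal sketch of how such a field could then be requested in an output file definition in ''context_lmdz_physics.xml'' (the file id, name and output frequency below are purely illustrative):&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;xml&amp;quot;&amp;gt;&lt;br /&gt;
&amp;lt;file id=&amp;quot;histins&amp;quot; name=&amp;quot;Xhistins&amp;quot; output_freq=&amp;quot;3h&amp;quot;&amp;gt;&lt;br /&gt;
    &amp;lt;!-- request the thermal plume vertical velocity defined above --&amp;gt;&lt;br /&gt;
    &amp;lt;field field_ref=&amp;quot;w_plm&amp;quot; /&amp;gt;&lt;br /&gt;
&amp;lt;/file&amp;gt;&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;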
&lt;br /&gt;
- [[The_callphys.def_Input_File|''callphys.def'']]: This setting file is used with either '''DYNAMICO''' or '''LMDZ.COMMON''' and lets the user choose the physical parametrisation schemes and their main parameter values relevant to the planet being simulated. In the case of Jupiter, some specific parametrisations should be added or modified with respect to the example linked at the beginning of this paragraph: &lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot; line&amp;gt;&lt;br /&gt;
# Diurnal cycle ?  if diurnal=false, diurnally averaged solar heating&lt;br /&gt;
diurnal      = .false. #.true.&lt;br /&gt;
# Seasonal cycle ? if season=false, Ls stays constant, to value set in &amp;quot;start&amp;quot;&lt;br /&gt;
season       = .true. &lt;br /&gt;
# Tidally resonant orbit ? must have diurnal=false, correct rotation rate in newstart&lt;br /&gt;
tlocked      = .false.&lt;br /&gt;
# Tidal resonance ratio ? ratio T_orbit to T_rotation&lt;br /&gt;
nres         = 1&lt;br /&gt;
# Planet with rings?&lt;br /&gt;
rings_shadow = .false.&lt;br /&gt;
# Compute latitude-dependent gravity field??&lt;br /&gt;
oblate       = .true.&lt;br /&gt;
# Include non-zero flattening (a-b)/a?&lt;br /&gt;
flatten      = 0.06487&lt;br /&gt;
# Needed if oblate=.true.: J2&lt;br /&gt;
J2           = 0.01470&lt;br /&gt;
# Needed if oblate=.true.: Planet mean radius (m)&lt;br /&gt;
Rmean        = 69911000.&lt;br /&gt;
# Needed if oblate=.true.: Mass of the planet (*1e24 kg)&lt;br /&gt;
MassPlanet   = 1898.3&lt;br /&gt;
# use (read/write) a startfi.nc file? (default=.true.)&lt;br /&gt;
startphy_file = .false.&lt;br /&gt;
# constant value for surface albedo (if startphy_file = .false.)&lt;br /&gt;
surfalbedo   = 0.0&lt;br /&gt;
# constant value for surface emissivity (if startphy_file = .false.)&lt;br /&gt;
surfemis     = 1.0&lt;br /&gt;
&lt;br /&gt;
# the rad. transfer is computed every &amp;quot;iradia&amp;quot; physical timestep&lt;br /&gt;
iradia           = 160&lt;br /&gt;
# folder in which correlated-k data is stored ?&lt;br /&gt;
corrkdir         = Jupiter_HITRAN2012_REY_ISO_NoKarko_T460K_article2019_gauss8p8_095&lt;br /&gt;
# Uniform absorption coefficient in radiative transfer?&lt;br /&gt;
graybody         = .false.&lt;br /&gt;
# Characteristic planetary equilibrium (black body) temperature&lt;br /&gt;
# This is used only in the aerosol radiative transfer setup. (see aerave.F)&lt;br /&gt;
tplanet          = 100.&lt;br /&gt;
# Output global radiative balance in file 'rad_bal.out' - slow for 1D!!&lt;br /&gt;
meanOLR          = .false.&lt;br /&gt;
# Variable gas species: Radiatively active ?&lt;br /&gt;
varactive        = .false.&lt;br /&gt;
# Atmospheric specific heat capacity and molecular mass can be&lt;br /&gt;
# calculated by the dynamics, set in callphys.def, or calculated from gases.def.&lt;br /&gt;
# You have to choose: 0 for dynamics (3d), 1 for forced in callphys.def (1d) or 2 for computed from gases.def (1d)&lt;br /&gt;
# Force_cpp and check_cpp_match are now deprecated.  &lt;br /&gt;
cpp_mugaz_mode = 0&lt;br /&gt;
# Specific heat capacity in J K-1 kg-1 [only used if cpp_mugaz_mode = 1]&lt;br /&gt;
cpp              = 11500.&lt;br /&gt;
# Molecular mass in g mol-1 [only used if cpp_mugaz_mode = 1]&lt;br /&gt;
mugaz            = 2.30&lt;br /&gt;
### DEBUG&lt;br /&gt;
# To not call abort when temperature is outside boundaries:&lt;br /&gt;
strictboundcorrk = .false.&lt;br /&gt;
# To not stop run when temperature is greater than 400 K for H2-H2 CIA dataset:   &lt;br /&gt;
strictboundcia = .false.&lt;br /&gt;
# Add temperature sponge effect after radiative transfer?&lt;br /&gt;
callradsponge    = .false.&lt;br /&gt;
&lt;br /&gt;
Fat1AU = 1366.0&lt;br /&gt;
&lt;br /&gt;
## Other physics options&lt;br /&gt;
## ~~~~~~~~~~~~~~~~~~~~~&lt;br /&gt;
# call turbulent vertical diffusion ?&lt;br /&gt;
calldifv    = .false.&lt;br /&gt;
# use turbdiff instead of vdifc ?&lt;br /&gt;
UseTurbDiff = .true.&lt;br /&gt;
# call convective adjustment ?&lt;br /&gt;
calladj     = .true.&lt;br /&gt;
# call thermal plume model ?&lt;br /&gt;
calltherm   = .true.&lt;br /&gt;
# call thermal conduction in the soil ?&lt;br /&gt;
callsoil    = .false.&lt;br /&gt;
# Internal heat flux (matters only if callsoil=F)&lt;br /&gt;
intheat     = 7.48&lt;br /&gt;
# Remove lower boundary (e.g. for gas giant sims)&lt;br /&gt;
nosurf      = .true.&lt;br /&gt;
#########################################################################&lt;br /&gt;
## extra non-standard definitions for Earth&lt;br /&gt;
#########################################################################&lt;br /&gt;
&lt;br /&gt;
## Thermal plume model options&lt;br /&gt;
## ~~~~~~~~~~~~~~~~~~~~~~~~~~~&lt;br /&gt;
dvimpl               = .true.&lt;br /&gt;
r_aspect_thermals    = 2.0&lt;br /&gt;
tau_thermals         = 0.0&lt;br /&gt;
betalpha             = 0.9&lt;br /&gt;
afact                = 0.7&lt;br /&gt;
fact_epsilon         = 2.e-4&lt;br /&gt;
alpha_max            = 0.7&lt;br /&gt;
fomass_max           = 0.5&lt;br /&gt;
pres_limit           = 2.e5&lt;br /&gt;
&lt;br /&gt;
## Tracer and aerosol options&lt;br /&gt;
## ~~~~~~~~~~~~~~~~~~~~~~~~~~&lt;br /&gt;
# Ammonia cloud (Saturn/Jupiter)?&lt;br /&gt;
aeronh3             = .true.&lt;br /&gt;
size_nh3_cloud      = 10.D-6&lt;br /&gt;
pres_nh3_cloud      = 1.1D5                        # old: 9.D4&lt;br /&gt;
tau_nh3_cloud       = 10.                          # old: 15.&lt;br /&gt;
# Radiatively active aerosol (Saturn/Jupiter)?&lt;br /&gt;
aeroback2lay         = .true.&lt;br /&gt;
optprop_back2lay_vis = optprop_jupiter_vis_n20.dat&lt;br /&gt;
optprop_back2lay_ir  = optprop_jupiter_ir_n20.dat&lt;br /&gt;
obs_tau_col_tropo    = 4.0&lt;br /&gt;
size_tropo           = 5.e-7&lt;br /&gt;
pres_bottom_tropo    = 8.0D4&lt;br /&gt;
pres_top_tropo       = 1.8D4&lt;br /&gt;
obs_tau_col_strato   = 0.1D0&lt;br /&gt;
# Auroral aerosols (Saturn/Jupiter)?&lt;br /&gt;
aeroaurora         = .false.&lt;br /&gt;
size_aurora        = 3.e-7&lt;br /&gt;
obs_tau_col_aurora = 2.0&lt;br /&gt;
&lt;br /&gt;
# Radiatively active CO2 aerosol?&lt;br /&gt;
aeroco2            = .false.&lt;br /&gt;
# Fixed CO2 aerosol distribution?&lt;br /&gt;
aerofixco2     = .false.&lt;br /&gt;
# Radiatively active water aerosol?&lt;br /&gt;
aeroh2o        = .false.&lt;br /&gt;
# Fixed water aerosol distribution?&lt;br /&gt;
aerofixh2o     = .false.&lt;br /&gt;
# basic dust opacity&lt;br /&gt;
dusttau        = 0.0&lt;br /&gt;
# Varying H2O cloud fraction?&lt;br /&gt;
CLFvarying     = .false.&lt;br /&gt;
# H2O cloud fraction if fixed?&lt;br /&gt;
CLFfixval      = 0.0&lt;br /&gt;
# fixed radii for cloud particles?&lt;br /&gt;
radfixed       = .false.&lt;br /&gt;
# number mixing ratio of CO2 ice particles&lt;br /&gt;
Nmix_co2       = 100000.&lt;br /&gt;
# number mixing ratio of water particles (for rafixed=.false.)&lt;br /&gt;
Nmix_h2o       = 1.e7&lt;br /&gt;
# number mixing ratio of water ice particles (for rafixed=.false.)&lt;br /&gt;
Nmix_h2o_ice   = 5.e5&lt;br /&gt;
# radius of H2O water particles (for rafixed=.true.):&lt;br /&gt;
rad_h2o        = 10.e-6&lt;br /&gt;
# radius of H2O ice particles (for rafixed=.true.):&lt;br /&gt;
rad_h2o_ice    = 35.e-6&lt;br /&gt;
# atm mass update due to tracer evaporation/condensation?&lt;br /&gt;
mass_redistrib = .false.&lt;br /&gt;
&lt;br /&gt;
## Water options &lt;br /&gt;
## ~~~~~~~~~~~~~&lt;br /&gt;
# Model water cycle&lt;br /&gt;
water         = .true.&lt;br /&gt;
# Model water cloud formation&lt;br /&gt;
watercond     = .true.&lt;br /&gt;
# Model water precipitation (including coagulation etc.)&lt;br /&gt;
waterrain     = .true.&lt;br /&gt;
# Use simple precipitation scheme?&lt;br /&gt;
precip_scheme = 1&lt;br /&gt;
# Evaporate precipitation?&lt;br /&gt;
evap_prec     = .true.&lt;br /&gt;
# multiplicative constant in Boucher 95 precip scheme&lt;br /&gt;
Cboucher      = 1.&lt;br /&gt;
# Include hydrology ?&lt;br /&gt;
hydrology     = .false.&lt;br /&gt;
# H2O snow (and ice) albedo ?&lt;br /&gt;
albedosnow    = 0.6&lt;br /&gt;
# Maximum sea ice thickness ?&lt;br /&gt;
maxicethick   = 10.&lt;br /&gt;
# Freezing point of seawater (degrees C) ?&lt;br /&gt;
Tsaldiff      = 0.0&lt;br /&gt;
# Evolve surface water sources ?&lt;br /&gt;
sourceevol    = .false.&lt;br /&gt;
&lt;br /&gt;
## CO2 options &lt;br /&gt;
## ~~~~~~~~~~~&lt;br /&gt;
# call CO2 condensation ?&lt;br /&gt;
co2cond       = .false.&lt;br /&gt;
# Set initial temperature profile to 1 K above CO2 condensation everywhere?&lt;br /&gt;
nearco2cond   = .false.&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
- [[The_gases.def_Input_file|''gases.def'']]: File containing the gas composition of the atmosphere you want to model, with their molar mixing ratios. &lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot; line&amp;gt;&lt;br /&gt;
# gases&lt;br /&gt;
5&lt;br /&gt;
H2_&lt;br /&gt;
He_&lt;br /&gt;
CH4&lt;br /&gt;
C2H2&lt;br /&gt;
C2H6&lt;br /&gt;
0.863&lt;br /&gt;
0.134&lt;br /&gt;
0.0018&lt;br /&gt;
1.e-7&lt;br /&gt;
1.e-5&lt;br /&gt;
# First line is number of gases&lt;br /&gt;
# Followed by gas names (always 3 characters)&lt;br /&gt;
# and then molar mixing ratios.&lt;br /&gt;
# mixing ratio -1 means the gas is variable.&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
- [[The jupiter_const.def Input File|''jupiter_const.def'']]: File that gathers all orbital and physical parameters of Jupiter.&lt;br /&gt;
&lt;br /&gt;
- [[The_traceur.def_Input_File|''traceur.def'']]: At this time, only two tracers are used for modelling Jupiter's atmosphere, so the ''traceur.def'' file is summed up as follows:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot; line&amp;gt;&lt;br /&gt;
2&lt;br /&gt;
h2o_vap&lt;br /&gt;
h2o_ice&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
''' Two additional files are used to set the running parameters of the simulation itself:'''&lt;br /&gt;
&lt;br /&gt;
- [[The run_icosa.def Input File | ''run_icosa.def'']]: file containing parameters for '''ICOSAGCM''' to execute the simulation, used to set the [[Advanced Use of the GCM | horizontal and vertical resolutions]], the number of processors, the number of subdivisions, the duration of the simulation, etc.&lt;br /&gt;
&lt;br /&gt;
- ''run.def'': file which brings together all the setting files; it is read by the '''ICOSA_LMDZ''' interface to link each part of the model ('''ICOSAGCM''', '''LMDZ.GENERIC''') with its particular setting file(s) when the '''XIOS''' library does not take action (through the ''.xml'' files).&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot; line&amp;gt;&lt;br /&gt;
###########################################################################&lt;br /&gt;
### INCLUDE OTHER DEF FILES (physics, specific settings, etc...)&lt;br /&gt;
###########################################################################&lt;br /&gt;
INCLUDEDEF=run_icosa.def&lt;br /&gt;
&lt;br /&gt;
INCLUDEDEF=jupiter_const.def&lt;br /&gt;
&lt;br /&gt;
INCLUDEDEF=callphys.def&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
prt_level=0&lt;br /&gt;
&lt;br /&gt;
## iphysiq must be same as itau_physics&lt;br /&gt;
iphysiq=40&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Hot Jupiter with DYNAMICO ==&lt;br /&gt;
&lt;br /&gt;
Modelling the atmosphere of a Hot Jupiter is challenging because of the extreme temperature conditions and the fact that these planets are gas giants. Therefore, using a dynamical core such as Dynamico is strongly recommended. Here, we discuss how to perform a cloudless simulation of the Hot Jupiter WASP-43 b using Dynamico.&lt;br /&gt;
&lt;br /&gt;
'''1st step''': You need to go to the GitHub repository mentioned previously for Dynamico: https://github.com/aymeric-spiga/dynamico-giant. ''Git clone'' this repo on your favorite cluster, and ''checkout'' the &amp;quot;hot_jupiter&amp;quot; branch.&lt;br /&gt;
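&lt;br /&gt;
For reference, this amounts to:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
git clone https://github.com/aymeric-spiga/dynamico-giant.git&lt;br /&gt;
cd dynamico-giant&lt;br /&gt;
git checkout hot_jupiter&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;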
&lt;br /&gt;
'''2nd step''': Now, run the install.sh script. This script will install '''all''' the required models ('''LMDZ.COMMON''', '''LMDZ.GENERIC''', '''ICOSA_LMDZ''', '''XIOS''', '''FCM''', '''ICOSAGCM'''). At this point, the only missing component is '''IOIPSL'''. To install it, go to &lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt; &lt;br /&gt;
dynamico-giant/code/LMDZ.COMMON/ioipsl/ &lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
There, you will find some example installation scripts. You need to create one that will work on your cluster, with your own arch files.&lt;br /&gt;
During the installation of '''IOIPSL''', you might be asked for a login/password. Contact the TGCC computing center to get access.&lt;br /&gt;
&lt;br /&gt;
'''3rd step''': Great, now we have all we need to get started. Navigate to the ''hot_jupiter'' folder. You will find a ''compile_mesopsl.sh'' and a ''compile_occigen.sh'' script. Use them as examples to create the compile script adapted to your own cluster, then run it. &lt;br /&gt;
While it is running, I suggest that you take a look at the ''log_compile'' file. The compilation can take a while (~10 minutes, mostly because of XIOS). One quick trick to make sure that everything went right is to check the number of ''Build command finished'' messages in ''log_compile''. If everything worked out, there should be 6 of them.&lt;br /&gt;
&lt;br /&gt;
'''4th step''': Okay, the model compiled, good job! Now we need to create the initial conditions for our run. In the hot_jupiter1d folder, you already have a ''temp_profile.txt'' computed with the 1D version of LMDZ.GENERIC (see rcm1d on this page). Thus there is no need to recompute a 1D model here, but it will be needed if you want to model another Hot Jupiter.&lt;br /&gt;
Navigate to the 'makestart' folder, located at &lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt; &lt;br /&gt;
dynamico-giant/hot_jupiter/makestart/&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
To generate the initial conditions for the 3D run, we're gonna start the model using the temperature profile from the 1D run. To do that, you will find a &amp;quot;job_mpi&amp;quot; script. Open it, adapt it to your cluster and launch the job. This job uses 20 procs, and it runs 5 days of simulation. &lt;br /&gt;
If everything goes well, you should see a few NetCDF files appear. The important ones are '''start_icosa0.nc''', '''startfi0.nc''' and '''Xhistins.nc'''. &lt;br /&gt;
If you see these files, you're all set to launch a real simulation!&lt;br /&gt;
&lt;br /&gt;
'''5th step''': Go back to the ''hot_jupiter'' folder. There are a bunch of scripts to launch your simulation. Take a look at the ''astro_fat_mpi'' script, and adapt it to your cluster. Then you can launch your simulation by doing &lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt; &lt;br /&gt;
./run_astro_fat&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
This will start the simulation, using 90 procs. In the same folder, check that the icosa_lmdz.out file is created. This is the logfile of the running simulation; you can check there that everything is going well.&lt;br /&gt;
&lt;br /&gt;
'''Important side note''': When using the ''run_astro_fat'' script, the model runs a chained simulation, restarting from the previous state after every 100 days of simulation and generating ''Xhistins.nc'' files. These are your results files, where you will find all the variables that characterize your atmosphere (temperature field, wind fields, etc.). &lt;br /&gt;
&lt;br /&gt;
Good luck and enjoy the generic PCM Dynamico for Hot Jupiter !&lt;br /&gt;
&lt;br /&gt;
'''2nd important side note''': These 5 steps are the basic steps needed to run a simulation. If you want to tune simulations for another planet, or change other settings, you need to take a look at the '''*.def''' and '''*.xml''' files. If you're lost in all of this, take a look at the different pages of this website and/or contact us!&lt;br /&gt;
Also, you might want to check the wiki on the [https://github.com/aymeric-spiga/dynamico-giant ''Github''], which explains a lot of settings for Dynamico.&lt;br /&gt;
&lt;br /&gt;
= 3D LES setup =&lt;br /&gt;
&lt;br /&gt;
== Proxima b with LES ==&lt;br /&gt;
&lt;br /&gt;
To model the subgrid atmospheric turbulence, the [[WRF dynamical core for LES/mesoscale simulations|'''WRF''']] dynamical core coupled with the LMD Generic physics package is used. The first study conducted was to resolve the convective activity at the substellar point of Proxima b (Lefevre et al. 2021). The impact of the stellar insolation and rotation period was also studied. The files for the reference case, with a stellar flux of 880 W/m2 and an 11-day rotation period, are presented below.&lt;br /&gt;
&lt;br /&gt;
The input_* files are used to initialize the temperature, pressure, winds and moisture of the domain: &lt;br /&gt;
input_sounding : altitude (km), potential temperature, water vapour (kg/kg), u, v&lt;br /&gt;
input_therm : normalized gas constant, isobaric heat capacity, pressure, density, temperature&lt;br /&gt;
input_hr : SW heating, LW heating and large-scale heating extracted from the GCM. Only the last one is used in this configuration.&lt;br /&gt;
&lt;br /&gt;
The file namelist.input is used to set up the domain parameters (resolution, grid points, etc.). The file levels specifies the eta-levels of the vertical domain.&lt;br /&gt;
&lt;br /&gt;
The file Planet is used to set up the atmospheric parameters, in order: gravity (m/s2), isobaric heat capacity (J/kg/K), molecular mass (g/mol), reference temperature (K), surface pressure (Pa), planet radius (m) and planet rotation rate (s-1).&lt;br /&gt;
&lt;br /&gt;
The *.def files are the parameter files for the physics. Compared to GCM runs, the convective adjustment in callphys.def is turned off.&lt;br /&gt;
&lt;br /&gt;
The file controle.txt, equivalent to the controle field in the GCM '''start.nc''', is needed to initialize some physics constants.&lt;br /&gt;
&lt;br /&gt;
TBC ML&lt;br /&gt;
&lt;br /&gt;
= 1D setup =&lt;br /&gt;
&lt;br /&gt;
== rcm1d test case ==&lt;br /&gt;
&lt;br /&gt;
Running the model in 1D (i.e. considering simply a column of atmosphere) is a common first step to test a new setup. To do so, you first have to compile the 1D version of the model. The command line is very similar to [[Quick_Install_and_Run#Compiling a test case (early Mars)|the one for the 3D]], except for 2 changes:&lt;br /&gt;
# put just the vertical resolution after the -d option (&amp;quot;VERT&amp;quot; instead of ''LON''x''LAT''x''VERT'' for the 3D case)&lt;br /&gt;
# at the end of the line, replace &amp;quot;gcm&amp;quot; with &amp;quot;rcm1d&amp;quot;&lt;br /&gt;
It will generate a file called '''rcm1d_XX_phyxxx_seq.e''', where ''XX'' and ''phyxxx'' are the vertical resolution and the physics package, respectively.&lt;br /&gt;
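For instance, assuming the same arch files and band setup as in the 3D early Mars example, and (arbitrarily) 25 vertical levels, the compilation command would look like:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
./makelmdz_fcm -arch local -p generic -d 25 -b 32x36 rcm1d&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;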
&lt;br /&gt;
Then, copy the executable in your working directory. &lt;br /&gt;
&lt;br /&gt;
Note that the '''.def''' files differ a bit from the 3D case: [[The_run.def_Input_File|'''run.def''']] is replaced with [[The_rcm1d.def_Input_File|'''rcm1d.def''']], which contains more general information. Indeed, the 1D model does not use [[The_start.nc_and_startfi.nc_input_files|'''start.nc''']] or [[The_start.nc_and_startfi.nc_input_files|'''startfi.nc''']] files to initialize. Instead it reads everything from the '''.def''' files. You can find examples of 1D configurations in ''LMDZ.GENERIC/deftank'' (e.g. '''rcm1d.def.earlymars''', '''rcm1d.def.earth'''); the best thing is to have a look at them.&lt;br /&gt;
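&lt;br /&gt;
As a minimal sketch of a 1D run (the file and executable names below are illustrative and must be adapted to your setup):&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
# copy an example 1D configuration next to the executable&lt;br /&gt;
cp trunk/LMDZ.GENERIC/deftank/rcm1d.def.earlymars rcm1d.def&lt;br /&gt;
# the other .def files (e.g. callphys.def, traceur.def, gases.def, z2sig.def) must also be present&lt;br /&gt;
./rcm1d_25_phygeneric_b32x36_seq.e &amp;gt; rcm1d.out 2&amp;gt;&amp;amp;1&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;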
&lt;br /&gt;
== kcm1d test case ==&lt;br /&gt;
&lt;br /&gt;
Our 1-D inverse model&lt;br /&gt;
&lt;br /&gt;
TBD by Guillaume or Martin&lt;br /&gt;
&lt;br /&gt;
[[Category:Generic-Model]]&lt;br /&gt;
[[Category:Generic-LMDZ]]&lt;br /&gt;
[[Category:Generic-DYNAMICO]]&lt;br /&gt;
[[Category:Generic-WRF]]&lt;br /&gt;
[[Category:Generic-1D]]&lt;/div&gt;</summary>
		<author><name>Emillour</name></author>	</entry>

	<entry>
		<id>http://lmdz-forge.lmd.jussieu.fr/mediawiki/Planets/index.php?title=Parallelism&amp;diff=3201</id>
		<title>Parallelism</title>
		<link rel="alternate" type="text/html" href="http://lmdz-forge.lmd.jussieu.fr/mediawiki/Planets/index.php?title=Parallelism&amp;diff=3201"/>
				<updated>2026-02-04T13:42:39Z</updated>
		
		<summary type="html">&lt;p&gt;Emillour: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;''This page comes mainly from the LMD Generic GCM user manual (https://trac.lmd.jussieu.fr/Planeto/browser/trunk/LMDZ.GENERIC/ManualGCM_GENERIC.pdf). It is still in development and needs further improvements''&lt;br /&gt;
&lt;br /&gt;
== What is parallelism? ==&lt;br /&gt;
&lt;br /&gt;
Parallel computing is a type of computation in which many calculations or processes are carried out simultaneously. &lt;br /&gt;
Large problems can often be divided into smaller ones, which can then be solved at the same time.&lt;br /&gt;
&lt;br /&gt;
In short: '''parallelism can help you save time''': when running on more cores, one gets the same results as with the serial version of the code, but sooner.&lt;br /&gt;
&lt;br /&gt;
Indeed, as the problem is cut into smaller parts that are solved simultaneously, the waiting (wall clock) time for the user is reduced.&lt;br /&gt;
However, this usually comes with a counterpart: overheads and extra computations due to the parallelization, along with some inherent inefficiencies of computations which must be done sequentially, can '''increase the total computational cost'''.&lt;br /&gt;
&lt;br /&gt;
== How parallelism is implemented in the model ==&lt;br /&gt;
The main factor that constrains and orients the way the code is parallelized is that in the physics, atmospheric columns are &amp;quot;independent&amp;quot; from each other, whereas in the dynamics the flow is 3D with strong coupling between neighboring cells.&lt;br /&gt;
&lt;br /&gt;
=== Parallelism with the lon-lat (LMDZ) dynamical core ===&lt;br /&gt;
&lt;br /&gt;
* MPI tiling: In the lon-lat dynamics the globe is tiled in regions covering all longitudes and a few latitudes. In practice these latitude bands must contain at least 2 points. There is therefore a limitation to the number of MPI processes one may run with: for a given number of latitude intervals jjm, one may use at most jjm/2 processes (for example, if the horizontal grid is 64x48 in lonxlat, one could use at most 48/2=24 MPI processes).&lt;br /&gt;
&lt;br /&gt;
* OpenMP (OMP): In the dynamics this parallelism is implemented on the loops along the vertical. One could thus use as many OpenMP threads as there are model levels. In practice however the speedup breaks down with far fewer, and it is recommended to have OpenMP chunks of at least ten vertical levels each. Therefore, for a simulation with llm altitude levels one should target using at most llm/10 OpenMP threads (e.g. for a 64x48x54 grid, target using at most 5 OpenMP threads).  &lt;br /&gt;
&lt;br /&gt;
* In practice: One will want to use both MPI and OpenMP for simulations, with as many MPI processes as possible, combined with a good number of OpenMP threads (for each MPI process). Depending on the cluster used, the speedup as a function of the number of MPI processes and OpenMP threads can vary a lot. It is therefore recommended to test it to find the &amp;quot;optimal&amp;quot; setup for a given grid. A possible way to do such a test is sketched below.&lt;br /&gt;
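&lt;br /&gt;
For example, on a machine where you can run interactively, a crude timing loop over a few MPI/OpenMP combinations can help identify the best setup. This is only a sketch: the executable name is illustrative, and the combinations to try depend on your grid and on the number of cores per node.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
for nmpi in 4 8 16 24 ; do&lt;br /&gt;
  for nomp in 1 2 4 ; do&lt;br /&gt;
    export OMP_NUM_THREADS=$nomp&lt;br /&gt;
    export OMP_STACKSIZE=2500MB&lt;br /&gt;
    # time a short test run for each combination&lt;br /&gt;
    { time mpirun -np $nmpi gcm_64x48x29_phymars_para.e &amp;gt; gcm_${nmpi}x${nomp}.out 2&amp;gt;&amp;amp;1 ; } 2&amp;gt; time_${nmpi}x${nomp}.txt&lt;br /&gt;
  done&lt;br /&gt;
done&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;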
&lt;br /&gt;
== How to compile the parallel version of the PCM ==&lt;br /&gt;
&lt;br /&gt;
To compile the model in parallel use the same command as in sequential (see e.g. the &amp;quot;Compiling&amp;quot; section of [[Quick Install and Run]] or the description of the [[The_makelmdz_fcm_GCM_Compilation_Script|makelmdz_fcm script]] and its options) and add the following option:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
 -parallel&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Then there are three choices of parallelism: MPI, OMP, and mixed (i.e. combined) MPI_OMP:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
 -parallel mpi&lt;br /&gt;
 -parallel omp&lt;br /&gt;
 -parallel mpi_omp&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
So the command line to generate the Generic PCM to run in mixed MPI and OpenMP mode will be, for example:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
./makelmdz_fcm -s XX -d LONxLATxALT -b IRxVI -p generic -arch archFile -parallel mpi_omp gcm&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== How to run in parallel ==&lt;br /&gt;
&lt;br /&gt;
=== Run interactively ===&lt;br /&gt;
* MPI only :&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
mpirun -np N gcm.e &amp;gt; gcm.out 2&amp;gt;&amp;amp;1&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
-np N specifies the number of processes to run on.&lt;br /&gt;
&lt;br /&gt;
IMPORTANT: one MUST use the mpirun command corresponding to the mpif90 compiler specified in the arch file.&lt;br /&gt;
&lt;br /&gt;
IMPORTANT: One can use at most one MPI process for every 2 points along the latitude (e.g. a maximum of 24 processes for a horizontal grid of 64x48). If you try to use too many MPI processes you will get the following error message (in French!!), saying that the number of latitude bands per process is too small (&amp;lt;2) and that you should decrease the number of CPUs or increase the latitude grid size:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
 Arret : le nombre de bande de lattitude par process est trop faible (&amp;lt;2).&lt;br /&gt;
  ---&amp;gt; diminuez le nombre de CPU ou augmentez la taille en lattitude&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Output files (restart.nc, diagfi.nc, etc.) are just as when running in serial,&lt;br /&gt;
but standard output messages are written by each process.&lt;br /&gt;
If using chained simulations (run mcd/run0 scripts), then the command line to run the GCM in run0 must be adapted to local settings.&lt;br /&gt;
&lt;br /&gt;
NB: the LMDZ.COMMON dynamics are set to run in double precision, so keep the NC_DOUBLE declaration (and real to double precision promotion) in the arch files.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
* Mix MPI_OMP :&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
export OMP_NUM_THREADS=2&lt;br /&gt;
export OMP_STACKSIZE=2500MB&lt;br /&gt;
mpirun -np 2 gcm.e &amp;gt; gcm.out 2&amp;gt;&amp;amp;1&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
In this example, each of the 2 MPI processes has 2 OpenMP threads, each with a 2500 MB stack.&lt;br /&gt;
&lt;br /&gt;
=== Run with a job scheduler ===&lt;br /&gt;
&lt;br /&gt;
This will be different for each machine.&lt;br /&gt;
Some examples are provided here but will need to be adapted for each configuration and machine; see also the pages dedicated to some clusters we use, such as [[Using the MESOIPSL cluster]], [[Using Irene Rome]] or [[Using Adastra]].&lt;br /&gt;
&lt;br /&gt;
* MPI only :&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
PBS example (on Ciclad):&lt;br /&gt;
#PBS -S /bin/bash&lt;br /&gt;
#PBS -N job_mpi08&lt;br /&gt;
#PBS -q short&lt;br /&gt;
#PBS -j eo&lt;br /&gt;
#PBS -l &amp;quot;nodes=1:ppn=8&amp;quot;&lt;br /&gt;
# go to directory where the job was launched&lt;br /&gt;
cd $PBS_O_WORKDIR&lt;br /&gt;
mpirun gcm_64x48x29_phymars_para.e &amp;gt; gcm.out 2&amp;gt;&amp;amp;1&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
LoadLeveler example (on Gnome):&lt;br /&gt;
# @ job_name = job_mpi8&lt;br /&gt;
# standard output file&lt;br /&gt;
# @ output = job_mpi8.out.$(jobid)&lt;br /&gt;
# standard error file&lt;br /&gt;
# @ error = job_mpi8.err.$(jobid)&lt;br /&gt;
# job type&lt;br /&gt;
# @ job_type = mpich&lt;br /&gt;
# @ blocking = unlimited&lt;br /&gt;
# time&lt;br /&gt;
# @ class = AP&lt;br /&gt;
# Number of procs&lt;br /&gt;
# @ total_tasks = 8&lt;br /&gt;
# @ resources=ConsumableCpus(1) ConsumableMemory(2500 mb)&lt;br /&gt;
# @ queue&lt;br /&gt;
set -vx&lt;br /&gt;
mpirun gcm_32x24x11_phymars_para.e &amp;gt; gcm.out 2&amp;gt;&amp;amp;1&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
LoadLeveler example (on Ada):&lt;br /&gt;
module load intel/2012.0&lt;br /&gt;
# @ output = output.$(jobid)&lt;br /&gt;
# @ error = $(output)&lt;br /&gt;
# @ job_type = parallel&lt;br /&gt;
## Number of MPI process&lt;br /&gt;
# @ total_tasks = 8&lt;br /&gt;
## Memory used by each MPI process&lt;br /&gt;
# @ as_limit = 2500mb&lt;br /&gt;
# @ wall_clock_limit=01:00:00&lt;br /&gt;
# @ core_limit = 0&lt;br /&gt;
# @ queue&lt;br /&gt;
set -x&lt;br /&gt;
poe ./gcm.e -labelio yes &amp;gt; LOG 2&amp;gt;&amp;amp;1&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* Mix MPI_OMP :&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
LoadLeveler example (on Gnome):&lt;br /&gt;
# @ job_name = job_mpi8&lt;br /&gt;
# standard output file&lt;br /&gt;
# @ output = job_mpi8.out.$(jobid)&lt;br /&gt;
# standard error file&lt;br /&gt;
# @ error = job_mpi8.err.$(jobid)&lt;br /&gt;
# job type&lt;br /&gt;
# @ job_type = mpich&lt;br /&gt;
# @ blocking = unlimited&lt;br /&gt;
# time&lt;br /&gt;
# @ class = AP&lt;br /&gt;
# Number of procs&lt;br /&gt;
# @ total_tasks = 8&lt;br /&gt;
# @ resources=ConsumableCpus(1) ConsumableMemory(5000 mb)&lt;br /&gt;
# @ queue&lt;br /&gt;
set -vx&lt;br /&gt;
export OMP_NUM_THREADS=2 # otherwise 8 OpenMP threads are launched by default&lt;br /&gt;
export OMP_STACKSIZE=2500MB&lt;br /&gt;
mpirun gcm_32x24x11_phymars_para.e &amp;gt; gcm.out 2&amp;gt;&amp;amp;1&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
IMPORTANT: ConsumableMemory must be equal to OMP_NUM_THREADS x OMP_STACKSIZE (here 2 x 2500 MB = 5000 MB).&lt;br /&gt;
In this case, we are using 8x2 = 16 cores.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
LoadLeveler example (on Ada):&lt;br /&gt;
module load intel/2012.0&lt;br /&gt;
# @ output = output.$(jobid)&lt;br /&gt;
# @ error = $(output)&lt;br /&gt;
# @ job_type = parallel&lt;br /&gt;
## Number of MPI process&lt;br /&gt;
# @ total_tasks = 8&lt;br /&gt;
## Number of OpenMP tasks attached to each MPI process&lt;br /&gt;
# @ parallel_threads = 2&lt;br /&gt;
## Memory used by each MPI process&lt;br /&gt;
# @ as_limit = 5gb&lt;br /&gt;
# @ wall_clock_limit=01:00:00&lt;br /&gt;
# @ core_limit = 0&lt;br /&gt;
# @ queue&lt;br /&gt;
set -x&lt;br /&gt;
export OMP_STACKSIZE=2500MB&lt;br /&gt;
poe ./gcm.e -labelio yes &amp;gt; LOG 2&amp;gt;&amp;amp;1&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
IMPORTANT: In this case, each core needs 2.5 GB and we are using 2 OpenMP threads for each MPI process, so as_limit = 2 × 2.5 = 5 GB.&lt;br /&gt;
&lt;br /&gt;
[[Category:Generic-LMDZ]]&lt;br /&gt;
[[Category:Generic-DYNAMICO]]&lt;/div&gt;</summary>
		<author><name>Emillour</name></author>	</entry>

	<entry>
		<id>http://lmdz-forge.lmd.jussieu.fr/mediawiki/Planets/index.php?title=Tool_Box&amp;diff=3200</id>
		<title>Tool Box</title>
		<link rel="alternate" type="text/html" href="http://lmdz-forge.lmd.jussieu.fr/mediawiki/Planets/index.php?title=Tool_Box&amp;diff=3200"/>
				<updated>2026-02-04T13:38:59Z</updated>
		
		<summary type="html">&lt;p&gt;Emillour: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Pre-processing Tools ==&lt;br /&gt;
=== newstart: a Fortran program to modify start files ===&lt;br /&gt;
&lt;br /&gt;
Newstart is an interactive tool to modify the start files (''start.nc'' and ''startfi.nc''). &lt;br /&gt;
&lt;br /&gt;
To be usable, ''newstart'' should be compiled in the ''LMDZ.COMMON'' directory using the following command line:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
./makelmdz_fcm -arch my_arch_file -p generic -d 64x48x30 newstart&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
In this example, my_arch_file is the name of the arch files (see [https://lmdz-forge.lmd.jussieu.fr/mediawiki/Planets/index.php/The_Target_Architecture_(%22arch%22)_Files arch]) and 64x48x30 is the resolution of the physical grid.&lt;br /&gt;
Then copy the executable from the ''LMDZ.COMMON/bin'' directory to your bench directory.&lt;br /&gt;
&lt;br /&gt;
When you execute newstart, you can use either a ''start2archive'' file or the start files (''start.nc'' and ''startfi.nc''). Then the interactive interface will propose to modify several physical quantities such as the gravity, the surface pressure or the rotation of the planet. At the end of the procedure, two files are created: '' '''re'''start.nc'' and '' '''re'''startfi.nc''. They can be renamed and used as start files to initialize a new simulation.&lt;br /&gt;
&lt;br /&gt;
We have prepared a simple tutorial to learn how to modify ''start.nc'' and ''startfi.nc'' files (i.e. the initial conditions) for the Generic PCM: https://lmdz-forge.lmd.jussieu.fr/mediawiki/Planets/index.php/Modify_start_Files&lt;br /&gt;
&lt;br /&gt;
=== start2archive ===&lt;br /&gt;
&lt;br /&gt;
The start2archive tool is similar to newstart in the sense that it can be used to modify the start files. But start2archive can modify the resolution of the physical grid, the topography and the surface thermal inertia while newstart cannot. It is also useful to create an archive of different starting states, then extractable as start files.&lt;br /&gt;
The command line to compile start2archive is similar to the one used for newstart:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
./makelmdz_fcm -arch my_arch_file -p generic -d 64x48x30 start2archive&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
To modify the resolution, you should first create a start_archive file (by using start2archive) at the current resolution, then compile and run newstart at the new resolution. Newstart will interpolate all the physical quantities onto the new grid.&lt;br /&gt;
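&lt;br /&gt;
Schematically, for a resolution change (the grid sizes below are purely illustrative, and the exact executable names depend on your compilation options):&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
# 1) compile start2archive at the ORIGINAL resolution, then run it&lt;br /&gt;
./makelmdz_fcm -arch my_arch_file -p generic -d 64x48x30 start2archive&lt;br /&gt;
./start2archive_64x48x30_phygeneric_seq.e   # produces a start_archive file from start.nc/startfi.nc&lt;br /&gt;
# 2) compile newstart at the NEW resolution, then run it&lt;br /&gt;
./makelmdz_fcm -arch my_arch_file -p generic -d 32x32x15 newstart&lt;br /&gt;
./newstart_32x32x15_phygeneric_seq.e   # choose to start from the start_archive file when prompted; produces restart.nc/restartfi.nc&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;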
&lt;br /&gt;
=== other third party scripts and tools ===&lt;br /&gt;
&lt;br /&gt;
You can easily modify start.nc and startfi.nc NetCDF files with the xarray Python library. Below is a simple example where we modify the surface temperature field.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;python&amp;quot;&amp;gt;&lt;br /&gt;
import numpy as np&lt;br /&gt;
import matplotlib.pyplot as mpl&lt;br /&gt;
import xarray as xr&lt;br /&gt;
&lt;br /&gt;
# FIRST WE GET THE DATA FROM THE GCM SIMULATION&lt;br /&gt;
nc = xr.open_dataset('startfi.nc',decode_times=False) # can be any netcdf file (e.g. start/startfi.nc files)&lt;br /&gt;
&lt;br /&gt;
# BELOW PHYSICAL VARIABLES&lt;br /&gt;
physical_points=nc['physical_points']&lt;br /&gt;
lat=nc['latitude']&lt;br /&gt;
lon=nc['longitude']&lt;br /&gt;
aire_GCM=nc['area']&lt;br /&gt;
&lt;br /&gt;
# BELOW THE VARIABLE WE WANT TO UPDATE&lt;br /&gt;
tsurf=nc['tsurf']&lt;br /&gt;
new_tsurf = np.empty(len(physical_points))&lt;br /&gt;
&lt;br /&gt;
# LOOP TO MODIFY THE VARIABLE&lt;br /&gt;
for i in range(len(physical_points)):&lt;br /&gt;
    new_tsurf[i]=300. # here you put whatever you want; in this example, we assume an isothermal temperature distribution&lt;br /&gt;
&lt;br /&gt;
nc['tsurf'].values = new_tsurf&lt;br /&gt;
nc.to_netcdf('restartfi.nc')&lt;br /&gt;
&lt;br /&gt;
# SANITY CHECK PLOTS&lt;br /&gt;
&lt;br /&gt;
fig = mpl.figure(1)&lt;br /&gt;
mpl.plot(physical_points,tsurf)&lt;br /&gt;
mpl.plot(physical_points,new_tsurf)&lt;br /&gt;
mpl.xlabel('GCM Physical Points')&lt;br /&gt;
mpl.ylabel('Tsurf (K)')&lt;br /&gt;
mpl.show()&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Post-processing tools ==&lt;br /&gt;
=== zrecast ===&lt;br /&gt;
&lt;br /&gt;
With this program you can recast atmospheric (i.e. 4-dimensional longitude-latitude-altitude-time) data from&lt;br /&gt;
GCM outputs (e.g. as given in diagfi.nc files) onto either ''pressure'' or ''altitude above areoid'' vertical coordinates.&lt;br /&gt;
Since integrating the hydrostatic equation is required to recast the data, the input file must contain surface pressure&lt;br /&gt;
and atmospheric temperature, as well as the ground geopotential.&lt;br /&gt;
If recasting data onto ''pressure'' coordinates, then the output file name is given by the input file name to which ''_P.nc'' will be appended. If recasting data onto altitude above areoid coordinates, then ''_A.nc'' will be appended.&lt;br /&gt;
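&lt;br /&gt;
The ''zrecast.F90'' tool can be compiled and run much like the ''streamfunction.F90'' tool described in the next section. As a sketch (the executable name may differ on your setup), it is then run interactively:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
./zrecast.e&lt;br /&gt;
# the program then prompts for the input file name and the target vertical coordinate&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;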
&lt;br /&gt;
=== mass stream function ===&lt;br /&gt;
&lt;br /&gt;
The mass stream function (and the total angular momentum) can be computed from a diagfi.nc or a stats.nc file, using the '''streamfunction.F90''' script. The script is located at&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
trunk/LMDZ.GENERIC/utilities&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
To compile the script, open the ''compile'' file in the same directory and do the following:&lt;br /&gt;
* Replace &amp;quot;pgf90&amp;quot; with your favorite Fortran compiler.&lt;br /&gt;
* Replace &amp;quot;/distrib/local/netcdf/pgi_7.1-6_32/lib&amp;quot; with the path to the directory that contains your NetCDF library (file ''libnetcdf.a'').&lt;br /&gt;
* Replace &amp;quot;/distrib/local/netcdf/pgi_7.1-6_32/include&amp;quot; with the path to the directory that contains the NetCDF include file (''netcdf.inc'').&lt;br /&gt;
* You can mess with the compiling options but it is not mandatory.&lt;br /&gt;
&lt;br /&gt;
Once the script is compiled, copy it into the same directory as your '''.nc''' file and run &lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
./streamfunction.e&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
The script will ask you for the name of your '''.nc''' file, then run and produce a new '''nameofyourfile_stream.nc''' file.&lt;br /&gt;
&lt;br /&gt;
'''Be careful''': In this new file, all fields are temporally and zonally averaged.&lt;br /&gt;
&lt;br /&gt;
If you want to use '''python''' instead of '''fortran''', you can take a look at this [https://github.com/aymeric-spiga/dynanalysis repo]. It hosts a tool to perform dynamical analysis of GCM simulations (and therefore computes the mass stream function and a lot of other things), but it is tailored for Dynamico only. This repo also takes care of recasting (it does the job of both ''zrecast.F90'' and ''streamfunction.F90'').&lt;br /&gt;
&lt;br /&gt;
== Continuing Simulations ==&lt;br /&gt;
&lt;br /&gt;
=== manually ===&lt;br /&gt;
&lt;br /&gt;
At the end of a simulation, the model generates restart files (files 'restart.nc' and 'restartfi.nc') which contain the final state of the model. The 'restart.nc' and 'restartfi.nc' files have the same format as the 'start.nc' and 'startfi.nc' files, respectively. &lt;br /&gt;
&lt;br /&gt;
These files can in fact be used as initial states to continue the simulation, using the following renaming command lines:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
mv restart.nc start.nc&lt;br /&gt;
mv restartfi.nc startfi.nc&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Running a simulation with these start files will in fact resume the simulation from where the previous run ended.&lt;br /&gt;
&lt;br /&gt;
=== with bash scripts ===&lt;br /&gt;
&lt;br /&gt;
We have set up very simple bash scripts to automate the launching of chained simulations. Here is an example of a bash script that does the job:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
###########################################################################&lt;br /&gt;
# Script to perform several chained LMD Mars GCM simulations&lt;br /&gt;
# SET HERE the maximum total number of simulations&lt;br /&gt;
&lt;br /&gt;
nummax=100&lt;br /&gt;
&lt;br /&gt;
###########################################################################&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
echo &amp;quot;---------------------------------------------------------&amp;quot;&lt;br /&gt;
echo &amp;quot;STARTING LOOP RUN&amp;quot;&lt;br /&gt;
echo &amp;quot;---------------------------------------------------------&amp;quot;&lt;br /&gt;
&lt;br /&gt;
dir=`pwd`&lt;br /&gt;
machine=`hostname`&lt;br /&gt;
address=`whoami`&lt;br /&gt;
&lt;br /&gt;
# Look for file &amp;quot;num_run&amp;quot; which should contain &lt;br /&gt;
# the value of the previously computed season&lt;br /&gt;
# (defaults to 0 if file &amp;quot;num_run&amp;quot; does not exist)&lt;br /&gt;
if [[ -r num_run ]] ; then&lt;br /&gt;
  echo &amp;quot;found file num_run&amp;quot;&lt;br /&gt;
  numold=`cat num_run`&lt;br /&gt;
else&lt;br /&gt;
  numold=0&lt;br /&gt;
fi&lt;br /&gt;
echo &amp;quot;numold is set to&amp;quot; ${numold}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
# Set value of current season &lt;br /&gt;
(( numnew = ${numold} + 1 ))&lt;br /&gt;
echo &amp;quot;numnew is set to&amp;quot; ${numnew}&lt;br /&gt;
&lt;br /&gt;
# Look for initialization data files (exit if none found)&lt;br /&gt;
if [[ ( -r start${numold}.nc  &amp;amp;&amp;amp;  -r startfi${numold}.nc ) ]] ; then&lt;br /&gt;
   \cp -f start${numold}.nc start.nc&lt;br /&gt;
   \cp -f startfi${numold}.nc startfi.nc&lt;br /&gt;
else&lt;br /&gt;
   if (( ${numold} == 99999 )) ; then&lt;br /&gt;
    echo &amp;quot;No run because previous run crashed ! (99999 in num_run)&amp;quot;&lt;br /&gt;
    exit&lt;br /&gt;
   else&lt;br /&gt;
   echo &amp;quot;Where is file start&amp;quot;${numold}&amp;quot;.nc??&amp;quot;&lt;br /&gt;
   exit&lt;br /&gt;
   fi&lt;br /&gt;
fi&lt;br /&gt;
&lt;br /&gt;
# Run GCM -- THIS LINE NEEDS TO BE MODIFIED WITH THE CORRECT GCM EXECUTION COMMAND&lt;br /&gt;
mpirun -np 8 gcm_64x48x26_phygeneric_para.e &amp;lt; diagfi.def &amp;gt; lrun${numnew}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
# Check if run ended normally and copy datafiles&lt;br /&gt;
if [[ ( -r restartfi.nc  &amp;amp;&amp;amp;  -r restart.nc ) ]] ; then&lt;br /&gt;
  echo &amp;quot;Run seems to have ended normally&amp;quot;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
  \mv -f restart.nc start${numnew}.nc&lt;br /&gt;
  \mv -f restartfi.nc startfi${numnew}.nc  &lt;br /&gt;
    &lt;br /&gt;
else&lt;br /&gt;
  if [[ -r num_run ]] ; then&lt;br /&gt;
    \mv -f num_run num_run.crash&lt;br /&gt;
  else&lt;br /&gt;
    echo &amp;quot;No file num_run to build num_run.crash from !!&amp;quot;&lt;br /&gt;
    # Impose a default value of 0 in num_run.crash&lt;br /&gt;
    echo 0 &amp;gt; num_run.crash&lt;br /&gt;
  fi&lt;br /&gt;
 echo 99999 &amp;gt; num_run&lt;br /&gt;
 exit&lt;br /&gt;
fi&lt;br /&gt;
&lt;br /&gt;
# Copy other datafiles that may have been generated&lt;br /&gt;
if [[ -r diagfi.nc ]] ; then&lt;br /&gt;
  \mv -f diagfi.nc diagfi${numnew}.nc&lt;br /&gt;
fi&lt;br /&gt;
if [[ -r diagsoil.nc ]] ; then&lt;br /&gt;
  \mv -f diagsoil.nc diagsoil${numnew}.nc&lt;br /&gt;
fi&lt;br /&gt;
if [[ -r stats.nc ]] ; then&lt;br /&gt;
  \mv -f stats.nc stats${numnew}.nc&lt;br /&gt;
fi&lt;br /&gt;
if [[ -f profiles.dat ]] ; then&lt;br /&gt;
  \mv -f profiles.dat profiles${numnew}.dat&lt;br /&gt;
  \mv -f profiles.hdr profiles${numnew}.hdr&lt;br /&gt;
fi&lt;br /&gt;
&lt;br /&gt;
# Prepare things for upcoming runs by writing&lt;br /&gt;
# value of computed season in file num_run&lt;br /&gt;
echo ${numnew} &amp;gt; num_run&lt;br /&gt;
&lt;br /&gt;
# If we are over nummax : stop&lt;br /&gt;
if (( $numnew + 1 &amp;gt; $nummax )) ; then&lt;br /&gt;
   exit&lt;br /&gt;
# If not : restart the loop (copy the executable and run the copy)&lt;br /&gt;
else&lt;br /&gt;
   \cp -f run_gnome exe_mars&lt;br /&gt;
   ./exe_mars&lt;br /&gt;
fi &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
'''Summary of what this bash script does''':&lt;br /&gt;
&lt;br /&gt;
* It reads the file 'num_run' which contains the step of the simulation. &lt;br /&gt;
If num_run is&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
5&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
then the script expects to read start5.nc and startfi5.nc.&lt;br /&gt;
* It copies start5.nc and startfi5.nc to start.nc and startfi.nc, respectively.&lt;br /&gt;
* It runs the GCM.&lt;br /&gt;
* It renames restart.nc and restartfi.nc to start6.nc and startfi6.nc.&lt;br /&gt;
* It rewrites num_run as follows:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
6&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
* It restarts the loop until num_run reaches the value defined in nummax, here:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
100&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Printing the code version of a program ==&lt;br /&gt;
&lt;br /&gt;
Details about compilation and code version (SVN or Git) are embedded in the executable file. This feature helps track code builds and their exact compilation context directly from the executable.&lt;br /&gt;
Use the command-line option &amp;quot;--version [file]&amp;quot; when running your program:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
./myprogram.e --version [file]&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
This will write compilation and version details into the specified [file]. If [file] is omitted, the default name &amp;quot;pgrm_version_details.txt&amp;quot; will be used. The file contains compilation details, the version control information (SVN or Git), the status and the diff result if applicable, for the sub-folders of your trunk.&lt;br /&gt;
&lt;br /&gt;
This feature is available for every program of the Mars model, the Generic model and the PEM. If you want to extend it to other models, please add the following code section at the beginning of your main programs:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;fortran&amp;quot;&amp;gt;&lt;br /&gt;
! Parse command-line options&lt;br /&gt;
call parse_args()&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
Don't forget to add the required module as well:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;fortran&amp;quot;&amp;gt;&lt;br /&gt;
use parse_args_mod, only: parse_args&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Processing Output Files with NCOs ==&lt;br /&gt;
&lt;br /&gt;
NCOs (netCDF Operators) are a set of powerful command-line utilities – available on Linux, Mac and PC – that allow you to perform useful (and very fast!) post-processing operations on netCDF GCM output files. Full documentation can be found on http://research.jisao.washington.edu/data_sets/nco/, but we provide below a few examples of command lines.&lt;br /&gt;
&lt;br /&gt;
* How to calculate a time mean of a netCDF 'diagfi.nc' file&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
ncra -F -d Time,1,,1 diagfi.nc diagfi_MEAN.nc # format is &amp;quot;-d dimension,minimum,maximum,stride&amp;quot;&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* Subsetting time in a netCDF 'diagfi.nc' file&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
ncea -F -d Time,first,last diagfi.nc diagfi_subset.nc # format is &amp;quot;-d dimension,minimum,maximum&amp;quot; ; we recall you can type &amp;quot;ncdump -v Time diagfi.nc&amp;quot; to see the Time values in the netCDF file.&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* Decimating a netCDF 'diagfi.nc' file in time&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
ncks -F -d Time,1,,8 diagfi.nc diagfi_decimated.nc # format is &amp;quot;-d dimension,minimum,maximum,stride&amp;quot; ; in this example, data is extracted once every 8 time steps, starting from the first time step (number 1) and ending at the last one.&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* Extract a variable from a netCDF 'diagfi.nc' file&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
ncks -v tsurf,temp,p diagfi.nc diagfi_out.nc # Here we create a new file named 'diagfi_out.nc' in which we only keep the variables named 'tsurf' (surface temperature), 'temp' (atmospheric temperature) and 'p' (atmospheric pressure).&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Again, more examples can be found on http://research.jisao.washington.edu/data_sets/nco/ .&lt;br /&gt;
&lt;br /&gt;
== Data Handling and Visualization Software ==&lt;br /&gt;
&lt;br /&gt;
There are several data handling and visualization tools that can be used to analyse and plot the results from GCM simulations (using the diagfi.nc NetCDF files). We provide below a panorama of the most widely used solutions.&lt;br /&gt;
&lt;br /&gt;
=== panoply ===&lt;br /&gt;
Panoply is a user-friendly tool for viewing raw NetCDF data, available here: https://www.giss.nasa.gov/tools/panoply/ . It is very convenient for making pretty visuals (see the example below for the exoplanet TRAPPIST-1e). There are many options that can be used (map projections, masks, colorbars, shadows, etc.) to make your plots fancy. However, the tool is not very well suited for manipulating data (computing averages, statistics, etc.).&lt;br /&gt;
&lt;br /&gt;
* Installation on Linux:&lt;br /&gt;
You simply need to download and untar the package from the Panoply website. Note that it requires Java and the related Java Runtime Environment (JRE) to be installed on your system (otherwise it will simply look as if &amp;quot;nothing is happening&amp;quot; when you try to launch Panoply via the &amp;quot;panoply.sh&amp;quot; script), which on Ubuntu simply requires something like:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
sudo apt install default-jre&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* Run on Linux (assuming the panoply.sh script is in a directory included in your PATH environment variable):&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
panoply.sh&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[[File:Example panoply.png|thumb|Screenshot of panoply showing here Generic PCM results for the exoplanet TRAPPIST-1e (surface temperatures)]]&lt;br /&gt;
&lt;br /&gt;
=== ncview ===&lt;br /&gt;
ncview is another useful user-friendly tool for viewing raw NetCDF data. It is kind of a very archaic version of panoply, but it is convenient because it allows you to have a very quick first look at netCDF data files.&lt;br /&gt;
&lt;br /&gt;
Command line tool to visualize NetCDF data:&lt;br /&gt;
* Installation on Linux-Ubuntu:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
sudo apt install ncview&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
* Run on Linux:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
ncview diagfi.nc&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[[File:Example ncview.png|thumb|Screenshot of ncview showing here Generic PCM results for the exoplanet Proxima b (OLR - Thermal Emission)]]&lt;br /&gt;
&lt;br /&gt;
=== python scripts ===&lt;br /&gt;
&lt;br /&gt;
Python scripts provide a very useful means to analyse and visualize netCDF files.&lt;br /&gt;
&lt;br /&gt;
==== NETCDF4 python library (old school) ====&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
You can use the netCDF4 python library to open a netCDF file and put data in tables that can then be manipulated and plotted.&lt;br /&gt;
&lt;br /&gt;
Here is an example of how to open and read a netCDF file with Python:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;python&amp;quot; line&amp;gt;&lt;br /&gt;
from netCDF4 import Dataset&lt;br /&gt;
&lt;br /&gt;
# HERE WE OPEN THE NETCDF FILE&lt;br /&gt;
nc = Dataset('diagfi.nc')&lt;br /&gt;
&lt;br /&gt;
# HERE WE READ THE VARIABLES (1D OUTPUT)&lt;br /&gt;
Time=nc.variables['Time'][:]&lt;br /&gt;
lat=nc.variables['latitude'][:]&lt;br /&gt;
lon=nc.variables['longitude'][:]&lt;br /&gt;
al=nc.variables['altitude'][:]&lt;br /&gt;
&lt;br /&gt;
# HERE WE READ THE AREA (2D OUTPUT)&lt;br /&gt;
aire_GCM=nc.variables['aire'][:]&lt;br /&gt;
&lt;br /&gt;
# HERE WE READ 3D OUTPUTS&lt;br /&gt;
tsurf=nc.variables['tsurf'][:] # this is the surface temperature 3D field (time, latitude, longitude)&lt;br /&gt;
&lt;br /&gt;
# HERE WE READ 4D OUTPUTS&lt;br /&gt;
temp=nc.variables['temp'][:] # this is the atmospheric temperature 4D field (time, altitude, latitude, longitude)&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
And here is an example of how to manipulate the netCDF data (here to compute the time-averaged surface temperature):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;python&amp;quot; line&amp;gt;&lt;br /&gt;
import numpy as np&lt;br /&gt;
&lt;br /&gt;
mean_tsurf=np.zeros((len(lat),len(lon)),dtype='f')&lt;br /&gt;
&lt;br /&gt;
for i in range(0,len(Time)):&lt;br /&gt;
    for j in range(0,len(lat)):&lt;br /&gt;
        for k in range(0,len(lon)):&lt;br /&gt;
            mean_tsurf[j,k]=mean_tsurf[j,k]+tsurf[i,j,k]*(1./len(Time))&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
And here is a last example of how to plot the data (using matplotlib):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;python&amp;quot; line&amp;gt;&lt;br /&gt;
import matplotlib.pyplot as plt&lt;br /&gt;
&lt;br /&gt;
plt.figure(1)&lt;br /&gt;
plt.contourf(lon,lat,mean_tsurf)&lt;br /&gt;
plt.colorbar(label='Surface Temperature (K)')&lt;br /&gt;
plt.xlabel('Longitude ($^{\circ}$)')&lt;br /&gt;
plt.ylabel('Latitude ($^{\circ}$)')&lt;br /&gt;
plt.show()&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== XARRAY python library (more modern) ====&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Another useful library to deal with netCDF files is ''xarray''. We provide a code snippet below, doing the same thing as the snippets above.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;python&amp;quot; line&amp;gt;&lt;br /&gt;
import numpy as np&lt;br /&gt;
import xarray as xr &lt;br /&gt;
import matplotlib.pyplot as plt &lt;br /&gt;
&lt;br /&gt;
# HERE WE OPEN THE NETCDF FILE&lt;br /&gt;
data = xr.open_dataset('diagfi.nc',&lt;br /&gt;
                       decode_times=False)&lt;br /&gt;
&lt;br /&gt;
# HERE WE READ THE VARIABLES (1D OUTPUT)&lt;br /&gt;
Time=data['Time']&lt;br /&gt;
lat=data['latitude']&lt;br /&gt;
lon=data['longitude']&lt;br /&gt;
al=data['altitude']&lt;br /&gt;
&lt;br /&gt;
# HERE WE READ THE AREA (2D OUTPUT)&lt;br /&gt;
aire_GCM=data['aire']&lt;br /&gt;
&lt;br /&gt;
# HERE WE READ 3D OUTPUTS&lt;br /&gt;
tsurf=data['tsurf'] # this is the surface temperature 3D field (time, latitude, longitude)&lt;br /&gt;
&lt;br /&gt;
# HERE WE READ 4D OUTPUTS&lt;br /&gt;
temp=data['temp'] # this is the atmospheric temperature 4D field (time, altitude, latitude, longitude)&lt;br /&gt;
&lt;br /&gt;
## let's take the time-averaged surface temperature&lt;br /&gt;
mean_tsurf = np.mean(tsurf,axis=0)&lt;br /&gt;
&lt;br /&gt;
##Let's plot a lon-lat map&lt;br /&gt;
fig = plt.figure()&lt;br /&gt;
plt.contourf(lon,lat,mean_tsurf)&lt;br /&gt;
plt.colorbar(label='Surface Temperature (K)')&lt;br /&gt;
plt.xlabel('Longitude ($^{\circ}$)')&lt;br /&gt;
plt.ylabel('Latitude ($^{\circ}$)')&lt;br /&gt;
plt.show()&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
Don't hesitate to use the ''.values'' attribute to convert any ''xarray'' object into a numpy array, especially in case of computation time problems. &lt;br /&gt;
For more examples on how to use ''xarray'', take a look at the [https://docs.xarray.dev/en/stable/index.html documentation]. &lt;br /&gt;
Here is another example of how one can use xarray with multiple netCDF files.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;python&amp;quot; line&amp;gt;&lt;br /&gt;
&lt;br /&gt;
import xarray as xr&lt;br /&gt;
import os&lt;br /&gt;
&lt;br /&gt;
# your folder where output files are stored&lt;br /&gt;
FOLDER = './your_folder_with_output_files/'&lt;br /&gt;
&lt;br /&gt;
# list the files in your FOLDER&lt;br /&gt;
list_files_folder=os.listdir(FOLDER)&lt;br /&gt;
&lt;br /&gt;
# If there are several files,&lt;br /&gt;
# sort your simulation files by date,&lt;br /&gt;
# so the beginning of the simulation is at the top of the list&lt;br /&gt;
# and the end of the simulation at the end of the list.&lt;br /&gt;
list_files_folder.sort()&lt;br /&gt;
&lt;br /&gt;
files = [FOLDER+str(f) for f in list_files_folder]&lt;br /&gt;
# if you want to keep only files of special_year you can add this option :&lt;br /&gt;
# files = [FOLDER+str(f) for f in list_files_folder if f.startswith(&amp;quot;special_year&amp;quot;)]&lt;br /&gt;
&lt;br /&gt;
# xarray will magically concatenate your output files by 'Time' (or any other 'concat_dim' you want)&lt;br /&gt;
nc=xr.open_mfdataset(files,decode_times=False, concat_dim='Time', combine='nested')&lt;br /&gt;
&lt;br /&gt;
# to check your keys&lt;br /&gt;
for key in nc.keys():&lt;br /&gt;
    print(key)&lt;br /&gt;
&lt;br /&gt;
# to load keys (example here with keys for a mesoscale simulation)&lt;br /&gt;
Times = nc['Times'][:]&lt;br /&gt;
PTOT = nc['PTOT'][:]&lt;br /&gt;
T = nc['T'][:]&lt;br /&gt;
W = nc['W'][:]&lt;br /&gt;
&lt;br /&gt;
# you can use some functions to make averages etc&lt;br /&gt;
&lt;br /&gt;
T_moy = T.mean(dim=['Time','south_north','west_east'])&lt;br /&gt;
&lt;br /&gt;
# other functions&lt;br /&gt;
# .cumsum (cumulative sum)&lt;br /&gt;
# .rename (change the name of the object)&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
One cool thing about xarray is that it is well optimized, and can do whatever you want to do on your data, but better than you. See for instance the example below to plot a temperature lon-lat map: xarray handles it in 5 lines of code, where you would need a lot more to set up your plot in traditional matplotlib. And the results look almost good enough for a paper plot. &lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;python&amp;quot; line&amp;gt;&lt;br /&gt;
import xarray as xr&lt;br /&gt;
import matplotlib.pyplot as plt &lt;br /&gt;
##Load your data and print it &lt;br /&gt;
file = '/home/lteinturier/Documents/PhD/wasp43b/chemistry_project/input_5xsolar.nc'&lt;br /&gt;
data = xr.open_dataset(file,decode_times=False)&lt;br /&gt;
print(data)&lt;br /&gt;
##extracting the altitude level #20 for the whole file&lt;br /&gt;
data = data.isel(altitude=20)&lt;br /&gt;
##let's assume that data holds a time series of the temperature. Let's average it in time&lt;br /&gt;
temp = data['temp'].mean(&amp;quot;Time&amp;quot;,keep_attrs=True) #we keep the attribute when averaging to conserve the DataArray structure&lt;br /&gt;
##now we plot &lt;br /&gt;
fig = temp.plot.contourf(cmap='gnuplot',levels=50) #choose the colormap and the number of contourf levels &lt;br /&gt;
fig.ax.set_title(&amp;quot;P = {:.2e} mbar&amp;quot;.format(data.p.mean().values)) ## set up your title. If you don't change it, the title will be the altitude in km of your atmospheric level&lt;br /&gt;
plt.show()&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
This is only a fraction of what Xarray can do. Check the [https://docs.xarray.dev/en/stable/user-guide/index.html documentation] for more.&lt;br /&gt;
&lt;br /&gt;
==== Python tutorials to make pretty visuals ====&lt;br /&gt;
&lt;br /&gt;
We provide a tutorial on how to make pretty visuals using Generic PCM 3-D simulations [https://lmdz-forge.lmd.jussieu.fr/mediawiki/Planets/index.php/Plots_With_PyVista here].&lt;br /&gt;
&lt;br /&gt;
=== Planetoplot ===&lt;br /&gt;
&lt;br /&gt;
Planetoplot is an in-house, Python-based library developed to visualize Generic PCM data.&lt;br /&gt;
&lt;br /&gt;
The code and documentation can be found at: https://nbviewer.org/github/aymeric-spiga/planetoplot/blob/master/tutorial/planetoplot_tutorial.ipynb&lt;br /&gt;
&lt;br /&gt;
[[Category:Generic-Model]]&lt;br /&gt;
[[Category:Generic-LMDZ]]&lt;br /&gt;
[[Category:Generic-DYNAMICO]]&lt;br /&gt;
[[Category:Generic-WRF]]&lt;br /&gt;
[[Category:Generic-1D]]&lt;/div&gt;</summary>
		<author><name>Emillour</name></author>	</entry>

	<entry>
		<id>http://lmdz-forge.lmd.jussieu.fr/mediawiki/Planets/index.php?title=Quick_Install_and_Run&amp;diff=3199</id>
		<title>Quick Install and Run</title>
		<link rel="alternate" type="text/html" href="http://lmdz-forge.lmd.jussieu.fr/mediawiki/Planets/index.php?title=Quick_Install_and_Run&amp;diff=3199"/>
				<updated>2026-02-04T13:33:41Z</updated>
		
		<summary type="html">&lt;p&gt;Emillour: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;In this page we give a hopefully exhaustive enough overview of the necessary prerequisites and steps to download, compile and run a simple simulation with the GCM in an &amp;quot;Early Mars&amp;quot; setup (i.e. a desert planet with a CO2 atmosphere) on a Linux computer.&lt;br /&gt;
&lt;br /&gt;
Note that there is a dedicated install script that attempts to do all these steps (up to and including running the simulation), which you can obtain here: https://web.lmd.jussieu.fr/~lmdz/planets/generic/install_scripts/install_lmdz_generic_earlymars.bash&lt;br /&gt;
Automating the process is not trivial as there are many subtle variants of Linux flavors and user account setups, so the script may fail in your case, but hopefully the information given in this page should help you solve the problems encountered.&lt;br /&gt;
&lt;br /&gt;
== Prerequisites: Tools and Libraries ==&lt;br /&gt;
In order to use (i.e. compile and run) the GCM, one needs to have some tools and installed libraries at hand. We list below a (minimal) set that you should check is available and/or that you'll need to first install on your machine. Note that we assume in this tutorial that you are on a Linux native-OS/cluster.&lt;br /&gt;
&lt;br /&gt;
===  Fortran compiler ===&lt;br /&gt;
The GCM source code is in Fortran. One thus needs a Fortran compiler to build (compile) the executable.&lt;br /&gt;
The most easily available one (on Linux) is gfortran and examples discussed here will assume it is the one used.&lt;br /&gt;
You can check that you indeed have a gfortran compiler at hand with the following Bash command:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
which gfortran&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
which should return something like&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
/usr/bin/gfortran&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Subversion ===&lt;br /&gt;
The source code is managed using subversion (svn), which you'll need to download or update. Leaving aside the subtleties of svn and code organization for now, downloading the code amounts to doing the following:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
svn checkout https://svn.lmd.jussieu.fr/Planeto/trunk --depth empty&lt;br /&gt;
cd trunk&lt;br /&gt;
svn update LMDZ.COMMON LMDZ.GENERIC&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
As a side note: the source code that will be fetched by svn can also be browsed online here: https://trac.lmd.jussieu.fr/Planeto&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--Note: if the command line above doesn't work, you may also try to replace 'http' by 'https'.--&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Git === &lt;br /&gt;
&lt;br /&gt;
Alternatively to svn, you can use [[Git usage|git to download the source code]]. &lt;br /&gt;
&lt;br /&gt;
=== FCM ===&lt;br /&gt;
The FCM (Flexible Configuration Management) tool is a suite of Perl scripts to help build and manage codes. We use a slightly modified version which can be obtained using subversion (svn). Ideally you'll want to download it somewhere on your computer once and for all. To do this:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
svn checkout https://forge.ipsl.jussieu.fr/fcm/svn/PATCHED/FCM_V1.2&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
You'll then need to add the resulting FCM_V1.2/bin to your PATH environment variable so that the command &amp;quot;fcm&amp;quot; may be used from anywhere on your machine, e.g. by adding the following line in your .bashrc (the .bashrc file is a hidden configuration script in your home directory (~/.bashrc) that runs whenever you start a new Bash shell):&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
export PATH=$PATH:$HOME/FCM_V1.2/bin&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
The assumption here is that the downloaded FCM_V1.2 directory is in your home ($HOME) directory. Adapt accordingly if not.&lt;br /&gt;
&lt;br /&gt;
=== the NetCDF library ===&lt;br /&gt;
The GCM reads and writes input and output files in NetCDF format, therefore a NetCDF library is required. Most clusters provide a NetCDF library that you can load before using the model. &lt;br /&gt;
&lt;br /&gt;
If this library is not available, you can install it yourself on your system (check out [[the netCDF library]] page for more). You can use the following home-made &amp;quot;install_netcdf4_hdf5_seq.bash&amp;quot; script to do so. For this, ensure that you are in your home directory:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
mkdir netcdf&lt;br /&gt;
cd netcdf&lt;br /&gt;
wget -nv --no-check-certificate http://www.lmd.jussieu.fr/~lmdz/pub/script_install/install_netcdf4_hdf5_seq.bash&lt;br /&gt;
chmod u=rwx install_netcdf4_hdf5_seq.bash&lt;br /&gt;
./install_netcdf4_hdf5_seq.bash &amp;gt; netcdf.log 2&amp;gt;&amp;amp;1&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
Compiling the library and dependencies can take a while (&amp;gt;&amp;gt;15 minutes; be patient).&lt;br /&gt;
Once this is done, check file netcdf.log to verify that all went well.&lt;br /&gt;
You may want to also add its &amp;quot;bin&amp;quot; directory to your PATH environment variable by adding in your .bashrc a line of:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
export PATH=$PATH:$HOME/netcdf/bin&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
The assumption here is that you have run the &amp;quot;install_netcdf4_hdf5_seq.bash&amp;quot; script in a &amp;quot;netcdf&amp;quot; subdirectory of your home directory. Adapt accordingly if not.&lt;br /&gt;
&lt;br /&gt;
As a side note: The NetCDF library provides a very simple command line tool (ncdump) to inspect the contents of NetCDF files, but you'll need more advanced visualization tools (e.g., Panoply, Python scripts, etc. - see further down this page in the &amp;quot;Checking the Results&amp;quot; section) for more advanced post-processing of the outputs.&lt;br /&gt;
&lt;br /&gt;
=== the IOIPSL library ===&lt;br /&gt;
&lt;br /&gt;
The IOIPSL (Input/Output IPSL) library is designed to handle both the reading of some input files used by the GCM (the *.def files which are described further below) and the writing of some NetCDF output files.&lt;br /&gt;
&lt;br /&gt;
==== Automated IOIPSL install script ====&lt;br /&gt;
Scripts to download and install the IOIPSL library can be found in the &amp;quot;ioipsl&amp;quot; subdirectory of the &amp;quot;LMDZ.COMMON&amp;quot; directory. Since here we assume we're working with gfortran, the relevant one is &amp;quot;install_ioipsl_gfortran.bash&amp;quot;. If your PATH environment variable already includes the path to your NetCDF library distribution's bin directory (see previous section) then all you need to do is execute the script:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
./install_ioipsl_gfortran.bash&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
If all went well the script should end with:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
OK: ioipsl library is in ...&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
''(for further details about [[The_IOIPSL_Library|the IOIPSL library]] and installing it, follow the link and/or use the Search Box at the top of this page)''&lt;br /&gt;
&lt;br /&gt;
== GCM Input Datafiles and Datasets ==&lt;br /&gt;
In order to run, the GCM needs some inputs, such as initial conditions (values of state variables), external inputs (e.g. optical properties of aerosols) and simulation setup (e.g. specifications on how long to run, which parametrizations should be activated, etc.)&lt;br /&gt;
&lt;br /&gt;
In the spirit of the illustrative example considered here (an &amp;quot;Early Mars&amp;quot; simulation), a set of necessary input data may be downloaded with:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
wget -nv --no-check-certificate https://web.lmd.jussieu.fr/~lmdz/planets/generic/reference_setups/reference_earlymars_32x32x15_b32x36.tar.gz&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
Once unpacked (to do that, you can execute the command &amp;quot;tar xvzf reference_earlymars_32x32x15_b32x36.tar.gz&amp;quot;), the resulting &amp;quot;reference_earlymars_32x32x15_b32x36&amp;quot; directory will contain all that is needed, namely:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
callphys.def  gases.def  startfi.nc  traceur.def&lt;br /&gt;
datadir/      run.def    start.nc    z2sig.def&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
* Initial condition NetCDF files ''start.nc'' and ''startfi.nc''; the first containing initial condition values for the dynamics and the second initial condition values for the physics.&lt;br /&gt;
* A '''datadir''' directory containing external inputs (aerosol properties, stellar spectra, etc.)&lt;br /&gt;
* Some ASCII *.def files containing run parameters, namely:&lt;br /&gt;
# [[The_run.def_Input_File | run.def]] : &amp;quot;master def file&amp;quot; containing main run parameters&lt;br /&gt;
# [[The_callphys.def_Input_File | callphys.def]] : file containing flags and keys for the various physics parametrizations&lt;br /&gt;
# [[The_z2sig.def_Input_File | z2sig.def]] : file describing the sought vertical discretization&lt;br /&gt;
# [[The_traceur.def_Input_File | traceur.def]] : file specifying the tracer number and names&lt;br /&gt;
# [[The_gases.def_Input_File | gases.def]] : file specifying the list of gases (main and trace) in the atmosphere&lt;br /&gt;
&lt;br /&gt;
== Compiling the GCM ==&lt;br /&gt;
Now that all the prerequisites are fulfilled, it is (almost!) time to compile the GCM.&lt;br /&gt;
&lt;br /&gt;
=== Prior to a first compilation: setting up the target architecture files ===&lt;br /&gt;
Compiling the model is done using a dedicated Bash script ''makelmdz_fcm'' located in the '''LMDZ.COMMON''' directory. This script however relies on ''architecture files''. These files contain information on which compiler to use, what compilation options to use, where relevant libraries are located, etc. In practice, one must thus create these ASCII text files in the '''arch/''' subdirectory of '''LMDZ.COMMON'''. The naming convention is rather straightforward: when the script ''makelmdz_fcm'' is run with the option '''-arch somename''', it will look for files ''arch/arch-somename.env'', ''arch/arch-somename.path'' and ''arch/arch-somename.fcm''. Leaving aside a detailed description for later (see [[The_Target_Architecture_(&amp;quot;arch&amp;quot;)_Files|this page]]), here we mention that:&lt;br /&gt;
* the ''arch*.env'' is an optional file containing ''environment'' information, such as setting up environment variables or loading modules on some machines, e.g.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
export NETCDF_HOME=/path/to/the/netcdf/distribution&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
A more realistic (but more specific) example of a '''arch*.env''' file using &amp;quot;recent&amp;quot; module commands, adapted for compilation and visualisation on a given supercomputer, would look more like the following:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
module purge&lt;br /&gt;
module load GCC/10.3.0  OpenMPI/4.1.1&lt;br /&gt;
module load netCDF-Fortran/4.5.3&lt;br /&gt;
export NETCDF_INCDIR=&amp;quot;/opt/ebsofts/netCDF-Fortran/4.5.3-gompi-2021a/include&amp;quot;&lt;br /&gt;
export NETCDFF_LIBDIR=&amp;quot;/opt/ebsofts/netCDF-Fortran/4.5.3-gompi-2021a/lib&amp;quot;&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
Note that the last two lines above specify paths to the '''include''' and '''lib''' directories used on this system and will certainly vary from system to system. Likewise, the exact module versions will most likely need to be adapted in your specific '''arch*.env''' file.&lt;br /&gt;
* the '''arch*.path''' is a mandatory file containing information relative to external libraries such as NetCDF and IOIPSL, e.g.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
ROOT=$PWD&lt;br /&gt;
&lt;br /&gt;
NETCDF_LIBDIR=&amp;quot;-L${NETCDF_HOME}/lib&amp;quot;&lt;br /&gt;
NETCDF_LIB=&amp;quot;-lnetcdf -lnetcdff&amp;quot;&lt;br /&gt;
NETCDF_INCDIR=&amp;quot;-I${NETCDF_HOME}/include&amp;quot;&lt;br /&gt;
&lt;br /&gt;
IOIPSL_INCDIR=&amp;quot;-I$ROOT/../IOIPSL/inc&amp;quot;&lt;br /&gt;
IOIPSL_LIBDIR=&amp;quot;-L$ROOT/../IOIPSL/lib&amp;quot;&lt;br /&gt;
IOIPSL_LIB=&amp;quot;-lioipsl&amp;quot;&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
Each library is referenced by a fixed identifier (NETCDF, IOIPSL, XIOS, ...) and 3 trailing strings: '''_LIBDIR''', for the path to the library, '''_LIB''', for the library name(s), and '''_INCDIR''' for the path to the library's ''include'' directory.&lt;br /&gt;
&lt;br /&gt;
* the '''arch*.fcm''' is a mandatory file containing information relative to the compiler and compilation options, e.g.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
%COMPILER            gfortran&lt;br /&gt;
%LINK                gfortran&lt;br /&gt;
%AR                  ar&lt;br /&gt;
%MAKE                make&lt;br /&gt;
%FPP_FLAGS           -P -traditional&lt;br /&gt;
%FPP_DEF             NC_DOUBLE&lt;br /&gt;
%BASE_FFLAGS         -c -fdefault-real-8 -fdefault-double-8 -ffree-line-length-none -fno-align-commons&lt;br /&gt;
%PROD_FFLAGS         -O3&lt;br /&gt;
%DEV_FFLAGS          -O&lt;br /&gt;
%DEBUG_FFLAGS        -ffpe-trap=invalid,zero,overflow -fbounds-check -g3 -O0 -fstack-protector-all -finit-real=snan -fbacktrace&lt;br /&gt;
%MPI_FFLAGS&lt;br /&gt;
%OMP_FFLAGS         &lt;br /&gt;
%BASE_LD     &lt;br /&gt;
%MPI_LD&lt;br /&gt;
%OMP_LD              &lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
Again, not going into a detailed description (follow [[The_Target_Architecture_(&amp;quot;arch&amp;quot;)_Files|this link]] for that), just note here that each line corresponds to a keyword (starting with &amp;quot;%&amp;quot;) followed by the relevant options. Here, we mention a few of the main ones:&lt;br /&gt;
* %COMPILER: The compiler to use (here, gfortran)&lt;br /&gt;
* %BASE_FFLAGS: compiler options (always included)&lt;br /&gt;
* %PROD_FFLAGS: compilation flags to include if makelmdz_fcm is run with the &amp;quot;-prod&amp;quot; option&lt;br /&gt;
* %DEBUG_FFLAGS: compilation flags to include if makelmdz_fcm is run with the &amp;quot;-debug&amp;quot; option&lt;br /&gt;
* %BASE_LD: flags to add at the linking step of the compilation&lt;br /&gt;
&lt;br /&gt;
Note that if you are using a recent version of gfortran (version 10 or later), you must add an extra option to the %BASE_FFLAGS, namely '''-fallow-argument-mismatch'''.&lt;br /&gt;
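For instance, with the gfortran flags from the example above, the amended line would read:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
%BASE_FFLAGS         -c -fdefault-real-8 -fdefault-double-8 -ffree-line-length-none -fno-align-commons -fallow-argument-mismatch&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;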
&lt;br /&gt;
Also note that you can find in the '''LMDZ.COMMON/arch/''' directory many examples of arch files that you can re-use as-is if you compile the model on one of our usual computing clusters (e.g. Spirit, Adastra, etc.). Just check the content of the directory to see if your favorite computing cluster already has arch files.&lt;br /&gt;
&lt;br /&gt;
=== Compiling a test case (early Mars) ===&lt;br /&gt;
To compile the GCM at the desired resolution for the Early Mars test case run (from within LMDZ.COMMON):&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
./makelmdz_fcm -arch local -p generic -d 32x32x15 -b 32x36 gcm &lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
Here, we assume that you have generated the '''arch-local.*''' files as suggested in the previous section.&lt;br /&gt;
The options for ''makelmdz_fcm'' used here imply:&lt;br /&gt;
* '''-p generic''': the GCM will use the &amp;quot;generic&amp;quot; physics package&lt;br /&gt;
* '''-d 32x32x15''': the GCM grid will be 32x32 in longitude x latitude, with 15 vertical levels.&lt;br /&gt;
* '''-b 32x36''': the physics radiative transfer will be done using 32 bands in the IR and 36 in the visible.&lt;br /&gt;
For a glimpse at all the possible ''makelmdz_fcm'' options and their meanings, run:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
./makelmdz_fcm -h&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
and/or check the dedicated [[The_makelmdz_fcm_GCM_Compilation_Script|makelmdz_fcm page]].&lt;br /&gt;
&lt;br /&gt;
Upon successful compilation, the executable '''gcm_32x32x15_phygeneric_b32x36_seq.e''' should be generated in the '''bin''' subdirectory.&lt;br /&gt;
&lt;br /&gt;
=== Known issues ===&lt;br /&gt;
&lt;br /&gt;
If the compilation fails, it might be due to the options used in the arch file. &lt;br /&gt;
For example, if you are using a gfortran version prior to 10, you could get an error such as:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
gfortran: error: unrecognized command line option ‘-fallow-argument-mismatch’; did you mean ‘-Wno-argument-mismatch’?&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
This can be solved by removing the option '''-fallow-argument-mismatch''' from the arch.fcm file.&lt;br /&gt;
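If you are unsure which gfortran version is at hand, you can check it with:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
gfortran --version&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;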
&lt;br /&gt;
If you are using a recent version of gfortran (10 or beyond) without the option '''-fallow-argument-mismatch''', the compilation will probably fail with an error such as:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
  136 |      .       idim_index,nvarid)&lt;br /&gt;
      |             2                                       &lt;br /&gt;
......&lt;br /&gt;
  211 |       ierr = NF_DEF_VAR (nid, &amp;quot;aire&amp;quot;, NF_DOUBLE, 2, id,nvarid)&lt;br /&gt;
      |                                                    1&lt;br /&gt;
Error: Rank mismatch between actual argument at (1) and actual argument at (2) (scalar and rank-1)&lt;br /&gt;
fcm_internal compile failed (256)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Add the '''-fallow-argument-mismatch''' compilation option to the arch.fcm file to solve the issue.&lt;br /&gt;
&lt;br /&gt;
== Running the GCM ==&lt;br /&gt;
To run your first simulation, you first need to copy (or move) the executable '''gcm_32x32x15_phygeneric_b32x36_seq.e''' to the directory containing the initial conditions and parameter files, e.g. '''reference_earlymars_32x32x15_b32x36''', and run it from there.&lt;br /&gt;
This is usually a two-step process: the (optional) first step is to source the environment architecture file (the very same one that was used to compile the model), e.g.:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
source ../LMDZ.COMMON/arch.env&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
The second step is to execute the model, e.g.:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
./gcm_32x32x15_phygeneric_b32x36_seq.e &amp;gt; gcm.out 2&amp;gt;&amp;amp;1&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
With this command line, the (text) output messages are redirected into a text file, '''gcm.out'''. It is convenient to keep this file for later inspection (e.g., to track a bug). If there is no redirection (only '''./gcm_32x32x15_phygeneric_b32x36_seq.e'''), then the outputs are printed directly to the screen.&lt;br /&gt;
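A convenient way to monitor the progress of a running simulation is then to follow the growth of this file, e.g. with the standard ''tail'' command:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
tail -f gcm.out&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;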
&lt;br /&gt;
== Checking the Results of a Simulation ==&lt;br /&gt;
Once the simulation is finished, you'll know that all went well (&amp;quot;everything is cool&amp;quot;) if the last few lines of the standard text output read:&lt;br /&gt;
[[File:tsurf_benchmark_early_Mars.png|300px|thumb|Final surface temperature map of the reference simulation (plotted using Panoply).]]&lt;br /&gt;
[[File:water_ice_cloud_column_benchmark_early_Mars.png|300px|thumb|Final water ice cloud column map of the reference simulation (plotted using Panoply).]]&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
 in abort_gcm&lt;br /&gt;
 Stopping in leapfrog&lt;br /&gt;
 Reason = Simulation finished &lt;br /&gt;
 Everything is cool&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
If not, start looking for an error message and a way to fix the problem...&lt;br /&gt;
&lt;br /&gt;
Apart from the standard text output messages from the GCM, which are mostly for monitoring and checking the simulation progress, the user will more likely be interested in checking the contents of the ''diagfi.nc'' file produced by the GCM, as it contains instantaneous values of the main model variables (atmospheric temperature, winds, etc.).&lt;br /&gt;
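For a quick first look at the contents of this file (dimensions and list of variables), one can for instance use the ''ncdump'' utility which ships with the NetCDF distribution (assuming it is in your PATH):&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
ncdump -h diagfi.nc&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;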
&lt;br /&gt;
To check that you have successfully run the simulation, we provide some reference plots for the simulation described in this tutorial (early Mars reference, 32x32x15 resolution) against which you can evaluate your own results.&lt;br /&gt;
&lt;br /&gt;
In the plots shown here, we present maps of the surface temperature ('tsurf' variable) and the water ice cloud column ('h2o_ice_col' variable), both plotted using Panoply.&lt;br /&gt;
&lt;br /&gt;
Side note: there is a variety of freely available software packages that can be used to visualise the NetCDF ''diagfi.nc'' file, such as Panoply, Ferret, Ncview, Grads, Python, etc. (see more details in the [[Tool_Box | Tool Box section]])&lt;br /&gt;
&lt;br /&gt;
== Taking Things to the Next Level ==&lt;br /&gt;
The short tutorial presented in this page is meant to give an overview of what is required to install and run the GCM, and to check the results of a simulation. Moving on to a more intensive and problem-specific usage will require diving into additional topics and aspects such as:&lt;br /&gt;
* Selecting the appropriate inputs and run parameters for a given study.&lt;br /&gt;
* Compiling and running in parallel (MPI and/or OpenMP) to obtain results in a reasonable time frame.&lt;br /&gt;
* Post-processing and analysis of model outputs.&lt;br /&gt;
All these points and much more are detailed in the many pages of this site (do check out the menu on the left, and don't hesitate to make intensive use of the site's search engine)!&lt;br /&gt;
&lt;br /&gt;
[[Category:Generic-Model]]&lt;br /&gt;
[[Category:Generic-LMDZ]]&lt;/div&gt;</summary>
		<author><name>Emillour</name></author>	</entry>

	<entry>
		<id>http://lmdz-forge.lmd.jussieu.fr/mediawiki/Planets/index.php?title=The_XIOS_Library&amp;diff=3192</id>
		<title>The XIOS Library</title>
		<link rel="alternate" type="text/html" href="http://lmdz-forge.lmd.jussieu.fr/mediawiki/Planets/index.php?title=The_XIOS_Library&amp;diff=3192"/>
				<updated>2026-01-30T08:52:07Z</updated>
		
		<summary type="html">&lt;p&gt;Emillour: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;The [https://forge.ipsl.jussieu.fr/ioserver/wiki XIOS] (Xml I/O Server) library is based on client-server principles where the server manages the outputs asynchronously from the client (the climate model) so that the bottleneck of writing data is alleviated.&lt;br /&gt;
&lt;br /&gt;
== Installing the XIOS library ==&lt;br /&gt;
&lt;br /&gt;
=== Prerequisites ===&lt;br /&gt;
There are a couple of prerequisites to installing and using the XIOS library:&lt;br /&gt;
# An MPI library must be available&lt;br /&gt;
# A NetCDF4-HDF5 library, preferably compiled with MPI enabled, must be available (see, e.g. dedicated section on  [[The_netCDF_library]])&lt;br /&gt;
The rest of this page assumes all prerequisites are met. People interested in building an appropriate NetCDF library on their Linux machine might be interested in the following installation script: https://web.lmd.jussieu.fr/~lmdz/pub/script_install/install_netcdf4_hdf5.bash (which might need some adaptations to work in your specific case).&lt;br /&gt;
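As a quick (non-exhaustive) sanity check of these prerequisites, one can verify that an MPI Fortran wrapper is available and that the NetCDF library was built with NetCDF4/HDF5 support, e.g. (assuming the ''nc-config'' utility that ships with NetCDF is installed):&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
which mpif90&lt;br /&gt;
nc-config --has-nc4&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;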
&lt;br /&gt;
=== Downloading and compiling the XIOS library ===&lt;br /&gt;
The XIOS source code is available for download using svn (subversion). To download it, go to your trunk directory and run, e.g.:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
svn co http://forge.ipsl.fr/ioserver/svn/XIOS/trunk XIOS&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
* To compile the library, one must first have adequate architecture &amp;quot;arch&amp;quot; files at hand, just like for the GCM (see [[The_Target Architecture_(&amp;quot;arch&amp;quot;)_Files]]). In principle both ''arch.env'' and ''arch.path'' files could be the same as for the GCM; ''arch.fcm'' will of course differ, as the XIOS source code is in C++ (along with a Fortran interface). If using a &amp;quot;known&amp;quot; machine (e.g. Occigen, Irene-Rome, Ciclad) then ready-to-use, up-to-date arch files for that machine should be present in the ''arch'' directory. If not, you will have to create your own (it is advised to use the existing ones as templates!)&lt;br /&gt;
* Assuming ''some_machine'' arch files (i.e. files ''arch-some_machine.env'', ''arch-some_machine.path'', ''arch-some_machine.fcm'') are present in the '''arch''' subdirectory, compiling the XIOS is done using the dedicated ''make_xios'' script, e.g.:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
./make_xios --prod --arch some_machine --job 8 &lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
If the compilation steps went well then the '''lib''' directory should contain the file ''libxios.a'' and the '''bin''' directory should contain&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
fcm_env.ksh  generic_testcase.exe  xios_server.exe&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== XIOS documentation ===&lt;br /&gt;
Note that the downloaded XIOS distribution includes some documentation in the '''doc''' subdirectory:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
reference_xml.pdf  XIOS_reference_guide.pdf  XIOS_user_guide.pdf&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Definitely worth checking out!&lt;br /&gt;
&lt;br /&gt;
== Compiling the GCM with the XIOS library ==&lt;br /&gt;
&lt;br /&gt;
To compile with XIOS enabled, one must pass the option&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
 -io xios&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
to the [[The makelmdz fcm GCM Compilation Script|makelmdz_fcm]] script.&lt;br /&gt;
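As a sketch (the resolution, number of bands and arch file name are taken from examples elsewhere in this documentation and should be adapted to your own setup), a complete compilation command combining XIOS with MPI/OpenMP parallelism could look like:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
./makelmdz_fcm -arch local -p generic -parallel mpi_omp -io xios -d 32x32x15 -b 32x36 gcm&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;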
&lt;br /&gt;
&lt;br /&gt;
== XIOS output controls ==&lt;br /&gt;
&lt;br /&gt;
All aspects of the outputs (name, units, file, post-processing operations, etc.) are controlled by dedicated XML files which are read at run-time. Samples of xml files are provided in the &amp;quot;deftank&amp;quot; directory.&lt;br /&gt;
&lt;br /&gt;
=== In a nutshell ===&lt;br /&gt;
* the master file read by XIOS is ''iodef.xml'', which contains specific XIOS parameters such as ''using_server'', to dictate whether XIOS is run in client-server (true) or attached (false) mode, ''info_level'', to set the verbosity of XIOS messages (0: none, 100: very verbose), and ''print_file'', to set whether XIOS messages will be sent to standard output (false) or to dedicated xios_*.out and xios_*.err files (true).&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;xml&amp;quot;&amp;gt;&lt;br /&gt;
                        &amp;lt;variable id=&amp;quot;using_server&amp;quot; type=&amp;quot;bool&amp;quot;&amp;gt;false&amp;lt;/variable&amp;gt;&lt;br /&gt;
                        &amp;lt;variable id=&amp;quot;info_level&amp;quot; type=&amp;quot;int&amp;quot;&amp;gt;0&amp;lt;/variable&amp;gt;&lt;br /&gt;
                        &amp;lt;variable id=&amp;quot;print_file&amp;quot; type=&amp;quot;bool&amp;quot;&amp;gt; false &amp;lt;/variable&amp;gt;&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
* It is common practice to have LMDZ-related definitions and outputs in separate XML files, e.g. ''context_lmdz.xml'', which are included in ''iodef.xml'' via the ''src'' attribute, e.g.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;xml&amp;quot;&amp;gt;&lt;br /&gt;
       &amp;lt;context id=&amp;quot;LMDZ&amp;quot; src=&amp;quot;./context_lmdz_physics.xml&amp;quot;/&amp;gt;&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
The ''context_lmdz_physics.xml'' file must then contain all field/grid/file output definitions, which may be split into multiple XML files. For instance the definition of model variables (i.e. all fields that may be output) is often put in a separate file ''field_def_physics.xml'', which is referenced within ''context_lmdz_physics.xml'' as:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;xml&amp;quot;&amp;gt;&lt;br /&gt;
       &amp;lt;field_definition src=&amp;quot;./field_def_physics.xml&amp;quot; /&amp;gt;&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
Concerning output files, the current recommended practice is to use separate ''file_def_histsomething_lmdz.xml'' files, one for each ''histsomething.nc'' file to generate, and to include these in ''context_lmdz.xml'' using the ''file_definition'' key, e.g.:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;xml&amp;quot;&amp;gt;&lt;br /&gt;
  &amp;lt;!-- Define output files&lt;br /&gt;
              Each file contains the list of variables and their output levels --&amp;gt;&lt;br /&gt;
  &amp;lt;file_definition src=&amp;quot;./file_def_histins.xml&amp;quot;/&amp;gt;&lt;br /&gt;
  &amp;lt;file_definition src=&amp;quot;./file_def_specIR.xml&amp;quot;/&amp;gt;&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Some XIOS key concepts ===&lt;br /&gt;
==== calendars ====&lt;br /&gt;
The calendar is set via the Fortran source code (see '''xios_output_mod.F90''' in the physics). Without going into details here, note that it is flexible enough that day length, year length, etc. may be defined by the user. However a strong limitation is that the calendar time step must be an integer number of seconds.&lt;br /&gt;
&lt;br /&gt;
TODO: refer to specific stuff/settings for Mars, Generic, Venus cases...&lt;br /&gt;
&lt;br /&gt;
==== axes, domains and grids ====&lt;br /&gt;
First a bit of XIOS nomenclature:&lt;br /&gt;
* an '''axis''' is 1D; e.g. pseudo-altitude or pseudo-pressure or sub-surface depth or wavelength or ...&lt;br /&gt;
* a '''domain''' is a horizontal 2D surface; e.g. the globe or some portion of it&lt;br /&gt;
* a '''grid''' is the combination of a domain and one axis (or more); e.g. the atmosphere or the sub-surface of a planet&lt;br /&gt;
Most of the '''axis''' and '''domain''' elements are defined in the code (since all the information is known there) and only referred to in the XML via dedicated '''id''' values, e.g.:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;xml&amp;quot;&amp;gt;&lt;br /&gt;
    &amp;lt;axis_definition&amp;gt;&lt;br /&gt;
        &amp;lt;axis id=&amp;quot;presnivs&amp;quot; &lt;br /&gt;
              standard_name=&amp;quot;Pseudo-pressure of model vertical levels&amp;quot; &lt;br /&gt;
              unit=&amp;quot;Pa&amp;quot;&amp;gt;&lt;br /&gt;
        &amp;lt;/axis&amp;gt;&lt;br /&gt;
        &amp;lt;axis id=&amp;quot;altitude&amp;quot; &lt;br /&gt;
              standard_name=&amp;quot;Pseudo-altitude of model vertical levels&amp;quot; &lt;br /&gt;
              unit=&amp;quot;km&amp;quot;&amp;gt;&lt;br /&gt;
        &amp;lt;/axis&amp;gt;&lt;br /&gt;
    &amp;lt;/axis_definition&amp;gt;&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
Likewise the global computational domain is defined in the code and known in the XML via its '''id''' (=&amp;quot;dom_glo&amp;quot;):&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;xml&amp;quot;&amp;gt;&lt;br /&gt;
    &amp;lt;domain_definition&amp;gt;&lt;br /&gt;
        &amp;lt;domain id=&amp;quot;dom_glo&amp;quot; data_dim=&amp;quot;2&amp;quot; /&amp;gt;&lt;br /&gt;
    &amp;lt;/domain_definition&amp;gt;&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
From there one may generate a grid, e.g.:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;xml&amp;quot;&amp;gt;&lt;br /&gt;
    &amp;lt;grid_definition&amp;gt;&lt;br /&gt;
        &amp;lt;!-- toggle axis id below to change output vertical axis --&amp;gt;&lt;br /&gt;
        &amp;lt;grid id=&amp;quot;grid_3d&amp;quot;&amp;gt;&lt;br /&gt;
            &amp;lt;domain id=&amp;quot;dom_glo&amp;quot; /&amp;gt;&lt;br /&gt;
            &amp;lt;!-- &amp;lt;axis id=&amp;quot;presnivs&amp;quot; /&amp;gt; --&amp;gt;&lt;br /&gt;
            &amp;lt;axis id=&amp;quot;altitude&amp;quot; /&amp;gt;&lt;br /&gt;
        &amp;lt;/grid&amp;gt;&lt;br /&gt;
    &amp;lt;/grid_definition&amp;gt;&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
Note that '''grid_3d''' is defined in the XML file and thus may be changed by the user without having to modify the PCM source code, for instance by simply adding the following definitions:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;xml&amp;quot;&amp;gt;&lt;br /&gt;
    &amp;lt;domain_definition&amp;gt;&lt;br /&gt;
...&lt;br /&gt;
        &amp;lt;domain id=&amp;quot;dom_128_96&amp;quot; ni_glo=&amp;quot;128&amp;quot; nj_glo=&amp;quot;96&amp;quot; type=&amp;quot;rectilinear&amp;quot;  &amp;gt;&lt;br /&gt;
          &amp;lt;generate_rectilinear_domain/&amp;gt;&lt;br /&gt;
          &amp;lt;interpolate_domain order=&amp;quot;1&amp;quot;/&amp;gt;&lt;br /&gt;
        &amp;lt;/domain&amp;gt;&lt;br /&gt;
    &amp;lt;/domain_definition&amp;gt;&lt;br /&gt;
    &amp;lt;grid_definition&amp;gt;&lt;br /&gt;
...&lt;br /&gt;
        &amp;lt;grid id=&amp;quot;my_grid_3d&amp;quot;&amp;gt;&lt;br /&gt;
            &amp;lt;domain id=&amp;quot;dom_128_96&amp;quot; /&amp;gt;&lt;br /&gt;
            &amp;lt;axis id=&amp;quot;altitude&amp;quot; /&amp;gt;&lt;br /&gt;
        &amp;lt;/grid&amp;gt;&lt;br /&gt;
    &amp;lt;/grid_definition&amp;gt;&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
Requesting that variables be output on grid '''my_grid_3d''' will trigger XIOS interpolations so that the output fields are on a regular 128x96 longitude x latitude grid.&lt;br /&gt;
&lt;br /&gt;
==== field definitions ====&lt;br /&gt;
For XIOS a field is defined with an '''id''' and must be assigned to a reference '''grid''' (this is how XIOS knows whether a field is a simple scalar, a surface or a volume, and thus to which computational grid it is related), e.g.:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;xml&amp;quot;&amp;gt;&lt;br /&gt;
    &amp;lt;field_definition prec=&amp;quot;4&amp;quot;&amp;gt;&lt;br /&gt;
       &amp;lt;field_group id=&amp;quot;fields_2D&amp;quot; domain_ref=&amp;quot;dom_glo&amp;quot;&amp;gt;&lt;br /&gt;
           &amp;lt;field id=&amp;quot;aire&amp;quot;&lt;br /&gt;
                   long_name=&amp;quot;Mesh area&amp;quot;&lt;br /&gt;
                   unit=&amp;quot;m2&amp;quot; /&amp;gt;&lt;br /&gt;
           &amp;lt;field id=&amp;quot;phis&amp;quot;&lt;br /&gt;
                   long_name=&amp;quot;Surface geopotential (gz)&amp;quot; &lt;br /&gt;
                   unit=&amp;quot;m2/s2&amp;quot; /&amp;gt;&lt;br /&gt;
            &amp;lt;field id=&amp;quot;tsol&amp;quot;&lt;br /&gt;
                   long_name=&amp;quot;Surface Temperature&amp;quot; &lt;br /&gt;
                   unit=&amp;quot;K&amp;quot; /&amp;gt;&lt;br /&gt;
...&lt;br /&gt;
&lt;br /&gt;
       &amp;lt;/field_group&amp;gt;&lt;br /&gt;
&lt;br /&gt;
        &amp;lt;field_group id=&amp;quot;fields_3D&amp;quot; grid_ref=&amp;quot;grid_3d&amp;quot;&amp;gt;&lt;br /&gt;
            &amp;lt;field id=&amp;quot;temp&amp;quot;&lt;br /&gt;
                   long_name=&amp;quot;Atmospheric temperature&amp;quot;&lt;br /&gt;
                   unit=&amp;quot;K&amp;quot; /&amp;gt;&lt;br /&gt;
            &amp;lt;field id=&amp;quot;pres&amp;quot;&lt;br /&gt;
                   long_name=&amp;quot;Atmospheric pressure&amp;quot; &lt;br /&gt;
                   unit=&amp;quot;Pa&amp;quot; /&amp;gt;&lt;br /&gt;
...&lt;br /&gt;
&lt;br /&gt;
       &amp;lt;/field_group&amp;gt;&lt;br /&gt;
&lt;br /&gt;
    &amp;lt;/field_definition&amp;gt;&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
It is vital that all the fields which are sent to XIOS via the code are declared in the XML file; otherwise there will be a run-time error message along the lines of:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
In file &amp;quot;object_factory_impl.hpp&amp;quot;, function &amp;quot;static std::shared_ptr&amp;lt;U&amp;gt; xios::CObjectFactory::GetObject(const std::__cxx11::basic_string&amp;lt;char, std::char_traits&amp;lt;char&amp;gt;, std::allocator&amp;lt;char&amp;gt;&amp;gt; &amp;amp;) [with U = xios::CAxis]&amp;quot;,  line 78 -&amp;gt; [ id = weirdvar, U = field ] object was not found.&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
In the message above, XIOS received from the code a variable called &amp;quot;weirdvar&amp;quot; which is not defined in the XML... One must update the XML file with the proper definition (&amp;lt;field id=&amp;quot;weirdvar&amp;quot; ... /&amp;gt;).&lt;br /&gt;
&lt;br /&gt;
==== output file definitions ====&lt;br /&gt;
It is by defining a '''file''' that the user specifies what the output file will be, which variables it will contain, etc., as illustrated by this simple Venusian example:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;xml&amp;quot;&amp;gt;&lt;br /&gt;
    &amp;lt;file_definition&amp;gt;&lt;br /&gt;
        &amp;lt;!-- Instantaneous outputs; every physics time step --&amp;gt;&lt;br /&gt;
        &amp;lt;file id=&amp;quot;Xins&amp;quot;&lt;br /&gt;
              output_freq=&amp;quot;1ts&amp;quot; &lt;br /&gt;
              type=&amp;quot;one_file&amp;quot;&lt;br /&gt;
              enabled=&amp;quot;.true.&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
            &amp;lt;!-- VARS 2D --&amp;gt;&lt;br /&gt;
            &amp;lt;field_group operation=&amp;quot;instant&amp;quot;&lt;br /&gt;
                         freq_op=&amp;quot;1ts&amp;quot;&amp;gt;&lt;br /&gt;
                &amp;lt;field field_ref=&amp;quot;phis&amp;quot; operation=&amp;quot;once&amp;quot; /&amp;gt;&lt;br /&gt;
                &amp;lt;field field_ref=&amp;quot;aire&amp;quot; operation=&amp;quot;once&amp;quot; /&amp;gt;&lt;br /&gt;
                &amp;lt;field field_ref=&amp;quot;tsol&amp;quot; /&amp;gt;&lt;br /&gt;
            &amp;lt;/field_group&amp;gt;&lt;br /&gt;
&lt;br /&gt;
            &amp;lt;!-- VARS 3D --&amp;gt;&lt;br /&gt;
            &amp;lt;field_group operation=&amp;quot;instant&amp;quot;&lt;br /&gt;
                         freq_op=&amp;quot;1ts&amp;quot;&amp;gt;&lt;br /&gt;
                &amp;lt;field field_ref=&amp;quot;temp&amp;quot; /&amp;gt;&lt;br /&gt;
                &amp;lt;field field_ref=&amp;quot;pres&amp;quot; /&amp;gt;&lt;br /&gt;
            &amp;lt;/field_group&amp;gt;&lt;br /&gt;
        &amp;lt;/file&amp;gt;&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
It is mandatory to have an '''operation''' attribute defined (this can be done either at the level of the field definition or, as above, at the level of the output file definition); there is no default. If this attribute is missing you will get an error message along the lines of:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
In file &amp;quot;attribute_template_impl.hpp&amp;quot;, function &amp;quot;virtual void xios::CAttributeTemplate&amp;lt;std::basic_string&amp;lt;char&amp;gt;&amp;gt;::checkEmpty() const [T = std::basic_string&amp;lt;char&amp;gt;]&amp;quot;,  line 78 -&amp;gt; On checking attribute with id=operation : data is not initialized &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== More thorough description with illustrative examples ===&lt;br /&gt;
TODO: PUT SOME SIMPLE ILLUSTRATIVE EXAMPLES HERE&lt;br /&gt;
&lt;br /&gt;
See for example the following page: [[controling outputs in the dynamics with DYNAMICO]]&lt;br /&gt;
&lt;br /&gt;
==== Specifying that the time axis should be labeled in days rather than seconds ====&lt;br /&gt;
The default for XIOS is to label temporal axes (&amp;quot;time_instant&amp;quot; and &amp;quot;time_counter&amp;quot;) in seconds. But one may ask that they be labelled in days by setting the optional '''time_units''' attribute of a file to '''days''', e.g.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;xml&amp;quot;&amp;gt;&lt;br /&gt;
 &amp;lt;file id=&amp;quot;my_output_file&amp;quot; &lt;br /&gt;
       output_freq=&amp;quot;1ts&amp;quot;&lt;br /&gt;
       time_units=&amp;quot;days&amp;quot;&amp;gt;&lt;br /&gt;
...&lt;br /&gt;
&lt;br /&gt;
 &amp;lt;/file&amp;gt;&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Force flushing and writing files every ### time steps ====&lt;br /&gt;
XIOS handles its buffers and only writes to output files when needed. This is quite efficient and worthwhile, except for instance when the model crashes, as some buffered data might then not have been written to the output files. One may use the (optional) '''sync_freq''' attribute of a file to force XIOS to write to the file at some predefined frequency, e.g.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;xml&amp;quot;&amp;gt;&lt;br /&gt;
 &amp;lt;file id=&amp;quot;my_output_file&amp;quot; &lt;br /&gt;
       output_freq=&amp;quot;1ts&amp;quot;&lt;br /&gt;
       sync_freq=&amp;quot;1ts&amp;quot;&amp;gt;&lt;br /&gt;
...&lt;br /&gt;
&lt;br /&gt;
 &amp;lt;/file&amp;gt;&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
Very useful when debugging.&lt;br /&gt;
&lt;br /&gt;
==== Specifying an offset (in time) for the outputs ====&lt;br /&gt;
One may use the attribute '''record_offset''' of a file to impose that the outputs in the file begin after a certain number of time steps of the simulation (useful for instance when debugging). For instance if there are 192 time steps per day and the run is 10 days long but one only wants outputs for the last day and at every time step of that day then one should have a '''record_offset''' of -9*192=-1728 (note the ''-''; the value to specify is negative), e.g.:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;xml&amp;quot;&amp;gt;&lt;br /&gt;
 &amp;lt;file id=&amp;quot;my_output_file&amp;quot; &lt;br /&gt;
       output_freq=&amp;quot;1ts&amp;quot;&lt;br /&gt;
       record_offset=&amp;quot;-1728ts&amp;quot;&lt;br /&gt;
       time_units=&amp;quot;days&amp;quot;&amp;gt;&lt;br /&gt;
            &amp;lt;field_group operation=&amp;quot;instant&amp;quot;&lt;br /&gt;
                         freq_op=&amp;quot;1ts&amp;quot;&amp;gt;&lt;br /&gt;
                &amp;lt;field field_ref=&amp;quot;my_variable&amp;quot; /&amp;gt;&lt;br /&gt;
            &amp;lt;/field_group&amp;gt;&lt;br /&gt;
 &amp;lt;/file&amp;gt;&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
The ''time_counter'' values in the file will be from 9.0052 (=9.+1./192.) to 10. (since here the time axis unit is requested to be in days)&lt;br /&gt;
&lt;br /&gt;
An alternative way to have the first n timesteps of a time series excluded from the output is to specify a ''freq_offset'' attribute for the field. For instance, to follow up on the example above, to extract every time step of the final 10th day of a simulation with 192 time steps per day one should specify a '''freq_offset''' of 9*192=1728, e.g.:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;xml&amp;quot;&amp;gt;&lt;br /&gt;
 &amp;lt;file id=&amp;quot;my_output_file&amp;quot; &lt;br /&gt;
       output_freq=&amp;quot;1ts&amp;quot;&lt;br /&gt;
       time_units=&amp;quot;days&amp;quot;&amp;gt;&lt;br /&gt;
            &amp;lt;field_group operation=&amp;quot;instant&amp;quot;&lt;br /&gt;
                         freq_offset=&amp;quot;1728ts&amp;quot;&lt;br /&gt;
                         freq_op=&amp;quot;1ts&amp;quot;&amp;gt;&lt;br /&gt;
                &amp;lt;field field_ref=&amp;quot;my_variable&amp;quot; /&amp;gt;&lt;br /&gt;
            &amp;lt;/field_group&amp;gt;&lt;br /&gt;
 &amp;lt;/file&amp;gt;&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
The main difference, compared to the previous example using the '''record_offset''' file attribute, is that the ''time_counter'' values in the file will this time be from 0.0052 (=1./192) to 1.0.&lt;br /&gt;
&lt;br /&gt;
==== Saving or loading interpolation weights ====&lt;br /&gt;
With the XIOS library one can define output domains (grid) which are different from input domains (grids), and XIOS does the necessary interpolation.&lt;br /&gt;
&lt;br /&gt;
This requires, once source and destination grids are known, computing some interpolation weights (during the initialization step). For large grids, this can take some time. One can however tell XIOS to save the interpolation weights in a file and use that file (if it is present) rather than recompute them when a new simulation is run.&lt;br /&gt;
&lt;br /&gt;
In practice one must add extra keys to the &amp;quot;interpolate_domain&amp;quot; tag, e.g.:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;xml&amp;quot;&amp;gt;&lt;br /&gt;
    &amp;lt;domain id=&amp;quot;dom_256_192&amp;quot; type=&amp;quot;rectilinear&amp;quot; ni_glo=&amp;quot;256&amp;quot; nj_glo=&amp;quot;192&amp;quot; &amp;gt;&lt;br /&gt;
      &amp;lt;generate_rectilinear_domain/&amp;gt;&lt;br /&gt;
      &amp;lt;interpolate_domain order=&amp;quot;1&amp;quot; write_weight=&amp;quot;true&amp;quot; mode=&amp;quot;read_or_compute&amp;quot; /&amp;gt;&lt;br /&gt;
    &amp;lt;/domain&amp;gt;&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
This will automatically generate a NetCDF file containing the weights. The default file name will be something like xios_interpolation_weights_CONTEXT_INPUTDOMAIN_OUTPUTDOMAIN.nc, where CONTEXT, INPUTDOMAIN and OUTPUTDOMAIN are inherited from the context (i.e. the definitions of these in the xml files).&lt;br /&gt;
&lt;br /&gt;
One can specify the name of the file with the key &amp;quot;weight_filename&amp;quot;, e.g.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;xml&amp;quot;&amp;gt;&lt;br /&gt;
    &amp;lt;domain id=&amp;quot;dom_256_192&amp;quot; type=&amp;quot;rectilinear&amp;quot; ni_glo=&amp;quot;256&amp;quot; nj_glo=&amp;quot;192&amp;quot; &amp;gt;&lt;br /&gt;
      &amp;lt;generate_rectilinear_domain/&amp;gt;&lt;br /&gt;
      &amp;lt;interpolate_domain order=&amp;quot;1&amp;quot; write_weight=&amp;quot;true&amp;quot; mode=&amp;quot;read_or_compute&amp;quot; weight_filename=&amp;quot;xios_weights&amp;quot; /&amp;gt;&lt;br /&gt;
    &amp;lt;/domain&amp;gt;&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
It can also happen that for a given variable we want the interpolation not to be conservative. For example, a variable like the area of a grid mesh should not be interpolated between different domains. Since the interpolation is specific to a domain (and defined in the &amp;quot;domain id&amp;quot;), we have to create a new domain for all the variables that should be interpolated in another way. For the variable &amp;quot;Area&amp;quot; for example, the syntax is as follows:&lt;br /&gt;
&lt;br /&gt;
* Create the new domain:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;xml&amp;quot;&amp;gt;&lt;br /&gt;
&amp;lt;domain id=&amp;quot;dom_64_48_quantity_T&amp;quot; type=&amp;quot;rectilinear&amp;quot; ni_glo=&amp;quot;64&amp;quot; nj_glo=&amp;quot;48&amp;quot;   &amp;gt;&lt;br /&gt;
   &amp;lt;generate_rectilinear_domain/&amp;gt;&lt;br /&gt;
   &amp;lt;interpolate_domain quantity=&amp;quot;true&amp;quot;/&amp;gt;&lt;br /&gt;
&amp;lt;/domain&amp;gt;&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* Assign the variable to this domain:&lt;br /&gt;
Later in the context file, the variable should be output using this new domain (note that it can still be output in the same file as the other variables):&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;xml&amp;quot;&amp;gt;&lt;br /&gt;
&amp;lt;field_group domain_ref=&amp;quot;dom_64_48_quantity_T&amp;quot;&amp;gt;&lt;br /&gt;
  &amp;lt;field_group operation=&amp;quot;instant&amp;quot;&lt;br /&gt;
     freq_op=&amp;quot;1ts&amp;quot;&amp;gt;&lt;br /&gt;
    &amp;lt;field field_ref=&amp;quot;area&amp;quot; operation=&amp;quot;once&amp;quot; /&amp;gt;&lt;br /&gt;
  &amp;lt;/field_group&amp;gt;&lt;br /&gt;
&amp;lt;/field_group&amp;gt;&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Examples &amp;amp; adding outputs ===&lt;br /&gt;
&lt;br /&gt;
See [[LMDZ XIOS outputs]] for more details on the outputs generated via XIOS for the LMDZ dynamics/physics.&lt;br /&gt;
&lt;br /&gt;
If you use [[DYNAMICO]], check out the [[Controling outputs in the dynamics with DYNAMICO|DYNAMICO outputs]] page.&lt;br /&gt;
&lt;br /&gt;
== Using the XIOS library in client-server mode ==&lt;br /&gt;
Running with XIOS in client-server mode requires the following:&lt;br /&gt;
* The client-server mode should be activated (in file ''iodef.xml''):&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;xml&amp;quot;&amp;gt;&lt;br /&gt;
                        &amp;lt;variable id=&amp;quot;using_server&amp;quot; type=&amp;quot;bool&amp;quot;&amp;gt;true&amp;lt;/variable&amp;gt;&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
* The '''xios_server.exe''' executable should be present alongside the GCM executable '''gcm_***.e''' and they should be run together in MPMD (Multiple Programs, Multiple Data) mode: some of the MPI processes are allocated to the GCM and the others to XIOS. In practice XIOS needs far fewer processes than the GCM, although this also depends on the amount of outputs and post-processing computations (e.g. temporal averaging and grid interpolations) that XIOS will have to do. For example, if the MPI execution wrapper is ''mpirun'' and 26 processes are to be used by the GCM ''gcm_64x52x20_phystd_para.e'' and 2 by XIOS (i.e. 28 processes overall):&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
mpirun -np 26 gcm_64x52x20_phystd_para.e &amp;gt; gcm.out 2&amp;gt;&amp;amp;1 : -np 2 xios_server.exe&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[[Category:Generic-Model]]&lt;br /&gt;
[[Category:Mars-Model]]&lt;br /&gt;
[[Category:Venus-Model]]&lt;br /&gt;
[[Category:Titan-Model]]&lt;/div&gt;</summary>
		<author><name>Emillour</name></author>	</entry>

	<entry>
		<id>http://lmdz-forge.lmd.jussieu.fr/mediawiki/Planets/index.php?title=PCM_vertical_coordinate&amp;diff=3191</id>
		<title>PCM vertical coordinate</title>
		<link rel="alternate" type="text/html" href="http://lmdz-forge.lmd.jussieu.fr/mediawiki/Planets/index.php?title=PCM_vertical_coordinate&amp;diff=3191"/>
				<updated>2026-01-23T08:50:41Z</updated>
		
		<summary type="html">&lt;p&gt;Emillour: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Overview ==&lt;br /&gt;
The PCM vertical coordinate, also called &amp;quot;model levels&amp;quot;, is rather specific to the PCM and typically consists of &amp;quot;sigma&amp;quot; or hybrid &amp;quot;sigma-pressure&amp;quot; coordinates. What one needs to keep in mind is that model levels are not at fixed altitude, nor in most cases at fixed pressure. The pressure $$P$$ of a model level $$k$$ at time $$t$$ is:&lt;br /&gt;
$$&lt;br /&gt;
\begin{align}&lt;br /&gt;
  P(k,t) &amp;amp; = ap(k) + bp(k) . Ps(t)&lt;br /&gt;
\end{align}&lt;br /&gt;
$$ &lt;br /&gt;
Where $$ap(k)$$ and $$bp(k)$$ are respectively hybrid pressure and hybrid sigma coefficients. Note that $$ap()$$ and $$bp()$$ are time-independent and $$Ps(t)$$ is surface pressure (which typically varies with time). &lt;br /&gt;
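As a purely illustrative example (with made-up values): if for some level $$k$$ one has $$ap(k)=100$$ Pa and $$bp(k)=0.5$$, then at a time when the surface pressure is $$Ps=610$$ Pa the level pressure is $$P = 100 + 0.5 . 610 = 405$$ Pa.&lt;br /&gt;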
&lt;br /&gt;
=== sigma coordinates ===&lt;br /&gt;
If in the general equation above one sets $$ap(k)=0$$ for all $$k$$, then $$P(k,t)=bp(k).Ps(t)$$, which can be re-written as $$bp(k)=P(k,t)/Ps(t)$$, which is the expression of a &amp;quot;sigma&amp;quot; coordinate (often represented with the notation $$\sigma=P/Ps$$). In the case of a hydrostatic system, pressure monotonically decreases with altitude and thus &amp;quot;sigma&amp;quot; is indeed a coordinate, which monotonically decreases from 1 at the surface (where $$P=P_s$$) to 0 at the top of the atmosphere (where $$P=0$$). &amp;quot;Sigma&amp;quot; coordinates are also often called &amp;quot;terrain-following&amp;quot; coordinates because for a given model level $$k$$ the pressure is a constant factor of the surface pressure, which to first order depends on the topography.&lt;br /&gt;
&lt;br /&gt;
=== pressure coordinates ===  &lt;br /&gt;
If in the general equation above one sets $$bp(k)=0$$ for all $$k$$, then $$P(k,t)=ap(k)$$, which implies that a layer $$k$$ is always at the same pressure. While this is quite fine at high enough altitudes, it is easy to see that near the surface, in the presence of topography (or in fact even without topography, since weather systems impact the local surface pressure), it is problematic to use a fixed pressure grid, as some grid points would lie below the surface and be undefined.&lt;br /&gt;
&lt;br /&gt;
=== hybrid sigma-pressure coordinates ===&lt;br /&gt;
The two approaches above can be combined by an appropriate choice of the values of $$ap(k)$$ and $$bp(k)$$, i.e. enforcing that:&lt;br /&gt;
* near the surface (small values of $$k$$) $$ap(k) \simeq 0 $$ (i.e. small compared to $$ bp(k) . Ps(t)$$), then the vertical coordinate there is close to being a purely &amp;quot;sigma&amp;quot; coordinate&lt;br /&gt;
* high enough (i.e. high enough above the topography, high values of $$k$$) $$bp(k) \simeq 0$$ and $$ bp(k). Ps(t) &amp;lt;&amp;lt; ap(k)$$, then the vertical coordinate there is close to being a purely &amp;quot;pressure&amp;quot; coordinate&lt;br /&gt;
&lt;br /&gt;
== In practice ==&lt;br /&gt;
Information about the vertical coordinate is only partially stored in the PCM (lon-lat) dynamics start files, which contain the number of atmospheric layers (llm) and the ap() and bp() coefficients (along with the $$preff$$ and $$pa$$ parameters discussed below, which are in the &amp;quot;controle&amp;quot; array). Note that the DYNAMICO dynamics start files do not contain this information.&lt;br /&gt;
&lt;br /&gt;
In practice the vertical layer discretisation is generated at run time, using information from the input [[The z2sig.def Input File|z2sig.def file]] and a couple of related parameters: $$preff$$, the reference surface pressure (in Pa), and $$pa$$, a parameter homogeneous to a pressure (also expressed in Pa) which roughly corresponds to a &amp;quot;reference transition pressure&amp;quot; above which hybrid sigma coordinates become essentially pressure coordinates. Based on these parameters, on the specified atmospheric scale height and on the related target pseudo-altitudes of model levels from file [[The z2sig.def Input File|z2sig.def]], the model iteratively computes and builds the sought model levels (this requires solving a non-linear problem with no analytic solution).&lt;br /&gt;
&lt;br /&gt;
An additional input parameter that will impact on the generation of model levels is the $$hybrid$$ (logical) parameter set in [[The run.def Input File|run.def]], e.g.:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# use hybrid vertical coordinate (else will use sigma levels)&lt;br /&gt;
 hybrid=.true.&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
If $$hybrid$$ is set to $$.false.$$ the generated model levels will be purely sigma levels, i.e. the $$ap()$$ coefficients will be zero. If running a simulation without any topography it is recommended to stick to purely sigma levels. &lt;br /&gt;
&lt;br /&gt;
You can check out the code where this is done, in the ''disvert_noterre'' routine: https://trac.lmd.jussieu.fr/Planeto/browser/trunk/LMDZ.COMMON/libf/dyn3d_common/disvert_noterre.F and/or its &amp;quot;mirror&amp;quot; version in the dynamico-LMDZ interface ''disvert_icosa_lmdz'' https://trac.lmd.jussieu.fr/Planeto/browser/trunk/ICOSA_LMDZ/src/disvert_icosa_lmdz.f90&lt;br /&gt;
&lt;br /&gt;
[[Category:Generic-Model]]&lt;br /&gt;
[[Category:Mars-Model]]&lt;br /&gt;
[[Category:Venus-Model]]&lt;br /&gt;
[[Category:Titan-Model]]&lt;br /&gt;
[[Category:Pluto-Model]]&lt;/div&gt;</summary>
		<author><name>Emillour</name></author>	</entry>

	<entry>
		<id>http://lmdz-forge.lmd.jussieu.fr/mediawiki/Planets/index.php?title=Quick_Install_and_Run_Mars_PEM&amp;diff=2928</id>
		<title>Quick Install and Run Mars PEM</title>
		<link rel="alternate" type="text/html" href="http://lmdz-forge.lmd.jussieu.fr/mediawiki/Planets/index.php?title=Quick_Install_and_Run_Mars_PEM&amp;diff=2928"/>
				<updated>2025-11-06T13:41:11Z</updated>
		
		<summary type="html">&lt;p&gt;Emillour: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;In this page, we give a hopefully exhaustive enough overview of the necessary prerequisites and steps to download, compile and run a simple simulation with the PEM on a Linux computer.&lt;br /&gt;
&lt;br /&gt;
== Installation ==&lt;br /&gt;
&lt;br /&gt;
The PEM is downloaded alongside the '''LMDZ.COMMON''' repository of your trunk, following the same steps described in the related section of [[Quick Install and Run Mars PCM]]. The Fortran code is in the following directory: &amp;lt;code&amp;gt;trunk/LMDZ.COMMON/libf/evolution/&amp;lt;/code&amp;gt;. The two PEM programs are '''pem.F90''' and '''reshape_XIOS_output.F90'''.&lt;br /&gt;
&lt;br /&gt;
== Compilation  ==&lt;br /&gt;
&lt;br /&gt;
To compile the PEM, in LMDZ.COMMON, do:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
./makelmdz_fcm -arch [local] -p [planet] -d [dimensions] -j 8 pem&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
Options, with example values (a filled-in command is sketched after this list):&lt;br /&gt;
# [local]: ''root name of arch files'', assuming that they have been set up for your configuration;&lt;br /&gt;
# [planet]: ''mars'' to use the Mars planet physics package; &lt;br /&gt;
# [dimensions]: ''64x48x54'' to define the grid you want to use (longitude x latitude x atmospheric layers).&lt;br /&gt;
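For instance, with the example values given above, the command would read:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
./makelmdz_fcm -arch local -p mars -d 64x48x54 -j 8 pem&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;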
To run the PEM, you need a dedicated reshaping tool compiled with consistent options. To compile it, in LMDZ.COMMON, do:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
./makelmdz_fcm -arch [local] -p [planet] -d [dimensions] -j 8 reshape_XIOS_output&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
To run the PEM, you also need a PCM compiled with XIOS support and consistent options. To compile it, in LMDZ.COMMON, do:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
./makelmdz_fcm -arch [local] -p [planet] -parallel mpi_omp -io xios -d [dimensions] -j 8 gcm&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
After compilation, the executable file can be found in the &amp;quot;bin&amp;quot; sub-directory.&lt;br /&gt;
&lt;br /&gt;
== Usage ==&lt;br /&gt;
&lt;br /&gt;
To run a PEM simulation, do:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
./launchPEM.sh [options]&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
Options:&lt;br /&gt;
# None: to start a simulation from scratch;&lt;br /&gt;
# 're': to relaunch a simulation from a starting point (interactive prompt); see the example below.&lt;br /&gt;
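For example, to relaunch an interrupted chained simulation:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
./launchPEM.sh re&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;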
&lt;br /&gt;
The Bash file ''launchPEM.sh'' is the master script used to launch the PEM chained simulation. It checks that the necessary files and required options for your simulation are in place.&lt;br /&gt;
&lt;br /&gt;
== Requirements ==&lt;br /&gt;
&lt;br /&gt;
To run the PEM, create a folder in which you put the following files:&lt;br /&gt;
* your executable files for the PCM, the PEM and the reshaping tool with consistent options;&lt;br /&gt;
* the xml files for XIOS which can be found in the PCM deftank folder: ''iodef.xml'', ''context_lmdz_physics.xml'', ''file_def_physics_mars.xml'' and ''field_def_physics_mars.xml'';&lt;br /&gt;
* the def files needed to run the PCM: ''run.def'', ''callphys.def'', ''traceur.def'', etc. '''Be careful, do not forget to rename the PCM ''run.def'' into ''run_PCM.def''''';&lt;br /&gt;
* the starting files needed to run the PCM: ''startfi.nc'', ''start.nc''/''start1D.txt''/profiles;&lt;br /&gt;
* the necessary PEM files: ''launchPEM.sh'', ''lib_launchPEM.sh'', ''PCMrun.job'', ''PEMrun.job'', ''run_PEM.def'' and ''obl_ecc_lsp.asc'';&lt;br /&gt;
* the optional PEM files: ''diagpem.def'' to define the PEM variables to be output, and ''startpem.nc'' to set the initial state of the PEM.&lt;br /&gt;
&lt;br /&gt;
The PEM files can be found in the deftank folder, where a ''README'' file recaps everything.&lt;br /&gt;
&lt;br /&gt;
Before a simulation, you have to set up some parameters/options:&lt;br /&gt;
# In ''launchPEM.sh'', the user has to specify:&lt;br /&gt;
#* '''n_mars_years''', '''n_earth_years''': the number of Mars/Earth years to be simulated in total (&amp;gt; 0);&lt;br /&gt;
#* '''nPCM_ini''': the number of initial PCM runs (&amp;gt;= 2);&lt;br /&gt;
#* '''nPCM''': the number of PCM runs between each PEM run (&amp;gt;= 2, usually 2);&lt;br /&gt;
#* '''counting''': the counting method for the number of years to be simulated (0 = &amp;quot;only PEM runs count&amp;quot;; any other value = &amp;quot;PCM runs are taken into account&amp;quot;). The former option is the usual one;&lt;br /&gt;
#* '''mode''': the launching mode (0 = &amp;quot;processing scripts&amp;quot;; any other value = &amp;quot;submitting jobs&amp;quot;). The former option is usually used to process the script on a local machine while the latter is used to submit jobs on a supercomputer with SLURM or PBS/TORQUE.&lt;br /&gt;
# In ''PCMrun.job'', the user has to specify:&lt;br /&gt;
#* The '''headers''' correspond to the ADASTRA supercomputer and should be changed for other machines and job schedulers. In case of the &amp;quot;processing scripts&amp;quot; launching mode, the headers are naturally omitted.&lt;br /&gt;
#* The '''path to source''' the arch file should be adapted to the machine.&lt;br /&gt;
#* The '''name of the PCM executable''' file should be adapted.&lt;br /&gt;
#* The '''execution command''' should also be adapted according to the set-up.&lt;br /&gt;
# In ''PEMrun.job'', the user has to specify:&lt;br /&gt;
#* The '''headers''' correspond to the ADASTRA supercomputer and should be changed for other machines and job schedulers. In case of &amp;quot;processing scripts&amp;quot; launching mode, the headers are naturally omitted.&lt;br /&gt;
#* The '''path to source''' the arch file should be adapted to the machine.&lt;br /&gt;
#* The '''name of the PEM and Reshaping executable''' files should be adapted.&lt;br /&gt;
#* The PEM executable can have an '''optional argument''' which should be specified according to the set-up (&amp;quot;--auto-exit&amp;quot; for SLURM and PBS/TORQUE | &amp;quot;&amp;quot; when the script is not run as a job).&lt;br /&gt;
# The user has to specify the wanted options in the '''def files''', especially for ''run_PEM.def'', ''run_PCM.def'', ''callphys.def''.&lt;br /&gt;
# The user has to provide a ''startfi.nc'' whose orbital parameters are consistent with the initial date set in ''run_PEM.def''. The script '''inipem_orbit.sh''' can do it automatically with ''obl_ecc_lsp.asc''.&lt;br /&gt;
&lt;br /&gt;
== Outputs ==&lt;br /&gt;
&lt;br /&gt;
The PEM simulation generates the following files:&lt;br /&gt;
* the usual outputs of the PCM: ''restartfi.nc'', ''restart.nc'', ''diagfi.nc'', etc;&lt;br /&gt;
* the XIOS outputs of the PCM, then reshaped: ''Xdiurnalave.nc''/''data2reshape*.nc''/''data_PCM_Y*.nc'';&lt;br /&gt;
* the outputs of the chained simulation: ''launchPEM.log'', ''info_PEM.txt'' and possibly ''kill_launchPEM.sh'';&lt;br /&gt;
* the usual outputs of the PEM: ''restartfi.nc'', ''restart.nc''/''restart1D.txt'' and ''diagpem.nc''.&lt;br /&gt;
&lt;br /&gt;
During the simulation, the PCM/PEM run files are renamed conveniently and stored in the sub-directories '''logs''' (log files), '''starts''' (starting files) and '''diags''' (diagnostic files).&lt;br /&gt;
&lt;br /&gt;
If you run a simulation by submitting jobs, the script ''kill_launchPEM.sh'' is automatically generated. It can be used to kill, in the job scheduler's queue, the jobs related to your chained simulation.&lt;br /&gt;
&lt;br /&gt;
[[Category:Planetary-Evolution-Model]]&lt;/div&gt;</summary>
		<author><name>Emillour</name></author>	</entry>

	<entry>
		<id>http://lmdz-forge.lmd.jussieu.fr/mediawiki/Planets/index.php?title=Parallelism&amp;diff=2902</id>
		<title>Parallelism</title>
		<link rel="alternate" type="text/html" href="http://lmdz-forge.lmd.jussieu.fr/mediawiki/Planets/index.php?title=Parallelism&amp;diff=2902"/>
				<updated>2025-10-06T08:29:53Z</updated>
		
		<summary type="html">&lt;p&gt;Emillour: /* How to compile the parallel version of the PCM */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;''This page comes mainly from the LMD Generic GCM user manual (https://trac.lmd.jussieu.fr/Planeto/browser/trunk/LMDZ.GENERIC/ManualGCM_GENERIC.pdf). It is still in development and needs further improvements''&lt;br /&gt;
&lt;br /&gt;
== What is parallelism? ==&lt;br /&gt;
&lt;br /&gt;
Parallel computing is a type of computation in which many calculations or processes are carried out simultaneously. &lt;br /&gt;
Large problems can often be divided into smaller ones, which can then be solved at the same time.&lt;br /&gt;
&lt;br /&gt;
In short: '''parallelism can help you save time''': one will get the same results when running on more cores as when running the serial version of the code, but sooner.&lt;br /&gt;
&lt;br /&gt;
Indeed, as the problem is cut into smaller parts that are solved simultaneously, the waiting (wall clock) time for the user is reduced.&lt;br /&gt;
However this usually comes with a counterpart: the overheads and extra computations due to the parallelization, along with some inherent inefficiencies concerning computations which must be done sequentially, can '''increase the total computational cost'''.&lt;br /&gt;
&lt;br /&gt;
== How parallelism is implemented in the model ==&lt;br /&gt;
The main factor that constrains and orients the way the code is parallelized is that in the physics, atmospheric columns are &amp;quot;independent&amp;quot; from each other, whereas in the dynamics the flow is 3D with strong coupling between neighboring cells.&lt;br /&gt;
&lt;br /&gt;
=== Parallelism with the lon-lat (LMDZ) dynamical core ===&lt;br /&gt;
&lt;br /&gt;
* MPI tiling: In the lon-lat dynamics the globe is tiled in regions covering all longitudes and a few latitudes. In practice these latitude bands must contain at least 2 points. There is therefore a limitation to the number of MPI processes one may run with: for a given number of latitude intervals jjm one may use at most jjm/2 processes (for example, if the horizontal grid is 64x48 in lon x lat, one could use at most 48/2=24 MPI processes).&lt;br /&gt;
&lt;br /&gt;
* OpenMP (OMP): In the dynamics this parallelism is implemented on the loops along the vertical. One could thus use as many OpenMP threads as there are model levels. In practice however the speedup breaks down with far fewer, and it is recommended to have OpenMP chunks of at least ten vertical levels each. Therefore for a simulation with llm altitude levels one should target using at most llm/10 OpenMP threads (e.g. for a 64x48x54 grid, target using at most 5 OpenMP threads).&lt;br /&gt;
&lt;br /&gt;
* In practice: One will want to use both MPI and OpenMP for simulations, with as many MPI processes as possible, combined with a suitable number of OpenMP threads (for each MPI process); for the 64x48x54 grid of the examples above, this could mean e.g. 24 MPI processes with 2 OpenMP threads each, i.e. 48 cores. Depending on the cluster used, the speedup as a function of the number of MPI processes and OpenMP threads can vary a lot. It is therefore recommended to test it to find the &amp;quot;optimal&amp;quot; setup for a given grid.&lt;br /&gt;
&lt;br /&gt;
== How to compile the parallel version of the PCM ==&lt;br /&gt;
&lt;br /&gt;
To compile the model in parallel, use the same command as in sequential mode (see e.g. the &amp;quot;Compiling&amp;quot; section of [[Quick Install and Run]] or the description of the [[The_makelmdz_fcm_GCM_Compilation_Script|makelmdz_fcm script]] and its options) and add the following option:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
 -parallel&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
There are then three choices for parallelism: MPI, OMP, and mixed (i.e. combined) MPI_OMP:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
 -parallel mpi&lt;br /&gt;
 -parallel omp&lt;br /&gt;
 -parallel mpi_omp&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
So the command line to generate the Generic PCM to run in mixed MPI and OpenMP mode will be, for example:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
./makelmdz_fcm -s XX -d LONxLATxALT -b IRxVI -p std -arch archFile -parallel mpi_omp gcm&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== How to run in parallel ==&lt;br /&gt;
&lt;br /&gt;
=== Run interactively ===&lt;br /&gt;
* MPI only:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
mpirun -np N gcm.e &amp;gt; gcm.out 2&amp;gt;&amp;amp;1&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
The ''-np N'' option specifies the number of processes to run on.&lt;br /&gt;
&lt;br /&gt;
IMPORTANT: one MUST use the mpirun command corresponding to the mpif90 compiler specified in the arch file.&lt;br /&gt;
&lt;br /&gt;
IMPORTANT: One can use at most one MPI process for every 2 points along the latitude (e.g. a maximum of 24 processes for a horizontal grid of 64x48). If you try to use too many MPI processes you will get the following error message (in French!!):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
 Arret : le nombre de bande de lattitude par process est trop faible (&amp;lt;2).&lt;br /&gt;
  ---&amp;gt; diminuez le nombre de CPU ou augmentez la taille en lattitude&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
which translates to: &amp;quot;Stop: the number of latitude bands per process is too small (&amp;lt;2). ---&amp;gt; decrease the number of CPUs or increase the number of latitudes&amp;quot;&lt;br /&gt;
&lt;br /&gt;
Output files (restart.nc, diagfi.nc, etc.) are just the same as when running in serial,&lt;br /&gt;
but standard output messages are written by each process.&lt;br /&gt;
If using chained simulations (run mcd/run0 scripts), then the command line to run the gcm in run0 must be adapted for local settings.&lt;br /&gt;
&lt;br /&gt;
NB: the LMDZ.COMMON dynamics are set to run in double precision, so keep the NC_DOUBLE declaration (and real to double precision promotion) in the arch files.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
* Mixed MPI_OMP:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
export OMP_NUM_THREADS=2&lt;br /&gt;
export OMP_STACKSIZE=2500MB&lt;br /&gt;
mpirun -np 2 gcm.e &amp;gt; gcm.out 2&amp;gt;&amp;amp;1&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
In this example, each of the 2 MPI processes has 2 OpenMP tasks, each with 2500MB of memory (the OpenMP stack size).&lt;br /&gt;
&lt;br /&gt;
=== Run with a job scheduler ===&lt;br /&gt;
&lt;br /&gt;
This will be different for each machine.&lt;br /&gt;
Some examples are provided here but will need to be adapted for each configuration and machine; see also the pages dedicated to some of the clusters we use, such as [[Using the MESOIPSL cluster]] or [[Using Irene Rome]] or [[Using Adastra]]&lt;br /&gt;
&lt;br /&gt;
* MPI only:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
PBS example (on Ciclad):&lt;br /&gt;
#PBS -S /bin/bash&lt;br /&gt;
#PBS -N job_mpi08&lt;br /&gt;
#PBS -q short&lt;br /&gt;
#PBS -j eo&lt;br /&gt;
#PBS -l &amp;quot;nodes=1:ppn=8&amp;quot;&lt;br /&gt;
# go to directory where the job was launched&lt;br /&gt;
cd $PBS_O_WORKDIR&lt;br /&gt;
mpirun gcm_64x48x29_phymars_para.e &amp;gt; gcm.out 2&amp;gt;&amp;amp;1&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
LoadLeveler example (on Gnome):&lt;br /&gt;
# @ job_name = job_mpi8&lt;br /&gt;
# standard output file&lt;br /&gt;
# @ output = job_mpi8.out.$(jobid)&lt;br /&gt;
# standard error file&lt;br /&gt;
# @ error = job_mpi8.err.$(jobid)&lt;br /&gt;
# job type&lt;br /&gt;
# @ job_type = mpich&lt;br /&gt;
# @ blocking = unlimited&lt;br /&gt;
# time&lt;br /&gt;
# @ class = AP&lt;br /&gt;
# Number of procs&lt;br /&gt;
# @ total_tasks = 8&lt;br /&gt;
# @ resources=ConsumableCpus(1) ConsumableMemory(2500 mb)&lt;br /&gt;
# @ queue&lt;br /&gt;
set -vx&lt;br /&gt;
mpirun gcm_32x24x11_phymars_para.e &amp;gt; gcm.out 2&amp;gt;&amp;amp;1&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
LoadLeveler example (on Ada):&lt;br /&gt;
module load intel/2012.0&lt;br /&gt;
# @ output = output.$(jobid)&lt;br /&gt;
# @ error = $(output)&lt;br /&gt;
# @ job_type = parallel&lt;br /&gt;
## Number of MPI process&lt;br /&gt;
# @ total_tasks = 8&lt;br /&gt;
## Memory used by each MPI process&lt;br /&gt;
# @ as_limit = 2500mb&lt;br /&gt;
# @ wall_clock_limit=01:00:00&lt;br /&gt;
# @ core_limit = 0&lt;br /&gt;
# @ queue&lt;br /&gt;
set -x&lt;br /&gt;
poe ./gcm.e -labelio yes &amp;gt; LOG 2&amp;gt;&amp;amp;1&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* Mixed MPI_OMP:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
LoadLeveler example (on Gnome):&lt;br /&gt;
# @ job_name = job_mpi8&lt;br /&gt;
# standard output file&lt;br /&gt;
# @ output = job_mpi8.out.$(jobid)&lt;br /&gt;
# standard error file&lt;br /&gt;
# @ error = job_mpi8.err.$(jobid)&lt;br /&gt;
# job type&lt;br /&gt;
# @ job_type = mpich&lt;br /&gt;
# @ blocking = unlimited&lt;br /&gt;
# time&lt;br /&gt;
# @ class = AP&lt;br /&gt;
# Number of procs&lt;br /&gt;
# @ total_tasks = 8&lt;br /&gt;
# @ resources=ConsumableCpus(1) ConsumableMemory(5000 mb)&lt;br /&gt;
# @ queue&lt;br /&gt;
set -vx&lt;br /&gt;
export OMP_NUM_THREADS=2 # otherwise 8 OpenMP threads are launched by default&lt;br /&gt;
export OMP_STACKSIZE=2500MB&lt;br /&gt;
mpirun gcm_32x24x11_phymars_para.e &amp;gt; gcm.out 2&amp;gt;&amp;amp;1&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
IMPORTANT: ConsumableMemory must be equal to OMP_NUM_THREADS × OMP_STACKSIZE (here 2 × 2500 MB = 5000 MB).&lt;br /&gt;
In this case, we are using 8×2 = 16 cores.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
LoadLeveler example (on Ada):&lt;br /&gt;
module load intel/2012.0&lt;br /&gt;
# @ output = output.$(jobid)&lt;br /&gt;
# @ error = $(output)&lt;br /&gt;
# @ job_type = parallel&lt;br /&gt;
## Number of MPI process&lt;br /&gt;
# @ total_tasks = 8&lt;br /&gt;
## Number of OpenMP tasks attached to each MPI process&lt;br /&gt;
# @ parallel_threads = 2&lt;br /&gt;
## Memory used by each MPI process&lt;br /&gt;
# @ as_limit = 5gb&lt;br /&gt;
# @ wall_clock_limit=01:00:00&lt;br /&gt;
# @ core_limit = 0&lt;br /&gt;
# @ queue&lt;br /&gt;
set -x&lt;br /&gt;
export OMP_STACKSIZE=2500MB&lt;br /&gt;
poe ./gcm.e -labelio yes &amp;gt; LOG 2&amp;gt;&amp;amp;1&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
IMPORTANT: In this case, each thread needs 2.5 GB and we are using 2 OpenMP threads for each MPI process, so as_limit = 2 × 2.5 GB = 5 GB.&lt;br /&gt;
&lt;br /&gt;
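As an additional illustration, on clusters using the Slurm scheduler (e.g. [[Using the MESOIPSL cluster|MESOIPSL]] or [[Using Adastra|Adastra]]), a mixed MPI_OMP job script along the same lines would look like the following sketch (hypothetical; partition, account and memory options are site-specific and must be adapted):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
#SBATCH --job-name=job_mpi08&lt;br /&gt;
#SBATCH --ntasks=8&lt;br /&gt;
#SBATCH --cpus-per-task=2&lt;br /&gt;
#SBATCH --time=01:00:00&lt;br /&gt;
export OMP_NUM_THREADS=$SLURM_CPUS_PER_TASK&lt;br /&gt;
export OMP_STACKSIZE=2500MB&lt;br /&gt;
srun ./gcm.e &amp;gt; gcm.out 2&amp;gt;&amp;amp;1&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;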
[[Category:Generic-LMDZ]]&lt;br /&gt;
[[Category:Generic-DYNAMICO]]&lt;/div&gt;</summary>
		<author><name>Emillour</name></author>	</entry>

	<entry>
		<id>http://lmdz-forge.lmd.jussieu.fr/mediawiki/Planets/index.php?title=Parallelism&amp;diff=2901</id>
		<title>Parallelism</title>
		<link rel="alternate" type="text/html" href="http://lmdz-forge.lmd.jussieu.fr/mediawiki/Planets/index.php?title=Parallelism&amp;diff=2901"/>
				<updated>2025-10-03T09:51:44Z</updated>
		
		<summary type="html">&lt;p&gt;Emillour: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;''This page comes mainly from the LMD Generic GCM user manual (https://trac.lmd.jussieu.fr/Planeto/browser/trunk/LMDZ.GENERIC/ManualGCM_GENERIC.pdf). It is still in development and needs further improvements''&lt;br /&gt;
&lt;br /&gt;
== What is parallelism? ==&lt;br /&gt;
&lt;br /&gt;
Parallel computing is a type of computation in which many calculations or processes are carried out simultaneously. &lt;br /&gt;
Large problems can often be divided into smaller ones, which can then be solved at the same time.&lt;br /&gt;
&lt;br /&gt;
In short: '''parallelism can help you save time''': one gets the same results when running on more cores as when running the serial version of the code, but sooner.&lt;br /&gt;
&lt;br /&gt;
Indeed, as the problem is cut into smaller parts that are solved simultaneously, the waiting (wall clock) time for the user is reduced.&lt;br /&gt;
However, this usually comes with a counterpart: the overheads and extra computations due to the parallelization, along with some inherent inefficiencies for computations which must be done sequentially, can '''increase the total computational cost'''.&lt;br /&gt;
&lt;br /&gt;
== How parallelism is implemented in the model ==&lt;br /&gt;
The main factor that constrains and orients the way the code is parallelized is that in the physics, atmospheric columns are &amp;quot;independent&amp;quot; from each other, whereas in the dynamics the flow is 3D with strong coupling between neighboring cells.&lt;br /&gt;
&lt;br /&gt;
=== Parallelism with the lon-lat (LMDZ) dynamical core ===&lt;br /&gt;
&lt;br /&gt;
* MPI tiling: In the lon-lat dynamics the globe is tiled in regions covering all longitudes and a few latitudes. In practice these latitude bands must contain at least 2 points. There is therefore a limitation to the number of MPI processes one may run with: for a given number of latitude intervals jjm, one may use at most jjm/2 processes (for example, if the horizontal grid is 64x48 in lon x lat, one could use at most 48/2=24 MPI processes).&lt;br /&gt;
&lt;br /&gt;
* OpenMP (OMP): In the dynamics this parallelism is implemented on the loops along the vertical. One could thus use as many OpenMP threads as there are model levels. In practice, however, the speedup breaks down well before that, and it is recommended to have OpenMP chunks of at least ten vertical levels each. Therefore, for a simulation with llm altitude levels, one should target using at most llm/10 OpenMP threads (e.g. for a 64x48x54 grid, target using at most 5 OpenMP threads).&lt;br /&gt;
&lt;br /&gt;
* In practice: One will want to use both MPI and OpenMP for simulations, with as many MPI processes as possible, combined with a suitable number of OpenMP threads (for each MPI process). Depending on the cluster used, the speedup as a function of the number of MPI processes and OpenMP threads can vary a lot. It is therefore recommended to test various combinations to find the &amp;quot;optimal&amp;quot; setup for a given grid.&lt;br /&gt;
&lt;br /&gt;
== How to compile the parallel version of the PCM ==&lt;br /&gt;
&lt;br /&gt;
To compile the model in parallel, use the same command as for the sequential case (see e.g. the &amp;quot;Compiling&amp;quot; section of [[Quick Install and Run]] or the description of the [[The_makelmdz_fcm_GCM_Compilation_Script|makelmdz_fcm script]] and its options) and add the following option:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
 -parallel&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Then there are three choices for parallelism: MPI, OpenMP (OMP) and mixed (i.e. combined) MPI_OMP:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
 -parallel mpi&lt;br /&gt;
 -parallel omp&lt;br /&gt;
 -parallel mpi_omp&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
So, for example, the command line to build the model for the mixed MPI_OMP mode will be:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
./makelmdz_fcm -s XX -t XX -d LONxLATxALT -b IRxVI -p physicSuffix -arch archFile -parallel mpi_omp gcm&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== How to run in parallel ==&lt;br /&gt;
&lt;br /&gt;
=== Run interactively ===&lt;br /&gt;
* MPI only :&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
mpirun -np N gcm.e &amp;gt; gcm.out 2&amp;gt;&amp;amp;1&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
The -np N option specifies the number of MPI processes to run on.&lt;br /&gt;
&lt;br /&gt;
IMPORTANT: one MUST use the mpirun command corresponding to the mpif90 compiler specified in the arch file.&lt;br /&gt;
&lt;br /&gt;
IMPORTANT: One can use at most one MPI process for every 2 points along the latitude (e.g. a maximum of 24 processes for a horizontal grid of 64x48). If you try to use too many MPI processes you will get the following error message (in French!!):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
 Arret : le nombre de bande de lattitude par process est trop faible (&amp;lt;2).&lt;br /&gt;
  ---&amp;gt; diminuez le nombre de CPU ou augmentez la taille en lattitude&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
(in English: &amp;quot;Stop: the number of latitude bands per process is too small (&amp;lt;2). ---&amp;gt; decrease the number of CPUs or increase the latitude size&amp;quot;)&lt;br /&gt;
&lt;br /&gt;
Output files (restart.nc, diagfi.nc, etc.) are the same as when running in serial, &lt;br /&gt;
but standard output messages are written by each process.&lt;br /&gt;
If using chained simulations (run mcd/run0 scripts), then the command line used to run the gcm in run0 must be adapted to the local settings.&lt;br /&gt;
&lt;br /&gt;
NB: the LMDZ.COMMON dynamics are set to run in double precision, so keep the NC_DOUBLE declaration (and the real to double precision promotion) in the arch files.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
* Mixed MPI_OMP:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
export OMP_NUM_THREADS=2&lt;br /&gt;
export OMP_STACKSIZE=2500MB&lt;br /&gt;
mpirun -np 2 gcm.e &amp;gt; gcm.out 2&amp;gt;&amp;amp;1&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
In this example, each of the 2 MPI processes runs 2 OpenMP threads, each thread with a 2500 MB stack (i.e. 4 cores are used in total).&lt;br /&gt;
&lt;br /&gt;
=== Run with a job scheduler ===&lt;br /&gt;
&lt;br /&gt;
This will be different for each machine.&lt;br /&gt;
Some examples are provided here but they will need to be adapted to each configuration and machine; see also the pages dedicated to some of the clusters we use, such as [[Using the MESOIPSL cluster]], [[Using Irene Rome]] or [[Using Adastra]].&lt;br /&gt;
&lt;br /&gt;
* MPI only :&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
PBS example (on Ciclad):&lt;br /&gt;
#PBS -S /bin/bash&lt;br /&gt;
#PBS -N job_mpi08&lt;br /&gt;
#PBS -q short&lt;br /&gt;
#PBS -j eo&lt;br /&gt;
#PBS -l &amp;quot;nodes=1:ppn=8&amp;quot;&lt;br /&gt;
# go to directory where the job was launched&lt;br /&gt;
cd $PBS_O_WORKDIR&lt;br /&gt;
mpirun gcm_64x48x29_phymars_para.e &amp;gt; gcm.out 2&amp;gt;&amp;amp;1&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
LoadLeveler example (on Gnome):&lt;br /&gt;
# @ job_name = job_mpi8&lt;br /&gt;
# standard output file&lt;br /&gt;
# @ output = job_mpi8.out.$(jobid)&lt;br /&gt;
# standard error file&lt;br /&gt;
# @ error = job_mpi8.err.$(jobid)&lt;br /&gt;
# job type&lt;br /&gt;
# @ job_type = mpich&lt;br /&gt;
# @ blocking = unlimited&lt;br /&gt;
# time&lt;br /&gt;
# @ class = AP&lt;br /&gt;
# Number of procs&lt;br /&gt;
# @ total_tasks = 8&lt;br /&gt;
# @ resources=ConsumableCpus(1) ConsumableMemory(2500 mb)&lt;br /&gt;
# @ queue&lt;br /&gt;
set -vx&lt;br /&gt;
mpirun gcm_32x24x11_phymars_para.e &amp;gt; gcm.out 2&amp;gt;&amp;amp;1&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
LoadLeveler example (on Ada):&lt;br /&gt;
module load intel/2012.0&lt;br /&gt;
# @ output = output.$(jobid)&lt;br /&gt;
# @ error = $(output)&lt;br /&gt;
# @ job_type = parallel&lt;br /&gt;
## Number of MPI process&lt;br /&gt;
# @ total_tasks = 8&lt;br /&gt;
## Memory used by each MPI process&lt;br /&gt;
# @ as_limit = 2500mb&lt;br /&gt;
# @ wall_clock_limit=01:00:00&lt;br /&gt;
# @ core_limit = 0&lt;br /&gt;
# @ queue&lt;br /&gt;
set -x&lt;br /&gt;
poe ./gcm.e -labelio yes &amp;gt; LOG 2&amp;gt;&amp;amp;1&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* Mixed MPI_OMP:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
LoadLeveler example (on Gnome):&lt;br /&gt;
# @ job_name = job_mpi8&lt;br /&gt;
# standard output file&lt;br /&gt;
# @ output = job_mpi8.out.$(jobid)&lt;br /&gt;
# standard error file&lt;br /&gt;
# @ error = job_mpi8.err.$(jobid)&lt;br /&gt;
# job type&lt;br /&gt;
# @ job_type = mpich&lt;br /&gt;
# @ blocking = unlimited&lt;br /&gt;
# time&lt;br /&gt;
# @ class = AP&lt;br /&gt;
# Number of procs&lt;br /&gt;
# @ total_tasks = 8&lt;br /&gt;
# @ resources=ConsumableCpus(1) ConsumableMemory(5000 mb)&lt;br /&gt;
# @ queue&lt;br /&gt;
set -vx&lt;br /&gt;
export OMP_NUM_THREADS=2 # otherwise 8 OpenMP threads are launched by default&lt;br /&gt;
export OMP_STACKSIZE=2500MB&lt;br /&gt;
mpirun gcm_32x24x11_phymars_para.e &amp;gt; gcm.out 2&amp;gt;&amp;amp;1&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
IMPORTANT: ConsumableMemory must be equal to OMP_NUM_THREADS × OMP_STACKSIZE (here 2 × 2500 MB = 5000 MB).&lt;br /&gt;
In this case, we are using 8×2 = 16 cores.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
LoadLeveler example (on Ada):&lt;br /&gt;
module load intel/2012.0&lt;br /&gt;
# @ output = output.$(jobid)&lt;br /&gt;
# @ error = $(output)&lt;br /&gt;
# @ job_type = parallel&lt;br /&gt;
## Number of MPI process&lt;br /&gt;
# @ total_tasks = 8&lt;br /&gt;
## Number of OpenMP tasks attached to each MPI process&lt;br /&gt;
# @ parallel_threads = 2&lt;br /&gt;
## Memory used by each MPI process&lt;br /&gt;
# @ as_limit = 5gb&lt;br /&gt;
# @ wall_clock_limit=01:00:00&lt;br /&gt;
# @ core_limit = 0&lt;br /&gt;
# @ queue&lt;br /&gt;
set -x&lt;br /&gt;
export OMP_STACKSIZE=2500MB&lt;br /&gt;
poe ./gcm.e -labelio yes &amp;gt; LOG 2&amp;gt;&amp;amp;1&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
IMPORTANT: In this case, each thread needs 2.5 GB and we are using 2 OpenMP threads for each MPI process, so as_limit = 2 × 2.5 GB = 5 GB.&lt;br /&gt;
&lt;br /&gt;
[[Category:Generic-LMDZ]]&lt;br /&gt;
[[Category:Generic-DYNAMICO]]&lt;/div&gt;</summary>
		<author><name>Emillour</name></author>	</entry>

	<entry>
		<id>http://lmdz-forge.lmd.jussieu.fr/mediawiki/Planets/index.php?title=Quick_Install_and_Run&amp;diff=2900</id>
		<title>Quick Install and Run</title>
		<link rel="alternate" type="text/html" href="http://lmdz-forge.lmd.jussieu.fr/mediawiki/Planets/index.php?title=Quick_Install_and_Run&amp;diff=2900"/>
				<updated>2025-10-02T15:06:48Z</updated>
		
		<summary type="html">&lt;p&gt;Emillour: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;In this page we give a hopefully exhaustive enough overview of the necessary prerequisites and steps to download, compile and run a simple simulation with the GCM in an &amp;quot;Early Mars&amp;quot; setup (i.e. a desert planet with a CO2 atmosphere) on a Linux computer.&lt;br /&gt;
&lt;br /&gt;
Note that there is a dedicated install script that attempts to do all these steps (up to and including running the simulation), which you can obtain here: https://web.lmd.jussieu.fr/~lmdz/planets/generic/install_scripts/install_lmdz_generic_earlymars.bash&lt;br /&gt;
Automating the process is not trivial as there are many subtle variants of Linux flavors and user account setups, so the script may fail in your case, but hopefully the information given in this page will help you solve the problems encountered.&lt;br /&gt;
&lt;br /&gt;
== Prerequisites: Tools and Libraries ==&lt;br /&gt;
In order to use (i.e. compile and run) the GCM, one needs to have some tools and installed libraries at hand. We list below a (minimal) set that you should check is available and/or that you'll need to first install on your machine. Note that we assume in this tutorial that you are on a Linux native-OS/cluster.&lt;br /&gt;
&lt;br /&gt;
===  Fortran compiler ===&lt;br /&gt;
The GCM source code is in Fortran. One thus needs a Fortran compiler to build (compile) the executable.&lt;br /&gt;
The most easily available one (on Linux) is gfortran and examples discussed here will assume it is the one used.&lt;br /&gt;
You can check that you indeed have a gfortran compiler at hand with the following Bash command:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
which gfortran&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
which should return something like&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
/usr/bin/gfortran&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Subversion ===&lt;br /&gt;
The source code is managed using subversion (svn), which you'll need in order to download or update the code. Leaving aside the subtleties of svn and code organization for now, downloading the code amounts to doing the following:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
svn checkout https://svn.lmd.jussieu.fr/Planeto/trunk --depth empty&lt;br /&gt;
cd trunk&lt;br /&gt;
svn update LMDZ.COMMON LMDZ.GENERIC&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
As a side note: the source code that will be fetched by svn can also be browsed online here: https://trac.lmd.jussieu.fr/Planeto&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--Note: if the command line above doesn't work, you may also try to replace 'http' by 'https'.--&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Git === &lt;br /&gt;
&lt;br /&gt;
Alternatively to svn, you can use [[Git usage|git to download the source code]]. &lt;br /&gt;
&lt;br /&gt;
=== FCM ===&lt;br /&gt;
The FCM (Flexible Configuration Management) tool is a suite of Perl scripts to help build and manage codes. We use a slightly modified version which can be obtained using subversion (svn). Ideally you'll want to download it somewhere on your computer once and for all. To do this:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
svn checkout https://forge.ipsl.jussieu.fr/fcm/svn/PATCHED/FCM_V1.2&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
You'll then need to add the resulting FCM_V1.2/bin directory to your PATH environment variable so that the command &amp;quot;fcm&amp;quot; may be used from anywhere on your machine, e.g. by adding the following line to your .bashrc (the .bashrc file is a hidden configuration script in your home directory (~/.bashrc) that runs whenever you start a new Bash shell):&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
export PATH=$PATH:$HOME/FCM_V1.2/bin&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
The assumption here is that the downloaded FCM_V1.2 directory is in your home ($HOME) directory. Adapt accordingly if not.&lt;br /&gt;
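&lt;br /&gt;
To check that the PATH update works (in a newly opened shell, or after re-sourcing your .bashrc), you can verify that the fcm command is found, e.g.:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
source ~/.bashrc&lt;br /&gt;
which fcm&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;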
&lt;br /&gt;
=== the NetCDF library ===&lt;br /&gt;
The GCM reads and writes input and output files in NetCDF format, therefore a NetCDF library is required. Most of the clusters propose a NetCDF library that you can load before using the model. &lt;br /&gt;
&lt;br /&gt;
If this library is not available, you can install it by yourself on your system (check out [[the netCDF library]] page for more). You can use the following home-made &amp;quot;install_netcdf4_hdf5_seq.bash&amp;quot; script to do so. For this, ensure that you are in your home directory:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
mkdir netcdf&lt;br /&gt;
cd netcdf&lt;br /&gt;
wget -nv --no-check-certificate http://www.lmd.jussieu.fr/~lmdz/pub/script_install/install_netcdf4_hdf5_seq.bash&lt;br /&gt;
chmod u=rwx install_netcdf4_hdf5_seq.bash&lt;br /&gt;
./install_netcdf4_hdf5_seq.bash &amp;gt; netcdf.log 2&amp;gt;&amp;amp;1&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
Compiling the library and dependencies can take a while (&amp;gt;&amp;gt;15 minutes; be patient).&lt;br /&gt;
Once this is done, check file netcdf.log to verify that all went well.&lt;br /&gt;
You may want to also add its &amp;quot;bin&amp;quot; directory to your PATH environment variable by adding in your .bashrc a line of:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
export PATH=$PATH:$HOME/netcdf/bin&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
The assumption here is that you have run the &amp;quot;install_netcdf4_hdf5_seq.bash&amp;quot; script in a &amp;quot;netcdf&amp;quot; subdirectory of your home directory. Adapt accordingly if not.&lt;br /&gt;
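&lt;br /&gt;
To quickly check that the freshly installed tools are then accessible from your updated PATH (e.g. in a new shell), you can verify that the ncdump utility is found:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
source ~/.bashrc&lt;br /&gt;
which ncdump&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;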
&lt;br /&gt;
As a side note: The NetCDF library provides a very simple command line tool (ncdump) to inspect the contents of NetCDF files, but you'll need more advanced visualization tools (e.g., Panoply, Python scripts, etc. - see further down this page in the &amp;quot;Checking the Results&amp;quot; section) for more advanced post-processing of the outputs.&lt;br /&gt;
&lt;br /&gt;
=== the IOIPSL library ===&lt;br /&gt;
&lt;br /&gt;
The IOIPSL (Input/Output IPSL) library is designed to handle both the reading of some input files used by the GCM (the *.def files which are described further below) and the writing of some NetCDF output files.&lt;br /&gt;
&lt;br /&gt;
==== Automated IOIPSL install script ====&lt;br /&gt;
Scripts to download and install the IOIPSL library can be found in the &amp;quot;ioipsl&amp;quot; subdirectory of the &amp;quot;LMDZ.COMMON&amp;quot; directory. Since here we assume we're working with gfortran, the relevant one is &amp;quot;install_ioipsl_gfortran.bash&amp;quot;. If your PATH environment variable already includes the path to your NetCDF distribution's bin directory (see previous section), then all you need to do is execute the script:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
./install_ioipsl_gfortran.bash&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
If all went well the script should end with:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
OK: ioipsl library is in ...&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
''(for further details about [[The_IOIPSL_Library|the IOIPSL library]] and installing it, follow the link and/or use the Search Box at the top of this page)''&lt;br /&gt;
&lt;br /&gt;
== GCM Input Datafiles and Datasets ==&lt;br /&gt;
In order to run, the GCM needs some inputs, such as initial conditions (values of state variables), external inputs (e.g. optical properties of aerosols) and a simulation setup (e.g. specifications of how long to run, which parametrizations should be activated, etc.).&lt;br /&gt;
&lt;br /&gt;
In the spirit of the illustrative example considered here (an &amp;quot;Early Mars&amp;quot; simulation), a set of necessary input data may be downloaded with:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
wget -nv --no-check-certificate https://web.lmd.jussieu.fr/~lmdz/planets/generic/reference_setups/reference_earlymars_32x32x15_b32x36.tar.gz&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
Once unpacked (to do that, you can execute the command &amp;quot;tar xvzf reference_earlymars_32x32x15_b32x36.tar.gz&amp;quot;), the resulting &amp;quot;reference_earlymars_32x32x15_b32x36&amp;quot; directory will contain all that is needed, namely:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
callphys.def  gases.def  startfi.nc  traceur.def&lt;br /&gt;
datadir/      run.def    start.nc    z2sig.def&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
* Initial condition NetCDF files ''start.nc'' and ''startfi.nc''; the first containing initial condition values for the dynamics and the second initial condition values for the physics.&lt;br /&gt;
* A '''datadir''' directory containing external inputs (aerosol properties, stellar spectra, etc.)&lt;br /&gt;
* Some ASCII *.def files containing run parameters, namely:&lt;br /&gt;
# [[The_run.def_Input_File | run.def]] : &amp;quot;master def file&amp;quot; containing main run parameters&lt;br /&gt;
# [[The_callphys.def_Input_File | callphys.def]] : file containing flags and keys for the various physics parametrizations&lt;br /&gt;
# [[The_z2sig.def_Input_File | z2sig.def]] : file describing the sought vertical discretization&lt;br /&gt;
# [[The_traceur.def_Input_File | traceur.def]] : file specifying the tracer number and names&lt;br /&gt;
# [[The_gases.def_Input_File | gases.def]] : file specifying the list of gases (main and trace) in the atmosphere&lt;br /&gt;
&lt;br /&gt;
== Compiling the GCM ==&lt;br /&gt;
Now that all the prerequisites are fulfilled, it is (almost!) time to compile the GCM.&lt;br /&gt;
&lt;br /&gt;
=== Prior to a first compilation: setting up the target architecture files ===&lt;br /&gt;
Compiling the model is done using a dedicated Bash script, ''makelmdz_fcm'', located in the '''LMDZ.COMMON''' directory. This script however relies on ''architecture files''. These files contain information on which compiler to use, what compilation options to use, where the relevant libraries are located, etc. In practice, one must thus create these ASCII text files in the '''arch/''' subdirectory of '''LMDZ.COMMON'''. The naming convention is rather straightforward: when the script ''makelmdz_fcm'' is run with the option '''-arch somename''', it will look for files ''arch/arch-somename.env'', ''arch/arch-somename.path'' and ''arch/arch-somename.fcm''. Leaving aside a detailed description for later (see [[The_Target_Architecture_(&amp;quot;arch&amp;quot;)_Files|this page]]), here we mention that:&lt;br /&gt;
* the ''arch*.env'' is an optional file containing ''environment'' information, such as setting up environment variables or loading modules on some machines, e.g.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
export NETCDF_HOME=/path/to/the/netcdf/distribution&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
A more realistic (but more specific) example of an '''arch*.env''' file using &amp;quot;recent&amp;quot; module commands, adapted for compilation and visualisation on a given supercomputer, would look more like the following:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
module purge&lt;br /&gt;
module load GCC/10.3.0  OpenMPI/4.1.1&lt;br /&gt;
module load netCDF-Fortran/4.5.3&lt;br /&gt;
export NETCDF_INCDIR=&amp;quot;/opt/ebsofts/netCDF-Fortran/4.5.3-gompi-2021a/include&amp;quot;&lt;br /&gt;
export NETCDFF_LIBDIR=&amp;quot;/opt/ebsofts/netCDF-Fortran/4.5.3-gompi-2021a/lib&amp;quot;&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
Note that the last two lines above specify paths to the '''include''' and '''lib''' directories used on this system and will certainly vary from system to system. Likewise, the exact module versions will most likely need to be adapted in your specific '''arch*.env''' file.&lt;br /&gt;
* the '''arch*.path''' is a mandatory file containing information relative to external libraries such as NetCDF and IOIPSL, e.g.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
ROOT=$PWD&lt;br /&gt;
&lt;br /&gt;
NETCDF_LIBDIR=&amp;quot;-L${NETCDF_HOME}/lib&amp;quot;&lt;br /&gt;
NETCDF_LIB=&amp;quot;-lnetcdf -lnetcdff&amp;quot;&lt;br /&gt;
NETCDF_INCDIR=&amp;quot;-I${NETCDF_HOME}/include&amp;quot;&lt;br /&gt;
&lt;br /&gt;
IOIPSL_INCDIR=&amp;quot;-I$ROOT/../IOIPSL/inc&amp;quot;&lt;br /&gt;
IOIPSL_LIBDIR=&amp;quot;-L$ROOT/../IOIPSL/lib&amp;quot;&lt;br /&gt;
IOIPSL_LIB=&amp;quot;-lioipsl&amp;quot;&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
Each library is referenced by a fixed identifier (NETCDF, IOIPSL, XIOS, ...) and 3 trailing strings: '''_LIBDIR''', for the path to the library, '''_LIB''', for the library name(s), and '''_INCDIR''' for the path to the library's ''include'' directory.&lt;br /&gt;
&lt;br /&gt;
* the '''arch*.fcm''' is a mandatory file containing information relative to the compiler and compilation options, e.g.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
%COMPILER            gfortran&lt;br /&gt;
%LINK                gfortran&lt;br /&gt;
%AR                  ar&lt;br /&gt;
%MAKE                make&lt;br /&gt;
%FPP_FLAGS           -P -traditional&lt;br /&gt;
%FPP_DEF             NC_DOUBLE&lt;br /&gt;
%BASE_FFLAGS         -c -fdefault-real-8 -fdefault-double-8 -ffree-line-length-none -fno-align-commons&lt;br /&gt;
%PROD_FFLAGS         -O3&lt;br /&gt;
%DEV_FFLAGS          -O&lt;br /&gt;
%DEBUG_FFLAGS        -ffpe-trap=invalid,zero,overflow -fbounds-check -g3 -O0 -fstack-protector-all -finit-real=snan -fbacktrace&lt;br /&gt;
%MPI_FFLAGS&lt;br /&gt;
%OMP_FFLAGS         &lt;br /&gt;
%BASE_LD     &lt;br /&gt;
%MPI_LD&lt;br /&gt;
%OMP_LD              &lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
Again, not going into a detailed description (follow [[The_Target_Architecture_(&amp;quot;arch&amp;quot;)_Files|this link]] for that), just note here that each line corresponds to a keyword (starting with &amp;quot;%&amp;quot;) followed by the relevant options. Here, we mention a few of the main ones:&lt;br /&gt;
* %COMPILER: The compiler to use (here, gfortran)&lt;br /&gt;
* %BASE_FFLAGS: compiler options (always included)&lt;br /&gt;
* %PROD_FFLAGS: compilation flags to include if makelmdz_fcm is run with the &amp;quot;-prod&amp;quot; option&lt;br /&gt;
* %DEBUG_FFLAGS: compilation flags to include if makelmdz_fcm is run with the &amp;quot;-debug&amp;quot; option&lt;br /&gt;
* %BASE_LD: flags to add at the linking step of the compilation&lt;br /&gt;
&lt;br /&gt;
Note that if you are using a recent version of gfortran (10 or later), you have to add an extra option to the %BASE_FFLAGS, namely '''-fallow-argument-mismatch'''.&lt;br /&gt;
&lt;br /&gt;
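For instance, with the gfortran arch*.fcm example shown above, the modified line would read:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
%BASE_FFLAGS         -c -fdefault-real-8 -fdefault-double-8 -ffree-line-length-none -fno-align-commons -fallow-argument-mismatch&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;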
Also note that you can find in the '''LMDZ.COMMON/arch/''' many examples of arch files that you can re-use as is if you compile the model on our usual computing clusters (e.g. Spirit, Adastra, etc.). Just check the content of the directory to see if your favorite computing cluster already has arch files.&lt;br /&gt;
&lt;br /&gt;
=== Compiling a test case (early Mars) ===&lt;br /&gt;
To compile the GCM at the sought resolution for the Early Mars test case run (in LMDZ.COMMON):&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
./makelmdz_fcm -arch local -p std -d 32x32x15 -b 32x36 gcm &lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&amp;lt;!-- -s option is no more needed ; * '''-s 2''': the physics parametrizations will handle 2 radiatively active tracers (water ice and dust for the Early Mars setup) --&amp;gt;&lt;br /&gt;
Here, we assume that you have generated the '''arch-local.*''' files as per what is suggested in the previous section.&lt;br /&gt;
The options for ''makelmdz_fcm'' used here imply:&lt;br /&gt;
* '''-p std''': the GCM will use the &amp;quot;std&amp;quot; physics package (i.e. the generic physics)&lt;br /&gt;
* '''-d 32x32x15''': the GCM grid will be 32x32 in longitude x latitude, with 15 vertical levels.&lt;br /&gt;
* '''-b 32x36''': the physics radiative transfer will be done using 32 bands in the IR and 36 in the visible.&lt;br /&gt;
For a glimpse at all the possible ''makelmdz_fcm'' options and their meanings, run:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
./makelmdz_fcm -h&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
and/or check the dedicated [[The_makelmdz_fcm_GCM_Compilation_Script|makelmdz_fcm page]].&lt;br /&gt;
&lt;br /&gt;
Upon successful compilation, the executable '''gcm_32x32x15_phystd_b32x36_seq.e''' should be generated in the '''bin''' subdirectory.&lt;br /&gt;
&lt;br /&gt;
=== Known issues ===&lt;br /&gt;
&lt;br /&gt;
If the compilation fails, it might be due to the options used in the arch file. &lt;br /&gt;
For example, if you are using gfortran prior to 10, you could get an error such as:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
gfortran: error: unrecognized command line option ‘-fallow-argument-mismatch’; did you mean ‘-Wno-argument-mismatch’?&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
This can be solved by removing the option '''-fallow-argument-mismatch''' from the arch.fcm file.&lt;br /&gt;
&lt;br /&gt;
If you are using a recent version of gfortran (10 or beyond) without the option '''-fallow-argument-mismatch''', the compilation will probably fail with an error such as:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
  136 |      .       idim_index,nvarid)&lt;br /&gt;
      |             2                                       &lt;br /&gt;
......&lt;br /&gt;
  211 |       ierr = NF_DEF_VAR (nid, &amp;quot;aire&amp;quot;, NF_DOUBLE, 2, id,nvarid)&lt;br /&gt;
      |                                                    1&lt;br /&gt;
Error: Rank mismatch between actual argument at (1) and actual argument at (2) (scalar and rank-1)&lt;br /&gt;
fcm_internal compile failed (256)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Adding the '''-fallow-argument-mismatch''' compilation option to the arch.fcm file solves the issue.&lt;br /&gt;
&lt;br /&gt;
== Running the GCM ==&lt;br /&gt;
To run your first simulation, you need to first copy (or move) the executable '''gcm_32x32x15_phystd_b32x36_seq.e''' to the directory containing the initial conditions and parameter files, e.g. '''reference_earlymars_32x32x15_b32x36''', and run it there.&lt;br /&gt;
This is usually a two-step process: the (optional) first step is to source the environment architecture file (the very same that was used to compile the model), e.g.:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
source ../LMDZ.COMMON/arch.env&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
The second step is to execute the model, e.g.:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
./gcm_32x32x15_phystd_b32x36_seq.e &amp;gt; gcm.out 2&amp;gt;&amp;amp;1&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
With this command line, the (text) output messages are redirected to a text file, '''gcm.out'''. It is convenient to keep this file for later inspection (e.g., to track down a bug). If there is no redirection (only '''./gcm_32x32x15_phystd_b32x36_seq.e'''), then the outputs will be displayed directly on the screen.&lt;br /&gt;
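&lt;br /&gt;
While the simulation is running, one can for instance monitor its progress by following the end of this file:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
tail -f gcm.out&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;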
&lt;br /&gt;
== Checking the Results of a Simulation ==&lt;br /&gt;
Once the simulation is finished, you'll know that all went well (&amp;quot;everything is cool&amp;quot;) if the last few lines of the standard text output read:&lt;br /&gt;
[[File:tsurf_benchmark_early_Mars.png|300px|thumb|Final surface temperature map of the reference simulation (plotted using Panoply).]]&lt;br /&gt;
[[File:water_ice_cloud_column_benchmark_early_Mars.png|300px|thumb|Final water ice cloud column map of the reference simulation (plotted using Panoply).]]&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
 in abort_gcm&lt;br /&gt;
 Stopping in leapfrog&lt;br /&gt;
 Reason = Simulation finished &lt;br /&gt;
 Everything is cool&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
If not, start looking for an error message and a way to fix the problem...&lt;br /&gt;
&lt;br /&gt;
Apart from the standard text output messages from the GCM, which are mostly for monitoring and checking the simulation progress, the user will more likely be interested in checking the contents of the ''diagfi.nc'' file produced by the GCM, as it contains instantaneous values of the main model variables (atmospheric temperature, winds, etc.).&lt;br /&gt;
&lt;br /&gt;
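For a first quick look at its contents (the list of variables and dimensions), one can use the ncdump tool mentioned earlier:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
ncdump -h diagfi.nc&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;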
&lt;br /&gt;
&lt;br /&gt;
To check that you have successfully run the simulation, we provide some graphs against which to evaluate the results of the simulation described in this tutorial (early Mars reference, 32x32x15 resolution).&lt;br /&gt;
&lt;br /&gt;
In the plots shown here, we present maps of the surface temperatures ('tsurf' variable) and the water ice cloud column ('h2o_ice_col' variable), both plotted using Panoply.&lt;br /&gt;
&lt;br /&gt;
Side note: There is a variety of freely available software packages that can be used to visualise the NetCDF ''diagfi.nc'' file, such as Panoply, Ferret, Ncview, Grads, Python, etc. (see more details in the [[Tool_Box | Tool Box section]]).&lt;br /&gt;
&lt;br /&gt;
== Taking Things to the Next Level ==&lt;br /&gt;
The short tutorial presented in this page is meant to give an overview of what is required to install and run the GCM, and to check the results of a simulation. Moving on to a more intensive and problem-specific usage will require diving into additional topics and aspects such as:&lt;br /&gt;
* Selecting the appropriate inputs and run parameters for a given study.&lt;br /&gt;
* Compiling and running in parallel (MPI and/or OpenMP) to obtain results in a reasonable time frame.&lt;br /&gt;
* Post-processing and analysis of model outputs.&lt;br /&gt;
All these points and much more are detailed in the many pages of this site (do check out the menu on the left and make intensive use of the site's search engine)!&lt;br /&gt;
&lt;br /&gt;
[[Category:Generic-Model]]&lt;br /&gt;
[[Category:Generic-LMDZ]]&lt;/div&gt;</summary>
		<author><name>Emillour</name></author>	</entry>

	<entry>
		<id>http://lmdz-forge.lmd.jussieu.fr/mediawiki/Planets/index.php?title=Dust_Cycle_in_Mars_PCM5&amp;diff=2811</id>
		<title>Dust Cycle in Mars PCM5</title>
		<link rel="alternate" type="text/html" href="http://lmdz-forge.lmd.jussieu.fr/mediawiki/Planets/index.php?title=Dust_Cycle_in_Mars_PCM5&amp;diff=2811"/>
				<updated>2025-08-22T15:24:48Z</updated>
		
		<summary type="html">&lt;p&gt;Emillour: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;This page describes, at various levels of detail, how to use the dust cycle in the Mars PCM5 and what it features.&lt;br /&gt;
&lt;br /&gt;
== Brief Overview ==&lt;br /&gt;
The reference work leading to the implementation in the PCM is the PhD work of J.-B. Madeleine; check out his PhD manuscript and 2011 JGR article&lt;br /&gt;
&amp;quot;Revisiting the radiative impact of dust on Mars using the LMD Global Climate Model&amp;quot; https://doi.org/10.1029/2011JE003855&lt;br /&gt;
&lt;br /&gt;
The main features and concepts on how the dust is handled are:&lt;br /&gt;
* Dust is modeled as a log-normal population (varying in size), which in the end requires managing only two ''tracers'': the first two moments of the distribution, namely the dust mass mixing ratio and number density (tracers ''dust_mass'' and ''dust_number'').&lt;br /&gt;
* All the physical processes like large scale advection, mixing by the turbulence in the planetary boundary layer, sedimentation, etc. are modeled.&lt;br /&gt;
* How the dust gets injected into the atmosphere (i.e. the details of dust lifting from the surface) is not modeled; instead we use the simple assumption that there is always some injection of dust from the surface (most of it simply falling back down, but some of it, when the conditions are right, propagating upward).&lt;br /&gt;
* In addition, the dust in each column is rescaled so that its column opacity then matches that of a driving dust scenario (typically derived from observations).&lt;br /&gt;
&lt;br /&gt;
== Flags concerning the dust cycle in a PCM version 5 setup ==&lt;br /&gt;
A setup including adequate flags and parameters for a PCM5 dust cycle can be found in the reference [https://trac.lmd.jussieu.fr/Planeto/browser/trunk/LMDZ.MARS/deftank/callphys.def.GCM5 callphys.def.GCM5] file provided in deftank.&lt;br /&gt;
&lt;br /&gt;
As mentioned above, the 2 dedicated tracers (dust moments) should be at hand (i.e. declared in the traceur.def file):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
dust_mass&lt;br /&gt;
dust_number&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
In practice the relevant callphys.def parameters are:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
#Directory where external input files are:&lt;br /&gt;
datadir=/users/lmdz/WWW/planets/mars/datadir&lt;br /&gt;
&lt;br /&gt;
## Dust scenario. Used if the dust is prescribed (i.e. if active=F)&lt;br /&gt;
## ~~~~~~~~~~~~~&lt;br /&gt;
#  =1 Dust opt.deph read in startfi; =2 Viking scenario; =3 MGS scenario,&lt;br /&gt;
#  =4 Mars Year 24 from TES assimilation (old version of MY24; dust_tes.nc file)&lt;br /&gt;
#  =6 &amp;quot;cold&amp;quot; (low dust) scenario ; =7 &amp;quot;warm&amp;quot; (high dust) scenario&lt;br /&gt;
#  =8 &amp;quot;climatology&amp;quot; (our best guess of a typical Mars year) scenario&lt;br /&gt;
#  =24 Mars Year 24  ; =25 Mars Year 25 (year with a global dust storm) ; ...&lt;br /&gt;
#  =30 Mars Year 30 &lt;br /&gt;
iaervar = 26&lt;br /&gt;
# Dust opacity at 610 Pa (when constant, i.e. for the iaervar=1 case)&lt;br /&gt;
tauvis=0.2&lt;br /&gt;
# Dust vertical distribution: &lt;br /&gt;
# (=0: old distrib. (Pollack90), =1: top set by &amp;quot;topdustref&amp;quot;,&lt;br /&gt;
#  =2: Viking scenario; =3 MGS scenario)&lt;br /&gt;
iddist  = 3&lt;br /&gt;
# Dust top altitude (km). (Matters only if iddist=1)&lt;br /&gt;
topdustref = 55.&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
* '''iaervar''' points to the driving dust scenario (daily maps; NetCDF files located under '''datadir''') to use: e.g. ''dust_clim.nc'' if &amp;lt;code&amp;gt;iaervar=8&amp;lt;/code&amp;gt;, ''dust_MY30.nc'' if &amp;lt;code&amp;gt;iaervar=30&amp;lt;/code&amp;gt;, etc. In the special case where &amp;lt;code&amp;gt;iaervar=1&amp;lt;/code&amp;gt;, the driving dust column opacity is constant (over space and time) and equal to the value specified by the '''tauvis''' flag.&lt;br /&gt;
* '''tauvis''' is only used when &amp;lt;code&amp;gt;iaervar=1&amp;lt;/code&amp;gt;, in which case it is the imposed value of the column dust opacity (at the reference pressure of 610 Pa).&lt;br /&gt;
* '''iddist''' and '''topdustref''' are not used (they date back to an even simpler setup).&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
## Tracer (dust water, ice and/or chemical species) options :&lt;br /&gt;
## ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~&lt;br /&gt;
# DUST: Transported dust ? (if &amp;gt;0, use 'dustbin' dust bins)&lt;br /&gt;
dustbin    = 2&lt;br /&gt;
# DUST: Radiatively active dust ? (matters if dustbin&amp;gt;0)&lt;br /&gt;
active  = .true.&lt;br /&gt;
# DUST: use mass and number mixing ratios to predict dust size ?&lt;br /&gt;
doubleq   = .true.&lt;br /&gt;
# DUST: use a small population of dust particules (submicron dust)?&lt;br /&gt;
submicron = .false.&lt;br /&gt;
# DUST: lifted by GCM surface winds ?&lt;br /&gt;
lifting = .true.&lt;br /&gt;
# DUST: lifted by dust devils ?&lt;br /&gt;
callddevil = .false.&lt;br /&gt;
# DUST: Scavenging by H2O/CO2 snowfall ?&lt;br /&gt;
scavenging = .true.&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
* '''dustbin''' should be 2, because we are using the two-moment scheme (with &amp;lt;code&amp;gt;doubleq=.true.&amp;lt;/code&amp;gt;).&lt;br /&gt;
* '''active''' should definitely be set to &amp;lt;code&amp;gt;.true.&amp;lt;/code&amp;gt;, to account for the radiative effect of dust.&lt;br /&gt;
* '''doubleq''' should be &amp;lt;code&amp;gt;.true.&amp;lt;/code&amp;gt; (with &amp;lt;code&amp;gt;dustbin=2&amp;lt;/code&amp;gt;), to use the two-moment scheme.&lt;br /&gt;
* '''submicron''' should be &amp;lt;code&amp;gt;.false.&amp;lt;/code&amp;gt;; it was put there to eventually allow a second population of small (submicron) particles. Neither used nor validated.&lt;br /&gt;
* '''lifting''' should be &amp;lt;code&amp;gt;.true.&amp;lt;/code&amp;gt;, to impose continuous dust injection from the surface into the first atmospheric layer.&lt;br /&gt;
* '''callddevil''' should be &amp;lt;code&amp;gt;.false.&amp;lt;/code&amp;gt;; it was put there to eventually account for dust injection via dust devils. Neither used nor validated.&lt;br /&gt;
* '''scavenging''' should be &amp;lt;code&amp;gt;.true.&amp;lt;/code&amp;gt; to include scavenging of dust by H2O and/or CO2 snowfall (assuming the CO2 and/or H2O cycles are also computed). It has a significant effect in the polar night.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
# SCATTERERS: set number of scatterers. must be compliant with preceding options.&lt;br /&gt;
naerkind = 2&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
* '''naerkind''' is the number of radiatively active scatterers. Dust is one of them (if &amp;lt;code&amp;gt;active=.true.&amp;lt;/code&amp;gt;) so naerkind should be at least 1; and if the water cycle is also computed with radiatively active clouds (&amp;lt;code&amp;gt;water=.true.&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;activice=.true.&amp;lt;/code&amp;gt;) then &amp;lt;code&amp;gt;naerkind=2&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
== More technical stuff, and how/where it is managed in the Mars PCM code ==&lt;br /&gt;
....TODO....&lt;br /&gt;
&lt;br /&gt;
=== Routines ===&lt;br /&gt;
* initracer : Initialize some dust properties stored in the '''tracer_mod''' module: dedicated tracer indexes &amp;lt;code&amp;gt;igcm_dust_mass&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;igcm_dust_number&amp;lt;/code&amp;gt;, reference dust density &amp;lt;code&amp;gt;rho_q(igcm_dust_mass)&amp;lt;/code&amp;gt;, variance of the lifted/injected dust distribution &amp;lt;code&amp;gt;varian&amp;lt;/code&amp;gt;, reference effective radius of the lifted dust &amp;lt;code&amp;gt;reff_lift&amp;lt;/code&amp;gt; and injection/lifting coefficient &amp;lt;code&amp;gt;alpha_lift(igcm_dust_mass)&amp;lt;/code&amp;gt;&lt;br /&gt;
* aeropacity : where the optical depth of the aerosols is computed (see e.g. the &amp;quot;dust_doubleq&amp;quot; case for dust) and the call to compute_dustscaling is done&lt;br /&gt;
* compute_dustscaling (in dust_scaling_mod) : where &amp;quot;tauscaling&amp;quot;, the dust rescaling coefficient, is computed&lt;br /&gt;
* vdifc : where the dust is lifted/injected from the surface into the atmosphere&lt;br /&gt;
&lt;br /&gt;
=== Parameters and variables in the code ===&lt;br /&gt;
* '''tauscaling''' : dust rescaling coefficient (one value per column) &lt;br /&gt;
* '''tau_pref_gcm''' : dust column opacity at 610 Pa (should be equal to &amp;quot;tau_pref_scenario&amp;quot;, the dust column opacity from the driving scenario)&lt;br /&gt;
* '''freedust''' : this parameter should be &amp;lt;code&amp;gt;freedust=.false.&amp;lt;/code&amp;gt;, so that there is rescaling of the dust using the &amp;quot;tauscaling&amp;quot; coefficient&lt;br /&gt;
* '''dustscaling_mode''' : this parameter should be &amp;lt;code&amp;gt;dustscaling_mode=1&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[[Category:Mars-Model]]&lt;/div&gt;</summary>
		<author><name>Emillour</name></author>	</entry>

	<entry>
		<id>http://lmdz-forge.lmd.jussieu.fr/mediawiki/Planets/index.php?title=Dust_Cycle_in_Mars_PCM5&amp;diff=2810</id>
		<title>Dust Cycle in Mars PCM5</title>
		<link rel="alternate" type="text/html" href="http://lmdz-forge.lmd.jussieu.fr/mediawiki/Planets/index.php?title=Dust_Cycle_in_Mars_PCM5&amp;diff=2810"/>
				<updated>2025-08-22T14:27:41Z</updated>
		
		<summary type="html">&lt;p&gt;Emillour: Created page with &amp;quot;This page details at different levels how to use and what is featured in the dust cycle in Mars PCM5.  == Brief Overview == The reference work leading to the implementation in...&amp;quot;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;This page describes, at various levels of detail, how to use the dust cycle in the Mars PCM5 and what it features.&lt;br /&gt;
&lt;br /&gt;
== Brief Overview ==&lt;br /&gt;
The reference work leading to the implementation in the PCM is the PhD work of J.-B. Madeleine; check out his PhD manuscript and 2011 JGR article&lt;br /&gt;
&amp;quot;Revisiting the radiative impact of dust on Mars using the LMD Global Climate Model&amp;quot; https://doi.org/10.1029/2011JE003855&lt;br /&gt;
&lt;br /&gt;
The main features and concepts on how the dust is handled are:&lt;br /&gt;
* Dust is modeled as a log-normal population (varying in size), which in the end requires managing only two ''tracers'': the first two moments of the distribution, namely the dust mass mixing ratio and number density (tracers ''dust_mass'' and ''dust_number'').&lt;br /&gt;
* All the physical processes like large scale advection, mixing by the turbulence in the planetary boundary layer, sedimentation, etc. are modeled.&lt;br /&gt;
* How the dust gets injected into the atmosphere (i.e. the details of dust lifting from the surface) is not modeled; instead we use the simple assumption that there is always some injection of dust from the surface (most of it simply falling back down, but some of it, when the conditions are right, propagating upward).&lt;br /&gt;
* In addition, the dust in each column is rescaled so that its column opacity then matches that of a driving dust scenario (typically derived from observations).&lt;br /&gt;
&lt;br /&gt;
== Flags concerning the dust cycle in a PCM version 5 setup ==&lt;br /&gt;
A setup including adequate flags and parameters for a PCM5 dust cycle can be found in the reference [https://trac.lmd.jussieu.fr/Planeto/browser/trunk/LMDZ.MARS/deftank/callphys.def.GCM5 callphys.def.GCM5] file provided in deftank.&lt;br /&gt;
&lt;br /&gt;
As mentioned above, the 2 dedicated tracers (dust moments) should be at hand (i.e. declared in the traceur.def file):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
dust_mass&lt;br /&gt;
dust_number&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
In practice the relevant callphys.def parameters are:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
#Directory where external input files are:&lt;br /&gt;
datadir=/users/lmdz/WWW/planets/mars/datadir&lt;br /&gt;
&lt;br /&gt;
## Dust scenario. Used if the dust is prescribed (i.e. if active=F)&lt;br /&gt;
## ~~~~~~~~~~~~~&lt;br /&gt;
#  =1 Dust opt.deph read in startfi; =2 Viking scenario; =3 MGS scenario,&lt;br /&gt;
#  =4 Mars Year 24 from TES assimilation (old version of MY24; dust_tes.nc file)&lt;br /&gt;
#  =6 &amp;quot;cold&amp;quot; (low dust) scenario ; =7 &amp;quot;warm&amp;quot; (high dust) scenario&lt;br /&gt;
#  =8 &amp;quot;climatology&amp;quot; (our best guess of a typical Mars year) scenario&lt;br /&gt;
#  =24 Mars Year 24  ; =25 Mars Year 25 (year with a global dust storm) ; ...&lt;br /&gt;
#  =30 Mars Year 30 &lt;br /&gt;
iaervar = 26&lt;br /&gt;
# Dust opacity at 610 Pa (when constant, i.e. for the iaervar=1 case)&lt;br /&gt;
tauvis=0.2&lt;br /&gt;
# Dust vertical distribution: &lt;br /&gt;
# (=0: old distrib. (Pollack90), =1: top set by &amp;quot;topdustref&amp;quot;,&lt;br /&gt;
#  =2: Viking scenario; =3 MGS scenario)&lt;br /&gt;
iddist  = 3&lt;br /&gt;
# Dust top altitude (km). (Matters only if iddist=1)&lt;br /&gt;
topdustref = 55.&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
* '''iaervar''' points to the driving dust scenario (daily maps; NetCDF files located under '''datadir''') to use: e.g. ''dust_clim.nc'' if &amp;lt;code&amp;gt;iaervar=8&amp;lt;/code&amp;gt;, ''dust_MY30.nc'' if &amp;lt;code&amp;gt;iaervar=30&amp;lt;/code&amp;gt;, etc. In the special case where &amp;lt;code&amp;gt;iaervar=1&amp;lt;/code&amp;gt;, the driving dust column opacity is constant (over space and time) and equal to the value specified by the '''tauvis''' flag.&lt;br /&gt;
* '''tauvis''' is only used when &amp;lt;code&amp;gt;iaervar=1&amp;lt;/code&amp;gt;, in which case it is the imposed value of the column dust opacity (at the reference pressure of 610 Pa).&lt;br /&gt;
* '''iddist''' and '''topdustref''' are not used (they date back to an even simpler setup).&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
## Tracer (dust water, ice and/or chemical species) options :&lt;br /&gt;
## ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~&lt;br /&gt;
# DUST: Transported dust ? (if &amp;gt;0, use 'dustbin' dust bins)&lt;br /&gt;
dustbin    = 2&lt;br /&gt;
# DUST: Radiatively active dust ? (matters if dustbin&amp;gt;0)&lt;br /&gt;
active  = .true.&lt;br /&gt;
# DUST: use mass and number mixing ratios to predict dust size ?&lt;br /&gt;
doubleq   = .true.&lt;br /&gt;
# DUST: use a small population of dust particules (submicron dust)?&lt;br /&gt;
submicron = .false.&lt;br /&gt;
# DUST: lifted by GCM surface winds ?&lt;br /&gt;
lifting = .true.&lt;br /&gt;
# DUST: lifted by dust devils ?&lt;br /&gt;
callddevil = .false.&lt;br /&gt;
# DUST: Scavenging by H2O/CO2 snowfall ?&lt;br /&gt;
scavenging = .true.&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
* '''dustbin''' should be 2, because we are using the two-moment scheme (with &amp;lt;code&amp;gt;doubleq=.true.&amp;lt;/code&amp;gt;).&lt;br /&gt;
* '''active''' should definitely be set to &amp;lt;code&amp;gt;.true.&amp;lt;/code&amp;gt;, to account for the radiative effect of dust.&lt;br /&gt;
* '''doubleq''' should be &amp;lt;code&amp;gt;.true.&amp;lt;/code&amp;gt; (with &amp;lt;code&amp;gt;dustbin=2&amp;lt;/code&amp;gt;), to use the two-moment scheme.&lt;br /&gt;
* '''submicron''' should be &amp;lt;code&amp;gt;.false.&amp;lt;/code&amp;gt;; it was put there to eventually allow a second population of small (submicron) particles. Neither used nor validated.&lt;br /&gt;
* '''lifting''' should be &amp;lt;code&amp;gt;.true.&amp;lt;/code&amp;gt;, to impose continuous dust injection from the surface into the first atmospheric layer.&lt;br /&gt;
* '''callddevil''' should be &amp;lt;code&amp;gt;.false.&amp;lt;/code&amp;gt;; it was put there to eventually account for dust injection via dust devils. Neither used nor validated.&lt;br /&gt;
* '''scavenging''' should be &amp;lt;code&amp;gt;.true.&amp;lt;/code&amp;gt; to include scavenging of dust by H2O and/or CO2 snowfall (assuming the CO2 and/or H2O cycles are also computed). It has a significant effect in the polar night.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
# SCATTERERS: set number of scatterers. must be compliant with preceding options.&lt;br /&gt;
naerkind = 2&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
* '''naerkind''' is the number of radiatively active scatterers. Dust is one of them (if &amp;lt;code&amp;gt;active=.true.&amp;lt;/code&amp;gt;) so naerkind should be at least 1; and if the water cycle is also computed with radiatively active clouds (&amp;lt;code&amp;gt;water=.true.&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;activice=.true.&amp;lt;/code&amp;gt;) then &amp;lt;code&amp;gt;naerkind=2&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
== More technical stuff, and how/where it is managed in the Mars PCM code ==&lt;br /&gt;
....TODO....&lt;br /&gt;
&lt;br /&gt;
[[Category:Mars-Model]]&lt;/div&gt;</summary>
		<author><name>Emillour</name></author>	</entry>

	<entry>
		<id>http://lmdz-forge.lmd.jussieu.fr/mediawiki/Planets/index.php?title=PCM_vertical_coordinate&amp;diff=2809</id>
		<title>PCM vertical coordinate</title>
		<link rel="alternate" type="text/html" href="http://lmdz-forge.lmd.jussieu.fr/mediawiki/Planets/index.php?title=PCM_vertical_coordinate&amp;diff=2809"/>
				<updated>2025-08-21T15:07:06Z</updated>
		
		<summary type="html">&lt;p&gt;Emillour: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Overview ==&lt;br /&gt;
The PCM vertical coordinate, also called &amp;quot;model levels&amp;quot;, is rather specific and typically consists of &amp;quot;sigma&amp;quot; or hybrid &amp;quot;sigma-pressure&amp;quot; coordinates. What one needs to keep in mind is that model levels are not at fixed altitude, nor in most cases at fixed pressure. The pressure $$P$$ of a model level $$k$$ at time $$t$$ is:&lt;br /&gt;
$$&lt;br /&gt;
\begin{align}&lt;br /&gt;
  P(k,t) &amp;amp; = ap(k) + bp(k) \cdot Ps(t)&lt;br /&gt;
\end{align}&lt;br /&gt;
$$ &lt;br /&gt;
Where $$ap(k)$$ and $$bp(k)$$ are respectively the hybrid pressure and hybrid sigma coefficients. Note that $$ap()$$ and $$bp()$$ are time-independent, while $$Ps(t)$$ is the surface pressure (which typically varies with time). &lt;br /&gt;
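&lt;br /&gt;
As a purely illustrative worked example (the numbers here are made up): for a level with $$ap(k)=10$$ Pa and $$bp(k)=0.5$$, above a point where the surface pressure at a given time is $$Ps=610$$ Pa, the level pressure at that time is $$P = 10 + 0.5 \cdot 610 = 315$$ Pa.&lt;br /&gt;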
&lt;br /&gt;
=== sigma coordinates ===&lt;br /&gt;
If in the general equation above one sets $$ap(k)=0$$ for all $$k$$, then $$P(k,t)=bp(k) \cdot Ps(t)$$, which can be re-written as $$bp(k)=P(k,t)/Ps(t)$$: this is the expression of a &amp;quot;sigma&amp;quot; coordinate (often represented with the notation $$\sigma=P/Ps$$). In a hydrostatic system, pressure monotonically decreases with altitude and thus &amp;quot;sigma&amp;quot; is indeed a coordinate, which monotonically decreases from 1 at the surface (where $$P=P_s$$) to 0 at the top of the atmosphere (where $$P=0$$). &amp;quot;Sigma&amp;quot; coordinates are also often called &amp;quot;terrain-following&amp;quot; coordinates because for a given model level $$k$$ the pressure is a constant factor of the surface pressure, which to first order depends on the topography.&lt;br /&gt;
&lt;br /&gt;
=== pressure coordinates ===  &lt;br /&gt;
If in the general equation above one sets $$bp(k)=0$$ for all $$k$$, then $$P(k,t)=ap(k)$$, which implies that a layer $$k$$ is always at the same pressure. While this is fine at high enough altitudes, near the surface and in the presence of topography (or in fact even without topography, since weather systems impact the local surface pressure) it is problematic to use a fixed pressure grid, as some grid points would lie below the surface and thus be undefined.&lt;br /&gt;
&lt;br /&gt;
=== hybrid sigma-pressure coordinates ===&lt;br /&gt;
The two approaches above can be combined by an appropriate choice of the values of $$ap(k)$$ and $$bp(k)$$, i.e. by enforcing that:&lt;br /&gt;
* near the surface (small values of $$k$$), $$ap(k) \simeq 0$$ (i.e. small compared to $$bp(k) \cdot Ps(t)$$), so that the vertical coordinate there is close to a purely &amp;quot;sigma&amp;quot; coordinate&lt;br /&gt;
* high enough above the topography (large values of $$k$$), $$bp(k) \simeq 0$$ and $$bp(k) \cdot Ps(t) \ll ap(k)$$, so that the vertical coordinate there is close to a purely &amp;quot;pressure&amp;quot; coordinate&lt;br /&gt;
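&lt;br /&gt;
To make this concrete, here is a minimal Python sketch (with made-up $$ap$$/$$bp$$ values for 4 levels; illustrative numbers only, not actual PCM coefficients) evaluating the level pressures for two different surface pressures:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;python&amp;quot;&amp;gt;&lt;br /&gt;
import numpy as np&lt;br /&gt;
&lt;br /&gt;
# Illustrative hybrid coefficients for 4 levels (not actual PCM values):&lt;br /&gt;
# near the surface bp is close to 1 and ap to 0 (sigma-like),&lt;br /&gt;
# higher up bp vanishes and ap dominates (pressure-like).&lt;br /&gt;
ap = np.array([0.0, 50.0, 200.0, 400.0])  # Pa&lt;br /&gt;
bp = np.array([0.98, 0.70, 0.30, 0.0])    # dimensionless&lt;br /&gt;
&lt;br /&gt;
def level_pressure(ps):&lt;br /&gt;
    # pressure (Pa) of each model level for surface pressure ps (Pa)&lt;br /&gt;
    return ap + bp * ps&lt;br /&gt;
&lt;br /&gt;
# e.g. over a plain and over high topography:&lt;br /&gt;
for ps in (610.0, 400.0):&lt;br /&gt;
    print(ps, level_pressure(ps))&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
The lowest level closely follows the surface pressure, while the topmost level stays at 400 Pa whatever $$Ps$$: this is the sigma-to-pressure transition at work.&lt;br /&gt;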
&lt;br /&gt;
== In practice ==&lt;br /&gt;
Information about the vertical coordinate is only partially stored in the PCM (lon-lat) dynamics start files, which contain the number of atmospheric layers (llm) and the ap() and bp() coefficients (along with the $$preff$$ and $$pa$$ parameters discussed below, which are in the &amp;quot;controle&amp;quot; array). Note that the DYNAMICO dynamics start files do not contain this information.&lt;br /&gt;
&lt;br /&gt;
In practice the vertical layer discretisation is generated at run time, using information from the input [[The z2sig.def Input File|z2sig.def file]] and a couple of related parameters: $$preff$$, the reference surface pressure (in Pa), and $$pa$$, a parameter homogeneous to a pressure (also expressed in Pa) which roughly corresponds to a &amp;quot;reference transition pressure&amp;quot; around which hybrid coordinates transition to essentially pure pressure coordinates. Based on these parameters, and on the atmospheric scale height and related target pseudo-altitudes of model levels specified in the [[The z2sig.def Input File|z2sig.def]] file, the model iteratively computes the sought model levels (this requires solving a non-linear problem with no analytic solution).&lt;br /&gt;
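&lt;br /&gt;
As a zeroth-order illustration of the link between target pseudo-altitudes and sigma levels (the actual PCM computation is iterative and more involved, see the routines linked below), one can use $$\sigma = \exp(-z/H)$$, with $$H$$ the scale height. In Python, with illustrative values only:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;python&amp;quot;&amp;gt;&lt;br /&gt;
import numpy as np&lt;br /&gt;
&lt;br /&gt;
# Toy conversion of target pseudo-altitudes (km), as one would list them&lt;br /&gt;
# in a z2sig.def-like file, into sigma levels; H is the scale height (km).&lt;br /&gt;
H = 10.0  # illustrative value&lt;br /&gt;
z_targets = np.array([0.01, 0.1, 1.0, 5.0, 10.0, 30.0, 60.0])&lt;br /&gt;
sigma = np.exp(-z_targets / H)&lt;br /&gt;
print(sigma)  # decreases from ~1 near the surface towards 0 aloft&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;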
&lt;br /&gt;
An additional input parameter that impacts the generation of model levels is the $$hybrid$$ (logical) parameter set in [[The run.def Input File|run.def]], e.g.:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# use hybrid vertical coordinate (else will use sigma levels)&lt;br /&gt;
 hybrid=.true.&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
If $$hybrid$$ is set to $$.false.$$, the generated model levels will be purely sigma levels, i.e. the $$ap()$$ coefficients will all be zero.&lt;br /&gt;
&lt;br /&gt;
You can check out the code where this is done, in the ''disvert_noterre'' routine: https://trac.lmd.jussieu.fr/Planeto/browser/trunk/LMDZ.COMMON/libf/dyn3d_common/disvert_noterre.F and/or in its &amp;quot;mirror&amp;quot; version ''disvert_icosa_lmdz'' in the DYNAMICO-LMDZ interface: https://trac.lmd.jussieu.fr/Planeto/browser/trunk/ICOSA_LMDZ/src/disvert_icosa_lmdz.f90&lt;br /&gt;
&lt;br /&gt;
[[Category:Generic-Model]]&lt;br /&gt;
[[Category:Mars-Model]]&lt;br /&gt;
[[Category:Venus-Model]]&lt;br /&gt;
[[Category:Titan-Model]]&lt;br /&gt;
[[Category:Pluto-Model]]&lt;/div&gt;</summary>
		<author><name>Emillour</name></author>	</entry>

	<entry>
		<id>http://lmdz-forge.lmd.jussieu.fr/mediawiki/Planets/index.php?title=Dissipation&amp;diff=2807</id>
		<title>Dissipation</title>
		<link rel="alternate" type="text/html" href="http://lmdz-forge.lmd.jussieu.fr/mediawiki/Planets/index.php?title=Dissipation&amp;diff=2807"/>
				<updated>2025-08-21T14:47:57Z</updated>
		
		<summary type="html">&lt;p&gt;Emillour: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Note that there are some slides from the LMDZ Earth Climate Model tutorial sessions:&lt;br /&gt;
https://lmdz.lmd.jussieu.fr/utilisateurs/formation/winter_2024 &lt;br /&gt;
in the &amp;quot;Dynamics: grid/temporal discretization/stability/dissipation&amp;quot; course, which can be helpful to learn about dissipation in the PCM:&lt;br /&gt;
why it is there, how it is handled, and what the underlying parameters are.&lt;br /&gt;
&lt;br /&gt;
== Description ==&lt;br /&gt;
In the LMDZ grid point model, nonlinear interactions between explicitly resolved scales&lt;br /&gt;
and subgrid-scale processes are parameterized by applying a scale-selective horizontal dissipation operator based on an $$n$$ times iterated Laplacian $$\Delta^n$$. This can, for&lt;br /&gt;
instance, be written: &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
\begin{align}&lt;br /&gt;
  \label{def:Wns}&lt;br /&gt;
\frac{\partial q}{\partial t} = \frac{(-1)^n}{\tau_{diss}}(\delta x)^{2n}\Delta^nq&lt;br /&gt;
\end{align}&lt;br /&gt;
&lt;br /&gt;
where $$q$$ is a field component on which dissipation is applied, $$\delta x$$ is the smallest horizontal distance represented in the model and $$\tau_{diss}$$ is the dissipation timescale for a structure of scale $$\delta x$$. These operators are necessary to ensure the numerical stability of the grid point model. In practice, the operator is applied separately to three components: &lt;br /&gt;
* the divergence of the flow, &lt;br /&gt;
* the vorticity of the flow,&lt;br /&gt;
* potential temperature.&lt;br /&gt;
&lt;br /&gt;
We classically use n = 1 for the divergence of the flow (&amp;lt;code&amp;gt;nitergdiv=1&amp;lt;/code&amp;gt; in [[The_run.def_Input_File|run.def]]) and n = 2 for flow vorticity and potential temperature (&amp;lt;code&amp;gt;nitergrot=2&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;niterh=2&amp;lt;/code&amp;gt; in [[The_run.def_Input_File|run.def]]).&lt;br /&gt;
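&lt;br /&gt;
As an illustration of how such an iterated Laplacian selectively damps the smallest resolved scales, here is a toy 1D Python sketch (a periodic grid with an explicit Euler loop; this is purely illustrative, not the PCM operator, and the sign below is chosen so that the operator damps, as sign conventions for $$\Delta^n$$ vary):&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;python&amp;quot;&amp;gt;&lt;br /&gt;
import numpy as np&lt;br /&gt;
&lt;br /&gt;
nx, dx = 64, 1.0&lt;br /&gt;
n = 2        # number of Laplacian iterations (as for vorticity/temperature)&lt;br /&gt;
tau = 200.0  # dissipation timescale (s) of the smallest scale (illustrative)&lt;br /&gt;
dt = 1.0     # time step (s), small enough for explicit Euler stability&lt;br /&gt;
&lt;br /&gt;
def laplacian(q):&lt;br /&gt;
    # second-order centered Laplacian on a periodic 1D grid&lt;br /&gt;
    return (np.roll(q, -1) - 2.0 * q + np.roll(q, 1)) / dx**2&lt;br /&gt;
&lt;br /&gt;
x = np.arange(nx) * dx&lt;br /&gt;
large = np.sin(2.0 * np.pi * x / (nx * dx))  # well-resolved wave&lt;br /&gt;
noise = 0.5 * np.cos(np.pi * x / dx)         # 2*dx grid-scale wiggle&lt;br /&gt;
q = large + noise&lt;br /&gt;
&lt;br /&gt;
for _ in range(400):&lt;br /&gt;
    dq = q.copy()&lt;br /&gt;
    for _ in range(n):&lt;br /&gt;
        dq = laplacian(dq)&lt;br /&gt;
    # damping form of dq/dt = dx**(2n)/tau * Delta^n q&lt;br /&gt;
    q = q + dt * ((-1.0) ** (n + 1)) * dx ** (2 * n) / tau * dq&lt;br /&gt;
&lt;br /&gt;
# the grid-scale wiggle is wiped out, the large wave is barely touched:&lt;br /&gt;
print(np.max(np.abs(q - large)))&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;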
&lt;br /&gt;
== How to change it in the model ==&lt;br /&gt;
&lt;br /&gt;
In practice, the values of $$n$$ and $$\tau_{diss}$$ are prescribed in the [[The_run.def_Input_File|run.def]] with the keys: &lt;br /&gt;
*nitergdiv&lt;br /&gt;
*nitergrot&lt;br /&gt;
*niterh &lt;br /&gt;
for the values of $$n$$ for each field, and the associated timescales $$\tau$$ (in s): &lt;br /&gt;
*tetagdiv&lt;br /&gt;
*tetagrot&lt;br /&gt;
*tetatemp&lt;br /&gt;
&lt;br /&gt;
In [[The_run.def_Input_File|run.def]], there is also a key ''idissip'' (deprecated) or ''dissip_period'', which is the period (in dynamical time steps) at which the dissipation is applied (and thus impacts the effective time step of the dissipation).&lt;br /&gt;
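&lt;br /&gt;
For example, the corresponding entries in [[The_run.def_Input_File|run.def]] could look like this (illustrative values only, to be tuned to the planet and grid resolution; dissip_period=5 matches the sample digest shown further below):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
nitergdiv=1&lt;br /&gt;
nitergrot=2&lt;br /&gt;
niterh=2&lt;br /&gt;
tetagdiv=3000.&lt;br /&gt;
tetagrot=3000.&lt;br /&gt;
tetatemp=3000.&lt;br /&gt;
dissip_period=5&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;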
&lt;br /&gt;
In addition there are multiplicative factors on the dissipation that are applied going up through the layers of the atmosphere (usually more dissipation is required in the upper atmosphere than in the lower atmosphere, since perturbations traveling upwards grow in amplitude). These coefficients are:&lt;br /&gt;
* dissip_fac_mid : dissipation multiplicative factor in the middle atmosphere&lt;br /&gt;
* dissip_fac_up : dissipation multiplicative factor in the upper atmosphere&lt;br /&gt;
&lt;br /&gt;
The pressure/altitude at which the middle and upper atmosphere referred to above begin depends on another input flag, vert_prof_dissip:&lt;br /&gt;
* if &amp;lt;code&amp;gt;vert_prof_dissip=0&amp;lt;/code&amp;gt; (default setup for most planets) then the bottom of the transition zone between middle and upper atmosphere is set at the pressure (in Pa) given by the input parameter dissip_pupstart (set in [[The_run.def_Input_File|run.def]]) and the transition extends over dissip_deltaz km (also set in [[The_run.def_Input_File|run.def]]), assuming a transition zone scale height of dissip_hdelta km; see the example below&lt;br /&gt;
* if &amp;lt;code&amp;gt;vert_prof_dissip=1&amp;lt;/code&amp;gt; (default setup if &amp;lt;code&amp;gt;planet_type==&amp;quot;mars&amp;quot;&amp;lt;/code&amp;gt;!) then the transition between middle and upper atmosphere starts at a pseudo-altitude of 70 km, with a transition region of 30 km (see inidissip.F for the details) &lt;br /&gt;
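&lt;br /&gt;
For instance, with &amp;lt;code&amp;gt;vert_prof_dissip=0&amp;lt;/code&amp;gt; the relevant run.def entries could look like the following (the first values echo the sample digest below; the dissip_hdelta value is merely illustrative):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
vert_prof_dissip=0&lt;br /&gt;
dissip_fac_mid=2.&lt;br /&gt;
dissip_fac_up=20.&lt;br /&gt;
dissip_pupstart=1000.&lt;br /&gt;
dissip_deltaz=10.&lt;br /&gt;
dissip_hdelta=10.&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;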
&lt;br /&gt;
When the PCM runs, it yields a digest of the dissipation multiplicative coefficients and timesteps (look for the keyword &amp;quot;inidissip&amp;quot; in the output!) of the likes of:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
Dissipation : &lt;br /&gt;
 Multiplication de la dissipation en altitude :&lt;br /&gt;
   dissip_fac_mid =   2.0000000000000000     &lt;br /&gt;
   dissip_fac_up =   20.000000000000000     &lt;br /&gt;
 Transition mid /up:  Pupstart,delta =   1000.0000000000000      Pa   10.000000000000000      (km)&lt;br /&gt;
 inidissip: Time constants for lateral dissipation&lt;br /&gt;
 inidissip: dissip_period=           5  dtdiss=   870.83333333333326       dtvr=   174.16666666666666 &lt;br /&gt;
    &lt;br /&gt;
 pseudoZ(km)  zvert    dt(tetagdiv)   dt(tetagrot)   dt(divgrad)&lt;br /&gt;
   0.0  1.0000004E+00  8.7083367E-02  8.7083367E-02  8.7083367E-02&lt;br /&gt;
   0.0  1.0000096E+00  8.7084169E-02  8.7084169E-02  8.7084169E-02&lt;br /&gt;
   0.1  1.0000957E+00  8.7091668E-02  8.7091668E-02  8.7091668E-02&lt;br /&gt;
   0.2  1.0005771E+00  8.7133585E-02  8.7133585E-02  8.7133585E-02&lt;br /&gt;
   0.5  1.0024163E+00  8.7293751E-02  8.7293751E-02  8.7293751E-02&lt;br /&gt;
   0.9  1.0079124E+00  8.7772369E-02  8.7772369E-02  8.7772369E-02&lt;br /&gt;
   1.4  1.0217973E+00  8.8981516E-02  8.8981516E-02  8.8981516E-02&lt;br /&gt;
   2.1  1.0526661E+00  9.1669671E-02  9.1669671E-02  9.1669671E-02&lt;br /&gt;
   3.1  1.1136968E+00  9.6984432E-02  9.6984432E-02  9.6984432E-02&lt;br /&gt;
   4.3  1.2193074E+00  1.0618135E-01  1.0618135E-01  1.0618135E-01&lt;br /&gt;
   5.7  1.3732887E+00  1.1959055E-01  1.1959055E-01  1.1959055E-01&lt;br /&gt;
   7.5  1.5542353E+00  1.3534799E-01  1.3534799E-01  1.3534799E-01&lt;br /&gt;
   9.6  1.7216407E+00  1.4992621E-01  1.4992621E-01  1.4992621E-01&lt;br /&gt;
  12.1  1.8454911E+00  1.6071151E-01  1.6071151E-01  1.6071151E-01&lt;br /&gt;
  14.9  1.9221312E+00  1.6738559E-01  1.6738559E-01  1.6738559E-01&lt;br /&gt;
  18.1  1.9631130E+00  1.7095442E-01  1.7095442E-01  1.7095442E-01&lt;br /&gt;
  21.4  1.9826551E+00  1.7265621E-01  1.7265621E-01  1.7265621E-01&lt;br /&gt;
  24.8  1.9916537E+00  1.7343984E-01  1.7343984E-01  1.7343984E-01&lt;br /&gt;
  28.1  1.9959119E+00  1.7381066E-01  1.7381066E-01  1.7381066E-01&lt;br /&gt;
  31.4  1.9979711E+00  1.7398998E-01  1.7398998E-01  1.7398998E-01&lt;br /&gt;
  34.8  1.9989834E+00  1.7407814E-01  1.7407814E-01  1.7407814E-01&lt;br /&gt;
  38.1  1.9994871E+00  1.7412200E-01  1.7412200E-01  1.7412200E-01&lt;br /&gt;
  41.4  1.9997400E+00  1.7414402E-01  1.7414402E-01  1.7414402E-01&lt;br /&gt;
  44.8  1.9998677E+00  1.7415514E-01  1.7415514E-01  1.7415514E-01&lt;br /&gt;
  48.1  1.9999325E+00  1.7416079E-01  1.7416079E-01  1.7416079E-01&lt;br /&gt;
  51.4  1.9999655E+00  1.7416366E-01  1.7416366E-01  1.7416366E-01&lt;br /&gt;
  54.8  1.9999823E+00  1.7416513E-01  1.7416513E-01  1.7416513E-01&lt;br /&gt;
  58.1  1.9999910E+00  1.7416588E-01  1.7416588E-01  1.7416588E-01&lt;br /&gt;
  61.4  1.9999954E+00  1.7416626E-01  1.7416626E-01  1.7416626E-01&lt;br /&gt;
  64.8  1.9999976E+00  1.7416646E-01  1.7416646E-01  1.7416646E-01&lt;br /&gt;
  68.1  1.9999988E+00  1.7416656E-01  1.7416656E-01  1.7416656E-01&lt;br /&gt;
  71.4  1.9999997E+00  1.7416664E-01  1.7416664E-01  1.7416664E-01&lt;br /&gt;
  74.8  2.0000019E+00  1.7416683E-01  1.7416683E-01  1.7416683E-01&lt;br /&gt;
  78.1  2.0000163E+00  1.7416809E-01  1.7416809E-01  1.7416809E-01&lt;br /&gt;
  81.4  2.0001218E+00  1.7417728E-01  1.7417728E-01  1.7417728E-01&lt;br /&gt;
  84.8  2.0009008E+00  1.7424511E-01  1.7424511E-01  1.7424511E-01&lt;br /&gt;
  88.1  2.0066545E+00  1.7474616E-01  1.7474616E-01  1.7474616E-01&lt;br /&gt;
  91.4  2.0490550E+00  1.7843854E-01  1.7843854E-01  1.7843854E-01&lt;br /&gt;
  94.8  2.3562671E+00  2.0519160E-01  2.0519160E-01  2.0519160E-01&lt;br /&gt;
  98.1  4.3369577E+00  3.7767674E-01  3.7767674E-01  3.7767674E-01&lt;br /&gt;
 101.4  1.1438614E+01  9.9611261E-01  9.9611261E-01  9.9611261E-01&lt;br /&gt;
 104.8  1.8031963E+01  1.5702835E+00  1.5702835E+00  1.5702835E+00&lt;br /&gt;
 108.1  1.9705847E+01  1.7160508E+00  1.7160508E+00  1.7160508E+00&lt;br /&gt;
 111.4  1.9959620E+01  1.7381503E+00  1.7381503E+00  1.7381503E+00&lt;br /&gt;
 114.8  1.9994525E+01  1.7411898E+00  1.7411898E+00  1.7411898E+00&lt;br /&gt;
 118.5  1.9999423E+01  1.7416164E+00  1.7416164E+00  1.7416164E+00&lt;br /&gt;
 123.5  1.9999971E+01  1.7416641E+00  1.7416641E+00  1.7416641E+00&lt;br /&gt;
 129.8  1.9999999E+01  1.7416666E+00  1.7416666E+00  1.7416666E+00&lt;br /&gt;
 137.1  2.0000000E+01  1.7416667E+00  1.7416667E+00  1.7416667E+00&lt;br /&gt;
 144.5  2.0000000E+01  1.7416667E+00  1.7416667E+00  1.7416667E+00&lt;br /&gt;
&amp;lt;/pre&amp;gt; &lt;br /&gt;
&lt;br /&gt;
== Good to know rules and rules of thumb == &lt;br /&gt;
* If your simulation shows numerical instabilities, a good idea is to increase the dissipation. This means decreasing the $$\tau$$ parameters and/or increasing dissip_fac_up (and, to a lesser extent, dissip_fac_mid).&lt;br /&gt;
* Optimal values for the dissipation timescales depend on the resolution of the horizontal grid: the higher the resolution, the more dissipation is needed.&lt;br /&gt;
* Because dissipation is applied using an explicit Euler time-marching scheme, it is liable to be unstable if the time step is too large. This is tested at run time by the model, which will generate an error message of the likes of:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
STOP : lateral dissipation is too intense and will&lt;br /&gt;
    generate instabilities in the model !&lt;br /&gt;
 You must increase tetah (or increase dissip_period&lt;br /&gt;
              or increase day_step)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[[Category:Generic-Model]]&lt;br /&gt;
[[Category:Mars-Model]]&lt;br /&gt;
[[Category:Venus-Model]]&lt;/div&gt;</summary>
		<author><name>Emillour</name></author>	</entry>

	<entry>
		<id>http://lmdz-forge.lmd.jussieu.fr/mediawiki/Planets/index.php?title=Tool_Box_Mars_PCM&amp;diff=2804</id>
		<title>Tool Box Mars PCM</title>
		<link rel="alternate" type="text/html" href="http://lmdz-forge.lmd.jussieu.fr/mediawiki/Planets/index.php?title=Tool_Box_Mars_PCM&amp;diff=2804"/>
				<updated>2025-08-18T06:19:41Z</updated>
		
		<summary type="html">&lt;p&gt;Emillour: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Post-processing tools provided with the Mars PCM ==&lt;br /&gt;
First and foremost there are a number of postprocessing utilities (self-standing tools) which can be found in the &lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
LMDZ.MARS/util&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
directory; you certainly want to first read the README file there.&lt;br /&gt;
&lt;br /&gt;
The current contents of this directory are:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
aeroptical.def	   expandstartfi.F90	localtime.def	 solzenangle.F90&lt;br /&gt;
aeroptical.F90	   extract.F90		localtime.F90	 startarchive2icosa&lt;br /&gt;
aeropt_mod.F90	   extract.points.def	lslin.def	 streamfunction.F90&lt;br /&gt;
analyse_netcdf.py  extract.profile.def	lslin.F90	 xvik&lt;br /&gt;
compile		   gencol.def		README		 zrecast.auto.def&lt;br /&gt;
concatnc.def	   gencol.F90		simu_MCS.def	 zrecast.F90&lt;br /&gt;
concatnc.F90	   hrecast.def		simu_MCS.F90	 zrecast.manual.def&lt;br /&gt;
display_netcdf.py  hrecast.F90		solzenangle.def&lt;br /&gt;
&amp;lt;/pre&amp;gt;  &lt;br /&gt;
the ''compile'' script is an example of how to compile any of the utilities; you will have to adapt it to your needs (this mostly concerns setting the right path to the NetCDF library). All the post-processing tools are meant to be run interactively (asking the user for some instructions), hence the ''*.def'' files: most of the time it is more convenient (once one knows the tool and the questions it will ask) to redirect such a list of answers to the standard input of the tool, e.g.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
zrecast.e &amp;lt; zrecast.manual.def&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Main programs (other than GCM) but included in the Mars PCM package ==&lt;br /&gt;
There are a few other main programs that are included with the GCM. &lt;br /&gt;
&lt;br /&gt;
Advanced stuff: In practice these main programs are located under ''LMDZ.MARS/dynphy_lonlat/phymars/'' as they are at the interface between the lon-lat dynamics and the Mars physics package (i.e. they need to use both and thus can only be used in that context).&lt;br /&gt;
&lt;br /&gt;
=== start2archive ===&lt;br /&gt;
A main program to collect multiple &amp;lt;code&amp;gt;start.nc&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;startfi.nc&amp;lt;/code&amp;gt; files from a series of simulations and store them in a &amp;lt;code&amp;gt;start_archive.nc&amp;lt;/code&amp;gt; file. For this one simply needs to run the &amp;lt;code&amp;gt;start2archive&amp;lt;/code&amp;gt; program in the directory. It will automatically fetch the &amp;lt;code&amp;gt;start.nc&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;startfi.nc&amp;lt;/code&amp;gt; files and generate &amp;lt;code&amp;gt;start_archive.nc&amp;lt;/code&amp;gt;. If a &amp;lt;code&amp;gt;start_archive.nc&amp;lt;/code&amp;gt; file is already present, then the current &amp;lt;code&amp;gt;start.nc&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;startfi.nc&amp;lt;/code&amp;gt; files are added to it (a &amp;lt;code&amp;gt;start_archive.nc&amp;lt;/code&amp;gt; file can contain multiple initial states, as long as they are on the same grid and correspond to different dates).&lt;br /&gt;
&lt;br /&gt;
=== newstart ===&lt;br /&gt;
A main program to:&lt;br /&gt;
* extract (and interpolate) &amp;lt;code&amp;gt;restart.nc&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;restartfi.nc&amp;lt;/code&amp;gt; files from a &amp;lt;code&amp;gt;start_archive.nc&amp;lt;/code&amp;gt; file or from a pair of &amp;lt;code&amp;gt;start.nc&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;startfi.nc&amp;lt;/code&amp;gt; files. The subtle difference between the two setups is that grid interpolation (horizontal and/or vertical) is only possible when using a &amp;lt;code&amp;gt;start_archive.nc&amp;lt;/code&amp;gt; input file&lt;br /&gt;
* modify values and fields contained in the initial condition file&lt;br /&gt;
* Compiling &amp;lt;code&amp;gt;newstart&amp;lt;/code&amp;gt; is done using the [[The makelmdz fcm GCM Compilation Script|makelmdz_fcm]] utility. The program is then meant to be run interactively with the user providing options and choices when prompted.&lt;br /&gt;
* Once the program has run and finished without error, it will generate &amp;lt;code&amp;gt;restart.nc&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;restartfi.nc&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== xvik ===&lt;br /&gt;
A post-processing utility to analyse the modeled CO2 cycle&lt;br /&gt;
&lt;br /&gt;
=== testphys1d ===&lt;br /&gt;
More than a pre- or post-processing tool, this is a 1D (single column) version of the PCM. More about it is on the dedicated page: [[Mars 1D testphys1d program]].&lt;br /&gt;
&lt;br /&gt;
Note that there is also a separate &amp;quot;self-standing&amp;quot; 1D model, the &amp;quot;1D thermal model&amp;quot;, which can be downloaded from this page: [http://www-planets.lmd.jussieu.fr/ http://www-planets.lmd.jussieu.fr/]. This is essentially a frozen version of ''testphys1d'' with some additional tweaks to make it more user-friendly.&lt;br /&gt;
&lt;br /&gt;
== Visualizing the outputs ==&lt;br /&gt;
The GCM and most of the post-processing tools mentioned above produce NetCDF files, which one most likely will need to visualize.&lt;br /&gt;
There are a number of tools available (free or not). Here is a selection:&lt;br /&gt;
* Ncview is a very basic tool, mostly useful for a simple quick look.&lt;br /&gt;
* Panoply is a user-friendly tool for viewing NetCDF data, available here: https://www.giss.nasa.gov/tools/panoply/&lt;br /&gt;
* Ferret is another nice tool for viewing and processing NetCDF data, see their official page: https://ferret.pmel.noaa.gov/Ferret/ and also [[Some Ferret tips and pointers|this page]]&lt;br /&gt;
* Many also like to write up their own python scripts&lt;br /&gt;
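&lt;br /&gt;
For those writing their own scripts, here is a minimal Python sketch (assuming the netCDF4 and matplotlib packages are installed; the file name ''diagfi.nc'' and the variable names are hypothetical examples to be adapted to your own outputs):&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;python&amp;quot;&amp;gt;&lt;br /&gt;
import matplotlib.pyplot as plt&lt;br /&gt;
from netCDF4 import Dataset&lt;br /&gt;
&lt;br /&gt;
# open a (hypothetical) PCM output file and read a 4D field (time, alt, lat, lon)&lt;br /&gt;
ds = Dataset('diagfi.nc')&lt;br /&gt;
lon = ds.variables['longitude'][:]&lt;br /&gt;
lat = ds.variables['latitude'][:]&lt;br /&gt;
temp = ds.variables['temp'][0, 0, :, :]  # first time step, first layer&lt;br /&gt;
&lt;br /&gt;
# quick lon-lat map of the field&lt;br /&gt;
plt.pcolormesh(lon, lat, temp, shading='auto')&lt;br /&gt;
plt.colorbar(label='temperature (K)')&lt;br /&gt;
plt.xlabel('longitude (deg)')&lt;br /&gt;
plt.ylabel('latitude (deg)')&lt;br /&gt;
plt.savefig('temp_map.png')&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;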
&lt;br /&gt;
== Manipulating NetCDF files with external tools ==&lt;br /&gt;
In addition to the programs that come with the Mars PCM distribution, there are many public tools available that provide more general capabilities. Here are some selections that you may&lt;br /&gt;
find to be of value:&lt;br /&gt;
* NetCDF Operators (NCO): This toolbox is a combination of compiled programs and scripts for manipulating NetCDF files, including regridding and interpolation. This public-domain toolset was developed at the University of California, Irvine and appears to be actively maintained. Find it (along with documentation) at the official site: https://nco.sourceforge.net. A reasonable tutorial, though it is html based and a few years old (like playing an Atari console game), may be downloaded here: https://www.hydroshare.org/resource/4b5a903a02e64b078d5a581a010f45a4/&lt;br /&gt;
&lt;br /&gt;
[[Category:Mars-Model]]&lt;br /&gt;
[[Category:Mars-1D]]&lt;/div&gt;</summary>
		<author><name>Emillour</name></author>	</entry>

	<entry>
		<id>http://lmdz-forge.lmd.jussieu.fr/mediawiki/Planets/index.php?title=Using_Adastra&amp;diff=2772</id>
		<title>Using Adastra</title>
		<link rel="alternate" type="text/html" href="http://lmdz-forge.lmd.jussieu.fr/mediawiki/Planets/index.php?title=Using_Adastra&amp;diff=2772"/>
				<updated>2025-06-27T06:52:01Z</updated>
		
		<summary type="html">&lt;p&gt;Emillour: /* Example of a mixed MPI/OpenMP job to launch a simulation */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;This page provides a summary of examples and tools designed to help you get used with the Adastra environment.&lt;br /&gt;
&lt;br /&gt;
== Getting access to the cluster ==&lt;br /&gt;
For people on the &amp;quot;Atmosphères Planétaires&amp;quot; GENCI project who need to open an account on Adastra, here is the procedure:&lt;br /&gt;
&lt;br /&gt;
# Go to https://www.edari.fr/utilisateur and log in via Janus or create an account if you don't have a Janus login. If this doesn't work, you can create a new eDARI account. (Make sure your profile is fully up to date including nationality)&lt;br /&gt;
# Beware! If you belong to two labs (LMD and LATMOS, for example), you must register with the email address corresponding to your Janus account.&lt;br /&gt;
# Click on &amp;quot;se rattacher à un dossier ayant obtenu des resources&amp;quot; or &amp;quot;Attach yourself to an application file that has obtained resources&amp;quot;&lt;br /&gt;
# &amp;quot;Atmosphères Planétaires&amp;quot; project number to provide:  A0180110391&lt;br /&gt;
# Ehouarn then receives an email to allow you to join the project. Once he has validated it, you receive a confirmation mail.&lt;br /&gt;
# Once approved, you have to request an account: click on &amp;quot;CINES: créer une demande d'ouverture de compte&amp;quot; (&amp;quot;CINES: create an account opening request&amp;quot;)&lt;br /&gt;
# fill in the forms: name, contract end date, CINES, your lab information (LMD is the default)&lt;br /&gt;
# Access IP address  134.157.47.46 , FQDN (Fully Qualified Domain Name): ssh-out.lmd.jussieu.fr &lt;br /&gt;
# Add a second address : 134.157.176.129 , FQDN: spirit2.ipsl.fr&lt;br /&gt;
# click on option to have access to CCFR (only important if you have access to other GENCI machines)&lt;br /&gt;
# The security officer for LMD is Julien Lenseigne (his information is all pre-filled, except the phone number: +33169335172)&lt;br /&gt;
# YOU MUST THEN VALIDATE THE REQUEST: click on &amp;quot;Valider la saisie des informations&amp;quot; (&amp;quot;validate the entered information&amp;quot;)&lt;br /&gt;
# You then receive an automatic mail, but it only tells you to go to the next step: you must now download the pre-filled form from e-dari: find &amp;quot;télécharger la demande&amp;quot; (&amp;quot;download the request&amp;quot;) and download the pdf. Sign it, and upload it on e-dari under &amp;quot;déposer la demande de création de compte&amp;quot; (&amp;quot;submit the account creation request&amp;quot;).&lt;br /&gt;
# Wait for your application to be preprocessed by the system...&lt;br /&gt;
&lt;br /&gt;
== A couple of pointers ==&lt;br /&gt;
&lt;br /&gt;
* Connecting to Adastra: For those who had an account on Occigen, we have retained group and login credentials from then; to connect to Adastra you first need to go through the LMD gateway (hakim) or the IPSL (Spirit/SpiritX) gateway and then&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
ssh your_cines_login@adastra.cines.fr&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
And then you will probably want to switch project using the myproject command, e.g. to switch to &amp;quot;lmd1167&amp;quot; (the old &amp;quot;Atmosphères Planétaires&amp;quot; GENCI project)&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
myproject -a lmd1167&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
and to switch to &amp;quot;cin0391&amp;quot; (the 2023-2024 &amp;quot;Atmosphères Planétaires&amp;quot; GENCI project)&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
myproject -a cin0391&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
WARNING: when you switch projects, you also switch HOME directory etc.&lt;br /&gt;
&lt;br /&gt;
To get all the info about dedicated environment variables (e.g. paths to SCRATCH, STORE, etc.) you can use&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
myproject -c&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* To get all the information about project accounting (number of hours available and used by each member of the project) you need to connect to https://reser.cines.fr/ using your Adastra login and password&lt;br /&gt;
&lt;br /&gt;
* Changing the password of your CINES account&lt;br /&gt;
When your password is close to expiring, CINES asks you to change it on this website : https://rosetta.cines.fr&lt;br /&gt;
&lt;br /&gt;
Please note that you can access this website only if you are on a machine that you declared as a gateway for Adastra. At LMD, we have generally declared hakim.lmd.jussieu.fr (aka ssh-out) and spirit2.ipsl.fr as gateway machines. Hakim doesn't have any browser installed, but you can launch &amp;lt;code&amp;gt;firefox&amp;lt;/code&amp;gt; on Spirit and connect to the rosetta website.&lt;br /&gt;
If that doesn't work, check out the page on [[How to launch your local browser through a gateway machine]] or contact mail svp@cines.fr&lt;br /&gt;
&lt;br /&gt;
* Link to the Adastra technical documentation: https://dci.dci-gitlab.cines.fr/webextranet/&lt;br /&gt;
&lt;br /&gt;
* Link to the webpage where you can find out (login and password are those of your Adastra account) how many hours we have left on the project and details about everyone's use of Adastra: https://reser.cines.fr&lt;br /&gt;
&lt;br /&gt;
== Disks and workspaces ==&lt;br /&gt;
* all the details are on the Adastra documentation: https://dci.dci-gitlab.cines.fr/webextranet/data_storage_and_transfers/index.html&lt;br /&gt;
* If you want to know the current quota (in HOMEDIR, WORKDIR and SCRATCHDIR) allocated to the project (yes quotas are for the whole group):&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
myproject -s cin0391&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
* In a nutshell: we have lots of space on the WORKDIR (250 TB), which is &amp;quot;permanent&amp;quot; (unlike the SCRATCHDIR, where files older than 30 days are purged), so use it! And when you want to archive things, make some large tar files and put them on the STOREDIR&lt;br /&gt;
=== Transferring data from Irene ===&lt;br /&gt;
You can use the ccfr &amp;quot;speedway&amp;quot; between National computing centers to copy data from Irene to Adastra (it is all explained here: https://dci.dci-gitlab.cines.fr/webextranet/data_storage_and_transfers/index.html#between-computing-site-ccfr ). To summarize:&lt;br /&gt;
# First check that you indeed asked for access to ccfr when you created your account: just run the &amp;quot;id&amp;quot; command on Adastra and check that you are a registered member of the &amp;quot;22011(cinesccfr)&amp;quot; group. If not, ask the CINES helpdesk svp@cines.fr &lt;br /&gt;
# Connect to Adastra the usual way, and once on Adastra run &amp;quot;ssh adastra-ccfr.cines.fr&amp;quot;, which should land you on &amp;quot;login1&amp;quot;, the node enabled to use the ccfr connection&lt;br /&gt;
# Once on login1 you can transfer data from Irene via scp or rsync using the appropriate gateway machine (on the Irene side), which is &amp;quot;irene-fr-ccfr-gw.ccc.cea&amp;quot;, e.g.:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
rsync -avz irenelogin@irene-fr-ccfr-gw.ccc.cea:irene_path_to_your_data adastra_path_to_your_data&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Environment and Account Setup ==&lt;br /&gt;
* To be able to use svn (subversion) to download or update code you will need to specify in your ''~/.subversion/servers'' file :&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[global]&lt;br /&gt;
http-proxy-host = proxy-l-adastra.cines.fr&lt;br /&gt;
http-proxy-port = 3128&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Submitting jobs ==&lt;br /&gt;
It's done using SLURM; you need to write up a job script and submit it using '''sbatch'''&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
sbatch myjob&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
You must specify in the header of the job which project resources you are using (&amp;quot;cin0391&amp;quot; in our case):&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
#SBATCH --account=cin0391&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Example of an MPI job to launch a simulation ===&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
#SBATCH --job-name=job_mpi&lt;br /&gt;
#SBATCH --account=cin0391&lt;br /&gt;
### GENOA nodes accommodate 2 processors of 96 cores each, i.e. 192 cores overall&lt;br /&gt;
#SBATCH --constraint=GENOA&lt;br /&gt;
### Number of Nodes to use&lt;br /&gt;
#SBATCH --nodes=1&lt;br /&gt;
### Number of MPI tasks per node&lt;br /&gt;
#SBATCH --ntasks-per-node=48 &lt;br /&gt;
### Number of OpenMP threads per MPI task&lt;br /&gt;
#SBATCH --cpus-per-task=1&lt;br /&gt;
#SBATCH --threads-per-core=1&lt;br /&gt;
###SBATCH --exclusive&lt;br /&gt;
#SBATCH --output=job_mpi_%A.out&lt;br /&gt;
#SBATCH --time=00:45:00 &lt;br /&gt;
&lt;br /&gt;
#source env modules:&lt;br /&gt;
source ../trunk/LMDZ.COMMON/arch.env &lt;br /&gt;
ulimit -s unlimited&lt;br /&gt;
&lt;br /&gt;
srun --cpu-bind=threads --label gcm_96x96x78_phyvenus_para.e &amp;gt; gcm.out 2&amp;gt;&amp;amp;1&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Example of a mixed MPI/OpenMP job to launch a simulation ===&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
#SBATCH --job-name=job_mpi_omp&lt;br /&gt;
#SBATCH --account=cin0391&lt;br /&gt;
### GENOA nodes accommodate 2 processors of 96 cores each, i.e. 192 cores overall&lt;br /&gt;
#SBATCH --constraint=GENOA&lt;br /&gt;
### Number of Nodes to use&lt;br /&gt;
#SBATCH --nodes=1&lt;br /&gt;
### Number of MPI tasks per node&lt;br /&gt;
#SBATCH --ntasks-per-node=24 &lt;br /&gt;
### Number of OpenMP threads per MPI task&lt;br /&gt;
#SBATCH --cpus-per-task=4&lt;br /&gt;
#SBATCH --threads-per-core=1&lt;br /&gt;
###SBATCH --exclusive&lt;br /&gt;
#SBATCH --output=job_mpi_omp_%A.out&lt;br /&gt;
#SBATCH --time=00:30:00 &lt;br /&gt;
&lt;br /&gt;
#source env modules:&lt;br /&gt;
source ../trunk/LMDZ.COMMON/arch.env &lt;br /&gt;
ulimit -s unlimited&lt;br /&gt;
&lt;br /&gt;
### OMP_NUM_THREADS value must match &amp;quot;#SBATCH --cpus-per-task&amp;quot;&lt;br /&gt;
export OMP_NUM_THREADS=${SLURM_CPUS_PER_TASK}&lt;br /&gt;
export OMP_STACKSIZE=400M&lt;br /&gt;
&lt;br /&gt;
srun --cpu-bind=threads --label gcm_64x48x54_phymars_para.e &amp;gt; gcm.out 2&amp;gt;&amp;amp;1&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
Note however that the ''srun'' instructions above will not yield very efficient results, as one actually needs to specify the cpu binding (i.e. how cores relate to one another) via a dedicated script:&lt;br /&gt;
&amp;lt;syntaxhighlight  lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
srun --ntasks-per-node=${SLURM_NTASKS_PER_NODE} --cpu-bind=none --mem-bind=none --label -- ./adastra_cpu_binding.sh ./gcm_64x48x54_phymars_para.e &amp;gt; gcm.out 2&amp;gt;&amp;amp;1&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
But core placement requires using a full node (192 cores, i.e. 24 MPI tasks times 8 OpenMP threads)&lt;br /&gt;
&lt;br /&gt;
== Using python ==&lt;br /&gt;
&lt;br /&gt;
If you want to use python on ADASTRA for quick analysis, you'll see that some basic packages are unavailable (e.g. matplotlib). To solve this issue, you may install a virtual python environment. Note that ADASTRA allows the self-maintenance of your environment on the /work and /scratch partitions: you should not put it in your /home!&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
python3 -m venv virtual_environment&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Then, you may want to activate the environment by doing:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
source path/virtual_environment/bin/activate&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
You will see that the environment is active in your terminal when (virtual_environment) appears at the beginning of your input line. Once there, you can install any desired package with &amp;quot;pip&amp;quot;. For example, here are the command lines I had to use to get matplotlib to work.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
python3 -m pip install --upgrade pip&lt;br /&gt;
python3 -m pip install --upgrade Pillow&lt;br /&gt;
&lt;br /&gt;
pip install matplotlib&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
You may find that some packages are required beforehand: in some cases, you will need to install them manually. When all packages are done installing, you may use python as you please, as long as the virtual environment is active in your terminal!&lt;br /&gt;
&lt;br /&gt;
== Using Ferret ==&lt;br /&gt;
Ferret is installed on Adastra, but not (yet) as a standard module to load... To be able to use Ferret you need to do the following:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
module load develop&lt;br /&gt;
module load GCC-CPU-2.1.0&lt;br /&gt;
module load ferret/7.6.0&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Using gdb4hpc ==&lt;br /&gt;
This is the default (only) debugger available... To use it you need to:&lt;br /&gt;
# Launch a request for an allocation on a compute node: &amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt; salloc --account=cin0391 --constraint=GENOA --job-name=&amp;quot;debug&amp;quot; --nodes=1 --time=1:00:00 --exclusive &amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
# Identify which node it is linked to and directly ssh (from login node) to it, e.g. if it is node &amp;quot;c1516&amp;quot; &amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt; ssh c1516 &amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
# source your usual environment and then load the gdb4hpc module &amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt; module load gdb4hpc/4.16.0.1 &amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
# Go to your work directory and launch gdb4hpc&lt;br /&gt;
# within gdb4hpc: &amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt; dbg all&amp;gt; launch $a{1} --launcher-args=&amp;quot;--mpi=cray_shasta -A cin0391 --constraint=GENOA -t 00:30:00 -N 1 --cpu-bind=verbose,cores --exclusive&amp;quot;  ./executable.exe &amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Once everything is running, the first thing you have to do is set a breakpoint at the beginning of the program, e.g.:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
break icosa_lmdz.f90:1&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
And then &amp;quot;continue&amp;quot; to that point&lt;br /&gt;
&lt;br /&gt;
== Using DDT ==&lt;br /&gt;
Much more user-friendly, and with a Graphical User Interface, you can now use DDT rather than gdb4hpc; just load the appropriate module:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
module load ddt&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
(after having loaded all the other modules: compiler, libraries, etc.)&lt;br /&gt;
&lt;br /&gt;
Then you can launch ddt&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
./ddt &amp;amp;&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
In a nutshell you then need to:&lt;br /&gt;
* select &amp;quot;Run and Debug a program&amp;quot;&lt;br /&gt;
* specify the Application (executable) and Working directory (where to run the executable)&lt;br /&gt;
* specify the use of MPI (and/or OpenMP) with given number of MPI processes and/or OpenMP threads&lt;br /&gt;
* specify the &amp;quot;srun arguments&amp;quot; (which are usually set in the job header when running a regular simulation), e.g.&lt;br /&gt;
&amp;lt;code&amp;gt;--nodes=1 --exclusive --constraint=GENOA --account=cin0391 --time=00:15:00 --threads-per-core=1 --label&amp;lt;/code&amp;gt;&lt;br /&gt;
* click on &amp;quot;Run&amp;quot;. It will launch a job (be patient your job might be on hold if the machine is full).&lt;br /&gt;
&lt;br /&gt;
...TODO: Add some example using DDT ...&lt;br /&gt;
&lt;br /&gt;
== Are you being disconnected when inactive? ==&lt;br /&gt;
If you are regularly being disconnected when somewhat inactive on the supercomputer, adding these few lines to a ''config'' file in the .ssh/ directory of your login machine (e.g. ssh-out/spirit) may help:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
Host *&lt;br /&gt;
...&lt;br /&gt;
KeepAlive yes&lt;br /&gt;
TCPKeepAlive yes&lt;br /&gt;
ServerAliveInterval 15&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[[Category:FAQ]]&lt;/div&gt;</summary>
		<author><name>Emillour</name></author>	</entry>

	<entry>
		<id>http://lmdz-forge.lmd.jussieu.fr/mediawiki/Planets/index.php?title=Using_Adastra&amp;diff=2771</id>
		<title>Using Adastra</title>
		<link rel="alternate" type="text/html" href="http://lmdz-forge.lmd.jussieu.fr/mediawiki/Planets/index.php?title=Using_Adastra&amp;diff=2771"/>
				<updated>2025-06-27T06:50:28Z</updated>
		
		<summary type="html">&lt;p&gt;Emillour: /* Example of a mixed MPI/OpenMP job to launch a simulation */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;This page provides a summary of examples and tools designed to help you get used with the Adastra environment.&lt;br /&gt;
&lt;br /&gt;
== Getting access to the cluster ==&lt;br /&gt;
For people on the &amp;quot;Atmosphères Planétaires&amp;quot; GENCI project who need to open an account on Adastra, here is the procedure:&lt;br /&gt;
&lt;br /&gt;
# Go to https://www.edari.fr/utilisateur and log in via Janus or create an account if you don't have a Janus login. If this doesn't work, you can create a new eDARI account. (Make sure your profile is fully up to date including nationality)&lt;br /&gt;
# Beware! If you belong to two labs (LMD and LATMOS, for example), you must register with the email address corresponding to your Janus account.&lt;br /&gt;
# Click on &amp;quot;se rattacher à un dossier ayant obtenu des resources&amp;quot; or &amp;quot;Attach yourself to an application file that has obtained resources&amp;quot;&lt;br /&gt;
# &amp;quot;Atmosphères Planétaires&amp;quot; project number to provide:  A0180110391&lt;br /&gt;
# Ehouarn then receives an email to allow you to join the project. Once he has validated it, you receive a confirmation mail.&lt;br /&gt;
# Once approved, you have to request an account: click on &amp;quot;CINES: créer une demande d'ouverture de compte&amp;quot; (&amp;quot;CINES: create an account opening request&amp;quot;)&lt;br /&gt;
# fill in the forms: name, contract end date, CINES, your lab information (LMD is the default)&lt;br /&gt;
# Access IP address  134.157.47.46 , FQDN (Fully Qualified Domain Name): ssh-out.lmd.jussieu.fr &lt;br /&gt;
# Add a second address : 134.157.176.129 , FQDN: spirit2.ipsl.fr&lt;br /&gt;
# click on option to have access to CCFR (only important if you have access to other GENCI machines)&lt;br /&gt;
# The security officer for LMD is Julien Lenseigne (his information is all pre-filled, except the phone number: +33169335172)&lt;br /&gt;
# YOU MUST THEN VALIDATE THE REQUEST: click on &amp;quot;Valider la saisie des informations&amp;quot; (&amp;quot;validate the entered information&amp;quot;)&lt;br /&gt;
# You then receive an automatic mail, but it only tells you to go to the next step: you must now download the pre-filled form from e-dari: find &amp;quot;télécharger la demande&amp;quot; (&amp;quot;download the request&amp;quot;) and download the pdf. Sign it, and upload it on e-dari under &amp;quot;déposer la demande de création de compte&amp;quot; (&amp;quot;submit the account creation request&amp;quot;).&lt;br /&gt;
# Wait for your application to be preprocessed by the system...&lt;br /&gt;
&lt;br /&gt;
== A couple of pointers ==&lt;br /&gt;
&lt;br /&gt;
* Connecting to Adastra: For those who had an account on Occigen, we have retained group and login credentials from then; to connect to Adastra you first need to go through the LMD gateway (hakim) or the IPSL (Spirit/SpiritX) gateway and then&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
ssh your_cines_login@adastra.cines.fr&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
And then you will probably want to switch project using the myproject command, e.g. to switch to &amp;quot;lmd1167&amp;quot; (the old &amp;quot;Atmosphères Planétaires&amp;quot; GENCI project)&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
myproject -a lmd1167&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
and to switch to &amp;quot;cin0391&amp;quot; (the 2023-2024 &amp;quot;Atmosphères Planétaires&amp;quot; GENCI project)&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
myproject -a cin0391&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
WARNING: when you switch projects, you also switch HOME directory etc.&lt;br /&gt;
&lt;br /&gt;
To get all the info about dedicated environment variables (e.g. paths to SCRATCH, STORE, etc.) you can use&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
myproject -c&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* To get all the information about project accounting (number of hours available and used by each member of the project) you need to connect to https://reser.cines.fr/ using your Adastra login and password&lt;br /&gt;
&lt;br /&gt;
* Changing the password of your CINES account&lt;br /&gt;
When your password is close to expiring, CINES asks you to change it on this website: https://rosetta.cines.fr&lt;br /&gt;
&lt;br /&gt;
Please note that you can access this website only if you are on a machine that you declared as a gateway for Adastra. At LMD, we have generally declared hakim.lmd.jussieu.fr (aka ssh-out) and spirit2.ipsl.fr as gateway machines. Hakim doesn't have any browser installed, but you can launch &amp;lt;code&amp;gt;firefox&amp;lt;/code&amp;gt; on Spirit and connect to the rosetta website.&lt;br /&gt;
If that doesn't work, check out the page on [[How to launch your local browser through a gateway machine]] or contact svp@cines.fr&lt;br /&gt;
&lt;br /&gt;
* Link to the Adastra technical documentation: https://dci.dci-gitlab.cines.fr/webextranet/&lt;br /&gt;
&lt;br /&gt;
* Link to the webpage where you can find out how many hours we have left on the project and details about everyone's use of Adastra (login and password are those of your Adastra account): https://reser.cines.fr&lt;br /&gt;
&lt;br /&gt;
== Disks and workspaces ==&lt;br /&gt;
* All the details are in the Adastra documentation: https://dci.dci-gitlab.cines.fr/webextranet/data_storage_and_transfers/index.html&lt;br /&gt;
* If you want to know the current quota (in HOMEDIR, WORKDIR and SCRATCHDIR) allocated to the project (yes quotas are for the whole group):&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
myproject -s cin0391&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
* In a nutshell: we have lots of space on the WORKDIR (250 TB), which is &amp;quot;permanent&amp;quot; (unlike the SCRATCHDIR, where files older than 30 days are purged), so use it! And when you want to archive things, make some large tar files and put them on the STOREDIR, as sketched below.&lt;br /&gt;
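A minimal sketch (the ''$STOREDIR'' variable here is an assumption; use the actual paths reported by &amp;quot;myproject -c&amp;quot;):&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
# bundle a finished simulation into a single tar file and move it to the store&lt;br /&gt;
tar -cvf my_simulation.tar my_simulation/&lt;br /&gt;
mv my_simulation.tar $STOREDIR/&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;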
=== Transferring data from Irene ===&lt;br /&gt;
You can use the ccfr &amp;quot;speedway&amp;quot; between National computing centers to copy data from Irene to Adastra (it is all explained here: https://dci.dci-gitlab.cines.fr/webextranet/data_storage_and_transfers/index.html#between-computing-site-ccfr ). To summarize:&lt;br /&gt;
# First check that you indeed asked for access to ccfr when you created your account: just run the &amp;quot;id&amp;quot; command on Adastra and check that you are a registered member of the &amp;quot;22011(cinesccfr)&amp;quot; group. If not, ask the CINES helpdesk svp@cines.fr &lt;br /&gt;
# Connect to Adastra the usual way, and once on Adastra run &amp;quot;ssh adastra-ccfr.cines.fr&amp;quot;, which should land you on &amp;quot;login1&amp;quot;, the node enabled to use the ccfr connection&lt;br /&gt;
# Once on login1 you can transfer data from Irene via scp or rsync using the appropriate gateway machine (on the Irene side), which is &amp;quot;irene-fr-ccfr-gw.ccc.cea&amp;quot;, e.g.:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
rsync -avz irenelogin@irene-fr-ccfr-gw.ccc.cea:irene_path_to_your_data adastra_path_to_your_data&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Environment and Account Setup ==&lt;br /&gt;
* To be able to use svn (subversion) to download or update code, you will need to specify in your ''~/.subversion/servers'' file:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[global]&lt;br /&gt;
http-proxy-host = proxy-l-adastra.cines.fr&lt;br /&gt;
http-proxy-port = 3128&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
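Once the proxy is set, you can check that svn works, e.g. by checking out part of the code (the URL below is the repository used elsewhere in this documentation):&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
svn co https://svn.lmd.jussieu.fr/Planeto/trunk/MESOSCALE/LMD_MM_MARS/SIMU&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;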
&lt;br /&gt;
== Submitting jobs ==&lt;br /&gt;
It's done using SLURM; you need to write up a job script and submit it using '''sbatch'''&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
sbatch myjob&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
You must specify in the header of the job which project resources you are using (&amp;quot;cin0391&amp;quot; in our case):&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
#SBATCH --account=cin0391&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
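Once submitted, you can monitor (and if needed cancel) your jobs with the standard SLURM commands:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
squeue -u $USER   # list your pending and running jobs&lt;br /&gt;
scancel JOBID     # cancel a job; JOBID is the number reported by squeue&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;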
&lt;br /&gt;
=== Example of an MPI job to launch a simulation ===&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
#SBATCH --job-name=job_mpi&lt;br /&gt;
#SBATCH --account=cin0391&lt;br /&gt;
### GENOA nodes accommodate 2 processors of 96 cores each, i.e. 192 cores overall&lt;br /&gt;
#SBATCH --constraint=GENOA&lt;br /&gt;
### Number of Nodes to use&lt;br /&gt;
#SBATCH --nodes=1&lt;br /&gt;
### Number of MPI tasks per node&lt;br /&gt;
#SBATCH --ntasks-per-node=48 &lt;br /&gt;
### Number of OpenMP threads per MPI task&lt;br /&gt;
#SBATCH --cpus-per-task=1&lt;br /&gt;
#SBATCH --threads-per-core=1&lt;br /&gt;
###SBATCH --exclusive&lt;br /&gt;
#SBATCH --output=job_mpi_%A.out&lt;br /&gt;
#SBATCH --time=00:45:00 &lt;br /&gt;
&lt;br /&gt;
#source env modules:&lt;br /&gt;
source ../trunk/LMDZ.COMMON/arch.env &lt;br /&gt;
ulimit -s unlimited&lt;br /&gt;
&lt;br /&gt;
srun --cpu-bind=threads --label gcm_96x96x78_phyvenus_para.e &amp;gt; gcm.out 2&amp;gt;&amp;amp;1&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Example of a mixed MPI/OpenMP job to launch a simulation ===&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
#SBATCH --job-name=job_mpi_omp&lt;br /&gt;
#SBATCH --account=cin0391&lt;br /&gt;
### GENOA nodes accommodate 2 processors of 96 cores each, i.e. 192 cores overall&lt;br /&gt;
#SBATCH --constraint=GENOA&lt;br /&gt;
### Number of Nodes to use&lt;br /&gt;
#SBATCH --nodes=1&lt;br /&gt;
### Number of MPI tasks per node&lt;br /&gt;
#SBATCH --ntasks-per-node=24 &lt;br /&gt;
### Number of OpenMP threads per MPI task&lt;br /&gt;
#SBATCH --cpus-per-task=4&lt;br /&gt;
#SBATCH --threads-per-core=1&lt;br /&gt;
###SBATCH --exclusive&lt;br /&gt;
#SBATCH --output=job_mpi_omp_%A.out&lt;br /&gt;
#SBATCH --time=00:30:00 &lt;br /&gt;
&lt;br /&gt;
#source env modules:&lt;br /&gt;
source ../trunk/LMDZ.COMMON/arch.env &lt;br /&gt;
ulimit -s unlimited&lt;br /&gt;
&lt;br /&gt;
### OMP_NUM_THREADS value must match &amp;quot;#SBATCH --cpus-per-task&amp;quot;&lt;br /&gt;
export OMP_NUM_THREADS=${SLURM_CPUS_PER_TASK}&lt;br /&gt;
export OMP_STACKSIZE=400M&lt;br /&gt;
&lt;br /&gt;
srun --cpu-bind=threads --label gcm_64x48x54_phymars_para.e &amp;gt; gcm.out 2&amp;gt;&amp;amp;1&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
Note however that the ''srun'' instructions above will not yield very efficient results, as one actually needs to specify the cpu binding (i.e. how cores relate to one another) via a dedicated script:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
srun --ntasks-per-node=${SLURM_NTASKS_PER_NODE} --cpu-bind=none --mem-bind=none --label -- ./adastra_cpu_binding.sh ./gcm_64x48x54_phymars_para.e &amp;gt; gcm.out 2&amp;gt;&amp;amp;1&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
But note that this core placement requires using a full node (192 cores, i.e. 24 MPI tasks times 8 OpenMP threads).&lt;br /&gt;
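For reference, here is a minimal sketch of what such a binding wrapper could look like; it is purely illustrative (the actual ''adastra_cpu_binding.sh'' from the Adastra documentation may differ) and simply pins each local MPI rank to a contiguous block of OMP_NUM_THREADS cores:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
# illustrative binding wrapper (not the official script): pin the local&lt;br /&gt;
# MPI rank (SLURM_LOCALID) to a contiguous block of OMP_NUM_THREADS cores&lt;br /&gt;
threads=${OMP_NUM_THREADS:-1}&lt;br /&gt;
first=$(( SLURM_LOCALID * threads ))&lt;br /&gt;
last=$(( first + threads - 1 ))&lt;br /&gt;
exec taskset -c ${first}-${last} &amp;quot;$@&amp;quot;&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;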
&lt;br /&gt;
== Using python ==&lt;br /&gt;
&lt;br /&gt;
If you want to use python on Adastra for quick analysis, you'll find that some basic packages are unavailable (e.g. matplotlib). To solve this issue, you may install a virtual python environment. Note that Adastra lets you maintain such an environment yourself on the /work and /scratch partitions: you should not put it in your /home!&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
python3 -m venv virtual_environment&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Then, you may want to activate the environment by doing:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
source path/virtual_environment/bin/activate&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
You will see that the environment is active in your terminal from the (virtual_environment) prefix at the beginning of your input line. From there, you can install any desired package with &amp;quot;pip&amp;quot;. For example, here are the commands needed to get matplotlib to work:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
python3 -m pip install --upgrade pip&lt;br /&gt;
python3 -m pip install --upgrade Pillow&lt;br /&gt;
&lt;br /&gt;
pip install matplotlib&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
You may find that some packages are required beforehand; in some cases, you will need to install them manually. When all packages are installed, you may use python as you please, as long as the virtual environment is active in your terminal!&lt;br /&gt;
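To check which interpreter is actually being used (it should point inside the virtual environment), you can run:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
which python3&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;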
&lt;br /&gt;
== Using Ferret ==&lt;br /&gt;
Ferret is installed on Adastra, but not (yet) as a standard module to load... To be able to use Ferret you need to do the following:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
module load develop&lt;br /&gt;
module load GCC-CPU-2.1.0&lt;br /&gt;
module load ferret/7.6.0&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
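You can then check that Ferret is indeed available in your PATH:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
which ferret&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;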
&lt;br /&gt;
== Using gdb4hpc ==&lt;br /&gt;
This is the default (only) debugger available... to use it you need to:&lt;br /&gt;
# Launch a request for an allocation on a compute node: &amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt; salloc --account=cin0391 --constraint=GENOA --job-name=&amp;quot;debug&amp;quot; --nodes=1 --time=1:00:00 --exclusive &amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
# Identify which node it is linked to and directly ssh (from login node) to it, e.g. if it is node &amp;quot;c1516&amp;quot; &amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt; ssh c1516 &amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
# Source your usual environment and then load the gdb4hpc module &amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt; module load gdb4hpc/4.16.0.1 &amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
# Go to your work directory and launch gdb4hpc&lt;br /&gt;
# within gdb4hpc: &amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt; dbg all&amp;gt; launch $a{1} --launcher-args=&amp;quot;--mpi=cray_shasta -A cin0391 --constraint=GENOA -t 00:30:00 -N 1 --cpu-bind=verbose,cores --exclusive&amp;quot;  ./executable.exe &amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Once everything is running, the first thing to do is set a breakpoint at the beginning of the program, e.g.:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
break icosa_lmdz.f90:1&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
And then &amp;quot;continue&amp;quot; to that point.&lt;br /&gt;
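From there, the usual gdb-style commands apply; a few illustrative ones (''myvar'' is a placeholder for one of your variables):&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
continue     # run until the breakpoint is reached&lt;br /&gt;
backtrace    # show the call stack&lt;br /&gt;
print myvar  # inspect a variable&lt;br /&gt;
quit&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;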
&lt;br /&gt;
== Using DDT ==&lt;br /&gt;
Much more user-friendly, and with a graphical user interface, DDT can now be used rather than gdb4hpc; just load the appropriate module:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
module load ddt&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
(after having loaded all the other modules, for compiler, libraries, etc.)&lt;br /&gt;
&lt;br /&gt;
Then you can launch ddt&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
./ddt &amp;amp;&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
In a nutshell you then need to:&lt;br /&gt;
* select &amp;quot;Run and Debug a program&amp;quot;&lt;br /&gt;
* specify the Application (executable) and Working directory (where to run the executable)&lt;br /&gt;
* specify the use of MPI (and/or OpenMP) with given number of MPI processes and/or OpenMP threads&lt;br /&gt;
* specify the &amp;quot;srun arguments&amp;quot; (which are usually set in the job header when running a regular simulation), e.g.&lt;br /&gt;
&amp;lt;code&amp;gt;--nodes=1 --exclusive --constraint=GENOA --account=cin0391 --time=00:15:00 --threads-per-core=1 --label&amp;lt;/code&amp;gt;&lt;br /&gt;
* click on &amp;quot;Run&amp;quot;. It will launch a job (be patient: your job might be on hold if the machine is full).&lt;br /&gt;
&lt;br /&gt;
...TODO: Add some example using DDT ...&lt;br /&gt;
&lt;br /&gt;
== Are you being disconnected when inactive? ==&lt;br /&gt;
If you are regularly being disconnected when a bit inactive on the supercomputer, adding these few lines to a ''config'' file in the ~/.ssh/ directory of your login machine (e.g. ssh-out/spirit) may help:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
Host *&lt;br /&gt;
...&lt;br /&gt;
KeepAlive yes&lt;br /&gt;
TCPKeepAlive yes&lt;br /&gt;
ServerAliveInterval 15&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[[Category:FAQ]]&lt;/div&gt;</summary>
		<author><name>Emillour</name></author>	</entry>

	<entry>
		<id>http://lmdz-forge.lmd.jussieu.fr/mediawiki/Planets/index.php?title=The_makelmdz_fcm_GCM_Compilation_Script&amp;diff=2742</id>
		<title>The makelmdz fcm GCM Compilation Script</title>
		<link rel="alternate" type="text/html" href="http://lmdz-forge.lmd.jussieu.fr/mediawiki/Planets/index.php?title=The_makelmdz_fcm_GCM_Compilation_Script&amp;diff=2742"/>
				<updated>2025-05-30T13:36:27Z</updated>
		
		<summary type="html">&lt;p&gt;Emillour: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Overview ==&lt;br /&gt;
The '''makelmdz_fcm''' script is the (bash) script (located in the '''LMDZ.COMMON''' directory) to use to [[Quick_Install_and_Run#Compiling_the_GCM|compile the GCM]]. It is based on FCM and should be run with various options (e.g. which physics package to compile the model with, what grid resolution to use, etc.) to generate the sought executable.&lt;br /&gt;
&lt;br /&gt;
== makelmdz_fcm options ==&lt;br /&gt;
To list available options, run &amp;quot;makelmdz_fcm -h&amp;quot;, which should return something like:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
Usage :&lt;br /&gt;
makelmdz_fcm [options] -arch arch_name exec&lt;br /&gt;
[-h]                       : brief help&lt;br /&gt;
[-d [[IMx]JMx]LM]          : IM, JM, LM are the dimensions in x, y, z (default: 96x72x19)&lt;br /&gt;
[-s nscat]                 : (Generic) Number of radiatively active scatterers&lt;br /&gt;
[-b IRxVIS]                : (Generic) Number of infrared (IR) and visible (VIS) bands for radiative transfer&lt;br /&gt;
[-p PHYS]                  : set of physical parametrizations (in libf/phyPHYS), (default: lmd)&lt;br /&gt;
[-prod / -dev / -debug]    : compilation mode production (default) / developement / debug .&lt;br /&gt;
[-c false/MPI1/OMCT]       : (Earth) coupling with ocean model : MPI1/OMCT/false (default: false)&lt;br /&gt;
[-v false/orchidee2.0/orchidee1.9/true] : (Earth) version of the vegetation model to include (default: false)&lt;br /&gt;
          false       : no vegetation model&lt;br /&gt;
          orchidee2.0 : compile using ORCHIDEE 2.0 (or more recent version)&lt;br /&gt;
          orchidee1.9 : compile using ORCHIDEE up to the version including OpenMP in ORCHIDEE : tag 1.9-1.9.5(version AR5)-1.9.6&lt;br /&gt;
          true        : (obsolete; for backward compatibility) use ORCHIDEE tag 1.9-1.9.6&lt;br /&gt;
[-chimie INCA/false]       : (Earth) with INCA chemistry model or without (default: false)&lt;br /&gt;
[-cosp true/false]         : (Earth) add the cosp model (default: false)&lt;br /&gt;
[-sisvat true/false]  : (Earth) compile with/without sisvat package (default: false)&lt;br /&gt;
[-rrtm true/false]    : (Earth) compile with/without rrtm package (default: false)&lt;br /&gt;
[-dust true/false]    : (Earth) compile with/without the dust package by Boucher and co (default: false)&lt;br /&gt;
[-strataer true/false]    : (Earth) compile with/without the strat aer package by Boucher and co (default: false)&lt;br /&gt;
[-parallel none/mpi/omp/mpi_omp] : parallelism (default: none) : mpi, openmp or mixted mpi_openmp&lt;br /&gt;
[-g GRI]                   : grid configuration in dyn3d/GRI_xy.h  (default: reg, inclues a zoom)&lt;br /&gt;
[-io ioipsl/mix/xios]                   : Input/Output library (default: ioipsl)&lt;br /&gt;
[-include INCLUDES]        : extra include path to add&lt;br /&gt;
[-cpp CPP_KEY]             : additional preprocessing definitions&lt;br /&gt;
[-adjnt]                   : adjoint model, not operational ...&lt;br /&gt;
[-mem]                     : reduced memory dynamics (if in parallel mode)&lt;br /&gt;
[-filtre NOMFILTRE]        : use filtre from libf/NOMFILTRE (default: filtrez)&lt;br /&gt;
[-link LINKS]              : additional links with other libraries&lt;br /&gt;
[-j n]                     : active parallel compiling on ntask&lt;br /&gt;
[-full]                    : full (re-)compilation (from scratch)&lt;br /&gt;
[-libphy]                  : only compile physics package (no dynamics or main program)&lt;br /&gt;
[-fcm_path path]           : path to the fcm tool (default: tools/fcm/bin)&lt;br /&gt;
[-ext_src path]            : path to an additional set of routines to compile with the model&lt;br /&gt;
[-arch_path path]          : path to architecture files (default: arch)&lt;br /&gt;
 -arch arch                : target architecture &lt;br /&gt;
 exec                      : executable to build&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Note that some options are meaningless for the Generic GCM. This is because we try to coordinate tools like '''makelmdz_fcm''' across the different physics packages.&lt;br /&gt;
&lt;br /&gt;
The only mandatory arguments are '''-arch''' and '''exec''', but in practice you'll need to specify many more (as defaults rarely suit all needs).&lt;br /&gt;
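For instance, to build a parallel (MPI+OpenMP) Mars GCM executable on a 64x48x54 grid (a sketch: ''gfortran'' is just an example architecture name, use the arch file matching your machine):&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
./makelmdz_fcm -arch gfortran -p mars -d 64x48x54 -parallel mpi_omp -j 8 gcm&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;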
 &lt;br /&gt;
== Details of main makelmdz_fcm options ==&lt;br /&gt;
&amp;lt;pre&amp;gt;-d IMxJMxLM&amp;lt;/pre&amp;gt;&lt;br /&gt;
As the default is to run on a fixed longitude-latitude grid (set when compiling), option &amp;quot;-d&amp;quot; is necessary to specify the number of grid points iim x jjm x llm (where iim is the number of intervals along longitude, jjm the number of intervals along latitude, and llm the number of atmospheric layers).&lt;br /&gt;
&lt;br /&gt;
If compiling one of the 1D models, then only the number of layers (llm) needs be specified, e.g. &amp;lt;code&amp;gt;-d 78&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;-prod / -dev / -debug&amp;lt;/pre&amp;gt;&lt;br /&gt;
Compilation mode. Default is &amp;lt;code&amp;gt;-prod&amp;lt;/code&amp;gt;, i.e. &amp;quot;production&amp;quot; mode, where compiler optimizations are on (as defined in the arch files; see [[The_Target_Architecture_(&amp;quot;arch&amp;quot;)_Files]] for details). When checking/debugging you definitely want to set &amp;lt;code&amp;gt;-debug&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;-j n&amp;lt;/pre&amp;gt;&lt;br /&gt;
With this option one can speed up compilation by letting &amp;lt;code&amp;gt;make&amp;lt;/code&amp;gt; compile up to &amp;lt;code&amp;gt;n&amp;lt;/code&amp;gt; routines simultaneously, when possible (note that this &amp;quot;parallel compilation&amp;quot; has nothing to do with the code being compiled for serial or parallel use, as specified via option &amp;lt;code&amp;gt;-parallel ...&amp;lt;/code&amp;gt;). In practice using &amp;lt;code&amp;gt;-j 8&amp;lt;/code&amp;gt; or &amp;lt;code&amp;gt;-j 4&amp;lt;/code&amp;gt; works well. &lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;-parallel none/mpi/omp/mpi_omp&amp;lt;/pre&amp;gt;&lt;br /&gt;
This option is to specify whether the model should be compiled in serial mode (default) or in parallel using MPI (&amp;lt;code&amp;gt;mpi&amp;lt;/code&amp;gt;) only, or using OpenMP (&amp;lt;code&amp;gt;omp&amp;lt;/code&amp;gt;) only, or both MPI and OpenMP (&amp;lt;code&amp;gt;mpi_omp&amp;lt;/code&amp;gt;).&lt;br /&gt;
&lt;br /&gt;
In practice, one most often needs to run in parallel and using both MPI and OpenMP, so &amp;lt;code&amp;gt;-parallel mpi_omp&amp;lt;/code&amp;gt; is advised.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;-io ioipsl/noioipsl/mix/xios&amp;lt;/pre&amp;gt;&lt;br /&gt;
This option specifies which IO (Input/Output) library is going to be used by the model. Default is &amp;lt;code&amp;gt;-io ioipsl&amp;lt;/code&amp;gt;, which is becoming deprecated for the GCM but remains mandatory for the 1D model. Note that when compiling with XIOS (&amp;lt;code&amp;gt;-io xios&amp;lt;/code&amp;gt;) one still needs the IOIPSL library (which handles the reading of the run.def and companion files). It is also possible to compile without the IOIPSL or XIOS libraries by specifying &amp;lt;code&amp;gt;-io noioipsl&amp;lt;/code&amp;gt; (using an internal version of the getin() function for reading the *.def files; not recommended, but it might help if building IOIPSL is a problem).&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;-libphy&amp;lt;/pre&amp;gt;&lt;br /&gt;
With this option, only the physics package is compiled (the longitude-latitude dynamics routines are excluded) as a library, and no main program is generated. Building this library is a mandatory step to run with other dynamical cores like DYNAMICO or WRF.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;-full&amp;lt;/pre&amp;gt;&lt;br /&gt;
Impose a full cleanup (i.e. removing past object and library files) to ensure recompiling the model from scratch.&lt;br /&gt;
&lt;br /&gt;
== Details of some specific makelmdz_fcm options ==&lt;br /&gt;
One chooses which physics package will be used using the &lt;br /&gt;
&amp;lt;pre&amp;gt;-p arg&amp;lt;/pre&amp;gt;&lt;br /&gt;
option, where &amp;lt;code&amp;gt;arg&amp;lt;/code&amp;gt; implies that the corresponding code will be found in '''LMDZ.COMMON/libf/phyarg''' and optionally also in '''LMDZ.COMMON/libf/aeronoarg'''. In practice these are just links to the package '''LMDZ.ARG/libf/phyarg''' and '''LMDZ.ARG/libf/aeronoarg''' (see [[LMDZ.COMMON directory layout and contents]], [[LMDZ.GENERIC directory layout and contents]], etc.).&lt;br /&gt;
&lt;br /&gt;
=== Generic model specific options ===&lt;br /&gt;
To compile with the Generic physics package, the adequate argument to &amp;lt;code&amp;gt;-p&amp;lt;/code&amp;gt; is &amp;lt;code&amp;gt;std&amp;lt;/code&amp;gt;, i.e. corresponding code will be found in '''LMDZ.COMMON/libf/phystd''' and '''LMDZ.COMMON/libf/aeronostd''', which are simply links to '''LMDZ.GENERIC/libf/phystd''' and '''LMDZ.GENERIC/libf/aeronostd''':&lt;br /&gt;
&amp;lt;pre&amp;gt;-p std&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Additional options one must provide when compiling with the Generic physics include: &lt;br /&gt;
&amp;lt;pre&amp;gt;-b IRxVIS&amp;lt;/pre&amp;gt;&lt;br /&gt;
Number of bands in the InfraRed and Visible for the radiative transfer. Note that this requires that the corresponding input files are available (at run time).&lt;br /&gt;
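Putting it together, a compilation command for the Generic model could look like the following (a sketch: the band and grid numbers are purely illustrative, and ''local'' stands for your architecture file name):&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
./makelmdz_fcm -arch local -p std -b 32x36 -d 64x48x30 -parallel mpi_omp -j 8 gcm&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;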
&lt;br /&gt;
=== Mars model specific options ===&lt;br /&gt;
To compile with the Mars physics package, the adequate argument to &amp;lt;code&amp;gt;-p&amp;lt;/code&amp;gt; is &amp;lt;code&amp;gt;mars&amp;lt;/code&amp;gt;, i.e. corresponding code will be found in '''LMDZ.COMMON/libf/phymars''' and '''LMDZ.COMMON/libf/aeronomars''', which are simply links to '''LMDZ.MARS/libf/phymars''' and '''LMDZ.MARS/libf/aeronomars''':&lt;br /&gt;
&amp;lt;pre&amp;gt;-p mars&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Venus model specific options ===&lt;br /&gt;
To compile with the Venus physics package, the adequate argument to &amp;lt;code&amp;gt;-p&amp;lt;/code&amp;gt; is &amp;lt;code&amp;gt;venus&amp;lt;/code&amp;gt;, i.e. corresponding code will be found in '''LMDZ.COMMON/libf/phyvenus''', which is simply a link to '''LMDZ.VENUS/libf/phyvenus''':&lt;br /&gt;
&amp;lt;pre&amp;gt;-p venus&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Titan model specific options ===&lt;br /&gt;
To compile with the Titan physics package, the adequate argument to &amp;lt;code&amp;gt;-p&amp;lt;/code&amp;gt; is &amp;lt;code&amp;gt;titan&amp;lt;/code&amp;gt;, i.e. corresponding code will be found in '''LMDZ.COMMON/libf/phytitan''', '''LMDZ.COMMON/libf/muphytitan''' and '''LMDZ.COMMON/libf/chimtitan''', which are simply links to '''LMDZ.TITAN/libf/phytitan''', '''LMDZ.TITAN/libf/muphytitan''' and '''LMDZ.TITAN/libf/chimtitan''':&lt;br /&gt;
&amp;lt;pre&amp;gt;-p titan&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Troubleshooting ==&lt;br /&gt;
If when you run &amp;lt;code&amp;gt;makelmdz_fcm&amp;lt;/code&amp;gt; you get the following error message:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
dirname: missing operand&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
This is because you do not have the &amp;lt;code&amp;gt;fcm&amp;lt;/code&amp;gt; command available, either because you have not installed &amp;quot;fcm&amp;quot; (see the relevant &amp;quot;Overview&amp;quot; page for the PCM you are using) or because you have not added the &amp;lt;code&amp;gt;FCM_V1.2/bin&amp;lt;/code&amp;gt; directory to your &amp;lt;code&amp;gt;PATH&amp;lt;/code&amp;gt; environment variable.&lt;br /&gt;
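For example (a sketch; adapt the path to your own install):&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
export PATH=/path/to/LMDZ.COMMON/FCM_V1.2/bin:$PATH&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;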
&lt;br /&gt;
[[Category:Generic-Model]]&lt;br /&gt;
[[Category:Mars-Model]]&lt;br /&gt;
[[Category:Venus-Model]]&lt;br /&gt;
[[Category:Titan-Model]]&lt;/div&gt;</summary>
		<author><name>Emillour</name></author>	</entry>

	<entry>
		<id>http://lmdz-forge.lmd.jussieu.fr/mediawiki/Planets/index.php?title=Installing_Mars_mesoscale_model_on_spirit&amp;diff=2741</id>
		<title>Installing Mars mesoscale model on spirit</title>
		<link rel="alternate" type="text/html" href="http://lmdz-forge.lmd.jussieu.fr/mediawiki/Planets/index.php?title=Installing_Mars_mesoscale_model_on_spirit&amp;diff=2741"/>
				<updated>2025-05-28T15:42:38Z</updated>
		
		<summary type="html">&lt;p&gt;Emillour: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Set up environment ==&lt;br /&gt;
=== Option 1 ===&lt;br /&gt;
Add this to your ~/.bashrc, then source the file&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;Bash&amp;quot;&amp;gt;&lt;br /&gt;
module purge&lt;br /&gt;
module load intel/2021.4.0&lt;br /&gt;
module load intel-mkl/2020.4.304&lt;br /&gt;
module load openmpi/4.0.7&lt;br /&gt;
module load hdf5/1.10.7-mpi&lt;br /&gt;
module load netcdf-fortran/4.5.3-mpi&lt;br /&gt;
module load netcdf-c/4.7.4-mpi&lt;br /&gt;
declare -x WHERE_MPI=/net/nfs/tools/u20/22.3/PrgEnv/intel/linux-ubuntu20.04-zen2/openmpi/4.0.7-intel-2021.4.0-43fdcnab3ydwu7ycrplnzlp6xieusuz7/bin/&lt;br /&gt;
declare -x NETCDF=/scratchu/spiga/les_mars_project_spirit/netcdf_hacks/SPIRIT&lt;br /&gt;
declare -x NCDFLIB=$NETCDF/lib&lt;br /&gt;
declare -x NCDFINC=$NETCDF/include&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Do not forget to declare the local directory in PATH by adding this to your ~/.bashrc&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;Bash&amp;quot;&amp;gt;&lt;br /&gt;
declare -x PATH=./:$PATH&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
It is necessary to unlimit the stack size, to avoid unwanted segmentation faults, by adding this to your ~/.bashrc&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;Bash&amp;quot;&amp;gt;&lt;br /&gt;
ulimit -s unlimited&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
In the end, source the .bashrc file!&lt;br /&gt;
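That is, once the lines above are in place:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;Bash&amp;quot;&amp;gt;&lt;br /&gt;
source ~/.bashrc&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;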
&lt;br /&gt;
=== Option 2 ===&lt;br /&gt;
If you prefer not to modify your .bashrc file, you should instead put all the lines in a &amp;quot;mesoscale.env&amp;quot; file and add a&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
source mesoscale.env&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
at the beginning of the meso_install.sh script.&lt;br /&gt;
&lt;br /&gt;
Note that you will still need to have &amp;quot;.&amp;quot; in your PATH and unlimited stacksize. So you will definitely need to have:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;Bash&amp;quot;&amp;gt;&lt;br /&gt;
declare -x PATH=./:$PATH&lt;br /&gt;
ulimit -s unlimited&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
in your .bashrc file&lt;br /&gt;
&lt;br /&gt;
=== Extra technical details===&lt;br /&gt;
* WRF needs a NETCDF environment variable pointing to a directory where everything (the Fortran and C libraries) is in the same place. If this is not available, one needs to create a single directory with links to all the C and Fortran files. NCDFLIB and NCDFINC are for the physics.&lt;br /&gt;
* The WHERE_MPI environment variable is used by some scripts to make sure the right MPI installation is used&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Set up the installer ==&lt;br /&gt;
&lt;br /&gt;
Go to your 'data' directory (cd /homedata/_MY_LOGIN_ ''or'' cd /data/_MY_LOGIN_) and download the installer with the following command&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;Bash&amp;quot;&amp;gt;&lt;br /&gt;
svn co https://svn.lmd.jussieu.fr/Planeto/trunk/MESOSCALE/LMD_MM_MARS/SIMU&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Make sure the installer can be executed&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;Bash&amp;quot;&amp;gt;&lt;br /&gt;
chmod 755 SIMU/meso_install.sh&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Make a link (e.g. where you are, in the parent directory of /data) to the main script in this SIMU directory&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;Bash&amp;quot;&amp;gt;&lt;br /&gt;
ln -sf SIMU/meso_install.sh .&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Run the installer with a simple display of possible options&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;Bash&amp;quot;&amp;gt;&lt;br /&gt;
./meso_install.sh -h&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Install the code ==&lt;br /&gt;
&lt;br /&gt;
Update to the latest version of the installer&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;Bash&amp;quot;&amp;gt;&lt;br /&gt;
cd SIMU&lt;br /&gt;
svn update&lt;br /&gt;
chmod 755 meso_install.sh&lt;br /&gt;
cd ..&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Run the installer by providing a name for your specific directory&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;Bash&amp;quot;&amp;gt;&lt;br /&gt;
./meso_install.sh -n DESCRIBE_YOUR_RESEARCH_PROJECT&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
''Important'': This will only work if you have access (i.e. an account) to the IN2P3 Gitlab project &amp;quot;La communauté des modèles atmosphériques planétaires&amp;quot; AND if you have added your personal ssh key there. Otherwise proceed as indicated below.&lt;br /&gt;
&lt;br /&gt;
''Special case'': In case you do not have a gitlab account, ask for an archive (tar.gz) of the code.&lt;br /&gt;
Let us assume the name is git-trunk-mesoscale-compile-run-spirit.tar.gz &lt;br /&gt;
and the file is in the current directory (for instance in /homedata/_MY_LOGIN_)&lt;br /&gt;
then run the installer with the following command&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;Bash&amp;quot;&amp;gt;&lt;br /&gt;
./meso_install.sh -n DESCRIBE_YOUR_RESEARCH_PROJECT -a git-trunk-mesoscale-compile-run-spirit&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Troubleshooting (in case this happens to you) ===&lt;br /&gt;
If GCM compilation fails with error message &amp;quot;Can't locate Fcm/Config.pm in @INC (you may need to install the Fcm::Config module)&amp;quot;, this is because &amp;quot;.&amp;quot; is in your PATH before &amp;quot;FCM_V1.2/bin&amp;quot;. &lt;br /&gt;
You can fix it by either adapting the PATH in the environment file by adding&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;Bash&amp;quot;&amp;gt;&lt;br /&gt;
declare -x PATH=/homedata/_MY_LOGIN_/DESCRIBE_YOUR_RESEARCH_PROJECT/code/LMDZ.COMMON/FCM_V1.2/bin/:$PATH&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
or by removing the symbolic link to &amp;quot;fcm&amp;quot; in &amp;quot;/homedata/_MY_LOGIN_/DESCRIBE_YOUR_RESEARCH_PROJECT/code/LMDZ.COMMON/&amp;quot;.&lt;br /&gt;
&lt;br /&gt;
Once you have fixed the problem you can recompile the GCM (which had failed previously) by doing:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;Bash&amp;quot;&amp;gt;&lt;br /&gt;
cd /homedata/_MY_LOGIN_/DESCRIBE_YOUR_RESEARCH_PROJECT/&lt;br /&gt;
./compile_gcm.sh&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
and recreate start files from a sample start_archive.nc by doing (in subdirectory gcm/newstart)&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;Bash&amp;quot;&amp;gt;&lt;br /&gt;
./mini_startbase.sh&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
A different option is to fix the problem directly in the meso_install.sh script, by adding this line (anywhere below the definition of $refrepo)&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;Bash&amp;quot;&amp;gt;&lt;br /&gt;
 declare -x PATH=$refrepo/code/LMDZ.COMMON/FCM_V1.2/bin/:$PATH&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Extra technical stuff ===&lt;br /&gt;
&lt;br /&gt;
* The git tag of the version that will be installed is hard-coded in meso_install.sh, e.g. version=&amp;quot;tags/mesoscale-compile-run_MESOIPSL_exploration&amp;quot;. The tags point to specific versions which have been tested and thus serve as references, e.g. mesoscale_compile-run_MESOIPSL on the git trunk&lt;br /&gt;
&lt;br /&gt;
* To recompile WRF (see &amp;quot;readme&amp;quot; file)&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;Bash&amp;quot;&amp;gt;&lt;br /&gt;
cd code/MESOSCALE/LMD_MM_MARS&lt;br /&gt;
makemeso -p mars_lmd_new&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
To recompile WRF in debug mode:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;Bash&amp;quot;&amp;gt;&lt;br /&gt;
cd code/MESOSCALE/LMD_MM_MARS&lt;br /&gt;
makemeso -p mars_lmd_new -g&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
The compilation script (and options) for the physics is in MESOSCALE/LMD_MM_MARS/SRC/WRFV2/mars_lmd_new/makegcm_mpifort,&lt;br /&gt;
and for WRF in MESOSCALE/LMD_MM_MARS/makemeso&lt;br /&gt;
&lt;br /&gt;
== Run the full workflow GCM + initialization + mesoscale model ==&lt;br /&gt;
&lt;br /&gt;
Have a look at the script named ''launch'', just for information or to change the number of processors or the step at which you start. Send the launch job to the cluster by typing the following (note that you can change the number of cores to run with in the header of the script, e.g. ''#SBATCH --ntasks=24'' to use 24 cores, but this doesn't work for now...)&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;Bash&amp;quot;&amp;gt;&lt;br /&gt;
sbatch launch&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
IMPORTANT: if you have used &amp;quot;Option 2&amp;quot; in the setup/environment, i.e. creating a dedicated &amp;quot;mesoscale.env&amp;quot; file, then you should source it in the &amp;quot;launch&amp;quot; script with something like:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;Bash&amp;quot;&amp;gt;&lt;br /&gt;
source ../mesoscale.env&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
You can check the status of the run by typing&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;Bash&amp;quot;&amp;gt;&lt;br /&gt;
squeue -u $USER&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Overview of the workflow steps ===&lt;br /&gt;
The launch script workflow includes 4 steps:&lt;br /&gt;
# Running the GCM: see the &amp;quot;launch_gcm&amp;quot; script in subdirectory &amp;quot;gcm&amp;quot;, and run the &amp;quot;readmeteo.exe&amp;quot; executable in subdirectory &amp;quot;prep&amp;quot;. &amp;quot;gcm/startbase&amp;quot; contains the required start files. &amp;quot;readmeteo.exe&amp;quot; parses diagfi files into a binary WRF input file in &amp;quot;prep/WPSFEED&amp;quot;&lt;br /&gt;
# Running &amp;quot;geogrid.exe&amp;quot; in the &amp;quot;geogrid&amp;quot; subdirectory and &amp;quot;metgrid.exe&amp;quot; in the &amp;quot;metgrid&amp;quot; subdirectory. &amp;quot;geogrid.exe&amp;quot; sets up the mesoscale grid (it relies on &amp;quot;data_static&amp;quot;), and &amp;quot;metgrid.exe&amp;quot; re-interpolates horizontally the &amp;quot;prep/WPSFEED&amp;quot; files onto the mesoscale grid (files &amp;quot;met_***.nc&amp;quot;).&lt;br /&gt;
# Running &amp;quot;real.exe&amp;quot;. This is for vertical re-interpolation and generation of boundary conditions (files &amp;quot;wrfinput_d01&amp;quot; and &amp;quot;wrfbdy_d01&amp;quot;)&lt;br /&gt;
# Running &amp;quot;wrf.exe&amp;quot; in a dedicated &amp;quot;run_###&amp;quot; subdirectory. Instantaneous outputs are &amp;quot;wrfout_d*&amp;quot; files and &amp;quot;wrfrst_*&amp;quot; files are restart files.&lt;br /&gt;
&lt;br /&gt;
Note that the various steps are incremental, but not always all necessary. For instance, once the forcings have been generated for a given setup (steps 1-3), one can rerun the mesoscale model &amp;quot;wrf.exe&amp;quot; (step 4 only) without redoing the setup steps.&lt;br /&gt;
&lt;br /&gt;
Note that &amp;quot;run.def&amp;quot; includes &amp;quot;callphys.def&amp;quot; (same as the GCM) and additions in &amp;quot;mesoscale.def&amp;quot;&lt;br /&gt;
&lt;br /&gt;
=== Extra technical stuff ===&lt;br /&gt;
* The namelist.input file contains inputs for WRF. One can change the output rate by specifying &amp;quot;history_interval_s&amp;quot; in seconds (instead of &amp;quot;history_interval&amp;quot;), e.g. set it to the timestep to get an output at every step for debugging.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;Bash&amp;quot;&amp;gt;&lt;br /&gt;
history_interval_s=20&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
* To run in hydrostatic mode: it is a parameter of the &amp;quot;&amp;amp;dynamics&amp;quot; namelist&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;Bash&amp;quot;&amp;gt;&lt;br /&gt;
non_hydrostatic=F&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
* Note that the full list of options for namelist.input can be found in &amp;quot;SIMU/namelist.input_full&amp;quot;&lt;br /&gt;
&lt;br /&gt;
== A more detailed run ==&lt;br /&gt;
&lt;br /&gt;
The best is probably to use a more complete startbase than the minimal one that was created; for instance, link 'data_gcm' to point towards '/data/spiga/2021_STARTBASES_rev2460/MY35', as shown below.&lt;br /&gt;
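For example (using the path given above):&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;Bash&amp;quot;&amp;gt;&lt;br /&gt;
ln -sf /data/spiga/2021_STARTBASES_rev2460/MY35 data_gcm&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;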
&lt;br /&gt;
To use the tyler cap setting in namelist.wps, download the tylerall archive from https://web.lmd.jussieu.fr/~aslmd/mesoscale_model/data_static/ and extract the contents into the folder named data_static.&lt;br /&gt;
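For example (the archive file name below is an assumption; check the index at the URL above):&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;Bash&amp;quot;&amp;gt;&lt;br /&gt;
cd data_static&lt;br /&gt;
wget https://web.lmd.jussieu.fr/~aslmd/mesoscale_model/data_static/tylerall.tar.gz&lt;br /&gt;
tar -xzf tylerall.tar.gz&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;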
&lt;br /&gt;
[[Category:Mars-Mesoscale]]&lt;/div&gt;</summary>
		<author><name>Emillour</name></author>	</entry>

	<entry>
		<id>http://lmdz-forge.lmd.jussieu.fr/mediawiki/Planets/index.php?title=Using_Adastra&amp;diff=2740</id>
		<title>Using Adastra</title>
		<link rel="alternate" type="text/html" href="http://lmdz-forge.lmd.jussieu.fr/mediawiki/Planets/index.php?title=Using_Adastra&amp;diff=2740"/>
				<updated>2025-05-13T14:25:14Z</updated>
		
		<summary type="html">&lt;p&gt;Emillour: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;This page provides a summary of examples and tools designed to help you get acquainted with the Adastra environment.&lt;br /&gt;
&lt;br /&gt;
== Getting access to the cluster ==&lt;br /&gt;
For people on the &amp;quot;Atmosphères Planétaires&amp;quot; GENCI project who need to open an account on Adastra, here is the procedure:&lt;br /&gt;
&lt;br /&gt;
# Go to https://www.edari.fr/utilisateur and log in via Janus, or create an account if you don't have a Janus login. If this doesn't work, you can create a new eDARI account. (Make sure your profile is fully up to date, including nationality.)&lt;br /&gt;
# Beware! If you are in two labs (LMD and LATMOS, for example), you must register with the email address corresponding to your Janus account.&lt;br /&gt;
# Click on &amp;quot;se rattacher à un dossier ayant obtenu des resources&amp;quot; or &amp;quot;Attach yourself to an application file that has obtained resources&amp;quot;&lt;br /&gt;
# &amp;quot;Atmosphères Planétaires&amp;quot; project number to provide:  A0180110391&lt;br /&gt;
# Ehouarn then receives an email to allow you to join the project. Once he has validated it, you receive a confirmation mail.&lt;br /&gt;
# Once approved, you have to request an account: click on &amp;quot;CINES: créer une demande d'ouverture de compte&amp;quot;&lt;br /&gt;
# Fill in the forms: name, contract end date, CINES, your lab information (LMD is the default)&lt;br /&gt;
# Access IP address: 134.157.47.46, FQDN (Fully Qualified Domain Name): ssh-out.lmd.jussieu.fr &lt;br /&gt;
# Add a second address: 134.157.176.129, FQDN: spirit2.ipsl.fr&lt;br /&gt;
# Click on the option to have access to CCFR (only important if you have access to other GENCI machines)&lt;br /&gt;
# The security officer for LMD is Julien Lenseigne (his information is all pre-filled, except the phone number: +33169335172)&lt;br /&gt;
# YOU MUST THEN VALIDATE THE REQUEST: click on &amp;quot;Valider la saisie des informations&amp;quot;&lt;br /&gt;
# You then receive an automatic mail, but it only tells you to go to the next step: you must now download the pre-filled form from e-dari (find &amp;quot;télécharger la demande&amp;quot; and download the pdf), sign it, and upload it on e-dari under &amp;quot;déposer la demande de création de compte&amp;quot;.&lt;br /&gt;
# Wait for your application to be preprocessed by the system...&lt;br /&gt;
&lt;br /&gt;
== A couple of pointers ==&lt;br /&gt;
&lt;br /&gt;
* Connecting to Adastra: for those who had an account on Occigen, group and login credentials have been retained from then. To connect to Adastra you first need to go through the LMD gateway (hakim) or the IPSL (Spirit/SpiritX) gateway, and then&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
ssh your_cines_login@adastra.cines.fr&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
You will then probably want to switch projects using the myproject command, e.g. to switch to &amp;quot;lmd1167&amp;quot; (the old &amp;quot;Atmosphères Planétaires&amp;quot; GENCI project)&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
myproject -a lmd1167&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
and to switch to &amp;quot;cin0391&amp;quot; (the 2023-2024 &amp;quot;Atmosphères Planétaires&amp;quot; GENCI project)&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
myproject -a cin0391&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
WARNING: when you switch projects, you also switch HOME directory etc.&lt;br /&gt;
&lt;br /&gt;
To get all the info about dedicated environment variables (e.g. paths to SCRATCH, STORE, etc.) you can use&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
myproject -c&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* To get all the information about project accounting (number of hours available and used by each member of the project), connect to https://reser.cines.fr/ using your Adastra login and password&lt;br /&gt;
&lt;br /&gt;
* Changing the password of your CINES account&lt;br /&gt;
When your password is close to expiring, CINES asks you to change it on this website: https://rosetta.cines.fr&lt;br /&gt;
&lt;br /&gt;
Please note that you can access this website only if you are on a machine that you declared as a gateway for Adastra. At LMD, we have generally declared hakim.lmd.jussieu.fr (aka ssh-out) and spirit2.ipsl.fr as gateway machines. Hakim doesn't have any browser installed, but you can launch &amp;lt;code&amp;gt;firefox&amp;lt;/code&amp;gt; on Spirit and connect to the rosetta website.&lt;br /&gt;
If that doesn't work, check out the page on [[How to launch your local browser through a gateway machine]] or contact svp@cines.fr&lt;br /&gt;
&lt;br /&gt;
* Link to the Adastra technical documentation: https://dci.dci-gitlab.cines.fr/webextranet/&lt;br /&gt;
&lt;br /&gt;
* Link to the webpage where you can find out how many hours we have left on the project and details about everyone's use of Adastra (login and password are those of your Adastra account): https://reser.cines.fr&lt;br /&gt;
&lt;br /&gt;
== Disks and workspaces ==&lt;br /&gt;
* All the details are in the Adastra documentation: https://dci.dci-gitlab.cines.fr/webextranet/data_storage_and_transfers/index.html&lt;br /&gt;
* If you want to know the current quotas (in HOMEDIR, WORKDIR and SCRATCHDIR) allocated to the project (yes, quotas are for the whole group):&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
myproject -s cin0391&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
* In a nutshell: we have lots of space on the WORKDIR (250 TB), which is &amp;quot;permanent&amp;quot; (unlike the SCRATCHDIR, where files older than 30 days are purged), so use it! And when you want to archive things, make some large tar files and put them on the STOREDIR, e.g. as in the sketch below.&lt;br /&gt;
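A minimal sketch, assuming the store path has been retrieved beforehand (e.g. via &amp;quot;myproject -c&amp;quot;, see above):&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
# bundle a whole simulation directory into one large tar file&lt;br /&gt;
tar -cf my_simulation.tar my_simulation/&lt;br /&gt;
# then copy it over to the store space (path as given by myproject -c)&lt;br /&gt;
cp my_simulation.tar $STOREDIR/&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;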
=== Transferring data from Irene ===&lt;br /&gt;
You can use the ccfr &amp;quot;speedway&amp;quot; between National computing centers to copy data from Irene to Adastra (it is all explained here: https://dci.dci-gitlab.cines.fr/webextranet/data_storage_and_transfers/index.html#between-computing-site-ccfr ). To summarize:&lt;br /&gt;
# First check that you indeed asked to have access to ccfr when you created your account: run the &amp;quot;id&amp;quot; command on Adastra and check that you are a registered member of the &amp;quot;22011(cinesccfr)&amp;quot; group. If not, ask the CINES helpdesk svp@cines.fr &lt;br /&gt;
# Connect to Adastra the usual way, and once on Adastra run &amp;quot;ssh adastra-ccfr.cines.fr&amp;quot;, which should land you on &amp;quot;login1&amp;quot;, the node enabled to use the ccfr connection&lt;br /&gt;
# Once on login1 you can transfer data from Irene via scp or rsync using the appropriate gateway machine (on the Irene side), which is &amp;quot;irene-fr-ccfr-gw.ccc.cea&amp;quot;, e.g.:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
rsync -avz irenelogin@irene-fr-ccfr-gw.ccc.cea:irene_path_to_your_data adastra_path_to_your_data&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Submitting jobs ==&lt;br /&gt;
It's done using SLURM; you need to write a job script and submit it using '''sbatch'''&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
sbatch myjob&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
You must specify in the header of the job which project resources you are using (&amp;quot;cin0391&amp;quot; in our case):&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
#SBATCH --account=cin0391&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Example of an MPI job to launch a simulation ===&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
#SBATCH --job-name=job_mpi&lt;br /&gt;
#SBATCH --account=cin0391&lt;br /&gt;
### GENOA nodes accommodate 2 processors of 96 cores each, i.e. 192 cores overall&lt;br /&gt;
#SBATCH --constraint=GENOA&lt;br /&gt;
### Number of Nodes to use&lt;br /&gt;
#SBATCH --nodes=1&lt;br /&gt;
### Number of MPI tasks per node&lt;br /&gt;
#SBATCH --ntasks-per-node=48 &lt;br /&gt;
### Number of OpenMP threads per MPI task&lt;br /&gt;
#SBATCH --cpus-per-task=1&lt;br /&gt;
#SBATCH --threads-per-core=1&lt;br /&gt;
###SBATCH --exclusive&lt;br /&gt;
#SBATCH --output=job_mpi_%A.out&lt;br /&gt;
#SBATCH --time=00:45:00 &lt;br /&gt;
&lt;br /&gt;
#source env modules:&lt;br /&gt;
source ../trunk/LMDZ.COMMON/arch.env &lt;br /&gt;
ulimit -s unlimited&lt;br /&gt;
&lt;br /&gt;
srun --cpu-bind=threads --label gcm_96x96x78_phyvenus_para.e &amp;gt; gcm.out 2&amp;gt;&amp;amp;1&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Example of a mixed MPI/OpenMP job to launch a simulation ===&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
#SBATCH --job-name=job_mpi_omp&lt;br /&gt;
#SBATCH --account=cin0391&lt;br /&gt;
### GENOA nodes accommodate 2 processors of 96 cores each, i.e. 192 cores overall&lt;br /&gt;
#SBATCH --constraint=GENOA&lt;br /&gt;
### Number of Nodes to use&lt;br /&gt;
#SBATCH --nodes=1&lt;br /&gt;
### Number of MPI tasks per node&lt;br /&gt;
#SBATCH --ntasks-per-node=24 &lt;br /&gt;
### Number of OpenMP threads per MPI task&lt;br /&gt;
#SBATCH --cpus-per-task=4&lt;br /&gt;
#SBATCH --threads-per-core=1&lt;br /&gt;
###SBATCH --exclusive&lt;br /&gt;
#SBATCH --output=job_mpi_omp_%A.out&lt;br /&gt;
#SBATCH --time=00:30:00 &lt;br /&gt;
&lt;br /&gt;
#source env modules:&lt;br /&gt;
source ../trunk/LMDZ.COMMON/arch.env &lt;br /&gt;
ulimit -s unlimited&lt;br /&gt;
&lt;br /&gt;
### OMP_NUM_THREADS value must match &amp;quot;#SBATCH --cpus-per-task&amp;quot;&lt;br /&gt;
export OMP_NUM_THREADS=4&lt;br /&gt;
export OMP_STACKSIZE=400M&lt;br /&gt;
&lt;br /&gt;
srun --cpu-bind=threads --label gcm_64x48x54_phymars_para.e &amp;gt; gcm.out 2&amp;gt;&amp;amp;1&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Using python ==&lt;br /&gt;
&lt;br /&gt;
If you want to use python on ADASTRA for quick analysis, you'll see that some basic packages are unavailable (e.g. matplotlib). To solve this issue, you may install a virtual python environment. Note that ADASTRA lets you maintain your own environment on the /work and /scratch partitions: you should not put it in your /home!&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
python3 -m venv virtual_environment&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Then activate the environment with:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
source path/virtual_environment/bin/activate&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
You will see that the environment is active in your terminal when (virtual_environment) appears at the beginning of your input line. Once it is active, you can install any desired package with &amp;quot;pip&amp;quot;. For example, here are the command lines I had to use to get matplotlib to work.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
python3 -m pip install --upgrade pip&lt;br /&gt;
python3 -m pip install --upgrade Pillow&lt;br /&gt;
pip install matplotlib&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
You may see that some packages are required beforehand: in some cases, you will need to install them manually. When all packages are done installing, you may use python as you please, as long as the virtual environment is active in your terminal!&lt;br /&gt;
&lt;br /&gt;
== Using Ferret ==&lt;br /&gt;
Ferret is installed on Adastra, but not (yet) as a standard module to load... To be able to use Ferret you need to do the following:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
module load develop&lt;br /&gt;
module load GCC-CPU-2.1.0&lt;br /&gt;
module load ferret/7.6.0&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Using gdb4hpc ==&lt;br /&gt;
This is the default (and only) debugger available... To use it you need to:&lt;br /&gt;
# Launch a request for an allocation on a compute node: &amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt; salloc --account=cin0391 --constraint=GENOA --job-name=&amp;quot;debug&amp;quot; --nodes=1 --time=1:00:00 --exclusive &amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
# Identify which node the allocation is linked to and ssh to it directly (from the login node), e.g. if it is node &amp;quot;c1516&amp;quot; &amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt; ssh c1516 &amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
# Source your usual environment and then load the gdb4hpc module &amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt; module load gdb4hpc/4.16.0.1 &amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
# Go to your work directory and launch gdb4hpc&lt;br /&gt;
# within gdb4hpc: &amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt; dbg all&amp;gt; launch $a{1} --launcher-args=&amp;quot;--mpi=cray_shasta -A cin0391 --constraint=GENOA -t 00:30:00 -N 1 --cpu-bind=verbose,cores --exclusive&amp;quot;  ./executable.exe &amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Once everything is running, the first thing to do is set a breakpoint at the beginning of the program, e.g.:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
break icosa_lmdz.f90:1&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
And then &amp;quot;continue&amp;quot; to that point:&lt;br /&gt;
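&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
continue&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;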
&lt;br /&gt;
== Using DDT ==&lt;br /&gt;
Much more user-friendly, and with a Graphical User Interface: you can now use DDT rather than gdb4hpc. Just load the appropriate module:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
module load ddt&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
(after having loaded all the other modules, for compiler, libraries, etc.)&lt;br /&gt;
&lt;br /&gt;
Then you can launch ddt&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
./ddt &amp;amp;&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
In a nutshell you then need to:&lt;br /&gt;
* select &amp;quot;Run and Debug a program&amp;quot;&lt;br /&gt;
* specify the Application (executable) and Working directory (where to run the executable)&lt;br /&gt;
* specify the use of MPI (and/or OpenMP) with given number of MPI processes and/or OpenMP threads&lt;br /&gt;
* specify the &amp;quot;srun arguments&amp;quot; (which are usually set in the job header when running a regular simulation), e.g.&lt;br /&gt;
&amp;lt;code&amp;gt;--nodes=1 --exclusive --constraint=GENOA --account=cin0391 --time=00:15:00 --threads-per-core=1 --label&amp;lt;/code&amp;gt;&lt;br /&gt;
* click on &amp;quot;Run&amp;quot;. It will launch a job (be patient, your job might be put on hold if the machine is full).&lt;br /&gt;
&lt;br /&gt;
...TODO: Add some example using DDT ...&lt;br /&gt;
&lt;br /&gt;
== Are you being disconnected when inactive? ==&lt;br /&gt;
If you are regularly being disconnected when a bit inactive on the supercomputer, adding these few lines to a ''config'' file in the .ssh/ directory of your login machine (e.g. ssh-out/spirit) may help:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
Host *&lt;br /&gt;
...&lt;br /&gt;
KeepAlive yes&lt;br /&gt;
TCPKeepAlive yes&lt;br /&gt;
ServerAliveInterval 15&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[[Category:FAQ]]&lt;/div&gt;</summary>
		<author><name>Emillour</name></author>	</entry>

	<entry>
		<id>http://lmdz-forge.lmd.jussieu.fr/mediawiki/Planets/index.php?title=Advanced_Use_of_the_GCM&amp;diff=2731</id>
		<title>Advanced Use of the GCM</title>
		<link rel="alternate" type="text/html" href="http://lmdz-forge.lmd.jussieu.fr/mediawiki/Planets/index.php?title=Advanced_Use_of_the_GCM&amp;diff=2731"/>
				<updated>2025-04-30T13:30:48Z</updated>
		
		<summary type="html">&lt;p&gt;Emillour: /* Changing the output temporal resolution and time duration */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
== Running in parallel ==&lt;br /&gt;
&lt;br /&gt;
For large simulations (long runs, high resolutions, etc.), the computational cost can be huge and hence the run time very long.&lt;br /&gt;
To overcome this issue, the model can be run in parallel. This however requires a few extra steps (compared to compiling and running the serial version of the code).&lt;br /&gt;
For all the details see [[Parallelism | the dedicated page]].&lt;br /&gt;
&lt;br /&gt;
== Disambiguation between ifort, mpif90, etc. ==&lt;br /&gt;
&lt;br /&gt;
For users not used to compilers and/or to compiling and running codes in parallel, namely in MPI mode, there is often some confusion, which the following points should help clarify:&lt;br /&gt;
* the compiler (typically gfortran, ifort, pgfortran, etc.) is the tool required to compile the Fortran source code and generate an executable. It is strongly recommended that the libraries used by a program be compiled with the same compiler. Thus if you plan to use different compilers to compile the model, note that you should also have at hand versions of the libraries it uses compiled with each of these compilers.&lt;br /&gt;
* the MPI (Message Passing Interface) library is a library used to solve problems using multiple processes by enabling message-passing between the otherwise independent processes. There are a number of available MPI libraries out there, e.g. OpenMPI, MPICH or IntelMPI to name a few (you can check out the [[Building an MPI library]] page for some information about installing an MPI library). The important point here is that on a given machine the MPI library is related to a given compiler and that it provides related wrappers to compile and run with. Typically (but not always) the compiler wrapper is '''mpif90''' and the execution wrapper is '''mpirun'''. If you want to know which compiler is wrapped in the '''mpif90''' compiler wrapper, check out the output of:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
mpif90 --version&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
* In addition a second type of parallelism, shared-memory parallelism known as OpenMP, is also implemented in the code. Unlike MPI, OpenMP does not require an external library but is instead implemented as a compiler feature. At run time one must then set some dedicated environment variables (such as OMP_NUM_THREADS and OMP_STACKSIZE) to specify the number of threads to use per process (see the sketch after this list).&lt;br /&gt;
* In practice one should favor compiling and running with both MPI and OpenMP enabled.&lt;br /&gt;
* For much more detailed information about compiling and running in parallel, check out [[Parallelism | the page dedicated to Parallelism]].&lt;br /&gt;
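&lt;br /&gt;
As an illustration, here is a minimal sketch of a combined MPI/OpenMP launch (the executable name gcm.e and the process/thread counts are placeholders; actual launch commands depend on your machine):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# 4 MPI processes, each running 2 OpenMP threads&lt;br /&gt;
export OMP_NUM_THREADS=2&lt;br /&gt;
export OMP_STACKSIZE=400M&lt;br /&gt;
mpirun -np 4 gcm.e &amp;gt; gcm.out 2&amp;gt;&amp;amp;1&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;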
&lt;br /&gt;
== A word about the IOIPSL and XIOS libraries ==&lt;br /&gt;
* The IOIPSL (Input Output IPSL) library is a library developed by the IPSL community to handle inputs and outputs of (mostly terrestrial) climate models. For the Generic PCM only a small part of this library is actually used, related to reading and processing the input [[The_run.def_Input_File | run.def]] file. For more details check out the [[The IOIPSL Library]] page.&lt;br /&gt;
* The [https://forge.ipsl.jussieu.fr/ioserver/wiki XIOS] (Xml I/O Server) library is based on client-server principles where the server manages the outputs asynchronously from the client (the climate model) so that the bottleneck of writing data in a parallel environment is alleviated. All aspects of the outputs (name, units, file, post-processing operations, etc.) are then controlled by dedicated XML files which are read at run-time. Using XIOS is currently optional (and requires compiling the GCM with the XIOS library). More about the XIOS library, how to install and use it, etc. [[The XIOS Library| here]].&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Playing with the output files ==&lt;br /&gt;
&lt;br /&gt;
=== Changing the output temporal resolution and time duration ===&lt;br /&gt;
&lt;br /&gt;
* To change the total time of a simulation, you need to open the [[The_run.def_Input_File | run.def]] file and change the variable 'nday':&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
nday = 1000 # this means the simulation will run for 1000 days; the associated output files will also cover a total duration of 1000 days&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Note: in this example, these are not necessarily 1000 Earth days, because this depends on the day duration defined in the start files.&lt;br /&gt;
&lt;br /&gt;
* To change the temporal resolution of the output files, you need to open the [[The_callphys.def_Input_File | callphys.def]] file and change the variable 'diagfi_output_rate':&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
diagfi_output_rate = 240 # this means the simulation will write variables in the output files every 240 physics time steps of the simulation.&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Note: the output temporal resolution then also depends on the number of physics timesteps per day (the 'day_step' variable in the [[The_run.def_Input_File | run.def]] file is the number of dynamical steps per day, but the physics are called every 'iphysiq' dynamical time steps). In this example:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
nday = 1000&lt;br /&gt;
day_step = 480&lt;br /&gt;
iphysiq = 10&lt;br /&gt;
diagfi_output_rate = 12&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The physics are thus called every 10 dynamical time steps, i.e. 48 (=day_step/iphysiq=480/10) times per day, and the output file will provide results every 0.25 days (=diagfi_output_rate/48=12/48), for a total duration of 1000 days (so 4000 time values in total).&lt;br /&gt;
&lt;br /&gt;
=== Changing the output variable ===&lt;br /&gt;
&lt;br /&gt;
To select the variables provided in the output file diagfi.nc, you simply need to list the variables you want in the [[The_diagfi.def_Input_File | diagfi.def]] file, e.g. as in the example below.&lt;br /&gt;
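&lt;br /&gt;
An illustrative [[The_diagfi.def_Input_File | diagfi.def]] listing a few variables (the names below are just examples; any variable output by the physics can be listed):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
temp&lt;br /&gt;
ps&lt;br /&gt;
u&lt;br /&gt;
v&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;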
&lt;br /&gt;
Please check the [[diagfi.nc]] and [[outputs]] pages for more information.&lt;br /&gt;
&lt;br /&gt;
Note for experts: some technical variables need to be de-commented in the 'physiq_mod.F90' file to be written in the output files.&lt;br /&gt;
&lt;br /&gt;
=== Spectral outputs ===&lt;br /&gt;
&lt;br /&gt;
It is possible to provide spectral outputs such as the OLR (Outgoing Longwave Radiation, i.e. the thermal emission of the planet at the top of the atmosphere), the OSR (Outgoing Stellar Radiation, i.e. the light reflected by the planet at the top of the atmosphere), or the GSR (Ground Stellar Radiation, i.e. the light emitted by the star that reaches the surface of the planet).&lt;br /&gt;
&lt;br /&gt;
For this, you need to activate the option 'specOLR' in the [[The_callphys.def_Input_File | callphys.def]] file, as follows:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
specOLR    = .true.&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The simulations will then create diagspec_VI.nc and diagspec_IR.nc files (along with the standard diagfi.nc file), which contain the spectra of OLR, OSR, GSR, etc.&lt;br /&gt;
&lt;br /&gt;
Note: The resolution of the spectra is defined by that of the correlated-k (opacity) files used for the simulation.&lt;br /&gt;
&lt;br /&gt;
=== Statistical outputs ===&lt;br /&gt;
&lt;br /&gt;
TBD (explain how to compute stats.nc files as well as what is inside)&lt;br /&gt;
&lt;br /&gt;
== How to Change Vertical and Horizontal Resolutions ==&lt;br /&gt;
&lt;br /&gt;
=== When you are using the regular longitude/latitude horizontal grid ===&lt;br /&gt;
To run at a different grid resolution than the available initial conditions files, one needs to use the tools ''newstart.e'' and ''start2archive.e''&lt;br /&gt;
&lt;br /&gt;
For example, to create initial states at grid resolution 32×24×25 from NetCDF files start and startfi at grid resolution 64×48×32:&lt;br /&gt;
&lt;br /&gt;
* Create file ''start_archive.nc'' with ''start2archive.e'' compiled at grid resolution 64×48×32, using the old ''z2sig.def'' file used previously&lt;br /&gt;
* Create files ''restart.nc'' and ''restartfi.nc'' with ''newstart.e'' compiled at grid resolution 32×24×25, using a new file ''z2sig.def'' (more details below on the choice of the ''z2sig.def'').&lt;br /&gt;
* While executing ''newstart.e'', you need to choose the answer '0 - from a file start_archive' and then press enter to all other requests (see the sketch below).&lt;br /&gt;
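&lt;br /&gt;
A minimal sketch of the sequence (directory layout is illustrative; each executable must be run next to its matching input files):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# 1) with start2archive.e compiled at 64x48x32, next to start.nc/startfi.nc:&lt;br /&gt;
./start2archive.e      # produces start_archive.nc&lt;br /&gt;
# 2) with newstart.e compiled at 32x24x25, next to start_archive.nc&lt;br /&gt;
#    and the new z2sig.def:&lt;br /&gt;
./newstart.e           # answer '0 - from a file start_archive',&lt;br /&gt;
                       # then press enter to all other requests&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
This produces the ''restart.nc'' and ''restartfi.nc'' files at the new resolution.&lt;br /&gt;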
&lt;br /&gt;
==== What you need to ''know'' about the ''z2sig.def'' file ====&lt;br /&gt;
&lt;br /&gt;
Python example scripts to change the vertical resolution of a [[The_z2sig.def_Input_File | z2sig.def]] file can be found in the repository in the &amp;lt;syntaxhighlight inline&amp;gt;UTIL/z2sig/&amp;lt;/syntaxhighlight&amp;gt; folder.&lt;br /&gt;
&lt;br /&gt;
For a model with Nlay layers, the [[The_z2sig.def_Input_File | z2sig.def]] file must contain at least Nlay+1 lines (the others not being read).&lt;br /&gt;
&lt;br /&gt;
The first line is a scale height ($$H$$, in km). The following lines are the target pseudo-altitudes (in km) for the model from the bottom up ($$z_i$$).&lt;br /&gt;
The units do not matter as long as you use the same ones for both. &lt;br /&gt;
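&lt;br /&gt;
For illustration, a minimal (hypothetical) [[The_z2sig.def_Input_File | z2sig.def]] for a 5-layer model, with the scale height on the first line followed by the 5 target pseudo-altitudes from the bottom up (all in km here, thinner layers near the surface):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
10.&lt;br /&gt;
0.1&lt;br /&gt;
0.5&lt;br /&gt;
2.&lt;br /&gt;
8.&lt;br /&gt;
20.&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;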
&lt;br /&gt;
The model will use these altitudes to compute a target pressure grid ($$p_i$$ ) as follows:&lt;br /&gt;
\begin{align}&lt;br /&gt;
  \label{def:pseudoalt}&lt;br /&gt;
  p_i &amp;amp;= p_{reff} \exp(-z_i/H),&lt;br /&gt;
\end{align}&lt;br /&gt;
where $$p_{reff}$$ is a reference surface pressure. It is important to note that the pseudo-altitudes and pressures mentioned here are targets, and technically they are only exact if the surface pressure $$p_s$$ remains constant in time and equal to $$p_{reff}$$ (it usually doesn't), and if the atmospheric scale height is also constant in time and space (it usually isn't).&lt;br /&gt;
&lt;br /&gt;
As you can see, the scale height and pseudo-altitudes enter the equation only through their ratio, so they do not have to be the real scale height and altitudes of the atmosphere you are simulating.&lt;br /&gt;
You can thus use the same [[The_z2sig.def_Input_File | z2sig.def]] file for different planets. &lt;br /&gt;
&lt;br /&gt;
There is no hard rule for determining the altitude/pressure levels you should use. As a rule of thumb, layers should be thinner near the surface to properly resolve the surface boundary layer. They should then gradually increase in size over a couple of scale heights and transition to constant thickness above that. Of course, some specific applications may require thinner layers in some specific parts of the atmosphere. &lt;br /&gt;
&lt;br /&gt;
A little trick for those who prefer to think in terms of (log)pressure: if you use $$H=  1/\ln 10 \approx 0.43429448$$, then $$z_i=x$$ corresponds to a pressure difference with the surface of exactly x pressure decades (i.e. at $$z=1$$, $$p=0.1 p_{reff}$$). This is particularly useful for giant-planet applications.&lt;br /&gt;
&lt;br /&gt;
 &amp;lt;!-- [NOT RELEVANT??] If you want to create starts files with tracers for 50 layers using a start archive.nc obtained for 32 layers, do not forget to use the ini_q option in newstart in order to correctly initialize tracers value for layer 33 to layer 50. You just have to answer yes to the question on thermosphere initialization if you want to initialize the thermosphere part only (l=33 to l=50), and no if you want to initialize tracers for all layers (l=0 to l=50). --&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Check out the following page about [[PCM vertical coordinate| hybrid vertical coordinate]] if you want a detailed description of the PCM model vertical grid.&lt;br /&gt;
&lt;br /&gt;
=== When you are using the DYNAMICO icosahedral horizontal grid ===&lt;br /&gt;
&lt;br /&gt;
The horizontal resolution for the DYNAMICO dynamical core is managed through several settings files, read during the execution. &lt;br /&gt;
To this purpose, each part of the GCM managing the input/output fields ('''ICOSAGCM''', '''ICOSA_LMDZ''', '''XIOS''') needs to know the input and output grids: &lt;br /&gt;
&lt;br /&gt;
'''1. ''context_lmdz_physics.xml'':'''&lt;br /&gt;
&lt;br /&gt;
You can find several grid setups already defined:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;xml&amp;quot; line&amp;gt;&lt;br /&gt;
&amp;lt;domain_definition&amp;gt;&lt;br /&gt;
    &amp;lt;domain id=&amp;quot;dom_96_95&amp;quot; ni_glo=&amp;quot;96&amp;quot; nj_glo=&amp;quot;95&amp;quot; type=&amp;quot;rectilinear&amp;quot;  &amp;gt;&lt;br /&gt;
      &amp;lt;generate_rectilinear_domain/&amp;gt;&lt;br /&gt;
      &amp;lt;interpolate_domain order=&amp;quot;1&amp;quot;/&amp;gt;&lt;br /&gt;
    &amp;lt;/domain&amp;gt;&lt;br /&gt;
&lt;br /&gt;
    &amp;lt;domain id=&amp;quot;dom_144_142&amp;quot; ni_glo=&amp;quot;144&amp;quot; nj_glo=&amp;quot;142&amp;quot; type=&amp;quot;rectilinear&amp;quot;  &amp;gt;&lt;br /&gt;
      &amp;lt;generate_rectilinear_domain/&amp;gt;&lt;br /&gt;
      &amp;lt;interpolate_domain order=&amp;quot;1&amp;quot;/&amp;gt;&lt;br /&gt;
    &amp;lt;/domain&amp;gt;&lt;br /&gt;
&lt;br /&gt;
    &amp;lt;domain id=&amp;quot;dom_512_360&amp;quot; ni_glo=&amp;quot;512&amp;quot; nj_glo=&amp;quot;360&amp;quot; type=&amp;quot;rectilinear&amp;quot;  &amp;gt;&lt;br /&gt;
      &amp;lt;generate_rectilinear_domain/&amp;gt;&lt;br /&gt;
      &amp;lt;interpolate_domain order=&amp;quot;1&amp;quot;/&amp;gt;&lt;br /&gt;
    &amp;lt;/domain&amp;gt;&lt;br /&gt;
&lt;br /&gt;
    &amp;lt;domain id=&amp;quot;dom_720_360&amp;quot; ni_glo=&amp;quot;720&amp;quot; nj_glo=&amp;quot;360&amp;quot; type=&amp;quot;rectilinear&amp;quot;&amp;gt;&lt;br /&gt;
      &amp;lt;generate_rectilinear_domain/&amp;gt;&lt;br /&gt;
      &amp;lt;interpolate_domain order=&amp;quot;1&amp;quot;/&amp;gt;&lt;br /&gt;
    &amp;lt;/domain&amp;gt;&lt;br /&gt;
&lt;br /&gt;
    &amp;lt;domain id=&amp;quot;dom_out&amp;quot; domain_ref=&amp;quot;dom_720_360&amp;quot;/&amp;gt;&lt;br /&gt;
&amp;lt;/domain_definition&amp;gt;&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
In this example, the output grid for the physics fields is defined by &lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;xml&amp;quot;&amp;gt;&lt;br /&gt;
&amp;lt;domain id=&amp;quot;dom_out&amp;quot; domain_ref=&amp;quot;dom_720_360&amp;quot;/&amp;gt; &lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
which is a half-degree horizontal resolution. To change this resolution, you have to change the name of the '''domain_ref''' grid, for instance: &lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;xml&amp;quot;&amp;gt;&lt;br /&gt;
&amp;lt;domain id=&amp;quot;dom_out&amp;quot; domain_ref=&amp;quot;dom_96_95&amp;quot;/&amp;gt; &lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''2. ''run_icosa.def'': setting file to execute a simulation''' &lt;br /&gt;
&lt;br /&gt;
In this file, depending on the intended horizontal resolution, you have to set the number of subdivisions of the main triangle. &lt;br /&gt;
As a reminder, the icosahedral mesh is divided into several main triangles, and each main triangle is divided into a suitable number of sub-triangles according to the horizontal resolution.&lt;br /&gt;
 &lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot; line&amp;gt;&lt;br /&gt;
#nbp --&amp;gt; number of subdivision on a main triangle: integer (default=40)&lt;br /&gt;
#              nbp = sqrt((nbr_lat x nbr_lon)/10)&lt;br /&gt;
#              nbp:                 20   40   80  160&lt;br /&gt;
#              T-edge length (km): 500  250  120   60&lt;br /&gt;
#              Example: nbp(128x96) = 35 -&amp;gt; 40&lt;br /&gt;
#                       nbp(256x192)= 70 -&amp;gt; 80&lt;br /&gt;
#                       nbp(360x720)= 160 -&amp;gt; 160&lt;br /&gt;
nbp = 160&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
If you have chosen the 96_95 output grid in ''context_lmdz_physics.xml'', you have to calculate $$nbp = \sqrt{96 \times 95}/10 \approx 10$$, and in this case &lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
nbp = 20&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
After the number of subdivisions of the main triangle, you have to define the number of subdivisions along each direction. At this stage you need to be careful, as the number of subdivisions along each direction:&lt;br /&gt;
* needs to be set according to the number of subdivisions on the main triangle '''nbp'''&lt;br /&gt;
* will determine the number of processors on which the GCM will be most effective&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot; line&amp;gt;&lt;br /&gt;
## sub splitting of main rhombus : integer (default=1)&lt;br /&gt;
#nsplit_i=1&lt;br /&gt;
#nsplit_j=1&lt;br /&gt;
#omp_level_size=1&lt;br /&gt;
###############################################################&lt;br /&gt;
## There must be fewer MPIxOpenMP processes than the 10 x nsplit_i x nsplit_j tiles&lt;br /&gt;
## typically for pure MPI runs, let nproc = 10 x nsplit_i x nsplit_j&lt;br /&gt;
## it is better to have nbp/nsplit_i &amp;gt; 10 and nbp/nsplit_j &amp;gt; 10&lt;br /&gt;
###############################################################&lt;br /&gt;
#### 40 nodes of 24 processors = 960 procs&lt;br /&gt;
nsplit_i=12&lt;br /&gt;
nsplit_j=8&lt;br /&gt;
&lt;br /&gt;
#### 50 nodes of 24 processors = 1200 procs&lt;br /&gt;
#nsplit_i=10&lt;br /&gt;
#nsplit_j=12&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
With the same example as above, the 96_95 output grid requires:&lt;br /&gt;
$$nsplit_i &amp;lt; 2$$ and $$nsplit_j &amp;lt; 2$$&lt;br /&gt;
We advise you to select:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
## sub splitting of main rhombus : integer (default=1)&lt;br /&gt;
nsplit_i=1&lt;br /&gt;
nsplit_j=1&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
and to use 10 processors (nproc = 10 × nsplit_i × nsplit_j = 10).&lt;br /&gt;
&lt;br /&gt;
== How to Change the Topography (or remove it) ==&lt;br /&gt;
&lt;br /&gt;
The generic model can in principle use any type of surface topography, provided that the topographic data file is available in the right format and put in the right place. The surface topography information is contained in the ''startfi.nc'' file, and we have developed tools (see below) to modify the ''startfi.nc'' to account for a new surface topography.&lt;br /&gt;
&lt;br /&gt;
To change the surface topography of a simulation, we recommend following the procedure detailed below:&lt;br /&gt;
&lt;br /&gt;
* Create file ''start_archive.nc'' with ''start2archive.e'' compiled at the same (horizontal and vertical) resolution as the ''start.nc'' and ''startfi.nc'' files.&lt;br /&gt;
* Create files ''restart.nc'' and ''restartfi.nc'' with ''newstart.e'' compiled again at the same (horizontal and vertical) resolution. &lt;br /&gt;
* While executing ''newstart.e'', you need to choose the answer '0 - from a file start_archive' and then press enter to all other requests.&lt;br /&gt;
* At some point, ''newstart.e'' asks you to choose the surface topography you want from the list of files available in your 'datagcm/surface_data/' directory. &lt;br /&gt;
&lt;br /&gt;
We have a repository of surface topography files for Venus, Earth and Mars through time available at https://web.lmd.jussieu.fr/~lmdz/planets/generic/datagcm/surface_data/. You can download the surface topography files and place them in your 'datagcm/surface_data/' directory, e.g. as sketched below.&lt;br /&gt;
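&lt;br /&gt;
A minimal sketch (the file name below is just an example; pick the actual file from the repository listing):&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
cd datagcm/surface_data/&lt;br /&gt;
wget https://web.lmd.jussieu.fr/~lmdz/planets/generic/datagcm/surface_data/some_topography_file.nc&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;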
&lt;br /&gt;
We also offer a tutorial to design new topography maps here: https://lmdz-forge.lmd.jussieu.fr/mediawiki/Planets/index.php/Building_Surface_Topography_Files&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
Special note: To remove the topography, you can simply add the following tag in callphys.def (but currently, this only works if ''callsoil=.false.''):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
nosurf  = .true.&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== How to Change the Stellar Spectrum ==&lt;br /&gt;
&lt;br /&gt;
To simulate the effect of the star's radiation on a given planetary atmosphere, it is necessary to accurately represent the stellar spectrum (spectral shape and total bolometric flux) at the top of this atmosphere. In the model, we have set up two different options to model the stellar spectra of any star.&lt;br /&gt;
&lt;br /&gt;
=== Black Body Stellar Spectra ===&lt;br /&gt;
&lt;br /&gt;
First, it is possible to simply use a black body. In this case, the stellar spectrum depends only on the effective temperature of the star which is provided to the model.&lt;br /&gt;
&lt;br /&gt;
For this, you need to activate the option 'stelbbody' in the [[The_callphys.def_Input_File | callphys.def]] file, as follows:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
stelbbody  = .true.&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
and then add, also in the [[The_callphys.def_Input_File | callphys.def]] file, the following line:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
stelTbb   = 3500.0&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
to specify the effective temperature of the host star (in this example, we have chosen an M-star with an effective temperature of 3500 K).&lt;br /&gt;
&lt;br /&gt;
=== Pre-Tabulated spectra ===&lt;br /&gt;
&lt;br /&gt;
Second, the model can read a file containing any pre-computed stellar spectrum. Traditionally, we have used synthetic spectra from the PHOENIX database, which we adapt to the Generic PCM by decreasing the spectral resolution (we use 10000 points with a fixed spectral resolution of 0.001 micron) and by adapting the units (to W/m2/micron). This is the option that is generally preferred, to better represent the effect of the star (whose real spectrum can strongly deviate from the black body approximation).&lt;br /&gt;
&lt;br /&gt;
For this, you need to make sure the option 'stelbbody' in the [[The_callphys.def_Input_File | callphys.def]] file is set to .false. (if not specified, stelbbody defaults to .false.).&lt;br /&gt;
&lt;br /&gt;
Then you need to add in the [[The_callphys.def_Input_File | callphys.def]] file, the following line:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
startype = 1&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
and change the value of startype depending on the star you want to model (here 1 means we use the solar spectrum).&lt;br /&gt;
&lt;br /&gt;
To know which stellar spectra are available, open the file LMDZ.GENERIC/libf/phystd/ave_stelspec.F90 and set the value of startype accordingly. You also need to make sure the spectra are available in your /datadir/stellar_spectra (or /datagcm/stellar_spectra) directory.&lt;br /&gt;
&lt;br /&gt;
To calculate the true stellar spectrum at the top of the atmosphere, the Generic PCM renormalizes the stellar spectrum by the bolometric flux at 1 Astronomical Unit (AU) provided by the user, which it then converts into the true stellar spectrum by using the star-planet distance. &lt;br /&gt;
&lt;br /&gt;
To specify the flux at 1 AU, you need to add in the [[The_callphys.def_Input_File | callphys.def]] file, the following line:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
Fat1AU = 1366.0&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
(here 1366 W/m2 corresponds to the flux at 1 AU for the Sun)&lt;br /&gt;
&lt;br /&gt;
1st NOTE: We will improve this second part (Martin and Mathilde) by the end of 2022.&lt;br /&gt;
&lt;br /&gt;
2nd NOTE: The Generic PCM also has the capability to run without any stellar flux. To do that, you can simply put Fat1AU = 0. (@LUCAS_TEINTURIER, could you check that?)&lt;br /&gt;
&lt;br /&gt;
== How to Change the Opacity Tables ==&lt;br /&gt;
&lt;br /&gt;
The model uses opacity tables to compute heating rates throughout the atmosphere. These opacity tables are generated &amp;quot;offline&amp;quot; for a given set of pressures, temperatures, a given composition and a specific spectral decomposition. &lt;br /&gt;
&lt;br /&gt;
=== Getting opacity tables for your desired atmospheric composition ===&lt;br /&gt;
&lt;br /&gt;
* You should first check our common repository (https://web.lmd.jussieu.fr/~lmdz/planets/generic/datagcm/corrk_data/) to see whether your desired opacity table is not already included. There is a README file there that describes each of the opacity tables.&lt;br /&gt;
&lt;br /&gt;
* You can also check databases provided by the community (the format of the opacity tables may have to be changed/adapted though):&lt;br /&gt;
&lt;br /&gt;
- https://lesia.obspm.fr/exorem/&lt;br /&gt;
&lt;br /&gt;
- https://petitradtrans.readthedocs.io/en/latest/content/available_opacities.html&lt;br /&gt;
&lt;br /&gt;
* If your desired atmospheric composition is not available in the directory above, you will have to build a new opacity table (spoiler alert: this can be a long and quite tricky process). &lt;br /&gt;
We have created a dedicated page on how to build new opacity tables here: https://lmdz-forge.lmd.jussieu.fr/mediawiki/Planets/index.php/Building_Opacity_Tables&lt;br /&gt;
&lt;br /&gt;
=== Implementing your opacity tables in the Generic PCM ===&lt;br /&gt;
&lt;br /&gt;
Once you have your opacity tables ready, you should follow these steps to use them in the model (a minimal example of the resulting directory layout is given after the list):&lt;br /&gt;
&lt;br /&gt;
* Copy the directory containing your opacity tables into your local /datagcm/corrk_data/ directory. It has to contain the files T.dat (temperature values), P.dat (pressure values), Q.dat (mixing ratio values), g.dat (Gauss point values) and a directory N1xN2 (number of infrared bands x number of visible bands). The latter should contain the files narrowbands_IR.in and narrowbands_VI.in (the infrared and visible wavelengths), and the files corrk_gcm_VI.dat and corrk_gcm_IR.dat (the correlated-k opacity tables for the visible and infrared channels).&lt;br /&gt;
* Change the parameter corrkdir = ... in [[The_callphys.def_Input_File | callphys.def]] with the name of that directory.&lt;br /&gt;
* Change the gases.def file: it has to be consistent with the values written in the Q.dat file of the correlated-k table.&lt;br /&gt;
* Change the -b option when compiling the model with makelmdz_fcm: it has to correspond to the number of bands (in the infrared x in the visible) of the new opacity tables. For instance, compile with -b 38x26 if you used 38 bands in the infrared and 26 in the visible to generate the opacity tables.&lt;br /&gt;
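&lt;br /&gt;
For instance, for a (hypothetical) table named MyMix_38x26, built with 38 infrared and 26 visible bands, the expected layout would be:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
/datagcm/corrk_data/MyMix_38x26/&lt;br /&gt;
  T.dat&lt;br /&gt;
  P.dat&lt;br /&gt;
  Q.dat&lt;br /&gt;
  g.dat&lt;br /&gt;
  38x26/&lt;br /&gt;
    narrowbands_IR.in&lt;br /&gt;
    narrowbands_VI.in&lt;br /&gt;
    corrk_gcm_IR.dat&lt;br /&gt;
    corrk_gcm_VI.dat&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
to be used with corrkdir = MyMix_38x26 in [[The_callphys.def_Input_File | callphys.def]] and the model compiled with -b 38x26.&lt;br /&gt;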
&lt;br /&gt;
== How to Add continuum opacities ==&lt;br /&gt;
&lt;br /&gt;
In general, opacity tables (above) only include so-called &amp;quot;permitted&amp;quot; absorptions (coming from line centers, computed from line lists). But other sources of absorption -- so-called &amp;quot;continuum absorptions&amp;quot; -- can be important: contributions from line far wings, collision-induced absorption, and dimer absorption (the last two are known as &amp;quot;forbidden&amp;quot; absorptions). How do we handle these continuum absorptions in the model?&lt;br /&gt;
&lt;br /&gt;
Historically, we compiled (unstructured) data from HITRAN and other sources to build the continuum tables used in the model.&lt;br /&gt;
&lt;br /&gt;
Since March 2025, we have a new, complete continuum database, which makes things much easier for users. More information here: https://lmdz-forge.lmd.jussieu.fr/mediawiki/Planets/index.php/Continuum_Database&lt;br /&gt;
&lt;br /&gt;
== How to Change the Aerosols Optical Properties ==&lt;br /&gt;
&lt;br /&gt;
Aerosol optical properties are represented using three distinct quantities: the extinction coefficient (Q_ext), the single scattering albedo (omega) and the asymmetry factor (g). &lt;br /&gt;
&lt;br /&gt;
The Generic PCM can compute the radiative effect of any aerosol, provided that its optical properties (Q_ext, omega, g) are tabulated and provided in the right format.&lt;br /&gt;
&lt;br /&gt;
=== Getting optical properties for your aerosols ===&lt;br /&gt;
&lt;br /&gt;
* You should first check our common repository (https://web.lmd.jussieu.fr/~lmdz/planets/generic/datagcm/aerosol_properties/) to see whether your favorite aerosol is already included. The optical properties of each aerosol are provided in two distinct files: one in the 'visible' (used in the visible part of the radiative transfer, to compute the fate of stellar radiation) and one in the 'infrared' (used in the thermal infrared part of the radiative transfer, to compute the fate of thermal emission by the surface and atmosphere).&lt;br /&gt;
&lt;br /&gt;
For instance, if you want to include the radiative effect of CO2 ice clouds, then you just need the files: &lt;br /&gt;
&lt;br /&gt;
- optprop_co2ice_vis_n50.dat&lt;br /&gt;
&lt;br /&gt;
- optprop_co2ice_ir_n50.dat&lt;br /&gt;
&lt;br /&gt;
* Otherwise, you can create your own tables of optical properties, using existing databases. Check this page to learn how to do this: https://lmdz-forge.lmd.jussieu.fr/mediawiki/Planets/index.php/Building_Tables_Of_Aerosol_Optical_Properties&lt;br /&gt;
&lt;br /&gt;
=== Implementing your aerosols in the Generic PCM ===&lt;br /&gt;
&lt;br /&gt;
Before including the aerosol scheme you want to use, you need to indicate the number of aerosol layers in callphys.def with the option '''naerkind=#number_of_aerosol_layers'''.&lt;br /&gt;
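&lt;br /&gt;
For instance (a minimal sketch, for a single aerosol):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
naerkind = 1&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;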
&lt;br /&gt;
There are two available aerosol schemes you can use to add a new aerosol in the Generic PCM:&lt;br /&gt;
&lt;br /&gt;
====The n-layer aerosol scheme====&lt;br /&gt;
&lt;br /&gt;
You can use the n-layer scheme (implemented by Jan Vatant d'Ollone) to easily prescribe an aerosol vertical distribution. The scheme is activated by adding ''aeronlay=.true.'' in callphys.def.&lt;br /&gt;
&lt;br /&gt;
In this scheme, each layer can have different optical properties, particle sizes, etc. Different options of the scheme (e.g. fixing the aerosol distribution between two atmospheric pressures) are selected with ''aeronlay_choice = 1'', 2, etc. &lt;br /&gt;
&lt;br /&gt;
First, you have to specify the number of aerosol layers of this scheme. For instance:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
nlayaero = 3&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Then, you can indicate the properties of your aerosol layer(s) one after the other on the same line. For 3 aerosol layers, we have:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
aeronlay_tauref       = 1.0 0.05 0.03&lt;br /&gt;
aeronlay_lamref       = 0.8e-6 0.8e-6 0.8e-6&lt;br /&gt;
aeronlay_choice       = 2 2 2&lt;br /&gt;
aeronlay_pbot         = 2.0e5 1.6e5 0.2e5&lt;br /&gt;
aeronlay_ptop         = 0.10e5 2.0e5 1.&lt;br /&gt;
aeronlay_sclhght      = 0.1 2.0 0.1&lt;br /&gt;
aeronlay_size         = 0.8e-6 0.05e-6 2.5e-6&lt;br /&gt;
aeronlay_nueff        = 0.3 0.3 0.3&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
''aeronlay_tauref'' is the optical depth at the reference wavelength ''aeronlay_lamref'' (in metres). ''aeronlay_choice'' selects the vertical distribution of each layer: with ''aeronlay_choice=1'', the aerosol is distributed between the bottom pressure (''aeronlay_pbot'') and the top pressure (''aeronlay_ptop'') of the layer; with ''aeronlay_choice=2'', it extends from the bottom pressure (''aeronlay_pbot'') upwards with a fractional scale height (''aeronlay_sclhght''). For a given layer, ''aeronlay_ptop'' is ignored if choice=2, and likewise ''aeronlay_sclhght'' is ignored if choice=1. Finally, you can choose the mean radius of the particles with ''aeronlay_size'' (in metres) and the corresponding effective standard deviation with ''aeronlay_nueff''.&lt;br /&gt;
&lt;br /&gt;
And finally, you will need to provide the name of your aerosols optical properties tables in callphys.def. For instance:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
optprop_aeronlay_vis = optprop_neptune_n2_vis_n30.dat optprop_neptune_n3_vis_n30.dat optprop_ch4_vis.dat&lt;br /&gt;
optprop_aeronlay_ir = optprop_neptune_n2_ir_n30.dat optprop_neptune_n3_ir_n30.dat optprop_ch4_ir.dat&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
(here, the optical properties of Neptune aerosols are used)&lt;br /&gt;
&lt;br /&gt;
We encourage you to search for the keyword &amp;quot;aeronlay&amp;quot; in the source code to use more specific options of the scheme.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
====The generic condensable scheme====&lt;br /&gt;
&lt;br /&gt;
You can use the generic condensable scheme (implemented by Lucas Teinturier) to easily compute the radiative effect of cloud particles formed by condensation.&lt;br /&gt;
The scheme has to be used conjointly with the generic condensation scheme.&lt;br /&gt;
To activate it, one needs to add '''aerogeneric=n''' (with n&amp;gt;0 the number of condensable species handled by the scheme) in callphys.def. On top of that, one needs to add the option '''is_rgcs = 1''' to the solid/ice tracer in traceur.def.&lt;br /&gt;
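&lt;br /&gt;
As a minimal sketch, assuming a single condensable species (the value is illustrative):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
aerogeneric = 1&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
in callphys.def, with the option is_rgcs = 1 set on the corresponding solid/ice tracer in traceur.def (the exact traceur.def syntax is described on the page linked below).&lt;br /&gt;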
&lt;br /&gt;
&lt;br /&gt;
In the model itself, you then need to manually add your aerosol by modifying the suaer_corrk.F90 routine (if your favorite aerosol is not already there) to specify the correct names for your condensing species. A more dynamical/flexible approach will be added at some point, so that one won't need to modify the code directly.&lt;br /&gt;
&lt;br /&gt;
More information on how to use the scheme is provided here: https://lmdz-forge.lmd.jussieu.fr/mediawiki/Planets/index.php/Radiative_Generic_Condensable_Specie&lt;br /&gt;
&lt;br /&gt;
Don’t forget to add your new aerosol species in traceur.def and to adapt the -t option (the number of tracers) at the compilation stage.&lt;br /&gt;
&lt;br /&gt;
== How to Manage Tracers ==&lt;br /&gt;
&lt;br /&gt;
Tracers are managed thanks to the [https://lmdz-forge.lmd.jussieu.fr/mediawiki/Planets/index.php/The_traceur.def_Input_File ''traceur.def''] file.&lt;br /&gt;
&lt;br /&gt;
Specific treatment of some tracers (e.g., the water vapor cycle) can be implemented directly in the model and enabled via an option in the [https://lmdz-forge.lmd.jussieu.fr/mediawiki/Planets/index.php/The_callphys.def_Input_File ''callphys.def''] file.&lt;br /&gt;
&lt;br /&gt;
== Use the Z of LMDZ: Zoomed version ==&lt;br /&gt;
&lt;br /&gt;
The LMDZ dynamical core includes a zoom capability (the &amp;quot;Z&amp;quot; of LMDZ). Its use with the Generic PCM has not been documented here yet.&lt;br /&gt;
&lt;br /&gt;
[[Category:Generic-Model]]&lt;/div&gt;</summary>
		<author><name>Emillour</name></author>	</entry>

	<entry>
		<id>http://lmdz-forge.lmd.jussieu.fr/mediawiki/Planets/index.php?title=The_rcm1d.def_Input_File&amp;diff=2730</id>
		<title>The rcm1d.def Input File</title>
		<link rel="alternate" type="text/html" href="http://lmdz-forge.lmd.jussieu.fr/mediawiki/Planets/index.php?title=The_rcm1d.def_Input_File&amp;diff=2730"/>
				<updated>2025-04-30T13:09:36Z</updated>
		
		<summary type="html">&lt;p&gt;Emillour: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== The run.def input file and its format ==&lt;br /&gt;
&lt;br /&gt;
=== Some general comments and disambiguation to start with ===&lt;br /&gt;
This page specifically focuses on the ''rcm1d.def'' file used by the 1D version of the Generic PCM. &lt;br /&gt;
&lt;br /&gt;
The rcm1d.def file is very similar to the [[The_run.def_Input_File|'''run.def''']] file (in fact, rcm1d.def is copied to run.def during execution of the 1D model), with the difference that the 3D options (linked to the dynamical core) are not used, and that a number of 1D-specific options (information otherwise contained in the start and startfi files) appear in the rcm1d.def file.&lt;br /&gt;
&lt;br /&gt;
== Reference def files ==&lt;br /&gt;
Reference *.def files are provided in the LMDZ.GENERIC/deftank directory.&lt;br /&gt;
&lt;br /&gt;
== Outputted used_*.def files ==&lt;br /&gt;
When the GCM run finishes, for each of the input *.def files, an ASCII output file '''used_*.def''' is generated (in practice, '''used_rcm1d.def''' and '''used_callphys.def''' for 1D simulations). These files contain, in the same format as the *.def input files, the &amp;quot;key = value&amp;quot; pairs that were used, along with comments about whether each &amp;quot;value&amp;quot; was read from the input def file or whether the code default was used (i.e. the sought &amp;quot;keyword&amp;quot; was not present in the input def files).&lt;br /&gt;
&lt;br /&gt;
== Example of ''rcm1d.def'' file ==&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#---------------------------------#&lt;br /&gt;
# Run parameters for the 1D model #                                         &lt;br /&gt;
#---------------------------------#&lt;br /&gt;
&lt;br /&gt;
## Planetary constants&lt;br /&gt;
## ~~~~~~~~~~~~~~~~~~~~~~~&lt;br /&gt;
## NB: those are mandatory&lt;br /&gt;
## ~~~~~~~~~~~~~~~~~~~~~~~&lt;br /&gt;
## LENGTH OF A DAY in s&lt;br /&gt;
daysec         = 86400.&lt;br /&gt;
## GRAVITY in m s-2&lt;br /&gt;
g              = 3.72&lt;br /&gt;
## Radius of the planet, in m&lt;br /&gt;
rad = 3390000&lt;br /&gt;
## LENGTH OF A YEAR in days&lt;br /&gt;
year_day       = 3000&lt;br /&gt;
## MIN DIST STAR-PLANET in AU [periastron]&lt;br /&gt;
periastr       = 1.0&lt;br /&gt;
## MAX DIST STAR-PLANET in AU [apoastron]&lt;br /&gt;
apoastr        = 1.0&lt;br /&gt;
## DATE OF PERIASTRON in days&lt;br /&gt;
peri_day       = 0.&lt;br /&gt;
## OBLIQUITY in deg&lt;br /&gt;
obliquit       = 0.&lt;br /&gt;
## SURFACE PRESSURE in Pa&lt;br /&gt;
psurf          = 100000.&lt;br /&gt;
&lt;br /&gt;
## Time integration parameters&lt;br /&gt;
## ~~~~~~~~~~~~~~~~~~~~~~~~~~~&lt;br /&gt;
# Initial date (in solar days,=0 at Ls=0)&lt;br /&gt;
day0           = 0&lt;br /&gt;
# Initial local time (in hours, between 0 and 24)&lt;br /&gt;
time           = 12 &lt;br /&gt;
# Number of time steps per sol&lt;br /&gt;
day_step       = 40&lt;br /&gt;
# Number of sols to run &lt;br /&gt;
ndt            = 1000&lt;br /&gt;
# Number of steps between each writing in diagfi &lt;br /&gt;
diagfi_output_rate=12&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
## Vertical levels&lt;br /&gt;
## ~~~~~~~~~~~~~~~&lt;br /&gt;
# hybrid vertical coordinate ? (.true. for hybrid and .false. for sigma levels)&lt;br /&gt;
hybrid         = .true.&lt;br /&gt;
# autocompute vertical discretisation? (useful for exoplanet runs)&lt;br /&gt;
autozlevs      = .false.&lt;br /&gt;
# Ceiling pressure (Pa) ?&lt;br /&gt;
pceil          = 0.00001&lt;br /&gt;
&lt;br /&gt;
## Thermal properties&lt;br /&gt;
## ~~~~~~~~~~~~~~~~~~&lt;br /&gt;
# Simulate global averaged conditions ? &lt;br /&gt;
global1d       = .true.&lt;br /&gt;
# Latitude (deg) [only used if global1d = F]&lt;br /&gt;
latitude       = 0.0&lt;br /&gt;
# Solar Zenith angle (deg) [only used if global1d = T]&lt;br /&gt;
szangle        = 60. &lt;br /&gt;
# Force specific heat capacity and molecular mass values&lt;br /&gt;
force_cpp      = .false.&lt;br /&gt;
# Specific heat capacity in J K-1 kg-1 [only used if force_cpp = T]&lt;br /&gt;
cpp            = 0.&lt;br /&gt;
# Molecular mass in g mol-1 [only used if force_cpp = T]&lt;br /&gt;
mugaz          = 18.&lt;br /&gt;
# Albedo of bare ground&lt;br /&gt;
albedo         = 0.1&lt;br /&gt;
# Emissivity of bare ground&lt;br /&gt;
emis           = 1.0&lt;br /&gt;
# Soil thermal inertia (SI)&lt;br /&gt;
inertia        = 1000.&lt;br /&gt;
# Initial CO2 ice on the surface (kg.m-2)&lt;br /&gt;
co2ice         = 0.&lt;br /&gt;
&lt;br /&gt;
## Wind profile&lt;br /&gt;
## ~~~~~~~~~~~~&lt;br /&gt;
## zonal eastward component of the geostrophic wind (m/s)&lt;br /&gt;
u              = 10.&lt;br /&gt;
# meridional northward component of the geostrophic wind (m/s)&lt;br /&gt;
v              = 0.&lt;br /&gt;
&lt;br /&gt;
## Initial atmospheric temperature profile&lt;br /&gt;
## ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~&lt;br /&gt;
#&lt;br /&gt;
# Type of initial temperature profile&lt;br /&gt;
#         ichoice=1   Constant Temperature:  T=tref&lt;br /&gt;
#         [Mars] ichoice=2   Savidjari profile (as Seiff but with dT/dz=cte)&lt;br /&gt;
#         [Mars] ichoice=3   Lindner (polar profile)&lt;br /&gt;
#         [Mars] ichoice=4   inversion&lt;br /&gt;
#         [Mars] ichoice=5   Seiff  (standard profile, based on Viking entry)&lt;br /&gt;
#         ichoice=6   constant T  +  gaussian perturbation (levels)&lt;br /&gt;
#         ichoice=7   constant T  + gaussian perturbation (km)&lt;br /&gt;
#         ichoice=8   Read in an ascii file &amp;quot;profile&amp;quot; &lt;br /&gt;
ichoice        = 1&lt;br /&gt;
# Reference temperature tref (K)&lt;br /&gt;
tref           = 300. &lt;br /&gt;
# Add a perturbation to profile if isin=1&lt;br /&gt;
isin           = 0&lt;br /&gt;
# peak of gaussian perturbation (for ichoice=6 or 7)&lt;br /&gt;
pic            = 26.522&lt;br /&gt;
# width of the gaussian perturbation (for ichoice=6 or 7)&lt;br /&gt;
largeur        = 10&lt;br /&gt;
# height of the gaussian perturbation (for ichoice=6 or 7)&lt;br /&gt;
hauteur        = 30.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
'''Notes''':&lt;br /&gt;
* Lines beginning with a hash sign (#) are comments&lt;br /&gt;
* Values associated with keywords may be logicals, integers, reals or even strings&lt;br /&gt;
* The parsing is case-sensitive: &amp;quot;thisparameter=&amp;quot; and &amp;quot;ThisParameter=&amp;quot; are identified as two distinct keywords&lt;br /&gt;
* The order in which the parameters are given in the file does not matter (except if a parameter is specified multiple times -- clearly a bad idea -- in which case the last occurrence prevails).&lt;br /&gt;
* Accessing a parameter and its value from ''rcm1d.def''/''run.def'' in the Fortran code is done using the '''getin_p''' routine, e.g.:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;fortran&amp;quot;&amp;gt;&lt;br /&gt;
       call getin_p(&amp;quot;keyword&amp;quot;,val)&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
will look for the line with &amp;quot;keyword = &amp;quot; in file ''run.def'' and extract the trailing value, which is used to set the '''val''' variable in the code.&lt;br /&gt;
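&lt;br /&gt;
Note that the value held by '''val''' before the call acts as the default if the keyword is absent from the def files. A minimal sketch (the variable name and default value below are illustrative, mirroring the psurf entry of the example file above):&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;fortran&amp;quot;&amp;gt;&lt;br /&gt;
real :: psurf&lt;br /&gt;
! default value, kept if &amp;quot;psurf&amp;quot; is not found in the def files&lt;br /&gt;
psurf = 100000.&lt;br /&gt;
! overwritten by the value of the &amp;quot;psurf = ...&amp;quot; line, if present&lt;br /&gt;
call getin_p(&amp;quot;psurf&amp;quot;,psurf)&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;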
&lt;br /&gt;
[[Category:Inputs]]&lt;br /&gt;
[[Category:WhatIs]]&lt;br /&gt;
&lt;br /&gt;
[[Category:Generic-LMDZ]]&lt;br /&gt;
[[Category:Generic-1D]]&lt;/div&gt;</summary>
		<author><name>Emillour</name></author>	</entry>

	<entry>
		<id>http://lmdz-forge.lmd.jussieu.fr/mediawiki/Planets/index.php?title=The_diagfi.def_Input_File&amp;diff=2729</id>
		<title>The diagfi.def Input File</title>
		<link rel="alternate" type="text/html" href="http://lmdz-forge.lmd.jussieu.fr/mediawiki/Planets/index.php?title=The_diagfi.def_Input_File&amp;diff=2729"/>
				<updated>2025-04-30T13:03:23Z</updated>
		
		<summary type="html">&lt;p&gt;Emillour: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;The ''diagfi.def'' file is an optional file that is read by the GCM at run time. If it is present then it will dictate which fields will be included in the ''diagfi.nc'' output file.&lt;br /&gt;
&lt;br /&gt;
A clarification: generating the ''diagfi.nc'' output file is only possible when running with the lon-lat (LMDZ.COMMON) dynamical core (or in 1D), with the Generic or Mars physics packages.&lt;br /&gt;
&lt;br /&gt;
== ''diagfi.def'' format ==&lt;br /&gt;
Each line of this ASCII file should contain the name (case sensitive!) of a variable to output.&lt;br /&gt;
&lt;br /&gt;
Note that the variable name to use is the one set in the code when calling the ''write_output'' routine, e.g.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;fortran&amp;quot;&amp;gt;&lt;br /&gt;
call write_output(&amp;quot;temperature&amp;quot;,&amp;quot;atmospheric temperature&amp;quot;,&amp;quot;K&amp;quot;,zt)&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
implies that the variable ''temperature'' can be outputted in ''[[diagfi.nc]]''.&lt;br /&gt;
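&lt;br /&gt;
Correspondingly, for this field to be written, the ''diagfi.def'' file simply has to contain a line with:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
temperature&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;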
&lt;br /&gt;
== Example of ''diagfi.def'' file ==&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
aire&lt;br /&gt;
altitude&lt;br /&gt;
ap&lt;br /&gt;
aps&lt;br /&gt;
ASR&lt;br /&gt;
ASRcs&lt;br /&gt;
beta&lt;br /&gt;
bp&lt;br /&gt;
bps&lt;br /&gt;
CLF&lt;br /&gt;
CLFt&lt;br /&gt;
controle&lt;br /&gt;
Declin&lt;br /&gt;
dt_ekman1&lt;br /&gt;
dt_ekman2&lt;br /&gt;
dt_diff1&lt;br /&gt;
dt_diff2&lt;br /&gt;
DYN&lt;br /&gt;
evap_surf_flux&lt;br /&gt;
fluxsurf_rad&lt;br /&gt;
GND&lt;br /&gt;
h2o_ice&lt;br /&gt;
h2o_ice_col&lt;br /&gt;
h2o_ice_surf&lt;br /&gt;
h2o_vap&lt;br /&gt;
h2o_vap_col&lt;br /&gt;
h2o_vap_surf&lt;br /&gt;
H2Oice_reffcol&lt;br /&gt;
ISR&lt;br /&gt;
latentFlux&lt;br /&gt;
latitude&lt;br /&gt;
longitude&lt;br /&gt;
Ls&lt;br /&gt;
Lss&lt;br /&gt;
mass_evap_col&lt;br /&gt;
OLR&lt;br /&gt;
OLRcs&lt;br /&gt;
p&lt;br /&gt;
pctsrf_sic&lt;br /&gt;
phisinit&lt;br /&gt;
ps&lt;br /&gt;
RA&lt;br /&gt;
rain&lt;br /&gt;
reevap&lt;br /&gt;
RH&lt;br /&gt;
rnat&lt;br /&gt;
sea_ice&lt;br /&gt;
sensibFlux&lt;br /&gt;
shad&lt;br /&gt;
snow&lt;br /&gt;
soildepth&lt;br /&gt;
tau_col&lt;br /&gt;
temp&lt;br /&gt;
Time&lt;br /&gt;
Tsat&lt;br /&gt;
tsea_ice&lt;br /&gt;
tslab1&lt;br /&gt;
tslab2&lt;br /&gt;
tsurf&lt;br /&gt;
u&lt;br /&gt;
v&lt;br /&gt;
w&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Various related comments and remarks ==&lt;br /&gt;
* If there is no ''diagfi.def'' file found at run-time, then '''ALL''' variables will be written in ''diagfi.nc'', which will rapidly become huge. You have been warned. Note however that it is a convenient way to have access to the names of all available variables that may be outputted in ''diagfi.nc''.&lt;br /&gt;
* The rate at which outputs are written in ''diagfi.nc'' is set, for the Generic PCM, by the ''diagfi_output_rate'' parameter, which is usually set in the ''callphys.def'' file and expressed in physics time steps (an illustrative setting is shown below). For the Mars PCM, a different flag, ''outputs_per_sol'', sets the ''diagfi.nc'' output rate.&lt;br /&gt;
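&lt;br /&gt;
For instance, to write outputs every 12 physics time steps (the value is illustrative, matching the 1D example file on the rcm1d.def page):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
diagfi_output_rate = 12&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;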
&lt;br /&gt;
[[Category:Inputs]]&lt;br /&gt;
[[Category:WhatIs]]&lt;br /&gt;
[[Category:Generic-Model]]&lt;br /&gt;
[[Category:Generic-LMDZ]]&lt;br /&gt;
[[Category:Generic-1D]]&lt;br /&gt;
[[Category:Mars-Model]]&lt;br /&gt;
[[Category:Mars-LMDZ]]&lt;br /&gt;
[[Category:Mars-1D]]&lt;/div&gt;</summary>
		<author><name>Emillour</name></author>	</entry>

	<entry>
		<id>http://lmdz-forge.lmd.jussieu.fr/mediawiki/Planets/index.php?title=The_diagfi.def_Input_File&amp;diff=2697</id>
		<title>The diagfi.def Input File</title>
		<link rel="alternate" type="text/html" href="http://lmdz-forge.lmd.jussieu.fr/mediawiki/Planets/index.php?title=The_diagfi.def_Input_File&amp;diff=2697"/>
				<updated>2025-04-30T09:00:56Z</updated>
		
		<summary type="html">&lt;p&gt;Emillour: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;The ''diagfi.def'' file is an optional file that is read by the GCM at run time. If it is present then it will dictate which fields will be included in the ''diagfi.nc'' output file.&lt;br /&gt;
&lt;br /&gt;
A clarification: generating the ''diagfi.nc'' output file is only possible when running with the lon-lat (LMDZ.COMMON) dynamical core (or in 1D), with the Generic or Mars physics packages.&lt;br /&gt;
&lt;br /&gt;
== ''diagfi.def'' format ==&lt;br /&gt;
Each line of this ASCII file should contain the name (case sensitive!) of a variable to output.&lt;br /&gt;
&lt;br /&gt;
Note that the variable name to use is the one set in the code when calling the ''writediagfi'' or ''write_output'' routines, e.g.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;fortran&amp;quot;&amp;gt;&lt;br /&gt;
call writediagfi(ngrid,&amp;quot;temperature&amp;quot;,&amp;quot;temperature&amp;quot;,&amp;quot;K&amp;quot;,3,zt)&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
or &lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;fortran&amp;quot;&amp;gt;&lt;br /&gt;
call write_output(&amp;quot;temperature&amp;quot;,&amp;quot;temperature&amp;quot;,&amp;quot;K&amp;quot;,zt)&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
implies that the variable ''temperature'' can be outputted in ''[[diagfi.nc]]''.&lt;br /&gt;
&lt;br /&gt;
== Example of ''diagfi.def'' file ==&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
aire&lt;br /&gt;
altitude&lt;br /&gt;
ap&lt;br /&gt;
aps&lt;br /&gt;
ASR&lt;br /&gt;
ASRcs&lt;br /&gt;
beta&lt;br /&gt;
bp&lt;br /&gt;
bps&lt;br /&gt;
CLF&lt;br /&gt;
CLFt&lt;br /&gt;
controle&lt;br /&gt;
Declin&lt;br /&gt;
dt_ekman1&lt;br /&gt;
dt_ekman2&lt;br /&gt;
dt_diff1&lt;br /&gt;
dt_diff2&lt;br /&gt;
DYN&lt;br /&gt;
evap_surf_flux&lt;br /&gt;
fluxsurf_rad&lt;br /&gt;
GND&lt;br /&gt;
h2o_ice&lt;br /&gt;
h2o_ice_col&lt;br /&gt;
h2o_ice_surf&lt;br /&gt;
h2o_vap&lt;br /&gt;
h2o_vap_col&lt;br /&gt;
h2o_vap_surf&lt;br /&gt;
H2Oice_reffcol&lt;br /&gt;
ISR&lt;br /&gt;
latentFlux&lt;br /&gt;
latitude&lt;br /&gt;
longitude&lt;br /&gt;
Ls&lt;br /&gt;
Lss&lt;br /&gt;
mass_evap_col&lt;br /&gt;
OLR&lt;br /&gt;
OLRcs&lt;br /&gt;
p&lt;br /&gt;
pctsrf_sic&lt;br /&gt;
phisinit&lt;br /&gt;
ps&lt;br /&gt;
RA&lt;br /&gt;
rain&lt;br /&gt;
reevap&lt;br /&gt;
RH&lt;br /&gt;
rnat&lt;br /&gt;
sea_ice&lt;br /&gt;
sensibFlux&lt;br /&gt;
shad&lt;br /&gt;
snow&lt;br /&gt;
soildepth&lt;br /&gt;
tau_col&lt;br /&gt;
temp&lt;br /&gt;
Time&lt;br /&gt;
Tsat&lt;br /&gt;
tsea_ice&lt;br /&gt;
tslab1&lt;br /&gt;
tslab2&lt;br /&gt;
tsurf&lt;br /&gt;
u&lt;br /&gt;
v&lt;br /&gt;
w&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Various related comments and remarks ==&lt;br /&gt;
* If there is no ''diagfi.def'' file found at run-time, then '''ALL''' variables will be written in ''diagfi.nc'', which will rapidly become huge. You have been warned. Note however that it is a convenient way to have access to the names of all available variables that may be outputted in ''diagfi.nc''.&lt;br /&gt;
* The rate at which outputs are written in ''diagfi.nc'' is set, for the Generic PCM, by the ''diagfi_output_rate'' parameter, which is usually set in the ''callphys.def'' file and expressed in physics time steps. For the Mars PCM, a different flag, ''outputs_per_sol'', sets the ''diagfi.nc'' output rate.&lt;br /&gt;
&lt;br /&gt;
[[Category:Inputs]]&lt;br /&gt;
[[Category:WhatIs]]&lt;br /&gt;
[[Category:Generic-Model]]&lt;br /&gt;
[[Category:Generic-LMDZ]]&lt;br /&gt;
[[Category:Generic-1D]]&lt;br /&gt;
[[Category:Mars-Model]]&lt;br /&gt;
[[Category:Mars-LMDZ]]&lt;/div&gt;</summary>
		<author><name>Emillour</name></author>	</entry>

	</feed>