Advanced Use of the GCM
== Running in parallel ==
For large simulations (long runs, high resolution, etc.), the computational cost can be huge and hence the run time very long.
To overcome this issue, the model can be run in parallel. This, however, requires a few extra steps compared to compiling and running the serial version of the code.
For all the details, see [[Parallelism | the dedicated page]].
== Disambiguation between ifort, mpif90, etc. ==

For users not accustomed to compilers, and/or to compiling and running codes in parallel (namely in MPI mode), there is often some confusion which the following points should help clarify:
* The compiler (typically gfortran, ifort, pgfortran, etc.) is the tool required to compile the Fortran source code and generate an executable. It is strongly recommended that the libraries used by a program be compiled with the same compiler as the program itself. Thus, if you plan to use different compilers to compile the model, note that you should also have at hand versions of the libraries it uses compiled with each of these compilers.
* The MPI (Message Passing Interface) library enables solving problems with multiple processes by providing message passing between the otherwise independent processes. There are a number of MPI libraries out there, e.g. OpenMPI, MPICH or IntelMPI to name a few (check out the [[Building an MPI library]] page for some information about installing one). The important point here is that, on a given machine, the MPI library is tied to a given compiler, and it provides the related wrappers to compile and run with. Typically (but not always) the compiler wrapper is '''mpif90''' and the execution wrapper is '''mpirun'''. If you want to know which compiler is wrapped by the '''mpif90''' compiler wrapper, check the output of:
<pre>
mpif90 --version
</pre>
* In addition, a second type of parallelism, shared-memory parallelism known as OpenMP, is also implemented in the code. In contrast to MPI, OpenMP does not require an external library but is instead implemented as a compiler feature. At run time, one must then set some dedicated environment variables (such as OMP_NUM_THREADS and OMP_STACKSIZE) to specify the number of threads to use per process and their stack size.
* In practice, one should favor compiling and running with both MPI and OpenMP enabled.
* For much more detailed information about compiling and running in parallel, check out [[Parallelism | the page dedicated to Parallelism]].
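For example, the OpenMP environment variables mentioned above can be set as follows before launching a hybrid run (a minimal sketch; the values and the executable name are placeholders to adapt to your machine):

```shell
# Hybrid MPI + OpenMP launch environment (illustrative values).
export OMP_NUM_THREADS=4      # number of OpenMP threads per MPI process
export OMP_STACKSIZE=400M     # stack size allotted to each thread
# Then launch with the MPI execution wrapper, e.g.:
# mpirun -np 8 ./gcm.e
```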
+ | |||
+ | == A word about the IOIPSL and XIOS libraries == | ||
+ | * The IOIPSL (Input Output IPSL) library is a library that has developed by the IPSL community to handle input and outputs of (mostly terrestrial) climate models. For the Generic PCM only a small part of this library is actually used, related to reading and processing the input [[The_run.def_Input_File | run.def]] file. For more details check out the [[The IOIPSL Library]] page. | ||
+ | * The [https://forge.ipsl.jussieu.fr/ioserver/wiki XIOS] (Xml I/O Server) library is based on client-server principles where the server manages the outputs asynchronously from the client (the climate model) so that the bottleneck of writing data in a parallel environment is alleviated. All aspects of the outputs (name, units, file, post-processing operations, etc.) are then controlled by dedicated XML files which are read at run-time. Using XIOS is currently optional (and requires compiling the GCM with the XIOS library). More about the XIOS library, how to install and use it, etc. [[The XIOS Library| here]]. | ||
+ | |||
+ | |||
== Playing with the output files ==

=== Changing the output temporal resolution and time duration ===

* To change the total duration of a simulation, you need to open the [[The_run.def_Input_File | run.def]] file and change the variable 'nday':

<pre>
nday = 1000 # the simulation will run for 1000 days; the associated output files will also cover a total duration of 1000 days
</pre>

Note: in this example, these are not necessarily 1000 Earth days, since it depends on the day duration defined in the start files.

* To change the temporal resolution of the output files, you need to open the [[The_run.def_Input_File | run.def]] file and change the variable 'ecritphy':

<pre>
ecritphy = 200 # the simulation will write variables to the output files every 200 time steps of the simulation
</pre>

Note: the temporal resolution of the output files then also depends on the number of time steps per day ('day_step' variable in the [[The_run.def_Input_File | run.def]] file). In this example:

<pre>
nday = 1000
day_step = 800
ecritphy = 200
</pre>

The output file will provide results every 0.25 days (200/800 of a day, i.e. 4 records per day), for a total duration of 1000 days (so 4000 time values in total).
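The arithmetic above can be checked with a few lines (a sketch; the variable names follow run.def):

```python
# Output cadence implied by the run.def example above.
nday = 1000       # total simulated days
day_step = 800    # time steps per day (day_step in run.def)
ecritphy = 200    # write outputs every 200 time steps

interval_days = ecritphy / day_step           # days between two records
records_per_day = day_step / ecritphy         # records written per day
total_records = nday * day_step // ecritphy   # time values in the output file
```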
+ | |||
+ | === Changing the output variable === | ||
+ | |||
+ | To select the variable provided in the output file diagfi.nc, you simply need to add the list of variables needed in the [[The_diagfi.def_Input_File | diagfi.def]]. | ||
+ | |||
+ | Note for experts: Some technical variables need to be de-commented in 'physiq_mod.F90' file to be written in the output files. | ||
+ | |||
=== Spectral outputs ===

It is possible to produce spectral outputs such as the OLR (Outgoing Longwave Radiation, i.e. the thermal emission of the planet at the top of the atmosphere), the OSR (Outgoing Stellar Radiation, i.e. the light reflected by the planet at the top of the atmosphere), or the GSR (Ground Stellar Radiation, i.e. the light emitted by the star that reaches the surface of the planet).

For this, you need to activate the option 'specOLR' in the [[The_callphys.def_Input_File | callphys.def]] file, as follows:

<pre>
specOLR = .true.
</pre>

The simulation will then create diagspec_VI.nc and diagspec_IR.nc files (along with the standard diagfi.nc file), which contain the spectra of OLR, OSR, GSR, etc.

Note: the resolution of the spectra is defined by that of the correlated-k (opacity) files used for the simulation.

=== Statistical outputs ===

TBD (explain how to compute stats.nc files as well as what is inside)
== How to Change Vertical and Horizontal Resolutions ==

=== When you are using the regular longitude/latitude horizontal grid ===
To run at a grid resolution different from that of the available initial condition files, one needs to use the tools ''newstart.e'' and ''start2archive.e''.
For example, to create initial states at grid resolution 32×24×25 from NetCDF files start and startfi at grid resolution 64×48×32:
* Create the file ''start_archive.nc'' with ''start2archive.e'' compiled at grid resolution 64×48×32, using the old ''z2sig.def'' file used previously.
* Create the files ''restart.nc'' and ''restartfi.nc'' with ''newstart.e'' compiled at grid resolution 32×24×25, using a new ''z2sig.def'' file (more details below on the choice of the ''z2sig.def'').
* While executing ''newstart.e'', choose the answer '0 - from a file start_archive' and then press enter at all other prompts.

==== What you need to ''know'' about the ''z2sig.def'' file ====

For a model with Nlay layers, the [[The_z2sig.def_Input_File | z2sig.def]] file must contain at least Nlay+1 lines (the others not being read).

The first line is a scale height ($$H$$). The following lines are the target pseudo-altitudes of the model levels, from the bottom up ($$z_i$$).
The units do not matter as long as you use the same ones for both.

The model will use these altitudes to compute a target pressure grid ($$p_i$$) as follows:
\begin{align}
\label{def:pseudoalt}
p_i &= p_s \exp(-z_i/H),
\end{align}
where $$p_s$$ is the surface pressure.

As you can see, the scale height and pseudo-altitudes enter the equation only through their ratio, so they do not have to be the real scale height and altitudes of the atmosphere you are simulating.
You can therefore use the same [[The_z2sig.def_Input_File | z2sig.def]] file for different planets.

There is no hard rule for determining the altitude/pressure levels you should use. As a rule of thumb, layers should be thinner near the surface to properly resolve the surface boundary layer. They should then gradually increase in thickness over a couple of scale heights, and transition to constant thickness above that. Of course, some specific applications may require thinner layers in some specific parts of the atmosphere.

A little trick for those who prefer to think in terms of (log-)pressure: if you use $$H = 1/\ln 10 \approx 0.43429448$$, then $$z_i = x$$ corresponds to a pressure difference with the surface of exactly x pressure decades (i.e. at $$z=1$$, $$p=0.1p_s$$). This is particularly useful for giant-planet applications.
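The conversion from pseudo-altitudes to target pressures, including the log-pressure trick, can be sketched as follows (the values are illustrative; this is not the model's actual code):

```python
import math

H = 10.0                          # scale height, first line of z2sig.def
z = [0.1, 0.3, 0.7, 1.5, 3.0]     # pseudo-altitudes, bottom up (same units as H)
ps = 1.0e5                        # surface pressure (Pa)

# Target pressure grid: p_i = ps * exp(-z_i / H)
p = [ps * math.exp(-zi / H) for zi in z]

# Log-pressure trick: with H = 1/ln(10), z_i counts pressure decades,
# so z = 1 corresponds to p = 0.1 * ps exactly.
H_log = 1.0 / math.log(10.0)      # ~0.43429448
p_one_decade = ps * math.exp(-1.0 / H_log)
```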
+ | |||
+ | <!-- [NOT RELEVANT??] If you want to create starts files with tracers for 50 layers using a start archive.nc obtained for 32 layers, do not forget to use the ini_q option in newstart in order to correctly initialize tracers value for layer 33 to layer 50. You just have to answer yes to the question on thermosphere initialization if you want to initialize the thermosphere part only (l=33 to l=50), and no if you want to initialize tracers for all layers (l=0 to l=50). --> | ||
− | + | '''WE SHOULD WRITE A PAGE (LINK HERE) ABOUT HYBRID VERTICAL COORDINATES''' | |
=== When you are using the DYNAMICO icosahedral horizontal grid ===
With the DYNAMICO dynamical core, the horizontal resolution is managed through several settings files, read at run time.
To this purpose, each part of the GCM managing the input/output fields ('''ICOSAGCM''', '''ICOSA_LMDZ''', '''XIOS''') needs to know the input and output grids:

'''1. ''context_lmdz_physics.xml'':'''

Several grid setups are already defined:

<syntaxhighlight lang="xml" line>
<domain_definition>
<domain id="dom_96_95" ni_glo="96" nj_glo="95" type="rectilinear" >
<generate_rectilinear_domain/>
<interpolate_domain order="1"/>
</domain>

<domain id="dom_144_142" ni_glo="144" nj_glo="142" type="rectilinear" >
<generate_rectilinear_domain/>
<interpolate_domain order="1"/>
</domain>

<domain id="dom_512_360" ni_glo="512" nj_glo="360" type="rectilinear" >
<generate_rectilinear_domain/>
<interpolate_domain order="1"/>
</domain>

<domain id="dom_720_360" ni_glo="720" nj_glo="360" type="rectilinear">
<generate_rectilinear_domain/>
<interpolate_domain order="1"/>
</domain>

<domain id="dom_out" domain_ref="dom_720_360"/>
</domain_definition>
</syntaxhighlight>

In this example, the output grid for the physics fields is defined by

<syntaxhighlight lang="xml">
<domain id="dom_out" domain_ref="dom_720_360"/>
</syntaxhighlight>

which is a half-degree horizontal resolution. To change this resolution, you have to change the name of the '''domain_ref''' grid, for instance:

<syntaxhighlight lang="xml">
<domain id="dom_out" domain_ref="dom_96_95"/>
</syntaxhighlight>


'''2. ''run_icosa.def'': the settings file used to execute a simulation'''

In this file, depending on the intended horizontal resolution, you have to set the number of subdivisions of the main triangles.
As a reminder, the icosahedral mesh is divided into several main triangles, and each main triangle is divided into a suitable number of sub-triangles according to the horizontal resolution:

<syntaxhighlight lang="bash" line>
#nbp --> number of subdivisions of a main triangle: integer (default=40)
# nbp = sqrt((nbr_lat x nbr_lon)/10)
# nbp:                 20  40  80  160
# T-edge length (km): 500 250 120  60
# Example: nbp(128x96) = 35  -> 40
#          nbp(256x192)= 70  -> 80
#          nbp(360x720)= 160 -> 160
nbp = 160
</syntaxhighlight>
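The rule of thumb from the comments above can be sketched as follows (rounding to the nearest of the standard values is an assumption consistent with the examples listed):

```python
import math

ALLOWED_NBP = (20, 40, 80, 160)   # usual nbp values listed above

def choose_nbp(nlon, nlat):
    """Estimate nbp for a target lon x lat grid: sqrt((nlat * nlon)/10),
    rounded to the nearest standard value."""
    raw = math.sqrt(nlon * nlat / 10.0)
    return min(ALLOWED_NBP, key=lambda v: abs(v - raw))
```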


If you have chosen the 96_95 output grid in ''context_lmdz_physics.xml'', you have to calculate $$nbp = \sqrt{96 \times 95}/10 \approx 10$$, and in this case:

<syntaxhighlight lang="bash">
nbp = 20
</syntaxhighlight>


After the number of subdivisions of the main triangles, you have to define the number of subdivisions along each direction. At this stage you need to be careful, as the number of subdivisions along each direction:
* needs to be set according to the number of subdivisions of the main triangles, '''nbp''';
* determines the number of processors on which the GCM will be most efficient.

<syntaxhighlight lang="bash" line>
## sub splitting of main rhombus : integer (default=1)
#nsplit_i=1
#nsplit_j=1
#omp_level_size=1
###############################################################
## There must be fewer MPI x OpenMP processes than the 10 x nsplit_i x nsplit_j tiles
## typically for pure MPI runs, let nproc = 10 x nsplit_i x nsplit_j
## it is better to have nbp/nsplit_i > 10 and nbp/nsplit_j > 10
###############################################################
#### 40 nodes of 24 processors = 960 procs
nsplit_i=12
nsplit_j=8

#### 50 nodes of 24 processors = 1200 procs
#nsplit_i=10
#nsplit_j=12
</syntaxhighlight>


With the same example as above, the 96_95 output grid (nbp = 20) requires $$nsplit_i < 2$$ and $$nsplit_j < 2$$ (so that nbp/nsplit_i > 10 and nbp/nsplit_j > 10).
We advise you to select:

<syntaxhighlight lang="bash">
## sub splitting of main rhombus : integer (default=1)
nsplit_i=1
nsplit_j=1
</syntaxhighlight>

and to run on 10 processors (nproc = 10 × nsplit_i × nsplit_j = 10).
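The constraints quoted in the run_icosa.def comments can be checked with a small helper (a sketch; the function and variable names are illustrative):

```python
def check_splitting(nbp, nsplit_i, nsplit_j, nproc):
    """Return (number of tiles, whether the constraints quoted above hold)."""
    tiles = 10 * nsplit_i * nsplit_j       # 10 main rhombi x sub-splittings
    ok = (
        nproc <= tiles                     # no more MPI processes than tiles
        and nbp / nsplit_i > 10            # keep tiles large enough in i
        and nbp / nsplit_j > 10            # and in j
    )
    return tiles, ok

# 96_95 grid (nbp = 20): nsplit_i = nsplit_j = 1, run on 10 processors.
tiles, ok = check_splitting(20, 1, 1, 10)
```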
== How to Change the Topography (or remove it) ==

The generic model can in principle use any type of surface topography, provided that the topographic data file is available in the right format and put in the right place. The surface topography information is contained in the ''startfi.nc'' file, and we have developed tools (see below) to modify ''startfi.nc'' to account for a new surface topography.

To change the surface topography of a simulation, we recommend the following procedure:

* Create the file ''start_archive.nc'' with ''start2archive.e'' compiled at the same (horizontal and vertical) resolution as the ''start.nc'' and ''startfi.nc'' files.
* Create the files ''restart.nc'' and ''restartfi.nc'' with ''newstart.e'' compiled, again, at the same (horizontal and vertical) resolution.
* While executing ''newstart.e'', choose the answer '0 - from a file start_archive' and then press enter at all other prompts.
* At some point, ''newstart.e'' asks you to choose the surface topography you want from the list of files available in your 'datagcm/surface_data/' directory.

We have a repository of surface topography files for Venus, Earth and Mars through time, available at https://web.lmd.jussieu.fr/~lmdz/planets/LMDZ.GENERIC/datagcm/surface_data/. You can download the surface topography files and place them in your 'datagcm/surface_data/' directory.

We also offer a tutorial on designing new topography maps here: https://lmdz-forge.lmd.jussieu.fr/mediawiki/Planets/index.php/Building_Surface_Topography_Files

----

Special note: to remove the topography, you can simply add the following tag in callphys.def (but currently, this only works if ''callsoil=.false.''):

<pre>
nosurf = .true.
</pre>

== How to Change the Stellar Spectrum ==

To simulate the effect of the star's radiation on a given planetary atmosphere, it is necessary to accurately represent the stellar spectrum (spectral shape and total bolometric flux) at the top of this atmosphere. In the model, two different options are available to represent the stellar spectrum of any star.

=== Black Body Stellar Spectra ===

First, it is possible to simply use a black body. In this case, the stellar spectrum depends only on the effective temperature of the star, which is provided to the model.

For this, you need to activate the option 'stelbbody' in the [[The_callphys.def_Input_File | callphys.def]] file, as follows:

<pre>
stelbbody = .true.
</pre>

and then add, also in the [[The_callphys.def_Input_File | callphys.def]] file, the following line:

<pre>
stelTbb = 3500.0
</pre>

to specify the effective temperature of the host star (in this example, an M star with an effective temperature of 3500 K).
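For reference, the black-body spectrum used in this mode is the Planck function at stelTbb; a self-contained sketch for 3500 K (illustrative only, not the PCM's internal routine):

```python
import math

H_PLANCK = 6.62607015e-34   # Planck constant (J s)
C_LIGHT = 2.99792458e8      # speed of light (m/s)
K_BOLTZ = 1.380649e-23      # Boltzmann constant (J/K)

def planck_lambda(wavelength_m, temp_k):
    """Planck spectral radiance B_lambda (W / m^2 / m / sr)."""
    a = 2.0 * H_PLANCK * C_LIGHT**2 / wavelength_m**5
    b = math.exp(H_PLANCK * C_LIGHT / (wavelength_m * K_BOLTZ * temp_k)) - 1.0
    return a / b

# Wien's law: a 3500 K star peaks near 2898 um K / 3500 K ~ 0.83 um.
peak_um = 2898.0 / 3500.0
```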

=== Pre-Tabulated spectra ===

Second, the model can read a file containing any pre-computed stellar spectrum. Traditionally, we have used synthetic spectra from the PHOENIX database, adapted to the Generic PCM by decreasing the spectral resolution (to 10000 points with a fixed spectral resolution of 0.001 micron) and by adapting the units (to W/m2/micron). This is the option that is generally preferred, to better represent the effect of the star (whose real spectrum can strongly deviate from the black-body approximation).

For this, you need to make sure the option 'stelbbody' in the [[The_callphys.def_Input_File | callphys.def]] file is set to .false. (if not specified, stelbbody is assumed to be .false. by default).

Then you need to add, in the [[The_callphys.def_Input_File | callphys.def]] file, the following line:

<pre>
startype = 1
</pre>

and change the value of startype depending on the star you want to model (here 1 means the solar spectrum is used).

To know which stellar spectra are available, open the file LMDZ.GENERIC/libf/phystd/ave_stelspec.F90 and adapt the value of startype accordingly. You also need to make sure the spectra are available in your /datadir/stellar_spectra (or /datagcm/stellar_spectra) directory.

To calculate the true stellar spectrum at the top of the atmosphere, the Generic PCM renormalizes the stellar spectrum by the bolometric flux at 1 Astronomical Unit (AU) provided by the user, and then converts it into the true stellar spectrum using the star-planet distance.

To specify the flux at 1 AU, you need to add, in the [[The_callphys.def_Input_File | callphys.def]] file, the following line:

<pre>
Fat1AU = 1366.0
</pre>

(here 1366 W/m2 corresponds to the flux at 1 AU for the Sun)
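The rescaling described above amounts to an inverse-square law (a sketch; the function name is illustrative):

```python
Fat1AU = 1366.0   # bolometric flux at 1 AU (W/m2), solar value as above

def flux_at(distance_au):
    """Bolometric flux received at a given star-planet distance (in AU)."""
    return Fat1AU / distance_au**2

# A planet at 1.52 AU (a Mars-like orbit) receives about 591 W/m2.
f_mars = flux_at(1.52)
```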

1st NOTE: We will improve this second part (Martin and Mathilde) by the end of 2022.

2nd NOTE: The Generic PCM eventually has the capability to run without any stellar flux. To do that, you can simply put Fat1AU = 0. (@LUCAS_TEINTURIER, could you check that?)
== How to Change the Opacity Tables ==

'''SECTION AND PAGES UNDER CONSTRUCTION'''

The model uses opacity tables to compute heating rates throughout the atmosphere. These opacity tables are generated "offline" for a given set of pressures and temperatures, a given composition and a specific spectral decomposition.


=== Getting opacity tables for your desired atmospheric composition ===

* You should first look in our common repository (https://web.lmd.jussieu.fr/~lmdz/planets/LMDZ.GENERIC/datagcm/corrk_data/) to check whether your desired opacity table is not already included.

We have a dedicated page on how to build new opacity tables here: https://lmdz-forge.lmd.jussieu.fr/mediawiki/Planets/index.php/Building_Opacity_Tables

=== Implementing your opacity tables in the Generic PCM ===

To implement your opacity tables in the Generic PCM, follow these steps:

* copy the directory containing your opacity tables in .... It has to contain... [[Building_Opacity_Tables | you need to build opacity tables]]
* change corrkdir = ... in [[The_callphys.def_Input_File | callphys.def]] to the name of that directory.
* change gases.def: it has to be consistent with Q.dat. A special case concerns...
* change the -b option when compiling the model with makelmdz_fcm: it has to correspond to the number of bands (in the IR × in the visible) of the new opacity tables. For instance, compile with -b 38x26 if you used 38 bands in the infrared and 26 in the visible to generate the opacity tables.
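Putting the last step together, the compilation command might look like this (a sketch: the architecture, physics package and grid options are placeholders to adapt to your setup; only the -b value is prescribed by the opacity tables):

```shell
# Recompile so the spectral dimensions match the new opacity tables
# (38 infrared bands x 26 visible bands in this example).
./makelmdz_fcm -arch local -p std -d 64x48x29 -b 38x26 gcm
```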

== How to Change the Aerosols Optical Properties ==

Aerosol optical properties are represented using three distinct quantities: the extinction coefficient (Q_ext), the single scattering albedo (omega) and the asymmetry factor (g).

The Generic PCM can compute the radiative effects of any aerosol, provided that its optical properties (Q_ext, omega, g) are tabulated and provided in the right format.

=== Getting optical properties for your aerosols ===

* You should first look in our common repository (https://web.lmd.jussieu.fr/~lmdz/planets/LMDZ.GENERIC/datagcm/aerosol_properties/) to check whether your favorite aerosol is not already included. The optical properties of each aerosol are built from two distinct files: one in the 'visible' (used in the visible part of the radiative transfer, to compute the fate of stellar radiation) and one in the 'infrared' (used in the thermal infrared part of the radiative transfer, to compute the fate of thermal emission by the surface and atmosphere).

For instance, if you want to include the radiative effect of CO2 ice clouds, then you just need the files:

- optprop_co2ice_vis_n50.dat

- optprop_co2ice_ir_n50.dat

* Otherwise, you can create your own tables of optical properties using existing databases. Check this page to learn how to do this: https://lmdz-forge.lmd.jussieu.fr/mediawiki/Planets/index.php/Building_Tables_Of_Aerosol_Optical_Properties

=== Implementing your aerosols in the Generic PCM ===

Before including the aerosol scheme you want to use, you need to indicate the number of radiatively active aerosols in callphys.def with the option '''naerkind=#number_of_aerosols'''.

There are two aerosol schemes available to add a new aerosol in the Generic PCM:

==== The n-layer aerosol scheme ====

You can use the n-layer scheme (implemented by Jan Vatant d'Ollone) to easily prescribe an aerosol vertical distribution. The scheme is activated by adding ''aeronlay=.true.'' in callphys.def.

In this scheme, each layer can have different optical properties, particle sizes, etc. You can select different options of the scheme (e.g. to fix the aerosol distribution between two atmospheric pressures) with ''aeronlay_choice = 1'', 2, etc.

First, you have to specify the number of aerosols handled by this scheme. For instance:
<pre>
nlayaero = 3
</pre>

Then, you can indicate the properties of your aerosol layer(s), one value per layer on the same line. For 3 aerosol layers, we have:

<pre>
aeronlay_tauref  = 1.0 0.05 0.03
aeronlay_lamref  = 0.8e-6 0.8e-6 0.8e-6
aeronlay_choice  = 2 2 2
aeronlay_pbot    = 2.0e5 1.6e5 0.2e5
aeronlay_ptop    = 0.10e5 2.0e5 1.
aeronlay_sclhght = 0.1 2.0 0.1
aeronlay_size    = 0.8e-6 0.05e-6 2.5e-6
aeronlay_nueff   = 0.3 0.3 0.3
</pre>

''aeronlay_tauref'' is the optical depth at the reference wavelength ''aeronlay_lamref'' (in metres). ''aeronlay_choice'' selects the vertical distribution: with ''aeronlay_choice=1'', the aerosol is distributed between the bottom pressure (''aeronlay_pbot'') and the top pressure (''aeronlay_ptop'') of the layer; with ''aeronlay_choice=2'', it extends from a bottom pressure (''aeronlay_pbot'') upwards, with a fractional scale height (''aeronlay_sclhght''). For a given layer, ''aeronlay_ptop'' is ignored if choice=2, and ''aeronlay_sclhght'' is ignored if choice=1. You can choose the mean radius of the particles with ''aeronlay_size'' (in metres) and the corresponding effective standard deviation with ''aeronlay_nueff''.
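Since every aeronlay_* entry must provide one value per aerosol layer, a quick consistency check on a callphys.def fragment like the one above can save debugging time (a sketch; the parsing is illustrative, not the model's actual reader):

```python
# The callphys.def fragment above, as key -> value strings.
settings = {
    "nlayaero": "3",
    "aeronlay_tauref": "1.0 0.05 0.03",
    "aeronlay_lamref": "0.8e-6 0.8e-6 0.8e-6",
    "aeronlay_choice": "2 2 2",
    "aeronlay_pbot": "2.0e5 1.6e5 0.2e5",
    "aeronlay_size": "0.8e-6 0.05e-6 2.5e-6",
    "aeronlay_nueff": "0.3 0.3 0.3",
}

nlayaero = int(settings["nlayaero"])
# Every aeronlay_* line must list exactly nlayaero values.
consistent = all(
    len(value.split()) == nlayaero
    for key, value in settings.items()
    if key.startswith("aeronlay_")
)
```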

Finally, you will need to provide the names of your aerosol optical property tables in callphys.def. For instance:

<pre>
optprop_aeronlay_vis = optprop_neptune_n2_vis_n30.dat optprop_neptune_n3_vis_n30.dat optprop_ch4_vis.dat
optprop_aeronlay_ir = optprop_neptune_n2_ir_n30.dat optprop_neptune_n3_ir_n30.dat optprop_ch4_ir.dat
</pre>
(here, using the optical properties of aerosols for Neptune)

We encourage you to search for the keyword "aeronlay" in the source code to use more specific options of the scheme.


==== The generic condensable scheme ====

You can use the generic condensable scheme (implemented by Lucas Teinturier) to easily compute the radiative effect of cloud particles formed by condensation.
This scheme has to be used in conjunction with the generic condensation scheme.
To activate it, one needs to add '''aerogeneric=n''' (with n>0 the number of condensable species handled by the scheme) in callphys.def. On top of that, one needs to add the option '''is_rgcs = 1''' to the solid/ice tracer in traceur.def.

In the model itself, one then needs to manually add the aerosol by modifying the suaer_corrk.F90 routine (if your favorite aerosol is not already there) to specify the correct names for your condensing species. A more dynamical/flexible approach will be added at some point, so that one won't need to directly modify the code.

More information on how to use the scheme is provided here: https://lmdz-forge.lmd.jussieu.fr/mediawiki/Planets/index.php/Radiative_Generic_Condensable_Specie

Don't forget to add your new aerosol species in traceur.def and to adapt the -t option (with the correct number of radiatively active aerosols) at the compilation stage.
== How to Manage Tracers ==

Tracers are managed through the [https://lmdz-forge.lmd.jussieu.fr/mediawiki/Planets/index.php/The_traceur.def_Input_File ''traceur.def''] file.

Specific treatment of some tracers (e.g., the water vapor cycle) can be added directly in the model, with an option added in the [https://lmdz-forge.lmd.jussieu.fr/mediawiki/Planets/index.php/The_callphys.def_Input_File ''callphys.def''] file.

== Use the Z of LMDZ: Zoomed version ==

Do we need this? Has anyone already made use of the zoom module?

[[Category:Generic-Model]]
Latest revision as of 17:20, 19 June 2024
Contents
- 1 Running in parallel
- 2 Disambiguation between ifort, mpif90, etc.
- 3 A word about the IOIPSL and XIOS libraries
- 4 Playing with the output files
- 5 How to Change Vertical and Horizontal Resolutions
- 6 How to Change the Topography (or remove it)
- 7 How to Change the Stellar Spectrum
- 8 How to Change the Opacity Tables
- 9 How to Change the Aerosols Optical Properties
- 10 How to Manage Tracers
- 11 Use the Z of LMDZ : Zoomed version
Running in parallel
For large simulation (long run, high resolution etc...), the computational cost can be huge and hence the run time very long. To overcome this issue, the model can be run in parallel. This however requires a few extra steps (compared to compiling and running the serial version of the code). For all the details see the dedicated page.
Disambiguation between ifort, mpif90, etc.
For users not used to compilers and/or compiling and running codes in parallel, namely in MPI mode, there is often some confusion which hopefully the following paragraph might help clarify:
- the compiler (typically gfortran, ifort, pgfortran, etc.) is the required tool to compile the Fortran source code and generate an executable. It is strongly recommended that libraries used by a program are also compiled using the same compiler. Thus if you plan to use different compilers to compile the model, note that you should also have at hand versions of the libraries it uses also compiled with these compilers.
- the MPI (Message Passing Interface) library is a library used to solve problems using multiple processes by enabling message-passing between the otherwise independent processes. There are a number of available MPI libraries out there, e.g. OpenMPI, MPICH or IntelMPI to name a few (you can check out the Building an MPI library page for some information about installing an MPI library). The important point here is that on a given machine the MPI library is related to a given compiler and that it provides related wrappers to compile and run with. Typically (but not always) the compiler wrapper is mpif90 and the execution wrapper is mpirun. If you want to know which compiler is wrapped in the mpif90 compiler wrapper, check out the output of:
mpif90 --version
- In addition a second type of parallelism, shared memory parallelism known as OpenMP, is also implemented in the code. In contradistinction to MPI, OpenMP does not require an external library but is instead implemented as a compiler feature. At run time one must then specify some dedicated environment variables (such as OMP_NUM_THREADS and OMP_STACKSIZE) to specify the number of threads to use per process.
- In practice one should favor compiling and running with both MPI and OPenMP enabled.
- For much more detailed information about compiling and running in parallel, check out the the page dedicated to Parallelism.
== A word about the IOIPSL and XIOS libraries ==
* The IOIPSL (Input Output IPSL) library is a library that has been developed by the IPSL community to handle the inputs and outputs of (mostly terrestrial) climate models. For the Generic PCM, only a small part of this library is actually used, related to reading and processing the input run.def file. For more details, check out the [[The IOIPSL Library]] page.
* The XIOS (Xml I/O Server) library is based on client-server principles, where the server manages the outputs asynchronously from the client (the climate model), so that the bottleneck of writing data in a parallel environment is alleviated. All aspects of the outputs (name, units, file, post-processing operations, etc.) are then controlled by dedicated XML files which are read at run time. Using XIOS is currently optional (and requires compiling the GCM with the XIOS library). More about the XIOS library, how to install and use it, etc. here.
== Playing with the output files ==

=== Changing the output temporal resolution and time duration ===
* To change the total duration of a simulation, you need to open the run.def file and change the variable '''nday''':
 nday = 1000 # the simulation will run for 1000 days; the associated output files will also cover a total duration of 1000 days
Note: in this example, these are not necessarily 1000 Earth days, since the duration of a day is defined in the start files.
* To change the temporal resolution of the output files, you need to open the run.def file and change the variable '''ecritphy''':
 ecritphy = 200 # variables are written to the output files every 200 time steps of the simulation
Note: the output temporal resolution also depends on the number of time steps per day ('''day_step''' variable in the run.def file). In this example:
 nday = 1000
 day_step = 800
 ecritphy = 200
the output file will provide results every 0.25 day (200/800), for a total duration of 1000 days (i.e. 4000 time values in total).
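The arithmetic above can be checked with a short sketch (variable names mirror the run.def settings; this is an illustration, not model code):

```python
# Output cadence implied by the run.def settings above
nday = 1000      # total simulated days
day_step = 800   # time steps per day
ecritphy = 200   # write outputs every 'ecritphy' time steps

interval_days = ecritphy / day_step        # days between two consecutive outputs
n_records = nday * day_step // ecritphy    # total number of time values in the file
print(interval_days, n_records)            # 0.25 4000
```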
=== Changing the output variables ===
To select the variables written to the output file diagfi.nc, you simply need to list the variables you want in the diagfi.def file.
Note for experts: some technical variables need to be un-commented in the 'physiq_mod.F90' file to be written to the output files.
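For instance, a minimal diagfi.def could simply list one variable name per line. The names below are common output variables, given as an example; check 'physiq_mod.F90' for the variables actually available in your build:

```
ps
temp
u
v
```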
=== Spectral outputs ===
It is possible to provide spectral outputs such as the OLR (Outgoing Longwave Radiation, i.e. the thermal emission of the planet at the top of the atmosphere), the OSR (Outgoing Stellar Radiation, i.e. the light reflected by the planet at the top of the atmosphere), or the GSR (Ground Stellar Radiation, i.e. the light emitted by the star that reaches the surface of the planet).
For this, you need to activate the option '''specOLR''' in the callphys.def file, as follows:
 specOLR = .true.
The simulation will then create diagspec_VI.nc and diagspec_IR.nc files (along with the standard diagfi.nc file), which contain the spectra of the OLR, OSR, GSR, etc.
Note: The resolution of the spectra is defined by that of the correlated-k (opacity) files used for the simulation.
=== Statistical outputs ===
TBD (explain how to compute stats.nc files as well as what is inside)
== How to Change Vertical and Horizontal Resolutions ==

=== When you are using the regular longitude/latitude horizontal grid ===
To run at a different grid resolution than that of the available initial condition files, one needs to use the tools newstart.e and start2archive.e.
For example, to create initial states at grid resolution 32×24×25 from NetCDF files start and startfi at grid resolution 64×48×32:
* Create file start_archive.nc with start2archive.e compiled at grid resolution 64×48×32, using the z2sig.def file used previously.
* Create files restart.nc and restartfi.nc with newstart.e compiled at grid resolution 32×24×25, using a new z2sig.def file (more details below on the choice of the z2sig.def).
* While executing newstart.e, choose the answer '0 - from a file start_archive' and then press enter for all other requests.
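Schematically, the session looks as follows (paths and prompts are indicative; both executables must have been compiled beforehand at the resolutions indicated):

```shell
# Step 1: with start2archive.e compiled at 64x48x32, in the directory
# containing start.nc, startfi.nc and the old z2sig.def:
./start2archive.e     # produces start_archive.nc

# Step 2: with newstart.e compiled at 32x24x25 and the new z2sig.def:
./newstart.e          # answer '0 - from a file start_archive', then press
                      # enter for all other requests; produces restart.nc
                      # and restartfi.nc
```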
=== What you need to know about the z2sig.def file ===
For a model with Nlay layers, the z2sig.def file must contain at least Nlay+1 lines (any additional lines are not read).
The first line is a scale height ($$H$$). The following lines are the target pseudo-altitudes of the model levels, from the bottom up ($$z_i$$). The units do not matter as long as you use the same ones for both.
The model will use these altitudes to compute a target pressure grid ($$p_i$$) as follows: $$p_i = p_s \exp(-z_i/H)$$, where $$p_s$$ is the surface pressure.
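As a sketch, the pressure grid implied by a z2sig.def can be reproduced as follows (the scale height, pseudo-altitudes and surface pressure below are arbitrary example values, not recommended settings):

```python
import math

H = 10.0                              # scale height (first line of z2sig.def)
z = [0.1, 0.5, 1.5, 4.0, 10.0, 25.0]  # pseudo-altitudes, bottom up (same units as H)
ps = 1.0e5                            # surface pressure (Pa)

# p_i = ps * exp(-z_i / H)
p = [ps * math.exp(-zi / H) for zi in z]
```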
As you can see, the scale height and pseudo-altitudes enter the equation only through their ratio, so they do not have to be the real scale height and altitudes of the atmosphere you are simulating. You can therefore use the same z2sig.def for different planets.
There is no hard rule for determining the altitude/pressure levels you should use. As a rule of thumb, layers should be thinner near the surface to properly resolve the surface boundary layer. They should then gradually increase in thickness over a couple of scale heights, and transition to constant thickness above that. Of course, some specific applications may require thinner layers in specific parts of the atmosphere.
A little trick for those who prefer to think in terms of (log)pressure: if you use $$H= 1/\ln 10 \approx 0.43429448$$, then $$z_i=x$$ corresponds to a pressure difference with the surface of exactly x pressure decades (i.e. at $$z=1$$, $$p=0.1p_s$$). This is particularly useful for giant-planet applications.
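A quick numerical check of the log-pressure trick mentioned above:

```python
import math

H = 1.0 / math.log(10.0)   # ~ 0.43429448
ps = 1.0e5                 # surface pressure
z = 1.0                    # one pressure decade below the surface
p = ps * math.exp(-z / H)  # should be 0.1 * ps
```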
WE SHOULD WRITE A PAGE (LINK HERE) ABOUT HYBRID VERTICAL COORDINATES
=== When you are using the DYNAMICO icosahedral horizontal grid ===
The horizontal resolution for the DYNAMICO dynamical core is managed from several setting files, online during the execution. For this purpose, each part of the GCM managing the input/output fields (ICOSAGCM, ICOSA_LMDZ, XIOS) needs to know the input and output grids:
1. context_lmdz_physics.xml:
You can find several grid setup already defined:
 <domain_definition>
   <domain id="dom_96_95" ni_glo="96" nj_glo="95" type="rectilinear" >
     <generate_rectilinear_domain/>
     <interpolate_domain order="1"/>
   </domain>
 
   <domain id="dom_144_142" ni_glo="144" nj_glo="142" type="rectilinear" >
     <generate_rectilinear_domain/>
     <interpolate_domain order="1"/>
   </domain>
 
   <domain id="dom_512_360" ni_glo="512" nj_glo="360" type="rectilinear" >
     <generate_rectilinear_domain/>
     <interpolate_domain order="1"/>
   </domain>
 
   <domain id="dom_720_360" ni_glo="720" nj_glo="360" type="rectilinear">
     <generate_rectilinear_domain/>
     <interpolate_domain order="1"/>
   </domain>
 
   <domain id="dom_out" domain_ref="dom_720_360"/>
 </domain_definition>
In this example, the output grid for the physics fields is defined by
 <domain id="dom_out" domain_ref="dom_720_360"/>
which is a half-degree horizontal resolution. To change this resolution, you have to change the name of the domain_ref grid, for instance:
 <domain id="dom_out" domain_ref="dom_96_95"/>
2. run_icosa.def: setting file to execute a simulation
In this file, depending on the intended horizontal resolution, you have to set the number of subdivisions of the main triangle. As a reminder, the mesh is divided into several main triangles, and each main triangle is divided into a suitable number of sub-triangles according to the horizontal resolution:
 #nbp --> number of subdivisions of a main triangle: integer (default=40)
 # nbp = sqrt((nbr_lat x nbr_lon)/10)
 # nbp: 20 40 80 160
 # T-edge length (km): 500 250 120 60
 # Example: nbp(128x96) = 35 -> 40
 #          nbp(256x192)= 70 -> 80
 #          nbp(360x720)= 160 -> 160
 nbp = 160
If you have chosen the 96_95 output grid in context_lmdz_physics.xml, you have to calculate $$nbp = \sqrt{96 \times 95}/10 \approx 10$$ and in this case
 nbp = 20
After the number of subdivisions of the main triangle, you have to define the number of subdivisions in each direction. At this stage you need to be careful, as the number of subdivisions in each direction:
* needs to be set according to the number of subdivisions of the main triangle nbp
* will determine the number of processors on which the GCM will be most effective
 ## sub splitting of main rhombus : integer (default=1)
 #nsplit_i=1
 #nsplit_j=1
 #omp_level_size=1
 ###############################################################
 ## There must be fewer MPIxOpenMP processes than the 10 x nsplit_i x nsplit_j tiles
 ## typically for pure MPI runs, let nproc = 10 x nsplit_i x nsplit_j
 ## it is better to have nbp/nsplit_i > 10 and nbp/nsplit_j > 10
 ###############################################################
 #### 40 nodes of 24 processors = 960 procs
 nsplit_i=12
 nsplit_j=8
 
 #### 50 nodes of 24 processors = 1200 procs
 #nsplit_i=10
 #nsplit_j=12
With the same example as above, the 96_95 output grid requires:
$$nsplit_i < 2$$ and $$nsplit_j < 2$$
We advise you to select:
 ## sub splitting of main rhombus : integer (default=1)
 nsplit_i=1
 nsplit_j=1
and to use 10 processors.
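The constraints quoted from run_icosa.def can be checked numerically (an illustration of the comments above, not code from the model):

```python
# Domain-splitting sanity check for the 96_95 example (nbp = 20)
nbp = 20
nsplit_i, nsplit_j = 1, 1

n_tiles = 10 * nsplit_i * nsplit_j   # the sphere is split into 10 main rhombi
assert nbp / nsplit_i > 10           # rule of thumb from run_icosa.def
assert nbp / nsplit_j > 10
print(n_tiles)                       # number of processors for a pure MPI run
```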
== How to Change the Topography (or remove it) ==
The generic model can in principle use any type of surface topography, provided that the topographic data file is available in the right format and put in the right place. The surface topography information is contained in startfi.nc, and we have developed tools (see below) to modify startfi.nc to account for a new surface topography.
To change the surface topography of a simulation, we recommend following the procedure detailed below:
* Create file start_archive.nc with start2archive.e compiled at the same (horizontal and vertical) resolution as the start.nc and startfi.nc files.
* Create files restart.nc and restartfi.nc with newstart.e compiled, again, at the same (horizontal and vertical) resolution.
* While executing newstart.e, choose the answer '0 - from a file start_archive' and then press enter for all other requests.
* At some point, newstart.e asks you to choose the surface topography you want from the list of files available in your 'datagcm/surface_data/' directory.
We have a repository of surface topography files for Venus, Earth and Mars through time, available at https://web.lmd.jussieu.fr/~lmdz/planets/LMDZ.GENERIC/datagcm/surface_data/. You can download the surface topography files and place them in your 'datagcm/surface_data/' directory.
We also offer a tutorial to design new topography maps here: https://lmdz-forge.lmd.jussieu.fr/mediawiki/Planets/index.php/Building_Surface_Topography_Files
Special note: To remove the topography, you can simply add the following tag in callphys.def (but currently, this only works if callsoil=.false.):
 nosurf = .true.
== How to Change the Stellar Spectrum ==
To simulate the effect of the star's radiation on a given planetary atmosphere, it is necessary to accurately represent the stellar spectrum (spectral shape and total bolometric flux) at the top of this atmosphere. In the model, we have set up two different options to model the stellar spectra of any star.
=== Black Body Stellar Spectra ===
First, it is possible to simply use a black body. In this case, the stellar spectrum depends only on the effective temperature of the star which is provided to the model.
For this, you need to activate the option 'stelbbody' in the callphys.def file, as follows:
 stelbbody = .true.
and then add, also in the callphys.def file, the following line:
 stelTbb = 3500.0
to specify the effective temperature of the host star (in this example, an M-star with an effective temperature of 3500 K).
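With stelbbody activated, the spectral shape is entirely set by the effective temperature via Planck's law. For illustration (standard physics, not code extracted from the model):

```python
import math

h = 6.62607015e-34   # Planck constant (J s)
c = 2.99792458e8     # speed of light (m/s)
kB = 1.380649e-23    # Boltzmann constant (J/K)

def planck(lam, T):
    """Black-body spectral radiance B_lambda in W m^-3 sr^-1."""
    return (2.0 * h * c**2 / lam**5) / math.expm1(h * c / (lam * kB * T))

# A 3500 K star emits more strongly near 1 micron (close to its Wien peak)
# than at 3 microns:
b1 = planck(1.0e-6, 3500.0)
b3 = planck(3.0e-6, 3500.0)
```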
=== Pre-Tabulated spectra ===
Second, the model can read a file containing any pre-computed stellar spectrum. Traditionally, we have used synthetic spectra from the PHOENIX database, which we adapt to the Generic PCM by decreasing the spectral resolution (we use 10000 points with a fixed spectral resolution of 0.001 micron) and by adapting the units (W/m2/micron). This option is generally preferred, as it better represents the effect of the star (whose real spectrum can strongly deviate from the black-body approximation).
For this, you need to make sure the option '''stelbbody''' in the callphys.def file is set to .false. (if not specified, stelbbody is assumed to be .false. by default).
Then you need to add, in the callphys.def file, the following line:
 startype = 1
and change the value of startype depending on the star you want to model (here, 1 means we use the solar spectrum).
To know which stellar spectra are available, open the file LMDZ.GENERIC/libf/phystd/ave_stelspec.F90 and adapt the value of startype accordingly. Also make sure the spectra are available in your /datadir/stellar_spectra (or /datagcm/stellar_spectra) directory.
To calculate the true stellar spectrum at the top of the atmosphere, the Generic PCM renormalizes the stellar spectrum by the bolometric flux at 1 Astronomical Unit (AU) provided by the user, which it then converts into the true stellar spectrum by using the star-planet distance.
To specify the flux at 1 AU, you need to add, in the callphys.def file, the following line:
 Fat1AU = 1366.0
(here, 1366 W/m2 corresponds to the flux at 1 AU for the Sun)
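The renormalization described above amounts to an inverse-square scaling with the star-planet distance (a sketch of the principle, not the model's actual routine):

```python
# Stellar flux at the planet from the bolometric flux at 1 AU
def flux_at_planet(flux_at_1AU, dist_AU):
    """Inverse-square scaling of the bolometric flux with distance in AU."""
    return flux_at_1AU / dist_AU**2

f_1au = flux_at_planet(1366.0, 1.0)   # Sun's flux at 1 AU
f_2au = flux_at_planet(1366.0, 2.0)   # same star seen from 2 AU
```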
1st NOTE: We will improve this second part (Martin and Mathilde) by the end of 2022.
2nd NOTE: The Generic PCM eventually has the capability to run without any stellar flux. To do that, you can simply put Fat1AU = 0. (@LUCAS_TEINTURIER, could you check that?)
== How to Change the Opacity Tables ==
SECTION AND PAGES UNDER CONSTRUCTION
The model uses opacity tables to compute heating rates throughout the atmosphere. These opacity tables are generated "offline" for a given set of pressures, temperatures, a given composition and a specific spectral decomposition.
=== Getting opacity tables for your desired atmospheric composition ===
* You should first check our common repository (https://web.lmd.jussieu.fr/~lmdz/planets/LMDZ.GENERIC/datagcm/corrk_data/) to see whether your desired opacity table is already included.
* We have a dedicated page on how to build new opacity tables here: https://lmdz-forge.lmd.jussieu.fr/mediawiki/Planets/index.php/Building_Opacity_Tables
=== Implementing your opacity tables in the Generic PCM ===
To implement your opacity tables in the model, follow these steps:
* copy the directory containing your opacity tables in .... It has to contain...
* change corrkdir = ... in callphys.def to the name of that directory.
* change gases.def: it has to be consistent with Q.dat. A special case concerns...
* change the -b option when compiling the model with makelmdz_fcm: it has to correspond to the number of bands (in the IR x in the visible) of the new opacity tables. For instance, compile with -b 38x26 if you used 38 bands in the infrared and 26 in the visible to generate the opacity tables.
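For instance, a compilation line might look like this (only the -b value is the point here; the other options are placeholders that depend on your setup):

```shell
# Hypothetical compilation line: '-b 38x26' must match the 38 IR x 26 visible
# bands of the new opacity tables; other options depend on your machine.
./makelmdz_fcm -arch local -parallel mpi -b 38x26 -d 64x48x32 gcm
```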
== How to Change the Aerosol Optical Properties ==
Aerosol optical properties are represented using three distinct quantities: the extinction coefficient (Q_ext), the single scattering albedo (omega) and the asymmetry factor (g).
The Generic PCM can compute the radiative effects of any aerosol, provided that its optical properties (Q_ext, omega, g) are tabulated and provided in the right format.
=== Getting optical properties for your aerosols ===
* You should first check our common repository (https://web.lmd.jussieu.fr/~lmdz/planets/LMDZ.GENERIC/datagcm/aerosol_properties/) to see whether your favorite aerosol is already included. The optical properties of each aerosol are provided in two distinct files: one in the 'visible' (used in the visible part of the radiative transfer, to compute the fate of stellar radiation) and one in the 'infrared' (used in the thermal infrared part of the radiative transfer, to compute the fate of thermal emission by the surface and atmosphere).
For instance, if you want to include the radiative effect of CO2 ice clouds, you just need the files:
 optprop_co2ice_vis_n50.dat
 optprop_co2ice_ir_n50.dat
* Otherwise, you can create your own tables of optical properties using existing databases. Check this page to learn how to do this: https://lmdz-forge.lmd.jussieu.fr/mediawiki/Planets/index.php/Building_Tables_Of_Aerosol_Optical_Properties
=== Implementing your aerosols in the Generic PCM ===
Before including the aerosol scheme you want to use, you need to indicate the number of radiatively active aerosols in callphys.def with the option naerkind=#number_of_aerosols.
There are two aerosol schemes available to add a new aerosol in the Generic PCM:
==== The n-layer aerosol scheme ====
You can use the n-layer scheme (implemented by Jan Vatant d'Ollone) to easily prescribe an aerosol vertical distribution. The scheme is activated by adding aeronlay=.true. in callphys.def.
In this scheme, each layer can have different optical properties, particle sizes, etc. You can use different options (e.g. to fix the aerosol distribution between two atmospheric pressures) of the scheme with aeronlay_choice = 1, 2, etc.
First, you have to specify the number of aerosol layers in this scheme. For instance:
 nlayaero = 3
Then, you can indicate the properties of your aerosol layer(s), one value per layer on each line. For 3 aerosol layers, we have:
 aeronlay_tauref  = 1.0 0.05 0.03
 aeronlay_lamref  = 0.8e-6 0.8e-6 0.8e-6
 aeronlay_choice  = 2 2 2
 aeronlay_pbot    = 2.0e5 1.6e5 0.2e5
 aeronlay_ptop    = 0.10e5 2.0e5 1.
 aeronlay_sclhght = 0.1 2.0 0.1
 aeronlay_size    = 0.8e-6 0.05e-6 2.5e-6
 aeronlay_nueff   = 0.3 0.3 0.3
aeronlay_tauref is the optical depth at the reference wavelength aeronlay_lamref (in metres). aeronlay_choice selects the vertical distribution: with aeronlay_choice=1, the aerosol is distributed between the bottom pressure (aeronlay_pbot) and the top pressure (aeronlay_ptop) of the layer; with aeronlay_choice=2, it extends from a bottom pressure (aeronlay_pbot) with a fractional scale height (aeronlay_sclhght). For a given layer, aeronlay_ptop is ignored with choice=2, and aeronlay_sclhght is ignored with choice=1. You can choose the mean radius of the particles with aeronlay_size (in metres) and the corresponding effective standard deviation with aeronlay_nueff.
And finally, you will need to provide the names of your aerosol optical property tables in callphys.def. For instance:
 optprop_aeronlay_vis = optprop_neptune_n2_vis_n30.dat optprop_neptune_n3_vis_n30.dat optprop_ch4_vis.dat
 optprop_aeronlay_ir  = optprop_neptune_n2_ir_n30.dat optprop_neptune_n3_ir_n30.dat optprop_ch4_ir.dat
(here, using the optical properties of aerosols for Neptune)
We encourage you to search for the keyword "aeronlay" in the source code to use more specific options of the scheme.
==== The generic condensable scheme ====
You can use the generic condensable scheme (implemented by Lucas Teinturier) to easily compute the radiative effect of cloud particles formed by condensation. This scheme has to be used in conjunction with the generic condensation scheme. To activate it, add aerogeneric=n (with n>0 the number of condensable species handled by the scheme) in callphys.def. On top of that, you need to add the option is_rgcs = 1 to the solid/ice tracer in traceur.def.
In the model itself, you then need to manually add your aerosol by modifying the suaer_corrk.F90 routine (if your favorite aerosol is not already there) to specify the correct names for your condensing species. A more dynamic/flexible approach will be added at some point, so one won't need to modify the code directly.
More information on how to use the scheme is provided here: https://lmdz-forge.lmd.jussieu.fr/mediawiki/Planets/index.php/Radiative_Generic_Condensable_Specie
Don't forget to add your new aerosol species in traceur.def, and to adapt the -t option (with the correct number of tracers, including the radiatively active aerosols) at the compilation stage.
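As a purely hypothetical sketch of the traceur.def additions (the tracer names below are invented, and the exact file syntax may differ in your version of the model; check the existing traceur.def of your setup):

```
4                         # number of tracers (hypothetical example)
h2o_vap
h2o_ice
myspecies_vap
myspecies_ice is_rgcs=1   # flag the solid/ice tracer for the generic condensable scheme
```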
== How to Manage Tracers ==
Tracers are managed via the traceur.def file.
Specific treatment of some tracers (e.g., the water vapor cycle) can be added directly in the model, with a corresponding option added in the callphys.def file.
== Use the Z of LMDZ: Zoomed version ==
Do we need this? Has anyone already made use of the zoom module?