Advanced Use of the GCM

From Planets
Revision as of 18:43, 20 June 2022 by Martin.turbet (talk | contribs) (Distinctions beween ifort, mpi, etc.)


Running in parallel

For large simulations (long runs, high resolution, etc.), the waiting time can become very long. To overcome this issue, the model can be run in parallel. It first needs to be compiled in parallel mode and then run with a specific command. For all the details, see the dedicated page Parallelism
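As a minimal sketch of the two steps (compile in parallel mode, then launch with a parallel command) — the compilation script name, its options and the executable name below are assumptions to be checked against the Parallelism page and your installation:

```shell
# Compile the GCM with MPI parallelism enabled (script name and flags
# are assumptions; adapt to your installation and architecture files)
./makelmdz_fcm -parallel mpi -d 64x48x32 -p std gcm

# Launch on 24 MPI processes (the executable name depends on how the
# model was compiled)
mpirun -np 24 gcm.e
```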

Distinctions between ifort, mpi, etc.

TO BE COMPLETED by Ehouarn ?

Distinction between using IOIPSL or XIOS

TO BE COMPLETED

How to Change Vertical and Horizontal Resolutions

When you are using the regular longitude/latitude horizontal grid

To run at a grid resolution different from that of the available initial condition files, one needs to use the tools newstart.e and start2archive.e.

For example, to create initial states at grid resolution 32×24×25 from NetCDF files start.nc and startfi.nc at grid resolution 64×48×32:

  • Create the file start_archive.nc with start2archive.e compiled at grid resolution 64×48×32, using the old z2sig.def file used previously
  • Create the files restart.nc and restartfi.nc with newstart.e compiled at grid resolution 32×24×25, using a new z2sig.def file (more details below on the choice of the z2sig.def)
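The two steps above can be sketched as follows (the compilation script name and options are assumptions; adapt them to your installation):

```shell
# 1. Compile and run start2archive.e at the OLD resolution (64x48x32),
#    with the old z2sig.def and the start.nc/startfi.nc files in the run
#    directory, to produce start_archive.nc
./makelmdz_fcm -d 64x48x32 -p std start2archive
./start2archive.e

# 2. Compile and run newstart.e at the NEW resolution (32x24x25),
#    with the new z2sig.def, to produce restart.nc and restartfi.nc
./makelmdz_fcm -d 32x24x25 -p std newstart
./newstart.e

# 3. Rename the outputs for use as initial conditions of the next run
mv restart.nc start.nc
mv restartfi.nc startfi.nc
```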

What you need to know about the z2sig.def file (change of vertical resolution only)

TO BE COMPLETED


When you are using the DYNAMICO icosahedral horizontal grid

The horizontal resolution for the DYNAMICO dynamical core is managed through several configuration files, read at runtime. For this purpose, each part of the GCM handling the input/output fields (ICOSAGCM, ICOSA_LMDZ, XIOS) needs to know the input and output grids:

1. context_lmdz_physics.xml:

You can find several grid setups already defined:

<domain_definition>
    <domain id="dom_96_95" ni_glo="96" nj_glo="95" type="rectilinear"  >
      <generate_rectilinear_domain/>
      <interpolate_domain order="1"/>
    </domain>

    <domain id="dom_144_142" ni_glo="144" nj_glo="142" type="rectilinear"  >
      <generate_rectilinear_domain/>
      <interpolate_domain order="1"/>
    </domain>

    <domain id="dom_512_360" ni_glo="512" nj_glo="360" type="rectilinear"  >
      <generate_rectilinear_domain/>
      <interpolate_domain order="1"/>
    </domain>

    <domain id="dom_720_360" ni_glo="720" nj_glo="360" type="rectilinear">
      <generate_rectilinear_domain/>
      <interpolate_domain order="1"/>
    </domain>

    <domain id="dom_out" domain_ref="dom_720_360"/>
</domain_definition>

In this example, the output grid for the physics fields is defined by

<domain id="dom_out" domain_ref="dom_720_360"/>

which is a half-degree horizontal resolution. To change this resolution, you have to change the name of the domain_ref grid, for instance:

<domain id="dom_out" domain_ref="dom_96_95"/>
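If the resolution you want is not among the predefined domains, a new one can be declared following the same pattern. The one-degree dom_360_180 below is a hypothetical example, not part of the default file:

```xml
<!-- Hypothetical one-degree output grid, following the pattern of the
     predefined domains in context_lmdz_physics.xml -->
<domain id="dom_360_180" ni_glo="360" nj_glo="180" type="rectilinear">
  <generate_rectilinear_domain/>
  <interpolate_domain order="1"/>
</domain>

<!-- then point dom_out at the new domain -->
<domain id="dom_out" domain_ref="dom_360_180"/>
```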


2. run_icosa.def: setting file to execute a simulation

In this file, depending on the intended horizontal resolution, you have to set the number of subdivisions of a main triangle. As a reminder, the icosahedral mesh is made of several main triangles, and each main triangle is divided into a suitable number of sub-triangles according to the horizontal resolution.

#nbp --> number of subdivision on a main triangle: integer (default=40)
#              nbp = sqrt((nbr_lat x nbr_lon)/10)
#              nbp:                 20   40   80  160
#              T-edge length (km): 500  250  120   60
#              Example: nbp(128x96) = 35 -> 40
#                       nbp(256x192)= 70 -> 80
#                       nbp(360x720)= 160 -> 160
nbp = 160
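The rule of thumb quoted in the comments above can be checked with a short script. The formula is the one from the run_icosa.def comment; rounding to the nearest supported value is an assumption inferred from the examples given (35 -> 40, 70 -> 80, 160 -> 160):

```python
import math

# Supported values of nbp, from the table in the run_icosa.def comments
SUPPORTED_NBP = [20, 40, 80, 160]

def raw_nbp(nlon, nlat):
    """Rule of thumb from run_icosa.def: nbp = sqrt((nbr_lat x nbr_lon)/10)."""
    return math.sqrt(nlon * nlat / 10)

def suggested_nbp(nlon, nlat):
    """Round to the nearest supported value (assumption inferred from the
    examples 35 -> 40, 70 -> 80, 160 -> 160 in the comments)."""
    x = raw_nbp(nlon, nlat)
    return min(SUPPORTED_NBP, key=lambda n: abs(n - x))

# Examples from the run_icosa.def comments:
#   nbp(128x96)  ~ 35  -> 40
#   nbp(256x192) ~ 70  -> 80
#   nbp(720x360) ~ 161 -> 160
```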


If you have chosen the 96_95 output grid in context_lmdz_physics.xml, you have to calculate $$nbp = \sqrt{96 \times 95}/10 \approx 10$$ and in this case

nbp = 20


After setting the number of subdivisions of the main triangle, you have to define the number of subdivisions along each direction. At this stage you need to be careful, as the number of subdivisions in each direction:

  • needs to be set according to the number of subdivisions on the main triangle nbp
  • will determine the number of processors on which the GCM will be most efficient
## sub splitting of main rhombus : integer (default=1)
#nsplit_i=1
#nsplit_j=1
#omp_level_size=1
###############################################################
## There must be fewer MPIxOpenMP processes than the 10 x nsplit_i x nsplit_j tiles
## typically for pure MPI runs, let nproc = 10 x nsplit_i x nsplit_j
## it is better to have nbp/nsplit_i > 10 and nbp/nsplit_j > 10
###############################################################
#### 40 nodes of 24 processors = 960 procs
nsplit_i=12
nsplit_j=8

#### 50 nodes of 24 processors = 1200 procs
#nsplit_i=10
#nsplit_j=12


With the same example as above, the 96_95 output grid (nbp = 20) requires nsplit_i < 2 and nsplit_j < 2. We advise you to select:

## sub splitting of main rhombus : integer (default=1)
nsplit_i=1
nsplit_j=1

and to use 10 processors (nproc = 10 × nsplit_i × nsplit_j = 10).
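The tiling rules quoted in the run_icosa.def comments can be summarized in a small helper (a sketch; the rules are exactly the ones stated above):

```python
def icosa_tiling(nbp, nsplit_i, nsplit_j):
    """Check the tiling rules quoted in run_icosa.def and return the
    recommended number of MPI processes for a pure-MPI run."""
    # "it is better to have nbp/nsplit_i > 10 and nbp/nsplit_j > 10"
    assert nbp / nsplit_i > 10, "nbp/nsplit_i should exceed 10"
    assert nbp / nsplit_j > 10, "nbp/nsplit_j should exceed 10"
    # "typically for pure MPI runs, let nproc = 10 x nsplit_i x nsplit_j"
    return 10 * nsplit_i * nsplit_j

# 96_95 grid example from the text: nbp = 20, nsplit_i = nsplit_j = 1
#   -> 10 processors
# High-resolution example from run_icosa.def: nbp = 160, 12 x 8 tiles
#   -> 960 processors
```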

How to Change the Topography (or remove it)

The generic model can in principle use any type of surface topography, provided that the topographic data file is available in the right format and placed in the right location.

How to Change the Opacity Tables

TO BE COMPLETED

How to Manage Tracers

Tracers are managed via the traceur.def file.

Specific treatment of some tracers (e.g., the water vapor cycle) can be added directly in the model, with a corresponding option added in the callphys.def file.
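As an illustration of the expected layout, a traceur.def file typically starts with the number of tracers followed by one tracer name per line. The exact format and tracer names should be checked against your model version; the two-tracer water cycle below is an assumed example:

```
2
h2o_vap
h2o_ice
```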