Run.def for Pluto-DYNAMICO


An example run_icosa.def file for a Pluto-DYNAMICO run is provided in the LMDZ.PLUTO/deftank/dynamico subdirectory (see e.g. https://trac.lmd.jussieu.fr/Planeto/browser/trunk/LMDZ.PLUTO/deftank/dynamico/run_icosa.def ). It should look something like this:

#---------------- Mesh ----------------

# Number of subdivisions on a main triangle : integer (default=40)
nbp = 20

# Number of vertical layers : integer (default=19)
llm = 27

# Vertical grid : [std|ncar|ncarl30] (default=std)
# disvert = std
disvert = plugin
hybrid = .false.

# Mesh optimisation : number of iterations : integer (default=0)
optim_it = 1000

# Sub-splitting of main rhombus : integer (default=1)
# NB: total number of computational subdomains is 10*nsplit_i*nsplit_j
nsplit_i = 1
nsplit_j = 1

# Number of OpenMP tasks along the vertical levels
omp_level_size=1

#---------------- Numerics ----------------

# Advection called every itau_adv time steps : integer (default=2)
itau_adv = 1

# Time step in s : real (default=480)
# dt = 720
# As an alternative to specifying "dt", specify the number of steps per day : day_step
day_step = 2400

# Number of tracers : integer (default=1)
nqtot = 7

#---------------- Time and output ----------------

# Time style : [none|dcmip] (default=dcmip)
time_style = none

# Run length in s : real (default=??)
# run_length = 100
# As an alternative to specifying "run_length", specify the number of days to run : ndays
ndays=1

# Interval in s between two outputs : integer (default=??)
write_period = 1000



#---------------- Planet ----------------

INCLUDEDEF=pluto_const.def

#---------------- Physical parameters ----------------

# Initial state :
#   [jablonowsky06|academic|dcmip[1-4]|heldsz|dcmip2_schaer_noshear] (default=jablonowsky06)
#etat0=isothermal
#etat0_isothermal_temp=80
etat0 = start_file
etat0_start_file_colocated=true
# start file name (default: start)
start_file_name = start_icosa

# restart file name (default: restart)
restart_file_name = restart_icosa

# optional perturbations to add to initial state:
#etat0_ps_white_noise=0.01
#etat0_theta_rhodz_white_noise=0.01
#etat0_u_white_noise=0.01

# Physics package : [none|held_suarez|dcmip] (default=none)
# physics = held_suarez
# physics = none
physics = phys_external

# Call physics every itau_physics dynamical steps
itau_physics=5
# try to be consistent with LMDZ
iphysiq=5

startphy_file = true
# startphy_file = false

# Dissipation time for grad(div) : real (default=5000)
tau_graddiv = 18000

# Exponent of grad(div) dissipation : integer (default=1)
nitergdiv = 1

# Dissipation time for curl(curl) : real (default=5000)
tau_gradrot = 18000

# Exponent of curl(curl) dissipation : integer (default=1)
nitergrot = 2

# Dissipation time for div(grad) : real (default=5000)
tau_divgrad = 18000

# Exponent of div(grad) dissipation : integer (default=1)
niterdivgrad = 2

A couple of noteworthy parameters are discussed below.

Initial conditions

In practice one would run with initial condition files (the default behavior), as set in the excerpt given after this list:

  • a start_icosa.nc dynamics start file (the name of this file can be chosen via the start_file_name= option)
  • a startfi.nc physics start file (sticking to this name is mandatory)
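
These are the corresponding settings in run_icosa.def, taken directly from the example above:

  etat0 = start_file
  etat0_start_file_colocated = true
  start_file_name = start_icosa
  restart_file_name = restart_icosa
  startphy_file = true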

One may instead start "from scratch", without any initial condition files, by setting the following options (see the excerpt after this list):

  • set etat0=isothermal and specify the temperature with which to initialize the atmosphere via etat0_isothermal_temp=
  • set startphy_file = false so that the physics package does not read its inputs from a startfi.nc file
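
A minimal set of options for such a from-scratch initialization, reusing the isothermal temperature value from the commented-out lines of the example above, would be:

  etat0 = isothermal
  etat0_isothermal_temp = 80
  startphy_file = false

Optional white noise perturbations of the initial state (etat0_ps_white_noise, etat0_theta_rhodz_white_noise, etat0_u_white_noise) can also be added, as shown in the commented lines of the example file.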

Number of computational subdomains

In practice DYNAMICO is run in parallel (at least with MPI, usually in mixed MPI/OpenMP mode). One should use as many cores as there are computational subdomains, i.e. have ncores such that:

ncores = 10 * nsplit_i * nsplit_j

This is because the DYNAMICO icosahedral grid is made up of 10 rhombi, each of which is split into nsplit_i * nsplit_j subdomains.
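
For instance, with the following (hypothetical) splitting, each rhombus is divided into 2 x 2 subdomains, which yields 10 * 2 * 2 = 40 computational subdomains:

  nsplit_i = 2
  nsplit_j = 2

The run should then be launched on 40 MPI processes (e.g. via mpirun -np 40 followed by the executable; the exact launcher command and executable name depend on the installation).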