Run.def for Pluto-DYNAMICO
An example of a <code>run_icosa.def</code> file to use for a Pluto-DYNAMICO run is given in the LMDZ.PLUTO/deftank/dynamico subdirectory (see e.g. https://trac.lmd.jussieu.fr/Planeto/browser/trunk/LMDZ.PLUTO/deftank/dynamico/run_icosa.def ). It should look something like:
<pre>
#---------------- Mesh ----------------
# Number of subdivisions on a main triangle : integer (default=40)
nbp = 20
# Number of vertical layers : integer (default=19)
llm = 27
# Vertical grid : [std|ncar|ncarl30] (default=std)
# disvert = std
disvert = plugin
hybrid = .false.
# Mesh optimisation : number of iterations : integer (default=0)
optim_it = 1000
# Sub splitting of main rhombus : integer (default=1)
# NB: total number of computational subdomains is 10*nsplit_i*nsplit_j
nsplit_i = 1
nsplit_j = 1
# number of OpenMP tasks on vertical levels
omp_level_size=1

#---------------- Numerics ----------------
# Advection called every itau_adv time steps : integer (default=2)
itau_adv = 1
# Time step in s : real (default=480)
# dt = 720
# Alternative to specifying "dt", specify number of steps per day : day_step
day_step = 2400
# Number of tracers : integer (default=1)
nqtot = 7

#---------------- Time and output ----------------
# Time style : [none|dcmip] (default=dcmip)
time_style = none
# Run length in s : real (default=??)
# run_length = 100
# Alternative to specifying "run_length", specify number of days to run : ndays
ndays=1
# Interval in s between two outputs : integer (default=??)
write_period = 1000

#---------------- Planet ----------------
INCLUDEDEF=pluto_const.def

#---------------- Physical parameters ----------------
# Initial state :
# [jablonowsky06|academic|dcmip[1-4]|heldsz|dcmip2_schaer_noshear] (default=jablonowsky06)
#etat0=isothermal
#etat0_isothermal_temp=80
etat0 = start_file
etat0_start_file_colocated=true
# start file name (default: start)
start_file_name = start_icosa

# restart file name (default: restart)
restart_file_name = restart_icosa

# optional perturbations to add to initial state:
#etat0_ps_white_noise=0.01
#etat0_theta_rhodz_white_noise=0.01
#etat0_u_white_noise=0.01

# Physics package : [none|held_suarez|dcmip] (default=none)
# physics = held_suarez
# physics = none
physics = phys_external
# Call physics every itau_physics dynamical steps
itau_physics=5
# try to be consistent with LMDZ
iphysiq=5

startphy_file = true
# startphy_file = false

# Dissipation time for grad(div) : real (default=5000)
tau_graddiv = 18000
# Exponent of grad(div) dissipation : integer (default=1)
nitergdiv = 1

# Dissipation time for curl(curl) : real (default=5000)
tau_gradrot = 18000
# Exponent of curl(curl) dissipation : integer (default=1)
nitergrot = 2

# Dissipation time for div(grad) : real (default=5000)
tau_divgrad = 18000
# Exponent of div(grad) dissipation : integer (default=1)
niterdivgrad = 2
</pre>
A couple of noteworthy parameters are discussed below.
== Initial conditions ==
In practice one would run with initial condition files (default behavior; the corresponding settings are sketched below):
* a <code>start_icosa.nc</code> dynamics start file (the name of the file can be chosen via the <code>start_file_name=</code> option)
* a <code>startfi.nc</code> physics start file (sticking to this name is mandatory)
One may start "from scratch", without any initial condition files, using the following options (see the sketch after this list):
* set <code>etat0=isothermal</code> and specify the temperature to initialize the atmosphere with via <code>etat0_isothermal_temp=</code>
* set <code>startphy_file = false</code> so that the physics does not read inputs from a <code>startfi.nc</code> file
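As an illustrative sketch (the temperature value of 80 is simply the one from the commented-out example above, not a recommendation), the relevant <code>run_icosa.def</code> lines for such a from-scratch initialization could be:
<pre>
# initialize the dynamics from scratch with an isothermal atmosphere (temperature in K)
etat0=isothermal
etat0_isothermal_temp=80
# optionally add small white-noise perturbations to the initial state
#etat0_ps_white_noise=0.01
#etat0_theta_rhodz_white_noise=0.01
#etat0_u_white_noise=0.01
# do not read a startfi.nc physics start file
startphy_file = false
</pre>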
== Number of computational subdomains ==
In practice DYNAMICO is run in parallel (at least with MPI, usually with mixed MPI/OpenMP). One should then use as many cores as there are computational subdomains, i.e. have <code>ncores</code> such that:
<pre>
ncores = 10 * nsplit_i * nsplit_j
</pre>
This is because the DYNAMICO grid is made up of 10 rhombi, each of which is split into <code>nsplit_i * nsplit_j</code> subdomains.
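For example (an assumed target of 40 cores, just to illustrate the arithmetic), one could split each rhombus into 2 x 2 subdomains:
<pre>
# 10 rhombi * (2 * 2) subdomains per rhombus = 40 computational subdomains,
# i.e. ncores = 40 following the formula above
nsplit_i = 2
nsplit_j = 2
</pre>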