The run_icosa.def Input File

From Planets

When running with the DYNAMICO dynamical core, one must provide a run.def file, just as when running with the LMDZ dynamical core. Common practice is to have a dedicated run_icosa.def file, included in the run.def file via the line:

INCLUDEDEF=run_icosa.def

where all variables and flags relevant to DYNAMICO are set.
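For illustration, a minimal run.def could then look like this (a sketch only; the comment line stands in for whatever other settings the simulation needs):

```
# run.def (sketch)
# ... physics, output and other general settings ...
INCLUDEDEF=run_icosa.def
```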

Example of a run_icosa.def file

Note that lines beginning with # are treated as comments.


# -------------------------------------------------------------------------------------
# --------------------------------------- Mesh ----------------------------------------
# -------------------------------------------------------------------------------------

# Number of subdivisions on a main triangle (nbp) : integer (default=40)
nbp = 160
#nbp = 40

# nbp                 20  40  80 160
# T-edge length (km) 500 250 120  60

## sub-splitting of main rhombus : integer (default=1)
#nsplit_i=1
#nsplit_j=1
#omp_level_size=1
###########################################
## There must be no more MPI x OpenMP processes than the 10 x nsplit_i x nsplit_j tiles
## typically, for pure MPI runs, let nproc = 10 x nsplit_i x nsplit_j
## it is better to have nbp/split >~ 10
###########################################
## 2 procs
#nsplit_i = 1
#nsplit_j = 1
## 40 procs
#nsplit_i = 2
#nsplit_j = 2
#### 40 nodes of 24 processors = 960 procs
#nsplit_i = 8
#nsplit_j = 12
#### 30 nodes of 24 processors = 720 procs
#nsplit_i = 8
#nsplit_j = 9
#### 20 nodes of 24 processors = 480 procs
nsplit_i = 8
nsplit_j = 6

# Number of vertical layers (llm) : integer (default=19)
llm = 64

# disvert : vertical discretisation : string (default='std') : std, ncar, ncar30l
disvert=read_apbp

# optim_it : mesh optimisation : number of iterations : integer (default=0)
#optim_it = 1000
optim_it = 0

# -------------------------------------------------------------------------------------
# --------------------------------------- Time ----------------------------------------
# -------------------------------------------------------------------------------------

## scheme type : string (default='runge_kutta') : euler, leapfrog_matsuno, runge_kutta
#scheme = runge_kutta

## matsuno period : integer ( default=5)
#matsuno_period = 5

# number of dynamical time step per day
day_step = 80

# advection called every itau_adv time steps : integer (default 2)
# standard : umax=100m/s vs c=340m/s (ratio 1:3)
# in JW06 umax=35m/s vs c=340m/s (ratio 1:10)
itau_adv=3

# number of days to run (supersedes run_length)
ndays = 5


####################################

##activate IO (default = true)
#enable_io = false

## output with XIOS (only if compiled with XIOS): true/false (default true)
#xios_output=false

# output field period (only when not using XIOS) : integer (default none)
#write_period=7200
# write_period=14400
#write_period=118.9125
write_period = 446.75


#itau_write_etat0=380520

# -------------------------------------------------------------------------------------
# -------------------------------------- Misc -----------------------------------------
# -------------------------------------------------------------------------------------

# number of tracers (nqtot) : integer (default=1)
nqtot=2

# pressure value where output is interpolated : real (default=0, no output)
out_pression_level=85000

# etat0 : initial state : string (default=jablonowsky06) : 
# jablonowsky06, academic, ncar

etat0=start_file
etat0_start_file_colocated=false

## -- to cross the 2y limit
#etat0_start_iteration=0
#run_length=38052

#etat0=isothermal
#etat0_isothermal_temp=175

# start file name (default is start.nc)
# restart file name (default is restart.nc)
start_file_name=start_icosa
restart_file_name=restart_icosa

#######################################
### to run a one-day simulation
### using an ASCII profile
### to create start files
#######################################
#etat0=temperature_profile
#temperature_profile_file=temp_profile.txt


# -------------------------------------------------------------------------------------
# ----------------------------------- Dynamics ----------------------------------------
# -------------------------------------------------------------------------------------

# caldyn : computation type for gcm equation : string (default=gcm) : gcm, adv
caldyn=gcm

# caldyn_conserv : string (default=energy) : energy,enstrophy
caldyn_conserv=energy

# caldyn_exner : scheme for computing Exner function : string (default=direct) : direct,lmdz
caldyn_exner=direct

# caldyn_hydrostat : scheme for computing geopotential : string (default=direct) : direct,lmdz
caldyn_hydrostat=direct

# guided_type : string (default=none) : none, ncar
guided_type=none

#Sponge layer
### iflag_sponge=0 for no sponge
### iflag_sponge=1 for sponge over 4 topmost layers
### iflag_sponge=2 for sponge from top to ~1% of top layer pressure
### tau_sponge --> damping frequency at last layer
### e-5 medium / e-4 strong yet reasonable / e-3 very strong
### mode_sponge=1 for u,v --> 0
### mode_sponge=2 for u,v --> zonal mean (NOT IMPLEMENTED)
### mode_sponge=3 for u,v,h --> zonal mean (NOT IMPLEMENTED)
#iflag_sponge = 1
iflag_sponge = 0
#tau_sponge = 1.e-4
#mode_sponge = 1


# -------------------------------------------------------------------------------------
# ---------------------------------- Dissipation --------------------------------------
# -------------------------------------------------------------------------------------

# dissipation time graddiv : real (default=5000)
tau_graddiv = 10000

# number of iterations for graddiv : integer (default=1)
nitergdiv = 2

# dissipation time nxgradrot (default=5000)
tau_gradrot = 10000

# number of iterations for nxgradrot : integer (default=1)
nitergrot=2

# dissipation time divgrad (theta) (default=5000)
tau_divgrad= 10000

# number of iterations for divgrad : integer (default=1)
niterdivgrad=2

# Rayleigh friction : string (default=none) : 
#            none, dcmip2_schaer_noshear, dcmip2_schaer_shear, giant_liu_schneider
rayleigh_friction_type = giant_liu_schneider
rayleigh_limlat = 16.
rayleigh_friction_tau = 8640000.

# -------------------------------------------------------------------------------------
# ------------------------------------- Physics ---------------------------------------
# -------------------------------------------------------------------------------------

# (itau_physics=160)*(dt=111.6875) --> half a Jupiter day as physics timestep
# ---- for day-consistency reasons, the physics timestep must be less than a day
# ---- change also in run.def
#itau_physics = 160
itau_physics = 40

# kind of physics : string : none, dcmip (default=none)
physics=phys_external


# -------------------------------------------------------------------------------------

Noteworthy parameters in run_icosa.def

Concerning the grid

  • nbp: number of intervals along each side of the 10 rhombi that make up the icosahedron. In practice, the number of horizontal grid points (i.e. the number of atmospheric columns) of the GCM is thus roughly 10*nbp*nbp.
  • nsplit_i and nsplit_j: each of the 10 rhombi can be subdivided along each direction (i and j) to generate nsplit_i*nsplit_j tiles. This matters for parallelism, as ideally one computing core is used per tile; the GCM should therefore be run on 10*nsplit_i*nsplit_j cores. Moreover, it is recommended that each individual tile contain at least 15*15 grid points (extra redundant computations are required on the tile borders, so the number of border points should remain a small fraction of the whole tile).
  • llm: number of vertical layers, with various options for how they are set up, as selected by the disvert flag
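These sizing rules can be checked with a small script; `grid_summary` below is a hypothetical helper (not part of DYNAMICO) that applies the formulas above to the values from the example file:

```python
# Illustrative sizing helper: estimates the column count and the matching
# number of cores from the mesh parameters (hypothetical, not part of DYNAMICO).

def grid_summary(nbp, llm, nsplit_i=1, nsplit_j=1):
    columns = 10 * nbp * nbp               # ~ horizontal grid points (atmospheric columns)
    cells = columns * llm                  # total 3D grid cells
    tiles = 10 * nsplit_i * nsplit_j       # ideally one computing core per tile
    tile_shape = (nbp // nsplit_i, nbp // nsplit_j)  # grid points per tile, each direction
    return {"columns": columns, "cells": cells,
            "tiles": tiles, "tile_shape": tile_shape}

# With the values from the example file (nbp=160, llm=64, nsplit_i=8, nsplit_j=6):
summary = grid_summary(160, 64, 8, 6)
print(summary["columns"])     # 256000 columns
print(summary["tiles"])       # 480 tiles -> run on 480 cores
print(summary["tile_shape"])  # (20, 26), above the recommended 15x15 minimum
```

The 480 tiles match the "20 nodes of 24 processors = 480 procs" configuration shown in the example file.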

Concerning the time stepping

  • scheme: the time stepping scheme (the default, runge_kutta, is recommended)
  • day_step: number of time steps per day
  • ndays: number of days to run
  • itau_physics: frequency (in dynamical time steps) at which the physics will be called
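As a worked check of the numbers in the example file (the day length used here is an assumed Jupiter value for illustration; the model takes its planetary constants from its own settings):

```python
# Illustrative time-step arithmetic for the example run_icosa.def above.
day_length = 35740.0      # assumed Jupiter day length [s], for illustration only
day_step = 80             # dynamical time steps per day (from the example file)
itau_physics = 40         # physics called every itau_physics dynamical steps

dt = day_length / day_step         # dynamical time step: 446.75 s
physics_dt = itau_physics * dt     # physics time step: 17870.0 s, i.e. half a day

print(dt, physics_dt)
```

Note that dt here matches the write_period=446.75 set in the example file, and the resulting physics timestep is half a day, consistent with the comments in the Physics section.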