Advanced Topics Mars PCM

Revision as of 16:18, 26 August 2023 by Emillour (talk | contribs)


Running in parallel

For large simulations (long runs, high resolution, etc.), the computational cost can be huge and run times correspondingly long. To mitigate this, the model can be run in parallel. This however requires a few extra steps compared to compiling and running the serial version of the code, as described in the Quick Install and Run Mars PCM page.

Some comments and clarifications about parallelism and related tools

For users unfamiliar with compilers and/or with compiling and running codes in parallel, there is often some confusion which the following paragraphs will hopefully help clarify:

  • the compiler (typically gfortran, ifort, pgfortran, etc.) is the tool required to compile the Fortran source code and generate an executable. It is strongly recommended that the libraries used by a program be compiled with the same compiler as the program itself. Thus if you plan to use different compilers to compile the model, note that you should also have at hand versions of the libraries it uses compiled with each of these compilers.
  • A first level of parallelism is obtained using MPI. The MPI (Message Passing Interface) library enables solving problems with multiple processes by providing message-passing between the otherwise independent processes. There are a number of MPI libraries out there, e.g. OpenMPI, MPICH or IntelMPI to name a few (you can check out the Building an MPI library page for some information about installing an MPI library). The important point here is that on a given machine the MPI library is built with a given compiler and provides matching wrappers to compile and run with. Typically (but not always) the compiler wrapper is mpif90 and the execution wrapper is mpirun. If you want to know which compiler is wrapped by the mpif90 compiler wrapper, check out the output of:
mpif90 --version
  • In addition a second type of parallelism, shared memory parallelism known as OpenMP, is also implemented in the code. In contradistinction to MPI, OpenMP does not require an external library but is instead implemented as a compiler feature. At run time one must then specify some dedicated environment variables (such as OMP_NUM_THREADS and OMP_STACKSIZE) to specify the number of threads to use per process.
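As an illustration, before launching an OpenMP-enabled run one typically sets these environment variables in the shell. The values below are placeholders to adapt to your machine and run size, not recommended settings:

```shell
# Illustrative values only; adapt to your machine and simulation.
export OMP_NUM_THREADS=4    # number of OpenMP threads per MPI process
export OMP_STACKSIZE=400M   # per-thread stack size; too small a value typically leads to crashes
```

Note that OMP_STACKSIZE matters in practice: the default per-thread stack is often too small for model arrays, and an undersized value usually shows up as a segmentation fault at startup.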

Compiling the Mars PCM

  • In practice one should favor compiling and running with both MPI and OpenMP enabled, and therefore compile with the
-parallel mpi_omp

option of the makelmdz_fcm compilation script.

  • It is also advised (but not mandatory) to use the XIOS library for outputs when in parallel, in which case one should also compile with the
-io xios

option of the makelmdz_fcm compilation script.
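Putting the two options together, a full compilation command might look like the following sketch. Here "somearch" is a placeholder for your architecture configuration, and the 64x48x29 grid is only an example resolution:

```shell
# Illustrative sketch; architecture and grid dimensions are placeholders.
makelmdz_fcm -arch somearch -p mars -d 64x48x29 -parallel mpi_omp -io xios -j 8 gcm
```

As with the serial case, -p mars selects the Mars physics package and -d sets the grid dimensions; -parallel mpi_omp and -io xios are the additions discussed above.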

Running the Mars PCM in Parallel

Check out pages like Parallelism for an overview of how it's done.
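To give a rough idea of what launching a hybrid MPI/OpenMP run looks like, here is a hedged sketch. The executable name depends on the chosen resolution and compilation options, so "gcm.e" and the process count below are placeholders:

```shell
# Placeholder sketch: 8 MPI processes, with OMP_NUM_THREADS OpenMP threads each.
# The actual executable name is set by makelmdz_fcm at compilation.
mpirun -np 8 gcm.e > gcm.out 2>&1
```

Remember that the OpenMP environment variables (OMP_NUM_THREADS, OMP_STACKSIZE) must be set before launching, and that the total core count is the number of MPI processes times the number of OpenMP threads per process.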

Other Mars PCM configurations worth knowing about

Running the 1D version of the Mars PCM

One can run the PCM in a single-column configuration, i.e. as a 1D model. The same physics package is used in 1D as in 3D. Compilation is also done using the makelmdz_fcm script, except that only the number of vertical layers need be specified and that the main program is called "testphys1d". For example, to compile a 29-vertical-layer version of "testphys1d" one would run a command along the lines of:

makelmdz_fcm -arch somearch -p mars -d 29 -j 8 testphys1d

See the dedicated page: Mars 1D testphys1d program for more.

Running with the DYNAMICO dynamical core

TODO: Some intro and link to relevant pages here.

Running with the WRF dynamical core

TODO: Some intro and link to relevant pages here.