Quick Install and Run Venus PCM
In this page we give a hopefully exhaustive enough overview of the prerequisites and steps needed to download, compile and run a simple simulation with the Venus PCM (using the LMDZ dynamical core) on a Linux computer.
Note that there is a dedicated install script that attempts to perform all these steps (up to and including running the simulation), which you can obtain here: https://web.lmd.jussieu.fr/~lmdz/planets/install_lmdz_venus.bash Automating the process is not trivial, as there are many subtle variants of Linux flavors and user account setups, so the script may fail in your case; hopefully the information given in this page will help you solve any problems you encounter.
Prerequisites: Tools and Libraries
In order to use (i.e. compile and run) the GCM, one needs to have some tools and libraries installed. We list below a (minimal) set whose availability you should check, and which you may need to install first on your machine. Note that this tutorial assumes that you are on a native Linux OS or cluster.
Fortran compiler
The GCM source code is in Fortran. One thus needs a Fortran compiler to build (compile) the executable. The most easily available one (on Linux) is gfortran and examples discussed here will assume it is the one used. You can check that you indeed have a gfortran compiler at hand with the following Bash command:
which gfortran
which should return something like
/usr/bin/gfortran
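If the command returns nothing, gfortran is not installed. On Debian/Ubuntu-like systems it can typically be obtained via the package manager (an illustration, assuming you have super-user privileges):
sudo apt install gfortran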
Subversion
The source code is managed using Subversion (svn), which you will need in order to download or update it. Leaving aside the subtleties of svn and code organization for now, downloading the code amounts to the following:
svn checkout https://svn.lmd.jussieu.fr/Planeto/trunk --depth empty
cd trunk
svn update LMDZ.COMMON LMDZ.VENUS
As a side note: the source code that will be fetched by svn can also be browsed online here: https://trac.lmd.jussieu.fr/Planeto
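Likewise, if the svn command is not available on your machine, it can usually be obtained via your package manager, e.g. on Linux-Ubuntu (again assuming super-user privileges):
sudo apt install subversion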
FCM
The FCM (Flexible Configuration Management) tool is a suite of Perl scripts that help build and manage codes. We use a slightly modified version which can be obtained using Subversion (svn). Ideally you'll want to download it somewhere on your computer once and for all. To do this:
svn checkout https://forge.ipsl.jussieu.fr/fcm/svn/PATCHED/FCM_V1.2
You'll then need to add the resulting FCM_V1.2/bin to your PATH environment variable so that the command "fcm" may be used from anywhere on your machine, e.g. by adding the following line to your .bashrc:
export PATH=$PATH:$HOME/FCM_V1.2/bin
The assumption here is that the downloaded FCM_V1.2 directory is in your home ($HOME) directory. Adapt accordingly if not.
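Once your PATH is updated (e.g., after sourcing your .bashrc or opening a new terminal), you can verify that the fcm command is found with:
which fcm
which should return something like $HOME/FCM_V1.2/bin/fcm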
the NetCDF library
The GCM reads and writes input and output files in NetCDF format. Therefore a NetCDF library must be available. As this library is not quite standard, you'll probably have to install it yourself on your system (check out the NetCDF library page for more details). You can use the following home-made "install_netcdf4_hdf5_seq.bash" script to do so. For this, ensure that you are in your home directory:
mkdir netcdf
cd netcdf
wget -nv --no-check-certificate http://www.lmd.jussieu.fr/~lmdz/pub/script_install/install_netcdf4_hdf5_seq.bash
chmod u=rwx install_netcdf4_hdf5_seq.bash
./install_netcdf4_hdf5_seq.bash > netcdf.log 2>&1
Compiling the library and its dependencies can take a while (>>15 minutes; be patient). Once this is done, check the netcdf.log file to verify that all went well. You may also want to add its "bin" directory to your PATH environment variable by adding the following line to your .bashrc:
export PATH=$PATH:$HOME/netcdf/bin
The assumption here is that you have run the "install_netcdf4_hdf5_seq.bash" script in a "netcdf" subdirectory of your home directory. Adapt accordingly if not.
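You can then check that the NetCDF tools are found, e.g. with:
which ncdump
which should return something like $HOME/netcdf/bin/ncdump (assuming the install location described above).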
As a side note: the NetCDF library provides a very simple command-line tool (ncdump) to inspect the contents of NetCDF files, but you'll need dedicated visualization tools (e.g., Panoply, Python scripts, etc.; see the "Checking the Results of a Simulation" section further down this page) for more advanced post-processing of the outputs.
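For instance, to display only the header (dimensions, variables and global attributes) of a NetCDF file, such as the start.nc file mentioned further below, one can run:
ncdump -h start.nc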
the IOIPSL library
The IOIPSL (Input/Output IPSL) library is designed to handle both the reading of some input files used by the GCM (the *.def files described further below) and the writing of some NetCDF output files.
Prior to a first compilation: ksh to bash conversion
Some of the IOIPSL install scripts are written in ksh (Korn shell). Since most systems nowadays use Bash (Bourne Again Shell) rather than ksh as their command-line interpreter, you might need to install ksh on your system (assuming you have super-user privileges), e.g. on Linux-Ubuntu:
sudo apt install ksh
Or, if that is not an option, change the shebang line in the package's scripts (ins_m_prec) from:
#!/bin/ksh
to
#!/bin/bash
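This substitution can be done in one go with a sed one-liner (an illustrative sketch, assuming you are in the directory containing the ins_m_prec script):
sed -i 's|^#!/bin/ksh|#!/bin/bash|' ins_m_prec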
Automated IOIPSL install script
Scripts to download and install the IOIPSL library can be found in the "ioipsl" subdirectory of the "LMDZ.COMMON" directory. Since here we assume we're working with gfortran, the relevant one is "install_ioipsl_gfortran.bash". If your PATH environment variable already includes the path to your NetCDF distribution's bin directory (see the previous section), then all you need to do is execute the script:
./install_ioipsl_gfortran.bash
If all went well the script should end with:
OK: ioipsl library is in ...
(for further details about the IOIPSL library and installing it, follow the link and/or use the Search Box at the top of this page)
GCM Input Datafiles and Datasets
In order to run, the GCM needs some inputs, such as initial conditions (values of state variables), external inputs (e.g. optical properties) and a simulation setup (e.g. specifications of how long to run, which parametrizations should be activated, etc.).
In the spirit of the illustrative example considered here, a set of necessary input data may be downloaded with:
wget -nv --no-check-certificate http://www.lmd.jussieu.fr/~lmdz/planets/venus/bench_48x32x50.tar.gz
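Then extract the archive, which should create a bench_48x32x50 directory containing all the input files:
tar xvzf bench_48x32x50.tar.gz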
Note that this is quite a low-resolution case, mostly intended for simple tests or for checking that the model is correctly installed. For scientific work the model is typically run at higher resolutions.
Nonetheless, this bench_48x32x50 example already illustrates the input files one needs:
- a run.def file, along with companion gcm.def and physiq.def ASCII files
- a z2sig.def ASCII file, which is read at runtime and contains information about the vertical levels of the PCM
- a traceur.def ASCII file, which contains the list of tracers the PCM will use
- start.nc and startphy.nc NetCDF files, which respectively contain the initial conditions for the dynamics and the physics
- input datasets ksi_global.txt and SolarNetFlux_RH.dat, read at run-time by the PCM
Compiling the GCM
Now that all the prerequisites are fulfilled, it is (almost!) time to compile the GCM.
Prior to a first compilation: setting up the target architecture files
Compiling the model is done using a dedicated Bash script, makelmdz_fcm, located in the LMDZ.COMMON directory. This script however relies on architecture files. These files contain information on which compiler to use, which compilation options to use, where the relevant libraries are located, etc. In practice, one must thus create these ASCII text files in the arch/ subdirectory of LMDZ.COMMON. The naming convention is rather straightforward: when the script makelmdz_fcm is run with the option -arch somename, it will look for files arch/arch-somename.env, arch/arch-somename.path and arch/arch-somename.fcm. Leaving aside a detailed description for later (see this page), here we mention that:
- the arch*.env is an optional file containing environment information, such as setting up environment variables or loading modules on some machines, e.g.
export NETCDF_HOME=/path/to/the/netcdf/distribution
- the arch*.path is a mandatory file containing information relative to external libraries such as NetCDF and IOIPSL, e.g.
ROOT=$PWD
NETCDF_LIBDIR="-L${NETCDF_HOME}/lib"
NETCDF_LIB="-lnetcdf -lnetcdff"
NETCDF_INCDIR="-I${NETCDF_HOME}/include"
IOIPSL_INCDIR="-I$ROOT/../IOIPSL/inc"
IOIPSL_LIBDIR="-L$ROOT/../IOIPSL/lib"
IOIPSL_LIB="-lioipsl"
Each library is referenced by a fixed identifier (NETCDF, IOIPSL, XIOS, ...) followed by one of three suffixes: _LIBDIR for the path to the library, _LIB for the library name(s), and _INCDIR for the path to the library's include directory.
- the arch*.fcm is a mandatory file containing information relative to the compiler and compilation options, e.g.
%COMPILER gfortran
%LINK gfortran
%AR ar
%MAKE make
%FPP_FLAGS -P -traditional
%FPP_DEF NC_DOUBLE
%BASE_FFLAGS -c -fdefault-real-8 -fdefault-double-8 -ffree-line-length-none -fno-align-commons
%PROD_FFLAGS -O3
%DEV_FFLAGS -O
%DEBUG_FFLAGS -ffpe-trap=invalid,zero,overflow -fbounds-check -g3 -O0 -fstack-protector-all -finit-real=snan -fbacktrace
%MPI_FFLAGS
%OMP_FFLAGS
%BASE_LD
%MPI_LD
%OMP_LD
Again, without going into a detailed description (follow this link for that), just note that each line corresponds to a keyword (starting with "%") followed by the relevant options. A few of the main ones are:
- %COMPILER: The compiler to use (here, gfortran)
- %BASE_FFLAGS: compiler options (always included)
- %PROD_FFLAGS: compilation flags to include if makelmdz_fcm is run with the "-prod" option
- %DEBUG_FFLAGS: compilation flags to include if makelmdz_fcm is run with the "-debug" option
- %BASE_LD: flags to add at the linking step of the compilation
Compiling the test case
To compile the GCM at the desired resolution, run (in LMDZ.COMMON):
./makelmdz_fcm -arch local -p venus -d 48x32x50 -j 8 gcm
Here, we assume that you have generated the arch-local.* files as suggested in the previous section. The makelmdz_fcm options used here mean:
- -p venus: the GCM will use the "venus" physics package
- -d 48x32x50: the GCM grid will be 48x32 in longitude x latitude, with 50 vertical levels.
For a glimpse at all the possible makelmdz_fcm options and their meanings, run:
./makelmdz_fcm -h
and/or check the dedicated makelmdz_fcm page.
Upon successful compilation, the executable gcm_48x32x50_phyvenus_seq.e should be generated in the bin subdirectory.
Running the GCM
You need to copy (or move) the executable gcm_48x32x50_phyvenus_seq.e from LMDZ.COMMON/bin to the directory containing the initial conditions and parameter files, e.g. bench_48x32x50.
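For instance, assuming you are in the trunk directory and that the bench_48x32x50 directory sits alongside LMDZ.COMMON:
cp LMDZ.COMMON/bin/gcm_48x32x50_phyvenus_seq.e bench_48x32x50/
cd bench_48x32x50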
You can now run the GCM. This is usually a two-step process: the (optional) first step is to source the environment architecture file (the very same one that was used to compile the model), e.g.:
source ../LMDZ.COMMON/arch.env
The second step is to execute the model, e.g.,:
./gcm_48x32x50_phyvenus_seq.e > gcm.out 2>&1
With this command line, the (text) output messages are redirected to a text file, gcm.out. It is convenient to keep this file for later inspection (e.g., to track down a bug). If there is no redirection (only ./gcm_48x32x50_phyvenus_seq.e), then the output is displayed directly on the screen.
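Since the run may take some time, you can monitor its progress from another terminal with, e.g.:
tail -f gcm.out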
Checking the Results of a Simulation
Once the simulation is finished, you'll know that all went well ("everything is cool") if the last few lines of the standard text output are:
in abort_gcm
Stopping in leapfrog
Reason = Simulation finished
Everything is cool
If not, start looking for an error message and a way to fix the problem...
Apart from the standard text output messages from the GCM, which are mostly for monitoring and checking the simulation's progress, the user will more likely be interested in the contents of the histmth.nc file produced by the GCM, as it contains instantaneous values of the main model variables (atmospheric temperature, winds, etc.).
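As a first check, the ncdump tool mentioned earlier can be used to list the dimensions and variables available in this file:
ncdump -h histmth.nc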
...TODO...ADD HERE SOME ILLUSTRATIVE PLOTS OF THE EXPECTED BENCH OUTPUTS...
Taking Things to the Next Level
The short tutorial presented in this page is meant to be useful to get an overview of what is required to install and run the GCM, in addition to checking the results of a simulation. Moving on to a more intensive and problem-specific usage will require diving into additional topics and aspects such as:
- Selecting the appropriate inputs and run parameters for a given study.
- Compiling and running in parallel (MPI) to run faster: Running the Venus PCM in parallel
- Running with advanced configurations of the physics packages, e.g. adding chemistry, thermospheric processes, etc.
- Compiling and running with the other dynamical cores (DYNAMICO and WRF)
- Using the XIOS library (instead of IOIPSL) to handle PCM outputs: Managing the Venus PCM outputs
- Post-processing and analysis of model outputs.
All these points and much more are detailed in the many pages of this site (do check out the menu on the left, and don't hesitate to make intensive use of the site's search engine)!