Installing Mars mesoscale model on spirit
 
== Set up environment ==

=== Option 1 ===
 
Add this to your ~/.bashrc, then source the file:

<syntaxhighlight lang="Bash">
module purge
module load intel/2021.4.0
module load intel-mkl/2020.4.304
module load openmpi/4.0.7
module load hdf5/1.10.7-mpi
module load netcdf-fortran/4.5.3-mpi
module load netcdf-c/4.7.4-mpi
declare -x WHERE_MPI=/net/nfs/tools/u20/22.3/PrgEnv/intel/linux-ubuntu20.04-zen2/openmpi/4.0.7-intel-2021.4.0-43fdcnab3ydwu7ycrplnzlp6xieusuz7/bin/
declare -x NETCDF=/scratchu/spiga/les_mars_project_spirit/netcdf_hacks/SPIRIT
declare -x NCDFLIB=$NETCDF/lib
declare -x NCDFINC=$NETCDF/include
</syntaxhighlight>

Do not forget to add the local directory to your PATH by adding this to your ~/.bashrc:

<syntaxhighlight lang="Bash">
declare -x PATH=./:$PATH
</syntaxhighlight>

It is also necessary to unlimit the stack size, to avoid unwanted segmentation faults, by adding this to your ~/.bashrc:

<syntaxhighlight lang="Bash">
ulimit -s unlimited
</syntaxhighlight>

In the end, source the .bashrc file:
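
<syntaxhighlight lang="Bash">
source ~/.bashrc
# Optional sanity check: mpif90 should now resolve inside $WHERE_MPI.
which mpif90
</syntaxhighlight>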

=== Option 2 ===

If you prefer not to modify your .bashrc file, put all the lines above in a "mesoscale.env" file instead and add a

<syntaxhighlight lang="Bash">
source mesoscale.env
</syntaxhighlight>

at the beginning of the meso_install.sh script.

Note that you will still need to have "." in your PATH and an unlimited stack size, so you will definitely need to have

<syntaxhighlight lang="Bash">
declare -x PATH=./:$PATH
ulimit -s unlimited
</syntaxhighlight>

in your .bashrc file.

=== Extra technical details ===

* WRF needs a NETCDF environment variable pointing to a single directory that contains everything (both the Fortran and the C libraries and include files). If such a directory is not available, one needs to create a unique directory with links to all the C and Fortran files, as sketched below. The NCDFLIB and NCDFINC variables are used by the physics.
* The WHERE_MPI environment variable is used by some scripts to make sure the right MPI installation is used.
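
For reference, here is a minimal sketch of how such a merged NetCDF directory can be built (all paths below are placeholders, not the actual spirit locations):

<syntaxhighlight lang="Bash">
# Placeholder paths: point these at the actual netcdf-c and netcdf-fortran trees.
mkdir -p $HOME/netcdf_merged/lib $HOME/netcdf_merged/include
ln -sf /path/to/netcdf-c/lib/* /path/to/netcdf-fortran/lib/* $HOME/netcdf_merged/lib/
ln -sf /path/to/netcdf-c/include/* /path/to/netcdf-fortran/include/* $HOME/netcdf_merged/include/
declare -x NETCDF=$HOME/netcdf_merged
</syntaxhighlight>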
  
 
== Set up the installer ==

Go to your 'data' directory (cd /homedata/_MY_LOGIN_ ''or'' cd /data/_MY_LOGIN_) and download the installer with the following command:
 
<syntaxhighlight lang="Bash">
svn co https://svn.lmd.jussieu.fr/Planeto/trunk/MESOSCALE/LMD_MM_MARS/SIMU
</syntaxhighlight>
  
Make sure the installer can be executed:

<syntaxhighlight lang="Bash">
chmod 755 SIMU/meso_install.sh
</syntaxhighlight>

Make a link to the main script in this SIMU directory (e.g. right where you are, in your data directory, the parent of SIMU):
 
<syntaxhighlight lang="Bash">
ln -sf SIMU/meso_install.sh .
</syntaxhighlight>

Run the installer with the -h flag to simply display the possible options:

<syntaxhighlight lang="Bash">
./meso_install.sh -h
</syntaxhighlight>

== Install the code ==

Update to the latest version of the installer:

<syntaxhighlight lang="Bash">
cd SIMU
svn update
chmod 755 meso_install.sh
cd ..
</syntaxhighlight>

Run the installer, providing a name for your specific directory:

<syntaxhighlight lang="Bash">
./meso_install.sh -n DESCRIBE_YOUR_RESEARCH_PROJECT
</syntaxhighlight>

''Important'': This will only work if you have access (i.e. an account) to the IN2P3 Gitlab project "La communauté des modèles atmosphériques planétaires" AND if you have added your personal ssh key there. Otherwise proceed as indicated below.
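
You can check beforehand that your ssh key is accepted (assuming the instance is gitlab.in2p3.fr; adapt if the project lives elsewhere):

<syntaxhighlight lang="Bash">
# Should greet you with your Gitlab username if the key is set up correctly.
ssh -T git@gitlab.in2p3.fr
</syntaxhighlight>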

''Special case'': In case you do not have a Gitlab account, ask for an archive (tar.gz) of the code. Let us assume the name is git-trunk-mesoscale-compile-run-spirit.tar.gz and the file is in the current directory (for instance /homedata/_MY_LOGIN_); then run the installer with the following command:

<syntaxhighlight lang="Bash">
./meso_install.sh -n DESCRIBE_YOUR_RESEARCH_PROJECT -a git-trunk-mesoscale-compile-run-spirit
</syntaxhighlight>

=== Troubleshooting (in case this happens to you) ===

If the GCM compilation fails with the error message "Can't locate Fcm/Config.pm in @INC (you may need to install the Fcm::Config module)", this is because "." is in your PATH before "FCM_V1.2/bin". You can fix it either by adapting the PATH in the environment file, adding

<syntaxhighlight lang="Bash">
declare -x PATH=/homedata/_MY_LOGIN_/DESCRIBE_YOUR_RESEARCH_PROJECT/code/LMDZ.COMMON/FCM_V1.2/bin/:$PATH
</syntaxhighlight>

or by removing the symbolic link to "fcm" in "/homedata/_MY_LOGIN_/DESCRIBE_YOUR_RESEARCH_PROJECT/code/LMDZ.COMMON/".
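
The second fix is a one-liner (the path follows the naming used above):

<syntaxhighlight lang="Bash">
# Removes the "fcm" symbolic link that shadows FCM_V1.2/bin.
rm /homedata/_MY_LOGIN_/DESCRIBE_YOUR_RESEARCH_PROJECT/code/LMDZ.COMMON/fcm
</syntaxhighlight>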

Once you have fixed the problem, you can recompile the GCM (which had failed previously) by doing

<syntaxhighlight lang="Bash">
cd /homedata/_MY_LOGIN_/DESCRIBE_YOUR_RESEARCH_PROJECT/
./compile_gcm.sh
</syntaxhighlight>

and recreate start files from a sample start_archive.nc by doing (in subdirectory gcm/newstart)

<syntaxhighlight lang="Bash">
./mini_startbase.sh
</syntaxhighlight>

=== Extra technical stuff ===

* The git tag of the version to be installed is hard-coded in meso_install.sh, e.g. version="tags/mesoscale-compile-run_MESOIPSL_exploration". The tags point to specific versions which have been tested and thus serve as references, e.g. mesoscale_compile-run_MESOIPSL on the git trunk.
* To recompile WRF (see the "readme" file):

<syntaxhighlight lang="Bash">
cd code/MESOSCALE/LMD_MM_MARS
makemeso -p mars_lmd_new
</syntaxhighlight>

To recompile WRF in debug mode:

<syntaxhighlight lang="Bash">
cd code/MESOSCALE/LMD_MM_MARS
makemeso -p mars_lmd_new -g
</syntaxhighlight>

The compilation script (and options) is MESOSCALE/LMD_MM_MARS/SRC/WRFV2/mars_lmd_new/makegcm_mpifort for the physics, and MESOSCALE/LMD_MM_MARS/makemeso for WRF.

== Run the full workflow GCM + initialization + mesoscale model ==

Have a look at the script named launch, either just for information or to change the number of processors or the step at which you start. Note that you can change the number of cores to run with in the header of the script, e.g. ''#SBATCH --ntasks=24'' to use 24 cores, but this does not work for now. Send the launch job to the cluster by typing

<syntaxhighlight lang="Bash">
sbatch launch
</syntaxhighlight>
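
For orientation, the header of such a script looks something like the following (a sketch; the job name is hypothetical and the actual launch script in SIMU defines its own directives):

<syntaxhighlight lang="Bash">
#!/bin/bash
#SBATCH --job-name=meso_run   # hypothetical job name, shown by squeue
#SBATCH --ntasks=24           # number of cores (changing it does not work for now)
</syntaxhighlight>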

IMPORTANT: if you have used "Option 2" in the environment setup, i.e. creating a dedicated "mesoscale.env" file, then you should source it in the "launch" script with something like

<syntaxhighlight lang="Bash">
source ../mesoscale.env
</syntaxhighlight>

You can check the status of the run by typing

<syntaxhighlight lang="Bash">
squeue -u $USER
</syntaxhighlight>

=== Extra technical stuff ===

* The namelist.input file contains the inputs for WRF. One can change the output rate by specifying "history_interval_s" in seconds (instead of "history_interval"), e.g. setting it to the timestep for an output at every step when debugging:

<syntaxhighlight lang="Bash">
history_interval_s=20
</syntaxhighlight>

* To run in hydrostatic mode, set this parameter in the "&dynamics" namelist:

<syntaxhighlight lang="Bash">
non_hydrostatic=F
</syntaxhighlight>

* Note that the full list of options for namelist.input can be found in "SIMU/namelist.input_full" (see the sketch below for how these entries fit in their namelist groups).
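
For illustration, a hedged sketch of how these entries sit in namelist.input; the group names follow standard WRF usage and are an assumption here, so check SIMU/namelist.input_full for the actual layout:

<syntaxhighlight lang="Fortran">
&time_control
 history_interval_s = 20   ! output every 20 seconds, e.g. every timestep
/

&dynamics
 non_hydrostatic = F       ! F = hydrostatic mode
/
</syntaxhighlight>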

== A more detailed run ==

The best option is probably to use a more complete startbase than the minimal one that was created. For instance, to get a more complete startbase, relink 'data_gcm' to point towards '/data/spiga/2021_STARTBASES_rev2460/MY35', as sketched below.
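
A minimal sketch, assuming data_gcm is the link created by the installer at the root of your project directory:

<syntaxhighlight lang="Bash">
cd /homedata/_MY_LOGIN_/DESCRIBE_YOUR_RESEARCH_PROJECT/   # assumed location of the data_gcm link
ln -snf /data/spiga/2021_STARTBASES_rev2460/MY35 data_gcm  # -n replaces the existing link
</syntaxhighlight>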

To use the tyler cap setting in namelist.wps, download the tylerall archive from 'https://web.lmd.jussieu.fr/~aslmd/mesoscale_model/data_static/' and extract its content in the folder named data_static.
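
For instance (the archive filename below is hypothetical; check the listing at the URL above for the actual name):

<syntaxhighlight lang="Bash">
cd data_static
# Hypothetical filename; adjust to the actual archive name in the listing.
wget https://web.lmd.jussieu.fr/~aslmd/mesoscale_model/data_static/tylerall.tar.gz
tar xzvf tylerall.tar.gz
</syntaxhighlight>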
[[Category:Mars-Mesoscale]]
