Setting Up the GCHP Environment
Create an Environment File
You must load all necessary libraries and export certain environment variables before compiling GCHP. The GCHP environment differs from that of GEOS-Chem Classic and is often considered the largest obstacle to getting GCHP up and running for the first time. We have tried to make setting libraries and variables as automatic as possible to minimize problems; however, libraries are always specific to your local compute cluster, which presents compatibility challenges. We recommend simplifying the environment setup process by customizing a GCHP-specific .bashrc file that works on your system and saving it for future work.
Sample .bashrc files are included in the bashrcSamples subdirectory of the run directory: several for the Harvard University Odyssey cluster and one for a more generic Linux system. You can use these as templates to develop a file compatible with your system. Each sample .bashrc file is customized for a specific combination of Fortran compiler, MPI implementation, netCDF libraries, and compute cluster. For clarity we recommend using the naming format gchp.compilerNameVersion_mpiNameVersion_clusterName.bashrc, for example gchp.ifort17.0.4_openMPI3.1.0_computecanada.bashrc.
We recommend opening several of the sample environment files and getting familiar with the environment variables they set. The files print these variables to the screen when sourced. This is particularly useful for logging if you automatically source the environment file within a run script, as all of the sample run scripts included in the run directory do: the environment variables are then written to your system log file and reflect the settings used during your run, which is useful for debugging and archiving.
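As a brief illustration (the file name below is a placeholder for whichever .bashrc you created), a run script might source the environment file near the top so that the echoed settings end up in the job log:

source /path/to/gchp.ifort17.0.4_openMPI3.1.0_myCluster.bashrc   # placeholder name; use your own .bashrc
# Any variables the environment file prints when sourced are now captured in the job's log file.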
Examples of the kinds of environment variables set in a GCHP .bashrc are as follows:
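The values below are illustrative placeholders only; the exact variables, paths, and versions depend on your compilers, MPI implementation, and netCDF installation.

# Compiler settings (values are placeholders)
export FC=ifort
export CC=icc
export CXX=icpc
export COMPILER=$FC
export ESMF_COMPILER=intel

# MPI settings
export ESMF_COMM=openmpi                      # MPI implementation used when building and running GCHP
export MPI_ROOT=/path/to/your/mpi             # such that $MPI_ROOT/bin/mpirun exists

# netCDF library settings (paths are placeholders)
export NETCDF_HOME=/path/to/netcdf
export GC_BIN=$NETCDF_HOME/bin
export GC_INCLUDE=$NETCDF_HOME/include
export GC_LIB=$NETCDF_HOME/lib

# If netCDF-Fortran is built separately from netCDF-C
export NETCDF_FORTRAN_HOME=/path/to/netcdf-fortran
export GC_F_BIN=$NETCDF_FORTRAN_HOME/bin
export GC_F_INCLUDE=$NETCDF_FORTRAN_HOME/include
export GC_F_LIB=$NETCDF_FORTRAN_HOME/lib

# OpenMP settings
export OMP_NUM_THREADS=1                      # GCHP parallelizes with MPI rather than OpenMP
export OMP_STACKSIZE=500m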
System memory limits should also be set to unlimited if possible.
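For example, the following limits could be lifted in the same .bashrc (some systems cap these and will ignore or reject the requests):

ulimit -s unlimited        # stack size
ulimit -l unlimited        # max locked memory
ulimit -u unlimited        # max user processes
ulimit -v unlimited        # virtual memory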
Also included in the sample environment files are a few aliases for commands that are commonly used when developing, compiling, and running GCHP. It may be useful to look at them to see if you would like to adopt them or add your own.
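As a purely hypothetical illustration (these specific aliases are not necessarily the ones in the sample files), such aliases might look like:

alias tailrun='tail -f gchp.log'               # hypothetical: follow the GCHP log during a run
alias cleanlogs='rm -f gchp.log slurm-*.out'   # hypothetical: remove old log files before a new run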
Expanding MPI Options
GCHP is compatible with the MPICH, OpenMPI, and MVAPICH2 MPI implementations. However, you may want, or need, to use a different implementation. To do this, follow the steps below.
- Specify environment variable ESMF_COMM to match your MPI implementation. Options are currently in place for MVAPICH2 (ESMF_COMM=mvapich2), OpenMPI (ESMF_COMM=openmpi), and a generic MPI implementation (ESMF_COMM=mpi). The generic option is sufficient when, for example, running with the SGI MPI implementation on NASA's Pleiades servers. If you are using a new MPI implementation not covered by one of these options, we recommend running first with ESMF_COMM=mpi.
- Specify the environment variable MPI_ROOT to point to the MPI root directory, such that $MPI_ROOT/bin/mpirun points to the correct MPI run binary (a sketch of these exports follows this list).
- Ensure that you have valid mpif90 and mpifort executables. These almost always perform the same role, but both names are invoked in the build sequence. If you have one but not the other, we strongly recommend making a symbolic link to the working binary, named after the missing binary, in a dedicated folder, and then adding that folder to your path both at the command line and in your .bashrc. For example, if you have an mpifort binary but not an mpif90 binary, run the following commands:
mkdir $HOME/mpi_extra
cd $HOME/mpi_extra
ln -s $( which mpifort ) mpif90
export PATH=${PATH}:${HOME}/mpi_extra
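Putting the first two steps together, the relevant exports in your .bashrc might look like the sketch below; the openmpi value and installation path are assumptions and should be replaced with those for your actual MPI implementation:

export ESMF_COMM=openmpi                      # or mvapich2, mpi, etc.
export MPI_ROOT=/path/to/your/mpi             # placeholder installation path
ls $MPI_ROOT/bin/mpirun                       # verify the run binary is where GCHP expects it
which mpif90 mpifort                          # verify both compiler wrappers are on your path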
You should now try to compile GCHP. If the generic option does not work then you will need to implement a new option. This involves updating GCHP source code. An example of how to do this for Intel MPI is as follows:
- Decide on a new name, such as ESMF_COMM=intel for the Intel MPI implementation.
- Determine the relevant include path and linker commands for your MPI implementation (see the note after the code snippets below for one way to find these). In this example for Intel MPI they are $(MPI_ROOT)/include and -L$(MPI_ROOT)/lib -lmpi -lmpi++ respectively.
- Update the source code files CodeDir/GCHP/GIGC.mk and CodeDir/GCHP/Shared/Config/ESMA_base.mk. In both files, search for the environment variable ESMF_COMM. You should find a small set of occurrences in a single "if..else.." block. Add a new clause below the one for mvapich2, as follows.
In GIGC.mk:
else ifeq ($(ESMF_COMM),intel)
# %%%%% Intel MPI %%%%%
MPI_LIB := -L$(MPI_ROOT)/lib -lmpi -lmpi++
In ESMA_base.mk:
else ifeq ($(ESMF_COMM),intel)
INC_MPI := $(MPI_ROOT)/include
LIB_MPI := -L$(MPI_ROOT)/lib -lmpi -lmpi++
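If you are unsure what include path and linker commands your MPI implementation needs, the wrapper compilers can often report them directly (option names vary by implementation, so check your MPI documentation if these do not work):

mpif90 -show        # MPICH-based and Intel MPI wrappers print the underlying compile/link command
mpif90 --showme     # OpenMPI wrappers use --showme (or -showme) instead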
If you have tried all of this and are still having trouble, please contact the GEOS-Chem Support Team. If you get a new MPI implementation working, please also let us know! We may want to bring in your updates as a permanent option for use by the wider community.