Setting Up the GCHP Environment

Previous | Next | Getting Started With GCHP | GCHP Main Page

  1. Hardware and Software Requirements
  2. Downloading Source Code and Data Directories
  3. Obtaining a Run Directory
  4. Setting Up the GCHP Environment
  5. Compiling
  6. Running GCHP: Basics
  7. Running GCHP: Configuration
  8. Output Data
  9. Developing GCHP
  10. Run Configuration Files


Recent Changes

Please note that starting with GCHP 12.5.0 the environment file must define the environment variable gFTL. If you have an existing environment file, please add the following lines when upgrading to GCHP 12.5.0:

# Set path to GMAO Fortran template library (gFTL)
export gFTL=$(readlink -f ./gFTL)
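
As a quick sanity check, you can confirm the variable resolved correctly after sourcing your updated environment file. Note that the readlink call above assumes you source the file from the directory that contains the gFTL folder.

echo $gFTL     # should print an absolute path ending in /gFTL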

Create an Environment File

You must load all necessary libraries and export certain environment variables before compiling GCHP. The GCHP environment is different from that of GEOS-Chem Classic and is often considered the largest obstacle to getting GCHP up and running for the first time. We have tried to make setting libraries and variables as automatic as possible to minimize problems. However, libraries are always specific to your local compute cluster, which presents compatibility challenges. We recommend simplifying the environment setup process by customizing a GCHP-specific environment file that works on your system and saving it for future work.

Sample environment files are included in the run directory in the environmentFileSamples subdirectory: several for the Harvard University Odyssey cluster and one for a more generic Linux system. You can use these to develop a file compatible with your system. Each sample environment file is customized for a specific combination of Fortran compiler, MPI implementation, netCDF libraries, and compute cluster. For clarity we recommend using the naming format gchp.compiler_mpi_cluster.env, for example gchp.ifort17_openmpi3_computecanada.env. Open several of the sample environment files and get familiar with the environment variables they set.
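
A typical workflow for adapting a sample file might look like the following sketch; the filenames are illustrative, following the naming convention above:

# Copy a sample file and rename it for your compiler/MPI/cluster combination
cp environmentFileSamples/gchp.ifort17_openmpi3_computecanada.env gchp.ifort17_openmpi3_mycluster.env
# Edit the module names and paths for your cluster, then source the file
source gchp.ifort17_openmpi3_mycluster.env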

An example of the environment variables needed for GCHP is shown below. In this example a version of netCDF is used that does not split the C and Fortran libraries, so setting environment variables for netCDF-Fortran is commented out. For more discussion of this, and whether you need to set these variables given your netCDF library, see the netCDF libraries section of this guide.

# Only echo in interactive shells
if [[ $- = *i* ]] ; then
  echo "Loading modules for GCHP on Odyssey, please wait ..."
fi

#==============================================================================
# %%%%% Clear existing environment variables %%%%%
#==============================================================================
unset GC_BIN
unset GC_INCLUDE
unset GC_LIB
unset GC_F_BIN
unset GC_F_INCLUDE
unset GC_F_LIB 

#==============================================================================
# Modules (specific to compute cluster)
#==============================================================================

module purge
module load git/2.17.0-fasrc01 

# Modules for CentOS7
module load intel/17.0.4-fasrc01
module load openmpi/3.1.1-fasrc01
module load netcdf/4.1.3-fasrc03 

#==============================================================================
# Environment variables
#============================================================================== 

# Make all files world-readable by default
umask 022 

# Specify compilers
export CC=gcc
export OMPI_CC=$CC 

export CXX=g++
export OMPI_CXX=$CXX

export FC=ifort
export F77=$FC
export F90=$FC
export OMPI_FC=$FC
export COMPILER=$FC
export ESMF_COMPILER=intel

# MPI Communication
export ESMF_COMM=openmpi
export MPI_ROOT=$MPI_HOME

# Base paths
export GC_BIN="$NETCDF_HOME/bin"
export GC_INCLUDE="$NETCDF_HOME/include"
export GC_LIB="$NETCDF_HOME/lib"

# Add to primary path
export PATH=${NETCDF_HOME}/bin:$PATH
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:${NETCDF_HOME}/lib

# If using NetCDF after the C/Fortran split (4.3+), then you will need to
# specify the following additional environment variables
#export GC_F_BIN="$NETCDF_FORTRAN_HOME/bin"
#export GC_F_INCLUDE="$NETCDF_FORTRAN_HOME/include"
#export GC_F_LIB="$NETCDF_FORTRAN_HOME/lib"
#export PATH=${NETCDF_FORTRAN_HOME}/bin:$PATH
#export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:${NETCDF_FORTRAN_HOME}/lib

# Set ESMF optimization (g=debugging, O=optimized (capital o))
export ESMF_BOPT=O

# Set path to GMAO Fortran template library (gFTL)
export gFTL=$(readlink -f ./gFTL)

# Specify number of job slots for build
export NUM_JOB_SLOTS=8

#==============================================================================
# Raise memory limits
#==============================================================================

ulimit -c unlimited              # coredumpsize
ulimit -l unlimited              # memorylocked
ulimit -u 50000                  # maxproc
ulimit -v unlimited              # vmemoryuse
ulimit -s unlimited              # stacksize

#==============================================================================
# Print information for clarity
#==============================================================================

module list
echo ""
echo "Environment variables set:"
echo ""
echo "LD_LIBRARY_PATH: ${LD_LIBRARY_PATH}"
echo ""
echo "ESMF_COMM: ${ESMF_COMM}"
echo "ESMP_BOPT: ${ESMF_BOPT}"
echo "MPI_ROOT: ${MPI_ROOT}"
echo ""
echo "CC: ${CC}"
echo "OMPI_CC: ${OMPI_CC}"
echo ""
echo "CXX: ${CXX}"
echo "OMPI_CXX: ${OMPI_CXX}"
echo ""
echo "FC: ${FC}"
echo "F77: ${F77}"
echo "F90: ${F90}"
echo "OMPI_FC: ${OMPI_FC}"
echo "COMPILER: ${COMPILER}"
echo "ESMF_COMPILER: ${ESMF_COMPILER}"
echo ""
echo "GC_BIN: ${GC_BIN}"
echo "GC_INCLUDE: ${GC_INCLUDE}"
echo "GC_LIB: ${GC_LIB}"
echo ""
#echo "GC_F_BIN: ${GC_F_BIN}"
#echo "GC_F_INCLUDE: ${GC_F_INCLUDE}"
#echo "GC_F_LIB: ${GC_F_LIB}"
#echo ""
echo "Done sourcing ${BASH_SOURCE[0]}"

System memory limits and stack size should be set to unlimited to avoid memory problems. Such problems typically manifest as sudden termination upon file read or a segmentation fault during advection. You can find out what your system limits are by typing the following at the command prompt:

ulimit -a

You will see something like this:

core file size          (blocks, -c) unlimited
data seg size           (kbytes, -d) unlimited
scheduling priority             (-e) 0
file size               (blocks, -f) unlimited
pending signals                 (-i) 1030083
max locked memory       (kbytes, -l) unlimited
max memory size         (kbytes, -m) unlimited
open files                      (-n) 100000
pipe size            (512 bytes, -p) 8
POSIX message queues     (bytes, -q) 819200
real-time priority              (-r) 0
stack size              (kbytes, -s) unlimited
cpu time               (seconds, -t) unlimited
max user processes              (-u) 4096
virtual memory          (kbytes, -v) unlimited
file locks                      (-x) unlimited

All example environment files for GCHP explicitly set several of these to unlimited, as shown in the example above. If you run into a memory issue, be sure to check your limits against the list above to see if anything may be limiting your run.
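
If you only want to check the limits that GCHP cares most about, you can query them individually, for example:

ulimit -s     # stack size; should print "unlimited"
ulimit -l     # max locked memory; should print "unlimited"
ulimit -v     # virtual memory; should print "unlimited"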

Expanding MPI Options (Advanced)

GCHP is currently not tested with MPI implementations other than OpenMPI 3. However, we encourage users to experiment with other MPI implementations. To do so, follow the steps below. You may need to make tweaks, and the build may still fail. Whether you succeed or fail, please report what you tried and your results by opening a GCHP issue on GitHub. Before you begin, check the MPI section of this guide to see if there is any news about known issues.

  1. Specify environment variable ESMF_COMM to match the MPI implementation. Options are currently in place for MVAPICH2 (ESMF_COMM=mvapich2), OpenMPI (ESMF_COMM=openmpi), and a generic MPI implementation (ESMF_COMM=mpi). The generic option is sufficient when, for example, running with the SGI MPI implementation on NASA's Pleiades servers. If you are using a new MPI implementation not covered by one of these options, we recommend running first with ESMF_COMM=mpi.
  2. Check that ${MPI_HOME} exists and contains the path to your MPI library. It should be set automatically when you load the library.
  3. Ensure that you have valid mpif90 and mpifort executables. These almost always perform the same role but both names are invoked in the build sequence. If you have one but not the other, we strongly recommend that you make a symbolic link to the working binary with the name of the missing binary in a dedicated folder, and then add that folder to your path at the command line and in your .bashrc. For example, if you have a mpifort binary but not an mpif90 binary, run the following commands:
mkdir $HOME/mpi_extra
cd $HOME/mpi_extra
ln -s $( which mpifort ) mpif90
export PATH=${PATH}:${HOME}/mpi_extra
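
To make the PATH addition persist across sessions, also append it to your .bashrc. This is a sketch assuming you used the mpi_extra folder from the example above:

echo 'export PATH=${PATH}:${HOME}/mpi_extra' >> ~/.bashrc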

You should now try to compile GCHP (a minimal sketch is shown below). If the generic option does not work, then you will need to implement a new option. This involves updating GCHP source code. An example of how to do this for Intel MPI follows the sketch:
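
A minimal first compile attempt might look like the following, assuming your environment file sets ESMF_COMM and that the standard run directory Makefile targets described on the Compiling page are available; the environment filename is hypothetical:

source gchp.ifort17_newmpi_mycluster.env   # hypothetical file setting ESMF_COMM=mpi
make compile_clean                         # clean build from the run directory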

  1. Decide on a new name, such as ESMF_COMM=intel for the Intel MPI implementation.
  2. Determine the relevant include path and linker commands for your MPI implementation. In this example for Intel MPI they are $(MPI_ROOT)/include and -L$(MPI_ROOT)/lib -lmpi -lmpi++ respectively.
  3. Update source code files CodeDir/GCHP/GIGC.mk and CodeDir/GCHP/Shared/Config/ESMA_base.mk. In both files, search for the environment variable ESMF_COMM; you should find a small set of occurrences in a single "if..else.." block. Add a new clause below the one for mvapich2, as shown below.

In GIGC.mk:

else ifeq ($(ESMF_COMM),intel)
   # %%%%% Intel MPI %%%%%
   MPI_LIB     := -L$(MPI_ROOT)/lib -lmpi -lmpi++

In ESMA_base.mk:

else ifeq ($(ESMF_COMM),intel)
   INC_MPI := $(MPI_ROOT)/include
   LIB_MPI := -L$(MPI_ROOT)/lib -lmpi -lmpi++
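
As a quick sanity check (paths as given in step 3 above), you can confirm the new clause is present in both files:

grep -n 'ESMF_COMM),intel' CodeDir/GCHP/GIGC.mk CodeDir/GCHP/Shared/Config/ESMA_base.mk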

Previous | Next | Getting Started With GCHP | GCHP Main Page