Setting Up the GCHP Environment

Previous | Next | Getting Started With GCHP | GCHP Main Page

  1. Hardware and Software Requirements
  2. Downloading Source Code and Data Directories
  3. Obtaining a Run Directory
  4. Setting Up the GCHP Environment
  5. Compiling
  6. Running GCHP: Basics
  7. Running GCHP: Configuration
  8. Output Data
  9. Developing GCHP
  10. Run Configuration Files


Please note that documentation on this page primarily reflects the latest GCHP public release which is currently the GCHP 12 series. The documentation will be updated for the GCHP 13.0.0 release over the coming months.

Recent Changes

Please note that starting in GCHP 12.5.0 the environment file must define the environment variable gFTL. If you have an existing environment file, please add the following when upgrading to GCHP 12.5.0:

# Set path to GMAO Fortran template library (gFTL)
export gFTL=$(readlink -f ./gFTL)

This code assumes your environment file is sourced from the run directory, where a gFTL symbolic link was created during run directory creation. If you source the environment file from a different location, adjust the path as needed.
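For example, if you keep your environment file outside the run directory, you could point gFTL at the run directory's symbolic link using an absolute path. The path below is hypothetical; substitute the location of your own run directory:

# Set path to GMAO Fortran template library (gFTL)
# (hypothetical absolute path to the gFTL link in your run directory)
export gFTL=$(readlink -f /path/to/my_run_directory/gFTL)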

Create an Environment File

You must load all necessary libraries and export certain environment variables before compiling GCHP. The GCHP environment is different from GEOS-Chem Classic and is often considered the largest obstacle to getting GCHP up and running for the first time. We have tried to make setting libraries and variables as automatic as possible to minimize problems. However, libraries will always be specific to your local compute cluster, which presents compatibility challenges. We recommend simplifying the environment setup process by customizing a GCHP-specific environment file that works on your system and saving it for future use.

Sample environment files are included in the run directory in the environmentFileSamples subdirectory, several for the Harvard University Odyssey cluster and one for a more generic Linux system. You can use these to develop an environment file compatible with your system. Each sample environment file is customized for a specific combination of Fortran compiler, MPI implementation, netCDF libraries, and compute cluster. For clarity we recommend using the naming format gchp.compiler_mpi_cluster.env, for example gchp.ifort17_openmpi3_computecanada.env. Open several of the sample environment files and get familiar with the environment variables they set.
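A minimal workflow for creating your own environment file might look like the following. The file names below are hypothetical; start from whichever sample best matches your compiler and MPI, and name the copy for your own cluster:

# Copy a sample environment file and rename it for your system
# (hypothetical file names; adjust to match your compiler, MPI, and cluster)
cp environmentFileSamples/gchp.ifort17_openmpi3_odyssey.env gchp.ifort17_openmpi3_mycluster.env

# Edit the module load commands and paths for your cluster, then source the
# file in every session before compiling or running GCHP
source gchp.ifort17_openmpi3_mycluster.env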

An example of the environment variables needed for GCHP is shown below. In this example a version of netCDF is used that does not split the C and Fortran libraries, so setting the environment variables for netCDF-Fortran is commented out. For more discussion of this, and whether you need to set those variables based on your netCDF library, see the netCDF libraries section of this guide.

if [[ $- = *i* ]] ; then
  echo "Loading modules for GCHP on Odyssey, please wait ..."
fi

#==============================================================================
# %%%%% Clear existing environment variables %%%%%
#==============================================================================
unset GC_BIN
unset GC_INCLUDE
unset GC_LIB
unset GC_F_BIN
unset GC_F_INCLUDE
unset GC_F_LIB 

#==============================================================================
# Modules (specific to compute cluster)
#==============================================================================

module purge
module load git/2.17.0-fasrc01 

# Modules for CentOS7
module load intel/17.0.4-fasrc01
module load openmpi/3.1.1-fasrc01
module load netcdf/4.1.3-fasrc03 

#==============================================================================
# Environment variables
#============================================================================== 

# Make all files world-readable by default
umask 022 

# Specify compilers
export CC=gcc
export OMPI_CC=$CC 

export CXX=g++
export OMPI_CXX=$CXX

export FC=ifort
export F77=$FC
export F90=$FC
export OMPI_FC=$FC
export COMPILER=$FC
export ESMF_COMPILER=intel

# MPI Communication
export ESMF_COMM=openmpi
export MPI_ROOT=$MPI_HOME

# Base paths
export GC_BIN="$NETCDF_HOME/bin"
export GC_INCLUDE="$NETCDF_HOME/include"
export GC_LIB="$NETCDF_HOME/lib"

# Add to primary path
export PATH=${NETCDF_HOME}/bin:$PATH
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:${NETCDF_HOME}/lib

# If using NetCDF after the C/Fortran split (4.3+), then you will need to
# specify the following additional environment variables
#export GC_F_BIN="$NETCDF_FORTRAN_HOME/bin"
#export GC_F_INCLUDE="$NETCDF_FORTRAN_HOME/include"
#export GC_F_LIB="$NETCDF_FORTRAN_HOME/lib"
#export PATH=${NETCDF_FORTRAN_HOME}/bin:$PATH
#export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:${NETCDF_FORTRAN_HOME}/lib

# Set ESMF optimization (g=debugging, O=optimized (capital o))
export ESMF_BOPT=O

# Set path to GMAO Fortran template library (gFTL)
export gFTL=$(readlink -f ./gFTL)

# Specify number of job slots for build
export NUM_JOB_SLOTS=8

#==============================================================================
# Raise memory limits
#==============================================================================

ulimit -c 0                      # coredumpsize
ulimit -l unlimited              # memorylocked
ulimit -u 50000                  # maxproc
ulimit -v unlimited              # vmemoryuse
ulimit -s unlimited              # stacksize

#==============================================================================
# Print information for clarity
#==============================================================================

module list
echo ""
echo "Environment variables set:"
echo ""
echo "LD_LIBRARY_PATH: ${LD_LIBRARY_PATH}"
echo ""
echo "ESMF_COMM: ${ESMF_COMM}"
echo "ESMP_BOPT: ${ESMF_BOPT}"
echo "MPI_ROOT: ${MPI_ROOT}"
echo ""
echo "CC: ${CC}"
echo "OMPI_CC: ${OMPI_CC}"
echo ""
echo "CXX: ${CXX}"
echo "OMPI_CXX: ${OMPI_CXX}"
echo ""
echo "FC: ${FC}"
echo "F77: ${F77}"
echo "F90: ${F90}"
echo "OMPI_FC: ${OMPI_FC}"
echo "COMPILER: ${COMPILER}"
echo "ESMF_COMPILER: ${ESMF_COMPILER}"
echo ""
echo "GC_BIN: ${GC_BIN}"
echo "GC_INCLUDE: ${GC_INCLUDE}"
echo "GC_LIB: ${GC_LIB}"
echo ""
#echo "GC_F_BIN: ${GC_F_BIN}"
#echo "GC_F_INCLUDE: ${GC_F_INCLUDE}"
#echo "GC_F_LIB: ${GC_F_LIB}"
#echo ""
echo "Done sourcing ${BASH_SOURCE[0]}"

System memory limits and stack size should be set to unlimited to avoid memory problems, which typically manifest as sudden termination upon file read or a segmentation fault during advection. You can find out what your system limits are by typing the following at the command prompt:

ulimit -a

You will see something like this:

core file size          (blocks, -c) unlimited
data seg size           (kbytes, -d) unlimited
scheduling priority             (-e) 0
file size               (blocks, -f) unlimited
pending signals                 (-i) 1030083
max locked memory       (kbytes, -l) unlimited
max memory size         (kbytes, -m) unlimited
open files                      (-n) 100000
pipe size            (512 bytes, -p) 8
POSIX message queues     (bytes, -q) 819200
real-time priority              (-r) 0
stack size              (kbytes, -s) unlimited
cpu time               (seconds, -t) unlimited
max user processes              (-u) 4096
virtual memory          (kbytes, -v) unlimited
file locks                      (-x) unlimited

All of the example environment files for GCHP explicitly set several of these limits to unlimited, as shown in the example above. If you run into a memory issue, check your limits against the list above to see whether anything may be constraining your run.
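If any of these limits shows up as a finite value, you can raise it for the current shell session before running GCHP, just as the example environment file above does. Note that hard limits may be capped by your system administrator:

# Raise the limits GCHP is most sensitive to for the current shell
ulimit -s unlimited              # stacksize
ulimit -l unlimited              # memorylocked
ulimit -v unlimited              # vmemoryuse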


Previous | Next | Getting Started With GCHP | GCHP Main Page