Setting Up the GCHP Environment

Previous | Next | Getting Started with GCHP

  1. Downloading Source Code
  2. Obtaining a Run Directory
  3. Setting Up the GCHP Environment
  4. Compiling
  5. Basic Example Run
  6. Run Configuration Files
  7. Advanced Run Examples
  8. Output Data
  9. Developing GCHP

== Overview ==
 
  
 
Once you are sure you meet all [[GCHP_Hardware_and_Software_Requirements|GCHP hardware and software requirements]], you must load all necessary libraries and export certain environment variables before compiling GCHP. If you are using Harvard's [https://www.rc.fas.harvard.edu/odyssey Odyssey compute cluster], setting up an interactive session is required. For non-Odyssey users, check with your IT staff about the preferred protocol.
 
== Step 1: Set Up an Interactive Session (Harvard Odyssey Users Only) ==
 
 
All CPU- or memory-intensive work on Odyssey should be performed with an [https://www.rc.fas.harvard.edu/resources/running-jobs/#Interactive_jobs_and_srun interactive session] so that you do not use up shared login node resources. Use the <tt>srun</tt> command to request an interactive session. The number of CPUs that you request must be a multiple of 6 (at least one core for each of the [http://geos-chem.org/cubed_sphere/CubeSphere_step-by-step.html cubed-sphere faces], and the same number of cores for each face).
 
 
For example, to start a 3-hour interactive session on the Odyssey regal partition with 6 cores on 1 node and 6000 MB of RAM per CPU, type:
 
 
srun -p regal --pty --x11=first --mem-per-cpu=6000 -N 1 -n 6 -t 00-03:00 /bin/bash
 
 
If you want to run on more than one node, say 12 cores, distributed evenly across 2 nodes, use:
 
 
srun -p regal --pty --x11=first --ntasks-per-node=6 --mem-per-cpu=6000 -N 2 -n 12 -t 0-03:00 /bin/bash
 
 
The new argument <tt>--ntasks-per-node=6</tt> guarantees that the cores are evenly distributed over the nodes (a requirement for GCHP).
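
Once the interactive session starts, you can confirm that the allocation matches your request using standard SLURM commands, for example:


echo $SLURM_NTASKS                               # total number of allocated tasks (e.g. 6 or 12)
scontrol show job $SLURM_JOB_ID | grep NumNodes  # confirm the number of nodes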
 
 
=== Jacob Group Users ===
 
 
If you are using the <tt>env</tt> repository set up by the GEOS-Chem Support Team for Jacob Group users, you can use the shortcut script <tt>interactive_gchp</tt> located in the <tt>/env/bin</tt> directory to simplify requesting an interactive session. The <tt>interactive_gchp</tt> script takes the following arguments:
 
 
interactive_gchp NODES CPUs MEM-PER-CPU-IN-MB [TIME-IN-MINUTES] [PARTITION]
 
 
If you omit <tt>TIME-IN-MINUTES</tt>, then 60 minutes will be requested; if you omit <tt>PARTITION</tt>, then <tt>jacob</tt> will be used.
 
 
To use <tt>interactive_gchp</tt> to request a 3-hour interactive session on the regal partition with 6 cores on 1 node and 6000 MB of RAM per CPU (as in the <tt>srun</tt> example above), you would type:
 
 
interactive_gchp 1 6 6000 180 regal
 
 
If you are not in the Jacob group and would like a copy of this script, please contact us.
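
If you write your own wrapper instead, a minimal sketch along the same lines might look like the following. This is only an illustration that assumes the script simply forwards its arguments to <tt>srun</tt> with the defaults described above; the actual <tt>interactive_gchp</tt> script may differ.


#!/bin/bash
# Hypothetical interactive_gchp-style wrapper (for illustration only)
# Usage: interactive_gchp NODES CPUs MEM-PER-CPU-IN-MB [TIME-IN-MINUTES] [PARTITION]
NODES=$1
CPUS=$2
MEM=$3
TIME=${4:-60}          # default to 60 minutes if omitted
PARTITION=${5:-jacob}  # default to the jacob partition if omitted
srun -p $PARTITION --pty --x11=first --mem-per-cpu=$MEM --ntasks-per-node=$(( CPUS / NODES )) -N $NODES -n $CPUS -t $TIME /bin/bash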
 
 
== Step 2: Load Libraries and Environment Settings ==
 
  
 
The GCHP environment is different from that of GEOS-Chem classic, and we have tried to make setting libraries and environment variables as automatic as possible to minimize problems. We recommend simplifying the environment setup process by customizing a GCHP-specific <tt>.bashrc</tt> (or <tt>.cshrc</tt>) file that you source prior to compiling and running GCHP.

Three sample .bashrc files are included in the run directory, two for the Harvard University Odyssey cluster and one for the Dalhousie University Glooscap cluster. These are located in the bashrcSamples subdirectory. You can use these to develop one compatible with your system. Each sample .bashrc file is customized for a specific combination of Fortran compiler, MPI implementation, and compute cluster. For example, one of the .bashrc files customized for use on the Harvard Odyssey compute cluster uses ifort15 and the MVAPICH2 implementation of MPI. To set up your environment, source your .bashrc file with the command: source .bashrc.

Note that while GCHP requires a different set of libraries and environment variables than GEOS-Chem classic, there are some similarities, such as the GC_BIN, GC_INCLUDE, and GC_LIB netCDF variables. For more information on defining these environment variables, see the Setting Unix environment variables for GEOS-Chem wiki page. Other examples are the ESMF_COMPILER and ESMF_BOPT variables set in the bash script build.sh, which specify intel by default (more on that file in the next section of this guide).
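
As a rough illustration, the core of a GCHP-specific .bashrc might contain lines like the following. The module names and the NETCDF_HOME path are placeholders only; substitute the modules and paths available on your own system.


# Load compiler, MPI, and netCDF modules (placeholder names; adjust for your cluster)
module load intel/15.0.0
module load mvapich2/2.2
module load netcdf/4.1.3

# netCDF paths used by the GEOS-Chem build (NETCDF_HOME is a placeholder)
export NETCDF_HOME=/path/to/netcdf
export GC_BIN=$NETCDF_HOME/bin
export GC_INCLUDE=$NETCDF_HOME/include
export GC_LIB=$NETCDF_HOME/lib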

=== Specifying the MPI Implementation ===

The GCHP run directory is set up by default for use with the MVAPICH2 implementation of MPI. However, we realize that you may want to use a different implementation, possibly out of necessity. To do this, follow the steps below.

  1. Specify the environment variable ESMF_COMM to match the MPI implementation. Options are currently in place for MVAPICH2 (ESMF_COMM=mvapich2), OpenMPI (ESMF_COMM=openmpi), and a generic MPI implementation (ESMF_COMM=mpi). The generic option is sufficient when, for example, running with the SGI MPI implementation on NASA's Pleiades servers. If you are using a new MPI implementation not covered by one of these options, we recommend running first with ESMF_COMM=mpi.
  2. Specify the environment variable MPI_ROOT to point to the MPI root directory, such that $MPI_ROOT/bin/mpirun points to the correct MPI run binary (see the example after this list).
  3. Ensure that you have valid mpif90 and mpifort executables. These almost always perform the same role but both names are invoked in the build sequence. If you have one but not the other, we strongly recommend that you make a softlink to the working binary with the name of the missing binary in a dedicated folder, and then add that folder to your path at the command line and in your .bashrc. For example, if you have a mpifort binary but not an mpif90 binary, run the following commands:
mkdir $HOME/mpi_extra                   # create a dedicated folder for the extra link
cd $HOME/mpi_extra
ln -s $( which mpifort ) mpif90         # point an mpif90 symlink at the existing mpifort binary
export PATH=${PATH}:${HOME}/mpi_extra   # add the folder to your path (also add this line to your .bashrc)
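
For example, steps 1 and 2 above amount to adding lines like these to your .bashrc (the OpenMPI install prefix below is only a placeholder):


export ESMF_COMM=openmpi      # match your MPI implementation
export MPI_ROOT=/opt/openmpi  # placeholder: your MPI installation's root directory
ls $MPI_ROOT/bin/mpirun       # sanity check: the MPI launcher should exist at this path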

You should now try to compile GCHP. If the generic option does not work then you will need to implement a new option. This involves updating GCHP source code. An example of how to do this for Intel MPI is as follows:

  1. Decide on a new name, such as ESMF_COMM=intel for the Intel MPI implementation.
  2. Determine the relevant include path and linker commands for your MPI implementation. In this example for Intel MPI, they are $(MPI_ROOT)/include and -L$(MPI_ROOT)/lib -lmpi -lmpi++, respectively.
  3. Update source code files CodeDir/GCHP/GIGC.mk and CodeDir/GCHP/Shared/Config/ESMA_base.mk. In both files, search for the environment variable ESMF_COMM. You should find a small set of occurrences in a single "if..else.." block. Add a new clause below the one for mvapich2 as follows.

In GIGC.mk:

else ifeq ($(ESMF_COMM),intel)
   # %%%%% Intel MPI %%%%%
   MPI_LIB     := -L$(MPI_ROOT)/lib -lmpi -lmpi++

In ESMA_base.mk:

else ifeq ($(ESMF_COMM),intel)
   INC_MPI := $(MPI_ROOT)/include
   LIB_MPI := -L$(MPI_ROOT)/lib -lmpi -lmpi++
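
After adding these clauses, set ESMF_COMM to the new name in your .bashrc (or at the command line) before recompiling, for example:


export ESMF_COMM=intel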

If you have tried all of this and are still having trouble, please contact the GEOS-Chem Support Team. If you have a new MPI implementation working, please also let us know! We may want to bring in your updates as a permanent option for use by the wider community.



Previous | Next | GCHP Home