Running GCHP: Basics

From Geos-chem
Revision as of 22:59, 6 March 2019 by Lizzie Lundgren (Talk | contribs) (Lizzie Lundgren moved page GCHP Basic Example Run to Running GCHP: Basics)


Previous | Next | User Manual Home | GCHP Home

  1. Hardware and Software Requirements
  2. Downloading Source Code and Data Directories
  3. Obtaining a Run Directory
  4. Setting Up the GCHP Environment
  5. Compiling
  6. Running GCHP: Part 1
  7. Running GCHP: Part 2
  8. Output Data
  9. Developing GCHP
  10. Run Configuration Files


This page presents the basic information needed to run GCHP, how to verify a successful run, and how to reuse a run directory. The default GCHP run directories are configured for a 1-hr simulation at c24 resolution using native resolution meteorology, six cores, and one node. This simple configuration is a good test case to check that GCHP runs on your system. Typically the TransportTracer simulation requires about 50 GB of memory and the standard and benchmark simulations require about 110 GB. More advanced instructions for configuring your GCHP run with different settings are in the next chapter.

Pre-run Checklist

Prior to running GCHP, always run through the following checklist to ensure everything is set up properly.

  1. Your run directory contains the executable geos.
  2. All symbolic links are present in your run directory and point to a valid path. These include TileFiles, MetDir, MainDataDir, ChemDataDir, CodeDir, and an initial restart file at the grid resolution you will run at.
  3. The input meteorology resolution and source are as you intend (inspect with "grep MetDir ExtData.rc" and "file MetDir"). Note: for versions 12.1.0 and later, create a new run directory if you wish to change Met source.
  4. You have looked through and set all configurable settings in the run configuration file (discussed in the next chapter).
  5. You have a run script (see below for information about run scripts)
  6. The resource allocations in your run configuration file and in your run script are consistent (number of nodes and cores).
  7. The run script sources your environment file that you used for compiling GCHP (gchp.env for version 12.1.0 and later).
  8. If reusing a run directory, you have archived your last run or discarded it with 'make cleanup_output' (optional but recommended; discussed below)
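The file and link checks in the list above can be sketched as a small shell function. This is a minimal sketch, not part of GCHP itself; the file and link names come from the checklist, and you should adapt them to your version and setup.

```shell
# Hypothetical sketch of the pre-run checklist; file and link names are
# taken from the checklist above and may differ in your GCHP version.
check_rundir() {
  local ok=0
  [ -x ./geos ] || { echo "missing executable: geos"; ok=1; }
  for link in TileFiles MetDir MainDataDir ChemDataDir CodeDir; do
    [ -e "$link" ] || { echo "missing or broken link: $link"; ok=1; }
  done
  if [ -f cap_restart ]; then
    echo "note: cap_restart exists; delete it before starting a fresh run"
  fi
  return $ok
}
```

Run check_rundir from the top level of the run directory; it prints one line for each missing item and returns non-zero if anything is wrong.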

How to Run GCHP

You can run GCHP locally from within your run directory (interactively) or by submitting your run to your cluster's job scheduler. To make running GCHP simpler, the GCHP run directory contains a folder called runScriptSamples with example scripts for running GCHP. Each script includes additional steps to make the run process easier, including sourcing your environment file so all libraries are loaded, deleting file cap_restart left over from any previous runs, sourcing the run configuration file (more on this in the next chapter), and sending standard output to a log file. cap_restart is a text file output by GCHP containing the simulation end date, and in some instances GCHP will attempt to start new runs at that date. It is therefore good practice to delete the file when rerunning within the same run directory.

Running Interactively

Use the example interactive run script to run GCHP locally on your machine. Before running, check that you have at least six cores available. Then copy the script to the main level of your run directory and type the following at the command prompt:


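The commands might look like the sketch below, where the script name gchp.local.run is an assumption; list the contents of runScriptSamples in your run directory to find the actual script names for your version.

```shell
# "gchp.local.run" is an assumed example script name; list runScriptSamples
# in your run directory to find the real one for your version.
script=gchp.local.run
if [ -f "runScriptSamples/$script" ]; then
  cp "runScriptSamples/$script" .      # copy to the top of the run directory
  ./"$script"                          # run GCHP interactively
else
  echo "copy an example script from runScriptSamples first"
fi
```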
If your run crashes during transport then you need additional memory. Either request an interactive session on your cluster with additional memory or consider running GCHP as a batch job by submitting your run to a job scheduler.

Running as a Batch Job

The recommended job script example is the SLURM batch script, which is customized for use with SLURM on the Harvard University Odyssey cluster. However, it may be adapted for other systems, and you may also adapt the interactive run script for your system. The "multirun" scripts, which submit multiple consecutive jobs in a row, are more advanced. Read more about that option in the chapter on configuring a run later in this manual.

Example run scripts send standard output to file gchp.log by default and require manually configuring your job-specific resources such as number of cores and nodes. If using versions prior to 12.1.0 then you must also manually add your environment filename to the script; later versions simply source local symbolic link gchp.env which you set to point to your environment file during run directory setup.

If using SLURM, submit your batch job with this command:

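In the sketch below, gchp.run stands in for whatever you named your adapted SLURM batch script:

```shell
# "gchp.run" is an assumed name for your adapted SLURM batch script.
sbatch gchp.run || echo "submission failed (is SLURM available and gchp.run present?)"
```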

Job submission is different for other systems. For example, to submit a Grid Engine batch file, type:

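Again, gchp.run is an assumed name for your adapted batch script:

```shell
# "gchp.run" is an assumed name for your adapted Grid Engine batch script.
qsub gchp.run || echo "submission failed (is Grid Engine available and gchp.run present?)"
```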

If your computational cluster uses a different job scheduler (e.g. LSF or PBS), then check with your IT staff about how to submit batch jobs. Please also consider submitting your working run script for inclusion in the run script examples folder in future versions.

Verifying a Successful Run

There are several ways to verify that your run was successful.

  1. NetCDF files are present in the OutputDir subdirectory.
  2. gchp.log ends with timing information for the run.
  3. Your scheduler log (e.g. output from SLURM) does not contain any obvious errors.
  4. gchp.log contains text with format "AGCM Date: YYYY/MM/DD Time: HH:mm:ss" for each timestep (e.g. 00:10, 00:20, 00:30, 00:40, 00:50, and 01:00 for a 1-hr run).
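The checks above can be sketched from the command line; the log and directory names below follow the run directory defaults described in this section.

```shell
# Quick post-run checks; names follow the run directory defaults.
ls OutputDir/*.nc4 2>/dev/null | head -n 5     # any NetCDF output present?
grep -c "AGCM Date" gchp.log 2>/dev/null \
  || echo "no AGCM Date lines found; check the logs for errors"
```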

If it looks like something went wrong, check all log files (type "ls *.log" in the run directory to list them) as well as your scheduler output file (if one exists) to determine where there may have been an error. Beware that if you have a problem in one of your configuration files then you will likely see a MAPL error with traceback to the GCHP/Shared directory. Review all of your configuration files to ensure they are set up properly. Errors in "CAP" typically indicate a problem with your start time, end time, and/or duration set in the run configuration file (more on this file in the next chapter). Errors in "ExtData" often indicate a problem with your input files specified in either HEMCO_Config.rc or ExtData.rc. Errors in "HISTORY" are related to your configured output in HISTORY.rc.

GCHP errors can be cryptic. If you find yourself debugging within MAPL then you may be on the wrong track, as most issues can be resolved by updating the run settings. If you cannot figure out where you are going wrong, please create an issue on the GCHP GitHub issue tracker.

Reusing a Run Directory

Archiving a Run

One of the benefits of GCHP relative to GEOS-Chem Classic is that you can reuse a run directory for different grid resolutions and meteorology sources without recompiling. This comes with the peril of losing your old work. To mitigate this issue, a utility shell script is provided to archive output and configuration files from your last run into a subdirectory of the run directory. All you need to do is pass a non-existent subdirectory name of your choosing where the files should be stored. Here is an example:

./ c48_test

The following output is then printed to screen to show you exactly what is being archived and where:

Archiving files...
-> c48_test/build/lastbuild
-> c48_test/build/compile.log
-> c48_test/config/input.geos
-> c48_test/config/CAP.rc
-> c48_test/config/ExtData.rc
-> c48_test/config/fvcore_layout.rc
-> c48_test/config/GCHP.rc
-> c48_test/config/HEMCO_Config.rc
-> c48_test/config/HEMCO_Diagn.rc
-> c48_test/config/HISTORY.rc
-> c48_test/restarts/
-> c48_test/logs/compile.log
-> c48_test/logs/gchp.log
-> c48_test/logs/HEMCO.log
-> c48_test/logs/PET00000.GEOSCHEMchem.log
-> c48_test/logs/runConfig.log
-> c48_test/logs/slurm-50168021.out
-> c48_test/run/
-> c48_test/run/
-> c48_test/run/gchp.ifort17_openmpi_odyssey.env
Warning: * not found

A file structure within the archive directory is automatically set up to store files of various types (e.g. log files in the logs subdirectory). This particular archived run was a single-segment run (a single job), which is why there is a warning about a missing multirun file. The warning can be ignored.

Cleaning the Run Directory

If you do not want to save your last run, you can discard its remnants with "make cleanup_output". All sample environment files include an alias for this command, "mco", to make it easier to type. Here is an example of the output printed when cleaning the run directory:

rm -f /n/home08/elundgren/GC/testruns/12.0.0/Aug01/gchp_RnPbBe/OutputDir/*.nc4
rm -f trac_avg.*
rm -f tracerinfo.dat
rm -f diaginfo.dat
rm -f cap_restart
rm -f gcchem*
rm -f *.rcx
rm -f *~
rm -f gchp.log
rm -f HEMCO.log
rm -f PET*.log
rm -f runConfig*log
rm -f multirun.log
rm -f logfile.000000.out
rm -f slurm-*
rm -f 1
rm -f EGRESS

Rerunning Without Cleaning

You can reuse a run directory without cleaning it and without archiving your last run; files will generally be replaced by those generated in the next run. There is one exception: the cap_restart file must be removed prior to subsequent runs if you are starting a run from scratch. The cap_restart file contains a date and time string for the end of your last run, and GCHP will attempt to start your next run at that date and time if the file is present. This is useful for splitting a run into multiple jobs but is generally not desirable otherwise, so you should always delete cap_restart before a new run. This deletion is included in all sample run scripts except the multirun script, which has special handling for the file.
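Before resubmitting a fresh run, the cleanup is a single command (a minimal sketch, run from the top of the run directory):

```shell
# Remove stale state so the next run starts at the configured start date.
rm -f cap_restart
```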
