----
<span style="color:crimson;font-size:120%">'''The GCHP documentation has moved to https://gchp.readthedocs.io/.''' The GCHP documentation on http://wiki.seas.harvard.edu/ will stay online for several months, but it is outdated and no longer active!</span>
----
 
__FORCETOC__
'''''[[Running_GCHP:_Basics|Previous]] | [[GCHP_Output_Data| Next]] | [[Getting Started with GCHP]] | [[GCHP Main Page]]'''''
 
#[[GCHP_Hardware_and_Software_Requirements|Hardware and Software Requirements]]
#[[Setting_Up_the_GCHP_Environment|Setting Up the GCHP Environment]]
#[[Downloading_GCHP|Downloading Source Code and Data Directories]]
#[[Compiling_GCHP|Compiling]]
#[[Obtaining_a_GCHP_Run_Directory|Obtaining a Run Directory]]
#[[Running_GCHP:_Basics|Running GCHP: Basics]]
#<span style="color:blue">'''Running GCHP: Configuration'''</span>
 
== Overview ==
  
All GCHP run directories have default simulation-specific run-time settings that are set when you create a run directory. You will likely want to change these settings. This page goes over how to do this.
  
== Configuration files ==
  
GCHP is controlled using a set of configuration files that are included in the GCHP run directory. Files include:
#[[GCHP_Run_Configuration_Files#CAP.rc|CAP.rc]]
#[[GCHP_Run_Configuration_Files#ExtData.rc|ExtData.rc]]
#[[GCHP_Run_Configuration_Files#GCHP.rc|GCHP.rc]]
#[[GCHP_Run_Configuration_Files#input.geos|input.geos]]
#[[GCHP_Run_Configuration_Files#HEMCO_Config.rc|HEMCO_Config.rc]]
#[[GCHP_Run_Configuration_Files#HEMCO_Diagn.rc|HEMCO_Diagn.rc]]
#[[GCHP_Run_Configuration_Files#input.nml|input.nml]]
#[[GCHP_Run_Configuration_Files#HISTORY.rc|HISTORY.rc]]
Several run-time settings must be set consistently across multiple files. Inconsistencies may result in your program crashing or yielding unexpected results. To avoid mistakes and make run configuration easier, bash shell script <tt>runConfig.sh</tt> is included in all run directories to set the most commonly changed config file settings from one location. Sourcing this script updates multiple config files to use the values specified in the script.
  
Sourcing <tt>runConfig.sh</tt> is done automatically prior to running GCHP if you use any of the example run scripts, or you can do it at the command line. Information about which settings are changed and in which files is printed to the script's standard output. To source the script, type the following:
  
 source runConfig.sh
  
You may also use it in silent mode if you wish to update files but not display settings on the screen:
  
 source runConfig.sh --silent
  
While using <tt>runConfig.sh</tt> to configure common settings makes run configuration much simpler, it comes with a major caveat. If you manually edit a config file setting that is also set in <tt>runConfig.sh</tt> then your manual update will be overwritten via string replacement. Please get very familiar with the options in <tt>runConfig.sh</tt> and be conscientious about not updating the same setting elsewhere.
  
You generally will not need to know more about the GCHP configuration files beyond what is listed on this page. However, for a comprehensive description of all configuration files used by GCHP, see the last section of this user manual.
  
== Commonly Changed Run Options ==

=== Compute Configuration ===

==== Set Number of Nodes and Cores ====

To change the number of nodes and cores for your run you must update settings in two places: (1) <tt>runConfig.sh</tt> and (2) your run script. The <tt>runConfig.sh</tt> file contains detailed instructions on how to set resource parameter options and what they mean. Look for the <tt>Compute Resources</tt> section in the script. Update the resource request in your run script to match the resources set in <tt>runConfig.sh</tt>.
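For reference, the <tt>Compute Resources</tt> block of <tt>runConfig.sh</tt> looks roughly like the sketch below, shown together with a matching SLURM resource request for the run script. Variable names and defaults may differ between GCHP versions, so treat this as an illustration rather than a copy of your file.

 # Sketch of the Compute Resources section of runConfig.sh (96-core example)
 TOTAL_CORES=96         # must be divisible by 6
 NUM_NODES=4
 NUM_CORES_PER_NODE=24  # must equal TOTAL_CORES / NUM_NODES
 NXNY_AUTO=ON           # auto-calculate domain decomposition parameters NX and NY
 #
 # Matching resource request in a SLURM run script
 #SBATCH -n 96
 #SBATCH -N 4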
  
 
It is important to be smart about your resource allocation. To do this it is useful to understand how GCHP works with respect to distribution of nodes and cores across the grid. At least one unique core is assigned to each face of the cubed sphere, resulting in a constraint of at least six cores to run GCHP. The same number of cores must be assigned to each face, resulting in another constraint that the total number of cores be a multiple of six. Communication between the cores occurs only during transport processes.

While any number of cores is valid as long as it is a multiple of six (although there is an upper limit per grid resolution), you will typically start to see negative effects due to excessive communication if a core is handling fewer than around one hundred grid cells, or a cluster of grid cells that is not approximately square. You can determine how many grid cells are handled per core from your grid resolution and resource allocation. For example, if running at C24 with six cores, each face is handled by one core (6 faces / 6 cores) and contains 576 cells (24x24). Each core therefore processes 576 cells. Since each core handles one face, each core communicates with four other cores (the four surrounding faces). Maximizing the squareness of the block of grid cells per core is done automatically within <tt>runConfig.sh</tt> if variable <tt>NXNY_AUTO</tt> is set to <tt>ON</tt>.

Further discussion of domain decomposition is in <tt>runConfig.sh</tt> section <tt>Domain Decomposition</tt>.
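If you set <tt>NXNY_AUTO=OFF</tt> and choose <tt>NX</tt> and <tt>NY</tt> yourself, the rules documented in <tt>runConfig.sh</tt> are roughly as follows; consult your version of the script for the authoritative text.

 # Manual domain decomposition rules (NXNY_AUTO=OFF), paraphrased from runConfig.sh:
 #  1. NY must be a multiple of 6
 #  2. NX*NY must equal the total number of cores (NUM_NODES*NUM_CORES_PER_NODE)
 #  3. Each face is divided into NX x NY/6 regions; keep that shape as square as possible
 #  4. CS_RES/NX >= 4 and CS_RES*6/NY >= 4, which caps the total cores per grid resolution
 NX=4   # example: 96 cores distributed as 4x4 regions per face
 NY=24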
  
 
==== Split a Simulation Into Multiple Jobs ====

There is an option to split up a single simulation into separate serial jobs. To use this option, do the following:

#Update <tt>runConfig.sh</tt> with your full simulation (all runs) start and end dates, and the duration per segment (single run). Also update the number of runs option (<tt>NUM_RUNS</tt>) to reflect the total number of jobs that will be submitted. Carefully read the comments in <tt>runConfig.sh</tt> to ensure you understand how it works (see the sketch after this list).
#Optionally turn on monthly diagnostics (<tt>Monthly_Diag</tt>). Only turn on monthly diagnostics if your run duration is monthly.
#Use <tt>gchp.multirun.run</tt> as your run script, or adapt it if your cluster does not use SLURM. It is located in the <tt>runScriptSamples</tt> subdirectory of your run directory. As with the regular <tt>gchp.run</tt>, you will need to update the file with compute resources consistent with <tt>runConfig.sh</tt>. '''Note that you should not submit the run script directly.''' It will be done automatically by the file described in the next step.
#Use <tt>gchp.multirun.sh</tt> to submit your job, or adapt it if your cluster does not use SLURM. It is located in the <tt>runScriptSamples</tt> subdirectory of your run directory. For example, to submit your series of jobs, type: <code>./gchp.multirun.sh</code>
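The corresponding settings in <tt>runConfig.sh</tt> look something like the sketch below. The variable names have changed between GCHP versions (e.g. <tt>Num_Runs</tt> versus <tt>NUM_RUNS</tt>), so check your copy of the script.

 # Multi-run settings in runConfig.sh (names may vary by version)
 Num_Runs=3        # total number of consecutive jobs to submit
 Monthly_Diag=0    # set to 1 only if each run segment is exactly one month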
 
 
 
  
 
There is much documentation in the headers of both <tt>gchp.multirun.run</tt> and <tt>gchp.multirun.sh</tt> that is worth reading and getting familiar with, although not entirely necessary to get the multi-run option working. If you have not done so already, it is worth trying out a simple multi-segmented run of short duration to demonstrate that the multi-segmented run configuration and scripts work on your system. For example, you could do a 3-hour simulation with 1-hour duration and number of runs equal to 3.
==== Change Domains Stack Size ====

For runs at very high resolution or with a small number of processors you may run into a domains stack size error. This is caused by exceeding the domains stack size memory limit set at run-time, and the error will be apparent from the message in your log file. If this occurs you can increase the domains stack size in file <tt>input.nml</tt>. The default is set to 20000000.
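The domains stack size is set in a Fortran namelist inside <tt>input.nml</tt>. A minimal sketch is shown below; the namelist group name and any additional entries may differ in your version of the file.

 &fms_nml
     domains_stack_size = 20000000   ! increase this value if you hit a domains stack size error
 /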
 
 
=== Basic Run Settings ===

==== Set Cubed Sphere Grid Resolution ====
  
GCHP uses a cubed sphere grid rather than the traditional lat-lon grid used in GEOS-Chem Classic. While regular lat-lon grids are typically designated as ΔLat ⨉ ΔLon (e.g. 4⨉5), cubed sphere grids are designated by the side length of the cube. In GCHP we specify this as CX (e.g. C24 or C180). The simple rule of thumb for determining the roughly equivalent lat-lon resolution for a given cubed sphere resolution is to divide the side length by 90. Using this rule you can quickly match C24 with about 4x5, C90 with 1 degree, C360 with quarter degree, and so on.

To change your grid resolution in the run directory, edit the <tt>CS_RES</tt> integer parameter in <tt>runConfig.sh</tt> section <tt>Internal Cubed Sphere Resolution</tt> to the cube side length you wish to use. To use a uniform global grid resolution, make sure that <tt>STRETCH_GRID</tt> is set to <tt>OFF</tt>.
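A sketch of the relevant lines in <tt>runConfig.sh</tt>, with rough lat-lon equivalents noted; exact formatting may differ by version.

 # Internal Cubed Sphere Resolution section of runConfig.sh
 CS_RES=24    # 24 ~ 4x5, 48 ~ 2x2.5, 90 ~ 1x1.25, 180 ~ 1/2 deg, 360 ~ 1/4 deg
 STRETCH_GRID=OFF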
  
==== Set Stretch Grid Resolution ====

GCHP has the capability to run with a stretched grid, meaning one portion of the globe is stretched to finer resolution. Set the stretched grid parameters in <tt>runConfig.sh</tt> section <tt>Internal Cubed Sphere Resolution</tt>. See the instructions in that section of the file.
  
 
==== Turn On/Off Model Components ====

You can toggle all primary GEOS-Chem components, including the type of mixing, from within <tt>runConfig.sh</tt>. The settings in that file will update <tt>input.geos</tt> automatically. Look for section <tt>Turn Components On/Off, and other settings in input.geos</tt>. Other settings in this section, beyond the component on/off toggles, include using CH4 emissions in UCX and initializing stratospheric H2O in UCX.
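The on/off toggles in <tt>runConfig.sh</tt> look roughly like the following; the exact list of variables may vary by GCHP version. These settings override manual edits to <tt>input.geos</tt>.

 # Component on/off toggles in runConfig.sh (values are T or F)
 Turn_on_Chemistry=T
 Turn_on_emissions=T
 Turn_on_Dry_Deposition=T
 Turn_on_Wet_Deposition=T
 Turn_on_Transport=T
 Turn_on_Cloud_Conv=T
 Turn_on_PBL_Mixing=T
 Turn_on_Non_Local_Mixing=T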
  
 
==== Change Model Timesteps ====

Model timesteps, both chemistry and dynamics, are configured within <tt>runConfig.sh</tt>. They are set to match GEOS-Chem Classic default values at low resolutions for comparison purposes, but can be updated with caution. Timesteps are automatically reduced for high resolution runs. Read the documentation in <tt>runConfig.sh</tt> section <tt>Timesteps</tt> before changing them.
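The timestep logic in <tt>runConfig.sh</tt> is roughly as follows; this is a sketch, so check your version of the script for the authoritative code.

 # Timesteps are chosen automatically based on grid resolution
 if [[ $CS_RES -lt 180 ]]; then
     ChemEmiss_Timestep_sec=1200      # chemistry timestep interval [s]
     TransConv_Timestep_sec=600       # dynamic timestep interval [s]
     TransConv_Timestep_HHMMSS=001000
 else
     ChemEmiss_Timestep_sec=600
     TransConv_Timestep_sec=300
     TransConv_Timestep_HHMMSS=000500
 fi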
  
 
==== Set Simulation Start and End Dates ====

Set simulation start and end in <tt>runConfig.sh</tt> section <tt>Simulation Start, End, Duration, # runs</tt>. Read the comments in the file for a complete description of the options. Typically a "CAP" runtime error indicates a problem with the start, end, and duration settings. If you encounter an error with the word "CAP" near it then double-check that these settings make sense.
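A sketch of the relevant settings in <tt>runConfig.sh</tt> for a 3-hour single-segment run; the format is "YYYYMMDD HHmmSS".

 Start_Time="20160101 000000"
 End_Time="20160101 030000"
 Duration="00000000 030000"

Note that at the end of each run the model writes the next start time to file <tt>cap_restart</tt>. If that file is present in the run directory it takes precedence over <tt>Start_Time</tt>, so remove or edit it if you want to restart from the configured start date.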
  
 
=== Inputs ===

==== Change Initial Restart File ====

All GCHP run directories come with symbolic links to initial restart files for commonly used cubed sphere resolutions. The appropriate restart file is automatically chosen based on the cubed sphere resolution you set in <tt>runConfig.sh</tt>.

You may overwrite the default restart file with your own by specifying the restart filename in <tt>runConfig.sh</tt> section <tt>Initial Restart File</tt>. Beware that it is your responsibility to make sure it is the proper grid resolution.
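A sketch of the <tt>Initial Restart File</tt> section of <tt>runConfig.sh</tt> is shown below. The default filename depends on your simulation type and GCHP version, and the custom path in the comment is purely hypothetical.

 # Default restart file symbolically linked in the run directory
 INITIAL_RESTART=initial_GEOSChem_rst.c${CS_RES}_TransportTracers.nc
 # To use your own restart file instead, point INITIAL_RESTART at it, e.g.:
 # INITIAL_RESTART=/path/to/my_custom_restart_c${CS_RES}.nc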
 
  
 
Unlike GEOS-Chem Classic, HEMCO restart files are not used in GCHP. HEMCO restart variables may be included in the initial species restart file, or they may be excluded and HEMCO will start with default values. GCHP initial restart files that come with the run directories do not include HEMCO restart variables, but all output restart files do.
 
 
  
 
==== Turn On/Off Emissions Inventories ====

Because file I/O impacts GCHP performance it is a good idea to turn off file read of emissions that you do not need. You can turn emissions inventories on or off the same way you would in GEOS-Chem Classic, by setting the inventories to true or false at the top of configuration file <tt>HEMCO_Config.rc</tt>. All emissions that are turned off in this way will be ignored when GCHP uses <tt>ExtData.rc</tt> to read files, thereby speeding up the model.
  
 
For emissions that do not have an on/off toggle at the top of the file, you can prevent GCHP from reading them by commenting them out in <tt>HEMCO_Config.rc</tt>. No updates to <tt>ExtData.rc</tt> are necessary. If you instead comment out the emissions in <tt>ExtData.rc</tt> but not <tt>HEMCO_Config.rc</tt> then GCHP will fail with an error when looking for the file information.

Another option to skip file read for certain files is to replace the file path in <tt>ExtData.rc</tt> with <tt>/dev/null</tt>. However, if you want to turn these inputs back on at a later time you should preserve the original path by commenting out the original line.
  
==== Add New Emissions Files ====

There are two steps for adding new emissions inventories to GCHP:

#Add the inventory information to <tt>HEMCO_Config.rc</tt>.
#Add the inventory information to <tt>ExtData.rc</tt>.
 
 
To add information to <tt>HEMCO_Config.rc</tt>, follow the same rules as you would for [[The_HEMCO_User%27s_Guide|adding a new emission inventory to GEOS-Chem Classic]]. Note that not all information in <tt>HEMCO_Config.rc</tt> is used by GCHP. This is because HEMCO is only used by GCHP to handle emissions after they are read, e.g. scaling and applying hierarchy. All functions related to HEMCO file read are skipped. This means that you could put garbage for the file path and units in <tt>HEMCO_Config.rc</tt> without running into problems with GCHP, as long as the syntax is what HEMCO expects. However, we recommend that you fill in <tt>HEMCO_Config.rc</tt> in the same way you would for GEOS-Chem Classic for consistency and also to avoid potential format check errors.

Staying consistent with the information that you put into <tt>HEMCO_Config.rc</tt>, add the inventory information to <tt>ExtData.rc</tt> following the guidelines listed at the top of the file and using existing inventories as examples. You can ignore all entries in <tt>HEMCO_Config.rc</tt> that are copies of another entry since putting these in <tt>ExtData.rc</tt> would result in reading the same variable in the same file twice. HEMCO interprets the copied variables, denoted by dashes in the <tt>HEMCO_Config.rc</tt> entry, separately from file read.

A few common errors encountered when adding new emissions files to GCHP are:

#Your input file contains integer values. Beware that the MAPL I/O component in GCHP does not read or write integers. If your data contains integers then you should reprocess the file to contain floating point values instead.
#Your data latitude and longitude dimensions are in the wrong order. Lat must always come before lon in your input arrays, a requirement true for both GCHP and GEOS-Chem Classic. For more information, see the [[Preparing_data_files_for_use_with_HEMCO#Ordering_of_the_data|Preparing Data Files for Use with HEMCO wiki page]].
#Your 3D input data are mapped to the wrong levels in GEOS-Chem (silent error). If you read in 3D data and assign the resulting import to a GEOS-Chem state variable such as State_Chm or State_Met, then you must flip the vertical axis during the assignment. See file <tt>Includes_Before_Run.H</tt> and the setting of State_Chm%Species in <tt>Chem_GridCompMod.F90</tt> for examples.
#You have a typo in either <tt>HEMCO_Config.rc</tt> or <tt>ExtData.rc</tt>. Errors in <tt>HEMCO_Config.rc</tt> typically result in the model crashing right away. Errors in <tt>ExtData.rc</tt> typically result in a problem later on during ExtData read. Always try running with the MAPL debug flags on in <tt>runConfig.sh</tt> (maximizes output to <tt>gchp.log</tt>) and Warnings and Verbose set to 3 in <tt>HEMCO_Config.rc</tt> (maximizes output to <tt>HEMCO.log</tt>) when encountering errors such as this. Another useful strategy is to find config file entries for similar input files and compare them against the entry for your new file. Directly comparing the file metadata may also lead to insights into the problem.
  
 
=== Outputs ===

==== Output Diagnostics Data on a Lat-Lon Grid ====
  
See the documentation in the <tt>HISTORY.rc</tt> config file for instructions on how to output diagnostic collections on lat-lon grids.
  
==== Output Restart Files at Regular or Irregular Frequency ====

The MAPL component in GCHP has the option to output restart files (also called checkpoint files) prior to run end. The frequency of restart file write may be at regular time intervals (regular frequency) or at specific programmed times (irregular frequency). These periodic output restart files contain the date and time in their filenames.

Enabling this feature is a good idea if you plan on doing a long simulation and you are not splitting your run into multiple jobs. If the run crashes unexpectedly then you can restart mid-run rather than start over from the beginning.

Update settings for checkpoint restart outputs in <tt>runConfig.sh</tt> section <tt>Output Restarts</tt>. Instructions for configuring both regular and irregular frequency restart files are included in the file.
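For example, the regular-frequency checkpoint setting in <tt>runConfig.sh</tt> looks roughly like this; the irregular-frequency options are documented in the same section of the script.

 # Checkpoint (restart) output frequency as HHmmSS; hours may exceed two digits,
 # e.g. 1680000 for 7 days. Setting 000000 disables periodic checkpoints.
 Checkpoint_Freq="1680000"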
  
 
==== Turn On/Off Diagnostics ====

To turn diagnostic collections on or off, comment out ("#") collection names in the "COLLECTIONS" list at the top of file <tt>HISTORY.rc</tt>. Collections cannot be turned on or off from <tt>runConfig.sh</tt>.
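The top of <tt>HISTORY.rc</tt> looks something like the abridged sketch below; the available collections vary by simulation type and GCHP version.

 #===================================================================
 # Declare collection names and toggle on/off
 #===================================================================
 COLLECTIONS: #'AerosolMass',
              #'CloudConvFlux',
              #'DryDep',
              'Emissions',
              #'JValues',
              'SpeciesConc',
              'StateMet_avg',
              'StateMet_inst',
 ::

Within a collection that is turned on, you can also comment out individual diagnostics in its <tt>.fields</tt> list further down in the file, with the exception of the field that appears on the same line as the <tt>.fields</tt> keyword.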
 
  
 
==== Set Diagnostic Frequency, Duration, and Mode ====

All diagnostic collections that come with the run directory have frequency, duration, and mode auto-set within <tt>runConfig.sh</tt>. The file contains a list of time-averaged collections and a list of instantaneous collections, and allows setting a frequency and duration to apply to all collections in each list. See section <tt>Output Diagnostics</tt> within <tt>runConfig.sh</tt>. To avoid auto-update of a certain collection, remove it from the list in <tt>runConfig.sh</tt>. If you add a new collection, you can add it to the list to enable auto-update of frequency, duration, and mode.
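The global settings in the <tt>Output Diagnostics</tt> section of <tt>runConfig.sh</tt> look roughly like this; names may differ by version.

 # Applied to all collections listed further down in runConfig.sh
 common_freq="010000"           # frequency of diagnostic calculation (HHmmSS)
 common_dur="010000"            # frequency of diagnostic file write (HHmmSS)
 common_mode="'time-averaged'"  # "'time-averaged'" or "'instantaneous'"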
 
  
 
==== Add a New Diagnostics Collection ====

Adding a new diagnostics collection in GCHP is the same as for GEOS-Chem Classic netCDF diagnostics. You must add your collection to the collection list in <tt>HISTORY.rc</tt> and then define it further down in the file. Any 2D or 3D arrays that are stored within GEOS-Chem objects State_Met, State_Chm, or State_Diag may be included as fields in a collection. State_Met variables must be preceded by "Met_", State_Chm variables must be preceded by "Chem_", and State_Diag variables should not have a prefix. See the <tt>HISTORY.rc</tt> file for examples.

Once implemented, you can either incorporate the new collection settings into <tt>runConfig.sh</tt> for auto-update, or you can manually configure all settings in <tt>HISTORY.rc</tt>. See the <tt>Output Diagnostics</tt> section of <tt>runConfig.sh</tt> for more information.
  
 
==== Generate Monthly Mean Diagnostics ====
 
==== Generate Monthly Mean Diagnostics ====
Line 441: Line 170:
  
 
To use the monthly diagnostics option, first read and follow instructions for splitting a simulation into multiple jobs (see separate section on this page). Prior to submitting your run, enable monthly diagnostics in <tt>runConfig.sh</tt> by searching for variable "Monthly_Diag" and changing its value from 0 to 1. Be sure to always start your monthly diagnostic runs on the first day of the month.
 
To use the monthly diagnostics option, first read and follow instructions for splitting a simulation into multiple jobs (see separate section on this page). Prior to submitting your run, enable monthly diagnostics in <tt>runConfig.sh</tt> by searching for variable "Monthly_Diag" and changing its value from 0 to 1. Be sure to always start your monthly diagnostic runs on the first day of the month.
 
==== Additional Diagnostic Collection Options ====

See file <tt>GCHP/Shared/MAPL_Base/TeX/HistoryIntro.tex</tt> for the original MAPL documentation on MAPL History. Please note that we have not tested all of these functionalities and some of them seem to not work in MAPL. Proceed with caution and let the GEOS-Chem Support Team know what you find. Here is a brief overview, taken from that document, of the options that may be included for each collection:

'''template'''
Character string defining the time-stamping template that is appended to <tt>collection</tt> to create a particular file name. The template uses GrADS conventions. The default value depends on the <tt>duration</tt> of the file.

'''descr'''
Character string describing the collection. Defaults to 'expdsc'.

'''format'''
Character string to select the file format ("CFIO", "CFIOasync", "flat"). "CFIO" uses MAPL_CFIO and produces netcdf output. "CFIOasync" uses MAPL_CFIO but delegates the actual I/O to the MAPL_CFIOServer (see the MAPL_CFIOServer documentation for details). Default = "flat".

'''frequency'''
Integer (HHHHMMSS) for the frequency of time groups in the collection. Default = 060000.

'''mode'''
Character string equal to 'instantaneous' or 'time-averaged'. Default = 'instantaneous'.

'''acc_interval'''
Integer (HHHHMMSS) for the accumulation interval (≤ <tt>frequency</tt>) for time-averaged diagnostics. Default = <tt>frequency</tt>; ignored if <tt>mode</tt> is 'instantaneous'.

'''ref_date'''
Integer (YYYYMMDD) reference date for <tt>frequency</tt>; also the beginning date for the collection. Default is the start date on the Clock.

'''ref_time'''
Integer (HHMMSS); same as <tt>ref_date</tt> but for the time of day.

'''end_date'''
Integer (YYYYMMDD) ending date to stop diagnostic output. Default: no end date.

'''end_time'''
Integer (HHMMSS) ending time to stop diagnostic output. Default: no end time.

'''duration'''
Integer (HHHHMMSS) for the duration of each file. Default = 00000000 (everything in one file). '''''Duration is not currently functional in GCHP and will be ignored. Frequency is used instead for write frequency.'''''

'''resolution'''
Optional resolution (IM JM) for the output stream. Transforms between two regular LogRect grids in index space. Default is the native resolution.

'''xyoffset'''
Optional flag for output grid offset when interpolating. Must be between 0 and 3 (cryptic meaning: 0:DcPc, 1:DePc, 2:DcPe, 3:DePe). Ignored when <tt>resolution</tt> results in no interpolation (native). Default: 0 (DatelineCenterPoleCenter).

'''levels'''
Optional list of output levels (default is all levels on the native grid). If <tt>vvars</tt> is not specified, these are layer indices. Otherwise see <tt>vvars</tt>, <tt>vunits</tt>, <tt>vscale</tt>.

'''vvars'''
Optional field to use as the vertical coordinate, plus the functional form of the vertical interpolation. A second argument specifies the component the field comes from. Example 1: the entry 'log(PLE)','DYNAMICS' uses PLE from the FV3 advection component as the vertical coordinate and interpolates to <tt>levels</tt> linearly in its log. Example 2: 'THETA','DYN' is a way of producing isentropic output. Only log(*), pow(*), real numbers, and straight linear interpolation are supported.

'''vunit'''
Character string to use for the units attribute of the vertical coordinate in the file. The default is the MAPL_CFIO default. This affects only the name written to the file; it does not do the conversion. See <tt>vscale</tt>.

'''vscale'''
Optional scaling to convert <tt>vvars</tt> units to <tt>vunit</tt> units. Default: no conversion.

'''regrid_exch'''
Name of the exchange grid that can be used for interpolation between two LogRect grids or from a tile grid to a LogRect grid. Default: no exchange grid interpolation.

'''regrid_name'''
Name of the LogRect grid to interpolate to when going from a tile field to a gridded output. <tt>regrid_exch</tt> must be set, otherwise it is ignored.

'''conservative'''
Set to a non-zero integer to turn on conservative regridding when going from a native cubed-sphere grid to lat-lon output. Default: 0.

'''deflate'''
Set the deflate level (0-9) of netcdf output when format is CFIO or CFIOasync. Default: 0.

'''subset'''
Optional subset (lonMin lonMax latMin latMax) for the output when performing non-conservative cubed-sphere to lat-lon regridding of the output.

'''chunksize'''
Optional user-specified chunking of netcdf output when format is CFIO or CFIOasync (Lon chunksize, Lat chunksize, Lev chunksize, Time chunksize).
 
  
 
  
==== Turn On/Off MAPL Timers and Memory Logging ====
 
 
Your GCHP log file will include timing and memory information by default, and this is usually a good thing. If for some reason you want to turn these features off you can do so in file <tt>CAP.rc</tt>. Search for "MAPL_ENABLE_TIMERS" and "MAPL_ENABLE_MEMUTILS" and simply change "YES" to "NO". Remember to turn them back on again if you later need them to debug.
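
For reference, the two settings in <tt>CAP.rc</tt> look like the following when both features are disabled (the default for both is YES):

 MAPL_ENABLE_TIMERS: NO
 MAPL_ENABLE_MEMUTILS: NO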
 
  
 

Latest revision as of 15:41, 8 December 2020


The GCHP documentation has moved to https://gchp.readthedocs.io/. The GCHP documentation on http://wiki.seas.harvard.edu/ will stay online for several months, but it is outdated and no longer active!



Previous | Next | Getting Started with GCHP | GCHP Main Page

  1. Hardware and Software Requirements
  2. Setting Up the GCHP Environment
  3. Downloading Source Code and Data Directories
  4. Compiling
  5. Obtaining a Run Directory
  6. Running GCHP: Basics
  7. Running GCHP: Configuration
  8. Output Data
  9. Developing GCHP
  10. Run Configuration Files


Overview

All GCHP run directories have default simulation-specific run-time settings that are set when you create a run directory. You will likely want to change these settings. This page goes over how to do this.

Configuration files

GCHP is controlled using a set of configuration files that are included in the GCHP run directory. Files include:

  1. CAP.rc
  2. ExtData.rc
  3. GCHP.rc
  4. input.geos
  5. HEMCO_Config.rc
  6. HEMCO_Diagn.rc
  7. input.nml
  8. HISTORY.rc

Several run-time settings must be set consistently across multiple files. Inconsistencies may result in your program crashing or yielding unexpected results. To avoid mistakes and make run configuration easier, the bash shell script runConfig.sh is included in all run directories to set the most commonly changed config file settings from one location. Sourcing this script updates the other config files to use the values specified in runConfig.sh.

Sourcing runConfig.sh is done automatically prior to running GCHP if you use any of the example run scripts, or you can do it at the command line. Information about which settings are changed, and in which files, is printed to standard output by the script. To source the script, type the following:

source runConfig.sh

You may also use it in silent mode if you wish to update files but not display settings on the screen:

source runConfig.sh --silent

While using runConfig.sh to configure common settings makes run configuration much simpler, it comes with a major caveat. If you manually edit a config file setting that is also set in runConfig.sh then your manual update will be overridden by string replacement. Please get very familiar with the options in runConfig.sh and be conscientious about not updating the same setting elsewhere.

You generally will not need to know more about the GCHP configuration files beyond what is listed on this page. However, for a comprehensive description of all configuration files used by GCHP see the last section of this user manual.

Commonly Changed Run Options

Compute Configuration

Set Number of Nodes and Cores

To change the number of nodes and cores for your run you must update settings in two places: (1) runConfig.sh, and (2) your run script. The runConfig.sh file contains detailed instructions on how to set resource parameter options and what they mean. Look for the Compute Resources section in the script. Update your resource request in your run script to match the resources set in runConfig.sh.
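
For example, to request two nodes with 24 cores each you would set something like the following. The variable names shown here are illustrative and may differ between runConfig.sh versions, so follow the comments in the Compute Resources section of your own copy.

# In runConfig.sh, Compute Resources section (names are illustrative)
NUM_NODES=2
NUM_CORES_PER_NODE=24   # 2 x 24 = 48 total cores, a multiple of six

# Matching resource request in your SLURM run script
#SBATCH --nodes=2
#SBATCH --ntasks-per-node=24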

It is important to be smart about your resource allocation. To do this it is useful to understand how GCHP distributes nodes and cores across the grid. At least one unique core is assigned to each face of the cubed sphere, so a minimum of six cores is required to run GCHP. The same number of cores must be assigned to each face, so the total number of cores must also be a multiple of six. Communication between the cores occurs only during transport processes.

While any number of cores is valid as long as it is a multiple of six (although there is an upper limit per resolution), you will typically start to see negative effects from excessive communication if a core handles fewer than around one hundred grid cells, or a cluster of grid cells that is not approximately square. You can determine how many grid cells are handled per core from your grid resolution and resource allocation. For example, if running at C24 with six cores, each face is handled by one core (6 faces / 6 cores) and contains 576 cells (24x24). Each core therefore processes 576 cells. Since each core handles one face, each core communicates with four other cores (the four surrounding faces). Maximizing the squareness of the grid cells per core is done automatically within runConfig.sh if variable NXNY_AUTO is set to ON.
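
The arithmetic described above is easy to script; the short bash sketch below simply estimates the number of grid cells per core for a given cube side length and core count.

CS_RES=24         # cube side length (C24)
TOTAL_CORES=6     # must be a multiple of six
CELLS_PER_CORE=$(( 6 * CS_RES * CS_RES / TOTAL_CORES ))
echo "~${CELLS_PER_CORE} grid cells per core"   # prints 576 for this example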

Further discussion about domain decomposition is in runConfig.sh section Domain Decomposition.

Split a Simulation Into Multiple Jobs

There is an option to split up a single simulation into separate serial jobs. To use this option, do the following:

  1. Update runConfig.sh with your full simulation (all runs) start and end dates, and the duration per segment (single run). Also update the number of runs option (NUM_RUNS) to reflect the total number of jobs that will be submitted. Carefully read the comments in runConfig.sh to ensure you understand how it works.
  2. Optionally turn on monthly diagnostics (Monthly_Diag). Only turn on monthly diagnostics if your segment duration is one month.
  3. Use gchp.multirun.run as your run script, or adapt it if your cluster does not use SLURM. It is located in the runScriptSamples subdirectory of your run directory. As with the regular gchp.run, you will need to update the file with compute resources consistent with runConfig.sh. Note that you should not submit the run script directly. It will be done automatically by the file described in the next step.
  4. Use gchp.multirun.sh to submit your job, or adapt it if your cluster does not use SLURM. It is located in the runScriptSamples subdirectory of your run directory. For example, to submit your series of jobs, type: ./gchp.multirun.sh

There is much documentation in the headers of both gchp.multirun.run and gchp.multirun.sh that is worth reading and getting familiar with, although not entirely necessary to get the multi-run option working. If you have not done so already, it is worth trying out a simple multi-segmented run of short duration to demonstrate that the multi-segmented run configuration and scripts work on your system. For example, you could do a 3-hour simulation with 1-hour duration and number of runs equal to 3.
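
As an illustration of that short test (three 1-hour segments), the relevant runConfig.sh settings would look roughly like the sketch below. The variable names and date formats shown are assumptions based on recent versions of the script, so always follow the comments in your own runConfig.sh.

# Illustrative only; check runConfig.sh for the exact names and formats
Start_Time="20190701 000000"   # start of the full simulation
End_Time="20190701 030000"     # end of the full simulation, 3 hours later
Duration="00000000 010000"     # length of each segment (1 hour)
NUM_RUNS=3                     # number of jobs submitted in series
Monthly_Diag=0                 # leave off unless each segment is one month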

The multi-run script assumes use of SLURM, and a separate SLURM log file is created for each run. There is also a log file called multirun.log with high-level information such as the start, end, duration, and job ids for all jobs submitted. If a run fails then all scheduled jobs are cancelled and a message about this is sent to that log file. Inspect this and your other log files, as well as the output in the OutputDir/ directory, before using this option for longer simulations.

Change Domains Stack Size

For runs at very high resolution or with a small number of processors you may run into a domains stack size error. This is caused by exceeding the domains stack size memory limit set at run-time; the error will be apparent from the message in your log file. If this occurs you can increase the domains stack size in file input.nml. The default is set to 20000000.
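
The setting lives in the FMS namelist within input.nml; it looks something like the excerpt below (the exact namelist contents may vary between versions), and you simply raise the value until the error goes away.

&fms_nml
  print_memory_usage = .false.
  domains_stack_size = 20000000   ! default; increase if you hit a domains stack size error
/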

Basic Run Settings

Set Cubed Sphere Grid Resolution

GCHP uses a cubed sphere grid rather than the traditional lat-lon grid used in GEOS-Chem Classic. While regular lat-lon grids are typically designated as ΔLat ⨉ ΔLon (e.g. 4⨉5), cubed sphere grids are designated by the side-length of the cube. In GCHP we specify this as CX (e.g. C24 or C180). A simple rule of thumb for determining the roughly equivalent lat-lon resolution for a given cubed sphere resolution is to divide 90 by the cube side length. Using this rule you can quickly match C24 with about 4x5, C90 with 1 degree, C360 with quarter degree, and so on.

To change your grid resolution in the run directory edit the CS_RES integer parameter in runConfig.sh section Internal Cubed Sphere Resolution to the cube side length you wish to use. To use a uniform global grid resolution make sure that STRETCH_GRID is set to OFF.
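
For example, to run on a uniform global grid at roughly half-degree resolution you would set:

CS_RES=180        # cube side length; 90/180 = 0.5, i.e. roughly half-degree
STRETCH_GRID=OFF  # uniform global grid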

Set Stretch Grid Resolution

GCHP has the capability to run with a stretched grid, meaning one portion of the globe is stretched to finer resolution. Set the stretched grid parameters in runConfig.sh section Internal Cubed Sphere Resolution. See the instructions in that section of the file.
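
Depending on your GCHP version the stretched grid block looks something like the sketch below; the parameter names and values here are illustrative assumptions only, so follow the instructions in your runConfig.sh.

STRETCH_GRID=ON       # enable grid stretching
STRETCH_FACTOR=3.0    # illustrative stretch factor (how much the target region is refined)
TARGET_LAT=40.0       # illustrative latitude of the refined region
TARGET_LON=-105.0     # illustrative longitude of the refined region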

Turn On/Off Model Components

You can toggle all primary GEOS-Chem components, including the type of mixing, from within runConfig.sh. The settings in that file will update input.geos automatically. Look for the section Turn Components On/Off, and other settings in input.geos. Other settings in this section, beyond the component on/off toggles, include using CH4 emissions in UCX and initializing stratospheric H2O in UCX.
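
The toggles in that section are simple T/F shell variables that mirror the corresponding lines of input.geos. The names below are illustrative assumptions, so check the Turn Components On/Off section of your runConfig.sh for the exact spelling.

# Illustrative names only
Turn_on_Chemistry=T
Turn_on_emissions=T
Turn_on_Transport=T
Turn_on_Convection=T
Turn_on_Dry_Deposition=T
Turn_on_Wet_Deposition=T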

Change Model Timesteps

Model timesteps, both chemistry and dynamics, are configured within runConfig.sh. They are set to match the GEOS-Chem Classic default values at low resolutions for comparison purposes, but can be updated with caution. Timesteps are automatically reduced for high resolution runs. Read the documentation in runConfig.sh section Timesteps before changing them.
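
For reference, the timestep settings are plain shell variables in the Timesteps section. The names below are illustrative assumptions; 600 s dynamics with 1200 s chemistry are the GEOS-Chem Classic defaults at low resolution.

# Illustrative names only; see the Timesteps section of runConfig.sh
TransConv_Timestep_sec=600    # transport/convection timestep in seconds
ChemEmiss_Timestep_sec=1200   # chemistry/emissions timestep in seconds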

Set Simulation Start and End Dates

Set simulation start and end in runConfig.sh section Simulation Start, End, Duration, # runs. Read the comments in the file for a complete description of the options. Typically a "CAP" runtime error indicates a problem with start, end, and duration settings. If you encounter an error with the words "CAP" near it then double-check that these settings make sense.

Inputs

Change Initial Restart File

All GCHP run directories come with symbolic links to initial restart files for commonly used cubed sphere resolutions. The appropriate restart file is automatically chosen based on the cubed sphere resolution you set in runConfig.sh.

You may overwrite the default restart file with your own by specifying the restart filename in runConfig.sh section Initial Restart File. Beware that it is your responsibility to make sure the file matches your cubed sphere grid resolution.
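
For example, to point to your own file you would set something like the line below in the Initial Restart File section (the variable name is an assumption; use whatever name your runConfig.sh defines).

INITIAL_RESTART=/path/to/my_GEOSChem_restart.c90.nc   # must match the cubed sphere resolution of your run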

Unlike GEOS-Chem Classic, HEMCO restart files are not used in GCHP. HEMCO restart variables may be included in the initial species restart file, or they may be excluded and HEMCO will start with default values. GCHP initial restart files that come with the run directories do not include HEMCO restart variables, but all output restart files do.

Turn On/Off Emissions Inventories

Because file I/O impacts GCHP performance it is a good idea to turn off file read of emissions that you do not need. You can turn emissions inventories on or off the same way you would in GEOS-Chem Classic, by setting the inventories to true or false at the top of configuration file HEMCO_Config.rc. All emissions that are turned off in this way will be ignored when GCHP uses ExtData.rc to read files, thereby speeding up the model.

For emissions that do not have an on/off toggle at the top of the file, you can prevent GCHP from reading them by commenting them out in HEMCO_Config.rc. No updates to ExtData.rc would be necessary. If you alternatively comment out the emissions in ExtData.rc but not HEMCO_Config.rc then GCHP will fail with an error when looking for the file information.

Another option to skip file read for certain files is to replace the file path in ExtData.rc with /dev/null. However, if you want to turn these inputs back on at a later time you should preserve the original path by commenting out the original line.
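
For example, an inventory that has a switch in the extension switches section at the top of HEMCO_Config.rc can be disabled by setting it to false (the inventory name below is made up); for individual files, the path at the end of the corresponding ExtData.rc entry can be replaced with /dev/null as described above.

    --> MY_INVENTORY       :       false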

Add New Emissions Files

There are two steps for adding new emissions inventories to GCHP:

  1. Add the inventory information to HEMCO_Config.rc.
  2. Add the inventory information to ExtData.rc.

To add information to HEMCO_Config.rc, follow the same rules as you would for adding a new emission inventory to GEOS-Chem Classic. Note that not all information in HEMCO_Config.rc is used by GCHP. This is because HEMCO is only used by GCHP to handle emissions after they are read, e.g. scaling and applying hierarchy. All functions related to HEMCO file read are skipped. This means that you could put garbage for the file path and units in HEMCO_Config.rc without running into problems with GCHP, as long as the syntax is what HEMCO expects. However, we recommend that you fill in HEMCO_Config.rc in the same way you would for GEOS-Chem Classic for consistency and also to avoid potential format check errors.

Staying consistent with the information that you put into HEMCO_Config.rc, add the inventory information to ExtData.rc following the guidelines listed at the top of the file and using existing inventories as examples. You can ignore all entries in HEMCO_Config.rc that are copies of another entry, since putting these in ExtData.rc would result in reading the same variable in the same file twice. HEMCO interprets the copied variables, denoted by dashes in the HEMCO_Config.rc entry, separately from file read.
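
As a rough illustration, a base emissions entry in HEMCO_Config.rc follows the usual HEMCO format (extension number, container name, file path, variable, time range, cycling flag, dimensions, units, species, scale factor IDs, category, hierarchy). The entry below is entirely made up; the matching ExtData.rc entry must then be written following the column guide at the top of that file.

# Made-up example entry for the BASE EMISSIONS section of HEMCO_Config.rc
0 MY_INV_CO  /path/to/MY_INVENTORY/CO_emissions_$YYYY.nc  CO  2010-2019/1-12/1/0  C xy kg/m2/s  CO  -  1  5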

A few common errors encountered when adding new input emissions files to GCHP are:

  1. Your input file contains integer values. Beware that the MAPL I/O component in GCHP does not read or write integers. If your data contains integers then you should reprocess the file to contain floating point values instead.
  2. Your data latitude and longitude dimensions are in the wrong order. Lat must always come before lon in your input arrays, a requirement true for both GCHP and GEOS-Chem Classic. For more information about this, see the [[Preparing_data_files_for_use_with_HEMCO#Ordering_of_the_data|Preparing Data Files for Use with HEMCO]] wiki page.
  3. Your 3D input data are mapped to the wrong levels in GEOS-Chem (silent error). If you read in 3D data and assign the resulting import to a GEOS-Chem state variable such as State_Chm or State_Met, then you must flip the vertical axis during the assignment. See files Includes_Before_Run.H and setting State_Chm%Species in Chem_GridCompMod.F90 for examples.
  4. You have a typo in either HEMCO_Config.rc or ExtData.rc. Errors in HEMCO_Config.rc typically result in the model crashing right away. Errors in ExtData.rc typically result in a problem later on during ExtData read. When encountering errors like this, always try running with the MAPL debug flag turned on in runConfig.sh (maximizes output to gchp.log) and with Warnings and Verbose set to 3 in HEMCO_Config.rc (maximizes output to HEMCO.log). Another useful strategy is to find config file entries for similar input files and compare them against the entry for your new file. Directly comparing the file metadata may also lead to insights into the problem.

Outputs

Output Diagnostics Data on a Lat-Lon Grid

See the documentation in the HISTORY.rc config file for instructions on how to output diagnostic collections on lat-lon grids.
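
Depending on your GCHP version, this typically involves defining a lat-lon grid in HISTORY.rc and attaching it to a collection via its grid_label attribute. The sketch below is an illustrative assumption only; follow the comments in your own HISTORY.rc.

GRID_LABELS: PC360x181-DC
::

PC360x181-DC.GRID_TYPE: LatLon
PC360x181-DC.IM_WORLD: 360
PC360x181-DC.JM_WORLD: 181
PC360x181-DC.POLE: PC
PC360x181-DC.DATELINE: DC
PC360x181-DC.LM: 72

  SpeciesConc.grid_label: PC360x181-DC,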

Output Restart Files at Regular or Irregular Frequency

The MAPL component in GCHP has the option to output restart files (also called checkpoint files) prior to run end. The frequency of restart file write may be at regular time intervals (regular frequency) or at specific programmed times (irregular frequency). These periodic output restart files contain the date and time in their filenames.

Enabling this feature is a good idea if you plan on doing a long simulation and you are not splitting your run into multiple jobs. If the run crashes unexpectedly then you can restart mid-run rather than start over from the beginning.

Update settings for checkpoint restart outputs in runConfig.sh section Output Restarts. Instructions for configuring both regular and irregular frequency restart files are included in the file.

Turn On/Off Diagnostics

To turn diagnostic collections on or off, comment out ("#") collection names in the "COLLECTIONS" list at the top of file HISTORY.rc. Collections cannot be turned on or off from runConfig.sh.
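
For example, the following COLLECTIONS list enables SpeciesConc and StateMet but leaves AerosolMass off:

COLLECTIONS: 'SpeciesConc',
             #'AerosolMass',
             'StateMet',
::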

Set Diagnostic Frequency, Duration, and Mode

All diagnostic collections that come with the run directory have frequency, duration, and mode auto-set within runConfig.sh. The file contains a list of time-averaged collections and instantaneous collections, and allows setting a frequency and duration to apply to all collections listed for each. See section Output Diagnostics within runConfig.sh. To avoid auto-update of a certain collection, remove it from the list in runConfig.sh. If adding a new collection, you can add it to the file to enable auto-update of frequency, duration, and mode.
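
The corresponding block of runConfig.sh looks roughly like the excerpt below, which is taken from an older version of the script, so the details may differ in yours. The common settings are applied to every collection listed beneath them.

common_freq="010000"          # HHmmSS; ignored if using the multi-run monthly diag option
common_dur="010000"           # HHmmSS; ignored if using the multi-run monthly diag option
common_mode="'time-averaged'" # "'time-averaged'" or "'instantaneous'"

SpeciesConc_freq=${common_freq}
SpeciesConc_dur=${common_dur}
SpeciesConc_mode=${common_mode}
AerosolMass_freq=${common_freq}
AerosolMass_dur=${common_dur}
AerosolMass_mode=${common_mode}
# ...and so on for each collection listed in the file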

Add a New Diagnostics Collection

Adding a new diagnostics collection in GCHP is the same as for GEOS-Chem Classic netcdf diagnostics. You must add your collection to the collection list in HISTORY.rc and then define it further down in the file. Any 2D or 3D arrays that are stored within GEOS-Chem objects State_Met, State_Chm, or State_Diag, may be included as fields in a collection. State_Met variables must be preceded by "Met_", State_Chm variables must be preceded by "Chem_", and State_Diag variables should not have a prefix. See the HISTORY.rc file for examples.
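
For reference, a minimal collection definition in HISTORY.rc looks something like the sketch below. The collection name and fields are made up, and the gridded component name ('GCHPchem' here; older versions use 'GIGCchem') varies by version, so copy an existing collection in your HISTORY.rc as a template and remember to also add the new name to the COLLECTIONS list.

  MyCollection.template:   '%y4%m2%d2_%h2%n2z.nc4',
  MyCollection.format:     'CFIO',
  MyCollection.frequency:  010000
  MyCollection.duration:   010000
  MyCollection.mode:       'time-averaged'
  MyCollection.fields:     'SpeciesConc_O3   ', 'GCHPchem',
                           'Met_PMID         ', 'GCHPchem',
::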

Once implemented, you can either incorporate the new collection settings into runConfig.sh for auto-update, or you can manually configure all settings in HISTORY.rc. See the Output Diagnostics section of runConfig.sh for more information.

Generate Monthly Mean Diagnostics

There is an option to automatically generate monthly diagnostics by submitting month-long simulations as separate jobs. Splitting up the simulation into separate jobs is a requirement for monthly diagnostics because MAPL History requires a fixed number of hours set for diagnostic frequency and file duration. The monthly mean diagnostic option automatically updates HISTORY.rc diagnostic settings each month to reflect the number of days in that month taking into account leap years.

To use the monthly diagnostics option, first read and follow instructions for splitting a simulation into multiple jobs (see separate section on this page). Prior to submitting your run, enable monthly diagnostics in runConfig.sh by searching for variable "Monthly_Diag" and changing its value from 0 to 1. Be sure to always start your monthly diagnostic runs on the first day of the month.

Debugging

Enable Maximum Print Output

Besides compiling with CMAKE_BUILD_TYPE=Debug, there are a few settings you can configure to boost your chance of successful debugging. All of them involve sending additional print statements to the log files.

  1. Set Turn on debug printout? in input.geos to T to turn on extra GEOS-Chem print statements in the main log file.
  2. Set MAPL_EXTDATA_DEBUG_LEVEL in runConfig.sh to 1 to turn on extra MAPL print statements in ExtData, the component that handles input.
  3. Set the Verbose and Warnings settings in HEMCO_Config.rc to maximum values of 3 to send the maximum number of prints to HEMCO.log.

None of these options require recompiling. Be aware that all of them will slow down your simulation. Be sure to set them back to the default values after you are finished debugging.
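
Concretely, the three settings listed above look roughly like this in their respective files; some versions of runConfig.sh also expose MAPL_DEBUG_LEVEL and MEMORY_DEBUG_LEVEL flags that add further ExtData and memory prints.

# In input.geos (Simulation Menu)
Turn on debug printout? : T

# In runConfig.sh
MAPL_EXTDATA_DEBUG_LEVEL=1   # set back to 0 when you are done debugging

# In HEMCO_Config.rc (Settings section)
Verbose:                     3
Warnings:                    3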



Previous | Next | Getting Started with GCHP | GCHP Main Page