Running GCHP: Basics

The GCHP documentation has moved to https://gchp.readthedocs.io/. The GCHP documentation on http://wiki.seas.harvard.edu/ will stay online for several months, but it is outdated and no longer active!


Previous | Next | Getting Started with GCHP | GCHP Main Page

  1. Hardware and Software Requirements
  2. Setting Up the GCHP Environment
  3. Downloading Source Code and Data Directories
  4. Compiling
  5. Obtaining a Run Directory
  6. Running GCHP: Basics
  7. Running GCHP: Configuration
  8. Output Data
  9. Developing GCHP
  10. Run Configuration Files


Overview

This page presents the basic information needed to run GCHP as well as how to verify a successful run and reuse a run directory. A pre-run checklist is included at the end to help prevent run errors. The GCHP "standard" simulation run directory is configured for a 1-hr simulation at c24 resolution and is a good first test case to check that GCHP runs on your system.

How to Run GCHP

You can run GCHP locally from within your run directory ("interactively") or by submitting your run to a job scheduler if one is available. Either way, it is useful to put your run commands into a reusable script, referred to here as the run script. Executing the script will either run GCHP directly or submit a job that runs GCHP.

There is a symbolic link in the GCHP run directory called runScriptSamples that points to a directory in the source code containing example run scripts. Each file includes extra commands that make the run process easier and less prone to user error. These commands include:

  1. Source the environment file symbolic link gchp.env to ensure the run environment is consistent with the build
  2. Source the config file runConfig.sh to set run-time configuration
  3. Delete any previous run output files, if present, that might interfere with the new run
  4. Send standard output to run-time log file gchp.log
  5. Rename the output restart file to include "restart" and the datetime
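
As a rough illustration, a minimal run script along these lines might look like the sketch below. This is not one of the shipped examples; it assumes an MPI launcher such as mpirun, the file names used on this page, and a six-core run:

  #!/bin/bash
  source gchp.env              # 1. match the run environment to the one used at build time
  source runConfig.sh          # 2. apply run-time configuration
  rm -f cap_restart gcchem*    # 3. remove leftover output that could interfere with this run
  mpirun -np 6 ./gchp | tee gchp.log   # 4. run GCHP, sending standard output to gchp.log
  # 5. rename the output checkpoint to include "restart" and a datetime
  #    (the shipped scripts use the simulation end time; wall-clock time is shown here)
  mv gcchem_internal_checkpoint gcchem_internal_checkpoint.restart.$(date +%Y%m%d_%H%M).nc4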

Run Interactively

Copy or adapt example run script gchp.local.run to run GCHP locally on your machine. Before running, open your run script and set nCores to the number of processors you plan to use. Make sure you have this number of processors available locally. It must be at least 6. Next, open file runConfig.sh and set NUM_CORES, NUM_NODES, and NUM_CORES_PER_NODE to be consistent with your run script.
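
For example, a consistent six-core, one-node setup would look like this (variable names as referenced above; values illustrative, and the product relationship between the three runConfig.sh settings is assumed):

  # In gchp.local.run:
  nCores=6

  # In runConfig.sh:
  NUM_NODES=1
  NUM_CORES_PER_NODE=6
  NUM_CORES=6     # assumed to equal NUM_NODES * NUM_CORES_PER_NODE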

To run, type the following at the command prompt:

./gchp.local.run

Standard output will be displayed on your screen in addition to being sent to log file gchp.log.

Run as a Batch Job

Batch job run scripts will vary based on what job scheduler you have available. Most of the example run scripts are for use with SLURM, and the most basic example of these is gchp.run. You may copy any of the example run scripts to your run directory and adapt for your system and preferences as needed.

At the top of all batch job scripts are configurable run settings. The most critical are the requested number of cores, number of nodes, time, and memory. Figuring out the optimal values for your run can take some trial and error. For a basic six-core standard simulation job on one node you should request at least ___ min and __ Gb. The more cores you request, the faster GCHP will run.
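
With SLURM, for example, these requests go in #SBATCH directives at the top of the run script. The values below are placeholders to show the syntax, not recommendations:

  #!/bin/bash
  #SBATCH -n 6          # total number of cores (keep consistent with NUM_CORES in runConfig.sh)
  #SBATCH -N 1          # number of nodes (keep consistent with NUM_NODES in runConfig.sh)
  #SBATCH -t 0-02:00    # requested wall time (days-hours:minutes)
  #SBATCH --mem=32G     # requested memory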

To submit a batch job using SLURM:

 sbatch gchp.run

To submit a batch job using Grid Engine:

 qsub gchp.run

Standard output will be sent to log file gchp.log once the job starts, unless you change that feature of the run script. Standard error will be sent to a scheduler-specific file, e.g. slurm-<jobid>.out if using SLURM, unless you configure your run script to do otherwise.

If your computational cluster uses a different job scheduler, e.g. Grid Engine, LSF, or PBS, check with your IT staff or search online for how to configure and submit batch jobs. Each scheduler's configurable settings and accepted formats are documented online and are often accessible from the command line as well. For example, type man sbatch to scroll through the options for SLURM, including the various ways of specifying the number of cores, time, and memory requested.

Verify a Successful Run

There are several ways to verify that your run was successful.

  1. NetCDF files are present in the OutputDir subdirectory
  2. Standard output file gchp.log ends with Model Throughput timing information
  3. The job scheduler log does not contain any error messages
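
A quick command-line check of the first two items might look like:

  ls OutputDir/*.nc4    # diagnostic netCDF files should be listed
  tail gchp.log         # the log should end with Model Throughput timing information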

If it looks like something went wrong, scan through the log files to determine where the error occurred. Here are a few debugging tips:

  • Review all of your configuration files to ensure everything is set up properly
  • MAPL_Cap errors typically indicate a problem with the start time, end time, and/or duration set in runConfig.sh
  • MAPL_ExtData errors often indicate a problem with the input files specified in HEMCO_Config.rc or ExtData.rc
  • MAPL_HistoryGridComp errors are related to your configured output in HISTORY.rc
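
When a log is long, searching it for error strings can narrow things down quickly; for example:

  grep -in "error" gchp.log     # case-insensitive search, with line numbers
  grep -n "MAPL_Cap\|MAPL_ExtData\|MAPL_HistoryGridComp" gchp.log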

If you cannot figure out where the problem is, please do not hesitate to create a GCHPctm GitHub issue at https://github.com/geoschem/gchpctm/issues.

Reuse a Run Directory

Archive Run Output

Reusing a GCHP run directory comes with the peril of losing your old work. To mitigate this, utility shell script archiveRun.sh archives data output and configuration files to a subdirectory that will not be deleted if you clean your run directory.

Archiving runs is useful for other reasons as well, including:

  • Saving all settings and logs for later reference after a run crashes
  • Generating data from the same executable using different run-time settings for comparison, e.g. c48 versus c180
  • Performing short runs in quick succession for debugging

To archive a run, pass the archive script a descriptive subdirectory name where data will be archived. For example:

./archiveRun.sh 1mo_c24_24hrdiag

All files are archived to subfolders in the new directory, and which files are copied, and to where, is displayed on the screen. Diagnostic files in the OutputDir directory are moved rather than copied so as not to duplicate large files; you will be prompted at the command line to accept this before the move.
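
The resulting archive layout looks roughly like the sketch below. The subfolder names are assumptions based on representative archiveRun.sh output and may differ between versions:

  1mo_c24_24hrdiag/
    config/         # copies of input.geos, *.rc files, runConfig.sh, and run scripts
    logs/           # copies of gchp.log, HEMCO.log, compile.log, etc.
    diagnostics/    # diagnostic netCDF files moved from OutputDir
    checkpoints/    # checkpoint (restart) output files and cap_restart
    restart/        # copy of the initial restart file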

Clean the Run Directory

You should always clean your run directory prior to your next run. This avoids confusion about what output was generated when and with what settings. Under certain circumstances it also prevents your new run from crashing. GCHP will crash if:

  • Output file cap_restart is present and you did not change your start/end times
  • Your last run failed in such a way that the restart file was not renamed in the post-run commands in the run script

The example run scripts include extra commands to clean the run directory of the two problematic files listed above. However, if you write your own run script and omit those commands, failing to clean the run directory before rerunning will cause problems.

Utility shell script cleanRunDir.sh makes cleaning the run directory simple. To clean the run directory, simply execute this script:

 ./cleanRunDir.sh

All GCHP output files, including diagnostic files in OutputDir, will then be deleted. Among restart files, only those with names beginning with gcchem are deleted; this preserves the initial restart symbolic links that come with the run directory.
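
As a sketch of what cleaning involves, the commands such a script runs look something like the following (file names from this page; the actual cleanRunDir.sh may differ):

  rm -f OutputDir/*.nc4    # diagnostic output from the last run
  rm -f cap_restart        # stores the date/time at which the last run ended
  rm -f gcchem*            # checkpoint/restart output (initial restart links are not matched)
  rm -f gchp.log HEMCO.log PET*.log slurm-*    # log files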

Pre-run Checklist

Prior to running GCHP, always run through the following checklist to ensure everything is set up properly.

  1. Your run directory contains the executable gchp.
  2. All symbolic links in your run directory are valid (no broken links)
  3. You have looked through and set all configurable settings in runConfig.sh (discussed in the next chapter)
  4. If running via a job scheduler: you have a run script and the resource allocation in runConfig.sh and your run script are consistent (# nodes and cores)
  5. If running interactively: the resource allocation in runConfig.sh is available locally
  6. If reusing a run directory (optional but recommended): you have archived your last run with ./archiveRun.sh if you want to keep it, and you have deleted old output files with ./cleanRunDir.sh
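
For item 2, a quick way to list broken symbolic links with GNU find is:

  find . -xtype l    # prints symbolic links whose targets do not exist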

Previous | Next | Getting Started with GCHP | GCHP Main Page