Specifying settings for OpenMP parallelization


Previous | Next | Getting Started with GEOS-Chem

  1. Minimum system requirements
  2. Configuring your computational environment
  3. Downloading source code
  4. Downloading data directories
  5. Creating run directories
  6. Configuring runs
  7. Compiling
  8. Running
  9. Output files
  10. Visualizing and processing output
  11. Coding and debugging
  12. Further reading


Parallelization settings for GEOS-Chem "Classic"

GEOS-Chem "Classic" uses OpenMP parallelization, which is an implementation of shared-memory parallelization (as opposed to distributed-memory parallelization such as MPI). Two Unix environment variables control the OpenMP parallelization settings, as described below.

OMP_NUM_THREADS

The OMP_NUM_THREADS environment variable sets the number of computational cores (i.e. OpenMP threads) that you would like GEOS-Chem to use.

The following command requests that GEOS-Chem use 8 cores:

export OMP_NUM_THREADS=8

You can of course change the number of cores from 8 to however many you would like your GEOS-Chem simulation to use. The caveat is that OpenMP-parallelized programs cannot execute on more than one computational node of a multi-node system. Most modern computational nodes typically contain between 16 and 64 cores; your GEOS-Chem "Classic" simulations will therefore not be able to take advantage of more cores than are present on a single node. (We recommend that you consider using GCHP for more computationally-intensive simulations.)
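Because a single node only has so many cores, one simple safeguard is to cap the requested thread count at whatever the node actually provides. The sketch below is only an illustration, not part of the official GEOS-Chem run scripts; NDESIRED and NAVAIL are hypothetical variable names, and the value 8 is just the example from the text above.

```shell
# Sketch: cap the requested OpenMP thread count at the number of cores
# that nproc reports for this node.
NDESIRED=8            # hypothetical desired thread count (see example above)
NAVAIL=$(nproc)       # cores available on this node
if [ "$NDESIRED" -gt "$NAVAIL" ]; then
  export OMP_NUM_THREADS="$NAVAIL"
else
  export OMP_NUM_THREADS="$NDESIRED"
fi
echo "Using $OMP_NUM_THREADS OpenMP threads"
```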

Where to define OMP_NUM_THREADS

We recommend that you set OMP_NUM_THREADS not only in your Bash startup script, but also in each GEOS-Chem run script that you use.

Example: SLURM run script

If your system uses the SLURM batch scheduler, then you can write your GEOS-Chem job script using the SLURM_CPUS_PER_TASK environment variable, so that GEOS-Chem will use the same number of cores that you requested from SLURM.

#!/bin/bash

#SBATCH -c 24
#SBATCH -N 1
#SBATCH -t 0-12:00
#SBATCH -p MY_QUEUE_NAME
#SBATCH --mem=60000

# Apply your environment settings to the computational queue
source ~/.bashrc
 
# Set the proper # of threads for OpenMP
# SLURM_CPUS_PER_TASK ensures this matches the number you set with -c above
#
# So in this example, we requested that SLURM make 24 cores available,
# and GEOS-Chem will use all of these 24 cores.
export OMP_NUM_THREADS=$SLURM_CPUS_PER_TASK

... etc ...

IMPORTANT! If you forget to define OMP_NUM_THREADS in your Unix environment and/or your run scripts, then GEOS-Chem will execute using only one core. This can cause GEOS-Chem to run much more slowly than intended.
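One defensive pattern (a sketch only, not part of the official run scripts) is to fall back to a single thread when SLURM_CPUS_PER_TASK happens to be undefined, and to print a warning so that the slow single-core case does not go unnoticed:

```shell
# Sketch: default to 1 thread if SLURM_CPUS_PER_TASK is unset (e.g. when
# the script is run outside of SLURM), and warn about the slow case.
export OMP_NUM_THREADS="${SLURM_CPUS_PER_TASK:-1}"
if [ "$OMP_NUM_THREADS" -eq 1 ]; then
  echo "WARNING: running with only 1 OpenMP thread" >&2
fi
```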

--Bob Yantosca (talk) 15:50, 20 December 2019 (UTC)

Example: Run script for the Amazon Web Services cloud

When you log into an Amazon Web Services cloud instance, you will receive an entire node with as many vCPUs as you have requested. Each vCPU is the equivalent of one computational core. Most cloud instances have twice as many vCPUs as physical CPU cores (i.e. each physical core supports two hyperthreads).

To find out how many vCPUs are available in your instance, you can use the nproc command. The nproc command can also be embedded in your shell startup scripts (such as .bashrc or .bash_aliases), as well as in your GEOS-Chem run script. The following is a sample run script:

#!/bin/bash
 
# Apply your environment settings to the computational queue
source ~/.bashrc
 
# In an AWS cloud instance, you own the entire node, so there is no need
# for a scheduler.  Use nproc to specify the number of cores for OpenMP.
export OMP_NUM_THREADS=`nproc`

...etc...

--Bob Yantosca (talk) 15:19, 16 January 2020 (UTC)

OMP_STACKSIZE

In order to use GEOS-Chem "Classic" with OpenMP parallelization, you must request the maximum amount of stack memory in your Unix environment. (The stack memory is where local automatic variables and temporary !$OMP PRIVATE variables will be created.) Add the following lines to your system startup file and to your GEOS-Chem run scripts:

ulimit -s unlimited
export OMP_STACKSIZE=500m

The ulimit -s unlimited command (for bash) or the limit stacksize unlimited command (for csh/tcsh) tells the Unix shell to use the maximum amount of stack memory available.

The OMP_STACKSIZE environment variable must also be set to a large value. It determines how much stack memory each OpenMP thread receives, which is where the temporary !$OMP PRIVATE variables are created. In this example we are requesting 500 MB of stack memory per thread; the value 500m is a good round number that is larger than what most GEOS-Chem "Classic" simulations will need, but you can increase it if you wish.
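As a quick way to confirm that both settings took effect, you could echo them back in your run script; this is only a sketch, and the 2>/dev/null guard covers systems where the hard stack limit cannot be raised to unlimited.

```shell
# Sketch: apply and then echo back the stack-memory settings described
# above, so a misconfiguration is visible in the job log.
ulimit -s unlimited 2>/dev/null || echo "NOTE: could not raise stack limit" >&2
export OMP_STACKSIZE=500m
echo "Shell stack limit: $(ulimit -s)"
echo "OMP_STACKSIZE:     $OMP_STACKSIZE"
```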

--Bob Yantosca (talk) 21:02, 19 December 2019 (UTC)

Where to define OMP_STACKSIZE

We recommend that you set OMP_STACKSIZE not only in your Bash startup script, but also in each GEOS-Chem run script that you use.

Errors caused by incorrect settings

  1. If OMP_NUM_THREADS is set to 1 (or is not defined at all), then your GEOS-Chem simulation will execute properly, but it will use only one computational core. This will make your simulation run much more slowly than intended.

  2. If the OMP_STACKSIZE environment variable is not included in your startup script, or if it is set to a very low value, you might encounter a segmentation fault error after the TPCORE transport module is initialized. In this case, GEOS-Chem "thinks" that it does not have enough stack memory to perform the simulation, even though sufficient memory may be present. Including the OMP_STACKSIZE definition in your startup script as described above usually fixes this error.
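Both of these error conditions can be caught before a simulation starts. The following is a sketch of a pre-flight check; check_openmp_env is a hypothetical helper function, not part of GEOS-Chem.

```shell
# Sketch of a pre-flight check for the two misconfigurations above.
# check_openmp_env is a hypothetical helper, not part of GEOS-Chem.
check_openmp_env() {
  status="OK"
  if [ "${OMP_NUM_THREADS:-1}" -eq 1 ]; then
    echo "WARNING: OMP_NUM_THREADS is 1 or unset; only one core will be used" >&2
    status="MISCONFIGURED"
  fi
  if [ -z "${OMP_STACKSIZE:-}" ]; then
    echo "WARNING: OMP_STACKSIZE is unset; TPCORE may segfault" >&2
    status="MISCONFIGURED"
  fi
  echo "$status"
}
```

Calling check_openmp_env after your export statements prints OK when both variables are set sensibly, and a warning otherwise.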

--Bob Yantosca (talk) 15:54, 20 December 2019 (UTC)

Parallelization settings for GCHP

GCHP uses a different type of parallelization called MPI ("Message Passing Interface"). MPI allows GEOS-Chem to take advantage of cores on multiple nodes instead of being limited to executing on a single node. For detailed information, please see our GEOS-Chem HP wiki page and our Getting Started with GCHP manual.

--Bob Yantosca (talk) 21:02, 19 December 2019 (UTC)

Further reading

  1. Parallelizing GEOS-Chem
  2. Guide to GEOS-Chem performance


