Parallelizing GEOS-Chem
In the late 1990s, several compiler vendors, including Sun, SGI, Compaq, The Portland Group, and Microsoft, developed a new open standard for parallel computing named OpenMP. The resulting standard allows parallel processing source code (written in either Fortran or C) to be ported between different platforms with minimal effort.
As of version 4.17, all GEOS–Chem parallel processing commands have been converted to the new OpenMP standard. Therefore, in order to run GEOS–Chem on your platform, you must make sure that your compiler supports OpenMP.
Example
In GEOS–Chem, parallelization is achieved by splitting the work contained in a DO-loop across several processors. Here is an example of parallel code written with OpenMP directives:
!$OMP PARALLEL DO
!$OMP+SHARED( A )
!$OMP+PRIVATE( I, J, B )
!$OMP+SCHEDULE( DYNAMIC )
      DO J = 1, JJPAR
      DO I = 1, IIPAR
         B      = A(I,J)
         A(I,J) = B * 2.0
      ENDDO
      ENDDO
!$OMP END PARALLEL DO
The !$OMP PARALLEL DO (which must start in column 1) is called a sentinel. It tells the compiler that the following DO-loop is to be executed in parallel. The commands following the sentinel specify further options for the parallelization. These options may be spread across multiple lines by using the OpenMP line continuation command !$OMP+.
The above DO-loop will assign different (I,J) pairs to different processors. The more processors specified, the less time it will take to do the operation.
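To see this distribution of work in action, you can print the thread number inside the loop. The following is a minimal, self-contained sketch (it is not taken from the GEOS–Chem source; the 4x4 loop bounds are arbitrary) that uses the OMP_GET_THREAD_NUM function from the standard OMP_LIB module:

      PROGRAM THREAD_DEMO
      USE OMP_LIB
      IMPLICIT NONE
      INTEGER :: I, J

!$OMP PARALLEL DO
!$OMP+PRIVATE( I, J )
      DO J = 1, 4
      DO I = 1, 4
         ! Report which thread is handling this (I,J) pair
         WRITE( 6, * ) I, J, OMP_GET_THREAD_NUM()
      ENDDO
      ENDDO
!$OMP END PARALLEL DO

      END PROGRAM THREAD_DEMO

When compiled with OpenMP enabled and run with, say, 4 threads, the output shows the (I,J) pairs interleaved among threads 0 through 3; with a single thread the loop still produces the same set of pairs, just serially.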
The declaration SHARED( A ) tells the compiler that the A array may be shared across all processors. We say that A is a SHARED variable.
Even though A itself can be shared across all processors, its indices I and J cannot be shared. Because different processors will be handling different (I,J) pairs, each processor needs its own local copy of I and J. In this way, the processors will not interfere with each other by overwriting each other's values of I and J. We say that I and J need to be made PRIVATE to the parallel loop, which is done with the !$OMP+PRIVATE( I, J ) declaration.
The B scalar also needs to be declared PRIVATE, since its value will be recomputed for each (I,J) pair. We thus must extend the declaration of PRIVATE( I, J ) to PRIVATE( I, J, B ).
The !$OMP END PARALLEL DO is another sentinel. It lets the compiler know where the parallel DO-loop ends. The !$OMP END PARALLEL DO sentinel is optional and thus may be omitted. However, specifying both the beginning and end of parallel sections is not only good style, but also enhances the overall readability of the code.
PRIVATE vs SHARED
Here is a quick and dirty rule of thumb for determining which variables in a parallel DO-loop must be declared PRIVATE (see the sketch after this list):
- All loop indices must be declared PRIVATE.
- All array indices must be declared PRIVATE.
- All scalars which are assigned a value within a parallel loop must be declared PRIVATE.
- All arguments to a function or subroutine called within a parallel loop must be declared PRIVATE.
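As a concrete illustration of these rules, the loop below scales a concentration-like array by a factor computed from a small function. The array CONC, the function GET_SCALE, and the dimensions are invented for this sketch and are not part of GEOS–Chem:

      PROGRAM PRIVATE_RULES
      IMPLICIT NONE
      INTEGER, PARAMETER :: NI = 72, NJ = 46
      REAL               :: CONC(NI,NJ), FACTOR
      INTEGER            :: I, J

      CONC = 1.0

      ! I, J   : loop and array indices           -> PRIVATE
      ! FACTOR : scalar assigned inside the loop  -> PRIVATE
      ! I, J   : also arguments to GET_SCALE      -> PRIVATE
      ! CONC   : array updated element-by-element -> SHARED
!$OMP PARALLEL DO
!$OMP+SHARED( CONC )
!$OMP+PRIVATE( I, J, FACTOR )
      DO J = 1, NJ
      DO I = 1, NI
         FACTOR    = GET_SCALE( I, J )
         CONC(I,J) = CONC(I,J) * FACTOR
      ENDDO
      ENDDO
!$OMP END PARALLEL DO

      PRINT *, 'Sum of CONC: ', SUM( CONC )

      CONTAINS

         REAL FUNCTION GET_SCALE( I, J )
         INTEGER, INTENT(IN) :: I, J
         GET_SCALE = 1.0 + 0.01 * REAL( I + J )
         END FUNCTION GET_SCALE

      END PROGRAM PRIVATE_RULES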
You may also have noticed that the first character of both the !$OMP PARALLEL DO sentinel and the !$OMP+ line continuation command is a legal Fortran comment character (!). This is by design. In order to invoke the parallel processing commands, you must turn on a specific switch in your makefile (this is -mp for SGI; check your compiler manual for other platforms). If you do not specify multiprocessor compilation, then the parallel processing directives will be treated as Fortran comments, and the associated DO-loops will be executed on one processor only.
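As a simple illustration (not from GEOS–Chem), the program below compiles and runs either way: build it with your compiler's OpenMP switch (for example, -fopenmp for gfortran) and NTHREADS reports the available threads; build it without the switch and the !$ lines are treated as ordinary comments, so NTHREADS stays at 1. The !$ conditional-compilation sentinel used here is part of the OpenMP standard, just like !$OMP:

      PROGRAM OMP_SWITCH_DEMO
!$    USE OMP_LIB
      IMPLICIT NONE
      INTEGER :: NTHREADS

      ! Serial default; overridden only when OpenMP is enabled
      NTHREADS = 1
!$    NTHREADS = OMP_GET_MAX_THREADS()

      PRINT *, 'Running with ', NTHREADS, ' thread(s)'

      END PROGRAM OMP_SWITCH_DEMO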
Because GEOS–Chem uses the traditional "fixed-form" Fortran style, the !$OMP commands must begin at column one. Otherwise a syntax error will result at compile time.
It should be noted that OpenMP commands are not the same as MPI (message passing interface). With OpenMP directives, you are able to split a job among several processors on the same machine. You are NOT able to split a job among several processors on different machines. Therefore, OpenMP is not suitable for Beowulf or other distributed memory architectures.
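The sketch below (again, not GEOS–Chem code) makes the shared-memory point explicit: every thread created by a parallel region lives inside one process on one machine, so all of them can write into the same COUNTS array. Under MPI, by contrast, each process has its own separate memory and data must be exchanged with explicit messages. The array size assumes at most 64 threads:

      PROGRAM SHARED_MEMORY_DEMO
      USE OMP_LIB
      IMPLICIT NONE
      INTEGER :: COUNTS(0:63), TID

      COUNTS = 0

!$OMP PARALLEL
!$OMP+SHARED( COUNTS )
!$OMP+PRIVATE( TID )
      ! Each thread marks its own slot in the shared array
      TID         = OMP_GET_THREAD_NUM()
      COUNTS(TID) = 1
!$OMP END PARALLEL

      PRINT *, 'Number of threads that ran: ', SUM( COUNTS )

      END PROGRAM SHARED_MEMORY_DEMO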
For more information
Please consult the OpenMP documentation (e.g. the official OpenMP web site, www.openmp.org) for more information about the OpenMP parallelization directives.
MPI
We are recoding GEOS–Chem for MPI parallelization with the help of NASA. This is an ongoing project. NASA will also help us identify ways in which GEOS–Chem can be tuned for better performance.