GEOS-Chem performance

On this page we will post information about GEOS-Chem performance and timing results.

7-day time tests

Overview

The GEOS-Chem Support Team has created a timing test package that you can use to determine the performance of GEOS-Chem on your system. The time test runs the GEOS-Chem v10-01 public release code for 7 model days with the "benchmark" chemistry mechanism. Our experience has shown that a 7-day simulation will give a more accurate timing result than a 1-day simulation. This is because much of the file I/O (i.e. HEMCO reading annual or monthly-mean emissions fields) occurs on the first day of a run.

To install the time test package on your system, download and unpack it with:

 wget "ftp://ftp.as.harvard.edu/pub/exchange/bmy/gc_timing.tar.gz"
 tar xvzf gc_timing.tar.gz

To build the code, follow these steps:

 cd gc_timing/run.v10-01
 make realclean
 make -j4 mpbuild > log.build

To run the code, follow the instructions in the

 gc_timing/run.v10-01/README 

file. We have provided sample run scripts that you can use to submit jobs:

 gc_timing/run.v10-01/doTimeTest          # Submit job directly
 gc_timing/run.v10-01/doTimeTest.slurm    # Submit job using the SLURM scheduler  
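
The exact contents of these scripts depend on your local scheduler setup. As a rough illustration only (the partition name, wall-time limit, and stack-size setting below are assumptions, not the contents of the distributed doTimeTest.slurm), a minimal SLURM wrapper could look like:

 #!/bin/bash
 #SBATCH -c 8                     # number of CPUs (OpenMP threads) for the test
 #SBATCH -t 04:00:00              # illustrative wall-time limit
 #SBATCH -p my_partition          # hypothetical partition name; use your cluster's
 #SBATCH -o doTimeTest.log.%j     # %j = SLURM job ID, giving doTimeTest.log.ID
 
 # Match the OpenMP thread count to the CPUs requested above
 export OMP_NUM_THREADS=$SLURM_CPUS_PER_TASK
 export OMP_STACKSIZE=500m        # GEOS-Chem needs a large per-thread stack
 
 cd $SLURM_SUBMIT_DIR
 ./doTimeTest                     # run the provided time-test driver script

You would then submit the job with sbatch doTimeTest.slurm.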

The regular GEOS-Chem output as well as timing information will be sent to a log file named:

 doTimeTest.log.ID

where ID is either the SLURM job ID # or the process ID. You can print out the timing results with the printTime script:

 cd gc_timing/run.v10-01
 ./printTime doTimeTest.log.ID

which will display results similar to this:

 GEOS-Chem Time Test output
 ====================================================================
 Machine or node name  : holyseas04.rc.fas.harvard.edu
 CPU vendor            : AuthenticAMD
 CPU model name        : AMD Opteron(tm) Processor 6376                 
 CPU speed [MHz]       : 2300.078
 Number of CPUs used   : 8
 Simulation start date : 20130701 000000
 Simulation end date   : 20130708 000000
 Total CPU time  [s]   : 55287.61
 Wall clock time [s]   : 7999.61
 CPU / Wall ratio      : 6.9113
 % of ideal performance: 86.39
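
The last two lines of this output are derived from the others: the CPU / Wall ratio is the total CPU time divided by the wall clock time, and the % of ideal performance is that ratio divided by the number of CPUs used. As a quick worked example (not part of the timing package) using the numbers above:

 awk 'BEGIN {
   cpu = 55287.61; wall = 7999.61; ncpus = 8   # values from the sample output
   ratio = cpu / wall                          # CPU / Wall ratio  -> 6.9113
   ideal = 100 * ratio / ncpus                 # % of ideal        -> 86.39
   printf "CPU/Wall = %.4f   %% of ideal = %.2f\n", ratio, ideal
 }'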

You can then use these results to fill in the table below.

--Bob Yantosca (talk) 19:06, 30 November 2015 (UTC)

Table of 7-model-day run times

The following timing test results were done with the "out-of-the-box" GEOS-Chem v10-01 public release code configuration.

  • All jobs used GEOS-FP meteorology at 4° x 5° resolution.
  • Jobs started on model date 2013/07/01 00:00 GMT and finished on 2013/07/08 00:00 GMT.
  • The code was compiled from the run directory (run.v10-01) with the standard option make -j4 mpbuild. This sets the following compilation variables:
    • MET=geosfp GRID=4x5 CHEM=benchmark UCX=y NO_REDUCED=n TRACEBACK=n BOUNDS=n FPE=n DEBUG=n NO_ISO=n NEST=n
  • Wall clock times are listed from fastest to slowest within each group of runs that used the same number of CPUs.
  • It's OK to round CPU and wall clock times to the nearest second, for clarity.
| Submitter | Machine or node / Compiler | CPU vendor | CPU model | Speed [MHz] | # of CPUs | CPU time | Wall time | CPU / Wall ratio | % of ideal |
|---|---|---|---|---|---|---|---|---|---|
| Mat Evans (York/NCAS) | earth0.york.ac.uk, ifort 13.0.1.117 | GenuineIntel / SGI UV-2000 | Intel(R) Xeon(R) CPU E5-4650L 0 @ 2.60GHz | 2600.153 | 64 | 98821.79 s (27:27:01) | 1841.46 s (00:30:41) | 53.6649 | 83.85 |
| Luke Schiferl (MIT) | hopper.louvre.mit.edu, ifort 12.1.3 | GenuineIntel | Intel(R) Xeon(R) CPU E5-2680 v3 @ 2.50GHz | 2497.656 | 48 | 47530 s (13:12:10) | 1350 s (00:22:30) | 35.2186 | 73.37 |
| Yanko Davila (CU Boulder) | node39, ifort 11.1.069 | GenuineIntel | Intel(R) Xeon(R) CPU E5-2630 v3 @ 2.40GHz | 2400.00 | 32 | 40000.39 s (11:06:40) | 1641.58 s (00:27:22) | 24.367 | 76.15 |
| Mat Evans (York/NCAS) | earth0.york.ac.uk, ifort 13.0.1.117 | GenuineIntel / SGI UV-2000 | Intel(R) Xeon(R) CPU E5-4650L 0 @ 2.60GHz | 2600.153 | 32 | 49170.2 s (13:39:30) | 1775.27 s (00:29:34) | 27.6973 | 86.55 |
| Jenny Fisher (U. Wollongong) | hpcn11.local, ifort 2015 | AuthenticAMD | AMD Opteron(tm) Processor 6376 | 2300.055 | 32 | 84236.18 s (23:23:56) | 3217.73 s (00:53:38) | 26.1788 | 81.81 |
| Luke Schiferl (MIT) | hopper.louvre.mit.edu, ifort 12.1.3 | GenuineIntel | Intel(R) Xeon(R) CPU E5-2680 v3 @ 2.50GHz | 2500.000 | 24 | 29985 s (08:19:45) | 1519 s (00:25:19) | 19.7377 | 82.24 |
| Luke Schiferl (MIT) | turner.louvre.mit.edu, ifort 12.1.3 | GenuineIntel | Intel(R) Xeon(R) CPU X5675 @ 3.07GHz | 3068.000 | 24 | 38914 s (10:48:34) | 1965 s (00:32:45) | 19.8021 | 82.51 |
| Yanko Davila (CU Boulder) | node30, ifort 11.1.069 | GenuineIntel | Intel(R) Xeon(R) CPU X5650 @ 2.67GHz | 2670.00 | 24 | 42881.39 s (11:54:41) | 2262.19 s (00:37:42) | 18.9557 | 78.98 |
| Huang Shan (Tsinghua) | yxw.tsinghua.edu.cn | GenuineIntel | Intel(R) Xeon(R) CPU E5-2680 v2 @ 2.80GHz | 2799.978 | 20 | 24062 s (06:41:02) | 1422 s (00:23:42) | 16.9264 | 84.63 |
| Junwei Xu (Dalhousie) | newnode7, ifort 11.1 | GenuineIntel | Intel(R) Xeon(R) CPU X5660 @ 2.80GHz | 2801.000 | 20 | 37485 s (10:24:45) | 2481 s (00:41:21) | 15.1095 | 75.55 |
| Huang Shan (Tsinghua) | yxw.tsinghua.edu.cn | GenuineIntel | Intel(R) Xeon(R) CPU E5-2680 v2 @ 2.80GHz | 2799.978 | 16 | 20523 s (05:42:03) | 1479 s (00:24:39) | 13.8802 | 86.75 |
| Melissa Sulprizio (GCST) | regal18.rc.fas.harvard.edu, ifort 11.1.069 | GenuineIntel | Intel(R) Xeon(R) CPU E5-2660 0 @ 2.20 GHz | 2199.822 | 16 | 24559 s (06:49:19) | 1866 s (00:31:06) | 13.1594 | 82.25 |
| Mat Evans (York/NCAS) | earth0.york.ac.uk, ifort 13.0.1.117 | GenuineIntel / SGI UV-2000 | Intel(R) Xeon(R) CPU E5-4650L 0 @ 2.60GHz | 2600.153 | 16 | 29962.15 s (08:19:22) | 2088.55 s (00:34:48) | 14.3459 | 89.66 |
| Jenny Fisher (U. Wollongong / NCI) | r3199 (Raijin @ NCI), ifort 12.1.9.293 | GenuineIntel | Intel(R) Xeon(R) CPU E5-2670 0 @ 2.60GHz | 2601.00 | 16 | 22150.52 s (06:09:11) | 2660.78 s (00:44:20) | 12.4368 | 77.73 |
| Melissa Sulprizio (GCST) | fry-02.as.harvard.edu, ifort 11.1.069 | GenuineIntel | Westmere E56xx/L56xx/X56xx (Nehalem-C) | 2925.998 | 16 | 35221 s (09:47:01) | 2734 s (00:45:34) | 12.8978 | 80.61 |
| Jenny Fisher (U. Wollongong) | hpcn11.local, ifort 2015 | AuthenticAMD | AMD Opteron(tm) Processor 6376 | 2299.992 | 16 | 50992.52 s (14:09:53) | 3725.06 s (01:02:05) | 13.689 | 85.56 |
| Luke Schiferl (MIT) | hopper.louvre.mit.edu, ifort 12.1.3 | GenuineIntel | Intel(R) Xeon(R) CPU E5-2680 v3 @ 2.50GHz | 2197.558 | 12 | 16879 s (04:39:49) | 1639 s (00:27:19) | 10.2994 | 85.83 |
| Huang Shan (Tsinghua) | yxw.tsinghua.edu.cn | GenuineIntel | Intel(R) Xeon(R) CPU E5-2680 v2 @ 2.80GHz | 2799.978 | 12 | 18410 s (05:06:50) | 1724 s (00:28:44) | 10.6758 | 88.97 |
| Melissa Sulprizio (GCST) | regal18.rc.fas.harvard.edu, ifort 11.1.069 | GenuineIntel | Intel(R) Xeon(R) CPU E5-2660 0 @ 2.20 GHz | 2199.822 | 12 | 21718 s (06:01:58) | 2127 s (00:35:27) | 10.2086 | 85.07 |
| Melissa Sulprizio (GCST) | fry-02.as.harvard.edu, ifort 11.1.069 | GenuineIntel | Westmere E56xx/L56xx/X56xx (Nehalem-C) | 2925.998 | 12 | 25443 s (07:04:03) | 2575 s (00:42:55) | 9.9881 | 82.34 |
| Luke Schiferl (MIT) | turner.louvre.mit.edu, ifort 12.1.3 | GenuineIntel | Intel(R) Xeon(R) CPU X5675 @ 3.07GHz | 3068.000 | 12 | 32342 s (08:59:02) | 2989 s (00:49:49) | 10.8211 | 90.18 |
| Karl Seltzer/Barron Henderson (Duke/UF) | c6a-s12.ufhpc, ifort 12.1.5 | AuthenticAMD | AMD Opteron(tm) Processor 6378 | 2400.038 | 12 | 41023 s (11:23:44) | 4268 s (01:11:08) | 9.6108 | 80.09 |
| Zahra Hosseini (RWDI) | private, PGI 14.7 (optimization -O1) | GenuineIntel | Intel(R) Xeon(R) CPU E5-2650 v2 @ 2.60GHz | 2600.157 | 12 | 68523 s (19:02:03) | 6319 s (01:45:19) | 10.8440 | 90.37 |
| Jenny Fisher (U. Wollongong / NCI) | r105 (Raijin @ NCI), ifort 12.1.9.293 | GenuineIntel | Intel(R) Xeon(R) CPU E5-2670 0 @ 2.60GHz | 2601.00 | 8 | 18535.98 s (05:08:56) | 2660.78 s (00:44:21) | 6.9664 | 87.08 |
| Mat Evans (York/NCAS) | earth0.york.ac.uk, ifort 13.0.1.117 | GenuineIntel / SGI UV-2000 | Intel(R) Xeon(R) CPU E5-4650L 0 @ 2.60GHz | 2600.153 | 8 | 20082 s (05:34:42) | 2681 s (00:44:40) | 7.4884 | 93.61 |
| Melissa Sulprizio (GCST) | regal17.rc.fas.harvard.edu, ifort 11.1.069 | GenuineIntel | Intel(R) Xeon(R) CPU E5-2660 0 @ 2.20 GHz | 2199.849 | 8 | 20398 s (05:39:58) | 2837 s (00:47:17) | 7.2045 | 90.06 |
| Melissa Sulprizio (GCST) | fry-01.as.harvard.edu, ifort 11.1.069 | GenuineIntel | Westmere E56xx/L56xx/X56xx (Nehalem-C) | 2925.998 | 8 | 23048 s (06:24:08) | 3312 s (00:55:12) | 6.9611 | 87.01 |
| Bob Yantosca (GCST) | fry-01.as.harvard.edu, ifort 11.1.069 | GenuineIntel | Westmere E56xx/L56xx/X56xx (Nehalem-C) | 2925.998 | 8 | 24234 s (06:43:54) | 3456 s (00:57:36) | 7.0114 | 87.64 |
| Bob Yantosca (GCST) | fry-02.as.harvard.edu, ifort 11.1.069 | GenuineIntel | Westmere E56xx/L56xx/X56xx (Nehalem-C) | 2925.998 | 8 | 25222 s (07:00:22) | 3583 s (00:59:43) | 7.0397 | 88.0 |
| Bob Yantosca (GCST) | holyseas03.rc.fas.harvard.edu, ifort 11.1.069 | AuthenticAMD | AMD Opteron(tm) Processor 6376 | 2300.024 | 8 | 32972 s (09:09:32) | 5054 s (01:24:14) | 6.5241 | 81.55 |
| Jenny Fisher (U. Wollongong) | hpcn01.local, ifort 2015 | AuthenticAMD | AMD Opteron(tm) Processor 6376 | 2299.983 | 8 | 37536.33 s (10:25:36) | 5146.54 s (01:25:47) | 7.2935 | 91.17 |
| Bob Yantosca (GCST) | holyseas02.rc.fas.harvard.edu, ifort 11.1.069 | AuthenticAMD | AMD Opteron(tm) Processor 6376 | 2300.054 | 8 | 33722 s (09:22:02) | 5281 s (01:28:01) | 6.385 | 79.81 |
| Melissa Sulprizio (GCST) | holyseas01.rc.fas.harvard.edu, ifort 11.1.069 | AuthenticAMD | AMD Opteron(tm) Processor 6376 | 2299.936 | 8 | 37379 s (10:22:59) | 5477 s (01:31:17) | 6.8353 | 85.44 |
| Karl Seltzer/Barron Henderson (Duke/UF) | c6a-s12.ufhpc, ifort 12.1.5 | AuthenticAMD | AMD Opteron(tm) Processor 6378 | 2399.936 | 8 | 35988 s (09:59:48) | 6137 s (01:42:17) | 5.8641 | 73.3 |
| Bob Yantosca (GCST) | holyseas04.rc.fas.harvard.edu, ifort 11.1.069 | AuthenticAMD | AMD Opteron(tm) Processor 6376 | 2300.078 | 8 | 55288 s (15:21:28) | 8000 s (02:13:20) | 6.9113 | 86.39 |
| Junwei Xu (Dalhousie) | dal.acenet.ca, ifort 11.1 | AuthenticAMD | Quad-Core AMD Opteron(tm) Processor 8384 | 2700.000 | 1 | 23443 s (06:30:43) | 23645 s (06:34:05) | 0.9915 | 99.15 |


A quick glance at the table shows that timing tests compiled with the Intel Fortran Compiler consistently ran more slowly on machines with AMD CPUs than on machines with Intel CPUs. This is a long-standing issue: the Intel Fortran Compiler is known to optimize best for GenuineIntel CPUs.

--Bob Yantosca (talk) 16:46, 11 December 2015 (UTC)

Graph of 7-model-day run times

The plot below is a graphical representation of the table from the above section.

[Figure: 7 day time tests.png (graphical representation of the table above)]

--Bob Yantosca (talk) 17:15, 11 December 2015 (UTC)

GEOS-Chem scalability

Overview

Colette Heald wrote:

I was wondering if you could give me your thoughts on GEOS-Chem scalability? I'm about to purchase some new servers, and the default would be 6 dual core servers, so 12 processors total. I see a huge difference in my 4p vs. 8p machines, but I'm wondering if there's much advantage going beyond that to 12p. My sense from past discussions is that GC does not scale very well.

Jack Yatteau replied:

First, if you’re getting Intel processors with hyperthreading, your 2-socket hex-core system will look as if it has 24 processors under Linux. We’re currently using 2-socket quad-core systems that appear to have 16 processors under Linux. Codes run almost twice as fast up to 8 threads, and run at about the same speed at 16 threads. That means an 8p job will run faster on the newer system than on an older 2-socket quad-core system without hyperthreading at about the same clock speed, but two 8p jobs running simultaneously will each run at about the same speed as they would on the older systems. So the system appears to slow down as you add more than 8 threads. On a hex-core system, the threshold would be at 12 threads. You’ll therefore have a difficult time making sense of timing tests on one of the newer systems unless you disable hyperthreading; otherwise you might as well limit the number of threads to 12 and leave hyperthreading enabled.
I measured scaling 5 years ago using a 16 processor Origin 2000 and a 12 processor Altix and you can see the results and my analysis of them in this Powerpoint presentation.
Since then I’ve run tests at 4x5 resolution on dual-core Opteron processors up to 16 cores and on modern Xeon systems up to 8p. GEOS-Chem still runs about 1.5-1.6 times faster on 8 threads than on 4 threads. In our environment, it matters more how many runs get completed. Even if we could get a job to run 25% faster on 16 threads than on 8 threads, we’d be better off running 2 simultaneous jobs, each using 8 threads. Also, be aware that at 2x2.5 resolution GEOS-Chem doesn’t scale as well, since more time is spent doing transport, and the transport code doesn’t scale as well as the chemistry code.
Finally, we’ve done very well using dual socket systems since for the past several years computers have been designed with high bandwidth to memory for pairs of processors. Going to more than 2 sockets (e.g. 4 quad or hex core systems), the bandwidth between 2 or more pairs of processors drops, and I’d expect that to slow down SMP jobs whose threads don’t all run on the same pair. So my recommendation would be to stick to 2 socket systems and use the savings to add more of them. Plus, maybe I’ve convinced you that you’ll be getting a machine with 3 times the capability of an older dual quad-core system.

--Bob Y. 13:09, 12 August 2010 (EDT)
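
A practical takeaway from the discussion above is to check how many physical cores a node really has before choosing an OpenMP thread count. A minimal sketch for a Linux system (the thread count of 12 below is an assumption for a 2-socket hex-core node):

 # Report sockets, cores per socket, and threads per core
 lscpu | egrep 'Socket|Core|Thread'
 
 # Limit GEOS-Chem to the physical cores rather than the hyperthreaded
 # logical CPUs, which generally do not make a single run any faster
 export OMP_NUM_THREADS=12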

Benchmarking results from MIT user group

Colette Heald wrote:

I have been benchmarking GEOS-Chem on my new system here at MIT and I thought you might be interested in seeing the results for the scaling. This is with a dual hex-core Xeon 3.07 GHz chip & 48 GB RAM from Thinkmate.
[Figure: Mit gc benchmark.png (GEOS-Chem scaling results on the MIT system)]

Jack Yatteau replied:

Note that there is no difference between 12 and 24. It’s not just scaling. With 12 real cores, jobs run faster than on old non-hyperthreaded cores at the same clock speed, but once you start relying on hyperthreading (>12) you don’t gain speed, even if the job scales. But you could run two 12-core jobs at about the speed of the older processors, which is what we do when the cluster is busy.

Colette Heald wrote:

Yup, hyperthreading doesn't appear to buy me anything. But I did test submitting two 12-core jobs to the same machine, and the run time went from 36 min to 58 min. I suppose that's not quite a doubling, but it didn't seem like a worthwhile experiment on my system.

--Bob Y. 12:00, 17 April 2012 (EDT)
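
If you do run two simultaneous jobs on one node, as discussed above, it helps to keep each job on its own set of cores. A rough sketch using taskset (the CPU ID ranges, run directory names, and the executable name geos are illustrative assumptions; check your node's layout with lscpu first):

 export OMP_NUM_THREADS=12
 
 # Pin each 12-thread run to its own half of a 24-logical-CPU node
 ( cd run1 && taskset -c 0-11  ./geos > log.run1 ) &
 ( cd run2 && taskset -c 12-23 ./geos > log.run2 ) &
 wait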

Benchmarking results from University of Liege user group

Manu Mahieu wrote:

GEOS-Chem v9-01-03 is now installed and running at ULg. I have performed a few benchmark runs on our server involving from 4 up to 32 CPUs. This PDF document provides information about our server, the OS and compiler used as well as the running times for the various configurations tested up to now. By far, the best performance was obtained when submitting the GC simulation to all available CPUs (i.e. 32).

--Bob Y. 11:44, 19 June 2013 (EDT)

Benchmarking results from University of York user group

Mat Evans and his group at the University of York have done an analysis of how GEOS-Chem v9-02 performs when compared to the previous version, GEOS-Chem v9-01-03. Please follow this post on our GEOS-Chem v9-02 wiki page to view the results.

--Bob Y. 10:57, 19 November 2013 (EST)

Timing information from older GEOS-Chem versions

The following information is mostly out-of-date. We shall keep it here for future reference.

Adding additional tracers

Claire Carouge wrote:

I ran an ensemble of 5 identical runs for 43 and 54 tracers with GEOS-Chem v8-02-04, compiled with IFORT 11.1.069, using GEOS-5 meteorology at 4x5 resolution. For each set of tracers, I've run with everything turned on and then again with the chemistry turned off.
Here are the times (simulation length: 4 days).
| # of tracers | Avg total time (s) | Avg chemistry time (s) | Avg transport time (s) |
|---|---|---|---|
| 54 (SOA chemistry) | 760.27 | 457.93 | 302.34 |
| 43 (no SOA chemistry) | 709.92 | 427.55 | 282.37 |
| Diff (54 - 43 tracers) | +50.35 | +30.38 | +19.97 |
So adding 11 tracers increases the transport, chemistry, and total times by about 7% each. The additional transport time is therefore not linear in the number of tracers, but a linear estimate (1% additional time per additional tracer) gives a high-end estimate of the extra cost.
The additional chemistry time is very dependent on the type of tracer you add (aerosol, gas tracer with modifications to globchem.dat, etc.), so the 7% increase in time is probably very particular to the SOA tracers.

--Bob Y. 10:51, 15 April 2010 (EDT)
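
For reference, the percentages quoted above follow directly from the table; a small worked example (not part of GEOS-Chem):

 awk 'BEGIN {
   t54 = 760.27; t43 = 709.92         # avg total times [s] from the table
   pct = 100 * (t54 - t43) / t43      # ~7.1% increase for 11 extra tracers
   printf "+%.1f%% total, ~%.2f%% per added tracer\n", pct, pct / 11
 }'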

Intel Fortran Compiler

Please see the following links for some timing comparisons between the various versions of the Intel Fortran Compiler (aka "IFORT" compiler):

--Bob Y. 10:51, 15 April 2010 (EDT)

Timing results from 1-month benchmarks

Please see our GEOS-Chem supported platforms and compilers page for a user-submitted list of timing results from GEOS-Chem 1-month benchmark simulations. Several platform/compiler combinations are listed on that page.

--Bob Y. 10:57, 15 April 2010 (EDT)