Downloading GEOS-Chem data directories
This page describes where you can obtain the GEOS-Chem source code and required data files.
In addition to the configuration files that ship with GEOS-Chem run directories, GEOS-Chem also needs to access data directories containing:
- Meteorological data (a.k.a. the "met fields") used to drive GEOS-Chem
- Emissions inventories used by GEOS-Chem
- Scale factors used to scale emissions from a base year to a given year
- Sample restart files that you can use to spin up your GEOS-Chem simulations
- Oxidant (OH, O3) concentrations for both full-chemistry and offline simulations
- Other GEOS-Chem-specific data files.
These files are often too large to store in a single user's disk space. Therefore, they are meant to be stored in shared disk space where all GEOS-Chem users in your group can have access to them.
Do I really need to download ALL of this data?
Maybe not! If you are located at an institution with multiple GEOS-Chem users, then your computer system might already have a copy of the GEOS-Chem shared data directories. If so, you will not have to download any data (unless, for example, you need met field data for 2020 and your system only has data through 2019). If you are unsure whether the shared data directories are available to you, ask your system administrator or IT staff.
Also, starting with GEOS-Chem 12.7.0, you can use a GEOS-Chem dry-run to download only the data files you need for a specific GEOS-Chem simulation. This can drastically reduce the number of data files that you need to download.
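The dry-run workflow can be sketched as follows. This is an illustrative outline only: the executable name, log file name, and downloader option shown here are assumptions and may differ in your GEOS-Chem version, so check your run directory's documentation for the exact names.

```shell
# Illustrative sketch of the GEOS-Chem dry-run workflow.
# The executable name, log name, and downloader option are examples only.
EXECUTABLE="./geos"        # your compiled GEOS-Chem executable
DRYRUN_LOG="log.dryrun"    # file that will capture the list of input files

# Step 1: run the executable in dry-run mode. Instead of reading any
# data, it logs every input file the simulation would open (and whether
# that file already exists on disk):
#   $EXECUTABLE --dryrun > "$DRYRUN_LOG"

# Step 2: pass the log to the download script shipped with the run
# directory, which fetches only the missing files from a chosen portal:
#   ./download_data.py "$DRYRUN_LOG" --compute-canada

echo "Dry-run log will be written to $DRYRUN_LOG"
```

Because only the files named in the dry-run log are fetched, this avoids mirroring entire data directories that your simulation never reads.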
What if I am running GEOS-Chem on the AWS cloud?
A copy of the GEOS-Chem data directories is synced from the Harvard University FTP site (ftp.as.harvard.edu) to the Amazon Web Services s3://gcgrid bucket. You can easily download the data files you need from s3://gcgrid to the Elastic Block Storage (EBS) volume attached to your cloud instance. This is described in our cloud-computing tutorial at cloud.geos-chem.org.
To simplify matters even further, we recommend that you use a GEOS-Chem dry-run to download data from s3://gcgrid to your EBS volume.
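As a minimal sketch, copying data from s3://gcgrid to an EBS volume uses the standard AWS CLI `s3` commands. The subdirectory name and destination path below are illustrative assumptions, not fixed paths; substitute the directories your own simulation needs.

```shell
# Illustrative sketch: copy one GEOS-Chem data subdirectory from the
# public s3://gcgrid bucket to an EBS volume on your cloud instance.
# The subdirectory and destination paths are examples only.
BUCKET="s3://gcgrid"
DEST="/home/ubuntu/ExtData"   # mount point of your EBS volume (example)

# List the top-level contents of the bucket (requires the AWS CLI):
#   aws s3 ls "$BUCKET/"

# Recursively copy one subdirectory (example path):
#   aws s3 cp --recursive "$BUCKET/CHEM_INPUTS/" "$DEST/CHEM_INPUTS/"

echo "Would copy from $BUCKET to $DEST"
```

Combining this with a dry-run log lets you restrict the `aws s3 cp` calls to only the subdirectories your simulation actually reads.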
Do I still need to use the hemco_data_download package?
The hemco_data_download package has long been the method of choice for downloading HEMCO emissions data. However, hemco_data_download typically downloads the entire contents of the HEMCO data directory folders, which can leave you with far more data than you actually need for a given GEOS-Chem simulation.
For this reason, starting in GEOS-Chem 12.7.0, we have replaced the hemco_data_download package with the GEOS-Chem dry-run capability. With the dry-run option, you download only those data files that your GEOS-Chem simulation needs.
I am located in China and data download speeds are slow. What can I do?
At present we are working on a better solution for our Chinese GEOS-Chem users. This will probably involve a point person located in China who can oversee and/or centralize data download activities. Stay tuned for more information.
The GEOS-Chem shared data directories may be downloaded from the following locations:
| Archive | Location | Description | How to download? |
|---------|----------|-------------|------------------|
| Compute Canada | http://geoschemdata.computecanada.ca | This is the main GEOS-Chem data archive. | |
| Amazon Web Services S3 storage | s3://gcgrid | An AWS S3 bucket containing a mirror of the Harvard University storage server. It will not contain the complete record of met fields, but additional data may be added by submitting a request to the GCST. See our cloud computing tutorial (cloud.geos-chem.org) for more information. | |
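Individual subdirectories can also be mirrored from the Compute Canada archive over HTTP, for example with wget. The sketch below is an assumption about typical usage: the `ExtData` root and the subdirectory path are illustrative placeholders, and the wget options shown are one common way to mirror a directory tree while skipping auto-generated index pages.

```shell
# Illustrative wget mirror of one subdirectory from the Compute Canada
# archive. The root path and subdirectory are example placeholders only.
ARCHIVE="http://geoschemdata.computecanada.ca/ExtData"
SUBDIR="HEMCO/SOME_INVENTORY"   # hypothetical path; substitute your own

# Mirror the subdirectory recursively (-r), without ascending to parent
# directories (-np), dropping the hostname from local paths (-nH), and
# rejecting the archive's auto-generated index pages:
#   wget -r -np -nH -R "index.html*" "$ARCHIVE/$SUBDIR/"

echo "Would mirror $ARCHIVE/$SUBDIR/"
```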