Downloading GEOS-Chem data directories
- Minimum system requirements
- Installing required software
- Configuring your computational environment
- Downloading source code
- Downloading data directories
- Creating run directories
- Configuring runs
- Output files
- Python tools for use with GEOS-Chem
- Coding and debugging
- Further reading
This page describes where you can obtain the GEOS-Chem source code and required data files.
In addition to the configuration files that ship with GEOS-Chem run directories, GEOS-Chem also needs to access data directories containing:
- Meteorological data (a.k.a. the "met fields") used to drive GEOS-Chem
- Emissions inventories used by GEOS-Chem
- Scale factors used to scale emissions from a base year to a given year
- Sample restart files that you can use to spin up your GEOS-Chem simulations
- Oxidant (OH, O3) concentrations for both full-chemistry and offline simulations
- Other GEOS-Chem-specific data files
These files are often too large to store in a single user's disk space. They are therefore meant to be kept in shared disk space that all GEOS-Chem users in your group can access.
Do I really need to download ALL of this data?
Maybe not! If you are located at an institution with multiple GEOS-Chem users, your computer system might already have a copy of the GEOS-Chem shared data directories. In that case you will not have to download any data, unless you need data that is not yet present (e.g. met fields for 2020 when your system only has data through 2019). If you are unsure whether the shared data directories are available to you, ask your sysadmin or IT staff.
Also, starting with GEOS-Chem 12.7.0, you can use a GEOS-Chem dry-run to download only the data files you need for a specific GEOS-Chem simulation. This can drastically reduce the number of data files that you need to download.
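A dry-run session might look like the following sketch. The executable name (`geos`), the downloader script name, and the mirror flag are assumptions that vary across GEOS-Chem versions; check your own run directory and the dry-run documentation for the exact commands.

```shell
# Hypothetical dry-run workflow (GEOS-Chem 12.7.0 and later).
# Executable and script names below are assumptions; they differ by version.
LOGFILE="log.dryrun"
if [ -x ./geos ]; then
    # Step 1: run the model in dry-run mode; it records every data file the
    # simulation would read (found or missing) without doing any computation.
    ./geos --dryrun > "${LOGFILE}"
    # Step 2: feed the log to the downloader script shipped with the run
    # directory to fetch only the missing files (mirror flag is an assumption).
    ./download_data.py "${LOGFILE}" --wustl
fi
```

Because only the files named in the dry-run log are fetched, this avoids mirroring entire data directories you will never read.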
What if I am running GEOS-Chem on the AWS cloud?
A copy of the GEOS-Chem data directories is synced from the Harvard University FTP site (ftp.as.harvard.edu) to the Amazon Web Services s3://gcgrid bucket. You can download the data files you need from s3://gcgrid to the Elastic Block Storage (EBS) volume attached to your cloud instance, as described in our cloud-computing tutorial at cloud.geos-chem.org.
To simplify matters even further, we recommend that you use a GEOS-Chem dry-run to download data from s3://gcgrid to your EBS volume.
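As an illustrative sketch, copying one subdirectory of the bucket to local storage with the AWS CLI might look like the following. The subdirectory name and destination path are placeholders, not prescriptions; list the bucket to see what is actually available.

```shell
# Sketch: copy one subdirectory of s3://gcgrid to local storage with the
# AWS CLI. SUBDIR is a hypothetical placeholder; run "aws s3 ls s3://gcgrid/"
# to browse the real bucket layout.
BUCKET="s3://gcgrid"
SUBDIR="GEOS_0.5x0.625/MERRA2/2019/07"   # placeholder met-field directory
DEST="./ExtData/${SUBDIR}"               # adjust to your EBS mount point
mkdir -p "${DEST}"
if command -v aws >/dev/null; then
    aws s3 cp --recursive "${BUCKET}/${SUBDIR}/" "${DEST}/" \
        || echo "copy failed; check AWS credentials and the bucket path"
fi
```

Downloading within AWS is much faster than pulling the same files over the public internet, since the transfer stays inside the Amazon network.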
I am located in China and data download speeds are slow. What can I do?
At present we are working on a better solution for our Chinese GEOS-Chem users. This will probably involve a point person located in China who can oversee and/or centralize data download activities. Stay tuned for more information.
The GEOS-Chem shared data directories may be downloaded from the following locations:
| Archive | Location | Description |
| --- | --- | --- |
| Washington University in St. Louis | http://geoschemdata.wustl.edu (Globus endpoint: GEOS-Chem Data (WashU)) | This will soon be the main GEOS-Chem data archive. The Compute Canada server is being phased out, and the WUSTL server is its long-term replacement. |
| Compute Canada | http://geoschemdata.computecanada.ca | This archive is still active but is being phased out in favor of http://geoschemdata.wustl.edu. |
| Amazon Web Services S3 storage | s3://gcgrid | An AWS S3 bucket containing a mirror of the Harvard University storage server. It does not contain the complete record of met fields, but additional data may be added by submitting a request to the GCST. See our cloud computing tutorial (cloud.geos-chem.org) for more information. |
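For the HTTP archives above, a single subdirectory can be mirrored with wget, as in this sketch. The server root and subdirectory path below are assumptions; browse the server in a web browser to find the actual directory layout you need.

```shell
# Sketch: mirror one data subdirectory from the WUSTL HTTP server with wget.
# SUBDIR is a hypothetical placeholder -- browse http://geoschemdata.wustl.edu
# to find the directory you actually need.
SERVER="http://geoschemdata.wustl.edu/ExtData"
SUBDIR="HEMCO/SAMPLE_INVENTORY/v2020-01"   # placeholder; replace with a real path
# --no-parent keeps wget inside SUBDIR; --cut-dirs trims the leading path
# component so files land in a local tree mirroring the server layout.
if command -v wget >/dev/null; then
    wget --recursive --no-parent --no-host-directories --cut-dirs=1 \
         --reject "index.html*" "${SERVER}/${SUBDIR}/" \
        || echo "download failed; check network connectivity and the path"
fi
```

If many files are missing, a dry-run download (described above) is usually a better choice than hand-mirroring directories, since it fetches only the files a given simulation will actually read.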