Welcome to the Climate Data Operators¶
- Table of contents
- Welcome to the Climate Data Operators
- How to Get Help or Contribute
- Installation and Supported Platforms
- Download / Compile / Install
- Common Issues and Known Problems
- Segfault with netcdf4 files
- make check fails with tsformat.test.8 - Errors with operator chaining and netCDF4/HDF5 files
- netCDF with packed data
- Lost netCDF variables/dimensions after processing with CDO
- Static build with netcdf 4.x incl. dap
- EXTRA formatted files with mixed precision
- SZIP compressed GRIB1 files.
- CDO Mailing Lists
- Using CDO at MPIM and DKRZ
CDO is a large tool set for working on climate and NWP model data. Supported I/O formats are NetCDF 3/4, GRIB 1/2 (including SZIP and JPEG compression), EXTRA, SERVICE and IEG. Beyond climate science, CDO can be used to analyse any kind of gridded data.
CDO has very small memory requirements and can process files larger than the physical memory.
CDO is open source and released under the terms of the GNU General Public License v2 (GPL).
There is no man-page since operator descriptions are built into the interpreter:
cdo -h [operator]
More documentation is available:
- Tutorial, continuously under development
- FAQ, continuously under development
- Reference card, for everyone with a really large coffee mug
- ECA Climate Indices Package
- Plotting with Magics++
- Operator News - Subscribe to keep yourself up-to-date!
- CDO CMOR handson slides by Fabian Wachsmann
- Em português [Author: Guilherme Martins]:
- Presentation at MPIMET
- OpenMP support
- Citing CDO
- Using CDO from python or ruby (Talk from 2018)
- Autocompletion for shells and vim
How to Get Help or Contribute¶
To be most helpful, we recommend the following:
- Please include the version of CDO in your postings; it is printed by cdo -V
- If possible, check your calls with the latest CDO release - problems may be solved already
- Input files: Almost all problems have to do with the input files, so the original data is needed to reproduce them. This does not mean that the full data set is needed. Please use the following methods to shrink the data before uploading:
- Select the single variable that causes the problem. This can be done with CDO using selvar
- If your problem arises with a single timestep, send us only that one! Use operators from the seltime module
- Remap to a coarse grid with CDO's remapping facilities.
- If the file is still too large, there might be a public ftp server, where you can upload it.
- If the data cannot be uploaded, include the output of ncdump -h for your input NetCDF data.
- Building problems: Please submit the config.log file created by the configure call.
- No cross-posting: Do NOT create forum entries and issues on the same topic. This is annoying and does not solve your problem any faster.
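The data-shrinking steps above can be sketched as a short shell session. This is only an illustration: ifile.nc, varname and the coarse target grid r36x18 are placeholders, not names from this page.

```shell
# Hedged sketch of shrinking a problematic file before uploading it.
# ifile.nc, varname and r36x18 are placeholders for your own data.
if command -v cdo >/dev/null 2>&1 && [ -f ifile.nc ]; then
  cdo selvar,varname ifile.nc onevar.nc      # keep only the problematic variable
  cdo seltimestep,1 onevar.nc onestep.nc     # keep only the first timestep
  cdo remapbil,r36x18 onestep.nc small.nc    # remap to a coarse 10-degree grid
  ncdump -h small.nc                         # header output to include in the report
else
  echo "cdo not available or ifile.nc missing; commands shown for illustration"
fi
```

Each step keeps the file valid NetCDF, so the shrunk file can still reproduce most format- and metadata-related problems.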
Installation and Supported Platforms¶
CDO should compile easily on every POSIX-compatible operating system, such as IBM's AIX, HP-UX and Sun's Solaris, as well as on most Linux distributions, BSD variants and Cygwin. Thus it is possible to use CDO in the same way on general purpose PCs and on Unix-based high performance clusters.
On HPC systems it is quite common to install software via source code compilation, because these machines tend to be highly tuned beasts. Special libraries, special compilers and special directories make binary software delivery simply useless, even if the operating system supports a package management system like rpm (e.g. AIX). That's why CDO uses a customisable build process with autoconf and automake. For the more commonly used Unix systems, some progress has been made to ease the installation of CDO. Further information can be found here:
Download / Compile / Install¶
CDO is distributed as source code; it has to be compiled and installed by the user. Please download the current release from here. For high portability CDO is built with the autotools. After unpacking the archive, check all configure options with ./configure --help
The most important options are described in the manual. Some functionality (e.g. I/O formats) is only available when CDO is built/linked against the corresponding library. If you need to install those libraries too, you may consider using libs4cdo, a preconfigured package which contains all external functionality for CDO. After successful configuration type
make && make install
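A typical build might look like the following sketch. The --prefix and --with-netcdf paths are placeholders for your own installation paths, not values prescribed by this page.

```shell
# Hedged sketch of a standard autotools build of CDO.
# Adjust --prefix and --with-netcdf to your own system; run inside the
# unpacked source directory.
if [ -x ./configure ]; then
  ./configure --prefix=$HOME/cdo --with-netcdf=/usr/local
  make && make install
else
  echo "run this inside the unpacked CDO source directory"
fi
```

Installing into a prefix under $HOME avoids needing root rights and keeps several CDO versions side by side.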
Common Issues and Known Problems¶
Segfault with netcdf4 files¶
NetCDF4 is based on the HDF5 library, which can be built thread-safe or non-thread-safe. Depending on this, concurrent I/O on NetCDF4 files (as in operator chains) may lead to segmentation faults in the underlying HDF5 library. Because CDO has to work with whatever HDF5 installation is on the target system, there is a special CDO command line option, -L, which serializes the I/O. Please add it to your CDO calls accordingly!
make check fails with tsformat.test.8 - Errors with operator chaining and netCDF4/HDF5 files¶
CDO is a multi-threaded application. When chaining operators, possibly all operators run in parallel on different threads. Therefore all external libraries should be compiled thread-safe. Using non-thread-safe libraries can cause unexpected errors! Especially netCDF4 (HDF5) in combination with operator chaining can cause problems if the HDF5 library is not compiled thread-safe.
If you compile CDO yourself, you can check this by running make check. Usually the test called tsformat.test, number 8, will fail on systems with a non-thread-safe HDF5 installation.
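Whether the local HDF5 build is thread-safe can often be checked directly, without running the CDO test suite. A hedged sketch, assuming the HDF5 compiler wrapper h5cc is installed and that its build-settings output contains a Threadsafety line (true for common installations, but not guaranteed everywhere):

```shell
# Hedged sketch: inspect the local HDF5 build configuration.
# h5cc -showconfig prints the build settings, which usually include a
# "Threadsafety:" line; the exact wording may differ between versions.
if command -v h5cc >/dev/null 2>&1; then
  h5cc -showconfig | grep -i threadsafety || echo "no Threadsafety line found"
else
  echo "h5cc not found; check your HDF5 installation's libhdf5.settings instead"
fi
```

If the output reports Threadsafety: no, the -L workaround described below is required for chained operator calls on NetCDF4 files.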
The runtime errors can vary from run to run. Typical error messages are:
Error (xxx): NetCDF: HDF error
cdo(xxx) malloc: *** error for object xxx: pointer being freed was not allocated
segmentation fault (core dumped)
Bus error (core dumped)
A workaround is to change the output file format to standard netCDF:
cdo -f nc fldmean -selname,XX ifile.nc4 ofile.nc
Since CDO version 1.5.8 you can lock the I/O with the option -L, which serializes all I/O accesses:
cdo -L fldmean -selname,XX ifile.nc4 ofile.nc4
netCDF with packed data¶
Packing reduces the data volume by reducing the precision of the stored numbers. In netCDF it is implemented using the attributes add_offset and scale_factor. CDO supports netCDF files with packed data but cannot automatically repack the data; the attributes add_offset and scale_factor are never changed. If you use a CDO operator that changes the range of the data, you have to make sure that the modified data can still be packed with the same add_offset and scale_factor. Otherwise the result could be wrong. You will get the following error message if some data values fall outside the range representable by the packed data type:
Error (cdf_put_vara_double): NetCDF: Numeric conversion not representable
In this case you have to change the data type to single or double precision floating-point. This can be done with the CDO option -b F32 or -b F64.
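The workaround can be sketched as follows. The file names and the subc operator (subtract a constant, which shifts the data range) are placeholders for whatever call triggers the error in your case:

```shell
# Hedged sketch: avoid the repacking problem by writing 32-bit floats.
# ifile.nc/ofile.nc and subc,273.15 are placeholders; -b F32 makes CDO
# store the result unpacked in single precision, so no range error occurs.
if command -v cdo >/dev/null 2>&1 && [ -f ifile.nc ]; then
  cdo -b F32 subc,273.15 ifile.nc ofile.nc
else
  echo "cdo not available or ifile.nc missing; command shown for illustration"
fi
```

The output file is larger than the packed input, but the values are stored exactly as computed.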
Lost netCDF variables/dimensions after processing with CDO¶
CDO processes only the data variables and the corresponding coordinate variables of a netCDF file. All coordinate variables and dimensions that are not attached to a data variable will be lost after processing with CDO!
Static build with netcdf 4.x incl. dap¶
For a static binary linked against a default netcdf 4.1.1 installation, the dependencies of DAP have to be added manually, because nc-config does not keep track of them. Add
LIBS='-lcurl -lgssapi_krb5 -lssl -lcrypto -ldl -lidn -ldes425 -lkrb5 -lk5crypto -lcom_err -lkrb5support -lresolv'
to the ./configure call. You may also need the shared runtime environment of your compiler; for gcc, add -lgcc_s to LIBS. If this does not work, dependencies can be checked through the package management or, if a shared version is available, with ldd. Kerberos-related flags are described by the krb5-config script. Like CDO itself, netcdf uses libtool for building, which keeps track of further dependencies and uses runtime library paths when linking against shared libs. That's why it is recommended to use shared instead of static linking.
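Put together, the static configure call might look like the sketch below. The library list is the one quoted above for a default netcdf 4.1.1 installation; on other systems the exact set of Kerberos and SSL libraries will differ, so treat it as a starting point, not a recipe.

```shell
# Hedged sketch: static CDO build with the DAP dependencies added by hand.
# --disable-shared/--enable-static are standard autotools flags; the LIBS
# list matches the netcdf 4.1.1 example above and -lgcc_s covers the gcc
# runtime. Run inside the unpacked CDO source tree.
if [ -x ./configure ]; then
  ./configure --disable-shared --enable-static \
      LIBS='-lcurl -lgssapi_krb5 -lssl -lcrypto -ldl -lidn -ldes425 -lkrb5 -lk5crypto -lcom_err -lkrb5support -lresolv -lgcc_s'
else
  echo "run this inside the unpacked CDO source directory"
fi
```

If configure still fails at the link test, compare the missing symbols against ldd output of a shared netcdf build to find the remaining libraries.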
EXTRA formatted files with mixed precision¶
The EXTRA format has a header section with 4 integer values followed by the data section. Both the header and the data section can have a precision of 4 or 8 bytes (single or double precision). There is no real standard for the EXTRA format, but the header and data section should have the same precision. An EXTRA file with a header precision of 4 bytes and a data precision of 8 bytes could not be processed with CDO before version 1.4.2.
SZIP compressed GRIB1 files.¶
SZIP compression of GRIB1 records is a local extension to the GRIB standard at the MPI for Meteorology. SZIP compressed GRIB1 records can only be decoded correctly with tools from the MPI for Meteorology (e.g. CDO). It is neither recommended to share nor to generate those files outside the MPI for Meteorology!
CDO Mailing Lists¶
There is no CDO mailing list anymore. We use a newsfeed for announcing releases.
Using CDO at MPIM and DKRZ¶
The latest and all previously installed CDO versions are available through the module system. Use
module load cdo/1.X.Y
to load CDO version 1.X.Y, or
module load cdo
to load the latest version.