Welcome to the Climate Data Operators

CDO is a large tool set for working with climate and NWP model data. NetCDF 3/4, GRIB 1/2 (including SZIP and JPEG compression), EXTRA, SERVICE and IEG are supported as I/O formats. Beyond that, CDO can be used to analyse any kind of gridded data, not only data related to climate science.
CDO has very small memory requirements and can process files larger than the physical memory.

CDO is open source and released under the terms of the GNU General Public License v2 (GPL).


Full documentation is available as PDF and HTML.

There is no man page, since the operator descriptions are built into the interpreter:

cdo -h [operator]
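For example, to show the built-in description of one specific operator (fldmean is used here only as an example):

```shell
# print the built-in documentation of the fldmean operator
cdo -h fldmean
```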

More documentation is available:

How to Get Help or Contribute

We encourage users to use both the Forums and the Issues tracking system. If you are not sure which one to use, start with the Forums, especially the Support list.

To be most helpful, we recommend the following:

  • Please include the version of CDO in your postings. Use
    cdo -V
  • If possible, check your calls with the latest CDO release - the problem may already be solved.
  • Input files: Almost all problems have to do with files, so the original data is needed to reproduce them. This does not mean that the full data set is needed. Please use the following methods to shrink the data before uploading:
    • Select the single variable from the file that causes the problem. This can be done with CDO using selvar.
    • If your problem arises with a single timestep, send us only that one! Use operators from the seltime group.
    • Remap to a coarse grid with CDO's remapping facilities.
    • If the file is still too large, there might be a public ftp server where you can upload it.
    • If the data cannot be uploaded, include the output of ncdump -h for your input NetCDF data.
  • Build problems: Please submit the config.log file created by the configure call.
  • No cross-posting: Do NOT create Forums entries and Issues on the same topic. This is annoying and does not solve your problem.
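The shrinking steps above can be sketched as a short shell session. The variable name tas and the target grid r36x18 are placeholders chosen for illustration; selname, seltimestep and remapbil are standard CDO operators:

```shell
# keep only the problematic variable (here assumed to be "tas")
cdo selname,tas infile.nc onevar.nc

# keep only the first timestep
cdo seltimestep,1 onevar.nc onestep.nc

# remap to a coarse 10-degree global grid (36x18 points)
cdo remapbil,r36x18 onestep.nc small.nc
```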

To see if there is already an answer to your question, you can search the Forums, the Issues list and the internet.

Installation and Supported Platforms

CDO should compile easily on every POSIX-compatible operating system, e.g. IBM's AIX, HP-UX and Sun's Solaris, as well as on most Linux distributions, BSD variants and Cygwin. Thus it is possible to use CDO in the same way on general-purpose PCs and on Unix-based high-performance clusters.
On HPC systems it is quite common to install software via source code compilation, because these machines tend to be highly tuned beasts. Special libraries, special compilers and special directories make binary software delivery all but useless, even if the operating system supports a package management system like rpm (e.g. AIX). That is why CDO uses a customisable build process based on autoconf and automake. For the more commonly used Unix systems, some progress has been made to ease the installation of CDO. Further information can be found here:

External links:

Download / Compile / Install

CDO is distributed as source code - it has to be compiled and installed by the user. Please download the current release from here. For high portability CDO is built with autotools. After unpacking the archive, check all configure options with

./configure --help

The most important options are described in the manual. Some functionality (e.g. I/O formats) will only be available if CDO is built/linked against the corresponding library. If you need to install those libraries too, you may consider using libs4cdo, a preconfigured package which contains all external functionality for CDO. After successful configuration, type

make && make install
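A typical build against an existing netCDF installation might look like the following sketch. The archive name, install prefix and netCDF path are placeholders for your local setup; --with-netcdf is one of the options listed by ./configure --help:

```shell
tar xzf cdo-current.tar.gz          # unpack the downloaded release (file name is a placeholder)
cd cdo-*
./configure --prefix=$HOME/cdo \
            --with-netcdf=/usr/local   # path to your netCDF installation
make && make install
```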

Common Issues and Known Problems

Errors with operator chaining and netCDF4/HDF5 files

CDO is a multi-threaded application. When operators are chained, all of them may run in parallel on different threads. Therefore all external libraries should be compiled thread-safe. Using non-thread-safe libraries can cause unexpected errors! Especially netCDF4 (HDF5) in combination with operator chaining can cause problems if the HDF5 library is not compiled thread-safe.

If you compile CDO yourself, you can check this by running

make check
Usually test number 8 of tsformat.test will fail on systems with a non-thread-safe HDF5 installation.

The runtime errors can vary between runs. Typical error messages are:

Error (xxx) : NetCDF: HDF error
cdo(xxx) malloc: *** error for object xxx: pointer being freed was not allocated
segmentation fault (core dumped)
Bus error (core dumped)
A workaround is to change the output file format to standard netCDF:

cdo -f nc fldmean -selname,XX ifile.nc4 ofile.nc

Since CDO version 1.5.8 you can lock the I/O with the option -L. This serializes all I/O accesses:

cdo -L fldmean -selname,XX ifile.nc4 ofile.nc4

netCDF with packed data

Packing reduces the data volume by reducing the precision of the stored numbers. In netCDF it is implemented using the attributes add_offset and scale_factor. CDO supports netCDF files with packed data, but it cannot automatically repack the data; the attributes add_offset and scale_factor are never changed. If you use a CDO operator which changes the range of the data, you also have to make sure that the modified data can still be packed with the same add_offset and scale_factor. Otherwise the result could be wrong. You will get the following error message if some data values are outside the range of the packed data type:

Error (cdf_put_vara_double) : NetCDF: Numeric conversion not representable
In this case you have to change the data type to single or double precision floating point. This can be done with the CDO option -b F32 or -b F64.
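For example, subtracting two packed fields can easily push values outside the packable range; writing the result as 32-bit floats avoids the problem. The option -b F32 is taken from the text, sub is a standard CDO operator, and the file names are placeholders:

```shell
# write the result as 32-bit floating point instead of repacking
cdo -b F32 sub ifile1.nc ifile2.nc ofile.nc
```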

Lost netCDF variables/dimensions after processing with CDO

CDO processes only the data variables of a netCDF file and their corresponding coordinate variables. All coordinate variables and dimensions which are not assigned to a data variable will be lost after processing with CDO!

Static build with netcdf 4.x incl. dap

For a static binary linked against a default netCDF 4.1.1 installation, the dependencies of DAP have to be added manually, because nc-config does not keep track of them. Add

LIBS='-lcurl -lgssapi_krb5 -lssl -lcrypto -ldl -lidn -ldes425 -lkrb5 -lk5crypto -lcom_err -lkrb5support -lresolv'
to the ./configure call. You may also need the shared runtime environment of your compiler; for gcc, add -lgcc_s to LIBS. If this does not work, dependencies can be checked through the package management or with ldd, if a shared version is available. Kerberos-related bindings are described by the krb5-config script. Like CDO itself, netCDF uses libtool for building. It keeps track of further dependencies and uses runtime library paths for linking against shared libs. That is why it is recommended to use shared instead of static linking.

EXTRA formatted files with mixed precision

The EXTRA format has a header section with 4 integer values, followed by the data section. Header and data section can each have an accuracy of 4 or 8 bytes (single or double precision). There is no real standard for the EXTRA format, but header and data section should have the same precision. Since version 1.4.2, CDO can no longer process an EXTRA file with a header precision of 4 bytes and a data precision of 8 bytes.

SZIP compressed GRIB1 files

SZIP compression of GRIB1 records is a local extension to the GRIB standard at the MPI for Meteorology. SZIP compressed GRIB1 records can only be decoded correctly with tools from the MPI for Meteorology (e.g. CDO). It is recommended neither to share nor to generate such files outside the MPI for Meteorology!

CDO Mailing Lists

Two electronic mailing lists are available for users to subscribe to:

  • cdo-announce
    is a read-only low volume list for important announcements and new release information about CDO.
  • cdo-intern
    is a read-only low volume list for announcements of new CDO installations at MPIM and DKRZ.

You can subscribe to the lists by filling out the form on the following web pages:


We also use a newsfeed for announcing releases.

Using CDO at MPIM and DKRZ

CDO is installed on the following machines at MPIM and DKRZ:

Site  Machine          System
DKRZ  HLRE3 (mistral)  rhel6-x64
MPIM  Workstations     jessie-x64 (x86_64)

The latest and all previously installed CDO versions are available via the module system. Use

module load cdo/1.X.Y
to load CDO version 1.X.Y, or
module load cdo
to load the latest version.

CDO_Seminar_20161206.pdf (2.18 MB) Ralf Mueller, 2016-12-06 17:57