CDO User Guide
Climate Data Operator
Version 2.1.1
November 2022
Uwe Schulzweida – MPI for Meteorology
Contents
1.1 Installation
1.1.1 Unix
1.1.2 MacOS
1.1.3 Windows
1.2 Usage
1.2.1 Options
1.2.2 Environment variables
1.2.3 Operators
1.2.4 Parallelized operators
1.2.5 Operator parameter
1.2.6 Operator chaining
1.2.7 Chaining Benefits
1.3 Advanced Usage
1.3.1 Wildcards
1.3.2 Argument Groups
1.3.3 Apply Keyword
1.4 Memory Requirements
1.5 Horizontal grids
1.5.1 Grid area weights
1.5.2 Grid description
1.5.3 ICON  Grid File Server
1.6 Zaxis description
1.7 Time axis
1.7.1 Absolute time
1.7.2 Relative time
1.7.3 Conversion of the time
1.8 Parameter table
1.9 Missing values
1.9.1 Mean and average
1.10 Percentile
1.10.1 Percentile over timesteps
1.11 Regions
2 Reference manual
2.1 Information
2.1.1 INFO  Information and simple statistics
2.1.2 SINFO  Short information
2.1.3 DIFF  Compare two datasets field by field
2.1.4 NINFO  Print the number of parameters, levels or times
2.1.5 SHOWINFO  Show variables, levels or times
2.1.6 SHOWATTRIBUTE  Show attributes
2.1.7 FILEDES  Dataset description
2.2 File operations
2.2.1 APPLY  Apply operators
2.2.2 COPY  Copy datasets
2.2.3 TEE  Duplicate a data stream and write it to file
2.2.4 PACK  Pack data
2.2.5 BITROUNDING  Bit rounding
2.2.6 REPLACE  Replace variables
2.2.7 DUPLICATE  Duplicates a dataset
2.2.8 MERGEGRID  Merge grid
2.2.9 MERGE  Merge datasets
2.2.10 SPLIT  Split a dataset
2.2.11 SPLITTIME  Split timesteps of a dataset
2.2.12 SPLITSEL  Split selected timesteps
2.2.13 DISTGRID  Distribute horizontal grid
2.2.14 COLLGRID  Collect horizontal grid
2.3 Selection
2.3.1 SELECT  Select fields
2.3.2 SELMULTI  Select multiple fields via GRIB1 parameters
2.3.3 SELVAR  Select fields
2.3.4 SELTIME  Select timesteps
2.3.5 SELBOX  Select a box
2.3.6 SELREGION  Select horizontal regions
2.3.7 SELGRIDCELL  Select grid cells
2.3.8 SAMPLEGRID  Resample grid
2.3.9 SELYEARIDX  Select year by index
2.3.10 SELSURFACE  Extract surface
2.4 Conditional selection
2.4.1 COND  Conditional select one field
2.4.2 COND2  Conditional select two fields
2.4.3 CONDC  Conditional select a constant
2.4.4 MAPREDUCE  Reduce fields to user-defined mask
2.5 Comparison
2.5.1 COMP  Comparison of two fields
2.5.2 COMPC  Comparison of a field with a constant
2.6 Modification
2.6.1 SETATTRIBUTE  Set attributes
2.6.2 SETPARTAB  Set parameter table
2.6.3 SET  Set field info
2.6.4 SETTIME  Set time
2.6.5 CHANGE  Change field header
2.6.6 SETGRID  Set grid information
2.6.7 SETZAXIS  Set zaxis information
2.6.8 INVERT  Invert latitudes
2.6.9 INVERTLEV  Invert levels
2.6.10 SHIFTXY  Shift field
2.6.11 MASKREGION  Mask regions
2.6.12 MASKBOX  Mask a box
2.6.13 SETBOX  Set a box to constant
2.6.14 ENLARGE  Enlarge fields
2.6.15 SETMISS  Set missing value
2.6.16 SETGRIDCELL  Set the value of a grid cell
2.7 Arithmetic
2.7.1 EXPR  Evaluate expressions
2.7.2 MATH  Mathematical functions
2.7.3 ARITHC  Arithmetic with a constant
2.7.4 ARITH  Arithmetic on two datasets
2.7.5 DAYARITH  Daily arithmetic
2.7.6 MONARITH  Monthly arithmetic
2.7.7 YEARARITH  Yearly arithmetic
2.7.8 YHOURARITH  Multiyear hourly arithmetic
2.7.9 YDAYARITH  Multiyear daily arithmetic
2.7.10 YMONARITH  Multiyear monthly arithmetic
2.7.11 YSEASARITH  Multiyear seasonal arithmetic
2.7.12 ARITHDAYS  Arithmetic with days
2.7.13 ARITHLAT  Arithmetic with latitude
2.8 Statistical values
2.8.1 TIMCUMSUM  Cumulative sum over all timesteps
2.8.2 CONSECSTAT  Consecutive timestep periods
2.8.3 VARSSTAT  Statistical values over all variables
2.8.4 ENSSTAT  Statistical values over an ensemble
2.8.5 ENSSTAT2  Statistical values over an ensemble
2.8.6 ENSVAL  Ensemble validation tools
2.8.7 FLDSTAT  Statistical values over a field
2.8.8 ZONSTAT  Zonal statistical values
2.8.9 MERSTAT  Meridional statistical values
2.8.10 GRIDBOXSTAT  Statistical values over grid boxes
2.8.11 REMAPSTAT  Remaps source points to target cells
2.8.12 VERTSTAT  Vertical statistical values
2.8.13 TIMSELSTAT  Time range statistical values
2.8.14 TIMSELPCTL  Time range percentile values
2.8.15 RUNSTAT  Running statistical values
2.8.16 RUNPCTL  Running percentile values
2.8.17 TIMSTAT  Statistical values over all timesteps
2.8.18 TIMPCTL  Percentile values over all timesteps
2.8.19 HOURSTAT  Hourly statistical values
2.8.20 HOURPCTL  Hourly percentile values
2.8.21 DAYSTAT  Daily statistical values
2.8.22 DAYPCTL  Daily percentile values
2.8.23 MONSTAT  Monthly statistical values
2.8.24 MONPCTL  Monthly percentile values
2.8.25 YEARMONSTAT  Yearly mean from monthly data
2.8.26 YEARSTAT  Yearly statistical values
2.8.27 YEARPCTL  Yearly percentile values
2.8.28 SEASSTAT  Seasonal statistical values
2.8.29 SEASPCTL  Seasonal percentile values
2.8.30 YHOURSTAT  Multiyear hourly statistical values
2.8.31 DHOURSTAT  Multiday hourly statistical values
2.8.32 YDAYSTAT  Multiyear daily statistical values
2.8.33 YDAYPCTL  Multiyear daily percentile values
2.8.34 YMONSTAT  Multiyear monthly statistical values
2.8.35 YMONPCTL  Multiyear monthly percentile values
2.8.36 YSEASSTAT  Multiyear seasonal statistical values
2.8.37 YSEASPCTL  Multiyear seasonal percentile values
2.8.38 YDRUNSTAT  Multiyear daily running statistical values
2.8.39 YDRUNPCTL  Multiyear daily running percentile values
2.9 Correlation and co.
2.9.1 FLDCOR  Correlation in grid space
2.9.2 TIMCOR  Correlation over time
2.9.3 FLDCOVAR  Covariance in grid space
2.9.4 TIMCOVAR  Covariance over time
2.10 Regression
2.10.1 REGRES  Regression
2.10.2 DETREND  Detrend time series
2.10.3 TREND  Trend of time series
2.10.4 TRENDARITH  Add or subtract a trend
2.11 EOFs
2.11.1 EOFS  Empirical Orthogonal Functions
2.11.2 EOFCOEFF  Principal coefficients of EOFs
2.12 Interpolation
2.12.1 REMAPBIL  Bilinear interpolation
2.12.2 REMAPBIC  Bicubic interpolation
2.12.3 REMAPNN  Nearest neighbor remapping
2.12.4 REMAPDIS  Distance weighted average remapping
2.12.5 REMAPCON  First order conservative remapping
2.12.6 REMAPCON2  Second order conservative remapping
2.12.7 REMAPLAF  Largest area fraction remapping
2.12.8 REMAP  Grid remapping
2.12.9 REMAPETA  Remap vertical hybrid level
2.12.10 VERTINTML  Vertical interpolation
2.12.11 VERTINTAP  Vertical pressure interpolation
2.12.12 VERTINTGH  Vertical height interpolation
2.12.13 INTLEVEL  Linear level interpolation
2.12.14 INTLEVEL3D  Linear level interpolation from/to 3D vertical coordinates
2.12.15 INTTIME  Time interpolation
2.12.16 INTYEAR  Year interpolation
2.13 Transformation
2.13.1 SPECTRAL  Spectral transformation
2.13.2 SPECCONV  Spectral conversion
2.13.3 WIND2  D and V to velocity potential and stream function
2.13.4 WIND  Wind transformation
2.13.5 FOURIER  Fourier transformation
2.14 Import/Export
2.14.1 IMPORTBINARY  Import binary data sets
2.14.2 IMPORTCMSAF  Import CMSAF HDF5 files
2.14.3 IMPORTAMSR  Import AMSR binary files
2.14.4 INPUT  Formatted input
2.14.5 OUTPUT  Formatted output
2.14.6 OUTPUTTAB  Table output
2.14.7 OUTPUTGMT  GMT output
2.15 Miscellaneous
2.15.1 GRADSDES  GrADS data descriptor file
2.15.2 AFTERBURNER  ECHAM standard post processor
2.15.3 FILTER  Time series filtering
2.15.4 GRIDCELL  Grid cell quantities
2.15.5 SMOOTH  Smooth grid points
2.15.6 DELTAT  Difference between timesteps
2.15.7 REPLACEVALUES  Replace variable values
2.15.8 VARGEN  Generate a field
2.15.9 TIMSORT  Timsort
2.15.10 WINDTRANS  Wind Transformation
2.15.11 ROTUVB  Rotation
2.15.12 MROTUVB  Backward rotation of MPIOM data
2.15.13 MASTRFU  Mass stream function
2.15.14 DERIVEPAR  Derived model parameters
2.15.15 ADISIT  Potential temperature to in-situ temperature and vice versa
2.15.16 RHOPOT  Calculates potential density
2.15.17 HISTOGRAM  Histogram
2.15.18 SETHALO  Set the left and right bounds of a field
2.15.19 WCT  Windchill temperature
2.15.20 FDNS  Frost days where no snow index per time period
2.15.21 STRWIN  Strong wind days index per time period
2.15.22 STRBRE  Strong breeze days index per time period
2.15.23 STRGAL  Strong gale days index per time period
2.15.24 HURR  Hurricane days index per time period
2.15.25 CMORLITE  CMOR lite
2.15.26 VERIFYGRID  Verify grid coordinates
3 Contributors
3.1 History
3.2 External sources
3.3 Contributors
A Environment Variables
B Parallelized operators
C Standard name table
D Grid description examples
D.1 Example of a curvilinear grid description
D.2 Example description for an unstructured grid
Operator catalog
Operator list
1 Introduction
The Climate Data Operator (CDO) software is a collection of many operators for standard processing of climate and forecast model data. The operators include simple statistical and arithmetic functions, data selection and subsampling tools, and spatial interpolation. CDO was developed to have the same set of processing functions for GRIB [GRIB] and NetCDF [NetCDF] datasets in one package.
The Climate Data Interface [CDI] is used for the fast and file format independent access to GRIB and NetCDF datasets. The local MPIMET data formats SERVICE, EXTRA and IEG are also supported.
There are some limitations for GRIB and NetCDF datasets:

GRIB datasets have to be consistent, similar to NetCDF. That means all time steps need to have the same variables, and within a time step each variable may occur only once. Multiple fields in single GRIB2 messages are not supported!

NetCDF datasets are only supported for the classic data model and arrays up to 4 dimensions. These dimensions should only be used by the horizontal and vertical grid and the time. The NetCDF attributes should follow the GDT, COARDS or CF Conventions.
The main CDO features are:

More than 700 operators available

Modular design and easily extendable with new operators

Very simple UNIX command line interface

A dataset can be processed by several operators, without storing the interim results in files

Most operators handle datasets with missing values

Fast processing of large datasets

Support of many different grid types

Tested on many UNIX/Linux systems, Cygwin, and MacOSX
The latest PDF documentation can be found here.
1.1 Installation
CDO is supported on different operating systems such as Unix, macOS and Windows. This section describes how to install CDO on those platforms. More examples can be found on the main website (https://code.mpimet.mpg.de/projects/cdo/wiki).
1.1.1 Unix
1.1.1.1. Prebuilt CDO packages
Prebuilt CDO versions are available in online Unix repositories, and you can install them by typing on the Unix terminal
apt-get install cdo
Note that prebuilt packages do not always offer the most recent version, and the available version may vary with the Unix system (see table below). It is recommended to build from source or use a Conda environment for an up-to-date version or a customised setting.

Unix OS                     CDO Version

Debian     11 (Bullseye)    1.9.10-1
           10 (Buster)      1.9.6-1
           Sid              2.0.6-2

FreeBSD    13               2.0.6
           12               2.0.6

openSUSE   Leap 15.3        2.0.6
           Tumbleweed       2.0.6-1

Ubuntu     18.04 LTS        1.9.3
           20.04 LTS        1.9.9
           22.04 LTS        2.0.4-1

1.1.1.2. Building from sources
CDO uses the GNU configure and build system for compilation. The only requirement is a working ISO C++17 and C11 compiler.
First go to the download page (https://code.mpimet.mpg.de/projects/cdo) to get the latest distribution, if you do not have it yet.
To take full advantage of CDO features the following additional libraries should be installed:

Unidata NetCDF library (https://www.unidata.ucar.edu/software/netcdf) version 3 or higher.
This library is needed to process NetCDF [NetCDF] files with CDO. 
ECMWF ecCodes library (https://software.ecmwf.int/wiki/display/ECC/ecCodes+Home) version 2.3.0 or higher. This library is needed to process GRIB2 files with CDO.

HDF5 szip library (https://www.hdfgroup.org/doc_resource/SZIP) version 2.1 or higher.
This library is needed to process szip compressed GRIB [GRIB] files with CDO. 
HDF5 library (https://www.hdfgroup.org) version 1.6 or higher.
This library is needed to import CMSAF [CMSAF] HDF5 files with the CDO operator import_cmsaf. 
PROJ library (https://proj.org) version 5.0 or higher.
This library is needed to convert Sinusoidal and Lambert Azimuthal Equal Area coordinates to geographic coordinates, e.g. for remapping. 
Magics library (https://software.ecmwf.int/wiki/display/MAGP/Magics) version 2.18 or higher.
This library is needed to create contour, vector and graph plots with CDO.
CDO is a multithreaded application. Therefore all the above libraries should be compiled thread-safe. Using non-thread-safe libraries could cause unexpected errors!
Compilation
Compilation is done by performing the following steps:

Unpack the archive, if you haven’t done that yet:
gunzip cdo-$VERSION.tar.gz   # uncompress the archive
tar xf cdo-$VERSION.tar      # unpack it
cd cdo-$VERSION

Run the configure script:
./configure

Optionally with NetCDF [NetCDF] support:
./configure --with-netcdf=<NetCDF root directory>

and with ecCodes:
./configure --with-eccodes=<ecCodes root directory>
For an overview of other configuration options use
./configure --help


Compile the program by running make:
make
The program should compile without problems and the binary (cdo) should be available in the src directory of the distribution.
Installation
After the compilation of the source code do a make install, possibly as root if the destination permissions require that.
make install
The binary is installed into the directory <prefix>/bin. <prefix> defaults to /usr/local but can be changed with the --prefix option of the configure script.
Alternatively, you can also copy the binary from the src directory manually to some bin directory in your search path.
1.1.1.3. Conda
Conda is an open-source package manager and environment management system for various languages (Python, R, etc.). Conda is installed via Anaconda or Miniconda. Unlike Anaconda, Miniconda is a lightweight conda distribution. They can be downloaded from the main conda website (https://conda.io/projects/conda/en/latest/user-guide/install/linux.html) or on the terminal
wget https://repo.anaconda.com/archive/Anaconda3-2021.11-Linux-x86_64.sh
bash Anaconda3-2021.11-Linux-x86_64.sh
source ~/.bashrc
and
wget https://repo.continuum.io/miniconda/Miniconda3-latest-Linux-x86_64.sh
sh Miniconda3-latest-Linux-x86_64.sh
Upon setting up your conda environment, you can install CDO using conda
conda install cdo
conda install python-cdo
1.1.2 MacOS
Among the MacOS package managers, CDO can be installed from Homebrew and MacPorts. The installation via Homebrew is a straightforward process on the terminal
brew install cdo
Similarly, with MacPorts
port install cdo
In contrast to Homebrew, MacPorts allows you to enable GRIB2, szip compression and Magics++ graphics in the CDO installation.
port install cdo +grib_api +magicspp +szip
In addition, you can also set up CDO via Conda, as on Unix. You can follow this tutorial to install Anaconda or Miniconda on your computer (https://conda.io/projects/conda/en/latest/user-guide/install/macos.html). Then, you can install CDO with
conda install -c conda-forge cdo
1.1.3 Windows
Currently, CDO is not supported on Windows systems and the binary is not available in the Windows conda repository. Therefore, CDO needs to be set up in a virtual environment. This section covers the installation of CDO using the Windows Subsystem for Linux (WSL) and virtual machines.
1.1.3.1. WSL
WSL emulates Unix inside your Windows system. You can then install Unix libraries and software such as CDO or the Linux conda distribution on your computer. It also allows you to directly share your files between Windows and the WSL environment. However, more complex functions that require a graphical interface are not supported.
In Windows 10 or newer, WSL can be readily set in your cmd by typing
wsl --install
This command will install, by default, Ubuntu 20.04 on WSL 2. You can also choose a different system from the list shown by
wsl -l -o
Then, you can install your chosen WSL environment with
wsl --install -d NAME
1.1.3.2. Virtual machine
Virtual machines can emulate different operating systems on your computer: they are guest computers mounted inside your host computer. In this particular case you can set up a Linux distribution on your Windows device. The advantages of virtual machines over WSL are the graphical interface and the fully operational Linux system. You can follow any tutorial on the internet.
Finally, you can install CDO following any method listed in section 1.1.1.
1.2 Usage
This section describes how to use CDO. The syntax is:
cdo [ Options ] Operator1 [ Operator2 [ OperatorN ] ]
1.2.1 Options
All options have to be placed before the first operator. The following options are available for all operators:
-a                Generate an absolute time axis.
-b <nbits>        Set the number of bits for the output precision. The valid precisions depend on the file format. For srv, ext and ieg format the letter L or B can be added to set the byte order to Little or Big endian.
--cmor            CMOR conform NetCDF output.
-C, --color       Colorized output messages.
--double          Use double precision floats for data in memory.
--eccodes         Use ecCodes to decode/encode GRIB1 messages.
-f <format>       Set the output file format. GRIB2 is only available if CDO was compiled with ecCodes support, and all NetCDF file types are only available if CDO was compiled with NetCDF support!
-g <grid>         Define the default grid description by name or from file (see chapter 1.3). Available grid names are: r<NX>x<NY>, lon=<LON>/lat=<LAT>, F<XXX>, gme<NI>
-h, --help        Help information for the operators.
--no_history      Do not append to NetCDF history global attribute.
--netcdf_hdr_pad, --hdr_pad, --header_pad <nbr>
                  Pad NetCDF output header with <nbr> bytes.
-k <chunktype>    NetCDF4 chunk type: auto, grid or lines.
-L                Lock I/O (sequential access).
-m <missval>      Set the missing value of non-NetCDF files (default: -9e+33).
-O                Overwrite existing output file, if checked. Existing output file is checked only for: ens<STAT>, merge, mergetime
--operators       List of all operators.
-P <nthreads>     Set number of OpenMP threads (only available if OpenMP support was compiled in).
--pedantic        Warnings count as errors.
--percentile <method>
                  Percentile method: nrank, nist, rtype8, numpy, numpy_lower, numpy_higher, numpy_nearest
--reduce_dim      Reduce NetCDF dimensions.
-R, --regular     Convert GRIB1 data from global reduced to regular Gaussian grid (only with cgribex lib).
-r                Generate a relative time axis.
-S                Create an extra output stream for the module TIMSTAT. This stream contains the number of non-missing values for each output period.
-s, --silent      Silent mode.
--single          Use single precision floats for data in memory.
--sortname        Alphanumeric sorting of NetCDF parameter names.
-t <partab>       Set the GRIB1 (cgribex) default parameter table name or file (see chapter 1.6). Predefined tables are: echam4 echam5 echam6 mpiom1 ecmwf remo
--timestat_date <srcdate>
                  Target timestamp (temporal statistics): first, middle, midhigh or last source timestep.
-V, --version     Print the version number.
-v, --verbose     Print extra details for some operators.
-w                Disable warning messages.
--worker <num>    Number of workers to decode/decompress GRIB records.
-z szip           SZIP compression of GRIB1 records.
   jpeg           JPEG compression of GRIB2 records.
   zip[_1-9]      Deflate compression of NetCDF4 variables.
1.2.2 Environment variables
There are some environment variables which influence the behavior of CDO. An incomplete list can be found in Appendix A.
Here is an example of setting the environment variable CDO_RESET_HISTORY for different shells:
Bourne shell (sh):  CDO_RESET_HISTORY=1 ; export CDO_RESET_HISTORY 
Korn shell (ksh):  export CDO_RESET_HISTORY=1 
C shell (csh):  setenv CDO_RESET_HISTORY 1 
1.2.3 Operators
There are more than 700 operators available. A detailed description of all operators can be found in the Reference Manual section.
1.2.4 Parallelized operators
Some of the CDO operators are shared-memory parallelized with OpenMP. An OpenMP-enabled C compiler is needed to use this feature. Users may request a specific number of OpenMP threads <nthreads> with the '-P' switch.
Here is an example to distribute the bilinear interpolation on 8 OpenMP threads:
cdo -P 8 remapbil,targetgrid infile outfile
Many CDO operators are I/O-bound. This means most of the time is spent reading and writing data. Only compute-intensive CDO operators are parallelized. An incomplete list of OpenMP parallelized operators can be found in Appendix B.
1.2.5 Operator parameter
Some operators need one or more parameters. Multiple parameters are separated by ','.

STRING
String parameters require quotes if the string contains blanks or other characters interpreted by the shell. The following command selects the variables with the names pressure and tsurf:
cdo selvar,pressure,tsurf infile outfile

FLOAT
Floating point number in any representation. The following command sets the range between 0 and 273.15 of all fields to missing value:
cdo setrtomiss,0,273.15 infile outfile

BOOL
Boolean parameter in one of the representations TRUE/FALSE, T/F or 0/1. To disable the weighting by grid cell area in the calculation of a field mean, use:
cdo fldmean,weights=FALSE infile outfile

INTEGER
A range of integer parameters can be specified by first/last[/inc]. To select the days 5, 6, 7, 8 and 9 use:
cdo selday,5/9 infile outfile
The result is the same as:
cdo selday,5,6,7,8,9 infile outfile
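The first/last[/inc] expansion can be sketched in Python (a hypothetical helper for illustration, not part of CDO):

```python
def parse_int_range(param):
    """Expand a CDO-style integer range parameter 'first/last[/inc]'
    into the explicit list of integers it denotes."""
    parts = param.split("/")
    if len(parts) == 1:
        return [int(parts[0])]          # a single integer, no range
    first, last = int(parts[0]), int(parts[1])
    inc = int(parts[2]) if len(parts) > 2 else 1
    return list(range(first, last + 1, inc))  # 'last' is inclusive

print(parse_int_range("5/9"))     # [5, 6, 7, 8, 9]
print(parse_int_range("2/10/2"))  # [2, 4, 6, 8, 10]
```

Note that, unlike Python's range(), the last value of the CDO range is inclusive.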
1.2.6 Operator chaining
Operator chaining allows combining two or more operators on the command line into a single CDO call. This allows the creation of complex operations out of simpler ones: reductions over several dimensions, file merges and all kinds of analysis processes. All operators with a fixed number of input streams and one output stream can pass their result directly to another operator. To differentiate between files and operators, all chained operators must be written with a prepended "-".
cdo monmean -add -mulc,2.0 infile1 -daymean infile2 outfile
Here monmean will have the output of add while add takes the output of mulc,2.0 and daymean. infile1 and infile2 are inputs for their predecessor. When mixing operators with an arbitrary number of input streams extra care needs to be taken. The following examples illustrates why.

cdo info -timavg infile1 infile2

cdo info -timavg infile?

cdo timavg infile1 tmpfile
cdo info tmpfile infile2
rm tmpfile
All three examples produce identical results: the time average will be computed only on the first input file.
Note(1): In section 1.3.2 we introduce argument groups, which make this a lot easier and less error-prone.
Note(2): Operator chaining is implemented with POSIX Threads (pthreads). Therefore this CDO feature is not available on operating systems without POSIX Threads support!
1.2.7 Chaining Benefits
Combining operators can have several benefits. The most obvious is a performance increase through reducing disk I/O:
cdo sub -dayavg infile2 -timavg infile1 outfile
instead of
cdo timavg infile1 tmp1
cdo dayavg infile2 tmp2
cdo sub tmp2 tmp1 outfile
rm tmp1 tmp2
Especially with large input files the reading and writing of intermediate files can have a big influence on the overall performance.
A second aspect is the execution of operators: limited only by the algorithms, potentially all operators of a chain can run in parallel.
1.3 Advanced Usage
In this section we introduce advanced features of CDO: operator grouping, which allows writing more complex CDO calls; the apply keyword, which shortens calls that need an operator to be executed on multiple files; and wildcards, which allow searching paths for file signatures. These features have several restrictions and follow rules that depend on the input/output properties of the operators. These properties can be investigated with the following command, which will output a list of operators that have the selected property:
cdo --attribs [arbitrary/filesOnly/onlyFirst/noOutput/obase]

arbitrary describes all operators where the number of inputs is not defined.

filesOnly are operators that can have other operators as input.

onlyFirst shows which operators can only be at the left-most position of the Polish notation argument chain.

noOutput are all operators that do not print to any file (e.g. info).

obase describes an operator that does not use the output argument as a file but e.g. as a file name base (output base). This is almost exclusively used for operators that split input files.
cdo splithour infile baseName_ could result in: baseName_1 baseName_2 ... baseName_N
To check one or more operators directly, the following usage of --attribs can be used:
cdo --attribs operatorName
1.3.1 Wildcards
Wildcards are a standard feature of command line interpreters (shells) on many operating systems. They are placeholder characters used in file paths that are expanded by the interpreter into file lists. For further information the Advanced Bash-Scripting Guide is a valuable source. Handling of input is a central issue for CDO, and in some circumstances it is not enough to use the wildcards from the shell. That is why CDO can handle them on its own.


all files:  2020201.txt 2020211.txt 2020215.txt 2020301.txt 2020302.txt
            2020312.txt 2020313.txt 2020315.txt 2021.grb 2022.grb

wildcard                  resulting file list
20203* and 20203??.txt    2020301.txt 2020302.txt 2020312.txt 2020313.txt 2020315.txt
20203?1.txt               2020301.txt
*.grb                     2021.grb 2022.grb
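The pattern matching behind this table follows standard glob rules. A small Python sketch, using the stdlib fnmatch module rather than CDO itself, reproduces the results:

```python
import fnmatch

# the file pool from the table above
files = ["2020201.txt", "2020211.txt", "2020215.txt",
         "2020301.txt", "2020302.txt", "2020312.txt",
         "2020313.txt", "2020315.txt", "2021.grb", "2022.grb"]

def expand(pattern, names):
    """Return the sorted subset of names matching a glob-style pattern."""
    return sorted(n for n in names if fnmatch.fnmatch(n, pattern))

print(expand("20203*", files))       # the five 20203... files
print(expand("20203?1.txt", files))  # ['2020301.txt']
print(expand("*.grb", files))        # ['2021.grb', '2022.grb']
```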
Use single quotes if the input stream names should be matched by a single wildcard expression. In this case CDO will do the pattern matching and the output can be combined with other operators. Here is an example of this feature:
cdo timavg -select,name=temperature 'infile?' outfile
In earlier versions of CDO this was necessary to have the right files passed to the right operator. Newer versions support this with the argument grouping feature (see 1.3.2). We advise the use of the grouping mechanism instead of the single-quoted wildcards, since this feature could be deprecated in future versions.
Note: Wildcard expansion is not available on operating systems without the glob() function!
1.3.2 Argument Groups
In section 1.2.6 we described that it is not possible to chain operators with an arbitrary number of inputs. In this section we show how this can be achieved through operator grouping with angled brackets [ ]. Using these brackets CDO can assign the inputs to their corresponding operators during the execution of the command line. The ability to write operator combinations in a parenthesis-free way is partly given up in favor of allowing operators with an arbitrary number of inputs. This allows a much more compact way to handle large numbers of input files. The following example will be transformed from a non-working solution into a working one.
cdo infon -div -fldmean -cat infile1 -mulc,2.0 infile2 -fldmax infile3
This example will throw the following error:
cdo (Warning): Did you forget to use '[' and/or ']' for multiple variable input operators?
cdo (Warning): Use option --variableInput for description
cdo (Abort): Too few streams specified! Operator div needs 2 input streams and 1 output stream!
The error is raised by the operator div. This operator needs two input streams and one output stream, but the cat operator has claimed all possible streams on its right-hand side as input because it accepts an arbitrary number of inputs. Hence it left nothing for the remaining input or output streams of div. To fix this we can declare a group, which will be passed to the operator to the left of the group.
cdo infon -div -fldmean -cat [ infile1 -mulc,2.0 infile2 ] -fldmax infile3
For full flexibility it is possible to have groups inside groups:
cdo infon -div -fldmean -cat [ fileA1 infileC2 -merge [ infileB1 infileB2 ] ] -fldmax infileD
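The way the brackets assign inputs can be illustrated with a small Python sketch (a toy parser for illustration, not CDO's actual implementation): each [ ... ] group becomes a nested argument list attached at its position, so the operator immediately to its left receives exactly those inputs.

```python
def group_inputs(tokens):
    """Turn a flat CDO token list into a nested list: every
    '[' ... ']' group becomes a sublist at its position."""
    def parse(it):
        out = []
        for tok in it:
            if tok == "[":
                out.append(parse(it))   # recurse into the group
            elif tok == "]":
                return out              # group finished
            else:
                out.append(tok)
        return out
    return parse(iter(tokens))

cmd = "cdo infon -div -fldmean -cat [ infile1 -mulc,2.0 infile2 ] -fldmax infile3"
print(group_inputs(cmd.split()))
```

The nested list makes explicit that the arbitrary-input operator -cat consumes only the bracketed streams, leaving -fldmax infile3 as the second input of -div.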
1.3.3 Apply Keyword
When working with a medium or large number of similar files there is a common problem: a processing step (often a reduction) needs to be performed on all of them before a more specific analysis can be applied. Usually this can be done in two ways. One option is to use merge to glue everything together and chain the reduction step after it. The second option is to write a for-loop over all inputs which performs the basic processing on each of the files separately and calls merge on the results. Unfortunately both options have side effects: the first one needs a lot of memory because all files are read in completely and reduced afterwards, while the latter creates a lot of temporary files. Both memory and disk I/O can be bottlenecks and should be avoided.
The apply keyword was introduced for that purpose. It can be used as an operator, but it needs at least one operator as a parameter, which is applied to all related input streams in parallel before all streams are passed to the next operator in the chain.
The following is an example with three input files:
cdo merge -apply,daymean [ file1 file2 file3 ] outfile
would result in:
cdo merge -daymean file1 -daymean file2 -daymean file3 outfile
Apply is especially useful when combined with wildcards. The previous example can be shortened further.
cdo merge -apply,daymean [ file? ] outfile
As shown, this feature allows simplifying commands with a medium amount of files and moving reductions further back in the chain. This can also have a positive impact on performance.
An example where performance can take a hit:
cdo yearmean -daymean -merge [ f1 ... f40 ]
An improved, but ugly to write, example:
cdo yearmean -merge [ -daymean f1 -daymean f2 ... -daymean f40 ]
Apply saves the day and creates the call above with much less typing:
cdo yearmean -merge [ -apply,daymean [ f1 ... f40 ] ]
In the example in figure 1.2 the resulting call will dramatically reduce process interaction as well as execution time, since the reduction (daymean) is applied to the files first. That means that the merge operator receives the reduced files, and the operations for merging the whole data are saved. For other CDO calls further improvements can be made by adding more arguments to apply (1.3).
A less performant example:
cdo aReduction -anotherReduction -daymean -merge [ f1 ... f40 ]
and the same call with apply:
cdo merge -apply,"-aReduction -anotherReduction -daymean" [ f1 ... f40 ]
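The rewriting that apply performs can be sketched in Python (a simplified model for illustration, not CDO's implementation): every input stream of the bracketed group gets its own copy of the chained operator.

```python
def expand_apply(operator, inputs):
    """Model of apply,<operator> over a bracketed input group:
    each input stream is prefixed with its own copy of the operator."""
    return " ".join(f"-{operator} {f}" for f in inputs)

# cdo merge -apply,daymean [ file1 file2 file3 ] outfile becomes:
print("cdo merge " + expand_apply("daymean", ["file1", "file2", "file3"]) + " outfile")
# cdo merge -daymean file1 -daymean file2 -daymean file3 outfile
```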
Restrictions: While the apply keyword can be extremely helpful, it has several restrictions (for now!).

Apply inputs can only be files, wildcards and operators that have 0 inputs and 1 output.

Apply cannot be used as the first CDO operator.

Apply arguments can only be operators with 1 input and 1 output.

Grouping inside the Apply argument or input is not allowed.
1.4 Memory Requirements
This section roughly describes the memory requirements of CDO. CDO tries to use as little memory as possible. The smallest unit that is read by all operators is a horizontal field. The required memory depends mainly on the used operators, the data format, the data type and the size of the fields.
The memory requirements of the operators differ considerably. Many CDO modules like FLDSTAT process one horizontal field at a time. Memory-intensive modules such as ENSSTAT and TIMSTAT require all fields of a time step to be held in memory. Of course, the memory requirements of the operators add up when they are combined. Some operators are parallelized with OpenMP. In multithreaded mode (see option -P) the memory requirement of these operators can increase, growing with the number of threads used.
The data type determines the number of bytes per value. Single precision floating point data (float) occupies 4 bytes per value. All other data types are read as double precision floats and thus occupy 8 bytes per value. With the CDO option --single all data is read as single precision floats. This can reduce the memory requirement by a factor of 2.
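A rough back-of-the-envelope estimate of the memory for one horizontal field follows directly from these numbers. The sketch below (a hypothetical helper, not part of CDO) assumes 8 bytes per value in double precision and 4 bytes with --single or float input:

```python
def field_bytes(gridsize, single=False):
    """Approximate memory for one horizontal field held by CDO:
    8 bytes per value in double precision, 4 bytes in single precision."""
    return gridsize * (4 if single else 8)

# a 0.1 degree global grid has 3600 x 1800 points
print(field_bytes(3600 * 1800) / 2**20)        # about 49 MiB per field
print(field_bytes(3600 * 1800, True) / 2**20)  # about 25 MiB with --single
```

Modules like TIMSTAT would need this amount for every field of a time step, so the per-field number has to be multiplied by the number of variables and levels.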
1.5 Horizontal grids
Physical quantities of climate models are typically stored on a horizontal grid. CDO supports structured grids like regular lon/lat or curvilinear grids and also unstructured grids.
1.5.1 Grid area weights
A single point of a horizontal grid represents the mean of a grid cell. These grid cells are typically of different sizes, because the distance between grid points varies.
Area weights are individual weights for each grid cell. They are needed to compute the area-weighted mean or variance of a set of grid cells (e.g. fldmean, the mean value of all grid cells). In CDO the area weights are derived from the grid cell area. If the cell area is not available, it is computed from the geographical coordinates via spherical triangles. This is only possible if the geographical coordinates of the grid cell corners are available or derivable. Otherwise CDO prints a warning message and uses constant area weights for all grid cells.
The cell area is read automatically from a NetCDF input file if a variable has the corresponding “cell_measures” attribute, e.g.:
var:cell_measures = "area: cell_area" ;
If the computed cell area is not desired then the CDO operator setgridarea can be used to set or overwrite the grid cell area.
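For example, the computed cell areas could be replaced with precomputed values (a sketch; cellarea.nc is a hypothetical file containing a field with the cell areas):

```shell
# Overwrite the grid cell areas with the field stored in cellarea.nc
cdo setgridarea,cellarea.nc infile.nc outfile.nc
```

Subsequent area-weighted statistics such as fldmean then use these areas.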
1.5.2 Grid description
In the following situations it is necessary to give a description of a horizontal grid:

Changing the grid description (operator: setgrid)

Horizontal interpolation (all remapping operators)

Generating of variables (operator: const, random)
There are several possibilities to define a horizontal grid, as described below.
1.5.2.1. Predefined grids
Predefined grids are available for global regular, Gaussian, HEALPix and icosahedral-hexagonal GME grids.
Global regular grid: global_<DXY>
global_<DXY> defines a global regular lon/lat grid. The grid increment <DXY> can be chosen arbitrarily. The longitudes start at <DXY>/2 - 180° and the latitudes start at <DXY>/2 - 90°.
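As a sketch (file names are placeholders), such a grid name can be passed to any remapping operator:

```shell
# Bilinear remapping to a global regular 1 degree lon/lat grid
cdo remapbil,global_1 infile.nc outfile.nc
```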
Regional regular grid: dcw:<CountryCode>[_<DXY>]
dcw:<CountryCode>[_<DXY>] defines a regional regular lon/lat grid from a country code. The default value of the optional grid increment <DXY> is 0.1 degree. The ISO two-letter country codes can be found at https://en.wikipedia.org/wiki/ISO_3166-1_alpha-2. For the coordinates of a country CDO uses the DCW (Digital Chart of the World) dataset from GMT. This dataset must be installed on the system and the environment variable DIR_DCW must point to it.
Zonal latitudes: zonal_<DY>
zonal_<DY> defines a grid with zonal latitudes only. The latitude increment <DY> can be chosen arbitrarily. The latitudes start at <DY>/2 - 90°. The boundaries of each latitude are also generated. The number of longitudes is 1. A grid description of this type is needed to calculate the zonal mean (zonmean) for data on an unstructured grid.
Global regular grid: r<NX>x<NY>
r<NX>x<NY> defines a global regular lon/lat grid. The number of longitudes <NX> and latitudes <NY> can be chosen arbitrarily. The longitudes start at 0° with an increment of (360/<NX>)°. The latitudes go from south to north with an increment of (180/<NY>)°.
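A hedged example: the grid name can also be used to generate data, e.g. with the const operator (the output file name is a placeholder):

```shell
# Create a constant field (value 0) on a global 0.5 degree lon/lat grid
cdo -f nc const,0,r720x360 const.nc
```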
One grid point: lon=<LON>/lat=<LAT>
lon=<LON>/lat=<LAT> defines a lon/lat grid with only one grid point.
Full regular Gaussian grid: F<XXX>
F<XXX> defines a global regular Gaussian grid. XXX specifies the number of latitude lines between the Pole and the Equator. The longitudes start at 0° with an increment of (360/nlon)°. The Gaussian latitudes go from north to south.
Global icosahedral-hexagonal GME grid: gme<NI>
gme<NI> defines a global icosahedral-hexagonal GME grid. NI specifies the number of intervals on a main triangle side.
HEALPix grid: hp<NSIDE>[b][_<ORDER>]
HEALPix is an acronym for Hierarchical Equal Area isoLatitude Pixelization of a sphere.
hp<NSIDE>[b][_<ORDER>] defines a global HEALPix grid. The NSIDE parameter controls the resolution of the pixelization. It is the number of pixels on the side of each of the 12 top-level HEALPix pixels, so the total number of grid pixels is 12*NSIDE*NSIDE. NSIDE=1 generates the 12 (H=4, K=3) equal-sized top-level HEALPix pixels. ORDER sets the index ordering convention of the pixels; available are ring (default) and nest ordering. If the parameter b is set, the grid cell corners are also calculated.
1.5.2.2. Grids from data files
You can use the grid description of another data file. The format of the data file and the grid of the data field must be supported by CDO. Use the operator sinfo to get short information about your variables and their grids. If there is more than one grid in the data file, the grid description of the first variable is used. Add the extension :N to the file name to select grid number N.
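A sketch with placeholder file names:

```shell
# Use the grid of the first variable in gridfile.nc as remapping target
cdo remapnn,gridfile.nc infile.nc outfile.nc

# Use grid number 2 of gridfile.nc instead
cdo remapnn,gridfile.nc:2 infile.nc outfile.nc
```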
1.5.2.3. SCRIP grids
SCRIP (Spherical Coordinate Remapping and Interpolation Package) uses a common grid description for curvilinear and unstructured grids. For more information about the convention see [SCRIP]. This grid description is stored in NetCDF; therefore it is only available if CDO was compiled with NetCDF support!
SCRIP grid description example of a curvilinear MPIOM [MPIOM] GROB3 grid (only the NetCDF header):
netcdf grob3s {
dimensions:
grid_size = 12120 ;
grid_corners = 4 ;
grid_rank = 2 ;
variables:
int grid_dims(grid_rank) ;
double grid_center_lat(grid_size) ;
grid_center_lat:units = "degrees" ;
grid_center_lat:bounds = "grid_corner_lat" ;
double grid_center_lon(grid_size) ;
grid_center_lon:units = "degrees" ;
grid_center_lon:bounds = "grid_corner_lon" ;
int grid_imask(grid_size) ;
grid_imask:units = "unitless" ;
grid_imask:coordinates = "grid_center_lon grid_center_lat" ;
double grid_corner_lat(grid_size, grid_corners) ;
grid_corner_lat:units = "degrees" ;
double grid_corner_lon(grid_size, grid_corners) ;
grid_corner_lon:units = "degrees" ;
// global attributes:
:title = "grob3s" ;
}
1.5.2.4. CDO grids
All supported grids can also be described with the CDO grid description. The following keywords can be used to describe a grid:
Keyword       Datatype      Description
gridtype      STRING        Type of the grid (gaussian, lonlat, curvilinear, unstructured).
gridsize      INTEGER       Size of the grid.
xsize         INTEGER       Size in x direction (number of longitudes).
ysize         INTEGER       Size in y direction (number of latitudes).
xvals         FLOAT ARRAY   X values of the grid cell centers.
yvals         FLOAT ARRAY   Y values of the grid cell centers.
nvertex       INTEGER       Number of vertices for all grid cells.
xbounds       FLOAT ARRAY   X bounds of each gridbox.
ybounds       FLOAT ARRAY   Y bounds of each gridbox.
xfirst, xinc  FLOAT, FLOAT  Macros to define xvals with a constant increment; xfirst is the x value of the first grid cell center.
yfirst, yinc  FLOAT, FLOAT  Macros to define yvals with a constant increment; yfirst is the y value of the first grid cell center.
xunits        STRING        Units of the x axis.
yunits        STRING        Units of the y axis.
Which keywords are necessary depends on the gridtype. The following table gives an overview of the default values or the array sizes for the different grid types.

gridtype  lonlat       gaussian     projection   curvilinear  unstructured
gridsize  xsize*ysize  xsize*ysize  xsize*ysize  xsize*ysize  ncell
xsize     nlon         nlon         nx           nlon         gridsize
ysize     nlat         nlat         ny           nlat         gridsize
xvals     xsize        xsize        xsize        gridsize     gridsize
yvals     ysize        ysize        ysize        gridsize     gridsize
nvertex   2            2            2            4            nv
xbounds   2*xsize      2*xsize      2*xsize      4*gridsize   nv*gridsize
ybounds   2*ysize      2*ysize      2*ysize      4*gridsize   nv*gridsize
xunits    degrees      degrees      m            degrees      degrees
yunits    degrees      degrees      m            degrees      degrees
The keywords nvertex, xbounds and ybounds are optional if area weights are not needed. The grid cell corners xbounds and ybounds must be ordered counterclockwise.
CDO grid description example of a T21 gaussian grid:
gridtype = gaussian
xsize = 64
ysize = 32
xfirst = 0
xinc = 5.625
yvals = 85.76 80.27 74.75 69.21 63.68 58.14 52.61 47.07
41.53 36.00 30.46 24.92 19.38 13.84 8.31 2.77
-2.77 -8.31 -13.84 -19.38 -24.92 -30.46 -36.00 -41.53
-47.07 -52.61 -58.14 -63.68 -69.21 -74.75 -80.27 -85.76
CDO grid description example of a global regular grid with 60x30 points:
gridtype = lonlat
xsize = 60
ysize = 30
xfirst = -177
xinc = 6
yfirst = -87
yinc = 6
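Such a description, saved in a text file (here called mygrid, a placeholder name), can be passed to every operator that expects a grid description:

```shell
# Remap to the lon/lat grid described in the text file 'mygrid'
cdo remapbil,mygrid infile.nc outfile.nc

# Or assign the grid description to a dataset
cdo setgrid,mygrid infile.nc outfile.nc
```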
The description of a projection is somewhat more complicated. Use the first section to describe the coordinates of the projection with the above keywords. Add the keyword grid_mapping_name to describe the mapping between the given coordinates and the true latitude and longitude coordinates. grid_mapping_name takes a string value that contains the name of the projection. A list of attributes can be added to define the mapping; the names of these attributes depend on the projection. The valid projection names and their attributes follow the NetCDF CF convention.
CDO supports the special grid mapping attribute proj_params. These parameters are passed directly to the PROJ library to generate the geographic coordinates if needed.
The geographic coordinates of the following projections can be generated without the attribute proj_params, if all other attributes are available:

rotated_latitude_longitude

lambert_conformal_conic

lambert_azimuthal_equal_area

sinusoidal

polar_stereographic
It is recommended to set the attribute proj_params also for the above projections to make sure all PROJ parameters are set correctly.
Here is an example of a CDO grid description using the attribute proj_params to define the PROJ parameter of a polar stereographic projection:
gridtype = projection
xsize = 11
ysize = 11
xunits = "meter"
yunits = "meter"
xfirst = -638000
xinc = 150
yfirst = -3349350
yinc = 150
grid_mapping = crs
grid_mapping_name = polar_stereographic
proj_params = "+proj=stere +lon_0=-45 +lat_ts=70 +lat_0=90 +x_0=0 +y_0=0"
The result is the same as using the CF conform Grid Mapping Attributes:
gridtype = projection
xsize = 11
ysize = 11
xunits = "meter"
yunits = "meter"
xfirst = -638000
xinc = 150
yfirst = -3349350
yinc = 150
grid_mapping = crs
grid_mapping_name = polar_stereographic
straight_vertical_longitude_from_pole = -45.
standard_parallel = 70.
latitude_of_projection_origin = 90.
false_easting = 0.
false_northing = 0.
CDO grid description example of a regional rotated lon/lat grid:
gridtype = projection
xsize = 81
ysize = 91
xunits = "degrees"
yunits = "degrees"
xfirst = -19.5
xinc = 0.5
yfirst = -25.0
yinc = 0.5
grid_mapping_name = rotated_latitude_longitude
grid_north_pole_longitude = -170
grid_north_pole_latitude = 32.5
Example CDO descriptions of a curvilinear and an unstructured grid can be found in Appendix D.
1.5.3 ICON - Grid File Server
The geographic coordinates of the ICON model are located on an unstructured grid. This grid is stored in a separate grid file independent of the model data. The grid files are made available to the general public via a file server. Furthermore, these grid files are located at DKRZ under /pool/data/ICON/grids.
With the CDO function setgrid,<gridfile> this grid information can be added to the data if needed. Here is an example:
cdo sellonlatbox,20,60,10,70 setgrid,<path_to_gridfile> icondatafile result
ICON model data in NetCDF format contains the global attribute grid_file_uri. This attribute contains a link to the appropriate grid file on the ICON grid file server. If the global attribute grid_file_uri is present and valid, the grid information can be added automatically. The setgrid function is then no longer required. The environment variable CDO_DOWNLOAD_PATH can be used to select a directory for storing the grid file. If this environment variable is set, the grid file will be automatically downloaded from the grid file server to this directory if needed. If the grid file already exists in the current directory, the environment variable does not need to be set.
If the grid files are available locally, like at DKRZ, they do not need to be fetched from the grid file server. Use the environment variable CDO_ICON_GRIDS to set the root directory of the ICON grids. Here is an example for the ICON grids at DKRZ:
CDO_ICON_GRIDS=/pool/data/ICON
1.6 Zaxis description
Sometimes it is necessary to change the description of a zaxis. This can be done with the operator setzaxis. This operator needs an ASCII formatted file with the description of the zaxis. The following keywords can be used to describe a zaxis:
Keyword    Datatype     Description
zaxistype  STRING       Type of the zaxis
size       INTEGER      Number of levels
levels     FLOAT ARRAY  Values of the levels
lbounds    FLOAT ARRAY  Lower level bounds
ubounds    FLOAT ARRAY  Upper level bounds
vctsize    INTEGER      Number of vertical coordinate parameters
vct        FLOAT ARRAY  Vertical coordinate table
The keywords lbounds and ubounds are optional. vctsize and vct are only necessary to define hybrid model levels.
Available zaxis types:
Zaxis type        Description               Units
surface           Surface
pressure          Pressure level            pascal
hybrid            Hybrid model level
height            Height above ground       meter
depth_below_sea   Depth below sea level     meter
depth_below_land  Depth below land surface  centimeter
isentropic        Isentropic (theta) level  kelvin
Zaxis description example for pressure levels 100, 200, 500, 850 and 1000 hPa:
zaxistype = pressure
size = 5
levels = 10000 20000 50000 85000 100000
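Saved in a text file (say zaxis_plev, a placeholder name), the description can be applied with the setzaxis operator mentioned above:

```shell
# Set the zaxis description from the text file 'zaxis_plev'
cdo setzaxis,zaxis_plev infile.nc outfile.nc
```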
Zaxis description example for ECHAM5 L19 hybrid model levels:
zaxistype = hybrid
size = 19
levels = 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19
vctsize = 40
vct = 0 2000 4000 6046.10938 8267.92578 10609.5117 12851.1016 14698.5
15861.125 16116.2383 15356.9258 13621.4609 11101.5625 8127.14453
5125.14062 2549.96875 783.195068 0 0 0
0 0 0 0.000338993268 0.00335718691 0.0130700432 0.0340771675
0.0706498027 0.12591666 0.201195419 0.295519829 0.405408859
0.524931908 0.646107674 0.759697914 0.856437683 0.928747177
0.972985268 0.992281914 1
Note that the vctsize is twice the number of levels plus two and the vertical coordinate table must be specified for the level interfaces.
1.7 Time axis
A time axis describes the time for every timestep. Two time axis types are available: absolute time and relative time axis. CDO tries to maintain the actual type of the time axis for all operators.
1.7.1 Absolute time
An absolute time axis assigns the current time to each timestep. It can be used without knowledge of the calendar and is preferred by climate models. In NetCDF files an absolute time axis is represented by the time unit: "day as %Y%m%d.%f".
1.7.2 Relative time
A relative time is the time relative to a fixed reference time. The current time results from the reference time and the elapsed interval. The result depends on the calendar used. CDO supports the standard Gregorian, proleptic Gregorian, 360-day, 365-day and 366-day calendars. A relative time axis is preferred by numerical weather prediction models. In NetCDF files a relative time axis is represented by the time unit: "time-units since reference-time", e.g. "days since 1989-6-15 12:00".
1.7.3 Conversion of the time
Some programs that work with NetCDF data can only process relative time axes. Therefore it may be necessary to convert an absolute into a relative time axis. This conversion can be done for each operator with the CDO option '-r'. To convert a relative into an absolute time axis, use the CDO option '-a'.
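For example (file names are placeholders):

```shell
# Convert an absolute time axis into a relative one
cdo -r copy infile.nc outfile.nc

# Convert a relative time axis into an absolute one
cdo -a copy infile.nc outfile.nc
```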
1.8 Parameter table
A parameter table is an ASCII formatted file used to convert code numbers to variable names. Each variable has one line with its code number, name and a description with optional units in a blank-separated list. It can only be used for GRIB, SERVICE, EXTRA and IEG formatted files. The CDO option '-t <partab>' sets the default parameter table for all input files. Use the operator setpartab to set the parameter table for a specific file.
Example of a CDO parameter table:
134 aps surface pressure [Pa]
141 sn snow depth [m]
147 ahfl latent heat flux [W/m**2]
172 slm land sea mask
175 albedo surface albedo
211 siced ice depth [m]
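Assuming this table is stored in a file named mypartab (a placeholder name), it can be used like this:

```shell
# Set the default parameter table for all input files
cdo -t mypartab copy infile.grb outfile.grb

# Set the parameter table for a specific file
cdo setpartab,mypartab infile.grb outfile.grb
```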
1.9 Missing values
Missing values are data points that are missing or invalid. Such data points are treated in a different way than valid data. Most CDO operators can handle missing values in a smart way. But if the missing value is within the range of valid data, it can lead to incorrect results. This applies to all arithmetic operations, but especially to logical operations when the missing value is 0 or 1.
The default missing value for GRIB, SERVICE, EXTRA and IEG files is -9.e33. The CDO option '-m <missval>' overwrites the default missing value. In NetCDF files the variable attribute '_FillValue' is used as the missing value. The operator setmissval can be used to set a new missing value.
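For example, to declare a new missing value for a dataset (a sketch with placeholder names):

```shell
# Declare -9999 as the missing value of the dataset
cdo setmissval,-9999 infile.nc outfile.nc
```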
The CDO use of the missing value is shown in the following tables, where one table is printed for each operation. The operations are applied to arbitrary numbers a, b, the special case 0, and the missing value miss. For example, the table named "addition" shows that the sum of an arbitrary number a and the missing value is the missing value, and the table named "multiplication" shows that 0 multiplied by the missing value results in 0.

addition  b      miss
a         a + b  miss
miss      miss   miss

subtraction  b      miss
a            a - b  miss
miss         miss   miss

multiplication  b      0  miss
a               a * b  0  miss
0               0      0  0
miss            miss   0  miss

division  b      0     miss
a         a / b  miss  miss
0         0      miss  miss
miss      miss   miss  miss

maximum  b         miss
a        max(a,b)  a
miss     b         miss

minimum  b         miss
a        min(a,b)  a
miss     b         miss

sum   b      miss
a     a + b  a
miss  b      miss
The handling of missing values by the operations minimum and maximum may be surprising, but the definition given here is more consistent with what is expected in practice. Mathematical functions (e.g. log, sqrt, etc.) return the missing value if an argument is the missing value or an argument is out of range.
All statistical functions ignore missing values, treating them as not belonging to the sample, with the side effect of a reduced sample size.
1.9.1 Mean and average
An artificial distinction is made between the notions mean and average. The mean is regarded as a statistical function, whereas the average is found simply by adding the sample members and dividing the result by the sample size. For example, the mean of 1, 2, miss and 3 is (1 + 2 + 3)∕3 = 2, whereas the average is (1 + 2 + miss + 3)∕4 = miss∕4 = miss. If there are no missing values in the sample, the average and mean are identical.
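The distinction shows up in the statistics modules, which usually provide both variants (a sketch with placeholder names):

```shell
# Mean over all timesteps: missing values are excluded from the sample
cdo timmean infile.nc mean.nc

# Average over all timesteps: a missing value makes the result missing
cdo timavg infile.nc avg.nc
```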
1.10 Percentile
There is no standard definition of percentile. All definitions yield similar results when the number of values is very large. The following percentile methods are available in CDO:

Percentile method  Description
nrank              Nearest Rank method, the default method used in CDO
nist               The primary method recommended by NIST
rtype8             R's type=8 method
numpy              numpy.percentile with the option interpolation set to 'linear'
numpy_lower        numpy.percentile with the option interpolation set to 'lower'
numpy_higher       numpy.percentile with the option interpolation set to 'higher'
numpy_nearest      numpy.percentile with the option interpolation set to 'nearest'
The percentile method can be selected with the CDO option --percentile. The Nearest Rank method is the default percentile method in CDO.
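The Nearest Rank method picks the value at rank ceil(P/100 * N) of the sorted sample. A small sketch independent of CDO (pctl_nrank is a hypothetical helper, not a CDO command):

```shell
# Nearest Rank percentile: value at rank ceil(P/100 * N) of the sorted list
pctl_nrank() {
  p=$1; shift
  printf '%s\n' "$@" | sort -n | awk -v p="$p" '
    { v[NR] = $1 }
    END {
      r = int(p / 100 * NR)
      if (r < p / 100 * NR) r++   # round up to the next rank
      if (r < 1) r = 1
      print v[r]
    }'
}

pctl_nrank 30 15 20 35 40 50 55   # prints 20
pctl_nrank 75 15 20 35 40 50 55   # prints 50
```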
The different percentile methods can lead to different results, especially for small numbers of data values. Consider the sorted list {15, 20, 35, 40, 50, 55} of six data values. Here are the results for the 30th, 40th, 50th, 75th and 100th percentile of this list for the different percentile methods:

P      nrank  nist   rtype8  numpy  numpy_lower  numpy_higher  numpy_nearest
30th   20     21.5   23.5    27.5   20           35            35
40th   35     32     33      35     35           35            35
50th   35     37.5   37.5    37.5   35           40            40
75th   50     51.25  50.42   47.5   40           50            50
100th  55     55     55      55     55           55            55
1.10.1 Percentile over timesteps
The amount of data in a time series can be very large, and normally all data values must be held in memory to calculate a percentile. The percentile over timesteps therefore uses a histogram algorithm to limit the amount of required memory. The default number of histogram bins is 101; that means the histogram algorithm is used when the dataset has more than 101 timesteps. The default can be overridden by setting the environment variable CDO_PCTL_NBINS to a different value. The histogram algorithm is implemented only for the Nearest Rank method.
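For example (placeholder file names; timpctl needs the minimum and maximum fields as second and third input, produced here inline):

```shell
# Use 1001 histogram bins for the percentile over timesteps
export CDO_PCTL_NBINS=1001

# 95th percentile over all timesteps
cdo timpctl,95 infile.nc -timmin infile.nc -timmax infile.nc out.nc
```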
1.11 Regions
The CDO operators maskregion and selregion can be used to mask and select regions. For this purpose, the region needs to be defined by the user. In CDO there are two possibilities to define regions.
One possibility is to define the regions with an ASCII file. Each region is defined by a polygon, and each line of the polygon contains the longitude and latitude coordinates of one point. A region description file can contain several polygons; these must be separated by a line containing the character &.
Here is a simple example of a polygon for a box with longitudes from 120W to 90E and latitudes from 20N to 20S:
-120 20
-120 -20
270 -20
270 20
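Saved in a text file (here myregion.txt, a placeholder name), the polygon can be used like this:

```shell
# Select the data inside the defined region(s)
cdo selregion,myregion.txt infile.nc outfile.nc

# Mask the data using the defined region(s)
cdo maskregion,myregion.txt infile.nc outfile.nc
```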
With the second option, predefined regions can be used via country codes. The region is specified with dcw:<CountryCode>. Country codes can be combined with the plus sign.
The ISO two-letter country codes can be found at https://en.wikipedia.org/wiki/ISO_3166-1_alpha-2. For the coordinates of a country CDO uses the DCW (Digital Chart of the World) dataset from GMT. This dataset must be installed on the system and the environment variable DIR_DCW must point to it.
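A sketch using predefined country regions (Germany combined with Austria; file names are placeholders):

```shell
# Select the region covered by the country codes DE and AT
cdo selregion,dcw:DE+AT infile.nc outfile.nc
```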
2 Reference manual
This section gives a description of all operators. Related operators are grouped into modules. For easier description, all single input files are named infile or infile1, infile2, etc., and an arbitrary number of input files are named infiles. All output files are named outfile or outfile1, outfile2, etc. Further, the following notation is introduced:

i(t)    Timestep t of infile

i(t,x)  Element number x of the field at timestep t of infile

o(t)    Timestep t of outfile

o(t,x)  Element number x of the field at timestep t of outfile
2.1 Information
This section contains modules to print information about datasets. All operators print their results to standard output.
Here is a short overview of all operators in this section:
info  Dataset information listed by parameter identifier 
infon  Dataset information listed by parameter name 
map  Dataset information and simple map 
sinfo  Short information listed by parameter identifier 
sinfon  Short information listed by parameter name 
diff  Compare two datasets listed by parameter id 
diffn  Compare two datasets listed by parameter name 
npar  Number of parameters 
nlevel  Number of levels 
nyear  Number of years 
nmon  Number of months 
ndate  Number of dates 
ntime  Number of timesteps 
ngridpoints  Number of gridpoints 
ngrids  Number of horizontal grids 
showformat  Show file format 
showcode  Show code numbers 
showname  Show variable names 
showstdname  Show standard names 
showlevel  Show levels 
showltype  Show GRIB level types 
showyear  Show years 
showmon  Show months 
showdate  Show date information 
showtime  Show time information 
showtimestamp  Show timestamp 
showattribute  Show a global attribute or a variable attribute 
partab  Parameter table 
codetab  Parameter code table 
griddes  Grid description 
zaxisdes  Zaxis description 
vct  Vertical coordinate table 
2.1.1 INFO - Information and simple statistics
Synopsis
<operator> infiles
Description
This module writes information about the structure and contents for each field of all input files to standard output. A field is a horizontal layer of a data variable. All input files need to have the same structure with the same variables on different timesteps. The information displayed depends on the chosen operator.
Operators
 info

Dataset information listed by parameter identifier
Prints information and simple statistics for each field of all input datasets. For each field the operator prints one line with the following elements:
Date and Time

Level, Gridsize and number of Missing values

Minimum, Mean and Maximum
The mean value is computed without the use of area weights! 
Parameter identifier

 infon

Dataset information listed by parameter name
The same as operator info but using the name instead of the identifier to label the parameter.

 map

Dataset information and simple map
Prints information, simple statistics and a map for each field of all input datasets. The map will be printed only for fields on a regular lon/lat grid.
Example
To print information and simple statistics for each field of a dataset use:
cdo infon infile
This is an example result of a dataset with one 2D parameter over 12 timesteps:
1 : Date Time Level Size Miss : Minimum Mean Maximum : Name
1 : 1987-01-31 12:00:00 0 2048 1361 : 232.77 266.65 305.31 : SST
2 : 1987-02-28 12:00:00 0 2048 1361 : 233.64 267.11 307.15 : SST
3 : 1987-03-31 12:00:00 0 2048 1361 : 225.31 267.52 307.67 : SST
4 : 1987-04-30 12:00:00 0 2048 1361 : 215.68 268.65 310.47 : SST
5 : 1987-05-31 12:00:00 0 2048 1361 : 215.78 271.53 312.49 : SST
6 : 1987-06-30 12:00:00 0 2048 1361 : 212.89 272.80 314.18 : SST
7 : 1987-07-31 12:00:00 0 2048 1361 : 209.52 274.29 316.34 : SST
8 : 1987-08-31 12:00:00 0 2048 1361 : 210.48 274.41 315.83 : SST
9 : 1987-09-30 12:00:00 0 2048 1361 : 210.48 272.37 312.86 : SST
10 : 1987-10-31 12:00:00 0 2048 1361 : 219.46 270.53 309.51 : SST
11 : 1987-11-30 12:00:00 0 2048 1361 : 230.98 269.85 308.61 : SST
12 : 1987-12-31 12:00:00 0 2048 1361 : 241.25 269.94 309.27 : SST
2.1.2 SINFO - Short information
Synopsis
<operator> infiles
Description
This module writes information about the structure of infiles to standard output. infiles is an arbitrary number of input files. All input files need to have the same structure with the same variables on different timesteps. The information displayed depends on the chosen operator.
Operators
 sinfo

Short information listed by parameter identifier
Prints short information of a dataset. The information is divided into 4 sections. Section 1 prints one line per parameter with the following information:
institute and source

time c=constant v=varying

type of statistical processing

number of levels and zaxis number

horizontal grid size and number

data type

parameter identifier
Sections 2 and 3 give a short overview of all grid and vertical coordinates, and the last section contains short information about the time coordinate.

 sinfon

Short information listed by parameter name
The same as operator sinfo but using the name instead of the identifier to label the parameter.
Example
To print short information of a dataset use:
cdo sinfon infile
This is the result of an ECHAM5 dataset with 3 parameter over 12 timesteps:
1 : Institut Source T Steptype Levels Num Points Num Dtype : Name
1 : MPIMET ECHAM5 c instant 1 1 2048 1 F32 : GEOSP
2 : MPIMET ECHAM5 v instant 4 2 2048 1 F32 : T
3 : MPIMET ECHAM5 v instant 1 1 2048 1 F32 : TSURF
Grid coordinates :
1 : gaussian : points=2048 (64x32) F16
longitude : 0 to 354.375 by 5.625 degrees_east circular
latitude : 85.7606 to -85.7606 degrees_north
Vertical coordinates :
1 : surface : levels=1
2 : pressure : levels=4
level : 92500 to 20000 Pa
Time coordinate :
time : 12 steps
YYYY-MM-DD hh:mm:ss  YYYY-MM-DD hh:mm:ss  YYYY-MM-DD hh:mm:ss  YYYY-MM-DD hh:mm:ss
1987-01-31 12:00:00  1987-02-28 12:00:00  1987-03-31 12:00:00  1987-04-30 12:00:00
1987-05-31 12:00:00  1987-06-30 12:00:00  1987-07-31 12:00:00  1987-08-31 12:00:00
1987-09-30 12:00:00  1987-10-31 12:00:00  1987-11-30 12:00:00  1987-12-31 12:00:00
2.1.3 DIFF - Compare two datasets field by field
Synopsis
<operator>[,options] infile1 infile2
Description
Compares the contents of two datasets field by field. The input datasets must have the same structure, and their fields must have the same header information and dimensions. Try the parameter names if the number of variables differs. The exit status is 0 if the inputs are the same and 1 if they differ.
Operators
 diff

Compare two datasets listed by parameter id
Provides statistics on differences between two datasets. For each pair of fields the operator prints one line with the following information:
Date and Time

Level, Gridsize and number of Missing values

Number of different values

Occurrence of coefficient pairs with different signs (S)

Occurrence of zero values (Z)

Maxima of absolute difference of coefficient pairs

Maxima of relative difference of nonzero coefficient pairs with equal signs

Parameter identifier

 diffn

Compare two datasets listed by parameter name
The same as operator diff. Using the name instead of the identifier to label the parameter.
Parameter
 maxcount

INTEGER Stop after maxcount different fields
 abslim

FLOAT Limit of the maximum absolute difference (default: 0)
 rellim

FLOAT Limit of the maximum relative difference (default: 1)
 names

STRING Consideration of the variable names of only one input file (left/right) or the intersection of both (intersect).
Example
To print the difference for each field of two datasets use:
cdo diffn infile1 infile2
This is an example result of two datasets with one 2D parameter over 12 timesteps:
Date Time Level Size Miss Diff : S Z Max_Absdiff Max_Reldiff : Name
1 : 1987-01-31 12:00:00 0 2048 1361 273 : F F 0.00010681 4.1660e-07 : SST
2 : 1987-02-28 12:00:00 0 2048 1361 309 : F F 6.1035e-05 2.3742e-07 : SST
3 : 1987-03-31 12:00:00 0 2048 1361 292 : F F 7.6294e-05 3.3784e-07 : SST
4 : 1987-04-30 12:00:00 0 2048 1361 183 : F F 7.6294e-05 3.5117e-07 : SST
5 : 1987-05-31 12:00:00 0 2048 1361 207 : F F 0.00010681 4.0307e-07 : SST
7 : 1987-07-31 12:00:00 0 2048 1361 317 : F F 9.1553e-05 3.5634e-07 : SST
8 : 1987-08-31 12:00:00 0 2048 1361 219 : F F 7.6294e-05 2.8849e-07 : SST
9 : 1987-09-30 12:00:00 0 2048 1361 188 : F F 7.6294e-05 3.6168e-07 : SST
10 : 1987-10-31 12:00:00 0 2048 1361 297 : F F 9.1553e-05 3.5001e-07 : SST
11 : 1987-11-30 12:00:00 0 2048 1361 234 : F F 6.1035e-05 2.3839e-07 : SST
12 : 1987-12-31 12:00:00 0 2048 1361 267 : F F 9.3553e-05 3.7624e-07 : SST
11 of 12 records differ
2.1.4 NINFO - Print the number of parameters, levels or times
Synopsis
<operator> infile
Description
This module prints the number of variables, levels or times of the input dataset.
Operators
 npar

Number of parameters
Prints the number of parameters (variables).

 nlevel

Number of levels
Prints the number of levels for each variable.

 nyear

Number of years
Prints the number of different years.

 nmon

Number of months
Prints the number of different combinations of years and months.

 ndate

Number of dates
Prints the number of different dates.

 ntime

Number of timesteps
Prints the number of timesteps.

 ngridpoints

Number of gridpoints
Prints the number of gridpoints for each variable.

 ngrids

Number of horizontal grids
Prints the number of horizontal grids.
Example
To print the number of parameters (variables) in a dataset use:
cdo npar infile
To print the number of months in a dataset use:
cdo nmon infile
2.1.5 SHOWINFO - Show variables, levels or times
Synopsis
<operator> infile
Description
This module prints the format, variables, levels or times of the input dataset.
Operators
 showformat

Show file format
Prints the file format of the input dataset.

 showcode

Show code numbers
Prints the code number of all variables.

 showname

Show variable names
Prints the name of all variables.

 showstdname

Show standard names
Prints the standard name of all variables.

 showlevel

Show levels
Prints all levels for each variable.

 showltype

Show GRIB level types
Prints the GRIB level type for all zaxes.

 showyear

Show years
Prints all years.

 showmon

Show months
Prints all months.

 showdate

Show date information
Prints date information of all timesteps (format YYYY-MM-DD).

 showtime

Show time information
Prints time information of all timesteps (format hh:mm:ss).

 showtimestamp

Show timestamp
Prints the timestamp of all timesteps (format YYYY-MM-DDThh:mm:ss).
Example
To print the code number of all variables in a dataset use:
cdo showcode infile
This is an example result of a dataset with three variables:
129 130 139
To print all months in a dataset use:
cdo showmon infile
This is an example result of a dataset with an annual cycle:
1 2 3 4 5 6 7 8 9 10 11 12
2.1.6 SHOWATTRIBUTE  Show attributes
Synopsis
showattribute[,attributes] infile
Description
This operator prints the attributes of the data variables of a dataset.
Each attribute has the following structure:
[var_nm@][att_nm]
 var_nm

Variable name (optional). Example: pressure
 att_nm

Attribute name (optional). Example: units
The value of var_nm is the name of the variable containing the attribute (named att_nm) that you want to print. Use wildcards to print the attribute att_nm of more than one variable. A value of var_nm of ’*’ will print the attribute att_nm of all data variables. If var_nm is missing then att_nm refers to a global attribute.
The value of att_nm is the name of the attribute you want to print. Use wildcards to print more than one attribute. A value of att_nm of ’*’ will print all attributes.
Parameter
 attributes

STRING Comma-separated list of attributes.
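The wildcard matching of variable and attribute names described above can be illustrated with Python's fnmatch module. This is a sketch of the matching semantics only, not CDO code; the variable and attribute names are invented:

```python
from fnmatch import fnmatch

# Data-variable attributes in var_nm@att_nm form. A specification without
# '@' would refer to a global attribute (not modeled in this sketch).
attributes = ["pressure@units", "pressure@long_name", "temperature@units"]

def matches(spec, attribute):
    """Check one var_nm@att_nm specification against one attribute."""
    var_pat, _, att_pat = spec.partition("@")
    var_nm, _, att_nm = attribute.partition("@")
    return fnmatch(var_nm, var_pat) and fnmatch(att_nm, att_pat)

# '*@units' selects the units attribute of all data variables
print([a for a in attributes if matches("*@units", a)])
```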
2.1.7 FILEDES  Dataset description
Synopsis
<operator> infile
Description
This module provides operators to print meta information about a dataset. The printed metadata depends on the chosen operator.
Operators
partab
Parameter table
Prints all available meta information of the variables.

codetab
Parameter code table
Prints a code table with a description of all variables. For each variable the operator prints one line listing the code, name, description and units.

griddes
Grid description
Prints the description of all grids.

zaxisdes
Zaxis description
Prints the description of all zaxes.

vct
Vertical coordinate table
Prints the vertical coordinate table.
Example
Assume all variables of the dataset are on a Gaussian N16 grid. To print the grid description of this dataset use:
cdo griddes infile
Result:
gridtype : gaussian
gridsize : 2048
xname : lon
xlongname : longitude
xunits : degrees_east
yname : lat
ylongname : latitude
yunits : degrees_north
xsize : 64
ysize : 32
xfirst : 0
xinc : 5.625
yvals : 85.76058 80.26877 74.74454 69.21297 63.67863 58.1429 52.6065
47.06964 41.53246 35.99507 30.4575 24.91992 19.38223 13.84448
8.306702 2.768903 -2.768903 -8.306702 -13.84448 -19.38223
-24.91992 -30.4575 -35.99507 -41.53246 -47.06964 -52.6065
-58.1429 -63.67863 -69.21297 -74.74454 -80.26877 -85.76058
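The regular longitudes of this grid follow directly from xfirst and xinc, and gridsize is the product xsize*ysize. A quick Python check of the description above:

```python
# Values from the griddes output above
xsize, ysize = 64, 32
xfirst, xinc = 0.0, 5.625

# Regular longitudes: xfirst, xfirst + xinc, ..., xfirst + (xsize-1)*xinc
lons = [xfirst + i * xinc for i in range(xsize)]

print(xsize * ysize)  # matches the gridsize entry
print(lons[-1])       # last longitude of the grid
```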
2.2 File operations
This section contains modules to perform operations on files.
Here is a short overview of all operators in this section:
apply  Apply operators on each input file. 
copy  Copy datasets 
clone  Clone datasets 
cat  Concatenate datasets 
tee  Duplicate a data stream 
pack  Pack data 
bitrounding  Bit rounding 
replace  Replace variables 
duplicate  Duplicates a dataset 
mergegrid  Merge grid 
merge  Merge datasets with different fields 
mergetime  Merge datasets sorted by date and time 
splitcode  Split code numbers 
splitparam  Split parameter identifiers 
splitname  Split variable names 
splitlevel  Split levels 
splitgrid  Split grids 
splitzaxis  Split zaxes 
splittabnum  Split parameter table numbers 
splithour  Split hours 
splitday  Split days 
splitseas  Split seasons 
splityear  Split years 
splityearmon  Split in years and months 
splitmon  Split months 
splitsel  Split time selection 
distgrid  Distribute horizontal grid 
collgrid  Collect horizontal grid 
2.2.1 APPLY  Apply operators
Synopsis
apply,operators infiles
Description
The apply utility runs the named operators on each input file. The input files must be enclosed in square brackets. This utility can only be used as an argument to operators that take a series of input files (infiles). Here is an incomplete list of such operators: copy, cat, merge, mergetime, select, ENSSTAT. The parameter operators is a blank-separated list of CDO operators. Use quotation marks if more than one operator is needed. Each operator may have only one input and one output stream.
Parameter
 operators

STRING Blank-separated list of CDO operators.
Example
Suppose we have multiple input files, each containing multiple variables on several time steps. The input files contain the variables U and V, among others. We are only interested in the absolute wind speed on all time steps. Here is the standard CDO solution for this task:
cdo expr,wind="sqrt(u*u+v*v)" mergetime infile1 infile2 infile3 outfile
This first joins all the time steps together and then calculates the wind speed. If there are many variables in the input files, this procedure is inefficient. In this case it is better to first calculate the wind speed:
cdo mergetime expr,wind="sqrt(u*u+v*v)" infile1 \
expr,wind="sqrt(u*u+v*v)" infile2 \
expr,wind="sqrt(u*u+v*v)" infile3 outfile
However, this can quickly become very confusing with more than 3 input files. The apply operator solves this problem:
cdo mergetime apply,expr,wind="sqrt(u*u+v*v)" [ infile1 infile2 infile3 ] outfile
Another example is the calculation of the mean value over several input files with ensmean. The input files contain several variables, but we are only interested in the variable named XXX:
cdo ensmean apply,selname,XXX [ infile1 infile2 infile3 ] outfile
2.2.2 COPY  Copy datasets
Synopsis
<operator> infiles outfile
Description
This module contains operators to copy, clone or concatenate datasets. infiles is an arbitrary number of input files. All input files need to have the same structure with the same variables on different timesteps.
Operators
copy
Copy datasets
Copies all input datasets to outfile.

clone
Clone datasets
Copies all input datasets to outfile. In contrast to the copy operator, clone tries not to change the input data. GRIB records are neither decoded nor decompressed.

cat
Concatenate datasets
Concatenates all input datasets and appends the result to the end of outfile. If outfile does not exist it will be created.
Example
To change the format of a dataset to NetCDF use:
cdo -f nc copy infile outfile.nc
Add the option '-r' to create a relative time axis, as is required for proper recognition by GrADS or Ferret:
cdo -r -f nc copy infile outfile.nc
To concatenate 3 datasets with different timesteps of the same variables use:
cdo copy infile1 infile2 infile3 outfile
If the output dataset already exists and you wish to extend it with more timesteps use:
cdo cat infile1 infile2 infile3 outfile
2.2.3 TEE  Duplicate a data stream and write it to file
Synopsis
tee,outfile2 infile outfile1
Description
This operator copies the input dataset to outfile1 and outfile2. The first output stream, outfile1, can be further processed with other CDO operators. The second output, outfile2, is written to disk. It can be used to store intermediate results in a file.
Parameter
 outfile2

STRING Destination filename for the copy of the input file
Example
To compute the daily and monthly average of a dataset use:
cdo monavg tee,outfile_dayavg dayavg infile outfile_monavg
2.2.4 PACK  Pack data
Synopsis
pack infile outfile
Description
Packing reduces the data volume by reducing the precision of the stored numbers. It is implemented using the NetCDF attributes add_offset and scale_factor. The pack operator calculates the attributes add_offset and scale_factor for all variables. The default data type of all variables is automatically changed to 16-bit integer. Use the CDO option -b to change the data type to a different integer precision, if needed. Missing values are automatically transformed to the new data type.
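The packing attributes can be derived from the data range. Below is a minimal sketch of the arithmetic behind add_offset/scale_factor packing, not CDO's actual implementation; the function names are illustrative:

```python
def pack_params(vmin, vmax, nbits=16):
    """Compute add_offset and scale_factor for linear packing.

    Packed value:   p = round((v - add_offset) / scale_factor)
    Unpacked value: v ~ p * scale_factor + add_offset
    """
    # Usable integer range; one value is typically reserved for missing data.
    nvals = 2 ** nbits - 2
    scale_factor = (vmax - vmin) / nvals
    # Center the data range on the signed integer range.
    add_offset = (vmax + vmin) / 2.0
    return add_offset, scale_factor

def pack(v, add_offset, scale_factor):
    return round((v - add_offset) / scale_factor)

def unpack(p, add_offset, scale_factor):
    return p * scale_factor + add_offset

# Round-trip example: temperatures between 210 K and 310 K
off, sf = pack_params(210.0, 310.0)
v = unpack(pack(273.15, off, sf), off, sf)
print(v)  # close to 273.15, within half a scale_factor
```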
2.2.5 BITROUNDING  Bit rounding
Synopsis
bitrounding[,params] infile outfile
Description
This operator calculates, for each field, the number of mantissa bits necessary to retain a certain level of information in the data. The data is then rounded to this number of significant bits (numbits), which allows it to be compressed more effectively.
The default value of the information level is 0.9999 and can be adjusted with the parameter inflevel. That means 99.99% of the information in the mantissa bits is preserved.
Alternatively, the number of significant bits can be set for all variables with the numbits parameter. Furthermore, numbits can be assigned per variable via the filename parameter. In this case, numbits is still calculated for any variables not listed in the file.
The analysis of the bit information is based on the Julia library BitInformation.jl. The procedure to derive the number of significant mantissa bits was adapted from the Python library xbitinfo. Quantization to the number of mantissa bits is performed with IEEE rounding, using code from NetCDF 4.9.0.
Currently only 32-bit float data is rounded. Data with missing values are not yet supported for the calculation of significant bits.
Parameter
 inflevel

FLOAT Information level (0 - 1) [default: 0.9999]
 addbits

INTEGER Add bits to the number of significant bits [default: 0]
 minbits

INTEGER Minimum value of the number of bits [default: 1]
 maxbits

INTEGER Maximum value of the number of bits [default: 23]
 numsteps

INTEGER Set to 1 to run the calculation only in the first time step
 numbits

INTEGER Set number of significant bits
 printbits

BOOL Print max. numbits per variable of 1st timestep to stdout [format: name=numbits]
 filename

STRING Read number of significant bits per variable from file [format: name=numbits]
Example
Apply bit rounding to all 32-bit float fields, preserving 99.9% of the information, followed by compression and storage to NetCDF4:
cdo -f nc4 -z zip bitrounding,inflevel=0.999 infile outfile
Add the option '-v' to view the number of mantissa bits used for each field:
cdo -v -f nc4 -z zip bitrounding,inflevel=0.999 infile outfile
2.2.6 REPLACE  Replace variables
Synopsis
replace infile1 infile2 outfile
Description
This operator replaces variables in infile1 with variables from infile2 and writes the result to outfile. Both input datasets need to have the same number of timesteps. Each variable name may occur only once!
Example
Assume the first input dataset infile1 has three variables with the names geosp, t and tslm1 and the second input dataset infile2 has only the variable tslm1. To replace the variable tslm1 in infile1 by tslm1 from infile2 use:
cdo replace infile1 infile2 outfile
2.2.7 DUPLICATE  Duplicates a dataset
Synopsis
duplicate[,ndup] infile outfile
Description
This operator duplicates the contents of infile and writes the result to outfile. The optional parameter sets the number of duplicates; the default is 2.
Parameter
 ndup

INTEGER Number of duplicates, default is 2.
2.2.8 MERGEGRID  Merge grid
Synopsis
mergegrid infile1 infile2 outfile
Description
Merges grid points of all variables from infile2 into infile1 and writes the result to outfile. Only the non-missing values of infile2 are used. The horizontal grid of infile2 should be smaller than or equal to the grid of infile1 and the resolution must be the same. Only rectilinear grids are supported. Both input files need to have the same variables and the same number of timesteps.
2.2.9 MERGE  Merge datasets
Synopsis
<operator> infiles outfile
Description
This module reads datasets from several input files, merges them and writes the resulting dataset to outfile.
Operators
merge
Merge datasets with different fields
Merges time series of different fields from several input datasets. The number of fields per timestep written to outfile is the sum of the field numbers per timestep in all input datasets. The time series in all input datasets are required to have different fields and the same number of timesteps. The fields in each input file have to be either different variables or different levels of the same variable. A mixture of different variables on different levels in different input files is not allowed.

mergetime
Merge datasets sorted by date and time
Merges all timesteps of all input files sorted by date and time. All input files need to have the same structure with the same variables on different timesteps. After this operation every input timestep is in outfile and all timesteps are sorted by date and time.
Environment
 SKIP_SAME_TIME

If set to 1, skips all consecutive timesteps with a double entry of the same timestamp.
Note
Operators of this module need to open all input files simultaneously. The maximum number of open files depends on the operating system!
Example
Assume three datasets with the same number of timesteps and different variables in each dataset. To merge these datasets to a new dataset use:
cdo merge infile1 infile2 infile3 outfile
Assume you have split a 6-hourly dataset with splithour. This produces four datasets, one for each hour of the day. The following command merges them back together:
cdo mergetime infile1 infile2 infile3 infile4 outfile
2.2.10 SPLIT  Split a dataset
Synopsis
<operator>[,params] infile obase
Description
This module splits infile into pieces. The output files will be named <obase><xxx><suffix>, where suffix is the filename extension derived from the file format. xxx and the contents of the output files depend on the chosen operator. params is a comma-separated list of processing parameters.
Operators
splitcode
Split code numbers
Splits a dataset into pieces, one for each different code number. xxx will have three digits with the code number.

splitparam
Split parameter identifiers
Splits a dataset into pieces, one for each different parameter identifier. xxx will be a string with the parameter identifier.

splitname
Split variable names
Splits a dataset into pieces, one for each variable name. xxx will be a string with the variable name.

splitlevel
Split levels
Splits a dataset into pieces, one for each different level. xxx will have six digits with the level.

splitgrid
Split grids
Splits a dataset into pieces, one for each different grid. xxx will have two digits with the grid number.

splitzaxis
Split zaxes
Splits a dataset into pieces, one for each different zaxis. xxx will have two digits with the zaxis number.

splittabnum
Split parameter table numbers
Splits a dataset into pieces, one for each GRIB1 parameter table number. xxx will have three digits with the GRIB1 parameter table number.
Parameter
 swap

STRING Swap the position of obase and xxx in the output filename
 uuid=<attname>

STRING Add a UUID as global attribute <attname> to each output file
Environment
 CDO_FILE_SUFFIX

Set the default file suffix. This suffix will be added to the output file names instead of the filename extension derived from the file format. Set this variable to NULL to disable the adding of a file suffix.
Note
Operators of this module need to open all output files simultaneously. The maximum number of open files depends on the operating system!
Example
Assume an input GRIB1 dataset with three variables, e.g. code number 129, 130 and 139. To split this dataset into three pieces, one for each code number use:
cdo splitcode infile code
Result of ’dir code*’:
code129.grb code130.grb code139.grb
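The naming scheme <obase><xxx><suffix> from this example can be sketched as follows. This is a simple illustration of the scheme, not CDO code; the digit counts follow the operator descriptions above:

```python
def split_filename(obase, key, suffix=".grb", digits=3):
    """Build an output filename in the <obase><xxx><suffix> scheme.

    splitcode uses three digits, splitlevel six, splitgrid two, etc.
    """
    return f"{obase}{key:0{digits}d}{suffix}"

# The files produced by 'cdo splitcode infile code' in the example above
names = [split_filename("code", c) for c in (129, 130, 139)]
print(names)
```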
2.2.11 SPLITTIME  Split timesteps of a dataset
Synopsis
<operator> infile obase
splitmon[,format] infile obase
Description
This module splits infile into pieces, one for each group of timesteps. The output files will be named <obase><xxx><suffix>, where suffix is the filename extension derived from the file format. xxx and the contents of the output files depend on the chosen operator.
Operators
splithour
Split hours
Splits a file into pieces, one for each different hour. xxx will have two digits with the hour.

splitday
Split days
Splits a file into pieces, one for each different day. xxx will have two digits with the day.

splitseas
Split seasons
Splits a file into pieces, one for each different season. xxx will have three characters with the season.

splityear
Split years
Splits a file into pieces, one for each different year. xxx will have four digits with the year (YYYY).

splityearmon
Split in years and months
Splits a file into pieces, one for each different year and month. xxx will have six digits with the year and month (YYYYMM).

splitmon
Split months
Splits a file into pieces, one for each different month. xxx will have two digits with the month.
Parameter
 format

STRING C-style format for strftime() (e.g. %B for the full month name)
Environment
 CDO_FILE_SUFFIX

Set the default file suffix. This suffix will be added to the output file names instead of the filename extension derived from the file format. Set this variable to NULL to disable the adding of a file suffix.
Note
Operators of this module need to open all output files simultaneously. The maximum number of open files depends on the operating system!
Example
Assume the input GRIB1 dataset has timesteps from January to December. To split each month with all variables into one separate file use:
cdo splitmon infile mon
Result of ’dir mon*’:
mon01.grb mon02.grb mon03.grb mon04.grb mon05.grb mon06.grb
mon07.grb mon08.grb mon09.grb mon10.grb mon11.grb mon12.grb
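The format parameter of splitmon follows C's strftime(). A Python sketch of how %B would change the month part of the output filenames (the exact month names depend on the locale; the filenames are illustrative, not CDO output):

```python
from datetime import date

# With splitmon,%B the month part of the filename becomes the full
# month name instead of the two-digit month number.
for month in (1, 2, 12):
    xxx = date(2000, month, 1).strftime("%B")
    print(f"mon{xxx}.grb")
```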
2.2.12 SPLITSEL  Split selected timesteps
Synopsis
splitsel,nsets[,noffset[,nskip]] infile obase
Description
This operator splits infile into pieces, one for each adjacent sequence t_1, ..., t_n of timesteps of the same selected time range. The output files will be named <obase><nnnnnn><suffix>, where nnnnnn is the sequence number and suffix is the filename extension derived from the file format.
Parameter
 nsets

INTEGER Number of input timesteps for each output file
 noffset

INTEGER Number of input timesteps skipped before the first timestep range (optional)
 nskip

INTEGER Number of input timesteps skipped between timestep ranges (optional)
Environment
 CDO_FILE_SUFFIX

Set the default file suffix. This suffix will be added to the output file names instead of the filename extension derived from the file format. Set this variable to NULL to disable the adding of a file suffix.
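The effect of nsets, noffset and nskip on the timestep sequence can be sketched as follows. This is an illustration of the parameter semantics described above, not CDO code; whether a trailing incomplete group is also written is not covered by this sketch:

```python
def splitsel_groups(ntimesteps, nsets, noffset=0, nskip=0):
    """Return the complete timestep groups (1-based indices) that
    splitsel,nsets,noffset,nskip would write to separate files."""
    groups = []
    t = noffset + 1                   # skip noffset leading timesteps
    while t + nsets - 1 <= ntimesteps:
        groups.append(list(range(t, t + nsets)))
        t += nsets + nskip            # skip nskip timesteps between ranges
    return groups

# 10 timesteps, files of 3, skip 1 before the first range and 1 between ranges
print(splitsel_groups(10, 3, noffset=1, nskip=1))
```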
2.2.13 DISTGRID  Distribute horizontal grid
Synopsis
distgrid,nx[,ny] infile obase
Description
This operator distributes a dataset into smaller pieces. Each output file contains a different region of the horizontal source grid. 2D Lon/Lat grids can be split into nx*ny pieces, where a target grid region contains a structured longitude/latitude box of the source grid. Data on an unstructured grid is split into nx pieces. The output files will be named <obase><xxx><suffix> where suffix is the filename extension derived from the file format. xxx will have five digits with the number of the target region.
Parameter
 nx

INTEGER Number of regions in x direction, or number of pieces for unstructured grids
 ny

INTEGER Number of regions in y direction [default: 1]
Note
This operator needs to open all output files simultaneously. The maximum number of open files depends on the operating system!
Example
Distribute data on a 2D Lon/Lat grid into 6 smaller files, each output file receives one half of x and a third of y of the source grid:
cdo distgrid,2,3 infile.nc obase
Below is a schematic illustration of this example:
On the left side is the data of the input file and on the right side is the data of the six output files.
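The region decomposition of this example can be sketched in terms of index ranges. This illustrates the nx*ny splitting of a structured grid, not CDO code; an evenly divisible grid is assumed:

```python
def distgrid_regions(nlon, nlat, nx, ny):
    """Split an nlon x nlat lon/lat grid into nx*ny index boxes,
    numbered region by region like the distgrid output files."""
    dx, dy = nlon // nx, nlat // ny
    regions = []
    for j in range(ny):
        for i in range(nx):
            regions.append(((i * dx + 1, (i + 1) * dx),    # lon index range
                            (j * dy + 1, (j + 1) * dy)))   # lat index range
    return regions

# cdo distgrid,2,3 on a 64x96 grid: 6 regions of 32x32 points each
print(distgrid_regions(64, 96, 2, 3))
```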
2.2.14 COLLGRID  Collect horizontal grid
Synopsis
collgrid[,nx[,names]] infiles outfile
Description
This operator collects the data of the input files into one output file. All input files need to have the same variables and the same number of timesteps on different horizontal grid regions. If the source regions are on a structured lon/lat grid, all regions together must result in a new structured lon/lat grid box. Data on an unstructured grid is concatenated in the order of the input files. The parameter nx needs to be specified only for curvilinear grids.
Parameter
 nx

INTEGER Number of regions in x direction [default: number of input files]
 names

STRING Commaseparated list of variable names [default: all variables]
Note
This operator needs to open all input files simultaneously. The maximum number of open files depends on the operating system!
Example
Collect the horizontal grid from 6 input files. Each input file contains a lon/lat region of the target grid:
cdo collgrid infile[1-6] outfile
Below is a schematic illustration of this example:
On the left side is the data of the six input files and on the right side is the collected data of the output file.
2.3 Selection
This section contains modules to select time steps, fields or a part of a field from a dataset.
Here is a short overview of all operators in this section:
select  Select fields 
delete  Delete fields 
selmulti  Select multiple fields 
delmulti  Delete multiple fields 
changemulti  Change identification of multiple fields 
selparam  Select parameters by identifier 
delparam  Delete parameters by identifier 
selcode  Select parameters by code number 
delcode  Delete parameters by code number 
selname  Select parameters by name 
delname  Delete parameters by name 
selstdname  Select parameters by standard name 
sellevel  Select levels 
sellevidx  Select levels by index 
selgrid  Select grids 
selzaxis  Select zaxes 
selzaxisname  Select zaxes by name 
selltype  Select GRIB level types 
seltabnum  Select parameter table numbers 
seltimestep  Select timesteps 
seltime  Select times 
selhour  Select hours 
selday  Select days 
selmonth  Select months 
selyear  Select years 
selseason  Select seasons 
seldate  Select dates 
selsmon  Select single month 
sellonlatbox  Select a longitude/latitude box 
selindexbox  Select an index box 
selregion  Select cells inside regions 
selcircle  Select cells inside a circle 
selgridcell  Select grid cells 
delgridcell  Delete grid cells 
samplegrid  Resample grid 
selyearidx  Select year by index 
bottomvalue  Extract bottom level 
topvalue  Extract top level 
isosurface  Extract isosurface 
2.3.1 SELECT  Select fields
Synopsis
<operator>,params infiles outfile
Description
This module selects some fields from infiles and writes them to outfile. infiles is an arbitrary number of input files. All input files need to have the same structure with the same variables on different timesteps. The fields selected depend on the chosen parameters. Parameters are given as a comma-separated list of "key=value" pairs. A range of integer values can be specified by first/last[/inc]. Wildcards are supported for string values.
Operators
select
Select fields
Selects all fields with parameters in a user given list.

delete
Delete fields
Deletes all fields with parameters in a user given list.
Parameter
name
STRING Comma-separated list of variable names.

param
STRING Comma-separated list of parameter identifiers.

code
INTEGER Comma-separated list or first/last[/inc] range of code numbers.

level
FLOAT Comma-separated list of vertical levels.

levrange
FLOAT First and last value of the level range.

levidx
INTEGER Comma-separated list or first/last[/inc] range of index of levels.

zaxisname
STRING Comma-separated list of zaxis names.

zaxisnum
INTEGER Comma-separated list or first/last[/inc] range of zaxis numbers.

ltype
INTEGER Comma-separated list or first/last[/inc] range of GRIB level types.

gridname
STRING Comma-separated list of grid names.

gridnum
INTEGER Comma-separated list or first/last[/inc] range of grid numbers.

steptype
STRING Comma-separated list of timestep types (constant, avg, accum, min, max, range, diff, sum)

date
STRING Comma-separated list of dates (format YYYY-MM-DDThh:mm:ss).

startdate
STRING Start date (format YYYY-MM-DDThh:mm:ss).

enddate
STRING End date (format YYYY-MM-DDThh:mm:ss).

minute
INTEGER Comma-separated list or first/last[/inc] range of minutes.

hour
INTEGER Comma-separated list or first/last[/inc] range of hours.

day
INTEGER Comma-separated list or first/last[/inc] range of days.

month
INTEGER Comma-separated list or first/last[/inc] range of months.

season
STRING Comma-separated list of seasons (substring of DJFMAMJJASOND or ANN).

year
INTEGER Comma-separated list or first/last[/inc] range of years.

dom
STRING Comma-separated list of days of the month (e.g. 29feb).

timestep
INTEGER Comma-separated list or first/last[/inc] range of timesteps. Negative values select timesteps from the end (NetCDF only).

timestep_of_year
INTEGER Comma-separated list or first/last[/inc] range of timesteps of the year.

timestepmask
STRING Read timesteps from a mask file.
Example
Assume you have 3 input files. Each input file contains the same variables for a different time period. To select the variables T, U and V on the levels 200, 500 and 850 from all 3 input files, use:
cdo select,name=T,U,V,level=200,500,850 infile1 infile2 infile3 outfile
To remove February 29th use:
cdo delete,dom=29feb infile outfile
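The first/last[/inc] range notation used by several parameters above expands as follows. This is an illustration of the notation, not CDO's parser:

```python
def expand_range(spec):
    """Expand the 'first/last[/inc]' notation into the list of integers
    it denotes; a plain comma-separated list is returned as-is."""
    if "/" in spec:
        parts = [int(p) for p in spec.split("/")]
        first, last = parts[0], parts[1]
        inc = parts[2] if len(parts) > 2 else 1
        return list(range(first, last + 1, inc))
    return [int(p) for p in spec.split(",")]

print(expand_range("1/12"))          # all twelve months
print(expand_range("1979/2018/10"))  # every tenth year
print(expand_range("129,130,139"))   # plain comma-separated list
```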
2.3.2 SELMULTI  Select multiple fields via GRIB1 parameters
Synopsis
<operator>,selection-specification infile outfile
Description
This module selects multiple fields from infile and writes them to outfile. selection-specification is a filename or an in-place string with the selection specification. Each selection specification has the following compact notation format:
<type>(parameters; leveltype(s); levels)
 type

sel for select or del for delete (optional)
 parameters

GRIB1 parameter code number
 leveltype

GRIB1 level type
 levels

value of each level
Examples:
(1; 103; 0)
(33,34; 105; 10)
(11,17; 105; 2)
(71,73,74,75,61,62,65,117,67,122,121,11,131,66,84,111,112; 105; 0)
The following descriptive notation can also be used for selection specification from a file:
SELECT/DELETE, PARAMETER=parameters, LEVTYPE=leveltype(s), LEVEL=levels
Examples:
SELECT, PARAMETER=1, LEVTYPE=103, LEVEL=0
SELECT, PARAMETER=33/34, LEVTYPE=105, LEVEL=10
SELECT, PARAMETER=11/17, LEVTYPE=105, LEVEL=2
SELECT, PARAMETER=71/73/74/75/61/62/65/117/67/122, LEVTYPE=105, LEVEL=0
DELETE, PARAMETER=128, LEVTYPE=109, LEVEL=*
The following will convert pressure from Pa into hPa and temperature from Kelvin to Celsius:
SELECT, PARAMETER=1, LEVTYPE= 103, LEVEL=0, SCALE=0.01
SELECT, PARAMETER=11, LEVTYPE=105, LEVEL=2, OFFSET=273.15
If SCALE and/or OFFSET are defined, then the data values are scaled as SCALE*(VALUE-OFFSET).
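The SCALE/OFFSET transformation from the two SELECT lines above can be illustrated with plain arithmetic; this is a sketch of the formula, not CDO code:

```python
def apply_scale_offset(value, scale=1.0, offset=0.0):
    """Apply the selmulti transformation SCALE*(VALUE-OFFSET)."""
    return scale * (value - offset)

# Pressure: Pa -> hPa with SCALE=0.01
print(apply_scale_offset(101325.0, scale=0.01))
# Temperature: Kelvin -> Celsius with OFFSET=273.15
print(apply_scale_offset(293.15, offset=273.15))
```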
Operators
 selmulti

Select multiple fields
 delmulti

Delete multiple fields
 changemulti

Change identification of multiple fields
Example
Change ECMWF GRIB code of surface pressure to Hirlam notation:
cdo changemulti,’{(134;1;*|1;105;*)}’ infile outfile
2.3.3 SELVAR  Select fields
Synopsis
<operator>,params infile outfile
selcode,codes infile outfile
delcode,codes infile outfile
selname,names infile outfile
delname,names infile outfile
selstdname,stdnames infile outfile
sellevel,levels infile outfile
sellevidx,levidx infile outfile
selgrid,grids infile outfile
selzaxis,zaxes infile outfile
selzaxisname,zaxisnames infile outfile
selltype,ltypes infile outfile
seltabnum,tabnums infile outfile
Description
This module selects some fields from infile and writes them to outfile. The fields selected depend on the chosen operator and the parameters. A range of integer values can be specified by first/last[/inc].
Operators
selparam
Select parameters by identifier
Selects all fields with parameter identifiers in a user given list.

delparam
Delete parameters by identifier
Deletes all fields with parameter identifiers in a user given list.

selcode
Select parameters by code number
Selects all fields with code numbers in a user given list or range.

delcode
Delete parameters by code number
Deletes all fields with code numbers in a user given list or range.

selname
Select parameters by name
Selects all fields with parameter names in a user given list.

delname
Delete parameters by name
Deletes all fields with parameter names in a user given list.

selstdname
Select parameters by standard name
Selects all fields with standard names in a user given list.

sellevel
Select levels
Selects all fields with levels in a user given list.

sellevidx
Select levels by index
Selects all fields with index of levels in a user given list or range.

selgrid
Select grids
Selects all fields with grids in a user given list.

selzaxis
Select zaxes
Selects all fields with zaxes in a user given list.

selzaxisname
Select zaxes by name
Selects all fields with zaxis names in a user given list.

selltype
Select GRIB level types
Selects all fields with GRIB level type in a user given list or range.

seltabnum
Select parameter table numbers
Selects all fields with parameter table numbers in a user given list or range.
Parameter
params
STRING Comma-separated list of parameter identifiers.

codes
INTEGER Comma-separated list or first/last[/inc] range of code numbers.

names
STRING Comma-separated list of variable names.

stdnames
STRING Comma-separated list of standard names.

levels
FLOAT Comma-separated list of vertical levels.

levidx
INTEGER Comma-separated list or first/last[/inc] range of index of levels.

ltypes
INTEGER Comma-separated list or first/last[/inc] range of GRIB level types.

grids
STRING Comma-separated list of grid names or numbers.

zaxes
STRING Comma-separated list of zaxis types or numbers.

zaxisnames
STRING Comma-separated list of zaxis names.

tabnums
INTEGER Comma-separated list or range of parameter table numbers.
Example
Assume an input dataset has three variables with the code numbers 129, 130 and 139. To select the variables with the code numbers 129 and 139 use:
cdo selcode,129,139 infile outfile
You can also select the code number 129 and 139 by deleting the code number 130 with:
cdo delcode,130 infile outfile
2.3.4 SELTIME  Select timesteps
Synopsis
seltimestep,timesteps infile outfile
seltime,times infile outfile
selhour,hours infile outfile
selday,days infile outfile
selmonth,months infile outfile
selyear,years infile outfile
selseason,seasons infile outfile
seldate,startdate[,enddate] infile outfile
selsmon,month[,nts1[,nts2]] infile outfile
Description
This module selects user specified timesteps from infile and writes them to outfile. The timesteps selected depend on the chosen operator and the parameters. A range of integer values can be specified by first/last[/inc].
Operators
seltimestep
Select timesteps
Selects all timesteps with a timestep in a user given list or range.

seltime
Select times
Selects all timesteps with a time in a user given list or range.

selhour
Select hours
Selects all timesteps with an hour in a user given list or range.

selday
Select days
Selects all timesteps with a day in a user given list or range.

selmonth
Select months
Selects all timesteps with a month in a user given list or range.

selyear
Select years
Selects all timesteps with a year in a user given list or range.

selseason
Select seasons
Selects all timesteps with a month of a season in a user given list.

seldate
Select dates
Selects all timesteps with a date in a user given range.

selsmon
Select single month
Selects a month and optionally an arbitrary number of timesteps before and after this month.
Parameter
timesteps
INTEGER Comma-separated list or first/last[/inc] range of timesteps. Negative values select timesteps from the end (NetCDF only).

times
STRING Comma-separated list of times (format hh:mm:ss).

hours
INTEGER Comma-separated list or first/last[/inc] range of hours.

days
INTEGER Comma-separated list or first/last[/inc] range of days.

months
INTEGER Comma-separated list or first/last[/inc] range of months.

years
INTEGER Comma-separated list or first/last[/inc] range of years.

seasons
STRING Comma-separated list of seasons (substring of DJFMAMJJASOND or ANN).

startdate
STRING Start date (format YYYY-MM-DDThh:mm:ss).

enddate
STRING End date (format YYYY-MM-DDThh:mm:ss) [default: startdate].

nts1
INTEGER Number of timesteps before the selected month [default: 0].

nts2
INTEGER Number of timesteps after the selected month [default: nts1].
2.3.5 SELBOX  Select a box
Synopsis
sellonlatbox,lon1,lon2,lat1,lat2 infile outfile
selindexbox,idx1,idx2,idy1,idy2 infile outfile
Description
Selects grid cells inside a lon/lat or index box.
Operators
 sellonlatbox

Select a longitude/latitude box
Selects grid cells inside a lon/lat box. The user must specify the longitude and latitude of the edges of the box. Only those grid cells are considered whose grid center lies within the lon/lat box. For rotated lon/lat grids the parameters must be specified in rotated coordinates.  selindexbox

Select an index box
Selects grid cells within an index box. The user must specify the indices of the edges of the box. The index of the left edge can be greater than the one of the right edge. Use negative indexing to start from the end. The input grid must be a regular lon/lat or a 2D curvilinear grid.
Parameter
 lon1

FLOAT Western longitude in degrees
 lon2

FLOAT Eastern longitude in degrees
 lat1

FLOAT Southern or northern latitude in degrees
 lat2

FLOAT Northern or southern latitude in degrees
 idx1

INTEGER Index of first longitude (1 - nlon)
 idx2

INTEGER Index of last longitude (1 - nlon)
 idy1

INTEGER Index of first latitude (1 - nlat)
 idy2

INTEGER Index of last latitude (1 - nlat)
Example
To select the region with the longitudes from 30W to 60E and latitudes from 30N to 80N from all input fields use:
cdo sellonlatbox,-30,60,30,80 infile outfile
If the input dataset has fields on a Gaussian N16 grid, the same box can be selected with selindexbox by:
cdo selindexbox,60,11,3,11 infile outfile
2.3.6 SELREGION  Select horizontal regions
Synopsis
selregion,regions infile outfile
selcircle[,lon,lat,radius] infile outfile
Description
Selects all grid cells with the center point inside user defined regions or a circle. The resulting grid is unstructured.
Operators
 selregion

Select cells inside regions
Selects all grid cells with the center point inside the regions. The user has to give ASCII formatted files with different regions. A region is defined by a polygon. Each line of a polygon description file contains the longitude and latitude of one point. Each polygon description file can contain one or more polygons separated by a line with the character &.
 selcircle

Select cells inside a circle
Selects all grid cells with the center point inside a circle. The circle is described by geographic coordinates of the center and the radius of the circle.
Parameter
 regions

STRING Comma-separated list of ASCII formatted files with different regions
 lon

FLOAT Longitude of the center of the circle in degrees, default lon=0.0
 lat

FLOAT Latitude of the center of the circle in degrees, default lat=0.0
 radius

STRING Radius of the circle, default radius=1deg (units: deg, rad, km, m)
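A polygon description file for selregion can be written directly from the shell. The sketch below builds a file with two illustrative boxes; the file name and coordinates are made up for this example.

```shell
# Build a region file: one "lon lat" vertex per line, polygons separated
# by a line containing '&'. File name and coordinates are illustrative.
cat > tropics_boxes.txt << 'EOF'
-60 10
-60 -10
-30 -10
-30 10
&
100 10
100 -10
140 -10
140 10
EOF
# With cdo installed, select the cells inside both boxes with:
#   cdo selregion,tropics_boxes.txt infile outfile
wc -l < tropics_boxes.txt
```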
2.3.7 SELGRIDCELL  Select grid cells
Synopsis
<operator>,indices infile outfile
Description
The operator selects grid cells of all fields from infile. The user must specify the index of each grid cell. The resulting grid in outfile is unstructured.
Operators
 selgridcell

Select grid cells
 delgridcell

Delete grid cells
Parameter
 indices

INTEGER Comma-separated list or first/last[/inc] range of indices
2.3.8 SAMPLEGRID  Resample grid
Synopsis
samplegrid,factor infile outfile
Description
This is a special operator for resampling the horizontal grid. No interpolation takes place. Resample factor=2 means every second grid point is removed. Only rectilinear and curvilinear source grids are supported by this operator.
Parameter
 factor

INTEGER Resample factor, typically 2, which will halve the resolution
2.3.9 SELYEARIDX  Select year by index
Synopsis
selyearidx infile1 infile2 outfile
Description
Selects field elements from infile2 by a yearly time index from infile1. The yearly indices in infile1 should be the result of a corresponding yearminidx or yearmaxidx operation.
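As a sketch, a typical two-step pipeline: first derive the yearly indices with yearmaxidx, then select the matching fields. The file names are hypothetical, so the commands are printed rather than executed.

```shell
# Hypothetical SELYEARIDX pipeline; temp.nc etc. are placeholder names.
# Printed rather than executed, since it needs cdo and real input data.
step1='cdo yearmaxidx temp.nc idx.nc'          # timestep index of yearly maximum
step2='cdo selyearidx idx.nc temp.nc tmax.nc'  # fields at those indices
printf '%s\n' "$step1" "$step2"
```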
2.3.10 SELSURFACE  Extract surface
Synopsis
<operator> infile outfile
isosurface,isovalue infile outfile
Description
This module computes a surface from all 3D variables. The result is a horizontal 2D field.
Operators
 bottomvalue

Extract bottom level
This operator selects the valid values at the bottom level. The NetCDF CF compliant attribute positive is used to determine where top and bottom are. If this attribute is missing, low values are bottom and high values are top.
 topvalue

Extract top level
This operator selects the valid values at the top level. The NetCDF CF compliant attribute positive is used to determine where top and bottom are. If this attribute is missing, low values are bottom and high values are top.
 isosurface

Extract isosurface
This operator computes an isosurface. The value of the isosurface is specified by the parameter isovalue. The isosurface is calculated by linear interpolation between two layers.
Parameter
 isovalue

FLOAT Isosurface value
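For illustration, a hedged sketch of an isosurface call; the file names and the 271.15 K value are assumptions for this example, so the command is printed rather than executed.

```shell
# Extract the 271.15 K isosurface from a hypothetical 3D temperature file.
# Printed rather than executed, since it needs cdo and real input data.
cmd='cdo isosurface,271.15 temp3d.nc isosurf.nc'
echo "$cmd"
```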
2.4 Conditional selection
This section contains modules to conditionally select field elements. The fields in the first input file are handled as a mask. A value not equal to zero is treated as "true", zero is treated as "false".
Here is a short overview of all operators in this section:
ifthen  If then 
ifnotthen  If not then 
ifthenelse  If then else 
ifthenc  If then constant 
ifnotthenc  If not then constant 
reducegrid  Reduce input file variables to locations where the mask is non-zero. 
2.4.1 COND  Conditional select one field
Synopsis
<operator> infile1 infile2 outfile
Description
This module selects field elements from infile2 with respect to infile1 and writes them to outfile. The fields in infile1 are handled as a mask. A value not equal to zero is treated as "true", zero is treated as "false". The number of fields in infile1 must be the same as in infile2, the same as in one timestep of infile2, or exactly one. The fields in outfile inherit the metadata from infile2.
Operators
 ifthen

If then
o(t,x) = i_2(t,x) if i_1(t',x) ≠ 0 and i_1(t',x) ≠ miss; o(t,x) = miss otherwise
 ifnotthen

If not then
o(t,x) = i_2(t,x) if i_1(t',x) = 0 and i_1(t',x) ≠ miss; o(t,x) = miss otherwise
Example
To select all field elements of infile2 if the corresponding field element of infile1 is greater than 0 use:
cdo ifthen infile1 infile2 outfile
2.4.2 COND2  Conditional select two fields
Synopsis
ifthenelse infile1 infile2 infile3 outfile
Description
This operator selects field elements from infile2 or infile3 with respect to infile1 and writes them to outfile. The fields in infile1 are handled as a mask. A value not equal to zero is treated as "true", zero is treated as "false". The number of fields in infile1 must be the same as in infile2, the same as in one timestep of infile2, or exactly one. infile2 and infile3 need to have the same number of fields. The fields in outfile inherit the metadata from infile2.
o(t,x) = i_2(t,x) if i_1(t',x) ≠ 0 and i_1(t',x) ≠ miss
o(t,x) = i_3(t,x) if i_1(t',x) = 0 and i_1(t',x) ≠ miss
o(t,x) = miss if i_1(t',x) = miss
Example
To select all field elements of infile2 if the corresponding field element of infile1 is greater than 0 and from infile3 otherwise use:
cdo ifthenelse infile1 infile2 infile3 outfile
2.4.3 CONDC  Conditional select a constant
Synopsis
<operator>,c infile outfile
Description
This module creates fields with a constant value or missing value. The fields in infile are handled as a mask. A value not equal to zero is treated as "true", zero is treated as "false".
Operators
 ifthenc

If then constant
o(t,x) = c if i(t,x) ≠ 0 and i(t,x) ≠ miss; o(t,x) = miss otherwise
 ifnotthenc

If not then constant
o(t,x) = c if i(t,x) = 0; o(t,x) = miss otherwise
Parameter
 c

FLOAT Constant
Example
To create fields with the constant value 7 if the corresponding field element of infile is greater than 0 use:
cdo ifthenc,7 infile outfile
2.4.4 MAPREDUCE  Reduce fields to userdefined mask
Synopsis
reducegrid,mask[,limitCoordsOutput] infile outfile
Description
This module holds an operator for data reduction based on a user-defined mask. The output grid is unstructured and includes coordinate bounds. Bounds can be avoided by using the additional ’nobounds’ keyword. With ’nocoords’ given, coordinates are completely suppressed.
Parameter
 mask

STRING file which holds the mask field
 limitCoordsOutput

STRING optional parameter to limit coordinates output: ’nobounds’ disables coordinate bounds, ’nocoords’ avoids all coordinate information
Example
To limit data fields to land values, a mask has to be created first with
cdo -gtc,0 -topo,ni96 lsm_gme96.grb
Here a GME grid is used. Say temp_gme96.grb contains a global temperature field. The following command limits the global grid to land points.
cdo -f nc -reducegrid,lsm_gme96.grb temp_gme96.grb tempOnLand_gme96.nc
Note that the output file type is NetCDF, because unstructured grids cannot be stored in GRIB format.
2.5 Comparison
This section contains modules to compare datasets. The resulting field is a mask containing 1 if the comparison is true and 0 if not.
Here is a short overview of all operators in this section:
eq  Equal 
ne  Not equal 
le  Less equal 
lt  Less than 
ge  Greater equal 
gt  Greater than 
eqc  Equal constant 
nec  Not equal constant 
lec  Less equal constant 
ltc  Less than constant 
gec  Greater equal constant 
gtc  Greater than constant 
2.5.1 COMP  Comparison of two fields
Synopsis
<operator> infile1 infile2 outfile
Description
This module compares two datasets field by field. The resulting field is a mask containing 1 if the comparison is true and 0 if not. The number of fields in infile1 should be the same as in infile2. One of the input files can contain only one timestep or one field. The fields in outfile inherit the meta data from infile1 or infile2. The type of comparison depends on the chosen operator.
Operators
 eq

Equal
o(t,x) = 1 if i_1(t,x) = i_2(t,x), 0 otherwise (miss if i_1 or i_2 is miss)
 ne

Not equal
o(t,x) = 1 if i_1(t,x) ≠ i_2(t,x), 0 otherwise (miss if i_1 or i_2 is miss)
 le

Less equal
o(t,x) = 1 if i_1(t,x) ≤ i_2(t,x), 0 otherwise (miss if i_1 or i_2 is miss)
 lt

Less than
o(t,x) = 1 if i_1(t,x) < i_2(t,x), 0 otherwise (miss if i_1 or i_2 is miss)
 ge

Greater equal
o(t,x) = 1 if i_1(t,x) ≥ i_2(t,x), 0 otherwise (miss if i_1 or i_2 is miss)
 gt

Greater than
o(t,x) = 1 if i_1(t,x) > i_2(t,x), 0 otherwise (miss if i_1 or i_2 is miss)
Example
To create a mask containing 1 if the elements of two fields are the same and 0 if the elements are different use:
cdo eq infile1 infile2 outfile
2.5.2 COMPC  Comparison of a field with a constant
Synopsis
<operator>,c infile outfile
Description
This module compares all fields of a dataset with a constant. The resulting field is a mask containing 1 if the comparison is true and 0 if not. The type of comparison depends on the chosen operator.
Operators
 eqc

Equal constant
o(t,x) = 1 if i(t,x) = c, 0 otherwise (miss if i(t,x) is miss)
 nec

Not equal constant
o(t,x) = 1 if i(t,x) ≠ c, 0 otherwise (miss if i(t,x) is miss)
 lec

Less equal constant
o(t,x) = 1 if i(t,x) ≤ c, 0 otherwise (miss if i(t,x) is miss)
 ltc

Less than constant
o(t,x) = 1 if i(t,x) < c, 0 otherwise (miss if i(t,x) is miss)
 gec

Greater equal constant
o(t,x) = 1 if i(t,x) ≥ c, 0 otherwise (miss if i(t,x) is miss)
 gtc

Greater than constant
o(t,x) = 1 if i(t,x) > c, 0 otherwise (miss if i(t,x) is miss)
Parameter
 c

FLOAT Constant
Example
To create a mask containing 1 if the field element is greater than 273.15 and 0 if not use:
cdo gtc,273.15 infile outfile
2.6 Modification
This section contains modules to modify the metadata, fields or part of a field in a dataset.
Here is a short overview of all operators in this section:
setattribute  Set attributes 
setpartabp  Set parameter table 
setpartabn  Set parameter table 
setcodetab  Set parameter code table 
setcode  Set code number 
setparam  Set parameter identifier 
setname  Set variable name 
setunit  Set variable unit 
setlevel  Set level 
setltype  Set GRIB level type 
setdate  Set date 
settime  Set time of the day 
setday  Set day 
setmon  Set month 
setyear  Set year 
settunits  Set time units 
settaxis  Set time axis 
settbounds  Set time bounds 
setreftime  Set reference time 
setcalendar  Set calendar 
shifttime  Shift timesteps 
chcode  Change code number 
chparam  Change parameter identifier 
chname  Change variable or coordinate name 
chunit  Change variable unit 
chlevel  Change level 
chlevelc  Change level of one code 
chlevelv  Change level of one variable 
setgrid  Set grid 
setgridtype  Set grid type 
setgridarea  Set grid cell area 
setgridmask  Set grid mask 
setzaxis  Set zaxis 
genlevelbounds  Generate level bounds 
invertlat  Invert latitudes 
invertlev  Invert levels 
shiftx  Shift x 
shifty  Shift y 
maskregion  Mask regions 
masklonlatbox  Mask a longitude/latitude box 
maskindexbox  Mask an index box 
setclonlatbox  Set a longitude/latitude box to constant 
setcindexbox  Set an index box to constant 
enlarge  Enlarge fields 
setmissval  Set a new missing value 
setctomiss  Set constant to missing value 
setmisstoc  Set missing value to constant 
setrtomiss  Set range to missing value 
setvrange  Set valid range 
setmisstonn  Set missing value to nearest neighbor 
setmisstodis  Set missing value to distance-weighted average 
setgridcell  Set the value of a grid cell 
2.6.1 SETATTRIBUTE  Set attributes
Synopsis
setattribute,attributes infile outfile
Description
This operator sets attributes of a dataset and writes the result to outfile. The new attributes are only available in outfile if the file format supports attributes.
Each attribute has the following structure:
[var_nm@]att_nm[:s|d|i]=[att_val|{[var_nm@]att_nm}]
 var_nm

Variable name (optional). Example: pressure
 att_nm

Attribute name. Example: units
 att_val

Comma-separated list of attribute values. Example: pascal
The value of var_nm is the name of the variable containing the attribute (named att_nm) that you want to set. Use wildcards to set the attribute att_nm to more than one variable. A value of var_nm of ’*’ will set the attribute att_nm to all data variables. If var_nm is missing then att_nm refers to a global attribute.
The value of att_nm is the name of the attribute you want to set. For each attribute a string (att_nm:s), a double (att_nm:d) or an integer (att_nm:i) type can be defined. By default the native type is set.
The value of att_val is the contents of the attribute att_nm. att_val may be a single value or onedimensional array of elements. The type and the number of elements of an attribute will be detected automatically from the contents of the values. An already existing attribute att_nm will be overwritten or it will be removed if att_val is omitted. Alternatively, the values of an existing attribute can be copied. This attribute must then be enclosed in curly brackets.
The attribute name FILE has a special meaning. If it is the first attribute, then all attributes are read from a file specified in the value of att_val.
Parameter
 attributes

STRING Comma-separated list of attributes.
Note
Attributes are evaluated by CDO when opening infile. Therefore the result of this operator is not available to other operators when it is used in a chain of operators.
Example
To set the units of the variable pressure to pascal use:
cdo setattribute,pressure@units=pascal infile outfile
To set the global text attribute "my_att" to "my contents", use:
cdo setattribute,my_att="my contents" infile outfile
Result of ’ncdump -h outfile’:
netcdf outfile {
dimensions: ...
variables: ...
// global attributes:
:my_att = "my contents" ;
}
2.6.2 SETPARTAB  Set parameter table
Synopsis
<operator>,table[,convert] infile outfile
Description
This module transforms data and metadata of infile via a parameter table and writes the result to outfile. A parameter table is an ASCII formatted file with a set of parameter entries for each variable. Each new set has to start with "&parameter" and end with "/".
The following parameter table entries are supported:
Entry            Type     Description
name             WORD     Name of the variable
out_name         WORD     New name of the variable
param            WORD     Parameter identifier (GRIB1: code[.tabnum]; GRIB2: num[.cat[.dis]])
out_param        WORD     New parameter identifier
type             WORD     Data type (real or double)
standard_name    WORD     As defined in the CF standard name table
long_name        STRING   Describing the variable
units            STRING   Specifying the units for the variable
comment          STRING   Information concerning the variable
cell_methods     STRING   Information concerning calculation of means or climatologies
cell_measures    STRING   Indicates the names of the variables containing cell areas and volumes
missing_value    FLOAT    Specifying how missing data will be identified
valid_min        FLOAT    Minimum valid value
valid_max        FLOAT    Maximum valid value
ok_min_mean_abs  FLOAT    Minimum absolute mean
ok_max_mean_abs  FLOAT    Maximum absolute mean
factor           FLOAT    Scale factor
delete           INTEGER  Set to 1 to delete variable
convert          INTEGER  Set to 1 to convert the unit if necessary
Unsupported parameter table entries are stored as variable attributes. The search key for the variable depends on the operator. Use setpartabn to search variables by the name. This is typically used for NetCDF datasets. The operator setpartabp searches variables by the parameter ID.
Operators
 setpartabp

Set parameter table
Search variables by the parameter identifier.
 setpartabn

Set parameter table
Search variables by name.
Parameter
 table

STRING Parameter table file or name
 convert

STRING Converts the units if necessary
Example
Here is an example of a parameter table for one variable:
prompt> cat mypartab
&parameter
name = t
out_name = ta
standard_name = air_temperature
units = "K"
missing_value = 1.0e+20
valid_min = 157.1
valid_max = 336.3
/
To apply this parameter table to a dataset use:
cdo setpartabn,mypartab,convert infile outfile
This command renames the variable t to ta. The standard name of this variable is set to air_temperature and the unit is set to [K] (converts the unit if necessary). The missing value will be set to 1.0e+20. In addition it will be checked whether the values of the variable are in the range of 157.1 to 336.3.
2.6.3 SET  Set field info
Synopsis
setcodetab,table infile outfile
setcode,code infile outfile
setparam,param infile outfile
setname,name infile outfile
setunit,unit infile outfile
setlevel,level infile outfile
setltype,ltype infile outfile
Description
This module sets some field information. Depending on the chosen operator the parameter table, code number, parameter identifier, variable name or level is set.
Operators
 setcodetab

Set parameter code table
Sets the parameter code table for all variables.
 setcode

Set code number
Sets the code number for all variables to the same given value.
 setparam

Set parameter identifier
Sets the parameter identifier of the first variable.
 setname

Set variable name
Sets the name of the first variable.
 setunit

Set variable unit
Sets the unit of the first variable.
 setlevel

Set level
Sets the first level of all variables.
 setltype

Set GRIB level type
Sets the GRIB level type of all variables.
Parameter
 table

STRING Parameter table file or name
 code

INTEGER Code number
 param

STRING Parameter identifier (GRIB1: code[.tabnum]; GRIB2: num[.cat[.dis]])
 name

STRING Variable name
 level

FLOAT New level
 ltype

INTEGER GRIB level type
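A sketch of common SET invocations with placeholder file names; note from the descriptions above that setname, setunit and setparam act on the first variable only, while setcode, setlevel and setltype act on all variables. The commands are printed rather than executed.

```shell
# Hypothetical SET invocations; "infile"/"outfile" are placeholders.
# Printed rather than executed, since they need cdo and real input data.
ex_name='cdo setname,ta infile outfile'   # rename the first variable to "ta"
ex_unit='cdo setunit,K infile outfile'    # set the unit of the first variable
ex_code='cdo setcode,130 infile outfile'  # set the code number of all variables
printf '%s\n' "$ex_name" "$ex_unit" "$ex_code"
```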
2.6.4 SETTIME  Set time
Synopsis
setdate,date infile outfile
settime,time infile outfile
setday,day infile outfile
setmon,month infile outfile
setyear,year infile outfile
settunits,units infile outfile
settaxis,date,time[,inc] infile outfile
settbounds,frequency infile outfile
setreftime,date,time[,units] infile outfile
setcalendar,calendar infile outfile
shifttime,sval infile outfile
Description
This module sets the time axis or part of the time axis. Which part of the time axis is overwritten/created depends on the chosen operator.
Operators
 setdate

Set date
Sets the date in every timestep to the same given value.
 settime

Set time of the day
Sets the time in every timestep to the same given value.
 setday

Set day
Sets the day in every timestep to the same given value.
 setmon

Set month
Sets the month in every timestep to the same given value.
 setyear

Set year
Sets the year in every timestep to the same given value.
 settunits

Set time units
Sets the base units of a relative time axis.
 settaxis

Set time axis
Sets the time axis.
 settbounds

Set time bounds
Sets the time bounds.
 setreftime

Set reference time
Sets the reference time of a relative time axis.
 setcalendar

Set calendar
Sets the calendar of a relative time axis.
 shifttime

Shift timesteps
Shifts all timesteps by the parameter sval.
Parameter
 day

INTEGER Value of the new day
 month

INTEGER Value of the new month
 year

INTEGER Value of the new year
 units

STRING Base units of the time axis (seconds, minutes, hours, days, months, years)
 date

STRING Date (format: YYYY-MM-DD)
 time

STRING Time (format: hh:mm:ss)
 inc

STRING Optional increment (seconds, minutes, hours, days, months, years) [default: 1hour]
 frequency

STRING Frequency of the time series (hour, day, month, year)
 calendar

STRING Calendar (standard, proleptic_gregorian, 360_day, 365_day, 366_day)
 sval

STRING Shift value (e.g. 3hour)
Example
To set the time axis to 1987-01-16 12:00:00 with an increment of one month for each timestep use:
cdo settaxis,1987-01-16,12:00:00,1mon infile outfile
Result of ’cdo showdate outfile’ for a dataset with 12 timesteps:
1987-01-16 1987-02-16 1987-03-16 1987-04-16 1987-05-16 1987-06-16 \
1987-07-16 1987-08-16 1987-09-16 1987-10-16 1987-11-16 1987-12-16
To shift this time axis backward by 15 days use:
cdo shifttime,-15days infile outfile
Result of ’cdo showdate outfile’:
1987-01-01 1987-02-01 1987-03-01 1987-04-01 1987-05-01 1987-06-01 \
1987-07-01 1987-08-01 1987-09-01 1987-10-01 1987-11-01 1987-12-01
2.6.5 CHANGE  Change field header
Synopsis
chcode,oldcode,newcode[,...] infile outfile
chparam,oldparam,newparam,... infile outfile
chname,oldname,newname,... infile outfile
chunit,oldunit,newunit,... infile outfile
chlevel,oldlev,newlev,... infile outfile
chlevelc,code,oldlev,newlev infile outfile
chlevelv,name,oldlev,newlev infile outfile
Description
This module reads fields from infile, changes some header values and writes the results to outfile. The kind of changes depends on the chosen operator.
Operators
 chcode

Change code number
Changes some user-given code numbers to new user-given values.
 chparam

Change parameter identifier
Changes some user-given parameter identifiers to new user-given values.
 chname

Change variable or coordinate name
Changes some user-given variable or coordinate names to new user-given names.
 chunit

Change variable unit
Changes some user-given variable units to new user-given units.
 chlevel

Change level
Changes some user-given levels to new user-given values.
 chlevelc

Change level of one code
Changes one level of a user-given code number.
 chlevelv

Change level of one variable
Changes one level of a user-given variable name.
Parameter
 code

INTEGER Code number
 oldcode,newcode,...

INTEGER Pairs of old and new code numbers
 oldparam,newparam,...

STRING Pairs of old and new parameter identifiers
 name

STRING Variable name
 oldname,newname,...

STRING Pairs of old and new variable names
 oldlev

FLOAT Old level
 newlev

FLOAT New level
 oldlev,newlev,...

FLOAT Pairs of old and new levels
Example
To change the code number 98 to 179 and 99 to 211 use:
cdo chcode,98,179,99,211 infile outfile
2.6.6 SETGRID  Set grid information
Synopsis
setgrid,grid infile outfile
setgridtype,gridtype infile outfile
setgridarea,gridarea infile outfile
setgridmask,gridmask infile outfile
Description
This module modifies the metadata of the horizontal grid. Depending on the chosen operator a new grid description is set, the coordinates are converted or the grid cell area is added.
Operators
 setgrid

Set grid
Sets a new grid description. The input fields need to have the same grid size as the size of the target grid description.
 setgridtype

Set grid type
Sets the grid type of all input fields. The following grid types are available:
 curvilinear

Converts a regular grid to a curvilinear grid
 unstructured

Converts a regular or curvilinear grid to an unstructured grid
 dereference

Dereference a reference to a grid
 regular

Linear interpolation of a reduced Gaussian grid to a regular Gaussian grid
 regularnn

Nearest neighbor interpolation of a reduced Gaussian grid to a regular Gaussian grid
 lonlat

Converts a regular lonlat grid stored as a curvilinear grid back to a lonlat grid
 setgridarea

Set grid cell area
Sets the grid cell area. The parameter gridarea is the path to a data file, the first field is used as grid cell area. The input fields need to have the same grid size as the grid cell area. The grid cell area is used to compute the weights of each grid cell if needed by an operator, e.g. for fldmean.
 setgridmask

Set grid mask
Sets the grid mask. The parameter gridmask is the path to a data file, the first field is used as the grid mask. The input fields need to have the same grid size as the grid mask. The grid mask is used as the target grid mask for remapping, e.g. for remapbil.
Parameter
 grid

STRING Grid description file or name
 gridtype

STRING Grid type (curvilinear, unstructured, regular, lonlat or dereference)
 gridarea

STRING Data file, the first field is used as grid cell area
 gridmask

STRING Data file, the first field is used as grid mask
Example
Assume a dataset has fields on a grid with 8192 elements but a missing or wrong grid description. To set the grid description of all input fields to a Gaussian N32 grid (8192 gridpoints) use:
cdo setgrid,n32 infile outfile
2.6.7 SETZAXIS  Set zaxis information
Synopsis
setzaxis,zaxis infile outfile
genlevelbounds[,zbot[,ztop]] infile outfile
Description
This module modifies the metadata of the vertical grid.
Operators
 setzaxis

Set zaxis
This operator sets the zaxis description of all variables with the same number of levels as the new zaxis.
 genlevelbounds

Generate level bounds
Generates the layer bounds of the zaxis.
Parameter
 zaxis

STRING Zaxis description file or name of the target zaxis
 zbot

FLOAT Specifying the bottom of the vertical column. Must have the same units as zaxis.
 ztop

FLOAT Specifying the top of the vertical column. Must have the same units as zaxis.
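A zaxis description file (see the Zaxis description section) can be created in the shell and passed to setzaxis. The sketch below uses an illustrative three-level pressure axis; the file name and level values are assumptions, and the cdo calls are shown in comments.

```shell
# Write a minimal pressure-zaxis description file; values are illustrative.
cat > myzaxis.txt << 'EOF'
zaxistype = pressure
size      = 3
levels    = 92500 85000 50000
EOF
# With cdo installed, apply it and generate layer bounds with:
#   cdo setzaxis,myzaxis.txt infile outfile
#   cdo genlevelbounds infile outfile
cat myzaxis.txt
```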
2.6.8 INVERT  Invert latitudes
Synopsis
invertlat infile outfile
Description
This operator inverts the latitudes of all fields on a rectilinear grid.
Example
To invert the latitudes of a 2D field from N->S to S->N use:
cdo invertlat infile outfile
2.6.9 INVERTLEV  Invert levels
Synopsis
invertlev infile outfile
Description
This operator inverts the levels of all 3D variables.
2.6.10 SHIFTXY  Shift field
Synopsis
<operator>,<nshift>,<cyclic>,<coord> infile outfile
Description
This module contains operators to shift all fields in x or y direction. All fields need to have the same horizontal rectilinear or curvilinear grid.
Operators
 shiftx

Shift x
Shifts all fields in x direction.
 shifty

Shift y
Shifts all fields in y direction.
Parameter
 nshift

INTEGER Number of grid cells to shift (default: 1)
 cyclic

STRING If set, cells are filled cyclically (default: missing value)
 coord

STRING If set, coordinates are also shifted
Example
To shift all input fields in the x direction by +1 cell and fill the new cells with missing values, use:
cdo shiftx infile outfile
To shift all input fields in the x direction by +1 cell and fill the new cells cyclically, use:
cdo shiftx,1,cyclic infile outfile
2.6.11 MASKREGION  Mask regions
Synopsis
maskregion,regions infile outfile
Description
Masks different regions of fields with a regular lon/lat grid. The elements inside a region are untouched, the elements outside are set to missing value. Considered are only those grid cells with the grid center inside the regions. All input fields must have the same horizontal grid. The user has to give ASCII formatted files with different regions. A region is defined by a polygon. Each line of a polygon description file contains the longitude and latitude of one point. Each polygon description file can contain one or more polygons separated by a line with the character &.
Parameter
 regions

STRING Commaseparated list of ASCII formatted files with different regions
Example
To mask the region with the longitudes from 120E to 90W and latitudes from 20N to 20S on all input fields use:
cdo maskregion,myregion infile outfile
For this example the description file of the region myregion should contain one polygon with the following four coordinates:
120 20
120 -20
270 -20
270 20
2.6.12 MASKBOX  Mask a box
Synopsis
masklonlatbox,lon1,lon2,lat1,lat2 infile outfile
maskindexbox,idx1,idx2,idy1,idy2 infile outfile
Description
Masks grid cells inside a lon/lat or index box. The elements inside the box are untouched, the elements outside are set to missing value. All input fields need to have the same horizontal grid. Use sellonlatbox or selindexbox if only the data inside the box are needed.
Operators
 masklonlatbox

Mask a longitude/latitude box
Masks grid cells inside a lon/lat box. The user must specify the longitude and latitude of the edges of the box. Only those grid cells are considered whose grid center lies within the lon/lat box. For rotated lon/lat grids the parameters must be specified in rotated coordinates.
 maskindexbox

Mask an index box
Masks grid cells within an index box. The user must specify the indices of the edges of the box. The index of the left edge can be greater than the one of the right edge. Use negative indexing to start from the end. The input grid must be a regular lon/lat or a 2D curvilinear grid.
Parameter
 lon1

FLOAT Western longitude
 lon2

FLOAT Eastern longitude
 lat1

FLOAT Southern or northern latitude
 lat2

FLOAT Northern or southern latitude
 idx1

INTEGER Index of first longitude
 idx2

INTEGER Index of last longitude
 idy1

INTEGER Index of first latitude
 idy2

INTEGER Index of last latitude
Example
To mask the region with the longitudes from 120E to 90W and latitudes from 20N to 20S on all input fields use:
cdo masklonlatbox,120,-90,20,-20 infile outfile
If the input dataset has fields on a Gaussian N16 grid, the same box can be masked with maskindexbox by:
cdo maskindexbox,23,48,13,20 infile outfile
2.6.13 SETBOX  Set a box to constant
Synopsis
setclonlatbox,c,lon1,lon2,lat1,lat2 infile outfile
setcindexbox,c,idx1,idx2,idy1,idy2 infile outfile
Description
Sets a box of the rectangularly understood field to a constant value. The elements outside the box are untouched, the elements inside are set to the given constant. All input fields need to have the same horizontal grid.
Operators
 setclonlatbox

Set a longitude/latitude box to constant
Sets the values of a longitude/latitude box to a constant value. The user has to give the longitudes and latitudes of the edges of the box.
 setcindexbox

Set an index box to constant
Sets the values of an index box to a constant value. The user has to give the indices of the edges of the box. The index of the left edge can be greater than the one of the right edge.
Parameter
 c

FLOAT Constant
 lon1

FLOAT Western longitude
 lon2

FLOAT Eastern longitude
 lat1

FLOAT Southern or northern latitude
 lat2

FLOAT Northern or southern latitude
 idx1

INTEGER Index of first longitude
 idx2

INTEGER Index of last longitude
 idy1

INTEGER Index of first latitude
 idy2

INTEGER Index of last latitude
Example
To set all values in the region with the longitudes from 120E to 90W and latitudes from 20N to 20S to the constant value 1.23 use:
cdo setclonlatbox,1.23,120,-90,20,-20 infile outfile
If the input dataset has fields on a Gaussian N16 grid, the same box can be set with setcindexbox by:
cdo setcindexbox,1.23,23,48,13,20 infile outfile
2.6.14 ENLARGE  Enlarge fields
Synopsis
enlarge,grid infile outfile
Description
Enlarge all fields of infile to a user-given horizontal grid. Normally only the last field element is used for the enlargement. If, however, the input and output grids are regular lon/lat grids, a zonal or meridional enlargement is possible. Zonal enlargement takes place if the xsize of the input field is 1 and the ysize of both grids is the same. For meridional enlargement the ysize has to be 1 and the xsize of both grids must be the same.
Parameter
 grid

STRING Target grid description file or name
Example
Assume you want to add two datasets. The first dataset is a field on a global grid (n field elements) and the second dataset is a global mean (1 field element). Before you can add these two datasets, the second dataset has to be enlarged to the grid size of the first dataset:
cdo enlarge,infile1 infile2 tmpfile
cdo add infile1 tmpfile outfile
Or shorter using operator piping:
cdo add infile1 enlarge,infile1 infile2 outfile
2.6.15 SETMISS  Set missing value
Synopsis
setmissval,newmiss infile outfile
setctomiss,c infile outfile
setmisstoc,c infile outfile
setrtomiss,rmin,rmax infile outfile
setvrange,rmin,rmax infile outfile
setmisstonn infile outfile
setmisstodis[,neighbors] infile outfile
Description
This module sets part of a field to missing value or missing values to a constant value. Which part of the field is set depends on the chosen operator.
Operators
 setmissval

Set a new missing value
o(t,x) = newmiss if i(t,x) = miss; o(t,x) = i(t,x) otherwise
 setctomiss

Set constant to missing value
o(t,x) = miss if i(t,x) = c; o(t,x) = i(t,x) otherwise
 setmisstoc

Set missing value to constant
o(t,x) = c if i(t,x) = miss; o(t,x) = i(t,x) otherwise
 setrtomiss

Set range to missing value
o(t,x) = miss if rmin ≤ i(t,x) ≤ rmax; o(t,x) = i(t,x) otherwise
 setvrange

Set valid range
o(t,x) = miss if i(t,x) < rmin or i(t,x) > rmax; o(t,x) = i(t,x) otherwise
 setmisstonn

Set missing value to nearest neighbor
Set all missing values to the nearest non-missing value.
 setmisstodis

Set missing value to distance-weighted average
Set all missing values to the distance-weighted average of the nearest non-missing values. The default number of nearest neighbors is 4.
Parameter
 neighbors

INTEGER Number of nearest neighbors
 newmiss

FLOAT New missing value
 c

FLOAT Constant
 rmin

FLOAT Lower bound
 rmax

FLOAT Upper bound
Example
setrtomiss
Assume an input dataset has one field with temperatures in the range from 246 to 304 Kelvin. To set all values below 273.15 Kelvin to missing value use:
cdo setrtomiss,0,273.15 infile outfile
Result of ’cdo info infile’:
1 : Date Time Code Level Size Miss : Minimum Mean Maximum
1 : 19871231 12:00:00 139 0 2048 0 : 246.27 276.75 303.71
Result of ’cdo info outfile’:
1 : Date Time Code Level Size Miss : Minimum Mean Maximum
1 : 19871231 12:00:00 139 0 2048 871 : 273.16 287.08 303.71
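The setrtomiss rule can be sketched in Python; `MISS` is an assumed placeholder for the dataset's missing value, not CDO's actual default:

```python
# Sketch of setrtomiss: values inside the closed range [rmin, rmax] become
# the missing value; everything else passes through unchanged.
MISS = -9e33  # assumed missing-value placeholder

def setrtomiss(field, rmin, rmax):
    return [MISS if rmin <= v <= rmax else v for v in field]

temps = [246.27, 273.16, 280.0, 303.71]
# 246.27 lies inside [0, 273.15] and is set to MISS; the rest are kept.
print(setrtomiss(temps, 0, 273.15))
```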
setmisstonn
Set all missing values to the nearest non-missing value:
cdo setmisstonn infile outfile
Below is a schematic illustration of this example:
On the left side is input data with missing values in grey and on the right side the result with the filled missing values.
2.6.16 SETGRIDCELL  Set the value of a grid cell
Synopsis
setgridcell,params infile outfile
Description
This operator sets the value of the selected grid cells. The grid cells can be selected by a comma-separated list of grid cell indices or by a mask. The mask is read from a data file, which may contain only one field. If no grid cells are selected, all values are set.
Parameter
 value

FLOAT Value of the grid cell
 cell

INTEGER Comma-separated list of grid cell indices
 mask

STRING Name of the data file which contains the mask
2.7 Arithmetic
This section contains modules to arithmetically process datasets.
Here is a short overview of all operators in this section:
expr  Evaluate expressions 
exprf  Evaluate expressions script 
aexpr  Evaluate expressions and append results 
aexprf  Evaluate expression script and append results 
abs  Absolute value 
int  Integer value 
nint  Nearest integer value 
pow  Power 
sqr  Square 
sqrt  Square root 
exp  Exponential 
ln  Natural logarithm 
log10  Base 10 logarithm 
sin  Sine 
cos  Cosine 
tan  Tangent 
asin  Arc sine 
acos  Arc cosine 
atan  Arc tangent 
reci  Reciprocal value 
not  Logical NOT 
addc  Add a constant 
subc  Subtract a constant 
mulc  Multiply with a constant 
divc  Divide by a constant 
minc  Minimum of a field and a constant 
maxc  Maximum of a field and a constant 
add  Add two fields 
sub  Subtract two fields 
mul  Multiply two fields 
div  Divide two fields 
min  Minimum of two fields 
max  Maximum of two fields 
atan2  Arc tangent of two fields 
dayadd  Add daily time series 
daysub  Subtract daily time series 
daymul  Multiply daily time series 
daydiv  Divide daily time series 
monadd  Add monthly time series 
monsub  Subtract monthly time series 
monmul  Multiply monthly time series 
mondiv  Divide monthly time series 
yearadd  Add yearly time series 
yearsub  Subtract yearly time series 
yearmul  Multiply yearly time series 
yeardiv  Divide yearly time series 
yhouradd  Add multi-year hourly time series 
yhoursub  Subtract multi-year hourly time series 
yhourmul  Multiply multi-year hourly time series 
yhourdiv  Divide multi-year hourly time series 
ydayadd  Add multi-year daily time series 
ydaysub  Subtract multi-year daily time series 
ydaymul  Multiply multi-year daily time series 
ydaydiv  Divide multi-year daily time series 
ymonadd  Add multi-year monthly time series 
ymonsub  Subtract multi-year monthly time series 
ymonmul  Multiply multi-year monthly time series 
ymondiv  Divide multi-year monthly time series 
yseasadd  Add multi-year seasonal time series 
yseassub  Subtract multi-year seasonal time series 
yseasmul  Multiply multi-year seasonal time series 
yseasdiv  Divide multi-year seasonal time series 
muldpm  Multiply with days per month 
divdpm  Divide by days per month 
muldpy  Multiply with days per year 
divdpy  Divide by days per year 
mulcoslat  Multiply with the cosine of the latitude 
divcoslat  Divide by cosine of the latitude 
2.7.1 EXPR  Evaluate expressions
Synopsis
expr,instr infile outfile
exprf,filename infile outfile
aexpr,instr infile outfile
aexprf,filename infile outfile
Description
This module arithmetically processes every timestep of the input dataset. Each individual assignment statement has to end with a semicolon. The special key _ALL_ is used as a template. A statement with a template is replaced for all variable names. Unlike regular variables, temporary variables are never written to the output stream. To define a temporary variable, simply prefix the variable name with an underscore (e.g. _varname) when the variable is declared.
The following operators are supported:

Operator  Meaning              Example    Result
=         assignment           x = y      Assigns y to x
+         addition             x + y      Sum of x and y
-         subtraction          x - y      Difference of x and y
*         multiplication       x * y      Product of x and y
/         division             x / y      Quotient of x and y
^         exponentiation       x ^ y      Exponentiates x with y
==        equal to             x == y     1, if x equal to y; else 0
!=        not equal to         x != y     1, if x not equal to y; else 0
>         greater than         x > y      1, if x greater than y; else 0
<         less than            x < y      1, if x less than y; else 0
>=        greater equal        x >= y     1, if x greater equal y; else 0
<=        less equal           x <= y     1, if x less equal y; else 0
<=>       less equal greater   x <=> y    -1, if x less y; 1, if x greater y; else 0
&&        logical AND          x && y     1, if x and y not equal 0; else 0
||        logical OR           x || y     1, if x or y not equal 0; else 0
!         logical NOT          !x         1, if x equal 0; else 0
?:        ternary conditional  x ? y : z  y, if x not equal 0; else z
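For illustration, the three-way comparison `<=>` can be sketched in Python (missing values are not modelled here):

```python
# Sketch of the "<=>" (less equal greater) operator from the expr language:
# -1 if x is less than y, 1 if x is greater than y, 0 if they are equal.
def less_equal_greater(x, y):
    return -1 if x < y else (1 if x > y else 0)

print(less_equal_greater(1, 2), less_equal_greater(3, 2), less_equal_greater(2, 2))  # -1 1 0
```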
The following functions are supported:
Math intrinsics:
 abs(x)

Absolute value of x
 floor(x)

Round to largest integral value not greater than x
 ceil(x)

Round to smallest integral value not less than x
 float(x)

32-bit float value of x
 int(x)

Integer value of x
 nint(x)

Nearest integer value of x
 sqr(x)

Square of x
 sqrt(x)

Square Root of x
 exp(x)

Exponential of x
 ln(x)

Natural logarithm of x
 log10(x)

Base 10 logarithm of x
 sin(x)

Sine of x, where x is specified in radians
 cos(x)

Cosine of x, where x is specified in radians
 tan(x)

Tangent of x, where x is specified in radians
 asin(x)

Arc sine of x; the result is in radians
 acos(x)

Arc cosine of x; the result is in radians
 atan(x)

Arc tangent of x; the result is in radians
 sinh(x)

Hyperbolic sine of x
 cosh(x)

Hyperbolic cosine of x
 tanh(x)

Hyperbolic tangent of x
 asinh(x)

Inverse hyperbolic sine of x
 acosh(x)

Inverse hyperbolic cosine of x
 atanh(x)

Inverse hyperbolic tangent of x
 rad(x)

Convert x from degrees to radians
 deg(x)

Convert x from radians to degrees
 rand(x)

Replace x by pseudorandom numbers in the range of 0 to 1
 isMissval(x)

Returns 1 where x is missing
 mod(x,y)

Floating-point remainder of x/y
 min(x,y)

Minimum value of x and y
 max(x,y)

Maximum value of x and y
 pow(x,y)

Power function
 hypot(x,y)

Euclidean distance function, sqrt(x*x + y*y)
 atan2(x,y)

Arc tangent function of y/x, using signs to determine quadrants
Coordinates:
 clon(x)

Longitude coordinate of x (available only if x has geographical coordinates)
 clat(x)

Latitude coordinate of x (available only if x has geographical coordinates)
 gridarea(x)

Grid cell area of x (available only if x has geographical coordinates)
 clev(x)

Level coordinate of x (0, if x is a 2D surface variable)
 clevidx(x)

Level index of x (0, if x is a 2D surface variable)
 cthickness(x)

Layer thickness, upper minus lower level bound of x (1, if level bounds are missing)
 ctimestep()

Timestep number (1 to N)
 cdate()

Verification date as YYYYMMDD
 ctime()

Verification time as HHMMSS.millisecond
 cdeltat()

Difference between current and last timestep in seconds
 cday()

Day as DD
 cmonth()

Month as MM
 cyear()

Year as YYYY
 csecond()

Second as SS.millisecond
 cminute()

Minute as MM
 chour()

Hour as HH
Constants:
 ngp(x)

Number of horizontal grid points
 nlev(x)

Number of vertical levels
 size(x)

Total number of elements (ngp(x)*nlev(x))
 missval(x)

Returns the missing value of variable x
Statistical values over a field:
fldmin(x), fldmax(x), fldrange(x), fldsum(x), fldmean(x), fldavg(x), fldstd(x), fldstd1(x), fldvar(x), fldvar1(x), fldskew(x), fldkurt(x), fldmedian(x)
Zonal statistical values for regular 2D grids:
zonmin(x), zonmax(x), zonrange(x), zonsum(x), zonmean(x), zonavg(x), zonstd(x), zonstd1(x), zonvar(x), zonvar1(x), zonskew(x), zonkurt(x), zonmedian(x)
Vertical statistical values:
vertmin(x), vertmax(x), vertrange(x), vertsum(x), vertmean(x), vertavg(x), vertstd(x), vertstd1(x), vertvar(x), vertvar1(x)
Miscellaneous:
 sellevel(x,k)

Select level k of variable x
 sellevidx(x,k)

Select level index k of variable x
 sellevelrange(x,k1,k2)

Select all levels of variable x in the range k1 to k2
 sellevidxrange(x,k1,k2)

Select all level indices of variable x in the range k1 to k2
 remove(x)

Remove variable x from output stream
Operators
 expr

Evaluate expressions
The processing instructions are read from the parameter.
 exprf

Evaluate expressions script
Contrary to expr, the processing instructions are read from a file.
 aexpr

Evaluate expressions and append results
Same as expr, but keeps the input variables and appends the results.
 aexprf

Evaluate expression script and append results
Same as exprf, but keeps the input variables and appends the results.
Parameter
 instr

STRING Processing instructions (need to be ’quoted’ in most cases)
 filename

STRING File with processing instructions
Note
If the input stream contains duplicate entries of the same variable name then the last one is used.
Example
Assume an input dataset contains at least the variables ’aprl’, ’aprc’ and ’ts’. To create a new variable ’var1’ with the sum of ’aprl’ and ’aprc’, and a variable ’var2’ which converts the temperature ’ts’ from Kelvin to Celsius, use:
cdo expr,'var1=aprl+aprc;var2=ts-273.15;' infile outfile
The same example, but the instructions are read from a file:
cdo exprf,myexpr infile outfile
The file myexpr contains:
var1 = aprl + aprc;
var2 = ts - 273.15;
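The effect of the two assignments can be sketched in Python, applied element-wise to made-up sample fields (variable names follow the example; values are invented):

```python
# The two assignments from the myexpr script, evaluated element-wise.
aprl = [1.0, 2.0]          # large-scale precipitation (sample values)
aprc = [0.5, 0.5]          # convective precipitation (sample values)
ts   = [273.15, 300.0]     # surface temperature in Kelvin (sample values)

var1 = [a + b for a, b in zip(aprl, aprc)]   # var1 = aprl + aprc;
var2 = [t - 273.15 for t in ts]              # var2 = ts - 273.15;
print(var1, var2)
```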
2.7.2 MATH  Mathematical functions
Synopsis
<operator> infile outfile
Description
This module contains some standard mathematical functions. All trigonometric functions calculate with radians.
Operators
 abs

Absolute value
o(t,x) = abs(i(t,x))
 int

Integer value
o(t,x) = int(i(t,x))
 nint

Nearest integer value
o(t,x) = nint(i(t,x))
 pow

Power
o(t,x) = i(t,x)^y
 sqr

Square
o(t,x) = i(t,x)^2
 sqrt

Square root
o(t,x) = sqrt(i(t,x))
 exp

Exponential
o(t,x) = e^i(t,x)
 ln

Natural logarithm
o(t,x) = ln(i(t,x))
 log10

Base 10 logarithm
o(t,x) = log10(i(t,x))
 sin

Sine
o(t,x) = sin(i(t,x))
 cos

Cosine
o(t,x) = cos(i(t,x))
 tan

Tangent
o(t,x) = tan(i(t,x))
 asin

Arc sine
o(t,x) = arcsin(i(t,x))
 acos

Arc cosine
o(t,x) = arccos(i(t,x))
 atan

Arc tangent
o(t,x) = arctan(i(t,x))
 reci

Reciprocal value
o(t,x) = 1/i(t,x)
 not

Logical NOT
o(t,x) = 1, if i(t,x) equals 0; else 0
Example
To calculate the square root for all field elements use:
cdo sqrt infile outfile
2.7.3 ARITHC  Arithmetic with a constant
Synopsis
<operator>,c infile outfile
Description
This module performs simple arithmetic with all field elements of a dataset and a constant. The fields in outfile inherit the meta data from infile.
Operators
 addc

Add a constant
o(t,x) = i(t,x) + c
 subc

Subtract a constant
o(t,x) = i(t,x) - c
 mulc

Multiply with a constant
o(t,x) = i(t,x) * c
 divc

Divide by a constant
o(t,x) = i(t,x) / c
 minc

Minimum of a field and a constant
o(t,x) = min(i(t,x), c)
 maxc

Maximum of a field and a constant
o(t,x) = max(i(t,x), c)
Parameter
 c

FLOAT Constant
Example
To add the constant 273.15 to all input fields use:
cdo addc,273.15 infile outfile
2.7.4 ARITH  Arithmetic on two datasets
Synopsis
<operator> infile1 infile2 outfile
Description
This module performs simple arithmetic of two datasets. The number of fields in infile1 should be the same as in infile2. The fields in outfile inherit the meta data from infile1. All operators in this module simply process one field after the other from the two input files. Neither the order of the variables nor the date is checked. One of the input files can contain only one timestep or one variable.
Operators
 add

Add two fields
o(t,x) = i_{1}(t,x) + i_{2}(t,x)
 sub

Subtract two fields
o(t,x) = i_{1}(t,x) - i_{2}(t,x)
 mul

Multiply two fields
o(t,x) = i_{1}(t,x) * i_{2}(t,x)
 div

Divide two fields
o(t,x) = i_{1}(t,x) / i_{2}(t,x)
 min

Minimum of two fields
o(t,x) = min(i_{1}(t,x), i_{2}(t,x))
 max

Maximum of two fields
o(t,x) = max(i_{1}(t,x), i_{2}(t,x))
 atan2

Arc tangent of two fields
The atan2 operator calculates the arc tangent of two fields. The result is in radians, which is between -PI and PI (inclusive).
o(t,x) = atan2(i_{1}(t,x), i_{2}(t,x))
Example
To sum all fields of the first input file with the corresponding fields of the second input file use:
cdo add infile1 infile2 outfile
2.7.5 DAYARITH  Daily arithmetic
Synopsis
<operator> infile1 infile2 outfile
Description
This module performs simple arithmetic of a time series and one timestep with the same day, month and year. For each field in infile1, the corresponding field of the timestep in infile2 with the same day, month and year is used. The header information in infile1 has to be the same as in infile2. Usually infile2 is generated by an operator of the module DAYSTAT.
Operators
 dayadd

Add daily time series
Adds a time series and a daily time series.
 daysub

Subtract daily time series
Subtracts a time series and a daily time series.
 daymul

Multiply daily time series
Multiplies a time series and a daily time series.
 daydiv

Divide daily time series
Divides a time series and a daily time series.
Example
To subtract a daily time average from a time series use:
cdo daysub infile -dayavg infile outfile
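The pairing of timesteps by date can be sketched in Python with toy scalar fields; this is a simplified stand-in for the `daysub`/`dayavg` chain, not CDO's implementation:

```python
# Sketch of subtracting a daily average from a time series: each timestep
# is matched with the mean of all timesteps sharing the same date.
from collections import defaultdict

series = [("2000-01-01", 1.0), ("2000-01-01", 3.0), ("2000-01-02", 4.0)]

# First pass: per-day sums and counts (what dayavg would produce).
sums = defaultdict(lambda: [0.0, 0])
for day, v in series:
    sums[day][0] += v
    sums[day][1] += 1
dayavg = {day: s / n for day, (s, n) in sums.items()}

# Second pass: subtract the matching daily mean from every timestep.
anomalies = [(day, v - dayavg[day]) for day, v in series]
print(anomalies)
```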
2.7.6 MONARITH  Monthly arithmetic
Synopsis
<operator> infile1 infile2 outfile
Description
This module performs simple arithmetic of a time series and one timestep with the same month and year. For each field in infile1, the corresponding field of the timestep in infile2 with the same month and year is used. The header information in infile1 has to be the same as in infile2. Usually infile2 is generated by an operator of the module MONSTAT.
Operators
 monadd

Add monthly time series
Adds a time series and a monthly time series.
 monsub

Subtract monthly time series
Subtracts a time series and a monthly time series.
 monmul

Multiply monthly time series
Multiplies a time series and a monthly time series.
 mondiv

Divide monthly time series
Divides a time series and a monthly time series.
Example
To subtract a monthly time average from a time series use:
cdo monsub infile -monavg infile outfile
2.7.7 YEARARITH  Yearly arithmetic
Synopsis
<operator> infile1 infile2 outfile
Description
This module performs simple arithmetic of a time series and one timestep with the same year. For each field in infile1, the corresponding field of the timestep in infile2 with the same year is used. The header information in infile1 has to be the same as in infile2. Usually infile2 is generated by an operator of the module YEARSTAT.
Operators
 yearadd

Add yearly time series
Adds a time series and a yearly time series.
 yearsub

Subtract yearly time series
Subtracts a time series and a yearly time series.
 yearmul

Multiply yearly time series
Multiplies a time series and a yearly time series.
 yeardiv

Divide yearly time series
Divides a time series and a yearly time series.
Example
To subtract a yearly time average from a time series use:
cdo yearsub infile -yearavg infile outfile
2.7.8 YHOURARITH  Multi-year hourly arithmetic
Synopsis
<operator> infile1 infile2 outfile
Description
This module performs simple arithmetic of a time series and one timestep with the same hour and day of year. For each field in infile1, the corresponding field of the timestep in infile2 with the same hour and day of year is used. The header information in infile1 has to be the same as in infile2. Usually infile2 is generated by an operator of the module YHOURSTAT.
Operators
 yhouradd

Add multi-year hourly time series
Adds a time series and a multi-year hourly time series.
 yhoursub

Subtract multi-year hourly time series
Subtracts a time series and a multi-year hourly time series.
 yhourmul

Multiply multi-year hourly time series
Multiplies a time series and a multi-year hourly time series.
 yhourdiv

Divide multi-year hourly time series
Divides a time series and a multi-year hourly time series.
Example
To subtract a multi-year hourly time average from a time series use:
cdo yhoursub infile -yhouravg infile outfile
2.7.9 YDAYARITH  Multi-year daily arithmetic
Synopsis
<operator> infile1 infile2 outfile
Description
This module performs simple arithmetic of a time series and one timestep with the same day of year. For each field in infile1, the corresponding field of the timestep in infile2 with the same day of year is used. The header information in infile1 has to be the same as in infile2. Usually infile2 is generated by an operator of the module YDAYSTAT.
Operators
 ydayadd

Add multi-year daily time series
Adds a time series and a multi-year daily time series.
 ydaysub

Subtract multi-year daily time series
Subtracts a time series and a multi-year daily time series.
 ydaymul

Multiply multi-year daily time series
Multiplies a time series and a multi-year daily time series.
 ydaydiv

Divide multi-year daily time series
Divides a time series and a multi-year daily time series.
Example
To subtract a multi-year daily time average from a time series use:
cdo ydaysub infile -ydayavg infile outfile
2.7.10 YMONARITH  Multi-year monthly arithmetic
Synopsis
<operator> infile1 infile2 outfile
Description
This module performs simple arithmetic of a time series and one timestep with the same month of year. For each field in infile1, the corresponding field of the timestep in infile2 with the same month of year is used. The header information in infile1 has to be the same as in infile2. Usually infile2 is generated by an operator of the module YMONSTAT.
Operators
 ymonadd

Add multi-year monthly time series
Adds a time series and a multi-year monthly time series.
 ymonsub

Subtract multi-year monthly time series
Subtracts a time series and a multi-year monthly time series.
 ymonmul

Multiply multi-year monthly time series
Multiplies a time series and a multi-year monthly time series.
 ymondiv

Divide multi-year monthly time series
Divides a time series and a multi-year monthly time series.
Example
To subtract a multi-year monthly time average from a time series use:
cdo ymonsub infile -ymonavg infile outfile
2.7.11 YSEASARITH  Multi-year seasonal arithmetic
Synopsis
<operator> infile1 infile2 outfile
Description
This module performs simple arithmetic of a time series and one timestep with the same season. For each field in infile1, the corresponding field of the timestep in infile2 with the same season is used. The header information in infile1 has to be the same as in infile2. Usually infile2 is generated by an operator of the module YSEASSTAT.
Operators
 yseasadd

Add multi-year seasonal time series
Adds a time series and a multi-year seasonal time series.
 yseassub

Subtract multi-year seasonal time series
Subtracts a time series and a multi-year seasonal time series.
 yseasmul

Multiply multi-year seasonal time series
Multiplies a time series and a multi-year seasonal time series.
 yseasdiv

Divide multi-year seasonal time series
Divides a time series and a multi-year seasonal time series.
Example
To subtract a multi-year seasonal time average from a time series use:
cdo yseassub infile -yseasavg infile outfile
2.7.12 ARITHDAYS  Arithmetic with days
Synopsis
<operator> infile outfile
Description
This module multiplies or divides each timestep of a dataset with the corresponding days per month or days per year. The result of these functions depends on the calendar of the input data.
Operators
 muldpm

Multiply with days per month
o(t,x) = i(t,x) * days_per_month
 divdpm

Divide by days per month
o(t,x) = i(t,x) / days_per_month
 muldpy

Multiply with days per year
o(t,x) = i(t,x) * days_per_year
 divdpy

Divide by days per year
o(t,x) = i(t,x) / days_per_year
2.7.13 ARITHLAT  Arithmetic with latitude
Synopsis
<operator> infile outfile
Description
This module multiplies or divides each field element with the cosine of the latitude.
Operators
 mulcoslat

Multiply with the cosine of the latitude
o(t,x) = i(t,x) * cos(latitude(x))
 divcoslat

Divide by the cosine of the latitude
o(t,x) = i(t,x) / cos(latitude(x))
2.8 Statistical values
This section contains modules to compute statistical values of datasets. In this program there is a distinction between "mean" and "average" to describe two different kinds of treatment of missing values. While computing the mean, only the non-missing values are considered to belong to the sample, with the side effect of a possibly reduced sample size. Computing the average is just adding the sample members and dividing the result by the sample size. For example, the mean of 1, 2, miss and 3 is (1+2+3)/3 = 2, whereas the average is (1+2+miss+3)/4 = miss. If there are no missing values in the sample, the average and the mean are identical.
CDO uses the verification time to identify the time range for temporal statistics. The time bounds are never used!
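The mean/average distinction can be sketched in Python, with `None` standing in for the missing value:

```python
# Mean vs. average with a missing value, as described above:
# the mean drops missing values from the sample; the average propagates them.
MISS = None  # stand-in for the missing value

sample = [1, 2, MISS, 3]

valid = [v for v in sample if v is not MISS]
mean = sum(valid) / len(valid)     # missing value excluded: (1+2+3)/3 = 2.0

# Any missing member makes the whole average missing.
average = MISS if MISS in sample else sum(sample) / len(sample)

print(mean, average)  # 2.0 None
```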
timcumsum  Cumulative sum over all timesteps 
consecsum  Consecutive Sum 
consects  Consecutive Timesteps 
varsmin  Variables minimum 
varsmax  Variables maximum 
varsrange  Variables range 
varssum  Variables sum 
varsmean  Variables mean 
varsavg  Variables average 
varsstd  Variables standard deviation 
varsstd1  Variables standard deviation (n-1) 
varsvar  Variables variance 
varsvar1  Variables variance (n-1) 
ensmin  Ensemble minimum 
ensmax  Ensemble maximum 
ensrange  Ensemble range 
enssum  Ensemble sum 
ensmean  Ensemble mean 
ensavg  Ensemble average 
ensstd  Ensemble standard deviation 
ensstd1  Ensemble standard deviation (n-1) 
ensvar  Ensemble variance 
ensvar1  Ensemble variance (n-1) 
ensskew  Ensemble skewness 
enskurt  Ensemble kurtosis 
ensmedian  Ensemble median 
enspctl  Ensemble percentiles 
ensrkhistspace  Ranked Histogram averaged over time 
ensrkhisttime  Ranked Histogram averaged over space 
ensroc  Ensemble Receiver Operating characteristics 
enscrps  Ensemble CRPS and decomposition 
ensbrs  Ensemble Brier score 
fldmin  Field minimum 
fldmax  Field maximum 
fldrange  Field range 
fldsum  Field sum 
fldint  Field integral 
fldmean  Field mean 
fldavg  Field average 
fldstd  Field standard deviation 
fldstd1  Field standard deviation (n-1) 
fldvar  Field variance 
fldvar1  Field variance (n-1) 
fldskew  Field skewness 
fldkurt  Field kurtosis 
fldmedian  Field median 
fldpctl  Field percentiles 
zonmin  Zonal minimum 
zonmax  Zonal maximum 
zonrange  Zonal range 
zonsum  Zonal sum 
zonmean  Zonal mean 
zonavg  Zonal average 
zonstd  Zonal standard deviation 
zonstd1  Zonal standard deviation (n-1) 
zonvar  Zonal variance 
zonvar1  Zonal variance (n-1) 
zonskew  Zonal skewness 
zonkurt  Zonal kurtosis 
zonmedian  Zonal median 
zonpctl  Zonal percentiles 
mermin  Meridional minimum 
mermax  Meridional maximum 
merrange  Meridional range 
mersum  Meridional sum 
mermean  Meridional mean 
meravg  Meridional average 
merstd  Meridional standard deviation 
merstd1  Meridional standard deviation (n-1) 
mervar  Meridional variance 
mervar1  Meridional variance (n-1) 
merskew  Meridional skewness 
merkurt  Meridional kurtosis 
mermedian  Meridional median 
merpctl  Meridional percentiles 
gridboxmin  Gridbox minimum 
gridboxmax  Gridbox maximum 
gridboxrange  Gridbox range 
gridboxsum  Gridbox sum 
gridboxmean  Gridbox mean 
gridboxavg  Gridbox average 
gridboxstd  Gridbox standard deviation 
gridboxstd1  Gridbox standard deviation (n-1) 
gridboxvar  Gridbox variance 
gridboxvar1  Gridbox variance (n-1) 
gridboxskew  Gridbox skewness 
gridboxkurt  Gridbox kurtosis 
gridboxmedian  Gridbox median 
remapmin  Remap minimum 
remapmax  Remap maximum 
remaprange  Remap range 
remapsum  Remap sum 
remapmean  Remap mean 
remapavg  Remap average 
remapstd  Remap standard deviation 
remapstd1  Remap standard deviation (n-1) 
remapvar  Remap variance 
remapvar1  Remap variance (n-1) 
remapskew  Remap skewness 
remapkurt  Remap kurtosis 
remapmedian  Remap median 
vertmin  Vertical minimum 
vertmax  Vertical maximum 
vertrange  Vertical range 
vertsum  Vertical sum 
vertmean  Vertical mean 
vertavg  Vertical average 
vertstd  Vertical standard deviation 
vertstd1  Vertical standard deviation (n-1) 
vertvar  Vertical variance 
vertvar1  Vertical variance (n-1) 
timselmin  Time selection minimum 
timselmax  Time selection maximum 
timselrange  Time selection range 
timselsum  Time selection sum 
timselmean  Time selection mean 
timselavg  Time selection average 
timselstd  Time selection standard deviation 
timselstd1  Time selection standard deviation (n-1) 
timselvar  Time selection variance 
timselvar1  Time selection variance (n-1) 
timselpctl  Time range percentiles 
runmin  Running minimum 
runmax  Running maximum 
runrange  Running range 
runsum  Running sum 
runmean  Running mean 
runavg  Running average 
runstd  Running standard deviation 
runstd1  Running standard deviation (n-1) 
runvar  Running variance 
runvar1  Running variance (n-1) 
runpctl  Running percentiles 
timmin  Time minimum 
timmax  Time maximum 
timrange  Time range 
timsum  Time sum 
timmean  Time mean 
timavg  Time average 
timstd  Time standard deviation 
timstd1  Time standard deviation (n-1) 
timvar  Time variance 
timvar1  Time variance (n-1) 
timpctl  Time percentiles 
hourmin  Hourly minimum 
hourmax  Hourly maximum 
hourrange  Hourly range 
hoursum  Hourly sum 
hourmean  Hourly mean 
houravg  Hourly average 
hourstd  Hourly standard deviation 
hourstd1  Hourly standard deviation (n-1) 
hourvar  Hourly variance 
hourvar1  Hourly variance (n-1) 
hourpctl  Hourly percentiles 
daymin  Daily minimum 
daymax  Daily maximum 
dayrange  Daily range 
daysum  Daily sum 
daymean  Daily mean 
dayavg  Daily average 
daystd  Daily standard deviation 
daystd1  Daily standard deviation (n-1) 
dayvar  Daily variance 
dayvar1  Daily variance (n-1) 
daypctl  Daily percentiles 
monmin  Monthly minimum 
monmax  Monthly maximum 
monrange  Monthly range 
monsum  Monthly sum 
monmean  Monthly mean 
monavg  Monthly average 
monstd  Monthly standard deviation 
monstd1  Monthly standard deviation (n-1) 
monvar  Monthly variance 
monvar1  Monthly variance (n-1) 
monpctl  Monthly percentiles 
yearmonmean  Yearly mean from monthly data 
yearmin  Yearly minimum 
yearmax  Yearly maximum 
yearminidx  Yearly minimum indices 
yearmaxidx  Yearly maximum indices 
yearrange  Yearly range 
yearsum  Yearly sum 
yearmean  Yearly mean 
yearavg  Yearly average 
yearstd  Yearly standard deviation 
yearstd1  Yearly standard deviation (n-1) 
yearvar  Yearly variance 
yearvar1  Yearly variance (n-1) 
yearpctl  Yearly percentiles 
seasmin  Seasonal minimum 
seasmax  Seasonal maximum 
seasrange  Seasonal range 
seassum  Seasonal sum 
seasmean  Seasonal mean 
seasavg  Seasonal average 
seasstd  Seasonal standard deviation 
seasstd1  Seasonal standard deviation (n-1) 
seasvar  Seasonal variance 
seasvar1  Seasonal variance (n-1) 
seaspctl  Seasonal percentiles 
yhourmin  Multi-year hourly minimum 
yhourmax  Multi-year hourly maximum 
yhourrange  Multi-year hourly range 
yhoursum  Multi-year hourly sum 
yhourmean  Multi-year hourly mean 
yhouravg  Multi-year hourly average 
yhourstd  Multi-year hourly standard deviation 
yhourstd1  Multi-year hourly standard deviation (n-1) 
yhourvar  Multi-year hourly variance 
yhourvar1  Multi-year hourly variance (n-1) 
dhourmin  Multi-day hourly minimum 
dhourmax  Multi-day hourly maximum 
dhourrange  Multi-day hourly range 
dhoursum  Multi-day hourly sum 
dhourmean  Multi-day hourly mean 
dhouravg  Multi-day hourly average 
dhourstd  Multi-day hourly standard deviation 
dhourstd1  Multi-day hourly standard deviation (n-1) 
dhourvar  Multi-day hourly variance 
dhourvar1  Multi-day hourly variance (n-1) 
ydaymin  Multi-year daily minimum 
ydaymax  Multi-year daily maximum 
ydayrange  Multi-year daily range 
ydaysum  Multi-year daily sum 
ydaymean  Multi-year daily mean 
ydayavg  Multi-year daily average 
ydaystd  Multi-year daily standard deviation 
ydaystd1  Multi-year daily standard deviation (n-1) 
ydayvar  Multi-year daily variance 
ydayvar1  Multi-year daily variance (n-1) 
ydaypctl  Multi-year daily percentiles 
ymonmin  Multi-year monthly minimum 
ymonmax  Multi-year monthly maximum 
ymonrange  Multi-year monthly range 
ymonsum  Multi-year monthly sum 
ymonmean  Multi-year monthly mean 
ymonavg  Multi-year monthly average 
ymonstd  Multi-year monthly standard deviation 
ymonstd1  Multi-year monthly standard deviation (n-1) 
ymonvar  Multi-year monthly variance 
ymonvar1  Multi-year monthly variance (n-1) 
ymonpctl  Multi-year monthly percentiles 
yseasmin  Multi-year seasonal minimum 
yseasmax  Multi-year seasonal maximum 
yseasrange  Multi-year seasonal range 
yseassum  Multi-year seasonal sum 
yseasmean  Multi-year seasonal mean 
yseasavg  Multi-year seasonal average 
yseasstd  Multi-year seasonal standard deviation 
yseasstd1  Multi-year seasonal standard deviation (n-1) 
yseasvar  Multi-year seasonal variance 
yseasvar1  Multi-year seasonal variance (n-1) 
yseaspctl  Multi-year seasonal percentiles 
ydrunmin  Multi-year daily running minimum 
ydrunmax  Multi-year daily running maximum 
ydrunsum  Multi-year daily running sum 
ydrunmean  Multi-year daily running mean 
ydrunavg  Multi-year daily running average 
ydrunstd  Multi-year daily running standard deviation 
ydrunstd1  Multi-year daily running standard deviation (n-1) 
ydrunvar  Multi-year daily running variance 
ydrunvar1  Multi-year daily running variance (n-1) 
ydrunpctl  Multi-year daily running percentiles 
2.8.1 TIMCUMSUM  Cumulative sum over all timesteps
Synopsis
timcumsum infile outfile
Description
The timcumsum operator calculates the cumulative sum over all timesteps. Missing values are treated as numeric zero when summing.
o(t,x) = sum{i(t′,x),0 < t′≤ t}
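A minimal Python sketch of this rule, with `None` standing in for missing values and one value per timestep:

```python
# timcumsum semantics: cumulative sum over all timesteps, with missing
# values treated as numeric zero when summing.
MISS = None  # stand-in for the missing value

def timcumsum(series):
    out, total = [], 0.0
    for v in series:
        total += 0.0 if v is MISS else v
        out.append(total)
    return out

print(timcumsum([1.0, MISS, 2.0, 3.0]))  # [1.0, 1.0, 3.0, 6.0]
```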
2.8.2 CONSECSTAT  Consecutive timestep periods
Synopsis
<operator> infile outfile
Description
This module computes periods over all timesteps in infile where a certain property is valid. The property can be chosen by creating a mask from the original data, which is the expected input format for operators of this module. Depending on the operator, either full information about each period or just its length and ending date is computed.
Operators
 consecsum

Consecutive Sum
This operator computes periods of consecutive timesteps similar to a running sum, but periods are finished when the mask value is 0. That way multiple periods can be found. Timesteps from the input are preserved. Missing values are handled like 0, i.e. they finish periods of consecutive timesteps.
 consects

Consecutive Timesteps
In contrast to the operator above, consects only computes the length of each period together with its last timestep. To be able to perform statistical analysis like min, max or mean, everything else is set to missing value.
Example
For a given time series of daily temperatures, the periods of summer days can be calculated with in-place masking of the input field:
cdo consects -gtc,20.0 infile1 outfile
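The consecsum behaviour on a 0/1 mask can be sketched in Python (missing values, which are handled like 0, are not modelled here):

```python
# Sketch of consecsum on a 0/1 mask: a running count of consecutive nonzero
# timesteps that resets to 0 whenever the mask value is 0.
def consecsum(mask):
    out, run = [], 0
    for m in mask:
        run = run + m if m else 0   # mask value 0 finishes the period
        out.append(run)
    return out

print(consecsum([1, 1, 0, 1, 1, 1]))  # [1, 2, 0, 1, 2, 3]
```

consects would then keep only the period lengths 2 and 3 at the last timestep of each period and set everything else to missing value.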
2.8.3 VARSSTAT  Statistical values over all variables
Synopsis
<operator> infile outfile
Description
This module computes statistical values over all variables for each timestep. Depending on the chosen operator the minimum, maximum, range, sum, average, variance or standard deviation is written to outfile. All input variables need to have the same gridsize and the same number of levels.
Operators
 varsmin

Variables minimum
For every timestep the minimum over all variables is computed.

 varsmax

Variables maximum
For every timestep the maximum over all variables is computed.

 varsrange

Variables range
For every timestep the range over all variables is computed.

 varssum

Variables sum
For every timestep the sum over all variables is computed.

 varsmean

Variables mean
For every timestep the mean over all variables is computed.

 varsavg

Variables average
For every timestep the average over all variables is computed.

 varsstd

Variables standard deviation
For every timestep the standard deviation over all variables is computed. Normalize by n.

 varsstd1

Variables standard deviation (n-1)
For every timestep the standard deviation over all variables is computed. Normalize by (n-1).

 varsvar

Variables variance
For every timestep the variance over all variables is computed. Normalize by n.

 varsvar1

Variables variance (n-1)
For every timestep the variance over all variables is computed. Normalize by (n-1).
2.8.4 ENSSTAT  Statistical values over an ensemble
Synopsis
<operator> infiles outfile
enspctl,p infiles outfile
Description
This module computes statistical values over an ensemble of input files. Depending on the chosen operator, the minimum, maximum, range, sum, average, standard deviation, variance, skewness, kurtosis, median or a certain percentile over all input files is written to outfile. All input files need to have the same structure with the same variables. The date information of a timestep in outfile is the date of the first input file.
Operators
 ensmin

Ensemble minimum
o(t,x) = min{i_{1}(t,x),i_{2}(t,x),...,i_{n}(t,x)}

 ensmax

Ensemble maximum
o(t,x) = max{i_{1}(t,x),i_{2}(t,x),...,i_{n}(t,x)}

 ensrange

Ensemble range
o(t,x) = range{i_{1}(t,x),i_{2}(t,x),...,i_{n}(t,x)}

 enssum

Ensemble sum
o(t,x) = sum{i_{1}(t,x),i_{2}(t,x),...,i_{n}(t,x)}

 ensmean

Ensemble mean
o(t,x) = mean{i_{1}(t,x),i_{2}(t,x),...,i_{n}(t,x)}

 ensavg

Ensemble average
o(t,x) = avg{i_{1}(t,x),i_{2}(t,x),...,i_{n}(t,x)}

 ensstd

Ensemble standard deviation
Normalize by n.
o(t,x) = std{i_{1}(t,x),i_{2}(t,x),...,i_{n}(t,x)}

 ensstd1

Ensemble standard deviation (n-1)
Normalize by (n-1).
o(t,x) = std1{i_{1}(t,x),i_{2}(t,x),...,i_{n}(t,x)}

 ensvar

Ensemble variance
Normalize by n.
o(t,x) = var{i_{1}(t,x),i_{2}(t,x),...,i_{n}(t,x)}

 ensvar1

Ensemble variance (n-1)
Normalize by (n-1).
o(t,x) = var1{i_{1}(t,x),i_{2}(t,x),...,i_{n}(t,x)}

 ensskew

Ensemble skewness
o(t,x) = skew{i_{1}(t,x),i_{2}(t,x),...,i_{n}(t,x)}

 enskurt

Ensemble kurtosis
o(t,x) = kurt{i_{1}(t,x),i_{2}(t,x),...,i_{n}(t,x)}

 ensmedian

Ensemble median
o(t,x) = median{i_{1}(t,x),i_{2}(t,x),...,i_{n}(t,x)}

 enspctl

Ensemble percentiles
o(t,x) = pth percentile{i_{1}(t,x),i_{2}(t,x),...,i_{n}(t,x)}
Parameter
 p

FLOAT Percentile number in 0, ..., 100
Note
Operators of this module need to open all input files simultaneously. The maximum number of open files depends on the operating system!
Example
To compute the ensemble mean over 6 input files use:
cdo ensmean infile1 infile2 infile3 infile4 infile5 infile6 outfile
Or shorter with filename substitution:
cdo ensmean infile[1-6] outfile
To compute the 50th percentile (median) over 6 input files use:
cdo enspctl,50 infile1 infile2 infile3 infile4 infile5 infile6 outfile
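The element-wise combination across input files can be sketched numerically (NumPy; the member arrays are hypothetical stand-ins for the input files):

```python
import numpy as np

# Sketch of ensemble statistics: every operator combines the n input
# files element-wise at each timestep and grid point.
member1 = np.array([1.0, 2.0])
member2 = np.array([3.0, 4.0])
member3 = np.array([5.0, 6.0])
ens = np.stack([member1, member2, member3])   # shape (n_members, gridsize)

ensmean = ens.mean(axis=0)                    # analogue of cdo ensmean
enspctl50 = np.percentile(ens, 50, axis=0)    # analogue of cdo enspctl,50
```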
2.8.5 ENSSTAT2  Statistical values over an ensemble
Synopsis
<operator> obsfile ensfiles outfile
Description
This module computes statistical values over the ensemble of ensfiles using obsfile as a reference. Depending on the operator, a ranked histogram or a ROC curve over all ensembles ensfiles with reference to obsfile is written to outfile. The date and grid information of a timestep in outfile is the date of the first input file. Thus all input files are required to have the same structure in terms of the gridsize, variable definitions and number of timesteps.
All operators in this module use obsfile as the reference (for instance an observation), whereas ensfiles are understood as an ensemble consisting of n (where n is the number of ensfiles) members.
The operators ensrkhistspace and ensrkhisttime compute ranked histograms. Therefore the vertical axis is utilized as the histogram axis, which prohibits the use of files containing more than one level. The histogram axis has nensfiles+1 bins, with level 0 containing for each grid point the number of observations smaller than all ensemble members and level nensfiles+1 indicating the number of observations larger than all ensemble members.
ensrkhistspace computes a ranked histogram at each timestep, reducing each horizontal grid to a 1x1 grid and keeping the time axis as in obsfile. In contrast, ensrkhisttime computes a histogram at each grid point, keeping the horizontal grid for each variable and reducing the time axis. The time information is that from the last timestep in obsfile.
Operators
 ensrkhistspace

Ranked Histogram averaged over time
 ensrkhisttime

Ranked Histogram averaged over space
 ensroc

Ensemble Receiver Operating characteristics
Example
To compute a rank histogram over 5 input files ensfile1-ensfile5 given an observation in obsfile use:
cdo ensrkhisttime obsfile ensfile1 ensfile2 ensfile3 ensfile4 ensfile5 outfile
Or shorter with filename substitution:
cdo ensrkhisttime obsfile ensfile[1-5] outfile
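The ranked-histogram idea can be sketched at a single grid point (toy NumPy code, not the CDO implementation): the rank of each observation among the ensemble members is counted, and the counts fill the nensfiles+1 histogram bins.

```python
import numpy as np

# Sketch of a rank histogram at one grid point over three timesteps.
n_members = 4
obs = np.array([0.5, 2.5, 9.0])            # one observation per timestep
ens = np.array([[1, 2, 3, 4],              # ensemble members per timestep
                [1, 2, 3, 4],
                [1, 2, 3, 4]], dtype=float)

# Rank = number of members below the observation (0 .. n_members):
# rank 0 means the observation is smaller than all members,
# rank n_members means it is larger than all members.
ranks = (ens < obs[:, None]).sum(axis=1)
hist = np.bincount(ranks, minlength=n_members + 1)   # n_members+1 bins
```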
2.8.6 ENSVAL  Ensemble validation tools
Synopsis
enscrps rfile infiles outfilebase
ensbrs,x rfile infiles outfilebase
Description
This module computes ensemble validation scores and their decomposition, such as the Brier score and the continuous ranked probability score (CRPS). The first file is used as a reference; it can be a climatology, observation or reanalysis against which the skill of the ensembles given in infiles is measured. Depending on the operator a number of output files is generated, each containing the skill score and its decomposition corresponding to the operator. The output is averaged over horizontal fields using appropriate weights for each level and timestep in rfile.
All input files need to have the same structure with the same variables. The date information of a timestep in outfile is the date of the first input file. The output files are named as <outfilebase>.<type>.<filesuffix> where <type> depends on the operator and <filesuffix> is determined from the output file type. There are three output files for operator enscrps and four output files for operator ensbrs.
The CRPS and its decomposition into reliability and the potential CRPS are calculated by an appropriate averaging over the field members (note that the CRPS does *not* average linearly). In the three output files <type> has the following meaning: crps for the CRPS, reli for the reliability and crpspot for the potential CRPS. The relation CRPS = CRPS_{pot} + RELI holds.
The Brier score of the ensemble given by infiles with respect to the reference given in rfile and the threshold x is calculated. In the four output files <type> has the following meaning: brs for the Brier score w.r.t. threshold x; brsreli for the Brier score reliability w.r.t. threshold x; brsreso for the Brier score resolution w.r.t. threshold x; brsunct for the Brier score uncertainty w.r.t. threshold x. In analogy to the CRPS the following relation holds: BRS(x) = RELI(x) - RESO(x) + UNCT(x).
The implementation of the decomposition of the CRPS and Brier score follows Hans Hersbach (2000): Decomposition of the Continuous Ranked Probability Score for Ensemble Prediction Systems, in: Weather and Forecasting (15), pp. 559-570.
The CRPS code decomposition has been verified against the CRAN ensemble validation package from R. Differences occur when the grid cell area is not uniform, as the implementation in R does not account for that.
Operators
 enscrps

Ensemble CRPS and decomposition
 ensbrs

Ensemble Brier score and decomposition
Example
To compute the field-averaged Brier score at x=5 over an ensemble with 5 members ensfile1-5 w.r.t. the reference rfile and write the results to files obase.brs.<suff>, obase.brsreli.<suff>, obase.brsreso.<suff>, obase.brsunct.<suff>, where <suff> is determined from the output file type, use
cdo ensbrs,5 rfile ensfile1 ensfile2 ensfile3 ensfile4 ensfile5 obase
or shorter using file name substitution:
cdo ensbrs,5 rfile ensfile[1-5] obase
2.8.7 FLDSTAT  Statistical values over a field
Synopsis
<operator>,weights infile outfile
fldpctl,p infile outfile
Description
This module computes statistical values of all input fields. A field is a horizontal layer of a data variable. Depending on the chosen operator, the minimum, maximum, range, sum, integral, average, standard deviation, variance, skewness, kurtosis, median or a certain percentile of the field is written to outfile.
Operators
 fldmin

Field minimum
For every gridpoint x_1,...,x_n of the same field it is:
o(t,1) = min{i(t,x′), x_{1} < x′ ≤ x_{n}}

 fldmax

Field maximum
For every gridpoint x_1,...,x_n of the same field it is:
o(t,1) = max{i(t,x′), x_{1} < x′ ≤ x_{n}}

 fldrange

Field range
For every gridpoint x_1,...,x_n of the same field it is:
o(t,1) = range{i(t,x′), x_{1} < x′ ≤ x_{n}}

 fldsum

Field sum
For every gridpoint x_1,...,x_n of the same field it is:
o(t,1) = sum{i(t,x′), x_{1} < x′ ≤ x_{n}}

 fldint

Field integral
For every gridpoint x_1,...,x_n of the same field it is:
o(t,1) = sum{i(t,x′) * cellarea(x′), x_{1} < x′ ≤ x_{n}}

 fldmean

Field mean
For every gridpoint x_1,...,x_n of the same field it is:
o(t,1) = mean{i(t,x′), x_{1} < x′ ≤ x_{n}}
weighted by area weights obtained from the input field.

 fldavg

Field average
For every gridpoint x_1,...,x_n of the same field it is:
o(t,1) = avg{i(t,x′), x_{1} < x′ ≤ x_{n}}
weighted by area weights obtained from the input field.

 fldstd

Field standard deviation
Normalize by n. For every gridpoint x_1,...,x_n of the same field it is:
o(t,1) = std{i(t,x′), x_{1} < x′ ≤ x_{n}}
weighted by area weights obtained from the input field.

 fldstd1

Field standard deviation (n-1)
Normalize by (n-1). For every gridpoint x_1,...,x_n of the same field it is:
o(t,1) = std1{i(t,x′), x_{1} < x′ ≤ x_{n}}
weighted by area weights obtained from the input field.

 fldvar

Field variance
Normalize by n. For every gridpoint x_1,...,x_n of the same field it is:
o(t,1) = var{i(t,x′), x_{1} < x′ ≤ x_{n}}
weighted by area weights obtained from the input field.

 fldvar1

Field variance (n-1)
Normalize by (n-1). For every gridpoint x_1,...,x_n of the same field it is:
o(t,1) = var1{i(t,x′), x_{1} < x′ ≤ x_{n}}
weighted by area weights obtained from the input field.

 fldskew

Field skewness
For every gridpoint x_1,...,x_n of the same field it is:
o(t,1) = skew{i(t,x′), x_{1} < x′ ≤ x_{n}}

 fldkurt

Field kurtosis
For every gridpoint x_1,...,x_n of the same field it is:
o(t,1) = kurt{i(t,x′), x_{1} < x′ ≤ x_{n}}

 fldmedian

Field median
For every gridpoint x_1,...,x_n of the same field it is:
o(t,1) = median{i(t,x′), x_{1} < x′ ≤ x_{n}}

 fldpctl

Field percentiles
For every gridpoint x_1,...,x_n of the same field it is:
o(t,1) = pth percentile{i(t,x′), x_{1} < x′ ≤ x_{n}}
Parameter
 weights

BOOL weights=FALSE disables weighting by grid cell area [default: weights=TRUE]
 p

FLOAT Percentile number in 0, ..., 100
Example
To compute the field mean of all input fields use:
cdo fldmean infile outfile
To compute the 90th percentile of all input fields use:
cdo fldpctl,90 infile outfile
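The effect of the weights parameter can be sketched numerically (NumPy; the cell areas here are hypothetical, CDO derives them from the grid of the input field):

```python
import numpy as np

# Sketch of an area-weighted field mean (the fldmean idea) on a tiny field.
values = np.array([10.0, 20.0, 30.0])
area   = np.array([1.0, 1.0, 2.0])         # assumed grid cell areas

fldmean_weighted = np.average(values, weights=area)  # default: weights=TRUE
fldmean_plain = values.mean()                        # what weights=FALSE gives
```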
2.8.8 ZONSTAT  Zonal statistical values
Synopsis
<operator> infile outfile
zonmean[,zonaldes] infile outfile
zonpctl,p infile outfile
Description
This module computes zonal statistical values of the input fields. Depending on the chosen operator, the zonal minimum, maximum, range, sum, average, standard deviation, variance, skewness, kurtosis, median or a certain percentile of the field is written to outfile. Operators of this module require all variables on the same regular lon/lat grid. Only the zonal mean (zonmean) can be calculated for data on an unstructured grid if the latitude bins are defined with the optional parameter zonaldes.
Operators
 zonmin

Zonal minimum
For every latitude the minimum over all longitudes is computed.

 zonmax

Zonal maximum
For every latitude the maximum over all longitudes is computed.

 zonrange

Zonal range
For every latitude the range over all longitudes is computed.

 zonsum

Zonal sum
For every latitude the sum over all longitudes is computed.

 zonmean

Zonal mean
For every latitude the mean over all longitudes is computed. Use the optional parameter zonaldes for data on an unstructured grid.

 zonavg

Zonal average
For every latitude the average over all longitudes is computed.

 zonstd

Zonal standard deviation
For every latitude the standard deviation over all longitudes is computed. Normalize by n.

 zonstd1

Zonal standard deviation (n-1)
For every latitude the standard deviation over all longitudes is computed. Normalize by (n-1).

 zonvar

Zonal variance
For every latitude the variance over all longitudes is computed. Normalize by n.

 zonvar1

Zonal variance (n-1)
For every latitude the variance over all longitudes is computed. Normalize by (n-1).

 zonskew

Zonal skewness
For every latitude the skewness over all longitudes is computed.

 zonkurt

Zonal kurtosis
For every latitude the kurtosis over all longitudes is computed.

 zonmedian

Zonal median
For every latitude the median over all longitudes is computed.

 zonpctl

Zonal percentiles
For every latitude the pth percentile over all longitudes is computed.
Parameter
 p

FLOAT Percentile number in 0, ..., 100
 zonaldes

STRING Description of the zonal latitude bins needed for data on an unstructured grid. A predefined zonal description is zonal_<DY>. DY is the increment of the latitudes in degrees.
Example
To compute the zonal mean of all input fields use:
cdo zonmean infile outfile
To compute the 50th zonal percentile (median) of all input fields use:
cdo zonpctl,50 infile outfile
2.8.9 MERSTAT  Meridional statistical values
Synopsis
<operator> infile outfile
merpctl,p infile outfile
Description
This module computes meridional statistical values of the input fields. Depending on the chosen operator, the meridional minimum, maximum, range, sum, average, standard deviation, variance, skewness, kurtosis, median or a certain percentile of the field is written to outfile. Operators of this module require all variables on the same regular lon/lat grid.
Operators
 mermin

Meridional minimum
For every longitude the minimum over all latitudes is computed.

 mermax

Meridional maximum
For every longitude the maximum over all latitudes is computed.

 merrange

Meridional range
For every longitude the range over all latitudes is computed.

 mersum

Meridional sum
For every longitude the sum over all latitudes is computed.

 mermean

Meridional mean
For every longitude the area weighted mean over all latitudes is computed.

 meravg

Meridional average
For every longitude the area weighted average over all latitudes is computed.

 merstd

Meridional standard deviation
For every longitude the standard deviation over all latitudes is computed. Normalize by n.

 merstd1

Meridional standard deviation (n-1)
For every longitude the standard deviation over all latitudes is computed. Normalize by (n-1).

 mervar

Meridional variance
For every longitude the variance over all latitudes is computed. Normalize by n.

 mervar1

Meridional variance (n-1)
For every longitude the variance over all latitudes is computed. Normalize by (n-1).

 merskew

Meridional skewness
For every longitude the skewness over all latitudes is computed.

 merkurt

Meridional kurtosis
For every longitude the kurtosis over all latitudes is computed.

 mermedian

Meridional median
For every longitude the median over all latitudes is computed.

 merpctl

Meridional percentiles
For every longitude the pth percentile over all latitudes is computed.
Parameter
 p

FLOAT Percentile number in 0, ..., 100
Example
To compute the meridional mean of all input fields use:
cdo mermean infile outfile
To compute the 50th meridional percentile (median) of all input fields use:
cdo merpctl,50 infile outfile
2.8.10 GRIDBOXSTAT  Statistical values over grid boxes
Synopsis
<operator>,nx,ny infile outfile
Description
This module computes statistical values over surrounding grid boxes. Depending on the chosen operator, the minimum, maximum, range, sum, average, standard deviation, variance, skewness, kurtosis or median of the neighboring grid boxes is written to outfile. All gridbox operators only work on quadrilateral curvilinear grids.
Operators
 gridboxmin

Gridbox minimum
Minimum value of the selected grid boxes.

 gridboxmax

Gridbox maximum
Maximum value of the selected grid boxes.

 gridboxrange

Gridbox range
Range (max-min value) of the selected grid boxes.

 gridboxsum

Gridbox sum
Sum of the selected grid boxes.

 gridboxmean

Gridbox mean
Mean of the selected grid boxes.

 gridboxavg

Gridbox average
Average of the selected grid boxes.

 gridboxstd

Gridbox standard deviation
Standard deviation of the selected grid boxes. Normalize by n.

 gridboxstd1

Gridbox standard deviation (n-1)
Standard deviation of the selected grid boxes. Normalize by (n-1).

 gridboxvar

Gridbox variance
Variance of the selected grid boxes. Normalize by n.

 gridboxvar1

Gridbox variance (n-1)
Variance of the selected grid boxes. Normalize by (n-1).

 gridboxskew

Gridbox skewness
Skewness of the selected grid boxes.

 gridboxkurt

Gridbox kurtosis
Kurtosis of the selected grid boxes.

 gridboxmedian

Gridbox median
Median of the selected grid boxes.
Parameter
 nx

INTEGER Number of grid boxes in x direction
 ny

INTEGER Number of grid boxes in y direction
Example
To compute the mean over 10x10 grid boxes of the input field use:
cdo gridboxmean,10,10 infile outfile
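The block-averaging idea behind gridboxmean can be sketched with NumPy (a toy 4x4 field with nx=ny=2; CDO operates on the grid of the input file):

```python
import numpy as np

# Sketch of gridboxmean,2,2: average non-overlapping 2x2 blocks of a field.
field = np.arange(16.0).reshape(4, 4)
nx = ny = 2
# Split the field into (ny, nx) blocks and average within each block.
boxmean = field.reshape(4 // ny, ny, 4 // nx, nx).mean(axis=(1, 3))
```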
2.8.11 REMAPSTAT  Remaps source points to target cells
Synopsis
<operator>,grid infile outfile
Description
This module maps source points to target cells by calculating a statistical value from the source points. Each target cell contains the statistical value from all source points within that target cell. If there are no source points within a target cell, it gets a missing value. The target grid must be regular lon/lat or Gaussian. Depending on the chosen operator the minimum, maximum, range, sum, average, variance, standard deviation, skewness, kurtosis or median of source points is computed.
Operators
 remapmin

Remap minimum
Minimum value of the source points.

 remapmax

Remap maximum
Maximum value of the source points.

 remaprange

Remap range
Range (max-min value) of the source points.

 remapsum

Remap sum
Sum of the source points.

 remapmean

Remap mean
Mean of the source points.

 remapavg

Remap average
Average of the source points.

 remapstd

Remap standard deviation
Standard deviation of the source points. Normalize by n.

 remapstd1

Remap standard deviation (n-1)
Standard deviation of the source points. Normalize by (n-1).

 remapvar

Remap variance
Variance of the source points. Normalize by n.

 remapvar1

Remap variance (n-1)
Variance of the source points. Normalize by (n-1).

 remapskew

Remap skewness
Skewness of the source points.

 remapkurt

Remap kurtosis
Kurtosis of the source points.

 remapmedian

Remap median
Median of the source points.
Parameter
 grid

STRING Target grid description file or name
Example
To compute the mean over source points within the target cells, use:
cdo remapmean,<targetgrid> infile outfile
If some of the target cells contain missing values, use the operator setmisstonn to fill these missing values with the nearest neighbor cell:
cdo setmisstonn remapmean,<targetgrid> infile outfile
2.8.12 VERTSTAT  Vertical statistical values
Synopsis
<operator>,weights infile outfile
Description
This module computes statistical values over all levels of the input variables. According to the chosen operator the vertical minimum, maximum, range, sum, average, variance or standard deviation is written to outfile.
Operators
 vertmin

Vertical minimum
For every gridpoint the minimum over all levels is computed.

 vertmax

Vertical maximum
For every gridpoint the maximum over all levels is computed.

 vertrange

Vertical range
For every gridpoint the range over all levels is computed.

 vertsum

Vertical sum
For every gridpoint the sum over all levels is computed.

 vertmean

Vertical mean
For every gridpoint the layer weighted mean over all levels is computed.

 vertavg

Vertical average
For every gridpoint the layer weighted average over all levels is computed.

 vertstd

Vertical standard deviation
For every gridpoint the standard deviation over all levels is computed. Normalize by n.

 vertstd1

Vertical standard deviation (n-1)
For every gridpoint the standard deviation over all levels is computed. Normalize by (n-1).

 vertvar

Vertical variance
For every gridpoint the variance over all levels is computed. Normalize by n.

 vertvar1

Vertical variance (n-1)
For every gridpoint the variance over all levels is computed. Normalize by (n-1).
Parameter
 weights

BOOL weights=FALSE disables weighting by layer thickness [default: weights=TRUE]
Example
To compute the vertical sum of all input variables use:
cdo vertsum infile outfile
2.8.13 TIMSELSTAT  Time range statistical values
Synopsis
<operator>,nsets[,noffset[,nskip]] infile outfile
Description
This module computes statistical values for a selected number of timesteps. According to the chosen operator the minimum, maximum, range, sum, average, variance or standard deviation of the selected timesteps is written to outfile. The time of outfile is determined by the time in the middle of all contributing timesteps of infile. This can be changed with the CDO option --timestat_date <first|middle|last>.
Operators
 timselmin

Time selection minimum
For every adjacent sequence t_1,...,t_n of timesteps of the same selected time range it is:
o(t,x) = min{i(t′,x), t_{1} < t′ ≤ t_{n}}

 timselmax

Time selection maximum
For every adjacent sequence t_1,...,t_n of timesteps of the same selected time range it is:
o(t,x) = max{i(t′,x), t_{1} < t′ ≤ t_{n}}

 timselrange

Time selection range
For every adjacent sequence t_1,...,t_n of timesteps of the same selected time range it is:
o(t,x) = range{i(t′,x), t_{1} < t′ ≤ t_{n}}

 timselsum

Time selection sum
For every adjacent sequence t_1,...,t_n of timesteps of the same selected time range it is:
o(t,x) = sum{i(t′,x), t_{1} < t′ ≤ t_{n}}

 timselmean

Time selection mean
For every adjacent sequence t_1,...,t_n of timesteps of the same selected time range it is:
o(t,x) = mean{i(t′,x), t_{1} < t′ ≤ t_{n}}

 timselavg

Time selection average
For every adjacent sequence t_1,...,t_n of timesteps of the same selected time range it is:
o(t,x) = avg{i(t′,x), t_{1} < t′ ≤ t_{n}}

 timselstd

Time selection standard deviation
Normalize by n. For every adjacent sequence t_1,...,t_n of timesteps of the same selected time range it is:
o(t,x) = std{i(t′,x), t_{1} < t′ ≤ t_{n}}

 timselstd1

Time selection standard deviation (n-1)
Normalize by (n-1). For every adjacent sequence t_1,...,t_n of timesteps of the same selected time range it is:
o(t,x) = std1{i(t′,x), t_{1} < t′ ≤ t_{n}}

 timselvar

Time selection variance
Normalize by n. For every adjacent sequence t_1,...,t_n of timesteps of the same selected time range it is:
o(t,x) = var{i(t′,x), t_{1} < t′ ≤ t_{n}}

 timselvar1

Time selection variance (n-1)
Normalize by (n-1). For every adjacent sequence t_1,...,t_n of timesteps of the same selected time range it is:
o(t,x) = var1{i(t′,x), t_{1} < t′ ≤ t_{n}}
Parameter
 nsets

INTEGER Number of input timesteps for each output timestep
 noffset

INTEGER Number of input timesteps skipped before the first timestep range (optional)
 nskip

INTEGER Number of input timesteps skipped between timestep ranges (optional)
Example
Assume an input dataset has monthly means over several years. To compute seasonal means from monthly means, the first two months have to be skipped:
cdo timselmean,3,2 infile outfile
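How the nsets, noffset and nskip parameters group the timesteps can be sketched in plain Python (the month numbers are an illustrative stand-in for the input timesteps):

```python
# Sketch of how timselmean,3,2 groups timesteps: skip noffset=2 leading
# steps, then form non-overlapping groups of nsets=3 (nskip=0 between groups);
# each group yields one output timestep.
months = list(range(1, 13)) * 2            # two years of monthly timesteps

nsets, noffset, nskip = 3, 2, 0
groups = []
t = noffset
while t + nsets <= len(months):
    groups.append(months[t:t + nsets])     # one output timestep per group
    t += nsets + nskip
```

With monthly input this yields the meteorological seasons: the first group is March-April-May, and later groups wrap December with the following January and February.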
2.8.14 TIMSELPCTL  Time range percentile values
Synopsis
timselpctl,p,nsets[,noffset[,nskip]] infile1 infile2 infile3 outfile
Description
This operator computes percentile values over a selected number of timesteps in infile1. The algorithm uses histograms with minimum and maximum bounds given in infile2 and infile3, respectively. The default number of histogram bins is 101. The default can be overridden by setting the environment variable CDO_PCTL_NBINS to a different value. The files infile2 and infile3 should be the result of corresponding timselmin and timselmax operations, respectively. The time of outfile is determined by the time in the middle of all contributing timesteps of infile1. This can be changed with the CDO option --timestat_date <first|middle|last>.
For every adjacent sequence t_1,...,t_n of timesteps of the same selected time range it is:
o(t,x) = pth percentile{i(t′,x), t_{1} < t′ ≤ t_{n}}
Parameter
 p

FLOAT Percentile number in 0, ..., 100
 nsets

INTEGER Number of input timesteps for each output timestep
 noffset

INTEGER Number of input timesteps skipped before the first timestep range (optional)
 nskip

INTEGER Number of input timesteps skipped between timestep ranges (optional)
Environment
 CDO_PCTL_NBINS

Sets the number of histogram bins. The default number is 101.
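The histogram-based estimate can be sketched numerically (toy NumPy version, not the CDO code): values are binned between the given minimum and maximum bounds, and the percentile is read off the cumulative bin counts.

```python
import numpy as np

# Sketch of a histogram-based percentile: bin the values between their
# minimum and maximum, then find the first bin whose cumulative count
# reaches the requested fraction. nbins plays the role of CDO_PCTL_NBINS.
values = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
nbins = 101                                        # default number of bins
counts, edges = np.histogram(values, bins=nbins,
                             range=(values.min(), values.max()))
cum = np.cumsum(counts) / counts.sum()
p = 50.0
estimate = edges[np.searchsorted(cum, p / 100.0) + 1]  # upper edge of that bin
```

The estimate is only as accurate as the bin width, which is why a larger CDO_PCTL_NBINS refines the result.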
2.8.15 RUNSTAT  Running statistical values
Synopsis
<operator>,nts infile outfile
Description
This module computes running statistical values over a selected number of timesteps. Depending on the chosen operator the minimum, maximum, range, sum, average, variance or standard deviation of a selected number of consecutive timesteps read from infile is written to outfile. The time of outfile is determined by the time in the middle of all contributing timesteps of infile. This can be changed with the CDO option --timestat_date <first|middle|last>.
Operators
 runmin

Running minimum
o(t + (nts-1)/2, x) = min{i(t,x), i(t+1,x), ..., i(t+nts-1,x)}

 runmax

Running maximum
o(t + (nts-1)/2, x) = max{i(t,x), i(t+1,x), ..., i(t+nts-1,x)}

 runrange

Running range
o(t + (nts-1)/2, x) = range{i(t,x), i(t+1,x), ..., i(t+nts-1,x)}

 runsum

Running sum
o(t + (nts-1)/2, x) = sum{i(t,x), i(t+1,x), ..., i(t+nts-1,x)}

 runmean

Running mean
o(t + (nts-1)/2, x) = mean{i(t,x), i(t+1,x), ..., i(t+nts-1,x)}

 runavg

Running average
o(t + (nts-1)/2, x) = avg{i(t,x), i(t+1,x), ..., i(t+nts-1,x)}

 runstd

Running standard deviation
Normalize by n.
o(t + (nts-1)/2, x) = std{i(t,x), i(t+1,x), ..., i(t+nts-1,x)}

 runstd1

Running standard deviation (n-1)
Normalize by (n-1).
o(t + (nts-1)/2, x) = std1{i(t,x), i(t+1,x), ..., i(t+nts-1,x)}

 runvar

Running variance
Normalize by n.
o(t + (nts-1)/2, x) = var{i(t,x), i(t+1,x), ..., i(t+nts-1,x)}

 runvar1

Running variance (n-1)
Normalize by (n-1).
o(t + (nts-1)/2, x) = var1{i(t,x), i(t+1,x), ..., i(t+nts-1,x)}
Parameter
 nts

INTEGER Number of timesteps
Environment
 CDO_TIMESTAT_DATE

Sets the time stamp in outfile to the "first", "middle" or "last" contributing timestep of infile.
Example
To compute the running mean over 9 timesteps use:
cdo runmean,9 infile outfile
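The centered running mean can be sketched numerically (NumPy, with a toy nts=3 window; the series values are hypothetical):

```python
import numpy as np

# Sketch of runmean,3: a running mean over nts consecutive timesteps.
# Output value at position t averages i(t), i(t+1), ..., i(t+nts-1);
# its timestamp corresponds to the middle contributing timestep.
series = np.array([1.0, 2.0, 4.0, 8.0, 16.0])
nts = 3
runmean = np.convolve(series, np.ones(nts) / nts, mode="valid")
```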
2.8.16 RUNPCTL  Running percentile values
Synopsis
runpctl,p,nts infile outfile
Description
This module computes running percentiles over a selected number of timesteps in infile. The time of outfile is determined by the time in the middle of all contributing timesteps of infile. This can be changed with the CDO option --timestat_date <first|middle|last>.
o(t + (nts-1)/2, x) = pth percentile{i(t,x), i(t+1,x), ..., i(t+nts-1,x)}
Parameter
 p

FLOAT Percentile number in 0, ..., 100
 nts

INTEGER Number of timesteps
Example
To compute the running 50th percentile (median) over 9 timesteps use:
cdo runpctl,50,9 infile outfile
2.8.17 TIMSTAT  Statistical values over all timesteps
Synopsis
<operator> infi