
Segmentation fault for specific NetCDF file

Added by Jesper Larsen over 5 years ago

Hi CDO people

I encountered an issue with cdo and wanted to report it. When I run the following command with the provided input files I get a segmentation fault:

$ cdo -collgrid tmp2e.nc tmp1e.nc tmp3.nc
Segmentation fault (core dumped)

$ cdo -V
Climate Data Operators version 1.7.0 (http://mpimet.mpg.de/cdo)
Compiler: gcc -g -O2 -fPIE -fstack-protector-strong -Wformat -Werror=format-security -Wall -pedantic -fPIC -fopenmp
version: gcc (Ubuntu 5.3.1-9ubuntu2) 5.3.1 20160220
Features: DATA PTHREADS OpenMP4 HDF5 NC4/HDF5/threadsafe OPeNDAP SZ Z UDUNITS2 PROJ.4 MAGICS CURL FFTW3 SSE2
Libraries: HDF5/1.8.16 proj/4.92 curl/7.47.0
Filetypes: srv ext ieg grb grb2 nc nc2 nc4 nc4c
CDI library version : 1.7.0
GRIB_API library version : 1.14.4
netCDF library version : 4.4.0 of Mar 29 2016 11:41:40 $
HDF5 library version : 1.8.16
SERVICE library version : 1.4.0
EXTRA library version : 1.4.0
IEG library version : 1.4.0
FILE library version : 1.8.2

If I remove the coordinates attribute from the VMDR variable, it does not segfault.
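For reference, one way to strip that attribute is with NCO's ncatted (the output file name below is just a placeholder):

$ ncatted -a coordinates,VMDR,d,, tmp2e.nc tmp2e_noattr.nc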


Replies (9)

RE: Segmentation fault for specific NetCDF file - Added by Ralf Mueller over 5 years ago

hi!

I checked your data: it is regularly gridded, so the 'coordinates' attribute is not needed. Using 'time' in this coordinates attribute is not supported by CDO, and since your coordinates are not time dependent, it is not needed at all.

After removing the attribute, cdo-1.9.5 produces no segfault. The result is uploaded.

hth
ralf

aa.nc (2.1 MB)

RE: Segmentation fault for specific NetCDF file - Added by Jesper Larsen over 5 years ago

Hi Ralf

Thanks for the response. I have fixed the issue as you suggested :-)

The reason there is no time variable is simply that I only extracted a single timestep from the files before uploading them. And you are of course right that the data is regularly gridded, so the coordinates attribute is redundant.
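For example, a single-timestep extract like that can be produced with seltimestep (the input file name below is a placeholder for the full dataset):

$ cdo seltimestep,1 full_dataset.nc tmp1e.nc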

I should, however, add that I got the data from http://marine.copernicus.eu/ (the global ocean model), so it is probably used in a number of places.

Best regards,
Jesper

RE: Segmentation fault for specific NetCDF file - Added by mathieu woillez over 5 years ago

Hi all,

Like Jesper, I am also experiencing a segmentation fault when using collgrid.

$ cdo -collgrid tmp1.nc tmp2.nc out.nc
Segmentation fault (core dumped)

Below are the NetCDF files I want to merge spatially. Could you help me figure out what is wrong with them?

Thanks,

Mathieu

RE: Segmentation fault for specific NetCDF file - Added by Ralf Mueller over 5 years ago

Hi Mathieu!
I can confirm the segfault with the current release 1.9.5.

Thx for the report!

cheers
ralf

RE: Segmentation fault for specific NetCDF file - Added by Diana Bern over 5 years ago

Hi Ralf,

I am also having the same issue, getting a segfault with version 1.9.5. Could you please tell me how to fix it?

Thanks,
Diana

RE: Segmentation fault for specific NetCDF file - Added by Ralf Mueller over 5 years ago

sorry, currently I cannot. I managed to run the call without errors using a build with clang and no optimization. So if you can compile with clang on your own, that might be an option for now.

hth
ralf

RE: Segmentation fault for specific NetCDF file - Added by Diana Bern over 5 years ago

Hi Ralf,

Could you please give more info on how to build with clang?

Thanks,
DB

RE: Segmentation fault for specific NetCDF file - Added by Ralf Mueller over 5 years ago

depending on your host system you have to install clang and call configure with

./configure CC=clang CXX=clang++
plus all the additional options for netCDF and so on.
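on Debian/Ubuntu a complete sequence might look roughly like this (the netCDF/HDF5 prefixes are placeholders; pass whatever options your usual CDO build needs):

$ sudo apt-get install clang
$ ./configure CC=clang CXX=clang++ --with-netcdf=/usr/local --with-hdf5=/usr/local
$ make && make install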

RE: Segmentation fault for specific NetCDF file - Added by Uwe Schulzweida over 5 years ago

Hi Mathieu,

The file tmp1.nc contains 323x243 cells and the file tmp2.nc contains 208x528 cells. For the CDO operator collgrid, each source region must be a structured longitude/latitude grid box, and all regions together must form a new structured longitude/latitude grid box. This is not the case with your input files.
We will improve the error message for this case.
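As an illustration of this requirement, a split/collect round trip on one structured grid works, because each piece is a longitude/latitude box and the pieces tile the original box (file names here are placeholders):

$ cdo distgrid,2,2 in.nc part
$ cdo collgrid,2 part* out.nc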

Cheers,
Uwe
