Filename extension | .nc, .cdf |
---|---|
Internet media type | application/netcdf |
Magic number | CDF\001 |
Developed by | UCAR |
Type of format | scientific binary data |
Extended from | CDF |
NetCDF (Network Common Data Form) is a set of software libraries and self-describing, machine-independent data formats that support the creation, access, and sharing of array-oriented scientific data. The project homepage is hosted by the Unidata program at the University Corporation for Atmospheric Research (UCAR), which is also the chief source of netCDF software, standards development, and updates. The format is an open standard. The netCDF Classic and 64-bit Offset formats are an international standard of the Open Geospatial Consortium.[1]
The project is actively supported by UCAR. Version 4.0, released in 2008, greatly enhances the data model by allowing the use of the HDF5 data file format. Version 4.1 adds support for C and Fortran client access to specified subsets of remote data via OPeNDAP.
The format was originally based on the conceptual model of the NASA CDF but has since diverged and is not compatible with it.
The netCDF libraries support three binary formats for netCDF files: the classic format, the 64-bit offset format (a variant of the classic format that permits larger files), and the netCDF-4/HDF5 format introduced with version 4.0.
All formats are "self-describing". This means that there is a header which describes the layout of the rest of the file, in particular the data arrays, as well as arbitrary file metadata in the form of name/value attributes. The format is platform independent, with issues such as endianness being addressed in the software libraries. The data are stored in a fashion that allows efficient subsetting.
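To make the subsetting point concrete, here is a minimal sketch using the netCDF C API that opens a file and reads only a 10 × 10 hyperslab of one variable; the file name example.nc, the variable name temperature, and the subset sizes are assumptions for illustration.

```c
/* Minimal sketch: read a subset (hyperslab) of a variable with the netCDF C API.
 * The file name, variable name, and dimension sizes are illustrative assumptions. */
#include <stdio.h>
#include <stdlib.h>
#include <netcdf.h>

/* Abort with the library's error message if a call fails. */
static void check(int status)
{
    if (status != NC_NOERR) {
        fprintf(stderr, "netCDF error: %s\n", nc_strerror(status));
        exit(EXIT_FAILURE);
    }
}

int main(void)
{
    int ncid, varid;
    float slab[10][10];            /* buffer for a 10 x 10 subset */
    size_t start[2] = {0, 0};      /* corner of the subset */
    size_t count[2] = {10, 10};    /* extent of the subset along each dimension */

    /* Open an existing file read-only; only the self-describing header is parsed up front. */
    check(nc_open("example.nc", NC_NOWRITE, &ncid));

    /* Look the variable up by name using the header metadata. */
    check(nc_inq_varid(ncid, "temperature", &varid));

    /* Read only the requested hyperslab; the library handles byte order and on-disk layout. */
    check(nc_get_vara_float(ncid, varid, start, count, &slab[0][0]));

    printf("first value: %g\n", slab[0][0]);
    check(nc_close(ncid));
    return 0;
}
```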
Starting with version 4.0, the netCDF API[2] allows the use of the HDF5 data format. NetCDF users can create HDF5 files with benefits not available with the classic netCDF format, such as much larger files and multiple unlimited dimensions.
Full backward compatibility in accessing old netCDF files and using previous versions of the C and Fortran APIs is supported.
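As a rough illustration of these netCDF-4 features, the sketch below creates an HDF5-based file by passing the NC_NETCDF4 creation flag and defines two unlimited dimensions, which the classic format does not allow; the file, dimension, and variable names are invented for the example.

```c
/* Sketch: create an HDF5-based netCDF-4 file with two unlimited dimensions.
 * Names are illustrative; error handling is reduced to a single helper. */
#include <stdio.h>
#include <stdlib.h>
#include <netcdf.h>

static void check(int status)
{
    if (status != NC_NOERR) {
        fprintf(stderr, "netCDF error: %s\n", nc_strerror(status));
        exit(EXIT_FAILURE);
    }
}

int main(void)
{
    int ncid, time_dim, obs_dim, varid, dimids[2];

    /* NC_NETCDF4 selects the HDF5-based storage format. */
    check(nc_create("enhanced.nc", NC_CLOBBER | NC_NETCDF4, &ncid));

    /* Two unlimited dimensions -- not possible in the classic format. */
    check(nc_def_dim(ncid, "time", NC_UNLIMITED, &time_dim));
    check(nc_def_dim(ncid, "obs",  NC_UNLIMITED, &obs_dim));

    dimids[0] = time_dim;
    dimids[1] = obs_dim;
    check(nc_def_var(ncid, "measurement", NC_FLOAT, 2, dimids, &varid));

    check(nc_enddef(ncid));
    check(nc_close(ncid));
    return 0;
}
```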
The software libraries supplied by UCAR provide read-write access to netCDF files, encoding and decoding the necessary arrays and metadata. The core library is written in C and provides APIs for C and C++, as well as two APIs for Fortran applications, one for Fortran 77 and one for Fortran 90. An independent implementation, also developed and maintained by Unidata, is written entirely in Java; it extends the core data model and adds additional functionality. Interfaces to netCDF based on the C library are also available in other languages, including R (the ncdf[3] and ncvar packages), Perl, Python, Ruby, MATLAB, IDL, and Octave. The API calls are very similar across the different languages, apart from inevitable differences of syntax. The API calls in version 2 were rather different from those in version 3, but are still supported by versions 3 and 4 for backward compatibility. Application programmers using supported languages need not normally be concerned with the file structure itself, even though the formats are openly documented.
A wide range of application software has been written which makes use of netCDF files. These range from command line utilities to graphical visualization packages. A number are listed below, and a longer list[4] is on the UCAR website.
It is commonly used in climatology, meteorology, and oceanography (e.g., weather forecasting and climate research) and in GIS applications.
It is an input/output format for many GIS applications, and for general scientific data exchange. To quote from the Unidata site: "NetCDF (network Common Data Form) is an interface for array-oriented data access and a library that provides an implementation of the interface. The netCDF library also defines a machine-independent format for representing scientific data."
The Climate and Forecast (CF) conventions are metadata conventions for earth science data, intended to promote the processing and sharing of files created with the NetCDF Application Programmer Interface (API). The conventions define metadata that are included in the same file as the data (thus making the file "self-describing"), that provide a definitive description of what the data in each variable represents, and of the spatial and temporal properties of the data (including information about grids, such as grid cell bounds and cell averaging methods). This enables users of data from different sources to decide which data are comparable, and allows building applications with powerful extraction, regridding, and display capabilities.
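As a hedged sketch of how CF metadata look in practice, the example below attaches a global Conventions attribute and a few typical per-variable CF attributes (standard_name, units, cell_methods) using the netCDF C API; the file name, the variable name tas, and the attribute values are illustrative choices for this example.

```c
/* Sketch: attach CF-style metadata attributes to a netCDF variable.
 * File, dimension, variable names, and attribute values are illustrative. */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <netcdf.h>

static void check(int status)
{
    if (status != NC_NOERR) {
        fprintf(stderr, "netCDF error: %s\n", nc_strerror(status));
        exit(EXIT_FAILURE);
    }
}

/* Convenience wrapper for text attributes. */
static void put_text(int ncid, int varid, const char *name, const char *value)
{
    check(nc_put_att_text(ncid, varid, name, strlen(value), value));
}

int main(void)
{
    int ncid, time_dim, varid;

    check(nc_create("cf_example.nc", NC_CLOBBER, &ncid));
    check(nc_def_dim(ncid, "time", NC_UNLIMITED, &time_dim));
    check(nc_def_var(ncid, "tas", NC_FLOAT, 1, &time_dim, &varid));

    /* Global attribute naming the convention the file follows. */
    put_text(ncid, NC_GLOBAL, "Conventions", "CF-1.6");

    /* Per-variable CF attributes describing what the data represent. */
    put_text(ncid, varid, "standard_name", "air_temperature");
    put_text(ncid, varid, "units", "K");
    put_text(ncid, varid, "cell_methods", "time: mean");

    check(nc_enddef(ncid));
    check(nc_close(ncid));
    return 0;
}
```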
An extension of netCDF for parallel computing called Parallel-NetCDF (or PnetCDF) has been developed by Argonne National Laboratory and Northwestern University.[13] It is built on MPI-IO, the I/O extension to MPI communications. Using the high-level netCDF data structures, the Parallel-NetCDF library can apply optimizations to distribute file reads and writes efficiently among multiple processors. The Parallel-NetCDF package can read and write only the classic and 64-bit offset formats; it cannot read or write the HDF5-based format available with netCDF-4.0. The Parallel-NetCDF package uses different, but similar, APIs in Fortran and C.
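A minimal sketch of the PnetCDF style of use is given below, assuming the conventional ncmpi_* entry points (ncmpi_create, ncmpi_def_dim, ncmpi_put_vara_float_all, and so on) and an MPI environment; each rank writes one element of a shared variable collectively. File, dimension, and variable names are invented for the example.

```c
/* Sketch: collective parallel write with the PnetCDF (ncmpi_*) C API.
 * Assumes an MPI environment; names and sizes are illustrative. */
#include <stdio.h>
#include <stdlib.h>
#include <mpi.h>
#include <pnetcdf.h>

static void check(int status)
{
    if (status != NC_NOERR) {
        fprintf(stderr, "PnetCDF error: %s\n", ncmpi_strerror(status));
        MPI_Abort(MPI_COMM_WORLD, 1);
    }
}

int main(int argc, char **argv)
{
    int rank, nprocs, ncid, dimid, varid;
    MPI_Offset start[1], count[1];
    float value;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &nprocs);

    /* Create a classic-format file collectively over all ranks. */
    check(ncmpi_create(MPI_COMM_WORLD, "parallel.nc", NC_CLOBBER,
                       MPI_INFO_NULL, &ncid));
    check(ncmpi_def_dim(ncid, "x", (MPI_Offset)nprocs, &dimid));
    check(ncmpi_def_var(ncid, "data", NC_FLOAT, 1, &dimid, &varid));
    check(ncmpi_enddef(ncid));

    /* Each rank writes its own element; the library coordinates the I/O via MPI-IO. */
    start[0] = rank;
    count[0] = 1;
    value = (float)rank;
    check(ncmpi_put_vara_float_all(ncid, varid, start, count, &value));

    check(ncmpi_close(ncid));
    MPI_Finalize();
    return 0;
}
```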
Parallel I/O in the Unidata netCDF library has been supported since release 4.0 for HDF5 data files. Since version 4.1.1, the Unidata netCDF C library supports parallel I/O to classic and 64-bit offset files via the Parallel-NetCDF library, but through the standard netCDF API.
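By contrast, parallel I/O through the Unidata library keeps the ordinary nc_* calls. The sketch below assumes a netCDF-C build with parallel HDF5 support and the nc_create_par and nc_var_par_access entry points from netcdf_par.h (mode flags have varied across versions), so treat it as an outline under those assumptions rather than a definitive recipe.

```c
/* Sketch: parallel write to an HDF5-based netCDF-4 file using the regular nc_* API.
 * Assumes a parallel-enabled netCDF-C build; names and sizes are illustrative. */
#include <stdio.h>
#include <stdlib.h>
#include <mpi.h>
#include <netcdf.h>
#include <netcdf_par.h>

static void check(int status)
{
    if (status != NC_NOERR) {
        fprintf(stderr, "netCDF error: %s\n", nc_strerror(status));
        MPI_Abort(MPI_COMM_WORLD, 1);
    }
}

int main(int argc, char **argv)
{
    int rank, nprocs, ncid, dimid, varid;
    size_t start[1], count[1];
    float value;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &nprocs);

    /* Create an HDF5-based file that all ranks share. */
    check(nc_create_par("parallel4.nc", NC_CLOBBER | NC_NETCDF4,
                        MPI_COMM_WORLD, MPI_INFO_NULL, &ncid));
    check(nc_def_dim(ncid, "x", (size_t)nprocs, &dimid));
    check(nc_def_var(ncid, "data", NC_FLOAT, 1, &dimid, &varid));
    check(nc_enddef(ncid));

    /* Ask for collective access, then write with the ordinary netCDF call. */
    check(nc_var_par_access(ncid, varid, NC_COLLECTIVE));
    start[0] = (size_t)rank;
    count[0] = 1;
    value = (float)rank;
    check(nc_put_vara_float(ncid, varid, start, count, &value));

    check(nc_close(ncid));
    MPI_Finalize();
    return 0;
}
```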
The netCDF C library, and the libraries based on it (Fortran 77 and Fortran 90, C++, and all third-party libraries) can, starting with version 4.1.1, read some data in other data formats. Data in the HDF5 format can be read, with some restrictions. Data in the HDF4 format can be read by the netCDF C library if created using the HDF4 Scientific Data (SD) API.
The NetCDF-Java library currently reads a number of file formats and remote access protocols in addition to netCDF itself.
There are a number of other formats in development. Since each of these is accessed transparently through the NetCDF API, the NetCDF-Java library is said to implement a Common Data Model for scientific datasets.
The Common Data Model has three layers, which build on top of each other to add successively richer semantics: a data access layer, a coordinate system layer, and a scientific data type layer.
The data model of the data access layer is a generalization of the NetCDF-3 data model and is substantially the same as the NetCDF-4 data model. The coordinate system layer implements and extends the concepts in the Climate and Forecast Metadata Conventions. The scientific data type layer allows data to be manipulated in coordinate space, analogous to the Open Geospatial Consortium specifications. The identification of coordinate systems and data types is ongoing, but users can plug in their own classes at runtime for specialized processing.
This article was originally based on material from the Free On-line Dictionary of Computing, which is licensed under the GFDL.