NetCDF 4.9.3
NetCDF Byterange Support

Introduction

Suppose that you have the URL of a remote dataset that is a normal netcdf-3 or netcdf-4 file.

The netCDF-c library now supports read-only access to such datasets using the HTTP byte-range capability, assuming that the remote server supports byte-range access.

Two examples:

  1. A Thredds server supporting the "fileserver" protocol, and containing a netcdf classic file.
    • location: "https://remotetest.unidata.ucar.edu/thredds/fileserver/testdata/2004050300_eta_211.nc#mode=bytes"
  2. An Amazon S3 dataset containing a netcdf enhanced file.
    • location: "http://noaa-goes16.s3.amazonaws.com/ABI-L1b-RadC/2017/059/03/OR_ABI-L1b-RadC-M3C13_G16_s20170590337505_e20170590340289_c20170590340316.nc#mode=bytes"

Other remote servers may also provide byte-range access in a similar form.

It is important to note that this is not intended as a true production capability because it is believed that this kind of access can be quite slow. In addition, the byte-range IO drivers do not currently do any sort of optimization or caching.

Configuration

This capability is enabled by passing the *--enable-byterange* option to the *./configure* command for Automake. For CMake, the option flag is *-DNETCDF_ENABLE_BYTERANGE=true*.
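For example, either of the following builds enables the capability (the source path and build directory are illustrative):

```shell
# Automake-based build
./configure --enable-byterange

# CMake-based build, run from a separate build directory
cmake -DNETCDF_ENABLE_BYTERANGE=true /path/to/netcdf-c
```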

This capability requires access to libcurl; an error will occur if byterange is enabled but libcurl cannot be located. In this respect, it is similar to the DAP2 and DAP4 capabilities.

Run-time Usage

In order to use this capability at run-time, with ncdump for example, it is necessary to provide a URL pointing to the basic dataset to be accessed. The URL must be annotated to tell the netcdf-c library that byte-range access should be used. This is indicated by appending the phrase #mode=bytes to the end of the URL. The two examples above show how this will look.

In order to determine the kind of file being accessed, the netcdf-c library will read what is called the "magic number" from the beginning of the remote dataset. This magic number is a specific set of bytes that indicates the kind of file: classic, enhanced, cdf5, etc.

Architecture

Internally, this capability is implemented with the following drivers:

  1. libdispatch/dhttp.c – wrap libcurl operations.
  2. libsrc/httpio.c – provide byte-range reading to the netcdf-3 dispatcher.
  3. libhdf5/H5FDhttp.c – provide byte-range reading to the netcdf-4 dispatcher for non-cloud storage.
  4. H5FDros3.c – provide byte-range reading to the netcdf-4 dispatcher for cloud storage (Amazon S3 currently).

Both httpio.c and H5FDhttp.c are adapters that use dhttp.c to do the actual work. Testing for the magic number is also carried out using the dhttp.c code. H5FDros3.c is also an adapter, but specialized for cloud storage access.

NetCDF Classic Access

The netcdf-3 code in the directory libsrc is built using a secondary dispatch mechanism called ncio. This allows the netcdf-3 code to be independent of the lowest-level IO access mechanisms. This is how in-memory and mmap-based access is implemented. The file httpio.c is the dispatcher used to provide byte-range IO for the netcdf-3 code. Note that httpio.c is mostly just an adapter between the ncio API and the dhttp.c code.

NetCDF Enhanced Access

Non-Cloud Access

Similar to the netcdf-3 code, the HDF5 library provides a secondary dispatch mechanism H5FD. This allows the HDF5 code to be independent of the lowest level IO access mechanisms. The netcdf-4 code in libhdf5 is built on the HDF5 library, so it indirectly inherits the H5FD mechanism.

The file H5FDhttp.c implements the H5FD dispatcher API and provides byte-range IO for the netcdf-4 code (and for the HDF5 library as a side effect). It only works for non-cloud servers such as the Unidata Thredds server.

Note that H5FDhttp.c is mostly just an adapter between the H5FD API and the dhttp.c code.