The HDF® Group and the Lawrence Berkeley National Lab (LBNL) hosted a virtual HDF5® User Group Meeting from October 13 to 16, 2020. We are working on the recordings, and they will be posted soon.
New features include support for running on Microsoft Azure and support for POSIX-based storage. A complete list of new features can be found on GitHub.
Learn more about HSDS
HDF® supports n-dimensional datasets, and each element in a dataset may itself be a complex object.
HDF® is portable, with no vendor lock-in, and is a self-describing file format, meaning all data and metadata travel together in a single file.
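A short sketch of both points, using the common h5py Python binding (an assumption here; the filename, field names, and shape are illustrative). Each element of the 2-D dataset is a compound object, and the file itself records the shape and types, so a reader needs no external schema:

```python
import numpy as np
import h5py

# Hypothetical compound element type: each element holds a timestamp,
# a 3-component vector, and a quality flag.
sample = np.dtype([("t", "f8"), ("vec", "f4", (3,)), ("ok", "u1")])

# Build a 2-D (4 x 5) array of such elements and store it.
arr = np.zeros((4, 5), dtype=sample)
arr["t"] = 1.5  # fill the timestamp field of every element

with h5py.File("demo.h5", "w") as f:
    f.create_dataset("samples", data=arr)

# Reopen: the file carries its own description of shape and element type.
with h5py.File("demo.h5", "r") as f:
    d = f["samples"]
    print(d.shape, d.dtype.names)  # (4, 5) ('t', 'vec', 'ok')
```

Any HDF5-aware tool can open `demo.h5` and discover the dataset layout without being told anything about it in advance.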
HDF® is a software library that runs on a range of computational platforms, from laptops to massively parallel systems, and implements a high-level API with C, C++, Fortran 90, and Java interfaces. HDF has a large ecosystem with 700+ GitHub projects.
HDF® provides high-performance I/O with a rich set of integrated features that allow optimization of access time and storage space.
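Two of those integrated features, chunked storage and compression, can be sketched as follows (again assuming the h5py binding; the chunk size and compression level are illustrative choices, not recommendations):

```python
import numpy as np
import h5py

data = np.random.rand(500, 500)

with h5py.File("perf.h5", "w") as f:
    # Chunked layout lets HDF5 read or write sub-blocks without touching
    # the whole array; the gzip filter trades CPU time for storage space.
    f.create_dataset("grid", data=data, chunks=(100, 100),
                     compression="gzip", compression_opts=4)

with h5py.File("perf.h5", "r") as f:
    d = f["grid"]
    print(d.chunks, d.compression)  # (100, 100) gzip
    corner = d[:100, :100]  # reads only the chunks that cover this slice
```

Because chunking and compression live in the storage layer, application code reads and writes the dataset exactly as it would an uncompressed, contiguous one.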
There is no limit on the number or size of data objects in the collection, giving great flexibility for big data.
HDF5® allows you to keep the metadata with the data, streamlining data lifecycles and pipelines.
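In HDF5, per-object attributes are the usual way to keep metadata next to the data it describes. A minimal sketch, assuming the h5py binding (the attribute names and values are hypothetical):

```python
import numpy as np
import h5py

with h5py.File("run.h5", "w") as f:
    temps = f.create_dataset("temperature", data=np.zeros(10))
    # Attributes are stored inside the same file as the dataset,
    # so data and its metadata cannot drift apart in a pipeline.
    temps.attrs["units"] = "kelvin"
    temps.attrs["instrument"] = "sensor-A"  # hypothetical instrument name

with h5py.File("run.h5", "r") as f:
    print(f["temperature"].attrs["units"])  # kelvin
```

Downstream tools that receive `run.h5` get the units and provenance along with the numbers, with no sidecar files to lose.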