Tuesdays at 1:00 p.m. CDT: join us for a series of weekly, unscripted, live events! The HDF Group’s Gerd Heber will try to answer attendee questions and, for example, go over the previous week’s HDF Forum posts. The HDF Clinics are free sessions intended to help users tackle real-world HDF problems, from a common cold to severe headaches, and to offer relief where possible. As time permits, we will include how-tos, offer advice on tool usage, review your code samples, teach you survival in the documentation jungle, and discuss what’s new or just around the corner in the land of HDF.
HDF® supports n-dimensional datasets, and each element in a dataset may itself be a complex object.
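A minimal sketch of this idea using the h5py Python bindings (assumed installed): a 3-dimensional dataset whose elements are compound objects. The file name, field names, and values are illustrative, not from the original text.

```python
import numpy as np
import h5py  # HDF5 Python bindings, assumed installed

# Each element is a compound object: a timestamp plus a 3-vector reading.
element = np.dtype([("timestamp", "f8"), ("reading", "f4", (3,))])

# A 3-dimensional (4 x 5 x 6) dataset of such elements.
with h5py.File("sensors.h5", "w") as f:
    grid = f.create_dataset("grid", shape=(4, 5, 6), dtype=element)
    grid[0, 0, 0] = (1700000000.0, [1.0, 2.0, 3.0])

# Read a single compound element back.
with h5py.File("sensors.h5", "r") as f:
    rec = f["grid"][0, 0, 0]
```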
HDF® is portable, with no vendor lock-in, and is a self-describing file format, meaning that everything (all data and metadata) can be passed along in one file.
HDF® is a software library that runs on a range of computational platforms, from laptops to massively parallel systems, and implements a high-level API with C, C++, Fortran 90, and Java interfaces. HDF has a large ecosystem with 700+ GitHub projects.
HDF® delivers high-performance I/O with a rich set of integrated performance features that allow for access-time and storage-space optimizations.
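Two of those integrated performance features can be sketched in a few lines with h5py (assumed installed): chunked storage, which enables efficient partial I/O, and gzip compression, which trades CPU time for storage space. The dataset name and sizes here are illustrative.

```python
import numpy as np
import h5py  # assumed installed

data = np.arange(1_000_000, dtype="i4").reshape(1000, 1000)

with h5py.File("perf.h5", "w") as f:
    # Chunked layout lets readers fetch 100x100 tiles without scanning the
    # whole array; gzip level 6 shrinks what is written to disk.
    dset = f.create_dataset("a", data=data, chunks=(100, 100),
                            compression="gzip", compression_opts=6)
    chunks, comp = dset.chunks, dset.compression
```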
There is no limit on the number or size of data objects in the collection, giving great flexibility for big data.
HDF5® allows you to keep the metadata with the data, streamlining data lifecycles and pipelines.
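Keeping metadata with the data is done through HDF5 attributes, sketched below with h5py (assumed installed); the attribute names and values are hypothetical examples.

```python
import numpy as np
import h5py  # assumed installed

with h5py.File("run.h5", "w") as f:
    dset = f.create_dataset("temperature", data=np.zeros(10))
    # Attributes are metadata that travel with the dataset in the same file.
    dset.attrs["units"] = "kelvin"
    dset.attrs["instrument"] = "thermocouple-3"  # hypothetical identifier

# Anyone receiving run.h5 gets the data and its description together.
with h5py.File("run.h5", "r") as f:
    units = f["temperature"].attrs["units"]
```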