The European HDF Users Group (HUG) Summer 2021 was held July 7-8. We hosted a variety of talks from users throughout the HDF5 community on topics like new storage architectures, parallel and cloud I/O, performance and debugging, analysis and visualization, wrappers and VOL connectors, new uses of HDF5, and HDF5 applications in science and industry.
Tuesdays at 1:00 p.m. CDT: join us for a series of weekly, unscripted, live events! The HDF Group's Gerd Heber will try to answer attendee questions and, for example, go over the previous week's HDF Forum posts. The HDF Clinics are free sessions intended to help users tackle real-world HDF problems, from a common cold to severe headaches, and offer relief where possible. As time permits, we will include how-tos, offer advice on tool usage, review your code samples, teach you survival in the documentation jungle, and discuss what's new or just around the corner in the land of HDF.
New features include support for running on Microsoft Azure and support for POSIX-based storage. A complete list of new features can be found on GitHub.
Learn more about HSDS
We’re excited to announce the 2021 HDF5 Users Group (HUG), happening virtually on October 12-15, 2021. We are currently soliciting submissions in two categories: “Paper and Presentation” (abstracts due June 1) and “Presentation” (abstracts due August 1). Check out the call for papers now. For more information, stay tuned to the conference website at https://www.hdfgroup.org/hug/hug21/.
HDF® supports n-dimensional datasets, and each element in a dataset may itself be a complex object.
HDF® is portable, with no vendor lock-in, and is a self-describing file format, meaning all data and metadata can be passed along in one file.
HDF® is a software library that runs on a range of computational platforms, from laptops to massively parallel systems, and implements a high-level API with C, C++, Fortran 90, and Java interfaces. HDF has a large ecosystem with 700+ GitHub projects.
HDF® provides high-performance I/O with a rich set of integrated performance features that allow for access-time and storage-space optimizations.
There is no limit on the number or size of data objects in the collection, giving great flexibility for big data.
HDF5® allows you to keep the metadata with the data, streamlining data lifecycles and pipelines.
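The points above can be seen in a few lines of code. Here is a minimal sketch using the h5py Python bindings: it creates a 3-dimensional dataset whose elements are compound (structured) objects, and attaches metadata to the same object as attributes so everything travels in one file. The file name, dataset name, fields, and attribute values are illustrative, not from the source.

```python
import h5py
import numpy as np

# A compound (record) element type: each dataset element holds several fields.
dt = np.dtype([("time", "f8"), ("temp", "f4"), ("label", "S10")])

with h5py.File("example.h5", "w") as f:
    # Datasets may have any number of dimensions; here a 2x3x4 array
    # of compound elements.
    dset = f.create_dataset("readings", shape=(2, 3, 4), dtype=dt)
    # Metadata stays with the data as attributes on the same object,
    # so the file remains self-describing.
    dset.attrs["units"] = "celsius"
    dset.attrs["instrument"] = "sensor-array-7"

with h5py.File("example.h5", "r") as f:
    dset = f["readings"]
    print(dset.shape)  # (2, 3, 4)
    print(dict(dset.attrs))
```

Because the dataset's shape, element type, and attributes are all stored in the file itself, any HDF5-aware tool or language binding can open `example.h5` and reconstruct the data without external documentation.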