Today, I am writing to announce the launch of both a Community Edition (CE) and a subscription-based Enterprise Support Edition (ESE) for HDF5. This model is similar to that of Red Hat, Lustre, and other open source projects. We are moving down this path to address the challenges we continually face in achieving sustainability and increasing community involvement....
On March 30th, we announced the release of HDF5 version 1.10.2. With this release, we accomplished all the tasks planned for the major HDF5 1.10 series. It is time for applications to begin (or continue) their migration from HDF5 1.8 to the new major release, as we will be dropping support for HDF5 1.8 in Summer 2019. In this blog post, we will focus only on the major new features and bug fixes in HDF5 1.10.2. Hopefully, after reading about them, you will be convinced it is time to upgrade to HDF5 1.10.2....
HDF Server allows producers of complex datasets to share their results with a wide audience. We used it to develop the Global Fire Emissions Database (GFED) Analysis Tool, a website that guides the user through our dataset. A simple web map interface allows users to select an area of interest and produce data visualization charts. ...
The HDF Group’s HDF Server has been nominated for Best Use of HPC in the Cloud, and Best HPC Software Product or Technology in HPCwire’s 2016 Readers’ Choice Awards.
HDF Server is a Python-based web service that enables full read/write access to HDF5 data over the web – it can be used to send and receive HDF5 data through an HTTP-based REST interface.
While HDF5 provides powerful scalability and speed for complex datasets of all sizes, many HDF5 datasets used in HPC environments are extremely large and cannot easily be downloaded or moved across the internet, yet users often need to access only a small subset of the data. Using HDF Server, data can be kept in one...
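To make the subsetting idea concrete, here is a small sketch of how a client might request just a slice of a remote dataset through HDF Server's REST interface. The host name and dataset UUID are hypothetical placeholders, and the exact query syntax should be checked against the HDF Server documentation; the point is that only the selected hyperslab travels over the wire, not the whole file.

```python
# Sketch: build the URL for a hyperslab request against HDF Server.
# Endpoint and dataset UUID below are illustrative, not real resources.

def value_url(endpoint, dset_uuid, select=None):
    """Return the URL of a dataset's /value resource.

    `select` is a hyperslab string such as "[0:10,0:10]", asking the
    server to return only that slice instead of the entire dataset.
    """
    url = "{}/datasets/{}/value".format(endpoint.rstrip("/"), dset_uuid)
    if select:
        url += "?select=" + select
    return url

# Fetch only a 10x10 corner of a large 2-D dataset:
url = value_url("https://data.example.org", "d-1234", "[0:10,0:10]")
print(url)
# A real client would then issue something like:
#   requests.get(url, headers={"Accept": "application/json"})
```

The server evaluates the selection and returns just those values, so a multi-terabyte dataset can sit in one place while clients pull down only the bytes they need.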
UPDATE January 19, 2016: The HDF5-1.10.0-alpha1 release is now available, adding Collective Metadata I/O to these features:
– Concurrent Access to an HDF5 File: Single Writer / Multiple Reader (SWMR)
– Virtual Dataset (VDS)
– Scalable Chunk Indexing
– Persistent Free File Space Tracking
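The first feature in the list above, SWMR, lets one process append to a file while other processes read it concurrently without any inter-process coordination. A minimal sketch of the pattern using h5py (assuming h5py built against an HDF5 1.10-series library; for brevity, the "reader" here runs in the same process):

```python
# Single Writer / Multiple Reader (SWMR) sketch with h5py.
# Requires h5py linked against HDF5 1.10 or later.
import h5py
import numpy as np

path = "swmr_demo.h5"

# Writer: the file must use the latest file format before SWMR can be enabled.
with h5py.File(path, "w", libver="latest") as f:
    dset = f.create_dataset("data", shape=(0,), maxshape=(None,), dtype="i8")
    f.swmr_mode = True  # from here on, readers may open the file concurrently

    # Append some data and flush so readers can see a consistent state.
    dset.resize((4,))
    dset[:] = np.arange(4)
    dset.flush()

    # A concurrent reader opens the file with swmr=True:
    with h5py.File(path, "r", libver="latest", swmr=True) as rf:
        rdset = rf["data"]
        rdset.refresh()  # pick up the writer's most recently flushed state
        print(rdset[:])
```

In a real deployment the reader would be a separate process (for example, a monitoring tool tailing data as an experiment writes it), periodically calling `refresh()` to see newly flushed records.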
We’re pleased to announce the release of HDF5 1.10.0-alpha0.
HDF5 1.10.0, planned for release in spring 2016, is a major release containing many new features. On January 6, 2016, we announced the release of the first alpha version of the software.
The alpha0 release contains some (but not all) of the features that will be in HDF5 1.10.0. The Single Writer/Multiple Reader and Virtual Dataset features, below, are both included in this alpha release, as are scalable chunk indexing and persistent free file space tracking. More features, such as enhancements to parallel HDF5 and support for compressing contiguous datasets, will be added in upcoming alpha releases.
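The Virtual Dataset (VDS) feature mentioned above lets data spread across several source files be read through a single virtual view, with no copying. A small sketch with h5py (file and dataset names here are illustrative, not from any real project):

```python
# Virtual Dataset (VDS) sketch with h5py: stitch two source files into
# one virtual view. Requires h5py linked against HDF5 1.10 or later.
import h5py
import numpy as np

# Create two source files, each holding one quarter-row of the data.
for i in range(2):
    with h5py.File("part{}.h5".format(i), "w") as f:
        f.create_dataset("x", data=np.arange(4) + 4 * i)

# Describe how the source datasets map into the virtual dataset.
layout = h5py.VirtualLayout(shape=(8,), dtype="i8")
for i in range(2):
    layout[4 * i : 4 * (i + 1)] = h5py.VirtualSource(
        "part{}.h5".format(i), "x", shape=(4,)
    )

# The virtual dataset reads through to part0.h5 and part1.h5 on demand.
with h5py.File("combined.h5", "w", libver="latest") as f:
    f.create_virtual_dataset("x", layout)
    print(f["x"][:])
```

Applications see `combined.h5` as one contiguous dataset, which is useful when, for example, each node of a simulation or each detector writes its own file.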
Editor’s Note: Since this post was written in 2015, The HDF Group has developed Kita, a new product that addresses the challenges of adapting large scale array-based computing to the cloud and object storage while intelligently handling the full data management life cycle. If this is something that interests you, we’d love to hear from you.
HDF Server is a new product from The HDF Group which enables HDF5 resources to be accessed and modified using Hypertext Transfer Protocol (HTTP).
HDF Server, released in February 2015, was first developed as a proof of concept that enabled remote access to HDF5 content using a RESTful API. HDF Server version 0.1.0 wasn't intended for use in a production environment, since it didn't yet provide security features and access controls. Following its successful debut, The HDF Group incorporated the additional planned features. The newest version of HDF Server provides exciting capabilities for accessing HDF5 data in an easy and secure way.