Innovation

Principal architect John Readey covers an update to the Highly Scalable Data Service (HSDS). With the latest HSDS update, the maximum size limit per HTTP request no longer applies: large requests are streamed back to the client as the bytes are fetched from storage. Regardless of the size of a read request, the amount of memory used by the service stays bounded, and clients start receiving bytes while the server is still processing the tail chunks of the selection. The same applies to write operations—the service fetches some bytes from the connection, updates storage, and fetches more bytes until the entire request is complete. Learn more about...
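The streaming behavior described above can be sketched in a few lines of Python. This is a toy model, not HSDS code: the chunk size and storage buffer are made up, and the point is only that memory use is bounded by one chunk while the client sees bytes before the tail chunks are read.

```python
# Toy sketch of the streaming idea behind the new HSDS behavior:
# instead of buffering an entire selection in memory, the server
# yields each chunk's bytes as it is fetched from storage.
# CHUNK_SIZE and the storage buffer below are hypothetical.

CHUNK_SIZE = 4  # bytes per storage chunk (toy value)

def fetch_chunk(storage: bytes, index: int) -> bytes:
    """Simulate fetching one chunk's bytes from object storage."""
    start = index * CHUNK_SIZE
    return storage[start:start + CHUNK_SIZE]

def stream_selection(storage: bytes, nbytes: int):
    """Yield the requested bytes chunk by chunk; memory use stays
    bounded by one chunk regardless of total request size."""
    sent = 0
    index = 0
    while sent < nbytes:
        chunk = fetch_chunk(storage, index)
        if not chunk:
            break  # ran out of stored data
        piece = chunk[: nbytes - sent]
        yield piece  # client receives bytes before later chunks are read
        sent += len(piece)
        index += 1

# The client reassembles the selection as the pieces arrive.
data = b"".join(stream_selection(b"abcdefghij", 10))
```

The same loop, run in the other direction (read some bytes from the connection, write them to storage, repeat), models the streaming write path.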

If you are looking to store HDF5 data in the cloud, there are several technologies to choose from, and deciding between them can be confusing. In this post, I cover some of the options in the hope of helping HDF users make the best decision for their deployment. Each project will have its own requirements and special considerations, so please take this as just a starting point....

The purpose of this introduction is to highlight and celebrate a community contribution whose impact we are just beginning to understand. Its principal author, Mr. Lucas C. Villa Real, calls it HDF5-UDF and describes it as "a mechanism to generate HDF5 dataset values on-the-fly using user-defined functions (UDFs)." This matter-of-fact characterization is quite accurate, but I would like to provide some context for what it means for us users of HDF5....
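To make the quoted description concrete, here is a conceptual sketch of what "dataset values generated on-the-fly" means. This is not the HDF5-UDF API itself—the class and function names are invented for illustration—but it captures the idea: the dataset stores a user-defined function rather than bytes, and reads invoke that function.

```python
# Conceptual sketch (not the actual HDF5-UDF API): a dataset whose
# values come from a user-defined function evaluated at read time,
# rather than from bytes stored on disk.

class VirtualDataset:
    """Hypothetical stand-in for a UDF-backed HDF5 dataset."""

    def __init__(self, shape, udf):
        self.shape = shape
        self.udf = udf  # user-defined function producing values

    def __getitem__(self, idx: slice):
        # Values are computed on demand, at the moment of the read.
        return [self.udf(i) for i in range(*idx.indices(self.shape[0]))]

# A "dataset" of squares that occupies no storage for its values.
squares = VirtualDataset((5,), lambda i: i * i)
values = squares[0:5]  # computed on the fly
```

In HDF5-UDF the function lives inside the file and is executed transparently when a reader accesses the dataset, so existing HDF5 tools see ordinary values.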

The HDF Group's formal comment to a DOE request on stewardship for scientific and high-performance computing was published in the Federal Register. We had a joint position paper at January's ASCR Workshop on Visualization for Scientific Discovery, Decision-Making, & Communication. The paper, Whither Visualization Logic, was written by Leigh Orf (University of Wisconsin), Lucas Villa Real (IBM Research), and Gerd Heber (The HDF Group). Thomas Caswell (active h5py contributor, Brookhaven National Laboratory) also has a position paper, Visualization of Structured Data. Additionally, we have two position papers at the ASCR Workshop on the Management and Storage of Scientific Data. The first, The Twilight of I/O as a User Concept, was led by Jerome Soumagne (The HDF Group) and is joint work with Andres Marquez of PNNL....

We are excited to announce a new strategy for delivering HDF5 features: Experimental Releases. Experimental releases allow us to get major new features into the hands of our users so that they can test the features and provide feedback before we integrate them into a subsequent maintenance release. Our first experimental release, HDF5 version 1.13.0, is now available....

HSDS (Highly Scalable Data Service) is a REST-based service for HDF data and part of HDF Cloud, our set of solutions for cloud deployments. In a recent blog post about the latest HSDS release, we discussed many of the new features in the 0.6 release, including support for Azure....
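Because HSDS is REST-based, clients talk to it over plain HTTP. As a minimal sketch, here is how a client might build the URL that requests metadata for a domain (the HSDS analogue of an HDF5 file); the endpoint and domain path below are placeholders, not a real deployment.

```python
# Sketch of addressing HSDS over HTTP. The endpoint host/port and
# the domain path are hypothetical placeholder values.
from urllib.parse import urlencode

endpoint = "http://hsds.example.org:5101"  # hypothetical HSDS endpoint
domain = "/home/myuser/sample.h5"          # hypothetical HSDS domain

def domain_url(endpoint: str, domain: str) -> str:
    """Build the GET URL that asks HSDS for metadata about a domain."""
    return f"{endpoint}/?{urlencode({'domain': domain})}"

url = domain_url(endpoint, domain)
# An HTTP GET on this URL would return JSON describing the domain;
# in practice most users go through the h5pyd client library, which
# mirrors the h5py API and issues these requests under the hood.
```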

The HDF Group’s technical mission is to provide rapid, easy, and permanent access to complex data. FishEye's vision is "Synthesizing the world’s real-time data". This white paper is intended for embedded system users, software engineers, integrators, and testers who use or want to use HDF5 to access, collect, use, and analyze machine data. FishEye has developed an innovative process for efficiently exposing data from embedded systems, simplifying and liberating that data for real-time analysis, machine learning, and cloud-enabled services....