Release of HDF5 1.10.0-alpha1
single-writer/multiple-reader (SWMR), virtual datasets, scalable chunk indexing, free file space tracking, collective metadata I/O
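As a taste of the SWMR feature, here is a minimal Python sketch using h5py; it assumes an h5py build linked against HDF5 1.10 with SWMR support, and the file and dataset names are illustrative:

```python
# Writer side: create the file with the latest on-disk format, then
# enable SWMR so concurrent readers can safely poll the file.
import h5py

with h5py.File("swmr.h5", "w", libver="latest") as f:
    dset = f.create_dataset("values", (0,), maxshape=(None,), dtype="f8")
    f.swmr_mode = True  # readers may now open the file concurrently

    for i in range(5):
        dset.resize((i + 1,))
        dset[i] = float(i)
        dset.flush()  # publish the new element to SWMR readers

# Reader side (normally a separate process): open with swmr=True.
with h5py.File("swmr.h5", "r", libver="latest", swmr=True) as f:
    print(f["values"][:])
```

Note that SWMR requires chunked datasets (here implied by the unlimited maxshape) and the latest file format, which is why the writer passes libver="latest".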
HDF Server is a new product from The HDF Group that enables HDF5 resources to be accessed and modified using the Hypertext Transfer Protocol (HTTP). The newest version of HDF Server adds capabilities for accessing data easily and securely.
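As a hedged illustration only (not official HDF Server documentation), a read over HTTP might look like the sketch below; the endpoint and the DNS-style domain name are assumptions, so consult the HDF Server docs for the actual REST API of a given deployment:

```python
# Hypothetical request against a locally running HDF Server instance.
import requests

endpoint = "http://127.0.0.1:5000"   # assumed local server address
domain = "tall.data.hdfgroup.org"    # HDF5 files map to DNS-style domains

# Ask the server to describe the root of the domain as JSON.
resp = requests.get(endpoint + "/", params={"host": domain})
resp.raise_for_status()
print(resp.json())
```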
We are currently planning for a Q2 2016 release of the product. In the meantime, we are working with a few early adopters on finalizing the initial feature set. If you have additional questions about HDF5/ODBC, or if you would like to become an early adopter, please contact us.
John Readey, The HDF Group We’re pleased to announce that The HDF Group is now a member of the Open Commons Consortium (formerly the Open Cloud Consortium), a not-for-profit that manages and operates cloud computing and data commons infrastructure to support scientific, medical, health care, and environmental research. The HDF Group will be participating…
Joel Plutchak, The HDF Group The HDF Group’s support for and use of the Java programming language consists of Java wrappers for the HDF4 and HDF5 C libraries, an Object Model definition and implementation, and HDFView, a graphical file viewing application. In this article we’ll discuss what we’re doing now with Java, and look toward…
Anthony Scopatz, Assistant Professor at the University of South Carolina, HDF guest blogger “Python is great and its ecosystem for scientific computing is world class. HDF5 is amazing and is rightly the gold standard for persistence for scientific data. Many people use HDF5 from Python, and this number is only growing due to pandas’ HDFStore…
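For readers new to the Python side, a minimal sketch of round-tripping a DataFrame through pandas’ HDFStore (the file name and key below are illustrative):

```python
# Persist a DataFrame to HDF5 and read it back via pandas' HDFStore.
import pandas as pd

df = pd.DataFrame({"x": [1, 2, 3], "y": [0.1, 0.2, 0.3]})

with pd.HDFStore("example.h5") as store:
    store.put("measurements", df)   # write the DataFrame to the HDF5 file
    loaded = store["measurements"]  # read it back as a new DataFrame

print(loaded)
```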
Quincey Koziol, The HDF Group “A supercomputer is a device for turning compute-bound problems into I/O-bound problems.” – Ken Batcher, Prof. Emeritus, Kent State University. HDF5 grew out of a collaboration between the National Center for Supercomputing Applications (NCSA) and the US Department of Energy’s Advanced Simulation and Computing Program (ASC), so high-performance computing (HPC)…
David Dotson, doctoral student, Center for Biological Physics, Arizona State University; HDF Guest Blogger Recently I had the pleasure of meeting Anthony Scopatz for the first time at SciPy 2015, and we talked shop. I was interested in his opinions on MDSynthesis, a Python package our lab has designed to help manage the complexity of raw and derived…
Mohamad Chaarawi, The HDF Group Second in a series: Parallel HDF5 In my previous blog post, I discussed the need for parallel I/O and a few paradigms for doing parallel I/O from applications. HDF5 is an I/O middleware library that supports (or will support in the near future) most of the I/O paradigms we talked about…
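As a hedged sketch of one such paradigm (not code from the series itself), parallel HDF5 can be driven from Python through h5py’s MPI-IO driver; this assumes an h5py built against a parallel HDF5 plus mpi4py, with illustrative file and dataset names:

```python
# Run with e.g.: mpiexec -n 4 python parallel_write.py
from mpi4py import MPI
import h5py

comm = MPI.COMM_WORLD
rank = comm.Get_rank()

# All ranks open the same file through the MPI-IO driver.
with h5py.File("parallel.h5", "w", driver="mpio", comm=comm) as f:
    dset = f.create_dataset("ranks", (comm.Get_size(),), dtype="i8")
    dset[rank] = rank  # each rank writes its own element independently
```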
Using collective I/O to reduce the number of processes independently accessing the file system substantially improved the metadata reads in cgp_open, yielding execution times 100 to 1000 times faster than the previous implementation.
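The CGNS work itself is in C, but the independent-versus-collective distinction can be sketched from Python as well, under the same parallel h5py assumptions as the previous sketch; h5py’s collective context manager asks MPI-IO to perform the access as one coordinated operation rather than N independent ones:

```python
# Run with e.g.: mpiexec -n 4 python collective_write.py
from mpi4py import MPI
import h5py

comm = MPI.COMM_WORLD
rank = comm.Get_rank()

with h5py.File("collective.h5", "w", driver="mpio", comm=comm) as f:
    dset = f.create_dataset("ranks", (comm.Get_size(),), dtype="i8")
    # Inside this block the write is collective: every rank must
    # participate, and MPI-IO can merge the accesses into fewer,
    # larger file system operations.
    with dset.collective:
        dset[rank] = rank
```

Note the contract this implies: in collective mode every rank must reach the write, which is what lets the MPI layer coordinate (and drastically reduce) the traffic hitting the file system.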