I/O Tag

Mark Miller, Lawrence Livermore National Laboratory, Guest Blogger

The HDF5 library has supported the I/O requirements of HPC codes at Lawrence Livermore National Laboratory (LLNL) since the late 1990s. In particular, HDF5 used in the Multiple Independent File (MIF) parallel I/O paradigm has supported LLNL codes’ scalable I/O requirements and has recently been used successfully at scales as large as 1,000,000 parallel tasks.

What is the MIF Parallel I/O Paradigm?

In the MIF paradigm, a computational object (an array, a mesh, etc.) is decomposed into pieces and distributed, perhaps unevenly, over parallel tasks. For I/O, the tasks are organized into groups, and each group writes one file using round-robin exclusive access for the tasks in the group. Writes within groups are serialized but...
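
The round-robin exclusive access described above is often implemented as a baton pass: the first task in each group creates the group's file, and every subsequent task waits for a message from its predecessor before opening the file for serial writes. Below is a minimal sketch of that pattern, assuming MPI and serial HDF5; the group size, file-naming scheme, and baton message are illustrative assumptions, not the interface of any particular LLNL code.

```c
/* Minimal MIF baton-pass sketch: GROUP_SIZE tasks share one HDF5 file,
 * writing to it one at a time in rank order. */
#include <mpi.h>
#include <hdf5.h>
#include <stdio.h>

#define GROUP_SIZE 8   /* illustrative choice of tasks per file */

int main(int argc, char **argv)
{
    int rank, size, baton = 0;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    int group   = rank / GROUP_SIZE;   /* which file this task writes */
    int in_rank = rank % GROUP_SIZE;   /* position within the group   */

    char fname[64];
    snprintf(fname, sizeof fname, "dump_%03d.h5", group);

    hid_t file;
    if (in_rank == 0) {
        /* the first task in the group creates the file */
        file = H5Fcreate(fname, H5F_ACC_TRUNC, H5P_DEFAULT, H5P_DEFAULT);
    } else {
        /* wait for the baton from the previous task, then reopen the
         * file for exclusive, purely serial access */
        MPI_Recv(&baton, 1, MPI_INT, rank - 1, 0, MPI_COMM_WORLD,
                 MPI_STATUS_IGNORE);
        file = H5Fopen(fname, H5F_ACC_RDWR, H5P_DEFAULT);
    }

    /* ... write this task's piece of the decomposed object here ... */

    H5Fclose(file);   /* close before handing off so the next task
                         sees a consistent file */

    if (in_rank < GROUP_SIZE - 1 && rank + 1 < size)
        MPI_Send(&baton, 1, MPI_INT, rank + 1, 0, MPI_COMM_WORLD);

    MPI_Finalize();
    return 0;
}
```

Because each file is only ever open on one task at a time, plain serial HDF5 suffices here; no parallel HDF5 build is required.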

Elena Pourmal, The HDF Group

What happened to my compression?

One of the most powerful features of HDF5 is the ability to compress or otherwise modify, or “filter,” your data during I/O. By far, the most common user-defined filters are ones that perform data compression. As you know, there are many compression options. There are filters provided by the HDF5 library (“predefined filters”), which include several types of filters for data compression, data shuffling, and checksums. Users can also implement their own “user-defined filters” and employ them with the HDF5 library.

[Image: Cars in a 1973 Philadelphia junkyard – National Archives and Records Administration]

While the programming model and usage of the compression filters are straightforward, it is possible for...
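
Applying a predefined filter takes only a few dataset-creation-property-list calls. Below is a minimal sketch using the built-in shuffle and deflate (gzip) filters, assuming a serial HDF5 build with zlib; the file name, dataset shape, chunk size, and compression level are illustrative assumptions.

```c
/* Minimal sketch: create a chunked dataset with shuffle + gzip filters. */
#include <hdf5.h>

int main(void)
{
    hsize_t dims[2]  = {1024, 1024};  /* illustrative dataset extent  */
    hsize_t chunk[2] = {64, 64};      /* filters require chunked data */

    /* guard against a library build that lacks the deflate filter */
    if (H5Zfilter_avail(H5Z_FILTER_DEFLATE) <= 0)
        return 1;

    hid_t file  = H5Fcreate("compressed.h5", H5F_ACC_TRUNC,
                            H5P_DEFAULT, H5P_DEFAULT);
    hid_t space = H5Screate_simple(2, dims, NULL);
    hid_t dcpl  = H5Pcreate(H5P_DATASET_CREATE);

    H5Pset_chunk(dcpl, 2, chunk);   /* filters operate on chunks     */
    H5Pset_shuffle(dcpl);           /* byte shuffle often helps gzip */
    H5Pset_deflate(dcpl, 6);        /* gzip compression, level 6     */

    hid_t dset = H5Dcreate2(file, "data", H5T_NATIVE_INT, space,
                            H5P_DEFAULT, dcpl, H5P_DEFAULT);

    /* ... H5Dwrite the data; each chunk is filtered on the way out ... */

    H5Dclose(dset);
    H5Pclose(dcpl);
    H5Sclose(space);
    H5Fclose(file);
    return 0;
}
```

Checking H5Zfilter_avail up front, and inspecting the resulting file with h5dump -pH afterward, are simple ways to confirm that the compression you asked for actually happened.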