Webinar Followup: Parallel I/O with HDF5 and Performance Tuning Techniques
On June 26, 2020, Scot Breitenfeld of The HDF Group presented a webinar titled “Parallel I/O with HDF5 and Performance Tuning Techniques.”
The HDF Group (HDF®), the maintainers and creators of the open source HDF5 library and file format, has joined the COVID-19 High Performance Computing (HPC) Consortium as an affiliate to provide expertise that can enhance and accelerate COVID-19 research.
A slide deck and recording are available for the June 5, 2020 webinar, “An Introduction to HDF5 in HPC Environments.”
Damaris is a middleware layer that enriches existing HPC data format libraries (e.g., HDF5) with data aggregation and asynchronous data management capabilities. It can also be employed for in situ analysis and visualization.
The HDF Group’s HDF Server has been nominated for Best Use of HPC in the Cloud and Best HPC Software Product or Technology in HPCWire’s 2016 Readers’ Choice Awards. HDF Server is a Python-based web service that enables full read/write web access to HDF data – it can be used to send and receive HDF5 data over the web.
Champaign, IL — The HDF Group today announced that its Board of Directors has appointed David Pearah as its new Chief Executive Officer. The HDF Group is a software company dedicated to creating high performance computing technology to address many of today’s Big Data challenges. Pearah replaces Mike Folk, who is retiring after ten years.
We are excited and pleased to announce HDF5-1.10.0, the most powerful version of our flagship software ever. This major new release is packed with new capabilities that address important data challenges faced by our user community, and it contains many important new features and changes.
Quincey Koziol, The HDF Group. “A supercomputer is a device for turning compute-bound problems into I/O-bound problems.” – Ken Batcher, Prof. Emeritus, Kent State University. HDF5 began as a collaboration between the National Center for Supercomputing Applications (NCSA) and the US Department of Energy’s Advanced Simulation and Computing Program (ASC), so high-performance computing (HPC) has been central to the project from the start.
Using collective I/O to reduce the number of processes independently accessing the file system substantially improved metadata reads in cgp_open, yielding execution times 100–1000 times faster than the previous implementation.
Mohamad Chaarawi, The HDF Group. First in a series on parallel HDF5. What costs applications significant time and resources besides actual computation? Slow I/O. It is well known that I/O subsystems are very slow compared to other parts of a computing system. Applications use I/O to store simulation output for future use.