Tobias Weinzierl, Durham University, UK, Sven Köppel, FIAS, Germany, Michael Bader, TUM, Germany, HDF Guest Bloggers
The ExaHyPE project develops a solver engine for hyperbolic partial differential equations on adaptive Cartesian meshes. It supports various HDF5 output formats.
Exascale computing is expected to allow scientists and engineers to simulate, and ultimately understand, wave phenomena with unprecedented accuracy over unprecedented time spans. To harvest the power of exascale machines, however, well-suited software has to become available. ExaHyPE is an H2020 project writing a PDE solver engine, similar in spirit to a 3D computer game engine, that will allow groups with decent CSE expertise to write their own solver for hyperbolic equation systems within a year. The resulting solver will scale to exascale.
This is made possible by the unique fusion of three methodological concepts: high-order ADER-DG discretisations combined with Finite Volume limiters, adaptive Cartesian meshes resulting from spacetrees, and compute kernels tailored to particular compute architectures. While the consortium delivers a generic code base, it validates the capability of the software by means of two demonstrator applications: a seismic risk assessment code and an astrophysics code simulating scenarios such as rotating and merging binary neutron stars and black holes. The ExaHyPE software will help to improve our understanding of seismic hazards and of the causes and characteristics of gravitational waves emitted by rotating binary star systems.
Long-running extreme-scale simulations always suffer from massive output files and, thus, massive I/O bandwidth demands. This problem is amplified by dynamically changing adaptive mesh refinement (AMR), as user data has to be enriched with complicated metadata. The ExaHyPE consortium first mapped its output data onto VTK's unstructured meshes to allow quick assessment of the output with standardised, free visualisation software. This mapping is straightforward, as ExaHyPE relies on a generalised octree idea, but it does not exploit the structuredness of the data. Second, the consortium adopted the well-established, HDF5-based output formats of the FLASH code and the Einstein Toolkit's Carpet AMR code. While those formats support parallel I/O and allow the consortium to rely on a set of established data analysis tools, the resulting memory footprint remains comparable to storing unstructured grids in binary format for FLASH, and is unacceptably high for Cactus/Carpet.
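To illustrate the first approach, the following is a minimal sketch of how one Cartesian patch can be dumped as VTK_VOXEL cells of a vtkUnstructuredGrid that ParaView or VisIt can open. The patch size, file name and use of vtkXMLUnstructuredGridWriter are illustrative assumptions, not ExaHyPE's actual plotter code.

```cpp
// Sketch only: write one 4x4x4 Cartesian patch as VTK_VOXEL cells of an
// unstructured grid (.vtu). Values are placeholders, not ExaHyPE output.
#include <vtkCellType.h>
#include <vtkPoints.h>
#include <vtkSmartPointer.h>
#include <vtkUnstructuredGrid.h>
#include <vtkXMLUnstructuredGridWriter.h>

int main() {
  const int n = 4;            // cells per patch axis (assumed)
  const double h = 1.0 / n;   // mesh width of a unit patch

  // (n+1)^3 vertices of the Cartesian patch.
  auto points = vtkSmartPointer<vtkPoints>::New();
  for (int k = 0; k <= n; ++k)
    for (int j = 0; j <= n; ++j)
      for (int i = 0; i <= n; ++i)
        points->InsertNextPoint(i * h, j * h, k * h);

  auto grid = vtkSmartPointer<vtkUnstructuredGrid>::New();
  grid->SetPoints(points);

  auto vertex = [n](int i, int j, int k) {
    return static_cast<vtkIdType>(i + (n + 1) * (j + (n + 1) * k));
  };
  for (int k = 0; k < n; ++k)
    for (int j = 0; j < n; ++j)
      for (int i = 0; i < n; ++i) {
        // VTK_VOXEL expects the eight corners in x-fastest order.
        vtkIdType ids[8] = {
            vertex(i, j, k),         vertex(i + 1, j, k),
            vertex(i, j + 1, k),     vertex(i + 1, j + 1, k),
            vertex(i, j, k + 1),     vertex(i + 1, j, k + 1),
            vertex(i, j + 1, k + 1), vertex(i + 1, j + 1, k + 1)};
        grid->InsertNextCell(VTK_VOXEL, 8, ids);
      }

  auto writer = vtkSmartPointer<vtkXMLUnstructuredGridWriter>::New();
  writer->SetFileName("patch.vtu");
  writer->SetInputData(grid);
  writer->Write();
  return 0;
}
```

Repeating this for every patch of every time step makes clear why the unstructured representation cannot exploit the regularity within the patches: every vertex and every cell is stored explicitly.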
Third, the consortium therefore decided to implement its own HDF5-based output format, which projects the spacetree's topology onto an unstructured set of tiny patches held in plain tables. While metadata such as the mesh topology is no longer stored explicitly, the plain-table approach fits massively parallel I/O and benefits from on-the-fly compression. Empirical evidence suggests that this HDF5-based file format yields files that are more than a factor of two smaller than FLASH AMR files or binary VTK files, even though the grid may change in each and every time step of the simulation and the data dumps therefore have to hold spatial data per time step.
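The following sketch illustrates the plain-table idea with the HDF5 C API: each patch becomes one row in flat, chunked, gzip-compressed datasets, with no explicit mesh topology. Dataset names, patch counts and the serial (non-MPI) file access here are illustrative assumptions, not ExaHyPE's actual format.

```cpp
// Sketch only: dump a batch of equally sized patches as flat, chunked,
// deflate-compressed HDF5 tables. Layout and names are assumptions.
#include <hdf5.h>
#include <vector>

int main() {
  const hsize_t numPatches   = 1024;  // patches held by this rank (assumed)
  const hsize_t dofsPerPatch = 125;   // e.g. 5x5x5 unknowns per patch (assumed)

  // One table row per patch; geometry and payload live in separate tables,
  // so no explicit mesh topology has to be stored.
  std::vector<double> centres(numPatches * 3, 0.0);
  std::vector<double> dof(numPatches * dofsPerPatch, 0.0);
  // ... fill centres/dof from the solver ...

  hid_t file = H5Fcreate("patches.h5", H5F_ACC_TRUNC, H5P_DEFAULT, H5P_DEFAULT);

  // Chunked layout is required for on-the-fly deflate compression and fits
  // the "plain table of fixed-size patches" idea.
  hsize_t dofDims[2]  = {numPatches, dofsPerPatch};
  hsize_t dofChunk[2] = {64, dofsPerPatch};
  hid_t dcpl = H5Pcreate(H5P_DATASET_CREATE);
  H5Pset_chunk(dcpl, 2, dofChunk);
  H5Pset_deflate(dcpl, 6);            // gzip level 6

  hid_t dofSpace = H5Screate_simple(2, dofDims, nullptr);
  hid_t dofSet   = H5Dcreate2(file, "/dof", H5T_NATIVE_DOUBLE, dofSpace,
                              H5P_DEFAULT, dcpl, H5P_DEFAULT);
  H5Dwrite(dofSet, H5T_NATIVE_DOUBLE, H5S_ALL, H5S_ALL, H5P_DEFAULT,
           dof.data());

  hsize_t geoDims[2] = {numPatches, 3};
  hid_t geoSpace = H5Screate_simple(2, geoDims, nullptr);
  hid_t geoSet   = H5Dcreate2(file, "/patch_centres", H5T_NATIVE_DOUBLE,
                              geoSpace, H5P_DEFAULT, H5P_DEFAULT, H5P_DEFAULT);
  H5Dwrite(geoSet, H5T_NATIVE_DOUBLE, H5S_ALL, H5S_ALL, H5P_DEFAULT,
           centres.data());

  H5Dclose(geoSet); H5Sclose(geoSpace);
  H5Dclose(dofSet); H5Sclose(dofSpace); H5Pclose(dcpl);
  H5Fclose(file);
  return 0;
}
```

Because every patch row has the same length, such tables can also be written collectively by many ranks via hyperslab selections, which is what makes the plain-table layout attractive for massively parallel I/O.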
Acknowledgements:
This work is made possible through and received funding from the European Union’s Horizon 2020 research and innovation programme under grant agreement No 671698 (ExaHyPE).
For more information, see:
Tobias Weinzierl, Durham University, UK, Sven Köppel, FIAS, Germany, and Michael Bader, Technical University of Munich, Germany, are members of the ExaHyPE consortium. Michael serves as project PI, and his group focuses on performance engineering and on parallel I/O integrated into multithreaded, multi-rank environments. Sven is interested in computational astrophysics, runs the ExaHyPE astrophysics experiments and integrates ExaHyPE's outputs into the data analysis workflows of Luciano Rezzolla's group at FIAS. Tobias is in charge of the AMR code base Peano developed by his group and drives the development of tailored ExaHyPE output formats.