The HDF Group is a New OCC Member
John Readey, The HDF Group
We’re pleased to announce that The HDF Group is now a member of the Open Commons Consortium (formerly the Open Cloud Consortium), a not-for-profit that manages and operates cloud computing and data commons infrastructure to support scientific, medical, health care, and environmental research.
The HDF Group will participate in the NOAA Data Alliance Working Group (WG), serving on the WG committee that will determine which datasets will be hosted in the NOAA data commons and which tools will be used in the computational ecosystem surrounding it.
“The Open Commons Consortium (OCC) is a truly innovative concept for supporting scientific computing,” said Mike Folk, The HDF Group’s President. “Their cloud computing and data commons infrastructure supports a wide range of research, and OCC’s membership spans government, academia, and the private sector. This is a good opportunity for us to learn about how we can best serve these communities.”
The HDF Group will also participate in the Open Science Data Cloud (OSDC) working group and receive resource allocations on the OSDC Griffin resource. The HDF Group’s John Readey is working with the OCC and others to investigate ways to use Griffin effectively. Readey says, “Griffin is a great testbed for cloud-based systems. With access to object storage (using the AWS S3 API) and the ability to programmatically create VMs, we will explore new methods for the analysis of scientific datasets.”
Video: “Making it easier for scientists to analyze data.” https://youtu.be/LHbsS6znPOE
Readey continued, “Currently we are working on techniques to use ipyparallel (a Python-based library for distributed computing) to tackle data analysis problems that would take an excessively long time to run on a single system.*

“Also, we will be using Griffin to performance-test HDF Server (our new REST-based service for HDF data) to understand how it performs with varying numbers of clients. By working with the consortium’s many science participants, and by taking advantage of Griffin’s storage architecture and compute cluster, we have a unique opportunity to explore ways to improve both computational and data services in the cloud.”
*More on ipyparallel in an upcoming blog article.
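The core pattern behind the ipyparallel work described above is a parallel map: split a dataset into chunks, apply the same analysis function to each chunk on a separate worker, and gather the results. The sketch below illustrates that pattern with the standard library’s `concurrent.futures` standing in for ipyparallel engines, so it runs without an IPython cluster; the `analyze` function and sample data are hypothetical, not from the upcoming article. With a running cluster, `ipyparallel.Client()[:].map_sync(...)` plays the role of the pool’s `map`.

```python
# Sketch of the distributed-map pattern that ipyparallel provides.
# Threads stand in here for remote ipyparallel engines so the example
# is self-contained; with ipyparallel the equivalent would be roughly:
#     view = ipyparallel.Client()[:]
#     results = view.map_sync(analyze, chunks)
from concurrent.futures import ThreadPoolExecutor

def analyze(chunk):
    """Hypothetical per-chunk analysis: sum of squares of the values."""
    return sum(x * x for x in chunk)

def parallel_analyze(chunks):
    """Apply analyze() to every chunk in parallel and gather results in order."""
    with ThreadPoolExecutor() as pool:
        return list(pool.map(analyze, chunks))

if __name__ == "__main__":
    chunks = [[1, 2], [3, 4], [5, 6]]
    print(parallel_analyze(chunks))  # -> [5, 25, 61]
```

The appeal of ipyparallel over a local pool is that the same map-and-gather code can fan work out across many VMs, which is what makes a compute cluster like Griffin attractive for analyses too large for a single system.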