Handling trillions of supercomputer files just got simpler

Gary Grider, left, and Brad Settlemyer discuss the new Los Alamos and Carnegie Mellon software product, DeltaFS, released to the software distribution site GitHub this week. Credit: Los Alamos National Laboratory

A new distributed file system for high-performance computing available today via the software collaboration site GitHub provides unprecedented performance for creating, updating and managing extreme numbers of files.

"We designed DeltaFS to enable the creation of trillions of ," said Brad Settlemyer, a Los Alamos computer scientist and project leader. Los Alamos National Laboratory and Carnegie Mellon University jointly developed DeltaFS. "Such a tool aids researchers in solving classical problems in high-performance computing, such as particle trajectory tracking or vortex detection."

DeltaFS builds a file system that appears to the user just like any other file system, requires no specialized hardware, and is tailored to helping scientists make new discoveries on high-performance computing platforms.
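To make the scale of the problem concrete, the sketch below shows the kind of metadata-heavy workload described above: one small output file per simulated particle, written through ordinary POSIX-style calls. Because DeltaFS presents a conventional file-system interface, code like this needs no DeltaFS-specific API; the directory path, naming scheme, and particle count are illustrative assumptions, not taken from the project.

```c
/* Illustrative sketch (not from the DeltaFS project): each rank writes one
 * small file per particle it owns -- the access pattern that stresses
 * file-system metadata at trillion-file scale. Paths and counts are
 * assumptions for demonstration only. */
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    const long particles_per_rank = 1000; /* illustrative; real runs use far more */
    const int rank = 0;                   /* would come from MPI_Comm_rank in practice */

    for (long p = 0; p < particles_per_rank; p++) {
        char path[256];
        /* One file per particle: trivial for the application to write, but
         * every create is a metadata operation the file system must handle. */
        snprintf(path, sizeof(path), "/deltafs/particles/rank%04d_p%08ld.traj", rank, p);

        FILE *f = fopen(path, "w");
        if (!f) { perror(path); return EXIT_FAILURE; }

        /* Append this particle's position for the current timestep. */
        fprintf(f, "x=%f y=%f z=%f\n", 0.0, 0.0, 0.0);
        fclose(f);
    }
    return 0;
}
```

At trillion-particle scale, every file creation in that loop is a metadata operation, which is the load the article says DeltaFS is designed to absorb without dedicating servers to the file system.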

"One of the foremost challenges, and primary goals of DeltaFS, was scaling across thousands of servers without requiring a portion of them be dedicated to the file system," said George Amvrosiadis, assistant research professor at Carnegie Mellon University and a coauthor on the project. "This frees administrators from having to decide how to allocate resources for the file system, which will become a necessity when exascale machines become a reality."


The new file system brings about two important changes in computing. First, DeltaFS enables new strategies for designing the supercomputers themselves, dramatically changing the cost of creating and managing files. In addition, DeltaFS radically improves the performance of highly selective queries, dramatically reducing the time to scientific discovery.
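As an illustration of such a selective query, retrieving one particle's trajectory can reduce to opening the single small file that holds it rather than scanning a monolithic dataset. The sketch below reuses the illustrative naming scheme from the earlier example and is, again, an assumption rather than DeltaFS's actual interface.

```c
/* Illustrative only: read back one particle's trajectory by opening the
 * single file that holds it, instead of scanning a monolithic dataset.
 * The path layout matches the assumed naming scheme in the earlier sketch. */
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
    if (argc != 3) {
        fprintf(stderr, "usage: %s <rank> <particle-id>\n", argv[0]);
        return EXIT_FAILURE;
    }

    char path[256];
    snprintf(path, sizeof(path), "/deltafs/particles/rank%04d_p%08ld.traj",
             atoi(argv[1]), atol(argv[2]));

    FILE *f = fopen(path, "r");
    if (!f) { perror(path); return EXIT_FAILURE; }

    /* Print every recorded timestep for this one particle. */
    char line[128];
    while (fgets(line, sizeof(line), f))
        fputs(line, stdout);

    fclose(f);
    return 0;
}
```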

DeltaFS is a transient, software-defined service that allows data to be accessed from a handful up to hundreds of thousands of computers based on the user's performance requirements.

"The storage techniques used in DeltaFS are applicable in many scientific domains, but we believe that by alleviating the metadata bottleneck we have really shown a way for designing and procuring much more efficient HPC storage systems," Settlemyer said.

More information: GitHub link: github.com/pdlfs/deltafs/

Citation: Handling trillions of supercomputer files just got simpler (2019, March 15) retrieved 29 March 2024 from https://phys.org/news/2019-03-trillions-supercomputer-simpler.html