Handling large collections of files



This article is a draft

This is not a complete article: it is a draft, a work in progress that is intended to be published into an article and may or may not be ready for inclusion in the main wiki. It should not necessarily be considered factual or authoritative.




In certain domains, notably AI and machine learning, it is common to have to manage very large collections of files, meaning hundreds of thousands or more. The individual files may be fairly small, e.g. less than a few hundred kilobytes. In these cases, a problem arises due to filesystem quotas on Compute Canada clusters that limit the number of filesystem objects. So how can a user or group of users store these data sets on the cluster? On this page we present a variety of solutions, each with its own pros and cons, so that you can judge which one is appropriate for you.

Finding folders with lots of files

As always with optimization, you should first determine where cleanup is worth the effort. The following code recursively counts all files in the folders under the current directory:

# Count the files in each subdirectory of the current directory
for FOLDER in $(find . -maxdepth 1 -type d | tail -n +2); do
  echo -ne "$FOLDER:\t"
  find "$FOLDER" -type f | wc -l
done

Using the local disk ($SLURM_TMPDIR)
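
A common pattern is to keep a dataset of many small files as a single archive on the shared filesystem and to extract it onto the local disk of the compute node at the start of each job; the path of this local disk is given by the $SLURM_TMPDIR environment variable. Below is a minimal sketch of such a job script, assuming a hypothetical archive ~/my_dataset.tar whose top-level directory is my_dataset/; adjust the paths and resource requests for your own case.

#!/bin/bash
#SBATCH --time=01:00:00
#SBATCH --cpus-per-task=1
#SBATCH --mem=4G

# Extract the (hypothetical) archive of small files onto the node-local disk;
# the single tar file counts as only one filesystem object on the shared filesystem
tar -xf ~/my_dataset.tar -C "$SLURM_TMPDIR"

# Work on the extracted files from the local disk rather than the shared filesystem
cd "$SLURM_TMPDIR/my_dataset"
ls | wc -l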

Archiving tools

DAR

A disk archive utility. See Dar.
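
As a minimal sketch of how such an archiving tool can bundle a directory of many small files into a single file (the directory and archive names below are hypothetical; see the Dar page for details):

# Create an archive (my_dataset.1.dar) from the contents of the directory my_dataset/
dar -c my_dataset -R my_dataset/

# List the contents of the archive
dar -l my_dataset

# Extract the archive into a destination directory
dar -x my_dataset -R /path/to/destination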

HDF5

SQLite

SquashFS
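
As a sketch, the mksquashfs tool can pack a directory into a single compressed, read-only image, which can later be mounted (here with squashfuse, a FUSE tool that may need to be installed separately) so the files can be read without unpacking the image; the names below are hypothetical.

# Pack the directory my_dataset/ into a single compressed, read-only image
mksquashfs my_dataset/ my_dataset.sqsh

# Mount the image at a mount point and read the files as if they were a normal directory
mkdir -p mountpoint
squashfuse my_dataset.sqsh mountpoint
ls mountpoint

# Unmount when done
fusermount -u mountpoint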

Random Access Read-Only Tar Mount (Ratarmount)
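
As a sketch, ratarmount indexes an existing tar archive and presents its contents as a read-only FUSE mount, giving random access to individual files without extracting them (the archive name below is hypothetical):

# Mount the tar archive read-only at the directory my_dataset/
# (an index is built on the first mount to allow fast random access)
ratarmount my_dataset.tar my_dataset/

# Read individual files directly from the mounted archive
ls my_dataset/

# Unmount when done
fusermount -u my_dataset/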

Cleaning up hidden files

git

When working with Git, over time the number of files in the hidden .git repository subdirectory can grow significantly. Using git repack will pack many of the files together into a few large database files and greatly speed up Git's operations.
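
For example, from within the working copy of the repository:

# Pack all objects into a single pack file and remove the now-redundant loose objects
git repack -a -d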