Handling large collections of files

Revision as of 15:30, 2 July 2019


This article is a draft

This is not a complete article: it is a draft, a work in progress intended to become a full article, which may or may not be ready for inclusion in the main wiki. It should not necessarily be considered factual or authoritative.




In certain domains it is common to have to manage very large collections of files, meaning hundreds of thousands or more, where each file is often (though not always) fairly small, e.g. less than a few hundred kilobytes. Storing such data on Compute Canada clusters is problematic because of filesystem quotas that limit the number of distinct filesystem objects: by default, 500K in the project space and 1M in the scratch space on most systems. So how can a user or group of users store these necessary data sets on a cluster? This page presents a variety of solutions and workarounds, each with its own pros and cons, so that you as a reader can judge which approach is optimal for you.
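Before choosing a workaround, it helps to know how many filesystem objects a directory tree actually contains. As a minimal sketch (the path is whatever directory you want to inspect; here the current directory is used), every file, subdirectory, and symlink counts toward the quota:

```shell
# Count all filesystem objects (files, directories, links) below
# the current directory, excluding the directory itself:
find . -mindepth 1 | wc -l
```

Comparing this number against the 500K/1M quotas above indicates whether a directory tree is at risk of exhausting the file-count quota.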

DAR

HDF5

SQLite

SquashFS

ratarmount

git

When working with Git, over time the number of files in the hidden <code>.git</code> repository subdirectory can grow significantly. Using <code>git repack</code> will pack many of the files together into a few large database files and greatly speed up Git's operations.
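A minimal sketch of the repack step, run from inside the repository, might look like this (the follow-up <code>git gc</code> is optional housekeeping, not required by the text above):

```shell
# Pack the repository's loose objects into a few pack files
# and delete the now-redundant loose copies:
git repack -a -d

# Optional: general housekeeping, which also prunes unreachable
# objects and repacks when needed:
git gc
```

After repacking, the many small files under <code>.git/objects/</code> are replaced by a small number of <code>.pack</code> and <code>.idx</code> files under <code>.git/objects/pack/</code>, which is what reduces the filesystem object count.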