Within these limits, jobs will still have to wait in the queue. The waiting time depends on many factors such as the allocation amount, how much allocation was used in the recent past, the number of nodes and the walltime, and how many other jobs are waiting in the queue.
== File Input/Output Tips ==
It is important to understand the file systems, so as to perform your file I/O (Input/Output) responsibly. Refer to the [[Data_Management | Data Management]] page for details about the file systems.
* Your files can be seen on all Niagara login and compute nodes.
* $HOME, $SCRATCH, and $PROJECT all use the parallel file system called GPFS.
* GPFS is a high-performance file system which provides rapid reads and writes to large data sets in parallel from many nodes.
* Accessing data sets which consist of many small files leads to poor performance on GPFS.
* Avoid reading and writing many small amounts of data to disk. Many small files on the system waste space and are slower to access, read and write. If you must write many small files, use [[User_Ramdisk | ramdisk]].
* Write data out in a binary format. This is faster and takes less space.
* The [[Burst Buffer]] is another option for I/O-heavy jobs and for speeding up [[Checkpoints|checkpoints]].
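Two of the tips above — keeping many small files off GPFS and writing one large file instead — can be combined by staging the small-file work in ramdisk and archiving the results to $SCRATCH in a single write. The sketch below is illustrative, not an official recipe: the directory and file names are made up, and while /dev/shm is the usual Linux ramdisk mount, check the [[User_Ramdisk | ramdisk]] page for the recommended location and size limits on Niagara.

```shell
# Illustrative sketch: stage a many-small-files workload in ramdisk,
# then write the results to GPFS as one archive.
RAMDIR=$(mktemp -d /dev/shm/myjob.XXXXXX)   # private scratch area in ramdisk

# ... your application writes its many small files under $RAMDIR ...
# (stand-in loop for demonstration)
for i in 1 2 3; do
    echo "result $i" > "$RAMDIR/part_$i.txt"
done

# Pack everything into a single compressed archive before it touches GPFS,
# so the file system sees one large sequential write instead of many tiny ones.
tar -czf results.tar.gz -C "$RAMDIR" .

# Ramdisk counts against the node's memory, so clean up when done.
rm -rf "$RAMDIR"
```

In a job script you would run the archive step at the end of the job and place results.tar.gz under $SCRATCH; the same pattern in reverse (copy one archive in, unpack it in ramdisk) works for reading many small input files.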
== Example submission script (MPI) == <!--T:90-->