<!--T:20836-->
No input file modifications are required to restart the analysis.
</tab>
<tab name="temporary directory script">
cp -f * $SLURM_SUBMIT_DIR
}}
<!--T:20837-->
To write restart data for a total of 12 time increments, specify in the input file:
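As an illustration only (the syntax below is assumed from the Abaqus keyword reference rather than taken from this page), such a request in an Abaqus/Explicit input file typically takes the form <code>*RESTART, WRITE, NUMBER INTERVAL=12, TIME MARKS=NO</code>; Abaqus/Standard analyses use the <code>FREQUENCY</code> parameter instead.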
Check for completed restart information in relevant output files:
egrep -i "step|restart" testet*.com testet*.msg testet*.sta
</tab>
<tab name="temporary directory restart script">
cp -f * $SLURM_SUBMIT_DIR
}}
<!--T:20838-->
No input file modifications are required to restart the analysis.
</tab>
</tabs>
=== Multiple node computing === <!--T:20839-->
{{File
|name="scriptep1-mpi.txt"
}}
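Since the full body of <code>scriptep1-mpi.txt</code> is not reproduced above, the following is only a minimal sketch of what a multi-node (MPI) Abaqus submission script can look like; the account name, module version, resource values and file names are placeholders, and the <code>mp_host_list</code> handling is an assumed approach that should be checked against your site's Abaqus documentation before use.
{{File
|name="scriptep1-mpi-sketch.txt"
|lang="bash"
|contents=
#!/bin/bash
#SBATCH --account=def-piname      # placeholder account
#SBATCH --time=00-06:00           # D-HH:MM, placeholder walltime
#SBATCH --nodes=2                 # placeholder node count
#SBATCH --ntasks-per-node=4       # MPI ranks per node
#SBATCH --mem-per-cpu=4000M       # placeholder memory per core
module load abaqus/2021           # placeholder module version

# Slurm task-ID variable can interfere with the MPI launcher used by Abaqus
unset SLURM_GTIDS

# Build an mp_host_list of the form [['node1',4],['node2',4]] from the
# nodes assigned by Slurm and pass it to Abaqus through the local
# environment file read from the working directory (assumed approach).
mp_host_list="["
for HOST in $(scontrol show hostnames "$SLURM_JOB_NODELIST"); do
  mp_host_list="${mp_host_list}['${HOST}',${SLURM_NTASKS_PER_NODE}],"
done
mp_host_list="${mp_host_list%,}]"
echo "mp_host_list=${mp_host_list}" >> abaqus_v6.env

abaqus job=testep1 input=myinput.inp scratch=$SCRATCH \
  cpus=$SLURM_NTASKS mp_mode=mpi interactive
}}
With <code>mp_mode=mpi</code> and <code>mp_host_list</code> defined, Abaqus distributes the <code>cpus=</code> total across the listed hosts rather than running all threads on a single node.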
== Node memory == <!--T:20840-->
An estimate of the total Slurm node memory (<code>--mem=</code>) required for a simulation to run fully in RAM (without being virtualized to scratch disk) can be obtained by examining the Abaqus output <code>test.dat</code> file. For example, a simulation that requires a fairly large amount of memory might show:
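To locate the memory-estimate lines in the <code>.dat</code> file (assuming the job was run with <code>job=test</code>), a simple search such as the following is usually sufficient:
{{Commands|egrep -i "memory" test.dat}}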
To run your simulation interactively and monitor the memory consumption, do the following:
1) ssh into a cluster, obtain an allocation on a compute node (such as gra100), and run Abaqus, e.g.:
{{Commands
|salloc --time=0:30:00 --cpus-per-task=8 --mem=64G --account=def-piname
|abaqus job=test input=Sample.inp scratch=$SCRATCH cpus=8 mp_mode=threads interactive
}}
<!--T:20842-->
2) ssh into the cluster again, ssh into the compute node with the allocation, and run top, e.g.:
{{Commands|ssh gra100
|top -u $USER}}
<!--T:20843-->
3) watch the VIRT and RES columns until steady peak memory values are observed.