Abaqus/en

No input file modifications are required to restart the analysis.
</tab>
<tab name="temporary directory script">
cp -f * $SLURM_SUBMIT_DIR
}}
To write restart data for a total of 12 time increments, specify in the input file:
  *RESTART, WRITE, OVERLAY, NUMBER INTERVAL=12, TIME MARKS=NO
Check for completed restart information in the relevant output files:
  egrep -i "step|restart" testet*.com testet*.msg testet*.sta
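To see what the check matches, the same pattern can be run on a few hypothetical log lines (the contents of your actual testet*.com/.msg/.sta files will differ); `-c` counts the matching lines:

```shell
# Two of these three sample lines mention STEP or RESTART,
# so the case-insensitive count is 2.
printf 'STEP 2 INCREMENT 5\nRESTART FILE WRITTEN AT STEP 2\nsolver iteration 3\n' \
  | egrep -ci "step|restart"
# prints 2
```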
</tab>
<tab name="temporary directory restart script">
cp -f * $SLURM_SUBMIT_DIR
}}
No input file modifications are required to restart the analysis.
</tab>
</tabs>


=== Multiple node computing ===
{{File
   |name="scriptep1-mpi.txt"
   scratch=$SCRATCH cpus=$SLURM_NTASKS interactive mp_mode=mpi
}}
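The lines above show only the tail of the job script; a minimal sketch of a complete MPI submission script is given below. The account name, resource requests, module version, and input file name are placeholders, not values taken from this page — adjust them for your cluster and model:

```shell
#!/bin/bash
# Sketch of an MPI Abaqus job script; all values below are placeholders.
#SBATCH --account=def-piname   # replace with your allocation
#SBATCH --ntasks=8             # total MPI ranks; may span multiple nodes
#SBATCH --mem-per-cpu=4G
#SBATCH --time=0-03:00
module load abaqus             # version suffix omitted; check your cluster
abaqus job=testep1 input=testep1.inp \
   scratch=$SCRATCH cpus=$SLURM_NTASKS interactive mp_mode=mpi
```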


== Node memory ==
To run your simulation interactively and monitor its memory consumption, do the following:
1) ssh into a cluster, obtain an allocation on a compute node (such as gra100), and run abaqus, e.g.
{{Commands
|salloc --time=0:30:00 --cpus-per-task=8 --mem=64G --account=def-piname
|abaqus job=test input=Sample.inp scratch=$SCRATCH cpus=8 mp_mode=threads interactive
}}
2) ssh into the cluster again, then ssh into the compute node holding the allocation and run top, e.g.
{{Commands|ssh gra100
|top -u $USER}}
3) watch the VIRT and RES columns until steady peak memory values are observed
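If watching top interactively is impractical, SLURM itself can report memory usage: `sstat` shows the running job's high-water mark, and `seff` (where installed) summarizes a completed job, including its "Memory Utilized" figure. The job ID below is a placeholder:

```shell
# While the job is running: peak resident memory so far (hypothetical job ID)
sstat -j 1234567 --format=JobID,MaxRSS
# After the job finishes: CPU and memory efficiency summary
seff 1234567
```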

