Abaqus



=== Multiple node computing === <!--T:20832-->
Users with large memory or compute needs (and correspondingly large licenses) can use the following script to perform MPI-based computing over an arbitrary range of nodes, ideally left to the scheduler to determine automatically.  A companion template script for multinode restart jobs is not currently provided due to additional limitations on when they can be used.  The memory estimate per compute process required to minimize I/O can be found in the output dat file of a completed job.  If mp_host_split is not specified (or is set to 1), the total number of compute processes will equal the number of nodes.  In that case the mem-per-cpu value can be roughly determined as the largest memory estimate times the number of nodes, divided by ntasks.  If however a value of mp_host_split greater than 1 is specified, the mem-per-cpu value can instead be roughly determined as the largest memory estimate times the number of nodes times the value of mp_host_split, divided by ntasks.  Note that mp_host_split must be less than or equal to the number of cores per node assigned by Slurm at runtime, otherwise Abaqus will terminate; this can optionally be controlled by uncommenting the tasks-per-node line to specify a value.  The following definitive statement is given in every output dat file for reference:


<!--T:20885-->
THE UPPER LIMIT OF MEMORY THAT CAN BE ALLOCATED BY ABAQUS WILL IN GENERAL DEPEND ON THE VALUE OF
THE "MEMORY" PARAMETER AND THE AMOUNT OF PHYSICAL MEMORY AVAILABLE ON THE MACHINE. PLEASE SEE
THE "ABAQUS ANALYSIS USER'S MANUAL" FOR MORE DETAILS. THE ACTUAL USAGE OF MEMORY AND OF DISK
SPACE FOR SCRATCH DATA WILL DEPEND ON THIS UPPER LIMIT AS WELL AS THE MEMORY REQUIRED TO MINIMIZE
I/O. IF THE MEMORY UPPER LIMIT IS GREATER THAN THE MEMORY REQUIRED TO MINIMIZE I/O, THEN THE ACTUAL
MEMORY USAGE WILL BE CLOSE TO THE ESTIMATED "MEMORY TO MINIMIZE I/O" VALUE, AND THE SCRATCH DISK
USAGE WILL BE CLOSE-TO-ZERO; OTHERWISE, THE ACTUAL MEMORY USED WILL BE CLOSE TO THE PREVIOUSLY
MENTIONED MEMORY LIMIT, AND THE SCRATCH DISK USAGE WILL BE ROUGHLY PROPORTIONAL TO THE DIFFERENCE
BETWEEN THE ESTIMATED "MEMORY TO MINIMIZE I/O" AND THE MEMORY UPPER LIMIT. HOWEVER ACCURATE
ESTIMATE OF THE SCRATCH DISK SPACE IS NOT POSSIBLE.
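As a worked illustration of the rule of thumb above (all numbers are hypothetical: a largest per-process memory estimate of 12 GB read from the dat file, 2 nodes, mp_host_split=2, and ntasks=16):

```shell
# All values below are hypothetical; read the real "MEMORY TO MINIMIZE I/O"
# estimate from the dat file of a completed job.
estimate_gb=12        # largest memory estimate per compute process (GB)
nodes=2               # number of nodes assigned by the scheduler
mp_host_split=2       # compute processes per node (1 if unspecified)
ntasks=16             # total cores requested with --ntasks

# mem-per-cpu ~= estimate * nodes * mp_host_split / ntasks
mem_per_cpu=$(( estimate_gb * nodes * mp_host_split / ntasks ))
echo "Suggested: --mem-per-cpu=${mem_per_cpu}G"
```

With mp_host_split left unset (one compute process per node), the same numbers give 12 × 2 / 16 = 1.5 GB per core.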


#SBATCH --account=def-group    # Specify account
#SBATCH --time=00-06:00        # Specify days-hrs:mins
##SBATCH --nodes=2            # Uncomment to specify (optional)
#SBATCH --ntasks=8            # Specify number of cores
#SBATCH --mem-per-cpu=4G      # Specify memory per core
##SBATCH --tasks-per-node=4    # Uncomment to specify (optional)
#SBATCH --cpus-per-task=1      # Do not change !


<!--T:20887-->
unset SLURM_GTIDS
#export MPI_IC_ORDER='tcp'
echo "LM_LICENSE_FILE=$LM_LICENSE_FILE"
echo "ABAQUSLM_LICENSE_FILE=$ABAQUSLM_LICENSE_FILE"
<!--T:20890-->
abaqus job=testsp1-mpi input=mystd-sim.inp \
   scratch=$SLURM_TMPDIR cpus=$SLURM_NTASKS interactive mp_mode=mpi \
   #mp_host_split=1  # number of mp processes per host >= 1 (uncomment to specify)
}}
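The constraint that mp_host_split must not exceed the cores per node assigned by Slurm can be checked in the script before launching Abaqus.  A minimal sketch, assuming the standard SLURM_NTASKS and SLURM_JOB_NUM_NODES environment variables and a hypothetical mp_host_split shell variable (the fallback values below exist only so the sketch runs outside a Slurm job):

```shell
# Sketch only: validate a chosen mp_host_split against the cores per
# node that Slurm actually assigned, before running Abaqus.
SLURM_NTASKS=${SLURM_NTASKS:-8}                # fallback for illustration
SLURM_JOB_NUM_NODES=${SLURM_JOB_NUM_NODES:-2}  # fallback for illustration
mp_host_split=2                                # hypothetical chosen value

cores_per_node=$(( SLURM_NTASKS / SLURM_JOB_NUM_NODES ))
if [ "$mp_host_split" -gt "$cores_per_node" ]; then
    echo "mp_host_split=$mp_host_split exceeds $cores_per_node cores per node" >&2
    exit 1
fi
echo "mp_host_split=$mp_host_split OK (cores per node: $cores_per_node)"
```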

