LS-DYNA
Introduction
LS-DYNA is available on all our systems. It is used for many applications to solve problems in multiphysics, solid mechanics, heat transfer and fluid dynamics. Analyses are performed as separate phenomena or as coupled-physics simulations such as thermal stress or fluid-structure interaction. LSTC was recently purchased by ANSYS, so the LS-DYNA software may eventually be provided exclusively as part of the ANSYS module. For now we recommend using the LS-DYNA software traditionally provided by LSTC, as documented in this page.
Licensing
Compute Canada is a hosting provider for LS-DYNA. This means that we have LS-DYNA software installed on our clusters, but we do not provide a generic license accessible to everyone. However, many institutions, faculties, and departments already have local licenses that can be used on our clusters. Researchers can also purchase a license directly from the company to run on a SHARCNET license server for dedicated use on any Compute Canada system.
Once a license is set up, a few technical details remain. The license server on your end will need to be reachable by our compute nodes. This requires our technical team to get in touch with the people managing your license software; in some cases this has already been done. You should then be able to load an ls-dyna Compute Canada module and it should find your license automatically. For assistance please contact our Technical support.
Configuring your license file
Our module for LS-DYNA is designed to look for license information in a few places, one of which is your home folder. If you have your own license server, you can store the information needed to reach it in a file with the following format:
#LICENSE_TYPE: network
#LICENSE_SERVER:<port>@<server>
and put this file in the folder $HOME/.licenses/ on each cluster where you plan to submit LS-DYNA jobs. Note that firewall changes will need to be made on both our side and yours. To arrange this, send an email containing the service port and IP address of your floating license server to Technical support. To check that the license server specified in your new license file is responding, run the following commands:
module load ls-dyna
ls-dyna_s
The output should contain a defined (non-empty) value for Licensed to:. Press ^C to exit.
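As a minimal sketch of these two steps (the file name ls-dyna.lic is an assumption, and <port>/<server> are placeholders for your own license server's values), the license file can be created and the server port checked from a login node as follows:
mkdir -p ~/.licenses
cat > ~/.licenses/ls-dyna.lic << 'EOF'
#LICENSE_TYPE: network
#LICENSE_SERVER:<port>@<server>
EOF
# Optional connectivity check, assuming the nc (netcat) utility is available:
nc -zv <server> <port>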
Cluster batch job submission
LS-DYNA provides binaries for running jobs on a single compute node (SMP - Shared Memory Parallel, using OpenMP) or across multiple compute nodes (MPP - Message Passing Parallel, using MPI). Note that for the sample slurm scripts that follow, the StdEnv/2020 module must be loaded before ls-dyna/12.0 or ls-dyna-mpi/12.0 can be loaded.
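The required ordering can be checked interactively on a login node before submitting any jobs; for example (these are the same module lines used in the sample scripts below):
module load StdEnv/2020
module load ls-dyna/12.0      # or: module load ls-dyna-mpi/12.0 for multiple node jobs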
Single node jobs
Modules for running jobs on a single compute node can be listed with module spider ls-dyna. Jobs may be submitted to the queue with sbatch script-smp.sh. The following sample slurm script shows how to run LS-DYNA with 8 cores on a single cluster compute node. The AUTO setting allows explicit simulations to allocate more memory than the 100M word default at runtime:
#!/bin/bash
#SBATCH --account=def-account # Specify your account name
#SBATCH --time=0-03:00 # D-HH:MM
#SBATCH --cpus-per-task=8 # Specify number of cores
#SBATCH --mem=16G # Specify total memory
#SBATCH --nodes=1 # Do not change
module load StdEnv/2016.4 # Versions <= 11.1
module load ls-dyna/11.1
# module load StdEnv/2020 # Versions >= 12.0
# module load ls-dyna/12.0
export LSTC_MEMORY=AUTO
ls-dyna_s ncpu=$SLURM_CPUS_ON_NODE i=airbag.deploy.k memory=100M
where
 ls-dyna_s = single precision smp solver
 ls-dyna_d = double precision smp solver
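As a rough sizing guide (this is the standard LS-DYNA convention rather than something stated on this page), the memory= value is given in words: a word is typically 4 bytes for the single precision solver and 8 bytes for the double precision solver, so memory=100M corresponds to roughly 400 MB with ls-dyna_s or 800 MB with ls-dyna_d, well within the 16G requested by --mem above.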
Multiple node jobs
The module versions available for running jobs on multiple nodes (using the prebuilt MPP binaries provided by LS-DYNA) can be listed with module spider ls-dyna-mpi. To submit jobs to the queue use sbatch script-mpp.sh. Sample scripts for submitting jobs to a specified total number of whole nodes (by node) *OR* a specified total number of cores (by core) are provided next:
Specify node count
Jobs can be submitted to a specified number of whole compute nodes with the following script:
#!/bin/bash
#SBATCH --account=def-account # Specify your account name
#SBATCH --time=0-03:00 # D-HH:MM
#SBATCH --ntasks-per-node=32 # Specify number of cores per node (graham 32 or 44, cedar 32 or 48, beluga 40)
#SBATCH --nodes=2 # Specify number of compute nodes (1 or more)
#SBATCH --mem=0 # Use all memory per compute node (do not change)
# module load StdEnv/2016.4 # Versions <= 11.1
# module load ls-dyna-mpi/11.1
module load StdEnv/2020 # Versions >= 12.0
module load ls-dyna-mpi/12.0
export LSTC_MEMORY=AUTO
srun ls-dyna_d i=airbag.deploy.k memory=8G memory2=200M
where
 ls-dyna_s = single precision mpp solver
 ls-dyna_d = double precision mpp solver
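As a point of interpretation (an assumption based on common MPP LS-DYNA usage, not stated explicitly on this page), memory= applies to the first MPP process, which also performs the model decomposition, while memory2= applies to each of the remaining MPP processes; this is why the first value in the example above is much larger than the second.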
Specify core count
Jobs can also be submitted to an arbitrary number of compute nodes by specifying the number of cores. This approach lets the slurm scheduler determine automatically how many compute nodes to use, which minimizes the wait time in the queue, at the cost of less memory being available to the master process for decomposing the model:
#!/bin/bash
#SBATCH --account=def-account # Specify your account name
#SBATCH --time=0-03:00 # D-HH:MM
#SBATCH --ntasks=64 # Specify total number of cores
#SBATCH --mem-per-cpu=2G # Specify memory per core
# module load StdEnv/2016.4 # Versions <= 11.1
# module load ls-dyna-mpi/11.1
module load StdEnv/2020 # Versions >= 12.0
module load ls-dyna-mpi/12.0
export LSTC_MEMORY=AUTO
srun ls-dyna_d i=airbag.deploy.k
where
 ls-dyna_s = single precision mpp solver
 ls-dyna_d = double precision mpp solver
Depending on the simulation, LS-DYNA may be unable to use many cores efficiently. Before running a full simulation, test its scaling behaviour by gradually increasing the number of cores to find the largest count that still gives a worthwhile speedup. To determine the Job Wall-clock time, CPU Efficiency and Memory Efficiency of a successfully completed job, use the seff jobnumber command.
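As a minimal sketch of such a scaling test (the core counts and job names are arbitrary examples), the same MPP script can be resubmitted with the core count overridden on the sbatch command line, since command-line options take precedence over the #SBATCH directives inside the script:
# Submit the same model with increasing core counts (example values only)
for n in 16 32 64 128; do
    sbatch --ntasks=$n --job-name=scale-$n script-mpp.sh
done
# When the jobs finish, compare their times and efficiencies with: seff jobnumber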
Remote visualization
LSTC provides LS-PrePost for pre- and post-processing of LS-DYNA models. This program is made available by a separate module. It does not require a license and can be used on any cluster node or on the Graham VDI nodes by following these steps:
Cluster nodes
Connect to a compute node or to a login node with TigerVNC.
module load ls-prepost
lsprepost
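Once the module is loaded, LS-PrePost can also be started with a file to open given on the command line; for example (assuming a d3plot results file exists in the current directory; this invocation is an illustration rather than something documented on this page):
lsprepost d3plot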
VDI nodes
Connect to gra-vdi with TigerVNC.
module load CcEnv StdEnv
module load ls-prepost
lsprepost