LS-DYNA
Introduction
LS-DYNA is available on all our systems. It is used for many applications to solve problems in multi-physics, solid mechanics, heat transfer and fluid dynamics, either as separate phenomena or as coupled physics such as thermal stress or fluid-structure interaction. LSTC was recently purchased by ANSYS, so the software may eventually be provided as part of the ANSYS module; for now we recommend using LS-DYNA as documented on this wiki page.
Licensing
Compute Canada is a hosting provider for LS-DYNA. This means that we have LS-DYNA software installed on our clusters, but we do not provide a generic license accessible to everyone. However, many institutions, faculties, and departments already have local licenses that can be used on our clusters. Researchers can also purchase a license directly from the company to run on a SHARCNET license server for dedicated use on any Compute Canada system.
Once a license has been purchased or arranged, some technical setup remains: the license server on your end must be reachable from our compute nodes, which requires our technical team to coordinate with the people managing your license server. In some cases this has already been done. You should then be able to load an ls-dyna Compute Canada module and it should find your license automatically. For assistance please contact our Technical support.
Configuring your license file
Our module for LS-DYNA is designed to look for license information in a few places, one of which is your home folder. If you have your own license server, create a file named ls-dyna.lic containing the information needed to access it, in the following format:
#LICENSE_TYPE: network
#LICENSE_SERVER:<port>@<server>
and put this file in the folder $HOME/.licenses/ on each cluster where you plan to submit LS-DYNA jobs. Note that firewall changes will need to be made on both our side and yours; to arrange this, send an email containing the service port and IP address of your floating license server to Technical support. To check whether your license file is responding, run the following commands:
module load ls-dyna
ls-dyna_s
The output should contain defined (non-empty) values for Licensed to: and Issued by:; press ^c to exit.
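For example, a populated ls-dyna.lic pointing at a hypothetical license server (the port number and hostname below are placeholders, not real values) might look like:
#LICENSE_TYPE: network
#LICENSE_SERVER:31010@license.example.ca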
Cluster batch job submission
LS-DYNA provides SMP (Shared Memory Parallel) binaries for running jobs on a single node and MPP (Message Passing Parallel) binaries for running jobs across multiple nodes, where parallelization is achieved with OpenMP and MPI respectively. As shown in the Slurm scripts below, the StdEnv/2020 module must be loaded before ls-dyna/12.0 or ls-dyna-mpi/12.0 can be loaded:
SMP Slurm script
Modules for running jobs on a single node can be listed with module spider ls-dyna. Jobs may be submitted to the queue with sbatch script-smp.sh. The following is a sample Slurm job submission script suitable for running LS-DYNA efficiently with up to 8 cores on a single cluster compute node:
#!/bin/bash
#SBATCH --account=def-account      # Specify your account name
#SBATCH --time=0-03:00             # Time limit (D-HH:MM)
#SBATCH --cpus-per-task=8          # Number of cores
#SBATCH --mem=16G                  # Memory per node
module load StdEnv/2016.4          # Versions <= 11.1
module load ls-dyna/11.1
# module load StdEnv/2020          # Versions >= 12.0
# module load ls-dyna/12.0
ls-dyna_s ncpu=$SLURM_CPUS_ON_NODE i=airbag.deploy.k
where:
ls-dyna_s = single precision SMP solver
ls-dyna_d = double precision SMP solver
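For example, to run the same single-node job with the double precision SMP solver instead, only the last line of the sample script changes (a minimal sketch using the same example input deck):
ls-dyna_d ncpu=$SLURM_CPUS_ON_NODE i=airbag.deploy.k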
MPP Slurm script
Modules for running jobs on multiple nodes can be listed with module spider ls-dyna-mpi. Jobs may be submitted to the queue with sbatch script-mpp.sh. The following is a sample Slurm script suitable for running LS-DYNA jobs with 8 or more cores across multiple cluster compute nodes; the number of nodes is determined automatically by the scheduler and therefore does not need to be specified:
#!/bin/bash
#SBATCH --account=def-account      # Specify your account name
#SBATCH --time=0-03:00             # Time limit (D-HH:MM)
#SBATCH --ntasks=8                 # Number of cores
#SBATCH --mem-per-cpu=2G           # Memory per core
# module load StdEnv/2016.4        # Versions <= 11.1
# module load ls-dyna-mpi/11.1
module load StdEnv/2020            # Versions >= 12.0
module load ls-dyna-mpi/12.0
srun ls-dyna_d i=airbag.deploy.k
where:
ls-dyna_s = single precision MPP solver
ls-dyna_d = double precision MPP solver
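As a variation (a sketch not taken from the original scripts, with assumed node and core counts that should be adjusted to the cluster you are using), larger MPP jobs can instead request whole nodes explicitly:
#!/bin/bash
#SBATCH --account=def-account      # Specify your account name
#SBATCH --time=0-03:00             # Time limit (D-HH:MM)
#SBATCH --nodes=2                  # Assumed number of whole nodes
#SBATCH --ntasks-per-node=32       # Assumed cores per node; match the cluster's node size
#SBATCH --mem=0                    # Reserve all available memory on each node
module load StdEnv/2020            # Versions >= 12.0
module load ls-dyna-mpi/12.0
srun ls-dyna_d i=airbag.deploy.k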
Depending on the complexity of the simulation, LS-DYNA may not be able to use a large number of cores efficiently. Therefore, before running a simulation to completion, test its scaling properties by gradually increasing the number of cores to find the point beyond which the simulation slows down. To determine the Job Wall-clock time, CPU Efficiency and Memory Efficiency of a successfully completed job, use the seff jobnumber command.
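A minimal sketch of such a scaling test, assuming the SMP script above and a shortened test version of the input deck, might look like:
# Submit the same job with increasing core counts (memory scaled to match)
for n in 1 2 4 8; do
  sbatch --cpus-per-task=$n --mem=$((2*n))G script-smp.sh
done
# Once the jobs complete, compare their wall-clock time and efficiency, e.g.
seff 1234567                       # replace 1234567 with one of your job IDs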
Remote visualization
LSTC provides LS-PrePost for pre- and post-processing of LS-DYNA models. This program is made available by a separate module. It does not require a license and can be used on any cluster node or on the Graham VDI nodes by following these steps:
Cluster nodes
Connect to a compute node or to a login node with TigerVNC, then run:
module load ls-prepost
lsprepost
VDI nodes
Connect to gra-vdi with TigerVNC, then run:
module load CcEnv StdEnv
module load ls-prepost
lsprepost
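As a usage sketch (assuming lsprepost accepts a results file on the command line, and with a hypothetical output directory), results written by one of the jobs above could be opened directly:
cd ~/scratch/airbag_run            # hypothetical directory containing the job output
lsprepost d3plot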