

<!--T:1091-->
Ansys Rocky can be run interactively in headless (non-gui) mode on a Graham compute node, which is useful for testing a Rocky simulation before running it inside a Slurm script.  First reserve a compute node with <code>salloc --time=04:00:00 --nodes=1 --tasks=3 [--gpus=t4:1] --mem=32G --account=def-account</code>, then load the local modules with <code>module load rocky/2023R2 ansys/2023R2</code>, and finally run <code>Rocky --simulate "mysim.rocky" --resume=0 --ncpus=3 [--use-gpu=USE_GPU --gpu-num=GPU_NUM]</code>.  The value given to <code>--ncpus</code> should match the number of tasks reserved with salloc, and the arguments shown in square brackets are only needed when running Rocky with a [https://www.ansys.com/blog/mastering-multi-gpu-ansys-rocky-software-enhancing-its-performance gpu].
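Putting these steps together, a cpu-only interactive test session might look like the following minimal sketch, where the account name, resource values and simulation file name are placeholders to adjust:

<pre>
salloc --time=04:00:00 --nodes=1 --tasks=3 --mem=32G --account=def-account
module load rocky/2023R2 ansys/2023R2
Rocky --simulate "mysim.rocky" --resume=0 --ncpus=3
</pre>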
Besides being able to run simulations in gui mode (as discussed in the Graphical usage section below), [https://www.ansys.com/products/fluids/ansys-rocky Ansys Rocky] can also run simulations in non-gui (headless) mode.  Both modes support running Rocky with cpus only, or with cpus and [https://www.ansys.com/blog/mastering-multi-gpu-ansys-rocky-software-enhancing-its-performance gpus].  In the section below, two sample Slurm scripts are provided; each would be submitted to the Graham queue with the sbatch command as usual.  At the time of this writing neither script has been tested, so extensive customization will likely be required.  Note that these scripts are only usable on Graham, since the rocky module they load is (at present) installed locally on Graham only.


=== Slurm scripts === <!--T:1092-->


<!--T:1093-->
Ansys Rocky batch jobs may be submitted to the Graham queue with scripts similar to the two examples below; as noted above, they are untested and are provided only as a starting point.  A full listing of command line options for the Rocky program can be obtained by running <code>Rocky -h</code> on the command line after loading any rocky module (currently only rocky/2023R2 is available, with 2024R1 and 2024R2 to be added as soon as possible).  Regarding the use of gpus: when solving coupled problems, the number of cpus requested from Slurm (on the same node if possible) should be increased until the scalability limit of the coupled application is reached.  If, on the other hand, Rocky is run with gpu(s) to solve standalone uncoupled problems, then only the minimal number of cpus sufficient for Rocky to run optimally should be requested; for instance, only 2 or possibly 3 cpus may be required.  Finally, running Rocky with more than 4 cpus requires <I>rocky_hpc</I> licenses, which the SHARCNET license does provide.
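The first script is a minimal sketch of a cpu-only batch job; the account name, resource values and simulation file name are placeholders that must be customized:

<pre>
#!/bin/bash
#SBATCH --account=def-account  # adjust to your account
#SBATCH --time=08:00:00        # adjust the runtime limit
#SBATCH --nodes=1              # request a single node
#SBATCH --ntasks=8             # more than 4 cpus requires rocky_hpc licenses
#SBATCH --mem=32G              # adjust the memory

module load rocky/2023R2 ansys/2023R2

# run the simulation in headless mode on the reserved cpus
Rocky --simulate "mysim.rocky" --resume=0 --ncpus=$SLURM_NTASKS
</pre>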
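The second script is a similar sketch for a standalone (uncoupled) job using one gpu; the values given to <code>--use-gpu</code> and <code>--gpu-num</code> are assumptions and should be verified against the output of <code>Rocky -h</code>:

<pre>
#!/bin/bash
#SBATCH --account=def-account  # adjust to your account
#SBATCH --time=08:00:00        # adjust the runtime limit
#SBATCH --nodes=1              # request a single node
#SBATCH --ntasks=3             # minimal supporting cpus for a standalone gpu run
#SBATCH --gpus=t4:1            # adjust the gpu type and count
#SBATCH --mem=32G              # adjust the memory

module load rocky/2023R2 ansys/2023R2

# run the simulation in headless mode on the reserved cpus and gpu;
# the --use-gpu and --gpu-num values are assumed, check Rocky -h
Rocky --simulate "mysim.rocky" --resume=0 --ncpus=$SLURM_NTASKS --use-gpu=1 --gpu-num=1
</pre>

Either script would then be submitted to the queue with sbatch as usual, for example <code>sbatch my_rocky_job.sh</code>.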

