Ansys



<!--T:1093-->
Ansys Rocky batch jobs may be submitted to the graham cluster queue with the following two scripts.  At the time of this writing they are untested and are therefore provided only as a starting point.  Note that these scripts are usable only on graham, since the rocky module they load is installed locally on that cluster alone at the present time.  A full listing of command line options for the Rocky program can be obtained by running <code>Rocky -h</code> on the command line after loading any rocky module (currently only 2023R2 is available, with 2024R1 and 2024R2 to be added as soon as possible).  Regarding the use of Rocky with gpus: when solving coupled problems (for instance with Ansys Fluent), the number of cpus requested for the coupled application should be increased until its scalability limit is reached, using all cpus on the same node if possible.  On the other hand, if Rocky is being run with gpu(s) to solve standalone uncoupled problems, then only the minimal number of supporting cpus that allows Rocky to run optimally should be specified, for instance only 2 or possibly 3 cpus may be required.  When running Rocky with more than 4 cpus, <I>rocky_hpc</I> licenses are required; these are included in the SHARCNET license.
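For example, the option listing can be printed on a graham login node as follows.  This is a minimal sketch which assumes the module name follows the rocky/2023R2 pattern described above; the version string will change as newer releases are installed:

<pre>
module load rocky/2023R2   # currently the only rocky version installed on graham
Rocky -h                   # print the full listing of command line options
</pre>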


<!--T:1577-->
#SBATCH --account=account      # Specify your account (def or rrg)
#SBATCH --time=00-02:00        # Specify time (DD-HH:MM)
#SBATCH --mem=24G              # Specify memory (set to 0 to use all node memory)
#SBATCH --cpus-per-task=6      # Specify cores (graham 32 or 44 to use all cores)
#SBATCH --nodes=1              # Request one node (do not change)


<!--T:2810-->
Rocky --simulate "mysim.rocky" --resume=0 --ncpus=$SLURM_CPUS_PER_TASK --use-gpu=0
}}
</tab>
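Once saved to a file, the cpu-only script above is submitted and monitored with the usual Slurm commands.  The filename <code>rocky-cpu.sh</code> below is only a placeholder; use whatever name the script was saved under:

<pre>
sbatch rocky-cpu.sh    # submit the cpu-only Rocky job to the graham queue
squeue -u $USER        # check the status of your queued and running jobs
</pre>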
#SBATCH --account=account      # Specify your account (def or rrg)
#SBATCH --time=00-01:00        # Specify time (DD-HH:MM)
#SBATCH --mem=24G              # Specify memory (set to 0 to use all node memory)
#SBATCH --cpus-per-task=6      # Specify cores (graham 32 or 44 to use all cores)
#SBATCH --gres=gpu:v100:2      # Specify gpu type : gpu quantity
#SBATCH --nodes=1              # Request one node (do not change)
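The Rocky command line for the gpu case follows the same pattern as in the cpu-only script.  The sketch below assumes that a non-zero <code>--use-gpu</code> value enables gpu solving; this assumption should be confirmed against the <code>Rocky -h</code> output before use:

<pre>
nvidia-smi   # optional: confirm the gpus that Slurm allocated to the job
# assumption: a non-zero --use-gpu value enables gpu solving; verify with Rocky -h
Rocky --simulate "mysim.rocky" --resume=0 --ncpus=$SLURM_CPUS_PER_TASK --use-gpu=1
</pre>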

