COMSOL
Revision as of 22:52, 9 November 2023
Introduction
COMSOL is a general-purpose software for modelling engineering applications. We would like to thank COMSOL, Inc. for allowing its software to be hosted on our clusters via a special agreement.
We recommend that you consult the documentation included with the software under File > Help > Documentation prior to attempting to use COMSOL on one of our clusters. Links to the COMSOL blog, Knowledge Base, Support Centre and Documentation can be found at the bottom of the COMSOL homepage. Searchable online COMSOL documentation is also available here.
Licensing
We are a hosting provider for COMSOL. This means that we have COMSOL software installed on our clusters, but we do not provide a generic license accessible to everyone. Many institutions, faculties, and departments already have licenses that can be used on our clusters. Alternatively, you can purchase a license from CMC for use anywhere in Canada, or purchase a dedicated Floating Network License directly from COMSOL to be hosted on a SHARCNET license server.
Once the licensing arrangements are in place, some technical configuration remains: the license server on your end must be reachable from our compute nodes. This requires our technical team to coordinate with the technical people managing your license server. In some cases, such as CMC, this has already been done. You should then be able to load the COMSOL modules, and the software should find its license automatically. If it does not, please contact our Technical support so that we can arrange this for you.
Configuring your own license file
Our COMSOL module looks for license information in a few places, one of which is your home directory. If you have your own license server, store the information needed to reach it in the following format, where <server> is the license server hostname and <port> is its flex port number:
SERVER <server> ANY <port>
USE_SERVER
Save this file as comsol.lic in the $HOME/.licenses/ directory.
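As a sketch, the file can be created from the command line; the hostname license.example.org and port 1718 below are placeholders for your own license server's values:

```shell
# Create $HOME/.licenses/comsol.lic pointing at your license server.
# "license.example.org" and port 1718 are hypothetical placeholders;
# substitute your server's hostname and flex port.
mkdir -p "$HOME/.licenses"
cat > "$HOME/.licenses/comsol.lic" <<'EOF'
SERVER license.example.org ANY 1718
USE_SERVER
EOF
```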
Local license setup
For researchers wanting to use a new local institutional license server, firewall changes will be needed on both the Alliance (system/cluster) side and the institutional (server) side. To arrange this, send an email to Technical support containing 1) the COMSOL lmgrd TCP flex port number (typically the default, 1718), 2) the static LMCOMSOL TCP vendor port number (typically the default, 1719), and 3) the fully qualified hostname of your COMSOL license server. Once this is complete, create a corresponding comsol.lic text file as shown above.
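Once the firewall changes are in place, you can check whether the two ports are reachable from a login node. This is a sketch using bash's built-in /dev/tcp; the hostname is hypothetical:

```shell
# Probe the flex (1718) and vendor (1719) ports on the license server.
# "comsol-license.example.ca" is a hypothetical hostname; substitute
# your license server's fully qualified hostname.
LICHOST=comsol-license.example.ca
for PORT in 1718 1719; do
    if timeout 5 bash -c "echo > /dev/tcp/$LICHOST/$PORT" 2>/dev/null; then
        echo "port $PORT on $LICHOST is reachable"
    else
        echo "port $PORT on $LICHOST is not reachable"
    fi
done
```

If either port reports as not reachable from the cluster, the firewall changes are likely not yet complete.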
CMC license setup
Researchers who own a COMSOL license subscription from CMC should use the following preconfigured public IP settings in their comsol.lic file:
- Béluga: SERVER 10.20.73.21 ANY 6601 (IP changed May 18, 2022)
- Cedar: SERVER 172.16.0.101 ANY 6601
- Graham: SERVER 199.241.167.222 ANY 6601
- Narval: SERVER 10.100.64.10 ANY 6601
- Niagara: SERVER 172.16.205.198 ANY 6601
If initial license checkout attempts fail, contact <cmcsupport@cmc.ca> to verify they have your username on file.
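For example, on Graham the comsol.lic file can be written with the preconfigured IP from the list above:

```shell
# Write the CMC comsol.lic for Graham, using the preconfigured
# SERVER line listed above for that cluster.
mkdir -p "$HOME/.licenses"
cat > "$HOME/.licenses/comsol.lic" <<'EOF'
SERVER 199.241.167.222 ANY 6601
USE_SERVER
EOF
```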
Installed products
To check which modules and products are available for use, start COMSOL in graphical mode and click Options -> Licensed and Used Products in the upper pull-down menu. For a more detailed explanation, click here. If a module/product is missing or reports being unlicensed, contact Technical support, as a reinstall of the CVMFS module you are using may be required.
Submit jobs
Single compute node
Sample submission script to run a COMSOL job with eight cores on a single compute node:
#!/bin/bash
#SBATCH --time=0-03:00 # Specify (d-hh:mm)
#SBATCH --account=def-group # Specify (some account)
#SBATCH --mem=32G # Specify (set to 0 to use all memory on each node)
#SBATCH --cpus-per-task=8 # Specify (set to 32 or 44 on graham, 32 or 48 on cedar, 40 on beluga, 48 or 64 on narval to use all cores)
#SBATCH --nodes=1 # Do not change
#SBATCH --ntasks-per-node=1 # Do not change
INPUTFILE="ModelToSolve.mph" # Specify input filename
OUTPUTFILE="SolvedModel.mph" # Specify output filename
module load StdEnv/2020
module load comsol/6.1.0.357 # Specify a version
#echo "-XX:ActiveProcessorCount=1" > java.opts # Uncomment to avoid job startup hangs
comsol batch -inputfile ${INPUTFILE} -outputfile ${OUTPUTFILE} -np $SLURM_CPUS_ON_NODE
Depending on the complexity of the simulation, COMSOL may not be able to efficiently use very many cores. Therefore, it is advisable to test the scaling of your simulation by gradually increasing the number of cores. If near-linear speedup is obtained using all cores on a compute node, consider running the job over multiple full nodes using the next Slurm script.
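Such a scaling test can be sketched by resubmitting the script above several times, overriding the core count on the command line; the script name comsol_single.sh is hypothetical:

```shell
# Submit the same single-node COMSOL job at increasing core counts to
# measure scaling. "comsol_single.sh" is a hypothetical filename for
# the single-node script shown above; the -np $SLURM_CPUS_ON_NODE flag
# in that script picks up the overridden core count automatically.
for NCORES in 1 2 4 8; do
    sbatch --cpus-per-task=$NCORES --job-name=comsol_scaling_$NCORES comsol_single.sh
done
```

Comparing the solver wall time reported for each core count shows where the speedup levels off.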
Multiple compute nodes
Sample submission script to run a COMSOL job with eight cores distributed evenly over two compute nodes. Ideal for very large simulations that exceed the capabilities of a single compute node, this script supports restarting interrupted jobs, allocates large temporary files to /scratch, and uses the default comsolbatch.ini settings. An option to increase the Java heap memory is described below the script.
#!/bin/bash
#SBATCH --time=0-03:00 # Specify (d-hh:mm)
#SBATCH --account=def-account # Specify (some account)
#SBATCH --mem=16G # Specify (set to 0 to use all memory on each node)
#SBATCH --cpus-per-task=4 # Specify (set to 32 or 44 on graham, 32 or 48 on cedar, 40 on beluga, 48 or 64 on narval to use all cores)
#SBATCH --nodes=2 # Specify (the number of compute nodes to use for the job)
#SBATCH --ntasks-per-node=1 # Do not change
INPUTFILE="ModelToSolve.mph" # Specify input filename
OUTPUTFILE="SolvedModel.mph" # Specify output filename
module load StdEnv/2020
module load comsol/6.1.0.357 # Specify a version
RECOVERYDIR=$SCRATCH/comsol/recoverydir
mkdir -p $RECOVERYDIR
cp -f ${EBROOTCOMSOL}/bin/glnxa64/comsolbatch.ini comsolbatch.ini
cp -f ${EBROOTCOMSOL}/mli/startup/java.opts java.opts
#export I_MPI_COLL_EXTERNAL=0 # Uncomment on narval
comsol batch -mpibootstrap slurm -inputfile $INPUTFILE -outputfile $OUTPUTFILE \
-recoverydir $RECOVERYDIR -tmpdir $SLURM_TMPDIR -comsolinifile comsolbatch.ini -alivetime 15 \
#-recover -continue # Uncomment this line to restart solving from latest recovery files
Note 1: To increase the Java heap, add the following two sed lines after the two cp -f lines. For further information, please see the Out of Memory article.
sed -i 's/-Xmx2g/-Xmx4g/g' comsolbatch.ini
sed -i 's/-Xmx768m/-Xmx2g/g' java.opts
Note 2: The recently installed comsol/6.0.0.405 module may not perform optimally on Narval when running jobs across multiple nodes with the above Slurm script. Until more testing can be done, it is recommended to use the original comsol/6.0 module instead. No such problems appear when running COMSOL on a single node with this latest version. Further updates will be posted here as soon as more information becomes available.
Graphical use
COMSOL can be run interactively in full graphical mode using either of the following methods.
On cluster nodes
Suitable for interactively running computationally intensive test jobs that require up to all the cores or memory of a single cluster node.
- 1) Connect to a compute node (3hr time limit) with TigerVNC
- 2) Open a terminal window in vncviewer and run:
export XDG_RUNTIME_DIR=${SLURM_TMPDIR}
- 3) Start COMSOL Multiphysics 5.6.0.280 (or newer versions)
module load StdEnv/2020
module load comsol/5.6
comsol
- 4) Start COMSOL Multiphysics 5.5.0.292 (or older versions)
module load StdEnv/2016
module load comsol/5.5
comsol
On VDI nodes
Suitable interactive use on gra-vdi includes running a single test job with up to 8 cores for up to 24 hours, creating or modifying simulation input files, and performing post-processing or data visualization tasks.
- 1) Connect to gra-vdi (no time limit) with TigerVNC
- 2) Open a terminal window in vncviewer
- 3) Start COMSOL Multiphysics 5.6.0.280 (or newer versions)
module load CcEnv StdEnv/2020
module spider comsol
module load comsol/5.6
comsol
- 4) Start COMSOL Multiphysics 5.5.0.292 (or older versions)
module load CcEnv StdEnv/2016
module spider comsol
module load comsol/5.5
comsol
Parameter sweeps
Batch sweep
When working interactively in the COMSOL GUI, parametric problems may be solved using the Batch Sweep approach. Multiple parameter sweeps may be carried out as shown in this video. Speedup due to Task Parallelism may also be realized.
Cluster sweep
To run a parameter sweep on a cluster, a job must be submitted to the scheduler from the command line using sbatch slurmscript. For a detailed discussion of the additional required arguments, see a and b. Submitting parametric simulations to the cluster queue from the graphical interface using a Cluster Sweep node is not supported at this time.
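A sweep driven entirely from a batch script can be sketched as follows. This assumes COMSOL's -pname and -plist batch options and a global model parameter named L; the parameter name, values, and filenames are illustrative and must be adapted to your model:

```shell
#!/bin/bash
#SBATCH --time=0-03:00        # Specify (d-hh:mm)
#SBATCH --account=def-group   # Specify (some account)
#SBATCH --mem=32G             # Specify
#SBATCH --cpus-per-task=8     # Specify
#SBATCH --nodes=1             # Do not change
#SBATCH --ntasks-per-node=1   # Do not change
module load StdEnv/2020
module load comsol/6.1.0.357  # Specify a version
# Sweep the (assumed) global parameter "L" over three illustrative values;
# each value is solved in turn within the same job.
comsol batch -inputfile ModelToSolve.mph -outputfile SolvedModel.mph \
       -pname L -plist 0.1,0.2,0.3 -np $SLURM_CPUS_ON_NODE
```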