COMSOL
Introduction
COMSOL is a general-purpose software for modeling engineering applications. Compute Canada would like to thank COMSOL, Inc. for allowing its software to be hosted on Compute Canada clusters via a special agreement.
We recommend that Compute Canada users who wish to run COMSOL consult the documentation included with the software under File > Help > Documentation prior to attempting to use it on one of the clusters. Some of the basic manuals are available here.
Licensing
Compute Canada is a hosting provider for COMSOL: we have COMSOL software installed on our clusters, but we do not provide a generic license accessible to everyone. Many institutions, faculties, and departments already hold licenses that can be used on our clusters. Alternatively, researchers can purchase a license from CMC for use anywhere in Canada, or purchase a dedicated Floating Network License directly from COMSOL, hosted on a SHARCNET license server, for use on Compute Canada systems.
Once the legal aspects of licensing are worked out, some technical aspects remain. The license server on your end must be reachable by our compute nodes, which requires our technical team to get in touch with the people managing your license server. In some cases, such as CMC, this has already been done. You should then be able to load a COMSOL module, and it should find its license automatically. If it does not, please contact our Technical support so that we can arrange this for you.
Configuring your own license file
Our module for COMSOL is designed to look for license information in a few places, one of which is your /home folder. If you have your own license server, store the information needed to access it in the following format:
SERVER <server> ANY <port>
USE_SERVER
and save this file as $HOME/.licenses/comsol.lic, where <server> is your license server and <port> is its port number. Note that firewall changes will need to be made on both our side and yours. To arrange this, send an email containing the service port and IP address of your floating COMSOL license server to Technical support.
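For example, the file can be created from the command line as follows; the server name license1.example.ca and port 1718 below are placeholders, to be replaced with the values for your own license server:

```shell
# Create the COMSOL license file in $HOME/.licenses/.
# license1.example.ca and 1718 are placeholders -- substitute your own
# license server's hostname and port.
mkdir -p "$HOME/.licenses"
cat > "$HOME/.licenses/comsol.lic" <<'EOF'
SERVER license1.example.ca ANY 1718
USE_SERVER
EOF
```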
CMC license setup
Researchers who purchase a COMSOL license subscription from CMC may use the following settings in their comsol.lic file:
- Béluga: SERVER 132.219.136.89 ANY 6601
- Graham: SERVER 199.241.162.97 ANY 6601
- Cedar: SERVER 172.16.121.25 ANY 6601
If initial license checkout attempts fail, contact <cmcsupport@cmc.ca> to verify that they have your Compute Canada username on file.
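As a small convenience, the correct SERVER line for the cluster you are working on could be selected automatically. The sketch below simply restates the table above; cmc_server_line is a hypothetical helper name, not part of COMSOL:

```shell
# Hypothetical helper: print the CMC license SERVER line for a given cluster.
# Addresses and port restate the per-cluster settings listed above.
cmc_server_line() {
  case "$1" in
    beluga) echo "SERVER 132.219.136.89 ANY 6601" ;;
    graham) echo "SERVER 199.241.162.97 ANY 6601" ;;
    cedar)  echo "SERVER 172.16.121.25 ANY 6601" ;;
    *)      echo "unknown cluster: $1" >&2; return 1 ;;
  esac
}
```

For example, `cmc_server_line graham` prints the line to place at the top of $HOME/.licenses/comsol.lic on Graham, followed by USE_SERVER.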
Installed products
To check which products are installed and thus available for use with your license, run:
module load comsol/version
ls $EBROOTCOMSOL/applications | grep -i Module
ls $EBROOTCOMSOL/applications | grep -i LiveLink
Comparing the output with https://www.comsol.com/products for the current version shows that the Fuel_Cell_and_Electrolyzer_Module, Polymer_Flow_Module, and Liquid_and_Gas_Properties_Module are not installed (as of July 2021) with comsol/5.6. Such modules will appear in the Other products section (found by starting COMSOL in GUI mode and clicking Options -> Licensed and Used Products) regardless of whether your license supports them. A more detailed explanation is provided here. Future installations should include all products by default. If a product you require has not been installed, contact Technical support and request that it be added.
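The `ls | grep` check above can be wrapped in a small function for convenience; has_product is a hypothetical helper name, and $EBROOTCOMSOL is set when a comsol module is loaded:

```shell
# Hypothetical helper: report whether a named product directory exists in the
# currently loaded COMSOL installation.
# Assumes $EBROOTCOMSOL was set by `module load comsol/<version>`.
has_product() {
  ls "$EBROOTCOMSOL/applications" 2>/dev/null | grep -qi "$1"
}
# Example usage: has_product CFD && echo "CFD Module is installed"
```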
Submit jobs
Single compute node
Sample submission script to run a COMSOL job with eight cores on a single cluster compute node:
#!/bin/bash
#SBATCH --time=0-03:00 # Specify (d-hh:mm)
#SBATCH --account=def-group # Specify (some account)
#SBATCH --mem=3G # Specify (set to 0 to use all available node memory when using all cores)
#SBATCH --cpus-per-task=8 # Specify (set to 32 or 44 on graham, 32 or 48 on cedar, or 40 on beluga to use all cores)
#SBATCH --nodes=1 # Do not change
#SBATCH --ntasks-per-node=1 # Do not change
FILENAME="MyModel.mph" # Specify (inputfile filename)
# Uncomment a version to use
#module load StdEnv/2016.4 comsol/5.5
#module load StdEnv/2020 comsol/5.6
comsol batch -inputfile $FILENAME -outputfile solved_out.mph -np $SLURM_CPUS_ON_NODE
Depending on the complexity of the simulation, COMSOL may not be able to efficiently use very many cores. Therefore, it is advisable to test the scaling of your simulation by gradually increasing the number of cores. If near-linear speedup is obtained using all cores on a compute node then consider running the job over multiple full nodes using the next Slurm script.
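One way to run such a scaling test is to generate a copy of the single-node script per core count and submit each; the sketch below writes the job scripts to the current directory. The account, time limit, and input file are placeholders to adjust for your own setup:

```shell
# Sketch: generate one single-node job script per core count for a scaling test.
# The account, time limit, and MyModel.mph input file are placeholders.
for n in 1 2 4 8; do
  cat > "scale_${n}.sh" <<EOF
#!/bin/bash
#SBATCH --time=0-03:00
#SBATCH --account=def-group
#SBATCH --mem=3G
#SBATCH --cpus-per-task=${n}
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=1
module load StdEnv/2020 comsol/5.6
comsol batch -inputfile MyModel.mph -outputfile solved_${n}.mph -np \$SLURM_CPUS_ON_NODE
EOF
done
# Submit each script with sbatch and compare the reported runtimes, e.g.:
#   for n in 1 2 4 8; do sbatch scale_$n.sh; done
```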
Multiple compute nodes
Sample submission script to run a COMSOL job with eight cores distributed evenly over two cluster compute nodes. Intended for very large simulations that exceed the capabilities of a single compute node, this script supports restarting interrupted jobs, writes large temporary files to /scratch, and uses the default comsolbatch.ini settings. An option to increase the Java heap memory is described below the script.
#!/bin/bash
#SBATCH --time=0-03:00 # Specify (d-hh:mm)
#SBATCH --account=def-account # Specify (some account)
#SBATCH --mem=3G # Specify (set to 0 to use all available node memory when using all cores)
#SBATCH --cpus-per-task=4 # Specify (set to 32 or 44 on graham, 32 or 48 on cedar, or 40 on beluga to use all cores)
#SBATCH --nodes=2 # Specify (the number of compute nodes)
#SBATCH --ntasks-per-node=1 # Do not change
FILENAME="MyModel.mph" # Specify (inputfile filename)
module load StdEnv/2020
module load comsol/5.6
SCRTMP=/scratch/$USER/comsol/tmpdir
SCRREC=/scratch/$USER/comsol/recoverydir
mkdir -p $SCRTMP $SCRREC
cp -f ${EBROOTCOMSOL}/bin/glnxa64/comsolbatch.ini comsolbatch.ini
cp -f ${EBROOTCOMSOL}/mli/startup/java.opts java.opts
comsol batch -inputfile $FILENAME -outputfile solved_out.mph -recover \
-nn $SLURM_NTASKS -nnhost $SLURM_NTASKS_PER_NODE -np $SLURM_CPUS_PER_TASK \
-recoverydir $SCRREC -tmpdir $SCRTMP -comsolinifile comsolbatch.ini -alivetime 15
To increase the Java heap, add the following sed lines immediately after the cp -f commands:
sed -i 's/-Xmx2g/-Xmx4g/g' comsolbatch.ini
sed -i 's/-Xmx768m/-Xmx2g/g' java.opts
For further information, please see this Out of Memory article.
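To confirm the substitutions took effect, the first -Xmx setting in each file can be printed; heap_setting is a hypothetical helper name:

```shell
# Hypothetical helper: print the first Java max-heap (-Xmx) setting in a file.
heap_setting() {
  grep -o -- '-Xmx[0-9]*[gm]' "$1" | head -n 1
}
# Example: heap_setting comsolbatch.ini
# (should print -Xmx4g after the sed edit above)
```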
Graphical use
COMSOL can be run interactively in full graphical mode using either of the following methods.
On cluster nodes
Suitable for interactively running computationally intensive test jobs that require up to all the cores or memory of a single cluster node.
- 1) Connect to a compute node (3hr time limit) with TigerVNC
- 2) Start COMSOL Multiphysics 5.5.0.292
module load StdEnv/2016.4
module load comsol/5.5
comsol
- 3) Start COMSOL Multiphysics 5.6.0.280
module load StdEnv/2020
module load comsol/5.6
comsol
On VDI nodes
Suitable for interactively running a single test job with up to 8 cores, creating or modifying simulation input files, and post-processing or visualizing data.
- 1) Connect to gra-vdi (no time limit) with TigerVNC
- 2) Start COMSOL Multiphysics 5.5.0.292 (or older versions)
module load CcEnv StdEnv/2016.4
module load comsol/5.5
comsol
- 3) Start COMSOL Multiphysics 5.6.0.280 (or newer versions)
module load CcEnv StdEnv/2020
module load comsol/5.6
comsol
Users with complex visualization models may benefit from hardware-level graphics acceleration, which is only available through a local COMSOL module installed on gra-vdi. If a specific version is needed, submit a problem ticket to help@sharcnet.ca.
- 1) Connect to gra-vdi (no time limit) with TigerVNC
- 2) Start a COMSOL Multiphysics version such as:
module load SnEnv
module load comsol/5.6.0.280
comsol
Note: COMSOL will be automatically killed after 24 hours of use on gra-vdi.
Parameter sweeps
Batch sweep
When working interactively in the COMSOL GUI, parametric problems may be solved using the Batch Sweep approach. Multiple parameter sweeps may be carried out as shown in this video. Speedup due to Task Parallelism may also be realized.
Cluster sweep
To run a parameter sweep on a cluster, a job must be submitted to the scheduler from the command line using sbatch slurmscript. For a detailed discussion of the additional required arguments, see a and b. Submitting parametric simulations to the cluster queue from the graphical interface using a Cluster Sweep node is not supported at this time.
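As a sketch, a command-line sweep can be expressed with COMSOL's -pname/-plist batch options inside a Slurm script; the parameter name L, its values, the account, and the input file below are placeholders:

```shell
# Sketch: write a Slurm script that sweeps a model parameter over three values.
# -pname/-plist are COMSOL batch options; L, its values 1,2,3, def-group,
# and MyModel.mph are placeholders for your own parameter, account, and model.
cat > sweep.sh <<'EOF'
#!/bin/bash
#SBATCH --time=0-03:00
#SBATCH --account=def-group
#SBATCH --mem=3G
#SBATCH --cpus-per-task=8
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=1
module load StdEnv/2020 comsol/5.6
comsol batch -inputfile MyModel.mph -outputfile solved_out.mph \
  -pname L -plist 1,2,3 -np $SLURM_CPUS_ON_NODE
EOF
# Submit with: sbatch sweep.sh
```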