COMSOL
<translate>
= Introduction = <!--T:1-->
[http://www.comsol.com COMSOL] is a general-purpose software for modelling engineering applications. We would like to thank COMSOL, Inc. for allowing its software to be hosted on our clusters via a special agreement.
[[File:Logo comsol blue 1571x143.png|thumb]]
We recommend that you consult the documentation included with the software under <i>File > Help > Documentation</i> prior to attempting to use COMSOL on one of our clusters. Links to the COMSOL blog, Knowledge Base, Support Centre and Documentation can be found at the bottom of the [http://www.comsol.com COMSOL home page].  Searchable online COMSOL documentation is also available [https://doc.comsol.com/ here].


= Licensing = <!--T:2-->
We are a hosting provider for COMSOL. This means that we have COMSOL software installed on our clusters, but we do not provide a generic license accessible to everyone. Many institutions, faculties, and departments already have licenses that can be used on our clusters.  Alternatively, you can purchase a license from [https://account.cmc.ca/en/WhatWeOffer/Products/CMC-00200-00368.aspx CMC] for use anywhere in Canada. Once the legal aspects are worked out for licensing, there will be remaining technical aspects. The license server on your end will need to be reachable by our compute nodes. This will require our technical team to get in touch with the technical people managing your license software. If you have purchased a CMC license and will be connecting to the CMC license server, this has already been done. Once the license server work is done and your <i>~/.licenses/comsol.lic</i> has been created, you can load any COMSOL module and begin using the software. If this is not the case, please contact our [[technical support]].


== Configuring your own license file == <!--T:4-->
Our COMSOL module is designed to look for license information in a few places, one of which is your <i>~/.licenses</i> directory. If you have your own license server then specify it by creating a text file <code>$HOME/.licenses/comsol.lic</code> with the following information:
{{File
|name=comsol.lic
|contents=
SERVER <server> ANY <port>
USE_SERVER
}}
Here <code><server></code> is your license server hostname and <code><port></code> is the flex port number of the license server.
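For example, the file can be created from a login node in a few shell commands; the hostname <code>license.example.ca</code> and port 1718 below are hypothetical placeholders, so substitute the values for your own license server:

```shell
# Create the license directory and a minimal comsol.lic.
# "license.example.ca" and port 1718 are placeholders -- substitute
# the hostname and flex port of your own license server.
mkdir -p "$HOME/.licenses"
cat > "$HOME/.licenses/comsol.lic" <<'EOF'
SERVER license.example.ca ANY 1718
USE_SERVER
EOF
```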
 
=== Local license setup === <!--T:194-->
 
<!--T:195-->
For researchers wanting to use a new local institutional license server, firewall changes will need to be made to the network on both the Alliance (system/cluster) side and the institutional (server) side. To arrange this, send an email to [[technical support]] containing: 1) the COMSOL lmgrd TCP flex port number (typically the default, 1718), 2) the static LMCOMSOL TCP vendor port number (typically the default, 1719), and 3) the fully qualified hostname of your COMSOL license server.  Once this is complete, create a corresponding <i>comsol.lic</i> text file as shown above.


=== CMC license setup === <!--T:197-->


<!--T:198-->
Researchers who own a COMSOL license subscription from CMC should use the following preconfigured settings in their <i>comsol.lic</i> file:


<!--T:199-->
* Béluga: SERVER 10.20.73.21 ANY 6601 (IP changed May 18, 2022)
* Cedar: SERVER 172.16.0.101 ANY 6601
* Graham: SERVER 199.241.167.222 ANY 6601
* Narval: SERVER 10.100.64.10 ANY 6601
* Niagara: SERVER 172.16.205.198 ANY 6601


<!--T:200-->
If initial license checkout attempts fail, contact <cmcsupport@cmc.ca> to verify they have your username on file.
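For example, a complete <i>comsol.lic</i> for use on Béluga with the CMC license server (address taken from the list above) would contain:

{{File
|name=comsol.lic
|contents=
SERVER 10.20.73.21 ANY 6601
USE_SERVER
}}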
 
== Installed products == <!--T:30-->
 
<!--T:31-->
To check which [https://www.comsol.com/products modules and products] are available for use, start COMSOL in [[#Graphical_use|graphical mode]] and then click <i>Options -> Licensed and Used Products</i> on the upper pull-down menu.  For a more detailed explanation, click [https://doc.comsol.com/6.0/docserver/#!/com.comsol.help.comsol/comsol_ref_customizing.16.09.html  here].  If a module/product is missing or reports being unlicensed, contact [[technical support]] as a reinstall of the CVMFS module you are using may be required.
 
== Installed versions == <!--T:32-->
To check the full version number, either start COMSOL in [[#Graphical_use|graphical]] mode and inspect the messages window in the lower-right corner, OR more simply log in to a cluster and run COMSOL in batch mode as follows:
[login-node:~] salloc --time=0:01:00 --nodes=1 --cores=1 --mem=1G --account=def-someuser
[login-node:~] module load comsol/6.2
[login-node:~] comsol batch -version
COMSOL Multiphysics 6.2.0.290
which corresponds to COMSOL 6.2 Update 1.  In other words, when a new [https://www.comsol.com/release-history COMSOL release] is installed, it will use the abbreviated 6.X version format but for convenience will contain the latest available update at the time of installation.  As additional [https://www.comsol.com/product-update product updates] are released, they will instead utilize the full 6.X.Y.Z version format.  For example, [https://www.comsol.com/product-update/6.2 Update 3] can be loaded on a cluster with the <code>module load comsol/6.2.0.415</code> OR <code>module load comsol</code> commands.  We recommend using the most recent update to take advantage of all the latest improvements.  That said, if you want to continue using any module version (6.X or 6.X.Y.Z), you can be assured by definition that the software contained in these modules will remain exactly the same.
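As a small illustration of the two version formats, the abbreviated 6.X module name can be derived from the full build string printed by <code>comsol batch -version</code> with plain shell parameter expansion:

```shell
# Full string as printed by "comsol batch -version"
full="COMSOL Multiphysics 6.2.0.290"
build="${full##* }"     # keep the last word: 6.2.0.290
short="${build%.*.*}"   # trim to the abbreviated 6.X module format
echo "$short"           # prints 6.2
```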
 
<!--T:223-->
To check which versions are available in the standard environment you have loaded (typically <code>StdEnv/2023</code>), run the <code>module avail comsol</code> command.  Lastly, to check which versions are available in ALL available standard environments, use the <code>module spider comsol</code> command.


= Submit jobs = <!--T:5-->


== Single compute node == <!--T:6-->


<!--T:8-->
Sample submission script to run a COMSOL job with eight cores on a single compute node:
{{File
|name=script-smp.sh
|lang="bash"
|contents=
#!/bin/bash
#SBATCH --time=0-03:00             # Specify (d-hh:mm)
#SBATCH --account=def-group        # Specify (some account)
#SBATCH --mem=32G                  # Specify (set to 0 to use all memory on each node)
#SBATCH --cpus-per-task=8          # Specify (set to 32or44 graham, 32or48 cedar, 40 beluga, 48or64 narval to use all cores)
#SBATCH --nodes=1                  # Do not change
#SBATCH --ntasks-per-node=1        # Do not change


<!--T:205-->
INPUTFILE="ModelToSolve.mph"       # Specify input filename
OUTPUTFILE="SolvedModel.mph"       # Specify output filename


<!--T:206-->
# module load StdEnv/2020          # Versions < 6.2
module load StdEnv/2023
module load comsol/6.2


<!--T:10-->
comsol batch -inputfile ${INPUTFILE} -outputfile ${OUTPUTFILE} -np $SLURM_CPUS_ON_NODE
}}


<!--T:12-->
Depending on the complexity of the simulation, COMSOL may not be able to efficiently use very many cores.  Therefore, it is advisable to test the scaling of your simulation by gradually increasing the number of cores. If near-linear speedup is obtained using all cores on a compute node, consider running the job over multiple full nodes using the next Slurm script.
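One way to organize such a scaling test is to generate one single-node job script per core count from a template and submit each in turn. A minimal sketch (account name, memory, and input file are placeholders; the <code>sbatch</code> call is left as a comment so the loop only writes the files):

```shell
# Write one single-node COMSOL job script per core count to test.
# The account, memory, and input file below are placeholders.
for ncores in 1 2 4 8 16; do
  cat > "scaling_${ncores}.sh" <<EOF
#!/bin/bash
#SBATCH --time=0-03:00
#SBATCH --account=def-group
#SBATCH --mem=32G
#SBATCH --cpus-per-task=${ncores}
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=1
module load StdEnv/2023
module load comsol/6.2
comsol batch -inputfile ModelToSolve.mph -outputfile solved_${ncores}.mph -np \$SLURM_CPUS_ON_NODE
EOF
done
# To submit the tests, then compare the reported solution times:
# for s in scaling_*.sh; do sbatch "$s"; done
```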


== Multiple compute nodes == <!--T:14-->


<!--T:16-->
<tabs><tab name="...">
Sample submission script to run a COMSOL job with eight cores distributed evenly over two compute nodes.  Ideal for very large simulations (that exceed the capabilities of a single compute node), this script supports restarting interrupted jobs, allocating large temporary files to /scratch and utilizing the default <i>comsolbatch.ini</i> file settings.  There is also an option to modify the Java heap memory described below the script.
 
<!--T:212-->
{{File
|name=script-dis.sh
|lang="bash"
|contents=
#!/bin/bash
#SBATCH --time=0-03:00             # Specify (d-hh:mm)
#SBATCH --account=def-account      # Specify (some account)
#SBATCH --mem=16G                  # Specify (set to 0 to use all memory on each node)
#SBATCH --cpus-per-task=4          # Specify (set to 32or44 graham, 32or48 cedar, 40 beluga, 48or64 narval to use all cores)
#SBATCH --nodes=2                  # Specify (the number of compute nodes to use for the job)
#SBATCH --ntasks-per-node=1        # Do not change


<!--T:207-->
INPUTFILE="ModelToSolve.mph"       # Specify input filename
OUTPUTFILE="SolvedModel.mph"       # Specify output filename


<!--T:208-->
# module load StdEnv/2020          # Versions < 6.2
module load StdEnv/2023
module load comsol/6.2


<!--T:209-->
RECOVERYDIR=$SCRATCH/comsol/recoverydir
mkdir -p $RECOVERYDIR


<!--T:210-->
cp -f ${EBROOTCOMSOL}/bin/glnxa64/comsolbatch.ini comsolbatch.ini
cp -f ${EBROOTCOMSOL}/mli/startup/java.opts java.opts


<!--T:217-->
# export I_MPI_COLL_EXTERNAL=0      # Uncomment this line on narval

<!--T:211-->
comsol batch -inputfile $INPUTFILE -outputfile $OUTPUTFILE -np $SLURM_CPUS_ON_NODE -nn $SLURM_NNODES \
-recoverydir $RECOVERYDIR -tmpdir $SLURM_TMPDIR -comsolinifile comsolbatch.ini -alivetime 15 \
# -recover -continue                # Uncomment this line to restart solving from latest recovery files
 
<!--T:221-->
}}


<!--T:218-->
Note 1: If your multiple node job crashes on startup with a Java segmentation fault, try increasing the Java heap by adding the following two <code>sed</code> lines after the two <code>cp -f</code> lines.  If that does not help, try further changing both 4g values to 8g. For more information, see [https://www.comsol.ch/support/knowledgebase/1243 Out of Memory].
 sed -i 's/-Xmx2g/-Xmx4g/g' comsolbatch.ini
 sed -i 's/-Xmx768m/-Xmx4g/g' java.opts
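To see exactly what these substitutions change before editing the real files, you can dry-run them on throwaway copies containing the default heap flags (<code>-Xmx2g</code> and <code>-Xmx768m</code> are the defaults assumed here, per the lines above):

```shell
# Dry-run the heap substitutions on throwaway copies holding the
# default Java heap flags, then confirm the result.
printf -- '-Xmx2g\n' > comsolbatch.ini.test
printf -- '-Xmx768m\n' > java.opts.test
sed -i 's/-Xmx2g/-Xmx4g/g' comsolbatch.ini.test
sed -i 's/-Xmx768m/-Xmx4g/g' java.opts.test
grep -h Xmx comsolbatch.ini.test java.opts.test   # both lines now read -Xmx4g
```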


<!--T:219-->
Note 2:  On Narval, jobs may run slowly when submitted with comsol/6.0.0.405 to multiple nodes using the above Slurm script. If this occurs, use comsol/6.0 instead and open a ticket to report the problem. The latest comsol/6.1.X modules have not been tested on Narval yet.


</tab></tabs>
<!--T:220-->
Note 3:  On Graham, there is a small chance jobs will run slowly or hang during startup when submitted to a single node with the above <i>script-smp.sh</i> script.  If this occurs, use the multiple node <i>script-dis.sh</i> script instead, adding <code>#SBATCH --nodes=1</code>, and then open a ticket to report the problem.


= Graphical use = <!--T:120-->


<!--T:122-->
COMSOL can be run interactively in full graphical mode using either of the following methods.


== On cluster nodes == <!--T:121-->


<!--T:201-->
Suitable for interactively running computationally intensive test jobs using all the cores and memory reserved by <code>salloc</code> on a single cluster node:


<!--T:196-->
: 1) Connect to a compute node (3-hour time limit) with [[VNC#Compute_nodes|TigerVNC]].
: 2) Open a terminal window in vncviewer and run:
::; <code>export XDG_RUNTIME_DIR=${SLURM_TMPDIR}</code>
: 3) Start COMSOL Multiphysics 6.2 (or newer versions).
::; <code>module load StdEnv/2023</code>
::; <code>module load comsol/6.2</code>
::; <code>comsol</code> (uses all cores requested by salloc)
: 4) Start COMSOL Multiphysics 5.6 (or newer versions).
::; <code>module load StdEnv/2020</code>
::; <code>module load comsol/6.1.0.357</code>
::; <code>comsol</code> (uses all cores requested by salloc)
: 5) Start COMSOL Multiphysics 5.5 (or older versions).
::; <code>module load StdEnv/2016</code>
::; <code>module load comsol/5.5</code>
::; <code>comsol</code> (uses all cores requested by salloc)


== On VDI nodes == <!--T:123-->


<!--T:124-->
Suitable interactive use on gra-vdi includes running compute calculations with a maximum of 12 cores, creating or modifying simulation input files, and performing post-processing or data visualization tasks.  Since each gra-vdi server is shared with many other users, we ask that you limit your COMSOL usage to 12 cores as shown below (especially when running long calculations) so as not to overload the system and inconvenience others.  For interactive work and shorter meshing calculations, using 16 cores should be fine.  If you need more cores when working in graphical mode, use COMSOL on a cluster compute node (as shown above), where you can reserve up to all available cores and memory on a node and have exclusive use of the resource.


<!--T:125-->
: 1) Connect to gra-vdi (no time limit) with [[VNC#Compute_nodes|TigerVNC]].
: 2) Open a terminal window in vncviewer.
: 3) Start COMSOL Multiphysics 6.2 (or newer versions).
::; <code>module load CcEnv StdEnv/2023</code>
::; <code>module avail comsol</code>
::; <code>module load comsol/6.2</code>
::; <code>comsol -np 12</code> (limits use to 12 cores)
: 4) Start COMSOL Multiphysics 5.6 (or newer versions).
::; <code>module load CcEnv StdEnv/2020</code>
::; <code>module avail comsol</code>
::; <code>module load comsol/6.1.0.357</code>
::; <code>comsol -np 12</code> (limits use to 12 cores)
: 5) Start COMSOL Multiphysics 5.5 (or older versions).
::; <code>module load CcEnv StdEnv/2016</code>
::; <code>module avail comsol</code>
::; <code>module load comsol/5.5</code>
::; <code>comsol -np 12</code> (limits use to 12 cores)


<!--T:222-->
Note: If all the upper menu items are greyed out (and therefore not clickable) immediately after COMSOL starts in GUI mode, your <i>~/.comsol</i> directory may be corrupted. To fix the problem, rename (or remove) your entire <i>~/.comsol</i> directory and try starting COMSOL again.  This can occur if you previously loaded a COMSOL module from the local SnEnv on gra-vdi.


=Parameter sweeps= <!--T:130-->


==Batch sweep== <!--T:132-->


<!--T:202-->
When working interactively in the COMSOL GUI, parametric problems may be solved using the [https://www.comsol.com/blogs/the-power-of-the-batch-sweep/ Batch Sweep] approach.  Multiple parameter sweeps may be carried out as shown in [https://www.comsol.com/video/performing-parametric-sweep-study-comsol-multiphysics this video].  Speedup due to [https://www.comsol.com/blogs/added-value-task-parallelism-batch-sweeps/ Task Parallelism] may also be realized.
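The same kind of sweep can also be driven from the command line with COMSOL's <code>-pname</code> and <code>-plist</code> batch arguments. A minimal sketch, where the model file and the parameter name <code>L</code> are hypothetical placeholders (the command is assembled and printed so you can inspect it before running it on a compute node):

```shell
# Assemble a batch-sweep command line; the model file and the
# parameter name "L" are hypothetical placeholders.
cmd="comsol batch -inputfile MyModel.mph -outputfile SolvedModel.mph"
cmd="$cmd -pname L -plist 1,2,3"
echo "$cmd"    # run on a compute node, or place inside a Slurm script
```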


==Cluster sweep== <!--T:203-->




<!--T:204-->
To run a parameter sweep on a cluster, a job must be submitted to the scheduler from the command line using <code>sbatch slurmscript</code>.  For a discussion of the additional required arguments, see [https://www.comsol.com/support/knowledgebase/1250 a] and [https://www.comsol.com/blogs/how-to-use-job-sequences-to-save-data-after-solving-your-model/ b]. Support for submitting parametric simulations to the cluster queue from the graphical interface using a [https://www.comsol.com/blogs/how-to-use-the-cluster-sweep-node-in-comsol-multiphysics/ Cluster Sweep node] is not available at this time.
</translate>

Latest revision as of 20:35, 5 July 2024

Other languages:

Introduction[edit]

COMSOL is a general-purpose software for modelling engineering applications. We would like to thank COMSOL, Inc. for allowing its software to be hosted on our clusters via a special agreement.

Logo comsol blue 1571x143.png

We recommend that you consult the documentation included with the software under File > Help > Documentation prior to attempting to use COMSOL on one of our clusters. Links to the COMSOL blog, Knowledge Base, Support Centre and Documentation can be found at the bottom of the COMSOL home page. Searchable online COMSOL documentation is also available here.

Licensing[edit]

We are a hosting provider for COMSOL. This means that we have COMSOL software installed on our clusters, but we do not provide a generic license accessible to everyone. Many institutions, faculties, and departments already have licenses that can be used on our clusters. Alternatively, you can purchase a license from CMC for use anywhere in Canada. Once the legal aspects are worked out for licensing, there will be remaining technical aspects. The license server on your end will need to be reachable by our compute nodes. This will require our technical team to get in touch with the technical people managing your license software. If you have purchased a CMC license and will be connecting to the CMC license server, this has already been done. Once the license server work is done and your ~/.licenses/comsol.lic has been created, you can load any COMSOL module and begin using the software. If this is not the case, please contact our technical support.

Configuring your own license file[edit]

Our COMSOL module is designed to look for license information in a few places, one of which is your ~/.licenses directory. If you have your own license server then specify it by creating a text file $HOME/.licenses/comsol.lic with the following information:

File : comsol.lic

SERVER <server> ANY <port>
USE_SERVER


Where <server> is your license server hostname and <port> is the flex port number of the license server.

Local license setup[edit]

For researchers wanting to use a new local institutional license server, firewall changes will need to be done to the network on both the Alliance (system/cluster) side and the institutional (server) side. To arrange this, send an email to technical support containing 1) the COMSOL lmgrd TCP flex port number (typically 1718 default) and 2) the static LMCOMSOL TCP vendor port number (typically 1719 default) and finally 3) the fully qualified hostname of your COMSOL license server. Once this is complete, create a corresponding comsol.lic text file as shown above.

CMC license setup[edit]

Researchers who own a COMSOL license subscription from CMC should use the following preconfigured public IP settings in their comsol.lic file:

  • Béluga: SERVER 10.20.73.21 ANY 6601 (IP changed May 18, 2022)
  • Cedar: SERVER 172.16.0.101 ANY 6601
  • Graham: SERVER 199.241.167.222 ANY 6601
  • Narval: SERVER 10.100.64.10 ANY 6601
  • Niagara: SERVER 172.16.205.198 ANY 6601

If initial license checkout attempts fail, contact <cmcsupport@cmc.ca> to verify they have your username on file.

Installed products[edit]

To check which modules and products are available for use, start COMSOL in graphical mode and then click Options -> Licensed and Used Products on the upper pull-down menu. For a more detailed explanation, click here. If a module/product is missing or reports being unlicensed, contact technical support as a reinstall of the CVMFS module you are using may be required.

Installed versions[edit]

To check the full version number either start comsol in gui mode and inspect the lower right corner messages window OR more simply login to a cluster and run comsol in batch mode as follows:

[login-node:~] salloc --time=0:01:00 --nodes=1 --cores=1 --mem=1G --account=def-someuser
[login-node:~] module load comsol/6.2
[login-node:~] comsol batch -version
COMSOL Multiphysics 6.2.0.290

which corresponds to COMSOL 6.2 Update 1. In other words, when a new COMSOL release is installed, it will use the abbreviated 6.X version format but for convenience will contain the latest available update at the time of installation. As additional product updates are released they will instead utilize the full 6.X.Y.Z version format. For example, Update 3 can be loaded on a cluster with the module load comsol/6.2.0.415 OR module load comsol commands. We recommend using the moat recent update to take advantage of all the latest improvements. That said, if you want to continue using any module version (6.X or 6.X.Y.Z). you can be assured by definition that the software contained in these modules will remain exactly the same.

To check which versions are available in the standard environment you have loaded ( typically StdEnv/2023 ) run the module avail comsol command. Lastly, to check which versions are available in ALL available standard environments, use the module spider comsol command.

Submit jobs[edit]

Single compute node[edit]

Sample submission script to run a COMSOL job with eight cores on a single compute node:

File : mysub1.sh

#!/bin/bash
#SBATCH --time=0-03:00             # Specify (d-hh:mm)
#SBATCH --account=def-group        # Specify (some account)
#SBATCH --mem=32G                  # Specify (set to 0 to use all memory on each node)
#SBATCH --cpus-per-task=8          # Specify (set to 32or44 graham, 32or48 cedar, 40 beluga, 48or64 narval to use all cores)
#SBATCH --nodes=1                  # Do not change
#SBATCH --ntasks-per-node=1        # Do not change

INPUTFILE="ModelToSolve.mph"       # Specify input filename
OUTPUTFILE="SolvedModel.mph"       # Specify output filename

# module load StdEnv/2020          # Versions < 6.2
module load StdEnv/2023
module load comsol/6.2

comsol batch -inputfile ${INPUTFILE} -outputfile ${OUTPUTFILE} -np $SLURM_CPUS_ON_NODE


Depending on the complexity of the simulation, COMSOL may not be able to efficiently use very many cores. Therefore, it is advisable to test the scaling of your simulation by gradually increasing the number of cores. If near-linear speedup is obtained using all cores on a compute node, consider running the job over multiple full nodes using the next Slurm script.

Multiple compute nodes[edit]

Sample submission script to run a COMSOL job with eight cores distributed evenly over two compute nodes. Ideal for very large simulations (that exceed the capabilities of a single compute node), this script supports restarting interrupted jobs, allocating large temporary files to /scratch and utilizing the default comsolbatch.ini file settings. There is also an option to modify the Java heap memory described below the script.


File : script-dis.sh

#!/bin/bash
#SBATCH --time=0-03:00             # Specify (d-hh:mm)
#SBATCH --account=def-account      # Specify (some account)
#SBATCH --mem=16G                  # Specify (set to 0 to use all memory on each node)
#SBATCH --cpus-per-task=4          # Specify (set to 32or44 graham, 32or48 cedar, 40 beluga, 48or64 narval to use all cores)
#SBATCH --nodes=2                  # Specify (the number of compute nodes to use for the job)
#SBATCH --ntasks-per-node=1        # Do not change

INPUTFILE="ModelToSolve.mph"       # Specify input filename
OUTPUTFILE="SolvedModel.mph"       # Specify output filename

# module load StdEnv/2020          # Versions < 6.2
module load StdEnv/2023
module load comsol/6.2

RECOVERYDIR=$SCRATCH/comsol/recoverydir
mkdir -p $RECOVERYDIR

cp -f ${EBROOTCOMSOL}/bin/glnxa64/comsolbatch.ini comsolbatch.ini
cp -f ${EBROOTCOMSOL}/mli/startup/java.opts java.opts

# export I_MPI_COLL_EXTERNAL=0      # Uncomment this line on narval 

comsol batch -inputfile $INPUTFILE -outputfile $OUTPUTFILE -np $SLURM_CPUS_ON_NODE -nn $SLURM_NNODES \
-recoverydir $RECOVERYDIR -tmpdir $SLURM_TMPDIR -comsolinifile comsolbatch.ini -alivetime 15 \
# -recover -continue                # Uncomment this line to restart solving from latest recovery files


Note 1: If your multiple node job crashes on startup with a java segmentation fault, try increasing the java heap by adding the following two sed lines after the two cp -f lines. If it does not help, try further changing both 4g values to 8g. For further information see Out of Memory.

sed -i 's/-Xmx2g/-Xmx4g/g' comsolbatch.ini
sed -i 's/-Xmx768m/-Xmx4g/g' java.opts

Note 2: On Narval, jobs may run slowly when submitted with comsol/6.0.0.405 to multiple nodes using the above Slurm script. If this occurs, use comsol/6.0 instead and open a ticket to report the problem. The latest comsol/6.1.X modules have not yet been tested on Narval.

Note 3: On Graham, there is a small chance that jobs will run slowly or hang during startup when submitted to a single node with the above script-smp.sh script. If this occurs, use the multiple-node script-dis.sh script instead, adding #SBATCH --nodes=1, and then open a ticket to report the problem.

= Graphical use =

COMSOL can be run interactively in full graphical mode using either of the following methods.

== On cluster nodes ==

Suitable for interactively running computationally intensive test jobs using ALL available cores and memory reserved by salloc on a single cluster node:

1) Connect to a compute node (3-hour time limit) with TigerVNC.
2) Open a terminal window in vncviewer and run:
export XDG_RUNTIME_DIR=${SLURM_TMPDIR}
3) Start COMSOL Multiphysics 6.2 (or newer versions).
module load StdEnv/2023
module load comsol/6.2
comsol (uses all cores requested by salloc)
4) Start COMSOL Multiphysics 5.6 (or newer versions).
module load StdEnv/2020
module load comsol/6.1.0.357
comsol (uses all cores requested by salloc)
5) Start COMSOL Multiphysics 5.5 (or older versions).
module load StdEnv/2016
module load comsol/5.5
comsol (uses all cores requested by salloc)
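Before following the steps above, a compute node must first be reserved with salloc. A minimal sketch is shown below; the account name and resource values are placeholders and should be adjusted to your allocation and the cluster you are using:

```shell
# Hypothetical example: reserve 8 cores and 16G of memory on one compute
# node for 3 hours of interactive use (replace def-account with your own
# allocation). Once the session starts, proceed with the steps above.
salloc --time=3:00:00 --account=def-account --nodes=1 \
       --ntasks-per-node=1 --cpus-per-task=8 --mem=16G
```

When COMSOL is then started without -np, it uses all cores requested by salloc, as noted in the steps above.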

== On VDI nodes ==

Suitable interactive use on gra-vdi includes running compute calculations with a maximum of 12 cores, creating or modifying simulation input files, and performing post-processing or data visualization tasks. Since each gra-vdi server is shared with many other users, please limit your COMSOL usage to 12 cores as shown below (especially when running long calculations) so as not to overload the system and inconvenience others. For interactive work and shorter meshing calculations, using 16 cores should be fine. If you need more cores when working in graphical mode, use COMSOL on a cluster compute node (as shown above), where you can reserve up to all available cores and memory on a node and have exclusive use of these resources.

1) Connect to gra-vdi (no time limit) with TigerVNC.
2) Open a terminal window in vncviewer.
3) Start COMSOL Multiphysics 6.2 (or newer versions).
module load CcEnv StdEnv/2023
module avail comsol
module load comsol/6.2
comsol -np 12 (limits use to 12 cores)
4) Start COMSOL Multiphysics 6.1 (or older versions).
module load CcEnv StdEnv/2020
module avail comsol
module load comsol/6.1.0.357
comsol -np 12 (limits use to 12 cores)
5) Start COMSOL Multiphysics 5.5 (or older versions).
module load CcEnv StdEnv/2016
module avail comsol
module load comsol/5.5
comsol -np 12 (limits use to 12 cores)

Note: If all the upper menu items are greyed out (and therefore not clickable) immediately after COMSOL starts in GUI mode, your ~/.comsol directory may be corrupted. To fix the problem, rename (or remove) your entire ~/.comsol directory and try starting COMSOL again. This can occur if you previously loaded a COMSOL module from the local SnEnv on gra-vdi.
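For example, the settings directory can be moved aside rather than deleted, assuming it is in its default location under your home directory:

```shell
# Move the possibly corrupted COMSOL settings directory aside (if present);
# COMSOL will recreate ~/.comsol with default settings on its next startup.
if [ -d ~/.comsol ]; then
    mv ~/.comsol ~/.comsol.bak
fi
```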

= Parameter sweeps =

== Batch sweep ==

When working interactively in the COMSOL GUI, parametric problems may be solved using the Batch Sweep approach. Multiple parameter sweeps may be carried out as shown in this video. Speedup due to Task Parallelism may also be realized.

== Cluster sweep ==

To run a parameter sweep on a cluster, a job must be submitted to the scheduler from the command line with sbatch slurmscript. For a discussion of the additional required arguments, see a and b. Submitting parametric simulations to the cluster queue from the graphical interface using a Cluster Sweep node is not supported at this time.
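As a sketch, a sweep can also be driven directly from the comsol batch command line inside a Slurm script. The -pname and -plist options below follow COMSOL's batch syntax; the model filenames and the parameter name L are hypothetical:

```shell
# Hypothetical sweep of a model parameter L over three values, run from
# within a Slurm script such as script-dis.sh above. Each value in -plist
# is solved in turn and the results are stored in the output file.
comsol batch -inputfile sweep_model.mph -outputfile sweep_solved.mph \
       -pname L -plist "1[cm],2[cm],3[cm]" -batchlog sweep.log
```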