COMSOL
Latest revision as of 20:35, 5 July 2024
Introduction
COMSOL is general-purpose software for modelling engineering applications. We would like to thank COMSOL, Inc. for allowing its software to be hosted on our clusters via a special agreement.
We recommend that you consult the documentation included with the software under File > Help > Documentation prior to attempting to use COMSOL on one of our clusters. Links to the COMSOL blog, Knowledge Base, Support Centre and Documentation can be found at the bottom of the COMSOL home page (www.comsol.com). Searchable online COMSOL documentation is also available at doc.comsol.com.
Licensing
We are a hosting provider for COMSOL. This means that we have COMSOL software installed on our clusters, but we do not provide a generic license accessible to everyone. Many institutions, faculties, and departments already have licenses that can be used on our clusters. Alternatively, you can purchase a license from CMC for use anywhere in Canada. Once the legal aspects are worked out for licensing, there will be remaining technical aspects. The license server on your end will need to be reachable by our compute nodes. This will require our technical team to get in touch with the technical people managing your license software. If you have purchased a CMC license and will be connecting to the CMC license server, this has already been done. Once the license server work is done and your ~/.licenses/comsol.lic has been created, you can load any COMSOL module and begin using the software. If this is not the case, please contact our technical support.
Configuring your own license file
Our COMSOL module is designed to look for license information in a few places, one of which is your ~/.licenses directory. If you have your own license server, specify it by creating a text file $HOME/.licenses/comsol.lic with the following information:

SERVER <server> ANY <port>
USE_SERVER

where <server> is your license server hostname and <port> is the flex port number of the license server.
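The file can be created from a login-node shell. A minimal sketch follows; the function name, hostname and port are illustrative placeholders, so substitute your own license server's values:

```shell
# Sketch: write a comsol.lic into a given directory (normally $HOME/.licenses).
# The hostname and port passed in are placeholders for your license server's values.
write_comsol_lic() {
  local destdir="$1" server="$2" port="$3"
  mkdir -p "$destdir"
  printf 'SERVER %s ANY %s\nUSE_SERVER\n' "$server" "$port" > "$destdir/comsol.lic"
}

# Typical use (hypothetical server):
# write_comsol_lic "$HOME/.licenses" license.example-university.ca 1718
```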
Local license setup
For researchers wanting to use a new local institutional license server, firewall changes will need to be made on both the Alliance (system/cluster) side and the institutional (server) side. To arrange this, send an email to technical support containing 1) the COMSOL lmgrd TCP flex port number (typically the default, 1718), 2) the static LMCOMSOL TCP vendor port number (typically the default, 1719), and 3) the fully qualified hostname of your COMSOL license server. Once this is complete, create a corresponding comsol.lic text file as shown above.
CMC license setup
Researchers who own a COMSOL license subscription from CMC should use the following preconfigured public IP settings in their comsol.lic file:
- Béluga: SERVER 10.20.73.21 ANY 6601 (IP changed May 18, 2022)
- Cedar: SERVER 172.16.0.101 ANY 6601
- Graham: SERVER 199.241.167.222 ANY 6601
- Narval: SERVER 10.100.64.10 ANY 6601
- Niagara: SERVER 172.16.205.198 ANY 6601
If initial license checkout attempts fail, contact <cmcsupport@cmc.ca> to verify they have your username on file.
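Before contacting support, it can also help to confirm that the license port is reachable from the cluster you are on. A minimal sketch using bash's built-in /dev/tcp follows; the host and port shown are the Béluga values from the list above, so substitute the SERVER line for your cluster:

```shell
# Check TCP reachability of a license server (use the host/port from the
# SERVER line of your comsol.lic; the values below are only an example).
host="10.20.73.21"
port=6601
if timeout 3 bash -c "exec 3<>/dev/tcp/${host}/${port}" 2>/dev/null; then
  echo "reachable: ${host}:${port}"
else
  echo "NOT reachable: ${host}:${port} (possible firewall or license server issue)"
fi
```

Note the private CMC addresses above are only reachable from within the corresponding cluster, so run the check from a login node of that cluster.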
Installed products
To check which modules and products are available for use, start COMSOL in graphical mode and then click Options -> Licensed and Used Products on the upper pull-down menu. For a more detailed explanation, see the COMSOL Multiphysics Reference Manual. If a module/product is missing or reports being unlicensed, contact technical support, as a reinstall of the CVMFS module you are using may be required.
Installed versions
To check the full version number, either start COMSOL in GUI mode and inspect the messages window in the lower right corner, or more simply log in to a cluster and run COMSOL in batch mode as follows:
[login-node:~] salloc --time=0:01:00 --nodes=1 --cores=1 --mem=1G --account=def-someuser
[login-node:~] module load comsol/6.2
[login-node:~] comsol batch -version
COMSOL Multiphysics 6.2.0.290
which corresponds to COMSOL 6.2 Update 1. In other words, when a new COMSOL release is installed, it will use the abbreviated 6.X version format but for convenience will contain the latest available update at the time of installation. As additional product updates are released, they will instead use the full 6.X.Y.Z version format. For example, Update 3 can be loaded on a cluster with the module load comsol/6.2.0.415 or module load comsol commands. We recommend using the most recent update to take advantage of all the latest improvements. That said, if you want to continue using any module version (6.X or 6.X.Y.Z), you can be assured that the software contained in these modules will remain exactly the same.
To check which versions are available in the standard environment you have loaded (typically StdEnv/2023), run the module avail comsol command. Lastly, to check which versions are available in ALL available standard environments, use the module spider comsol command.
Submit jobs
Single compute node
Sample submission script to run a COMSOL job with eight cores on a single compute node:
#!/bin/bash
#SBATCH --time=0-03:00 # Specify (d-hh:mm)
#SBATCH --account=def-group # Specify (some account)
#SBATCH --mem=32G # Specify (set to 0 to use all memory on each node)
#SBATCH --cpus-per-task=8 # Specify (32 or 44 on Graham, 32 or 48 on Cedar, 40 on Béluga, 48 or 64 on Narval to use all cores)
#SBATCH --nodes=1 # Do not change
#SBATCH --ntasks-per-node=1 # Do not change
INPUTFILE="ModelToSolve.mph" # Specify input filename
OUTPUTFILE="SolvedModel.mph" # Specify output filename
# module load StdEnv/2020 # Versions < 6.2
module load StdEnv/2023
module load comsol/6.2
comsol batch -inputfile ${INPUTFILE} -outputfile ${OUTPUTFILE} -np $SLURM_CPUS_ON_NODE
Depending on the complexity of the simulation, COMSOL may not be able to efficiently use very many cores. Therefore, it is advisable to test the scaling of your simulation by gradually increasing the number of cores. If near-linear speedup is obtained using all cores on a compute node, consider running the job over multiple full nodes using the next Slurm script.
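One way to run such a scaling test is to submit the same input file several times with increasing core counts and compare the solve times reported in each log. A sketch follows; the script name mysub.sh stands for a single-node submission script like the one above, and the echo is a dry-run safeguard (remove it to actually submit the jobs):

```shell
# Sketch of a strong-scaling test: queue the same COMSOL model with 1, 2, 4 and 8 cores.
# "mysub.sh" is a placeholder for your submission script; drop 'echo' to really submit.
for ncores in 1 2 4 8; do
  echo sbatch --cpus-per-task="${ncores}" --job-name="comsol-np${ncores}" mysub.sh
done
```

Comparing the reported solution times as the core count doubles shows where the speedup flattens out.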
Multiple compute nodes
Sample submission script to run a COMSOL job with eight cores distributed evenly over two compute nodes. Ideal for very large simulations (that exceed the capabilities of a single compute node), this script supports restarting interrupted jobs, allocating large temporary files to /scratch and utilizing the default comsolbatch.ini file settings. There is also an option to modify the Java heap memory described below the script.
#!/bin/bash
#SBATCH --time=0-03:00 # Specify (d-hh:mm)
#SBATCH --account=def-account # Specify (some account)
#SBATCH --mem=16G # Specify (set to 0 to use all memory on each node)
#SBATCH --cpus-per-task=4 # Specify (32 or 44 on Graham, 32 or 48 on Cedar, 40 on Béluga, 48 or 64 on Narval to use all cores)
#SBATCH --nodes=2 # Specify (the number of compute nodes to use for the job)
#SBATCH --ntasks-per-node=1 # Do not change
INPUTFILE="ModelToSolve.mph" # Specify input filename
OUTPUTFILE="SolvedModel.mph" # Specify output filename
# module load StdEnv/2020 # Versions < 6.2
module load StdEnv/2023
module load comsol/6.2
RECOVERYDIR=$SCRATCH/comsol/recoverydir
mkdir -p $RECOVERYDIR
cp -f ${EBROOTCOMSOL}/bin/glnxa64/comsolbatch.ini comsolbatch.ini
cp -f ${EBROOTCOMSOL}/mli/startup/java.opts java.opts
# export I_MPI_COLL_EXTERNAL=0 # Uncomment this line on Narval
comsol batch -inputfile $INPUTFILE -outputfile $OUTPUTFILE -np $SLURM_CPUS_ON_NODE -nn $SLURM_NNODES \
-recoverydir $RECOVERYDIR -tmpdir $SLURM_TMPDIR -comsolinifile comsolbatch.ini -alivetime 15 \
# -recover -continue # Uncomment this line to restart solving from latest recovery files
Note 1: If your multiple-node job crashes on startup with a Java segmentation fault, try increasing the Java heap by adding the following two sed lines after the two cp -f lines. If that does not help, try further changing both 4g values to 8g. For more information, see Out of Memory.
sed -i 's/-Xmx2g/-Xmx4g/g' comsolbatch.ini
sed -i 's/-Xmx768m/-Xmx4g/g' java.opts
Note 2: On Narval, jobs may run slowly when submitted with comsol/6.0.0.405 to multiple nodes using the above Slurm script. If this occurs, use comsol/6.0 instead and open a ticket to report the problem. The latest comsol/6.1.X modules have not been tested on Narval yet.
Note 3: On Graham, there is a small chance jobs will run slowly or hang during startup when submitted to a single node with the above script-smp.sh script. If this occurs, use the multiple-node script-dis.sh script instead, adding #SBATCH --nodes=1, and then open a ticket to report the problem.
Graphical use
COMSOL can be run interactively in full graphical mode using either of the following methods.
On cluster nodes
Suitable to interactively run computationally intensive test jobs using ALL available cores and memory reserved by salloc on a single cluster node:
- 1) Connect to a compute node (3-hour time limit) with TigerVNC.
- 2) Open a terminal window in vncviewer and run:
    export XDG_RUNTIME_DIR=${SLURM_TMPDIR}
- 3) Start COMSOL Multiphysics 6.2 (or newer versions):
    module load StdEnv/2023
    module load comsol/6.2
    comsol (uses all cores requested by salloc)
- 4) Start COMSOL Multiphysics 5.6 (or newer versions):
    module load StdEnv/2020
    module load comsol/6.1.0.357
    comsol (uses all cores requested by salloc)
- 5) Start COMSOL Multiphysics 5.5 (or older versions):
    module load StdEnv/2016
    module load comsol/5.5
    comsol (uses all cores requested by salloc)
On VDI nodes
Suitable interactive use on gra-vdi includes: running compute calculations with a maximum of 12 cores, creating or modifying simulation input files, and performing post-processing or data visualization tasks. Since each gra-vdi server is shared with many other users, we request that you limit your COMSOL usage to 12 cores as shown below (especially when running long calculations) so as not to overload the system and potentially inconvenience others. For interactive and shorter meshing calculations, using 16 cores should be fine. If you need more cores when working in graphical mode, use COMSOL on a cluster compute node (as shown above), where you can reserve up to all available cores and memory on a node and have exclusive use of the resource.
- 1) Connect to gra-vdi (no time limit) with TigerVNC.
- 2) Open a terminal window in vncviewer.
- 3) Start COMSOL Multiphysics 6.2 (or newer versions):
    module load CcEnv StdEnv/2023
    module avail comsol
    module load comsol/6.2
    comsol -np 12 (limits use to 12 cores)
- 4) Start COMSOL Multiphysics 6.2 (or older versions):
    module load CcEnv StdEnv/2020
    module avail comsol
    module load comsol/6.1.0.357
    comsol -np 12 (limits use to 12 cores)
- 5) Start COMSOL Multiphysics 5.5 (or older versions):
    module load CcEnv StdEnv/2016
    module avail comsol
    module load comsol/5.5
    comsol -np 12 (limits use to 12 cores)
Note: If all the upper menu items are greyed out (not clickable) immediately after COMSOL starts in GUI mode, your ~/.comsol directory may be corrupted. To fix the problem, rename (or remove) your entire ~/.comsol directory and try starting COMSOL again. This can occur if you previously loaded a COMSOL module from the local SnEnv on gra-vdi.
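The rename can be scripted so the old settings are kept for inspection rather than deleted; COMSOL recreates the directory on its next start. A minimal sketch (the function name and date suffix are illustrative):

```shell
# Move a possibly corrupted COMSOL settings directory aside instead of deleting it.
# Pass the directory explicitly (normally "$HOME/.comsol").
backup_comsol_settings() {
  local settings="$1"
  if [ -d "$settings" ]; then
    mv "$settings" "${settings}.bak.$(date +%Y%m%d)"
    echo "moved $settings aside"
  else
    echo "no settings directory at $settings"
  fi
}

# Typical use:
# backup_comsol_settings "$HOME/.comsol"
```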
Parameter sweeps
Batch sweep
When working interactively in the COMSOL GUI, parametric problems may be solved using the Batch Sweep approach. Multiple parameter sweeps may be carried out as shown in this video. Speedup due to Task Parallelism may also be realized.
Cluster sweep
To run a parameter sweep on a cluster, a job must be submitted to the scheduler from the command line using sbatch slurmscript. For a discussion of the additional required arguments, see COMSOL Knowledge Base solution 1250 and the COMSOL blog post on using job sequences to save data after solving your model. Support for submitting parametric simulations to the cluster queue from the graphical interface using a Cluster Sweep node is not available at this time.
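As a sketch, such a slurmscript for a single-node sweep might pass the swept parameter to comsol batch on the command line; the parameter name L, its values, and the filenames below are illustrative assumptions, and the -pname/-plist options should be checked against the COMSOL documentation for your version:

```
#!/bin/bash
#SBATCH --time=0-03:00           # Specify (d-hh:mm)
#SBATCH --account=def-group      # Specify (some account)
#SBATCH --mem=32G
#SBATCH --cpus-per-task=8
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=1

module load StdEnv/2023
module load comsol/6.2

# Sweep the (illustrative) model parameter L over three values in one batch job.
comsol batch -inputfile sweep_model.mph -outputfile sweep_solved.mph \
       -np $SLURM_CPUS_ON_NODE -pname L -plist 0.1,0.2,0.3
```

The model must define the parameter being swept for the command-line values to take effect.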