COMSOL
Introduction
COMSOL is a general-purpose platform software for modeling engineering applications. Compute Canada would like to thank COMSOL, Inc. for allowing its software to be hosted on Compute Canada clusters via a special agreement.
We recommend that Compute Canada users who wish to run COMSOL consult the documentation included with the software under File > Help > Documentation prior to attempting to use it on one of the clusters. Some of the basic manuals are available here.
Licensing
Compute Canada is a hosting provider for COMSOL. This means that we have the COMSOL software installed on our clusters, but we do not provide a generic license accessible to everyone. Many institutions, faculties, and departments already have licenses that can be used on our clusters. Alternatively, researchers can purchase a license from CMC for use anywhere in Canada, or purchase a dedicated Floating Network License directly from COMSOL for use on Compute Canada systems, run on a Sharcnet license server.
Once the legal aspects of licensing are worked out, some technical aspects remain: the license server on your end must be reachable by our compute nodes. This requires our technical team to get in touch with the technical people managing your license software. In some cases, such as CMC, this has already been done; you should then be able to load a COMSOL module and have it find its license automatically. If this is not the case, please contact our Technical support so that we can arrange this for you.
Configuring your own license file
Our module for COMSOL is designed to look for license information in a few places. One of those places is your home folder. If you have your own license server, you can write the information to access it in the following format:
SERVER <server> ANY <port>
USE_SERVER
and save this file as comsol.lic in the folder $HOME/.licenses/. Here <server> is the hostname of your license server and <port> is its port number. Note that firewall changes will need to be made on both our side and yours; to arrange this, send an email containing the service port and IP address of your floating COMSOL license server to Technical support.
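For example, the file could be created from a login node as follows; the server name and port below are placeholders, so substitute the values for your own license server:
mkdir -p $HOME/.licenses
cat > $HOME/.licenses/comsol.lic << EOF
SERVER license.example.ca ANY 27000
USE_SERVER
EOF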
CMC License Setup
Researchers who purchase a COMSOL license subscription from CMC may use the following settings in their comsol.lic file:
- Béluga: SERVER 132.219.136.89 ANY 6601
- Graham: SERVER 199.241.162.97 ANY 6601
- Cedar: SERVER 172.16.121.25 ANY 6601
If initial license checkout attempts fail, contact <cmcsupport@cmc.ca> to verify they have your Compute Canada username on file.
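For example, a complete $HOME/.licenses/comsol.lic file for a CMC subscription used on Béluga would contain the two lines below, using the Béluga entry from the list above:
SERVER 132.219.136.89 ANY 6601
USE_SERVER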
Submit jobs
Single Compute Node
Sample submission script to run a COMSOL job with eight cores on a single cluster compute node:
#!/bin/bash
#SBATCH --account=def-account # Specify
#SBATCH --time=0-03:00 # Specify (d-hh:mm)
#SBATCH --mem=3G # Specify total memory used
#SBATCH --cpus-per-task=8 # Specify number of cores used
#SBATCH --nodes=1 # Do not change
#SBATCH --ntasks-per-node=1 # Do not change
module load StdEnv/2020
module load comsol/5.6
comsol batch -np $SLURM_CPUS_ON_NODE -inputfile FILENAME.mph -outputfile solved_out.mph
Note that, depending on the complexity of the simulation, COMSOL may not be able to use many cores efficiently. Please test the scaling of your simulation by increasing the number of cores in #SBATCH --cpus-per-task=X from X=1 up to the maximum number of cores on the compute node you are using, as sketched below. If you still get a good speed-up when running on a full compute node, consider running the job over multiple full nodes by adapting the submission script in the next section.
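A minimal scaling-test sketch, assuming the single-node script above has been saved under the hypothetical name comsol_single_node.sh; sbatch options given on the command line override the corresponding #SBATCH lines in the script:
# Submit the same single-node job with an increasing number of cores
for ncores in 1 2 4 8 16 32; do
  sbatch --cpus-per-task=$ncores --job-name=comsol_scale_$ncores comsol_single_node.sh
done
Compare the elapsed time of each job (for example with seff or sacct) and stop increasing the core count once the speed-up levels off.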
Multiple Compute Nodes
Sample submission script to run a COMSOL job with eight cores distributed evenly over two cluster compute nodes:
#!/bin/bash
#SBATCH --account=def-account # Specify
#SBATCH --time=00-03:00 # dd-hh:mm
#SBATCH --mem=3G # Change (set to 0 when using all cores)
#SBATCH --nodes=2 # Change (set to 1 or more compute nodes)
#SBATCH --cpus-per-task=4 # Change (set for all cores: graham=32 or 44, cedar=32 or 48, beluga=40)
#SBATCH --ntasks-per-node=1 # Do not change
FILENAME="MyModel.mph" # Specify
module load StdEnv/2020
module load comsol/5.6
SCRTMP=/scratch/$USER/comsol/tmpdir
SCRREC=/scratch/$USER/comsol/recoverydir
mkdir -p $SCRTMP $SCRREC
cp -f ${EBROOTCOMSOL}/bin/glnxa64/comsolbatch.ini comsolbatch.ini
cp -f ${EBROOTCOMSOL}/mli/startup/java.opts java.opts
# uncomment sed lines as required to increase java heap memory size
# for reference see https://www.comsol.ch/support/knowledgebase/1243
#sed -i 's/-Xmx.*/-Xmx4g/g' comsolbatch.ini
#sed -i 's/-Xmx.*/-Xmx2g/g' java.opts
comsol batch -inputfile $FILENAME -outputfile solved_out.mph -recover \
-nn $SLURM_NTASKS -nnhost $SLURM_NTASKS_PER_NODE -np $SLURM_CPUS_PER_TASK \
-recoverydir $SCRREC -tmpdir $SCRTMP -comsolinifile comsolbatch.ini -alivetime 15
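To submit and monitor the job, assuming the script above has been saved under the hypothetical name comsol_multi_node.sh:
sbatch comsol_multi_node.sh   # submit the multi-node batch job
squeue -u $USER               # check its status in the queue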
Graphical use
Comsol can be run interactively in full graphical mode using either of the following methods.
On a cluster
Suitable for interactively running computationally intensive test jobs requiring up to all of the cores or memory on a single cluster node.
- Connect to a compute node (3hr time limit) with TigerVNC
module load comsol/5.5    # COMSOL Multiphysics 5.5 Build:292
comsol
On gra-vdi
Suitable for creating or modifying simulation input files, post-processing or visualizing data, or interactively running a single test job with up to 8 cores.
- Connect to gra-vdi (no time limit) with TigerVNC
module load CcEnv StdEnv
module load comsol/5.5    # COMSOL Multiphysics 5.5 Build:292
comsol
Users with complex visualization models may benefit from hardware-level graphics acceleration. This capability is only available through one of the local modules installed on gra-vdi. If a specific version is required, submit a problem ticket directed to the Sharcnet support team.
- Connect to gra-vdi (no time limit) with TigerVNC
module load SnEnv
module load comsol/5.3.1.348    # COMSOL Multiphysics 5.3a Build:348
comsol
Parameter Sweeps
Batch Sweep
When working interactively in the COMSOL GUI, parametric problems may be solved using the Batch Sweep approach. Multiple parameter sweeps may be carried out as shown in this COMSOL video. Speedup due to Task Parallelism may also be realized.
Cluster Sweep
To run a parameter sweep on a cluster, a job must be submitted to the scheduler from the command line using "sbatch slurmscript". For a detailed discussion of the additional COMSOL arguments required, see a and b. Support for submitting parametric simulations to the cluster queue from the COMSOL graphical interface using a Cluster Sweep node is not available at this time.
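As a rough sketch only, a parameter sweep can be driven from the batch command line by naming the parameter and its values, assuming the -pname and -plist batch options apply to your COMSOL version; the parameter name L and the values below are purely hypothetical:
# Inside a submission script such as the single-node example above,
# sweep a model parameter named L over three values:
comsol batch -np $SLURM_CPUS_ON_NODE -inputfile MyModel.mph -outputfile solved_out.mph \
  -pname L -plist 0.1,0.2,0.3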