Materials Studio
This is not a complete article: it is a draft, a work in progress intended for eventual publication in the main wiki. It should not be considered factual or authoritative.
Compute Canada does not have permission to install Materials Studio centrally on all clusters. However, if you have a valid Materials Studio licence and the software, the recipe below will help you install it in your own account on Compute Canada clusters.
Installing Materials Studio 2018
This recipe has been tested with Materials Studio 2018.
If you have access to Materials Studio 2018, you will need two things to proceed. First, you must have the archive file that contains the installer. This file should be named MaterialsStudio2018.tgz. Second, you must have the IP address (or DNS name) and the port of an already-configured licence server that the software will connect to.
Once you have these, upload the MaterialsStudio2018.tgz file to your /home folder on the cluster you intend to use. Then, run the command
[name@server ~]$ MS_LICENSE_SERVER=<port>@<server> eb MaterialsStudio-2018-dummy-dummy.eb --sourcepath=$HOME
Once this command has completed, log out from the cluster and log back in. You should then be able to load the module through:
[name@server ~]$ module load materialsstudio/2018
In order to be able to access the license server from the compute nodes, you will need to contact our technical support so that we can configure our firewall(s) to permit the software to connect to your licence server.
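Once the firewall has been opened, you can sanity-check connectivity to the licence server from the cluster with a tool such as nc (netcat), if it is available; `<port>` and `<server>` below are the same placeholders used in the installation command above:

```shell
# Hypothetical connectivity check; replace <server> and <port> with your
# licence server's address and port.
# -z: scan without sending data, -v: verbose output.
nc -zv <server> <port>
```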
Team Installation
If you are a PI holding the Materials Studio licence, you can install Materials Studio once so that those working under you can use that installation. Since team work is normally stored in /project space, determine which project directory you want to use. Suppose it is ~/projects/A_DIRECTORY; then you will need to know these two values:
1. Determine the actual path of A_DIRECTORY as follows:
[name@server ~]$ PI_PROJECT_DIR=$(readlink -f ~/projects/A_DIRECTORY)
[name@server ~]$ echo $PI_PROJECT_DIR
2. Determine the group of A_DIRECTORY as follows:
[name@server ~]$ PI_GROUP=$(stat -c%G $PI_PROJECT_DIR)
[name@server ~]$ echo $PI_GROUP
With these values known, install Materials Studio as follows:
- Change your default group to your team's def- group, e.g.,
[name@server ~]$ newgrp $PI_GROUP
- Open the permissions of your project directory so your team can access it, e.g.,
[name@server ~]$ chmod g+rsx $PI_PROJECT_DIR
- Create an installation directory within it, e.g.,
[name@server ~]$ mkdir $PI_PROJECT_DIR/MatStudio2018
- Install the software, e.g.,
[name@server ~]$ MS_LICENSE_SERVER=<port>@<server> eb MaterialsStudio-2018-dummy-dummy.eb --installpath=$PI_PROJECT_DIR/MatStudio2018 --sourcepath=$HOME
Before the software can be run, the following steps must be completed:
- Load the module information for the installed software, e.g.,
[name@server ~]$ module use $PI_PROJECT_DIR/MatStudio2018/modules/2017/Core/
- Your team members may wish to add this command to their ~/.bashrc file.
- Load the materialsstudio module, i.e.,
[name@server ~]$ module load materialsstudio
- Optional: If you want files to be written readable by group members, change your default group to the team's def- group, e.g.,
[name@server ~]$ newgrp $PI_GROUP
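For example, the lines a team member might append to their ~/.bashrc file could look like the following (the def-someuser path is a hypothetical stand-in for the actual value of PI_PROJECT_DIR):

```shell
# Hypothetical ~/.bashrc additions; replace the path with the real
# PI_PROJECT_DIR value on your cluster.
module use /project/def-someuser/MatStudio2018/modules/2017/Core/
module load materialsstudio
```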
NOTE: In any scripts you use, be sure to replace the PI_GROUP and PI_PROJECT_DIR variables above with their actual values.
Examples of Slurm Job Submission Scripts
The examples below assume that you have installed Materials Studio 2018 according to the instructions above. Below is an example of a Slurm job script that relies on Materials Studio's RunDMol3.sh command:
#!/bin/bash
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=32
#SBATCH --time=12:00:00
module load materialsstudio/2018
# Create a list of nodes to be used for the job
DSD_MachineList="machines.LINUX"
slurm_hl2hl.py --format HP-MPI > $DSD_MachineList
export DSD_MachineList
# Job to run
RunDMol3.sh -np $SLURM_NTASKS Brucite001f
Below is an example of a Slurm job script that relies on Materials Studio's RunCASTEP.sh command:
#!/bin/bash
#SBATCH --nodes=1
#SBATCH --ntasks=2
#SBATCH --mem-per-cpu=1M
#SBATCH --time=0-12:00
module load materialsstudio/2018
DSD_MachineList="mpd.hosts"
slurm_hl2hl.py --format MPIHOSTLIST >$DSD_MachineList
export DSD_MachineList
RunCASTEP.sh -np $SLURM_NTASKS castepjob
if [ -f castepjob_NMR.param ]; then
cp castepjob.check castepjob_NMR.check
RunCASTEP.sh -np $SLURM_NTASKS castepjob_NMR
fi
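Either script can then be submitted with sbatch in the usual way; for example, assuming the CASTEP script above was saved as castepjob.sh (a hypothetical file name):

```shell
# Submit the job script to the Slurm scheduler.
sbatch castepjob.sh
```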
Installing Earlier Versions Of Materials Studio
If you require a version of Materials Studio earlier than 2018, you will need to install it into a Singularity container. This involves:
- Creating a Singularity container with a compatible distribution of Linux installed in it.
- Installing Materials Studio into that container.
- Uploading the Singularity container to your account and using it there.
- NOTE: In order to be able to access the license server from the compute nodes, you will need to contact our technical support so that we can configure our firewall(s) to permit the software to connect to your licence server.
Please be aware that you might be restricted to whole-node (single-node) jobs, as the version of MPI inside the container might not be usable across nodes.
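As a rough sketch (not a tested recipe), the container could be built on a machine where you have root access and then copied to the cluster. The base image, file names, and the Materials Studio command shown below are assumptions; check the system requirements of your Materials Studio version before choosing a Linux distribution:

```shell
# Build the container on a machine with root access.
# ms2017.def is a hypothetical Singularity definition file; it might
# contain, e.g.:
#     Bootstrap: docker
#     From: centos:6
#     %post
#         yum -y install <libraries required by your Materials Studio version>
sudo singularity build ms2017.simg ms2017.def

# After copying the image to the cluster, run Materials Studio commands
# inside the container (jobname is a hypothetical input name):
singularity exec ms2017.simg RunDMol3.sh -np $SLURM_NTASKS jobname
```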