Materials Studio
Revision as of 15:13, 11 May 2018
This is not a complete article: This is a draft, a work in progress that is intended to be published into an article, which may or may not be ready for inclusion in the main wiki. It should not necessarily be considered factual or authoritative.
Compute Canada does not have permission to install Materials Studio centrally on all clusters. However, if you have a valid Materials Studio licence and the software, the recipe below will assist you in installing it in your account on Compute Canada clusters.
Installing Materials Studio 2018
This recipe has been tested for Materials Studio 2018.
If you have access to Materials Studio 2018, you will need two things to proceed. First, you must have the archive file that contains the installer; this file should be named MaterialsStudio2018.tgz. Second, you must have the IP address or DNS name, and the port, of an already-configured licence server that the software will connect to.
Once you have these, upload the MaterialsStudio2018.tgz file to your /home folder on the cluster you intend to use. Then, run the command
[name@server ~]$ MS_LICENSE_SERVER=<port>@<server> eb MaterialsStudio-2018-dummy-dummy.eb --sourcepath=$HOME
Once this command has completed, log out from the cluster and log back in. You should then be able to load the module through:
[name@server ~]$ module load materialsstudio/2018
In order to be able to access the license server from the compute nodes, you will need to contact our technical support so that we can configure our firewall(s) to permit the software to connect to your licence server.
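Once the firewall has been configured, you can check from a node that the licence server accepts TCP connections. The sketch below is an illustration only: the hostname licenceserver.example.org and port 1715 are hypothetical placeholders, and you must substitute your own licence server's values.

```shell
# Sketch of a connectivity check; the hostname and port below are
# hypothetical placeholders -- use your own licence server's values.
check_license_server () {
  local host=$1 port=$2
  # /dev/tcp is a bash feature; this succeeds only if a TCP
  # connection to host:port can be opened within 5 seconds.
  timeout 5 bash -c "cat < /dev/null > /dev/tcp/$host/$port" 2>/dev/null
}

check_license_server licenceserver.example.org 1715 \
  && echo "licence server reachable" \
  || echo "licence server NOT reachable"
```

If the check fails from a compute node but succeeds from a login node, the firewall rules most likely still need adjusting by technical support.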
Team Installation
If you are a PI holding the Materials Studio licence, you can install Materials Studio once so that those working under you can use the same installation. Normally, your team's group is def- followed by your login name, i.e.,
[name@server ~]$ PI_GROUP=$(groups | tr ' ' '\n' | grep ^def-)
[name@server ~]$ echo $PI_GROUP
and normally team-shared files would be installed in your project space, e.g.,
[name@server ~]$ PI_PROJECT_DIR=$(readlink ~/projects/$PI_GROUP)
[name@server ~]$ echo $PI_PROJECT_DIR
With these values known, install Materials Studio as follows:
- Change your default group to your team's def- group, e.g.,
[name@server ~]$ newgrp $PI_GROUP
- Open the permissions of your project directory so your team can access it, e.g.,
[name@server ~]$ chmod g+rsx $PI_PROJECT_DIR
- Create an installation directory inside it, e.g.,
[name@server ~]$ mkdir $PI_PROJECT_DIR/MatStudio2018
- Install the software, e.g.,
[name@server ~]$ MS_LICENSE_SERVER=<port>@<server> eb MaterialsStudio-2018-dummy-dummy.eb --installpath=$PI_PROJECT_DIR/MatStudio2018 --sourcepath=$HOME
Before the software can be run, the following steps must be completed:
- Change your default group to the team def- group, e.g.,
[name@server ~]$ newgrp $PI_GROUP
- Load the module information for the installed software, e.g.,
[name@server ~]$ module use $PI_PROJECT_DIR/MatStudio2018/modules/2017/Core/
- Load the materialsstudio module, i.e.,
[name@server ~]$ module load materialsstudio
NOTE: In job scripts and other non-interactive contexts, replace the PI_GROUP and PI_PROJECT_DIR variables above with their actual values.
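For instance, since newgrp cannot be used inside a batch job, the module setup in a job script would spell the project directory out explicitly; a minimal sketch, assuming a hypothetical project path /project/def-someuser (substitute the value printed by echo $PI_PROJECT_DIR):

```shell
#!/bin/bash
#SBATCH --time=00:10:00
# Hypothetical hard-coded value; replace with your own project path
# as printed by: echo $PI_PROJECT_DIR
PI_PROJECT_DIR=/project/def-someuser
module use $PI_PROJECT_DIR/MatStudio2018/modules/2017/Core/
module load materialsstudio
```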
Examples of Slurm Job Submission Scripts
The examples below assume that you have installed Materials Studio 2018 according to the instructions above. The first example is a Slurm job script that uses Materials Studio's RunDMol3.sh command:
#!/bin/bash
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=32
#SBATCH --time=12:00:00
module load materialsstudio/2018
# Create a list of nodes to be used for the job
DSD_MachineList="machines.LINUX"
slurm_hl2hl.py --format HP-MPI > $DSD_MachineList
export DSD_MachineList
# Job to run
RunDMol3.sh -np $SLURM_NTASKS Brucite001f
Below is an example of a Slurm job script that relies on Materials Studio's RunCASTEP.sh command:
#!/bin/bash
#SBATCH --nodes=1
#SBATCH --ntasks=2
#SBATCH --mem-per-cpu=1M
#SBATCH --time=0-12:00
module load materialsstudio/2018
DSD_MachineList="mpd.hosts"
slurm_hl2hl.py --format MPIHOSTLIST >$DSD_MachineList
export DSD_MachineList
RunCASTEP.sh -np $SLURM_NTASKS castepjob
# If an NMR parameter file exists, run the NMR calculation as well,
# starting from the completed ground-state check file
if [ -f castepjob_NMR.param ]; then
  cp castepjob.check castepjob_NMR.check
  RunCASTEP.sh -np $SLURM_NTASKS castepjob_NMR
fi
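Either script is submitted to the scheduler in the usual way; a minimal sketch, assuming the CASTEP script above was saved under the hypothetical filename castep-job.sh:

```shell
sbatch castep-job.sh    # submit the job to Slurm
squeue -u $USER         # check its status in the queue
```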