Materials Studio



This article is a draft

This is not a complete article: it is a draft, a work in progress intended to be developed into a full article, and it may or may not be ready for inclusion in the main wiki. It should not necessarily be considered factual or authoritative.



Compute Canada does not have permission to install Materials Studio centrally on its clusters. However, if you do have a valid Materials Studio licence and the software, the recipe below will help you install it in your own account on the Compute Canada clusters.

Installing Materials Studio 2018

Note

This recipe has been tested with Materials Studio 2018.


If you have access to Materials Studio 2018, you will need two things to proceed. First, you must have the archive file that contains the installer; this file should be named MaterialsStudio2018.tgz. Second, you must have the IP address (or DNS name) and the port of an already configured licence server that the software will connect to.

Once you have these, upload the MaterialsStudio2018.tgz file to your /home directory on the cluster you intend to use.
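
For example, the archive can be copied from your own computer with scp; the user name and cluster host name below are placeholders for your own values:

[name@laptop ~]$ scp MaterialsStudio2018.tgz <username>@<cluster>.computecanada.ca: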

Then run the following command, replacing <port> and <server> with the port and the host name (or IP address) of your licence server:
[name@server ~]$ MS_LICENSE_SERVER=<port>@<server> eb MaterialsStudio-2018-dummy-dummy.eb --sourcepath=$HOME

Once this command has completed, log out of the cluster and log back in. You should then be able to load the module with:

[name@server ~]$ module load materialsstudio/2018
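
If the module loads correctly, the Materials Studio wrapper scripts used in the job scripts below should be available on your PATH. A quick, optional check is:

[name@server ~]$ which RunDMol3.sh RunCASTEP.sh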

In order for the compute nodes to be able to reach your licence server, you will need to contact our technical support so that we can configure our firewall(s) to permit the software to connect to it.
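
Once the firewall has been configured, one way to check that the licence server is reachable from a node is to attempt a TCP connection to it, for example with bash's built-in /dev/tcp redirection (replace <server> and <port> with your own values, as above):

[name@server ~]$ timeout 5 bash -c 'cat < /dev/null > /dev/tcp/<server>/<port>' && echo 'licence server reachable'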

Examples of Slurm Job Submission Scripts

The examples below assume that you have installed Materials Studio 2018 according to the instructions above.

Below is an example of a Slurm job script that uses Materials Studio's RunDMol3.sh command:

File : file.txt

#!/bin/bash
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=32
#SBATCH --time=12:00:00

module load materialsstudio/2018

# Create a list of nodes to be used for the job
DSD_MachineList="machines.LINUX"
slurm_hl2hl.py --format HP-MPI > $DSD_MachineList
export DSD_MachineList

# Job to run
RunDMol3.sh -np $SLURM_NTASKS Brucite001f
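
Assuming the script above has been saved as dmol3_job.sh (a file name chosen here only for illustration), it can be submitted with:

[name@server ~]$ sbatch dmol3_job.sh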


Below is an example of a Slurm job script that relies on Materials Studio's RunCASTEP.sh command:

File : file.txt

#!/bin/bash
#SBATCH --nodes=1
#SBATCH --ntasks=2
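# Adjust the memory request below to the needs of your job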
#SBATCH --mem-per-cpu=1M
#SBATCH --time=0-12:00

module load materialsstudio/2018

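# Create a list of nodes to be used for the job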
DSD_MachineList="mpd.hosts"
slurm_hl2hl.py --format MPIHOSTLIST >$DSD_MachineList
export DSD_MachineList

RunCASTEP.sh -np $SLURM_NPROCS castepjob

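# If an NMR parameter file is present, reuse the checkpoint from the first run and then run the NMR calculation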
if [ -f castepjob_NMR.param ]; then
  cp castepjob.check castepjob_NMR.check
  RunCASTEP.sh -np $SLURM_NPROCS castepjob_NMR
fi