Materials Studio



This article is a draft

This is not a complete article: This is a draft, a work in progress that is intended to be published into an article, which may or may not be ready for inclusion in the main wiki. It should not necessarily be considered factual or authoritative.



Compute Canada does not have permission to install Materials Studio centrally on all clusters. However, if you do have a valid Materials Studio licence and the software, the recipe below will help you install it into your own account on Compute Canada clusters.

Installing Materials Studio 2018

Note

This recipe has been tested with Materials Studio 2018.


If you have access to Materials Studio 2018, you will need two things to proceed. First, you must have the archive file that contains the installer; this file should be named MaterialsStudio2018.tgz. Second, you must have the IP address (or DNS name) and port of an already configured licence server that the software will connect to.
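If the archive is currently on your own computer, one way to copy it to the cluster (as described in the next step) is with scp; the username and cluster hostname below are placeholders:

[name@laptop ~]$ scp MaterialsStudio2018.tgz username@<cluster-hostname>: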

Once you have these, upload the MaterialsStudio2018.tgz file to your /home folder on the cluster you intend to use. Then, run the command

[name@server ~]$ MS_LICENSE_SERVER=<port>@<server> eb MaterialsStudio-2018-dummy-dummy.eb --sourcepath=$HOME

Once this command has completed, log out from the cluster and log back in. You should then be able to load the module through:

[name@server ~]$ module load materialsstudio/2018
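If the module is not found, you can check whether it was registered correctly; for example, on clusters that use the Lmod module system you could search for it with:

[name@server ~]$ module spider materialsstudio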

In order to be able to access the license server from the compute nodes, you will need to contact our technical support so that we can configure our firewall(s) to permit the software to connect to your licence server.
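Once the firewall has been configured, you can optionally verify that the licence server is reachable from a compute node, for example from inside an interactive job. This is only a sketch: it assumes the nc utility is available, you may need additional salloc options such as an account, and <port> and <server> must be replaced with your licence-server values:

[name@server ~]$ salloc --ntasks=1 --time=0:15:0
[name@node ~]$ nc -zv <server> <port>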

Team Installation

If you are a PI holding the Materials Studio licence, you can install Materials Studio once so that those working under you can use that installation. Normally your team's group would be def- followed by your login; you can capture it with, e.g.,

[name@server ~]$ PI_GROUP=$(groups | tr ' ' '\n' | grep ^def-)
[name@server ~]$ echo $PI_GROUP

and normally team-shared files would be installed in your project space, e.g.,

[name@server ~]$ PI_PROJECT_DIR=$(readlink -f ~/projects/$PI_GROUP)
[name@server ~]$ echo $PI_PROJECT_DIR
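As a purely hypothetical illustration (both values below are placeholders, not real output), these commands might print something like:

def-yourpi
/project/<some-project-id>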

With these values known, install Materials Studio as follows:

  1. Change your default group to your team's def- group, e.g.,
    [name@server ~]$ newgrp $PI_GROUP
    
  2. Open the permissions of your project directory so your team can access it, e.g.,
    [name@server ~]$ chmod g+rsx $PI_PROJECT_DIR
    
  3. Create an installation directory within it, e.g.,
    [name@server ~]$ mkdir $PI_PROJECT_DIR/MatStudio2018
    
  4. Install the software, e.g.,
    [name@server ~]$ MS_LICENSE_SERVER=<port>@<server> eb MaterialsStudio-2018-dummy-dummy.eb --installpath=$PI_PROJECT_DIR/MatStudio2018 --sourcepath=$HOME
    

Before the software can be run, the following steps must be completed first:

  1. Change your default group to the team def- group, e.g.,
    [name@server ~]$ newgrp $PI_GROUP
    
  2. Load the module information for the installed software, e.g.,
    [name@server ~]$ module use $PI_PROJECT_DIR/MatStudio2018/modules/2017/Core/
    
  3. Load the materialsstudio module, i.e.,
    [name@server ~]$ module load materialsstudio
    

NOTE: In any scripts or other files that use them, be sure to replace the PI_GROUP and PI_PROJECT_DIR variables above with their actual values.
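For example, the top of a job script that uses a team installation might look like the following sketch, where the project path is a placeholder that you must replace with the actual value of PI_PROJECT_DIR:

#!/bin/bash
#SBATCH --ntasks=1
#SBATCH --time=1:00:00

# Replace the placeholder path with your team's actual project directory
module use /project/<your-project-id>/MatStudio2018/modules/2017/Core/
module load materialsstudio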

Examples of Slurm Job Submission Scripts

The examples below assume that you have installed Materials Studio 2018 according to the instructions above. The first example is a Slurm job script that uses Materials Studio's RunDMol3.sh command:

File : file.txt

#!/bin/bash
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=32
#SBATCH --time=12:00:00

module load materialsstudio/2018

# Create a list of nodes to be used for the job
DSD_MachineList="machines.LINUX"
slurm_hl2hl.py --format HP-MPI > $DSD_MachineList
export DSD_MachineList

# Job to run
RunDMol3.sh -np $SLURM_NTASKS Brucite001f
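A script such as this would typically be submitted with sbatch; the file name below is a placeholder:

[name@server ~]$ sbatch dmol3_job.sh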


Below is an example of a Slurm job script that relies on Materials Studio's RunCASTEP.sh command:

File : file.txt

#!/bin/bash
#SBATCH --nodes=1
#SBATCH --ntasks=2
#SBATCH --mem-per-cpu=1M
#SBATCH --time=0-12:00

module load materialsstudio/2018

# Create a list of nodes to be used for the job
DSD_MachineList="mpd.hosts"
slurm_hl2hl.py --format MPIHOSTLIST > $DSD_MachineList
export DSD_MachineList

# Run the CASTEP job
RunCASTEP.sh -np $SLURM_NTASKS castepjob

# If an NMR parameter file is present, restart from the finished job's
# check file and run the NMR calculation as well
if [ -f castepjob_NMR.param ]; then
  cp castepjob.check castepjob_NMR.check
  RunCASTEP.sh -np $SLURM_NTASKS castepjob_NMR
fi


Installing Earlier Versions Of Materials Studio

If you need to use a version of Materials Studio earlier than 2018, you will need to install it into a Singularity container. This involves:

  1. Creating a Singularity container with a compatible distribution of Linux installed in it (a minimal sketch is given after this list).
  2. Installing Materials Studio into that container.
  3. Uploading the Singularity container to your account and using it there.
    • NOTE: In order to be able to access the license server from the compute nodes, you will need to contact our technical support so that we can configure our firewall(s) to permit the software to connect to your licence server.
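
As a minimal sketch of steps 1 and 2, a Singularity definition file might look like the following. The base distribution, package list, and file names are assumptions, not a tested recipe; adapt them to whatever your version of Materials Studio requires:

Bootstrap: docker
From: centos:7

%post
    # Install a few client libraries that installers commonly need;
    # this package list is an assumption and may need adjusting
    yum -y install which libXext libXtst
    # Run the Materials Studio installer here, pointing it at the archive
    # you copy into the container (installer options are version-specific)

You would then build the image on a machine where you have root access, copy it to the cluster (hostname below is a placeholder), and run Materials Studio commands through the container, e.g.:

[name@workstation ~]$ sudo singularity build mstudio.sif mstudio.def
[name@workstation ~]$ scp mstudio.sif username@<cluster-hostname>:
[name@server ~]$ singularity exec mstudio.sif RunCASTEP.sh -np $SLURM_NTASKS castepjob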