JupyterNotebook
This is not a complete article: This is a draft, a work in progress that is intended to be published into an article, which may or may not be ready for inclusion in the main wiki. It should not necessarily be considered factual or authoritative.
Graham cluster
The Jupyter notebook is provided as part of the Python module on Graham. You can run it on a login node (not recommended) or on the compute nodes (highly recommended). Note that any notebook running on a login node will be killed after some time. To use a compute node you have to submit a job requesting the number of CPUs (or even GPUs), the amount of memory, and the runtime. Here we give the instructions to submit a Jupyter job.
Load the module
Log onto graham.sharcnet.ca (or graham.computecanada.ca) and load the python module
ssh user@graham.sharcnet.ca
module load python35-scipy-stack
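As a quick optional check, you can verify that the interpreter and the notebook command used later in the job script are on your PATH (this assumes the python35-scipy-stack module supplies python3.5 and jupyter-notebook):
which python3.5
which jupyter-notebook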
Install Python modules
pip3.5 install pycuda --user
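Other packages you need inside the notebook can be installed the same way; the --user flag puts them in your home directory, since regular users cannot write to the module's installation directory. The package name below is just a placeholder:
pip3.5 install <package_name> --user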
Submit the job
Create a bash script for submitting a Jupyter job to the Slurm scheduler, e.g. slurm_jupyter.sh, and add the following:
#!/bin/bash
#SBATCH --gres=gpu:2                  # only if you need a GPU
#SBATCH --time=0-01:00                # runtime d-hh:mm
#SBATCH --nodes 1                     # how many nodes
#SBATCH --ntasks-per-node 32          # number of cores per node
#SBATCH --mem-per-cpu 4000            # memory in MB
#SBATCH --job-name tunnel             # name of the job
#SBATCH --output jupyter-log-%J.txt   # name of the log file
#SBATCH --mail-type=BEGIN             # send an email when the job starts
#SBATCH --mail-user=<email_address>   # send the email to this address

## load modules that you might need, in this case cuda for pycuda
module load cuda

## get tunneling info
export XDG_RUNTIME_DIR=""             # cleared so Jupyter does not fail with a permissions error on the compute node
ipnport=$(shuf -i8000-9999 -n1)       # pick a random port between 8000 and 9999
ipnip=$(hostname -i | xargs)          # IP address of the compute node

## print tunneling instructions to jupyter-log-{jobid}.txt
echo -e "
    Copy/Paste this in your local terminal to ssh tunnel with remote
    -----------------------------------------------------------------
    sshuttle -r $USER@graham.sharcnet.ca -v $ipnip/24
    -----------------------------------------------------------------

    Then open a browser on your local machine to the following address
    ------------------------------------------------------------------
    http://$ipnip:$ipnport  (prefix w/ https:// if using password)
    ------------------------------------------------------------------
    "

## launch the jupyter notebook server
jupyter-notebook --no-browser --port=$ipnport --ip=$ipnip
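Submit the script with sbatch and wait for the job to start; the tunneling instructions are written to the log file named by the --output line above (%J is replaced by the job ID). For example:
sbatch slurm_jupyter.sh
squeue -u $USER                # check whether the job is running
cat jupyter-log-<jobid>.txt    # read the tunneling instructions once the job has started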
Setting up the tunnel
Open a new terminal window on your local machine and run the sshuttle command printed in the log file, which routes traffic for the compute node's network through Graham, e.g.
sshuttle -r jnandez@graham.sharcnet.ca -v 10.29.76.66/24
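If sshuttle is not available on your local machine, a plain SSH port forward through the login node is a possible alternative (a sketch, not part of the original instructions; replace the IP address and port with the values from your own log file):
ssh -L 8850:10.29.76.66:8850 user@graham.sharcnet.ca
# then point your browser at http://localhost:8850/ instead of the compute node address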
Opening in a browser
Open your local browser and go to the address printed in the log file, e.g.
http://10.29.76.66:8850/
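The IP address and port above are examples; use the values printed in your own jupyter-log file. Optionally, if you prefer password-protected access (the case the log message's https:// note refers to), a password can be set on Graham before submitting the job; this assumes the notebook version in the module supports the password subcommand:
jupyter notebook password    # prompts for a password and stores a hashed copy under ~/.jupyter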