JupyterNotebook
This is not a complete article: This is a draft, a work in progress that is intended to be published into an article, which may or may not be ready for inclusion in the main wiki. It should not necessarily be considered factual or authoritative.
Graham cluster
Jupyter Notebook is provided through a single Python module on Graham. You can run it on the login nodes (not recommended) or on the compute nodes (highly recommended). Login nodes impose various user- and process-based limits, so notebooks running there may be killed if they consume significant CPU time or memory. To use a compute node you must submit a job requesting the number of CPUs (or even GPUs), the amount of memory, and the runtime. Here we give the instructions to submit a Jupyter job.
Load the module
Log onto graham.sharcnet.ca (or graham.computecanada.ca) and load the Python module:
ssh user@graham.sharcnet.ca
module load python35-scipy-stack
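To confirm that the module provides the Jupyter executable used later in the job script, you can check the loaded modules and locate the notebook binary (a quick sanity check; the exact path will differ):
module list                 # python35-scipy-stack should appear in the list
which jupyter-notebook      # prints the path of the notebook executable provided by the module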
Install Python modules
Install any additional Python packages you need into your home directory, for example pycuda if you plan to use GPUs:
easy_install --user pycuda
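You can verify that the package was installed under your home directory (a minimal sketch; it only checks the import path, and assumes the python35-scipy-stack module is still loaded):
python -c "import pycuda; print(pycuda.__file__)"   # should point into ~/.local/lib/...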
Submit the job
Create a bash script for submitting a Jupyter job to the Slurm scheduler, e.g. slurm_jupyter.sh, with the following contents:
#!/bin/bash
#SBATCH --gres=gpu:1                   # number of GPUs, if needed
#SBATCH --time=0-01:00                 # time in dd-hr:mm
#SBATCH --nodes 1                      # number of nodes
#SBATCH --ntasks-per-node 2            # cores per node
#SBATCH --mem-per-cpu 4000             # memory per core in MB
#SBATCH --job-name tunnel              # name of the job
#SBATCH --output jupyter-log-%J.txt    # output file
#SBATCH --mail-type=BEGIN              # send email when the job begins
#SBATCH --mail-user=<email_to_notify>  # to this email address
#SBATCH --account=<account>            # account

# load cuda (remove if you don't need GPUs) and python modules
module load cuda
module load python35-scipy-stack

## get tunneling info
XDG_RUNTIME_DIR=""
# choose a random port
ipnport=$(shuf -i8000-9999 -n1)
# get the hostname's IP and strip whitespace
ipnip=$(hostname -i | xargs)

## print tunneling instructions to jupyter-log-{jobid}.txt
echo -e "
Copy/Paste this in your local terminal to ssh tunnel with remote
-----------------------------------------------------------------
sshuttle -r $USER@{graham|cedar}.computecanada.ca -v $ipnip/24
-----------------------------------------------------------------
Then open a browser on your local machine to the following address
------------------------------------------------------------------
http://$ipnip:$ipnport  (prefix w/ https:// if using password)
------------------------------------------------------------------
"

## launch the jupyter notebook server
jupyter-notebook --no-browser --port=$ipnport --ip=$ipnip
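Submit the script and wait for the job to start; once it is running, the tunneling instructions appear in the log file (the job ID below is a placeholder; use the one reported by sbatch):
sbatch slurm_jupyter.sh
squeue -u $USER                # wait until the job state (ST) shows R for running
cat jupyter-log-<jobid>.txt    # tunneling instructions printed by the script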
Set up the tunnel
Open a new terminal window on your local machine and run the sshuttle command printed in the log file, which routes traffic for the compute node's subnet through the login node, e.g.
sshuttle -r jnandez@graham.sharcnet.ca -v 10.29.76.66/24
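If sshuttle is not installed on your machine, a plain SSH local port forward is an alternative (a sketch using the example IP and port from this page; substitute the values from your own log file). With this approach you browse to http://localhost:8850/ instead of the compute node's address:
ssh -N -L 8850:10.29.76.66:8850 jnandez@graham.sharcnet.ca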
Open in a browser
Open a browser on your local machine and enter the address printed in the log file, e.g.
http://10.29.76.66:8850/
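When you are done, shut down the notebook and release the resources by cancelling the job (the job ID is the number in the log file name, or shown by squeue):
scancel <jobid>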