TensorFlow/fr

From Alliance Doc

Revision as of 16:03, 18 July 2017


==Installation==

The following instructions install TensorFlow in your home directory using the binary packages (Python wheels) prepared by Compute Canada; these wheels are found in /cvmfs/soft.computecanada.ca/custom/python/wheelhouse/.
The TensorFlow package is installed into a Python virtual environment using the pip command.
These instructions are valid for Python 3.5.2; with Python 3.5.Y or 2.7.X, load one of the other Python modules.

Load the modules required by TensorFlow.

[name@server ~]$ module load cuda cudnn python/3.5.2

Create a new Python virtual environment.

[name@server ~]$ virtualenv tensorflow

Activate the new environment.

[name@server ~]$ source tensorflow/bin/activate

Install the numpy and TensorFlow wheels into the new environment.

[name@server ~]$ pip install tensorflow
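After installing the wheel, you may want to confirm that it can be imported from the active virtual environment. A minimal sketch of such a check, using the standard library's importlib (the stand-in module name "json" is used here only so the snippet runs anywhere; inside the activated virtualenv you would check "tensorflow"):

```python
import importlib.util

def wheel_is_importable(name):
    """Return True if the named package can be imported
    from the currently active (virtual) environment."""
    return importlib.util.find_spec(name) is not None

# "json" always exists; replace with "tensorflow" inside the virtualenv.
print(wheel_is_importable("json"))
print(wheel_is_importable("no_such_package"))
```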

==Submitting a TensorFlow job==

Once the above setup is complete, you can submit a TensorFlow job with

[name@server ~]$ sbatch tensorflow-test.sh

The job submission script has the following contents:

File : tensorflow-test.sh

#!/bin/bash
#SBATCH --gres=gpu:1              # request GPU "generic resource"
#SBATCH --mem=4000M               # memory per node
#SBATCH --time=0-05:00            # time (DD-HH:MM)
#SBATCH --output=%N-%j.out        # %N for node name, %j for jobID

module load cuda cudnn python/3.5.2
source tensorflow/bin/activate
python ./tensorflow-test.py
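The `--time` directive above uses the DD-HH:MM layout noted in the script's comment. A small illustration (not part of Slurm; the function name is ours) of how a value such as `0-05:00` maps to a total walltime:

```python
def walltime_seconds(spec):
    """Convert a Slurm-style DD-HH:MM time limit into seconds.

    Handles only the DD-HH:MM form used in the job script above;
    Slurm itself accepts several other layouts.
    """
    days, rest = spec.split("-")
    hours, minutes = rest.split(":")
    return ((int(days) * 24 + int(hours)) * 60 + int(minutes)) * 60

print(walltime_seconds("0-05:00"))  # 5 hours -> 18000 seconds
```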


The Python script, tensorflow-test.py, has the following form:

File : tensorflow-test.py

import tensorflow as tf
node1 = tf.constant(3.0, dtype=tf.float32)
node2 = tf.constant(4.0) # also tf.float32 implicitly
print(node1, node2)
sess = tf.Session()
print(sess.run([node1, node2]))
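In TensorFlow 1.x, `tf.constant` only builds graph nodes; values are computed when `Session.run` is called, which is why the first `print` shows `Tensor` objects rather than numbers. A rough pure-Python sketch of this deferred-evaluation idea (an analogy only, not the TensorFlow implementation):

```python
class Node:
    """Records a computation without running it, like a graph node."""
    def __init__(self, fn, name):
        self.fn = fn
        self.name = name
    def __repr__(self):
        return "Tensor(%r)" % self.name

class Session:
    """Evaluates nodes only when run() is called."""
    def run(self, nodes):
        return [n.fn() for n in nodes]

node1 = Node(lambda: 3.0, "Const:0")
node2 = Node(lambda: 4.0, "Const_1:0")
print(node1, node2)                   # still unevaluated nodes
print(Session().run([node1, node2]))  # values computed here
```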


Once the above job has completed (it should take less than a minute), you should see an output file named something like cdr116-122907.out with contents similar to the following example:

File : cdr116-122907.out

2017-07-10 12:35:19.489458: I tensorflow/core/common_runtime/gpu/gpu_device.cc:940] Found device 0 with properties:
name: Tesla P100-PCIE-12GB
major: 6 minor: 0 memoryClockRate (GHz) 1.3285
pciBusID 0000:82:00.0
Total memory: 11.91GiB
Free memory: 11.63GiB
2017-07-10 12:35:19.491097: I tensorflow/core/common_runtime/gpu/gpu_device.cc:961] DMA: 0
2017-07-10 12:35:19.491156: I tensorflow/core/common_runtime/gpu/gpu_device.cc:971] 0:   Y
2017-07-10 12:35:19.520737: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1030] Creating TensorFlow device (/gpu:0) -> (device: 0, name: Tesla P100-PCIE-12GB, pci bus id: 0000:82:00.0)
Tensor("Const:0", shape=(), dtype=float32) Tensor("Const_1:0", shape=(), dtype=float32)
[3.0, 4.0]