<languages />
[[Category:Software]]

<translate>

{{Draft}}

LibTorch allows one to implement both C++ extensions to [[PyTorch]] and '''pure C++ machine learning applications'''. It contains "all headers, libraries and CMake configuration files required to depend on PyTorch."<ref>https://pytorch.org/cppdocs/installing.html (Retrieved 2019-07-12)</ref>

== How to use LibTorch ==

=== Download ===

<syntaxhighlight lang="bash">
wget https://download.pytorch.org/libtorch/cu100/libtorch-shared-with-deps-latest.zip
unzip libtorch-shared-with-deps-latest.zip
cd libtorch
export LIBTORCH_ROOT=$(pwd)  # this variable is used in the example below
</syntaxhighlight>

Patch the library to remove hard-coded references to <tt>/usr/local/cuda</tt>, which does not exist on the clusters:
<syntaxhighlight lang="bash">
sed -i -e 's/\/usr\/local\/cuda\/lib64\/libculibos.a;dl;\/usr\/local\/cuda\/lib64\/libculibos.a;//g' share/cmake/Caffe2/Caffe2Targets.cmake
</syntaxhighlight>
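To see what this substitution does, here is a self-contained sketch that applies the same pattern to a sample file. The file name and its one-line contents are illustrative only; the real <tt>Caffe2Targets.cmake</tt> is much larger.

```shell
# Create a sample line resembling the offending entry in Caffe2Targets.cmake
# (illustrative only).
echo 'caffe2;/usr/local/cuda/lib64/libculibos.a;dl;/usr/local/cuda/lib64/libculibos.a;rt' > /tmp/sample.cmake

# Apply the same substitution as the patch above.
sed -i -e 's/\/usr\/local\/cuda\/lib64\/libculibos.a;dl;\/usr\/local\/cuda\/lib64\/libculibos.a;//g' /tmp/sample.cmake

# The hard-coded libculibos.a references are gone; only "caffe2;rt" remains.
cat /tmp/sample.cmake
```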

=== Compile a minimal example ===

Create <tt>example-app.cpp</tt>:

<syntaxhighlight lang="cpp">
#include <torch/torch.h>
#include <iostream>

int main() {
    torch::Device device(torch::kCPU);
    if (torch::cuda::is_available()) {
        std::cout << "CUDA is available! Using GPU." << std::endl;
        device = torch::Device(torch::kCUDA);
    }

    torch::Tensor tensor = torch::rand({2, 3}).to(device);
    std::cout << tensor << std::endl;
}
</syntaxhighlight>

Create <tt>CMakeLists.txt</tt>:

<syntaxhighlight lang="cmake">
cmake_minimum_required(VERSION 3.0 FATAL_ERROR)
project(example-app)

find_package(Torch REQUIRED)

add_executable(example-app example-app.cpp)
target_link_libraries(example-app "${TORCH_LIBRARIES}")
set_property(TARGET example-app PROPERTY CXX_STANDARD 11)
</syntaxhighlight>
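<tt>find_package(Torch)</tt> also exports <tt>TORCH_CXX_FLAGS</tt>, the compile flags LibTorch itself was built with (notably the <tt>_GLIBCXX_USE_CXX11_ABI</tt> setting). If your project mixes LibTorch with other C++ code, propagating them can avoid ABI-mismatch link errors; a sketch to add to the <tt>CMakeLists.txt</tt> above:

<syntaxhighlight lang="cmake">
# Optional: apply the compile flags exported by TorchConfig.cmake
# so that all translation units use the same ABI settings as LibTorch.
set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} ${TORCH_CXX_FLAGS}")
</syntaxhighlight>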

Load the necessary modules:

<syntaxhighlight lang="bash">
module load cmake intel/2018.3 cuda/10 cudnn
</syntaxhighlight>

Compile the program:

<syntaxhighlight lang="bash">
mkdir build
cd build
cmake -DCMAKE_PREFIX_PATH="$LIBTORCH_ROOT;$EBROOTCUDA;$EBROOTCUDNN" ..
make
</syntaxhighlight>

Run the program:

<syntaxhighlight lang="bash">
./example-app
</syntaxhighlight>
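Beyond this smoke test, the same setup supports training entirely in C++. A minimal sketch (not part of the original page) that fits a linear model with the LibTorch C++ API; the data and hyperparameters are illustrative:

<syntaxhighlight lang="cpp">
#include <torch/torch.h>
#include <iostream>

int main() {
    torch::manual_seed(0);

    // Synthetic data: y = 2x + 1 plus a little noise.
    torch::Tensor x = torch::rand({100, 1});
    torch::Tensor y = 2 * x + 1 + 0.01 * torch::randn({100, 1});

    // One linear layer: 1 input feature, 1 output.
    torch::nn::Linear model(1, 1);
    torch::optim::SGD optimizer(model->parameters(), /*lr=*/0.1);

    for (int epoch = 0; epoch < 200; ++epoch) {
        optimizer.zero_grad();
        torch::Tensor loss = torch::mse_loss(model(x), y);
        loss.backward();
        optimizer.step();
    }

    // The learned weight and bias should approach 2 and 1 respectively.
    std::cout << model->weight << std::endl << model->bias << std::endl;
}
</syntaxhighlight>

It builds with the same <tt>CMakeLists.txt</tt> as the minimal example, substituting the source file name.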

To test an application with CUDA, request an [[Running_jobs#Interactive_jobs|interactive job]] with a [[Using_GPUs_with_Slurm|GPU]].

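For example (the account name is a placeholder and the resource values are only a suggestion; adjust them to your needs):

<syntaxhighlight lang="bash">
salloc --account=def-someuser --gres=gpu:1 --cpus-per-task=2 --mem=8G --time=0:30:0
./example-app
</syntaxhighlight>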
== Resources ==

https://pytorch.org/cppdocs/

</translate>