LibTorch

From Alliance Doc

<languages />
[[Category:Software]]
<translate>
{{Draft}}

LibTorch makes it possible to implement both C++ extensions to [[PyTorch]] and '''pure C++ machine learning applications'''. It contains the binary distribution of PyTorch: all of the headers, libraries and CMake configuration files required to depend on PyTorch<ref>https://pytorch.org/cppdocs/installing.html (Retrieved 2019-07-12)</ref>.
== How to use LibTorch ==
=== Download ===
<syntaxhighlight>
wget https://download.pytorch.org/libtorch/cu100/libtorch-shared-with-deps-latest.zip
unzip libtorch-shared-with-deps-latest.zip
cd libtorch
export LIBTORCH_ROOT=$(pwd)  # this variable is used in the example below
</syntaxhighlight>
Patch the library:
<syntaxhighlight>
sed -i -e 's/\/usr\/local\/cuda\/lib64\/libculibos.a;dl;\/usr\/local\/cuda\/lib64\/libculibos.a;//g' share/cmake/Caffe2/Caffe2Targets.cmake
</syntaxhighlight>
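This <tt>sed</tt> command strips the hard-coded references to <tt>/usr/local/cuda/lib64/libculibos.a</tt> (and the accompanying <tt>dl</tt> entry) from <tt>Caffe2Targets.cmake</tt>, since that path does not exist on the clusters; CUDA is instead located through <tt>CMAKE_PREFIX_PATH</tt> at configure time. A minimal sketch of the substitution on an illustrative line (the sample text below only resembles the real <tt>Caffe2Targets.cmake</tt> contents):

```shell
# Illustrative input resembling the offending link-libraries entry
line='caffe2;/usr/local/cuda/lib64/libculibos.a;dl;/usr/local/cuda/lib64/libculibos.a;cudart'

# Same substitution as the patch: delete the hard-coded
# "libculibos.a;dl;libculibos.a;" run, leaving the other entries intact
echo "$line" | sed -e 's/\/usr\/local\/cuda\/lib64\/libculibos.a;dl;\/usr\/local\/cuda\/lib64\/libculibos.a;//g'
# -> caffe2;cudart
```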
=== Compile a minimal example ===
Create <tt>example-app.cpp</tt>:
<syntaxhighlight>
#include <torch/torch.h>
#include <iostream>
int main() {
    torch::Device device(torch::kCPU);
    if (torch::cuda::is_available()) {
        std::cout << "CUDA is available! Using GPU." << std::endl;
        device = torch::Device(torch::kCUDA);
    }
    torch::Tensor tensor = torch::rand({2, 3}).to(device);
    std::cout << tensor << std::endl;
}
</syntaxhighlight>
Create <tt>CMakeLists.txt</tt>:
<syntaxhighlight>
cmake_minimum_required(VERSION 3.0 FATAL_ERROR)
project(example-app)
find_package(Torch REQUIRED)
add_executable(example-app example-app.cpp)
target_link_libraries(example-app "${TORCH_LIBRARIES}")
set_property(TARGET example-app PROPERTY CXX_STANDARD 11)
</syntaxhighlight>
Load the necessary modules:
<syntaxhighlight>
module load cmake intel/2018.3 cuda/10 cudnn
</syntaxhighlight>
Compile the program:
<syntaxhighlight>
mkdir build
cd build
cmake -DCMAKE_PREFIX_PATH="$LIBTORCH_ROOT;$EBROOTCUDA;$EBROOTCUDNN" ..
make
</syntaxhighlight>
Run the program:
<syntaxhighlight>
./example-app
</syntaxhighlight>
To test CUDA, request an interactive job with a GPU.
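On a Slurm cluster this can be done with <tt>salloc</tt>; a sketch of such a request follows, where the account name and the resource amounts are placeholders to adapt to your own allocation:

```shell
# Request an interactive session with one GPU
# (placeholders: adjust account, time, CPUs and memory as needed)
salloc --gres=gpu:1 --cpus-per-task=2 --mem=8G --time=0:30:0 --account=def-someuser

# Inside the job, load the same modules again and run the program
module load cmake intel/2018.3 cuda/10 cudnn
./build/example-app
```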
== Resources ==
https://pytorch.org/cppdocs/


</translate>

Revision as of 19:27, 12 July 2019