* '''cuDF''', a Python GPU DataFrame library (built on the Apache Arrow columnar memory format) for loading, joining, aggregating, filtering, and otherwise manipulating data.

<!--T:38-->
* '''cuML''', a suite of libraries that implement machine learning algorithms and mathematical primitive functions that share compatible APIs with other RAPIDS projects.

<!--T:39-->
* '''cuGraph''', a GPU accelerated graph analytics library, with functionality like NetworkX, which is seamlessly integrated into the RAPIDS data science platform.

<!--T:40-->
* '''Cyber Log Accelerators (CLX or ''clicks'')''', a collection of RAPIDS examples for security analysts, data scientists, and engineers to quickly get started applying RAPIDS and GPU acceleration to real-world cybersecurity use cases.

<!--T:41-->
* '''cuxFilter''', a connector library that links different visualization libraries to a GPU dataframe without much hassle. It also lets you combine charts from several libraries in a single interactive dashboard.

<!--T:42-->
* '''cuSpatial''', a GPU accelerated C++/Python library for GIS workflows including point-in-polygon, spatial join, coordinate systems, shape primitives, distances, and trajectory analysis.

<!--T:43-->
* '''cuSignal''', which leverages CuPy, Numba, and the RAPIDS ecosystem for GPU accelerated signal processing. In some cases, cuSignal is a direct port of SciPy Signal that uses CuPy to leverage GPU compute resources, but it also contains Numba CUDA kernels for additional speedups in selected functions.

<!--T:44-->
* '''cuCIM''', an extensible toolkit designed to provide GPU accelerated I/O, computer vision & image processing primitives for N-Dimensional images with a focus on biomedical imaging.

<!--T:45-->
* '''RAPIDS Memory Manager (RMM)''', a central place for all device memory allocations in cuDF (C++ and Python) and other RAPIDS libraries. In addition, it is a replacement allocator for CUDA Device Memory (and CUDA Managed Memory) and a pool allocator to make CUDA device memory allocation / deallocation faster and asynchronous.  

=Singularity images= <!--T:4-->

<!--T:5-->
To build a Singularity image for RAPIDS, the first thing to do is to find and select a Docker image provided by NVIDIA.


==Finding a Docker image== <!--T:6-->

There are three types of RAPIDS Docker images: ''base'', ''runtime'', and ''devel''. For each type, multiple images are provided for different combinations of RAPIDS and CUDA versions, either on Ubuntu or on CentOS. You can find the Docker <tt>pull</tt> command for a selected image under the '''Tags''' tab on each site.
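For illustration, a <tt>pull</tt> command for a ''runtime'' image has roughly the following form; the repository path and tag below are only placeholders, so copy the actual command from the '''Tags''' tab for the RAPIDS, CUDA, and OS combination you want:

<pre>
# Placeholder example only; take the real command from the Tags tab
docker pull nvcr.io/nvidia/rapidsai/rapidsai:23.02-cuda11.8-runtime-ubuntu22.04
</pre>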
It usually takes from thirty to sixty minutes to complete the image-building process. Since the image size is relatively large, you need to have enough memory and disk space on the server to build such an image.
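As a rough sketch, converting a selected Docker image into a Singularity image could look like the following; the directories and the image tag are placeholders chosen for illustration:

<pre>
# Point the Singularity cache and temporary directories at a filesystem with enough space
export SINGULARITY_CACHEDIR=/path/with/enough/space/cache
export SINGULARITY_TMPDIR=/path/with/enough/space/tmp

# Build a Singularity image from the selected NVIDIA Docker image (placeholder tag)
singularity build rapids.sif docker://nvcr.io/nvidia/rapidsai/rapidsai:23.02-cuda11.8-runtime-ubuntu22.04
</pre>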


=Working on clusters with a Singularity image= <!--T:12-->
Once you have a Singularity image for RAPIDS ready in your account, you can request an interactive session on a GPU node or submit a batch job to Slurm if you have your RAPIDS code ready.


==Working interactively on a GPU node== <!--T:13-->


<!--T:14-->
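As a rough illustration, requesting an interactive GPU session and opening a shell inside the image could look like this; the account name, resource requests, and image path are placeholders:

<pre>
# Request an interactive session with one GPU (placeholder account and resources)
salloc --account=def-someuser --gres=gpu:1 --cpus-per-task=4 --mem=32G --time=2:0:0

# On the GPU node, load Singularity if your cluster provides it as a module,
# then open a shell inside the RAPIDS image with GPU support enabled
module load singularity
singularity shell --nv ~/rapids.sif
</pre>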
As there is no direct Internet connection on a compute node on Graham, you would need to set up an SSH tunnel with port forwarding between your local computer and the GPU node. See [[Jupyter#Connecting_to_Jupyter_Notebook|detailed instructions for connecting to Jupyter Notebook]].
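For example, from your local computer such a tunnel could be created roughly as follows; the port number, node name, and username are placeholders, and the linked Jupyter Notebook instructions give the exact procedure:

<pre>
# Forward local port 8888 to port 8888 on the GPU node (gra1234), via the Graham login node
ssh -L 8888:gra1234:8888 someuser@graham.computecanada.ca
</pre>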


==Submitting a RAPIDS job to the Slurm scheduler== <!--T:21-->


<!--T:22-->
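As a rough sketch, a batch job that runs a RAPIDS Python script inside the Singularity image could look like the following; the account, resources, and file names are placeholders:

<pre>
#!/bin/bash
#SBATCH --account=def-someuser   # placeholder account
#SBATCH --gres=gpu:1             # request one GPU
#SBATCH --cpus-per-task=4
#SBATCH --mem=32G
#SBATCH --time=1:0:0

# Load Singularity if your cluster provides it as a module
module load singularity

# Run the RAPIDS script inside the image with GPU support enabled (placeholder paths)
singularity exec --nv ~/rapids.sif python ~/my_rapids_script.py
</pre>

Such a script would then be submitted with <tt>sbatch</tt> in the usual way.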


=Helpful links= <!--T:25-->


<!--T:26-->