CUDA tutorial

[[Category:Software]]
=Introduction= <!--T:1-->
This tutorial introduces the graphics processing unit (GPU) as a massively parallel computing device, the CUDA parallel programming language, and some of the CUDA numerical libraries used in high-performance computing.
{{Prerequisites
|title=Prerequisites
|content=
This tutorial uses CUDA to accelerate C or C++ code; a working knowledge of one of these languages is therefore required to gain the most benefit. Although Fortran is also supported by CUDA, this tutorial covers only CUDA C/C++. From here on, we use the term ''CUDA C'' to refer to both CUDA C and CUDA C++. CUDA C is essentially C/C++ with extensions that allow functions to be executed on both the GPU and the CPU.
}}
{{Objectives
|title=Learning objectives
|content=
* Understand the architecture of a GPU
* Understand the workflow of a CUDA program
* Manage and understand the various types of GPU memory
* Write and compile an example of CUDA code
}}
=What is a GPU?=
A GPU, or graphics processing unit, is a single-chip processor that performs rapid mathematical calculations, primarily for the purpose of rendering images. In recent years, however, this capability has been harnessed more broadly to accelerate computational workloads in cutting-edge scientific research.


=What is CUDA?= <!--T:23-->
'''CUDA''' stands for ''compute unified device architecture''. It is a scalable parallel programming model and software environment for parallel computing that provides access to the instructions and memory of the massively parallel elements in a GPU.
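
As a minimal illustration of this model, the sketch below defines a function marked <code>__global__</code> (a ''kernel'') that runs on the GPU and is launched from ordinary host (CPU) code. The file name and kernel name are arbitrary choices for this example; such a file can be compiled with the NVIDIA compiler, e.g. <code>nvcc hello.cu -o hello</code>.

<syntaxhighlight lang="cpp">
#include <cstdio>

// Kernel: the __global__ qualifier marks a function that runs on the GPU
// and is launched from host (CPU) code with the <<<blocks, threads>>> syntax.
__global__ void hello_from_gpu()
{
    printf("Hello from GPU thread %d\n", threadIdx.x);
}

int main()
{
    // Launch the kernel on the GPU: 1 block of 4 threads.
    hello_from_gpu<<<1, 4>>>();

    // Wait for the GPU to finish before the host program continues.
    cudaDeviceSynchronize();

    printf("Hello from the CPU\n");
    return 0;
}
</syntaxhighlight>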


=CUDA GPU Architecture= <!--T:24-->