CUDA tutorial

|title=Prerequisites
|content=
This tutorial uses CUDA to accelerate C or C++ code; a working knowledge of one of these languages is therefore required to gain the most benefit. Even though Fortran is also supported by CUDA, this tutorial covers only CUDA C/C++. From here on, we use the term ''CUDA C'' to refer to both CUDA C and CUDA C++. CUDA C is essentially C/C++ that allows one to execute functions on both GPUs and CPUs.
}}
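As a minimal sketch of this idea (assuming a system with the NVIDIA <code>nvcc</code> compiler and a CUDA-capable GPU; the file and function names are illustrative only), the following program mixes an ordinary CPU function with a GPU function, or ''kernel'':

<syntaxhighlight lang="cuda">
#include <cstdio>

// Runs on the GPU: the __global__ qualifier marks a kernel, a function
// that is launched from the host (CPU) and executed on the device (GPU).
__global__ void hello_from_gpu() {
    printf("Hello from GPU thread %d\n", threadIdx.x);
}

// An ordinary C/C++ function: compiled for and run on the CPU.
void hello_from_cpu() {
    printf("Hello from the CPU\n");
}

int main() {
    hello_from_cpu();              // normal CPU call
    hello_from_gpu<<<1, 4>>>();    // launch the kernel on 4 GPU threads
    cudaDeviceSynchronize();       // wait for the GPU to finish printing
    return 0;
}
</syntaxhighlight>

Compiled with something like <code>nvcc hello.cu -o hello</code>, the host function runs once on the CPU, while the kernel body runs once in each of the four GPU threads.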
{{Objectives
}}
=What is a GPU?=
A GPU, or graphics processing unit, is a single-chip processor that performs rapid mathematical calculations for the purpose of rendering images.
In recent years, however, this capability has been harnessed more broadly to accelerate computational workloads in cutting-edge scientific research.


=What is CUDA?= <!--T:23-->
CUDA stands for ''compute unified device architecture''. It is a scalable parallel programming model and software environment for parallel computing which
provides access to the instructions and memory of the massively parallel elements in GPUs.
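To make this access to GPU memory concrete, here is a small illustrative sketch (the kernel and variable names are our own; only the standard CUDA runtime calls <code>cudaMalloc</code>, <code>cudaMemcpy</code>, and <code>cudaFree</code> are assumed). It allocates an array on the device, copies data to it, scales every element in parallel, and copies the result back:

<syntaxhighlight lang="cuda">
#include <cstdio>

// Each GPU thread scales exactly one array element.
__global__ void scale(float *x, float a, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;  // global thread index
    if (i < n) x[i] *= a;                           // guard against overrun
}

int main() {
    const int n = 256;
    float host[n];
    for (int i = 0; i < n; ++i) host[i] = 1.0f;

    float *dev;
    cudaMalloc(&dev, n * sizeof(float));            // allocate GPU memory
    cudaMemcpy(dev, host, n * sizeof(float),        // host -> device copy
               cudaMemcpyHostToDevice);

    scale<<<(n + 63) / 64, 64>>>(dev, 2.0f, n);     // 4 blocks of 64 threads

    cudaMemcpy(host, dev, n * sizeof(float),        // device -> host copy
               cudaMemcpyDeviceToHost);
    cudaFree(dev);

    printf("host[0] = %g\n", host[0]);              // 2 after scaling
    return 0;
}
</syntaxhighlight>

Note that host and device memories are separate: the kernel can only touch memory allocated on the device, so data must be copied explicitly in each direction.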


=CUDA GPU Architecture= <!--T:24-->