Provides access to the instructions and memory of the massively parallel elements in the GPU.
Another definition: CUDA is a scalable parallel programming model and a software environment for parallel computing.
=CUDA Programming Model=
Before we start talking about the programming model, let us go over some useful terminology:
*Host – The CPU and its memory (host memory)
*Device – The GPU and its memory (device memory)
The CUDA programming model is a heterogeneous model in which both the CPU and GPU are used.
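As a minimal sketch of this heterogeneous split (the kernel and array names here are illustrative, not from the original text), the host allocates device memory, copies data over, launches a kernel on the GPU, and copies the result back:

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// Kernel: runs on the device (GPU), launched from the host (CPU).
__global__ void scale(float *x, float a, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) x[i] *= a;
}

int main(void) {
    const int n = 256;
    float h_x[n];                        // host (CPU) memory
    for (int i = 0; i < n; ++i) h_x[i] = 1.0f;

    float *d_x;                          // device (GPU) memory
    cudaMalloc(&d_x, n * sizeof(float));
    cudaMemcpy(d_x, h_x, n * sizeof(float), cudaMemcpyHostToDevice);

    // Launch enough 128-thread blocks to cover all n elements.
    scale<<<(n + 127) / 128, 128>>>(d_x, 2.0f, n);

    cudaMemcpy(h_x, d_x, n * sizeof(float), cudaMemcpyDeviceToHost);
    cudaFree(d_x);
    printf("x[0] = %f\n", h_x[0]);
    return 0;
}
```

Note that the host and device arrays are distinct allocations in distinct memories; data moves between them only through explicit `cudaMemcpy` calls.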
CUDA code can manage the memory of both the CPU and the GPU, as well as execute GPU functions, called kernels. Such kernels are executed by many GPU threads in parallel. Here is the five-step recipe of a typical CUDA code: