CUDA tutorial
CUDA provides access to the instructions and memory of the massively parallel elements in the GPU.
Another definition: CUDA is a scalable parallel programming model and software environment for parallel computing.
=CUDA GPU Architecture=
There are two main components of the GPU:
* Global memory
** Similar to CPU memory
** Accessible by both CPU and GPU
*Streaming multiprocessors (SMs)
**They perform actual computations
**Each SM has its own control unit, registers, execution pipelines, etc.
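The split described above can be illustrated with a minimal sketch, assuming a standard CUDA toolkit setup: data lives in global memory (allocated with cudaMalloc and copied explicitly between CPU and GPU), while the kernel's thread blocks are scheduled onto the SMs for the actual computation. The kernel name vecAdd and the sizes chosen here are illustrative, not from the tutorial itself.

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// Kernel executed on the SMs: one thread computes one element.
__global__ void vecAdd(const float *a, const float *b, float *c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) c[i] = a[i] + b[i];
}

int main() {
    const int n = 1024;
    const size_t bytes = n * sizeof(float);

    float h_a[n], h_b[n], h_c[n];
    for (int i = 0; i < n; ++i) { h_a[i] = (float)i; h_b[i] = 2.0f * i; }

    // Global memory: allocated on the device, reachable from the host
    // only through explicit copies.
    float *d_a, *d_b, *d_c;
    cudaMalloc(&d_a, bytes);
    cudaMalloc(&d_b, bytes);
    cudaMalloc(&d_c, bytes);
    cudaMemcpy(d_a, h_a, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(d_b, h_b, bytes, cudaMemcpyHostToDevice);

    // Launch: blocks of 256 threads are distributed across the SMs.
    vecAdd<<<(n + 255) / 256, 256>>>(d_a, d_b, d_c, n);
    cudaMemcpy(h_c, d_c, bytes, cudaMemcpyDeviceToHost);

    printf("c[10] = %f\n", h_c[10]);  // 10 + 20 = 30

    cudaFree(d_a); cudaFree(d_b); cudaFree(d_c);
    return 0;
}
```

Compiling with nvcc and running on any CUDA-capable device should print c[10] = 30.000000; the same pattern (allocate, copy in, launch, copy out) underlies most CUDA programs.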
=CUDA Programming Model=
Before we start talking about programming model, let us go over some useful terminology: