CUDA tutorial
Introduction
What is a GPU?
A GPU, or graphics processing unit, is a single-chip processor that performs rapid mathematical calculations, primarily for the purpose of rendering images. In recent years, however, this capability has been harnessed more broadly to accelerate computational workloads in cutting-edge areas of scientific research.
What is CUDA?
CUDA = Compute Unified Device Architecture. CUDA provides access to the instructions and memory of the massively parallel elements in the GPU. Another definition: CUDA is a scalable parallel programming model and software environment for parallel computing.
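For illustration, here is a minimal sketch (not from the original page; the kernel name, array, and sizes are made up) of what this programming model looks like: a C function marked __global__, called a kernel, runs in parallel across many GPU threads, with each thread handling one element.

 // Minimal CUDA kernel sketch: each GPU thread scales one array element.
 __global__ void scale(float *data, float factor, int n)
 {
     int i = blockIdx.x * blockDim.x + threadIdx.x;  // global thread index
     if (i < n)
         data[i] *= factor;
 }
 
 // Launched from CPU code with a grid of thread blocks, for example:
 //   scale<<<(n + 255) / 256, 256>>>(d_data, 2.0f, n);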
Terminology
- Host – The CPU and its memory (host memory)
- Device – The GPU and its memory (device memory)
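These two terms map directly onto the structure of a typical CUDA program: data is allocated on the host, copied to the device, processed by kernels, and copied back. The following is a minimal sketch using the CUDA runtime API (the array name and size are illustrative, and the kernel launch is omitted):

 #include <cuda_runtime.h>
 #include <stdlib.h>
 
 int main(void)
 {
     int n = 1024;
     size_t bytes = n * sizeof(float);
 
     float *h_a = (float *)malloc(bytes);       // host memory (CPU)
     float *d_a;
     cudaMalloc((void **)&d_a, bytes);          // device memory (GPU)
 
     cudaMemcpy(d_a, h_a, bytes, cudaMemcpyHostToDevice);  // host -> device
     // ... launch kernels that operate on d_a here ...
     cudaMemcpy(h_a, d_a, bytes, cudaMemcpyDeviceToHost);  // device -> host
 
     cudaFree(d_a);                             // release device memory
     free(h_a);                                 // release host memory
     return 0;
 }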