CUDA tutorial

* each GPU core (streaming processor) executes a sequential '''thread''', where a '''thread''' is the smallest set of instructions that can be managed independently by a scheduler.
* all GPU cores execute the kernel in a SIMT fashion (Single Instruction, Multiple Threads)
Usually the following procedure is recommended when executing code on the GPU (a minimal sketch follows the list):
# Copy input data from CPU memory to GPU memory
# Load the GPU program (kernel) and execute it
# Copy results from GPU memory back to CPU memory
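For example, a minimal host program illustrating these three steps might look like the sketch below. The kernel name <code>add_one</code>, the array size and the launch configuration are arbitrary choices made for illustration, not a prescribed setup:

<syntaxhighlight lang="cpp">
#include <cuda_runtime.h>
#include <stdio.h>
#include <stdlib.h>

// Hypothetical kernel: each thread adds 1.0 to one element of the array.
__global__ void add_one(float *data, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;  // global thread index
    if (i < n)
        data[i] += 1.0f;
}

int main(void)
{
    const int n = 1024;
    size_t size = n * sizeof(float);

    // Allocate and initialize input data on the CPU (host)
    float *h_data = (float *)malloc(size);
    for (int i = 0; i < n; i++)
        h_data[i] = (float)i;

    // Allocate memory on the GPU (device)
    float *d_data;
    cudaMalloc(&d_data, size);

    // 1. Copy input data from CPU memory to GPU memory
    cudaMemcpy(d_data, h_data, size, cudaMemcpyHostToDevice);

    // 2. Load the GPU program (kernel) and execute it
    add_one<<<(n + 255) / 256, 256>>>(d_data, n);

    // 3. Copy results from GPU memory back to CPU memory
    cudaMemcpy(h_data, d_data, size, cudaMemcpyDeviceToHost);

    printf("h_data[0] = %f\n", h_data[0]);

    cudaFree(d_data);
    free(h_data);
    return 0;
}
</syntaxhighlight>

The <code><<<(n + 255) / 256, 256>>></code> launch configuration requests enough blocks of 256 threads to cover all <code>n</code> elements; each thread computes its own global index from <code>blockIdx</code>, <code>blockDim</code> and <code>threadIdx</code>, so all threads run the same kernel code on different data, as described above.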


= First CUDA C Program =