* Thread IDs: 1D, 2D, or 3D (threadIdx.x, threadIdx.y, threadIdx.z)
Such a model simplifies memory addressing when processing multidimensional data.
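For example, a minimal sketch (the kernel and helper names <code>scale2D</code> and <code>scaleOnGpu</code> are illustrative, not part of this tutorial) shows how the 2D thread and block indices map directly onto the offset of an element in a row-major matrix:

<syntaxhighlight lang="cpp">
__global__ void scale2D(float *data, int width, int height, float factor)
{
    // Global 2D coordinates of this thread within the whole grid
    int col = blockIdx.x * blockDim.x + threadIdx.x;
    int row = blockIdx.y * blockDim.y + threadIdx.y;

    // Each (row, col) pair maps to a unique offset in the row-major array
    if (col < width && row < height)
        data[row * width + col] *= factor;
}

// Host-side launch: a 2D grid of 2D blocks, one thread per matrix element
void scaleOnGpu(float *d_data, int width, int height, float factor)
{
    dim3 block(16, 16);
    dim3 grid((width + block.x - 1) / block.x,
              (height + block.y - 1) / block.y);
    scale2D<<<grid, block>>>(d_data, width, height, factor);
}
</syntaxhighlight>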
= Thread Scheduling =
Usually a streaming multiprocessor (SM) executes one thread block at a time. The code is executed in groups of 32 threads, called warps. A hardware scheduler is free to assign blocks to any SM at any time. Furthermore, when an SM is assigned a block, this does not mean that the block will be executed without interruption: the scheduler can postpone or suspend its execution under certain conditions, e.g. when the data it needs is not yet available (indeed, it takes quite some time to read data from the global GPU memory). When that happens, the scheduler switches to another thread block that is ready for execution.
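As a rough illustration of this warp grouping (the kernel name <code>recordWarp</code> and its argument are illustrative assumptions, not code from this tutorial), each thread can compute which warp of its block it belongs to from its thread index:

<syntaxhighlight lang="cpp">
__global__ void recordWarp(int *warpOfThread)
{
    // Global index of this thread across the whole (1D) grid
    int tid = blockIdx.x * blockDim.x + threadIdx.x;

    // Threads 0-31 of a block form warp 0, threads 32-63 form warp 1, and so on.
    // warpSize is a built-in device constant (32 on current NVIDIA GPUs).
    warpOfThread[tid] = threadIdx.x / warpSize;
}
</syntaxhighlight>

Launched with, say, 128 threads per block, the threads of one block would report warp numbers 0 through 3; the hardware issues instructions warp by warp, not thread by thread.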


= First CUDA C Program =