OpenACC Tutorial - Adding directives
Revision as of 16:17, 9 May 2016
- Understand the process of offloading
- Understand what an OpenACC directive is
- Understand the difference between the loop and kernels directives
- Understand how to build a program with OpenACC
- Understand what aliasing is in C/C++
- Learn how to use compiler feedback and how to fix false aliasing
Offloading to a GPU
The first thing to realize when trying to port code to a GPU is that the GPU does not share memory with the CPU. In other words, a GPU has no direct access to the host memory. The host memory is generally larger, but slower, than the GPU memory. To use a GPU, data must therefore be transferred from the main program to the GPU through the PCI bus, which has a much lower bandwidth than either memory. This means that managing data transfers between the host and the GPU is of paramount importance. Transferring the data and the code onto the device is called offloading.
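As an illustration, the transfers for a simple vector addition can be made explicit with data clauses on an OpenACC region. The clauses below are standard OpenACC; the function itself is a sketch we use for illustration. Compiled without OpenACC support, the pragma is simply ignored and the loop runs on the host.

```c
/* c = a + b, offloaded to the accelerator when OpenACC is enabled.
   copyin: host -> device transfer before the region;
   copyout: device -> host transfer after the region.
   Both transfers cross the PCI bus, so they dominate the cost for small n. */
void vector_add(const float *a, const float *b, float *c, int n)
{
    #pragma acc kernels copyin(a[0:n], b[0:n]) copyout(c[0:n])
    for (int i = 0; i < n; i++)
        c[i] = a[i] + b[i];
}
```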
OpenACC directives
OpenACC directives are much like OpenMP directives. They take the form of pragmas in C/C++, and comments in Fortran. The advantages of this approach are numerous. First, since it involves only minor modifications to the code, changes can be made incrementally, one pragma at a time. This is especially useful for debugging, since changing a single thing at a time makes it easy to identify which change introduced a bug. Second, OpenACC support can be disabled at compile time. When OpenACC support is disabled, the pragmas are treated as comments and ignored by the compiler. This means that a single source code can be used to compile both an accelerated version and a normal version. Third, since all of the offloading work is done by the compiler, the same code can be compiled for various accelerator types: GPUs, MIC (Xeon Phi) or CPUs. It also means that supporting a new generation of devices only requires updating the compiler, not changing the code.
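The compile-time toggle can also be detected from the code itself: the OpenACC specification requires conforming compilers to define the `_OPENACC` preprocessor macro when support is enabled. A minimal sketch (the function name is ours, for illustration):

```c
/* Returns 1 when compiled with OpenACC support, 0 otherwise.
   OpenACC-capable compilers define the _OPENACC macro when acceleration
   is enabled; without it, acc pragmas are ignored like comments. */
int openacc_enabled(void)
{
#ifdef _OPENACC
    return 1;
#else
    return 0;
#endif
}
```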
In the following example, we take code consisting of two loops. The first one initializes two vectors, and the second performs a SAXPY, a basic scaled vector addition (y = a*x + y).
C/C++:

#pragma acc kernels
{
  for (int i=0; i<N; i++)
  {
    x[i] = 1.0;
    y[i] = 2.0;
  }
  for (int i=0; i<N; i++)
  {
    y[i] = a * x[i] + y[i];
  }
}

Fortran:

!$acc kernels
do i=1,N
  x(i) = 1.0
  y(i) = 2.0
end do
y(:) = a*x(:) + y(:)
!$acc end kernels
In both the C/C++ and the Fortran cases, the compiler will identify two kernels. In C/C++, the two kernels correspond to the body of each loop. In Fortran, the kernels will be the body of the first loop, as well as the body of the implicit loop that Fortran performs when it does an array operation.
Note that in C/C++, the OpenACC block is delimited using curly brackets, while in Fortran, the same directive comment needs to be repeated, with the end keyword added.
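The C/C++ version above can be wrapped in a self-contained function for experimentation. Compiled without OpenACC support, the pragma is ignored and both loops simply run serially on the host; the function name and taking `a` as a parameter are our choices, for illustration.

```c
/* Initializes x and y, then performs the SAXPY y = a*x + y.
   With OpenACC enabled, the compiler generates one kernel per loop;
   without it, the pragma is ignored and the loops run on the CPU. */
void init_and_saxpy(int n, float a, float *x, float *y)
{
    #pragma acc kernels
    {
        for (int i = 0; i < n; i++)
        {
            x[i] = 1.0f;
            y[i] = 2.0f;
        }
        for (int i = 0; i < n; i++)
        {
            y[i] = a * x[i] + y[i];
        }
    }
}
```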
Loops vs Kernels
When the compiler reaches an OpenACC kernels directive, it will analyze the code in order to identify sections that can be parallelized. This often corresponds to the body of loops. When such a case is identified, the compiler will wrap the body of the loop into a special function called a kernel. This function makes it clear that each call is independent from any other call. The function is then compiled to enable it to run on an accelerator. Since each call is independent, each of the thousands of cores of the accelerator can run the function for one specific index in parallel.
Loop:

for (int i=0; i<N; i++)
{
  C[i] = A[i] + B[i];
}

Calculates i = 0 to N-1 in order.

Kernel:

void loopBody(float *A, float *B, float *C, int i)
{
  C[i] = A[i] + B[i];
}

Each compute core calculates one value of i.
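Conceptually, the transformed program replaces the loop with one kernel invocation per index. On a GPU those invocations run concurrently; the serial equivalent of that dispatch can be sketched as follows (the function names are ours, for illustration):

```c
/* The "kernel": one call computes the result for a single index i,
   independently of every other call. */
void loopBody(const float *A, const float *B, float *C, int i)
{
    C[i] = A[i] + B[i];
}

/* What the accelerator does in parallel, shown serially: each of the
   N calls could be handed to a different compute core. */
void run_kernel(const float *A, const float *B, float *C, int N)
{
    for (int i = 0; i < N; i++)
        loopBody(A, B, C, i);
}
```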
The kernels directive
The kernels directive is what we call a descriptive directive. It is used to tell the compiler that the programmer thinks this region can be made parallel. At this point, the compiler is free to do whatever it wants with this information. Typically, it will:
- Analyze the code to try to identify parallelism
- If found, identify which data must be transferred and when
- Create a kernel
- Offload the kernel to the GPU
One example of this directive is the following code:
#pragma acc kernels
{
for (int i=0; i<N; i++)
{
C[i] = A[i] + B[i];
}
}