OpenACC Tutorial: Profilers

Revision as of 20:09, 9 May 2017 by Diane27
Learning objectives
  • understand what a profiler is
  • know how to use PGPROF
  • understand the performance of the code
  • know how to focus your efforts and rewrite the most time-consuming routines


Code profiling

Why would one need to profile code? Because it's the only way to understand:

  1. Where time is being spent (Hotspots)
  2. How the code is performing
  3. Where to focus your time

What is so important about hotspots in the code? A consequence of Amdahl's law is that parallelizing the most time-consuming routines (i.e. the hotspots) will have the most impact.

Build the Sample Code

For this example we will use code from the repositories. Download the package and change to the cpp or f90 directory. The object of this exercise is to compile and link the code, obtain an executable, and then profile it.

Choosing a compiler

As of May 2016, relatively few compilers implemented the OpenACC features. The most advanced implementations are those in the compilers from NVIDIA's Portland Group and from Cray. As for GNU, the OpenACC implementation was still experimental and is expected to be complete in GCC 6.

In this tutorial, we use version 16.3 of the Portland Group compilers, which are free for academic research purposes.


 
[name@server ~]$ make 
pgc++ -fast   -c -o main.o main.cpp
"vector.h", line 30: warning: variable "vcoefs" was declared but never
       referenced
       double *vcoefs=v.coefs;
                    ^

pgc++ main.o -o cg.x -fast

After the executable is created, we can profile the code.

Choosing a profiler

In this tutorial, we use several of the following profilers:

  • PGPROF: a simple yet powerful tool for analyzing parallel programs written with OpenMP, OpenACC or CUDA; note that PGPROF is free for academic research purposes.
  • NVVP (NVIDIA Visual Profiler): a cross-platform analysis tool for programs written with OpenACC and CUDA C/C++.
  • NVPROF: the command-line version of the NVIDIA Visual Profiler.



PGPROF Profiler

 
Starting a new PGPROF session

These next pictures demonstrate how to start with the PGPROF profiler. The first step is to initiate a new session. Then, browse for an executable file of the code you want to profile. Finally, specify the profiling options; for example, if you need to profile CPU activity then click the "Profile execution of the CPU" box.

NVIDIA Visual Profiler

Another profiler available for OpenACC applications is the NVIDIA Visual Profiler. It's a cross-platform analysis tool for code written with OpenACC and CUDA C/C++ instructions.

 
NVVP profiler
 
Browse for the executable you want to profile

NVIDIA NVPROF Command Line Profiler

NVIDIA also provides a command-line version called NVPROF, similar to gprof.

 
[name@server ~]$ nvprof --cpu-profiling on ./cg.x 
<Program output>
======== CPU profiling result (bottom up):
84.25% matvec(matrix const &, vector const &, vector const &)
  84.25% main
9.50% waxpby(double, vector const &, double, vector const &, vector const &)
3.37% dot(vector const &, vector const &)
2.76% allocate_3d_poisson_matrix(matrix&, int)
  2.76% main
0.11% __c_mset8
0.03% munmap
  0.03% free_matrix(matrix&)
    0.03% main
======== Data collected at 100Hz frequency

Compiler Feedback

Before working on the routine, we need to understand what the compiler is actually doing by asking ourselves the following questions:

  • What optimizations were applied?
  • What prevented further optimizations?
  • Can very minor modifications of the code affect performance?

The PGI compiler offers you a -Minfo flag with the following options:

  • accel – Print compiler operations related to the accelerator
  • all – Print all compiler output
  • intensity – Print loop intensity information
• ccff – Add information to the object files for use by tools

How to Enable Compiler Feedback

  • Edit the Makefile

CXX=pgc++
CXXFLAGS=-fast -Minfo=all,intensity,ccff
LDFLAGS=${CXXFLAGS}

  • Rebuild
 
[name@server ~]$ make
pgc++ -fast -Minfo=all,intensity,ccff   -c -o main.o main.cpp
"vector.h", line 30: warning: variable "vcoefs" was declared but never
          referenced
    double *vcoefs=v.coefs;
            ^

_Z17initialize_vectorR6vectord:
          37, Intensity = 0.0
              Memory set idiom, loop replaced by call to __c_mset8
_Z3dotRK6vectorS1_:
          27, Intensity = 1.00    
              Generated 3 alternate versions of the loop
              Generated vector sse code for the loop
              Generated 2 prefetch instructions for the loop
_Z6waxpbydRK6vectordS1_S1_:
          39, Intensity = 1.00    
              Loop not vectorized: data dependency
              Loop unrolled 4 times
_Z26allocate_3d_poisson_matrixR6matrixi:
          43, Intensity = 0.0
          44, Intensity = 0.0
              Loop not vectorized/parallelized: loop count too small
          45, Intensity = 0.0
              Loop unrolled 3 times (completely unrolled)
          57, Intensity = 0.0
          59, Intensity = 0.0
              Loop not vectorized: data dependency
_Z6matvecRK6matrixRK6vectorS4_:
          29, Intensity = (num_rows*((row_end-row_start)*2))/(num_rows+(num_rows+(num_rows+((row_end-row_start)+(row_end-row_start)))))
          33, Intensity = 1.00    
              Unrolled inner loop 4 times
              Generated 2 prefetch instructions for the loop
main:
     61, Intensity = 16.00   
         Loop not vectorized/parallelized: potential early exits
pgc++ main.o -o cg.x -fast -Minfo=all,intensity,ccff

Computational Intensity

Computational Intensity of a loop is a measure of how much work is being done compared to memory operations.

Computational Intensity = Compute Operations / Memory Operations

Computational Intensity of 1.0 or greater suggests that the loop might run well on a GPU.

Understanding the code

Let's look closely at the following code:

// Sparse matrix-vector product in CSR format: y = A * x
for (int i = 0; i < num_rows; i++) {
  double sum = 0;
  int row_start = row_offsets[i];
  int row_end = row_offsets[i+1];
  // loop over the nonzero entries of row i
  for (int j = row_start; j < row_end; j++) {
    unsigned int Acol = cols[j];
    double Acoef = Acoefs[j];
    double xcoef = xcoefs[Acol];
    sum += Acoef * xcoef;
  }
  ycoefs[i] = sum;
}

Given the code above, we search for data dependencies:

  • Does one loop iteration affect other loop iterations?
  • Do loop iterations read from and write to different places in the same array?
  • Is sum a data dependency? No, it’s a reduction.

Onward to the next unit: Adding directives
Back to the lesson plan