Pthreads
Introduction
The term pthreads comes from POSIX threads, one of the earliest parallelization techniques. Like OpenMP, pthreads is used in a shared-memory context, and therefore usually on a single node, where the number of active threads is limited to the available CPU cores. Pthreads can be used from several programming languages, but above all from C. In Fortran, thread-based parallelization is better done with OpenMP, while in C++ the tools of the Boost library are better suited.
The pthreads library served as the basis for later parallelization approaches, including OpenMP. Pthreads can be seen as a set of primitive tools offering elementary parallelization features, in contrast to friendly, high-level APIs such as OpenMP. In the pthreads model, threads are generated dynamically to execute so-called lightweight subroutines that carry out operations asynchronously; these threads are then destroyed after rejoining the main process. Since all the threads of a program reside in the same memory space, it is easy to share data through global variables, unlike in a distributed approach such as MPI; however, any modification of the shared data risks creating race conditions.
Compilation
To use the pthreads functions and data structures in your C program, you need to include the header file pthread.h and compile the program with a flag that links it against the pthreads library.
[name@server ~]$ gcc -pthread -o test threads.c
The number of threads used by the program can be set in one of the following ways (a minimal sketch combining them follows this list):
- passed as a command-line argument;
- read from an environment variable;
- hard-coded in the source file (which, however, does not allow the number of threads to be adjusted at run time).
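The environment variable name NTHREADS, the default of 4 threads and the precedence order below are arbitrary choices for this sketch rather than a pthreads convention; the sketch only illustrates how the thread count could be chosen at run time before the thread-creation loop shown in the next section.
#include <stdio.h>
#include <stdlib.h>

/* Minimal sketch: choose the number of threads at run time.
   The environment variable name NTHREADS is an arbitrary choice
   for this example, not something defined by pthreads. */
int main(int argc, char** argv)
{
   int nthreads = 4;                  /* hard-coded default */
   const char* env = getenv("NTHREADS");
   if (env != NULL) {
      nthreads = atoi(env);           /* value taken from the environment */
   }
   if (argc > 1) {
      nthreads = atoi(argv[1]);       /* a command-line argument overrides it */
   }
   /* nthreads would then control the pthread_create loop shown below. */
   printf("Running with %d threads\n", nthreads);
   return 0;
}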
Creating and destroying pthreads
To parallelize an existing serial program with pthreads, we use a programming model in which threads are created by a parent, carry out a portion of the work, and then rejoin the parent. The parent is either the master thread or one of the other worker threads.
The function pthread_create creates new threads and takes these four arguments:
- the unique identifier for the new thread;
- the set of attributes for the thread;
- the C function that the thread executes when it starts (the start routine);
- the argument of the start routine.
#include <stdio.h>
#include <pthread.h>

const long NT = 12;

/* Start routine executed by each worker thread. */
void* task(void* thread_id)
{
   long tnumber = (long) thread_id;
   printf("Hello World from thread %ld\n",1+tnumber);
   return NULL;
}

int main(int argc,char** argv)
{
   int success;
   long i;
   pthread_t threads[NT];

   /* Create the worker threads; each receives its index as argument. */
   for(i=0; i<NT; ++i) {
      success = pthread_create(&threads[i],NULL,task,(void*)i);
      if (success != 0) {
         printf("ERROR: Unable to create worker thread %ld successfully\n",i);
         return 1;
      }
   }
   /* Wait for all worker threads to finish. */
   for(i=0; i<NT; ++i) {
      pthread_join(threads[i],NULL);
   }
   return 0;
}
This simple example creates twelve threads, each of which executes the function task with the thread's index, from 0 to 11, as its argument. Note that the call to pthread_create is non-blocking, i.e. the root or master thread, which is executing the main function, continues to execute after each of the twelve worker threads is created. After creating the twelve threads, the master thread enters the second for loop and calls pthread_join, a blocking function in which the master thread waits for the twelve workers to finish executing the function task and rejoin it. While trivial, this program illustrates the basic lifecycle of a POSIX thread: the master thread creates a thread by assigning it a function to run, then waits for the thread to finish and join back into the execution of the master thread.
If you run this test program several times in a row you'll likely notice that the order in which you see the various worker threads saying hello varies, which is what we would expect since they are now running in an asynchronous manner. Each time the program is executed, the twelve threads compete for access to the standard output during the printf call and from one execution of the program to another the winners of this competition will change.
Synchronizing Data Access
In a more realistic program, worker threads will need to read and eventually modify certain data in order to accomplish their tasks. These data normally consist of a set of global variables of different types and dimensions, and with multiple threads reading and writing them, we need to ensure that access to these data is synchronized in some fashion to avoid race conditions, i.e. situations in which the program's output depends on the random order in which the asynchronous threads access the data. Typically, we want the parallel version of our program to produce results identical to what we would obtain when running it serially, so race conditions are unacceptable.
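To make the problem concrete, here is a minimal sketch (not one of the examples discussed on this page) in which several threads increment a shared counter with no synchronization at all; increments can be lost when two threads read the same old value, so the final total is often smaller than expected and may change from run to run.
#include <stdio.h>
#include <pthread.h>

/* Illustrative sketch only: a data race on a shared counter. */
const long NT = 4;
long counter = 0;

void* racy_task(void* thread_id)
{
   (void) thread_id;                  /* unused in this sketch */
   for (long i = 0; i < 100000; ++i) {
      counter = counter + 1;          /* unprotected read-modify-write */
   }
   return NULL;
}

int main(int argc,char** argv)
{
   long i;
   pthread_t threads[NT];
   for(i=0; i<NT; ++i) {
      pthread_create(&threads[i],NULL,racy_task,(void*)i);
   }
   for(i=0; i<NT; ++i) {
      pthread_join(threads[i],NULL);
   }
   /* With 4 threads the expected total is 400000, but the race
      usually produces a smaller, varying value. */
   printf("Final counter value: %ld\n", counter);
   return 0;
}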
The simplest and most common way to control the reading and writing of data shared among threads is the mutex, derived from the expression 'mutual exclusion'. In pthreads, a mutex is a kind of variable that may be "locked" or "owned" by only one thread at a time. The thread must then release or unlock the mutex once the global data has been read or modified. The code that lies between the call to lock a mutex and the call to unlock it will only be executed by a single thread at a time. To create a mutex in a pthreads program, we declare a global variable of type pthread_mutex_t which must be initialized before it is used by calling pthread_mutex_init. At the program's end we release the resources associated with the mutex by calling pthread_mutex_destroy.
#include <stdio.h>
#include <pthread.h>

const long NT = 12;
pthread_mutex_t mutex;

/* Start routine: the printf call is serialized with a mutex. */
void* task(void* thread_id)
{
   long tnumber = (long) thread_id;
   pthread_mutex_lock(&mutex);
   printf("Hello World from thread %ld\n",1+tnumber);
   pthread_mutex_unlock(&mutex);
   return NULL;
}

int main(int argc,char** argv)
{
   int success;
   long i;
   pthread_t threads[NT];

   pthread_mutex_init(&mutex,NULL);
   for(i=0; i<NT; ++i) {
      success = pthread_create(&threads[i],NULL,task,(void*)i);
      if (success != 0) {
         printf("ERROR: Unable to create worker thread %ld successfully\n",i);
         pthread_mutex_destroy(&mutex);
         return 1;
      }
   }
   for(i=0; i<NT; ++i) {
      pthread_join(threads[i],NULL);
   }
   pthread_mutex_destroy(&mutex);
   return 0;
}
In this example, based on the previous code, access to the standard output channel is serialized - as it normally should be - using a mutex. The call to pthread_mutex_lock is blocking, i.e. the thread will continue to wait indefinitely for the mutex to become available, so you have to take care that no deadlock can occur in your code, that is, that the mutex is guaranteed to become available eventually. This is particularly problematic in a more realistic example where you may have many different mutexes designed to control access to different global data structures. There is also a non-blocking alternative, pthread_mutex_trylock, which if it fails to obtain the mutex lock returns immediately with a non-zero value indicating that the mutex is busy. You should also ensure that no extraneous code appears inside the serialized code block; since this code will be executed in a serial manner, you want it to be as short as it can safely be in order not to reduce your program's parallel performance.
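As a brief sketch of how pthread_mutex_trylock could be used (a variation on the example above, not code from elsewhere on this page), each worker tests the mutex and simply reports and moves on if it is busy, rather than blocking:
#include <stdio.h>
#include <errno.h>
#include <pthread.h>

/* Sketch: non-blocking attempt to take the mutex. */
const long NT = 4;
pthread_mutex_t mutex;

void* task(void* thread_id)
{
   long tnumber = (long) thread_id;
   int rc = pthread_mutex_trylock(&mutex);
   if (rc == 0) {
      /* We own the mutex; keep the serialized section short. */
      printf("Thread %ld acquired the mutex\n",1+tnumber);
      pthread_mutex_unlock(&mutex);
   }
   else if (rc == EBUSY) {
      /* Another thread holds the mutex; do other work instead of waiting. */
      printf("Thread %ld found the mutex busy, moving on\n",1+tnumber);
   }
   return NULL;
}

int main(int argc,char** argv)
{
   long i;
   pthread_t threads[NT];
   pthread_mutex_init(&mutex,NULL);
   for(i=0; i<NT; ++i) {
      pthread_create(&threads[i],NULL,task,(void*)i);
   }
   for(i=0; i<NT; ++i) {
      pthread_join(threads[i],NULL);
   }
   pthread_mutex_destroy(&mutex);
   return 0;
}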
A more subtle form of data synchronization is possible with the read/write lock, pthread_rwlock_t. With this construct, multiple threads can simultaneously read the value of a variable, but for write access the read/write lock behaves like the standard mutex, i.e. no other thread may have any access (read or write) to the variable. As with a mutex, a pthread_rwlock_t must be initialized before its first use and destroyed when it is no longer needed during the program. Individual threads can obtain either a read lock by calling pthread_rwlock_rdlock, or a write lock with pthread_rwlock_wrlock. Either one is released using pthread_rwlock_unlock.
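Since no read/write lock appears in the examples on this page, the following minimal sketch only illustrates the calls named above, with thread 0 acting as the single writer and the remaining threads as concurrent readers:
#include <stdio.h>
#include <pthread.h>

/* Sketch: one writer and several readers sharing a value
   protected by a read/write lock. */
const long NT = 4;
pthread_rwlock_t rwlock;
long shared_value = 0;

void* task(void* thread_id)
{
   long tnumber = (long) thread_id;
   if (tnumber == 0) {
      /* Writer: exclusive access while modifying the value. */
      pthread_rwlock_wrlock(&rwlock);
      shared_value = 42;
      pthread_rwlock_unlock(&rwlock);
   }
   else {
      /* Readers: several threads may hold the read lock at once. */
      pthread_rwlock_rdlock(&rwlock);
      printf("Thread %ld reads %ld\n",1+tnumber,shared_value);
      pthread_rwlock_unlock(&rwlock);
   }
   return NULL;
}

int main(int argc,char** argv)
{
   long i;
   pthread_t threads[NT];
   pthread_rwlock_init(&rwlock,NULL);
   for(i=0; i<NT; ++i) {
      pthread_create(&threads[i],NULL,task,(void*)i);
   }
   for(i=0; i<NT; ++i) {
      pthread_join(threads[i],NULL);
   }
   pthread_rwlock_destroy(&rwlock);
   return 0;
}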
Another construct is used to allow multiple threads to wait for a single condition, for example waiting for work to become available for the worker threads. This construct is called a condition variable and has the datatype pthread_cond_t. Like a mutex or read/write lock, a condition variable must be initialized before its first use and destroyed when it is no longer needed. The use of a condition variable also requires a mutex to control access to the variable(s) on which the condition being tested depends. A thread that needs to wait on a condition will lock the mutex and then call the function pthread_cond_wait with two arguments: the condition variable and the mutex. The mutex is released atomically as the thread begins waiting on the condition variable, so that other threads can lock the mutex either to wait on the same condition or to modify one or more of the variables, thereby changing the condition.
#include <stdio.h>
#include <stdlib.h>
#include <pthread.h>

const long NT = 2;
pthread_mutex_t mutex;
pthread_cond_t ticker;
int workload;

void* task(void* thread_id)
{
   long tnumber = (long) thread_id;
   if (tnumber == 0) {
      /* First thread: wait until the workload exceeds 25. */
      pthread_mutex_lock(&mutex);
      while(workload <= 25) {
         pthread_cond_wait(&ticker,&mutex);
      }
      printf("Thread %ld: incrementing workload by 15\n",1+tnumber);
      workload += 15;
      pthread_mutex_unlock(&mutex);
   }
   else {
      /* Second thread: increase the workload and signal when it passes 25. */
      int done = 0;
      do {
         pthread_mutex_lock(&mutex);
         workload += 3;
         printf("Thread %ld: current workload is %d\n",1+tnumber,workload);
         if (workload > 25) {
            done = 1;
            pthread_cond_signal(&ticker);
         }
         pthread_mutex_unlock(&mutex);
      } while(!done);
   }
   return NULL;
}

int main(int argc,char** argv)
{
   int success;
   long i;
   pthread_t threads[NT];

   if (argc < 2) {
      printf("Usage: %s initial_workload\n",argv[0]);
      return 1;
   }
   workload = atoi(argv[1]);
   if (workload > 25) {
      printf("Initial workload must be <= 25, exiting...\n");
      return 0;
   }
   pthread_mutex_init(&mutex,NULL);
   pthread_cond_init(&ticker,NULL);
   for(i=0; i<NT; ++i) {
      success = pthread_create(&threads[i],NULL,task,(void*)i);
      if (success != 0) {
         printf("ERROR: Unable to create worker thread %ld successfully\n",i);
         pthread_cond_destroy(&ticker);
         pthread_mutex_destroy(&mutex);
         return 1;
      }
   }
   for(i=0; i<NT; ++i) {
      pthread_join(threads[i],NULL);
   }
   printf("Final workload is %d\n",workload);
   pthread_cond_destroy(&ticker);
   pthread_mutex_destroy(&mutex);
   return 0;
}
In the above example we have two worker threads which modify the value of the integer workload, whose initial value is read from the command line and must be less than or equal to 25. The first thread locks the mutex and, finding workload <= 25, waits on the condition variable ticker, releasing the mutex as it begins to wait. The second thread can then run a loop that increments the value of workload by three at each iteration. After each increment the second thread checks whether the workload is greater than 25 and, when it is, calls pthread_cond_signal to alert the thread waiting on ticker that the condition is now satisfied. If there were more than one thread waiting on ticker we could instead use pthread_cond_broadcast to notify all waiting threads that the condition is satisfied. Once the condition is satisfied the second thread sets the exit flag for its loop, signals the first thread, releases the mutex, leaves the loop and returns from the function task. Meanwhile the first thread, having been woken up, increments workload by 15 and returns from the function task itself. After the worker threads have been joined, the master thread prints out the final value of workload and the program exits.
Further reading
For more information about pthreads, about the optional arguments to the various functions (the examples on this page use the default argument NULL), and about more advanced topics, we recommend David Butenhof's book Programming with POSIX Threads or the excellent tutorial from the Lawrence Livermore National Laboratory.