== A Primer on Parallel Programming ==
{{quote|To pull a bigger wagon it is easier to add more oxen than to find (or build) a bigger ox.|Gropp, Lusk & Skjellum|Using MPI}}
To build a house as quickly as possible, we do not look for the fastest person to do all the work; instead we hire as many people as required and spread the work among them so that tasks are performed at the same time --- "in parallel". Computational problems are conceptually similar. Since there is a limit to how fast a single machine can compute, we attempt to divide up the problem at hand and assign work to multiple computers to be completed in parallel. This approach is important not only for speeding up computations but also for tackling problems that require large amounts of memory.
The most significant concept to master in designing and building parallel applications is ''communication''. Complexity arises due to communication requirements. In order for multiple workers to accomplish a task in parallel, they need to be able to communicate with one another. In the context of software, we have many processes each working on part of a solution, needing values that were computed---or are yet to be computed!---by other processes.
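
To make this explicit communication concrete, here is a minimal sketch in C using MPI, the library cited in the quote above. The choice of ranks and the value being sent are illustrative assumptions, not part of the primer: one process computes a value and sends it to a second process, which must wait until the message arrives.

<syntaxhighlight lang="c">
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);

    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank == 0) {
        /* Rank 0 computes a value and sends it to rank 1. */
        double value = 3.14159;  /* illustrative value */
        MPI_Send(&value, 1, MPI_DOUBLE, 1, 0, MPI_COMM_WORLD);
    } else if (rank == 1) {
        /* Rank 1 blocks here until the value arrives:
           the communication is explicit on both sides. */
        double value;
        MPI_Recv(&value, 1, MPI_DOUBLE, 0, 0, MPI_COMM_WORLD,
                 MPI_STATUS_IGNORE);
        printf("Rank 1 received %f from rank 0\n", value);
    }

    MPI_Finalize();
    return 0;
}
</syntaxhighlight>

Run with at least two processes, e.g. <code>mpirun -np 2 ./a.out</code>.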
There are two major models of computational parallelism: shared memory and distributed memory.
In shared memory parallelism (commonly and casually abbreviated SMP), all processors see the same memory image, or to put it another way, all memory is globally addressable and all the processes can ultimately access it. Communication between processes on an SMP machine is implicit --- any process can read and write values to memory that can subsequently be accessed and manipulated directly by others. The challenge in writing these kinds of programs is data consistency: one should take extra care to ensure data is not modified by more than one process at a time.
[[Image:Smp.png|frame|center|'''Figure 1''': ''A conceptual picture of a shared memory architecture'']]
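
As a small illustration of the consistency problem, here is a sketch in C using POSIX threads; the shared counter, the thread count, and the iteration count are arbitrary choices for the example. Two threads increment a shared variable, and a mutex ensures that only one of them modifies it at a time.

<syntaxhighlight lang="c">
#include <pthread.h>
#include <stdio.h>

/* Shared state: in the SMP model, globally addressable by every thread. */
static long counter = 0;
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

static void *work(void *arg) {
    for (int i = 0; i < 100000; i++) {
        /* Without the lock, both threads could read the same old
           value of counter and one increment would be lost. */
        pthread_mutex_lock(&lock);
        counter++;
        pthread_mutex_unlock(&lock);
    }
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, work, NULL);
    pthread_create(&t2, NULL, work, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("counter = %ld\n", counter);  /* 200000 with the lock held */
    return 0;
}
</syntaxhighlight>

If the lock is removed, the final count typically falls short of 200000, which is exactly the kind of silent corruption the paragraph above warns about.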