Parallel Programming Model

A Parallel Programming Model has to be chosen in order to do ProgrammingForParallelComputing.


From the page http://en.wikipedia.org/wiki/Parallel_computing:

A parallel programming model is a set of software technologies for expressing parallel algorithms and matching applications to the underlying parallel systems. It encompasses applications, languages, compilers, libraries, communication systems, and parallel I/O. Developers must choose a suitable parallel programming model, or a combination of models, to develop their parallel applications on a particular platform.


Much of the work in this area has been done in FortranLanguage, since that is the underlying language of many of the libraries. There is also an interface for CeePlusPlus, and some interesting software called the ObjectOrientedMessagePassingInterface (OOMPI) provides a class-based approach to the MessagePassingInterface. This makes programs much easier to write, since skillful use of member-function overloading can hide nearly all of the MPI parameters.
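
A minimal sketch of the idea, in C++ over the standard MPI C interface. The Port class here is a hypothetical illustration of the overloading technique, not OOMPI's actual interface:

 // Hypothetical sketch of hiding MPI parameters behind C++ overloading.
 #include <mpi.h>
 
 class Port {
     int rank;   // the other process
     int tag;    // fixed message tag for this sketch
 public:
     explicit Port(int r, int t = 0) : rank(r), tag(t) {}
     // Each overload supplies the MPI datatype, count, and communicator,
     // so the caller writes only the destination and the value.
     Port& operator<<(int v) {
         MPI_Send(&v, 1, MPI_INT, rank, tag, MPI_COMM_WORLD);
         return *this;
     }
     Port& operator<<(double v) {
         MPI_Send(&v, 1, MPI_DOUBLE, rank, tag, MPI_COMM_WORLD);
         return *this;
     }
     Port& operator>>(int& v) {
         MPI_Recv(&v, 1, MPI_INT, rank, tag, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
         return *this;
     }
 };
 
 int main(int argc, char** argv) {
     MPI_Init(&argc, &argv);
     int me;
     MPI_Comm_rank(MPI_COMM_WORLD, &me);
     if (me == 0) {
         Port(1) << 42;     // rank 0 sends an int to rank 1
     } else if (me == 1) {
         int x;
         Port(0) >> x;      // rank 1 receives it from rank 0
     }
     MPI_Finalize();
     return 0;
 }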

The low-level choices about how to implement a Parallel Programming Model are part of ParallelProgramming and DistributedComputing.

Higher-level choices involve choosing among implementations that provide a programming interface through which the user obtains the facilities needed for ParallelComputing, that is, for different tasks to run concurrently. Examples of these are:

If the task to be carried out involves LinearAlgebra, one route is to use ScaLapack, which hides most of the details of the InterProcessCommunication.
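
ScaLapack's own calling sequence is too involved to reproduce here, but the principle it illustrates, a numerical routine whose caller never touches the message passing, can be sketched with a hypothetical parallel dot product written over MPI:

 // Illustration of the principle only, not ScaLapack itself: the caller
 // sees an ordinary numerical routine; the communication is hidden inside.
 #include <mpi.h>
 #include <cstdio>
 #include <vector>
 
 // Each process holds one slice of each vector; the MPI_Allreduce that
 // combines the partial sums is invisible to the caller.
 double parallel_dot(const std::vector<double>& x,
                     const std::vector<double>& y) {
     double local = 0.0;
     for (std::size_t i = 0; i < x.size(); ++i)
         local += x[i] * y[i];
     double global = 0.0;
     MPI_Allreduce(&local, &global, 1, MPI_DOUBLE, MPI_SUM, MPI_COMM_WORLD);
     return global;
 }
 
 int main(int argc, char** argv) {
     MPI_Init(&argc, &argv);
     std::vector<double> x(1000, 1.0), y(1000, 2.0);  // local slices
     double d = parallel_dot(x, y);                   // no MPI visible here
     int me;
     MPI_Comm_rank(MPI_COMM_WORLD, &me);
     if (me == 0) std::printf("dot = %g\n", d);
     MPI_Finalize();
     return 0;
 }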


An alternative is to provide an extension to a language to permit concurrency.
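
One widely known example of this approach, offered here purely as an illustration, is OpenMP, where a compiler directive marks a loop as parallel (compile with, e.g., g++ -fopenmp; without OpenMP the directive is simply ignored):

 // OpenMP: concurrency expressed as an extension to C++ itself.
 #include <cstdio>
 
 int main() {
     const int n = 1000;
     double sum = 0.0;
     // The directive asks the compiler to split the iterations across
     // threads and combine the per-thread partial sums.
     #pragma omp parallel for reduction(+:sum)
     for (int i = 0; i < n; ++i)
         sum += 1.0 / (i + 1);
     std::printf("sum = %f\n", sum);
     return 0;
 }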


Books


Job Classification

SingleProgramMultipleData (SPMD)
The same program runs on multiple processors, although not necessarily in step with one another (example: SETI@Home). A minimal code sketch follows this list.

MultipleProgramMultipleData (MPMD)
Different programs run on different processors.

SingleInstructionMultipleData (SIMD)
The exact same instruction stream is fed simultaneously to all processors. Each instruction is often conditional on flags in the processor, so an individual processor may or may not execute it. The ConnectionMachines CM-1 and CM-2 and the Goodyear MPP are examples of this.

MultipleInstructionMultipleData (MIMD)
Either SPMD or MPMD, depending on precise usage.
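
The SPMD sketch promised above, in C++ over MPI: every process runs the same executable and decides what to do from its rank. This is a generic illustration, not code from any of the systems named.

 // SPMD: one program, many processes; behaviour differs only by rank.
 #include <mpi.h>
 #include <cstdio>
 
 int main(int argc, char** argv) {
     MPI_Init(&argc, &argv);
     int rank, size;
     MPI_Comm_rank(MPI_COMM_WORLD, &rank);
     MPI_Comm_size(MPI_COMM_WORLD, &size);
     if (rank == 0) {
         // The "master" branch: coordinates the others.
         std::printf("coordinator: %d processes in total\n", size);
     } else {
         // The "worker" branch: all other ranks do the computation.
         std::printf("worker %d of %d starting\n", rank, size);
     }
     MPI_Finalize();
     return 0;
 }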


Practical Experience

ParallelVirtualMachine (pvm)

A few years ago this was the de facto standard, but it is now being replaced by the MessagePassingInterface. I have used it on a cluster of PCs to run programs using ScaLapack. The user had to nominate the computers to be used for each run, either at run time or in a configuration file. It is possible to run both SPMD and MPMD jobs, and to spawn extra tasks while a job is running.
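
For flavour, a minimal sketch of the PVM spawn-and-message style in C++ (the executable name "worker" and the message tag are placeholders, and error handling is omitted):

 // Minimal PVM sketch: the parent spawns tasks and messages them.
 #include <pvm3.h>
 #include <cstdio>
 
 int main() {
     int mytid = pvm_mytid();              // enrol this process in PVM
     if (pvm_parent() == PvmNoParent) {
         int tids[4];
         // Spawn four copies of the worker; PVM picks the hosts
         // from the configured virtual machine.
         int n = pvm_spawn(const_cast<char*>("worker"), nullptr,
                           PvmTaskDefault, nullptr, 4, tids);
         for (int i = 0; i < n; ++i) {
             pvm_initsend(PvmDataDefault); // start a packed message
             pvm_pkint(&i, 1, 1);          // pack one int
             pvm_send(tids[i], 1);         // send with message tag 1
         }
     } else {
         int work;
         pvm_recv(pvm_parent(), 1);        // wait for tag 1 from parent
         pvm_upkint(&work, 1, 1);          // unpack the int
         std::printf("task %d got %d\n", mytid, work);
     }
     pvm_exit();                           // leave the virtual machine
     return 0;
 }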

MessagePassingInterface (mpi)

There are a number of different implementations of the MPI standard. The common one on Linux systems is MPICH, although there is also LAM MPI. MPI is often used on systems with a batch scheduler, to run SPMD jobs that need a number of processors known in advance.
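
A sketch of that style of use: the process count is fixed when the job is launched (the mpirun line in the comment is one common convention and varies between batch systems), and each rank takes its share of the work:

 // Typical SPMD job: the processor count is set at launch time,
 // e.g. with something like
 //   mpirun -np 8 ./solver
 // and the program reads it back and partitions the work accordingly.
 #include <mpi.h>
 #include <cstdio>
 
 int main(int argc, char** argv) {
     MPI_Init(&argc, &argv);
     int rank, nprocs;
     MPI_Comm_rank(MPI_COMM_WORLD, &rank);
     MPI_Comm_size(MPI_COMM_WORLD, &nprocs); // known in advance
 
     const int n = 1000000;                  // global problem size
     int chunk = (n + nprocs - 1) / nprocs;  // ceiling division
     int lo = rank * chunk;
     int hi = lo + chunk < n ? lo + chunk : n;
 
     double local = 0.0;
     for (int i = lo; i < hi; ++i)           // this rank's share only
         local += 1.0 / (i + 1);
     double total = 0.0;
     MPI_Reduce(&local, &total, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);
     if (rank == 0) std::printf("harmonic sum = %f\n", total);
     MPI_Finalize();
     return 0;
 }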

Changing from PVM to MPI

This has been quite easy for SPMD programs written in FortranLanguage and using ScaLapack. A few changes were required in the start-up and shut-down sections, but the main calls were unchanged. The makefile needed adjusting to link the different libraries.
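
A sketch of what those start-up and shut-down changes look like, written in C++ for consistency with the other sketches on this page (the programs described above were Fortran); the computational core is untouched either way:

 // Moving from PVM to MPI mostly changes the start-up and shut-down.
 // Compile with -DUSE_MPI for the MPI version.
 #include <cstdio>
 #ifdef USE_MPI
 #include <mpi.h>
 #else
 #include <pvm3.h>
 #endif
 
 int main(int argc, char** argv) {
 #ifdef USE_MPI
     MPI_Init(&argc, &argv);                // MPI start-up
     int id;
     MPI_Comm_rank(MPI_COMM_WORLD, &id);    // 0-based rank
 #else
     int id = pvm_mytid();                  // PVM start-up: enrol, get task id
 #endif
 
     std::printf("process %d doing the real work\n", id);  // unchanged core
 
 #ifdef USE_MPI
     MPI_Finalize();                        // MPI shut-down
 #else
     pvm_exit();                            // PVM shut-down
 #endif
     return 0;
 }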


See also ParallelProgrammingDiscussion


CategoryProgrammingLanguage

