Parallel Programming - Using MPI

MPI is a message passing programming model standard[2]. It defines the Terms/Concepts, Data Structures, and Function Signatures used to pass messages among computer processes.

1. Terms and Concepts

An MPI application consists of multiple Processes (Tasks), each with a unique identifier called a Rank. A Process belongs to one or more Groups and Communicators. Processes don't share memory or state; the only way to communicate is by sending/receiving messages.

Programming Patterns - Usually, multiple Processes are needed to accomplish a task, and there are some popular patterns for how the processes cooperate:
- Peer to Peer (a.k.a. Work Crew) Model: each process behaves as an equal peer
- Master/Slave (a.k.a. Master/Worker) Model: one process acts as the coordinator, while the others act as the common labor force (see the sketch after this list)
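
Below is a minimal sketch of the Master/Worker pattern; the integer "task" payload and the dummy work done by each worker are illustrative assumptions, not anything defined by MPI.

#include "mpi.h"
#include <stdio.h>

/* Master/Worker sketch: rank 0 hands one integer "task" to each worker
   and collects one integer "result" back from each of them. */
int main(int argc, char** argv)
{
    int rank, size;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    if (rank == 0)                      /* master: the coordinator          */
    {
        for (int w = 1; w < size; ++w)
        {
            int task = w * 100;         /* dummy work item                  */
            MPI_Send(&task, 1, MPI_INT, w, 0, MPI_COMM_WORLD);
        }
        for (int w = 1; w < size; ++w)
        {
            int result;
            MPI_Recv(&result, 1, MPI_INT, w, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            printf("result from worker %d: %d\n", w, result);
        }
    }
    else                                /* worker: common labor force       */
    {
        int task, result;
        MPI_Recv(&task, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        result = task + rank;           /* pretend to do some work          */
        MPI_Send(&result, 1, MPI_INT, 0, 0, MPI_COMM_WORLD);
    }

    MPI_Finalize();
    return 0;
}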

2. MPI APIs

MPI APIs can be categorized as:
- Environment Management
- Data Type
- Point-to-Point Communication
- Collective Communication
- One Sided Communication
- Process (Topology, Creation and Management)
- Parallel I/O

Point-to-Point communication occurs between two Processes, Collective communication involves all Processes in a communicator, and One-Sided communication needs only one Process's active participation.
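
For instance, here is a minimal sketch of One-Sided communication; it assumes at least two processes are started, and the value written is arbitrary.

#include "mpi.h"
#include <stdio.h>

/* One-sided sketch: every rank exposes one int through a window, and
   rank 0 puts a value directly into rank 1's window without rank 1
   issuing a matching receive. */
int main(int argc, char** argv)
{
    int rank, local = -1;
    MPI_Win win;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    MPI_Win_create(&local, sizeof(int), sizeof(int),
                   MPI_INFO_NULL, MPI_COMM_WORLD, &win);

    MPI_Win_fence(0, win);                  /* open an access epoch          */
    if (rank == 0)
    {
        int value = 42;
        MPI_Put(&value, 1, MPI_INT, 1 /* target rank */, 0, 1, MPI_INT, win);
    }
    MPI_Win_fence(0, win);                  /* close the epoch; data visible */

    if (rank == 1)
        printf("rank 1 received %d via MPI_Put\n", local);

    MPI_Win_free(&win);
    MPI_Finalize();
    return 0;
}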

3. Collective Communication Patterns

Typical collective communication semantics are somewhat hard to describe in words; diagrams (mainly from Rusty Lusk) illustrate the semantics of each primitive well.
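
A small code sketch also gives the flavor of several common collectives; it assumes exactly 4 processes so that one element per rank matches the 4-element root buffer.

#include "mpi.h"
#include <stdio.h>

/* Demo of common collective primitives: Bcast, Scatter, Gather, Reduce. */
int main(int argc, char** argv)
{
    int rank, size;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    if (size != 4)                          /* this sketch assumes 4 ranks   */
    {
        if (rank == 0) printf("run with 4 processes\n");
        MPI_Finalize();
        return 1;
    }

    int root_data[4] = {10, 20, 30, 40};
    int my_piece, my_square, gathered[4], sum;

    /* Broadcast: the root's buffer is copied to every rank.                 */
    MPI_Bcast(root_data, 4, MPI_INT, 0, MPI_COMM_WORLD);

    /* Scatter: the root hands one element to each rank (including itself).  */
    MPI_Scatter(root_data, 1, MPI_INT, &my_piece, 1, MPI_INT, 0, MPI_COMM_WORLD);

    my_square = my_piece * my_piece;        /* purely local computation      */

    /* Gather: the root collects one element back from each rank.            */
    MPI_Gather(&my_square, 1, MPI_INT, gathered, 1, MPI_INT, 0, MPI_COMM_WORLD);

    /* Reduce: combine one value per rank with an operation (here, a sum).   */
    MPI_Reduce(&my_square, &sum, 1, MPI_INT, MPI_SUM, 0, MPI_COMM_WORLD);

    if (rank == 0)
        printf("sum of squares = %d\n", sum);

    MPI_Finalize();
    return 0;
}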

4. Typical MPI Program Structure

#include "mpi.h"

int main(int argc, char** argv)
{
    int nProc;                                 /* total number of processes       */
    int nThisRank;                             /* rank (id) of this process       */

    MPI_Init(&argc, &argv);                    /* set up the MPI environment      */
    MPI_Comm_size(MPI_COMM_WORLD, &nProc);
    MPI_Comm_rank(MPI_COMM_WORLD, &nThisRank);

    if (nThisRank == 0)
    {
        //Core App Logic (e.g. the coordinator's role)
    }
    else
    {
        //Core App Logic (e.g. a worker's role)
    }

    MPI_Finalize();                            /* shut down MPI before exiting    */
    return 0;
}
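
With MPICH2 or Open MPI, such a program is typically compiled with the mpicc wrapper and launched with something like mpiexec -n 4 ./app, where -n controls how many processes (ranks) are started; the exact launcher name may differ by implementation.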

I have written some non-trivial MPI applications:
- A demo of various collective communication semantics
- Parallel Numeric PI Calculation (a minimal sketch appears below)
- Map-Reduce-Merge over MPI

You can try the code to get hands-on experience.
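
As a flavor of the PI example above, here is a minimal sketch of the classic numeric-integration approach; the interval count and the strided work split are arbitrary choices for illustration.

#include "mpi.h"
#include <stdio.h>

/* Approximate PI by the midpoint rule on the integral of 4/(1+x^2) over
   [0,1], with the intervals divided among the ranks in a strided fashion. */
int main(int argc, char** argv)
{
    const long N = 1000000;                 /* number of intervals           */
    int rank, size;
    double local = 0.0, pi = 0.0, h;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    h = 1.0 / (double)N;
    for (long i = rank; i < N; i += size)   /* each rank takes every size-th slice */
    {
        double x = h * ((double)i + 0.5);
        local += 4.0 / (1.0 + x * x);
    }
    local *= h;

    /* Combine all partial sums onto rank 0. */
    MPI_Reduce(&local, &pi, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);

    if (rank == 0)
        printf("pi is approximately %.12f\n", pi);

    MPI_Finalize();
    return 0;
}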

5. Combine Message Passing with Threading

Typical MPI applications run one process per core on each node. Since all cores on a node share the same physical memory, we can use a combined model: one process per node, with multiple threads per process (OpenMP, Pthreads, etc.) on each node, i.e.:
- Threading within one node
- Message Passing across nodes
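
A minimal hybrid sketch is below; it assumes an OpenMP-capable build (e.g. mpicc -fopenmp), and the loop body is just dummy work.

#include "mpi.h"
#include <omp.h>
#include <stdio.h>

/* Hybrid sketch: one MPI process per node, several OpenMP threads inside
   each process. MPI_Init_thread requests a thread-support level. */
int main(int argc, char** argv)
{
    int rank, provided;
    double local_sum = 0.0, total = 0.0;

    /* MPI_THREAD_FUNNELED: only the main thread will make MPI calls. */
    MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    /* Threading within one node: an OpenMP parallel loop. */
    #pragma omp parallel for reduction(+:local_sum)
    for (int i = 0; i < 1000; ++i)
        local_sum += i * 0.001;

    /* Message passing across nodes: combine the per-process results. */
    MPI_Reduce(&local_sum, &total, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);

    if (rank == 0)
        printf("total = %f (provided thread level = %d)\n", total, provided);

    MPI_Finalize();
    return 0;
}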

Notes on MPI programming:

1. MPI's network facilities are message oriented, not connection/stream oriented (as in TCP socket programming), so before receiving a message you should know how large it will be; otherwise the message may be truncated.
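
One way around this is to probe before receiving. A minimal sketch, assuming two processes and an arbitrary 5-int message:

#include "mpi.h"
#include <stdio.h>
#include <stdlib.h>

/* Receive a message of unknown length: probe it first, ask how many
   elements it holds, then allocate a buffer and receive. */
int main(int argc, char** argv)
{
    int rank;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank == 0)
    {
        int data[5] = {1, 2, 3, 4, 5};
        MPI_Send(data, 5, MPI_INT, 1, 0, MPI_COMM_WORLD);
    }
    else if (rank == 1)
    {
        MPI_Status status;
        int count;

        MPI_Probe(0, 0, MPI_COMM_WORLD, &status);   /* peek at the pending message  */
        MPI_Get_count(&status, MPI_INT, &count);    /* how many MPI_INTs it carries */

        int* buf = (int*)malloc(count * sizeof(int));
        MPI_Recv(buf, count, MPI_INT, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        printf("received %d ints\n", count);
        free(buf);
    }

    MPI_Finalize();
    return 0;
}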

2. MPI can handle communication with both fixed-length and variable-length data. For variable-length data, use MPI_Pack/MPI_Unpack and the MPI_xxxv functions (for instance, MPI_Scatterv, MPI_Gatherv, MPI_Allgatherv, MPI_Alltoallv).
(MPI.NET's source contains code that leverages these *v functions; it's a good example for learning MPI programming.)
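
A minimal MPI_Gatherv sketch is below; the per-rank counts are made up for illustration, and the fixed-size buffers assume at most 8 processes.

#include "mpi.h"
#include <stdio.h>

/* MPI_Gatherv: rank r contributes r+1 integers, so the root must be told
   each rank's count and where each contribution lands (displacement). */
int main(int argc, char** argv)
{
    int rank, size;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    int mycount = rank + 1;                 /* variable-length contribution  */
    int mydata[8];
    for (int i = 0; i < mycount; ++i)
        mydata[i] = rank;

    int counts[8], displs[8], recvbuf[64];
    for (int r = 0, off = 0; r < size; ++r)
    {
        counts[r] = r + 1;                  /* how much each rank sends      */
        displs[r] = off;                    /* where it lands in recvbuf     */
        off += counts[r];
    }

    MPI_Gatherv(mydata, mycount, MPI_INT,
                recvbuf, counts, displs, MPI_INT, 0, MPI_COMM_WORLD);

    if (rank == 0)
    {
        int total = displs[size - 1] + counts[size - 1];
        for (int i = 0; i < total; ++i)
            printf("%d ", recvbuf[i]);
        printf("\n");
    }

    MPI_Finalize();
    return 0;
}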

[Reference]

0. MPI Official Site
1. MPI V2.2 Official Standard
2. Message Passing Interface on Wikipedia

3. MPI on Multicore (PPT)
4. MPI and OpenMP Hybrid Model
5. Combining MPI and OpenMP

Tutorials

11. Parallel Computing Introduction
12. MPI Tutorials @ LLNL (Exercises)
13. Tutorial on MPI by William Gropp (Exercises)
14. C++ MPI Exercises by John Burkardt
15. MPI Hands-On Tutorial by OSC
16. Tutorial on OpenMP and MPI

17. Book online: MPI The Complete Reference
18. Book: Parallel Programming With MPI

19. Clear Message Passing Concept Explanation

MS-MPI/MPI.NET

20. Windows HPC Server 2008 - Using MS-MPI whitepaper
21. Using Microsoft MPI
22. MPI.NET Tutorial

Implementations

- Native
MPICH2 - an MPI implementation
Open MPI (successor to LAM/MPI and other projects)
MPI Implementation List (List 1, List 2)

- .Net
Pure MPI.NET (implemented entirely in .NET)
MPI.NET (a .NET wrapper around a native MPI library)

- Java
Java-MPI binding list
mpiJava
MPJ Express

- Python
PyMPI
MPI4Py

- Matlab
MatlabMPI