by Douglas Eadline
The advent of multi-core processors has increased the need for parallel programs on everything from the largest to the smallest of systems (clusters to laptops). There are many ways to express parallelism in a program. In HPC, MPI (Message Passing Interface) has been the main tool for most programmers. MPI is often talked about as though it were a computer language in its own right. In reality, MPI is an API (Application Programming Interface), a programming library that allows Fortran and C (and sometimes C++) programs to send messages to each other.
Another method of expressing parallelism is OpenMP. Unlike MPI, OpenMP is not a library, but an extension to the compiler. To use OpenMP, the programmer adds "pragmas" (compiler directives; in Fortran they are written as specially formatted comments) to the program, which the compiler uses as hints. The resulting program uses operating system threads to run in parallel. Operating system threads can be thought of as separate subroutines running at the same time while sharing the same memory space. Beyond the fact that "MP" appears in both names, there is often some confusion about how each of these parallel paradigms works and where and when each should be applied. This article explains the differences and provides a better understanding of these two powerful technologies.