Parallel Patterns Library (PPL)
The Parallel Patterns Library (PPL) provides an imperative programming model that promotes scalability and ease of use for developing concurrent applications. The PPL builds on the scheduling and resource management components of the Concurrency Runtime. It raises the level of abstraction between your application code and the underlying threading mechanism by providing generic, type-safe algorithms and containers that act on data in parallel. The PPL also lets you develop applications that scale by providing alternatives to shared state.
The task class and related types that are defined in ppltasks.h are portable across platforms. The parallel algorithms and containers are not portable.
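For example, a minimal sketch of the task type might look like the following. (This sketch is illustrative only; the lambda, its return value, and the continuation are not part of this article's example.)
// compile with: /EHsc
#include <ppltasks.h>
#include <iostream>

int wmain()
{
   // Create a task that runs asynchronously and attach a continuation
   // that consumes its result.
   auto t = concurrency::create_task([] { return 42; })
      .then([](int n) { std::wcout << L"result: " << n << std::endl; });

   // Wait for the task chain to finish before wmain exits.
   t.wait();
}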
The PPL provides the following features:
Task Parallelism: a mechanism to execute several work items (tasks) in parallel
Parallel algorithms: generic algorithms that act on collections of data in parallel
Parallel containers and objects: generic container types that provide safe concurrent access to their elements
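The following minimal sketch touches each of these features in turn. The workload is hypothetical and chosen only to keep the code short; task_group, parallel_for, and the combinable object are all declared in ppl.h.
// compile with: /EHsc
#include <ppl.h>
#include <functional>
#include <iostream>

int wmain()
{
   using namespace concurrency;

   // Task parallelism: run two independent work items on a task group
   // and wait for both to finish.
   int a = 0, b = 0;
   task_group tasks;
   tasks.run([&] { a = 21; });
   tasks.run([&] { b = 21; });
   tasks.wait();
   std::wcout << L"a + b = " << (a + b) << std::endl;

   // Parallel algorithm plus a concurrency-safe object: parallel_for
   // applies the lambda to the range [0, 10) in parallel, and combinable
   // keeps a per-thread partial sum so that no shared state needs a lock.
   combinable<int> sum;
   parallel_for(0, 10, [&](int i) { sum.local() += i; });
   std::wcout << L"sum: " << sum.combine(std::plus<int>()) << std::endl;
}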
Example
The PPL provides a programming model that resembles the Standard Template Library (STL). The following example demonstrates many features of the PPL. It computes several Fibonacci numbers serially and in parallel. Both computations act on a std::array object. The example also prints to the console the time that is required to perform both computations.
The serial version uses the STL std::for_each algorithm to traverse the array and stores the results in a std::vector object. The parallel version performs the same task, but uses the PPL concurrency::parallel_for_each algorithm and stores the results in a concurrency::concurrent_vector object. The concurrent_vector class enables each loop iteration to concurrently add elements without the requirement to synchronize write access to the container.
Because parallel_for_each acts concurrently, the parallel version of this example must sort the concurrent_vector object to produce the same results as the serial version.
Although the example uses a naïve method to compute the Fibonacci numbers, this method is intended to illustrate how the Concurrency Runtime can improve the performance of long computations.
// parallel-fibonacci.cpp
// compile with: /EHsc
#include <windows.h>
#include <ppl.h>
#include <concurrent_vector.h>
#include <array>
#include <vector>
#include <tuple>
#include <algorithm>
#include <iostream>

using namespace concurrency;
using namespace std;

// Calls the provided work function and returns the number of milliseconds
// that it takes to call that function.
template <class Function>
__int64 time_call(Function&& f)
{
   __int64 begin = GetTickCount();
   f();
   return GetTickCount() - begin;
}

// Computes the nth Fibonacci number.
int fibonacci(int n)
{
   if (n < 2)
      return n;
   return fibonacci(n-1) + fibonacci(n-2);
}

int wmain()
{
   __int64 elapsed;

   // An array of Fibonacci numbers to compute.
   array<int, 4> a = { 24, 26, 41, 42 };

   // The results of the serial computation.
   vector<tuple<int,int>> results1;

   // The results of the parallel computation.
   concurrent_vector<tuple<int,int>> results2;

   // Use the for_each algorithm to compute the results serially.
   elapsed = time_call([&]
   {
      for_each (begin(a), end(a), [&](int n) {
         results1.push_back(make_tuple(n, fibonacci(n)));
      });
   });
   wcout << L"serial time: " << elapsed << L" ms" << endl;

   // Use the parallel_for_each algorithm to perform the same task.
   elapsed = time_call([&]
   {
      parallel_for_each (begin(a), end(a), [&](int n) {
         results2.push_back(make_tuple(n, fibonacci(n)));
      });

      // Because parallel_for_each acts concurrently, the results do not
      // have a predetermined order. Sort the concurrent_vector object
      // so that the results match the serial version.
      sort(begin(results2), end(results2));
   });
   wcout << L"parallel time: " << elapsed << L" ms" << endl << endl;

   // Print the results.
   for_each (begin(results2), end(results2), [](tuple<int,int>& pair) {
      wcout << L"fib(" << get<0>(pair) << L"): " << get<1>(pair) << endl;
   });
}
The following sample output is for a computer that has four processors.
serial time: 9250 ms
parallel time: 5726 ms

fib(24): 46368
fib(26): 121393
fib(41): 165580141
fib(42): 267914296
Each iteration of the loop requires a different amount of time to finish. The performance of parallel_for_each is bounded by the operation that finishes last. Therefore, you should not expect linear performance improvements between the serial and parallel versions of this example.
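When there are more work items than processors, one common mitigation is to issue the most expensive iterations first so that cheaper ones backfill otherwise idle processors. The following sketch, which is a hypothetical variation on the example rather than part of it, assumes that the relative cost of each item is known in advance.
// compile with: /EHsc
#include <ppl.h>
#include <concurrent_vector.h>
#include <algorithm>
#include <array>
#include <functional>
#include <iostream>
#include <tuple>

// The same naive implementation that the example uses.
int fibonacci(int n)
{
   if (n < 2)
      return n;
   return fibonacci(n-1) + fibonacci(n-2);
}

int wmain()
{
   std::array<int, 4> a = { 24, 26, 41, 42 };

   // Sort descending: fib(42) and fib(41) dominate the total time, so
   // issuing them first reduces the time that the last-finishing
   // operation adds to the overall run.
   std::sort(std::begin(a), std::end(a), std::greater<int>());

   concurrency::concurrent_vector<std::tuple<int, int>> results;
   concurrency::parallel_for_each(std::begin(a), std::end(a), [&](int n) {
      results.push_back(std::make_tuple(n, fibonacci(n)));
   });

   for (auto& pair : results)
      std::wcout << L"fib(" << std::get<0>(pair) << L"): "
                 << std::get<1>(pair) << std::endl;
}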
Related Topics
Task Parallelism: Describes the role of tasks and task groups in the PPL.
Parallel Algorithms: Describes how to use parallel algorithms such as parallel_for and parallel_for_each.
Parallel Containers and Objects: Describes the various parallel containers and objects that are provided by the PPL.
Cancellation in the PPL: Explains how to cancel the work that is being performed by a parallel algorithm.
Concurrency Runtime: Describes the Concurrency Runtime, which simplifies parallel programming, and contains links to related topics.