Parallel Patterns Library (PPL)

The Parallel Patterns Library (PPL) provides an imperative programming model that promotes scalability and ease of use for developing concurrent applications. The PPL builds on the scheduling and resource management components of the Concurrency Runtime. It raises the level of abstraction between your application code and the underlying threading mechanism by providing generic, type-safe algorithms and containers that act on data in parallel. The PPL also lets you develop applications that scale by providing alternatives to shared state.

The PPL provides the following features:

  • Task parallelism: a mechanism that works on top of the Windows ThreadPool to execute several work items (tasks) in parallel (a minimal sketch follows this list)

  • Parallel algorithms: generic algorithms that work on top of the Concurrency Runtime to act on collections of data in parallel

  • Parallel containers and objects: generic container types that provide safe concurrent access to their elements
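
The following minimal sketch is added here for illustration and is not part of the article's main example. It shows the task parallelism feature by using the concurrency::task_group class to run two independent work items in parallel; the work items themselves are placeholder assumptions.

// parallel-tasks-sketch.cpp
// compile with: /EHsc
#include <ppl.h>
#include <iostream>

using namespace concurrency;

int wmain()
{
   task_group tasks;
   int left = 0;
   int right = 0;

   // Each call to run schedules a work item (task) that can execute in
   // parallel with the other on top of the Windows ThreadPool.
   tasks.run([&] { left = 21; });    // placeholder work item
   tasks.run([&] { right = 21; });   // placeholder work item

   // wait blocks until both tasks finish before execution continues.
   tasks.wait();

   std::wcout << left + right << std::endl;   // prints 42
}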

Example

The PPL provides a programming model that resembles the C++ Standard Library. The following example demonstrates many features of the PPL. It computes several Fibonacci numbers serially and in parallel. Both computations act on a std::array object. The example also prints to the console the time that is required to perform both computations.

The serial version uses the C++ Standard Library std::for_each algorithm to traverse the array and stores the results in a std::vector object. The parallel version performs the same task, but uses the PPL concurrency::parallel_for_each algorithm and stores the results in a concurrency::concurrent_vector object. The concurrent_vector class enables each loop iteration to add elements concurrently without the requirement to synchronize write access to the container.

Because parallel_for_each acts concurrently, the parallel version of this example must sort the concurrent_vector object to produce the same results as the serial version.

Note that the example uses a naïve method to compute the Fibonacci numbers; however, this method illustrates how the Concurrency Runtime can improve the performance of long computations.

// parallel-fibonacci.cpp
// compile with: /EHsc
#include <windows.h>
#include <ppl.h>
#include <concurrent_vector.h>
#include <array>
#include <vector>
#include <tuple>
#include <algorithm>
#include <iostream>

using namespace concurrency;
using namespace std;

// Calls the provided work function and returns the number of milliseconds 
// that it takes to call that function.
template <class Function>
__int64 time_call(Function&& f)
{
   __int64 begin = GetTickCount();
   f();
   return GetTickCount() - begin;
}

// Computes the nth Fibonacci number.
int fibonacci(int n)
{
   if(n < 2)
      return n;
   return fibonacci(n-1) + fibonacci(n-2);
}

int wmain()
{
   __int64 elapsed;

   // An array of Fibonacci numbers to compute.
   array<int, 4> a = { 24, 26, 41, 42 };

   // The results of the serial computation.
   vector<tuple<int,int>> results1;

   // The results of the parallel computation.
   concurrent_vector<tuple<int,int>> results2;

   // Use the for_each algorithm to compute the results serially.
   elapsed = time_call([&] 
   {
      for_each (begin(a), end(a), [&](int n) {
         results1.push_back(make_tuple(n, fibonacci(n)));
      });
   });   
   wcout << L"serial time: " << elapsed << L" ms" << endl;
   
   // Use the parallel_for_each algorithm to perform the same task.
   elapsed = time_call([&] 
   {
      parallel_for_each (begin(a), end(a), [&](int n) {
         results2.push_back(make_tuple(n, fibonacci(n)));
      });

      // Because parallel_for_each acts concurrently, the results do not 
      // have a pre-determined order. Sort the concurrent_vector object
      // so that the results match the serial version.
      sort(begin(results2), end(results2));
   });   
   wcout << L"parallel time: " << elapsed << L" ms" << endl << endl;

   // Print the results.
   for_each (begin(results2), end(results2), [](tuple<int,int>& pair) {
      wcout << L"fib(" << get<0>(pair) << L"): " << get<1>(pair) << endl;
   });
}

The following sample output is for a computer that has four processors.

serial time: 9250 ms
parallel time: 5726 ms

fib(24): 46368
fib(26): 121393
fib(41): 165580141
fib(42): 267914296

Each iteration of the loop requires a different amount of time to finish. The performance of parallel_for_each is bounded by the operation that finishes last. Therefore, you should not expect linear performance improvements between the serial and parallel versions of this example.
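
As a rough back-of-the-envelope model (an illustration added here, not part of the original sample), let t(n) be the time the naïve fibonacci function needs for input n. The recursion makes on the order of φ^n calls (φ ≈ 1.618), so t(42) ≈ 1.6 × t(41), and the two largest inputs dominate the rest:

   serial time   ≈ t(24) + t(26) + t(41) + t(42) ≈ t(41) + t(42)
   parallel time ≈ max(t(24), t(26), t(41), t(42)) ≈ t(42)

Under this model the best possible speedup for these four inputs is about (t(41) + t(42)) / t(42) ≈ 1 + 1/φ ≈ 1.6, regardless of how many processors are available, which is consistent with the sample timings above (9250 ms / 5726 ms ≈ 1.6).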

Related topics:

  • Task Parallelism: Describes the role of tasks and task groups in the PPL.
  • Parallel Algorithms: Describes how to use parallel algorithms such as parallel_for and parallel_for_each.
  • Parallel Containers and Objects: Describes the various parallel containers and objects that are provided by the PPL.
  • Cancellation in the PPL: Explains how to cancel the work that is being performed by a parallel algorithm.
  • Concurrency Runtime: Describes the Concurrency Runtime, which simplifies parallel programming, and contains links to related topics.