MPI_Iallreduce function

Combines values from all processes and distributes the result back to all processes in a non-blocking way.

Syntax

int MPIAPI MPI_Iallreduce(
  _In_opt_  const void         *sendbuf,
  _Out_opt_       void         *recvbuf,
  _In_            int          count,
  _In_            MPI_Datatype datatype,
  _In_            MPI_Op       op,
  _In_            MPI_Comm     comm,
  _Out_           MPI_Request  *request
);

Parameters

  • sendbuf [in, optional]
    The pointer to the buffer that contains this process's contribution to the reduction operation. The number and data type of the elements in the buffer are specified in the count and datatype parameters.

    If the comm parameter references an intracommunicator, you can specify an in-place option by specifying MPI_IN_PLACE in all processes. In this case, the input data is taken at each process from the receive buffer, where it will be replaced by the output data. A sketch of this usage follows the parameter list.

  • recvbuf [out, optional]
    The pointer to a buffer to receive the result of the reduction operation.

  • count [in]
    The number of elements to send from this process.

  • datatype [in]
    The data type of each element in the buffer. This parameter must be compatible with the operation as specified in the op parameter.

  • op [in]
    The global reduction operation to perform. The handle can indicate a built-in or application-defined operation. For a list of predefined operations, see MPI_Op.

  • comm [in]
    The MPI_Comm communicator handle.

  • request [out]
    The MPI_Request handle representing the communication operation.
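
The in-place option described under the sendbuf parameter can be illustrated with a minimal sketch. This example is not part of the original reference; the buffer name local_sums is an illustrative choice, and error handling is omitted for brevity.

    #include <mpi.h>

    int main(int argc, char *argv[])
    {
        MPI_Init(&argc, &argv);

        /* Per-rank contribution; overwritten with the element-wise totals. */
        double local_sums[4] = { 1.0, 2.0, 3.0, 4.0 };
        MPI_Request request;

        /* MPI_IN_PLACE: the input is read from local_sums on each process
           and replaced there by the reduced result. */
        MPI_Iallreduce(MPI_IN_PLACE, local_sums, 4, MPI_DOUBLE, MPI_SUM,
                       MPI_COMM_WORLD, &request);

        MPI_Wait(&request, MPI_STATUS_IGNORE);

        MPI_Finalize();
        return 0;
    }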

Return value

Returns MPI_SUCCESS on success. Otherwise, the return value is an error code.

In Fortran, the return value is stored in the IERROR parameter.

Fortran

    MPI_IALLREDUCE(SENDBUF, RECVBUF, COUNT, DATATYPE, OP, COMM, REQUEST, IERROR)
        <type> SENDBUF(*), RECVBUF(*)
        INTEGER COUNT, DATATYPE, OP, COMM, REQUEST, IERROR

Remarks

A non-blocking call initiates a collective reduction operation which must be completed in a separate completion call. Once initiated, the operation may progress independently of any computation or other communication at participating processes. In this manner, non-blocking reduction operations can mitigate possible synchronizing effects of reduction operations by running them in the “background.”

All completion calls (e.g., MPI_Wait) are supported for non-blocking reduction operations.
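
The following sketch illustrates the overlap described above: the reduction is initiated, independent computation proceeds while the operation may progress in the background, and the operation is then completed with MPI_Wait. This is an illustrative example rather than part of the original reference; do_independent_work is a placeholder for application code that does not depend on the result.

    #include <mpi.h>

    /* Placeholder for computation that does not read or write the buffers
       passed to MPI_Iallreduce. */
    static void do_independent_work(void) { /* ... */ }

    int main(int argc, char *argv[])
    {
        MPI_Init(&argc, &argv);

        int local_count = 1;     /* each rank contributes one value */
        int global_count = 0;
        MPI_Request request;

        /* Initiate the non-blocking reduction. */
        MPI_Iallreduce(&local_count, &global_count, 1, MPI_INT, MPI_SUM,
                       MPI_COMM_WORLD, &request);

        /* Overlap: work that does not depend on global_count. */
        do_independent_work();

        /* Complete the operation; global_count is valid only after this call. */
        MPI_Wait(&request, MPI_STATUS_IGNORE);

        MPI_Finalize();
        return 0;
    }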

Requirements

Product: Microsoft MPI v7

Header: Mpi.h; Mpif.h

Library: Msmpi.lib

DLL: Msmpi.dll

See also

MPI Collective Functions

MPI_Datatype

MPI_Op

MPI_Allreduce

MPI_Test

MPI_Testall

MPI_Testany

MPI_Testsome

MPI_Wait

MPI_Waitall

MPI_Waitany

MPI_Waitsome