DML_ACTIVATION_RELU_OPERATOR_DESC structure (directml.h)

Performs a rectified linear unit (ReLU) activation function on every element in InputTensor, placing the result into the corresponding element of OutputTensor.

f(x) = max(0, x)

where max(a, b) returns the larger of the two values a and b.

This operator supports in-place execution, meaning that OutputTensor is permitted to alias InputTensor during binding.
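As a minimal sketch of how this descriptor is used, assuming an existing IDMLDevice (named dmlDevice here) and two already-populated DML_TENSOR_DESC values (inputTensorDesc and outputTensorDesc; all of these names are illustrative), the operator can be created like this:

// Describe the ReLU operator in terms of its input and output tensors.
DML_ACTIVATION_RELU_OPERATOR_DESC reluDesc = {};
reluDesc.InputTensor = &inputTensorDesc;
reluDesc.OutputTensor = &outputTensorDesc;

// Wrap the operator-specific description in a generic operator description
// and ask the device to create the operator object.
DML_OPERATOR_DESC opDesc = {};
opDesc.Type = DML_OPERATOR_ACTIVATION_RELU;
opDesc.Desc = &reluDesc;

Microsoft::WRL::ComPtr<IDMLOperator> reluOperator;
HRESULT hr = dmlDevice->CreateOperator(&opDesc, IID_PPV_ARGS(&reluOperator));

The resulting operator is then compiled with IDMLDevice::CompileOperator and recorded for execution like any other DirectML operator.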

Syntax

struct DML_ACTIVATION_RELU_OPERATOR_DESC {
  const DML_TENSOR_DESC *InputTensor;
  const DML_TENSOR_DESC *OutputTensor;
};

Members

InputTensor

Type: const DML_TENSOR_DESC*

The input tensor to read from.

OutputTensor

Type: const DML_TENSOR_DESC*

The output tensor to write the results to.
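The tensor descriptions referenced by these members are built in the usual DirectML way. Below is a hedged sketch, assuming a packed (no custom strides) 4-dimensional FLOAT32 buffer tensor; the sizes shown are illustrative:

// Sketch: a packed 4-D FLOAT32 buffer tensor usable for either member.
UINT sizes[4] = { 1, 1, 2, 3 };                        // e.g. N, C, H, W
UINT64 elementCount = 1ull * 1 * 2 * 3;

DML_BUFFER_TENSOR_DESC bufferDesc = {};
bufferDesc.DataType = DML_TENSOR_DATA_TYPE_FLOAT32;
bufferDesc.Flags = DML_TENSOR_FLAG_NONE;
bufferDesc.DimensionCount = 4;
bufferDesc.Sizes = sizes;
bufferDesc.Strides = nullptr;                          // packed layout
bufferDesc.TotalTensorSizeInBytes = elementCount * sizeof(float);

DML_TENSOR_DESC inputTensorDesc = {};
inputTensorDesc.Type = DML_TENSOR_TYPE_BUFFER;
inputTensorDesc.Desc = &bufferDesc;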

Availability

This operator was introduced in DML_FEATURE_LEVEL_1_0.

Tensor constraints

InputTensor and OutputTensor must have the same DataType, DimensionCount, and Sizes.
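Because ReLU is element-wise and shape-preserving, one way to satisfy this constraint in the sketch above is simply to reference the same tensor description for both members (an illustrative shortcut, not a requirement):

DML_ACTIVATION_RELU_OPERATOR_DESC reluDesc = {};
reluDesc.InputTensor = &inputTensorDesc;
reluDesc.OutputTensor = &inputTensorDesc;   // identical DataType, DimensionCount, and Sizes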

Tensor support

DML_FEATURE_LEVEL_5_1 and above

Tensor        Kind    Supported dimension counts  Supported data types
InputTensor   Input   1 to 8                      FLOAT32, FLOAT16, INT32, INT16, INT8
OutputTensor  Output  1 to 8                      FLOAT32, FLOAT16, INT32, INT16, INT8

DML_FEATURE_LEVEL_3_0 and above

Tensor        Kind    Supported dimension counts  Supported data types
InputTensor   Input   1 to 8                      FLOAT32, FLOAT16
OutputTensor  Output  1 to 8                      FLOAT32, FLOAT16

DML_FEATURE_LEVEL_2_0 and above

Tensor        Kind    Supported dimension counts  Supported data types
InputTensor   Input   4 to 5                      FLOAT32, FLOAT16
OutputTensor  Output  4 to 5                      FLOAT32, FLOAT16

DML_FEATURE_LEVEL_1_0 and above

Tensor        Kind    Supported dimension counts  Supported data types
InputTensor   Input   4                           FLOAT32, FLOAT16
OutputTensor  Output  4                           FLOAT32, FLOAT16

Requirements

Requirement   Value
Header        directml.h