CalibratedBinaryClassificationMetrics Class

Definition

Evaluation results for binary classifiers, including probabilistic metrics.

C#
public sealed class CalibratedBinaryClassificationMetrics : Microsoft.ML.Data.BinaryClassificationMetrics

F#
type CalibratedBinaryClassificationMetrics = class
    inherit BinaryClassificationMetrics

VB
Public NotInheritable Class CalibratedBinaryClassificationMetrics
Inherits BinaryClassificationMetrics
Inheritance
Object → BinaryClassificationMetrics → CalibratedBinaryClassificationMetrics
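
Example

Instances of this class are not constructed directly; they are typically returned by BinaryClassificationCatalog.Evaluate in Microsoft.ML. The minimal sketch below shows one way to obtain and read the metrics. The data class, feature columns, and the SdcaLogisticRegression trainer are illustrative assumptions, not part of this API.

C#
using System;
using Microsoft.ML;
using Microsoft.ML.Data;

public class SentimentData
{
    public bool Label { get; set; }
    public float Feature1 { get; set; }
    public float Feature2 { get; set; }
}

public static class Example
{
    public static void Main()
    {
        var mlContext = new MLContext(seed: 0);

        // Hypothetical in-memory data; replace with your own IDataView source.
        var data = new[]
        {
            new SentimentData { Label = true,  Feature1 = 1.2f, Feature2 = 0.3f },
            new SentimentData { Label = false, Feature1 = 0.1f, Feature2 = 0.9f },
            new SentimentData { Label = true,  Feature1 = 1.0f, Feature2 = 0.2f },
            new SentimentData { Label = false, Feature1 = 0.2f, Feature2 = 1.1f },
        };
        IDataView trainData = mlContext.Data.LoadFromEnumerable(data);

        // A calibrated trainer (one that produces a Probability column) is needed
        // for the probabilistic metrics exposed by this class.
        var pipeline = mlContext.Transforms
            .Concatenate("Features", nameof(SentimentData.Feature1), nameof(SentimentData.Feature2))
            .Append(mlContext.BinaryClassification.Trainers.SdcaLogisticRegression());

        var model = pipeline.Fit(trainData);
        IDataView predictions = model.Transform(trainData);

        // Evaluate returns CalibratedBinaryClassificationMetrics.
        CalibratedBinaryClassificationMetrics metrics =
            mlContext.BinaryClassification.Evaluate(predictions);

        Console.WriteLine($"Accuracy:           {metrics.Accuracy:0.###}");
        Console.WriteLine($"AUC:                {metrics.AreaUnderRocCurve:0.###}");
        Console.WriteLine($"Log-loss:           {metrics.LogLoss:0.###}");
        Console.WriteLine($"Log-loss reduction: {metrics.LogLossReduction:0.###}");
    }
}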

Properties

Accuracy

Gets the accuracy of a classifier, which is the proportion of correct predictions in the test set.

(Inherited from BinaryClassificationMetrics)
AreaUnderPrecisionRecallCurve

Gets the area under the precision/recall curve of the classifier.

(Inherited from BinaryClassificationMetrics)
AreaUnderRocCurve

Gets the area under the ROC curve.

(Inherited from BinaryClassificationMetrics)
ConfusionMatrix

Gets the confusion matrix, which gives the counts of true positives, true negatives, false positives, and false negatives for the two classes of data.

(Inherited from BinaryClassificationMetrics)
Entropy

Gets the test-set entropy, which is the prior log-loss based on the proportion of positive and negative instances in the test set. A LogLoss lower than the entropy indicates that the classifier does better than simply predicting the proportion of positive instances as the probability for every instance.

F1Score

Gets the F1 score of the classifier, which is the harmonic mean of precision and recall and measures the classifier's quality considering both.

(Inherited from BinaryClassificationMetrics)
LogLoss

Gets the log-loss of the classifier. Log-loss measures the performance of a classifier with respect to how much the predicted probabilities diverge from the true class label. Lower log-loss indicates a better model. A perfect model, which predicts a probability of 1 for the true class, will have a log-loss of 0.

LogLossReduction

Gets the log-loss reduction (also known as relative log-loss, or reduction in information gain, RIG) of the classifier. It measures how much the model improves on a baseline that always predicts the prior probability of the positive class (the Entropy above). A log-loss reduction closer to 1 indicates a better model; 0 means no improvement over the prior.
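
The Entropy, LogLoss, and LogLossReduction values above are related: the reduction compares the model's log-loss with the prior log-loss of the test set. The sketch below illustrates that relationship on hypothetical labels and calibrated probabilities. The base-2 logarithm and the formula (Entropy - LogLoss) / Entropy are assumptions reflecting the standard relative log-loss definition, not a statement of the library's exact implementation; the ratio itself is independent of the logarithm base.

C#
using System;
using System.Linq;

public static class ProbabilisticMetricsSketch
{
    public static void Main()
    {
        // Hypothetical true labels and calibrated probabilities of the positive class.
        bool[] labels = { true, false, true, true, false };
        double[] probabilities = { 0.9, 0.2, 0.7, 0.6, 0.1 };

        // Per-instance log-loss: -log p for positives, -log (1 - p) for negatives.
        double logLoss = labels.Zip(probabilities, (y, p) =>
            y ? -Math.Log(p, 2) : -Math.Log(1 - p, 2)).Average();

        // Test-set entropy: the log-loss of always predicting the positive-class prior.
        double prior = labels.Count(y => y) / (double)labels.Length;
        double entropy = -(prior * Math.Log(prior, 2) + (1 - prior) * Math.Log(1 - prior, 2));

        // Relative log-loss: 0 means no better than the prior, 1 means a perfect model.
        double logLossReduction = (entropy - logLoss) / entropy;

        Console.WriteLine($"LogLoss = {logLoss:0.###}, Entropy = {entropy:0.###}, " +
                          $"LogLossReduction = {logLossReduction:0.###}");
    }
}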

NegativePrecision

Gets the negative precision of a classifier, which is the proportion of correctly predicted negative instances among all the negative predictions (i.e., the number of negative instances predicted as negative, divided by the total number of instances predicted as negative).

(Inherited from BinaryClassificationMetrics)
NegativeRecall

Gets the negative recall of a classifier, which is the proportion of correctly predicted negative instances among all the negative instances (i.e., the number of negative instances predicted as negative, divided by the total number of negative instances).

(Inherited from BinaryClassificationMetrics)
PositivePrecision

Gets the positive precision of a classifier, which is the proportion of correctly predicted positive instances among all the positive predictions (i.e., the number of positive instances predicted as positive, divided by the total number of instances predicted as positive).

(Inherited from BinaryClassificationMetrics)
PositiveRecall

Gets the positive recall of a classifier, which is the proportion of correctly predicted positive instances among all the positive instances (i.e., the number of positive instances predicted as positive, divided by the total number of positive instances).

(Inherited from BinaryClassificationMetrics)
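
The threshold-based metrics above (accuracy, precision, recall, F1 score) all derive from the four counts in the ConfusionMatrix. The sketch below shows those derivations on hypothetical counts; it is illustrative arithmetic, not part of this API.

C#
using System;

public static class ConfusionMatrixSketch
{
    public static void Main()
    {
        // Hypothetical confusion-matrix counts.
        double tp = 40, tn = 45, fp = 5, fn = 10;

        double accuracy          = (tp + tn) / (tp + tn + fp + fn);
        double positivePrecision = tp / (tp + fp);   // PositivePrecision
        double positiveRecall    = tp / (tp + fn);   // PositiveRecall
        double negativePrecision = tn / (tn + fn);   // NegativePrecision
        double negativeRecall    = tn / (tn + fp);   // NegativeRecall
        double f1Score           = 2 * positivePrecision * positiveRecall
                                   / (positivePrecision + positiveRecall);

        Console.WriteLine($"Accuracy = {accuracy:0.###}, " +
                          $"P+ = {positivePrecision:0.###}, R+ = {positiveRecall:0.###}, " +
                          $"P- = {negativePrecision:0.###}, R- = {negativeRecall:0.###}, " +
                          $"F1 = {f1Score:0.###}");
    }
}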

Applies to