# OneVersusAllTrainer Class

## Definition

The `IEstimator<TTransformer>` for training a one-versus-all multi-class classifier that uses the specified binary classifier.

```csharp
public sealed class OneVersusAllTrainer : Microsoft.ML.Trainers.MetaMulticlassTrainer<Microsoft.ML.Data.MulticlassPredictionTransformer<Microsoft.ML.Trainers.OneVersusAllModelParameters>, Microsoft.ML.Trainers.OneVersusAllModelParameters>
```

```fsharp
type OneVersusAllTrainer = class
    inherit MetaMulticlassTrainer<MulticlassPredictionTransformer<OneVersusAllModelParameters>, OneVersusAllModelParameters>
```

```vb
Public NotInheritable Class OneVersusAllTrainer
Inherits MetaMulticlassTrainer(Of MulticlassPredictionTransformer(Of OneVersusAllModelParameters), OneVersusAllModelParameters)
```

Inheritance: Object → MetaMulticlassTrainer<MulticlassPredictionTransformer<OneVersusAllModelParameters>, OneVersusAllModelParameters> → OneVersusAllTrainer

## Remarks

To create this trainer, use the OneVersusAll extension method.
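A minimal usage sketch follows. The column names `"Label"` and `"Features"` and the choice of SdcaLogisticRegression as the underlying binary classifier are assumptions for illustration; any binary trainer can be wrapped.

```csharp
using Microsoft.ML;

var mlContext = new MLContext();

// Wrap a binary trainer (here, SDCA logistic regression) in one-versus-all.
// "Label" is converted to a key type, as required by the trainer's input.
var pipeline = mlContext.Transforms.Conversion
        .MapValueToKey("Label")
    .Append(mlContext.MulticlassClassification.Trainers.OneVersusAll(
        mlContext.BinaryClassification.Trainers.SdcaLogisticRegression()))
    .Append(mlContext.Transforms.Conversion
        .MapKeyToValue("PredictedLabel"));

// var model = pipeline.Fit(trainingData);
```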

### Input and Output Columns

The input label column data must be of key type, and the feature column must be a known-sized vector of Single.

This trainer outputs the following columns:

| Output Column Name | Column Type | Description |
| --- | --- | --- |
| `Score` | Vector of Single | The scores of all classes. A higher value means a higher probability of falling into the associated class. If the i-th element has the largest value, the predicted label index is i. Note that i is a zero-based index. |
| `PredictedLabel` | Key type | The predicted label's index. If its value is i, the actual label is the i-th category in the key-valued input label type. |
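The relationship between the two output columns can be illustrated with a short sketch: the predicted label index is simply the argmax of the Score vector. (ML.NET computes this internally; the code below only demonstrates the relationship, and the score values are made up.)

```csharp
// Illustrative only: predicted label index = argmax of the Score vector.
float[] score = { 0.1f, 0.7f, 0.2f };   // one entry per class (example values)

int predictedIndex = 0;
for (int i = 1; i < score.Length; i++)
{
    if (score[i] > score[predictedIndex])
        predictedIndex = i;
}

System.Console.WriteLine(predictedIndex);  // 1 (zero-based)
```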

### Trainer Characteristics

| Characteristic | Value |
| --- | --- |
| Is normalization required? | Depends on the underlying binary classifier |
| Is caching required? | Yes |
| Required NuGet in addition to Microsoft.ML | None |

### Training Algorithm Details

In the one-versus-all (OVA) strategy, a binary classification algorithm is used to train one classifier per class, each distinguishing that class from all other classes. Prediction is then performed by running these binary classifiers and choosing the prediction with the highest confidence score. This algorithm can be used with any of the binary classifiers in ML.NET. A few binary classifiers also have native implementations for multi-class problems, so users can choose either approach depending on the context. The OVA version of a binary classifier, such as a wrapped LightGbmBinaryTrainer, can behave differently from LightGbmMulticlassTrainer, which builds a multi-class classifier directly. Note that even if the underlying classifier indicates that it does not need caching, OneVersusAll will always request caching, because it performs multiple passes over the data set. This trainer will request normalization from the data pipeline if the classifier indicates it would benefit from it.
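The strategy described above can be sketched schematically. This is not the ML.NET implementation; `trainBinary` stands in for an arbitrary binary learning algorithm, and all names here are assumptions of the sketch.

```csharp
using System;
using System.Linq;

// Schematic OVA: train one binary scorer per class on "class c vs. the rest".
Func<double[], double>[] TrainOva(
    double[][] features,
    int[] labels,
    int classCount,
    Func<double[][], bool[], Func<double[], double>> trainBinary)
{
    var scorers = new Func<double[], double>[classCount];
    for (int c = 0; c < classCount; c++)
    {
        // Relabel: examples of class c are positive, all others negative.
        bool[] binaryLabels = labels.Select(y => y == c).ToArray();
        scorers[c] = trainBinary(features, binaryLabels);
    }
    return scorers;
}

// Predict by running every binary scorer and picking the most confident class.
int Predict(Func<double[], double>[] scorers, double[] x)
{
    double best = double.NegativeInfinity;
    int bestClass = -1;
    for (int c = 0; c < scorers.Length; c++)
    {
        double s = scorers[c](x);
        if (s > best) { best = s; bestClass = c; }
    }
    return bestClass;
}
```

The multiple passes over the data (one per class in the training loop) are why OneVersusAll always requests caching.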

This can allow you to exploit trainers that do not naturally have a multiclass option, for example, using the FastTreeBinaryTrainer to solve a multiclass problem. Alternatively, it can allow ML.NET to solve a "simpler" problem even when the trainer has a multiclass option, but using it directly is impractical, usually due to memory constraints. For example, while multiclass logistic regression is a more principled way to solve a multiclass problem, it requires the trainer to store much more intermediate state, in the form of L-BFGS history for all classes simultaneously, rather than one class at a time as a one-versus-all classification model would need.

## Properties

 (Inherited from MetaMulticlassTrainer)

## Methods

- Trains a MulticlassPredictionTransformer model. (Inherited from MetaMulticlassTrainer)
- Gets the output columns. (Inherited from MetaMulticlassTrainer)

## Extension Methods

Given an estimator, return a wrapping object that will call a delegate once Fit(IDataView) is called. It is often important for an estimator to return information about what was fit, which is why the Fit(IDataView) method returns a specifically typed object rather than just a general ITransformer. At the same time, estimators are often formed into pipelines of many objects via EstimatorChain, so the estimator whose transformer we want may be buried somewhere in the chain. For that scenario, this method lets us attach a delegate that will be called once Fit is called.
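The scenario described above can be sketched as follows, using the WithOnFitDelegate extension method. The column names and the choice of binary trainer are assumptions for illustration.

```csharp
using Microsoft.ML;
using Microsoft.ML.Data;
using Microsoft.ML.Trainers;

var mlContext = new MLContext();

// Capture the fitted OneVersusAll transformer even though it sits in the
// middle of an estimator chain.
MulticlassPredictionTransformer<OneVersusAllModelParameters> fittedModel = null;

var pipeline = mlContext.Transforms.Conversion
        .MapValueToKey("Label")
    .Append(mlContext.MulticlassClassification.Trainers
        .OneVersusAll(mlContext.BinaryClassification.Trainers.SdcaLogisticRegression())
        .WithOnFitDelegate(model => fittedModel = model))
    .Append(mlContext.Transforms.Conversion
        .MapKeyToValue("PredictedLabel"));

// After pipeline.Fit(trainingData) runs, fittedModel holds the trained
// OneVersusAll transformer, without unpacking the resulting TransformerChain.
```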