OnlineGradientDescentTrainer Class
C#

```csharp
public sealed class OnlineGradientDescentTrainer :
    Microsoft.ML.Trainers.AveragedLinearTrainer<
        Microsoft.ML.Data.RegressionPredictionTransformer<Microsoft.ML.Trainers.LinearRegressionModelParameters>,
        Microsoft.ML.Trainers.LinearRegressionModelParameters>
```

F#

```fsharp
type OnlineGradientDescentTrainer =
    class
        inherit AveragedLinearTrainer<RegressionPredictionTransformer<LinearRegressionModelParameters>, LinearRegressionModelParameters>
    end
```

VB

```vb
Public NotInheritable Class OnlineGradientDescentTrainer
    Inherits AveragedLinearTrainer(Of RegressionPredictionTransformer(Of LinearRegressionModelParameters), LinearRegressionModelParameters)
```
Input and Output Columns
This trainer outputs the following columns:
| Output Column Name | Column Type | Description |
| --- | --- | --- |
| `Score` | Single | The unbounded score that was predicted by the model. |
Trainer Characteristics

| Characteristic | Value |
| --- | --- |
| Machine learning task | Regression |
| Is normalization required? | Yes |
| Is caching required? | No |
| Required NuGet in addition to Microsoft.ML | None |
| Exportable to ONNX | Yes |
Training Algorithm Details
Stochastic gradient descent uses a simple yet efficient iterative technique to fit model coefficients using error gradients for convex loss functions. Online Gradient Descent (OGD) implements the standard (non-batch) stochastic gradient descent, with a choice of loss functions and an option to update the weight vector using the average of the weight vectors seen over time (the averaged argument is set to True by default).
Check the See Also section for links to usage examples.
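A minimal pipeline using this trainer might look like the following sketch. The data class, column names, and parameter values are illustrative assumptions, not part of this reference; the trainer is reached through `mlContext.Regression.Trainers.OnlineGradientDescent`, and because normalization is required (see the table above), a normalizer is placed before it.

```csharp
// Sketch: training a regression model with OnlineGradientDescent.
// The DataPoint class, column names, and values are assumptions for illustration.
using Microsoft.ML;
using Microsoft.ML.Data;

public class DataPoint
{
    public float Label { get; set; }

    [VectorType(2)]
    public float[] Features { get; set; }
}

public static class Example
{
    public static void Main()
    {
        var mlContext = new MLContext(seed: 0);
        var data = mlContext.Data.LoadFromEnumerable(new[]
        {
            new DataPoint { Label = 1.0f, Features = new[] { 0.1f, 0.2f } },
            new DataPoint { Label = 2.0f, Features = new[] { 0.2f, 0.4f } },
        });

        // This trainer requires normalized features, so normalize first.
        var pipeline = mlContext.Transforms.NormalizeMinMax("Features")
            .Append(mlContext.Regression.Trainers.OnlineGradientDescent(
                labelColumnName: "Label",
                featureColumnName: "Features",
                numberOfIterations: 10));

        var model = pipeline.Fit(data);
    }
}
```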
Fields

FeatureColumn — The feature column that the trainer expects. (Inherited from TrainerEstimatorBase&lt;TTransformer,TModel&gt;)

LabelColumn — The label column that the trainer expects. Can be null, which indicates that label is not used for training. (Inherited from TrainerEstimatorBase&lt;TTransformer,TModel&gt;)

WeightColumn — The weight column that the trainer expects. Can be null, which indicates that weight is not used for training. (Inherited from TrainerEstimatorBase&lt;TTransformer,TModel&gt;)
Properties

Info — (Inherited from OnlineLinearTrainer&lt;TTransformer,TModel&gt;)
Methods

Fit(IDataView) — Trains and returns an ITransformer. (Inherited from TrainerEstimatorBase&lt;TTransformer,TModel&gt;)

GetOutputSchema(SchemaShape) — (Inherited from TrainerEstimatorBase&lt;TTransformer,TModel&gt;)
Extension Methods

AppendCacheCheckpoint — Append a 'caching checkpoint' to the estimator chain. This ensures that the downstream estimators will be trained against cached data. It is helpful to have a caching checkpoint before trainers that take multiple data passes.
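Since OnlineGradientDescent takes multiple passes over the data, a caching checkpoint before it can avoid recomputing upstream transforms on every pass. A sketch, with illustrative data and column names:

```csharp
// Sketch: placing a caching checkpoint before a multi-pass trainer.
// The DataPoint class, column names, and values are assumptions for illustration.
using Microsoft.ML;
using Microsoft.ML.Data;

public class DataPoint
{
    public float Label { get; set; }

    [VectorType(2)]
    public float[] Features { get; set; }
}

public static class CacheExample
{
    public static void Main()
    {
        var mlContext = new MLContext(seed: 0);
        var data = mlContext.Data.LoadFromEnumerable(new[]
        {
            new DataPoint { Label = 1f, Features = new[] { 0.1f, 0.2f } },
            new DataPoint { Label = 2f, Features = new[] { 0.3f, 0.4f } },
        });

        // The checkpoint caches the normalized data, so the trainer's
        // multiple passes read from the cache instead of re-normalizing.
        var pipeline = mlContext.Transforms.NormalizeMinMax("Features")
            .AppendCacheCheckpoint(mlContext)
            .Append(mlContext.Regression.Trainers.OnlineGradientDescent());

        var model = pipeline.Fit(data);
    }
}
```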
WithOnFitDelegate — Given an estimator, return a wrapping object that will call a delegate once Fit(IDataView) is called. It is often important for an estimator to return information about what was fit, which is why the Fit(IDataView) method returns a specifically typed object rather than just a general ITransformer. At the same time, estimators are often formed into pipelines via EstimatorChain&lt;TLastTransformer&gt;, so the estimator whose transformer we want may be buried somewhere in that chain. For that scenario, this method lets us attach a delegate that will be called once Fit is called.
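The delegate pattern described above can be sketched as follows; the data class, column names, and values are illustrative assumptions. The point is that the delegate receives the strongly typed transformer even though the trainer sits inside an estimator chain:

```csharp
// Sketch: capturing the fitted model from inside a pipeline via WithOnFitDelegate.
// The DataPoint class, column names, and values are assumptions for illustration.
using Microsoft.ML;
using Microsoft.ML.Data;
using Microsoft.ML.Trainers;

public class DataPoint
{
    public float Label { get; set; }

    [VectorType(2)]
    public float[] Features { get; set; }
}

public static class OnFitExample
{
    public static void Main()
    {
        var mlContext = new MLContext(seed: 0);
        var data = mlContext.Data.LoadFromEnumerable(new[]
        {
            new DataPoint { Label = 1f, Features = new[] { 0.1f, 0.2f } },
            new DataPoint { Label = 2f, Features = new[] { 0.3f, 0.4f } },
        });

        LinearRegressionModelParameters fitted = null;

        // The delegate runs when Fit is called on the chain, handing us the
        // specifically typed transformer rather than a general ITransformer.
        var pipeline = mlContext.Transforms.NormalizeMinMax("Features")
            .Append(mlContext.Regression.Trainers.OnlineGradientDescent()
                .WithOnFitDelegate(t => fitted = t.Model));

        var model = pipeline.Fit(data);
        // fitted now holds the trained linear model's weights and bias.
    }
}
```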