# FeatureContributionCalculatingEstimator Class

## Definition

Estimator for FeatureContributionCalculatingTransformer. Computes model-specific per-feature contributions to the score of each input vector.

```csharp
public sealed class FeatureContributionCalculatingEstimator : Microsoft.ML.Data.TrivialEstimator<Microsoft.ML.Transforms.FeatureContributionCalculatingTransformer>
```

```fsharp
type FeatureContributionCalculatingEstimator = class
    inherit TrivialEstimator<FeatureContributionCalculatingTransformer>
```

```vb
Public NotInheritable Class FeatureContributionCalculatingEstimator
Inherits TrivialEstimator(Of FeatureContributionCalculatingTransformer)
```

Inheritance: `Object` → `TrivialEstimator<FeatureContributionCalculatingTransformer>` → `FeatureContributionCalculatingEstimator`

## Remarks

### Estimator Characteristics

| Characteristic | Value |
| --- | --- |
| Does this estimator need to look at the data to train its parameters? | No |
| Input column data type | Known-sized vector of Single |
| Output column data type | Known-sized vector of Single |
| Exportable to ONNX | No |

Scoring a dataset with a trained model produces a score, or prediction, for each example. To understand and explain these predictions it can be useful to inspect which features influenced them most significantly. This transformer computes a model-specific list of per-feature contributions to the score for each example. These contributions can be positive (they make the score higher) or negative (they make the score lower).

Feature Contribution Calculation is currently supported for linear models, Generalized Additive Models (GAM), and tree-based models.

For linear models, the contribution of a given feature is the product of the feature value and the corresponding weight. Similarly, for Generalized Additive Models (GAM), the contribution of a feature is the shape function for that feature evaluated at the feature value.
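As a language-neutral illustration of the two formulas above (a hypothetical sketch, not the ML.NET implementation; the weights, example values, and shape functions are made up):

```python
# Linear model: contribution_i = weight_i * x_i
weights = [0.5, -1.2, 2.0]          # assumed model weights
x = [1.0, 2.0, 0.5]                 # one input example

linear_contributions = [w * v for w, v in zip(weights, x)]
score = sum(linear_contributions)   # model score (bias term omitted)

# GAM: contribution_i = shape_i(x_i), i.e. the per-feature shape function
# evaluated at the feature value. Toy shape functions for illustration:
shape_functions = [
    lambda v: 0.3 if v < 1.5 else 0.8,
    lambda v: -0.1 * v,
    lambda v: v ** 2,
]
gam_contributions = [f(v) for f, v in zip(shape_functions, x)]
```

In both cases the per-example score decomposes exactly into the sum of the per-feature contributions, which is what makes these contributions directly interpretable.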

For tree-based models, the calculation of feature contribution essentially consists of determining which splits in the tree have the most impact on the final score and assigning the value of that impact to the features determining the split. More precisely, the contribution of a feature is equal to the change in score produced by exploring the opposite sub-tree every time a decision node for the given feature is encountered. Consider a simple case with a single decision tree that has a decision node for the binary feature F1. Given an example with feature F1 equal to true, we can calculate the score it would have obtained had we chosen the sub-tree corresponding to F1 being false, while keeping the other features constant. The contribution of feature F1 for the given example is the difference between the original score and the score obtained by taking the opposite decision at the node corresponding to F1. This algorithm extends naturally to models with many decision trees.
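The single-tree case described above can be sketched as follows (a hypothetical illustration, not the ML.NET implementation; the tree structure and scores are invented): the contribution of a feature is the original leaf score minus the score reached by taking the opposite branch at that feature's decision node.

```python
# A tiny decision tree: internal nodes test a named boolean feature,
# leaves hold scores.
tree = {
    "feature": "F1",
    "true": {"feature": "F2", "true": 3.0, "false": 1.0},
    "false": -2.0,
}

def score(node, example):
    """Walk the tree and return the leaf score for this example."""
    while isinstance(node, dict):
        node = node["true" if example[node["feature"]] else "false"]
    return node

def contribution(node, example, feature):
    """Score change from flipping the decision at `feature`'s node."""
    if not isinstance(node, dict):
        return 0.0  # feature never used on this path: no contribution
    taken = "true" if example[node["feature"]] else "false"
    if node["feature"] == feature:
        other = "false" if taken == "true" else "true"
        return score(node[taken], example) - score(node[other], example)
    return contribution(node[taken], example, feature)

example = {"F1": True, "F2": False}
# Original path: F1=True -> F2=False -> leaf score 1.0.
# Flipping F1 leads to the leaf -2.0, so F1 contributes 1.0 - (-2.0) = 3.0.
```

For an ensemble of trees, the same per-tree contributions would simply be accumulated across all trees.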

Check the See Also section for links to usage examples.

## Methods

| Method | Description |
| --- | --- |
| Fit(IDataView) | (Inherited from TrivialEstimator) |
| GetOutputSchema(SchemaShape) | Returns the SchemaShape of the schema which will be produced by the transformer. Used for schema propagation and verification in a pipeline. |

## Extension Methods

| Extension method | Description |
| --- | --- |
| AppendCacheCheckpoint&lt;TTrans&gt;(IEstimator&lt;TTrans&gt;, IHostEnvironment) | Append a 'caching checkpoint' to the estimator chain. This will ensure that the downstream estimators will be trained against cached data. It is helpful to have a caching checkpoint before trainers that take multiple data passes. |
| WithOnFitDelegate&lt;TTransformer&gt;(IEstimator&lt;TTransformer&gt;, Action&lt;TTransformer&gt;) | Given an estimator, return a wrapping object that will call a delegate once Fit(IDataView) is called. It is often important for an estimator to return information about what was fit, which is why the Fit(IDataView) method returns a specifically typed object, rather than just a general ITransformer. However, at the same time, IEstimator instances are often formed into pipelines with many objects, so we may need to build a chain of estimators via EstimatorChain where the estimator for which we want the transformer is buried somewhere in this chain. For that scenario, this method lets us attach a delegate that will be called once Fit(IDataView) is called. |