DeepScoringExplainer Class
Defines a scoring model based on DeepExplainer.
If the original explainer used a SHAP DeepExplainer and no initialization data is passed in, the core of the original explainer is reused. If the original explainer used another method, or if new initialization data is passed in under initialization_examples, a new explainer is created.
If transformations were passed in on original_explainer, those transformations are carried through to the scoring explainer: it will expect raw data, and by default importances are returned for raw features. If feature_maps are passed in here (NOT intended to be used at the same time as transformations), the explainer will expect transformed data, and by default importances are returned for transformed features. In either case, the output can be overridden by setting get_raw explicitly to True or False on the explainer's explain method.
Initialize the DeepScoringExplainer.
- Inheritance
-
azureml.interpret.scoring.scoring_explainer._scoring_explainer.ScoringExplainer
DeepScoringExplainer
Constructor
DeepScoringExplainer(original_explainer, initialization_examples=None, serializer=None, **kwargs)
Parameters
- original_explainer
- <xref:interpret_community.common.base_explainer.BaseExplainer>
The training time deep explainer originally used to explain the model.
- initialization_examples
- array or DataFrame or csr_matrix
A matrix of feature vector examples (# examples x # features) for initializing the explainer.
- serializer
- object
Picklable custom serializer with save and load methods defined for a model that is not otherwise serializable. The save method returns a dictionary state and the load method returns the model.
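The serializer contract above (a picklable object whose save method returns a dictionary state and whose load method rebuilds the model) can be sketched as follows. The class and model names here are hypothetical stand-ins, not part of the azureml.interpret API:

```python
import pickle

class SimpleModel:
    """Hypothetical stand-in for a model that cannot be pickled directly
    (in practice this would be e.g. a deep learning model)."""
    def __init__(self, weights):
        self.weights = weights

class SimpleModelSerializer:
    """Picklable custom serializer: save returns a dictionary state,
    load returns the reconstructed model."""
    def save(self, model):
        # Return a plain-dict state that pickle can handle.
        return {"weights": model.weights}

    def load(self, state):
        # Rebuild the model from the saved state.
        return SimpleModel(state["weights"])

serializer = SimpleModelSerializer()
state = serializer.save(SimpleModel([0.1, 0.2]))
# The state round-trips through pickle even though the model itself might not.
restored = serializer.load(pickle.loads(pickle.dumps(state)))
```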
- feature_maps
- list[ndarray] or list[csr_matrix]
A list of feature maps from raw to generated feature. The list can be numpy arrays or sparse matrices where each array entry (raw_index, generated_index) is the weight for each raw, generated feature pair. The other entries are set to zero. For a sequence of transformations [t1, t2, ..., tn] generating generated features from raw features, the list of feature maps correspond to the raw to generated maps in the same order as t1, t2, etc. If the overall raw to generated feature map from t1 to tn is available, then just that feature map in a single element list can be passed.
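As an illustration of the feature map format described above, the following sketch builds a single overall raw-to-generated map with NumPy, assuming two raw features where the second is one-hot encoded into three generated features (the transformation itself is hypothetical):

```python
import numpy as np

# Entry (raw_index, generated_index) holds the weight for each
# raw/generated feature pair; all other entries stay zero.
num_raw, num_generated = 2, 4
feature_map = np.zeros((num_raw, num_generated))
feature_map[0, 0] = 1.0    # raw feature 0 passes through unchanged
feature_map[1, 1:4] = 1.0  # raw feature 1 one-hot encodes to generated 1..3

# When the overall raw-to-generated map is available, it is passed
# as a single-element list.
feature_maps = [feature_map]
```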
Optional list of feature names for the raw features that can be specified if the original explainer computes the explanation on the engineered features.
Optional list of feature names for the engineered features that can be specified if the original explainer has transformations passed in and only computes the importances on the raw features.
Methods
explain | Use the DeepExplainer and deep model for scoring to get the feature importance values of data.
explain
Use the DeepExplainer and deep model for scoring to get the feature importance values of data.
explain(evaluation_examples, get_raw=None)
Parameters
- evaluation_examples
- array or DataFrame or csr_matrix
A matrix of feature vector examples (# examples x # features) on which to explain the model's output.
- get_raw
- bool
If True, importance values for raw features will be returned. If False, importance values for engineered features will be returned. If unspecified and transformations was passed into the original explainer, raw importances will be returned. If unspecified and feature_maps was passed into the scoring explainer, engineered importance values will be returned.
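The default resolution of get_raw described above can be illustrated with a small helper. This function is hypothetical, written only to make the precedence explicit; it is not part of the azureml.interpret API:

```python
def resolve_get_raw(get_raw, has_transformations, has_feature_maps):
    """Illustrates the documented defaults for get_raw (hypothetical helper)."""
    if get_raw is not None:
        return get_raw    # an explicit True/False always wins
    if has_transformations:
        return True       # transformations were passed in -> raw importances
    if has_feature_maps:
        return False      # feature_maps were passed in -> engineered importances
    return False          # no mapping available -> engineered importances
```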
Returns
For a model with a single output such as regression, this returns a matrix of feature importance values. For models with vector outputs this function returns a list of such matrices, one for each output. The dimension of this matrix is (# examples x # features).
Return type
numpy.ndarray or list[numpy.ndarray]