
Model interpretability in Azure Machine Learning (preview)

APPLIES TO: Basic edition, Enterprise edition (Upgrade to Enterprise edition)

Overview of model interpretability

Interpretability is critical for data scientists, auditors, and business decision makers alike to ensure compliance with company policies, industry standards, and government regulations:

  • Data scientists need the ability to explain their models to executives and stakeholders, so they can understand the value and accuracy of their findings. They also require interpretability to debug their models and make informed decisions about how to improve them.

  • Legal auditors require tools to validate models with respect to regulatory compliance and to monitor how models' decisions affect people.

  • Business decision makers need peace of mind from being able to provide transparency to end users. This allows them to earn and maintain trust.

Enabling the capability of explaining a machine learning model is important during two main phases of model development:

  • During the training phase, model designers and evaluators can use a model's interpretability output to verify hypotheses and build trust with stakeholders. They also use the insights into the model for debugging, for validating that model behavior matches their objectives, and for checking for model unfairness or insignificant features.

  • During the inferencing phase, transparency around deployed models empowers executives to understand how the model is working once deployed and how its decisions treat and affect people in real life.

Interpretability with Azure Machine Learning

The interpretability classes are made available through multiple SDK packages (learn how to install SDK packages for Azure Machine Learning):

  • azureml.interpret, the main package, containing functionalities supported by Microsoft.

  • azureml.contrib.interpret, preview and experimental functionalities that you can try.

Use pip install azureml-interpret and pip install azureml-interpret-contrib for general use, and pip install azureml-contrib-interpret for AutoML use to get the interpretability packages.

Important

Content in the contrib namespace is not fully supported. As the experimental functionalities become mature, they will gradually be moved to the main namespace.

How to interpret your model

Using the classes and methods in the SDK, you can:

  • Explain model prediction by generating feature importance values for the entire model and/or individual data points (see the sketch after this list).
  • Achieve model interpretability on real-world datasets at scale, during training and inference.
  • Use an interactive visualization dashboard to discover patterns in data and explanations at training time.
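
A minimal sketch of the first capability, assuming the azureml-interpret package is installed; model, x_train, x_test, and feature_names are placeholders supplied by your own training code, not part of the SDK:

```python
# Hedged sketch: generating global and local feature importance values.
# `model`, `x_train`, `x_test`, and `feature_names` are placeholders from your own code.
from interpret.ext.blackbox import TabularExplainer

explainer = TabularExplainer(model,        # any model exposing predict/predict_proba
                             x_train,      # initialization (background) data
                             features=feature_names)

# Global explanation: feature importance values for the model as a whole.
global_explanation = explainer.explain_global(x_test)
print(global_explanation.get_feature_importance_dict())

# Local explanation: feature importance values for individual data points.
local_explanation = explainer.explain_local(x_test[0:5])
print(local_explanation.local_importance_values)
```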

In machine learning, features are the data fields used to predict a target data point. For example, to predict credit risk, data fields for age, account size, and account age might be used. In this case, age, account size, and account age are features. Feature importance tells you how each data field affected the model's predictions. For example, age may be heavily used in the prediction while account size and account age do not affect the prediction values significantly. This process allows data scientists to explain resulting predictions, so that stakeholders have visibility into which features are most important in the model.

Learn about supported interpretability techniques, supported machine learning models, and supported run environments here.

Supported interpretability techniques

azureml-interpret uses the interpretability techniques developed in Interpret-Community, an open-source Python package for training interpretable models and helping to explain black-box AI systems. Interpret-Community serves as the host for this SDK's supported explainers, and currently supports the following interpretability techniques:

Interpretability Technique | Description | Type
SHAP Tree Explainer | SHAP's tree explainer, which focuses on a polynomial-time, fast SHAP value estimation algorithm specific to trees and ensembles of trees. | Model-specific
SHAP Deep Explainer | Based on the explanation from SHAP, Deep Explainer "is a high-speed approximation algorithm for SHAP values in deep learning models that builds on a connection with DeepLIFT described in the SHAP NIPS paper. TensorFlow models and Keras models using the TensorFlow backend are supported (there is also preliminary support for PyTorch)." | Model-specific
SHAP Linear Explainer | SHAP's Linear explainer computes SHAP values for a linear model, optionally accounting for inter-feature correlations. | Model-specific
SHAP Kernel Explainer | SHAP's Kernel explainer uses a specially weighted local linear regression to estimate SHAP values for any model. | Model-agnostic
Mimic Explainer (Global Surrogate) | Mimic explainer is based on the idea of training global surrogate models to mimic black-box models. A global surrogate model is an intrinsically interpretable model that is trained to approximate the predictions of any black-box model as accurately as possible. Data scientists can interpret the surrogate model to draw conclusions about the black-box model. You can use one of the following interpretable models as your surrogate model: LightGBM (LGBMExplainableModel), Linear Regression (LinearExplainableModel), Stochastic Gradient Descent explainable model (SGDExplainableModel), and Decision Tree (DecisionTreeExplainableModel). | Model-agnostic
Permutation Feature Importance Explainer (PFI) | Permutation Feature Importance is a technique used to explain classification and regression models, inspired by Breiman's Random Forests paper (see section 10). At a high level, it works by randomly shuffling data one feature at a time for the entire dataset and calculating how much the performance metric of interest changes. The larger the change, the more important that feature is. PFI can explain the overall behavior of any underlying model but does not explain individual predictions. | Model-agnostic
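
As a concrete illustration of the Mimic Explainer row above, here is a rough sketch assuming interpret-community's imports and placeholder variables model, x_train, and x_test from your own training code; the augmentation arguments are illustrative rather than required:

```python
# Hedged sketch: training a LightGBM surrogate to mimic a black-box model.
from interpret.ext.blackbox import MimicExplainer
from interpret.ext.glassbox import LGBMExplainableModel   # or LinearExplainableModel, SGDExplainableModel, DecisionTreeExplainableModel

explainer = MimicExplainer(model,
                           x_train,                        # data used to train the surrogate
                           LGBMExplainableModel,           # the interpretable surrogate model class
                           augment_data=True,              # optionally oversample the initialization examples
                           max_num_of_augmentations=10)    # cap on generated samples (illustrative value)

# Interpret the surrogate to draw conclusions about the black-box model.
global_explanation = explainer.explain_global(x_test)
```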

Besides the interpretability techniques described above, we support another SHAP-based explainer, called TabularExplainer. Depending on the model, TabularExplainer uses one of the supported SHAP explainers:

  • TreeExplainer for all tree-based models
  • DeepExplainer for DNN models
  • LinearExplainer for linear models
  • KernelExplainer for all other models

TabularExplainer has also made significant feature and performance enhancements over the direct SHAP explainers:

  • Summarization of the initialization dataset. In cases where speed of explanation is most important, we summarize the initialization dataset and generate a small set of representative samples, which speeds up the generation of overall and individual feature importance values.
  • Sampling the evaluation dataset. If the user passes in a large set of evaluation samples but does not actually need all of them to be evaluated, the sampling parameter can be set to true to speed up the calculation of overall model explanations (see the sketch below).
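
A rough sketch of opting into evaluation-set sampling; the SamplingPolicy helper, its import path, and the allow_eval_sampling flag are assumptions drawn from interpret-community's examples, and explainer and x_test are placeholders from the earlier sketches:

```python
# Hedged sketch: speeding up a global explanation on a large evaluation set.
# The import path and parameter name below are assumptions based on interpret-community.
from interpret_community.common.policy import SamplingPolicy

global_explanation = explainer.explain_global(
    x_test,
    sampling_policy=SamplingPolicy(allow_eval_sampling=True))  # sample the evaluation data instead of scoring every row
```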

The following diagram shows the current structure of supported explainers.

Diagram: Machine Learning Interpretability Architecture

Supported machine learning models

The azureml.interpret package of the SDK supports models trained with the following dataset formats:

  • numpy.array
  • pandas.DataFrame
  • iml.datatypes.DenseData
  • scipy.sparse.csr_matrix

The explanation functions accept both models and pipelines as input. If a model is provided, it must implement the prediction function predict or predict_proba that conforms to the Scikit convention. If your model does not support this, you can wrap your model in a function that generates the same outcome as predict or predict_proba in Scikit and use that wrapper function with the selected explainer. If a pipeline is provided, the explanation function assumes that the running pipeline script returns a prediction. Using this wrapping technique, azureml.interpret can support models trained via the PyTorch, TensorFlow, and Keras deep learning frameworks as well as classic machine learning models.
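
As a hedged illustration of this wrapping technique, the sketch below adapts a hypothetical deep learning classifier whose native prediction call returns class probabilities so that it exposes scikit-style predict and predict_proba; the class and variable names are illustrative, not part of the SDK:

```python
import numpy as np

class WrappedClassifier:
    """Illustrative wrapper exposing scikit-style predict/predict_proba
    for a model (e.g. Keras or PyTorch) whose native API differs."""

    def __init__(self, dl_model):
        self.dl_model = dl_model

    def predict_proba(self, X):
        # Assumption: the wrapped model's own prediction call returns class
        # probabilities; convert to a NumPy array as the explainers expect.
        return np.asarray(self.dl_model.predict(X))

    def predict(self, X):
        # Derive hard class labels from the probabilities.
        return np.argmax(self.predict_proba(X), axis=1)

# The wrapper can then be handed to an explainer in place of the raw model, e.g.:
# explainer = TabularExplainer(WrappedClassifier(keras_model), x_train)
```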

Local and remote compute target

The azureml.interpret package is designed to work with both local and remote compute targets. If run locally, the SDK functions will not contact any Azure services.

You can run explanation remotely on Azure Machine Learning Compute and log the explanation info into the Azure Machine Learning Run History Service. Once this information is logged, reports and visualizations from the explanation are readily available in Azure Machine Learning studio for user analysis.
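
A rough sketch of logging an explanation from a remote run, assuming the ExplanationClient class shipped with azureml-interpret and a global_explanation computed as in the earlier sketches; treat the exact method names as assumptions drawn from the SDK's examples:

```python
# Hedged sketch: uploading an explanation to the Run History service from a training script.
from azureml.core.run import Run
from azureml.interpret import ExplanationClient

run = Run.get_context()                       # the current (remote) run
client = ExplanationClient.from_run(run)

# Upload the explanation so the studio can render its reports and visualizations.
client.upload_model_explanation(global_explanation, comment='global explanation: all features')

# Later, for example on your workstation, download it again for analysis.
downloaded_explanation = client.download_model_explanation()
```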

Next steps

  • See the how-to for enabling interpretability for model training both locally and on Azure Machine Learning remote compute resources.
  • See the sample notebooks for additional scenarios.
  • If you're interested in interpretability for text scenarios, see Interpret-text, an open-source repository related to Interpret-Community, for interpretability techniques for NLP. The azureml.interpret package does not currently support these techniques, but you can get started with an example notebook on text classification.