You have trained a classification model, and you want to quantify the influence of each feature on a specific individual prediction. What should you examine?
- Global feature importance
- Local feature importance
- Recall and precision
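For context, a minimal sketch of the concept behind this question, using the open-source shap library: SHAP values computed for a single row quantify each feature's contribution to that one prediction (a local explanation), whereas averaging their magnitudes across a dataset yields a global summary. The breast-cancer dataset and gradient-boosting model below are illustrative assumptions, not part of the original module.

```python
import shap
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import GradientBoostingClassifier

# Train a simple binary classifier (illustrative choice).
data = load_breast_cancer()
model = GradientBoostingClassifier(random_state=0).fit(data.data, data.target)

# TreeExplainer computes SHAP values for tree-based models. For a single
# row, the values are a local explanation: each feature's contribution
# to that one prediction, in the model's log-odds space.
explainer = shap.TreeExplainer(model)
local_values = explainer.shap_values(data.data[:1])[0]

# Show the five features with the largest absolute local contribution.
# Positive values push this prediction toward the positive class,
# negative values push it away.
ranked = sorted(zip(data.feature_names, local_values),
                key=lambda pair: -abs(pair[1]))
for name, value in ranked[:5]:
    print(f"{name}: {value:+.4f}")
```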
Which explainer uses an architecture-appropriate SHAP algorithm to interpret a model?
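As background, the open-source shap library includes a dispatching front end, shap.Explainer, that inspects a model and selects an algorithm suited to its architecture (a tree-specific algorithm for forests and boosted trees, a linear algorithm for linear models, and so on). A minimal sketch of that dispatch, with illustrative scikit-learn models:

```python
import shap
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression

data = load_breast_cancer()
X, y = data.data, data.target

models = [
    RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y),
    LogisticRegression(max_iter=5000).fit(X, y),
]
for model in models:
    # shap.Explainer inspects the model (with X as background data) and
    # picks a matching algorithm, e.g. a tree explainer for the forest
    # and a linear explainer for the logistic regression.
    explainer = shap.Explainer(model, X)
    print(f"{type(model).__name__} -> {type(explainer).__name__}")
```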
You must answer all questions before checking your work.