11-30, 15:55–16:25 (Europe/Amsterdam), Auditorium
SHAP (SHapley Additive exPlanations) is a model-agnostic AI explainability framework that can be used for both global and local explainability. Starting from scratch, the theory behind SHAP values will be explained, and the usage of the Python framework will be illustrated on a classification example from the transaction monitoring domain. After the presentation, you will know how to use SHAP to investigate feature importance and feature sensitivity, and how to explain individual predictions in human-readable output.
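As a minimal sketch of the kind of global-explainability workflow described above, assuming a scikit-learn tree model on synthetic data with hypothetical feature names (not the talk's actual transaction-monitoring model or dataset):

```python
import numpy as np
import shap
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Synthetic stand-in for a transaction-monitoring dataset; the feature names
# below are hypothetical and only serve to make the output readable.
feature_names = ["amount", "hour_of_day", "n_tx_last_24h", "country_risk",
                 "account_age_days", "is_cash", "balance_delta", "merchant_risk"]
X, y = make_classification(n_samples=1000, n_features=8, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)

# TreeExplainer computes SHAP values efficiently for tree ensembles.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X_test)

# Depending on the SHAP version, the result is a list with one array per class
# or a single (n_samples, n_features, n_classes) array; keep the positive class.
sv = shap_values[1] if isinstance(shap_values, list) else shap_values[..., 1]

# Global explainability: the mean absolute SHAP value per feature gives a simple
# feature-importance ranking that can drive feature selection.
importance = np.abs(sv).mean(axis=0)
for name, imp in sorted(zip(feature_names, importance), key=lambda t: -t[1]):
    print(f"{name:>18s}: {imp:.4f}")
```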
Advances in (black-box) artificial intelligence toolkits in recent years have made implementing AI models a commodity. However, model implementation is only the beginning of application development; understanding, optimizing, and troubleshooting models remains a constant challenge. In particular, the understanding (or explainability) of AI models is expected to become a requirement under the EU AI Act, which is expected to pass this year.
In this presentation, SHAP (SHapley Additive exPlanations), a model-agnostic AI explainability framework, is explained using the example of a tabular classification problem in Python. First, we look at the theory behind SHAP and demonstrate its practical implementation in Python. Second, the usage of the SHAP framework is showcased for both global and local explainability, using the filtering of bank transactions for suspicious activity as an example. It will be shown how SHAP was used to perform feature selection, to understand the model's sensitivity to individual features, and to explain single predictions. Last, a translation from SHAP values to human-readable output will be shown, which was developed to explain model predictions to the end user.
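Continuing the sketch above (reusing model, explainer, sv, feature_names, and X_test), a hypothetical translation of a single prediction's SHAP values into human-readable output could look like this; the wording and the "top 3" cut-off are illustrative choices, not the talk's actual translation rules:

```python
# Local explainability: explain one flagged transaction and phrase the largest
# SHAP contributions as plain sentences.
i = 0                        # index of the test transaction to explain
contrib = sv[i]              # SHAP value of each feature for this prediction
base = explainer.expected_value
base_value = base[1] if np.ndim(base) else base   # baseline score (positive class)

proba = model.predict_proba(X_test[i:i + 1])[0, 1]
top = sorted(zip(feature_names, contrib), key=lambda t: -abs(t[1]))[:3]

print(f"Predicted suspicion score: {proba:.2f} (baseline {base_value:.2f})")
for name, value in top:
    direction = "increased" if value > 0 else "decreased"
    print(f"- '{name}' {direction} the score by {abs(value):.2f}")
```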
Time breakdown:
- General + domain introduction: 5 min
- SHAP theory and Python framework: 5 min
- Deep-dive into global and local explainability: 10 min
- Converting SHAP values to human readable output: 5 min
- Q&A: 5 min
No previous knowledge expected