Increasing explainability of neural network based retail credit risk models

University essay from KTH/Skolan för elektroteknik och datavetenskap (EECS)

Abstract: Due to their 'black box' nature, Artificial Neural Networks (ANNs) are not permitted for use in various applications. One such application is mortgage credit risk modeling. Recently, the European Banking Authority stated that a main reason ANN-based models are not adopted in this field is that they do not meet the strict requirements for transparency and root cause analysis set forth by legislators and other stakeholders. In this thesis, an ANN model is trained on a mortgage dataset to predict customer default. To aid in understanding the network's predictions, two explainability methodologies are applied to the model: SHapley Additive exPlanations (SHAP) and Local Interpretable Model-Agnostic Explanations (LIME). To test how well these methods aided understanding, a group of domain experts, employees of a prominent financial institution's mortgage credit risk model stress testing team, were shown the output graphs and asked to make inferences about the relationship between the inputs and outputs. The results indicate that the two investigated explainability methodologies show potential in helping domain experts identify, understand, and explain the relationship between the input and output variables. This work, and its continuation, could lead to ANN-based models breaking into new business areas and use cases.
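To illustrate the idea behind SHAP mentioned in the abstract, the sketch below computes exact Shapley values for a toy scoring function. This is a minimal, self-contained illustration, not the thesis's actual pipeline: the feature names, weights, and baseline are hypothetical stand-ins, and a real application would use the SHAP library's approximations on the trained ANN rather than this brute-force enumeration.

```python
from itertools import combinations
from math import factorial

def shapley_values(f, x, baseline):
    """Exact Shapley values: each feature's average marginal contribution
    to f(x) over all coalitions, with absent features set to the baseline."""
    n = len(x)
    phi = [0.0] * n
    for i in range(n):
        others = [j for j in range(n) if j != i]
        for k in range(n):
            for S in combinations(others, k):
                # Shapley kernel weight for a coalition of size k
                weight = factorial(k) * factorial(n - k - 1) / factorial(n)
                with_i = [x[j] if (j in S or j == i) else baseline[j] for j in range(n)]
                without_i = [x[j] if j in S else baseline[j] for j in range(n)]
                phi[i] += weight * (f(with_i) - f(without_i))
    return phi

# Hypothetical toy "default score": a weighted sum over loan-to-value,
# debt-to-income, and payment-history features (a stand-in for the ANN).
weights = [0.5, 0.3, -0.2]
model = lambda z: sum(w * v for w, v in zip(weights, z))

x = [0.8, 0.4, 0.1]          # one applicant's feature values
baseline = [0.5, 0.5, 0.5]   # reference point (e.g. population average)
phi = shapley_values(model, x, baseline)
# Efficiency property: the attributions sum to f(x) - f(baseline)
print(phi, sum(phi), model(x) - model(baseline))
```

For a linear model the Shapley value of feature i reduces to `weights[i] * (x[i] - baseline[i])`, which makes the output easy to verify by hand; for a nonlinear ANN the same definition applies but exact enumeration becomes intractable, which is why SHAP uses sampling-based approximations.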
