Using Layer-wise Relevance Propagation and Sensitivity Analysis Heatmaps to understand the Classification of an Image produced by a Neural Network
Abstract: Neural networks are regarded as state of the art in many areas of machine learning, but their growing size and complexity have raised questions about their trustworthiness and interpretability; they are therefore often considered a "black box". This has led to the emergence of evaluation methods that try to decipher these complex networks. Two such methods, layer-wise relevance propagation (LRP) and sensitivity analysis (SA), generate heatmaps that highlight the pixels in the input image that influence the classification. The aim of this report is a usability analysis: evaluating and comparing these methods to see how they can be used to understand a particular classification. The approach taken is to iteratively distort image regions that the two heatmapping methods highlight as important. The findings were that distorting features marked as essential by the LRP heatmaps led to a decrease in classification score, while distorting features marked as inessential by the combination of SA and LRP heatmaps led to an increase in classification score. The results corresponded well with the theory behind the heatmapping methods and led to the conclusion that a combination of the two evaluation methods is advocated for fully understanding a particular classification.
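The iterative-distortion procedure described above can be sketched as follows. This is a minimal illustration, not the report's actual implementation: the patch size, the noise-based distortion, and the `classify` callback are all assumptions made for the example.

```python
import numpy as np

def perturbation_curve(image, heatmap, classify, patch=8, steps=10, seed=0):
    """Iteratively distort the patches ranked most relevant by `heatmap`
    and record the classifier's score after each distortion.

    `classify` is any callable mapping an image array to a scalar score;
    replacing a patch with uniform noise is one common distortion choice.
    """
    rng = np.random.default_rng(seed)
    img = image.copy()
    h, w = heatmap.shape
    # Rank non-overlapping patches by their summed relevance.
    coords = [(y, x) for y in range(0, h, patch) for x in range(0, w, patch)]
    coords.sort(key=lambda c: -heatmap[c[0]:c[0]+patch, c[1]:c[1]+patch].sum())
    scores = [classify(img)]  # score of the undistorted image
    for y, x in coords[:steps]:
        region = img[y:y+patch, x:x+patch]
        img[y:y+patch, x:x+patch] = rng.uniform(0.0, 1.0, size=region.shape)
        scores.append(classify(img))
    return scores
```

If the heatmap correctly ranks the essential regions, the score sequence should drop steeply as the most relevant patches are distorted first, which is the effect the report observed for LRP.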