An Investigation of Neural Network Structure with Topological Data Analysis

University essay from KTH/School of Electrical Engineering and Computer Science (EECS)

Abstract: Artificial neural networks currently enjoy great popularity and achieve impressive results on many machine learning tasks. A drawback of this success is that the processes taking place inside these learning algorithms are increasingly poorly understood. In many cases, choosing a neural network architecture for a problem comes down to selecting network layers by intuition and tuning network parameters by hand. It is therefore important to build a strong theoretical foundation in this area, both to reduce the amount of manual work in the future and to gain a better understanding of the capabilities of neural networks. In this master thesis, ideas for applying topological and geometric methods to the analysis of neural networks were investigated. Despite the difficulties arising from the novelty of the approach, such as the limited number of related studies, some promising methods of network analysis were established and tested on baseline machine learning datasets. One of the most notable results of the study shows how neural networks preserve topological features of the data when it is projected into a low-dimensional space. For example, the persistence of the MNIST dataset with added image rotations is preserved after projection into 3D space by simple autoencoders; autoencoders with a relatively high weight regularization parameter, on the other hand, may lose this ability.
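The abstract does not name the tools used in the thesis, so the following is only a minimal sketch of the kind of experiment it describes, assuming PyTorch/torchvision for a simple autoencoder with a 3D bottleneck and the ripser package for persistence computation. The architecture, rotation range, number of epochs, and hyperparameters (including the weight_decay value standing in for the "weight regularization parameter") are illustrative choices, not the thesis' actual setup.

# Sketch: train a small autoencoder on rotated MNIST, then compare persistence
# diagrams of the raw data and of its 3D latent embedding.
# Assumed dependencies: torch, torchvision, ripser (pip install ripser).
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, transforms
from ripser import ripser

# MNIST with random rotations, as mentioned in the abstract.
transform = transforms.Compose([
    transforms.RandomRotation(180),
    transforms.ToTensor(),
    transforms.Lambda(lambda x: x.view(-1)),  # flatten 28x28 -> 784
])
train_set = datasets.MNIST("data", train=True, download=True, transform=transform)
loader = DataLoader(train_set, batch_size=256, shuffle=True)

# Simple fully connected autoencoder projecting the images into 3D space.
class Autoencoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(784, 128), nn.ReLU(),
            nn.Linear(128, 3),
        )
        self.decoder = nn.Sequential(
            nn.Linear(3, 128), nn.ReLU(),
            nn.Linear(128, 784), nn.Sigmoid(),
        )

    def forward(self, x):
        z = self.encoder(x)
        return self.decoder(z), z

model = Autoencoder()
# weight_decay plays the role of the weight regularization parameter from the
# abstract; according to the thesis, increasing it can degrade the preservation
# of topological features in the latent space.
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3, weight_decay=1e-5)
loss_fn = nn.MSELoss()

for epoch in range(5):
    for x, _ in loader:
        recon, _ = model(x)
        loss = loss_fn(recon, x)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

# Persistence diagrams (H0 and H1) for a subsample of the raw images and for
# their 3D latent embeddings; the two can then be compared visually or with a
# bottleneck/Wasserstein distance.
with torch.no_grad():
    x_sample = torch.stack([train_set[i][0] for i in range(500)])
    z_sample = model.encoder(x_sample).numpy()

dgms_input = ripser(x_sample.numpy(), maxdim=1)["dgms"]
dgms_latent = ripser(z_sample, maxdim=1)["dgms"]
print("H1 intervals (input): ", len(dgms_input[1]))
print("H1 intervals (latent):", len(dgms_latent[1]))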
