Unsupervised learning of data representations in brain-like neural networks

University essay from KTH/Skolan för elektroteknik och datavetenskap (EECS)

Author: Arian Javdan; [2021]


Abstract: Recently, there has been growing interest in brain-plausible neural networks, i.e. networks whose structure more closely resembles that of the brain. Conventional artificial networks model connections differently from the brain and therefore make poor brain models, hence the interest in brain-plausible alternatives. The Bayesian Confidence Propagation Neural Network (BCPNN) is one such brain-plausible network; its feed-forward module has been applied in fields such as pharmacovigilance. The BCPNN also has a recurrent module, about which less is known. This project therefore aims to investigate the properties of the recurrent module, and to investigate whether good, representative prototypes are learnt when hidden-layer representations of the MNIST dataset, produced by the feed-forward module, are fed to the recurrent module, effectively combining the two modules. The properties of the recurrent module were investigated by creating a set of artificial datasets and using different training schemes to see how well the BCPNN would recreate a set of patterns. The results show that the network performs significantly better when given more samples, and also when some amount of noise is added to the datasets, provided that the number of samples is high enough. After feeding the recurrent module the hidden-layer representations of MNIST, the output was evaluated using standard clustering-analysis tools such as the Davies–Bouldin (DB) index and the Adjusted Rand Index (ARI), and the performance of the BCPNN was compared to the k-medoids clustering algorithm with three configurations, k = 10, k = 100 and k = 1000, to contextualise the results. In terms of the DB index, the BCPNN outperforms k-medoids at every configuration, while in terms of the ARI it performs similarly to the k = 100 configuration.
With these results in mind, it is concluded that the BCPNN learns representative prototypes.
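The evaluation metrics named in the abstract are standard and available in common libraries. A minimal sketch of how such an evaluation could be run, assuming scikit-learn's small digits dataset as a stand-in for MNIST and k-means in place of k-medoids (k-medoids is not part of core scikit-learn), might look like this:

```python
# Illustrative sketch: computing the Davies-Bouldin (DB) index and the
# Adjusted Rand Index (ARI) for a clustering, the two metrics used in the
# evaluation described above.
# Assumptions (not from the thesis): the digits dataset stands in for MNIST,
# and k-means stands in for the k-medoids baseline.
from sklearn.datasets import load_digits
from sklearn.cluster import KMeans
from sklearn.metrics import davies_bouldin_score, adjusted_rand_score

X, y = load_digits(return_X_y=True)  # 1797 samples, 64 features, 10 classes

k = 10  # one of the configurations mentioned above (k = 10, 100, 1000)
labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(X)

db = davies_bouldin_score(X, labels)   # internal metric: lower is better
ari = adjusted_rand_score(y, labels)   # external metric: 1.0 = perfect match

print(f"DB index: {db:.3f}, ARI: {ari:.3f}")
```

Note that the DB index needs only the data and the cluster labels, whereas the ARI additionally requires ground-truth class labels, which is why both are useful for contextualising a clustering result.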
