Modelling synaptic rewiring in brain-like neural networks for representation learning

University essay from KTH/Skolan för elektroteknik och datavetenskap (EECS)

Abstract: This research investigated a sparsity method inspired by the principles of structural plasticity in the brain, with the goal of producing a sparse Bayesian Confidence Propagation Neural Network (BCPNN) model during the training phase. This was done by extending the structural plasticity mechanism in the BCPNN implementation. While the initial algorithm used two synaptic states (Active and Silent), this research extended it to three synaptic states (Active, Silent and Absent) with the aim of enhancing sparsity configurability and making the algorithm more brain-like, drawing parallels with synaptic states observed in the brain. Benchmarking was conducted on the MNIST and Fashion-MNIST datasets, where the proposed three-state model was compared against the previous two-state model in terms of representation learning. The findings suggest that the three-state model not only provides added configurability but also, in certain low-sparsity settings, exhibits representation learning ability similar to that of the two-state model. Moreover, in high-sparsity settings, the three-state model achieves a favourable trade-off between accuracy and sparsity.
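
The abstract only names the three synaptic states; the sketch below is a hypothetical Python/NumPy illustration of how such a three-state synaptic mask could be maintained during training. The state names Active, Silent and Absent come from the abstract, but the magnitude-based scoring, the demotion/pruning fractions and the random regrowth step are illustrative assumptions, not the thesis's actual BCPNN rewiring rule.

    import numpy as np

    # Hypothetical encoding of the three synaptic states named in the abstract.
    # The earlier two-state model uses only ACTIVE and SILENT; the extension adds ABSENT.
    ABSENT, SILENT, ACTIVE = 0, 1, 2


    def rewire_step(weights, states, silent_frac=0.10, absent_frac=0.05, seed=None):
        """One illustrative structural-plasticity step on a (pre, post) weight matrix.

        Connection usefulness is scored here by |weight|; the thesis's actual
        criterion is not reproduced. Weak ACTIVE synapses are demoted to SILENT,
        weak SILENT synapses are pruned to ABSENT, and an equal number of ABSENT
        slots are regrown as SILENT candidates at random positions.
        """
        rng = np.random.default_rng(seed)
        score = np.abs(weights).ravel()
        flat = states.reshape(-1)          # view: edits propagate to `states`

        # 1) Demote the weakest fraction of ACTIVE synapses to SILENT.
        active = np.flatnonzero(flat == ACTIVE)
        n_dem = int(silent_frac * active.size)
        if n_dem:
            flat[active[np.argsort(score[active])[:n_dem]]] = SILENT

        # 2) Prune the weakest fraction of SILENT synapses to ABSENT.
        silent = np.flatnonzero(flat == SILENT)
        n_prune = int(absent_frac * silent.size)
        if n_prune:
            pruned = silent[np.argsort(score[silent])[:n_prune]]
            flat[pruned] = ABSENT
            weights.reshape(-1)[pruned] = 0.0

        # 3) Regrow the same number of ABSENT synapses as new SILENT candidates.
        absent = np.flatnonzero(flat == ABSENT)
        n_grow = min(n_prune, absent.size)
        if n_grow:
            flat[rng.choice(absent, size=n_grow, replace=False)] = SILENT

        # Only ACTIVE synapses contribute to the effective (sparse) weight matrix.
        return weights * (states == ACTIVE)


    # Example usage with hypothetical MNIST-sized layer dimensions.
    rng = np.random.default_rng(0)
    w = rng.normal(size=(784, 100))
    s = np.full(w.shape, ACTIVE)
    effective_w = rewire_step(w, s, seed=0)

In a sketch of this kind, the extra Absent state is what makes the overall sparsity level tunable, consistent with the abstract's claim of added configurability: pruned connections free capacity that the rewiring rule may or may not regrow elsewhere.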
