Sequence Disambiguation in a Brain-Like Recurrent Neural Network with Local Associative Learning
Abstract: The learning of sequences is a fundamental ability of biological networks. Although many artificial networks can learn sequences successfully, it is of particular interest to study networks that attempt to do so in a manner similar to the human brain. Our investigation builds on a brain-like network that uses a Hebbian-like learning rule with a probabilistic interpretation, the Bayesian Confidence Propagation Neural Network (BCPNN). By combining this learning rule with memory traces, the network can be seen as a model of synaptic learning in biological systems. It has previously been shown that this type of attractor network can learn multiple sequences. Sometimes the sequences that the network needs to learn overlap, so it is important that the network can distinguish between such overlapping sequences. The original network uses a learning rule with only one time constant. In this study we show that adding a second time constant improves the network's ability to distinguish two overlapping sequences.
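Since the essay body is not reproduced here, the following is only a minimal, hypothetical sketch of what "memory traces with two time constants" in a BCPNN-style learning rule can look like: exponentially decaying pre- and post-synaptic activity traces with distinct time constants (tau_z_pre, tau_z_post) feeding slower probability estimates, from which a Bayesian-Hebbian weight is formed as a log-odds ratio. All names, values, and the exact update form are assumptions for illustration and may differ from the model used in the essay.

```python
import numpy as np

def bcpnn_step(z_pre, z_post, p_i, p_j, p_ij, s_pre, s_post, dt,
               tau_z_pre=5.0, tau_z_post=50.0, tau_p=1000.0):
    """One Euler step of a BCPNN-like update with two trace time constants.

    s_pre, s_post : current activity of pre-/post-synaptic units (vectors)
    z_*           : fast exponentially decaying activity traces
    p_*           : slower probability (co-activation) estimates
    """
    # Two distinct z time constants: a fast pre-synaptic trace and a slower
    # post-synaptic trace let the synapse bind sequence elements that are
    # separated in time, which is the kind of asymmetry the abstract alludes to.
    z_pre = z_pre + dt * (s_pre - z_pre) / tau_z_pre
    z_post = z_post + dt * (s_post - z_post) / tau_z_post

    # Slower probability traces accumulate Hebbian co-activity statistics.
    p_i = p_i + dt * (z_pre - p_i) / tau_p
    p_j = p_j + dt * (z_post - p_j) / tau_p
    p_ij = p_ij + dt * (np.outer(z_pre, z_post) - p_ij) / tau_p

    # Bayesian-Hebbian weight and bias: log-odds of co-activation vs. independence.
    eps = 1e-6
    w = np.log((p_ij + eps) / (np.outer(p_i, p_j) + eps))
    b = np.log(p_j + eps)
    return z_pre, z_post, p_i, p_j, p_ij, w, b

# Tiny usage example with 3 pre- and 3 post-synaptic units.
n = 3
z_pre = np.zeros(n); z_post = np.zeros(n)
p_i = np.full(n, 1e-3); p_j = np.full(n, 1e-3); p_ij = np.full((n, n), 1e-6)
for _ in range(100):
    s_pre = np.array([1.0, 0.0, 0.0])   # unit 0 active, then...
    s_post = np.array([0.0, 1.0, 0.0])  # ...unit 1 follows it
    z_pre, z_post, p_i, p_j, p_ij, w, b = bcpnn_step(
        z_pre, z_post, p_i, p_j, p_ij, s_pre, s_post, dt=1.0)
print(w)  # the 0 -> 1 connection should end up strongest
```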