Comparing Catastrophic Interference between Incremental Moment Matching-Mean and Hard Attention to the Task

University essay from KTH/School of Electrical Engineering and Computer Science (EECS)

Authors: Quintus Roos; William Lilliesköld [2020]


Abstract: When a neural network trained on data to solve one problem is subsequently trained on new data to solve another problem, it tends to forget what it previously knew and loses the ability to solve the first problem. This phenomenon is called Catastrophic Interference (CI). This thesis compares two state-of-the-art algorithms for reducing CI in neural networks: Incremental Moment Matching-Mean (IMM-Mean) and Hard Attention to the Task (HAT). Images from three datasets of increasing complexity, MNIST, Fashion-MNIST, and CIFAR-10, are used to train the networks and evaluate their performance. Each dataset is partitioned into subsets so that new classes are introduced with each new problem the algorithms are trained on, an approach known as Incremental Class Learning (ICL). From our results, we conclude that HAT suffers significantly less CI than IMM-Mean. Future work should, however, explore to what extent this conclusion holds when the parameters used in this thesis are changed.
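The ICL split described in the abstract lends itself to a short illustration. Below is a minimal Python sketch, not taken from the thesis, of how a labeled image dataset such as MNIST can be partitioned into tasks with disjoint classes; the helper name make_icl_tasks and the choice of two classes per task are illustrative assumptions, not necessarily the setup used by the authors.

    # Minimal sketch of an Incremental Class Learning split: each task
    # introduces new, previously unseen classes. Assumes torchvision is
    # installed; make_icl_tasks and classes_per_task=2 are hypothetical
    # choices for illustration only.
    from torchvision import datasets, transforms
    from torch.utils.data import Subset

    def make_icl_tasks(dataset, classes_per_task=2):
        """Partition a labeled dataset into tasks holding disjoint classes."""
        labels = [int(y) for _, y in dataset]
        all_classes = sorted(set(labels))
        tasks = []
        for start in range(0, len(all_classes), classes_per_task):
            task_classes = set(all_classes[start:start + classes_per_task])
            indices = [i for i, y in enumerate(labels) if y in task_classes]
            tasks.append(Subset(dataset, indices))
        return tasks

    mnist = datasets.MNIST(root="data", train=True, download=True,
                           transform=transforms.ToTensor())
    # For MNIST's ten digit classes this yields five tasks: {0,1}, {2,3}, ...
    tasks = make_icl_tasks(mnist, classes_per_task=2)

Training sequentially on tasks[0], tasks[1], and so on, then re-testing on earlier tasks, is the setting in which catastrophic interference shows up and in which IMM-Mean and HAT are compared.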
